Automate healing on all layers
Posted December 20, 2024
Updated March 22, 2025
By Kevin McCaffrey
ID: REL_REL11_3
Workloads that require high availability must implement strategies to handle component failures automatically. Automating healing processes speeds recovery and minimizes disruption, which is essential for user satisfaction and for meeting service level agreements (SLAs).
Best Practices
Implement Automated Monitoring and Alerts
- Use Amazon CloudWatch to monitor application performance and system metrics in real time.
- Set up alarms to detect anomalies that may indicate a failure or performance degradation.
- Integrate with AWS Lambda to trigger automated remediation actions when an alarm enters the ALARM state (a minimal sketch follows this list).
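As a minimal sketch of this pattern, the boto3 snippet below creates a CloudWatch alarm on a load balancer error metric and points its alarm action at an SNS topic that a remediation Lambda function subscribes to. The metric dimensions, topic ARN, and threshold are hypothetical placeholders, not values from this article; adjust them to your own workload.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical SNS topic; a remediation Lambda and on-call notifications subscribe to it.
REMEDIATION_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:remediation-topic"

# Alarm when the load balancer reports sustained 5XX errors.
cloudwatch.put_metric_alarm(
    AlarmName="app-5xx-errors-high",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"}],
    Statistic="Sum",
    Period=60,                      # evaluate one-minute windows
    EvaluationPeriods=3,            # three consecutive breaching periods
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[REMEDIATION_TOPIC_ARN],
)
```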
Enable Auto Recovery for EC2 Instances
- Use EC2 Auto Recovery to automatically recover instances when impairment is detected due to hardware failure (see the alarm sketch after this list).
- Leverage Amazon EC2 health checks to ensure that non-responsive instances are identified correctly.
- Combine with Auto Scaling to ensure that healthy instances can replace any that fail.
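One way to wire this up, sketched below with boto3, is a CloudWatch alarm on the EC2 system status check whose alarm action is the built-in ec2:recover automation. The instance ID and Region are placeholders.

```python
import boto3

REGION = "us-east-1"
INSTANCE_ID = "i-0123456789abcdef0"   # placeholder instance

cloudwatch = boto3.client("cloudwatch", region_name=REGION)

# Recover the instance automatically when the system status check fails,
# which typically indicates an underlying hardware or host problem.
cloudwatch.put_metric_alarm(
    AlarmName=f"auto-recover-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    # Built-in alarm action that moves the instance onto healthy hardware.
    AlarmActions=[f"arn:aws:automate:{REGION}:ec2:recover"],
)
```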
Utilize Managed Services with Built-in Resilience
- Leverage AWS managed services such as Amazon RDS, Amazon DynamoDB, and Amazon ECS, which include automatic failover and recovery features (an RDS Multi-AZ sketch follows this list).
- Ensure that your architecture focuses on microservices that can individually scale and be independently updated.
- Evaluate which components can benefit from AWS fault isolation boundaries, such as Availability Zones and Regions, to reduce the impact of a failure.
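As one illustration of relying on a managed service's built-in resilience, the hedged sketch below enables Multi-AZ on an assumed existing RDS instance so that RDS maintains a standby replica and fails over to it automatically. The instance identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Enable Multi-AZ so RDS keeps a synchronous standby and handles failover itself.
rds.modify_db_instance(
    DBInstanceIdentifier="my-app-db",   # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,              # otherwise applied in the next maintenance window
)
```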
Establish Automated Backup and Restore Procedures
- Use AWS Backup to automate backup schedules across services and ensure data protection and retention policies are adhered to (a backup plan sketch follows this list).
- Regularly test your restoration processes to ensure they function as expected during an actual incident.
- Integrate AWS CloudFormation or other infrastructure as code (IaC) templates into your recovery workflow so infrastructure can be redeployed when necessary.
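For instance, the sketch below uses boto3 to create a daily AWS Backup plan and assign resources to it by tag. The plan name, vault, schedule, retention period, IAM role ARN, and tag key/value are illustrative placeholders.

```python
import boto3

backup = boto3.client("backup")

# Daily backups at 05:00 UTC, retained for 35 days (placeholder values).
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-workload-backups",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)

# Protect every resource tagged backup=daily using a pre-created IAM role.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",
                "ConditionValue": "daily",
            }
        ],
    },
)
```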
Implement Blue/Green Deployments or Canary Releases
- Use deployment strategies such as blue/green or canary to minimize downtime and reduce the impact of failed deployments.
- Automate the rollback process using tools like AWS CodeDeploy when a failure in the new version is detected (see the deployment sketch after this list).
- Monitor application metrics closely during the deployment to facilitate quick remediation if issues arise.
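Assuming a CodeDeploy application and a deployment group already configured for blue/green deployment with CloudWatch alarms attached, the sketch below starts a deployment with automatic rollback enabled. The application name, deployment group, and S3 revision location are placeholders.

```python
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_deployment(
    applicationName="my-app",                  # placeholder application
    deploymentGroupName="my-app-blue-green",   # group configured for blue/green
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-artifacts-bucket",
            "key": "releases/my-app-1.2.3.zip",
            "bundleType": "zip",
        },
    },
    # Roll back automatically if the deployment fails or a monitored
    # CloudWatch alarm on the deployment group fires during rollout.
    autoRollbackConfiguration={
        "enabled": True,
        "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
    },
)
```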
Questions to ask your team
- What automated processes are in place to detect and respond to component failures?
- How are healing actions initiated upon failure detection, and what systems are involved?
- Can you provide examples of recent incidents where automated healing successfully mitigated downtime?
- Is there a documented procedure for the types of failures that the automation can handle?
- How often are the automated recovery mechanisms tested to ensure they function as expected?
- Are there metrics collected related to the effectiveness of the automated healing processes?
- How is the team notified of incidents where automated healing was triggered?
- What mechanisms are in place to continuously improve the automation based on past incidents?
Who should be doing this?
Cloud Architect
- Design resilient architectures that incorporate healing mechanisms.
- Select appropriate AWS services that support automated healing.
- Establish redundancy and failover strategies for critical components.
DevOps Engineer
- Implement automated scripts for monitoring and healing of workloads.
- Configure AWS services for auto-recovery and self-healing.
- Continuously test and validate healing processes to ensure effectiveness.
Site Reliability Engineer (SRE)
- Monitor system performance and detect failures in real time.
- Analyze failure patterns to improve automated healing strategies.
- Collaborate with development teams to integrate healing features into applications.
Systems Administrator
- Manage and maintain infrastructure resources, ensuring they support automated healing.
- Respond to alerts and adjust healing policies as necessary.
- Document processes related to automated recovery and system healing.
What evidence shows this is happening in your organization?
- Automated Healing Playbook: A comprehensive guide that outlines procedures and best practices for implementing automated healing mechanisms across all layers of the workload, ensuring rapid recovery from component failures.
- Incident Response and Remediation Plan: A structured plan detailing the steps to detect failures, trigger automated remediation actions, and manage incidents to minimize downtime and reduce mean time to recovery (MTTR).
- Reliability Dashboard: An interactive dashboard that visualizes real-time metrics related to workload health, failure incidents, and automated recovery actions, allowing teams to quickly assess and respond to reliability issues.
- Infrastructure as Code (IaC) Template: A template that demonstrates how to provision resilient infrastructure with automated healing capabilities, such as AWS CloudFormation or Terraform scripts that include health checks and auto-recovery configurations.
- Best Practices Checklist for Automated Healing: A checklist of actionable items to assess and improve the automated healing capabilities of the workload, ensuring all layers are capable of recovering from failures.
Cloud Services
AWS
- Amazon EC2 Auto Scaling: Automates the scaling of EC2 instances based on current demand and health checks, enhancing the workload’s reliability.
- AWS Lambda: Can be used to trigger automated responses to failures, such as restarting services or notifying stakeholders (a handler sketch follows this list).
- Amazon CloudWatch: Monitors the health of AWS resources and applications, enabling automated actions to remediate issues promptly.
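To make Lambda-based remediation concrete, here is a hedged sketch of a handler that could be subscribed to the alarm SNS topic described earlier: when invoked, it forces a new deployment of a hypothetical ECS service so that unhealthy tasks are replaced. The cluster and service names are placeholders, and a real handler would scope its actions to the failures your automation is documented to handle.

```python
import json

import boto3

ecs = boto3.client("ecs")

# Placeholder names for the service this handler is allowed to heal.
CLUSTER = "my-cluster"
SERVICE = "my-service"


def handler(event, context):
    """Remediation Lambda invoked via SNS when a CloudWatch alarm fires."""
    for record in event.get("Records", []):
        alarm = json.loads(record["Sns"]["Message"])
        print(f"Alarm {alarm.get('AlarmName')} is {alarm.get('NewStateValue')}")

        # Force a rolling replacement of tasks; ECS drains old tasks and
        # starts fresh ones, which clears many transient failure modes.
        ecs.update_service(
            cluster=CLUSTER,
            service=SERVICE,
            forceNewDeployment=True,
        )
```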