-
Operational Excellence
-
- Resources have identified owners
- Processes and procedures have identified owners
- Operations activities have identified owners responsible for their performance
- Team members know what they are responsible for
- Mechanisms exist to identify responsibility and ownership
- Mechanisms exist to request additions, changes, and exceptions
- Responsibilities between teams are predefined or negotiated
-
- Executive Sponsorship
- Team members are empowered to take action when outcomes are at risk
- Escalation is encouraged
- Communications are timely, clear, and actionable
- Experimentation is encouraged
- Team members are encouraged to maintain and grow their skill sets
- Resource teams appropriately
- Diverse opinions are encouraged and sought within and across teams
-
- Use version control
- Test and validate changes
- Use configuration management systems
- Use build and deployment management systems
- Perform patch management
- Implement practices to improve code quality
- Share design standards
- Use multiple environments
- Make frequent, small, reversible changes
- Fully automate integration and deployment
-
Security
-
- Evaluate and implement new security services and features regularly
- Automate testing and validation of security controls in pipelines
- Identify and prioritize risks using a threat model
- Keep up-to-date with security recommendations
- Keep up-to-date with security threats
- Identify and validate control objectives
- Secure account root user and properties
- Separate workloads using accounts
-
- Analyze public and cross-account access
- Manage access based on life cycle
- Share resources securely with a third party
- Reduce permissions continuously
- Share resources securely within your organization
- Establish emergency access process
- Define permission guardrails for your organization
- Grant least privilege access
- Define access requirements
-
- Build a program that embeds security ownership in workload teams
- Centralize services for packages and dependencies
- Perform manual code reviews
- Automate testing throughout the development and release lifecycle
- Train for application security
- Regularly assess security properties of the pipelines
- Deploy software programmatically
- Perform regular penetration testing
-
Reliability
-
- Ensure a sufficient gap between quotas and maximum usage to accommodate failover
- Automate quota management
- Monitor and manage service quotas
- Accommodate fixed service quotas and constraints through architecture
- Manage service quotas and constraints across accounts and Regions
- Manage service quotas and constraints
- Build a program that embeds reliability into workload teams
-
- Enforce non-overlapping private IP address ranges in all private address spaces
- Prefer hub-and-spoke topologies over many-to-many mesh
- Ensure IP subnet allocation accounts for expansion and availability
- Provision redundant connectivity between private networks in the cloud and on-premises environments
- Use highly available network connectivity for workload public endpoints
-
- Monitor end-to-end tracing of requests through your system
- Conduct reviews regularly
- Analyze logs and metrics (Storage and Analytics)
- Automate responses (Real-time processing and alarming)
- Send notifications (Real-time processing and alarming)
- Define and calculate metrics (Aggregation)
-
- Monitor all components of the workload to detect failures
- Fail over to healthy resources
- Automate healing on all layers
- Rely on the data plane and not the control plane during recovery
- Use static stability to prevent bimodal behavior
- Send notifications when events impact availability
- Architect your product to meet availability targets and uptime service level agreements (SLAs)
-
Cost Optimization
-
- Establish ownership of cost optimization
- Establish a partnership between finance and technology
- Establish cloud budgets and forecasts
- Implement cost awareness in your organizational processes
- Monitor cost proactively
- Keep up-to-date with new service releases
- Quantify business value from cost optimization
- Report and notify on cost optimization
- Create a cost-aware culture
-
- Perform cost analysis for different usage over time
- Analyze all components of this workload
- Perform a thorough analysis of each component
- Select components of this workload to optimize cost in line with organization priorities
- Select software with cost effective licensing
-
Performance
-
- Learn about and understand available cloud services and features
- Evaluate how trade-offs impact customers and architecture efficiency
- Use guidance from your cloud provider or an appropriate partner to learn about architecture patterns and best practices
- Factor cost into architectural decisions
- Use policies and reference architectures
- Use benchmarking to drive architectural decisions
- Use a data-driven approach for architectural choices
-
- Use purpose-built data stores that best support your data access and storage requirements
- Collect and record data store performance metrics
- Evaluate available configuration options for your data store
- Implement strategies to improve query performance in your data store
- Implement data access patterns that utilize caching
-
- Understand how networking impacts performance
- Evaluate available networking features
- Choose appropriate dedicated connectivity or VPN for your workload
- Use load balancing to distribute traffic across multiple resources
- Choose network protocols to improve performance
- Choose your workload's location based on network requirements
- Optimize network configuration based on metrics
-
- Establish key performance indicators (KPIs) to measure workload health and performance
- Use monitoring solutions to understand the areas where performance is most critical
- Define a process to improve workload performance
- Review metrics at regular intervals
- Load test your workload
- Use automation to proactively remediate performance-related issues
- Keep your workload and services up-to-date
-
Sustainability
-
- Optimize geographic placement of workloads based on their networking requirements
- Align SLAs with sustainability goals
- Stop the creation and maintenance of unused assets
- Optimize team member resources for activities performed
- Implement buffering or throttling to flatten the demand curve
-
- Optimize software and architecture for asynchronous and scheduled jobs
- Remove or refactor workload components with low or no use
- Optimize areas of code that consume the most time or resources
- Optimize impact on devices and equipment
- Use software patterns and architectures that best support data access and storage patterns
- Remove unneeded or redundant data
- Use technologies that support data access and storage patterns
- Use policies to manage the lifecycle of your datasets
- Use shared file systems or storage to access common data
- Back up data only when difficult to recreate
- Use elasticity and automation to expand block storage or file system
- Minimize data movement across networks
-
Use multiple environments
Using Multiple Environments for Development and Testing
Using multiple environments is essential for experimenting, developing, and testing your workload effectively. By employing separate environments for development, testing, staging, and production, teams can ensure that changes are thoroughly validated before deployment. Introducing increasing levels of control and rigor as environments approach production builds confidence that workloads will operate as intended.
Separate Environments for Development, Testing, and Production
Create distinct environments for different stages of the development lifecycle:
- Development Environment: A flexible environment for developers to experiment, write code, and prototype features.
- Testing Environment: Used to perform various tests, such as unit, integration, performance, and user acceptance testing.
- Staging Environment: A near-replica of the production environment to validate changes before final deployment.
- Production Environment: The live environment that serves end users and handles real-world workloads.
Maintaining separate environments ensures that experiments and testing activities do not interfere with the stability of the production environment.
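This separation typically carries into application configuration as well. A minimal sketch of selecting settings per environment (the environment names follow the list above; the hosts, flags, and APP_ENV variable are hypothetical):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvConfig:
    name: str
    debug: bool             # verbose diagnostics only outside production
    db_host: str            # hypothetical per-environment database endpoint
    require_approval: bool  # formal change approval near production

# One config object per environment keeps settings from leaking across stages.
ENVIRONMENTS = {
    "development": EnvConfig("development", True,  "db.dev.example.internal",     False),
    "testing":     EnvConfig("testing",     True,  "db.test.example.internal",    False),
    "staging":     EnvConfig("staging",     False, "db.staging.example.internal", True),
    "production":  EnvConfig("production",  False, "db.prod.example.internal",    True),
}

def current_config() -> EnvConfig:
    """Resolve the active environment from a deployment-time variable."""
    env = os.environ.get("APP_ENV", "development")
    try:
        return ENVIRONMENTS[env]
    except KeyError:
        raise ValueError(f"Unknown environment: {env!r}") from None
```

Resolving the environment from a single deployment-time variable lets the same build artifact run in every stage; only the configuration it loads differs.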
Increase Controls as Environments Approach Production
Apply increasing levels of control and rigor as changes move through the development lifecycle and approach production. This includes:
- Testing and Validation: In the testing and staging environments, implement comprehensive testing procedures, including functional, performance, and security tests.
- Access Control: Restrict access to environments closer to production, allowing only authorized individuals to make changes.
- Change Management: Introduce formal change management processes in staging and production environments to ensure changes are reviewed and approved before implementation.
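These escalating controls can be encoded directly in deployment tooling. A sketch of a promotion gate that blocks a change from advancing until all checks required by the target environment have passed (the check names and per-stage requirements are illustrative, not prescribed):

```python
# Ordered pipeline stages and the checks a change must pass to enter each one.
PROMOTION_ORDER = ["development", "testing", "staging", "production"]

REQUIRED_CHECKS = {
    "testing":    {"unit_tests"},
    "staging":    {"unit_tests", "integration_tests", "security_scan"},
    "production": {"unit_tests", "integration_tests", "security_scan",
                   "performance_tests", "change_approval"},
}

def can_promote(current: str, target: str, passed_checks: set) -> bool:
    """Allow promotion only one stage forward, and only with every required check."""
    if current not in PROMOTION_ORDER or target not in PROMOTION_ORDER:
        raise ValueError("unknown environment")
    if PROMOTION_ORDER.index(target) != PROMOTION_ORDER.index(current) + 1:
        return False  # no skipping stages
    return REQUIRED_CHECKS.get(target, set()) <= passed_checks
```

For example, `can_promote("staging", "production", {"unit_tests", "integration_tests", "security_scan"})` returns False because performance testing and change approval are still outstanding.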
Gain Confidence with Realistic Testing
Use the staging environment to perform realistic tests that replicate the production setup as closely as possible. Testing in an environment that mirrors production provides greater confidence that workloads will operate as expected after deployment. This helps catch configuration issues, performance bottlenecks, or potential failures before they impact end users.
Automate Environment Creation and Maintenance
Automate the creation and maintenance of environments to reduce manual effort and minimize errors. Automated provisioning helps ensure that environments are set up consistently and that development, testing, and staging environments match production as closely as possible.
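With infrastructure as code, a single template can serve every environment, with only parameters varying between stages; this is what keeps the environments consistent. A sketch that assembles (but does not run) the corresponding AWS CLI invocation; the template file name and stack naming scheme are hypothetical:

```python
def cloudformation_deploy_command(env: str, template: str = "environment.yaml") -> list:
    """Build an `aws cloudformation deploy` invocation for one environment.

    Deploying the same template everywhere means the stages differ only in
    the Environment parameter (and whatever sizing or access rules the
    template derives from it).
    """
    allowed = {"development", "testing", "staging", "production"}
    if env not in allowed:
        raise ValueError(f"unknown environment: {env}")
    return [
        "aws", "cloudformation", "deploy",
        "--template-file", template,
        "--stack-name", f"myapp-{env}",              # hypothetical naming scheme
        "--parameter-overrides", f"Environment={env}",
    ]
```

Running such a builder from a pipeline job, rather than typing commands by hand, is one way to get the consistency described above.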
Supporting Questions
- How are multiple environments used to experiment, develop, and test workloads?
- What controls are applied as environments approach production to ensure reliability?
- How is the consistency of environments maintained across different stages?
Roles and Responsibilities
Developer
Responsibilities:
- Use development environments for experimenting and building features.
- Validate changes in the testing environment before promoting them to staging.
QA Engineer
Responsibilities:
- Perform comprehensive testing in the testing and staging environments.
- Ensure that tests in staging reflect real-world scenarios to gain confidence in the production deployment.
DevOps Engineer
Responsibilities:
- Automate the creation and maintenance of multiple environments to ensure consistency.
- Apply increasing levels of control and security as changes move through staging and production environments.
Artifacts
- Environment Deployment Plan: A document outlining the setup, purpose, and controls for each environment (development, testing, staging, production).
- Testing and Validation Checklist: A checklist used to verify that all required tests have been completed before promoting changes to the next environment.
- Access Control Policy: A policy that defines access permissions and restrictions for each environment to ensure secure operations.
Relevant AWS Tools
Environment Provisioning Tools
- AWS CloudFormation: Automates the provisioning of environments using templates, ensuring consistency across development, testing, staging, and production.
- AWS Elastic Beanstalk: Provides an easy way to deploy and manage applications in multiple environments, automating the provisioning of infrastructure.
Environment Management and Security Tools
- AWS Systems Manager: Manages and maintains multiple environments, providing capabilities for patching, monitoring, and automation.
- AWS IAM (Identity and Access Management): Controls access to different environments, applying stricter controls for environments closer to production.
Monitoring and Validation Tools
- Amazon CloudWatch: Monitors the health and performance of workloads across environments, ensuring that testing and validation are effective before deployment to production.
- AWS CodePipeline: Automates the build, test, and deployment processes, promoting changes through different environments based on defined criteria and quality checks.