Evaluate available networking features
Posted December 20, 2024
Updated March 21, 2025
By Kevin McCaffrey
Selecting and configuring the right networking resources is crucial to achieving optimal performance in your workloads. By evaluating and leveraging available networking features, you can increase throughput and reduce latency and jitter, improving the user experience and making better use of your resources.
Best Practices
Leverage Advanced Networking Features
- Utilize AWS Global Accelerator to improve application availability and performance by routing user traffic to the optimal endpoint based on health, geography, and routing policies (a provisioning sketch follows this list).
- Implement Amazon Route 53 for DNS management to facilitate low-latency routing decisions and traffic management through health checks.
- Use AWS Direct Connect for dedicated connectivity between on-premises resources and AWS to achieve consistent network performance and reduced latency.
- Evaluate the use of Amazon CloudFront as a content delivery network (CDN) to cache content closer to users, decreasing latency and improving load times.
- Conduct performance testing and monitoring to quantify the impact of these networking features, adjusting configurations based on real-time metrics and analysis.
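As a concrete illustration of the Global Accelerator recommendation above, here is a minimal boto3 sketch that places an accelerator in front of an existing Application Load Balancer. The accelerator name, Regions, and load balancer ARN are placeholder assumptions for illustration, not values from this article.

```python
# Minimal sketch: put AWS Global Accelerator in front of an existing ALB.
# Assumes boto3 credentials are configured; the ALB ARN below is a placeholder.
import boto3

# The Global Accelerator API is served from the us-west-2 Region.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="checkout-accelerator",  # hypothetical name
    IpAddressType="IPV4",
    Enabled=True,
)["Accelerator"]

# Accept TCP 443 from clients; Global Accelerator terminates at the nearest edge
# location and carries traffic over the AWS backbone to the endpoint group.
listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# Register the existing ALB (placeholder ARN) as an endpoint in eu-west-1.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
                          "loadbalancer/app/my-alb/50dc6c495c0c9188",  # placeholder
            "Weight": 128,
            "ClientIPPreservationEnabled": True,
        }
    ],
)

print("Accelerator DNS name:", accelerator["DnsName"])
```

Once traffic enters the accelerator, routing to the optimal endpoint is handled by the service itself; measure client latency before and after enabling it to quantify the benefit, as the last practice above recommends.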
Questions to ask your team
- Have you identified the latency and throughput requirements of your application?
- What specific networking features have you evaluated for improving performance?
- How have you measured the impact of any networking configurations implemented?
- Have you considered using edge locations to reduce latency for geographically dispersed users?
- What tools or metrics do you use to analyze network performance?
- Are you monitoring network jitter and its impact on user experience? (A measurement sketch follows this list.)
- How does your current networking solution support scalability and future growth?
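Several of these questions concern measurement tooling and jitter. The following provider-agnostic sketch samples TCP connection latency to a placeholder endpoint and derives average latency and jitter (standard deviation of the samples); a fuller test harness would also capture application-level response times.

```python
# Minimal sketch: sample TCP connect latency to an endpoint and report jitter.
# The hostname is a placeholder; run this from the client locations you care about.
import socket
import statistics
import time

HOST, PORT, SAMPLES = "api.example.com", 443, 20

rtts_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass  # measure TCP handshake time only
    rtts_ms.append((time.perf_counter() - start) * 1000)
    time.sleep(0.5)

print(f"samples         : {len(rtts_ms)}")
print(f"min / avg / max : {min(rtts_ms):.1f} / {statistics.mean(rtts_ms):.1f} / {max(rtts_ms):.1f} ms")
print(f"jitter (stdev)  : {statistics.pstdev(rtts_ms):.1f} ms")
```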
Who should be doing this?
Cloud Solutions Architect
- Assess workload requirements for latency, throughput, and bandwidth.
- Select appropriate networking resources based on performance metrics.
- Design and implement network architecture that aligns with user and on-premises resource constraints.
- Evaluate and configure networking features to optimize performance.
- Conduct testing and analysis to measure the impact of networking choices.
Network Engineer
- Implement the networking solutions designed by the Cloud Solutions Architect.
- Monitor network performance metrics and optimize configurations as needed.
- Configure network-level features to reduce latency and jitter.
- Work with edge locations to improve user experience and resource placement.
- Collaborate with the development team to ensure seamless integration of networking solutions.
DevOps Engineer
- Ensure continuous integration and deployment of applications with a focus on performance efficiency.
- Automate testing of network performance and feature evaluations.
- Gather and analyze metrics related to application performance and network impact (a metric-publishing sketch follows this section).
- Facilitate communication between development, operations, and networking teams.
- Provide feedback for iteration on networking configurations and features.
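To support the monitoring and metric-gathering responsibilities listed above, the sketch below publishes a measured latency value as a custom Amazon CloudWatch metric so it can be graphed and alarmed on. The namespace, metric name, and dimension are hypothetical choices, not prescribed by this article.

```python
# Minimal sketch: publish measured network latency as a custom CloudWatch metric.
# Namespace, metric name, and dimension values are hypothetical; adapt them to
# your monitoring conventions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

def publish_latency(endpoint: str, latency_ms: float) -> None:
    cloudwatch.put_metric_data(
        Namespace="Custom/NetworkPerformance",  # hypothetical namespace
        MetricData=[
            {
                "MetricName": "EndpointConnectLatency",
                "Dimensions": [{"Name": "Endpoint", "Value": endpoint}],
                "Value": latency_ms,
                "Unit": "Milliseconds",
            }
        ],
    )

# Example: feed in a measurement taken by a probe such as the one sketched earlier.
publish_latency("api.example.com", 42.3)
```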
What evidence shows this is happening in your organization?
- Networking Features Evaluation Template: A structured template to assess various cloud networking features that can enhance performance, including criteria for latency, throughput, and jitter.
- Performance Metrics Analysis Report: A comprehensive report documenting the impact of implemented networking features on workload performance, including metrics such as latency and bandwidth usage.
- Cloud Networking Strategy Guide: A strategic guide outlining best practices for selecting and configuring networking resources in the cloud, including evaluations of specific networking capabilities.
- Networking Configuration Checklist: A detailed checklist to ensure all aspects of networking configurations are evaluated and optimized for performance efficiency before deployment.
- Performance Testing Playbook: A playbook providing step-by-step instructions for conducting performance tests on networking features, complete with measurable objectives and analysis methods.
Cloud Services
AWS
- Amazon CloudFront: A content delivery network (CDN) that accelerates delivery of your websites, APIs, and video content by caching copies at edge locations.
- AWS Global Accelerator: A service that improves the availability and performance of your applications with global users by routing traffic through AWS’s global network.
- Amazon VPC: Allows you to create isolated networks and control the placement of your resources, enabling optimization based on latency and bandwidth needs.
Azure
- Azure Traffic Manager: A DNS-based traffic load balancer that enables you to distribute traffic optimally across global Azure regions.
- Azure ExpressRoute: Allows you to extend your on-premises networks into the Microsoft cloud over a private connection, enhancing performance and reliability.
- Azure Virtual Network: Enables you to create private networks and customize your networking configuration to address latency and throughput.
Google Cloud Platform
- Cloud Load Balancing: Distributes user traffic across multiple backend instances dynamically, improving responsiveness and minimizing latency.
- Cloud CDN: Accelerates content delivery by caching content at locations close to users, reducing latency.
- Virtual Private Cloud (VPC): Allows users to define their own private network in the cloud, optimizing the network configuration based on performance requirements.
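The DNS-based routing capabilities mentioned above (Amazon Route 53, Azure Traffic Manager) can be made concrete with a small example. The sketch below, assuming Amazon Route 53, upserts latency-based records so clients resolve to the Region that responds fastest; the hosted zone ID, record name, and IP addresses are placeholders.

```python
# Minimal sketch: latency-based routing in Amazon Route 53, so clients resolve
# to the Region that answers them fastest. Hosted zone ID, record name, and
# IP addresses are placeholders (documentation-range IPs).
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000EXAMPLE"  # placeholder
REGION_ENDPOINTS = {
    "us-east-1": "203.0.113.10",
    "eu-west-1": "203.0.113.20",
}

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "A",
            "SetIdentifier": region,  # one record set per Region
            "Region": region,         # enables latency-based routing
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }
    for region, ip in REGION_ENDPOINTS.items()
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Latency-based routing for api.example.com",
        "Changes": changes,
    },
)
```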
Question: How do you select and configure networking resources in your workload?
Pillar: Performance Efficiency (Code: PERF)