Use shared file systems or storage to access common data
Posted: December 20, 2024
Updated: March 29, 2025
By: Kevin McCaffrey
ID: SUS_SUS4_5
Implementing shared file systems or storage solutions minimizes data duplication and improves storage efficiency. By centralizing data access, organizations can streamline their infrastructure, reduce the storage and compute resources their workloads consume, and lower their environmental impact.
Best Practices
Leverage Shared File Systems for Efficient Data Management
- Implement Amazon FSx for shared file systems to centralize storage, reducing the need for multiple copies of the same data across different services. This minimizes storage costs and optimizes resource usage.
- Utilize Amazon S3 with lifecycle policies to manage data efficiently. Set up rules to transition older data to lower-cost storage classes (e.g., S3 Standard-IA) or delete data that is no longer needed, thereby reducing your storage footprint (a lifecycle sketch follows this list).
- Evaluate VPC peering or Transit Gateway options to optimize data transfer among different AWS accounts or services, minimizing latency and costs related to data duplication.
- Establish a data governance strategy across your teams so that data replication is minimized and data is tagged and tracked for lifecycle management, in line with sustainability goals.
- Regularly review and analyze storage usage to identify unnecessary data replication and swiftly address areas for improvement, reinforcing a commitment to efficient resource utilization.
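As a concrete starting point, the sketch below applies the lifecycle rule described above using boto3. The bucket name and the 30/365-day thresholds are illustrative assumptions, not prescribed values.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket holding shared, centrally accessed data.
BUCKET = "shared-analytics-data"

# Transition objects to S3 Standard-IA after 30 days, then expire them
# after a year, shrinking the storage footprint automatically.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-shared-data",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```

In practice, the transition and expiration thresholds should come from your data governance policy rather than fixed constants.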
Questions to ask your team
- Have you identified instances of data duplication in your storage systems?
- What shared file systems or storage solutions have you implemented to streamline data access?
- How do you monitor the efficiency of your data access and storage usage?
- Are there policies in place to regularly review and clean up unused data?
- Have you assessed the impact of shared storage solutions on your overall sustainability goals?
Who should be doing this?
Data Architect
- Design and implement data management policies to optimize storage usage.
- Evaluate and select shared file systems or storage solutions that promote data efficiency.
- Ensure data lifecycle management practices are in place to move less-used data to appropriate storage.
Cloud Engineer
- Deploy and configure shared storage solutions according to organizational standards.
- Monitor resource utilization to identify opportunities for reducing provisioned storage.
- Assist in automating data lifecycle policies for archiving and deleting unnecessary data.
Data Analyst
- Assess data usage patterns to inform decisions about data lifecycle management.
- Collaborate with stakeholders to define data access needs and requirements.
- Provide insights into the business value of data to prioritize storage optimization efforts.
IT Operations Manager
- Oversee the implementation of data management practices across teams.
- Ensure compliance with sustainability goals through effective data policies.
- Facilitate training and awareness programs on the use of shared file systems and best practices.
What evidence shows this is happening in your organization?
- Shared Storage Usage Policy: A formal policy explaining how to use shared file systems or storage for common data access. It covers procedures for data classification, access controls, and guidelines to minimize duplication and maintain efficiency.
- Shared Storage Architecture Diagram: A visual representation of how shared storage interacts with various systems and services. It illustrates how to centralize data access to reduce redundant copies and improve sustainability within the infrastructure.
- Shared Storage Efficiency Checklist: A step-by-step checklist for regularly auditing shared storage usage, verifying permissions, cleaning up stale data, and monitoring capacity. It helps ensure that data stays consolidated and resource use remains aligned with sustainability goals (see the audit sketch below).
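To support the efficiency checklist, a small audit script can flag storage that has no lifecycle rules at all. The boto3 sketch below is one such starting point; treating a bucket without any lifecycle configuration as needing review is an illustrative assumption, not a firm rule.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Flag buckets with no lifecycle configuration; these are candidates
# for review under the shared storage efficiency checklist.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        rules = s3.get_bucket_lifecycle_configuration(Bucket=name)["Rules"]
        print(f"{name}: {len(rules)} lifecycle rule(s)")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
            print(f"{name}: no lifecycle rules -- review for stale data")
        else:
            raise
```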
Cloud Services
AWS
- Amazon S3: Amazon S3 offers scalable object storage that can be combined with lifecycle policies to transition data to lower-cost storage classes over time.
- Amazon EFS: Amazon Elastic File System (EFS) provides a shared file system that multiple Amazon EC2 instances can access concurrently, reducing data duplication (a provisioning sketch follows this list).
- AWS Data Lifecycle Manager: AWS Data Lifecycle Manager automates the creation, retention, and deletion of EBS snapshots, managing snapshot data across its lifecycle.
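For the EFS case, a minimal provisioning sketch with boto3 might look like the following. The creation token, subnet ID, and security group ID are placeholders; once the mount target exists, multiple EC2 instances in that subnet can mount the same file system instead of keeping private copies of common data.

```python
import time

import boto3

efs = boto3.client("efs")

# Create a shared file system; the creation token makes the call idempotent.
fs = efs.create_file_system(
    CreationToken="shared-common-data",  # placeholder token
    PerformanceMode="generalPurpose",
    Tags=[{"Key": "Name", "Value": "shared-common-data"}],
)
fs_id = fs["FileSystemId"]

# Wait until the file system is available before adding mount targets.
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0][
    "LifeCycleState"
] != "available":
    time.sleep(5)

# Expose the file system in one subnet; the IDs below are placeholders.
efs.create_mount_target(
    FileSystemId=fs_id,
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)
```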
Azure
- Azure Blob Storage: Azure Blob Storage enables scalable object storage with lifecycle management policies that transition data between access tiers (a policy sketch follows this list).
- Azure Files: Azure Files offers fully managed file shares in the cloud that allow simultaneous access from multiple VMs, helping to reduce data duplication.
- Azure Data Lake Storage: Azure Data Lake Storage provides a scalable data lake solution, enabling management of data essential for analytics while optimizing storage and access.
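A sketch of such a Blob lifecycle policy using the azure-mgmt-storage Python SDK follows. The subscription ID, resource group, and account name are placeholders, and the tiering thresholds are illustrative; note that Azure requires the management policy name to be "default".

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    DateAfterModification, ManagementPolicy, ManagementPolicyAction,
    ManagementPolicyBaseBlob, ManagementPolicyDefinition,
    ManagementPolicyFilter, ManagementPolicyRule, ManagementPolicySchema,
)

# Subscription ID, resource group, and account name are placeholders.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = ManagementPolicyRule(
    name="tier-and-expire-shared-data",
    type="Lifecycle",
    definition=ManagementPolicyDefinition(
        filters=ManagementPolicyFilter(blob_types=["blockBlob"]),
        actions=ManagementPolicyAction(
            base_blob=ManagementPolicyBaseBlob(
                # Cool tier after 30 days without modification, delete after a year.
                tier_to_cool=DateAfterModification(
                    days_after_modification_greater_than=30
                ),
                delete=DateAfterModification(
                    days_after_modification_greater_than=365
                ),
            )
        ),
    ),
)

client.management_policies.create_or_update(
    resource_group_name="shared-data-rg",
    account_name="shareddatastore",
    management_policy_name="default",
    properties=ManagementPolicy(policy=ManagementPolicySchema(rules=[rule])),
)
```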
Google Cloud Platform
- Google Cloud Storage: Google Cloud Storage offers unified object storage with lifecycle management capabilities to automatically delete or transition data based on policies (a sketch follows this list).
- Filestore: Filestore provides a fully managed file storage service for applications that require a shared file system, reducing data redundancy among instances.
- BigQuery: BigQuery is a fully managed, serverless data warehouse that allows for efficient big data analytics, optimizing resource usage and reducing costs.
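The google-cloud-storage Python client expresses the same lifecycle idea compactly. The bucket name and the 30/365-day thresholds below are assumptions for illustration.

```python
from google.cloud import storage

client = storage.Client()

# Hypothetical bucket holding shared, centrally accessed data.
bucket = client.get_bucket("shared-analytics-data")

# Move objects to Nearline after 30 days, delete them after a year.
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_delete_rule(age=365)
bucket.patch()  # push the updated lifecycle configuration to the bucket
```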