dotCreds
AWS Certified Solutions Architect - Associate (SAA-C03)

AWS SAA-C03 Practice Test

Start a free 30-question AWS SAA-C03 daily set with source-backed explanations, local progress, and a fresh rotation every morning.

  • 30 daily web questions
  • Source-backed explanations
  • 7-day score history
  • Questions updated at Apr 15, 2026, 12:06 AM CDT
AWS SAA-C03

Why this page works

  • Thirty focused questions every day
  • Source links on every explanation
  • Local progress saved automatically
  • Email sync path ready for later
  • Apps provide deeper drills when available
Today's 30 AWS SAA-C03 questions

Use this AWS SAA-C03 practice test to review AWS Solutions Architect Associate. Questions rotate daily and each answer links back to the source used to write it.

Today’s Set
30 questions
Daily set rotates at 10:00 AM local time

7-day score keeper

Answer questions today and this will become a rolling 7-day scorecard.


Keep today’s practice moving

Guest progress saves automatically on this device. Add an email later when you want a magic link that keeps your daily SAA-C03 practice in sync across browsers.

Guest progress is available without an account.

132 verified questions are currently in the live bank. Questions updated at Apr 15, 2026, 12:06 AM CDT. The daily set rotates at 10:00 AM local time, and each explanation links back to the source used to write it. Use the web set for quick practice, then switch to the app when available for larger banks and deeper review.

Official exam resources

Use these official AWS resources alongside the daily practice set. They cover the provider's own exam page, study guide, or prep material.

Need adjacent AWS practice pages too? AWS practice hub.

Question 1 of 30
Objective 2.2 Design Resilient Architectures

When designing a multi-region architecture for an application with strict RTO requirements, which AWS service should you use to ensure rapid failover and minimal data loss?

Concept tested: Design Resilient Architectures

A. Correct: Amazon Route 53 DNS Failover provides rapid, health-check-driven failover across multiple Regions, minimizing downtime during a disaster and supporting strict RTO targets.

B. Incorrect: AWS CloudFormation StackSets is incorrect because it is used for deploying stacks in multiple accounts and regions but does not provide the immediate failover capability required to meet strict RTO requirements.

C. Incorrect: Amazon S3 Cross-Region Replication is incorrect because it is designed for replicating objects across different Regions, which does not address rapid failover or minimize data loss during a disaster scenario.

D. Incorrect: AWS Lambda Cross-Account Invocation is incorrect because it allows functions in one account to invoke functions in another but does not provide the necessary failover mechanisms required for strict RTO compliance.

Why this matters: This matters because architecture questions ask you to match availability, latency, and recovery requirements to the feature designed for that job.
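As a hedged sketch of the pattern behind the correct answer, the dict below mirrors the record-set shape accepted by Route 53's ChangeResourceRecordSets API. The domain, IP addresses, and health-check ID are illustrative placeholders, not values from any real account.

```python
# Sketch of a Route 53 failover record pair, in the shape you would pass
# to boto3's route53.change_resource_record_sets. All names and IDs are
# placeholders.
def failover_record(name, endpoint_ip, role, health_check_id=None):
    """Build one A record for the given failover role ("PRIMARY"/"SECONDARY")."""
    record = {
        "Name": name,
        "Type": "A",
        "SetIdentifier": f"{role.lower()}-{endpoint_ip}",
        "Failover": role,
        "TTL": 60,  # short TTL so clients pick up the failover quickly
        "ResourceRecords": [{"Value": endpoint_ip}],
    }
    if health_check_id:
        # The primary record carries the health check that triggers failover.
        record["HealthCheckId"] = health_check_id
    return record

primary = failover_record("app.example.com", "198.51.100.10", "PRIMARY", "hc-1234")
secondary = failover_record("app.example.com", "203.0.113.20", "SECONDARY")
```

When the primary endpoint's health check fails, Route 53 answers queries with the secondary record instead, which is what delivers the rapid cross-Region failover the question targets.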
Question 2 of 30
Objective 1.3 Design Secure Architectures

To secure data in transit and at rest for a web application hosted on Amazon EC2, which combination of AWS services should you use?

Concept tested: Design Secure Architectures

A. Correct: AWS KMS with TLS certificates from ACM is correct because AWS KMS manages encryption keys to secure data at rest and ACM provides TLS certificates to secure data in transit.

B. Incorrect: Amazon S3 with server-side encryption is incorrect because it addresses storage security but does not cover securing data in transit for an EC2-hosted web application.

C. Incorrect: AWS CloudHSM with IAM policies is incorrect because AWS CloudHSM, while providing hardware-based key management, does not offer the necessary TLS certificate functionality to secure data in transit like ACM does.

D. Incorrect: AWS Secrets Manager with VPC endpoints is incorrect because AWS Secrets Manager and VPC endpoints are useful for managing secrets and controlling access but do not provide encryption keys or TLS certificates needed for securing both data at rest and in transit.

Why this matters: This matters because secure-architecture questions test the control that actually mitigates the stated risk, not a nearby security service.
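To make the two halves of the correct answer concrete, here is a hedged sketch of the parameter dicts for the two API calls involved: ACM's RequestCertificate (transit) and KMS's CreateKey (rest). The domain name is a placeholder.

```python
# Illustrative parameter dicts for the two halves of the pattern.
# Values are placeholders, not real resources.

# ACM issues and renews the TLS certificate that secures data in transit.
acm_request = {
    "DomainName": "app.example.com",
    "ValidationMethod": "DNS",  # DNS validation allows automatic renewal
}

# KMS holds the symmetric key that encrypts data at rest.
kms_key_request = {
    "Description": "Key for encrypting application data at rest",
    "KeyUsage": "ENCRYPT_DECRYPT",
    "KeySpec": "SYMMETRIC_DEFAULT",
}
```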
Question 3 of 30
Objective 3.5 Design High-Performing Architectures

Which AWS service would you use to process and analyze streaming data in real-time, while also supporting batch processing for historical analysis?

Concept tested: Design High-Performing Architectures

A. Incorrect: Amazon Redshift Spectrum with AWS Lake Formation is incorrect because Amazon Redshift Spectrum and AWS Lake Formation are designed for querying data stored in Amazon S3 and managing access to it, but they do not support real-time streaming.

B. Incorrect: AWS Glue with Amazon S3 is incorrect because AWS Glue is an ETL service that helps with extracting, transforming, and loading data into a target storage like Amazon S3, but it does not handle real-time or batch processing of streaming data directly.

C. Incorrect: Amazon EMR with Apache Spark is incorrect because it can process large datasets in batch mode using Hadoop and Spark, but it lacks the capability to ingest and analyze streaming data in real time.

D. Correct: Amazon Kinesis Data Streams with Amazon Athena is correct because Amazon Kinesis Data Streams processes and analyzes streaming data in real time, while Amazon Athena supports ad-hoc querying of historical data stored in S3 for batch processing.

Why this matters: This matters because data architecture questions test whether ingestion, storage, processing, and governance choices match the workload.
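The ingestion side of the correct answer can be sketched as the parameters for a Kinesis PutRecord call. The stream name and event fields below are hypothetical.

```python
import json

def build_put_record(stream, event):
    """Parameters for kinesis.put_record; the partition key keeps
    related events (here, one user's events) on the same shard, in order."""
    return {
        "StreamName": stream,
        "Data": json.dumps(event).encode(),  # Kinesis expects bytes
        "PartitionKey": event["user_id"],
    }

params = build_put_record("clickstream", {"user_id": "u-42", "page": "/home"})
```

Downstream, the same records can land in S3 (for example via Kinesis Data Firehose) where Athena queries them in batch, covering the historical-analysis half of the scenario.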
Question 4 of 30
Objective 4.2 Design Cost-Optimized Architectures

Which AWS compute service should you choose for a workload that requires high availability and can tolerate interruptions but has an unpredictable usage pattern over the next year?

Concept tested: Design Cost-Optimized Architectures

A. Incorrect: Amazon EC2 On-Demand Instances is incorrect because On-Demand Instances offer no savings over standard rates, leaving the discount available to interruption-tolerant workloads unused.

B. Incorrect: AWS Lambda is incorrect because it charges per request and duration, making it less cost-effective for unpredictable usage patterns compared to Spot Instances.

C. Correct: Amazon EC2 Spot Instances is correct because Spot Instances provide significant cost savings by leveraging unused EC2 capacity and can handle interruptions, fitting workloads with flexible start times and unpredictable demand.

D. Incorrect: Reserved Instances is incorrect because they offer a discount but require commitment over one or three years, which doesn't align well with an unpredictable usage pattern.

Why this matters: This matters because cost questions reward matching pricing behavior to workload patterns, not choosing the most familiar service.
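As a hedged illustration, requesting Spot capacity is a small addition to an ordinary EC2 RunInstances call. The AMI ID and instance type below are placeholders.

```python
# Sketch of EC2 RunInstances parameters requesting Spot capacity.
# AMI ID and instance type are illustrative placeholders.
spot_run_params = {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "m5.large",
    "MinCount": 1,
    "MaxCount": 1,
    "InstanceMarketOptions": {
        "MarketType": "spot",
        "SpotOptions": {
            # A one-time request is not restarted after interruption;
            # the workload itself must tolerate being stopped.
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
}
```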
Question 5 of 30
Objective 2.1 Design Resilient Architectures

In a scenario where a media streaming service needs to process user activity data asynchronously and ensure loose coupling between services, which AWS service would best support this requirement by enabling event-driven processing through queues?

Concept tested: Design Resilient Architectures

A. Incorrect: Amazon S3 is incorrect because it is a storage service and does not support event-driven processing.

B. Incorrect: AWS AppSync is incorrect because it is designed for real-time data synchronization and GraphQL APIs, not asynchronous message queuing.

C. Correct: Amazon SQS provides reliable message queues that enable loose coupling between services through event-driven architecture.

D. Incorrect: Amazon CloudWatch Events is incorrect because although Amazon CloudWatch Events can trigger actions in response to events, it does not provide a queue mechanism for storing messages.

Why this matters: This matters because resilient design depends on choosing decoupling services when workloads need buffering, retries, or asynchronous processing.
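The decoupling in the correct answer can be sketched as the two halves of an SQS-based flow: a producer that enqueues activity events and a consumer that long-polls for them. The queue URL and event fields are hypothetical.

```python
import json

# Placeholder queue URL; in practice this comes from sqs.create_queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/user-activity"

def build_send(event):
    """Producer side: parameters for sqs.send_message with one event."""
    return {"QueueUrl": QUEUE_URL, "MessageBody": json.dumps(event)}

def build_receive():
    """Consumer side: parameters for sqs.receive_message, long-polling
    up to 20 s for as many as 10 messages per call."""
    return {"QueueUrl": QUEUE_URL, "MaxNumberOfMessages": 10, "WaitTimeSeconds": 20}

send_params = build_send({"user": "u-1", "action": "play"})
```

Because producer and consumer only share the queue, either side can scale, fail, or be redeployed independently, which is exactly the loose coupling the question asks for.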
Question 6 of 30
Objective 1.1 Design Secure Architectures

Which AWS service should be utilized to create a temporary session for a user with least privilege access across multiple accounts, ensuring secure cross-account resource access?

Concept tested: Design Secure Architectures

A. Incorrect: IAM Users is incorrect because they do not provide temporary credentials that adhere to least privilege access principles.

B. Incorrect: IAM Roles is incorrect because although roles enable cross-account access, the temporary, time-limited session credentials themselves are issued by STS when the role is assumed.

C. Incorrect: IAM Groups is incorrect because they manage collections of users and roles, but they do not create temporary security credentials or enforce least privilege access across multiple accounts.

D. Correct: STS provides temporary security credentials that adhere to the principle of least privilege, allowing secure cross-account resource access for a specified duration.

Why this matters: This matters because secure-architecture questions test the control that actually mitigates the stated risk, not a nearby security service.
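As a hedged sketch, here are the parameters for an sts.assume_role call that creates a short-lived, tightly scoped cross-account session. The account ID, role name, and bucket are illustrative; the inline session policy further narrows whatever the role itself allows.

```python
import json

# Parameters for sts.assume_role; ARNs and names are placeholders.
assume_role_params = {
    "RoleArn": "arn:aws:iam::210987654321:role/ReadOnlyAudit",
    "RoleSessionName": "audit-session",
    "DurationSeconds": 900,  # the shortest session STS allows
    # The session policy is intersected with the role's policy, so the
    # effective permissions can only shrink, never grow.
    "Policy": json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::audit-bucket/*",
        }],
    }),
}
```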
Question 7 of 30
Objective 3.2 Design High-Performing Architectures

When designing a cost-effective solution for processing large datasets in real-time with independent scaling and decoupling, which AWS service should be prioritized to minimize costs while ensuring high performance?

Concept tested: Design High-Performing Architectures

A. Incorrect: Amazon EC2 Spot Instances is incorrect because it provides cost savings through unused EC2 capacity but does not automatically scale and decouple workloads as effectively as AWS Lambda for real-time processing.

B. Incorrect: Amazon Elastic Container Service (ECS) is incorrect because although ECS can handle containerized applications, it requires managing clusters and scaling manually or with additional services, which adds complexity and potential costs compared to the automatic scaling of AWS Lambda.

C. Correct: AWS Lambda is correct because it automatically scales based on incoming request volume and only charges for compute time consumed, making it cost-effective for real-time processing without manual intervention.

D. Incorrect: Amazon AppStream 2.0 is incorrect because it is designed for streaming applications and desktops, not for processing large datasets in real-time with independent scaling.

Why this matters: This matters because cost questions reward matching pricing behavior to workload patterns, not choosing the most familiar service.
Question 8 of 30
Objective 4.4 Design Cost-Optimized Architectures

Which option minimizes egress costs when transferring large datasets from an on-premises data center to Amazon S3 using a combination of Direct Connect and edge locations?

Concept tested: Design Cost-Optimized Architectures

A. Incorrect: Use a NAT gateway for all transfers is incorrect because using a NAT gateway does not reduce egress costs and can add overhead to data transfers.

B. Incorrect: Establish a Direct Connect connection with multiple 1 Gbps links is incorrect because establishing multiple Direct Connect links increases infrastructure costs without necessarily minimizing egress charges for transferring large datasets.

C. Correct: Configure an S3 Transfer Acceleration endpoint at the nearest AWS Region's edge location is correct because configuring an S3 Transfer Acceleration endpoint leverages AWS edge locations, reducing latency and optimizing transfer speeds while minimizing egress costs.

D. Incorrect: Set up a site-to-site VPN tunnel exclusively for data transfer is incorrect because a site-to-site VPN tunnel does not utilize the benefits of edge locations and may incur higher egress fees compared to using S3 Transfer Acceleration.

Why this matters: This matters because cost questions reward matching pricing behavior to workload patterns, not choosing the most familiar service.
Question 9 of 30
Objective 2.2 Design Resilient Architectures

Which AWS service is best suited for designing a multi-region architecture that ensures high durability and minimizes Recovery Point Objective (RPO) by continuously replicating data across multiple geographic locations?

Concept tested: Design Resilient Architectures

A. Incorrect: AWS CloudFormation StackSets is incorrect because it is used to deploy and update stacks across multiple accounts and regions but does not handle data replication.

B. Correct: Amazon S3 Cross-Region Replication ensures high durability by continuously replicating objects from one region to another, minimizing Recovery Point Objective (RPO).

C. Incorrect: Amazon Route 53 DNS Failover is incorrect because it helps route traffic based on health checks but does not provide data replication across regions.

D. Incorrect: AWS Lambda Cross-Account Triggering is incorrect because it allows functions to trigger across different accounts but does not support continuous data replication for durability.

Why this matters: This matters because architecture questions ask you to match availability, latency, and recovery requirements to the feature designed for that job.
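The correct answer can be sketched as the replication configuration you would attach to the source bucket via S3's PutBucketReplication API. Bucket names and the IAM role ARN are placeholders, and versioning must already be enabled on both buckets for the configuration to be accepted.

```python
# Sketch of an S3 Cross-Region Replication configuration for the source
# bucket. ARNs and bucket names are illustrative placeholders.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
    "Rules": [{
        "ID": "replicate-all",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {},  # empty filter = replicate the whole bucket
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {
            "Bucket": "arn:aws:s3:::dr-bucket-eu-west-1",
            "StorageClass": "STANDARD",
        },
    }],
}
```

Once attached, new objects are copied to the destination Region continuously and asynchronously, which is what keeps the achievable RPO low.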
Question 10 of 30
Objective 1.2 Design Secure Architectures

When designing a VPC for an application that requires strict network segmentation, which combination of AWS services would you use to enforce security policies and monitor potential threats?

Concept tested: Design Secure Architectures

A. Incorrect: AWS WAF with Shield is incorrect because they focus on protecting web applications from common web exploits and do not provide network segmentation.

B. Incorrect: Amazon GuardDuty with Macie is incorrect because they are threat-detection and data-protection services, but they do not enforce network traffic rules within a VPC.

C. Incorrect: AWS Secrets Manager with Cognito is incorrect because they manage secrets and user authentication respectively, which are unrelated to enforcing strict network policies in a VPC.

D. Correct: VPC Flow Logs with Network ACLs allow for detailed monitoring and enforcement of network traffic rules, ensuring strict security policies are adhered to.

Why this matters: This matters because secure-architecture questions test the control that actually mitigates the stated risk, not a nearby security service.
Question 11 of 30
Objective 3.1 Design High-Performing Architectures

When designing a storage solution for an application that requires high throughput and frequent access to large files, which AWS service should be considered?

Concept tested: Design High-Performing Architectures

A. Incorrect: Amazon FSx for Windows File Server is incorrect because it is designed more for file-level access and compatibility with Windows applications rather than high throughput and large files.

B. Incorrect: Amazon EBS General Purpose SSD (gp2) is incorrect because EBS General Purpose SSD (gp2) provides balanced performance but may not offer the necessary throughput or cost-effectiveness for frequent, high-volume data access.

C. Correct: Amazon S3 Standard is designed to provide high throughput and durability, making it suitable for applications that require frequent access to large files.

D. Incorrect: Amazon Elastic Block Store (EBS) Provisioned IOPS SSD (io1) is incorrect because EBS Provisioned IOPS SSD (io1) is optimized for random I/O-intensive workloads rather than the sequential read/write patterns typical of large file storage.

Why this matters: This matters because storage-performance questions test whether you match latency, throughput, and access patterns to the right storage service.
Question 12 of 30
Objective 4.3 Design Cost-Optimized Architectures

When planning a database solution for an application with high read traffic and low write traffic, which AWS service would best optimize costs while ensuring automatic scaling and minimal management overhead?

Concept tested: Design Cost-Optimized Architectures

A. Incorrect: Amazon RDS Multi-AZ deployments is incorrect because it provides high availability and durability but does not automatically scale based on read traffic demand.

B. Incorrect: Amazon DynamoDB Accelerator (DAX) is incorrect because although DAX can improve performance for DynamoDB, it does not address the cost optimization needs of a relational database with high read traffic.

C. Correct: Amazon Aurora Serverless v2 is correct because it automatically scales capacity up or down based on application needs, minimizing costs and management overhead.

D. Incorrect: Amazon ElastiCache is incorrect because ElastiCache is designed to cache data to speed up reads but does not provide an underlying database solution.

Why this matters: This matters because cost questions reward matching pricing behavior to workload patterns, not choosing the most familiar service.
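As a hedged sketch, the scaling behavior that makes Aurora Serverless v2 the answer is configured with a single block on the cluster. Identifiers below are placeholders.

```python
# Sketch of rds.create_db_cluster parameters enabling Serverless v2
# scaling. Identifiers are illustrative placeholders.
cluster_params = {
    "DBClusterIdentifier": "app-cluster",
    "Engine": "aurora-mysql",
    "MasterUsername": "admin",
    "ManageMasterUserPassword": True,  # let RDS generate and store the secret
    "ServerlessV2ScalingConfiguration": {
        # Capacity in Aurora Capacity Units (ACUs); the cluster scales
        # automatically within this band as read traffic rises and falls.
        "MinCapacity": 0.5,
        "MaxCapacity": 8.0,
    },
}
```

Because billing follows the ACUs actually consumed, a read-heavy, bursty workload pays for peak capacity only while the peak lasts.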
Question 13 of 30
Objective 2.2 Design Resilient Architectures

In a scenario where minimizing Recovery Time Objective (RTO) is critical, which AWS service would best support rapid failover and automatic scaling across multiple Availability Zones within the same region?

Concept tested: Design Resilient Architectures

A. Incorrect: Amazon S3 Cross-Region Replication is incorrect because it focuses on cross-region replication rather than intra-region failover and scaling.

B. Incorrect: AWS CloudFormation StackSets is incorrect because it is for deploying stacks across multiple accounts and regions, not for rapid failover within a single region.

C. Incorrect: Amazon Route 53 DNS Failover is incorrect because it helps route traffic to healthy endpoints but does not provide automatic scaling or instance management.

D. Correct: Amazon EC2 Auto Scaling is correct because it supports rapid failover and automatic scaling of EC2 instances across multiple Availability Zones within the same region.

Why this matters: This matters because architecture questions ask you to match availability, latency, and recovery requirements to the feature designed for that job.
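The multi-AZ spread behind the correct answer can be sketched as an Auto Scaling group definition. Names, AZs, and sizes below are hypothetical.

```python
# Sketch of autoscaling.create_auto_scaling_group parameters spreading
# instances across three Availability Zones. Names are placeholders.
asg_params = {
    "AutoScalingGroupName": "web-asg",
    "LaunchTemplate": {"LaunchTemplateName": "web-lt", "Version": "$Latest"},
    "MinSize": 2,
    "MaxSize": 6,
    "DesiredCapacity": 2,
    # Instances are balanced across these AZs, so losing one AZ leaves
    # capacity running in the others while replacements launch.
    "AvailabilityZones": ["us-east-1a", "us-east-1b", "us-east-1c"],
    "HealthCheckType": "ELB",  # replace instances the load balancer marks unhealthy
    "HealthCheckGracePeriod": 120,
}
```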
Question 14 of 30
Objective 1.1 Design Secure Architectures

What is the most secure method to provide a third-party contractor with access to resources in multiple AWS accounts while adhering to least privilege principles?

Concept tested: Design Secure Architectures

A. Incorrect: Use IAM users and groups across all accounts is incorrect because it does not provide centralized management of user access and roles across multiple accounts.

B. Correct: Deploy IAM Identity Center (SSO) with roles scoped to specific tasks is correct because deploying IAM Identity Center (SSO) with roles scoped to specific tasks allows for centralized management while adhering to the principle of least privilege.

C. Incorrect: Create an IAM role with cross-account permissions for each account is incorrect because creating an IAM role with cross-account permissions for each account can lead to overly permissive policies if not carefully managed.

D. Incorrect: Configure resource-based policies granting direct access to contractors is incorrect because configuring resource-based policies granting direct access to contractors bypasses the benefits of centralized identity and access management, increasing security risks.

Why this matters: This is important because centralized management of user access ensures secure and efficient control over who can perform specific tasks across multiple AWS accounts.
Question 15 of 30
Objective 3.4 Design High-Performing Architectures

In a scenario where an application requires low-latency access to resources across multiple AWS regions, which service would you configure to ensure traffic is routed efficiently and securely between these regions?

Concept tested: Design High-Performing Architectures

A. Incorrect: AWS CloudFront is incorrect because it is designed for content delivery and global distribution of static and dynamic web content, but it does not provide low-latency routing between multiple regions.

B. Incorrect: AWS Direct Connect is incorrect because it provides dedicated network connections from on-premises data centers to AWS but does not route traffic efficiently across different AWS Regions.

C. Correct: AWS Global Accelerator provides static IP addresses for routing traffic with low latency across AWS Regions, ensuring efficient and secure access to resources in multiple regions.

D. Incorrect: AWS PrivateLink is incorrect because it enables communication between services hosted on the Amazon Virtual Private Cloud (VPC) and other AWS services without traversing the public internet, but it does not specifically address cross-region traffic optimization.

Why this matters: This matters because architecture questions ask you to match availability, latency, and recovery requirements to the feature designed for that job.
Question 16 of 30
Objective 4.1 Design Cost-Optimized Architectures

Which AWS storage class is most suitable for infrequently accessed data that requires low-cost, durable storage with secure access controls and compliance features?

Concept tested: Design Cost-Optimized Architectures

A. Correct: Amazon S3 Glacier Deep Archive is correct because it provides low-cost storage with secure access controls and compliance features suitable for infrequently accessed data.

B. Incorrect: Amazon S3 Standard is incorrect because it offers higher costs compared to Amazon S3 Glacier Deep Archive and is designed for frequently accessed data, not long-term archiving.

C. Incorrect: Amazon EFS is incorrect because it is intended for file storage that requires frequent access and does not offer the cost savings or compliance features needed for infrequently accessed data.

D. Incorrect: AWS Snowball is incorrect because it is a physical device used to transfer large amounts of data into or out of AWS, rather than providing long-term durable storage.

Why this matters: This matters because cost questions reward matching pricing behavior to workload patterns, not choosing the most familiar service.
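As a hedged illustration, writing directly to the Deep Archive storage class is a one-field change on an ordinary S3 PutObject call. Bucket and key names are placeholders.

```python
# Sketch of s3.put_object parameters that write an object straight into
# the Glacier Deep Archive storage class. Names are placeholders.
put_params = {
    "Bucket": "compliance-archive",
    "Key": "2026/records.tar.gz",
    "Body": b"...",  # archive payload
    "StorageClass": "DEEP_ARCHIVE",
    # Encrypting at rest with KMS supports the compliance requirement.
    "ServerSideEncryption": "aws:kms",
}
```

Retrieval from Deep Archive takes hours, which is the trade-off that makes it the lowest-cost choice only for data that is rarely accessed.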
Question 17 of 30
Objective 2.1 Design Resilient Architectures

Which AWS service would be most suitable for implementing a publish-subscribe model to decouple components in a microservice architecture that processes large volumes of user-generated content asynchronously?

Concept tested: Design Resilient Architectures

A. Incorrect: Amazon S3 is incorrect because it is a storage service and does not support messaging patterns like publish-subscribe.

B. Correct: Amazon Simple Queue Service (SQS) decouples microservice components through durable message queues, enabling scalable, asynchronous processing of large volumes of user-generated content; combined with Amazon SNS fan-out, it also supports publish-subscribe delivery.

C. Incorrect: AWS AppSync is incorrect because it is a GraphQL service for real-time data synchronization and does not provide queue-based messaging functionality.

D. Incorrect: Amazon CloudWatch Events is incorrect because they are used for triggering actions based on events but do not support the publish-subscribe model for decoupling microservices.

Why this matters: This matters because resilient design depends on choosing decoupling services when workloads need buffering, retries, or asynchronous processing.
Question 18 of 30
Objective 1.2 Design Secure Architectures

Which AWS service would you use to protect web applications from SQL injection and cross-site scripting attacks while ensuring compliance with PCI DSS standards?

Concept tested: Design Secure Architectures

A. Incorrect: AWS Shield is incorrect because it focuses on DDoS protection and does not provide features to specifically guard against SQL injection and cross-site scripting attacks.

B. Incorrect: Amazon Macie is incorrect because it is designed for data security and privacy, particularly for sensitive data like PII, but it doesn't offer web application firewall capabilities.

C. Incorrect: Amazon GuardDuty is incorrect because it monitors AWS environments for malicious activity and unauthorized behavior, but it does not provide protection against SQL injection or cross-site scripting attacks.

D. Correct: AWS WAF allows you to customize security rules that block common exploits such as SQL injection and cross-site scripting, while also supporting compliance with PCI DSS standards.

Why this matters: This matters because secure-architecture questions test the control that actually mitigates the stated risk, not a nearby security service.
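As a hedged sketch of the correct answer, a WAFv2 web ACL can attach the AWS-managed SQL-injection rule group in one rule. The ACL and metric names below are illustrative placeholders.

```python
# Sketch of wafv2.create_web_acl parameters attaching the AWS managed
# SQL-injection rule group. Names and scope are illustrative.
web_acl_params = {
    "Name": "app-web-acl",
    "Scope": "REGIONAL",  # use "CLOUDFRONT" when attaching to a distribution
    "DefaultAction": {"Allow": {}},  # allow by default; rules block exploits
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "appWebAcl",
    },
    "Rules": [{
        "Name": "sqli-managed",
        "Priority": 0,
        "OverrideAction": {"None": {}},  # use the rule group's own actions
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet",
        }},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "sqliManaged",
        },
    }],
}
```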
Question 19 of 30
Objective 3.3 Design High-Performing Architectures

In a scenario where an application experiences high read traffic and low write frequency, which AWS database solution would be most suitable to minimize latency while ensuring high availability?

Concept tested: Design High-Performing Architectures

A. Incorrect: Amazon RDS for MySQL with read replicas is incorrect because it uses read replicas to handle high read traffic but does not provide automatic failover for high availability.

B. Incorrect: Amazon DynamoDB with on-demand capacity mode is incorrect because although it can scale reads efficiently, it may incur higher costs and does not inherently offer the same built-in high availability as Aurora's Multi-AZ deployment.

C. Correct: Amazon Aurora with Multi-AZ deployment provides automatic failover for high availability and optimizes read performance through distributed replicas.

D. Incorrect: Amazon ElastiCache for Redis with replication groups is incorrect because although ElastiCache for Redis can handle high read traffic, it acts more as a caching layer rather than a primary database solution, potentially requiring additional setup to ensure high availability.

Why this matters: This matters because database questions hinge on matching scaling, availability, and access patterns to the right managed data service.
Question 20 of 30
Objective 4.4 Design Cost-Optimized Architectures

Which option minimizes egress costs when transferring large amounts of data from an on-premises environment to Amazon S3 using a Direct Connect connection?

Concept tested: Design Cost-Optimized Architectures

A. Correct: Deploy a Direct Connect gateway with optimized routing to S3 is correct because Direct Connect data-transfer-out rates are lower than public-internet egress rates, so routing the transfer over the dedicated connection minimizes egress costs.

B. Incorrect: Establish a site-to-site VPN tunnel exclusively for S3 transfers is incorrect because establishing a site-to-site VPN tunnel for S3 transfers incurs higher egress fees compared to Direct Connect.

C. Incorrect: Configure S3 Transfer Acceleration and use the public internet is incorrect because using S3 Transfer Acceleration over the public internet does not minimize egress costs as effectively as Direct Connect.

D. Incorrect: Use a NAT gateway for outbound traffic is incorrect because a NAT gateway adds per-GB processing charges and does not provide optimized routing to S3, unlike a Direct Connect gateway.

Why this matters: This matters because cost questions reward matching pricing behavior to workload patterns, not choosing the most familiar service.
Question 21 of 30
Objective 2.1 Design Resilient Architectures

In a scenario where an e-commerce platform needs to process large volumes of order data asynchronously and ensure loose coupling between services, which AWS service would best support this requirement by enabling event-driven processing through queues?

Concept tested: Design Resilient Architectures

A. Incorrect: Amazon S3 is incorrect because it is a storage service and does not support asynchronous processing through queues.

B. Correct: Amazon SQS enables event-driven processing by allowing services to communicate asynchronously via message queues, ensuring loose coupling between them.

C. Incorrect: AWS Lambda is incorrect because it can be used with SQS for event-driven architectures but it does not provide the queue functionality needed for this scenario.

D. Incorrect: Amazon RDS is incorrect because it is a relational database service and does not support asynchronous messaging or decoupling of services.

Why this matters: This matters because resilient design depends on choosing decoupling services when workloads need buffering, retries, or asynchronous processing.
Question 22 of 30
Objective 1.3 Design Secure Architectures

Which combination of AWS services should you use to ensure data at rest is encrypted and key management policies are enforced for a critical database?

Concept tested: Design Secure Architectures

A. Incorrect: Amazon S3 with server-side encryption and IAM policies is incorrect because they focus on object storage rather than relational databases.

B. Incorrect: AWS CloudHSM with custom HSM modules and AWS Secrets Manager is incorrect because they provide hardware-based key management but lack the flexibility of AWS KMS for policy enforcement and integration with other services.

C. Incorrect: Amazon RDS with automatic backups and VPC flow logs is incorrect because that combination does not directly address encryption or key management policies, which are crucial for securing data at rest.

D. Correct: AWS KMS with customer master keys (CMKs) and key policies provides a robust solution for encrypting data at rest and enforcing strict access controls through key policies.

Why this matters: This matters because secure-architecture questions test the control that actually mitigates the stated risk, not a nearby security service.
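The key-policy half of the correct answer can be sketched as a minimal KMS key policy: the account root keeps administrative control while a single database role is allowed only the operations it needs. The account ID and role name are placeholders.

```python
import json

# Minimal sketch of a KMS key policy enforcing least-privilege use of
# the key. Account ID and role name are illustrative placeholders.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AdminAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "DbDecryptOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/db-app"},
            # The database role can use the key but cannot manage it.
            "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}
policy_json = json.dumps(key_policy)  # kms.create_key takes the policy as a JSON string
```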
Question 23 of 30
Objective 3.5 Design High-Performing Architectures

Which AWS service would you select for ingesting and transforming semi-structured data in real-time, while also supporting batch processing for historical analysis?

Concept tested: Design High-Performing Architectures

A. Incorrect: AWS Glue ETL Jobs is incorrect because it is designed for batch processing and does not support real-time data ingestion.

B. Correct: Amazon Kinesis Data Streams can handle high volumes of streaming data in near real-time and integrate with AWS services like AWS Glue for historical analysis.

C. Incorrect: Amazon S3 Batch Operations is incorrect because it is used to manage large-scale operations on objects stored in Amazon S3, not for ingesting or transforming semi-structured data in real-time.

D. Incorrect: Amazon Redshift Spectrum is incorrect because it allows querying of data directly from Amazon S3 but does not support real-time ingestion and transformation of streaming data.

Why this matters: This matters because data architecture questions test whether ingestion, storage, processing, and governance choices match the workload.
Question 24 of 30
Objective 4.3 Design Cost-Optimized Architectures

Which AWS service would be most cost-effective for a high-read, low-write application that requires automatic scaling and minimal management overhead?

Concept tested: Design Cost-Optimized Architectures

A. Incorrect: Amazon Aurora Serverless v2 is incorrect because it requires manual configuration to optimize costs and does not automatically scale as efficiently as DynamoDB for high-read workloads.

B. Incorrect: Amazon RDS Multi-AZ deployment is incorrect because it involves a multi-AZ deployment that incurs higher costs due to redundancy, making it less cost-effective than DynamoDB's on-demand capacity mode.

C. Correct: Amazon DynamoDB with on-demand capacity mode is correct because it automatically scales to handle read/write workloads and minimizes management overhead, making it ideal for high-read, low-write applications.

D. Incorrect: Amazon ElastiCache for Redis is incorrect because although it can scale effectively, it requires more manual tuning and management than DynamoDB's on-demand capacity mode.

Why this matters: This matters because cost questions reward matching pricing behavior to workload patterns, not choosing the most familiar service.
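A sketch of what "on-demand capacity mode" means in practice: the table definition below uses the shape of a DynamoDB CreateTable request, with `BillingMode` set to `PAY_PER_REQUEST`. The table and attribute names are illustrative.

```python
# Parameters in the shape of a DynamoDB CreateTable request.
table_spec = {
    "TableName": "product-catalog",                      # illustrative name
    "AttributeDefinitions": [
        {"AttributeName": "sku", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "sku", "KeyType": "HASH"},
    ],
    # PAY_PER_REQUEST = on-demand mode: no read/write capacity units to
    # provision or auto-scale; you pay per request instead, which suits
    # high-read, low-write traffic with minimal management overhead.
    "BillingMode": "PAY_PER_REQUEST",
}
```

Note the absence of a `ProvisionedThroughput` key, which a provisioned-mode table would require.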
Question 25 of 30
Objective 2.2 Design Resilient Architectures

In a multi-region architecture, which AWS service is best suited for maintaining data durability and minimizing Recovery Point Objective (RPO) by continuously replicating data across multiple geographic locations?

Concept tested: Design Resilient Architectures

A. Incorrect: AWS Lambda Provisioned Concurrency is incorrect because it optimizes function performance and availability but does not replicate data across regions.

B. Incorrect: AWS CloudFormation StackSets is incorrect because it enables the deployment of infrastructure stacks across multiple accounts and regions but does not handle data replication for durability.

C. Correct: Amazon S3 Cross-Region Replication continuously copies objects from one region to another, ensuring high availability and minimizing recovery point objectives.

D. Incorrect: Amazon RDS Multi-AZ Deployments is incorrect because although Amazon RDS Multi-AZ Deployments provide automatic failover within a single AWS Region, they do not replicate data across multiple geographic regions.

Why this matters: This matters because architecture questions ask you to match availability, latency, and recovery requirements to the feature designed for that job.
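For reference, a Cross-Region Replication setup is driven by a replication configuration document like the sketch below (the shape passed to S3's put_bucket_replication). The role ARN and bucket names are illustrative; versioning must also be enabled on both buckets for CRR to work.

```python
# An S3 replication configuration document (illustrative names).
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/crr-replication-role",
    "Rules": [
        {
            "ID": "replicate-all",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = replicate every new object
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                # The destination bucket lives in a different region,
                # which is what keeps the cross-region RPO low.
                "Bucket": "arn:aws:s3:::dr-backup-us-west-2",
            },
        }
    ],
}
```

Because replication is continuous and asynchronous, new objects start copying shortly after upload rather than on a backup schedule.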
Question 26 of 30
Objective 1.2 Design Secure Architectures

When designing a VPC for an application that needs to comply with GDPR and CCPA, which combination of AWS services would you use to ensure data privacy and secure network segmentation?

Concept tested: Design Secure Architectures

A. Incorrect: AWS WAF and Shield is incorrect because they are designed to protect web applications from common exploits and DDoS attacks but do not provide network segmentation.

B. Correct: AWS Security Groups and Network ACLs offer the necessary controls for network segmentation and data privacy within a VPC, aligning with GDPR and CCPA compliance requirements.

C. Incorrect: AWS Cognito and Secrets Manager is incorrect because they are focused on user authentication and managing secrets securely but do not address network-level security or segmentation.

D. Incorrect: AWS GuardDuty and Macie is incorrect because they provide threat detection and data protection services respectively, but they do not offer the granular control over network traffic required for compliance.

Why this matters: This matters because secure-architecture questions test the control that actually mitigates the stated risk, not a nearby security service.
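As a concrete example of the network segmentation the correct answer describes, the sketch below shows an ingress rule in the shape of EC2's authorize_security_group_ingress parameters. The group ID and CIDR range are illustrative; the rule allows only the app tier's subnet to reach a database port.

```python
# Ingress rule parameters in the shape of EC2 authorize_security_group_ingress.
# Security groups are stateful allow-lists; everything not allowed is denied.
ingress_rule = {
    "GroupId": "sg-0123456789abcdef0",  # hypothetical database-tier group
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "IpRanges": [
                {
                    "CidrIp": "10.0.1.0/24",  # illustrative app-tier subnet
                    "Description": "app tier to database tier only",
                }
            ],
        }
    ],
}
```

Network ACLs add a second, stateless layer at the subnet boundary, so traffic between segments must pass both controls.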
Question 27 of 30
Objective 3.1 Design High-Performing Architectures

Which AWS storage service should be chosen for a workload that requires secure, frequent access to small files with high IOPS and low latency while ensuring compliance controls?

Concept tested: Design High-Performing Architectures

A. Incorrect: Amazon S3 Glacier is incorrect because it is designed for long-term archive storage and retrieval of infrequently accessed data, which does not meet the requirement for frequent access to small files with high IOPS.

B. Correct: Amazon EBS Provisioned IOPS SSD provides consistent, predictable performance with high levels of I/O operations per second (IOPS) and low latency, making it ideal for workloads requiring secure, frequent access to small files while ensuring compliance controls.

C. Incorrect: Amazon FSx for Windows File Server is incorrect because although it offers high-performance file system support, it does not deliver the guaranteed IOPS and low latency of EBS Provisioned IOPS SSD for this scenario.

D. Incorrect: Amazon Elastic Block Store (EBS) is incorrect because standard Amazon Elastic Block Store (EBS) volumes do not offer the same level of guaranteed I/O performance and low latency that are critical for workloads requiring high IOPS.

Why this matters: This matters because storage questions test whether a service's performance profile (IOPS, latency, access pattern) matches the workload's requirements, not just whether it can store the data.
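To illustrate what "Provisioned IOPS" means, here is a sketch of an EC2 create_volume request for an io2 volume. The Availability Zone, size, and IOPS figure are illustrative.

```python
# Parameters in the shape of an EC2 create_volume call for a
# Provisioned IOPS SSD (io2) volume.
volume_spec = {
    "AvailabilityZone": "us-east-1a",  # illustrative AZ
    "VolumeType": "io2",               # Provisioned IOPS SSD
    "Size": 100,                       # GiB
    "Iops": 10000,                     # you provision the IOPS you need
    "Encrypted": True,                 # encryption at rest supports compliance
}
```

Unlike gp2/gp3 baseline behavior, the provisioned figure is a consistent performance commitment, which is the property this question rewards.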
Question 28 of 30
Objective 4.2 Design Cost-Optimized Architectures

Which AWS compute solution is most suitable for a workload that experiences significant variability in demand and can tolerate interruptions, but requires cost optimization over the next two years?

Concept tested: Design Cost-Optimized Architectures

A. Incorrect: Reserved Instances with On-Demand Instances is incorrect because Reserved Instances provide a discount but do not offer significant cost savings compared to Spot Instances when demand varies.

B. Correct: Spot Instances with Auto Scaling are ideal for workloads that can tolerate interruptions and benefit from substantial cost savings while handling variable demand efficiently.

C. Incorrect: Savings Plans with EC2 Dedicated Instances is incorrect because Savings Plans offer long-term discounts, but EC2 Dedicated Instances do not provide the same level of cost optimization as Spot Instances for fluctuating demand.

D. Incorrect: Lambda functions with provisioned concurrency is incorrect because they are best suited for stateless workloads and do not offer the cost savings or demand variability handling that Spot Instances provide.

Why this matters: This matters because cost questions reward matching pricing behavior to workload patterns, not choosing the most familiar service.
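The "Spot Instances with Auto Scaling" pattern is typically expressed as a mixed instances policy on the Auto Scaling group. The sketch below uses the shape of that policy; the launch template name and instance types are illustrative.

```python
# A MixedInstancesPolicy fragment in the shape used when creating an
# Auto Scaling group (illustrative names and instance types).
mixed_instances_policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "batch-worker",  # hypothetical template
            "Version": "$Latest",
        },
        "Overrides": [
            {"InstanceType": "m5.large"},
            {"InstanceType": "m5a.large"},  # more pools = fewer interruptions
        ],
    },
    "InstancesDistribution": {
        "OnDemandBaseCapacity": 0,
        "OnDemandPercentageAboveBaseCapacity": 0,  # run 100% on Spot
        "SpotAllocationStrategy": "capacity-optimized",
    },
}
```

Auto Scaling replaces interrupted Spot capacity automatically, which is why the pattern suits interruption-tolerant, variable-demand workloads.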
Question 29 of 30
Objective 2.1 Design Resilient Architectures

When designing a system to process high volumes of user-generated content asynchronously, which AWS service would best support horizontal scaling and loose coupling in a microservice architecture?

Concept tested: Design Resilient Architectures

A. Incorrect: Amazon S3 is incorrect because it is a storage service and does not support processing high volumes of user-generated content asynchronously.

B. Incorrect: AWS AppSync is incorrect because it is an API service for real-time data exchange between clients and backend services, but it does not provide the horizontal scaling or loose coupling needed in a microservice architecture.

C. Incorrect: Amazon SQS is incorrect because although Amazon SQS provides a message queue to decouple components, it does not inherently support the processing logic required for handling user-generated content.

D. Correct: AWS Lambda supports serverless computing, enabling automatic scaling and event-driven architectures that are ideal for asynchronous processing of high volumes of user-generated content.

Why this matters: This matters because Design Resilient Architectures questions test whether AWS Lambda fits the scenario's constraints, not just whether the term sounds familiar.
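In practice, SQS and Lambda are often paired: the queue decouples producers from consumers, and Lambda scales the processing horizontally. The sketch below is a minimal handler for an SQS-triggered function; the event shape follows the SQS event source format, and the processing step is a placeholder.

```python
import json

def handler(event, context):
    """Minimal Lambda handler sketch for an SQS-triggered function.

    Each invocation receives a batch of queue messages; failed messages
    are reported individually so only those records are retried.
    """
    failed = []
    for record in event.get("Records", []):
        try:
            body = json.loads(record["body"])
            # ... process one piece of user-generated content here ...
        except Exception:
            failed.append({"itemIdentifier": record["messageId"]})
    # Partial-batch response format understood by the SQS event source mapping.
    return {"batchItemFailures": failed}

# Local smoke test with a hand-built SQS-style event:
sample = {"Records": [{"messageId": "m1", "body": json.dumps({"id": 1})}]}
```

Because each message is independent, Lambda can run many batches concurrently, which is the horizontal scaling the question asks about.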
Question 30 of 30
Objective 1.1 Design Secure Architectures

How can an organization ensure that a third-party contractor has secure and limited access to specific AWS resources for a short-term project, adhering to the principle of least privilege?

Concept tested: Design Secure Architectures

A. Incorrect: Create an IAM user with full administrative permissions is incorrect because standing administrative credentials violate the principle of least privilege.

B. Incorrect: Grant direct S3 bucket access using pre-signed URLs is incorrect because direct S3 bucket access using pre-signed URLs does not provide temporary and limited access as required by the principle of least privilege.

C. Correct: Assign an IAM role with cross-account permissions through STS is correct because temporary STS credentials scoped to a narrowly permissioned role give the contractor secure, time-limited access that adheres to the principle of least privilege.

D. Incorrect: Use IAM Identity Center (formerly AWS Single Sign-On) for federated access is incorrect because federated access via IAM Identity Center does not specifically address cross-account temporary permissions for third-party contractors.

Why this matters: This matters because access-control questions test whether you choose least-privilege patterns instead of broad credentials or root access.
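The cross-account STS pattern boils down to an assume_role call like the sketch below. The role ARN, session name, and duration are illustrative; the call would return temporary credentials that expire on their own.

```python
# Parameters in the shape of an STS assume_role call (illustrative values).
assume_role_params = {
    # A narrowly scoped role in the target account, whose trust policy
    # names the contractor's account as a trusted principal.
    "RoleArn": "arn:aws:iam::123456789012:role/contractor-readonly",
    "RoleSessionName": "contractor-project-x",
    "DurationSeconds": 3600,  # credentials expire after one hour
}
# boto3.client("sts").assume_role(**assume_role_params) would return an
# AccessKeyId / SecretAccessKey / SessionToken valid only until expiry.
```

When the project ends, deleting the role revokes access entirely, with no long-lived user credentials to clean up.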
Where to go after the daily web set

How are AWS SAA-C03 questions generated?

dotCreds builds AWS SAA-C03 practice questions from public exam objectives and AWS certification guides and documentation. The questions are written for realistic study practice, not copied from exam dumps.

How are explanations sourced?

Each question includes an explanation and, when available, a source link back to the provider documentation or reference used to validate the answer. That keeps the practice tied to study material you can actually review.

What score do I get?

The page tracks today's answered count and accuracy for the 30-question daily set, then saves a 7-day score history on this device so you can see your recent practice trend.

Why use this site?

The site is the fastest way to start AWS SAA-C03 practice without installing anything. It is built for daily recall, quick weak-topic discovery, and source-backed explanations you can review immediately.

Why use the app when available?

The web page is the quick free sampler. If a dotCreds app is available for AWS SAA-C03, the app is better for larger banks, focused weak-domain drills, longer review sessions, and mobile study routines.