Free – AWS Certified Solutions Architect – Associate Exam Practice Questions

AWS Certified Solutions Architect – Associate Exam Practice

Are you prepared for your upcoming AWS Certified Solutions Architect – Associate (SAA-C03) exam?

Assess your understanding with these free AWS Certified Solutions Architect – Associate exam practice questions. Just click the View Answer button to reveal the correct answer along with a comprehensive explanation.

Let’s Start the Test

Question 1 A company is running a highly scalable and fault-tolerant web application on Amazon EC2 instances. The application requires real-time access to frequently updated data. Which AWS service should be used to store and retrieve this data? 

a) Amazon DynamoDB
b) Amazon Redshift
c) Amazon S3
d) Amazon RDS

View Answer

Answer is: a – Amazon DynamoDB

Explanation: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance for real-time access to frequently updated data. It is well-suited for highly scalable and fault-tolerant web applications.
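
To make the DynamoDB pattern concrete, here is a minimal boto3 sketch (not part of the exam question); the table name SessionData and its session_id key are hypothetical:

```python
import boto3

# Assumes a DynamoDB table named "SessionData" with partition key "session_id".
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SessionData")

# Write a frequently updated item.
table.put_item(Item={"session_id": "abc123", "last_seen": "2024-01-01T12:00:00Z"})

# Read it back; strongly consistent reads return the latest write.
resp = table.get_item(Key={"session_id": "abc123"}, ConsistentRead=True)
print(resp.get("Item"))
```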

Question 2 A software development company wants to build a serverless application that requires running code in response to events. Which AWS service can they use for this purpose?

a) Utilize AWS Lambda to run code in response to events, such as changes to data in Amazon S3 or updates in an Amazon DynamoDB table.

b) Deploy Amazon EC2 instances and configure them to trigger code execution based on events using AWS CloudWatch Events.

c) Leverage Amazon RDS to execute code in response to events, with triggers set up on database changes.

d) Implement AWS App Mesh to handle event-based code execution, ensuring high scalability and fault tolerance.

View Answer

Answer is: a – Utilize AWS Lambda to run code in response to events, such as changes to data in Amazon S3 or updates in an Amazon DynamoDB table.

Explanation: AWS Lambda is a serverless compute service that allows developers to run code without provisioning or managing servers. It can be triggered by various events, making it suitable for building event-driven applications efficiently.
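
As an illustration of the event-driven pattern, the sketch below shows a Lambda handler reacting to S3 object-created notifications; the bucket and object names come from the event payload, and the trigger wiring is assumed to be configured separately:

```python
import json

def lambda_handler(event, context):
    """Hypothetical handler invoked by an S3 "ObjectCreated" notification."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real processing (resize, index, validate, ...) would go here.
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps("processed")}
```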

Question 3 A healthcare organization wants to store and analyze large amounts of medical data securely. The data needs to be encrypted at rest and have granular access controls. Which AWS service should be used?

a) Amazon S3
b) Amazon RDS
c) Amazon Redshift
d) Amazon Glacier

View Answer

Answer is: c – Amazon Redshift

Explanation: Amazon Redshift is a fully managed data warehousing service that provides secure storage and analysis of large datasets. It supports encryption at rest and offers granular access controls to ensure the security of sensitive medical data.

Question 4 A company wants to implement a serverless architecture for its application to eliminate the need for server management and reduce costs. Which AWS service would be most suitable for this?

a) AWS Lambda
b) Amazon EC2
c) Amazon S3
d) Amazon RDS

View Answer

Answer is: a – AWS Lambda

Explanation: AWS Lambda is a serverless compute service that allows developers to run code without provisioning or managing servers. It automatically scales based on the incoming request volume and helps reduce costs by charging only for the compute time consumed by the code.

Question 5 A media streaming company wants to deliver video content to a global audience with low latency and high performance. Which AWS service can they use to achieve this?

a) Use Amazon CloudFront, a content delivery network (CDN), to distribute video content to edge locations worldwide for fast and reliable delivery.

b) Deploy Amazon S3 to store the video content and stream it directly from the S3 bucket using signed URLs for authentication.

c) Utilize AWS Direct Connect to establish a dedicated network connection between the media streaming company and their global audience.

d) Implement Amazon Elastic Transcoder to convert video content into various formats suitable for different devices and regions.

View Answer

Answer is: a – Use Amazon CloudFront, a content delivery network (CDN), to distribute video content to edge locations worldwide for fast and reliable delivery.

Explanation: Amazon CloudFront is a global content delivery network that caches and delivers content to end-users with low latency and high performance. By using CloudFront, the media streaming company can ensure efficient and fast delivery of video content to their global audience.

Question 6 A company wants to automate the deployment of their application infrastructure using code. Which AWS service can help achieve this?

a) AWS Lambda
b) AWS Elastic Beanstalk
c) AWS CloudFormation
d) Amazon EC2

View Answer

Answer is: c – AWS CloudFormation

Explanation: AWS CloudFormation is a service that enables infrastructure as code by automating the deployment and management of AWS resources. It allows the company to define their application infrastructure in a template and provision it consistently and efficiently.
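
As a rough sketch of infrastructure as code (the stack and bucket names are hypothetical), a template can be defined inline and deployed with boto3:

```python
import boto3

# Minimal hypothetical template: one S3 bucket declared as code.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-infra", TemplateBody=TEMPLATE)
# Block until the resources have been provisioned.
cfn.get_waiter("stack_create_complete").wait(StackName="demo-infra")
```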

Question 7 A retail company wants to securely process credit card transactions on their website. Which AWS service can help achieve this?

a) AWS KMS
b) AWS IAM
c) Amazon S3
d) Amazon RDS

View Answer

Answer is: b – AWS IAM

Explanation: AWS Identity and Access Management (IAM) is a service that helps securely control access to AWS resources. By using IAM, the retail company can manage and control user access to their website’s credit card processing functionality, ensuring secure and authorized transactions.

Question 8 A financial institution wants to establish a highly available database solution in AWS to handle their transactional workload. Which AWS service can fulfill their requirement?

a) Deploy an Amazon Aurora database in Multi-AZ configuration to ensure automatic failover in the event of a primary database failure.

b) Utilize Amazon Redshift for their transactional workload as it provides high availability and scalability for analytical queries.

c) Implement Amazon ElastiCache to cache frequently accessed data and improve the performance of the transactional workload.

d) Leverage Amazon Neptune, a graph database service, for handling their transactional workload with high availability and scalability.

View Answer

Answer is: a – Deploy an Amazon Aurora database in Multi-AZ configuration to ensure automatic failover in the event of a primary database failure.

Explanation: Amazon Aurora provides a highly available and scalable database solution. By deploying it in Multi-AZ configuration, automatic failover is enabled, ensuring high availability for the transactional workload.

Question 9 A company wants to store and analyze large volumes of log data. They require a service that can ingest and process real-time streaming data. Which AWS service should be used?

a) AWS Glue
b) Amazon Redshift
c) Amazon S3
d) Amazon Kinesis

View Answer

Answer is: d – Amazon Kinesis

Explanation: Amazon Kinesis is a fully managed service for real-time processing of streaming data at scale. It can ingest and process large volumes of log data in real-time, making it suitable for storing and analyzing log data.
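
For context, producing a log record into a Kinesis data stream is a single API call; the stream name app-logs below is a hypothetical example:

```python
import boto3
import datetime
import json

kinesis = boto3.client("kinesis")

log_event = {
    "timestamp": datetime.datetime.utcnow().isoformat(),
    "level": "INFO",
    "message": "user login succeeded",
}

kinesis.put_record(
    StreamName="app-logs",                       # hypothetical stream name
    Data=json.dumps(log_event).encode("utf-8"),
    PartitionKey="web-server-1",                 # records with the same key land on the same shard
)
```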

Question 10 A large e-commerce company wants to store and process large amounts of unstructured data, such as images and videos, in a scalable and cost-effective manner. Which AWS service should they choose?

a) Store data in Amazon S3 and process it using AWS Lambda and Amazon Rekognition for image and video analysis.

b) Use Amazon EBS for data storage and processing, leveraging EC2 instances for computational tasks.

c) Utilize Amazon RDS for data storage and Amazon Redshift for processing large-scale analytics on the data.

d) Employ Amazon Glacier for long-term data archival and processing using AWS Snowball for data retrieval and analysis.

View Answer

Answer is: a – Store data in Amazon S3 and process it using AWS Lambda and Amazon Rekognition for image and video analysis.

Explanation: Amazon S3 is ideal for storing large amounts of unstructured data, while AWS Lambda and Amazon Rekognition provide serverless capabilities for processing and analyzing the data, making it a scalable and cost-effective solution.

Question 11 A company wants to run their application in a highly available manner across multiple AWS Availability Zones. Which AWS service can help achieve this?

a) Amazon Route 53
b) Amazon VPC
c) AWS CloudFormation
d) Amazon S3

View Answer

Answer is: b – Amazon VPC

Explanation: Amazon VPC (Virtual Private Cloud) enables the creation of a virtual network in the AWS cloud. By deploying resources within a VPC, the company can span their application across multiple Availability Zones, ensuring high availability and fault tolerance.

Question 12 A company wants to analyze historical data to gain insights and make informed business decisions. Which AWS service can help perform this analysis?

a) Amazon Redshift
b) Amazon Athena
c) AWS Glue
d) AWS Elastic Beanstalk

View Answer

Answer is: b – Amazon Athena

Explanation: Amazon Athena is an interactive query service that allows you to analyze data stored in Amazon S3 using standard SQL queries. It enables the company to perform ad hoc analysis on their historical data, extracting valuable insights for informed decision-making.
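
As an example of such ad hoc analysis, the sketch below runs a SQL query against data already cataloged in a hypothetical analytics database, writing results to an S3 location you own:

```python
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(total) AS revenue FROM sales GROUP BY order_date",
    QueryExecutionContext={"Database": "analytics"},                    # hypothetical Glue database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
)
print("Query started:", resp["QueryExecutionId"])
```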

Question 13 A company wants to build a scalable and fault-tolerant web application. They need a load balancer service to distribute traffic evenly across multiple instances. Which AWS service should be used?

a) Elastic Load Balancing
b) Amazon CloudFront
c) Amazon Route 53
d) AWS Direct Connect

View Answer

Answer is: a – Elastic Load Balancing

Explanation: Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances, making the web application scalable and fault-tolerant. It helps ensure high availability and provides a seamless user experience.

Question 14 A retail company wants to build a web application that can handle unpredictable spikes in traffic, ensuring a smooth user experience. Which AWS service can help with this?

a) Implement AWS Elastic Beanstalk to deploy and manage the web application automatically, allowing for easy scaling and load balancing.

b) Deploy Amazon CloudFront in front of the web application to cache content and distribute it globally, reducing the load on the application servers.

c) Leverage Amazon DynamoDB, a fully managed NoSQL database, for the web application’s data storage to handle high concurrency and scale dynamically.

d) Utilize Amazon EC2 Auto Scaling to automatically adjust the number of EC2 instances based on traffic demand, ensuring the application can handle spikes in traffic.

View Answer

Answer is: d – Utilize Amazon EC2 Auto Scaling to automatically adjust the number of EC2 instances based on traffic demand, ensuring the application can handle spikes in traffic.

Explanation: Amazon EC2 Auto Scaling allows for automatic scaling of EC2 instances based on predefined rules and metrics. It ensures that the web application can dynamically adjust its capacity to handle spikes in traffic, providing a smooth user experience during high-demand periods.
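
To show what such a policy can look like, here is a minimal target tracking sketch (the group name web-asg is hypothetical); Auto Scaling then adds or removes instances to hold average CPU near the target:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",              # hypothetical Auto Scaling group
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                     # keep average CPU around 50%
    },
)
```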

Question 15 A company wants to run containers on AWS with an orchestration service that automatically scales and manages containerized applications. Which service should they use?

a) Amazon ECS (Elastic Container Service)
b) Amazon EKS (Elastic Kubernetes Service)
c) AWS Lambda
d) AWS Fargate

View Answer

Answer is: d – AWS Fargate

Explanation: AWS Fargate is a serverless compute engine for containers that allows companies to run containers without managing the underlying infrastructure. It works with Amazon ECS and Amazon EKS to automatically scale and manage containerized applications, simplifying the deployment and management process.

Question 16 A software development company wants to ensure that their customer-facing web application is highly available and fault-tolerant to provide an uninterrupted user experience. The application consists of multiple components, including web servers, application servers, and a database. Which AWS service can help achieve this requirement?

a) Utilize AWS CloudFormation to automate the deployment and management of the entire application stack, including the infrastructure and services required for high availability and fault tolerance.

b) Implement Amazon CloudFront, a globally distributed content delivery network (CDN), to cache and deliver static and dynamic content with low latency and high availability.

c) Deploy AWS Elastic Beanstalk, a fully managed platform as a service (PaaS), to automatically handle capacity provisioning, load balancing, and application health monitoring for the web application.

d) Configure Amazon Route 53, a highly available and scalable DNS service, to perform health checks and route traffic to healthy instances of the web application across multiple availability zones.

View Answer

Answer is: c – Deploy AWS Elastic Beanstalk, a fully managed platform as a service (PaaS), to automatically handle capacity provisioning, load balancing, and application health monitoring for the web application.

Explanation: AWS Elastic Beanstalk is a PaaS offering that simplifies the deployment and management of web applications. It automatically handles underlying infrastructure tasks, such as capacity provisioning, load balancing, and health monitoring, ensuring high availability and fault tolerance for the customer-facing web application. By utilizing Elastic Beanstalk, the software development company can focus on application development while AWS takes care of the operational aspects of their application stack.

Question 17 A company is planning to migrate their on-premises database to AWS and wants a fully managed database service with automatic backups, automated software patching, and high availability. Which AWS service should they choose?

a) Amazon RDS for MySQL
b) Amazon DynamoDB
c) Amazon Redshift
d) Amazon Aurora

View Answer

Answer is: a – Amazon RDS for MySQL

Explanation: Amazon RDS for MySQL is a fully managed relational database service that provides automatic backups, automated software patching, and high availability through Multi-AZ deployments. It is suitable for migrating on-premises databases to AWS.
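
A rough provisioning sketch is shown below; identifiers and sizing are hypothetical, and the password would normally come from AWS Secrets Manager rather than being hard-coded:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="migrated-mysql-db",    # hypothetical identifier
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",             # placeholder; use Secrets Manager in practice
    MultiAZ=True,                                # synchronous standby in another AZ
    BackupRetentionPeriod=7,                     # automatic backups kept for 7 days
    AutoMinorVersionUpgrade=True,                # automated minor-version patching
)
```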

Question 18 A financial services company is migrating its on-premises infrastructure to AWS. They want to ensure secure communication between their on-premises data center and their VPC in AWS. The company also requires support for bidirectional traffic, as multiple applications will be communicating across the hybrid environment. Which solution should the company implement to meet these requirements?

a) Deploy an AWS Site-to-Site VPN connection between the on-premises data center and the AWS VPC using a customer gateway device in the on-premises data center and a virtual private gateway in the AWS VPC. This will establish an encrypted VPN tunnel for secure communication.

b) Implement AWS Direct Connect to establish a dedicated network connection between the on-premises data center and the AWS VPC, providing a high-bandwidth, low-latency connection with enhanced security.

c) Configure AWS Transit Gateway to connect the on-premises data center and the AWS VPC, enabling scalable and efficient communication across the hybrid environment.

d) Utilize AWS App Mesh to manage and secure the communication between services running in the on-premises data center and the AWS VPC, providing fine-grained control and observability over the traffic flow.

View Answer

Answer is: c – Configure AWS Transit Gateway to connect the on-premises data center and the AWS VPC, enabling scalable and efficient communication across the hybrid environment.

Explanation: AWS Transit Gateway simplifies network architecture and allows the financial services company to connect their on-premises data center and AWS VPC. It provides scalable and efficient communication, supporting bidirectional traffic between the environments. With AWS Transit Gateway, the company can maintain network segmentation and control while ensuring secure and reliable communication across the hybrid environment.

Question 19 A retail company wants to ensure the availability and durability of their data stored in Amazon S3. They require a solution that automatically replicates their data to another AWS region to protect against region-wide outages. Which feature should they enable to meet this requirement?

a) Enable cross-region replication for the Amazon S3 bucket, which automatically replicates the data to a bucket in another AWS region.

b) Implement Amazon CloudFront as a content delivery network to distribute and cache their data globally, ensuring availability and durability.

c) Configure Amazon S3 versioning, which maintains multiple versions of each object in the bucket, providing data protection against accidental deletions or overwrites.

d) Utilize Amazon S3 Transfer Acceleration to optimize the upload and download speed of data to and from the Amazon S3 bucket.

View Answer

Answer is: a – Enable cross-region replication for the Amazon S3 bucket, which automatically replicates the data to a bucket in another AWS region.

Explanation: By enabling cross-region replication for the Amazon S3 bucket, the retail company can automatically replicate their data to a bucket in another AWS region. This ensures the availability and durability of their data by protecting it against region-wide outages. In the event of a failure in one region, the data is readily available in the replicated bucket in another region, allowing seamless access and continuity of operations.
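
For reference, enabling replication is a bucket-level configuration; the sketch below assumes versioning is already enabled on both buckets, and the bucket names, account ID, and IAM role are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="retail-data-us-east-1",              # hypothetical source bucket (versioned)
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # hypothetical role
        "Rules": [{
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},                        # empty filter = all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::retail-data-eu-west-1"},
        }],
    },
)
```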

Question 20 Which of the following options best describes VPC peering in AWS?

a) VPC peering allows two VPCs to communicate privately over the Internet using public IP addresses.

b) VPC peering enables a VPC to extend its network connectivity to an on-premises data center securely.

c) VPC peering establishes a direct private connection between two VPCs, enabling them to communicate using private IP addresses.

d) VPC peering allows VPCs in different regions to communicate with each other over AWS Direct Connect.

View Answer

Answer is: c – VPC peering establishes a direct private connection between two VPCs, enabling them to communicate using private IP addresses.

Explanation: VPC peering is a networking connection between two VPCs that enables them to communicate with each other using private IP addresses. It establishes a direct, secure, and private connection, allowing resources within the peered VPCs to communicate as if they were on the same network. VPC peering is useful when you have multiple VPCs in the same AWS account or across different AWS accounts and want to enable communication between them.

Question 21 An IT company utilizes scaling policies to maintain a specific number of Amazon EC2 instances across multiple Availability Zones based on application workloads. The development team has recently released a new version of the application, which requires updating the instances with a new AMI. As the Solution Architect, your task is to ensure a quick phasing out of instances using the previous AMI. Which termination criteria would be most suitable for this requirement?

a) Specify termination criteria using “ClosestToNextInstanceHour” predefined termination policy

b) Specify termination criteria using “OldestInstance” predefined termination policy

c) Specify termination criteria using “ProximityToRetirement” predefined termination policy

d) Specify termination criteria using “OldestLaunchTemplate” predefined termination policy

View Answer

Answer is: d – Specify termination criteria using “OldestLaunchTemplate” predefined termination policy

Explanation: To quickly phase out instances using the previous AMI, the “OldestLaunchTemplate” predefined termination policy is the best choice. This policy terminates instances that use the oldest launch template version first, so instances built from the previous AMI are removed as the group scales in and are replaced by instances launched from the updated template.
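
Applying the policy is a single configuration change on the group; the group name below is hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",                    # hypothetical Auto Scaling group
    TerminationPolicies=["OldestLaunchTemplate"],      # scale-in removes old-template instances first
)
```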

Question 22 A large engineering company wants to ensure cross-region disaster recovery for their distributed application using Amazon Aurora as the database. The primary requirement is to restore the database with a Recovery Time Objective (RTO) of one minute in the event of a service degradation in the primary region. Additionally, they want to minimize administrative work during the service restoration. Which approach should be initiated to design an Aurora database that meets these disaster recovery requirements?

a) Use Amazon Aurora Global Database and designate the secondary region as a failover for service degradation in the primary region.

b) Deploy Multi-AZ deployments with Aurora Replicas and trigger failover to one of the replicas in case of service degradation in the primary region.

c) Create DB Snapshots of the existing Amazon Aurora database and store them in an Amazon S3 bucket. In case of service degradation in the primary region, create a new database instance in a different region using these snapshots.

d) Utilize Amazon Aurora point-in-time recovery to automatically store backups in an Amazon S3 bucket. When service degradation occurs in the primary region, restore a new database instance in a different region using these backups.

View Answer

Answer is: a – Use Amazon Aurora Global Database and designate the secondary region as a failover for service degradation in the primary region.

Explanation: To meet the cross-region disaster recovery requirements with an RTO of one minute and minimize administrative work during service restoration, the company should use Amazon Aurora Global Database. By configuring a Global Database, the secondary region can be designated as a failover for service degradation in the primary region. This allows for near-instantaneous failover and recovery, ensuring business continuity. The other options (b, c, and d) do not specifically address the requirement of restoring the database with a one-minute RTO and may involve additional administrative work compared to the Global Database approach.

Question 23 A financial company has a business-critical application deployed on an Amazon EC2 instance that is front-ended by an internet-facing Application Load Balancer. To protect the expensive computational resources of the application, the security team wants to limit requests to these resources based on a threshold number of sessions. Requests beyond this threshold should be dropped. How can a security policy be designed to achieve the required protection for the application?

a) Create AWS WAF blanket rate-based rules and attach them to the Application Load Balancer.

b) Create AWS WAF URI-specific rate-based rules and attach them to the Application Load Balancer.

c) Create AWS WAF IP reputation rate-based rules and attach them to the Application Load Balancer.

d) Create AWS WAF Managed rule group statements and attach them to the Application Load Balancer.

View Answer

Answer is: b – Create AWS WAF URI-specific rate-based rules and attach them to the Application Load Balancer.

Explanation: To achieve the required protection for the application and limit requests to the expensive computational resources based on a threshold number of sessions, AWS WAF URI-specific rate-based rules should be created and attached to the Application Load Balancer. By specifying rate-based rules based on specific URIs, the security team can effectively control the number of requests to those resources and drop requests beyond the defined threshold. The other options (a, c, and d) do not specifically address the requirement of limiting requests to a threshold number of sessions for the expensive computational resources and may not provide the desired protection for the application.
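
As a hedged illustration, a WAFv2 rate-based rule scoped to a specific URI can be expressed as the rule body below (included in a web ACL's Rules list via create_web_acl or update_web_acl); the path and limit are hypothetical:

```python
# Hypothetical WAFv2 rule: block IPs exceeding 1,000 requests per 5 minutes
# against the expensive /reports/generate endpoint only.
uri_rate_rule = {
    "Name": "limit-report-endpoint",
    "Priority": 1,
    "Action": {"Block": {}},
    "Statement": {
        "RateBasedStatement": {
            "Limit": 1000,
            "AggregateKeyType": "IP",
            "ScopeDownStatement": {
                "ByteMatchStatement": {
                    "FieldToMatch": {"UriPath": {}},
                    "SearchString": b"/reports/generate",
                    "PositionalConstraint": "STARTS_WITH",
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                }
            },
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "limit-report-endpoint",
    },
}
```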

Question 24 An IT company has deployed highly available resilient web servers on Amazon EC2 instances, with Application Load Balancers serving as the front-end. They want to configure failover using Amazon Route 53, where the EC2 instances in AWS Cloud act as the primary resources and the web servers at the on-premises data center serve as the secondary resources. How should the Amazon Route 53 health checks be designed to achieve the desired results?

a) For the primary resources in the AWS Cloud, create alias records and set Evaluate Target Health to Yes. For the secondary records, create a health check in Route 53 for the web servers in the data center. Create a single failover alias record for both primary and secondary resources.

b) For the primary resources in the AWS Cloud, create alias records and health checks. For the secondary records, create a health check in Route 53 for the web servers in the data center. Create a single failover alias record for both primary and secondary resources.

c) For the primary resources in the AWS Cloud, create alias records and set Evaluate Target Health to Yes. For the secondary records, create a health check in Route 53 for the web servers in the data center. Create two failover alias records, one for the primary resource and one for the secondary resource.

d) For the primary resources in the AWS Cloud, create alias records and health checks. For the secondary records, create a health check in Route 53 for the web servers in the data center. Create two failover alias records, one for the primary resource and one for the secondary resource.

View Answer

Answer is: c – For the primary resources in the AWS Cloud, create alias records and set Evaluate Target Health to Yes. For the secondary records, create a health check in Route 53 for the web servers in the data center. Create two failover alias records, one for the primary resource and one for the secondary resource.

Explanation: To configure failover using Amazon Route 53, with AWS Cloud EC2 instances as primary and on-premises web servers as secondary, the following approach should be taken:

  • For the primary resources in the AWS Cloud, create alias records and set Evaluate Target Health to Yes, ensuring that Route 53 evaluates the health of the primary resources.
  • For the secondary resources (web servers in the data center), create a health check in Route 53 to monitor their health.
  • Create two failover alias records, one for the primary resource and one for the secondary resource, allowing for failover between them based on health status.

This configuration ensures that Route 53 performs health checks on the primary and secondary resources separately and directs traffic accordingly.
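
A sketch of the two failover records described above follows; the hosted zone ID, domain, ALB DNS name, on-premises IP, and health check ID are all hypothetical placeholders:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",                          # hypothetical hosted zone
    ChangeBatch={"Changes": [
        {   # Primary: alias to the ALB with Evaluate Target Health enabled.
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",    # ALB hosted zone ID (region specific)
                    "DNSName": "my-alb-123456.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {   # Secondary: on-premises web servers monitored by an explicit Route 53 health check.
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # hypothetical ID
            },
        },
    ]},
)
```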

Question 25 A startup firm has a large number of application servers hosted on VMs associated with VMware vCenter at their on-premises data center. Each VM has a different operating system. The firm plans to migrate these servers to the AWS Cloud and needs to estimate Amazon EC2 sizing based on resource utilization data from the on-premises servers. The data should include key parameters like CPU, disk, memory, and network, and it should be saved in an encrypted format to be shared with the Subject Matter Expert (SME) handling the migration. Which method is best suited to obtain these server details?

a) Use the Agentless-discovery method with AWS Application Discovery Service.

b) Use the Agentless-discovery method with AWS Server Migration Service.

c) Use the Agent-based discovery method with AWS Server Migration Service.

d) Use the Agent-based discovery method with AWS Application Discovery Service.

View Answer

Answer is: a – Use the Agentless-discovery method with AWS Application Discovery Service.

Explanation: To gather the resource utilization details from the on-premises servers and estimate Amazon EC2 sizing in the AWS Cloud, the startup firm should use the Agentless-discovery method with AWS Application Discovery Service. This method allows for collecting server details, including key parameters like CPU, disk, memory, and network, without requiring the installation of agents on the VMs. The data is collected in an encrypted format, ensuring security, and can be shared with the Subject Matter Expert (SME) responsible for the migration.

Question 26 A company is planning to migrate their on-premises SQL Servers to the AWS Cloud. They require a non-disruptive migration process with minimal downtime. As part of the migration, they need to perform prerequisite tests and capture the results before promoting the servers in AWS as primary servers. Which cost-effective automated tool is best suited for performing the SQL server migration?

a) AWS Migration Hub

b) AWS Application Migration Service

c) AWS DataSync

d) AWS Server Migration Service

View Answer

Answer is: b – AWS Application Migration Service

Explanation: To achieve a non-disruptive migration of SQL Servers with minimal downtime, the company should utilize the AWS Application Migration Service. This service provides a cost-effective and automated solution for migrating SQL Servers to the AWS Cloud. It allows for seamless migration with minimal disruption to the application. Additionally, it offers prerequisite tests to validate the migration process and capture the results before promoting the AWS servers as primary servers.

Question 27 A healthcare organization stores patient data in Amazon S3 buckets. The organization needs an automated solution to scan all the data stored in these buckets and generate a report identifying any instances of sensitive patient information. The Chief Information Security Officer (CISO) is looking for the most suitable tool to fulfill this requirement. Which solution should be implemented?

a) Enable Amazon GuardDuty on the Amazon S3 buckets

b) Enable Amazon Inspector on the Amazon S3 buckets

c) Enable Amazon Macie on the Amazon S3 buckets

d) Enable Amazon CloudWatch on the Amazon S3 buckets

View Answer

Answer is: c – Enable Amazon Macie on the Amazon S3 buckets

Explanation: To scan all data in the Amazon S3 buckets and generate a report based on the presence of sensitive patient information, the healthcare organization should enable Amazon Macie. Amazon Macie is an automated data security and data privacy service that utilizes machine learning to discover, classify, and safeguard sensitive data in AWS. It is specifically designed to handle sensitive data like healthcare records and can scan the content of S3 buckets, detect sensitive patient information, and generate detailed reports.

Question 28 An e-commerce company is migrating its data from on-premises storage to Amazon S3 buckets. The data to be migrated is approximately 50 TB in size and needs to undergo processing using a customized AWS Lambda function before being stored in the Amazon S3 bucket. Which design approach would be most suitable for this data transfer?

a) Migrate data using AWS Snowball Edge

b) Migrate data using AWS Snowcone

c) Migrate data using AWS Transfer Family with FTPS

d) Migrate data using AWS Snowcone SSD

View Answer

Answer is: a – Migrate data using AWS Snowball Edge

Explanation: To transfer a large amount of data (50 TB) to Amazon S3 and perform custom processing using an AWS Lambda function, the recommended approach is to use AWS Snowball Edge. AWS Snowball Edge is a data transfer device that allows for secure and efficient migration of large datasets to the cloud. It offers offline data transfer capabilities and includes compute resources to run applications such as Lambda functions on the device itself. This ensures that the data can be processed before being transferred to the Amazon S3 bucket.

Question 29 A government municipality corporation, Utopia Municipality Corporation, operates an application for local citizens on Amazon EC2 instances while keeping its database in an on-premise data center. Due to regulatory compliances, the organization is not willing to migrate the database to AWS but wants to extend it to the cloud to enable seamless communication with other AWS services. They also require a region-specific, frictionless, low-latency environment that provides a consistent hybrid experience. As a Solutions Architect, which solution would you recommend to meet these requirements?

a) AWS Outposts

b) Use AWS Snowball Edge to copy the data and upload to AWS

c) AWS DataSync

d) AWS Storage Gateway

View Answer

Answer is: a – AWS Outposts

Explanation: To achieve a consistent hybrid experience and enable seamless communication between AWS services and the on-premise database, the recommended solution is to use AWS Outposts. AWS Outposts brings native AWS services, infrastructure, and operating models to customers’ on-premises data centers. It allows organizations to run AWS services locally, providing low-latency access to data and applications. With AWS Outposts, Utopia Municipality Corporation can extend their database environment to the cloud while keeping it in their on-premise data center, ensuring compliance with regulatory requirements.

Question 30 A drug research company receives sensitive scientific data from multiple sources in different formats. They aim to create a Data Lake on AWS to perform thorough data analysis. Data cleansing is a critical step, and any matching data should be removed. Additionally, the solution needs to provide secure access to sensitive data with granular controls at the column, row, and cell levels. As a Solutions Architect, which option do you think would best address these requirements?

a) Amazon Redshift Spectrum

b) Amazon Redshift

c) AWS Glue Data Catalog

d) AWS Lake Formation

View Answer

Answer is: d – AWS Lake Formation

Explanation: To build a Data Lake with data cleansing capabilities and secure access controls, the recommended solution is AWS Lake Formation. AWS Lake Formation simplifies the process of building and managing a secure Data Lake by providing tools for data ingestion, data cataloging, data transformation, and fine-grained access controls. With AWS Lake Formation, the drug research company can easily clean and transform the data before it lands in the Data Lake, removing any matching data as required. It also allows them to implement granular access controls at the column, row, and cell levels to ensure secure access to sensitive data.

Question 31 A financial institution needs to store a large volume of financial transaction records for compliance purposes. These records are rarely accessed but must be retained for a minimum of seven years. In the event of an audit or investigation, the institution requires immediate retrieval of the relevant records. As a Solutions Architect, you are tasked with recommending a suitable storage class to meet these requirements.
Which of the following storage classes would you recommend for this scenario?

a) Amazon S3 Standard

b) Amazon S3 Standard-Infrequent Access

c) Amazon S3 Glacier Instant Retrieval

d) AWS S3 One Zone-Infrequent Access

View Answer

Answer is: c – Amazon S3 Glacier Instant Retrieval

Explanation: To meet the compliance requirement of storing financial transaction records for an extended period and allowing immediate retrieval when needed, the recommended storage class is Amazon S3 Glacier Instant Retrieval. This storage class offers durable, secure, and low-cost storage for long-term retention of data. Although the data is rarely accessed, when it is required, Amazon S3 Glacier Instant Retrieval provides retrieval times in milliseconds, ensuring quick access to the relevant records.
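
Choosing the storage class is done per object (or via a lifecycle rule); a minimal upload sketch with hypothetical names follows:

```python
import boto3

s3 = boto3.client("s3")

with open("records-2017.csv.gz", "rb") as body:
    s3.put_object(
        Bucket="compliance-archive",                  # hypothetical bucket
        Key="transactions/2017/records-2017.csv.gz",
        Body=body,
        StorageClass="GLACIER_IR",                    # S3 Glacier Instant Retrieval
        ServerSideEncryption="aws:kms",               # encrypt at rest with KMS
    )
```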

Question 32 A technology startup is launching a new application that requires a highly scalable and cost-effective relational database solution in AWS. The application’s workload is expected to vary significantly throughout the day, with peak periods of high traffic and lower activity during off-peak hours. The company wants a database that can automatically scale its compute capacity based on the application’s needs and minimize costs during periods of low usage.
Which of the following RDS types would best meet the startup’s requirements?

a) Amazon Aurora

b) Amazon RDS for MySQL

c) Amazon Aurora Serverless

d) Amazon RDS for PostgreSQL

View Answer

Answer is: c – Amazon Aurora Serverless

Explanation: To fulfill the startup’s requirement for a low-cost, on-demand auto-scaling database that can handle variable and unpredictable workloads, the recommended RDS type is Amazon Aurora Serverless. Amazon Aurora Serverless offers a serverless architecture that automatically scales the compute capacity up and down based on the application’s needs. It eliminates the need to provision and manage database instances, resulting in cost savings during periods of low usage. With Aurora Serverless, the startup can focus on developing their application without worrying about database capacity management.
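
As a rough sketch (assuming Aurora Serverless v2 and hypothetical identifiers), the cluster's capacity range is declared at creation time; a DB instance of class db.serverless is then added to the cluster, which is omitted here for brevity:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="startup-app-db",        # hypothetical identifier
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",             # placeholder; use Secrets Manager in practice
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,                      # Aurora capacity units during off-peak hours
        "MaxCapacity": 16.0,                     # upper bound for peak traffic
    },
)
```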

Question 33 A popular e-commerce company processes large volumes of customer data to generate personalized recommendations and analyze customer behavior. The company leverages Amazon Redshift to perform analytics on petabytes of structured and semi-structured data stored in their data warehouse, operational database, and Amazon S3 files. However, as the volume of data continues to increase rapidly, the company encounters performance issues related to network bandwidth and memory processing (CPU), leading to slow query performance. Which solution should the company implement to improve query performance?

a) Utilize Amazon S3 Transfer Acceleration to expedite data transfer to a centralized S3 bucket, and leverage Redshift Spectrum for query processing.

b) Leverage Amazon Redshift Spectrum to optimize query performance.

c) Implement AQUA (Advanced Query Accelerator) for Amazon Redshift to accelerate query processing.

d) Enable and configure caching solutions using Amazon ElastiCache Memcached to improve query performance.

View Answer

Answer is: c – Implement AQUA (Advanced Query Accelerator) for Amazon Redshift to accelerate query processing.

Explanation: To address the performance bottlenecks related to network bandwidth and CPU processing, implementing AQUA for Amazon Redshift is the recommended solution. AQUA is a hardware-accelerated caching layer that offloads and accelerates certain data processing operations, resulting in improved query performance. It leverages AWS-designed analytics processors and performs parallel and distributed query execution, enabling faster data processing without increasing operational overhead or costs. By integrating AQUA with Amazon Redshift, the e-commerce company can significantly enhance their query performance and handle petabyte-scale data analysis efficiently.

Question 34 A software development company manages a large number of EC2 instances across multiple AWS accounts and regions. These instances are built on custom Amazon Machine Images (AMIs). The company is concerned about the accidental deletions of AMIs used for critical production EC2 instances. They are looking for a solution that can ensure quick recovery from such accidental deletions. Which option below can be used to address this requirement?

a) Utilize version control and backup mechanisms provided by AWS CodeCommit to track changes to the AMIs and facilitate easy restoration in case of accidental deletions.

b) Implement AWS CloudFormation StackSets to automate the deployment and management of AMIs, ensuring consistent configuration across all instances and enabling quick recovery in case of accidental deletions.

c) Leverage AWS Elastic Beanstalk to create and manage scalable applications, but not specifically focused on AMI recovery from accidental deletions.

d) Take regular snapshots of the EBS volumes attached to all EC2 instances and utilize them for restoring the AMIs in case of accidental deletions.

View Answer

Answer is: a – Utilize version control and backup mechanisms provided by AWS CodeCommit to track changes to the AMIs and facilitate easy restoration in case of accidental deletions.

Explanation: To address the concern of accidental AMI deletions and ensure quick recovery, the recommended solution is to utilize version control and backup mechanisms provided by AWS CodeCommit. By tracking changes to the AMIs using version control and implementing backup mechanisms, the company can easily restore the AMIs to previous versions in case of accidental deletions. This approach provides a reliable and efficient way to recover from such incidents and minimize downtime.

Question 35 An organization has two AWS accounts, each with its own Virtual Private Cloud (VPC) in the same region. An application running in one VPC needs to access resources located in the other VPC. Which of the following options ensures that the resources can be accessed as required?

a) Establish a Network Address Translation (NAT) instance between both accounts to facilitate communication between the VPCs.

b) Use a Virtual Private Network (VPN) connection between both accounts to establish a secure connection for accessing resources.

c) Set up a NAT Gateway between both accounts to provide network address translation and enable communication between the VPCs.

d) Use VPC Peering between both accounts to establish a direct and secure connection for accessing resources.

View Answer

Answer is: d – Use VPC Peering between both accounts to establish a direct and secure connection for accessing resources.

Explanation: To enable access to resources between two VPCs in different AWS accounts within the same region, the recommended solution is to use VPC Peering. VPC Peering allows the VPCs to communicate with each other using private IP addresses without the need for internet gateways, NAT instances, or VPN connections. It establishes a direct and secure connection between the VPCs, enabling seamless resource access.
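
A sketch of the request/accept flow is shown below; the VPC IDs and account ID are placeholders, and the route table updates on both sides (pointing each VPC's CIDR at the peering connection) are omitted:

```python
import boto3

# Requester side: account A asks to peer with account B's VPC in the same region.
ec2_a = boto3.client("ec2", region_name="us-east-1")
peering = ec2_a.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111bbbb2222c",               # hypothetical VPC in account A
    PeerVpcId="vpc-0ddd3333eeee4444f",           # hypothetical VPC in account B
    PeerOwnerId="222233334444",                  # account B's AWS account ID
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accepter side: run with account B's credentials to accept the request.
ec2_b = boto3.client("ec2", region_name="us-east-1")
ec2_b.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)
```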

Question 36 A regional bank is migrating its data center to the AWS cloud and needs to transfer its data center storage to new S3 and EFS data stores in AWS. The data contains Personally Identifiable Information (PII), and the bank wants to ensure that the data transfer does not travel over the public internet. Which option provides the most efficient solution that meets the bank’s requirements?

a) Migrate the on-premises data to AWS using the DataSync agent with the use of a NAT Gateway.

b) Create a private VPC endpoint and configure the DataSync agent to communicate with the private DataSync service endpoints through the VPC endpoint using Direct Connect.

c) Migrate the on-premises data to AWS using the DataSync agent with the use of an Internet Gateway.

d) Create a public VPC endpoint and configure the DataSync agent to communicate with the DataSync private service endpoints through the VPC endpoint and VPN.

View Answer

Answer is: b – Create a private VPC endpoint and configure the DataSync agent to communicate with the private DataSync service endpoints through the VPC endpoint using Direct Connect.

Explanation: To transfer data from the data center to AWS without traveling over the public internet and ensuring the most efficient solution, the recommended approach is to create a private VPC endpoint and configure the DataSync agent to communicate with the private DataSync service endpoints through the VPC endpoint using Direct Connect. This solution provides a secure and private connection between the on-premises data center and AWS, allowing the transfer of PII data without exposure to the public internet.

Question 37 A financial services firm operates applications in a hybrid cloud model, with applications running on EC2 instances in a VPC that communicate with resources in an on-premises data center. The workload is located on an EC2 instance in one subnet, while the transit gateway association is in a different subnet. These subnets have different Network Access Control Lists (NACLs) associated with them. The NACL rules are used to control traffic to and from the EC2 instances and transit gateway. Which of the following statements is true regarding the NACL rules for traffic from the EC2 instances to the transit gateway?

a) Outbound rules use the source IP address to evaluate traffic from the instances to the transit gateway.

b) Outbound rules use the destination IP address to evaluate traffic from the instances to the transit gateway.

c) Outbound rules are not evaluated for the transit gateway subnet.

d) Inbound rules use the destination IP address to evaluate traffic from the transit gateway to the instances.

View Answer

Answer is: b – Outbound rules use the destination IP address to evaluate traffic from the instances to the transit gateway.

Explanation: In the given scenario, the NACL rules for traffic from the EC2 instances to the transit gateway use the destination IP address to evaluate the traffic. Outbound rules in the NACL are evaluated based on the destination IP address of the traffic leaving the EC2 instances and going towards the transit gateway. The NACL rules determine whether the traffic is allowed or denied based on the destination IP address specified in the rules.

Question 38 A media company that sells stock images and videos via a mobile app and website needs to restrict access to purchased content stored in S3 buckets. The company wants to ensure that users can only access the specific stock content they have purchased, without changing the URLs of each item. Which access control option would be the best fit for this scenario?

a) Use CloudFront signed URLs.

b) Use S3 presigned URLs.

c) Use CloudFront signed cookies.

d) Use S3 signed cookies.

View Answer

Answer is: c – Use CloudFront signed cookies

Explanation: In this scenario, the most suitable access control option is to use CloudFront signed cookies. CloudFront signed cookies allow you to provide secure access to content in your S3 buckets without requiring changes to the URLs of the stock items. By using CloudFront signed cookies, you can control access to specific content based on the user’s authentication status or other custom criteria. This ensures that only users who have purchased the stock content can access it, without exposing the actual S3 URLs. The other options (a, b, and d) do not provide the same level of flexibility and control for securing access to the stock content.

Question 39 As a solutions architect, you are tasked with configuring the access controls for an S3 bucket hosting a static website. The bucket contains content files owned by the AWS account used to create the bucket, as well as some objects owned by a parent company’s AWS account. The goal is to create a secure website accessible to the public. Which of the following configurations should be implemented? (Select TWO)

a) Create a bucket policy that grants s3:GetObject access to the objects owned by the parent company account.

b) Disable the block public access setting for the bucket and write a bucket policy that grants public read access for the bucket.

c) Create an object access control list to grant read permissions on objects owned by the account used to create the S3 bucket that will host the website.

d) Create an object access control list (ACL) that grants read access to the bucket.

e) Create a bucket policy that grants s3:GetObject access to the objects owned by the parent company account and the objects owned by the account used to create the S3 bucket that will host the website.

View Answer

Answer is: b and d

Explanation: To achieve a secure website accessible to the public with the given requirements, you should implement the following configurations:

  • Disable the block public access setting for the bucket. This allows public access to the objects in the bucket.
  • Create an object access control list (ACL) that grants read access to the bucket. This ensures that the objects can be read by the public.
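
A sketch of these two steps is shown below; the bucket name my-static-site is hypothetical, and the bucket policy mirrors the public-read access described above:

```python
import boto3
import json

s3 = boto3.client("s3")
bucket = "my-static-site"                        # hypothetical website bucket

# Step 1: relax Block Public Access for this bucket so a public policy can take effect.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# Step 2: grant public read access to the website objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```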

Question 40 As a solutions architect for a media company, you need to optimize the cost and performance of storing and accessing stock content in S3. The stock content includes images and videos, with varying sizes and access patterns. There are over 1 million objects in S3, and the access patterns change over time. Which S3 storage class should you choose to achieve the best cost optimization and performance?

a) S3 Standard

b) S3 Standard-IA

c) S3 Intelligent-Tiering

d) S3 One Zone-IA

View Answer

Answer is: c – S3 Intelligent-Tiering

Explanation: To optimize costs and performance for the stock content stored in S3, the recommended storage class is S3 Intelligent-Tiering. This storage class is designed to automatically move objects between two access tiers based on their changing access patterns. It uses machine learning algorithms to analyze access patterns and move objects to the most cost-effective access tier (Frequent Access or Infrequent Access) accordingly.

The S3 Intelligent-Tiering storage class is suitable for scenarios where access patterns are unpredictable or change over time. It ensures that frequently accessed objects remain in the Frequent Access tier for optimal performance, while infrequently accessed objects are moved to the Infrequent Access tier to reduce costs.
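
For illustration (bucket and key names are hypothetical), new uploads can go straight into Intelligent-Tiering, and existing objects can be transitioned with a lifecycle rule:

```python
import boto3

s3 = boto3.client("s3")

# New stock content: upload directly into the Intelligent-Tiering storage class.
s3.upload_file(
    "photo.jpg", "stock-content", "images/photo.jpg",
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
)

# Existing objects: transition everything in the bucket via a lifecycle rule.
s3.put_bucket_lifecycle_configuration(
    Bucket="stock-content",
    LifecycleConfiguration={"Rules": [{
        "ID": "move-to-intelligent-tiering",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},                              # apply to all objects
        "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
    }]},
)
```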

Question 41 As a solutions architect, you have created multiple SQS queues to store different types of customer requests. You now need a service that can collect messages from the frontend and distribute them to the respective queues using the publish/subscribe model. Which service would be the most appropriate choice for this scenario?

a) Amazon MQ

b) Amazon Simple Notification Service (SNS)

c) Amazon Simple Queue Service (SQS)

d) AWS Step Functions

View Answer

Answer is: b – Amazon Simple Notification Service (SNS)

Explanation: To collect messages from the frontend and distribute them to the relevant queues using the publish/subscribe model, the most suitable service is Amazon Simple Notification Service (SNS). Amazon SNS is a fully managed pub/sub messaging service that enables you to decouple the sending and receiving of messages. It allows you to publish messages to topics, which act as message buses, and subscribers can receive messages from these topics.
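
A fan-out sketch follows; queue and topic names are hypothetical, and the SQS queue access policies that allow SNS to deliver messages are omitted for brevity:

```python
import boto3
import json

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="customer-requests")["TopicArn"]

# One queue per request type; a filter policy routes only matching messages.
for queue_name, request_type in [("orders-queue", "order"), ("returns-queue", "return")]:
    queue_url = sqs.create_queue(QueueName=queue_name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"FilterPolicy": json.dumps({"request_type": [request_type]})},
    )

# The frontend publishes once; SNS fans the message out to the matching queue(s).
sns.publish(
    TopicArn=topic_arn,
    Message=json.dumps({"customer": "c-42", "details": "new order"}),
    MessageAttributes={"request_type": {"DataType": "String", "StringValue": "order"}},
)
```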

Question 42 A fictional e-learning platform called “LearnOnline” hosts a wide range of courses for students worldwide. The platform includes video lessons and interactive quizzes to assess students’ understanding. Recently, the platform has received feedback from its users, including visually impaired learners, requesting a feature that enables them to listen to the quiz questions instead of reading them. The platform aims to enhance accessibility and accommodate diverse learning needs.
As a solutions architect at LearnOnline, you are tasked with finding a suitable solution to implement this feature. Which of the following options can fulfill the given requirement?

a) Utilize Amazon Rekognition to identify the text from the quiz questions and convert it into speech, enabling users to listen to the questions.

b) Employ Amazon Textract to extract the text from the quiz questions and convert it into speech, providing an audio representation of the questions.

c) Utilize Amazon Comprehend to leverage its natural language processing (NLP) capabilities and implement the feature, allowing users to listen to the questions.

d) Integrate Amazon Polly into the platform to implement the desired feature, converting the quiz questions into audio for users to listen to.

View Answer

Answer is: d – Integrate Amazon Polly into the platform to implement the desired feature, converting the quiz questions into audio for users to listen to.

Explanation: Amazon Polly is a text-to-speech service offered by AWS. By integrating Amazon Polly into the LearnOnline platform, the quiz questions can be converted into audio, enabling users, including visually impaired learners, to listen to the questions rather than reading them. This solution effectively addresses the requirement of enhancing accessibility and accommodating diverse learning needs. Thus, option D is the correct answer.
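
A minimal text-to-speech sketch follows; the voice and output file are arbitrary choices, not platform requirements:

```python
import boto3

polly = boto3.client("polly")

resp = polly.synthesize_speech(
    Text="Question 1: Which AWS service provides durable object storage?",
    OutputFormat="mp3",
    VoiceId="Joanna",                    # any supported voice works here
)

# Save the audio stream so the app can play it back to the learner.
with open("question1.mp3", "wb") as f:
    f.write(resp["AudioStream"].read())
```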

Question 43 Jenna is tasked by her manager to develop a solution for detecting whether visitors entering their office building are wearing face masks or not. The building has two entrances equipped with CCTVs. The solution requires capturing data from these CCTVs and sending it to AWS for analysis and detection.
As Jenna explores various AWS services, she discovers that a combination of Amazon Kinesis and Amazon Rekognition can effectively meet the requirements. However, she is uncertain about which capability within Amazon Kinesis will be most suitable for this specific scenario.
Which of the following Kinesis capabilities is MOST appropriate for the given scenario?

a) Kinesis Data Firehose

b) Kinesis Data Analytics

c) Kinesis Video Streams

d) Kinesis Data Streams

View Answer

Answer is: c – Kinesis Video Streams

Explanation: In this scenario, the most appropriate capability within Amazon Kinesis is Kinesis Video Streams. Kinesis Video Streams is designed for securely capturing, processing, and storing video streams. It enables real-time video data ingestion, making it ideal for scenarios where video analysis and detection are required. By utilizing Kinesis Video Streams in combination with Amazon Rekognition, Jenna can effectively capture the CCTV data, send it for analysis, and detect whether visitors are wearing face masks or not. Therefore, option C is the correct answer.

Question 44 A large manufacturing company wants to efficiently store and visualize IoT sensor data collected from thousands of equipment across multiple factory units in real time. Which cost-effective database option in the AWS cloud should they choose for this purpose?

a) Send sensor data to Amazon RDS (Relational Database Service) using Amazon Kinesis and visualize data using Amazon QuickSight

b) Send sensor data to Amazon Neptune using Amazon Kinesis and visualize data using Amazon QuickSight

c) Send sensor data to Amazon DynamoDB using Amazon Kinesis and visualize data using Amazon QuickSight

d) Send sensor data to Amazon Timestream using Amazon Kinesis and visualize data using Amazon QuickSight

View Answer

Answer is: d – Send sensor data to Amazon Timestream using Amazon Kinesis and visualize data using Amazon QuickSight

Explanation: For efficiently storing and visualizing high-volume, real-time IoT sensor data, Amazon Timestream is a suitable database option in the AWS cloud. Timestream is purpose-built for time series data and offers scalable storage and query capabilities. By sending the sensor data to Amazon Timestream using Amazon Kinesis, the manufacturing company can efficiently handle the high-volume traffic. They can then visualize the data using Amazon QuickSight, which integrates well with Timestream for real-time analytics and visualization.
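
To illustrate the ingest side (database, table, and dimension values below are hypothetical), a consumer of the Kinesis stream could write each reading into Timestream like this:

```python
import boto3
import time

ts_write = boto3.client("timestream-write")

ts_write.write_records(
    DatabaseName="factory_iot",                      # hypothetical database
    TableName="sensor_readings",                     # hypothetical table
    Records=[{
        "Dimensions": [
            {"Name": "factory", "Value": "unit-7"},
            {"Name": "machine_id", "Value": "press-104"},
        ],
        "MeasureName": "temperature_c",
        "MeasureValue": "71.4",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),        # milliseconds since the epoch
    }],
)
```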

Question 45 A transportation company has a fleet of vehicles equipped with IoT devices that collect real-time data about vehicle performance, location, and fuel consumption. Hourly logs from these devices are stored in an Amazon S3 bucket. The management wants to have comprehensive dashboards that incorporate the usage of these vehicles and forecast usage trends. Which tool is best suited to create the required dashboard?

a) Use S3 as a source for Amazon QuickSight and create dashboards for usage and forecast trends.

b) Use S3 as a source for Amazon Redshift and create dashboards for usage and forecast trends.

c) Copy data from Amazon S3 to Amazon DynamoDB. Use Amazon DynamoDB as a source for Amazon QuickSight and create dashboards for usage and forecast trends.

d) Copy data from Amazon S3 to Amazon RDS. Use Amazon RDS as a source for Amazon QuickSight and create dashboards for usage and forecast trends.

View Answer

Answer is: a – Use S3 as a source for Amazon QuickSight and create dashboards for usage and forecast trends.

Explanation: To create comprehensive dashboards that incorporate the usage of vehicles and forecast usage trends, the transportation company should use Amazon QuickSight with Amazon S3 as a data source. Amazon QuickSight is a business intelligence tool that allows users to create interactive dashboards and visualizations. By connecting Amazon QuickSight directly to the S3 bucket storing the hourly logs from the IoT devices, the company can easily analyze and visualize the data. Option B suggests using Amazon Redshift, which is a data warehousing solution and may not be the most suitable option for real-time analytics.

Question 46 A company has launched Amazon EC2 instances in an Auto Scaling group for deploying a web application. The Operations Team is looking to capture custom metrics for this application from all the instances. These metrics should be viewed as aggregated metrics for all instances in an Auto Scaling group.
What configuration can be implemented to get the metrics as required?

a) Use Amazon CloudWatch metrics with detail monitoring enabled and send to CloudWatch console where all the metrics for an Auto Scaling group will be aggregated by default.

b) Install a unified CloudWatch agent on all Amazon EC2 instances in an Auto Scaling group and use “aggregation_dimensions” in an agent configuration file to aggregate metrics for all instances.

c) Install a unified CloudWatch agent on all Amazon EC2 instances in an Auto Scaling group and use “append-config” in an agent configuration file to aggregate metrics for all instances.

d) Use Amazon CloudWatch metrics with detailed monitoring enabled and create a single dashboard to display metrics from all the instances.

View Answer

Answer is: b – Install a unified CloudWatch agent on all Amazon EC2 instances in an Auto Scaling group and use “aggregation_dimensions” in an agent configuration file to aggregate metrics for all instances.

Explanation: To capture custom metrics from all instances in an Auto Scaling group and view them as aggregated metrics, the company should install a unified CloudWatch agent on all Amazon EC2 instances in the Auto Scaling group and use the “aggregation_dimensions” configuration in the agent configuration file. This configuration allows the agent to aggregate metrics from all instances based on specified dimensions. By doing so, the Operations Team can obtain the desired aggregated metrics for the web application.
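
A minimal sketch of the relevant "metrics" section of the agent configuration file, written out here as JSON from Python; the namespace and the collected memory metric are illustrative:

```python
import json

agent_config = {
    "metrics": {
        "namespace": "WebAppCustom",  # hypothetical custom namespace
        "append_dimensions": {
            "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
            "InstanceId": "${aws:InstanceId}",
        },
        # Roll the per-instance metrics up to one aggregated series per Auto Scaling group.
        "aggregation_dimensions": [["AutoScalingGroupName"]],
        "metrics_collected": {
            "mem": {"measurement": ["mem_used_percent"]},
        },
    }
}

with open("amazon-cloudwatch-agent.json", "w") as f:
    json.dump(agent_config, f, indent=2)
```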

Question 47 A manufacturing company has deployed a critical web application on multiple Amazon EC2 instances, which are part of an Auto Scaling group. They need to perform a software upgrade on one of the instances without impacting the other instances in the group. After the upgrade, the same instance should be reintegrated into the Auto Scaling group. What steps can be initiated to complete this upgrade?

a) Hibernate the instance and perform the upgrade in offline mode. After the upgrade, start the instance, and it will be part of the same Auto Scaling group.

b) Use cooldown timers to perform the upgrade on the instance. Once the cooldown timers expire, the instance will be part of the same Auto Scaling group.

c) Put the instance in Standby mode. After the upgrade, move the instance back to InService mode, and it will be part of the same Auto Scaling group.

d) Use lifecycle hooks to perform the upgrades on the instance. Once the lifecycle hooks complete, the instance will be part of the same Auto Scaling group.

View Answer

Answer is: c – Put the instance in Standby mode. After the upgrade, move the instance back to InService mode, and it will be part of the same Auto Scaling group.

Explanation: To perform a software upgrade on one of the instances in an Auto Scaling group without impacting the other instances, the recommended approach is to put the instance in Standby mode. By putting the instance in Standby mode, it will be temporarily removed from the load balancer’s target group, ensuring that it doesn’t receive any traffic while the upgrade is performed. After the upgrade is completed, the instance can be moved back to InService mode, and it will be reintegrated into the Auto Scaling group, maintaining the desired scaling and availability characteristics.
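
A minimal sketch of the Standby workflow with boto3; the instance ID and Auto Scaling group name are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical instance
ASG_NAME = "web-app-asg"              # hypothetical Auto Scaling group

# Take the instance out of service; it stops receiving load balancer traffic.
autoscaling.enter_standby(
    InstanceIds=[INSTANCE_ID],
    AutoScalingGroupName=ASG_NAME,
    ShouldDecrementDesiredCapacity=True,  # avoid launching a replacement during the upgrade
)

# ... perform the software upgrade on the instance ...

# Return the instance to InService; it rejoins the group and the target group.
autoscaling.exit_standby(
    InstanceIds=[INSTANCE_ID],
    AutoScalingGroupName=ASG_NAME,
)
```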

Question 48 An e-commerce company is storing product images in an Amazon S3 bucket, which is accessed by global users. The Amazon S3 bucket is encrypted with AWS Key Management Service (KMS). The company plans to use Amazon CloudFront as a Content Delivery Network (CDN) to improve image loading performance. The Operations Team wants to implement an S3 bucket policy to restrict access to the bucket only through a specific CloudFront distribution.
How can the S3 bucket policy be implemented to control access to the S3 bucket?

a) Use a Principal element in the policy to match the service as the CloudFront distribution that contains the S3 origin.

b) Use a Condition element in the policy to allow CloudFront to access the bucket only when the request is on behalf of the CloudFront distribution that contains the S3 origin.

c) Use a Principal element in the policy to allow CloudFront Origin Access Identity (OAI) to access the bucket.

d) Use a Condition element in the policy to match the service as “cloudfront.amazonaws.com”.

View Answer

Answer is: b – Use a Condition element in the policy to allow CloudFront to access the bucket only when the request is on behalf of the CloudFront distribution that contains the S3 origin.

Explanation: To control access to the S3 bucket through a specific CloudFront distribution, you can implement an S3 bucket policy with a Condition element. By specifying the condition to allow access only when the request is on behalf of the CloudFront distribution that contains the S3 origin, you can enforce the desired restriction. This ensures that requests to the bucket must come through the specified CloudFront distribution, providing an additional layer of security and control.
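
One common way to express this restriction, shown with a CloudFront origin access control and hypothetical account, bucket, and distribution identifiers, keys the allow statement to the distribution's ARN in a Condition element:

```python
import json

import boto3

s3 = boto3.client("s3")

BUCKET = "product-images-example"  # hypothetical bucket
DISTRIBUTION_ARN = "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"  # hypothetical

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontReadOnlyForOneDistribution",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        # Only requests made on behalf of this specific distribution are allowed.
        "Condition": {"StringEquals": {"AWS:SourceArn": DISTRIBUTION_ARN}},
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

Because the bucket is encrypted with AWS KMS, the key policy would typically also need to allow the CloudFront service principal to decrypt under the same SourceArn condition.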

Question 49 A company runs microservices-based applications on Amazon ECS for a healthcare management system. For different services, multiple tasks are created in containers using the EC2 launch type. The security team is looking for specific security controls for the tasks in the containers, along with granular network monitoring using various tools for each task.
What networking mode configuration can be considered with Amazon ECS to meet this requirement?

a) Use host networking mode for Amazon ECS tasks.

b) By default, an elastic network interface (ENI) with a primary private IP address is assigned to each task.

c) Use awsvpc networking mode for Amazon ECS tasks.

d) Use bridge networking mode for Amazon ECS tasks.

View Answer

Answer is: c – Use awsvpc networking mode for Amazon ECS tasks

Explanation: To meet the requirement of specific security controls for tasks in containers and granular network monitoring, the recommended networking mode configuration in Amazon ECS is awsvpc (AWS Virtual Private Cloud). When using the awsvpc mode, each task is provisioned with its own elastic network interface (ENI), allowing for fine-grained control over security settings and network monitoring. This enables tasks to have dedicated network resources and isolates network traffic at the task level.
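
A minimal task-definition sketch with boto3; the family name, image, and resource sizes are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="patient-records-service",       # hypothetical family name
    networkMode="awsvpc",                   # each task gets its own ENI and security groups
    requiresCompatibilities=["EC2"],
    containerDefinitions=[{
        "name": "api",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/patient-api:latest",  # hypothetical image
        "cpu": 256,
        "memory": 512,
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)
```

With awsvpc mode, the service or run-task call then supplies the subnets and security groups for each task's ENI, which is what enables per-task security controls and network monitoring.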

Question 50 A large enterprise is planning to deploy container-based applications using Amazon ECS. The enterprise is looking for the least latency from their data center to the workloads in the containers. The proposed solution should be scalable and should support consistent high CPU and memory requirements.
What deployment can be implemented for this purpose?

a) Create a Fargate launch type with Amazon ECS and deploy it in the AWS Outpost

b) Create a Fargate launch type with Amazon ECS and deploy it in the AWS Local Zone

c) Create an EC2 launch type with Amazon ECS and deploy it in the AWS Local Zone

d) Create an EC2 launch type with Amazon ECS and deploy it in the AWS Outpost

View Answer

Answer is: d – Create an EC2 launch type with Amazon ECS and deploy it in the AWS Outpost

Explanation: To achieve the least latency from their data center to the workloads in the containers, while ensuring scalability and supporting high CPU and memory requirements, the large enterprise should create an EC2 launch type with Amazon ECS and deploy it in the AWS Outpost. AWS Outposts bring native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility. By deploying EC2 instances with Amazon ECS in the AWS Outpost, the large enterprise can benefit from low-latency connectivity to their data center and have the flexibility to scale the deployment as needed.

Question 51 A healthcare organization is using Amazon S3 buckets to store sensitive patient data. There are some users in the IT department who have an IAM policy with full access to S3 buckets. The organization wants to enforce strict access control for the patient data bucket to ensure only authorized members from the Medical Records department have access. The access control should be applied automatically when new users with full S3 access are created.
What access control mechanism can be implemented with the least administrative effort?

a) Create an S3 bucket policy for the patient data bucket with explicit ‘deny all’ to Principal elements (users and roles) other than the authorized Medical Records department members.

b) Create an S3 bucket policy for the patient data bucket with explicit ‘deny all’ to Principal (only users) other than the authorized Medical Records department members.

c) Create an S3 bucket policy for the patient data bucket restricting access only to specific IAM roles. Create a role with access permissions to the bucket and allow only the authorized Medical Records department members to assume this role.

d) Create an S3 bucket policy for the patient data bucket with explicit deny to NotPrincipal element (users and roles) that would match all authorized Medical Records department members and deny access to all other users.

View Answer

Answer is: c – Create an S3 bucket policy for the patient data bucket restricting access only to specific IAM roles. Create a role with access permissions to the bucket and allow only the authorized Medical Records department members to assume this role.

Explanation: To enforce strict access control for the patient data bucket and ensure that only authorized members from the Medical Records department have access, the organization should create an S3 bucket policy using option C. By creating an S3 bucket policy that restricts access only to specific IAM roles, the organization can grant access permissions to a role associated with the authorized users in the Medical Records department. Other users and roles without the necessary permissions will be denied access to the bucket.
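
A minimal sketch of such a policy, assuming a hypothetical MedicalRecordsAccess role and using the aws:PrincipalArn condition key to exempt it from the deny; in practice an administrative break-glass principal would also be exempted before the policy is applied:

```python
import json

ACCOUNT_ID = "111122223333"          # hypothetical account
BUCKET = "patient-data-example"      # hypothetical bucket
ROLE_ARN = f"arn:aws:iam::{ACCOUNT_ID}:role/MedicalRecordsAccess"  # hypothetical role

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptMedicalRecordsRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        # Deny every principal that is not the authorized role; new users with
        # full S3 access are therefore blocked automatically.
        "Condition": {"StringNotLike": {"aws:PrincipalArn": ROLE_ARN}},
    }],
}

print(json.dumps(bucket_policy, indent=2))
```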

Question 52 A company has deployed multiple Amazon EC2 instances in a private subnet of their VPC. These instances are used for hosting a web application that requires access from specific IP addresses for administration purposes. The operations team wants to implement access controls to restrict access to the EC2 instances only from the designated IP addresses while allowing normal web traffic from any source.
How can the access controls be designed to meet this requirement without impacting normal web traffic?

a) Create a bastion host in a public subnet and configure SSH tunneling to the EC2 instances in the private subnet. Set up security groups for the bastion host and the EC2 instances to allow access only from the designated IP addresses.

b) Create a Network Load Balancer (NLB) and configure it to forward traffic to the EC2 instances. Configure an AWS WAF (Web Application Firewall) with IP whitelisting rules to allow access only from the designated IP addresses.

c) Create an AWS CloudFront distribution and configure it as a proxy for the EC2 instances. Set up AWS WAF with IP whitelisting rules on the CloudFront distribution to allow access only from the designated IP addresses.

d) Create a NAT Gateway in a public subnet and configure it as the outbound gateway for the EC2 instances. Set up security groups for the EC2 instances to allow inbound access only from the designated IP addresses.

View Answer

Answer is: a – Create a bastion host in a public subnet and configure SSH tunneling to the EC2 instances in the private subnet. Set up security groups for the bastion host and the EC2 instances to allow access only from the designated IP addresses.

Explanation: The recommended approach to restrict access to the EC2 instances from specific IP addresses without impacting normal web traffic is to create a bastion host. A bastion host acts as a gateway that allows secure SSH access to the instances in the private subnet. By configuring SSH tunneling, the operations team can connect to the bastion host and then securely access the EC2 instances. Security groups can be set up for both the bastion host and the EC2 instances to allow access only from the designated IP addresses, ensuring strict control over administrative access.
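
A minimal sketch of the two security-group rules with boto3; the CIDR range and group IDs are hypothetical, and the existing HTTP/HTTPS rules for normal web traffic are left untouched:

```python
import boto3

ec2 = boto3.client("ec2")

ADMIN_CIDR = "203.0.113.0/24"          # hypothetical designated admin range
BASTION_SG = "sg-0aaaa1111bbbb2222c"   # hypothetical bastion security group
PRIVATE_SG = "sg-0cccc3333dddd4444e"   # hypothetical web-tier security group

# SSH to the bastion host only from the designated admin addresses.
ec2.authorize_security_group_ingress(
    GroupId=BASTION_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": ADMIN_CIDR, "Description": "Admin SSH"}],
    }],
)

# SSH to the private instances only from the bastion's security group.
ec2.authorize_security_group_ingress(
    GroupId=PRIVATE_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": BASTION_SG, "Description": "SSH from bastion"}],
    }],
)
```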

Question 53 A software development company is planning to deploy microservices applications on Amazon ECS. The Project Team anticipates occasional bursts in application usage and wants to ensure that the Amazon ECS clusters can scale automatically to meet these bursts without requiring manual interventions. Additionally, they want the relational database for the application to scale automatically based on application demand, without the need to manage the underlying instances.
What design recommendation can be made to achieve a scalable application?

a) Create Amazon ECS clusters with Fargate launch type. For the database, use Amazon DynamoDB.

b) Create Amazon ECS clusters with Fargate launch type. For the database, use Amazon Aurora Serverless.

c) Create Amazon ECS clusters with EC2 launch type. For the database, use Amazon Aurora Serverless.

d) Create Amazon ECS clusters with EC2 launch type. For the database, use Amazon DynamoDB.

View Answer

Answer is: b – Create Amazon ECS clusters with Fargate launch type. For the database, use Amazon Aurora Serverless.

Explanation: The recommended design for a scalable application is to create Amazon ECS clusters with the Fargate launch type and use Amazon Aurora Serverless for the database. This design allows the application to automatically scale with occasional bursts in usage without manual intervention. Amazon Aurora Serverless provides automatic scaling of the relational database based on application demand, eliminating the need to manage underlying instances.
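
On the ECS side, burst handling is usually added with a target tracking policy on the service's desired count through Application Auto Scaling. A minimal sketch, with hypothetical cluster, service, and threshold values:

```python
import boto3

appscaling = boto3.client("application-autoscaling")

RESOURCE_ID = "service/microservices-cluster/orders-service"  # hypothetical cluster/service

appscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

appscaling.put_scaling_policy(
    PolicyName="orders-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # keep average CPU around 60%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```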

Question 54 A manufacturing company has deployed multiple Amazon EC2 instances for its production system. The operations team is looking to retrieve the AMI ID for all these running instances. They are seeking your help with the correct URL for this purpose.
Which URL can be used to retrieve this detail?

a) Use http://169.254.169.254/latest/meta-data/ami-id

b) Use http://168.254.168.254/latest/metadata/ami-id

c) Use http://169.254.169.254/latest/user-data/ami-id

d) Use http://168.253.168.253/latest/dynamic/ami-id

View Answer

Answer is: a – Use http://169.254.169.254/latest/meta-data/ami-id

Explanation: To retrieve the AMI ID for an Amazon EC2 instance, query the instance metadata service at http://169.254.169.254/latest/meta-data/ami-id from within the instance. This metadata path returns the ID of the AMI that was used to launch the instance.
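
A minimal sketch of querying this path from within an instance, using the IMDSv2 token flow (required when IMDSv1 is disabled):

```python
import urllib.request

BASE = "http://169.254.169.254/latest"

# Step 1: obtain a short-lived session token.
token_request = urllib.request.Request(
    f"{BASE}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_request).read().decode()

# Step 2: read the AMI ID from the metadata path using the token.
ami_request = urllib.request.Request(
    f"{BASE}/meta-data/ami-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(ami_request).read().decode())  # e.g. ami-0abcdef1234567890
```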

Question 55 While analyzing millions of sensor readings in an IoT company, you have noticed some data processing operations are exceeding SLAs even after using high-performance EC2 instances. After closely monitoring logs and tracing data, you discovered that data ingestion and transformation processes involving S3 content are causing 80% of the performance bottleneck. In comparison, data analysis and retrieval operations for real-time insights are responsible for the remaining 20% of congestion. Which two options are recommended to increase performance in this scenario? (Select TWO)

a) Implement a Lambda function triggered by S3 events to perform data pre-processing and transformation asynchronously, offloading the processing workload from the EC2 instances.

b) Utilize Amazon S3 Transfer Acceleration and parallelized multi-part uploads for faster ingestion and storage of sensor data, reducing the impact on processing operations.

c) Instead of direct EC2-based processing, leverage AWS Glue for ETL (Extract, Transform, Load) operations on S3 data, which offers scalable and serverless data processing capabilities.

d) Implement caching mechanisms using Amazon ElastiCache (Redis) to store frequently accessed sensor data, reducing the need for repeated read operations on S3.

e) Deploy an Amazon Redshift cluster for data warehousing, enabling efficient storage and analysis of large volumes of sensor data, while offloading the processing burden from the EC2 instances.

View Answer

Answer is: b and c – Utilize Amazon S3 Transfer Acceleration and parallelized multi-part uploads, and leverage AWS Glue for ETL (Extract, Transform, Load) operations on S3 data.

Explanation: In this scenario, to increase performance, it is recommended to implement options B and C. Option B suggests leveraging Amazon S3 Transfer Acceleration and parallelized multi-part uploads to enhance the speed of data ingestion and storage, reducing the impact on processing operations. This ensures efficient and faster data transfer to S3. Option C proposes utilizing AWS Glue for ETL operations, which provides scalable and serverless data processing capabilities, offloading the processing workload from the EC2 instances and improving overall performance. By implementing these two options, the IoT company can address the performance bottlenecks and achieve improved data processing efficiency for their sensor readings.
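
A minimal sketch of option B on the upload path, assuming Transfer Acceleration is already enabled on a hypothetical bucket; the file name and tuning values are illustrative:

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# Route uploads through the S3 Transfer Acceleration endpoint.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Parallelized multi-part upload settings.
transfer_config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # switch to multi-part above 8 MiB
    multipart_chunksize=8 * 1024 * 1024,
    max_concurrency=10,
)

s3.upload_file(
    Filename="sensor-batch.json",           # hypothetical local batch file
    Bucket="iot-sensor-ingest-example",     # hypothetical bucket
    Key="raw/sensor-batch.json",
    Config=transfer_config,
)
```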

Question 56 You are managing a cluster of Amazon EC2 instances for your high-performance computing (HPC) workloads. As part of your evaluation, you have been exploring the benefits of enhanced networking. However, you have encountered some issues with the settings, despite having VPC peering connections configured. Some of the instances in your cluster are ENA-capable, and you have verified that all instances have internet connectivity. You are using the AWS CLI to troubleshoot the problem and determine the correct statement in this scenario.
Which relevant statement is correct in this scenario?

a) All instance types support the ENA. Make sure you are using a supported version of the Linux kernel and a supported distribution; when you run ethtool -i eth0, the reported driver should be ena.

b) Some instance types support the ENA. Make sure you are using a supported version of the Linux kernel and a supported distribution; when you run ethtool -i eth0, the reported driver should be ena.

c) Enhanced networking does not use SR-IOV, and transitive peering is not supported.

d) Enhanced networking uses SR-IOV, and transitive peering is supported.

View Answer

Answer is: b – Some instance types support the ENA. Make sure you are using a supported version of the Linux kernel and a supported distribution; when you run ethtool -i eth0, the reported driver should be ena.

Explanation: In this scenario, the correct statement is B. Not all EC2 instance types support Enhanced Networking using Elastic Network Adapter (ENA). To determine the compatible instance types, it is important to consult the AWS documentation. Additionally, ensure that your instances are running a supported version of the Linux kernel and a compatible distribution. You can verify the presence of ENA as the driver by running the command ‘ethtool -i eth0’ on the instances. This statement highlights the need for specific instance types and the requirement for proper kernel and distribution compatibility to leverage Enhanced Networking with ENA.
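
As a complementary check from the API side (the instance ID below is hypothetical), the enaSupport attribute can be read with boto3, while ethtool confirms the driver inside the guest OS:

```python
import boto3

ec2 = boto3.client("ec2")

response = ec2.describe_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # hypothetical instance
    Attribute="enaSupport",
)
# True when the instance has ENA enabled at the EC2 level.
print(response["EnaSupport"].get("Value", False))
```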

Question 57 You are tasked with managing API keys in AWS Secrets Manager while ensuring automatic rotation to comply with your company’s security policy. Applications should be able to retrieve the latest version of the API credentials from Secrets Manager. You need to determine the appropriate approach for implementing key rotation.
Which option would you choose to implement key rotation in this scenario?

a) Use AWS Parameter Store to store and rotate the keys since Secrets Manager does not support automatic key rotation.

b) Add multiple keys directly in Secrets Manager and enable automatic rotation, ensuring the keys are rotated every year.

c) Customize a Lambda function to perform the rotation of secrets in Secrets Manager based on your specific requirements.

d) Create two secrets in Secrets Manager, each storing a different version of the API credentials, and modify the application logic to retrieve the appropriate key.

View Answer

Answer is: c – Customize a Lambda function to perform the rotation of secrets in Secrets Manager based on your specific requirements.

Explanation: In this scenario, the most suitable approach is option C. You can customize a Lambda function to perform the rotation of secrets in AWS Secrets Manager. By doing so, you can define the specific rotation logic and configure Secrets Manager to rotate the API keys automatically according to your company’s policy. This ensures that applications can always retrieve the latest version of the API credentials from Secrets Manager without manual intervention. Customizing the Lambda function provides flexibility and control over the key rotation process.
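
A minimal sketch of attaching a custom rotation function to a secret; the secret name and Lambda ARN are hypothetical, and the Lambda itself must implement the createSecret, setSecret, testSecret, and finishSecret rotation steps:

```python
import boto3

secretsmanager = boto3.client("secretsmanager")

# Enable automatic rotation with a custom Lambda rotation function.
secretsmanager.rotate_secret(
    SecretId="prod/api-key",  # hypothetical secret name
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-api-key",  # hypothetical
    RotationRules={"AutomaticallyAfterDays": 30},
)

# Applications always read the current version, so rotation is transparent to them.
current = secretsmanager.get_secret_value(SecretId="prod/api-key")
print(current["SecretString"])
```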

Question 58 A company uses IoT sensors to monitor the number of bags handled at an airport. The sensor data is sent to an Amazon Kinesis stream with default settings. Every alternate day, the data from the stream is supposed to be sent to an Amazon S3 bucket for further processing. However, it has been observed that not all of the data sent to the Kinesis stream is reaching the S3 bucket. What could be the reason for this issue?

a) The IoT sensors may have malfunctioned on some days, resulting in the data not being sent to the Kinesis stream.

b) Amazon S3 has a limited storage capacity and can only store data for a day, causing some data loss.

c) The default retention period of the Kinesis stream is set to 24 hours, which means that data older than 24 hours is automatically deleted from the stream, leading to data loss.

d) Kinesis streams are not designed to handle IoT-related data and may experience limitations or issues when processing such data.

View Answer

Answer is: c – The default retention period of the Kinesis stream is set to 24 hours, which means that data older than 24 hours is automatically deleted from the stream, leading to data loss.

Explanation: The most likely reason for not receiving all of the data in the S3 bucket is that the default retention period of the Kinesis stream is set to 24 hours. This means that data older than 24 hours is automatically deleted from the stream. Since the data is only sent to S3 every alternate day, any data that is older than 24 hours will be lost before it can be processed. To resolve this issue, the retention period of the Kinesis stream should be adjusted accordingly to ensure that all relevant data is available for processing in S3.
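
A minimal sketch of extending the retention window with boto3; the stream name and retention value are hypothetical (retention can be raised to as much as 365 days):

```python
import boto3

kinesis = boto3.client("kinesis")

# Keep records for 72 hours so the every-other-day export to S3 never misses data.
kinesis.increase_stream_retention_period(
    StreamName="airport-bag-sensors",  # hypothetical stream
    RetentionPeriodHours=72,
)
```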

Question 59 You have a social media application that processes user comments and notifications. The frontend application sends the user-generated content to an Amazon SQS queue for further processing. The backend infrastructure consists of multiple EC2 instances behind an Application Load Balancer (ALB). You want to ensure that the EC2 instances scale dynamically based on the incoming workload in the SQS queue. Which of the following CloudWatch metrics would you choose to monitor the length of the SQS queue?

a) ApproximateNumberOfMessagesVisible

b) ApproximateNumberOfMessagesNotVisible

c) NumberOfMessagesReceived

d) NumberOfMessagesDeleted

View Answer

Answer is: a – ApproximateNumberOfMessagesVisible

Explanation: To monitor the length of the SQS queue, you would choose the CloudWatch metric “ApproximateNumberOfMessagesVisible.” This metric represents the approximate number of messages that are visible and available for processing in the queue. By monitoring this metric, you can configure your EC2 instances to scale dynamically based on the queue size, ensuring that the backend infrastructure adapts to the incoming workload.
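
A minimal sketch of reading this metric with boto3; the queue name is hypothetical:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

now = datetime.now(timezone.utc)
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "user-content-queue"}],  # hypothetical queue
    StartTime=now - timedelta(minutes=10),
    EndTime=now,
    Period=60,
    Statistics=["Average"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```

In practice this value is often divided by the number of running instances to produce a backlog-per-instance metric that drives a target tracking policy on the Auto Scaling group.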

Question 60 Which component of Amazon SageMaker is responsible for preprocessing data and transforming it into a format suitable for training machine learning models?

a) Amazon SageMaker Notebooks

b) Amazon SageMaker Ground Truth

c) Amazon SageMaker Processing

d) Amazon SageMaker Training

View Answer

Answer is: c – Amazon SageMaker Processing

Explanation: Amazon SageMaker Processing is a fully managed capability that lets you preprocess data and transform it into a format suitable for training machine learning models. It provides a consistent and reproducible way to run data preprocessing tasks on large datasets. With Amazon SageMaker Processing, you can use built-in processing containers (for example, scikit-learn or Spark) or bring your own custom processing code.
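
A minimal sketch with the SageMaker Python SDK; the role ARN, S3 paths, framework version, and script name are hypothetical:

```python
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor

processor = SKLearnProcessor(
    framework_version="1.2-1",  # assumed scikit-learn container version
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # hypothetical role
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

processor.run(
    code="preprocess.py",  # your preprocessing script
    inputs=[ProcessingInput(
        source="s3://example-bucket/raw/",
        destination="/opt/ml/processing/input",
    )],
    outputs=[ProcessingOutput(
        source="/opt/ml/processing/output",
        destination="s3://example-bucket/prepared/",
    )],
)
```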
