Free – AWS Certified Solutions Architect – Professional Exam Practice Questions

AWS Certified Solutions Architect – Professional Exam Practice

Are you prepared for your upcoming AWS Certified Solutions Architect – Professional exam?

Assess your understanding with these free AWS Certified Solutions Architect – Professional exam practice questions. Just click the View Answer button to reveal the correct answer along with a comprehensive explanation.

  • Exam Name: AWS Certified Solutions Architect – Professional
  • Level: Professional
  • Total Number of Practice Questions: 20

Let’s start the test.

Question 1: A high-profile e-commerce company expects a significant increase in traffic during seasonal sales events. They require an autoscaling solution that can handle sudden spikes in traffic while ensuring a seamless user experience. Additionally, the solution should automatically scale down resources during low-traffic periods to optimize costs. Which AWS service or feature should the solutions architect recommend to meet these requirements?

a) Implement an Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling group with scheduled scaling actions. Configure the scaling group to automatically increase the number of instances before the expected traffic surge and reduce the capacity when traffic decreases, based on predefined schedules.

b) Utilize Amazon Elastic Load Balancing (ELB) with Elastic Load Balancer Auto Scaling. Configure ELB to distribute incoming traffic across EC2 instances and enable Auto Scaling to automatically adjust the number of instances based on demand.

c) Implement an AWS Lambda function with Amazon CloudWatch Events. Configure the Lambda function to trigger based on a predefined schedule, ensuring that additional resources are provisioned during peak traffic periods and deprovisioned during low-traffic periods.

d) Utilize AWS Elastic Beanstalk to deploy and manage the web application. Configure Elastic Beanstalk to automatically scale the application based on predefined performance thresholds and traffic patterns.

View Answer

Answer is: b – Utilize Amazon Elastic Load Balancing (ELB) with Elastic Load Balancer Auto Scaling. Configure ELB to distribute incoming traffic across EC2 instances and enable Auto Scaling to automatically adjust the number of instances based on demand.

Explanation: To meet the requirements of handling sudden spikes in traffic and providing a seamless user experience while optimizing costs during low-traffic periods, the solutions architect should recommend utilizing Amazon Elastic Load Balancing (ELB) with Elastic Load Balancer Auto Scaling. ELB distributes incoming traffic across EC2 instances, and Auto Scaling automatically adjusts the number of instances based on demand. This combination ensures scalability, fault tolerance, and cost optimization for the e-commerce company’s seasonal sales events. Options a, c, and d do not provide the same level of scalability and cost optimization for the scenario.
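
To make answer b concrete, here is a minimal boto3 sketch of a target tracking policy that keeps request count per target near a set value, adding instances during spikes and removing them when traffic falls; the Auto Scaling group name and the ALB/target-group resource label are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: aim for roughly 1000 requests per target. The group
# name and the ALB/target-group resource label below are placeholders.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="ecommerce-web-asg",
    PolicyName="requests-per-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            "ResourceLabel": "app/web-alb/1234567890abcdef/targetgroup/web-tg/fedcba0987654321",
        },
        "TargetValue": 1000.0,
        # Leave scale-in enabled so capacity drops in quiet periods.
        "DisableScaleIn": False,
    },
)
```

For predictable sales events, a scheduled action (as in option a) could be layered on the same group to pre-warm capacity before the surge.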

Question 2: A large enterprise organization wants to implement an effective backup and disaster recovery strategy for their critical database workloads running on AWS. They require a solution that offers point-in-time recovery, low recovery time objectives (RTO), and high durability. Which AWS service or feature should the solutions architect recommend to meet these requirements?

a) Utilize Amazon RDS (Relational Database Service) Multi-AZ deployment. Configure Multi-AZ to automatically replicate database changes synchronously to a standby replica in a different Availability Zone, enabling automatic failover and minimizing RTO.

b) Implement AWS Backup to create backup plans for the databases. Configure backup retention policies to retain backups for the desired duration and schedule regular backups to capture point-in-time recovery points.

c) Deploy Amazon Aurora Global Database with cross-region replication. Configure Aurora Global Database to replicate the primary database to a secondary region, ensuring high durability and low RTO in the event of a region-wide failure.

d) Utilize Amazon S3 (Simple Storage Service) with versioning enabled. Configure S3 to store database backups as object versions, allowing point-in-time recovery by selecting the desired version during restore.

View Answer

Answer is: c – Deploy Amazon Aurora Global Database with cross-region replication. Configure Aurora Global Database to replicate the primary database to a secondary region, ensuring high durability and low RTO in the event of a region-wide failure.

Explanation: To meet the requirements of point-in-time recovery, low recovery time objectives (RTO), and high durability for critical database workloads, the solutions architect should recommend implementing Amazon Aurora Global Database with cross-region replication. Aurora Global Database provides fast cross-region replication, enabling automatic failover in the event of a region-wide failure and ensuring low RTO. This solution offers high durability and satisfies the requirements of the enterprise organization. Options a, b, and d offer backup and recovery capabilities but do not provide the same level of cross-region replication and low RTO as Aurora Global Database for the scenario.
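
As a minimal boto3 sketch of answer c, with placeholder identifiers and ARNs, an existing Aurora cluster is promoted to a global database and a secondary cluster is attached in a second Region:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Promote the existing primary cluster (placeholder ARN) to a
# global database.
rds.create_global_cluster(
    GlobalClusterIdentifier="orders-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:orders-primary",
)

# Attach a secondary, read-only cluster in another Region; Aurora
# replicates to it with typically sub-second lag.
rds_west = boto3.client("rds", region_name="us-west-2")
rds_west.create_db_cluster(
    DBClusterIdentifier="orders-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="orders-global",
)
```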

Question 3: A company has multiple AWS accounts, each managed by different business units. The company wants to enforce a consistent security baseline across all AWS accounts while allowing individual business units to manage their resources. The security team wants to ensure that all accounts comply with security best practices, such as enabling encryption at rest for Amazon S3 buckets and enforcing strong password policies for IAM users. What solution should the solutions architect recommend to meet these requirements?

a) Implement AWS Control Tower and configure guardrails to enforce security policies across all AWS accounts. Use pre-defined guardrails to enable encryption at rest for S3 buckets and enforce strong password policies.

b) Utilize AWS CloudFormation StackSets to deploy a security template that enables encryption at rest for S3 buckets and enforces strong password policies. Apply the template to all AWS accounts in the organization.

c) Create an IAM policy that mandates encryption at rest for S3 buckets and strong password policies for IAM users. Attach the policy to a central IAM role and associate the role with all AWS accounts.

d) Configure AWS Organizations service control policies (SCPs) to enforce encryption at rest for S3 buckets and strong password policies. Apply the SCPs to all AWS accounts within the organization.

View Answer

Answer is: b – Utilize AWS CloudFormation StackSets to deploy a security template that enables encryption at rest for S3 buckets and enforces strong password policies. Apply the template to all AWS accounts in the organization.

Explanation: To enforce a consistent security baseline across multiple AWS accounts while allowing individual business units to manage their resources, the solutions architect should recommend utilizing AWS CloudFormation StackSets. By deploying a security template using StackSets, encryption at rest for S3 buckets and strong password policies can be enabled across all AWS accounts in the organization. This ensures consistent security practices without compromising the autonomy of the business units. Options a, c, and d do not provide the same level of scalability and centralized control as option b for the given scenario.
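
A minimal boto3 sketch of answer b follows; the security-baseline.yaml template (which would define the S3 encryption and IAM password-policy resources) and the organizational unit ID are hypothetical:

```python
import boto3

cfn = boto3.client("cloudformation")

# Create the stack set from a hypothetical baseline template.
with open("security-baseline.yaml") as f:
    template = f.read()

cfn.create_stack_set(
    StackSetName="security-baseline",
    TemplateBody=template,
    PermissionModel="SERVICE_MANAGED",  # deploy via AWS Organizations
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Roll the stack out to every account under a placeholder OU.
cfn.create_stack_instances(
    StackSetName="security-baseline",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11111111"]},
    Regions=["us-east-1"],
)
```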

Question 4: A company has multiple AWS accounts, each dedicated to different projects. The company wants to implement a centralized logging solution to monitor and analyze logs across all accounts. The solution should provide real-time log analysis, automated log retention, and the ability to define custom log metrics. Which AWS service or feature should the solutions architect recommend to meet these requirements?

a) Utilize Amazon CloudWatch Logs and create a centralized log group. Configure cross-account log streaming to ingest logs from all AWS accounts. Use CloudWatch Logs Insights for real-time log analysis and create custom log metrics for monitoring.

b) Implement AWS CloudTrail to capture and log API activity across all AWS accounts. Enable log file integrity validation and use AWS Glue to automate log retention and analysis.

c) Deploy Amazon Elasticsearch Service (Amazon ES) and configure cross-account access to ingest logs from all AWS accounts. Utilize Kibana for real-time log analysis and create custom log dashboards for monitoring.

d) Utilize AWS Lambda to create a custom log processing function. Configure the function to receive logs from all AWS accounts and store them in a centralized Amazon S3 bucket. Use Amazon Athena for log analysis and create custom SQL queries for monitoring.

View Answer

Answer is: a – Utilize Amazon CloudWatch Logs and create a centralized log group. Configure cross-account log streaming to ingest logs from all AWS accounts. Use CloudWatch Logs Insights for real-time log analysis and create custom log metrics for monitoring.

Explanation: To meet the requirements of a centralized logging solution for monitoring and analyzing logs across multiple AWS accounts, the solutions architect should recommend utilizing Amazon CloudWatch Logs. By creating a centralized log group and configuring cross-account log streaming, logs from all AWS accounts can be ingested. CloudWatch Logs Insights provides real-time log analysis capabilities, while custom log metrics can be defined for monitoring purposes. This solution offers the desired features and scalability for the scenario. Options b, c, and d do not provide the same level of centralized log analysis and custom metric capabilities as option a for the given scenario.
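
As a sketch of the Logs Insights part of answer a (boto3, with a placeholder log group name), the following runs a query for recent error lines and polls for the result:

```python
import time
import boto3

logs = boto3.client("logs")

# Query the last hour of the centralized log group for error lines.
query = logs.start_query(
    logGroupName="/central/application-logs",  # placeholder
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=(
        "fields @timestamp, @message "
        "| filter @message like /ERROR/ "
        "| sort @timestamp desc | limit 20"
    ),
)

# Poll until the query finishes, then print the matching rows.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in results.get("results", []):
    print({field["field"]: field["value"] for field in row})
```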

Question 5: A development team is building an HTML form that needs to be hosted on a public Amazon S3 bucket. The form uses JavaScript to post data to an API Gateway API endpoint, which is integrated with AWS Lambda functions. The team wants to ensure that the form can successfully post data to the API endpoint and receive a valid response. What steps must the team complete to achieve this?
(Select TWO.)

a) Configure the S3 bucket to enable cross-origin resource sharing (CORS), allowing JavaScript requests from the form hosted on the S3 bucket to the API Gateway API endpoint.

b) Host the HTML form on an Amazon EC2 instance rather than on Amazon S3.

c) Request a quota increase for the API Gateway to handle the expected traffic from the HTML form.

d) Enable cross-origin resource sharing (CORS) in the API Gateway to allow requests from the JavaScript code in the HTML form.

e) Configure the S3 bucket for web hosting to serve the HTML form as a static website.

View Answer

Answer is: a, d – (a) Configure the S3 bucket to enable cross-origin resource sharing (CORS), allowing JavaScript requests from the form hosted on the S3 bucket to the API Gateway API endpoint, and (d) Enable cross-origin resource sharing (CORS) in the API Gateway to allow requests from the JavaScript code in the HTML form.

Explanation: Options a and d together ensure that the necessary cross-origin resource sharing (CORS) configuration is in place on both the S3 bucket and the API Gateway, allowing the form to communicate with the API Gateway API endpoint without browser CORS errors. Options b, c, and e are not relevant or necessary for achieving successful form submission to the API endpoint in this scenario.
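
A minimal boto3 sketch of the S3 half (option a), with a placeholder bucket name; the API Gateway side (option d) also needs CORS enabled so its responses carry Access-Control-Allow-Origin headers:

```python
import boto3

s3 = boto3.client("s3")

# Allow browsers to load the form's assets cross-origin; the bucket
# name is a placeholder, and "*" could be narrowed to a known origin.
s3.put_bucket_cors(
    Bucket="form-site-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["*"],
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```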

Question 6: A company has a web application hosted on multiple Amazon EC2 instances behind an Application Load Balancer (ALB). The company wants to ensure that the web application can handle sudden spikes in traffic and remain highly available. Which solution should the solutions architect recommend to meet these requirements?

a) Utilize Amazon CloudFront in front of the ALB to cache and serve static content, reducing the load on the EC2 instances.

b) Configure Amazon CloudWatch alarms to monitor the ALB’s request count per target and trigger an Auto Scaling group to add additional EC2 instances when the threshold is exceeded.

c) Implement Amazon Route 53 with a failover routing policy to route traffic to a secondary region in case of a failure in the primary region.

d) Enable AWS Shield Advanced to provide DDoS protection for the web application and automatically mitigate attacks.

View Answer

Answer is: b – Configure Amazon CloudWatch alarms to monitor the ALB’s request count per target and trigger an Auto Scaling group to add additional EC2 instances when the threshold is exceeded.

Explanation: To ensure that the web application can handle sudden spikes in traffic and remain highly available, the solutions architect should recommend configuring Amazon CloudWatch alarms to monitor the ALB’s request count per target. When the threshold is exceeded, an Auto Scaling group should be triggered to add additional EC2 instances, automatically scaling the capacity to handle the increased traffic. This solution provides the necessary scalability and availability for the given scenario. Options a, c, and d do not directly address the requirement of handling sudden spikes in traffic and maintaining high availability.
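
To illustrate answer b, here is a minimal boto3 sketch that wires a RequestCountPerTarget alarm to a step-scaling policy; the group name, target-group dimension, and threshold are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Step-scaling policy: add one instance each time the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-on-requests",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 1}],
)

# Alarm on the ALB's per-target request count; the dimension value
# is a placeholder target-group label.
cloudwatch.put_metric_alarm(
    AlarmName="alb-requests-per-target-high",
    Namespace="AWS/ApplicationELB",
    MetricName="RequestCountPerTarget",
    Dimensions=[{"Name": "TargetGroup", "Value": "targetgroup/web-tg/fedcba0987654321"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```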

Question 7: A company has a serverless architecture that utilizes Amazon S3 for static website hosting, AWS Lambda functions, Amazon DynamoDB, and Amazon Simple Notification Service (SNS) for event notifications. The company is experiencing performance issues with its serverless application, and users are reporting delays and timeouts when accessing the website. Upon investigation, it is found that the Lambda functions are taking longer than expected to execute, impacting the overall application performance. Which solution should the solutions architect recommend to address this issue?

a) Increase the memory allocation for the Lambda functions. Higher memory allocation provides additional CPU power and proportional increases in allocated network bandwidth and disk I/O.

b) Implement Amazon CloudFront in front of the S3 bucket to cache the website content and reduce the load on the Lambda functions.

c) Configure Amazon CloudWatch alarms to monitor the average duration of the Lambda functions. Trigger an Auto Scaling policy to increase the number of concurrent function executions when the duration exceeds the specified threshold.

d) Modify the DynamoDB table to use provisioned capacity instead of on-demand capacity. Provisioned capacity ensures that sufficient read and write capacity is allocated to handle the workload.

View Answer

Answer is: c – Configure Amazon CloudWatch alarms to monitor the average duration of the Lambda functions. Trigger an Auto Scaling policy to increase the number of concurrent function executions when the duration exceeds the specified threshold.

Explanation: To address the performance issues and delays in the serverless application, the solutions architect should recommend configuring Amazon CloudWatch alarms to monitor the average duration of the Lambda functions. When the duration exceeds the specified threshold, an Auto Scaling policy can be triggered to increase the number of concurrent function executions. This will help distribute the workload across multiple instances of the Lambda functions, improving performance and reducing delays. Options a, b, and d do not directly address the issue of longer execution times for the Lambda functions and the resulting impact on overall application performance.
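
As a sketch of the monitoring half of answer c (boto3, with a placeholder function name and SNS topic), here is an alarm on the function's average duration; the scaling reaction itself, which in practice would be Application Auto Scaling acting on provisioned concurrency, is omitted for brevity:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average duration exceeds 3 seconds over 15 minutes.
# The function name and SNS topic ARN are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="lambda-duration-high",
    Namespace="AWS/Lambda",
    MetricName="Duration",
    Dimensions=[{"Name": "FunctionName", "Value": "process-records"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=3000.0,  # milliseconds
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)
```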

Question 8: A company has a distributed application that spans multiple AWS accounts. The company wants to ensure secure communication between the application components and centralized control over network traffic. Which solution should the solutions architect recommend to meet these requirements?

a) Implement AWS Direct Connect to establish private network connections between the AWS accounts and ensure secure and dedicated communication.

b) Configure VPC peering connections between the VPCs in each AWS account to enable private communication and centralized control over network traffic.

c) Utilize AWS PrivateLink to securely access services across different AWS accounts over private network connections.

d) Set up a VPN connection between the on-premises network and each AWS account to establish secure communication and centralized control over network traffic.

View Answer

Answer is: c – Utilize AWS PrivateLink to securely access services across different AWS accounts over private network connections.

Explanation: To ensure secure communication between the application components across multiple AWS accounts and centralized control over network traffic, the solutions architect should recommend utilizing AWS PrivateLink. AWS PrivateLink enables access to services hosted in different AWS accounts over private network connections, providing secure and dedicated communication between the components. Other Options:

(a) AWS Direct Connect is not recommended for this scenario as it establishes private network connections between on-premises infrastructure and AWS, rather than between different AWS accounts.

(b) VPC peering connections enable private communication between VPCs in different AWS accounts, but they do not provide centralized control over network traffic.

(d) VPN connections between the on-premises network and each AWS account can provide secure communication, but they do not offer centralized control over network traffic within the distributed application spanning multiple AWS accounts.
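
A minimal boto3 sketch of the PrivateLink pattern in answer c, with placeholder ARNs and IDs; in practice the two clients would run under credentials for the provider and consumer accounts respectively:

```python
import boto3

# Provider account: expose a service fronted by a Network Load
# Balancer (placeholder ARN) as a VPC endpoint service.
provider_ec2 = boto3.client("ec2")
svc = provider_ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/svc-nlb/abc123"
    ],
    AcceptanceRequired=True,  # provider approves each consumer
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# Consumer account: create an interface endpoint to that service.
# VPC, subnet, and security group IDs are placeholders.
consumer_ec2 = boto3.client("ec2")
consumer_ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234567890def",
    ServiceName=service_name,
    SubnetIds=["subnet-0abc1234567890def"],
    SecurityGroupIds=["sg-0abc1234567890def"],
)
```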

Question 9: A company wants to enhance the security of its Amazon S3 buckets by implementing encryption at rest. The company has a mix of new and existing buckets and wants to ensure that all data stored in the buckets is encrypted using server-side encryption. Which solution should the company use to achieve this?

a) Use AWS CloudFormation to create a bucket policy that enforces server-side encryption for all new buckets.

b) Use AWS Key Management Service (KMS) to create a customer managed key (CMK) and enable default encryption for all existing and new buckets.

c) Enable default encryption for the S3 buckets using AWS Identity and Access Management (IAM) policies.

d) Use AWS S3 Batch Operations to apply server-side encryption to all existing buckets and configure default encryption for new buckets.

View Answer

Answer is: b – Use AWS Key Management Service (KMS) to create a customer managed key (CMK) and enable default encryption for all existing and new buckets.

Explanation: To ensure that all data stored in both existing and new Amazon S3 buckets is encrypted using server-side encryption, the company should use AWS Key Management Service (KMS) to create a customer managed key (CMK) and enable default encryption for the buckets. This solution provides centralized control and consistent encryption across all buckets. Other Options:

(a) is incorrect because using AWS CloudFormation to create a bucket policy will only enforce server-side encryption for new buckets, not existing ones.

(c) is incorrect because enabling default encryption through IAM policies does not apply to existing buckets.

(d) is incorrect because AWS S3 Batch Operations can apply server-side encryption to existing buckets, but it does not enable default encryption for new buckets.
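
A minimal boto3 sketch of answer b, with a placeholder bucket name; the put_bucket_encryption call would be repeated for each existing bucket and applied to new buckets as they are created:

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key, then make SSE-KMS with that key the
# bucket's default encryption.
key = kms.create_key(Description="Default encryption key for S3 data")
key_id = key["KeyMetadata"]["KeyId"]

s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                },
                "BucketKeyEnabled": True,  # reduces KMS request costs
            }
        ]
    },
)
```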

Question 10: A company wants to ensure high availability for its web application hosted on Amazon EC2 instances. The application is currently running on a single EC2 instance and experiences downtime during instance maintenance events. Which solution should the company implement to achieve high availability?

a) Configure Amazon CloudWatch alarms to automatically replace the EC2 instance when it becomes unreachable.

b) Use an Auto Scaling group with a minimum and maximum instance count of 2 and enable health checks to automatically replace any unhealthy instances.

c) Set up an Amazon CloudFront distribution in front of the EC2 instance to cache content and improve availability.

d) Use AWS Elastic Beanstalk to deploy the web application and let AWS handle the scaling and availability automatically.

View Answer

Answer is: b – Use an Auto Scaling group with a minimum and maximum instance count of 2 and enable health checks to automatically replace any unhealthy instances.

Explanation: To achieve high availability for the web application and minimize downtime during instance maintenance events, the company should use an Auto Scaling group. By configuring the Auto Scaling group with a minimum and maximum instance count of 2 and enabling health checks, any unhealthy instances can be automatically replaced, ensuring continuous availability of the application. Other Options:

(a) is incorrect because configuring CloudWatch alarms to replace the EC2 instance when it becomes unreachable does not provide automatic failover or high availability.

(c) is incorrect because setting up an Amazon CloudFront distribution improves performance and scalability but does not address instance availability during maintenance events.

(d) is incorrect because while AWS Elastic Beanstalk can handle scaling and availability, it may not provide the same level of control and customization as an Auto Scaling group.
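
To make answer b concrete, a minimal boto3 sketch of an Auto Scaling group that keeps two instances healthy across two Availability Zones; the launch template, subnets, and target group ARN are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Two instances at all times; any instance failing the load
# balancer's health check is terminated and replaced.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    MinSize=2,
    MaxSize=2,
    DesiredCapacity=2,
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",  # two AZs
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tg/abc123"
    ],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```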

Question 11: A company is planning to migrate its existing on-premises database to AWS. The database contains critical customer data that needs to be securely transferred to AWS with minimal downtime. Which solution should the company implement to meet these requirements?

a) Use AWS Snowball to physically transfer the database backups to an AWS data center. Once the backups are transferred, restore them to Amazon RDS for seamless migration.

b) Set up a VPN connection between the on-premises network and the Amazon VPC. Use the AWS Database Migration Service (DMS) to replicate the database to Amazon RDS in real time.

c) Export the database schema and data to CSV files. Use AWS Direct Connect to establish a private network connection between the on-premises environment and an Amazon S3 bucket. Upload the CSV files to the S3 bucket and import them into Amazon RDS.

d) Create a snapshot of the on-premises database. Use the AWS Database Migration Service (DMS) to migrate the snapshot to Amazon RDS. Configure DMS to replicate ongoing changes from the on-premises database to the RDS instance.

View Answer

Answer is: d – Create a snapshot of the on-premises database. Use the AWS Database Migration Service (DMS) to migrate the snapshot to Amazon RDS. Configure DMS to replicate ongoing changes from the on-premises database to the RDS instance.

Explanation: To securely transfer the on-premises database to AWS with minimal downtime, the company should create a snapshot of the database and use the AWS Database Migration Service (DMS) for migration. The snapshot provides a consistent point-in-time copy of the database, and DMS allows for seamless migration with ongoing replication of changes from the on-premises database to the Amazon RDS instance. Other Options:

(a) is incorrect because using AWS Snowball for physically transferring database backups is a slower and less efficient method compared to DMS migration.

(b) is incorrect because while setting up a VPN connection and using DMS for real-time replication is a valid approach, it may result in downtime during the initial migration.

(c) is incorrect because manually exporting the database to CSV files and uploading them to S3 for import into Amazon RDS can be time-consuming and may cause significant downtime.
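
A minimal boto3 sketch of the DMS task in answer d, with placeholder endpoint and replication instance ARNs; the full-load-and-cdc migration type copies the existing data and then keeps replicating changes until cutover:

```python
import json
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-to-rds",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRCEXAMPLE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGTEXAMPLE",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTEXAMPLE",
    MigrationType="full-load-and-cdc",  # initial copy plus ongoing changes
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all-tables",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```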

Question 12: A company is building a real-time analytics platform on AWS. The platform receives streaming data from thousands of IoT devices. The data is ingested into Amazon Kinesis Data Streams, and an AWS Lambda function is triggered to process and analyze each data record. The Lambda function requires access to an external API to retrieve additional information for the analysis. Currently, the Lambda function makes a direct API call for each data record, resulting in a high number of API requests and increased costs. The company wants to optimize the architecture to reduce API costs without compromising the analysis process. Which changes should the company implement?
(Select TWO.)

a) Modify the Lambda function to batch multiple data records and make a single API call for the batch.

b) Implement an Amazon SQS queue to buffer the incoming data records. Configure a separate Lambda function to process the queue in batches and make API calls for each batch.

c) Implement AWS Step Functions to orchestrate the processing of data records and API calls, enabling parallel execution for increased efficiency.

d) Replace the Lambda function with an Amazon Kinesis Data Firehose delivery stream. Configure Firehose to transform the data and make API calls before delivering the processed data to the target destination.

e) Implement an AWS Lambda provisioned concurrency configuration to keep a pool of warm Lambda instances, reducing the cold start time and enabling faster processing of data records.

View Answer

Answer is: a, b – (a) Modify the Lambda function to batch multiple data records and make a single API call for the batch, and (b) Implement an Amazon SQS queue to buffer the incoming data records. Configure a separate Lambda function to process the queue in batches and make API calls for each batch.

Explanation: To reduce API costs while maintaining the analysis process, the company should implement two changes. First, modifying the Lambda function to batch multiple data records and make a single API call for the batch (Option a) reduces the number of API requests, resulting in cost savings. Second, implementing an Amazon SQS queue to buffer the incoming data records and configuring a separate Lambda function to process the queue in batches and make API calls for each batch (Option b) further optimizes the architecture by decoupling the data processing from the API calls and enabling more efficient utilization of resources. Other Options:

(c) is incorrect because AWS Step Functions are more suitable for orchestrating complex workflows, but in this case, the primary concern is reducing API costs.

(d) is incorrect because using Amazon Kinesis Data Firehose with API calls before delivering the processed data does not address the issue of reducing the number of API requests and associated costs.

(e) is incorrect because AWS Lambda provisioned concurrency is not directly related to reducing API costs, but rather optimizing the function’s performance and minimizing cold start delays.
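
As a sketch of the batching idea behind options a and b (boto3, with placeholder names and ARNs), here is an SQS event source mapping that hands the function up to 100 records per invocation, so one external API call can cover a whole batch:

```python
import boto3

lambda_client = boto3.client("lambda")

# Deliver up to 100 queued records per invocation, or whatever has
# arrived within 30 seconds, whichever comes first.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:iot-records",
    FunctionName="enrich-records-batch",
    BatchSize=100,
    MaximumBatchingWindowInSeconds=30,
)
```

Inside the function, event["Records"] then arrives as a list, and the external API is called once per list rather than once per record.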

Question 13: A company is migrating its application to AWS and wants to ensure high availability and fault tolerance for the database tier. The application uses Amazon RDS for database hosting. Which combination of actions should the company take to meet these requirements?
(Select TWO.)

a) Enable Multi-AZ deployment for the RDS database instance.

b) Configure scheduled automated backups for the RDS database instance.

c) Implement Amazon CloudWatch Alarms to monitor the RDS database instance.

d) Enable read replicas for the RDS database instance to offload read traffic.

e) Use Amazon RDS Performance Insights to optimize database performance.

View Answer

Answer is: a, d – (a) Enable Multi-AZ deployment for the RDS database instance, and (d) Enable read replicas for the RDS database instance to offload read traffic.

Explanation: To achieve high availability and fault tolerance for the database tier, the company should take two actions. First, enabling Multi-AZ deployment for the RDS database instance (Option a) ensures that a standby replica is automatically provisioned in a different Availability Zone for automatic failover in case of a primary instance failure. This increases availability and fault tolerance. Second, enabling read replicas for the RDS database instance (Option d) offloads read traffic from the primary instance and improves scalability and fault tolerance by providing additional replicas for read operations. Other Options:

(b) is incorrect because scheduled automated backups do not directly address high availability and fault tolerance.

(c) is incorrect because although CloudWatch Alarms help in monitoring, they do not provide the required features for high availability and fault tolerance.

(e) is incorrect because while RDS Performance Insights helps optimize performance, it does not specifically contribute to high availability and fault tolerance.
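
A minimal boto3 sketch of answers a and d, with placeholder instance identifiers: enable Multi-AZ on the existing instance, then attach a read replica for read traffic:

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ: provision a synchronous standby in another AZ with
# automatic failover.
rds.modify_db_instance(
    DBInstanceIdentifier="app-db",
    MultiAZ=True,
    ApplyImmediately=True,
)

# Read replica: an asynchronous copy that serves read-only traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db",
)
```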

Question 14: A company is running a social media application on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are part of an Auto Scaling group across multiple Availability Zones. During peak usage hours, the application experiences high latency and increased error rates due to the heavy load. The company wants to improve the performance and reliability of the application. Which combination of architectural changes should a solutions architect recommend?
(Select TWO.)

a) Implement Amazon CloudFront in front of the ALB to cache and serve static content.

b) Use Amazon ElastiCache to offload database reads and reduce database load.

c) Increase the size of the EC2 instances in the Auto Scaling group to handle the increased load.

d) Implement Amazon DynamoDB as a fully managed database service for the application.

e) Configure AWS Elastic Beanstalk to automatically scale the application based on demand.

View Answer

Answer is: a, b – (a) Implement Amazon CloudFront in front of the ALB to cache and serve static content, and (b) Use Amazon ElastiCache to offload database reads and reduce database load.

Explanation: To improve the performance and reliability of the social media application, the company should implement two architectural changes. First, implementing Amazon CloudFront in front of the ALB (Option a) allows static content to be cached and served from edge locations, reducing the load on the EC2 instances and improving performance. Second, using Amazon ElastiCache (Option b) helps offload database reads by caching frequently accessed data, reducing the load on the database and improving response times. Other Options:

(c) is incorrect because simply increasing the size of the EC2 instances may not be sufficient to handle the increased load and may not address the underlying performance issues.

(d) is incorrect because implementing Amazon DynamoDB as a fully managed database service requires significant changes to the application’s data model and access patterns.

(e) is incorrect because while AWS Elastic Beanstalk can automatically scale the application, it may not directly address the latency and error rate issues caused by the heavy load.
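
To illustrate the ElastiCache part of the answer, here is a minimal cache-aside sketch, assuming an ElastiCache for Redis endpoint (placeholder hostname) and the third-party redis-py client:

```python
import json
import redis  # assumes the redis-py package

# Placeholder ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.0001.use1.cache.amazonaws.com", port=6379)

def get_user_profile(user_id, db_lookup):
    """Cache-aside read: serve from Redis, fall back to the database."""
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                # cache hit
    profile = db_lookup(user_id)                 # cache miss: hit the database
    cache.set(key, json.dumps(profile), ex=300)  # cache for 5 minutes
    return profile
```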

Question 15: A company has a website hosted on Amazon S3 and uses Amazon CloudFront as the content delivery network (CDN) to improve performance. The company wants to ensure that only authorized users can access certain files on the website, while still allowing public access to other files. Which combination of actions should a solutions architect take to meet this requirement?
(Select TWO.)

a) Enable Amazon S3 bucket policies to restrict access to specific files.

b) Use AWS Lambda@Edge to implement custom authentication logic for the files.

c) Set up CloudFront signed URLs to grant temporary access to authorized users.

d) Enable Amazon CloudFront field-level encryption to protect the content of the files.

e) Implement AWS Identity and Access Management (IAM) roles to control access to the files.

View Answer

Answer is: a, c – (a) Enable Amazon S3 bucket policies to restrict access to specific files, (c) Set up CloudFront signed URLs to grant temporary access to authorized users.

Explanation: To ensure that only authorized users can access certain files on the website, while still allowing public access to other files, the solutions architect should take two actions. First, enabling Amazon S3 bucket policies (Option a) allows the restriction of access to specific files by defining the appropriate policies. Second, setting up CloudFront signed URLs (Option c) grants temporary access to authorized users, ensuring that only those with the correct URL can access the protected files. Other Options:

(b) is incorrect because while AWS Lambda@Edge can be used to implement custom logic at the CDN edge locations, it is not the ideal solution for enforcing authentication on specific files.

(d) is incorrect because Amazon CloudFront field-level encryption is used to protect sensitive data within the files, but it does not address the requirement of restricting access to specific files.

(e) is incorrect because while AWS Identity and Access Management (IAM) roles can control access to AWS resources, they are not directly applicable to controlling access to specific files within the website hosted on Amazon S3.
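
A minimal sketch of the signed-URL part of the answer, using botocore's CloudFrontSigner; the key pair ID, private key file, and distribution domain are placeholders, and the third-party cryptography package is assumed:

```python
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign with the private key that matches the CloudFront public key.
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # placeholder key ID

# URL valid for one hour; domain and object path are placeholders.
url = signer.generate_presigned_url(
    "https://d1234example.cloudfront.net/private/report.pdf",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(url)
```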

Question 16: A company is planning to deploy its microservices-based application on AWS. The application consists of multiple services that communicate with each other over HTTP. The company wants to ensure secure communication between services while minimizing management overhead. Which combination of steps will meet these requirements with the LEAST change to the architecture?
(Select THREE.)

a) Deploy the services in a single Amazon Elastic Kubernetes Service (Amazon EKS) cluster and configure mutual Transport Layer Security (mTLS) authentication between services.

b) Use Amazon API Gateway to front the services and enable Transport Layer Security (TLS) termination at the API Gateway level.

c) Implement AWS PrivateLink to establish private connectivity between services without traversing the public internet.

d) Enable AWS App Mesh and configure service mesh level encryption to secure communication between services.

e) Deploy services in separate Amazon EC2 instances within a single Amazon Virtual Private Cloud (Amazon VPC) and configure network security groups to control access.

f) Use AWS Secrets Manager to securely store and retrieve the necessary authentication credentials for inter-service communication.

View Answer

Answer is: b, c, f

Explanation: To ensure secure communication between services while minimizing management overhead, the company should take three steps. First, using Amazon API Gateway (Option b) allows TLS termination at the API Gateway level, providing encryption and securing the communication between services. Second, implementing AWS PrivateLink (Option c) establishes private connectivity between services without traversing the public internet, further enhancing the security of communication. Finally, using AWS Secrets Manager (Option f) to securely store and retrieve authentication credentials ensures that sensitive information is protected during inter-service communication. Other Options:

(a) is incorrect because deploying services in a single Amazon EKS cluster with mutual mTLS authentication may introduce higher management overhead and complexity compared to other options.

(d) is incorrect because while AWS App Mesh provides service mesh capabilities, it may introduce additional complexity and management overhead for securing communication between services.

(e) is incorrect because deploying services in separate EC2 instances within a single VPC with network security groups does not directly address secure communication between services at the transport layer.
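
As a sketch of the Secrets Manager part of the answer (boto3), with a placeholder secret name and JSON shape:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch inter-service credentials at startup; the secret name and
# its JSON layout are placeholders.
response = secrets.get_secret_value(SecretId="prod/orders-service/api-credentials")
credentials = json.loads(response["SecretString"])
api_key = credentials["api_key"]
```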

Question 17: A company is planning to migrate its existing application to the AWS Cloud. The application consists of a front-end web tier, an application tier, and a backend database. The company wants to ensure high availability and scalability while minimizing operational overhead. Which combination of actions should the solutions architect take to meet these requirements?
(Select TWO.)

a) Deploy the front-end web tier on Amazon EC2 instances behind an Application Load Balancer (ALB).

b) Implement Amazon ElastiCache to cache frequently accessed data in the application tier.

c) Migrate the backend database to Amazon RDS Multi-AZ for automatic replication and failover.

d) Use AWS Fargate to run the application tier containers for automated scaling and management.

e) Implement Amazon DynamoDB to replace the backend database for improved scalability and performance.

View Answer

Answer is: a, c

Explanation: Option (a): Deploying the front-end web tier on Amazon EC2 instances behind an Application Load Balancer (ALB) allows for automatic scaling, load balancing, and improved availability. This setup enhances scalability by automatically adjusting the number of EC2 instances based on traffic patterns, and it reduces operational overhead by managing the distribution of traffic and scaling of instances without manual intervention. Option (c): Migrating the backend database to Amazon RDS Multi-AZ provides automatic replication and failover capabilities, ensuring high availability without manual intervention. With Multi-AZ deployment, a standby replica of the database is created in a different Availability Zone, minimizing downtime and providing automated failover in case of a primary database failure. This improves the application’s resilience and availability while minimizing operational overhead. Other Options:

(b) Implementing Amazon ElastiCache to cache frequently accessed data in the application tier can improve performance by reducing the load on the backend database. However, it does not directly address high availability and scalability concerns or minimize operational overhead.

(d) Using AWS Fargate to run the application tier containers provides automated scaling and management, but it may introduce additional operational overhead in terms of container management, monitoring, and maintenance.

(e) Implementing Amazon DynamoDB to replace the backend database can improve scalability and performance, but it may require significant application-level changes and may not align with the requirement of minimizing operational overhead.

Question 18: A company is developing a new mobile application that requires real-time data synchronization across multiple devices. The application architecture uses AWS services to achieve this functionality. The company wants to ensure that data synchronization is reliable and scalable. Which combination of services should the company use?
(Select TWO.)

a) Amazon DynamoDB with DynamoDB Streams
b) AWS Step Functions with AWS Lambda
c) Amazon SQS with Amazon SNS
d) AWS AppSync with Amazon Cognito
e) Amazon Kinesis Data Streams with AWS Lambda

View Answer

Answer is: a, d

Explanation: Option (a), using Amazon DynamoDB with DynamoDB Streams, provides a scalable and reliable way to capture and process changes to the data in real-time. DynamoDB Streams can be used to trigger events or updates to other components in the system, ensuring synchronization across devices. Option (d), using AWS AppSync with Amazon Cognito, offers a managed service for real-time data synchronization and provides secure authentication and authorization for the mobile application users. Other Options:

(b) is incorrect because AWS Step Functions with AWS Lambda is more suitable for orchestrating complex workflows and business processes, rather than real-time data synchronization.

(c) is incorrect because Amazon SQS with Amazon SNS is primarily used for decoupling and asynchronous messaging, which may not provide the desired real-time synchronization.

(e) is incorrect because Amazon Kinesis Data Streams with AWS Lambda is designed for processing and analyzing streaming data at scale, but it may introduce unnecessary complexity for simple data synchronization use cases.
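
A minimal boto3 sketch of the DynamoDB Streams part of answer a, with a placeholder table name, plus the shape of a stream-triggered Lambda handler that would fan changes out to devices:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Emit a stream of item-level changes from an existing table.
dynamodb.update_table(
    TableName="device-state",  # placeholder
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

def handler(event, context):
    """Stream-triggered Lambda: push each change to the other devices."""
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]
            # ... publish new_image to the synchronization channel ...
```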

Question 19: A healthcare organization is migrating its legacy on-premises infrastructure to the AWS Cloud. The organization wants to ensure high availability and fault tolerance for its applications. Which combination of services should the organization use?
(Select TWO.)

a) Amazon S3 with Amazon Glacier
b) AWS Lambda with Amazon API Gateway
c) Amazon RDS Multi-AZ with Amazon Route 53
d) Amazon EC2 Auto Scaling with Elastic Load Balancer
e) Amazon Redshift with Amazon CloudFront

View Answer

Answer is: c, d

Explanation: Option (c), using Amazon RDS Multi-AZ with Amazon Route 53, ensures high availability and fault tolerance for the organization’s database. Multi-AZ replication provides automatic failover to a standby instance in a different Availability Zone, while Route 53 can be used for DNS failover to redirect traffic to the active instance. Option (d), using Amazon EC2 Auto Scaling with Elastic Load Balancer, enables automatic scaling of EC2 instances based on demand and distributes incoming traffic across multiple instances, improving fault tolerance and availability. Other Options:

(a) is incorrect because Amazon S3 with Amazon Glacier is a storage solution for object data and may not directly address application availability and fault tolerance.

(b) is incorrect because AWS Lambda with Amazon API Gateway is primarily used for serverless compute and API management, which may not directly ensure high availability for the applications.

(e) is incorrect because Amazon Redshift with Amazon CloudFront is a combination for data warehousing and content delivery, which may not directly address application-level fault tolerance and availability.
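
To illustrate the Route 53 part of answer c, a minimal boto3 sketch of PRIMARY/SECONDARY failover records; the hosted zone ID, domain, endpoint addresses, and health check ID are placeholders:

```python
import boto3

route53 = boto3.client("route53")

# Route 53 serves the SECONDARY record only while the PRIMARY's
# health check is failing.
for role, address, health_check_id in [
    ("PRIMARY", "203.0.113.10", "11111111-2222-3333-4444-555555555555"),
    ("SECONDARY", "198.51.100.20", None),
]:
    record = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": f"app-{role.lower()}",
        "Failover": role,
        "TTL": 60,
        "ResourceRecords": [{"Value": address}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    route53.change_resource_record_sets(
        HostedZoneId="Z0ABC123EXAMPLE",
        ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": record}]},
    )
```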

Question 20: A company wants to implement a scalable and fault-tolerant architecture for its web application on AWS. The architecture should handle sudden spikes in traffic while maintaining high availability. Which solution should the company use to meet these requirements?

a) Deploy the web application on a single Amazon EC2 instance and configure auto-scaling rules to automatically add more instances when CPU utilization exceeds a certain threshold.

b) Use an Application Load Balancer (ALB) to distribute incoming traffic across multiple EC2 instances in an Auto Scaling group. Configure the Auto Scaling group to dynamically scale based on CPU utilization and network traffic.

c) Deploy the web application on an Amazon RDS database instance and enable Multi-AZ (Availability Zone) deployment. Configure an ALB in front of the RDS instance to balance the traffic load.

d) Use AWS Elastic Beanstalk to deploy and manage the web application. Enable auto-scaling and configure Elastic Beanstalk to automatically scale the environment based on CPU utilization and request latency.

View Answer

Answer is: b – Use an Application Load Balancer (ALB) to distribute incoming traffic across multiple EC2 instances in an Auto Scaling group. Configure the Auto Scaling group to dynamically scale based on CPU utilization and network traffic.

Explanation: To implement a scalable and fault-tolerant architecture for the web application, the company should use an Application Load Balancer (ALB) in conjunction with an Auto Scaling group. The ALB distributes incoming traffic across multiple EC2 instances, providing scalability and fault tolerance. The Auto Scaling group dynamically scales the number of instances based on CPU utilization and network traffic, ensuring the architecture can handle sudden spikes in traffic while maintaining high availability.
