Free – AWS Certified Developer – Associate Exam Practice Questions

AWS Certified Developer – Associate Exam Practice

Are you prepared for your upcoming AWS Certified Developer – Associate exam?

Assess your understanding with these free AWS Certified Developer – Associate exam practice questions. Just click the View Answer button to reveal the correct answer along with a comprehensive explanation.

Let’s Start the Test

Question 1 A company is developing a web application that requires a highly scalable and fault-tolerant database solution. They want to choose an AWS service that can handle the growing volume of data, provide automatic scaling, and ensure high availability. Which service should they use?

a) Deploy an Amazon RDS instance with Multi-AZ deployment and leverage read replicas for horizontal scaling.

b) Utilize Amazon DynamoDB, a fully managed NoSQL database service with built-in auto-scaling capabilities.

c) Implement Amazon Redshift, a fully managed data warehousing service, for the web application’s database needs.

d) Leverage Amazon ElastiCache with Redis as a scalable and high-performance in-memory caching service.

View Answer

Answer is: b – Utilize Amazon DynamoDB, a fully managed NoSQL database service with built-in auto-scaling capabilities.

Explanation: For a highly scalable and fault-tolerant database solution, Amazon DynamoDB is a suitable choice. It can handle high write and read throughput while automatically adjusting provisioned capacity based on demand, ensuring scalability. With built-in fault tolerance and data replication across multiple Availability Zones, DynamoDB provides high availability and durability. Other options: (a) Amazon RDS with Multi-AZ deployment and read replicas offers high availability and scalability, but DynamoDB’s auto-scaling capabilities make it a better fit for the given requirements; (c) Amazon Redshift is a data warehousing service optimized for analytics workloads and does not provide the same level of scalability and auto scaling as DynamoDB; and (d) Amazon ElastiCache with Redis is an in-memory caching service that can enhance performance but does not meet the requirement for a highly scalable and fault-tolerant database solution.

Question 2 A company wants to build a serverless web application that requires real-time data updates and synchronization across multiple connected devices. They need a service that can handle real-time messaging and event-driven communication. Which AWS service should they use?

a) Utilize Amazon SNS to publish and subscribe to real-time notifications and events.

b) Implement AWS AppSync to enable real-time data synchronization and offline capabilities for the web application.

c) Leverage Amazon Kinesis Data Streams to capture, process, and analyze real-time streaming data.

d) Deploy AWS Step Functions to build serverless workflows and manage the flow of events and data.

View Answer

Answer is: b – Implement AWS AppSync to enable real-time data synchronization and offline capabilities for the web application.

Explanation: AWS AppSync is a managed GraphQL service whose real-time subscriptions push data updates to connected clients, keeping data synchronized across multiple devices, and it also provides offline capabilities. Amazon SNS (a) offers pub/sub messaging and push notifications but does not synchronize application data across devices, Amazon Kinesis Data Streams (c) is designed for capturing and processing streaming data rather than client synchronization, and AWS Step Functions (d) orchestrates serverless workflows rather than handling real-time client communication.

Question 3 A company wants to implement centralized logging for their AWS infrastructure. They need a solution that can collect logs from multiple AWS services and store them in a centralized location for analysis and monitoring. Which AWS service should they use?

a) Deploy Amazon CloudWatch Logs to collect, store, and analyze logs from various AWS services.

b) Utilize Amazon Elasticsearch Service to aggregate and analyze logs from different AWS services.

c) Leverage AWS Glue to catalog and analyze logs from multiple AWS services.

d) Implement Amazon Kinesis Data Firehose to collect and deliver logs from AWS services to storage and analytics services such as Amazon S3 and Amazon Redshift.

View Answer

Answer is: a – Deploy Amazon CloudWatch Logs to collect, store, and analyze logs from various AWS services.

Explanation: Amazon CloudWatch Logs is a service that collects, stores, and analyzes log data from various AWS services and custom sources. It provides a centralized location for log management and offers features like real-time monitoring, log filtering, and storage retention. It is the recommended service for implementing centralized logging in AWS infrastructure.

Question 4 A company is developing a mobile application that requires user authentication and authorization. They need a fully managed service that can handle user management, authentication, and authorization. Which AWS service should they use?

a) Deploy AWS Single Sign-On (SSO) to enable seamless authentication and authorization across multiple applications.

b) Implement Amazon IAM (Identity and Access Management) to manage user access and permissions for the mobile application.

c) Leverage AWS Directory Service to authenticate and authorize users for the mobile application.

d) Utilize AWS Cognito to handle user authentication, authorization, and user management.

View Answer

Answer is: d – Utilize AWS Cognito to handle user authentication, authorization, and user management.

Explanation: AWS Cognito is a fully managed service that handles user authentication, authorization, and user management for web and mobile applications. It provides secure and scalable user sign-up, sign-in, and access control capabilities. It integrates with popular identity providers and supports social login. It is the recommended service for implementing user authentication and authorization in mobile applications.

Question 5 A company wants to implement a serverless data lake architecture on AWS. They need a service that can efficiently store and analyze large volumes of structured and unstructured data. Which AWS service should they use?

a) Utilize Amazon S3 (Simple Storage Service) to store and manage the data lake.

b) Implement Amazon Athena to analyze data directly from the data lake.

c) Leverage AWS Glue to discover, catalog, and transform data for analysis in the data lake.

d) Deploy AWS Redshift to create a data warehouse for analytical queries.

View Answer

Answer is: a – Utilize Amazon S3 (Simple Storage Service) to store and manage the data lake.

Explanation: Amazon S3 is a highly scalable and durable object storage service that is well-suited for building a serverless data lake. It can efficiently store and manage large volumes of structured and unstructured data. Other AWS services like Amazon Athena and AWS Glue can be used in conjunction with Amazon S3 for data analysis and processing within the data lake architecture.

Question 6 A company is planning to deploy a highly available and scalable web application on AWS. They need a service that can automatically distribute incoming traffic across multiple instances and provide load balancing. Which AWS service should they use?

a) Leverage Amazon Route 53, a scalable DNS (Domain Name System) web service, to distribute traffic across multiple EC2 instances based on DNS routing policies.

b) Utilize Amazon CloudFront, a global content delivery network (CDN), to distribute web application traffic to edge locations for low-latency access.

c) Deploy an Application Load Balancer (ALB) to distribute traffic to multiple EC2 instances based on application-level routing and load balancing algorithms.

d) Implement AWS Global Accelerator to improve the availability and performance of the web application by routing traffic through the AWS global network.

View Answer

Answer is: c – Deploy an Application Load Balancer (ALB) to distribute traffic to multiple EC2 instances based on application-level routing and load balancing algorithms.

Explanation: An Application Load Balancer (ALB) is designed to distribute incoming traffic across multiple EC2 instances in a highly available and scalable manner. It operates at the application layer and can perform advanced routing and load balancing based on application-level characteristics. ALB supports HTTP and HTTPS traffic and provides features like content-based routing, SSL termination, and integration with other AWS services.

Question 7 A company wants to implement secure and encrypted data storage for their sensitive data on AWS. They need a service that provides server-side encryption and manages encryption keys. Which AWS service should they use?

a) Utilize Amazon S3 with server-side encryption enabled.

b) Implement AWS KMS (Key Management Service) to manage encryption keys and perform server-side encryption for data stored in Amazon S3.

c) Leverage AWS CloudHSM (Hardware Security Module) to store and manage encryption keys securely.

d) Deploy AWS Secrets Manager to securely store and manage encryption keys for sensitive data.

View Answer

Answer is: b – Implement AWS KMS (Key Management Service) to manage encryption keys and perform server-side encryption for data stored in Amazon S3.

Explanation: AWS KMS (Key Management Service) is a fully managed service that provides centralized key management and encryption capabilities. It integrates with various AWS services, including Amazon S3, to perform server-side encryption of data at rest. AWS KMS allows you to create and manage encryption keys, control access to the keys, and audit key usage. It offers a secure and scalable solution for storing and managing encryption keys for sensitive data.
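
As a brief illustration (the bucket, object key, and KMS key alias below are hypothetical), an upload can request SSE-KMS encryption by passing the server-side encryption parameters:

import boto3
s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-sensitive-data-bucket",        # hypothetical bucket name
    Key="reports/2024/summary.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="aws:kms",           # encrypt the object at rest with an AWS KMS key (SSE-KMS)
    SSEKMSKeyId="alias/my-app-data-key",      # customer managed key; omit to use the AWS managed key for S3
)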

Question 8 A company wants to implement real-time analytics on streaming data from various sources. They need a service that can process and analyze streaming data at scale with low latency. Which AWS service should they use?

a) Deploy Amazon Kinesis Data Streams to ingest and process streaming data in real-time.

b) Utilize AWS Glue Streaming ETL to perform extract, transform, and load operations on streaming data.

c) Leverage AWS Lambda to process streaming events and trigger real-time analytics.

d) Implement Amazon SQS (Simple Queue Service) to handle streaming data and decouple the processing of events.

View Answer

Answer is: a – Deploy Amazon Kinesis Data Streams to ingest and process streaming data in real-time.

Explanation: Amazon Kinesis Data Streams is a fully managed service that allows you to ingest, process, and analyze streaming data in real-time. It provides high throughput and low latency data streaming, making it suitable for real-time analytics and processing. With Kinesis Data Streams, you can scale the processing capacity based on the incoming data rate and perform real-time analytics using other AWS services like Amazon Kinesis Data Analytics or custom applications.

Question 9 A company wants to automate the deployment and management of their application infrastructure on AWS. They need a service that can provision and manage the necessary AWS resources based on predefined templates. Which AWS service should they use?

a) Deploy AWS Systems Manager to automate the management of EC2 instances, including patching, configuration, and inventory.

b) Implement AWS Elastic Beanstalk to deploy and manage applications in multiple languages.

c) Leverage AWS OpsWorks to automate the configuration and management of applications and infrastructure using Chef or Puppet.

d) Utilize AWS CloudFormation to create and manage a collection of AWS resources as a single unit called a stack.

View Answer

Answer is: d – Utilize AWS CloudFormation to create and manage a collection of AWS resources as a single unit called a stack.

Explanation: AWS CloudFormation is a service that allows you to provision and manage AWS resources using infrastructure as code. With CloudFormation, you can define templates in JSON or YAML format to describe the desired state of your infrastructure. These templates can be version-controlled, reused, and easily deployed to create a collection of resources as a single unit called a stack. CloudFormation automates the provisioning and configuration of resources, ensuring consistent and reliable deployments.
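
As a minimal sketch of such a template (the logical names are illustrative), the following YAML defines a stack containing a single S3 bucket and exports its name:

AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal illustrative stack containing one S3 bucket
Resources:
  ArtifactBucket:                 # logical ID used to reference the resource within the template
    Type: AWS::S3::Bucket
Outputs:
  ArtifactBucketName:
    Value: !Ref ArtifactBucket    # resolves to the generated bucket name

Deploying the template, for example with aws cloudformation deploy, creates the bucket as part of a stack; deleting the stack removes it again.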

Question 10 A company is looking for a scalable and highly available storage solution for their data. They need a service that can automatically distribute data across multiple servers to ensure durability and availability. Which AWS service should they use?

a) Deploy Amazon S3 (Simple Storage Service) to store and retrieve any amount of data at any time.

b) Utilize Amazon EBS (Elastic Block Store) to provide block-level storage volumes for use with EC2 instances.

c) Leverage Amazon Glacier to store long-term archival data. This service offers low-cost storage with retrieval times ranging from minutes to hours.

d) Implement Amazon EFS (Elastic File System) to provide scalable and elastic file storage for EC2 instances.

View Answer

Answer is: a – Deploy Amazon S3 (Simple Storage Service) to store and retrieve any amount of data at any time.

Explanation: Amazon S3 is a highly durable and scalable storage service that automatically distributes data across multiple servers. It is designed to provide 99.999999999% (11 nines) durability, ensuring that data remains available even in the event of hardware failures. S3 is suitable for a wide range of use cases, including storing and retrieving large amounts of data, hosting static websites, and serving as a backup and archival storage solution. With its automatic data distribution and high availability, S3 is an ideal choice for the company’s scalable and highly available storage needs.

Question 11 A company is developing a web application that needs to process large amounts of data in near real-time. The application should be able to scale dynamically based on incoming traffic and ensure high availability. Which AWS service would best fulfill these requirements?

a) Deploy the application on Amazon EC2 instances and use Auto Scaling to dynamically adjust the capacity based on traffic.

b) Utilize Amazon S3 to store and process the data, leveraging AWS Lambda functions for near real-time processing.

c) Implement Amazon Elastic Container Service (ECS) to containerize the application and enable automatic scaling with AWS Fargate.

d) Utilize Amazon Redshift for large-scale data processing and analytics, leveraging its scalability and high availability features.

View Answer

Answer is: c – Implement Amazon Elastic Container Service (ECS) to containerize the application and enable automatic scaling with AWS Fargate.

Explanation: Amazon Elastic Container Service (ECS) provides a scalable and highly available platform for containerized applications. By leveraging ECS with AWS Fargate, developers can easily deploy and manage containers without the need to provision and manage infrastructure. This enables dynamic scaling based on traffic, ensuring efficient processing of large amounts of data in near real-time.

Question 12 A company is deploying a microservices-based application on AWS. They want to ensure efficient communication and coordination between the services, as well as automated scaling based on demand. Which AWS service can help achieve these goals?

a) Utilize AWS Step Functions to orchestrate the workflow and communication between the microservices, ensuring efficient coordination.

b) Implement Amazon API Gateway to expose and manage the APIs of the microservices, enabling efficient communication between them.

c) Utilize Amazon Kinesis Data Streams to capture and process streaming data between the microservices, enabling real-time communication.

d) Implement AWS App Mesh to control and monitor the communication between the microservices, ensuring efficient and scalable networking.

View Answer

Answer is: a – Utilize AWS Step Functions to orchestrate the workflow and communication between the microservices, ensuring efficient coordination.

Explanation: AWS Step Functions is a serverless workflow service that enables developers to coordinate and manage the flow of microservices. With Step Functions, developers can define complex workflows using visual representations, making it easy to coordinate the communication and execution of microservices. Additionally, Step Functions provides built-in capabilities for error handling, retries, and parallel processing. This, combined with its integration with other AWS services, makes it an ideal choice for efficient communication and coordination between microservices.

Question 13 As an AWS Certified Developer, you have been tasked with improving the security of a web application deployed on AWS. Which security best practice should you consider implementing?

a) Implement HTTPS for secure communication between the application and its users.

b) Utilize multi-factor authentication (MFA) for enhanced user authentication.

c) Implement input validation and output encoding to protect against common web vulnerabilities.

d) Regularly update and patch the operating system and application software to address security vulnerabilities.

View Answer

Answer is: c – Implement input validation and output encoding to protect against common web vulnerabilities.

Explanation: Input validation and output encoding are essential security practices in web development. Input validation ensures that user inputs are properly validated to prevent common attacks such as SQL injection and cross-site scripting (XSS). Output encoding ensures that any data displayed to users is properly encoded to prevent XSS attacks. By implementing these practices, developers can significantly enhance the security of their web applications.
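
A minimal Python sketch of the idea (the username rules are only an example) covers both halves, rejecting malformed input on the way in and HTML-encoding data on the way out:

import html
import re
USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]{3,32}$")    # whitelist of allowed characters and length
def validate_username(raw: str) -> str:
    # Input validation: reject anything that does not match the expected pattern.
    if not USERNAME_RE.match(raw):
        raise ValueError("invalid username")
    return raw
def render_greeting(username: str) -> str:
    # Output encoding: escape user-supplied data before embedding it in HTML to prevent XSS.
    return f"<p>Welcome, {html.escape(username)}!</p>"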

Question 14 A company wants to optimize the performance of a database-intensive application deployed on AWS. Which technique should be considered to improve database performance?

a) Implement read replicas to offload read traffic from the primary database.

b) Utilize database caching to reduce the load on the database and improve query response times.

c) Partition the database to distribute data across multiple storage resources and improve parallel processing.

d) Enable database query optimization by creating appropriate indexes and analyzing query execution plans.

View Answer

Answer is: a – Implement read replicas to offload read traffic from the primary database.

Explanation: Read replicas are an effective way to optimize database performance in read-heavy applications. By creating read replicas, read traffic can be distributed across multiple database instances, reducing the load on the primary database and improving overall performance. This technique improves scalability and ensures high availability by enabling parallel processing of read queries.

Question 15 When developing serverless applications on AWS, what is a key benefit of using AWS Lambda?

a) Automatic scaling and resource allocation based on incoming request volume.

b) Direct control over the underlying server infrastructure for fine-tuning performance.

c) Persistent storage options for maintaining application state between invocations.

d) Built-in load balancing capabilities for distributing requests across multiple Lambda functions.

View Answer

Answer is: a – Automatic scaling and resource allocation based on incoming request volume.

Explanation: One of the key benefits of AWS Lambda is its automatic scaling capability. Lambda automatically scales the number of instances based on the incoming request volume, ensuring that there are sufficient resources available to handle the workload. This eliminates the need for manual capacity planning and provides cost-effective scalability for serverless applications.

Question 16 A developer is designing an application that requires real-time, bidirectional communication between the client and server. Which AWS service can be used to facilitate this communication?

a) Amazon S3 for storing and retrieving static web content.

b) AWS Step Functions for orchestrating serverless workflows.

c) Amazon SNS for pub/sub messaging and push notifications.

d) Amazon API Gateway for building and managing APIs.

View Answer

Answer is: d – Amazon API Gateway for building and managing APIs.

Explanation: Amazon API Gateway is a fully managed service that enables developers to create, publish, and manage APIs for their applications. It supports bidirectional communication through the use of WebSocket APIs, allowing real-time, two-way communication between the client and server. This makes it an ideal choice for applications that require real-time, interactive communication.

Question 17 A developer is implementing a CI/CD pipeline for an application hosted on AWS. Which AWS service can be used to automate the deployment process?

a) AWS CodeDeploy for deploying applications to EC2 instances or on-premises servers.

b) AWS Elastic Beanstalk for deploying and managing applications in a platform-as-a-service (PaaS) environment.

c) AWS CloudFormation for provisioning and managing AWS resources using infrastructure-as-code.

d) AWS CodePipeline for automating the release process and managing the pipeline stages.

View Answer

Answer is: d – AWS CodePipeline for automating the release process and managing the pipeline stages.

Explanation: AWS CodePipeline is a fully managed service that allows developers to build, test, and deploy applications quickly and consistently. It provides a graphical interface for creating CI/CD pipelines, which automate the release process and manage the various stages, such as source code retrieval, build, test, and deployment. By using CodePipeline, developers can automate the deployment process and ensure the efficient delivery of their applications.

Question 18 Your team is building a serverless application using AWS Lambda and Amazon S3. You want to ensure that the Lambda function has the correct permissions to access the S3 bucket and write logs to CloudWatch. How should you configure the permissions?

a) Create an IAM user with the necessary permissions and assign the user to the Lambda function.

b) Create an IAM role with the necessary permissions and assign the role to the Lambda function.

c) Store the IAM credentials as environment variables in the Lambda function.

d) Assign the Lambda function to the same security group as the S3 bucket.

View Answer

Answer is: b – Create an IAM role with the necessary permissions and assign the role to the Lambda function.

Explanation: The recommended approach is to create an IAM role with the necessary permissions and assign that role to the Lambda function. This allows you to define fine-grained access control for the Lambda function and manage permissions separately from user accounts. Environment variables are typically used for storing configuration values, not IAM credentials.
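
As a rough sketch, the execution role carries a trust policy that lets the Lambda service assume it, along the lines of the following, plus permissions policies scoped to the S3 bucket and CloudWatch Logs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

The permissions attached to the role would typically allow actions such as s3:GetObject and s3:PutObject on the bucket’s ARN, and logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents for writing to CloudWatch Logs.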

Question 19 You are responsible for deploying a web application on AWS Elastic Beanstalk. The application uses an Amazon RDS database for storing data. How can you ensure that the Elastic Beanstalk environment allows long-lived TCP/IP sockets?

a) Configure the security group associated with the Elastic Beanstalk environment to allow inbound and outbound traffic on the necessary ports.

b) Modify the Elastic Beanstalk environment’s instance type to support long-lived TCP/IP sockets.

c) Enable the Enhanced Networking feature for the Elastic Beanstalk environment.

d) Configure the Elastic Load Balancer associated with the Elastic Beanstalk environment to allow TCP/IP traffic.

View Answer

Answer is: a – Configure the security group associated with the Elastic Beanstalk environment to allow inbound and outbound traffic on the necessary ports.

Explanation: To allow long-lived TCP/IP sockets in an Elastic Beanstalk environment, you need to configure the associated security group to allow inbound and outbound traffic on the necessary ports. By modifying the security group rules, you can control the network traffic flow and enable communication between the Elastic Beanstalk environment and the Amazon RDS database.

Question 20 Your company is migrating a legacy application to AWS, and you need to implement a secure way to store and automatically rotate the database credentials for an Amazon RDS for MySQL DB instance. Which solution will meet these requirements?

a) Store the database credentials in environment variables on the EC2 instances. Rotate the credentials by relaunching the EC2 instances.

b) Store the database credentials in an Amazon Machine Image (AMI) and rotate the credentials by replacing the AMI.

c) Store the database credentials in AWS Secrets Manager and configure it to automatically rotate the credentials.

d) Store the database credentials in AWS Systems Manager Parameter Store and configure it to automatically rotate the credentials.

View Answer

Answer is: c – Store the database credentials in AWS Secrets Manager and configure it to automatically rotate the credentials.

Explanation: To securely store and automatically rotate database credentials for an Amazon RDS for MySQL DB instance, you should use AWS Secrets Manager. Secrets Manager provides a secure and scalable solution for storing and managing secrets, such as database credentials. It also has built-in capabilities for automatic rotation of credentials, ensuring that the credentials are regularly updated for enhanced security.
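
A short boto3 sketch of reading such a secret at runtime (the secret name and its JSON fields are hypothetical):

import json
import boto3
secrets = boto3.client("secretsmanager")
response = secrets.get_secret_value(SecretId="prod/myapp/mysql")   # hypothetical secret name
credentials = json.loads(response["SecretString"])                 # the secret value is stored as a JSON string
db_user = credentials["username"]
db_password = credentials["password"]

Because rotation updates the stored secret in place, the application picks up new credentials simply by reading the secret again rather than being redeployed.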

Question 21 You are developing a web application that requires user sign-up and sign-in functionality. Additionally, the application needs to log user sign-in events to a custom analytics solution. How should you implement these requirements?

a) Use Amazon Cognito to provide the sign-up and sign-in functionality and invoke an Amazon API Gateway method to log user sign-in events.

b) Use AWS Identity and Access Management (IAM) to provide the sign-up and sign-in functionality and configure an AWS Config rule to log user sign-in events.

c) Use Amazon Cognito to provide the sign-up and sign-in functionality and invoke an AWS Lambda function to log user sign-in events.

d) Use AWS Identity and Access Management (IAM) to provide the sign-up and sign-in functionality and configure Amazon CloudFront to log user sign-in events.

View Answer

Answer is: a – Use Amazon Cognito to provide the sign-up and sign-in functionality and invoke an Amazon API Gateway method to log user sign-in events.

Explanation: To implement sign-up and sign-in functionality in a web application and log user sign-in events to a custom analytics solution, you should use Amazon Cognito. Amazon Cognito provides a comprehensive solution for user authentication and authorization. You can integrate it with other AWS services, such as Amazon API Gateway, to invoke methods and log user sign-in events efficiently.

Question 22 In Amazon S3, what feature enables client web applications loaded in one domain to interact with resources from a different domain?

a) Cross-Origin Resource Sharing (CORS) Configuration

b) Public Object Permissions

c) Public ACL Permissions

d) None of the above

View Answer

Answer is: a – Cross-Origin Resource Sharing (CORS) Configuration

Explanation: Cross-Origin Resource Sharing (CORS) Configuration is used in Amazon S3 to allow client web applications hosted on one domain to access resources (such as objects) hosted on a different domain. By configuring CORS rules, you can control which origins are allowed to access your S3 resources, specify the HTTP methods and headers allowed for cross-origin requests, and set other access control parameters. This enables secure and controlled cross-origin communication between web applications and S3 resources.
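
As an illustrative example (the origin shown is hypothetical), a CORS configuration of the kind passed to the put-bucket-cors API might look like this:

{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://www.example.com"],
      "AllowedMethods": ["GET", "PUT"],
      "AllowedHeaders": ["*"],
      "ExposeHeaders": ["ETag"],
      "MaxAgeSeconds": 3000
    }
  ]
}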

Question 23 What is the purpose of the following bucket policy?

{
   "Version": "2012-10-17",
   "Id": "default",
   "Statement": [
      {
         "Sid": "Statement-1",
         "Effect": "Allow",
         "Principal": "*",
         "Action": [
            "s3:GetObject",
            "s3:PutObject"
         ],
         "Resource": "arn:aws:s3:::mybucket/*",
         "Condition": {
            "StringLike": {
               "aws:Referer": [
                  "http://www.example.com/*",
                  "http://www.demo.com/*"
               ]
            }
         }
      }
   ]
}

a) It allows read or write actions on the bucket ‘mybucket’.

b) It allows read access to the bucket ‘mybucket’, but only if accessed from example.com or demo.com.

c) It allows unlimited access to the bucket ‘mybucket’.

d) It allows read or write access to the bucket ‘mybucket’, but only if accessed from example.com or demo.com.

View Answer

Answer is: d – It allows read or write access to the bucket ‘mybucket’, but only if accessed from example.com or demo.com.

Explanation: The bucket policy allows the “s3:GetObject” and “s3:PutObject” actions on objects within the ‘mybucket’ bucket. However, access is restricted to requests that originate from the “http://www.example.com/” or “http://www.demo.com/” domains. This means that read or write access to the bucket is only allowed if the request comes from either of these two domains.

Question 24 What is the default visibility timeout for an SQS queue in AWS?

a) 30 seconds

b) 5 minutes

c) 10 minutes

d) 15 minutes

View Answer

Answer is: a – 30 seconds

Explanation: The default visibility timeout for an SQS (Simple Queue Service) queue is 30 seconds. After a consumer retrieves a message from the queue, the message remains invisible to other consumers for 30 seconds by default; the timeout can be configured anywhere from 0 seconds up to 12 hours. If the message is not deleted or processed within the visibility timeout, it becomes visible to other consumers again.

Question 25 In AWS CloudFormation, when a failure occurs during stack creation, does a rollback occur by default?

a) True

b) False

View Answer

Answer is: a – True

Explanation: By default, when a failure occurs during stack creation in AWS CloudFormation, a rollback occurs. This means that any resources that were created during the stack creation process are deleted in order to revert the stack to its previous state. Rollbacks help maintain the integrity and consistency of the stack and its resources.

Question 26 A cloud administrator is encountering an error while attempting to create a new bucket in Amazon S3. You suspect that the bucket limit has been reached. What is the maximum number of S3 buckets allowed per AWS account?

a) 100

b) 50

c) 1000

d) 150

View Answer

Answer is: a – 100

Explanation: By default, each AWS account is limited to 100 S3 buckets. If the administrator tries to create a new bucket and the account has already reached this limit, they will receive an error indicating that the bucket limit has been exceeded. To create additional buckets, the administrator may need to delete unused buckets or request a limit increase from AWS Support.

Question 27 In CloudFormation, which function is used to retrieve an object from a set of objects?

a) Fn::GetAtt

b) Fn::Combine

c) Fn::Join

d) Fn::Select

View Answer

Answer is: d – Fn::Select

Explanation: The Fn::Select function in CloudFormation is used to retrieve an object from a set of objects. It allows you to specify an index and a list of values, and it returns the value at the specified index from the list. This function is commonly used to retrieve specific elements from arrays or lists within a CloudFormation template.
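
For example, a common pattern (resource names here are illustrative) combines Fn::Select with Fn::GetAZs to place a subnet in the first Availability Zone of the current Region:

Resources:
  AppSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc                          # assumes a VPC resource or parameter named AppVpc
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: !Select [0, !GetAZs ""]   # index 0 = the first AZ in the returned list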

Question 28 In CloudFormation, which function is used to append a set of values into a single value?

a) Fn::GetAtt

b) Fn::Combine

c) Fn::Join

d) Fn::Select

View Answer

Answer is: c – Fn::Join

Explanation: The Fn::Join function in CloudFormation is used to concatenate a set of values into a single value. It allows you to specify a delimiter and a list of values, and it joins the values together using the specified delimiter. This function is commonly used to construct strings or resource identifiers by combining different parts or variables within a CloudFormation template.
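
For example (ArtifactBucket is a hypothetical S3 bucket resource defined elsewhere in the template), Fn::Join can build the bucket’s ARN in an Outputs section:

Outputs:
  ArtifactBucketArn:
    Value: !Join ["", ["arn:aws:s3:::", !Ref ArtifactBucket]]   # "" is the delimiter, so the parts are concatenated directly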

Question 29 When creating an index or table in Amazon DynamoDB and specifying the capacity requirements for read and write activity, what is the maximum size of an item that corresponds to a single write capacity unit?

a) 1 KB

b) 4 KB

c) 2 KB

d) 8 KB

View Answer

Answer is: a – 1 KB

Explanation: In Amazon DynamoDB, the maximum size of an item that corresponds to a single write capacity unit is 1 KB. This means that each write operation consumes a write capacity unit for every 1 KB of data written. If an item exceeds 1 KB, additional write capacity units will be consumed for each additional 1 KB of data. It’s important to consider the size of your items when provisioning the write capacity for your DynamoDB table or index.

Question 30 In DynamoDB, what can be used as a part of the Query API call to filter results based on the values of primary keys?

a) Expressions

b) Conditions

c) Query API

d) Scan API

View Answer

Answer is: a – Expressions

Explanation: In DynamoDB, you can use expressions as part of the Query API call to filter results based on the values of the primary key attributes. The key condition expression specifies an exact match on the partition key and, optionally, a condition on the sort key (for example an equality, range, or begins_with condition). You can use comparison operators, logical operators, and functions within expressions to create sophisticated filters for your queries. This provides flexibility in querying DynamoDB tables and retrieving only the relevant data based on your filtering requirements.
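
A small boto3 sketch (the table, attribute names, and values are hypothetical) of a Query whose key condition expression filters on the partition key and sort key:

import boto3
from boto3.dynamodb.conditions import Key
table = boto3.resource("dynamodb").Table("Orders")   # hypothetical table
# Key condition: exact match on the partition key, range condition on the sort key.
condition = Key("CustomerId").eq("C-1001") & Key("OrderDate").begins_with("2024-")
response = table.query(KeyConditionExpression=condition)
items = response["Items"]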

Question 31 It is possible to create a global secondary index at the same time as the table creation in DynamoDB.

a) True

b) False

View Answer

Answer is: a – True

Explanation: Yes, it is possible to create a global secondary index (GSI) at the same time as the table creation in DynamoDB. When creating a table, you have the option to define one or more GSIs along with the table schema. GSIs allow you to create additional indexes on attributes other than the primary key, which enables more flexible querying capabilities. By specifying the index key schema and projection attributes during the table creation, you can create the GSI and have it ready for use as soon as the table is created. This simplifies the process and ensures that the GSI is available immediately for efficient data retrieval.
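
A boto3 sketch of creating a table together with a GSI (table, attribute, and index names are hypothetical); note that the index also uses different key attributes than the base table, which is the subject of the next question:

import boto3
dynamodb = boto3.client("dynamodb")
dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "OrderStatus", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},   # base table partition key
        {"AttributeName": "OrderId", "KeyType": "RANGE"},     # base table sort key
    ],
    BillingMode="PAY_PER_REQUEST",
    GlobalSecondaryIndexes=[
        {
            "IndexName": "StatusDateIndex",
            "KeySchema": [
                {"AttributeName": "OrderStatus", "KeyType": "HASH"},   # different partition key
                {"AttributeName": "OrderDate", "KeyType": "RANGE"},    # different sort key
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
)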

Question 32 Can a Global Secondary Index (GSI) have a different partition key and sort key compared to its base table?

a) False

b) True

View Answer

Answer is: b – True

Explanation: Yes, a Global Secondary Index (GSI) in DynamoDB can have a different partition key and sort key compared to its base table. GSIs provide flexibility in querying data by allowing you to define alternative attribute combinations as the primary key for the index. This means that you can specify different attributes as the partition key and sort key for the GSI, separate from the primary key attributes of the base table. This enables efficient querying of data based on different access patterns and allows you to optimize your application’s performance by tailoring the index keys to specific query requirements.

Question 33 Which of the following can be used to restrict access to Amazon Simple Workflow Service (SWF)?

a) ACL

b) SWF Roles

c) IAM

d) None of the above

View Answer

Answer is: c – IAM

Explanation: Access to Amazon SWF can be restricted using IAM (Identity and Access Management). IAM allows you to define fine-grained permissions and access policies for users, groups, and roles in your AWS account. By configuring IAM policies, you can control who has access to SWF resources such as domains, workflow types, and activities. IAM policies enable you to grant or deny specific actions on SWF resources based on the principle of least privilege. This helps ensure that only authorized entities can interact with SWF and perform operations according to defined permissions and access restrictions.

Question 34 An IT admin wants to enable long polling in their Amazon Simple Queue Service (SQS) queue. What must be done to enable long polling in SQS?

a) Create a dead letter queue

b) Set the message size to 256KB

c) Set the ReceiveMessageWaitTimeSeconds property of the queue to 0 seconds

d) Set the ReceiveMessageWaitTimeSeconds property of the queue to 20 seconds

View Answer

Answer is: d – Set the ReceiveMessageWaitTimeSeconds property of the queue to 20 seconds

Explanation: Long polling is a feature in Amazon SQS that allows a client to retrieve messages from a queue with a reduced number of empty responses, improving efficiency and reducing costs. To enable long polling, the ReceiveMessageWaitTimeSeconds property of the SQS queue needs to be set to a non-zero value. By default, this value is set to 0 seconds, which disables long polling. Setting it to a higher value, such as 20 seconds, enables long polling and instructs the queue to wait for up to the specified duration to receive a message before returning a response. This helps reduce the number of empty responses when there are no messages in the queue, resulting in more efficient polling and better resource utilization.
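
A short boto3 sketch (the queue name is hypothetical) that enables long polling on an existing queue and then receives messages:

import boto3
sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]   # hypothetical queue
# Queue-level long polling: every ReceiveMessage call waits up to 20 seconds for messages.
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"ReceiveMessageWaitTimeSeconds": "20"})
# Long polling can also be requested per call with WaitTimeSeconds.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
messages = response.get("Messages", [])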

Question 35 According to the AWS Identity and Access Management (IAM) decision logic, what is the first step in evaluating access permissions for a resource?

a) A default deny

b) An explicit deny

c) An allow

d) An explicit allow

View Answer

Answer is: a – A default deny

Explanation: The IAM decision logic follows a default deny approach, which means that access to a resource is denied by default unless an explicit allow policy is in place. When evaluating access permissions for a resource, IAM first considers the default deny, assuming that access is not allowed. If there is an explicit allow policy that grants access, it overrides the default deny and allows access to the resource. This approach ensures that access is restricted unless explicitly granted, providing a secure and controlled environment.

Question 36 To bundle an Amazon EC2 instance store-backed Windows instance, which API call should be used?

a) AllocateInstance

b) CreateImage

c) BundleInstance

d) ami-register-image

View Answer

Answer is: c – BundleInstance

Explanation: To create a custom Amazon Machine Image (AMI) from an EC2 instance store-backed Windows instance, the API call used is “BundleInstance.” The “BundleInstance” call bundles the root volume of the running instance and stores the bundle in Amazon S3; the bundle is then registered (with the RegisterImage call) to produce an instance store-backed AMI. This custom AMI can then be used to launch new instances with the specified configuration and software pre-installed, providing a convenient way to replicate and deploy similar instances in the future.

Question 37 A developer is working on an application that needs to access an S3 bucket. An IAM role has been created with the necessary permissions to access the S3 bucket. Which API call should the developer use in the application to assume the IAM role and obtain temporary security credentials for accessing the S3 bucket?

a) IAM: AssumeRole

b) STS: GetFederationToken

c) IAM: GetRole

d) STS: AssumeRole

View Answer

Answer is: d – STS: AssumeRole

Explanation: The developer should use the API call “STS: AssumeRole” to assume the IAM role and obtain temporary security credentials. This API call allows the application to assume the specified IAM role and receive temporary credentials, including an access key, secret access key, and session token. These temporary credentials can then be used to access the S3 bucket and perform the necessary operations as allowed by the IAM role’s policies.
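
A boto3 sketch of the call (the role ARN and bucket name are hypothetical); the returned temporary credentials are used to build the client that accesses the bucket:

import boto3
sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/S3AccessRole",   # hypothetical role ARN
    RoleSessionName="my-app-session",
)
creds = assumed["Credentials"]                               # temporary AccessKeyId, SecretAccessKey, SessionToken
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
objects = s3.list_objects_v2(Bucket="my-example-bucket")     # hypothetical bucket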

Question 38 A company is developing a Lambda function that will be deployed in multiple stages, such as development, testing, and production. The function relies on various external services, and it needs to call different endpoints for these services based on the deployment stage.
What feature of AWS Lambda can the developer use to ensure that the code references the correct endpoints based on the stage?

a) Tagging

b) Concurrency

c) Aliases

d) Environment variables

View Answer

Answer is: d – Environment variables

Explanation: The developer can utilize environment variables in AWS Lambda to ensure that the code references the correct endpoints based on the stage. Environment variables provide a way to pass configuration information to Lambda functions, allowing the function code to dynamically read these variables at runtime. By setting different values for the environment variables based on the deployment stage, the Lambda function can determine the appropriate endpoints to call for the external services. This provides flexibility and allows the same function code to adapt to different environments seamlessly.
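
A minimal handler sketch in Python (the variable name and endpoint are hypothetical); each stage’s deployment sets a different value for the variable, and the code simply reads it at runtime:

import os
import urllib.request
# Set per stage by the deployment, e.g. https://payments-dev.example.internal for the dev stage.
PAYMENTS_ENDPOINT = os.environ["PAYMENTS_ENDPOINT"]
def handler(event, context):
    with urllib.request.urlopen(f"{PAYMENTS_ENDPOINT}/health") as resp:
        return {"statusCode": resp.status}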

Question 39 You are utilizing AWS SAM (Serverless Application Model) to define a Lambda function and have configured CodeDeploy to manage deployment patterns. After successfully testing the new Lambda function, you want to shift traffic from the original Lambda function to the new one in the shortest possible time frame.
Which deployment configuration in CodeDeploy will accomplish this?

a) Canary10Percent5Minutes

b) Linear10PercentEvery10Minutes

c) Canary10Percent15Minutes

d) Linear10PercentEvery5Minute

View Answer

Answer is: a – Canary10Percent5Minutes

Explanation: The deployment configuration “Canary10Percent5Minutes” is the most suitable choice for shifting traffic from the original Lambda function to the new one in the shortest time frame among the listed options. It routes 10% of the traffic to the new version immediately and shifts the remaining 90% five minutes later, which still allows the new function’s health to be monitored and the deployment rolled back before the full transition. The other options either take longer to complete or shift traffic in smaller linear increments, which would not achieve the desired objective of a quick transition.
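
In a SAM template this is expressed roughly as follows (the function name and code location are illustrative), where AutoPublishAlias publishes a new version behind an alias and DeploymentPreference tells CodeDeploy how to shift the alias traffic:

Resources:
  CheckoutFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
      AutoPublishAlias: live                  # creates/updates an alias that points at the newly published version
      DeploymentPreference:
        Type: Canary10Percent5Minutes         # 10% immediately, the remaining 90% after 5 minutes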

Question 40 You are developing an application that integrates with AWS X-Ray for distributed tracing. As part of your development workflow, you need to test the application’s latency locally using the X-Ray daemon. However, you want to exclude certain AWS service calls from being traced by the X-Ray daemon during local testing. Which configuration setting can be used to achieve this?

a) ~/xray-daemon$ ./xray -r

b) ~/xray-daemon$ ./xray -t

c) ~/xray-daemon$ ./xray -x

d) ~/xray-daemon$ ./xray -m

View Answer

Answer is: c – ~/xray-daemon$ ./xray -x

Explanation: The option “~/xray-daemon$ ./xray -x” is the correct configuration setting for the X-Ray daemon to exclude certain AWS service calls from being traced during local testing. The “-x” flag stands for “exclude-aws-sdk” and allows you to skip tracing AWS SDK calls made by the X-Ray daemon itself. This can be useful to avoid including unnecessary traces and reduce overhead during local testing. The other options (-r, -t, -m) are not valid configuration settings for excluding AWS service calls from X-Ray tracing.

Question 41 Your company operates a high-traffic website that relies on retrieving objects from Amazon S3. Currently, the website experiences a high request rate of around 11000 GET requests per second. To enhance security and comply with regulations, you decide to enable encryption at rest for the S3 objects using AWS Key Management Service (KMS). However, after enabling encryption, you start experiencing performance issues. What could be the main reason behind this?

a) Amazon S3 will now throttle the requests since they are now being encrypted using KMS.

b) You need to also enable versioning to ensure optimal performance.

c) You are now exceeding the throttle limits for KMS API calls.

d) You need to also enable content delivery network (CDN) to ensure optimal performance.

View Answer

Answer is: c – You are now exceeding the throttle limits for KMS API calls

Explanation: Enabling encryption at rest for S3 objects using KMS introduces additional overhead in terms of cryptographic operations. If the request rate exceeds the throttle limits for KMS API calls, it can result in performance issues. It is important to monitor and ensure that the KMS API calls are within the service limits to maintain optimal performance.

Question 42 Your company is using Elastic Beanstalk to host a web application. They require that when updates are made to the application, the infrastructure should maintain its full capacity to ensure high availability. Which of the following deployment methods should you use for this requirement? (Select two)

a) All at once

b) Rolling

c) Immutable

d) Blue/green

View Answer

Answer is: c and d

Explanation: (c) Immutable: In the immutable deployment method, Elastic Beanstalk creates a new set of Amazon EC2 instances with the updated application version. Once the new instances pass health checks, traffic is shifted to these new instances while keeping the old instances running. This ensures that the infrastructure maintains its full capacity during the deployment process, providing high availability.

(d) Blue/green: In the blue/green deployment method, Elastic Beanstalk provisions a separate environment (green) with the updated application version alongside the existing environment (blue). Once the green environment passes health checks, traffic is routed to the green environment, while the blue environment remains operational. This ensures that the infrastructure maintains its full capacity and high availability during updates.

Question 43 You have a DynamoDB table that stores items with an average size of 8 KB. You need to perform a strongly consistent read operation on a single item. How many read capacity units (RCUs) would be consumed for this operation?

a) 1

b) 2

c) 4

d) 8

View Answer

Answer is: b – 2

Explanation: Each strongly consistent read request on DynamoDB consumes 1 read capacity unit (RCU) per 4 KB of item size. In this case, the item size is 8 KB, so it would consume 2 RCUs for the strongly consistent read operation.

Question 44 Your application is designed to collect metrics from multiple servers and send them to CloudWatch. Occasionally, the application encounters client 429 errors. What can be done from the programming side to address these errors?

a) Use the AWS CLI instead of the SDK to send the metrics.

b) Ensure that all metrics have a timestamp before sending them.

c) Implement exponential backoff in the requests.

d) Enable request encryption for the metrics.

View Answer

Answer is: c – Implement exponential backoff in the requests.

Explanation: Client 429 errors in AWS typically indicate that the application is making too many requests and exceeding the rate limits. To handle these errors, implementing exponential backoff in the requests is recommended. Exponential backoff is a technique where the application progressively waits longer before retrying the request after encountering errors. This approach helps reduce the request rate and mitigates the occurrence of 429 errors.
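
A rough Python sketch of the idea (the namespace and metric payload are supplied by the caller); the wait time doubles after each failed attempt, with random jitter added:

import random
import time
import boto3
from botocore.exceptions import ClientError
cloudwatch = boto3.client("cloudwatch")
def put_metrics_with_backoff(namespace, metric_data, max_attempts=5):
    for attempt in range(max_attempts):
        try:
            return cloudwatch.put_metric_data(Namespace=namespace, MetricData=metric_data)
        except ClientError:
            if attempt == max_attempts - 1:
                raise                                        # give up after the final attempt
            time.sleep((2 ** attempt) + random.random())     # wait roughly 1s, 2s, 4s, 8s ... plus jitter

The AWS SDKs also expose configurable retry modes (for example the retries setting on a botocore Config object), so the built-in retry behaviour is worth tuning before hand-rolling a loop like this.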

Question 45 You have defined a DynamoDB table with a provisioned read capacity of 5 and a provisioned write capacity of 5. Which of the following statements are TRUE?

a) Strongly consistent reads are limited to a maximum of 20 KB per second.

b) Eventually consistent reads are limited to a maximum of 20 KB per second.

c) Strongly consistent reads are limited to a maximum of 40 KB per second.

d) Eventually consistent reads are limited to a maximum of 40 KB per second.

e) The maximum write capacity is 5 KB per second.

View Answer

Answer is: a, d and e

Explanation: When you provision the read capacity for a DynamoDB table, you have the option to choose between strongly consistent reads and eventually consistent reads.

A provisioned read capacity of 5 allows for a maximum of 5 strongly consistent reads per second, and each strongly consistent read can retrieve up to 4 KB of data. Therefore, the maximum read capacity for strongly consistent reads is 5 * 4 KB = 20 KB per second (option a).

For eventually consistent reads, the read capacity is double that of strongly consistent reads. Therefore, the maximum read capacity for eventually consistent reads is 40 KB per second (option d).

The provisioned write capacity of 5 allows for a maximum write capacity of 5 KB per second, since each write capacity unit supports one write of up to 1 KB per second (option e).

Question 46 Which of the following options can be used to retrieve the most recent results efficiently from an Amazon DynamoDB table with a Global Secondary Index, while minimizing the impact on the Read Capacity Units (RCU)?

a) Perform a Query operation with ConsistentRead enabled.

b) Perform a Scan operation with ConsistentRead enabled.

c) Perform a Query operation with Eventual Consistency.

d) Perform a Scan operation with Eventual Consistency.

View Answer

Answer is: c – Perform a Query operation with Eventual Consistency.

Explanation: Global Secondary Indexes support only eventually consistent reads, so a Query against the index is performed with eventual consistency in any case. An eventually consistent read consumes half the read capacity of a strongly consistent read, and a Query reads only the items that match the key condition, whereas a Scan reads the entire index. A Query with eventual consistency therefore retrieves the results efficiently while minimizing the RCU consumed.

Question 47 You are tasked with creating a bucket policy for an Amazon S3 bucket to enforce server-side encryption at rest for all objects. To accomplish this, you need to include a specific header in the bucket policy.
Which header should be included in the bucket policy to enforce server-side encryption with SSE-S3 for the bucket?

a) Set the “x-amz-server-side-encryption-customer-algorithm” as the AES256 request header.

b) Set the “x-amz-server-side-encryption-bucket” as the AES256 request header.

c) Set the “x-amz-server-side-encryption-context” as the AES256 request header.

d) Set the “x-amz-server-side-encryption” as the AES256 request header.

View Answer

Answer is: d – Set the “x-amz-server-side-encryption” as the AES256 request header.

Explanation: To enforce server-side encryption with SSE-S3 for a specific bucket, you need to include the “x-amz-server-side-encryption” header in the bucket policy and set its value as “AES256”. This header informs Amazon S3 to encrypt all objects stored in the bucket at rest using AES256 encryption.
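
A sketch of such a policy statement (reusing the ‘mybucket’ name from the earlier example) denies any PutObject request that does not specify AES256 server-side encryption:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}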

Question 48 You are using Amazon DynamoDB to store product details for an online furniture store. During query operations, you want to retrieve the “Colour” and “Size” attributes from the table. Which of the following expressions can be used for this purpose?

a) Update Expressions

b) Condition Expressions

c) Projection Expressions

d) Expression Attribute Names

View Answer

Answer is: c – Projection Expressions

Explanation: In Amazon DynamoDB, Projection Expressions are used to specify the attributes to be returned in the query results. By specifying the desired attributes, you can optimize the query and retrieve only the necessary data. In this case, you can use a Projection Expression to specify that you want to retrieve the “Colour” and “Size” attributes of the table during query operations.
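
A boto3 sketch of such a query (the table, key, and values are hypothetical); note that Size is a DynamoDB reserved word, so an expression attribute name is used for it:

import boto3
from boto3.dynamodb.conditions import Key
table = boto3.resource("dynamodb").Table("Furniture")    # hypothetical table
response = table.query(
    KeyConditionExpression=Key("ProductId").eq("CHAIR-042"),
    ProjectionExpression="Colour, #sz",                  # return only these attributes
    ExpressionAttributeNames={"#sz": "Size"},            # placeholder for the reserved word Size
)
items = response["Items"]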

Question 49 With respect to strongly consistent read requests from an application to a DynamoDB table fronted by a DAX cluster, which of the following is true?

a) All requests are forwarded to DynamoDB & results are cached

b) All requests are forwarded to DynamoDB & results are stored in Item Cache before passing to the application

c) All requests are forwarded to DynamoDB & results are stored in Query Cache before passing to the application

d) All requests are forwarded to DynamoDB & results are not cached

View Answer

Answer is: d – All requests are forwarded to DynamoDB & results are not cached

Explanation: When using a DAX (DynamoDB Accelerator) cluster with DynamoDB, strongly consistent read requests are forwarded directly to DynamoDB. DAX acts as a front-end cache and does not cache the results of strongly consistent reads. Instead, it caches the results of eventually consistent reads to improve performance. Therefore, for strongly consistent read requests, the results are not cached in DAX and are always fetched from DynamoDB.

Question 50 You want to base the creation of environments on values passed at runtime to a CloudFormation template. How can you achieve this?

a) Specify an Outputs section

b) Specify a parameters section

c) Specify a metadata section

d) Specify a transform section

View Answer

Answer is: b – Specify a parameters section

Explanation: In CloudFormation templates, the parameters section allows you to define input parameters that can be passed during runtime. These parameters can be used to customize the behavior of the template and create resources based on the provided values. By specifying a parameters section in the CloudFormation template, you can dynamically configure the template based on the input values provided at runtime.
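
A sketch of a Parameters section and a resource that references the supplied values (the names and AMI ID are placeholders):

Parameters:
  EnvironmentName:
    Type: String
    AllowedValues: [dev, test, prod]
    Default: dev
  InstanceType:
    Type: String
    Default: t3.micro
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType         # value supplied at deploy time
      ImageId: ami-0123456789abcdef0          # placeholder AMI ID
      Tags:
        - Key: Environment
          Value: !Ref EnvironmentName

The values are then passed at runtime, for example: aws cloudformation deploy --template-file template.yaml --stack-name web-prod --parameter-overrides EnvironmentName=prod InstanceType=t3.large.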

Question 51 When a client calls an undefined API resource in AWS API Gateway, it receives a “Missing Authentication Token” response. How can you customize this error response and make it more user-readable?

a) By setting up the appropriate method in the API gateway

b) By setting up the appropriate method integration request in the API gateway

c) By setting up the appropriate gateway response in the API gateway

d) By setting up the appropriate gateway request in the API gateway

View Answer

Answer is: c – By setting up the appropriate gateway response in the API gateway

Explanation: In AWS API Gateway, you can customize the error responses by setting up appropriate gateway responses. By defining a gateway response for the specific HTTP status code and error condition, you can provide a more user-readable error message in the response. In this case, you can set up a gateway response for the “Missing Authentication Token” error to provide a more meaningful error message to the user.

Question 52 A company wants to host a Docker container-based application in AWS using a service that provides built-in orchestration. Which of the following should they use?

a) Build a Kubernetes cluster on EC2 instances.

b) Build a Kubernetes cluster on your on-premises infrastructure.

c) Use the Elastic Container Service (ECS).

d) Use the Simple Storage Service (S3) to store your Docker containers.

View Answer

Answer is: c – Use the Elastic Container Service (ECS).

Explanation: Elastic Container Service (ECS) is a fully managed container orchestration service that simplifies the deployment, scaling, and management of Docker containers on AWS. You define tasks, services, and clusters to run and scale your containers, and ECS integrates with other AWS services such as load balancers, Auto Scaling, and IAM. Building a Kubernetes cluster on EC2 instances (a) requires manual setup and management of the cluster, an on-premises cluster (b) does not meet the requirement to host the application in AWS, and S3 (d) is an object storage service that is not designed for hosting or orchestrating Docker containers.

Question 53 During development, you want to determine the capacity consumed by the queries your application sends to DynamoDB. How can you achieve this?

a) Rely on the default behaviour, since queries return the consumed capacity as part of the result.

b) Set the ReturnConsumedCapacity parameter in the query request to TRUE.

c) Set the ReturnConsumedCapacity parameter in the query request to TOTAL.

d) Use the Scan operation instead of the Query operation.

View Answer

Answer is: c – Set the ReturnConsumedCapacity parameter in the query request to TOTAL.

Explanation: By default, the consumed capacity is not returned as part of the query result (a), and TRUE is not a valid value for the ReturnConsumedCapacity parameter (b). Setting ReturnConsumedCapacity to TOTAL includes the capacity units consumed by the query in the response, which lets you monitor and analyze consumption during the development phase. Using the Scan operation instead of the Query operation (d) would read the entire table and would not accurately represent the capacity consumed by the queries being fired.
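
A boto3 sketch of such a query (the table and key are hypothetical); the consumed capacity is returned as part of the response:

import boto3
from boto3.dynamodb.conditions import Key
table = boto3.resource("dynamodb").Table("Orders")   # hypothetical table
response = table.query(
    KeyConditionExpression=Key("CustomerId").eq("C-1001"),
    ReturnConsumedCapacity="TOTAL",
)
print(response["ConsumedCapacity"])                  # e.g. {'TableName': 'Orders', 'CapacityUnits': 0.5}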

Question 54 You are tasked with deploying a new Lambda Function using AWS CloudFormation Templates. You need to configure the function to test it with 5% of the traffic being routed to the new version. Which of the following attributes can be used to achieve this?

a) aws lambda create-alias --name alias-name --function-name function-name --routing-config AdditionalVersionWeights={"2"=0.05}

b) aws lambda create-alias --name alias-name --function-name function-name --routing-config AdditionalVersionWeights={"2"=5}

c) aws lambda create-alias --name alias-name --function-name function-name --routing-config AdditionalVersionWeights={"2"=0.5}

d) aws lambda create-alias --name alias-name --function-name function-name --routing-config AdditionalVersionWeights={"2"=5%}

View Answer

Answer is: a

Explanation: The correct option is A because it specifies the command aws lambda create-alias with the appropriate parameters to create an alias for the Lambda function and configure additional version weights. By setting the weight for the new version to 0.05, it allows 5% of the traffic to be routed to the new version for testing purposes.
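
For reference, a boto3 equivalent of option (a) might look like the sketch below; the function name, alias name, and version numbers are assumptions.

import boto3

lambda_client = boto3.client("lambda")

# Create an alias that sends 95% of traffic to version 1 and 5% to version 2
# (function name, alias name, and versions assumed for illustration).
lambda_client.create_alias(
    FunctionName="my-function",
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.05}},
)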

Question 55 You are part of a development team responsible for deploying a serverless application using AWS CloudFormation. The application needs to be deployed in multiple AWS regions with minimal manual effort. Which of the following features of AWS CloudFormation would be most helpful in achieving this?

a) Creating CloudFormation ChangeSets

b) Using AWS CloudFormation StackSets

c) Implementing AWS CloudFormation Nested Stacks

d) Utilizing AWS CloudFormation Macros

View Answer

Answer is: b – Using AWS CloudFormation StackSets

Explanation: AWS CloudFormation StackSets allows you to create, update, or delete stacks across multiple AWS accounts and regions with a single CloudFormation template. This simplifies the deployment process and reduces manual effort by enabling you to manage the application infrastructure consistently across multiple regions.
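
As an illustrative sketch (stack set name, template URL, account ID, and regions are assumptions), the boto3 calls below create a stack set from a template and then deploy stack instances into two regions:

import boto3

cfn = boto3.client("cloudformation")

# Create the stack set from a template stored in S3 (URL assumed).
cfn.create_stack_set(
    StackSetName="serverless-app",
    TemplateURL="https://s3.amazonaws.com/example-bucket/serverless-app.yaml",
    Capabilities=["CAPABILITY_IAM"],
)

# Deploy stack instances of that stack set into multiple regions
# of the target account (account ID and regions assumed).
cfn.create_stack_instances(
    StackSetName="serverless-app",
    Accounts=["123456789012"],
    Regions=["us-east-1", "eu-west-1"],
)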

Question 56 Your team is working on a project to deploy a serverless application using AWS CodeBuild to build the application code stored in an Amazon S3 bucket. However, when the build is executed, you encounter the following error:
Error: “The bucket you are attempting to access must be addressed using the specified endpoint…”
Which of the following could be the cause of the error?

a) The bucket is not in the same region as the CodeBuild project.

b) Code should ideally be stored on EBS volumes.

c) Versioning is enabled for the bucket.

d) MFA is enabled on the bucket.

View Answer

Answer is: a – The bucket is not in the same region as the CodeBuild project.

Explanation: The error message indicates that the bucket must be addressed using the specified endpoint. This error commonly occurs when the Amazon S3 bucket and the AWS CodeBuild project are in different regions. To resolve the issue, ensure that the Amazon S3 bucket is in the same region as the AWS CodeBuild project.
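
One way to confirm the bucket's region before pointing CodeBuild at it is shown in this small boto3 sketch; the bucket name is an assumption.

import boto3

s3 = boto3.client("s3")

# Look up the region a bucket was created in (bucket name assumed).
location = s3.get_bucket_location(Bucket="my-codebuild-source-bucket")

# Buckets created in us-east-1 return None for LocationConstraint.
print(location["LocationConstraint"] or "us-east-1")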

Question 57 Your team is developing a new web application hosted on AWS. The application needs to securely access multiple databases. How can you store the database credentials securely?

a) Store them in separate Lambda functions which can be invoked via HTTPS

b) Store them as secrets in AWS Secrets Manager

c) Store them in separate DynamoDB tables

d) Store them in separate S3 buckets

View Answer

Answer is: b – Store them as secrets in AWS Secrets Manager

Explanation: Storing the database passwords as secrets in AWS Secrets Manager is the ideal way to ensure secure storage. AWS Secrets Manager provides a secure and centralized solution for managing secrets, including database passwords. It offers encryption, fine-grained access control, and automatic rotation of secrets, making it a suitable choice for securely storing sensitive information like database passwords. Storing the passwords in separate Lambda functions, DynamoDB tables, or S3 buckets may not provide the same level of security and management capabilities as AWS Secrets Manager.
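
A minimal sketch of retrieving a database credential at runtime with boto3, assuming a hypothetical secret named prod/webapp/db that stores a JSON document with username and password fields:

import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the secret value at runtime instead of hard-coding credentials
# (the secret name and its JSON structure are assumed).
secret = secrets.get_secret_value(SecretId="prod/webapp/db")
credentials = json.loads(secret["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]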

Question 58 The Development team at XYZ Corp is working on a new version of their flagship application. All their services are hosted on AWS, and the management team has decided to leverage AWS CodeStar along with GitHub and AWS CodeDeploy for the deployment process. However, during testing, the application is experiencing continuous breaks despite having the dependencies pointed to the latest version in the package.json file within the AWS CodeStar project. What solution is required to resolve this problem?

a) Change the repository to AWS CodeCommit since AWS CodeStar does not support GitHub.

b) Set the dependencies to point to a specific version to avoid application breaks.

c) Change the deployment tool to Jenkins since it is well-supported by AWS CodeStar.

d) Update the package.json file with relevant read and write permissions through AWS IAM.

View Answer

Answer is: b – Set the dependencies to point to a specific version to avoid application breaks.

Explanation: To avoid application breaks, it is recommended to set the dependencies in the package.json file to point to a specific version. This ensures that the application uses the intended version of the dependencies and reduces the risk of compatibility issues with newer versions.

Question 59 John is a software developer who frequently uses AWS Cloud9 as his preferred IDE and AWS CodeStar to streamline CI/CD pipelines for his projects. He is currently facing issues with opening a new environment for a Docker-based project. He is unable to establish a connection to the EC2 environment within the project VPC, which has been configured with an IPv4 CIDR block of 172.17.0.0/16. Which of the following solutions can help resolve this problem?

a) Enable Advanced Networking for the EC2 instance used by AWS Cloud9.

b) Configure a new VPC for the EC2 environment with a CIDR block of 192.168.0.0/16.

c) Upgrade the EC2 instance backing the environment from t2.micro to t3.large family and attempt reconnection.

d) Modify the IP address range of the existing VPC to 172.17.0.0/18.

View Answer

Answer is: b – Configure a new VPC for the EC2 environment with a CIDR block of 192.168.0.0/16.

Explanation: AWS Cloud9 cannot connect to an EC2 environment in a VPC that uses the IPv4 CIDR range 172.17.0.0/16, because that range conflicts with the default Docker bridge network used by Cloud9. Configuring a new VPC for the EC2 environment with a non-overlapping CIDR block, such as 192.168.0.0/16, avoids the conflict and resolves the connection issue.
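
A rough boto3 sketch of option (b), creating a VPC and subnet in the non-conflicting 192.168.0.0/16 range (the CIDR values here are illustrative):

import boto3

ec2 = boto3.client("ec2")

# Create a new VPC whose CIDR does not overlap with Docker's default
# bridge network (172.17.0.0/16) used by AWS Cloud9.
vpc = ec2.create_vpc(CidrBlock="192.168.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Add a subnet for the Cloud9 EC2 environment (subnet CIDR assumed).
ec2.create_subnet(VpcId=vpc_id, CidrBlock="192.168.0.0/24")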

Question 60 A leading e-commerce platform provider, offering over 500 courses distributed across 300 microservices, is currently facing challenges related to high CPU usage during the JSON deserialization process used for application communication. This has led to bottlenecks, resulting in server errors due to timeouts. Additionally, manual code reviews have become time-consuming and laborious. Which of the following options can help resolve the application communication latencies, high CPU usage, and performance degradation?

a) Migrate the entire workload to AWS Elastic Kubernetes Service (EKS) to improve performance, enable automatic scaling, and enhance application availability.

b) Move the code to a test environment and integrate Amazon CodeGuru Profiler to evaluate the performance of JSON serialization versus native object serialization.

c) Modify the application load balancer to a network load balancer and add more listeners to improve scalability.

d) Upgrade the backend EC2 instance family to the latest Amazon EC2 M5 instances, which provide 25 Gbps of network bandwidth through enhanced networking capabilities.

View Answer

Answer is: b

Explanation: To address the application communication latencies, high CPU usage, and performance degradation, it is recommended to move the code to a test environment and profile it with Amazon CodeGuru Profiler. The profiler can compare the runtime cost of JSON serialization and deserialization against native object serialization, highlight the most expensive code paths, and recommend optimizations. This helps identify areas for improvement and streamlines the application communication process.
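
As a hedged sketch of getting started with Amazon CodeGuru Profiler via boto3 (the profiling group name is an assumption, and the profiler agent must be attached to the application separately):

import uuid
import boto3

codeguru = boto3.client("codeguruprofiler")

# Create a profiling group that the application's profiler agent will
# report to (group name assumed; clientToken is an idempotency token).
codeguru.create_profiling_group(
    profilingGroupName="webapp-json-serialization",
    clientToken=str(uuid.uuid4()),
)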
