Get ready for the Amazon SAP-C02 exam with Certifiedumps. Access expert-verified study material, real exam-style questions, and 90 days of free updates—all backed by a pass guarantee.
Pass Amazon SAP-C02 in 2025 – Trusted Prep for AWS Success
Questions & Answers
(Demo Version - Limited Content)
Amazon
SAP-C02 Exam
AWS Certified Solutions Architect -
Professional
https://www.certifiedumps.com/amazon/sap-c02-dumps.html
Thank you for downloading the SAP-C02 exam PDF demo
Get Full File:
Topic 1, Exam Pool A
Questions & Answers PDF
A company uses a service to collect metadata from applications that the company hosts on premises. Consumer devices such as TVs and internet radios access the applications. Many older devices do not support certain HTTP headers and exhibit errors when these headers are present in responses. The company has configured an on-premises load balancer to remove the unsupported headers from responses sent to older devices, which the company identifies by the User-Agent header. The company wants to migrate the service to AWS, adopt serverless technologies, and retain the ability to support the older devices. The company has already migrated the applications into a set of AWS Lambda functions.
Which solution will meet these requirements?
A. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header.
B. Create an Amazon API Gateway REST API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Modify the default gateway responses to remove the problematic headers based on the value of the User-Agent header.
C. Create an Amazon API Gateway HTTP API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Create a response mapping template to remove the problematic headers based on the value of the User-Agent header. Associate the response data mapping with the HTTP API.
D. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header.
Version: 23.0
Question: 1
Answer: D
Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html
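The header-stripping logic behind answer D can be sketched as a Lambda@Edge-style response handler. This is a minimal illustration, not the exam's reference implementation: the header names and User-Agent patterns below are assumptions, and the event shape mirrors the CloudFront Lambda@Edge event structure.

```python
# Headers that (hypothetically) break older devices, and User-Agent markers
# that identify those devices -- both lists are illustrative assumptions.
PROBLEM_HEADERS = {"strict-transport-security", "x-frame-options"}
LEGACY_AGENTS = ("SmartTV/1.0", "InternetRadio")

def handler(event, context=None):
    """Lambda@Edge-style response handler: strip unsupported headers for legacy devices."""
    cf = event["Records"][0]["cf"]
    request, response = cf["request"], cf["response"]
    user_agent = request["headers"].get("user-agent", [{}])[0].get("value", "")
    if any(agent in user_agent for agent in LEGACY_AGENTS):
        # Drop only the problematic headers; everything else passes through.
        response["headers"] = {
            name: values
            for name, values in response["headers"].items()
            if name.lower() not in PROBLEM_HEADERS
        }
    return response
```

For a modern User-Agent the response is returned untouched, which matches the requirement that only the older devices need special handling.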
www.certifiedumps.com
Explanation:
The requirements combine "minimizes operational complexity" with "microservices that run on containers," which points to a fully managed, serverless container service: Amazon ECS with the AWS Fargate launch type.
A company has a multi-tier web application that runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The ALB and the Auto Scaling group are replicated in a backup AWS Region. The minimum value and the maximum value for the Auto Scaling group are set to zero. An Amazon RDS Multi-AZ DB instance stores the application's data. The DB instance has a read replica in the backup Region. The application presents an endpoint to end users by using an Amazon Route 53 record. The company needs to reduce its RTO to less than 15 minutes by giving the application the ability to automatically fail over to the backup Region. The company does not have a large enough budget for an active-active strategy.
What should a solutions architect recommend to meet these requirements?
A. Reconfigure the application's Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region.
A company is running a traditional web application on Amazon EC2 instances. The company needs to
refactor the application as microservices that run on containers. Separate versions of the application
exist in two distinct environments: production and testing. Load for the application is variable, but
the minimum load and the maximum load are known. A solutions architect needs to design the
updated application with a serverless architecture that minimizes operational complexity.
Which solution will meet these requirements MOST cost-effectively?
A. Upload the container images to AWS Lambda as functions. Configure a concurrency limit for the
associated Lambda functions to handle the expected peak load. Configure two separate Lambda
integrations within Amazon API Gateway: one for production and one for testing.
B. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two
auto scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to
handle the expected load. Deploy tasks from the ECR images. Configure two separate Application
Load Balancers to direct traffic to the ECS clusters.
C. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two
auto scaled Amazon Elastic Kubernetes Service (Amazon EKS) clusters with the Fargate launch type to
handle the expected load. Deploy tasks from the ECR images. Configure two separate Application
Load Balancers to direct traffic to the EKS clusters.
D. Upload the container images to AWS Elastic Beanstalk. In Elastic Beanstalk, create separate
environments and deployments for production and testing. Configure two separate Application Load
Balancers to direct traffic to the Elastic Beanstalk deployments.
Question: 3
Answer: B
Explanation:
The recommended solution (option B) is to create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values, and then configure Route 53 with a health check that monitors the web application and sends an Amazon SNS notification to the Lambda function when the health check status is unhealthy. Finally, the application's Route 53 record should be updated with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs. This approach provides automatic failover to the backup Region when a health check failure occurs, reducing the RTO to less than 15 minutes. Additionally, this approach is cost-effective because it does not require an active-active strategy.
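The failover Lambda described above can be sketched as follows. This is a hedged outline, not a production function: the clients are injected so the logic can be exercised locally, and identifiers such as "app-db-replica" and "app-asg" are placeholder assumptions.

```python
class Recorder:
    """Minimal stand-in for a boto3 client that records calls (local demo only)."""
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def method(**kwargs):
            self.calls.append((name, kwargs))
        return method

def fail_over(rds, autoscaling, replica_id="app-db-replica",
              asg_name="app-asg", min_size=2, max_size=6):
    # Promote the cross-Region read replica to a standalone primary.
    rds.promote_read_replica(DBInstanceIdentifier=replica_id)
    # Scale the dormant Auto Scaling group (min/max were zero) up to serve traffic.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        MinSize=min_size,
        MaxSize=max_size,
        DesiredCapacity=min_size,
    )
```

In a real deployment the two clients would be `boto3.client("rds")` and `boto3.client("autoscaling")` created in the backup Region, and the function would be invoked by the SNS notification from the Route 53 health check.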
A company is hosting a critical application on a single Amazon EC2 instance. The application uses an
Amazon ElastiCache for Redis single-node cluster for an in-memory data store. The application uses
an Amazon RDS for MariaDB DB instance for a relational database. For the application to function,
each piece of the infrastructure must be healthy and must be in an active state.
A solutions architect needs to improve the application's architecture so that the infrastructure can
automatically recover from failure with the least possible downtime.
Which combination of steps will meet these requirements? (Select THREE.)
A. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2
instances are part of an Auto Scaling group that has a minimum capacity of two instances.
B. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are configured in unlimited mode.
C. Modify the DB instance to create a read replica in the same Availability Zone. Promote the read
replica to be the primary DB instance in failure scenarios.
D. Modify the DB instance to create a Multi-AZ deployment that extends across two Availability
Zones.
Configure the CloudWatch alarm to invoke the Lambda function.
B. Create an AWS Lambda function in the backup Region to promote the read replica and modify the
Auto Scaling group values. Configure Route 53 with a health check that monitors the web application
and sends an Amazon Simple Notification Service (Amazon SNS) notification to the Lambda function
when the health check status is unhealthy. Update the application’s Route 53 record with a failover
policy that routes traffic to the ALB in the backup Region when a health check failure occurs.
C. Configure the Auto Scaling group in the backup Region to have the same values as the Auto Scaling
group in the primary Region. Reconfigure the application’s Route 53 record with a latency-based
routing policy that load balances traffic between the two ALBs. Remove the read replica. Replace the
read replica with a standalone RDS DB instance. Configure Cross-Region Replication between the RDS
DB instances by using snapshots and Amazon S3.
D. Configure an endpoint in AWS Global Accelerator with the two ALBs as equal weighted targets.
Create an AWS Lambda function in the backup Region to promote the read replica and modify the
Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the
HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch
alarm to invoke the Lambda function.
Question: 4
Answer: B
Explanation:
Option A is correct because using an Elastic Load Balancer and an Auto Scaling group with a minimum capacity of two instances can improve the availability and scalability of the EC2 instances that host the application. The load balancer can distribute traffic across multiple instances, and the Auto Scaling group can replace any unhealthy instances automatically [1].
Option D is correct because modifying the DB instance to create a Multi-AZ deployment that extends across two Availability Zones can improve the availability and durability of the RDS for MariaDB database. Multi-AZ deployments provide enhanced data protection and minimize downtime by automatically failing over to a standby replica in another Availability Zone in case of a planned or unplanned outage [4].
Option F is correct because creating a replication group for the ElastiCache for Redis cluster and enabling Multi-AZ on the cluster can improve the availability and fault tolerance of the in-memory data store. A replication group consists of a primary node and up to five read-only replica nodes that are synchronized with the primary node using asynchronous replication. Multi-AZ allows automatic failover to one of the replicas if the primary node fails or becomes unreachable [6].
Reference:
1: https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html
2: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances-unlimited-mode.html
3: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
4: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
5: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoScaling.html
6: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.Redis.Groups.html
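The replication-group setup behind option F can be expressed as the request parameters for ElastiCache's `create_replication_group` API. This is a sketch only: the group id, node type, and replica count are placeholder assumptions.

```python
def replication_group_params(group_id="app-cache", replicas=2):
    """Illustrative parameters for an ElastiCache for Redis replication group
    with Multi-AZ and automatic failover enabled."""
    return {
        "ReplicationGroupId": group_id,
        "ReplicationGroupDescription": "Redis with Multi-AZ automatic failover",
        "Engine": "redis",
        "CacheNodeType": "cache.t3.medium",      # assumed node size
        "ReplicasPerNodeGroup": replicas,        # up to 5 read replicas per group
        "AutomaticFailoverEnabled": True,        # promote a replica if primary fails
        "MultiAZEnabled": True,                  # replicas placed in other AZs
    }
```

In practice these parameters would be passed to `boto3.client("elasticache").create_replication_group(**replication_group_params())`.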
A retail company is operating its ecommerce application on AWS. The application runs on Amazon
EC2 instances behind an Application Load Balancer (ALB). The company uses an Amazon RDS DB
instance as the database backend. Amazon CloudFront is configured with one origin that points to
the ALB. Static content is cached. Amazon Route 53 is used to host all public zones.
After an update of the application, the ALB occasionally returns a 502 status code (Bad Gateway)
error. The root cause is malformed HTTP headers that are returned to the ALB. The webpage returns
successfully when a solutions architect reloads the webpage immediately after the error occurs.
While the company is working on the problem, the solutions architect needs to provide a custom
error page instead of the standard ALB error page to visitors.
Which combination of steps will meet this requirement with the LEAST amount of operational
overhead? (Choose two.)
A. Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom
E. Create a replication group for the ElastiCache for Redis cluster. Configure the cluster to use an Auto
Scaling group that has a minimum capacity of two instances.
F. Create a replication group for the ElastiCache for Redis cluster. Enable Multi-AZ on the cluster.
Question: 5
Answer: A, D, F
Explanation:
"Save your custom error pages in a location that is accessible to CloudFront. We recommend that you
store them in an Amazon S3 bucket, and that you don’t store them in the same place as the rest of
your website or application’s content. If you store the custom error pages on the same origin as your
website or application, and the origin starts to return 5xx errors, CloudFront can’t get the custom
error pages because the origin server is unavailable."
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GeneratingCustomErrorResponses.html
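The CloudFront side of the custom-error-page approach described above can be sketched as the `CustomErrorResponses` section of a distribution config. The error page path and caching TTL here are illustrative assumptions; the page itself would be the one uploaded to the S3 bucket.

```python
def custom_error_responses():
    """Illustrative CloudFront CustomErrorResponses block: serve a custom page
    whenever the origin (the ALB) returns a 502."""
    return {
        "Quantity": 1,
        "Items": [{
            "ErrorCode": 502,
            "ResponsePagePath": "/errors/502.html",  # assumed path in the S3 origin
            "ResponseCode": "502",                   # keep the original status code
            "ErrorCachingMinTTL": 30,                # cache the error page briefly
        }],
    }
```

This block would be merged into the distribution config passed to `update_distribution`; no change to the application or the ALB is required, which is why the approach has low operational overhead.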
A company has many AWS accounts and uses AWS Organizations to manage all of them. A solutions architect must implement a solution that the company can use to share a common network across multiple accounts. The company's infrastructure team has a dedicated infrastructure account that has a VPC. The infrastructure team must use this account to manage the network. Individual accounts cannot have the ability to manage their own networks. However, individual accounts must be able to create AWS resources within subnets.
Which combination of actions should the solutions architect perform to meet these requirements? (Select TWO.)
A. Create a transit gateway in the infrastructure account.
B. Enable resource sharing from the AWS Organizations management account.
C. Create VPCs in each AWS account within the organization in AWS Organizations. Configure the VPCs to share the same CIDR range and subnets as the VPC in the infrastructure account. Peer the VPCs in each individual account with the VPC in the infrastructure account.
D. Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each subnet to associate with the resource share.
E. Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each prefix list to associate with the resource share.
error pages to Amazon S3.
B. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check
response Target.FailedHealthChecks is greater than 0. Configure the Lambda function to modify the
forwarding rule at the ALB to point to a publicly accessible web server.
C. Modify the existing Amazon Route 53 records by adding health checks. Configure a fallback target
if the health check fails. Modify DNS records to point to a publicly accessible webpage.
D. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Elb.InternalError is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
E. Add a custom error response by configuring a CloudFront custom error page. Modify DNS records
to point to a publicly accessible web page.
Question: 6
Answer: C, E
Explanation:
https://docs.aws.amazon.com/vpc/latest/userguide/sharing-managed-prefix-lists.html
Explanation:
Reference architecture: https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-saas.html
Note from the documentation that the interface endpoint is on the client (consumer) side.
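The consumer-side interface endpoint can be sketched as the parameters for EC2's `create_vpc_endpoint` API. All identifiers below (VPC id, endpoint service name, subnet, security group) are placeholders; the real service name would come from the SaaS provider.

```python
def interface_endpoint_params(vpc_id, service_name, subnet_ids, sg_id):
    """Illustrative create_vpc_endpoint parameters for a PrivateLink
    interface endpoint, locked down with a security group (least privilege)."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": service_name,      # the provider's endpoint service name
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": [sg_id],      # restricts which resources can use it
        "PrivateDnsEnabled": False,
    }
```

Because the endpoint is created in the company's VPC and only outbound-initiated connections flow through it, no resource in the company VPC becomes reachable from outside, which is what the security policy requires.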
A company wants to use a third-party software-as-a-service (SaaS) application. The third-party SaaS application is consumed through several API calls. The third-party SaaS application also runs on AWS inside a VPC. The company will consume the third-party SaaS application from inside a VPC. The company has internal security policies that mandate the use of private connectivity that does not traverse the internet. No resources that run in the company VPC are allowed to be accessed from outside the company's VPC. All permissions must conform to the principles of least privilege.
Which solution meets these requirements?
A. Create an AWS PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a security group to limit the access to the endpoint. Associate the security group with the endpoint.
B. Create an AWS Site-to-Site VPN connection between the third-party SaaS application and the company VPC. Configure network ACLs to limit access across the VPN tunnels.
C. Create a VPC peering connection between the third-party SaaS application and the company VPC. Update route tables by adding the needed routes for the peering connection.
D. Create an AWS PrivateLink endpoint service. Ask the third-party SaaS provider to create an interface VPC endpoint for this endpoint service. Grant permissions for the endpoint service to the specific account of the third-party SaaS provider.
A company needs to implement a patching process for its servers. The on-premises servers and Amazon EC2 instances use a variety of tools to perform patching. Management requires a single report showing the patch status of all the servers and instances.
Which set of actions should a solutions architect take to meet these requirements?
A. Use AWS Systems Manager to manage patches on the on-premises servers and EC2 instances. Use Systems Manager to generate patch compliance reports.
B. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use Amazon QuickSight integration with OpsWorks to generate patch compliance reports.
C. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to apply patches by
Question: 7
Question: 8
Answer: A
Answer: A, E
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/adding-lifecycle-hooks.html
- Refer to Default Result section - If the instance is terminating, both abandon and continue allow
the instance to terminate. However, abandon stops any remaining actions, such as other lifecycle
A company is running an application on several Amazon EC2 instances in an Auto Scaling group
behind an Application Load Balancer. The load on the application varies throughout the day, and EC2
instances are scaled in and out on a regular basis. Log files from the EC2 instances are copied to a
central Amazon S3 bucket every 15 minutes. The security team discovers that log files are missing
from some of the terminated EC2 instances.
Which set of actions will ensure that log files are copied to the central S3 bucket from the terminated
EC2 instances?
A. Create a script to copy log files to Amazon S3, and store the script in a file on the EC2 instance.
Create an Auto Scaling lifecycle hook and an Amazon EventBridge (Amazon CloudWatch Events) rule
to detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the
autoscaling:EC2_INSTANCE_TERMINATING transition to send ABANDON to the Auto Scaling group to
prevent termination, run the script to copy the log files, and terminate the instance using the AWS
SDK.
B. Create an AWS Systems Manager document with a script to copy log files to Amazon S3. Create
an Auto Scaling lifecycle hook and an Amazon EventBridge (Amazon CloudWatch Events) rule to
detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the
autoscaling:EC2_INSTANCE_TERMINATING transition to call the AWS Systems Manager API
SendCommand operation to run the document to copy the log files and send CONTINUE to the Auto
Scaling group to terminate the instance.
C. Change the log delivery rate to every 5 minutes. Create a script to copy log files to Amazon S3,
and add the script to EC2 instance user data. Create an Amazon EventBridge (Amazon CloudWatch
Events) rule to detect EC2 instance termination. Invoke an AWS Lambda function from the
EventBridge (CloudWatch Events) rule that uses the AWS CLI to run the user-data script to copy the
log files and terminate the instance.
D. Create an AWS Systems Manager document with a script to copy log files to Amazon S3. Create
an Auto Scaling lifecycle hook that publishes a message to an Amazon Simple Notification Service
(Amazon SNS) topic. From the SNS notification, call the AWS Systems Manager API SendCommand
operation to run the document to copy the log files and send ABANDON to the Auto Scaling group to
terminate the instance.
reports.
D. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use AWS X-Ray to post the patch status to AWS Systems Manager OpsCenter to generate patch compliance
reports.
Explanation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
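The single patch report that answer A relies on can be sketched as a summary over Systems Manager's per-instance patch state. The input below mirrors the shape of `describe_instance_patch_states` output, but the data and the COMPLIANT/NON_COMPLIANT labels are assumptions for the demo.

```python
def compliance_report(patch_states):
    """Summarize SSM instance patch states into one report:
    an instance is compliant when it has no missing patches."""
    report = {}
    for state in patch_states:
        report[state["InstanceId"]] = (
            "COMPLIANT" if state.get("MissingCount", 0) == 0 else "NON_COMPLIANT"
        )
    return report
```

With hybrid activations, on-premises servers appear in the same API as EC2 instances (with `mi-` ids), which is why a single report is possible.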
Question: 9
Answer: B
Answer: A
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/private-hosted-zone-different-account/
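The two-step cross-account association described at that link can be sketched as follows. The clients are injected so the sequence can be demonstrated locally, and the zone and VPC identifiers are placeholders.

```python
class Recorder:
    """Minimal stand-in for a boto3 client that records calls (local demo only)."""
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def method(**kwargs):
            self.calls.append((name, kwargs))
        return method

def share_private_zone(route53_a, route53_b,
                       zone_id="Z123EXAMPLE",
                       vpc_region="us-east-1", vpc_id="vpc-0abc123"):
    vpc = {"VPCRegion": vpc_region, "VPCId": vpc_id}
    # Step 1 (run with Account A credentials): authorize the association.
    route53_a.create_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)
    # Step 2 (run with Account B credentials): associate the VPC with the zone.
    route53_b.associate_vpc_with_hosted_zone(HostedZoneId=zone_id, VPC=vpc)
```

After the association succeeds, the authorization can be deleted in Account A without breaking resolution, which is why the "delete the association authorization" step appears in the answer.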
A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances are behind an Application Load Balancer (ALB) and are configured in an Auto Scaling group. The web application stores all blog content on an Amazon EFS volume.
The company recently added a feature for bloggers to add video to their posts, attracting 10 times the previous user traffic. At peak times of day, users report buffering and timeout issues while attempting to reach the site or watch videos.
Which is the MOST cost-efficient and scalable deployment that will resolve the issues for users?
A company is using multiple AWS accounts. The DNS records are stored in a private hosted zone for Amazon Route 53 in Account A. The company's applications and databases are running in Account B. A solutions architect will deploy a two-tier application in a new VPC. To simplify the configuration, the db.example.com CNAME record set for the Amazon RDS endpoint was created in a private hosted zone for Amazon Route 53. During deployment, the application failed to start. Troubleshooting revealed that db.example.com is not resolvable on the Amazon EC2 instance. The solutions architect confirmed that the record set was created correctly in Route 53.
Which combination of steps should the solutions architect take to resolve this issue? (Select TWO.)
A. Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance's private IP in the private hosted zone.
B. Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file.
C. Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B.
D. Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts.
E. Associate a new VPC in Account B with a hosted zone in Account A. Delete the association authorization in Account A.
hooks, and continue allows any other lifecycle hooks to complete.
https://aws.amazon.com/blogs/infrastructure-and-automation/run-code-before-terminating-an-ec2-auto-scaling-instance/
https://github.com/aws-samples/aws-lambda-lifecycle-hooks-function
https://github.com/aws-samples/aws-lambda-lifecycle-hooks-function/blob/master/cloudformation/template.yaml
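The Lambda at the heart of option B can be sketched as below. The clients are injected for local demonstration; the SSM document name ("CopyLogsToS3") is an assumption, and the event fields follow the shape EventBridge delivers for Auto Scaling lifecycle events.

```python
class Recorder:
    """Minimal stand-in for a boto3 client that records calls (local demo only)."""
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def method(**kwargs):
            self.calls.append((name, kwargs))
        return method

def on_terminating(event, ssm, autoscaling, document="CopyLogsToS3"):
    detail = event["detail"]
    # Run the SSM document that copies the instance's log files to S3.
    ssm.send_command(InstanceIds=[detail["EC2InstanceId"]], DocumentName=document)
    # CONTINUE completes the lifecycle action so the instance can terminate.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
    )
```

The lifecycle hook keeps the instance in `Terminating:Wait` until `complete_lifecycle_action` is called, which is what gives the copy time to run.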
Question: 10
Question: 11
Answer: C, E
A. Reconfigure Amazon EFS to enable maximum I/O.
B. Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at shutdown.
C. Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
D. Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.
A company with global offices has a single 1 Gbps AWS Direct Connect connection to a single AWS Region. The company's on-premises network uses the connection to communicate with the company's resources in the AWS Cloud. The connection has a single private virtual interface that connects to a single VPC. A solutions architect must implement a solution that adds a redundant Direct Connect connection in the same Region. The solution also must provide connectivity to other Regions through the same pair of Direct Connect connections as the company expands into other Regions.
Which solution meets these requirements?
A. Provision a Direct Connect gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the Direct Connect gateway. Connect the Direct Connect gateway to the single VPC.
B. Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new private virtual interface on the new connection, and connect the new private virtual interface to the single VPC.
C. Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new public virtual interface on the new connection, and connect the new public virtual interface to the single VPC.
D. Provision a transit gateway. Delete the existing private virtual interface from the existing
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-connection-fails/
Using an Amazon S3 bucket
Using a MediaStore container or a MediaPackage channel
Using an Application Load Balancer
Using a Lambda function URL
Using Amazon EC2 (or another custom origin)
Using CloudFront origin groups
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/restrict-access-to-load-balancer.html
Question: 12
Answer: C
Explanation:
A Direct Connect gateway is a globally available resource. You can create the Direct Connect gateway in any Region and access it from all other Regions. The documentation describes scenarios where you can use a Direct Connect gateway.
https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html
A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for categorization. The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue. The company wants to re-architect the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software.
Which solution meets these requirements?
A. Use Amazon ECS containers for the web application and Spot Instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
B. Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
D. Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the web application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
Explanation:
Option C is correct because hosting the web application in Amazon S3, storing the uploaded videos in Amazon S3, and using S3 event notifications to publish events to the SQS queue reduces the operational overhead of managing EC2 instances and EBS volumes. Amazon S3 can serve static content such as HTML, CSS, JavaScript, and media files directly from S3 buckets. Amazon S3 can also trigger AWS Lambda functions through S3 event notifications when new objects are created or existing objects are updated or deleted. AWS Lambda can process the SQS queue with a function that calls the Amazon Rekognition API to categorize the videos. This solution eliminates the need for custom recognition software and third-party dependencies [3][4][5].
Reference:
1: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
2:
Question: 13
Answer: C
Answer: A
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html
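The S3-to-SQS wiring in option C can be sketched as an S3 bucket notification configuration. The queue ARN and the `.mp4` suffix filter below are illustrative assumptions.

```python
def notification_config(queue_arn):
    """Illustrative S3 notification configuration: publish object-created
    events for uploaded videos to the SQS queue that feeds the Lambda worker."""
    return {
        "QueueConfigurations": [{
            "QueueArn": queue_arn,
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "suffix", "Value": ".mp4"},   # assumed video extension
            ]}},
        }]
    }
```

This dict has the shape expected by `put_bucket_notification_configuration`; the SQS queue policy must also allow S3 to send messages, which is omitted here.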
Explanation:
https://aws.amazon.com/about-aws/whats-new/2017/11/aws-lambda-supports-traffic-shifting-and-phased-deployments-with-aws-codedeploy/
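The SAM/CodeDeploy approach in option B is normally written as a `DeploymentPreference` block in the SAM template; it is shown here as a Python dict sketch rather than YAML. `Canary10Percent5Minutes` is a real preference type; the alarm and hook names are placeholders.

```python
def deployment_preference():
    """Sketch of a SAM DeploymentPreference block: shift 10% of traffic to the
    new Lambda version, validate, then shift the rest, rolling back on alarms."""
    return {
        "Type": "Canary10Percent5Minutes",   # 10% first, remainder after 5 minutes
        "Alarms": ["AliasErrorMetricGreaterThanZeroAlarm"],  # assumed alarm name
        "Hooks": {
            "PreTraffic": "preTrafficHook",    # validation Lambda before shifting
            "PostTraffic": "postTrafficHook",  # validation Lambda after shifting
        },
    }
```

Combined with `AutoPublishAlias`, this is what gives both the faster deployments and the automatic detect-and-revert behavior the question asks for.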
A company has a serverless application comprised of Amazon CloudFront, Amazon API Gateway, and AWS Lambda functions. The current deployment process of the application code is to create a new version number of the Lambda function and run an AWS CLI script to update. If the new function version has errors, another CLI script reverts by deploying the previous working version of the function. The company would like to decrease the time to deploy new versions of the application logic provided by the Lambda functions, and also reduce the time to detect and revert when errors are identified.
How can this be accomplished?
A. Create and deploy nested AWS CloudFormation stacks with the parent stack consisting of the AWS CloudFront distribution and API Gateway, and the child stack containing the Lambda function. For changes to Lambda, create an AWS CloudFormation change set and deploy; if errors are triggered, revert the AWS CloudFormation change set to the previous version.
B. Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify code. Roll back if Amazon CloudWatch alarms are triggered.
C. Refactor the AWS CLI scripts into a single script that deploys the new Lambda version. When deployment is completed, the script tests execute. If errors are detected, revert to the previous Lambda version.
D. Create and deploy an AWS CloudFormation stack that consists of a new API Gateway endpoint that references the new Lambda version. Change the CloudFront origin to the new API Gateway endpoint, monitor errors and, if detected, change the AWS CloudFront origin to the previous API Gateway endpoint.
Answer: B
www.certifiedumps.com
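For reference, the gradual traffic shifting described in option B can be expressed directly in an AWS SAM template. The sketch below is illustrative only; the function, alarm, and hook resources are hypothetical placeholders.

```yaml
Transform: AWS::Serverless-2016-10-31
Resources:
  AppFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
      # Publishes a new Lambda version on each deploy and repoints the alias.
      AutoPublishAlias: live
      DeploymentPreference:
        # CodeDeploy shifts 10% of traffic, waits 5 minutes, then shifts the rest.
        Type: Canary10Percent5Minutes
        # Any alarm in this list going into ALARM triggers an automatic rollback.
        Alarms:
          - !Ref AppFunctionErrorAlarm
        Hooks:
          PreTraffic: !Ref PreTrafficHookFunction
          PostTraffic: !Ref PostTrafficHookFunction
```

Because rollback is alarm-driven, reverting no longer requires running a separate CLI script.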
A company is planning to store a large number of archived documents and make the documents
available to employees through the corporate intranet. Employees will access the system by
connecting through a client VPN service that is attached to a VPC. The data must not be accessible to
the public.
The documents that the company is storing are copies of data that is held on physical media
elsewhere. The number of requests will be low. Availability and speed of retrieval are not concerns of
the company.
Which solution will meet these requirements at the LOWEST cost?
A. Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint.
B. Launch an Amazon EC2 instance that runs a web server. Attach an Amazon Elastic File System (Amazon EFS) file system to store the archived data in the EFS One Zone-Infrequent Access (EFS One Zone-IA) storage class. Configure the instance security groups to allow access only from private networks.
C. Launch an Amazon EC2 instance that runs a web server. Attach an Amazon Elastic Block Store (Amazon EBS) volume to store the archived data. Use the Cold HDD (sc1) volume type. Configure the instance security groups to allow access only from private networks.
D. Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 Glacier Deep Archive storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint.
Answer: D
Explanation:
The S3 Glacier Deep Archive storage class is the lowest-cost storage class that Amazon S3 offers, and it is designed for archival data that is accessed infrequently and for which a retrieval time of several hours is acceptable. An S3 interface endpoint for the VPC ensures that the bucket is accessible only from resources within the VPC, which meets the requirement that the data not be publicly accessible. In addition, the S3 bucket can be configured for website hosting, which allows employees to access the documents through the corporate intranet. Using an EC2 instance with a file system or block store would be more expensive and unnecessary because the number of requests will be low and availability and speed of retrieval are not concerns. An S3 bucket also provides durability, scalability, and availability of the data.
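The "allow access only through that endpoint" step in option D is typically enforced with a bucket policy that denies any request not arriving through the interface endpoint. A minimal sketch, with a placeholder bucket name and endpoint ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessOutsideVpcEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-archive-bucket",
        "arn:aws:s3:::example-archive-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-0123456789abcdef0"
        }
      }
    }
  ]
}
```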
A company is using an on-premises Active Directory service for user authentication. The
company wants to use the same authentication service to sign in to the company's AWS
accounts, which are using AWS Organizations. AWS Site-to-Site VPN connectivity already
exists between the on-premises environment and all the company's AWS accounts. The
company's security policy requires conditional access to the accounts based on user groups
and roles. User identities must be managed in a single location. Which solution will meet
these requirements?
A. Configure AWS Single Sign-On (AWS SSO) to connect to Active Directory by using SAML 2.0. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using attribute-based access controls (ABACs).
B. Configure AWS Single Sign-On (AWS SSO) by using AWS SSO as an identity source. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using AWS SSO permission sets.
C. In one of the company's AWS accounts, configure AWS Identity and Access Management (IAM) to use a SAML 2.0 identity provider. Provision IAM users that are mapped to the federated users. Grant access that corresponds to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM users.
D. In one of the company's AWS accounts, configure AWS Identity and Access Management (IAM) to use an OpenID Connect (OIDC) identity provider. Provision IAM roles that grant access to the AWS account for the federated users that correspond to appropriate groups in Active Directory.
Explanation:
https://aws.amazon.com/blogs/aws/new-attributes-based-access-control-with-aws-single-sign-on/
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/aws-batch-requests-error/
https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-429-limit/
A company is running a data-intensive application on AWS. The application runs on a cluster of
hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store
200 TB of data. The application reads and modifies the data on the shared file system and generates
a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes
about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances
that host the shared file system run continuously. The compute and storage instances are all in the
same AWS Region.
A solutions architect needs to reduce costs by replacing the shared file system instances. The file
system must provide high performance access to the needed data for the duration of the 72-hour
run.
Which solution will provide the LARGEST overall cost reduction while meeting these requirements?
A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3
Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create
a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the
shared storage for the duration of the job. Delete the file system when the job is complete.
B. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store
A software company has deployed an application that consumes a REST API by using Amazon API Gateway, AWS Lambda functions, and an Amazon DynamoDB table. The application is showing an
increase in the number of errors during PUT requests. Most of the PUT calls come from a small
number of clients that are authenticated with specific API keys.
A solutions architect has identified that a large number of the PUT requests originate from one client.
The API is noncritical, and clients can tolerate retries of unsuccessful calls. However, the errors are
displayed to customers and are causing damage to the API's reputation.
What should the solutions architect recommend to improve the customer experience?
A. Implement retry logic with exponential backoff and irregular variation in the client
application. Ensure that the errors are caught and handled with descriptive error messages.
B. Implement API throttling through a usage plan at the API Gateway level. Ensure that the client
application handles code 429 replies without error.
C. Turn on API caching to enhance responsiveness for the production stage. Run 10-minute load
tests. Verify that the cache capacity is appropriate for the workload.
D. Implement reserved concurrency at the Lambda function level to provide the resources that
are needed during sudden increases in traffic.
Answer: B
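Whether clients implement option A's retry logic themselves or handle the 429 replies that option B's usage plan returns, the client-side behavior both options describe is exponential backoff with jitter ("irregular variation"). A minimal sketch in Python; the parameter values are illustrative:

```python
import random

def backoff_delays(max_retries=5, base=0.5, cap=30.0):
    """Yield 'full jitter' sleep times: a random delay drawn from
    [0, min(cap, base * 2**attempt)] for each retry attempt."""
    for attempt in range(max_retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

# A caller would sleep for each delay after receiving an HTTP 429,
# then give up once the generator is exhausted.
delays = list(backoff_delays())
```

The randomness spreads retries out so that many throttled clients do not all retry at the same instant.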
Explanation:
https://aws.amazon.com/blogs/storage/new-enhancements-for-moving-data-between-amazon-fsx-for-lustre-and-amazon-s3/
A company is developing a new service that will be accessed using TCP on a static port. A solutions architect must ensure that the service is highly available, has redundancy across Availability Zones, and is accessible by using the DNS name myservice.com, which is publicly accessible. The service must use fixed address assignments so other companies can add the addresses to their allow lists. Assuming that resources are deployed in multiple Availability Zones in a single Region, which solution will meet these requirements?
A. Create Amazon EC2 instances with an Elastic IP address for each instance. Create a Network Load Balancer (NLB) and expose the static TCP port. Register the EC2 instances with the NLB. Create a new name server record set named myservice.com, and assign the Elastic IP addresses of the EC2 instances to the record set. Provide the Elastic IP addresses of the EC2 instances to the other companies to add to their allow lists.
B. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP addresses for the ECS cluster. Create a Network Load Balancer (NLB) and expose the TCP port. Create a target group and assign the ECS cluster name to the NLB. Create a new A record set named myservice.com and assign the public IP addresses of the ECS cluster to the record set. Provide the public IP addresses of the ECS cluster to the other companies to add to their allow lists.
C. Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named myservice.com, and assign the NLB DNS name to the record set.
(Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by
using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared
storage for the duration of the job. Detach the EBS volume when the job is complete.
C. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3
Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new
file system with the data from Amazon S3 by using batch loading. Use the new file system as the
shared storage for the duration of the job. Delete the file system when the job is complete.
D. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs
each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use
the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.
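The monthly workflow in option A creates a temporary scratch FSx for Lustre file system linked to the S3 bucket so that objects are lazy-loaded on first read. A sketch of the boto3 request parameters that step would use; the subnet ID, bucket name, and capacity below are hypothetical:

```python
def lustre_create_params(subnet_id: str, bucket: str) -> dict:
    """Build create_file_system parameters for a temporary scratch
    FSx for Lustre file system that lazy-loads data from Amazon S3."""
    return {
        "FileSystemType": "LUSTRE",
        "StorageCapacity": 12000,  # GiB; sized for the job's working subset, not all 200 TB
        "SubnetIds": [subnet_id],
        "LustreConfiguration": {
            "DeploymentType": "SCRATCH_2",    # no replication; lowest cost for a 72-hour run
            "ImportPath": f"s3://{bucket}",   # file metadata loads up front, file data lazily
        },
    }

params = lustre_create_params("subnet-0123456789abcdef0", "example-report-data")
# e.g. boto3.client("fsx").create_file_system(**params), then delete the
# file system once the monthly job completes
```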
Answer: A
Explanation:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html
Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named myservice.com, and assign the NLB DNS name to the record set. Because the A record uses the NLB as its target, traffic is routed through the NLB, which automatically directs it to healthy instances based on health checks. The NLB also provides fixed address assignments, so the other companies can add the NLB's Elastic IP addresses to their allow lists.
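The pieces of this design fit together in CloudFormation roughly as follows. This is a sketch only; the Elastic IP and subnet resources are assumed to be declared elsewhere in the template:

```yaml
Resources:
  ServiceNLB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Type: network
      Scheme: internet-facing
      # One Elastic IP per Availability Zone gives clients fixed addresses
      # to add to their allow lists.
      SubnetMappings:
        - SubnetId: !Ref PublicSubnetA
          AllocationId: !GetAtt ElasticIpA.AllocationId
        - SubnetId: !Ref PublicSubnetB
          AllocationId: !GetAtt ElasticIpB.AllocationId
  ServiceAliasRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: myservice.com.
      Name: myservice.com.
      Type: A
      AliasTarget:
        DNSName: !GetAtt ServiceNLB.DNSName
        HostedZoneId: !GetAtt ServiceNLB.CanonicalHostedZoneID
```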
A company uses an on-premises data analytics platform. The system is highly available in a fully
redundant configuration across 12 servers in the company's data center.
The system runs scheduled jobs, both hourly and daily, in addition to one-time requests from users.
Scheduled jobs can take between 20 minutes and 2 hours to finish running and have tight SLAs. The
scheduled jobs account for 65% of the system usage. User jobs typically finish running in less than 5
minutes and have no SLA. The user jobs account for 35% of system usage. During system failures, scheduled jobs must
continue to meet SLAs. However, user jobs can be delayed.
A solutions architect needs to move the system to Amazon EC2 instances and adopt a consumption-
based model to reduce costs with no long-term commitments. The solution must maintain high
availability and must not affect the SLAs.
Which solution will meet these requirements MOST cost-effectively?
A. Split the 12 instances across two Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run four instances in each Availability Zone as Spot Instances.
B. Split the 12 instances across three Availability Zones in the chosen AWS Region. In one of the Availability Zones, run all four instances as On-Demand Instances with Capacity Reservations. Run the remaining instances as Spot Instances.
C. Split the 12 instances across three Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with a Savings Plan. Run two instances in each Availability Zone as Spot Instances.
D. Split the 12 instances across three Availability Zones in the chosen AWS Region. Run three instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run one instance in each Availability Zone as a Spot Instance.
Answer: D
Answer: C
18. A company is storing data in several Amazon DynamoDB tables. A solutions architect must use a
serverless architecture to make the data accessible publicly through a simple API over HTTPS. The
solution must scale automatically in response to demand.
Which solutions meet these requirements? (Choose two.)
A security engineer determined that an existing application retrieves credentials to an Amazon RDS
for MySQL database from an encrypted file in Amazon S3. For the next version of the application, the
security engineer wants to implement the following application design changes to improve security:
The database must use strong, randomly generated passwords stored in a secure AWS managed
service.
The application resources must be deployed through AWS CloudFormation.
The application must rotate credentials for the database every 90 days.
A solutions architect will generate a CloudFormation template to deploy the application.
Which resources specified in the CloudFormation template will meet the security engineer's
requirements with the LEAST amount of operational overhead?
A. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS
Lambda function resource to rotate the database password. Specify a Secrets Manager
RotationSchedule resource to rotate the database password every 90 days.
B. Generate the database password as a SecureString parameter type using AWS Systems Manager
Parameter Store. Create an AWS Lambda function resource to rotate the database password. Specify
a Parameter Store RotationSchedule resource to rotate the database password every 90 days.
C. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS
Lambda function resource to rotate the database password. Create an Amazon EventBridge
scheduled rule resource to trigger the Lambda function password rotation every 90 days.
D. Generate the database password as a SecureString parameter type using AWS Systems Manager
Parameter Store. Specify an AWS AppSync DataSource resource to automatically rotate the database
password every 90 days.
Explanation:
https://aws.amazon.com/blogs/security/how-to-securely-provide-database-credentials-to-lambda-functions-by-using-aws-secrets-manager/
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_cloudformation.html
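Option A maps onto a small set of CloudFormation resources. A minimal sketch; using the Secrets Manager hosted rotation Lambda for MySQL is one assumption for how the rotation function is supplied, and a real secret would also carry the connection fields (engine, host, port) that the rotation function expects:

```yaml
# HostedRotationLambda requires this transform.
Transform: AWS::SecretsManager-2020-07-23
Resources:
  DatabaseSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        PasswordLength: 32
        ExcludeCharacters: '"@/\'
  DatabaseSecretRotation:
    Type: AWS::SecretsManager::RotationSchedule
    Properties:
      SecretId: !Ref DatabaseSecret
      # Secrets Manager hosts the rotation Lambda for common engines.
      HostedRotationLambda:
        RotationType: MySQLSingleUser
      RotationRules:
        AutomaticallyAfterDays: 90
```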
By splitting the 12 instances across three Availability Zones, the system maintains high availability in case of a failure. Option D also uses a combination of On-Demand Instances with Capacity Reservations and Spot Instances, which allows the scheduled jobs to run on On-Demand Instances with guaranteed capacity while taking advantage of the cost savings of Spot Instances for the user jobs, which have no SLA requirements.
Answer: B
Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-how-it-works-tutorial.html
Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-overview-developer-experience.html
A company has registered 10 new domain names. The company uses the domains for online marketing. The company needs a solution that will redirect online visitors to a specific URL for each domain. All domains and target URLs are defined in a JSON document. All DNS records are managed by Amazon Route 53. A solutions architect must implement a redirect service that accepts HTTP and HTTPS requests. Which combination of steps should the solutions architect take to meet these requirements with the LEAST amount of operational effort? (Choose three.)
A. Create a dynamic webpage that runs on an Amazon EC2 instance. Configure the webpage to use the JSON document in combination with the event message to look up and respond with a redirect URL.
B. Create an Application Load Balancer that includes HTTP and HTTPS listeners.
C. Create an AWS Lambda function that uses the JSON document in combination with the event message to look up and respond with a redirect URL.
D. Use an Amazon API Gateway API with a custom domain to publish an AWS Lambda function.
E. Create an Amazon CloudFront distribution. Deploy a Lambda@Edge function.
F. Create an SSL certificate by using AWS Certificate Manager (ACM). Include the domains as Subject Alternative Names.
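The Lambda-based lookup described in option C can be sketched as a viewer-request handler that maps the requested host to its target URL. The domains and target URLs below are hypothetical examples of the JSON document:

```python
import json

# Hypothetical mapping document; in practice this JSON is bundled with
# or loaded by the function.
REDIRECTS = json.loads(
    '{"example-brand1.com": "https://www.example.com/brand1",'
    ' "example-brand2.com": "https://www.example.com/brand2"}'
)

def handler(event, context):
    """Lambda@Edge viewer-request style handler: return a 301 to the
    URL mapped for the request's Host header."""
    request = event["Records"][0]["cf"]["request"]
    host = request["headers"]["host"][0]["value"].lower()
    target = REDIRECTS.get(host)
    if target is None:
        return request  # no mapping: pass the request through unchanged
    return {
        "status": "301",
        "statusDescription": "Moved Permanently",
        "headers": {"location": [{"key": "Location", "value": target}]},
    }
```

CloudFront in front of the function (option E) terminates HTTP and HTTPS, and the ACM certificate (option F) covers all 10 domains as Subject Alternative Names.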
A. Create an Amazon API Gateway REST API. Configure this API with direct integrations to DynamoDB by using API Gateway's AWS integration type.
B. Create an Amazon API Gateway HTTP API. Configure this API with direct integrations to DynamoDB by using API Gateway's AWS integration type.
C. Create an Amazon API Gateway HTTP API. Configure this API with integrations to AWS Lambda functions that return data from the DynamoDB tables.
D. Create an accelerator in AWS Global Accelerator. Configure this accelerator with AWS Lambda@Edge function integrations that return data from the DynamoDB tables.
E. Create a Network Load Balancer. Configure listener rules to forward requests to the appropriate AWS Lambda functions.
Answer: AC
Answer: CEF
Explanation:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/configurecostallocreport.html
A company has 50 AWS accounts that are members of an organization in AWS Organizations Each
account contains multiple VPCs The company wants to use AWS Transit Gateway to establish
connectivity between the VPCs in each member account Each time a new member account is
created, the company wants to automate the process of creating a new VPC and a transit gateway
attachment.
Which combination of steps will meet these requirements? (Select TWO)
A. From the management account, share the transit gateway with member accounts by using AWS
Resource Access Manager
B. From the management account, share the transit gateway with member accounts by using an AWS Organizations SCP.
A company that has multiple AWS accounts is using AWS Organizations. The company’s AWS
accounts host VPCs, Amazon EC2 instances, and containers.
The company’s compliance team has deployed a security tool in each VPC where the company has
deployments. The security tools run on EC2 instances and send information to the AWS account that
is dedicated for the compliance team. The company has tagged all the compliance-related resources
with a key of “costCenter” and a value of “compliance”.
The company wants to identify the cost of the security tools that are running on the EC2 instances so
that the company can charge the compliance team’s AWS account. The cost calculation must be as
accurate as possible.
What should a solutions architect do to meet these requirements?
A. In the management account of the organization, activate the costCenter user-defined tag. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Use the tag breakdown in the report to obtain the total cost for the costCenter tagged resources.
B. In the member accounts of the organization, activate the costCenter user-defined tag. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Schedule a monthly AWS Lambda function to retrieve the reports and calculate the total cost for the costCenter tagged resources.
C. In the member accounts of the organization, activate the costCenter user-defined tag. From the management account, schedule a monthly AWS Cost and Usage Report. Use the tag breakdown in the report to calculate the total cost for the costCenter tagged resources.
D. Create a custom report in the organization view in AWS Trusted Advisor. Configure the report to generate a monthly billing summary for the costCenter tagged resources in the compliance team's AWS account.
Answer: A
C. Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a VPC transit gateway attachment in a member account. Associate the attachment with the transit gateway in the management account by using the transit gateway ID.
D. Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a peering transit gateway attachment in a member account. Share the attachment with the transit gateway in the management account by using a transit gateway service-linked role.
E. From the management account, share the transit gateway with member accounts by using AWS Service Catalog.
Answer: A, C
Explanation:
https://aws.amazon.com/blogs/mt/self-service-vpcs-in-aws-control-tower-using-aws-service-catalog/
https://docs.aws.amazon.com/vpc/latest/tgw/tgw-transit-gateways.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-transitgatewayattachment.html
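The per-account piece of option C (a stack set instance that creates the VPC and its transit gateway attachment) can be sketched as below. The transit gateway ID would be the one shared from the management account through AWS RAM; the CIDR is a placeholder and the attachment subnet is assumed to be declared elsewhere in the template:

```yaml
Resources:
  MemberVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.10.0.0/16
  MemberTgwAttachment:
    Type: AWS::EC2::TransitGatewayAttachment
    Properties:
      # ID of the transit gateway shared from the management account via AWS RAM
      TransitGatewayId: tgw-0123456789abcdef0
      VpcId: !Ref MemberVpc
      SubnetIds:
        - !Ref AttachmentSubnet
```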
An enterprise company wants to allow its developers to purchase third-party software through AWS Marketplace. The company uses an AWS Organizations account structure with full features enabled and has a shared services account in each organizational unit (OU) that will be used by procurement managers. The procurement team's policy indicates that developers should be able to obtain third-party software from an approved list only and use Private Marketplace in AWS Marketplace to achieve this requirement. The procurement team wants administration of Private Marketplace to be restricted to a role named procurement-manager-role, which could be assumed by procurement managers. Other IAM users, groups, roles, and account administrators in the company should be denied Private Marketplace administrative access.
What is the MOST efficient way to design an architecture to meet these requirements?
A. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the PowerUserAccess managed policy to the role. Apply an inline policy to all IAM users and roles in every AWS account to deny permissions on the AWSPrivateMarketplaceAdminFullAccess managed policy.
B. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the AdministratorAccess managed policy to the role. Define a permissions boundary with the AWSPrivateMarketplaceAdminFullAccess managed policy and attach it to all the developer roles.
C. Create an IAM role named procurement-manager-role in all the shared services accounts in the organization. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an organization root-level SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Create another organization root-level SCP to deny permissions to create an IAM role named procurement-manager-role to everyone in the organization.
D. Create an IAM role named procurement-manager-role in all AWS accounts that will be used by developers. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an SCP in Organizations to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Apply the SCP to all the shared services accounts in the organization.
A company is in the process of implementing AWS Organizations to constrain its developers to use only Amazon EC2, Amazon S3, and Amazon DynamoDB. The developers' account resides in a dedicated organizational unit (OU). The solutions architect has implemented the following SCP on the developers' account:
When this policy is deployed, IAM users in the developers account are still able to use AWS services
that are not listed in the policy. What should the solutions architect do to eliminate the developers'
ability to use services outside the scope of this policy?
A. Create an explicit deny statement for each AWS service that should be constrained
B. Remove the Full AWS Access SCP from the developer account's OU
Explanation:
SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role.
https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-architected-private-marketplace-using-iam-and-aws-organizations/
This approach allows the procurement managers to assume the procurement-manager-role in shared
services accounts, which have the AWSPrivateMarketplaceAdminFullAccess managed policy attached
to it and can then manage the Private Marketplace. The organization root-level SCP denies the
permission to administer Private Marketplace to everyone except the role named procurement-
manager-role and another SCP denies the permission to create an IAM role named procurement-
manager-role to everyone in the organization, ensuring that only the procurement team can assume
the role and manage the Private Marketplace. This approach provides a centralized way to manage
and restrict access to Private Marketplace while maintaining a high level of security.
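The first root-level SCP from option C might look like the sketch below. The action pattern and role ARN pattern are assumptions for illustration only; the blog post linked above lists the exact Private Marketplace administration actions to deny:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPrivateMarketplaceAdmin",
      "Effect": "Deny",
      "Action": "aws-marketplace:*PrivateMarketplace*",
      "Resource": "*",
      "Condition": {
        "ArnNotLike": {
          "aws:PrincipalARN": "arn:aws:iam::*:role/procurement-manager-role"
        }
      }
    }
  ]
}
```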
Answer: C
C. Modify the Full AWS Access SCP to explicitly deny all services
D. Add an explicit deny statement using a wildcard to the end of the SCP
Explanation:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_inheritance_auth.html
Explanation:
By breaking the monolithic API into individual Lambda functions and using API Gateway to handle the incoming requests, the solution can scale automatically to handle the new and varying load without manual scaling actions. This option also avoids keeping EC2 instances running all the time; the company pays only for the number of requests and the duration of each Lambda function execution. Updating the Route 53 record to point to the API Gateway API directs traffic to the correct endpoint.
A company is hosting a monolithic REST-based API for a mobile app on five Amazon EC2 instances in
public subnets of a VPC. Mobile clients connect to the API by using a domain name that is hosted on
Amazon Route 53. The company has created a Route 53 multivalue answer routing policy with the IP
addresses of all the EC2 instances. Recently, the app has been overwhelmed by large and sudden
increases to traffic. The app has not been able to keep up with the traffic.
A solutions architect needs to implement a solution so that the app can handle the new and varying
load.
Which solution will meet these requirements with the LEAST operational overhead?
A. Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.
B. Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress.
C. Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record.
D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.
Answer: B
Answer: D
Explanation:
https://docs.aws.amazon.com/cur/latest/userguide/billing-cur-limits.html
Explanation:
https://aws.amazon.com/storagegateway/file/
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/migrate-files-to-fsx-datasync.html
https://docs.aws.amazon.com/systems-manager/latest/userguide/prereqs-operating-systems.html#prereqs-os-windows-server
A company's solutions architect is reviewing a web application that runs on AWS. The application
A company has created an OU in AWS Organizations for each of its engineering teams. Each OU owns multiple AWS accounts. The organization has hundreds of AWS accounts. A solutions architect must design a solution so that each OU can view a breakdown of usage costs across its AWS accounts. Which solution meets these requirements?
A. Create an AWS Cost and Usage Report (CUR) for each OU by using AWS Resource Access Manager. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
B. Create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
C. Create an AWS Cost and Usage Report (CUR) in each AWS Organizations member account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
D. Create an AWS Cost and Usage Report (CUR) by using AWS Systems Manager. Allow each team to visualize the CUR through Systems Manager OpsCenter dashboards.
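For answer B, the CUR is defined once in the management account and delivered to an S3 bucket that QuickSight can read, so every OU's costs land in one place. A sketch of the report definition follows; the bucket, prefix, and report name are placeholders.

```python
# Sketch of the CUR report definition created once in the management
# account (option B). Bucket, prefix, and name values are placeholders.
report_definition = {
    "ReportName": "org-wide-cur",
    "TimeUnit": "DAILY",
    "Format": "Parquet",
    "Compression": "Parquet",
    "S3Bucket": "org-cur-bucket",
    "S3Prefix": "cur/",
    "S3Region": "us-east-1",
    # RESOURCES adds resource IDs so each OU can slice costs by account
    "AdditionalSchemaElements": ["RESOURCES"],
    "RefreshClosedReports": True,
    "ReportVersioning": "OVERWRITE_REPORT",
}

# boto3 sketch (the CUR API lives in us-east-1):
# boto3.client("cur", region_name="us-east-1").put_report_definition(
#     ReportDefinition=report_definition)
```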
A company is storing data on premises on a Windows file server. The company produces 5 GB of new
data daily.
The company migrated part of its Windows-based workload to AWS and needs the data to be
available on a file system in the cloud. The company already has established an AWS Direct Connect
connection between the on-premises network and AWS.
Which data migration strategy should the company use?
A. Use the file gateway option in AWS Storage Gateway to replace the existing Windows file server,
and point the existing file share to the new file gateway.
B. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows
file server and Amazon FSx.
C. Use AWS Data Pipeline to schedule a daily task to replicate data between the on-premises
Windows file server and Amazon Elastic File System (Amazon EFS).
D. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS).
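Answer B's daily replication is simply a DataSync task with a schedule attached. The sketch below shows the task parameters; both location ARNs are placeholders, and the cron expression runs the task once per day at 02:00 UTC.

```python
# Sketch of option B's scheduled DataSync task. The location ARNs are
# placeholders for the on-premises SMB share and the FSx file system.
create_task_args = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:123456789012:location/loc-onprem-smb",
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:123456789012:location/loc-fsx",
    "Name": "daily-windows-share-sync",
    # Run once per day at 02:00 UTC; DataSync only copies changed data
    "Schedule": {"ScheduleExpression": "cron(0 2 * * ? *)"},
}

# boto3 sketch: boto3.client("datasync").create_task(**create_task_args)
```

Because FSx for Windows File Server speaks SMB natively, the migrated Windows workloads can mount it without changes, which is why EFS (a Linux-oriented NFS file system) is the wrong destination here.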
Question: 30
Question: 31
Question: 32
Answer: B
Answer: B
Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_f
ailover.html
A company is hosting a three-tier web application in an on-premises environment. Due to a recent surge in traffic that resulted in downtime and a significant financial impact, company management has ordered that the application be moved to AWS. The application is written in .NET and has a dependency on a MySQL database. A solutions architect must design a scalable and highly available solution to meet the demand of 200,000 daily users.
Which steps should the solutions architect take to design an appropriate solution?
A. Use AWS Elastic Beanstalk to create a new application with a web server environment and an Amazon RDS MySQL Multi-AZ DB instance. The environment should launch a Network Load Balancer (NLB) in front of an Amazon EC2 Auto Scaling group in multiple Availability Zones. Use an Amazon Route 53 alias record to route traffic from the company's domain to the NLB.
B. Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front
of an Amazon EC2 Auto Scaling group spanning three Availability Zones. The stack should launch a
Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a Retain deletion policy. Use an
Amazon Route 53 alias record to route traffic from the company's domain to the ALB.
C. Use AWS Elastic Beanstalk to create an automatically scaling web server environment that spans
two separate Regions with an Application Load Balancer (ALB) in each Region. Create a Multi-AZ
deployment of an Amazon Aurora MySQL DB cluster with a cross-Region read replica. Use Amazon Route 53 with a geoproximity routing policy to route traffic between the two Regions.
D. Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon ECS cluster of Spot Instances spanning three Availability Zones. The stack should launch an Amazon RDS MySQL DB instance with a Snapshot deletion policy. Use an Amazon Route 53 alias record to route traffic from the company's domain to the ALB.

A company's solutions architect is reviewing a web application that runs on AWS. The application references static assets in an Amazon S3 bucket in the us-east-1 Region. The company needs resiliency across multiple AWS Regions. The company already has created an S3 bucket in a second Region.
Which solution will meet these requirements with the LEAST operational overhead?
A. Configure the application to write each object to both S3 buckets. Set up an Amazon Route 53
public hosted zone with a record set by using a weighted routing policy for each S3 bucket. Configure
the application to reference the objects by using the Route 53 DNS name.
B. Create an AWS Lambda function to copy objects from the S3 bucket in us-east-1 to the S3 bucket in the second Region. Invoke the Lambda function each time an object is written to the S3 bucket in us-east-1. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins.
C. Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the
second Region. Set up an Amazon CloudFront distribution with an origin group that contains the two
S3 buckets as origins.
D. Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the
second Region. If failover is required, update the application code to load S3 objects from the S3
bucket in the second Region.
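Answer C combines managed replication with CloudFront origin failover, so no application code changes are needed. The replication half can be sketched as the configuration S3 expects; the bucket names and IAM role ARN below are placeholders.

```python
# Sketch of the S3 replication configuration for option C.
# Bucket names and the role ARN are placeholders.
replication_configuration = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-to-second-region",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter: replicate every object
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::assets-bucket-second-region",
                "StorageClass": "STANDARD",
            },
        }
    ],
}

# boto3 sketch (versioning must be enabled on both buckets first):
# s3.put_bucket_replication(Bucket="assets-bucket-us-east-1",
#                           ReplicationConfiguration=replication_configuration)
```

With the CloudFront origin group in front, failover to the second bucket happens automatically on origin errors, which is what keeps the operational overhead lower than options A, B, or D.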
Question: 33
Answer: C
Explanation:
https://aws.amazon.com/blogs/aws/use-cloudformation-stacksets-to-provision-resources-across-
multiple-aws-accounts-and-regions/
A company is using AWS Organizations to manage multiple AWS accounts. For security purposes, the company requires the creation of an Amazon Simple Notification Service (Amazon SNS) topic that enables integration with a third-party alerting system in all the Organizations member accounts. A solutions architect used an AWS CloudFormation template to create the SNS topic and stack sets to automate the deployment of CloudFormation stacks. Trusted access has been enabled in Organizations. What should the solutions architect do to deploy the CloudFormation StackSets in all AWS accounts?
A. Create a stack set in the Organizations member accounts. Use service-managed permissions. Set deployment options to deploy to an organization. Use CloudFormation StackSets drift detection.
B. Create stacks in the Organizations member accounts. Use self-service permissions. Set deployment options to deploy to an organization. Enable the CloudFormation StackSets automatic deployment.
C. Create a stack set in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets automatic deployment.
D. Create stacks in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets drift detection.
A company wants to migrate its workloads from on premises to AWS. The workloads run on Linux and Windows. The company has a large on-premises infrastructure that consists of physical machines and VMs that host numerous applications.
The company must capture details about the system configuration, system performance, running processes, and network connections of its on-premises workloads. The company also must divide the on-premises applications into groups for AWS migrations. The company needs recommendations for Amazon EC2 instance types so that the company can run its workloads on AWS in the most cost-effective manner.
Explanation:
Using AWS CloudFormation to launch a stack with an Application Load Balancer (ALB) in front of an Amazon EC2 Auto Scaling group spanning three Availability Zones, a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a Retain deletion policy, and an Amazon Route 53 alias record to route traffic from the company's domain to the ALB will ensure that the application is scalable, highly available, and protected against losing the database if the stack is ever deleted.
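The detail that distinguishes option B is the one-line DeletionPolicy on the database resource. A sketch of the relevant CloudFormation fragment, written as the equivalent Python dictionary; the resource name and properties are placeholders.

```python
# Sketch of the CloudFormation fragment behind option B. Only DeletionPolicy
# is the point being illustrated; names and properties are placeholders.
template = {
    "Resources": {
        "AuroraCluster": {
            "Type": "AWS::RDS::DBCluster",
            # Retain keeps the cluster and its data even if the stack
            # is deleted; a Snapshot policy (option D's RDS instance)
            # would leave only a snapshot behind.
            "DeletionPolicy": "Retain",
            "Properties": {
                "Engine": "aurora-mysql",
                "DatabaseName": "appdb",
            },
        }
    }
}
```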
Question: 34
Question: 35
Answer: C
Answer: C
Explanation:
https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-agent.html
https://docs.aws.amazon.com/migrationhub/latest/ug/ec2-recommendations.html
Explanation:
Create Amazon S3 gateway endpoint in the VPC and add a VPC endpoint policy. This VPC endpoint
policy will have a statement that allows S3 access only via access points owned by the organization.
A company is hosting an image-processing service on AWS in a VPC. The VPC extends across two
Availability Zones. Each Availability Zone contains one public subnet and one private subnet.
The service runs on Amazon EC2 instances in the private subnets. An Application Load Balancer in the
public subnets is in front of the service. The service needs to communicate with the internet and
does so through two NAT gateways. The service uses Amazon S3 for image storage. The EC2 instances
retrieve approximately 1 TB of data from an S3 bucket each day.
The company has promoted the service as highly secure. A solutions architect must reduce cloud
expenditures as much as possible without compromising the service's security posture or increasing
the time spent on ongoing operations.
Which solution will meet these requirements?
A. Replace the NAT gateways with NAT instances. In the VPC route table, create a route from the
private subnets to the NAT instances.
B. Move the EC2 instances to the public subnets. Remove the NAT gateways.
C. Set up an S3 gateway VPC endpoint in the VPC. Attach an endpoint policy to the endpoint to allow
the required actions on the S3 bucket.
D. Attach an Amazon Elastic File System (Amazon EFS) volume to the EC2 instances. Host the image
on the EFS volume.
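Answer C removes the per-gigabyte NAT gateway data-processing charge for S3 traffic, since gateway endpoints are free and keep traffic on the AWS network. The endpoint policy half of the answer can be sketched as below; the bucket name is a placeholder, and the action list is an assumption about what an image-processing service needs.

```python
# Sketch of option C's S3 gateway endpoint policy: allow only the actions
# the service needs, and only on the image bucket. Names are placeholders.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowImageBucketOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                # Bucket-level and object-level ARNs are both required
                "arn:aws:s3:::image-storage-bucket",
                "arn:aws:s3:::image-storage-bucket/*",
            ],
        }
    ],
}
```

Scoping the policy to one bucket preserves the "highly secure" posture while the S3 traffic bypasses the NAT gateways entirely.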
Which combination of steps should a solutions architect take to meet these requirements? (Select
THREE.)
A. Assess the existing applications by installing AWS Application Discovery Agent on the physical
machines and VMs.
B. Assess the existing applications by installing AWS Systems Manager Agent on the physical machines and VMs.
C. Group servers into applications for migration by using AWS Systems Manager Application
Manager.
D. Group servers into applications for migration by using AWS Migration Hub.
E. Generate recommended instance types and associated costs by using AWS Migration Hub.
F. Import data about server sizes into AWS Trusted Advisor. Follow the recommendations for cost
optimization.
Question: 36
Answer: C
Answer: ADE
Explanation:
This solution meets the requirements by using Application Auto Scaling to automatically increase
capacity during the peak period, which will handle the double the average load. And by purchasing
reserved RCUs and WCUs to match the average load, it will minimize the cost of the table for the rest
of the week when the load is close to the average.
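The cost reasoning in the explanation above can be made concrete with rough arithmetic. The prices and capacity numbers below are illustrative placeholders, not current DynamoDB pricing; only the ratios matter.

```python
# Illustrative cost comparison for the DynamoDB question.
# All prices and capacity figures are made-up placeholders.
HOURS_PER_WEEK = 168
PEAK_HOURS = 4                    # one 4-hour peak per week
avg_wcu, peak_wcu = 1000, 2000    # peak load is double the average
price_per_wcu_hour = 0.00065      # placeholder provisioned-capacity price

# What the company does today: provision for peak all week.
always_peak = peak_wcu * price_per_wcu_hour * HOURS_PER_WEEK

# Option A: provision for the average, auto scale up only during the peak.
autoscaled = (avg_wcu * price_per_wcu_hour * (HOURS_PER_WEEK - PEAK_HOURS)
              + peak_wcu * price_per_wcu_hour * PEAK_HOURS)

savings = 1 - autoscaled / always_peak  # roughly halves the bill
```

Reserved capacity purchased at the average level then discounts the baseline further, which on-demand mode (option B) cannot do for a steady, predictable workload.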
A company recently deployed an application on AWS. The application uses Amazon DynamoDB. The
company measured the application load and configured the RCUs and WCUs on the DynamoDB table
to match the expected peak load. The peak load occurs once a week for a 4-hour period and is
double the average load. The application load is close to the average load for the rest of the week.
The access pattern includes many more writes to the table than reads of the table.
A solutions architect needs to implement a solution to minimize the cost of the table.
Which solution will meet these requirements?
A. Use AWS Application Auto Scaling to increase capacity during the peak period. Purchase reserved
RCUs and WCUs to match the average load.
B. Configure on-demand capacity mode for the table.
C. Configure DynamoDB Accelerator (DAX) in front of the table. Reduce the provisioned read capacity
to match the new peak load on the table.
D. Configure DynamoDB Accelerator (DAX) in front of the table. Configure on-demand capacity mode
for the table.
A solutions architect needs to advise a company on how to migrate its on-premises data processing
application to the AWS Cloud. Currently, users upload input files through a web portal. The web
server then stores the uploaded files on NAS and messages the processing server over a message
queue. Each media file can take up to 1 hour to process. The company has determined that the
number of media files awaiting processing is significantly higher during business hours, with the
number of files rapidly declining after business hours.
What is the MOST cost-effective migration recommendation?
A. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue.
When there are messages in the queue, invoke an AWS Lambda function to pull requests from the
queue and process the files. Store the processed files in an Amazon S3 bucket.
B. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue.
When there are messages in the queue, create a new Amazon EC2 instance to pull requests from the
queue and process the files. Store the processed files in Amazon EFS. Shut down the EC2 instance
after the task is complete.
C. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue.
When there are messages in the queue, invoke an AWS Lambda function to pull requests from the
queue and process the files. Store the processed files in Amazon EFS.
D. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue.
Question: 37
Question: 38
Answer: D
Explanation:
By reducing the number of data nodes in the cluster to 2 and adding UltraWarm nodes to handle the
expected capacity, the company can reduce the cost of running the cluster. Additionally, configuring
the indexes to transition to UltraWarm when OpenSearch Service ingests the data will ensure that
the data is stored in the most cost-effective manner. Finally, transitioning the input data to S3 Glacier
Deep Archive after 1 month by using an S3 Lifecycle policy will ensure that the data is retained for
compliance purposes, while also reducing the ongoing costs.
A company is using Amazon OpenSearch Service to analyze data. The company loads data into an OpenSearch Service cluster with 10 data nodes from an Amazon
S3 bucket that uses S3 Standard storage. The data resides in the cluster for 1 month for read-only
analysis. After 1 month, the company deletes the index that contains the data from the cluster. For
compliance purposes, the company must retain a copy of all input data.
The company is concerned about ongoing costs and asks a solutions architect to recommend a new
solution.
Which solution will meet these requirements MOST cost-effectively?
A. Replace all the data nodes with UltraWarm nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.
B. Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Transition the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy.
C. Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Add cold storage nodes to the cluster. Transition the indexes from UltraWarm to cold storage. Delete the input data from the S3 bucket after 1 month by using an S3 Lifecycle policy.
D. Reduce the number of data nodes in the cluster to 2. Add instance-backed data nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.
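The S3 Lifecycle half of answer B is a single transition rule: move input objects to S3 Glacier Deep Archive 30 days after they are written. A sketch of that rule follows; the prefix and bucket name are placeholders.

```python
# Sketch of option B's S3 Lifecycle rule: input data moves to Glacier
# Deep Archive after 30 days. The prefix is a placeholder.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "input-data-to-deep-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": "input/"},
            "Transitions": [
                # Deep Archive is the cheapest class for rarely read,
                # compliance-retained data
                {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
            ],
        }
    ],
}

# boto3 sketch: s3.put_bucket_lifecycle_configuration(
#     Bucket="input-bucket", LifecycleConfiguration=lifecycle_configuration)
```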
Explanation:
https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/
Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket.
Question: 39
Question: 40
Answer: B
Answer: D
Explanation:
This solution will meet the requirement with the least operational overhead because it directly
denies the creation of the security group inbound rule with 0.0.0.0/0 as the source, which is the
exact requirement. Additionally, it does not require any additional steps or resources such as
invoking a Lambda function or adding a Config rule.
An SCP (Service Control Policy) is a policy that you can use to set fine-grained permissions for your
AWS accounts within your organization. You can use SCPs to set permissions for the root user of an
account and to delegate permissions to IAM users and roles in the accounts. You can use SCPs to set
permissions that allow or deny access to specific services, actions, and resources.
To implement this solution, you would need to create an SCP that denies the
ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is
0.0.0.0/0. This SCP would then be applied to the NonProd OU. This would ensure that any security
group inbound rule that includes 0.0.0.0/0 as the source will be denied, thus meeting the
requirement.
Reference:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_condition-keys.html
A company has 10 accounts that are part of an organization in AWS Organizations. AWS Config is configured in each account. All accounts belong to either the Prod OU or the NonProd OU.
The company has set up an Amazon EventBridge rule in each AWS account to notify an Amazon Simple Notification Service (Amazon SNS) topic when an Amazon EC2 security group inbound rule is created with 0.0.0.0/0 as the source. The company's security team is subscribed to the SNS topic.
For all accounts in the NonProd OU, the security team needs to remove the ability to create a security group inbound rule that includes 0.0.0.0/0 as the source.
Which solution will meet this requirement with the LEAST operational overhead?
A. Modify the EventBridge rule to invoke an AWS Lambda function to remove the security group inbound rule and to publish to the SNS topic. Deploy the updated rule to the NonProd OU.
B. Add the vpc-sg-open-only-to-authorized-ports AWS Config managed rule to the NonProd OU.
C. Configure an SCP to allow the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is not 0.0.0.0/0. Apply the SCP to the NonProd OU.
D. Configure an SCP to deny the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is 0.0.0.0/0. Apply the SCP to the NonProd OU.
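Option D's SCP can be sketched as the policy document below. The use of the aws:SourceIp condition key follows the explanation above; treat the condition as this question's phrasing rather than verified guidance on which key matches a security group rule's CIDR.

```python
# Sketch of option D's SCP, as applied to the NonProd OU.
# The condition mirrors the wording of the question and explanation.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOpenIngressRules",
            "Effect": "Deny",
            "Action": "ec2:AuthorizeSecurityGroupIngress",
            "Resource": "*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "0.0.0.0/0"}
            },
        }
    ],
}
```

An explicit Deny in an SCP cannot be overridden by any IAM policy in the member accounts, which is why this is less operational overhead than reacting to events with Lambda (option A) or merely detecting violations with AWS Config (option B).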
Answer: D
Thank You for trying SAP-C02 PDF Demo
https://www.certifiedumps.com/amazon/sap-c02-dumps.html
[Limited Time Offer] Use coupon "cert20" for an extra 20% discount on the purchase of the PDF file. Test your SAP-C02 preparation with actual exam questions.
Start Your SAP-C02 Preparation