Questionnaire Part 2: Certified Solutions Architect - Associate Level

Here is the continuation of our sample questions for the AWS Certified Solutions Architect - Associate exam; if you have not yet attempted Part 1, start there. Answers with explanations are provided after each question. Cloud computing is the on-demand, pay-as-you-go delivery of IT resources over the Internet, and Amazon Web Services (AWS) is the world's leading cloud platform. Organizations are moving traditionally in-house services to the cloud to reap benefits like reduced costs and increased efficiency, so those with skills and certifications in the latest cloud computing solutions, especially those from AWS, will enjoy a wide range of job opportunities and top-tier salaries.

AWS Certification can help you advance your expertise. Once AWS Certified, you’ll be eligible for perks that help you show off your achievements and keep learning. AWS certifications demonstrate the skills to design and manage software solutions on Amazon's ultra-popular cloud platform.

Try to solve them and see how many questions you can answer easily.


Question: You are setting up a VPC and you need to set up a public subnet within that VPC. Which of the following requirements must be met for this subnet to be considered a public subnet?

  • Subnet's traffic is not routed to an internet gateway but has its traffic routed to a virtual private gateway
  • Subnet's traffic is routed to an internet gateway
  • Subnet's traffic is not routed to an internet gateway
  • None of these answers can be considered a public subnet

Answer: B
Explanation
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC. You can configure your VPC: you can select its IP address range, create subnets, and configure route tables, network gateways, and security settings. A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a subnet that you select. Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won't be connected to the internet. If a subnet's traffic is routed to an internet gateway, the subnet is known as a public subnet. If a subnet doesn't have a route to the internet gateway, the subnet is known as a private subnet. If a subnet doesn't have a route to the internet gateway but has its traffic routed to a virtual private gateway, the subnet is known as a VPN-only subnet.
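
To make this concrete, here is a minimal boto3 sketch (the region, CIDR ranges, and IDs are illustrative) of what turns a subnet into a public subnet: a route table entry that sends 0.0.0.0/0 to an internet gateway.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # illustrative region

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
subnet_id = subnet["Subnet"]["SubnetId"]

igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# The route to the internet gateway is what makes this a public subnet.
rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id,
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```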

Question: Can you specify the security group that you created for a VPC when you launch an instance in EC2-Classic?

  • No, you can specify the security group created for EC2-Classic when you launch a VPC instance
  • No
  • Yes
  • No, you can specify the security group created for EC2-Classic to a Non-VPC based instance only

Answer: B
Explanation
If you're using EC2-Classic, you must use security groups created specifically for EC2-Classic. When you launch an instance in EC2-Classic, you must specify a security group in the same region as the instance. You can't specify a security group that you created for a VPC when you launch an instance in EC2-Classic.
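
The distinction also shows up in the launch API. A hedged sketch, with illustrative AMI, subnet, and security group IDs: EC2-Classic launches took security group names, while VPC launches take security group IDs scoped to that VPC.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# EC2-Classic (now retired) took security *group names* at launch:
# ec2.run_instances(ImageId="ami-12345678", MinCount=1, MaxCount=1,
#                   SecurityGroups=["my-classic-sg"])

# A VPC instance takes security *group IDs* belonging to that VPC; a VPC
# group cannot be referenced from an EC2-Classic launch, and vice versa.
ec2.run_instances(
    ImageId="ami-12345678",           # illustrative AMI ID
    MinCount=1, MaxCount=1,
    SubnetId="subnet-0abc1234",       # illustrative subnet in the VPC
    SecurityGroupIds=["sg-0abc1234"], # group created for the VPC
)
```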

Question: While using EC2 GET requests as URLs, the ______ is the URL that serves as the entry point for the web service.

  • Token
  • End-point
  • Action
  • None of these

Answer: B
Explanation
The endpoint is the URL that serves as the entry point for the web service.
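
For example, a sketch of how an EC2 Query API GET URL is assembled from the endpoint (request signing, which real calls require, is omitted for brevity):

```python
# The regional endpoint is the entry point; Action selects the operation.
endpoint = "https://ec2.us-east-1.amazonaws.com"  # illustrative region
action = "DescribeRegions"
url = f"{endpoint}/?Action={action}&Version=2016-11-15"
print(url)
```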
 
Question: You have been asked to build a data warehouse using Amazon Redshift. You know a little about it, including that it is a SQL data warehouse solution that uses industry-standard ODBC and JDBC connections and PostgreSQL drivers. However, you are not sure what sort of storage it uses for database tables. What sort of storage does Amazon Redshift use for database tables?

  • InnoDB Tables
  • NDB data storage
  • Columnar data storage
  • NDB CLUSTER Storage

Answer: C
Explanation
Amazon Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing, columnar data storage, and very efficient, targeted data compression encoding schemes. Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk.
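
A toy illustration, not Redshift internals: summing a single column over a columnar layout touches only that column's data, whereas a row layout scans every field of every row.

```python
rows = [
    {"id": 1, "price": 9.99, "region": "us-east-1"},
    {"id": 2, "price": 4.50, "region": "eu-west-1"},
    {"id": 3, "price": 7.25, "region": "us-east-1"},
]

# Row storage: every field of every row is read to total one column.
total_row = sum(r["price"] for r in rows)

# Columnar storage: the same table kept as one array per column; an
# analytic query reads just the "price" array, which also compresses well.
columns = {
    "id": [1, 2, 3],
    "price": [9.99, 4.50, 7.25],
    "region": ["us-east-1", "eu-west-1", "us-east-1"],
}
total_col = sum(columns["price"])
assert total_row == total_col
```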

Question: You are checking the workload on some of your General Purpose (SSD) and Provisioned IOPS (SSD) volumes and it seems that the I/O latency is higher than you require. You should probably check your ______ to make sure that your application is not trying to drive more IOPS than you have provisioned.

  • Amount of IOPS that are available
  • Acknowledgment from the storage subsystem
  • Average queue length
  • The time it takes for the I/O operation to complete

Answer: C
Explanation
In EBS, workload demand plays an important role in getting the most out of your General Purpose (SSD) and Provisioned IOPS (SSD) volumes. In order for your volumes to deliver the number of IOPS that are available, they need to have enough I/O requests sent to them. There is a relationship between the demand on the volumes, the amount of IOPS that are available to them, and the latency of the request (the amount of time it takes for the I/O operation to complete). Latency is the true end-to-end client time of an I/O operation; in other words, when the client sends an I/O request, it is how long it takes to get an acknowledgment from the storage subsystem that the I/O read or write is complete. If your I/O latency is higher than you require, check your average queue length to make sure that your application is not trying to drive more IOPS than you have provisioned. You can maintain high IOPS while keeping latency down by maintaining a low average queue length (which is achieved by provisioning more IOPS for your volume).
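
One practical way to check, sketched with boto3 against an illustrative volume ID, is to pull the average VolumeQueueLength metric for the volume from CloudWatch:

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeQueueLength",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0abc1234"}],  # illustrative
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                # 5-minute buckets
    Statistics=["Average"],
)
# A persistently high average queue length suggests the application is
# driving more IOPS than the volume has provisioned.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```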

Question: Which of the below-mentioned options is not available when an instance is launched by Auto Scaling with EC2-Classic?

  • Public IP
  • Elastic IP
  • Private DNS
  • Private IP

Answer: B
Explanation
Auto Scaling supports both EC2-Classic and EC2-VPC. When an instance is launched in EC2-Classic, it will have a public IP and DNS as well as a private IP and DNS, but an Elastic IP address is not assigned automatically; it must be allocated and associated separately.

Question: You have been given a scope to deploy some AWS infrastructure for a large organization. The requirements are that you will have a lot of EC2 instances but may need to add more when the average utilization of your Amazon EC2 fleet is high and conversely remove them when CPU utilization is low. Which AWS services would be best to use to accomplish this?

  • Auto Scaling, Amazon CloudWatch, and AWS Elastic Beanstalk
  • Auto Scaling, Amazon CloudWatch, and Elastic Load Balancing
  • Amazon CloudFront, Amazon CloudWatch, and Elastic Load Balancing
  • AWS Elastic Beanstalk, Amazon CloudWatch, and Elastic Load Balancing

Answer: B
Explanation
Auto Scaling enables you to follow the demand curve for your applications closely, reducing the need to manually provision Amazon EC2 capacity in advance. For example, you can set a condition to add new Amazon EC2 instances in increments to the Auto Scaling group when the average utilization of your Amazon EC2 fleet is high; and similarly, you can set a condition to remove instances in the same increments when CPU utilization is low. If you have predictable load changes, you can set a schedule through Auto Scaling to plan your scaling activities. You can use Amazon CloudWatch to send alarms to trigger scaling activities and Elastic Load Balancing to help distribute traffic to your instances within Auto Scaling groups. Auto Scaling enables you to run your Amazon EC2 fleet at optimal utilization.
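
A minimal sketch of that wiring, with illustrative group and policy names: a CloudWatch CPU alarm triggers an Auto Scaling policy that adds one instance.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Scaling policy: add one instance each time the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # illustrative group name
    PolicyName="scale-out-on-high-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Alarm: fire when the fleet's average CPU stays above 70% for 10 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

A mirror-image policy and alarm (ScalingAdjustment=-1, a lower threshold) would remove instances when CPU utilization is low.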

Question: You are building infrastructure for a data warehousing solution and an extra request has come through that there will be a lot of business reporting queries running all the time and you are not sure if your current DB instance will be able to handle it. What would be the best solution for this?

  • DB Parameter Groups
  • Read Replicas
  • Multi-AZ DB Instance deployment
  • Database Snapshots

Answer: B
Explanation
Read Replicas make it easy to take advantage of MySQL's built-in replication functionality to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. There are a variety of scenarios where deploying one or more Read Replicas for a given source DB Instance may make sense. Common reasons for deploying a Read Replica include: scaling beyond the compute or I/O capacity of a single DB Instance for read-heavy database workloads, where excess read traffic can be directed to one or more Read Replicas; serving read traffic while the source DB Instance is unavailable (if your source DB Instance cannot take I/O requests, e.g. due to I/O suspension for backups or scheduled maintenance, you can direct read traffic to your Read Replica(s); for this use case, keep in mind that the data on the Read Replica may be "stale" since the source DB Instance is unavailable); and business reporting or data warehousing scenarios, where you may want business reporting queries to run against a Read Replica rather than your primary, production DB Instance.
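
A hedged boto3 sketch with illustrative identifiers: create a Read Replica of a MySQL source instance so reporting queries can be pointed at the replica's own endpoint.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reporting-replica-1",      # illustrative name
    SourceDBInstanceIdentifier="production-mysql",   # illustrative source
    DBInstanceClass="db.r5.large",
)
# Once available, the replica exposes its own endpoint for read traffic:
# rds.describe_db_instances(DBInstanceIdentifier="reporting-replica-1")
```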

Question: In DynamoDB, can you use IAM to grant access to Amazon DynamoDB resources and API actions?

  • In DynamoDB there is no need to grant access
  • It depends on the type of access
  • No
  • Yes

Answer: D
Explanation
Amazon DynamoDB integrates with AWS Identity and Access Management (IAM). You can use AWS IAM to grant access to Amazon DynamoDB resources and API actions. To do this, you first write an AWS IAM policy, which is a document that explicitly lists the permissions you want to grant. You then attach that policy to an AWS IAM user or role.
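
A minimal sketch, assuming an illustrative table ARN, account ID, and user name, of such a policy granting read-only access to one DynamoDB table:

```python
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    }],
}

# Create the managed policy, then attach it to a user (or a role).
policy = iam.create_policy(
    PolicyName="OrdersTableReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_user_policy(
    UserName="reporting-user",
    PolicyArn=policy["Policy"]["Arn"],
)
```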

Question: Much of your company's data does not need to be accessed often and can take several hours for retrieval, so it's stored on Amazon Glacier. However, someone within your organization has expressed concern that his data is more sensitive than the other data, and is wondering whether the high level of encryption that he knows is used on S3 is also used on the much cheaper Glacier service. Which of the following statements would be most applicable regarding this concern?

  • There is no encryption on Amazon Glacier, that's why it is cheaper
  • Amazon Glacier automatically encrypts the data using AES-128, a weaker encryption method than Amazon S3 uses, but you can change it to AES-256 if you are willing to pay more
  • Amazon Glacier automatically encrypts the data using AES-256, the same as Amazon S3
  • Amazon Glacier automatically encrypts the data using AES-128, a weaker encryption method than Amazon S3 uses

Answer: C
Explanation
Like Amazon S3, the Amazon Glacier service provides low-cost, secure, and durable storage. But where S3 is designed for rapid retrieval, Glacier is meant to be used as an archival service for data that is not accessed often, and for which retrieval times of several hours are suitable. Amazon Glacier automatically encrypts the data using AES-256 and stores it durably in an immutable form. Amazon Glacier is designed to provide average annual durability of 99.999999999% for an archive. It stores each archive in multiple facilities and on multiple devices. Unlike traditional systems, which can require laborious data verification and manual repair, Glacier performs regular, systematic data integrity checks and is built to be automatically self-healing.

Question: Your EBS volumes do not seem to be performing as expected and your team leader has requested you look into improving their performance. Which of the following is not a true statement relating to the performance of your EBS volumes?

  • Frequent snapshots provide a higher level of data durability and they will not degrade the performance of your application while the snapshot is in progress
  • General Purpose (SSD) and Provisioned IOPS (SSD) volumes have a throughput limit of 128 MB/s per volume
  • There is a relationship between the maximum performance of your EBS volumes, the amount of I/O you are driving to them, and the amount of time it takes for each transaction to complete
  • There is a 5 to 50 percent reduction in IOPS when you first access each block of data on a newly created or restored EBS volume

Answer: A
Explanation
Several factors can affect the performance of Amazon EBS volumes, such as instance configuration, I/O characteristics, workload demand, and storage configuration. Frequent snapshots provide a higher level of data durability, but they may slightly degrade the performance of your application while the snapshot is in progress. This trade-off becomes critical when you have data that changes rapidly. Whenever possible, plan for snapshots to occur during off-peak times in order to minimize workload impact.
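
A small sketch, with an illustrative volume ID, of taking a snapshot in an off-peak window; in practice the scheduling itself would usually be handled by Amazon Data Lifecycle Manager or a scheduled EventBridge rule rather than a hand-run script.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Run this during off-peak hours to minimize the snapshot's impact on
# a rapidly changing volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0abc1234",               # illustrative volume ID
    Description="Nightly off-peak snapshot",
)
print(snapshot["SnapshotId"], snapshot["State"])
```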

Question: You've created your first load balancer and have registered your EC2 instances with it. Elastic Load Balancing routinely performs health checks on all the registered EC2 instances and automatically distributes incoming requests sent to the DNS name of your load balancer across your registered, healthy EC2 instances. By default, the load balancer uses the ______ protocol for checking the health of your instances.

  • HTTPS
  • HTTP
  • ICMP
  • IPv6

Answer: B
Explanation
In Elastic Load Balancing, a health check configuration uses information such as the protocol, ping port, ping path (URL), response timeout period, and health check interval to determine the health state of the instances registered with the load balancer. Currently, HTTP on port 80 is the default health check.
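
A minimal boto3 sketch, with an illustrative load balancer name and ping path, of configuring a Classic Load Balancer health check in that default HTTP-on-port-80 style:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

elb.configure_health_check(
    LoadBalancerName="my-load-balancer",  # illustrative name
    HealthCheck={
        "Target": "HTTP:80/index.html",   # protocol:port/ping path
        "Interval": 30,                   # seconds between checks
        "Timeout": 5,                     # seconds to wait for a response
        "UnhealthyThreshold": 2,          # failures before marking unhealthy
        "HealthyThreshold": 10,           # successes before marking healthy
    },
)
```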
 
Question: A major finance organization has engaged your company to set up a large data mining application. Using AWS, you decide the best service for this is Amazon Elastic MapReduce (EMR), which you know uses Hadoop. Which of the following statements best describes Hadoop?

  • Hadoop is 3rd-party software which can be installed using an AMI
  • Hadoop is an open-source Python web framework
  • Hadoop is an open-source Java software framework
  • Hadoop is an open-source JavaScript framework

Answer: C
Explanation
Amazon EMR uses Apache Hadoop as its distributed data processing engine. Hadoop is an open-source Java software framework that supports data-intensive distributed applications running on large clusters of commodity hardware. Hadoop implements a programming model named "MapReduce," where the data is divided into many small fragments of work, each of which may be executed on any node in the cluster. This framework has been widely used by developers, enterprises, and startups and has proven to be a reliable software platform for processing up to petabytes of data on clusters of thousands of commodity machines.
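
A toy word count in the MapReduce style, to make the model concrete (on EMR the map and reduce fragments would run across cluster nodes rather than in one process):

```python
from collections import defaultdict

documents = ["the quick brown fox", "the lazy dog", "the quick dog"]

# Map phase: each fragment of input emits intermediate (key, value) pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: the framework groups intermediate pairs by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: combine each group into a final result.
counts = {word: sum(values) for word, values in grouped.items()}
print(counts)  # e.g. {'the': 3, 'quick': 2, ...}
```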

Question: In Amazon EC2 Container Service, are other container types supported?

  • Yes, EC2 Container Service supports any container service you need
  • Yes, EC2 Container Service also supports Microsoft container service
  • No, Docker is the only container platform supported by EC2 Container Service presently
  • Yes, EC2 Container Service supports Microsoft container service and OpenStack

Answer: C
Explanation
Docker is the only container platform supported by Amazon EC2 Container Service at present.
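
A hedged boto3 sketch with an illustrative family name and image: ECS task definitions describe Docker containers, which are the unit ECS schedules.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="web-app",                  # illustrative task family
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",       # any Docker image from a registry
        "memory": 256,
        "portMappings": [{"containerPort": 80, "hostPort": 80}],
    }],
)
```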

Question: Which of the following is a fast, flexible, fully managed push messaging service?

  • Amazon SNS
  • Amazon SES
  • Amazon SQS
  • Amazon FPS

Answer: A
Explanation
Amazon Simple Notification Service (Amazon SNS) is a fast, flexible, fully managed push messaging service. Amazon SNS makes it simple and cost-effective to push to mobile devices such as iPhone, iPad, Android, Kindle Fire, and internet-connected smart devices, as well as pushing to other distributed services.
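
A minimal boto3 sketch, assuming an illustrative topic ARN: publishing one message to an SNS topic fans it out to every subscribed endpoint (mobile push, SQS, email, HTTP, and so on).

```python
import boto3

sns = boto3.client("sns", region_name="us-east-1")

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",  # illustrative
    Subject="Order shipped",
    Message="Order #1234 left the warehouse.",
)
```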
