AWS Certified Solutions Architect Exam Guide – Chapter 5

Understand what the Elastic Load Balancing service provides.
Elastic Load Balancing is a highly available service that distributes traffic across Amazon EC2 instances and includes options that provide flexibility and control over incoming requests to Amazon EC2 instances.

Know the types of load balancers the Elastic Load Balancing service provides and when to use each one.
An internet-facing load balancer is, as the name implies, a load balancer that takes requests from clients over the internet and distributes them to Amazon EC2 instances that are registered with the load balancer.

An internal load balancer is used to route traffic to your Amazon EC2 instances in VPCs with private subnets.
An HTTPS load balancer is used when you want to encrypt data between your load balancer and the clients that initiate HTTPS sessions and for connections between the load balancer and your back-end instances.
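As a rough illustration of the distinction, the following boto3 sketch creates one internet-facing and one internal Classic Load Balancer. The load balancer names, subnet IDs, and security group ID are placeholders, and the listener shown is plain HTTP on port 80.

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Internet-facing load balancer (the default scheme) listening on port 80.
elb.create_load_balancer(
    LoadBalancerName="web-public",
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
    Subnets=["subnet-aaaa1111"],             # public subnet (placeholder)
    SecurityGroups=["sg-0123456789abcdef0"],
)

# Internal load balancer for instances in private subnets.
elb.create_load_balancer(
    LoadBalancerName="app-internal",
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
    Subnets=["subnet-bbbb2222"],             # private subnet (placeholder)
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internal",                       # omit for internet-facing
)
```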

Know the types of listeners the Elastic Load Balancing service provides and the use cases and requirements for using each one.
A listener is a process that checks for connection requests. It is configured with a protocol and a port for front-end (client to load balancer) connections and a protocol and a port for back-end (load balancer to back-end instance) connections.
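The sketch below, again using boto3 against a Classic Load Balancer, adds an HTTPS front-end listener that forwards to HTTP on the back-end instances; the load balancer name and certificate ARN are placeholders.

```python
import boto3

elb = boto3.client("elb")

# Add an HTTPS front-end listener (client -> load balancer) that forwards
# to HTTP on port 80 on the back-end instances (load balancer -> instance).
elb.create_load_balancer_listeners(
    LoadBalancerName="web-public",
    Listeners=[{
        "Protocol": "HTTPS",          # front-end protocol
        "LoadBalancerPort": 443,      # front-end port
        "InstanceProtocol": "HTTP",   # back-end protocol
        "InstancePort": 80,           # back-end port
        "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert",
    }],
)
```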

Understand the configuration options for Elastic Load Balancing.
Elastic Load Balancing allows you to configure many aspects of the load balancer, including idle connection timeout, cross-zone load balancing, connection draining, proxy protocol, sticky sessions, and health checks.
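A hedged boto3 sketch of a few of these options on a hypothetical Classic Load Balancer named web-public: idle timeout, cross-zone load balancing, connection draining, and duration-based sticky sessions. The values shown are illustrative, not recommendations.

```python
import boto3

elb = boto3.client("elb")

# Adjust several of the tunables mentioned above on a Classic Load Balancer.
elb.modify_load_balancer_attributes(
    LoadBalancerName="web-public",
    LoadBalancerAttributes={
        "CrossZoneLoadBalancing": {"Enabled": True},
        "ConnectionDraining": {"Enabled": True, "Timeout": 300},  # seconds
        "ConnectionSettings": {"IdleTimeout": 60},                # idle timeout, seconds
    },
)

# Sticky sessions are configured per listener port with a cookie policy.
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="web-public",
    PolicyName="sticky-1h",
    CookieExpirationPeriod=3600,
)
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="web-public", LoadBalancerPort=80, PolicyNames=["sticky-1h"]
)
```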

Know what an Elastic Load Balancing health check is and why it is important.

Elastic Load Balancing supports health checks to test the status of the Amazon EC2 instances behind an Elastic Load Balancing load balancer.
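A minimal boto3 sketch of a health check configuration, assuming a Classic Load Balancer named web-public and a /healthcheck.html page served by each instance; the intervals and thresholds are illustrative.

```python
import boto3

elb = boto3.client("elb")

# Check a lightweight page on each registered instance every 30 seconds.
elb.configure_health_check(
    LoadBalancerName="web-public",
    HealthCheck={
        "Target": "HTTP:80/healthcheck.html",  # protocol:port/path that must return 200
        "Interval": 30,            # seconds between checks
        "Timeout": 5,              # seconds to wait for a response
        "UnhealthyThreshold": 2,   # consecutive failures before marking unhealthy
        "HealthyThreshold": 3,     # consecutive successes before marking healthy again
    },
)
```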

Understand what the Amazon CloudWatch service provides and what use cases there are for using it.

Amazon CloudWatch is a service that you can use to monitor your AWS resources and your applications in real time. With Amazon CloudWatch, you can collect and track metrics, create alarms that send notifications, and make changes to the resources being monitored based on rules you define.
For example, you might choose to monitor CPU utilization to decide when to add or remove Amazon EC2 instances in an application tier. Or, if a particular application-specific metric that is not visible to AWS is the best indicator for assessing your scaling needs, you can perform a PUT request to push that metric into Amazon CloudWatch. You can then use this custom metric to manage capacity.
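Such a custom metric can be pushed with a single PutMetricData call. The boto3 sketch below uses an invented namespace and metric name (MyApp/PendingOrders) purely for illustration; CloudWatch creates the metric on first write.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Push an application-specific metric into a custom namespace.
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "PendingOrders",
        "Dimensions": [{"Name": "Tier", "Value": "web"}],
        "Value": 42,
        "Unit": "Count",
    }],
)
```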

Know the differences between the two types of monitoring, basic and detailed, for Amazon CloudWatch.

Amazon CloudWatch offers basic or detailed monitoring for supported AWS products. Basic monitoring sends data points to Amazon CloudWatch every five minutes for a limited number of preselected metrics at no charge. Detailed monitoring sends data points to Amazon CloudWatch every minute and allows data aggregation for an additional charge. If you want to use detailed monitoring, you must enable it; basic monitoring is the default.
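Enabling detailed monitoring for a running instance is a single API call; the boto3 sketch below assumes a placeholder instance ID.

```python
import boto3

ec2 = boto3.client("ec2")

# Switch an instance from basic (5-minute) to detailed (1-minute) monitoring.
ec2.monitor_instances(InstanceIds=["i-0123456789abcdef0"])

# To revert to basic monitoring:
ec2.unmonitor_instances(InstanceIds=["i-0123456789abcdef0"])
```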

Understand Auto Scaling and why it is an important advantage of the AWS Cloud.

A distinct advantage of deploying applications to the cloud is the ability to launch and then release servers in response to variable workloads. Provisioning servers on demand and then releasing them when they are no longer needed can provide significant cost savings for workloads that are not steady state.

Know when and why to use Auto Scaling.

Auto Scaling is a service that allows you to scale your Amazon EC2 capacity automatically by scaling out and scaling in according to criteria that you define. With Auto Scaling, you can ensure that the number of running Amazon EC2 instances increases during demand spikes or peak demand periods to maintain application performance and decreases automatically during demand lulls or troughs to minimize costs.

Know the supported Auto Scaling plans.

Auto Scaling has several schemes or plans that you can use to control how you want Auto Scaling to perform. The Auto Scaling plans are named Maintain Current Instance Levels, Manual Scaling, Scheduled Scaling, and Dynamic Scaling.
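As one illustration, the boto3 sketch below shows a scheduled scaling action and a manual change of desired capacity for a hypothetical group named web-asg; the schedule and sizes are arbitrary. Dynamic scaling is covered with the scaling policy sketch later in this list.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scheduled scaling: grow the group every weekday morning ahead of known demand.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="weekday-morning-scale-out",
    Recurrence="0 8 * * 1-5",   # cron format, UTC
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Manual scaling: set the desired capacity directly.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg", DesiredCapacity=6, HonorCooldown=True
)
```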

Understand how to build an Auto Scaling launch configuration and an Auto Scaling group and what each is used for.

A launch configuration is the template that Auto Scaling uses to create new instances and is composed of the configuration name, AMI, Amazon EC2 instance type, security group, and instance key pair.
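A boto3 sketch of both pieces follows, assuming placeholder AMI, key pair, security group, and load balancer names; in a VPC you would typically supply VPCZoneIdentifier (subnet IDs) rather than only AvailabilityZones.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configuration: the template Auto Scaling uses for new instances.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    KeyName="my-key-pair",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# Auto Scaling group: references the launch configuration and defines where
# and how many instances to run.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v1",
    MinSize=2,
    MaxSize=8,
    DesiredCapacity=4,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    LoadBalancerNames=["web-public"],   # optional: attach a Classic Load Balancer
)
```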

Know what a scaling policy is and when to use one.

A scaling policy is used by Auto Scaling with CloudWatch alarms to determine when your Auto Scaling group should scale out or scale in. Each CloudWatch alarm watches a single metric and sends messages to Auto Scaling when the metric breaches a threshold that you specify in your policy.
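A boto3 sketch of that pairing for a hypothetical group named web-asg: a simple scale-out policy plus a CloudWatch alarm on average CPU utilization that invokes it. The threshold, period, and names are illustrative.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Scale-out policy: add two instances when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-on-high-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# Alarm that watches average CPU across the group and invokes the policy
# when the threshold is breached for two consecutive periods.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```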

Understand how Elastic Load Balancing, Amazon CloudWatch, and Auto Scaling are used together to provide dynamic scaling.

Elastic Load Balancing, Amazon CloudWatch, and Auto Scaling can be used together to create a highly available application with a resilient architecture on AWS.

Exercises
For assistance in completing the following exercises, refer to the Elastic Load Balancing Developer Guide located at http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elastic-load-balancing.html, the Amazon CloudWatch Developer Guide at http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/WhatIsCloudWatch.html, and the Auto Scaling User Guide at http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html.

EXERCISE 5.1
Create an Elastic Load Balancing Load Balancer
In this exercise, you will use the AWS Management Console to create an Elastic Load Balancing load balancer.

  • Launch an Amazon EC2 instance using an AMI with a web server on it, or install and configure a web server.
  • Create a static page to display and a health check page that returns HTTP 200. Configure the Amazon EC2 instance to accept traffic over port 80.
  • Register the Amazon EC2 instance with the Elastic Load Balancing load balancer, and configure it to use the health check page to evaluate the health of the instance (a scripted sketch of the registration call follows this exercise).
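If you prefer to script the last step rather than use the console, a minimal boto3 sketch of the registration call is shown here; the load balancer name and instance ID are placeholders, and the health check target is configured as in the earlier health-check sketch.

```python
import boto3

elb = boto3.client("elb")

# Register the running instance with the load balancer created for this exercise.
elb.register_instances_with_load_balancer(
    LoadBalancerName="exercise-5-1-lb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)
```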

EXERCISE 5.2
Use an Amazon CloudWatch Metric

  • Launch an Amazon EC2 instance.
  • Use an existing Amazon CloudWatch metric to monitor a value.

EXERCISE 5.3
Create a Custom Amazon CloudWatch Metric

  • Create a custom Amazon CloudWatch metric for memory consumption.
  • Use the CLI to PUT values into the metric.

EXERCISE 5.4
Create a Launch Configuration and Auto Scaling Group

  • Using the AWS Management Console, create a launch configuration using an existing AMI.
  • Create an Auto Scaling group using this launch configuration with a group size of four and spanning two Availability Zones. Do not use a scaling policy. Keep the group at its initial size.
  • Manually terminate an Amazon EC2 instance, and observe Auto Scaling launch a new Amazon EC2 instance.

EXERCISE 5.5
Create a Scaling Policy

  • Create an Amazon CloudWatch metric and alarm for CPU utilization using the AWS Management Console.
  • Using the Auto Scaling group from Exercise 5.4, edit the Auto Scaling group to include a policy that uses the CPU utilization alarm.
  • Drive up CPU utilization on the monitored Amazon EC2 instance(s) to observe Auto Scaling scale out.

EXERCISE 5.6
Create a Web Application That Scales

  • Create a small web application architected with an Elastic Load Balancing load balancer, an Auto Scaling group spanning two Availability Zones that uses an Amazon CloudWatch metric, and an alarm attached to a scaling policy used by the Auto Scaling group.
  • Verify that Auto Scaling is operating correctly by removing instances and driving the metric up and down to force Auto Scaling.

Review Questions

Which of the following are required elements of an Auto Scaling group? (Choose 2 answers)

  • Minimum size
  • Health checks
  • Desired capacity
  • Launch configuration

You have created an Elastic Load Balancing load balancer listening on port 80, and you registered it with a single Amazon Elastic Compute Cloud (Amazon EC2) instance also listening on port 80. A client makes a request to the load balancer with the correct protocol and port for the load balancer. In this scenario, how many connections does the load balancer maintain?

  • 1
  • 2
  • 3
  • 4

How long does Amazon CloudWatch keep metric data?

  • 1 day
  • 2 days
  • 1 week
  • 2 weeks

Which of the following are the minimum required elements to create an Auto Scaling launch configuration?

  • Launch configuration name, Amazon Machine Image (AMI), and instance type
  • Launch configuration name, AMI, instance type, and key pair
  • Launch configuration name, AMI, instance type, key pair, and security group
  • Launch configuration name, AMI, instance type, key pair, security group, and block device mapping

You are responsible for the application logging solution for your company’s existing applications running on multiple Amazon EC2 instances. Which of the following is the best approach for aggregating the application logs within AWS?

  • Amazon CloudWatch custom metrics
  • Amazon CloudWatch Logs Agent
  • An Elastic Load Balancing listener
  • An internal Elastic Load Balancing load balancer

Which of the following must be configured on an Elastic Load Balancing load balancer to accept incoming traffic?

  • A port
  • A network interface
  • A listener
  • An instance

You create an Auto Scaling group in a new region that is configured with a minimum size value of 10, a maximum size value of 100, and a desired capacity value of 50. However, you notice that 30 of the Amazon Elastic Compute Cloud (Amazon EC2) instances within the Auto Scaling group fail to launch. Which of the following is the cause of this behavior?

  • You cannot define an Auto Scaling group larger than 20.
  • The Auto Scaling group maximum value cannot be more than 20.
  • You did not attach an Elastic Load Balancing load balancer to the Auto Scaling group.
  • You have not raised your default Amazon EC2 capacity (20) for the new region.

You want to host multiple Hypertext Transfer Protocol Secure (HTTPS) websites on a fleet of Amazon EC2 instances behind an Elastic Load Balancing load balancer with a single X.509 certificate. How must you configure the Secure Sockets Layer (SSL) certificate so that clients connecting to the load balancer are not presented with a warning when they connect?

  • Create one SSL certificate with a Subject Alternative Name (SAN) value for each website name.
  • Create one SSL certificate with the Server Name Indication (SNI) value checked.
  • Create multiple SSL certificates with a SAN value for each website name.
  • Create SSL certificates for each Availability Zone with a SAN value for each website name.

Your web application front end consists of multiple Amazon Elastic Compute Cloud (Amazon EC2) instances behind an Elastic Load Balancing load balancer. You have configured the load balancer to perform health checks on these Amazon EC2 instances. If an instance fails to pass health checks, which statement will be true?

  • The instance is replaced automatically by the load balancer.
  • The instance is terminated automatically by the load balancer.
  • The load balancer stops sending traffic to the instance that failed its health check.
  • The instance is quarantined by the load balancer for root cause analysis.

In the basic monitoring package for Amazon Elastic Compute Cloud (Amazon EC2), what Amazon CloudWatch metrics are available?

  • Web server visible metrics such as number of failed transaction requests
  • Operating system visible metrics such as memory utilization
  • Database visible metrics such as number of connections
  • Hypervisor visible metrics such as CPU utilization

A cell phone company is running dynamic-content television commercials for a contest. They want their website to handle traffic spikes that come after a commercial airs. The website is interactive, offering personalized content to each visitor based on location, purchase history, and the current commercial airing. Which architecture will configure Auto Scaling to scale out to respond to spikes of demand, while minimizing costs during quiet periods?

  • Set the minimum size of the Auto Scaling group so that it can handle high traffic volumes without needing to scale out.
  • Create an Auto Scaling group large enough to handle peak traffic loads, and then stop some instances. Configure Auto Scaling to scale out when traffic increases using the stopped instance, so new capacity will come online quickly.
  • Configure Auto Scaling to scale out as traffic increases. Configure the launch configuration to start new instances from a preconfigured Amazon Machine Image (AMI).
  • Use Amazon CloudFront and Amazon Simple Storage Service (Amazon S3) to cache changing content, with the Auto Scaling group set as the origin. Configure Auto Scaling to have sufficient instances necessary to initially populate CloudFront and Amazon ElastiCache, and then scale in after the cache is fully populated.

For an application running in the ap-northeast-1 region with three Availability Zones (ap-northeast-1a, ap-northeast-1b, and ap-northeast-1c), which instance deployment provides high availability for the application that normally requires nine running Amazon Elastic Compute Cloud (Amazon EC2) instances but can run on a minimum of 65 percent capacity while Auto Scaling launches replacement instances in the remaining Availability Zones?

  • Deploy the application on four servers in ap-northeast-1a and five servers in ap-northeast-1b, and keep five stopped instances in ap-northeast-1a as reserve.
  • Deploy the application on three servers in ap-northeast-1a, three servers in ap-northeast-1b, and three servers in ap-northeast-1c.
  • Deploy the application on six servers in ap-northeast-1b and three servers in ap-northeast-1c.
  • Deploy the application on nine servers in ap-northeast-1b, and keep nine stopped instances in ap-northeast-1a as reserve.

Which of the following are characteristics of the Auto Scaling service on AWS? (Choose 3 answers)

  • Sends traffic to healthy instances
  • Responds to changing conditions by adding or terminating Amazon Elastic Compute Cloud (Amazon EC2) instances
  • Collects and tracks metrics and sets alarms
  • Delivers push notifications
  • Launches instances from a specified Amazon Machine Image (AMI)
  • Enforces a minimum number of running Amazon EC2 instances

Why is the launch configuration referenced by the Auto Scaling group instead of being part of the Auto Scaling group?

  • It allows you to change the Amazon Elastic Compute Cloud (Amazon EC2) instance type and Amazon Machine Image (AMI) without disrupting the Auto Scaling group.
  • It facilitates rolling out a patch to an existing set of instances managed by an Auto Scaling group.
  • It allows you to change security groups associated with the instances launched without having to make changes to the Auto Scaling group.
  • All of the above
  • None of the above

An Auto Scaling group may use: (Choose 2 answers)

  • On-Demand Instances
  • Stopped instances
  • Spot Instances
  • On-premises instances
  • Already running instances if they use the same Amazon Machine Image (AMI) as the Auto Scaling group’s launch configuration and are not already part of another Auto Scaling group

Amazon CloudWatch supports which types of monitoring plans? (Choose 2 answers)

  • Basic monitoring, which is free
  • Basic monitoring, which has an additional cost
  • Ad hoc monitoring, which is free
  • Ad hoc monitoring, which has an additional cost
  • Detailed monitoring, which is free
  • Detailed monitoring, which has an additional cost

Elastic Load Balancing health checks may be: (Choose 3 answers)

  • A ping
  • A key pair verification
  • A connection attempt
  • A page request
  • An Amazon Elastic Compute Cloud (Amazon EC2) instance status check

When an Amazon Elastic Compute Cloud (Amazon EC2) instance registered with an Elastic Load Balancing load balancer using connection draining is deregistered or unhealthy, which of the following will happen? (Choose 2 answers)

  • Immediately close all existing connections to that instance.
  • Keep the connections open to that instance, and attempt to complete in-flight requests.
  • Redirect the requests to a user-defined error page like “Oops, this is embarrassing” or “Under Construction.”
  • Forcibly close all connections to that instance after a timeout period.
  • Leave the connections open as long as the load balancer is running.

Elastic Load Balancing supports which of the following types of load balancers? (Choose 3 answers)

  • Cross-region
  • Internet-facing
  • Interim
  • Itinerant
  • Internal
  • Hypertext Transfer Protocol Secure (HTTPS) using Secure Sockets Layer (SSL)

Auto Scaling supports which of the following plans for Auto Scaling groups? (Choose 3 answers)

  • Predictive
  • Manual
  • Preemptive
  • Scheduled
  • Dynamic
  • End-user request driven
  • Optimistic