You work for a gaming company that has built a serverless application on AWS using Lambda, API Gateway and DynamoDB. The company releases a new version of the Lambda function and the application stops working. You need to get the application back online as fast as possible. What should you do?
- Roll your Lambda function back to the previous version. (Ans)
- Create a CloudFormation template of the environment. Deploy this template to a separate region and then redirect Route 53 to the new region.
- The new function has some dependencies not available to Lambda. Redeploy the application on EC2 and put the EC2 instances behind a Network Load Balancer.
- DynamoDB is not serverless and is causing the error. Migrate your database to RDS and redeploy the Lambda function.
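To make the rollback concrete: if the application invokes the function through an alias, rolling back is a single UpdateAlias call that points the alias at the last known-good published version. The sketch below only builds the request parameters; the function name, alias name, and version are hypothetical.

```python
# Sketch: roll a Lambda alias back to a known-good published version.
# Assumes traffic is routed through an alias (here "live") rather than
# $LATEST. All names and version numbers are hypothetical examples.

def rollback_alias_params(function_name, alias, previous_version):
    """Build the parameters for a lambda:UpdateAlias call that points
    the alias back at the previous published version."""
    return {
        "FunctionName": function_name,
        "Name": alias,
        "FunctionVersion": previous_version,
    }

params = rollback_alias_params("order-processor", "live", "3")
# With boto3, this would be applied as:
#   boto3.client("lambda").update_alias(**params)
```

Because published Lambda versions are immutable, repointing the alias takes effect immediately, which is why it is the fastest recovery option here.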
You have created a serverless application which converts text into speech using a combination of S3, API Gateway, Lambda, Polly, DynamoDB and SNS. Your users complain that only some text is being converted, whereas longer pieces of text do not get converted. What could be the cause of this problem?
- The AWS X-Ray service is interfering with the application and should be disabled.
- Your Lambda function needs a longer execution time. You should change this to the maximum of 300 seconds. (Ans)
- You’ve placed your DynamoDB table in a single Availability Zone which is currently down, causing an outage.
- Polly has built-in censorship, so if you try and send it text that is deemed offensive, it will not generate an MP3.
You have created a simple serverless website using S3, Lambda, API Gateway and DynamoDB. Your website will process the contact details of your customers, predict an expected delivery date for their order and store the order in DynamoDB. You test the website before deploying it into production and you notice that although the page executes and the Lambda function is triggered, it is unable to write to DynamoDB. What could be the cause of this issue?
- The Availability Zone that DynamoDB is hosted in is down.
- The Availability Zone that Lambda is hosted in is down.
- Your Lambda function does not have sufficient Identity and Access Management (IAM) permissions to write to DynamoDB. (Ans)
- You have written your function in Python, which is not supported as a runtime environment for Lambda.
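For illustration, a minimal IAM policy that would grant the Lambda execution role write access to one DynamoDB table might look like the sketch below. The table name, region, and account ID are hypothetical placeholders, not values from the question.

```python
import json

# Hypothetical minimal IAM policy for the Lambda execution role,
# scoped to write actions on a single DynamoDB table ("Orders" is
# an assumed example name, as are the region and account ID).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
        }
    ],
}

# Serialize to the JSON form you would attach to the role.
policy_json = json.dumps(policy, indent=2)
```

Without a statement like this attached to its execution role, the function runs but every write to the table is denied, which matches the symptom in the question.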
You have created an application using serverless architecture with Lambda, API Gateway, S3 and DynamoDB. Your boss asks you to do a major upgrade to API Gateway, and you do so and deploy it to production. Unfortunately something has gone wrong and now your application is offline. What should you do to bring your application back up as quickly as possible?
- Restore your previous API Gateway configuration using an EBS snapshot.
- Roll back your API Gateway to the previous stage. (Ans)
- Restart API Gateway for the new changes to take effect.
- Delete the existing API Gateway.
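As a hedged sketch of the rollback mechanics: in the REST API Gateway API, a stage points at a deployment, so reverting means patching the stage's `deploymentId` back to the previous deployment. The API ID, stage name, and deployment ID below are hypothetical.

```python
# Sketch: point an API Gateway stage back at an earlier deployment.
# The REST API ID, stage name, and deployment ID are hypothetical
# placeholders for illustration only.

def rollback_stage_params(rest_api_id, stage_name, previous_deployment_id):
    """Build parameters for an apigateway:UpdateStage call that
    replaces the stage's deploymentId with a known-good one."""
    return {
        "restApiId": rest_api_id,
        "stageName": stage_name,
        "patchOperations": [
            {
                "op": "replace",
                "path": "/deploymentId",
                "value": previous_deployment_id,
            }
        ],
    }

params = rollback_stage_params("a1b2c3d4e5", "prod", "dep0ld")
# With boto3: boto3.client("apigateway").update_stage(**params)
```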
You have an internal API that you use on your corporate network. Your company has decided to go all in on AWS to reduce their data center footprint. They will need to leverage their existing API within AWS. What is the most efficient way to do this?
- Use the Swagger Importer tool to import your API into API Gateway. (Ans)
- Replicate your API to API Gateway using the API Replication Master.
- Recreate the API manually.
- Use the AWS API Import/Export feature of AWS Storage Gateway.
Which of the following needs a custom CloudWatch metric to monitor?
- CPU utilization of an Amazon EC2 instance
- Disk usage activity of the ephemeral volumes of an Amazon EC2 instance
- Disk full percentage of an Elastic Block Store (EBS) volume (Ans)
- Disk usage activity of an EBS volume attached to an Amazon EC2 instance
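Since CloudWatch has no built-in disk-full metric, a script on the instance has to measure usage and publish it as a custom metric. The sketch below builds a PutMetricData payload; the namespace, metric name, and dimension are assumed example values, not AWS defaults.

```python
# Sketch: publish a disk-full percentage as a custom CloudWatch metric.
# The namespace, metric name, and dimension below are assumptions for
# illustration, not AWS-provided defaults.

def disk_metric_params(instance_id, percent_used):
    """Build the payload for a cloudwatch:PutMetricData call."""
    return {
        "Namespace": "Custom/System",
        "MetricData": [
            {
                "MetricName": "DiskSpaceUtilization",
                "Dimensions": [
                    {"Name": "InstanceId", "Value": instance_id},
                ],
                "Unit": "Percent",
                "Value": percent_used,
            }
        ],
    }

params = disk_metric_params("i-0abc1234def56789", 87.5)
# With boto3: boto3.client("cloudwatch").put_metric_data(**params)
```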
An instance status check checks what?
- Checks the VPC
- Checks the EC2 instance (Ans)
- Checks the EC2 Host
- Checks the weather
You have a web application which is using Auto Scaling and Elastic Load Balancing. You want to monitor the application to make sure that it maintains a good customer experience, defined by how long it takes to load the application for the end user in their browser.
What metric in Amazon CloudWatch can best be used for this?
- RequestCount reported by the ELB
- Aggregate CPU Utilization for the web tier
- Aggregate Networking for the web tier
- Latency reported by the Elastic Load Balancer (ELB) (Ans)
As your web application has increased in popularity, reports of performance issues have grown. The current configuration initiates scaling actions based on average CPU utilization; however, during reports of slowness, CloudWatch graphs have shown that average CPU utilization remains steady at 30%, well below the alarm threshold of 55%. Your developers have discovered that performance degradation occurs on an instance when it is processing more than 300 threads, and that this is due to the way the application is programmed. What is the best way to ensure that your application scales to match demand?
- Launch 3 to 7 additional instances outside of the Auto Scaling group to handle the additional load.
- Empirically determine the expected CPU use for 300 concurrent sessions and adjust the CloudWatch alarm threshold to that CPU use.
- Populate a custom CloudWatch metric for concurrent sessions and initiate scaling actions based on that metric instead of on CPU use. (Ans)
- Add a script to each instance to detect the number of concurrent sessions. If the number of sessions remains over 300 for five minutes, have the instance increase the desired capacity of the Auto Scaling group by one.
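Once the custom metric is being published, scaling on it means alarming on that metric rather than on CPU. A sketch of the alarm definition follows; the namespace, metric name, and scaling-policy ARN are hypothetical placeholders.

```python
# Sketch: a CloudWatch alarm on a custom "ConcurrentSessions" metric
# that fires a scale-out policy. The namespace, metric name, and
# policy ARN are hypothetical examples, not values from the question.
alarm = {
    "AlarmName": "HighConcurrentSessions",
    "Namespace": "Custom/App",              # assumed custom namespace
    "MetricName": "ConcurrentSessions",     # assumed custom metric
    "Statistic": "Average",
    "Period": 60,                           # evaluate each minute
    "EvaluationPeriods": 5,
    "Threshold": 300,                       # the degradation point
    "ComparisonOperator": "GreaterThanThreshold",
    # Hypothetical scale-out policy ARN to invoke when breached:
    "AlarmActions": [
        "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"
    ],
}
# With boto3: boto3.client("cloudwatch").put_metric_alarm(**alarm)
```

Tying the alarm to the 300-session threshold the developers identified is what makes scaling track real demand instead of the misleading 30% CPU figure.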
By default, CloudWatch monitoring for EC2 covers CPU, disk, network and status checks. True or false?
- True (Ans)
Your EBS volume status check is showing a warning. What does this mean?
- Your volume is degraded or severely degraded. (Ans)
- Your volume is stalled or not available.
- There is insufficient data.
- Your volume is performing as normal, but may need pre-warming.
For custom CloudWatch metrics, what is the minimum granularity in terms of time that CloudWatch can monitor?
- 5 minutes
- 3 minutes
- 2 minutes
- 1 minute (Ans)
- 1 second
A system status check checks what?
- Checks the firewall
- Checks the host (Ans)
- Checks the virtual machine
- Checks the VPC
You have designed a CloudFormation script to automatically deploy a database server running on EC2 with an attached database volume. This CloudFormation script will run automatically when a predefined event takes place. The database volume must have Provisioned IOPS and cannot have any kind of performance degradation after being deployed. What should you do to achieve this?
- Design the CloudFormation script to attach the database volume using S3, rather than EBS.
- Design the CloudFormation script to use MongoDB, which is designed for performance and is much better than any other database engine out there.
- Using a combination of CloudFormation and Python scripting, pre-warm the EBS volumes after the EBS volume has been deployed.
- You should not be using CloudFormation. Instead it would be better to script this using CodeDeploy.
- Test the CloudFormation script several times, and load-test it to a value matching the anticipated maximum peak load. (Ans)
Your instance status check shows a failure and you are unable to connect to your instance. What should you do?
- Raise a ticket to AWS support
- Restart the instance (Ans)
- Terminate the instance and then delete your VPC
- Stop the instance
Your system status check has failed. What should you do to troubleshoot the issue?
- Contact AWS support
- Restart the instance
- Stop the instance and then start it again. (Ans)
- Terminate the instance and then delete your VPC.
You are planning on deploying a production database to EC2 and need to choose the best storage type. You anticipate that at peak you will need 20,000 IOPS, with an average of 8,000–10,000 IOPS. What storage medium should you choose?
- Magnetic Storage
- General Purpose SSD
- Provisioned IOPS (Ans)
You are running your production MySQL database on an independent EBS volume and you are fast approaching an average of 3,000 IOPS. You have decided to migrate your database to an EBS volume with Provisioned IOPS. Your key users only use the database between 9am and 6pm, so you can afford some downtime out of hours, but not during the working day. Which is the best option below to achieve this migration?
- Choose a suitable out-of-hours time. Stop the MySQL service. Take a snapshot of the EBS volume where the MySQL database is running. Detach and then delete the old database volume. Restore the snapshot to a new volume running on magnetic storage.
- Take a snapshot of both the root device volume and the database volume at midday. Once the snapshot is complete, terminate the EC2 instance and the database EBS volume. Restore the root device volume and EC2 instance using Provisioned IOPS SSD drives for both volumes.
- Choose a suitable out-of-hours time. Stop the MySQL service. Take a snapshot of the EBS volume where the MySQL database is running. Detach and then delete the old database volume. Restore the snapshot to a new volume running on Provisioned IOPS. (Ans)
- Choose a suitable out-of-hours time. Stop the MySQL service. Move the database to S3. Restart the MySQL service, but set the configuration so that it addresses your new bucket s3://mydatabasebucket.
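A sketch of the two API calls behind the snapshot-and-restore answer: snapshot the old data volume, then create a new Provisioned IOPS (io1) volume from that snapshot. The volume and snapshot IDs, Availability Zone, and IOPS figure are hypothetical examples.

```python
# Sketch of the migration's snapshot/restore step. All IDs, the
# Availability Zone, and the IOPS figure are hypothetical examples.

# Step 1: snapshot the existing data volume (after stopping MySQL).
snapshot_request = {
    "VolumeId": "vol-0123456789abcdef0",
    "Description": "Pre-migration snapshot of MySQL data volume",
}
# With boto3: boto3.client("ec2").create_snapshot(**snapshot_request)

# Step 2: restore the snapshot to a new Provisioned IOPS volume,
# sized above the ~3,000 IOPS the workload currently averages.
restore_request = {
    "SnapshotId": "snap-0123456789abcdef0",
    "AvailabilityZone": "us-east-1a",
    "VolumeType": "io1",
    "Iops": 4000,
}
# With boto3: boto3.client("ec2").create_volume(**restore_request)
```

The new volume is then attached in place of the old one, all during the out-of-hours window.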
Your EBS volume status check is showing impaired. What does this mean?
- The volume is degraded or severely degraded.
- The volume is stalled or not available. (Ans)
- There is insufficient data.
- The instance status must be impaired. You should stop and start the instance again.
The metric used to monitor the lag between the primary RDS instance and the read replica is called:
- ReplicaLag (Ans)