AWS Cloud Practitioner Essentials (CLF-C01)
2nd Edition
Index
The structure of our Availability Zones is intentional and directly related to fault
tolerance.
Fault tolerance means a system can remain operational even if some of the
components of that system fail.
High availability ensures that your systems are always functioning and accessible
and that downtime is minimized as much as possible.
AWS management interfaces
AWS users can create and manage resources in three unique ways:
AWS Management Console, AWS CLI, AWS SDKs.
AWS Management Console provides a GUI to access AWS features.
AWS CLI lets you control AWS services from the command line.
AWS SDKs enable you to access AWS using a variety of popular programming
languages.
You can personalize your experience in the console by creating shortcuts to the
services that you visit most often.
You can use Resource Groups to streamline your use of the console: you can
create a resource group for each application, service, or collection of related
resources you frequently use.
The Tag Editor allows you to easily manage tags for resource types that support
tags and you can apply tag keys and values to multiple resources at one time.
The Tag Editor supports global tag searching and bulk editing, so you can find all
resources with a particular tag.
The AWS CLI allows you to automate and repeat the deployment of AWS resources
in a way that is programming language-agnostic.
AWS SDKs
AWS SDKs can help you use AWS in your existing applications, create applications
that can deploy and monitor complex systems using only code.
The AWS CLI and SDKs give you the flexibility to customize AWS features and
create your own tools specific to your business.
These language-specific SDKs contain APIs that allow you to easily incorporate
the connectivity and functionality of the wide range of AWS Cloud services into
your code without the difficulty of writing the functions yourself.
You can use all three of these modes interchangeably; they're not mutually exclusive.
Module 2: AWS core services
Amazon Elastic Compute Cloud (EC2)
EC2 stands for Elastic Compute Cloud.
Compute refers to the compute, or server, resources that are being presented.
Cloud refers to the fact that these are cloud-hosted compute resources.
Elastic refers to the fact that, if properly configured, you can increase or decrease
the number of servers required by an application automatically, according to the
current demands on that application.
The proper name of EC2 servers is Amazon EC2 instances (or simply EC2
instances).
Instances are pay-as-you-go.
To launch an EC2 instance:
1. You choose the region,
2. You select the EC2 launch wizard,
3. You select the AMI (Amazon Machine Image), which provides the software
platform for your instance,
4. You select the instance type, referring to the hardware capabilities,
5. You configure network, storage, and key pairs, which will allow you to
connect to the instance after you've launched it.
As your company grows, the amount of data stored on your EBS volumes will
likely also grow. EBS volumes have the ability to increase capacity and change to
different types.
Amazon Simple Storage Service (S3)
S3 stands for Simple Storage Service.
Amazon S3 is a fully managed storage service that provides a simple API for
storing and retrieving data; this means that the data you store in S3 isn’t
associated with any particular server, and you don’t have to manage any
infrastructure yourself.
You can put as many objects into S3 as you want.
Objects can be almost any data file, such as images, videos or server logs.
Amazon S3 also provides low-latency access to the data over the internet by HTTP
or HTTPS, so you can retrieve data anytime from anywhere.
By default, none of your data is shared publicly. You can also encrypt your data in
transit and choose to enable server-side encryption on your objects.
To store objects in S3, you must first create a container that holds your data,
called a bucket.
When we want to put an object into a bucket, we need to specify a key, which is
just a string that can be used to retrieve the object later.
When you create a bucket in S3, it’s associated with a particular AWS region:
whenever you store data in the bucket, it’s redundantly stored across multiple
AWS facilities within your selected region.
This is a URL for an object, constructed from the bucket name, the S3 endpoint for
the selected region, and the key we used when we stored the object:
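As a sketch, such a URL can be assembled from the bucket name, region and key. The virtual-hosted-style endpoint format used here is an assumption, and the bucket and key below are hypothetical:

```python
from urllib.parse import quote

def s3_object_url(bucket: str, region: str, key: str) -> str:
    """Build a virtual-hosted-style S3 object URL from bucket, region and key."""
    # Slashes are legal in S3 keys, so leave them unescaped (quote's default).
    return f"https://{bucket}.s3.{region}.amazonaws.com/{quote(key)}"

url = s3_object_url("my-travel-maps", "us-east-1", "2020/map1.png")
print(url)  # https://my-travel-maps.s3.us-east-1.amazonaws.com/2020/map1.png
```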
In this way, Subnet A1 will become a public subnet, where non-local traffic is
routed through Test-IGW.
Subnet B1 will be our private subnet.
There are some new terms to learn when looking at the Application Load
Balancer:
• Listeners
A listener is a process that checks for connection requests, using the
protocol and port that you configure,
• Target
A target is a destination for traffic based on the established listener rules,
• Target group
Each target group routes requests to one or more registered targets using
the protocol and port number specified.
A target can be registered with multiple target groups.
When configuring the listeners for the load balancer, you create rules in order to
direct how the requests received by the load balancer will be routed to the backend
targets.
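The listener-rule routing described above can be sketched as an ordered pattern match. The path patterns and target group names here are hypothetical, and real ALB rules can also match on host headers, HTTP methods and more:

```python
import fnmatch

# Hypothetical listener rules: (path pattern, target group), evaluated in order.
rules = [
    ("/api/*", "api-target-group"),
    ("/images/*", "static-target-group"),
]
default_target_group = "web-target-group"

def route(path: str) -> str:
    """Return the target group for a request path, falling back to the default."""
    for pattern, target_group in rules:
        if fnmatch.fnmatch(path, pattern):
            return target_group
    return default_target_group

print(route("/api/users"))   # api-target-group
print(route("/index.html"))  # web-target-group
```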
Auto Scaling
Auto Scaling helps you to ensure that you have the correct number of Amazon
EC2 instances available to handle the load for your application.
Auto Scaling removes the guesswork of how many EC2 instances you need at a
point in time to meet your workload requirements.
When you run your applications on EC2 instances, it’s critical to monitor the
performance of your workload using Amazon CloudWatch.
However, CloudWatch will not add or remove EC2 instances; this is where Auto
Scaling comes into the picture.
In fact, Auto Scaling allows you to add or remove EC2 instances based on
conditions that you specify and it’s especially powerful in environments with
fluctuating performance requirements.
So, Auto Scaling really answers two critical questions:
1 – How can I ensure that my workload has enough EC2 resources to meet
fluctuating performance requirements?
2 – How can I automate EC2 resource provisioning to occur on-demand?
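A minimal sketch of the kind of decision Auto Scaling automates for you; the CPU thresholds and instance limits below are hypothetical:

```python
def desired_capacity(current: int, avg_cpu: float,
                     scale_out_at: float = 70.0, scale_in_at: float = 30.0,
                     minimum: int = 2, maximum: int = 10) -> int:
    """Add an instance when average CPU is high, remove one when it is low."""
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)
    return current

print(desired_capacity(4, 85.0))  # 5  (scale out under load)
print(desired_capacity(4, 10.0))  # 3  (scale in when idle)
```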
Suppose that a user opens a web browser and enters the domain name for a
website, like example.com.
That query is typically routed to that user's internet service provider's DNS
resolver.
If the website is handled by Amazon Route 53, the internet service provider's DNS
resolver forwards the request to Amazon Route 53, which hosts and manages the
zone for you, and Route 53 performs the translation, for example to 54.85.178.219:
Now the web browser knows the IP address of the website example.com and can
make requests to that specific IP address.
When you sign up for Route 53, the first thing to do is create a Hosted Zone, which
is where your DNS data will be kept.
When you do that, you receive four name servers to which you can delegate your
domain.
Then, you specify your FQDN (Fully Qualified Domain Name), which is the domain
you have purchased with a DNS registrar; the registrar can be external, or you can
use Route 53 to purchase the domain.
A Hosted Zone will contain record sets, which are the DNS translations you want to
perform for that specific domain, such as blog.example.com or
www.example.com.
Once this is done, the Hosted Zone is ready to resolve DNS queries for that domain.
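Conceptually, the record sets in a Hosted Zone behave like a lookup table from names to translations; a toy sketch with hypothetical records:

```python
# Hypothetical record sets for the example.com Hosted Zone.
hosted_zone = {
    "example.com": "54.85.178.219",
    "www.example.com": "54.85.178.219",
    "blog.example.com": "203.0.113.25",
}

def resolve(fqdn):
    """Return the translation for a fully qualified domain name, if any."""
    # DNS names are case-insensitive; a trailing dot marks an absolute name.
    return hosted_zone.get(fqdn.lower().rstrip("."))

print(resolve("www.example.com"))  # 54.85.178.219
```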
One of the most powerful features of Amazon RDS is the ability to configure your
database instance for high availability with a Multi-AZ deployment.
Once configured, Amazon RDS automatically generates a standby copy of the
database instance in another AZ within the same Amazon VPC.
After seeding the database copy, transactions are synchronously replicated to the
standby copy.
Amazon RDS also supports the creation of read replicas for MySQL, PostgreSQL,
MariaDB and Amazon Aurora.
You can reduce the load on your source database instance by routing read queries
from your applications to the read replica.
Using read replicas, you can also scale out beyond the capacity constraints of a
single database instance for read-heavy database workloads.
Read replicas can also be promoted to become the master database instance, but
due to the asynchronous replication, this requires manual action.
Read replicas can be created in a different region than the master database. This
feature can help satisfy disaster recovery requirements or cut down on latency
by directing reads to a read replica closer to the user.
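The read/write split described above can be sketched as a tiny query router; the endpoint names below are hypothetical:

```python
import random

# Hypothetical endpoints: one master for writes, replicas for reads.
MASTER = "mydb-master.us-east-1.rds.amazonaws.com"
READ_REPLICAS = [
    "mydb-replica-1.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.eu-west-1.rds.amazonaws.com",  # cross-region replica
]

def endpoint_for(query: str) -> str:
    """Send writes to the master and spread reads across the replicas."""
    if query.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
        return MASTER
    return random.choice(READ_REPLICAS)

print(endpoint_for("UPDATE users SET name = 'Ann'"))  # always the master
```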
AWS Lambda runs your code on a highly available compute infrastructure, performs
all of the administration for you, and supports a variety of programming languages,
including Node.js, Java, C# and Python.
AWS Lambda is used for event-driven computing, so you can run code in response
to events, including changes to an Amazon S3 bucket.
You can build serverless applications that are triggered by AWS Lambda functions,
and you can automatically deploy them using AWS CodePipeline and AWS
CodeDeploy.
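A minimal sketch of what such an event-driven handler can look like, invoked locally here with a pared-down S3 event (the field names follow the usual S3 event shape, but the bucket and key are hypothetical):

```python
def handler(event, context):
    """React to S3 events by listing the affected bucket/key pairs."""
    processed = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        processed.append((s3["bucket"]["name"], s3["object"]["key"]))
    return processed

# Simulate the invocation locally with a pared-down S3 event.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-uploads"}, "object": {"key": "photo.jpg"}}}
    ]
}
print(handler(sample_event, None))  # [('my-uploads', 'photo.jpg')]
```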
Use cases
With AWS Lambda, you can run code for virtually any application or backend
service.
Use cases:
1 – Real-time image processing
2 – Real-time stream processing
You can use AWS Lambda together with Amazon Kinesis to process real-time
streaming data.
3 – Extract, transform, load (ETL)
You can use AWS Lambda to build your extract, transform and load pipelines and
to perform data validation, sorting or other transformations for every data change
in a DynamoDB table, loading the transformed data into another data store.
4 – IoT Backends
5 – Mobile Backends
6 – Web Backends
With this cycle, updating your application becomes as easy as deploying it.
For example, you can use Amazon SNS when you simply need to send an email to
administrators or system developers informing them of some event that happened
in your architecture.
Amazon CloudWatch
Amazon CloudWatch is a monitoring service that allows you to monitor your AWS
resources and the applications you run on them in real time.
Some features of Amazon CloudWatch:
• Collect and track metrics,
• Collect and monitor log files,
• Set alarms,
• Automatically react to changes.
Use cases:
• Respond to state changes in your AWS resources,
• Automatically invoke an AWS Lambda function,
• Take a snapshot of an Amazon EBS volume on a schedule,
• Log S3 Object Level Operations using CloudWatch Events.
CloudWatch alarms
A CloudWatch alarm watches a single metric and can perform one or more actions
based on the value of that metric relative to a threshold over a number of time
periods.
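The threshold-over-periods behaviour can be sketched as follows; the metric values and evaluation settings are hypothetical:

```python
def alarm_state(datapoints, threshold, evaluation_periods):
    """ALARM when the metric breaches the threshold for N consecutive periods."""
    if len(datapoints) < evaluation_periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-evaluation_periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

cpu = [45.0, 82.0, 91.0, 88.0]            # hypothetical CPU utilisation samples
print(alarm_state(cpu, 80.0, 3))          # ALARM
print(alarm_state([45.0, 82.0], 80.0, 3)) # INSUFFICIENT_DATA
```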
The action can be an Amazon EC2 action, an Auto Scaling action, or a notification
sent to an Amazon SNS topic.
CloudWatch logs
CloudWatch logs are log files used to monitor and troubleshoot systems and
applications.
So, you can monitor your log files for specific phrases, values or patterns and
retrieve the associated log data from CloudWatch Logs; it all runs based on
agents that are installed on the O.S.
Additionally, you can store and monitor your application log files, collect those
metrics and they can be durably stored for a long period of time.
They can be visualized by admins in the console, or they can be stored in S3 for
access by another service, user or tool, and, of course, you can do data processing
on that particular solution.
CloudWatch dashboards
A CloudWatch dashboard is a customizable homepage within the CloudWatch
console for monitoring your resources through a single pane of glass, if you will.
You can create customized views of metrics and alarms for your AWS resources.
Each dashboard can display multiple metrics and can be annotated with text
and images however you like.
Amazon CloudFront
Amazon CloudFront allows you to scale out, save money and improve the
performance of your applications.
Amazon CloudFront is a Content Delivery Network or CDN.
To deliver content to your users, Amazon CloudFront uses a global network of
more than 80 edge locations and more than 10 regional edge caches.
The edge locations are located in multiple countries around the world and this
number frequently increases.
So, by using CloudFront, you can leverage multiple locations around the world to
deliver your content, allowing your users to interact with your application at
lower latency.
For example, if your application is running in Singapore and your users are in New
York, you can use CloudFront to cache the content locally in New York and let the
service help you in scaling whatever your demand requests.
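A toy sketch of the edge-cache behaviour: the first request travels to the origin, and later requests are served from the edge until the cached copy expires (the TTL and paths are hypothetical):

```python
import time

class EdgeCache:
    """Toy model of a CloudFront edge location caching origin responses."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # path -> (content, fetch time)

    def get(self, path, fetch_from_origin):
        cached = self.store.get(path)
        if cached and time.monotonic() - cached[1] < self.ttl:
            return cached[0], "edge-hit"
        content = fetch_from_origin(path)   # slow trip to the origin server
        self.store[path] = (content, time.monotonic())
        return content, "origin-fetch"

edge = EdgeCache(ttl_seconds=60)
origin = lambda path: f"<contents of {path}>"
print(edge.get("/logo.png", origin))  # ('<contents of /logo.png>', 'origin-fetch')
print(edge.get("/logo.png", origin))  # ('<contents of /logo.png>', 'edge-hit')
```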
Use cases:
• Static asset caching,
• Live and on-demand video streaming,
• Security and DDoS protection,
• API acceleration and Software distribution.
AWS CloudFormation
AWS CloudFormation is a fully managed service that acts as an engine to
automate the provisioning of AWS resources. It simplifies the task of repeatedly
and predictably creating groups of related resources that power your applications.
You can interact with AWS CloudFormation through AWS Management Console,
AWS CLI and AWS SDK/API.
Using one of the three methods, we can construct virtual environments for our
workloads.
AWS CloudFormation can create, update and delete sets of resources known as
stacks.
Components of AWS CloudFormation:
As mentioned above, stacks are the resources generated by a template file, but
they are also a unit of deployment. So, you can create stacks, make updates by
rerunning the modified template file, and even delete stacks.
When you delete a stack, all of the resources in the stack are deleted, because
a stack is a unit of deployment.
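Templates are ordinarily written in JSON or YAML; as a sketch, here is a minimal JSON template built as a Python dict, with a hypothetical bucket resource and name:

```python
import json

# Minimal template: a stack containing one hypothetical S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example stack",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-app-bucket"},
        }
    },
}
print(json.dumps(template, indent=2))
```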
First, we have Elastic Load Balancers, or ELBs, a service that distributes
incoming traffic, or load, amongst your instances.
ELB can also send metrics to Amazon CloudWatch, which is a managed
monitoring service. So, ELB can act as a trigger and notify you of high latency or if
servers are becoming over-utilized.
ELBs can also be customized.
Amazon Inspector
Amazon Inspector is a tool that helps you improve the security and compliance of
the applications deployed on AWS.
To help you get started quickly, Amazon Inspector includes a knowledge base of
hundreds of rules mapped to common security compliance standards and
vulnerability definitions.
Examples of built-in rules: remote root login being enabled, vulnerable software
versions installed.
When Amazon Inspector assesses a target, it delivers findings, that is, detailed
descriptions of potential security issues.
Findings also contain recommendations for how to resolve security issues.
AWS Shield
AWS Shield is a managed DDoS protection service that safeguards applications
running on AWS.
The service provides always-on detection and automatic inline mitigations that
minimize application downtime and latency.
When using applications not based on TCP, you cannot use services like
Amazon CloudFront or ELB.
In these cases, you often need to run your applications directly on internet-facing
Amazon EC2 instances.
• AWS Shield Standard protects your Amazon EC2 instance from common
infrastructure Layer 3 and Layer 4 attacks.
Built-in techniques are automatically engaged when a well-defined DDoS
attack signature is detected;
• AWS Shield Advanced protects against large, sophisticated DDoS attacks
for these applications by enabling protection on Elastic IP addresses.
Enhanced detection automatically recognizes the type of AWS Resource
and size of EC2 instance and applies appropriate pre-defined mitigations.
You can create custom mitigation profiles and, during an attack, all your
Amazon VPC Network ACLs are automatically enforced at the border of the
AWS network, giving you access to additional bandwidth and scrubbing
capacity to mitigate large volumetric DDoS attacks.
AWS Security Compliance
At Amazon, the success of our security and compliance program is primarily
measured by one thing: our customers' success.
As presented in our Shared Responsibility Model, AWS and our customers partner
to protect their infrastructure:
Customers don’t communicate their use and configurations to AWS, but AWS
communicates its security and control environment to its customers, as relevant
to their business needs.
Some of its methods of communication are:
• Obtaining industry certifications (and third-party attestations);
• Publishing security and control practices;
• Providing compliance reports.
Control environment
AWS manages a comprehensive control environment that:
• includes policies, processes and control activities in place for the secure
delivery of AWS service offerings,
• supports the operating effectiveness of the AWS control framework,
• applies leading industry practices.
AWS has integrated cloud-specific controls identified by top industry agencies
into the control framework.
Information security
AWS has implemented a formal information security program designed to protect
confidentiality, integrity and availability of its customers’ systems and data.
Customers can access a security whitepaper on its website to learn more about
how AWS can help them secure their data.
Module 6: Pricing and support
Fundamentals of Pricing
With AWS, you pay only for the individual services you need, for as long as you use
them and without signing up for long-term contracts or complex licensing.
You pay only for the services you consume, and once you stop using them,
there are no additional costs or termination fees.
For certain services like Amazon EC2 and Amazon RDS, you can invest in reserved
capacity.
With Reserved Instances, you can save up to 75% over equivalent on-demand
capacity.
Reserved Instances are available in three options:
• AURI : all up-front;
• PURI : partial up-front;
• NURI : no upfront payments.
When you buy Reserved Instances, the larger the payment you make upfront, the
greater your discount will be.
To maximize your savings, pay all upfront and get the largest discount.
PURIs offer a lower discount but require less money upfront.
Lastly, you can choose to make no upfront payments and still receive a small
discount.
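The trade-off among the three options can be seen with hypothetical numbers; the upfront amounts and hourly rates below are illustrative only, not real AWS prices:

```python
# Hypothetical one-year cost of one instance under each payment option.
on_demand_hourly = 0.10
hours_per_year = 8760

def yearly_cost(upfront, effective_hourly):
    return upfront + effective_hourly * hours_per_year

on_demand = yearly_cost(0.0, on_demand_hourly)
auri = yearly_cost(500.0, 0.0)    # all upfront: biggest discount
puri = yearly_cost(300.0, 0.03)   # partial upfront: smaller discount
nuri = yearly_cost(0.0, 0.07)     # no upfront: smallest discount

print(f"on-demand {on_demand:.0f}, AURI {auri:.0f}, PURI {puri:.0f}, NURI {nuri:.0f}")
```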
By using reserved capacity, your organization can minimize risk and manage
budgets more predictably.
AWS Storage services, in particular, can help you keep costs down.
To optimize your savings, choose the right combinations of storage solutions that
help you reduce pricing while boosting performance, security and durability.
For example, for services like Amazon S3 and data transfer out from Amazon EC2,
pricing is tiered, meaning the more you use, the less you pay per gigabyte.
Data transfer in is always free of charge.
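Tiered pricing can be sketched with hypothetical price bands; these tiers and rates are illustrative, not actual AWS prices:

```python
# Hypothetical price tiers: (gigabytes in tier, price per GB).
TIERS = [(10_000, 0.09), (40_000, 0.085), (100_000, 0.07)]
OVERFLOW_RATE = 0.05  # hypothetical rate beyond the listed tiers

def transfer_out_cost(gb):
    """The more you transfer, the lower the marginal price per gigabyte."""
    cost, remaining = 0.0, gb
    for tier_gb, rate in TIERS:
        used = min(remaining, tier_gb)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            return cost
    return cost + remaining * OVERFLOW_RATE

print(transfer_out_cost(15_000))  # 10,000 GB at the first rate + 5,000 at the second
```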
As a result, as your AWS usage needs increase, you benefit from the economies of
scale, allowing you to increase adoption and keep costs under control.
If none of AWS’s pricing models work for your project, custom pricing is available
for high-volume projects with unique requirements.
AWS also offers a free usage tier for new customers, who can run a free Amazon
EC2 Micro Instance for a year.
If you have multiple AWS accounts, you can consolidate your AWS usage using
Consolidated Billing and get tiering benefits based on the total usage across your
accounts.
Pricing details
There are three fundamental characteristics you pay for with AWS:
1. Compute capacity,
2. Storage,
3. Outbound data transfer (aggregated).
These characteristics vary depending on the AWS product you are using.
Fundamentally, these are the core characteristics that have the greatest impact
on cost.
Although you are charged for data transfer out, there is no charge for inbound
data transfer or for data transfer between other services within the same region.
Pricing for Amazon EC2
Amazon EC2 changes the economics of computing by charging you only for the
capacity that you actually use.
When you begin to estimate the cost of using Amazon EC2, you need to consider
the following:
• Clock hours of server time
Resources incur charges when they are running, for example, from the time
Amazon EC2 instances are launched until they are terminated.
• Machine / Instance configuration
Consider the physical capacity of the Amazon EC2 instance you choose.
Instance pricing varies with the AWS Region, O.S., number of cores and
memory.
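The "clock hours of server time" component above can be estimated directly from launch and termination times. A sketch with hypothetical timestamps, assuming simple hourly billing (many current instance types actually bill per second):

```python
from datetime import datetime
import math

def billable_hours(launched, terminated):
    """Hours from launch to termination, rounded up to whole hours."""
    seconds = (terminated - launched).total_seconds()
    return math.ceil(seconds / 3600)

start = datetime(2023, 5, 1, 9, 0)
stop = datetime(2023, 5, 1, 12, 20)   # ran for 3 h 20 min
print(billable_hours(start, stop))    # 4
```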
Purchase types:
• On-demand instances
With them, you pay for compute capacity by the hour with no required
minimum commitments;
• Reserved instances
They give you the option to make a one-time payment or no up-front
payment at all for each instance you want to reserve, and in turn, receive a
significant discount on the hourly usage charge for that instance;
• Spot instances
With them, you can bid for unused Amazon EC2 capacity.
Other considerations:
• Number of instances,
• Load Balancing
An Elastic Load Balancer can be used to distribute traffic among Amazon
EC2 instances.
The number of hours the elastic load balancer runs and the amount of data
it processes contribute to the monthly cost.
Product options:
• Monitoring
You can use Amazon CloudWatch to monitor your EC2 instances.
By default, basic monitoring is enabled and available at no additional costs.
For a fixed monthly rate, you can opt for detailed monitoring, which
includes seven preselected metrics recorded once a minute;
• Auto Scaling
This service is available at no additional charge beyond Amazon
CloudWatch fees;
• Elastic IP addresses
You have one Elastic IP address with a running instance at no charge.