AWS SAA C02 Study Guide

keenanromain / AWS-SAA-C02-Study-Guide

How to become a certified AWS Solutions Architect

If at any point you find yourself feeling uncertain of your progress and in need of more time, you can postpone your AWS exam date. Be sure to also keep up with the ongoing discussions in r/AWSCertifications as you will find relevant exam tips, studying material, and advice from other exam takers. Before experimenting with AWS, it's very important to be sure that you know what is free and what isn't. Relevant Free Tier FAQs can be found here. Finally, Udemy often has their courses go on sale from time to time. It might be worth waiting to purchase either the Tutorial Dojo practice exam or Stephane Maarek's course depending on how urgently you need the content.
Table of Contents

1. Introduction
2. Identity Access Management (IAM)
4. CloudFront
5. Snowball
8. Amazon S3 FAQs
18. Aurora
19. DynamoDB
23. Elastic Load Balancers (ELB)
25. Virtual Private Cloud (VPC)
26. Simple Queuing Service (SQS)
27. Simple Workflow Service (SWF)
28. Simple Notification Service (SNS)

Domain 1: Design Resilient Architectures
1.4 - Choose appropriate resilient storage

Domain 2: Design High-Performing Architectures
2.1 - Identify elastic and scalable compute solutions for a workload
2.2 - Select high-performing and scalable storage solutions for a workload
2.3 - Select high-performing networking solutions for a workload
2.4 - Choose high-performing database solutions for a workload
Identity Access Management (IAM)

When joining the AWS ecosystem for the first time, new users are supplied an access key ID and a secret access key when you grant them programmatic access. These are created just once specifically for the new user, so if they are lost simply generate a new access key ID and a new secret access key. Access keys are only used for the AWS CLI and SDKs, so you cannot use them to access the console.

IAM Entities:

Users - any individual end user such as an employee, system architect, CTO, etc.

Groups - any collection of similar people with shared permissions such as system administrators, HR employees, finance teams, etc. Each user within their specified group will inherit the permissions set for the group.

When creating your AWS account, you may have an existing identity provider internal to your company that offers Single Sign On (SSO). If this is the case, it is useful, efficient, and entirely possible to reuse your existing identities on AWS. To do this, you let an IAM role be assumed by one of the Active Directories. This is possible because the IAM ID Federation feature allows an external service to assume an IAM role.

IAM Roles can be assigned to a service, such as an EC2 instance, prior to its first use/creation or after it has been used/created. You can change permissions as many times as you need. This can all be done by using both the AWS console and the AWS command line tools.

You cannot nest IAM Groups. Individual IAM users can belong to multiple groups, but creating subgroups so that one IAM Group is embedded inside of another IAM Group is not possible.

With IAM Policies, you can easily add tags that help define which resources are accessible by whom. These tags are then used to control access via a particular IAM policy. For example, production and development EC2 instances might be tagged as such. This would ensure that people who should only be able to access development instances cannot access production instances.

Priority Levels in IAM:

Explicit Deny: Denies access to a particular resource and this ruling cannot be overruled.

Explicit Allow: Allows access to a particular resource so long as there is not an associated Explicit Deny.

Default Deny (or Implicit Deny): IAM identities start off with no resource access. Access instead must be granted.

Simple Storage Service (S3)

S3 Simplified:

S3 provides developers and IT teams with secure, durable, and highly-scalable object storage. Object storage, as opposed to block storage, is a general term that refers to data composed of three things:

1.) the data that you want to store

2.) an expandable amount of metadata

3.) a unique identifier so that the data can be retrieved

This makes it a perfect candidate to host files or directories and a poor candidate to host databases or operating systems. The following table highlights key differences between object and block storage:

S3 comes with the following main features:

1.) tiered storage and pricing variability

2.) lifecycle management to expire older content

3.) versioning for version control

S3 Key Details:

Objects (regular files or directories) are stored in S3 with a key, value, version ID, and metadata. They can also contain torrents and sub-resources for access control lists, which are basically permissions for the object itself.

Data uploaded into S3 is spread across multiple files and facilities. The files uploaded into S3 have an upper bound of 5TB per file and the number of files that can be uploaded is virtually limitless. S3 buckets, which contain all files, are named in a universal namespace so uniqueness is required. All successful uploads will return an HTTP 200 response.

The data consistency model for S3 ensures immediate read access for new objects after the initial PUT requests. These new objects are introduced into AWS for the first time and thus do not need to be updated anywhere, so they are available immediately. Since December 2020, the data consistency model for S3 also ensures immediate read access for PUTs and DELETEs of already existing objects.

Amazon guarantees 99.999999999% (or 11 9s) durability for all S3 storage classes except its Reduced Redundancy Storage class.

S3 is a great candidate for static website hosting. When you enable static website hosting for S3, you need both an index.html file and an error.html file. Static website hosting creates a website endpoint that can be accessed via the internet.

When you upload new files and have versioning enabled, they will not inherit the properties of the previous version.

Bucket policies secure data at the bucket level while access control lists secure data at the more granular object level.

By default, all newly created buckets are private.

S3 can be configured to create access logs, which can be shipped into another bucket in the current account or even a separate account altogether. This makes it easy to monitor who accesses what inside S3.

There are three different ways to share S3 buckets across accounts:

1.) For programmatic access only, use IAM & Bucket Policies to share entire buckets

2.) For programmatic access only, use ACLs & Bucket Policies to share objects

3.) For access via the console & the terminal, use cross-account IAM roles

S3 Glacier - low-cost storage class for data archiving. This class is for pure storage purposes where retrieval isn't needed often at all. Retrieval times range from minutes to hours. There are differing retrieval methods depending on how acceptable the default retrieval times are for you:

Expedited: 1 - 5 minutes, but this option is the most expensive.

Standard: 3 - 5 hours to restore.

Bulk: 5 - 12 hours. This option has the lowest cost and is good for a large set of data.

The Expedited duration listed above could possibly be longer during rare situations of unusually high demand across all of AWS. If it is absolutely critical to have quick access to your Glacier data under all circumstances, you must purchase Provisioned Capacity. Provisioned Capacity guarantees that Expedited retrievals always work within the time constraints of 1 to 5 minutes.

S3 Glacier Deep Archive - the lowest-cost S3 storage class, where retrieval can take 12 hours.
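The retrieval tiers above trade cost against speed. As a toy illustration of that trade-off, the helper below picks the cheapest tier whose worst-case retrieval time still meets a deadline. The tier names and time ranges come from the notes; the relative cost ranks are invented for the example.

```python
# Illustrative only: choose the cheapest Glacier retrieval tier that can meet
# a deadline. Worst-case hours follow the notes above; cost ranks are made up
# (lower rank = cheaper), since real pricing varies by region and data size.
GLACIER_TIERS = [
    # (name, worst-case retrieval time in hours, relative cost rank)
    ("Bulk", 12, 1),
    ("Standard", 5, 2),
    ("Expedited", 5 / 60, 3),
]

def cheapest_tier(deadline_hours: float) -> str:
    """Return the cheapest tier whose worst-case time fits within the deadline."""
    fits = [(cost, name) for name, hours, cost in GLACIER_TIERS if hours <= deadline_hours]
    if not fits:
        raise ValueError("No retrieval tier can meet this deadline")
    return min(fits)[1]

print(cheapest_tier(24))   # Bulk
print(cheapest_tier(6))    # Standard
print(cheapest_tier(0.5))  # Expedited
```

This mirrors the guidance in the notes: use Bulk when you can wait, and pay for Expedited (or Provisioned Capacity) only when access time is critical.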
S3 Versioning:
When versioning is enabled, S3 stores all versions of an object including all writes and
even deletes.
It is a great feature for implicitly backing up content and for easy rollbacks in case of
human error.
It can be thought of as analogous to Git.
Once versioning is enabled on a bucket, it cannot be disabled - only suspended.
Versioning integrates w/ lifecycle rules so you can set rules to expire or migrate data
based on their version.
Versioning also has MFA delete capability to provide an additional layer of security.
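The behavior described above — every write keeps the older versions, and a delete only adds a marker rather than erasing history — can be sketched with a plain dictionary. This is a toy model for intuition, not the S3 API:

```python
# Toy model of S3 versioning: every PUT appends a new version, and a DELETE
# appends a delete marker instead of erasing history. Not the real S3 API.
class VersionedBucket:
    def __init__(self):
        self._versions = {}  # key -> list of (version_id, value) entries

    def put(self, key, value):
        history = self._versions.setdefault(key, [])
        history.append((len(history) + 1, value))

    def delete(self, key):
        # Versioning prevents true deletions: record a marker instead.
        history = self._versions.setdefault(key, [])
        history.append((len(history) + 1, "DELETE_MARKER"))

    def get(self, key):
        history = self._versions.get(key, [])
        if not history or history[-1][1] == "DELETE_MARKER":
            return None  # latest version is a delete marker, or key never existed
        return history[-1][1]

    def versions(self, key):
        return self._versions.get(key, [])

bucket = VersionedBucket()
bucket.put("notes.txt", "v1 contents")
bucket.put("notes.txt", "v2 contents")
bucket.delete("notes.txt")
print(bucket.get("notes.txt"))            # None - the delete marker hides the object
print(len(bucket.versions("notes.txt")))  # 3 - both writes and the delete are retained
```

Removing the delete marker from the history is the rollback analogy: the previous version becomes visible again, much like restoring a deleted object in a versioned bucket.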
S3 Lifecycle Management:

Automates the moving of objects between the different storage tiers.

Can be used in conjunction with versioning.

Lifecycle rules can be applied to both current and previous versions of an object.

S3 Encryption:

S3 data can be encrypted both in transit and at rest.

Encryption In Transit: When the traffic passing between one endpoint to another is indecipherable. Anyone eavesdropping between server A and server B won't be able to make sense of the information passing by. Encryption in transit for S3 is always achieved by SSL/TLS.

Encryption At Rest: When the immobile data sitting inside S3 is encrypted. If someone breaks into a server, they still won't be able to access encrypted info within that server. Encryption at rest can be done either on the server-side or the client-side. The server-side is when S3 encrypts your data as it is being written to disk and decrypts it when you access it. The client-side is when you personally encrypt the object on your own and then upload it into S3 afterwards.

You can encrypt on the AWS-supported server-side in the following ways:

S3 Managed Keys / SSE-S3 (server-side encryption S3) - when Amazon manages the encryption and decryption keys for you automatically. In this scenario, you concede a little control to Amazon in exchange for ease of use.

AWS Key Management Service / SSE-KMS - when Amazon and you both manage the encryption and decryption keys together.

Server-Side Encryption w/ customer-provided keys / SSE-C - when you give Amazon your own keys that you manage. In this scenario, you concede ease of use in exchange for more control.

S3 Cross Region Replication:

Cross region replication only works if versioning is enabled.

When cross region replication is enabled, no pre-existing data is transferred. Only new uploads into the original bucket are replicated. All subsequent updates are replicated.

When you replicate the contents of one bucket to another, you can actually change the ownership of the content if you want. You can also change the storage tier of the new bucket with the replicated content.

When files are deleted in the original bucket (via a delete marker, as versioning prevents true deletions), those deletes are not replicated.

Cross Region Replication Overview

What is and isn't replicated, such as encrypted objects, deletes, items in glacier, etc.

S3 Transfer Acceleration:

Transfer acceleration makes use of the CloudFront network by sending or receiving data at CDN points of presence (called edge locations) rather than slower uploads or downloads at the origin.

This is accomplished by uploading to a distinct URL for the edge location instead of the bucket itself. This is then transferred over the AWS network backbone at a much faster speed.

You can test transfer acceleration speed directly in comparison to regular uploads.

S3 Performance:

Using CloudFront to cache S3 content means sending fewer direct requests to S3, which will reduce costs. For example, suppose that you have a few objects that are very popular. CloudFront fetches those objects from S3 and caches them. CloudFront can then serve future requests for the objects from its cache, reducing the total number of GET requests it sends to Amazon S3.

More information on how to ensure high performance in S3

S3 Event Notifications:

The Amazon S3 notification feature enables you to receive and send notifications when certain events happen in your bucket. To enable notifications, you must first configure the events you want Amazon S3 to publish (new object added, old object deleted, etc.) and the destinations where you want Amazon S3 to send the event notifications. Amazon S3 supports the following destinations where it can publish events:

Amazon Simple Notification Service (Amazon SNS) - A web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients.

Amazon Simple Queue Service (Amazon SQS) - SQS offers reliable and scalable hosted queues for storing messages as they travel between computers.

AWS Lambda - AWS Lambda is a compute service where you can upload your code and the service can run the code on your behalf using the AWS infrastructure. You package up and upload your custom code to AWS Lambda when you create a Lambda function. The S3 event triggering the Lambda function also can serve as the code's input.

S3 Server Access Logging:

Server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. It can also help you learn about your customer base and better understand your Amazon S3 bill.

By default, logging is disabled. When logging is enabled, logs are saved to a bucket in the same AWS Region as the source bucket.

Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and an error code, if relevant.

It works in the following way:

S3 periodically collects access log records of the bucket you want to monitor
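Event notification routing can be pictured as simple rule matching: an event name plus object key is checked against configured rules, and matching destinations receive the notification. The event names below (e.g. s3:ObjectCreated:Put) are real S3 event types, but the rule structure and destination names are invented for this sketch:

```python
from fnmatch import fnmatch

# Illustrative router: match an S3 event against configured notification rules
# and collect the destinations (SNS topic, SQS queue, or Lambda function) to
# publish to. The rule format and destination names are made up; only the
# event type strings follow S3's naming.
RULES = [
    {"event": "s3:ObjectCreated:*", "suffix": ".jpg", "destination": "lambda:make-thumbnail"},
    {"event": "s3:ObjectRemoved:*", "suffix": "",     "destination": "sns:audit-topic"},
]

def destinations_for(event_name: str, object_key: str):
    """Return destinations whose event pattern and key suffix both match."""
    return [
        rule["destination"]
        for rule in RULES
        if fnmatch(event_name, rule["event"]) and object_key.endswith(rule["suffix"])
    ]

print(destinations_for("s3:ObjectCreated:Put", "photos/cat.jpg"))  # ['lambda:make-thumbnail']
print(destinations_for("s3:ObjectRemoved:Delete", "notes.txt"))    # ['sns:audit-topic']
```

Real S3 notification configurations support the same idea through prefix and suffix filters per rule; here only a suffix filter is modeled to keep the sketch small.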
The following diagram highlights how Pre-signed URLs work:

CloudFront

CloudFront Simplified:

The AWS CDN service is called CloudFront. It serves up cached content and assets for the increased global performance of your application. The main components of CloudFront are the edge locations (cache endpoints), the origin (original source of truth to be cached such as an EC2 instance, an S3 bucket, an Elastic Load Balancer or a Route 53 config), and the distribution (the arrangement of edge locations from the origin, or basically the network itself). More info on CloudFront's features
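The edge-location idea can be sketched as a cache sitting in front of an origin: the first request for a path goes to the origin, and repeat requests are served from the cache. This is toy code for intuition, not CloudFront's actual mechanics:

```python
# Toy edge cache: serve repeat requests from local cache instead of the origin,
# mimicking how a CloudFront edge location reduces GET requests to S3.
class EdgeLocation:
    def __init__(self, origin_fetch):
        self._origin_fetch = origin_fetch  # callable that hits the origin (e.g. S3)
        self._cache = {}
        self.origin_requests = 0           # how many times the origin was contacted

    def get(self, path):
        if path not in self._cache:
            # Cache miss: fetch from the origin and remember the result.
            self.origin_requests += 1
            self._cache[path] = self._origin_fetch(path)
        return self._cache[path]

edge = EdgeLocation(lambda path: f"contents of {path}")
edge.get("/logo.png")
edge.get("/logo.png")  # served from cache; the origin is not contacted again
print(edge.origin_requests)  # 1
```

A real distribution adds TTL-based expiry and invalidation on top of this, so cached objects eventually refresh from the origin.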
Elastic Compute Cloud (EC2)

EC2 Simplified:
EC2 spins up resizable server instances that can scale up and down quickly. An instance is a
virtual server in the cloud. With Amazon EC2, you can set up and configure the operating
system and applications that run on your instance. Its configuration at launch is a live copy of the Amazon Machine Image (AMI) that you specified when you launched the instance. EC2
has an extremely reduced time frame for provisioning and booting new instances and EC2
ensures that you pay as you go, pay for what you use, pay less as you use more, and pay
even less when you reserve capacity. When your EC2 instance is running, you are charged
on CPU, memory, storage, and networking. When it is stopped, you are only charged for
EBS storage.
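The billing rule above — compute charges while running, only EBS storage charges while stopped — can be made concrete with a toy cost model. The rates here are invented integers (cents per hour) purely for illustration:

```python
# Toy cost model of the EC2 billing rule above: a running instance accrues
# compute charges plus storage, a stopped instance accrues only EBS storage,
# and a terminated instance accrues nothing. The rates are invented.
def hourly_charge_cents(state: str, compute_cents: int = 10, ebs_cents: int = 1) -> int:
    if state == "running":
        return compute_cents + ebs_cents  # CPU/memory/networking plus attached storage
    if state == "stopped":
        return ebs_cents                  # only the EBS volume keeps billing
    if state == "terminated":
        return 0                          # instance (and default root volume) are gone
    raise ValueError(f"unknown state: {state}")

print(hourly_charge_cents("running"))     # 11
print(hourly_charge_cents("stopped"))     # 1
print(hourly_charge_cents("terminated"))  # 0
```

The practical takeaway is the same as in the notes: stopping an instance halts compute billing but not storage billing, which is why cleaning up unused EBS volumes matters.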
Volume Gateway's Cached Volumes differ as they do not store the entire dataset locally like Stored Volumes. Instead, AWS is used as the primary data source and the local hardware is used as a caching layer. Only the most frequently used components are retained onto the on-prem infrastructure while the remaining data is served from AWS. This minimizes the need to scale on-prem infrastructure while still maintaining low-latency access to the most referenced data.

In the following diagram of a Cached Volume architecture, the most frequently accessed data is served to the user from the Storage Area Network, Network Attached, or Direct Attached Storage within your data center. S3 serves the rest of the data from AWS.

EC2 Key Details:

You can launch different types of instances from a single AMI. An instance type essentially determines the hardware of the host computer used for your instance. Each instance type offers different compute and memory capabilities. You should select an instance type based on the amount of memory and computing power that you need for the application or software that you plan to run on top of the instance.

You can launch multiple instances of an AMI, as shown in the following figure:
You have the option of using dedicated tenancy with your instance. This means that within an AWS data center, you have exclusive access to physical hardware. Naturally, this option incurs a high cost, but it makes sense if you work with technology that has a strict licensing policy.

With EC2 VM Import, you can import existing VMs into AWS as long as those hosts use VMware ESX, VMware Workstation, Microsoft Hyper-V, or Citrix Xen virtualization formats.

When you launch a new EC2 instance, EC2 attempts to place the instance in such a way that all of your VMs are spread out across different hardware to limit failure to a single location. You can use placement groups to influence the placement of a group of interdependent instances that meet the needs of your workload. There is an explanation about placement groups in a section below.

When you launch an instance in Amazon EC2, you have the option of passing user data to the instance when the instance starts. This user data can be used to run common automated configuration tasks or scripts. For example, you can pass a bash script that ensures htop is installed on the new EC2 host and is always active.

By default, the public IP address of an EC2 Instance is released when the instance is stopped, even if it's stopped temporarily. Therefore, it is best to refer to an instance by its external DNS hostname. If you require a persistent public IP address that can be associated to the same instance, use an Elastic IP address, which is basically a static IP address, instead.

If you have requirements to self-manage a SQL database, EC2 can be a solid alternative to RDS. To ensure high availability, remember to have at least one other EC2 Instance in a separate Availability Zone so even if a DB instance goes down, the other(s) will still be available.

A golden image is simply an AMI that you have fully customized to your liking with all necessary software/data/configuration details set and ready to go once. This personal AMI can then be the source from which you launch new instances.

Because Spot instances are only available when Amazon has excess capacity, this option makes sense only if your app has flexible start and end times. You won't be charged if your instance stops due to a price change (e.g., someone else just bid a higher price for the access) and so consequently your workload doesn't complete. However, if you terminate the instance yourself you will be charged for any hour the instance ran. Spot instances are normally used in batch processing jobs.

Standard Reserved vs. Convertible Reserved vs. Scheduled Reserved:

Standard Reserved Instances have inflexible reservations that are discounted at 75% off of On-Demand instances. Standard Reserved Instances cannot be moved between regions. You can choose if a Reserved Instance applies to either a specific Availability Zone or an entire region, but you cannot change the region.

Convertible Reserved Instances are instances that are discounted at 54% off of On-Demand instances, but you can also modify the instance type at any point. For example, you suspect that after a few months your VM might need to change from general purpose to memory optimized, but you aren't sure just yet. So if you think that in the future you might need to change your VM type or upgrade your VM's capacity, choose Convertible Reserved Instances. There is no downgrading instance type with this option though.

Scheduled Reserved Instances are reserved according to a specified timeline that you set. For example, you might use Scheduled Reserved Instances if you run education software that only needs to be available during school hours. This option allows you to better match your needed capacity with a recurring schedule so that you can save money.
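The discount figures above translate directly into an hourly rate comparison. A small sketch — the 75% and 54% discounts come from the notes, while the $0.40 On-Demand rate is an invented example figure:

```python
# Compare effective hourly rates using the discount figures from the notes:
# Standard Reserved = 75% off On-Demand, Convertible Reserved = 54% off.
# The $0.40/hour On-Demand rate is a made-up example, not a real AWS price.
def effective_rate(on_demand: float, discount_pct: float) -> float:
    """Hourly rate after applying a percentage discount, rounded for display."""
    return round(on_demand * (1 - discount_pct / 100), 4)

on_demand = 0.40
print(effective_rate(on_demand, 0))    # 0.4   (On-Demand)
print(effective_rate(on_demand, 75))   # 0.1   (Standard Reserved)
print(effective_rate(on_demand, 54))   # 0.184 (Convertible Reserved)
```

The gap between the two reserved rates is the price of flexibility: Convertible costs more per hour in exchange for the ability to change instance type later.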
Instance status checks check the health of the running EC2 server; system status checks monitor the health of the underlying hypervisor. If you ever notice a system status issue, just stop the instance and start it again (no need to reboot) as the VM will start up again on a new hypervisor.

EC2 Instance Lifecycle:

The following table highlights the many instance states that a VM can be in at a given time.
shutting-down - The instance is preparing to be terminated. Not billed.

terminated - The instance has been permanently deleted and cannot be started. Not billed.

Note: Reserved Instances that are terminated are billed until the end of their term.

EC2 Security:

When you deploy an Amazon EC2 instance, you are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

With EC2, termination protection of the instance is disabled by default. This means that you do not have a safeguard in place from accidentally terminating your instance. You must turn this feature on if you want that extra bit of protection.

Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public-key cryptography uses a public key to encrypt a piece of data, such as a password, and the recipient uses their private key to decrypt the data. The public and private keys are known as a key pair.

You can encrypt your root device volume, which is where you install the underlying OS. You can do this during creation time of the instance or with third-party tools like BitLocker. Of course, additional or secondary EBS volumes are also encryptable as well.

By default, an EC2 instance with an attached AWS Elastic Block Store (EBS) root volume will be deleted together when the instance is terminated. However, any additional or secondary EBS volume that is also attached to the same instance will be preserved. This is because the root EBS volume is for OS installations and other low-level settings. This rule can be modified, but it is usually easier to boot a new instance with a fresh root device volume than make use of an old one.

1.) Clustered Placement Groups

Clustered Placement Grouping is when you put all of your EC2 instances in a single availability zone. This is recommended for applications that need the lowest latency possible and require the highest network throughput.

Only certain instances can be launched into this group (compute optimized, GPU optimized, storage optimized, and memory optimized).

2.) Spread Placement Groups

Spread Placement Grouping is when you put each individual EC2 instance on top of its own distinct hardware so that failure is isolated.

Your VMs live on separate racks, with separate network inputs and separate power requirements. Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other.

3.) Partitioned Placement Groups

Partitioned Placement Grouping is similar to Spread Placement Grouping, but differs because you can have multiple EC2 instances within a single partition. Failure instead is isolated to a partition (say 3 or 4 instances instead of 1), yet you enjoy the benefits of close proximity for improved network performance.

With this placement group, you have multiple instances living together on the same hardware inside of different availability zones across one or more regions.

If you would like a balance of risk tolerance and network performance, use Partitioned Placement Groups.

Each placement group name within your AWS account must be unique.

You can move an existing instance into a placement group provided that it is in a stopped state. You can move the instance via the CLI or an AWS SDK, but not the console. You can also take a snapshot of the existing instance, convert it into an AMI, and launch it into the placement group where you desire it to be.

AMIs can also be thought of as pre-baked, launchable servers. AMIs are always used when launching an instance.

When you provision an EC2 instance, an AMI is actually the first thing you are asked to specify. You can choose a pre-made AMI or choose your own made from an EBS snapshot.

You can also use the following criteria to help pick your AMI:

Operating System

Elastic Block Store (EBS)
Throughput Optimized Hard Disk Drive (magnetic, built for larger data loads)

Cold Hard Disk Drive (magnetic, built for less frequently accessed workloads)

Magnetic

HDD-backed volumes are built for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS. Rule of thumb: Will your workload be throughput heavy? Plan for HDD.

EBS Volumes offer 99.999% SLA.

Wherever your EC2 instance is, your volume for it is going to be in the same availability zone.
An EBS volume can only be attached to one EC2 instance at a time.
After you create a volume, you can attach it to any EC2 instance in the same
availability zone.
Amazon EBS provides the ability to create snapshots (backups) of any EBS volume and write a copy of the data in the volume to S3, where it is stored redundantly in multiple Availability Zones.
An EBS snapshot reflects the contents of the volume during a concrete instant in time.
An image (AMI) is the same thing, but includes an operating system and a boot loader
so it can be used to boot an instance.
EBS Snapshots:

EBS Snapshots are point-in-time copies of volumes. You can think of Snapshots as photographs of the disk's current state and the state of everything within it.

A snapshot is constrained to the region where it was created.

Snapshots only capture the state of change from when the last snapshot was taken. This is what is recorded in each new snapshot, not the entire state of the server.

Because of this, it may take some time for your first snapshot to be created. This is because the very first snapshot's change of state is the entire new volume. Only afterwards will the delta be captured, because there will then be something previous to compare against.

EBS snapshots occur asynchronously, which means that a volume can be used as normal while a snapshot is taking place.

When creating a snapshot for a future root device, it is considered best practice to stop the running instance where the original device is before taking the snapshot.

The easiest way to move an EC2 instance and a volume to another availability zone is to take a snapshot.

When creating an image from a snapshot, if you want to deploy a different volume type for the new image (e.g. General Purpose SSD -> Throughput Optimized HDD) then you must make sure that the virtualization for the new image is hardware-assisted.

A short summary for creating copies of EC2 instances: Old instance -> Snapshot -> Image (AMI) -> New instance

You cannot delete a snapshot of an EBS Volume that is used as the root device of a registered AMI. If the original snapshot was deleted, then the AMI would not be able to use it as the basis to create new instances. For this reason, AWS protects you from accidentally deleting the EBS Snapshot, since it could be critical to your systems. To delete an EBS Snapshot attached to a registered AMI, first remove the AMI, then the snapshot can be deleted.

EBS Root Device Storage:

All AMI root volumes (where the EC2's OS is installed) are of two types: EBS-backed or Instance Store-backed.

When you delete an EC2 instance that was using an Instance Store-backed root volume, your root volume will also be deleted. Any additional or secondary volumes will persist, however.

If you use an EBS-backed root volume, the root volume will not be terminated with its EC2 instance when the instance is brought offline. EBS-backed volumes are not temporary storage devices like Instance Store-backed volumes.

EBS-backed Volumes are launched from an AWS EBS snapshot, as the name implies.

Instance Store-backed Volumes are launched from an AWS S3 stored template. They are ephemeral, so be careful when shutting down an instance!

Secondary instance stores for an instance-store backed root device must be installed during the original provisioning of the server. You cannot add more after the fact. However, you can add EBS volumes to the same instance after the server's creation.

With these drawbacks of Instance Store volumes, why pick one? Because they have a very high IOPS rate. So while an Instance Store can't provide data persistence, it can provide much higher IOPS compared to network-attached storage like EBS.

Further, Instance stores are ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.

When to use one over the other?

Use EBS for DB data, critical logs, and application configs.

Use instance storage for in-process data, noncritical logs, and transient application state.

Use S3 for data shared between systems like input datasets and processed results, or for static data needed by each new system when launched.

EBS Encryption:

EBS encryption offers a straightforward encryption solution for EBS resources that doesn't require you to build, maintain, and secure your own key management infrastructure.

It uses AWS Key Management Service (AWS KMS) customer master keys (CMK) when creating encrypted volumes and snapshots.

You can encrypt both the root device and secondary volumes of an EC2 instance. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:

Data at rest inside the volume

All data moving between the volume and the instance

All snapshots created from the volume

All volumes created from those snapshots

EBS encrypts your volume with a data key using the AES-256 algorithm.

Snapshots of encrypted volumes are naturally encrypted as well. Volumes restored from encrypted snapshots are also encrypted. You can only share unencrypted snapshots.

The old way of encrypting a root device was to create a snapshot of a provisioned EC2 instance. While making a copy of that snapshot, you then enabled encryption during the copy's creation. Finally, once the copy was encrypted, you then created an AMI from the encrypted copy and used it to have an EC2 instance with encryption on the root device. Because of how complex this was, you can now simply encrypt root devices as part of the EC2 provisioning options.

Elastic Network Interfaces (ENI)

ENI Simplified:

An elastic network interface is a networking component that represents a virtual network card. When you provision a new instance, there will be an ENI attached automatically and you can create and configure additional network interfaces if desired. When you move a network interface from one instance to another, network traffic is redirected to the new instance.

ENI Key Details:

ENI is used mainly for low-budget, high-availability network solutions.

However, if you suspect you need high network throughput then you can use Enhanced Networking ENI.

Enhanced Networking ENI uses single root I/O virtualization to provide high-performance networking capabilities on supported instance types. SR-IOV provides higher I/O performance and lower CPU utilization, and it ensures higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. SR-IOV does this by dedicating the interface to a single instance and effectively bypassing parts of the Hypervisor, which allows for better performance.

Adding more ENIs won't necessarily speed up your network throughput, but Enhanced Networking ENI will.

There is no extra charge for using Enhanced Networking ENI and the better network performance it provides. The only downside is that Enhanced Networking ENI is not available on all EC2 instance families and types.

You can attach a network interface to an EC2 instance in the following ways:

When it's running (hot attach)

When it's stopped (warm attach)

If an EC2 instance fails with its ENI properly configured, you (or more likely, the code running on your behalf) can attach the network interface to a hot standby instance. Because ENI interfaces maintain their own private IP addresses, Elastic IP addresses, and MAC address, network traffic will begin to flow to the standby instance as soon as you attach the network interface on the replacement instance. Users will experience a brief loss of connectivity between the time the instance fails and the time that the network interface is attached to the standby instance, but no changes to the VPC route table or your DNS server are required.

For instances that work with Machine Learning and High Performance Computing, use EFA (Elastic Fabric Adaptor). EFAs accelerate the work required from the above use cases. EFA provides lower and more consistent latency and higher throughput than the TCP transport traditionally used in cloud-based High Performance Computing systems.

EFA can also use OS-bypass (on Linux only) that will enable ML and HPC applications to interface with the Elastic Fabric Adaptor directly, rather than be normally routed to it through the OS. This gives it a huge performance increase.

You can enable a VPC flow log on your network interface to capture information about the IP traffic going to and from a network interface.

Security Groups

Security Groups Simplified:

Security Groups are used to control access (SSH, HTTP, RDP, etc.) with EC2. They act as a virtual firewall for your instances to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance, and security groups act at the instance level, not the subnet level.

Security Groups Key Details:

Security groups control inbound and outbound traffic for your instances (they act as a Firewall for EC2 Instances) while NACLs control inbound and outbound traffic for your subnets (they act as a Firewall for Subnets). Security Groups usually control the list of ports that are allowed to be used by your EC2 instances and the NACLs control which network or list of IP addresses can connect to your whole VPC.

Every time you make a change to a security group, that change occurs immediately.

Whenever you create an inbound rule, an outbound rule is created immediately. This
is because Security Groups are stateful. This means that when you create an ingress
When the instance is being launched (cold attach).
rule for a security group, a corresponding egress rule is created to match it. This is in
contrast with NACLs which are stateless and require manual intervention for creating As mentioned above, WAF operates as a Layer 7 firewall. This grants it the ability to
both inbound and outbound rules. monitor granular web-based conditions like URL query string parameters. This level of
Security Group rules are based on ALLOWs and there is no concept of DENY when in detail helps to detect both foul play and honest issues with the requests getting
comes to Security Groups. This means you cannot explicitly deny or blacklist specific passed onto your AWS environment.
ports via Security Groups, you can only implicitly deny them by excluding them in your With WAF, you can set conditions such as which IP addresses are allowed to make
ALLOWs list what kind of requests or access what kind of content.
Because of the above detail, everything is blocked by default. You must go in and Based off of these conditions, the corresponding endpoint will either allow the request
intentionally allow access for certain ports. by serving the requested content or return an HTTP 403 Forbidden status.
Security groups are specific to a single VPC, so you can't share a Security Group At the simplest level, AWS WAF lets you choose one of the following behaviors:
between multiple VPCs. However, you can copy a Security Group to create a new Allow all requests except the ones that you specify: This is useful when you want
Security Group with the same rules in another VPC for the same AWS Account. CloudFront or an Application Load Balancer to serve content for a public website,
Security Groups are regional and can span AZs, but can't be cross-regional. but you also want to block requests from attackers.
Outbound rules exist if you need to connect your server to a different service such as Block all requests except the ones that you specify: This is useful when you want
an API endpoint or a DB backend. You need to enable the ALLOW rule for the correct to serve content for a restricted website whose users are readily identifiable by
port though so that traffic can leave EC2 and enter the other AWS service. properties in web requests, such as the IP addresses that they use to browse to
You can attach multiple security groups to one EC2 instance and you can have the website.
multiple EC2 instances under the umbrella of one security group Count the requests that match the properties that you specify: When you want
You can specify the source of your security group (basically who is allowed to bypass to allow or block requests based on new properties in web requests, you first can
the virtual firewall) to be a single /32 IP address, an IP range, or even a separate configure AWS WAF to count the requests that match those properties without
security group. allowing or blocking those requests. This lets you confirm that you didn't
accidentally configure AWS WAF to block all the traffic to your website. When
You cannot block specific IP addresses with Security Groups (use NACLs instead)
you're confident that you specified the correct properties, you can change the
You can increase your Security Group limit by submitting a request to AWS
behavior to allow or block requests.
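The three WAF behaviors above can be sketched as a toy decision function. This is purely illustrative; `waf_decision`, the mode names, and the sample IPs are made up for this sketch and are not the WAF API.

```python
# Toy model of the three WAF behaviors: allow-all-except, block-all-except,
# and count mode. A matched request in count mode is tallied but not blocked.

def waf_decision(ip, mode, listed_ips, counter):
    matched = ip in listed_ips
    if matched:
        counter[ip] = counter.get(ip, 0) + 1    # tally matching requests
    if mode == "count":
        return "ALLOW"                          # count mode never blocks
    if mode == "allow_all_except":
        return "BLOCK" if matched else "ALLOW"  # 403 for listed attackers
    if mode == "block_all_except":
        return "ALLOW" if matched else "BLOCK"  # restricted website
    raise ValueError(mode)

hits = {}
assert waf_decision("1.2.3.4", "allow_all_except", {"1.2.3.4"}, hits) == "BLOCK"
assert waf_decision("5.6.7.8", "allow_all_except", {"1.2.3.4"}, hits) == "ALLOW"
assert waf_decision("9.9.9.9", "block_all_except", {"9.9.9.9"}, hits) == "ALLOW"
assert waf_decision("1.2.3.4", "count", {"1.2.3.4"}, hits) == "ALLOW"
assert hits["1.2.3.4"] == 2
```

Count mode maps onto the exam-relevant point: you can observe what a new rule would match before committing to allow or block.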
CloudWatch Events:

- Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources.
- You can use events to trigger Lambdas, for example, while using alarms to inform you that something went wrong.
- You can customize your CloudWatch dashboards for insights.

CloudWatch Alarms:

- CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define.
- For example, you can create custom CloudWatch alarms which will trigger notifications such as surpassing a set billing threshold.
- CloudWatch alarms have three possible states: OK, ALARM, and INSUFFICIENT_DATA.

CloudWatch Metrics:

- CloudWatch Metrics represent a time-ordered set of data points. These basically are a variable you can monitor over time to help tell if everything is okay, e.g. hourly CPU utilization.
- CloudWatch Metrics allows you to track high resolution metrics at sub-minute intervals all the way down to per second.

CloudWatch Dashboards:

- CloudWatch dashboards are customizable home pages in the CloudWatch console that you can use to monitor your resources in a single view.
- These dashboards integrate with CloudWatch Metrics and CloudWatch Alarms to create customized views of the metrics and alarms for your AWS resources.

CloudTrail

CloudTrail Simplified:

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With it, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, API calls, and other AWS services. It is a regional service, but you can configure CloudTrail to collect trails in all regions.

CloudTrail Key Details:

- This event history simplifies security analysis, resource change tracking, and troubleshooting.
- There are two types of events that can be logged in CloudTrail: management events and data events.
- Management events provide information about management operations that are performed on resources in your AWS account. Think of management events as things normally done by people when they are in AWS. Examples:
  - a user sign-in
  - a policy change
  - a newly created security configuration
  - a logging rule deletion
- Data events provide information about the resource operations performed on or in a resource. Think of data events as things normally done by software when hitting various AWS endpoints. Examples:
  - S3 object-level API activity
  - Lambda function execution activity
- By default, CloudTrail logs management events, but not data events.
- By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE). You can also choose to encrypt your log files with an AWS Key Management Service (AWS KMS) key. As these logs are stored in S3, you can define Amazon S3 lifecycle rules to archive or delete log files automatically. If you want notifications about log file delivery and validation, you can set up Amazon SNS notifications.

Elastic File System (EFS)

EFS Simplified:

EFS provides a simple and fully managed elastic NFS file system for use within AWS. EFS automatically and instantly scales your file system storage capacity up or down as you add or remove files without disrupting your application.
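The alarm-state behavior described under CloudWatch Alarms can be sketched in a few lines. This is an illustrative model only; `evaluate_alarm` and its threshold semantics are invented for the sketch, not the CloudWatch API.

```python
# Toy model of CloudWatch alarm state evaluation: a metric breaching its
# threshold moves the alarm to ALARM, otherwise OK; an absence of datapoints
# yields INSUFFICIENT_DATA.

def evaluate_alarm(datapoints, threshold):
    if not datapoints:
        return "INSUFFICIENT_DATA"
    return "ALARM" if max(datapoints) > threshold else "OK"

assert evaluate_alarm([55.0, 61.2, 72.9], threshold=70.0) == "ALARM"
assert evaluate_alarm([12.0, 20.5], threshold=70.0) == "OK"
assert evaluate_alarm([], threshold=70.0) == "INSUFFICIENT_DATA"
```

A billing alarm works the same way conceptually: the estimated-charges metric crossing your threshold flips the state to ALARM, which can then trigger an SNS notification.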
- Disaster recovery in AWS always looks to ensure standby copies of resources are maintained in a separate geographical area. This way, if a disaster (natural disaster, political conflict, etc.) ever struck where your original resources are, the copies would be unaffected.
- When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone.
- Read replicas are supported for all six flavors of DB on top of RDS.
- Each read replica will have its own DNS endpoint.
- Automated backups must be enabled in order to use read replicas.
- You can have read replicas with Multi-AZ turned on or have the read replica in an entirely separate region. You can even have read replicas of read replicas, but watch out for latency or replication lag.
- The caveat for read replicas is that they are subject to small amounts of replication lag. This is because they might be missing some of the latest transactions, as they are not updated as quickly as primaries. Application designers need to consider which queries have tolerance to slightly stale data. Those queries should be executed on the read replica, while those demanding completely up-to-date data should run on the primary node.

RDS Backups:

- When it comes to RDS, there are two kinds of backups:
  - automated backups
  - database snapshots
- Automated backups allow you to recover your database to any point in time within a retention period (between one and 35 days). Automated backups will take a full daily snapshot and will also store transaction logs throughout the day. When you perform a DB recovery, RDS will first choose the most recent daily backup and apply the relevant transaction logs from that day. Within the set retention period, this gives you the ability to do a point-in-time recovery down to the precise second.
- Automated backups are enabled by default. The backup data is stored freely up to the size of your actual database (so for every GB saved in RDS, that same amount will freely be stored in S3, up until the GB limit of the DB). Backups are taken within a defined window, so latency might go up as storage I/O is suspended in order for the data to be backed up.
- DB snapshots are done manually by the administrator. A key difference from automated backups is that they are retained even after the original RDS instance is terminated. With automated backups, the backed-up data in S3 is wiped clean along with the RDS engine. This is why you are asked if you want to take a final snapshot of your DB when you go to delete it.
- When you go to restore a DB via automated backups or DB snapshots, the result is the provisioning of an entirely new RDS instance with its own DB endpoint in order to be reached.

RDS Security:

- You can authenticate to your DB instance using IAM database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.
- An authentication token is a unique string that Amazon RDS generates on request. Authentication tokens have a lifetime of 15 minutes. You don't need to store user credentials in the database because authentication is managed externally using IAM.
- Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).
- You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
- For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.
- Encryption at rest is supported for all six flavors of DB for RDS. Encryption is done using the AWS KMS service. Once the RDS instance is encryption enabled, the data in the DB becomes encrypted, as well as all backups (automated or snapshots) and read replicas.
- After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance. You don't need to modify your database client applications to use encryption.
- Amazon RDS encryption is currently available for all database engines and storage types. However, you need to ensure that the underlying instance type supports DB encryption.
- You can only enable encryption for an Amazon RDS DB instance when you create it, not after the DB instance is created. DB instances that are encrypted can't be modified to disable encryption.

RDS Enhanced Monitoring:

- RDS comes with an Enhanced Monitoring feature. Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console, or consume the Enhanced Monitoring JSON output from CloudWatch Logs in a monitoring system of your choice.
- By default, Enhanced Monitoring metrics are stored in CloudWatch Logs for 30 days. To modify the amount of time the metrics are stored in CloudWatch Logs, change the retention for the RDS OS Metrics log group in the CloudWatch console.
- Take note that there are key differences between CloudWatch and Enhanced Monitoring metrics. CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance. As a result, you might find differences between the measurements, because the hypervisor layer performs a small amount of work that can be picked up and interpreted as part of the metric.
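The point-in-time recovery described under RDS Backups (start from the most recent daily snapshot, then replay that day's transaction logs up to the target second) can be sketched as follows. `plan_restore` and its data shapes are invented for illustration; this is not how RDS is invoked.

```python
# Sketch of point-in-time recovery planning: pick the latest daily snapshot
# taken at or before the target time, then replay only the transaction-log
# entries that fall between that snapshot and the target time.

def plan_restore(daily_snapshots, tx_logs, target_time):
    """daily_snapshots: snapshot timestamps (seconds); tx_logs: (ts, stmt)."""
    base = max(t for t in daily_snapshots if t <= target_time)
    replay = [stmt for ts, stmt in tx_logs if base < ts <= target_time]
    return base, replay

snapshot, logs = plan_restore(
    daily_snapshots=[0, 86400, 172800],
    tx_logs=[(86500, "INSERT a"), (90000, "UPDATE b"), (180000, "DELETE c")],
    target_time=91000,
)
assert snapshot == 86400                  # most recent snapshot before target
assert logs == ["INSERT a", "UPDATE b"]   # logs after snapshot, up to target
```

The same logic explains why the result of a restore is a brand-new instance: the snapshot plus replayed logs materialize a fresh copy of the database with its own endpoint.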
- Redshift also comes with Massive Parallel Processing (MPP) in order to take advantage of all the nodes in your multi-node cluster. This is done by evenly distributing data and query load across all nodes. Because of this, scaling out still retains great performance.
- Redshift is encrypted in transit using SSL and is encrypted at rest using AES-256. By default, Redshift will manage all keys, but you can do so too via AWS CloudHSM or AWS KMS.
- Redshift is billed for:
  - Compute node hours (total hours your non-leader nodes spent querying for data)
  - Backups
  - Data transfer within a VPC (but not outside of it)
- Redshift is not Multi-AZ. If you want Multi-AZ, you will need to spin up a separate cluster ingesting the same input. You can also manually restore snapshots to a new AZ in the event of an outage.
- Redshift Spectrum queries use much less of your cluster's processing capacity than other queries.
- The cluster and the data files in Amazon S3 must be in the same AWS Region.
- External S3 tables are read-only. You can't perform insert, update, or delete operations on external tables.

Redshift Enhanced VPC Routing:

- When you use Amazon Redshift Enhanced VPC Routing, Redshift forces all traffic (such as COPY and UNLOAD traffic) between your cluster and your data repositories through your Amazon VPC.
- If Enhanced VPC Routing is not enabled, Amazon Redshift routes traffic through the internet, including traffic to other services within the AWS network.
- By using Enhanced VPC Routing, you can use standard VPC features, such as VPC security groups, network access control lists (ACLs), VPC endpoints, VPC endpoint policies, internet gateways, and Domain Name System (DNS) servers.
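The even spreading of data that lets MPP scale out can be pictured as simple round-robin placement of rows across compute nodes. `distribute_rows` is a hypothetical helper for illustration, not anything in the Redshift API, and real Redshift offers other distribution styles besides even.

```python
# Minimal illustration of even (round-robin) data distribution across the
# compute nodes of a cluster, so no single node becomes a hot spot.

def distribute_rows(rows, node_count):
    slices = [[] for _ in range(node_count)]
    for i, row in enumerate(rows):
        slices[i % node_count].append(row)  # round-robin assignment
    return slices

nodes = distribute_rows(list(range(10)), node_count=3)
assert nodes == [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
# Row counts differ by at most one, so query load stays balanced:
assert max(len(n) for n in nodes) - min(len(n) for n in nodes) <= 1
```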
ElastiCache

ElastiCache Simplified:

The ElastiCache service makes it easy to deploy, operate, and scale an in-memory cache in the cloud. It helps you boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores.

ElastiCache Key Details:

- Amazon ElastiCache offers fully managed Redis and Memcached for the most demanding applications that require sub-millisecond response times.
- For data that doesn't change frequently and is often asked for, it makes a lot of sense to cache said data rather than querying it from the database.
- Common configurations that improve DB performance include introducing read replicas of a DB primary and inserting a caching layer into the storage architecture.
- Another advantage of using ElastiCache is that by caching query results, you pay the price of the DB query only once without having to re-execute the query unless the data changes.
- Amazon ElastiCache can scale-out, scale-in, and scale-up to meet fluctuating application demands. Write and memory scaling is supported with sharding. Replicas provide read scaling.
- MemcacheD is for simple caching purposes with horizontal scaling and multi-threaded performance, but if you require more complexity for your caching environment, then choose Redis.
- A further comparison between MemcacheD and Redis for ElastiCache:
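The "pay the price of the DB query only once" advantage above is the classic cache-aside pattern. The sketch below uses a plain dict as a stand-in for both the database and the cache; in practice, ElastiCache (Redis or Memcached) would hold the cached values.

```python
# Cache-aside: serve repeats from the cache, hit the database only on a miss,
# and invalidate the cached entry when the underlying data changes.

database = {"user:1": "Alice"}   # stand-in for the real DB
cache = {}                       # stand-in for ElastiCache
db_queries = 0

def get(key):
    global db_queries
    if key in cache:             # cache hit: no DB round trip
        return cache[key]
    db_queries += 1              # cache miss: query the DB once
    value = database[key]
    cache[key] = value
    return value

assert get("user:1") == "Alice" and db_queries == 1
assert get("user:1") == "Alice" and db_queries == 1   # served from cache
cache.pop("user:1")              # invalidate when the data changes
database["user:1"] = "Alicia"
assert get("user:1") == "Alicia" and db_queries == 2
```

The invalidation step is the hard part in real systems: stale entries are exactly the "unless the data changes" caveat from the bullet above.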
Route53
Route53 Simplified:
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service.
You can use Route 53 to perform three main functions in any combination: domain
registration, DNS routing, and health checking.
VPC Simplified:
VPC lets you provision a logically isolated section of the AWS cloud where you can launch
services and systems within a virtual network that you define. By having the option of
selecting which AWS resources are public facing and which are not, VPC provides much
more granular control over security.
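The "logically isolated section" you define for a VPC is ultimately a CIDR block that you carve into subnets. Python's `ipaddress` module can sketch the arithmetic; the CIDR values here are arbitrary examples, not AWS defaults.

```python
# Carving a VPC CIDR block into subnets with the stdlib ipaddress module.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")       # the VPC's CIDR block
subnets = list(vpc.subnets(new_prefix=24))      # split into /24 subnets

assert len(subnets) == 256                      # a /16 holds 256 /24s
assert str(subnets[0]) == "10.0.0.0/24"
assert ipaddress.ip_address("10.0.1.50") in subnets[1]   # lands in 10.0.1.0/24
```

This is the same math you do when deciding which subnets are public and which are private: each subnet is a non-overlapping slice of the VPC's address space.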
Supports allow rules and deny rules Supports allow rules only
Security groups can span subnets, but do not span VPCs. ICMP ensures that instances We process rules in order, starting with the
We evaluate all rules before deciding
from one security group can ping others in a different security group. It is IPv4 and lowest numbered rule, when deciding
whether to allow traffic
IPv6 compatible. whether to allow traffic
VPC Subnets:
If a network has a large number of hosts without logically grouped subdivisions,
managing the many hosts can be a tedious job. Therefore you use subnets to divide a
y
NACL Security Group Attaching an Internet Gateway to a VPC allows instances with public IPs to directly
access the internet. NAT does a similar thing, however it is for instances that do not
Applies to an instance only if someone have a public IP. It serves as an intermediate step which allow private instances to first
Automatically applies to all instances in the
specifies the security group when masked their own private IP as the NAT's public IP before accessing the internet.
subnets that it's associated with (therefore,
launching the instance, or associates
it provides an additional layer of defense if You would want your private instances to access the internet so that they can have
the security group with the instance
the security group rules are too permissive) normal software updates. NAT prevents any initiating of a connection from the
later on
internet.
NAT instances are individual EC2 instances that perform the function of providing
Because NACLs are stateless, you must also ensure that outbound rules exist
private subnets a means to securely access the internet.
alongside the inbound rules so that ingress and egress can flow smoothly.
Because they are individual instances, High Availability is not a built-in feature and
The default NACL that comes with a new VPC has a default rule to allow all inbounds they can become a choke point in your VPC. They are not fault-tolerant and serve as a
and outbounds. This means that it exists, but doesn't do anything as all traffic passes single point of failure. While it is possible to use auto-scaling groups, scripts to
through it freely. automate failover, etc. to prevent bottlenecking, it is far better to use the NAT
Gateway as an alternative for a scalable solution.
However, when you create a new NACL (instead of using the default that comes with
NAT Gateway is a managed service that is composed of multiple instances linked
the VPC) the default rules will deny all inbounds and outbounds.
together within an availability zone in order to achieve HA by default.
If you create a new NACL, you must associate whichever desired subnets to it To achieve further HA and a zone-independent architecture, create a NAT gateway for
manually so that they can inherit the NACL’s rule set. If you don’t explicitly assign a each Availability Zone and configure your routing to ensure that resources use the
subnet to an NACL, AWS will associate it with your default NACL. NAT gateway in their corresponding Availability Zone.
NAT instances are deprecated, but still useable. NAT Gateways are the preferred
NACLs are evaluated before security groups and you block malicious IPs with NACLs,
means to achieve Network Address Translation.
not security groups.
There is no need to patch NAT Gateways as the service is managed by AWS. You do
A subnet can only follow the rules listed by one NACL at a time. However, a NACL can need to patch NAT Instances though because they’re just individual EC2 instances.
describe the rules for any number of subnets. The rules will take effect immediately. Because communication must always be initiated from your private instances, you
need a route rule to route traffic from a private subnet to your NAT gateway.
Network ACL rules are evaluated by rule number, from lowest to highest, and
executed immediately when a matching allow/deny rule is found. Because of this, Your NAT instance/gateway will have to live in a public subnet as your public subnet is
order matters with your rule numbers. the subnet configured to have internet access.
When creating NAT instances, it is important to remember that EC2 instances have
The lower the number of a rule on the list, the more seniority that rule will have. List source/destination checks on them by default. What these checks do is ensure that
your rules accordingly. any traffic it comes across must be either generated by the instance or be the
intended recipient of that traffic. Otherwise, the traffic is dropped because the EC2
If you are using NAT Gateway along with your NACL, you must ensure the availability
instance is neither the source nor the destination.
of the NAT Gateway ephemeral port range within the rules of your NACL. Because NAT
Gateway traffic can appear on any of range's ports for the duration of its connection, So because NAT instances act as a sort of proxy, you must disable source/destination
you must ensure that all possible ports are accounted for and open. checks when musing a NAT instance.
NACL can have a small impact on how EC2 instances in a private subnet will Bastion Hosts:
communicate with any service, including VPC Endpoints.
Bastion Hosts are special purpose computers designed and configured to withstand
attacks. This server generally runs a single program and is stripped beyond this
NAT Instances vs. NAT Gateways:
purpose in order to reduce attack vectors. If the Internet Gateway is not attached to the VPC, which is the prerequisite for
The purpose of Bastion Hosts are to remotely access the instances behind the private instances to be accessed from the internet, then naturally instances in your VPC will
subnet for system administration purposes without exposing the host via an internet not be reachable.
gateway. If you want all of your VPC to remain private (and not just some subnets), then do not
The best way to implement a Bastion Host is to create a small EC2 instance that only attach an IGW.
has a security group rule for a single IP address. This ensures maximum security. When a Public IP address is assigned to an EC2 instance, it is effectively registered by
It is perfectly fine to use a small instance rather than a large one because the instance the Internet Gateway as a valid public endpoint. However, each instance is only aware
will only be used as a jump server that connects different servers to each other. of its private IP and not its public IP. Only the IGW knows of the public IPs that belong
If you are going to RDP or SSH into the instances of your private subnet, use a Bastion to instances.
Host. If you are going to be providing internet traffic into the instances of your private When an EC2 instance initiates a connection to the public internet, the request is sent
subnet, use a NAT. using the public IP as its source even though the instance doesn't know a thing about
Similar to NAT Gateways and NAT Instances, Bastion Hosts live within a public-facing it. This works because the IGW performs its own NAT translation where private IPs are
subnet. mapped to public IPs and vice versa for traffic flowing into and out of the VPC.
There are pre-baked Bastion Host AMIs. So when traffic from the internet is destined for an instance's public IP endpoint, the
IGW receives it and forwards the traffic onto the EC2 instance using its internal private
IP.
Route Tables:
You can only have one IGW per VPC.
Route tables are used to make sure that subnets can communicate with each other Summary: IGW connects your VPC with the internet.
and that traffic knows where to go.
Every subnet that you create is automatically associated with the main route table for Virtual Private Networks (VPNs):
the VPC.
You can have multiple route tables. If you do not want your new subnet to be VPCs can also serve as a bridge between your corporate data center and the AWS
associated with the default route table, you must specify that you want it associated cloud. With a VPC Virtual Private Network (VPN), your VPC becomes an extension of
with a different route table. your on-prem environment.
Because of this default behavior, there is a potential security concern to be aware of: if Naturally, your instances that you launch in your VPC can't communicate with your
the default route table is public then the new subnets associated with it will also be own on-premise servers. You can allow the access by first:
public. attaching a virtual private gateway to the VPC
The best practice is to ensure that the default route table where new subnets are creating a custom route table for the connection
associated with is private. updating your security group rules to allow traffic from the connection
This means you ensure that there is no route out to the internet for the default route creating the managed VPN connection itself.
table. Then, you can create a custom route table that is public instead. New subnets To bring up VPN connection, you must also define a customer gateway resource in
will automatically have no route out to the internet. If you want a new subnet to be AWS, which provides AWS information about your customer gateway device. And you
publicly accessible, you can simply associate it with the custom route table. have to set up an Internet-routable IP address of the customer gateway's external
Route tables can be configured to access endpoints (public services accessed interface.
privately) and not just the internet. A customer gateway is a physical device or software application on the on-premise
side of the VPN connection.
Internet Gateway: Although the term "VPN connection" is a general concept, a VPN connection for AWS
always refers to the connection between your VPC and your own network. AWS
supports Internet Protocol security (IPsec) VPN connections. iii. Create a virtual private gateway and attach it to the desired VPC environment.
The following diagram illustrates a single VPN connection. iv. Select VPN connections and create a new VPN connection. Select both the
customer gateway and the virtual private gateway.
v. Once the VPN connection is available, set up the VPN either on the customer
gateway or the on-prem firewall itself
Data flow into AWS via DirectConnect looks like the following: On-prem router ->
dedicated line -> your own cage / DMZ -> cross connect line -> AWS Direct Connect
Router -> AWS backbone -> AWS Cloud
Summary: DirectConnect connects your on-prem with your VPC through a non-public
tunnel.
VPC Endpoints:
VPC Endpoints ensure that you can connect your VPC to supported AWS services
without requiring an internet gateway, NAT device, VPN connection, or AWS Direct
Connect. Traffic between your VPC and other AWS services stay within the Amazon
ecosystem and these Endpoints are virtual devices that are HA and without bandwidth
constraints.
These work basically by attaching an ENI to an EC2 instance that can easily
The above VPC has an attached virtual private gateway (note: not an internet gateway)
communicate to a wide range of AWS services.
and there is a remote network that includes a customer gateway which you must
configure to enable the VPN connection. You set up the routing so that any traffic Gateway Endpoints rely on creating entries in a route table and pointing them to
from the VPC bound for your network is routed to the virtual private gateway. private endpoints used for S3 or DynamoDB. Gateway Endpoints are mainly just a
target that you set.
Summary: VPNs connect your on-prem with your VPC over the internet.
Interface Endpoints use AWS PrivateLink and have a private IP address so they are
their own entity and not just a target in a route table. Because of this, they cost
AWS DirectConnect:
$.01/hour. Gateway Endpoints are free as they’re just a new route in to set.
Direct Connect is an AWS service that establishes a dedicated network connection Interface Endpoint provisions an Elastic Network interface or ENI (think network card)
between your premises and AWS. You can create this private connectivity to reduce within your VPC. They serve as an entry and exit for traffic going to and from another
network costs, increase bandwidth, and provide more consistent network experience supported AWS service. It uses a DNS record to direct your traffic to the private IP
compared to regular internet-based connections. address of the interface. Gateway Endpoint uses route prefix in your route table to
The use case for Direct Connect is high throughput workloads or if you need a stable direct traffic meant for S3 or DynamoDB to the Gateway Endpoint (think 0.0.0.0/0 ->
or reliable connection igw).
VPN connects to your on-prem over the internet and DirectConnect connects to your To secure your Interface Endpoint, use Security Groups. But to secure Gateway
on-prem off through a private tunnel. Endpoint, use VPC Endpoint Policies.
The steps for setting up an AWS DirectConnect connection: Summary: VPC Endpoints connect your VPC with AWS services through a non-public
i. Create a virtual interface in the DirectConnect console. This is a public virtual tunnel.
interface.
ii. Go to the VPC console and then VPN connections. Create a customer gateway for AWS PrivateLink:
your on-premise.
AWS PrivateLink simplifies the security of data shared with cloud-based applications
by eliminating the exposure of data to the public Internet. AWS PrivateLink provides
private connectivity between different VPCs, AWS services, and on-premises
applications, securely on the Amazon network.
It's similar to the AWS Direct Connect service in that it establishes private connections
to the AWS cloud, except Direct Connect links on-premises environments to AWS.
PrivateLink, on the other hand, secures traffic from VPC environments which are
already in AWS.
This is useful because different AWS services often talk to each other over the internet.
If you do not want that behavior and instead want AWS services to only communicate
within the AWS network, use AWS PrivateLink. By not traversing the Internet,
PrivateLink reduces the exposure to threat vectors such as brute force and distributed
denial-of-service attacks.
PrivateLink allows you to publish an "endpoint" that others can connect with from
their own VPC. It's similar to a normal VPC Endpoint, but instead of connecting to an
AWS service, people can connect to your endpoint.
Further, you'd want to use private IP connectivity and security groups so that your
services function as though they were hosted directly on your private network.
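To make publishing such an endpoint concrete, here is a hedged sketch of the parameters a service owner might pass to boto3's `create_vpc_endpoint_service_configuration`. The Network Load Balancer ARN is a made-up placeholder:

```python
def endpoint_service_params(nlb_arn, require_acceptance=True):
    """Publish a PrivateLink endpoint service fronted by a Network Load
    Balancer; consumers then create Interface Endpoints in their own
    VPCs to reach it over private IPs."""
    return {
        "NetworkLoadBalancerArns": [nlb_arn],
        # Require the service owner to approve each connection request.
        "AcceptanceRequired": require_acceptance,
    }

# With a real NLB:
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.create_vpc_endpoint_service_configuration(
#       **endpoint_service_params("arn:aws:elasticloadbalancing:...:loadbalancer/net/my-nlb/123"))
```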
Remember that AWS PrivateLink applies to applications and services communicating with each other within the AWS network. For VPCs to communicate with each other within the AWS network, use VPC Peering.

Summary: AWS PrivateLink connects your AWS services with other AWS services through a non-public tunnel.

VPC Peering:

VPC peering allows you to connect one VPC with another via a direct network route using the private IPs belonging to both. With VPC peering, instances in different VPCs behave as if they were on the same network.

You can create a VPC peering connection between your own VPCs, regardless of whether they are in the same region, and even with a VPC in an entirely different AWS account.

VPC Peering is usually done in such a way that there is one central VPC that peers with the others. Only the central VPC can talk to the other VPCs.

You cannot do transitive peering for non-central VPCs. Non-central VPCs cannot go through the central VPC to reach another non-central VPC. You must set up a new peering connection between non-central nodes if you need them to talk to each other.

The following diagram highlights the above idea. VPC B is free to communicate with VPC A with VPC Peering enabled between both. However, VPC B cannot continue the conversation with VPC C. Only VPC A can communicate with VPC C.

It is worth knowing which VPC peering configurations are not supported:

- Overlapping CIDR blocks
- Transitive peering
- Edge-to-edge routing through a gateway or connection device (VPN connection, Internet Gateway, AWS Direct Connect connection, etc.)

You can peer across regions, but you cannot have one subnet stretched over multiple availability zones. However, you can have multiple subnets in the same availability zone.

Summary: VPC Peering connects your VPC to another VPC through a non-public tunnel.

VPC Flow Logs:

VPC Flow Logs is a feature that captures the IP information for all traffic flowing into and out of your VPC. Flow log data is sent to an S3 bucket or CloudWatch, where you can view, retrieve, and manipulate it.

You can capture the traffic flow at various stages through its travel:

- Traffic flowing into and out of the VPC (like at the IGW)
- Traffic flowing into and out of the subnet
- Traffic flowing into and out of the network interface of the EC2 instance (eth0, eth1, etc.)

VPC Flow Logs capture packet metadata, not packet contents. Things like:

- The source IP
- The destination IP
- The packet size
- Anything which could be observed from outside of the packet

Your flow logs can be configured to log valid traffic, invalid traffic, or both.

You can have flow logs sourced from a different VPC than the VPC where your Flow Logs are. However, the other VPC must be peered via VPC Peering and under your account via AWS Organizations.

You can customize your logs by tagging them.

Once you create a Flow Log, you cannot change its configuration. You must make a new one.

Not all IP traffic is monitored under VPC Flow Logs. The following is a list of things that are ignored by Flow Logs:

- Query requests for instance metadata
- DHCP traffic
- Query requests to the AWS DNS server

In summary, Global Accelerator is a fast, reliable pipeline between user and application.

It's like going on a trip (web traffic) and stopping to ask for directions in possibly unsafe parts of town (multiple networks are visited, which can increase security risks), as opposed to having a GPS (Global Accelerator) that leads you directly where you want to go (endpoint) without having to make unnecessary stops.

It can be confused with CloudFront, but CloudFront is a cache for content stemming from a distant origin server. While CloudFront simply caches static content to the closest AWS Point of Presence (POP) location, Global Accelerator uses the same Amazon POPs to accept initial requests and route them directly to the services.

Route 53's latency-based routing might also appear similar to Global Accelerator, but Route 53 is simply for helping choose which region the user should use. Route 53 has nothing to do with actually providing a fast network path.

Global Accelerator also provides fast regional failover.
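The capture stages above map onto the `ResourceType` parameter of boto3's `create_flow_logs` call. The sketch below builds that call's parameters; the VPC ID and bucket ARN are hypothetical placeholders:

```python
def vpc_flow_log_params(vpc_id, bucket_arn, traffic_type="ALL"):
    """Parameters for a flow log that delivers records to an S3 bucket.

    traffic_type chooses valid traffic ("ACCEPT"), invalid traffic
    ("REJECT"), or both ("ALL"), as described above.
    """
    assert traffic_type in ("ACCEPT", "REJECT", "ALL")
    return {
        "ResourceIds": [vpc_id],
        "ResourceType": "VPC",   # or "Subnet" / "NetworkInterface" for the other stages
        "TrafficType": traffic_type,
        "LogDestinationType": "s3",
        "LogDestination": bucket_arn,
    }

# With real resources:
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.create_flow_logs(**vpc_flow_log_params("vpc-0abc123", "arn:aws:s3:::my-flow-logs"))
```

Since a flow log's configuration cannot be changed after creation, changing any of these values means deleting the flow log and creating a new one.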
SNS provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing, including Amazon SQS queues, AWS Lambda functions, and HTTP/S webhooks. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email.

You can send these push notifications to Apple, Google, Fire OS, and Windows devices.

SNS allows you to group multiple recipients using topics. A topic is an access point that allows recipients to dynamically subscribe for identical copies of the same notification.

One topic can support deliveries to multiple endpoint types. When you publish to a topic, SNS appropriately formats copies of that message for each kind of device.

To prevent messages from being lost, messages are stored redundantly across multiple AZs.

There is no long or short polling involved with SNS due to the instantaneous pushing of messages.

SNS has flexible message delivery over multiple transport protocols and has a simple API.

Kinesis

Kinesis Simplified:

Kinesis is used for processing real-time data streams (data that is generated continuously) from devices constantly sending data into AWS so that said data can be collected and analyzed.

It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.

There are three different types of Kinesis:

Kinesis Streams

Kinesis Streams works where the data producers stream their data into Kinesis Streams, which can retain the data that enters it from one day up to seven days. Once inside Kinesis Streams, the data is contained within shards.

Kinesis Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources such as website clickstreams, financial transactions, social media feeds, IT logs, and location-tracking events. For example: purchase requests from a large online store like Amazon, stock prices, Netflix content, Twitch content, online gaming data, Uber positioning and directions, etc.

Kinesis Firehose

Amazon Kinesis Firehose is the easiest way to load streaming data into data stores and analytics tools. When data is streamed into Kinesis Firehose, there is no persistent storage there to hold onto it. The data has to be analyzed as it comes in, so it's optional to have Lambda functions inside your Kinesis Firehose. Once processed, you send the data elsewhere.

Kinesis Firehose can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with the existing business intelligence tools and dashboards you're already using today.

Kinesis Analytics

Kinesis Analytics works with both Kinesis Streams and Kinesis Firehose and can analyze data on the fly. The data within Kinesis Analytics also gets sent elsewhere once it is finished processing. It analyzes your data inside of the Kinesis service itself.

Partition keys are used with Kinesis so you can organize data by shard. This way, input from a particular device can be assigned a key that will limit its destination to a specific shard.

Partition keys are useful if you would like to maintain order within your shard.

Consumers, or the EC2 instances that read from Kinesis Streams, can go inside the shards to analyze what is in there. Once finished analyzing or parsing the data, the consumers can then pass on the data to a number of places for storage like a DB or S3.

The total capacity of a Kinesis stream is the sum of the data within its constituent shards.

You can always increase the write capacity assigned to your shard table.

Lambda

Lambda Simplified:

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. You upload your code and Lambda takes care of everything required to run and scale your code with high availability.

You can set up your code to be automatically triggered from other AWS services or be called directly from any web or mobile app.

Lambda Key Details:

- Lambda is a compute service where you upload your code as a function and AWS provisions the necessary details underneath the function so that the function executes successfully.
- AWS Lambda is the ultimate abstraction layer. You only worry about code; AWS does everything else.
- Lambda supports Go, Python, C#, PowerShell, Node.js, and Java.
- Each Lambda function maps to one request. Lambda scales horizontally automatically.
- Lambda is priced on the number of requests: the first one million are free, and each million afterwards is $0.20.
- Lambda is also priced on the runtime of your code, rounded up to the nearest 100ms, and the amount of memory your code allocates.
- Lambda works globally.
- Lambda functions can trigger other Lambda functions.
- You can use Lambda as an event-driven service that executes based on changes in your AWS ecosystem.
- You can also use Lambda as a handler in response to HTTP events via API calls over the AWS SDK or API Gateway.
- When you create or update Lambda functions that use environment variables, AWS Lambda encrypts them using the AWS Key Management Service. When your Lambda function is invoked, those values are decrypted and made available to the Lambda code.
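A minimal Python handler illustrates the one-invocation-per-request model described above. The event shape here (a `name` field) is invented purely for illustration:

```python
def handler(event, context):
    """Minimal AWS Lambda handler: invoked once per event.

    `event` carries the trigger's payload (here, a hypothetical JSON
    body forwarded by API Gateway); `context` carries runtime metadata
    such as the remaining execution time, and is unused in this sketch.
    """
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Uploaded as a function, Lambda invokes `handler` once per event and scales out horizontally as concurrent events arrive, so no server capacity needs to be managed.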
- The first time you create or update Lambda functions that use environment variables in a region, a default service key is created for you automatically within AWS KMS. This key is used to encrypt environment variables. However, if you wish to use encryption helpers and use KMS to encrypt environment variables after your Lambda function is created, you must create your own AWS KMS key and choose it instead of the default key.
- To enable your Lambda function to access resources inside a private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within a private VPC.
- AWS X-Ray allows you to debug your Lambda function in case of unexpected behavior.

Lambda@Edge:

You can use Lambda@Edge to allow your Lambda functions to customize the content that CloudFront delivers.

It adds compute capacity to your CloudFront edge locations and allows you to execute the functions in AWS locations closer to your application's viewers. The functions run in response to CloudFront events, without provisioning or managing servers. You can use Lambda functions to change CloudFront requests and responses at the following points:

- After CloudFront receives a request from a viewer (viewer request)
- Before CloudFront forwards the request to the origin (origin request)
- After CloudFront receives the response from the origin (origin response)
- Before CloudFront forwards the response to the viewer (viewer response)

You'd use Lambda@Edge to simplify and reduce origin infrastructure.

API Gateway

API Gateway Simplified:

API Gateway is a fully managed service for developers that makes it easy to build, publish, manage, and secure entire APIs. With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as workloads running on EC2, code running on AWS Lambda, or any web application.

API Gateway Key Details:

- Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.
- Amazon API Gateway has no minimum fees or startup costs. You pay only for the API calls you receive and the amount of data transferred out.
- API Gateway does the following for your APIs:
  - Exposes HTTP(S) endpoints for RESTful functionality
  - Uses serverless functionality to connect to Lambda & DynamoDB
  - Can send each API endpoint to a different target
  - Runs cheaply and efficiently
  - Scales readily and effortlessly
  - Can throttle requests to prevent attacks
  - Tracks and controls usage via an API key
  - Can be version controlled
  - Can be connected to CloudWatch for monitoring and observability
- Since API Gateway can function with AWS Lambda, you can run your APIs and code without needing to maintain servers.
- Amazon API Gateway provides throttling at multiple levels, including globally and by service call.
  - In software, a throttling process, or a throttling controller as it is sometimes called, is a process responsible for regulating the rate at which application processing is conducted, either statically or dynamically.
  - Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate limit of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gateway to handle a burst of 2,000 requests per second for a few seconds.
  - Amazon API Gateway tracks the number of requests per second. Any requests over the limit will receive a 429 HTTP response. The client SDKs generated by Amazon API Gateway retry calls automatically when met with this response.
- You can add caching to API calls by provisioning an Amazon API Gateway cache and specifying its size in gigabytes. The cache is provisioned for a specific stage of your APIs. This improves performance and reduces the traffic sent to your back end. Cache settings allow you to control the way the cache key is built and the time-to-live (TTL) of the data stored for each method. Amazon API Gateway also exposes management APIs that help you invalidate the cache for each stage.
- You can enable API caching to improve latency and reduce I/O for your endpoint. When caching for a particular API stage (the version-controlled version), you cache responses for a particular TTL in seconds.
- API Gateway supports AWS Certificate Manager and can make use of free TLS/SSL certificates.
- With API Gateway, there are two kinds of API calls:
  - Calls to the API Gateway API to create, modify, delete, or deploy REST APIs. These are logged in CloudTrail.
  - API calls set up by the developers to deliver their custom functionality: these are not logged in CloudTrail.

Cross Origin Resource Sharing:

- In computing, the same-origin policy is an important concept where a web browser permits scripts contained in one page to access data on another page, but only if both pages have the same origin.
- This behavior is enforced by browsers, but is ignored by tools like cURL and Postman.
- Cross-origin resource sharing (CORS) is one way the server at the origin can relax the same-origin policy. CORS allows restricted resources, like fonts, to be requested from a domain outside the original domain from which the first resource was shared.
- CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
- If you ever come across an error that mentions that an origin policy cannot be read at the remote resource, then you need to enable CORS on API Gateway.
- CORS is enforced on the client (web browser) side.
- A common example of this issue is if you are using a site with JavaScript/AJAX for multiple domains under API Gateway. You would need to ensure that CORS is enabled.
- CORS does not prevent XSS attacks, but does protect against CSRF attacks. What it does is control who can use the data served by your endpoint. So if you have a weather website with callbacks to an API that checks the forecast, you could stop someone from writing a website that issues JavaScript calls into your API when users navigate to their site.
- When someone attempts the malicious calls, the browser will read the CORS headers and will not allow the request to take place, thus protecting you from the attack.

CloudFormation

CloudFormation Simplified:

CloudFormation is an automated tool for provisioning entire cloud-based environments. It is similar to Terraform in that you codify the instructions for what you want inside your application setup (X many web servers of Y type with a Z type DB on the backend, etc.). It makes it a lot easier to just describe what you want in markup and have AWS do the actual provisioning work involved.

CloudFormation Key Details:

- The main use case for CloudFormation is for advanced setups and production environments, as it is complex and has many robust features.
- CloudFormation templates can be used to create, update, and delete infrastructure.
- The templates are written in YAML or JSON.
- A full CloudFormation setup is called a stack.
- Once a template is created, AWS will make the corresponding stack. This is the living and active representation of said template. One template can create an infinite number of stacks.
- The Resources field is the only mandatory field when creating a CloudFormation template.
- Rollback triggers allow you to monitor the creation of the stack as it's built. If an error occurs, you can trigger a rollback, as the name implies.
- AWS Quick Starts is composed of many high-quality CloudFormation stacks designed by AWS engineers.
- An example template that would spin up an EC2 instance:
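The guide mentions an example template that would spin up an EC2 instance. A minimal sketch of such a template, defined inline and launched with boto3 (the AMI ID, stack name, and instance type are placeholder assumptions, not values from the original):

```python
import json

# Minimal CloudFormation template: Resources is the only mandatory field.
TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0abcdef1234567890",  # placeholder; use a real regional AMI
                "InstanceType": "t2.micro",
            },
        }
    },
}

# Creating a stack from the template:
#   import boto3
#   cf = boto3.client("cloudformation")
#   cf.create_stack(StackName="demo-stack", TemplateBody=json.dumps(TEMPLATE))
```

The same template could equally be written in YAML; JSON is used here only so it stays a plain Python dictionary.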
- A template can be updated and then used to update the same stack.

ElasticBeanstalk

ElasticBeanstalk Simplified:

ElasticBeanstalk is another way to script out your provisioning process by deploying existing applications to the cloud. ElasticBeanstalk is aimed toward developers who know very little about the cloud and want the simplest way of deploying their code.

ElasticBeanstalk Key Details:

- Just upload your application and ElasticBeanstalk will take care of the underlying infrastructure.
- ElasticBeanstalk has capacity provisioning, meaning you can use it with autoscaling from the get-go.
- ElasticBeanstalk applies updates to your application by having a duplicate ready with the already updated version. This duplicate is then swapped with the original. This is done as a preventative measure in case your updated application fails. If the app does fail, ElasticBeanstalk will switch back to the original copy with the older version and there will be no downtime experienced by the users who are using your application.
- You can use ElasticBeanstalk to host Docker as well, since Elastic Beanstalk supports the deployment of web applications from containers. With Docker containers, you can define your own runtime environment, your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. ElasticBeanstalk makes it easy to deploy Docker because Docker containers are already self-contained and include all the configuration information and software required to run.

AWS Organizations

- The point of AWS Organizations is to deploy permissions to the separate accounts underneath the root account and have those policies trickle down. AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS.
- You can use organizational units (OUs) to group similar accounts together to administer as a single unit. This greatly simplifies the management of your accounts.
- You can attach a policy-based control to an OU, and all accounts within the OU automatically inherit the policy. So if your company's developers all have their own sandbox AWS account, they can be treated as a single unit and be restricted by the same policies.
- With AWS Organizations, we can enable or disable services using Service Control Policies (SCPs), broadly on organizational units or more specifically on individual accounts.
- Use SCPs with AWS Organizations to establish access controls so that all IAM principals (users and roles) adhere to them. With SCPs, you can specify Conditions, Resources, and NotAction to deny access across accounts in your organization or organizational unit. For example, you can use SCPs to restrict access to specific AWS Regions, or prevent deleting common resources, such as an IAM role used by your central administrators.

Miscellaneous

The following section includes services, features, and techniques that may appear on the exam. They are also extremely useful to know as an engineer using AWS. If the following items do appear on the exam, they will not be tested in detail. You'll just have to know what the meaning is behind the name. It is a great idea to learn each item in depth for your career's benefit, but it is not necessary for the exam.
What is the Amazon Cognito? AWS Resource Access Manager (RAM) is a service that enables you to easily and
securely share AWS resources with any AWS account or within your AWS Organization.
Before discussing Amazon Cognito, it is first important to understand what Web You can share AWS Transit Gateways, Subnets, AWS License Manager configurations,
Identity Federation is. Web Identity Federation lets you give your users access to AWS and Amazon Route 53 Resolver rules resources with RAM.
resources after they have successfully authenticated into a web-based identity
Many organizations use multiple accounts to create administrative or billing isolation,
provider such as Facebook, Google, Amazon, etc. Following a successful login into
and to limit the impact of errors as part of the AWS Organizations service.
these services, the user is provided an auth code from the identity provider which can
RAM eliminates the need to create duplicate resources in multiple accounts, reducing
be used to gain temporary AWS credentials.
the operational overhead of managing those resources in every single account you
Amazon Cognito is the Amazon service that provides Web Identity Federation. You
own.
don’t need to write the code that tells users to sign in for Facebook or sign in for
You can create resources centrally in a multi-account environment, and use RAM to
Google on your application. Cognito does that already for you out of the box.
share those resources across accounts in three simple steps: create a Resource Share,
Once authenticated into an identity provider (say with Facebook as an example), the
specify resources, and specify accounts.
provider supplies an auth token. This auth token is then supplied to cognito which
RAM is available at no additional charge.
responds with limited access to your AWS environment. You dictate how limited you
would like this access to be in the IAM role.
Cognito's job is broker between your app and legitimate authenticators.
What is Athena?
Cognito User Pools are user directories that are used for sign-up and sign-in Athena is an interactive query service which allows you to interact and query data
functionality on your application. Successful authentication generates a JSON web from S3 using standard SQL commands. This is beneficial for programmatic querying
token. Remember user pools to be user based. It handles registration, recovery, and for the average developer. It is serverless, requires no provisioning, and you pay per
authentication. query and per TB scanned. You basically turn S3 into a SQL supported database by
Cognito Identity Pools are used to allow users temp access to direct AWS Services like using Athena.
S3 or DynamoDB. Identity pools actually go in and grant you the IAM role. Example use cases:
SAML-based authentication can be used to allow AWS Management Console login for Query logs that are dumped into S3 buckets as an alternative or supplement to
non-IAM users. the ELK stack
In particular, you can use Microsoft Active Directory which implements Security Setting queries to run business reports based off of the data regularly entering S3
Assertion Markup Language (SAML) as well. Running queries on click-stream data to have further insight of customer
You can use Amazon Cognito to deliver temporary, limited-privilege credentials to behavior
your application so that your users can access AWS resources.
Amazon Cognito identity pools support both authenticated and unauthenticated What is AWS Macie?
identities.
To understand Macie, it is important to understand PII or Personally Identifiable
You can retrieve a unique Amazon Cognito identifier (identity ID) for your end user Information:
immediately if you're allowing unauthenticated users or after you've set the login Personal data used to establish an individual’s identity which can be exploited
tokens in the credentials provider if you're authenticating users.
Examples: Social Security number, phone number, home address, email address,
When you need to easily add authentication to your mobile and desktop app, think D.O.B, passport number, etc.
Amazon Cognito.
Amazon Macie is an ML-powered security service that helps you prevent data loss by
automatically discovering, classifying, and protecting sensitive data stored in Amazon
What is AWS Resource Access Manager? S3. Amazon Macie uses machine learning to recognize sensitive data such as
personally identifiable information (PII) or intellectual property, assigns a business
value, and provides visibility into where this data is stored and how it is being used in Because of this risk, many customers have chosen not to regularly rotate their
your organization. credentials, which effectively substitutes one risk for another (functionality vs.
You can be informed of detections via the Macie dashboards, alerts, or reporting. security).
Macie can also analyze CloudTrail logs to see who might have interacted with sensitive Secrets Manager enables you to replace hard-coded credentials in your code
data. (including passwords), with an API call to Secrets Manager to retrieve the secret
Macie continuously monitors data access activity for anomalies, and delivers alerts programmatically.
when it detects risk of unauthorized access or inadvertent data leaks. This helps ensure that the secret can't be compromised by someone examining your
Macie has ability to detect global access permissions inadvertently being set on code, because the secret simply isn't there.
sensitive data, detect uploading of API keys inside source code, and verify sensitive Also, you can configure Secrets Manager to automatically rotate the secret for you
customer data is being stored and accessed in a manner that meets their compliance according to a schedule that you specify. This enables you to replace long-term
standards. secrets with short-term ones, which helps to significantly reduce the risk of
compromise.
What is AWS KMS?
What is AWS STS?
AWS Key Management Service (AWS KMS) is a managed service that makes it easy for
you to create and control the encryption keys used to encrypt your data. The master AWS Security Token Service (AWS STS) is the service that you can use to create and
keys that you create in AWS KMS are protected by FIPS 140-2 validated cryptographic provide trusted users with temporary security credentials that can control access to
modules. your AWS resources.
AWS KMS is integrated with most other AWS services that encrypt your data with Temporary security credentials work almost identically to the long-term access key
encryption keys that you manage. AWS KMS is also integrated with AWS CloudTrail to credentials that your IAM users can use.
provide encryption key usage logs to help meet your auditing, regulatory and Temporary security credentials are short-term, as the name implies. They can be
compliance needs. configured to last for anywhere from a few minutes to several hours. After the
You can configure your application to use the KMS API to encrypt all data before credentials expire, AWS no longer recognizes them or allows any kind of access from
saving it to disk. API requests made with them.
AWS Secrets Manager is an AWS service that makes it easier for you to manage AWS OpsWorks is a configuration management service that provides managed
secrets. instances of Chef and Puppet. Chef and Puppet are automation platforms that allow
Secrets can be database credentials, passwords, third-party API keys, and even you to use code to automate the configurations of your servers.
arbitrary text. You can store and control access to these secrets centrally by using the OpsWorks lets you use Chef and Puppet to automate how servers are configured,
Secrets Manager console, the Secrets Manager command line interface (CLI), or the deployed, and managed across your Amazon EC2 instances or on-premises compute
Secrets Manager API and SDKs. environments.
In the past, when you created a custom application that retrieves information from a OpsWorks has three offerings - AWS Opsworks for Chef Automate, AWS OpsWorks for
database, you typically had to embed the credentials (the secret) for accessing the Puppet Enterprise, and AWS OpsWorks Stacks.
database directly in the application. When it came time to rotate the credentials, you AWS OpsWorks Stacks lets you manage applications and servers on AWS and on-
had to do much more than just create new credentials. You had to invest time to premises. With OpsWorks Stacks, you can model your application as a stack
update the application to use the new credentials. Then you had to distribute the containing different layers, such as load balancing, database, and application server.
updated application. If you had multiple applications that shared credentials and you
missed updating one of them, the application would break.
OpsWorks Stacks is complex enough for you to deploy and configure Amazon EC2 With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice
instances in each layer or connect to other resources such as Amazon RDS databases. that they can access anywhere, anytime, from any supported device.
AWS Directory Service provides multiple ways to use Amazon Cloud Directory and
Microsoft Active Directory (AD) with other AWS services.
What is Amazon Elastic Container Service?
Directories store information about users, groups, and devices, and administrators use Amazon Elastic Container Service (Amazon ECS) is a fully managed container
them to manage access to information and resources. orchestration service.
AWS Directory Service provides multiple directory choices for customers who want to Amazon ECS eliminates the need for you to install, operate, and scale your own cluster
use existing Microsoft AD or Lightweight Directory Access Protocol (LDAP)–aware management infrastructure. With simple API calls, you can launch and stop container-
applications in the cloud. It also offers those same choices to developers who need a enabled applications, query the complete state of your cluster, and access many
directory to manage users, groups, devices, and access. familiar features like security groups, Elastic Load Balancing, EBS volumes and IAM
roles.
You can use Amazon ECS to schedule the placement of containers across your cluster based on your resource needs and availability requirements. You can also integrate your own scheduler or third-party schedulers to meet business- or application-specific requirements.

You can choose to run your ECS clusters using AWS Fargate, which is serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

What is IoT Core?

AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices.

AWS IoT Core provides secure communication and data processing across different kinds of connected devices and locations so you can easily build IoT applications.
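Device-to-cloud messaging in IoT Core is typically routed by MQTT-style topics, where subscriptions may use the `+` (single-level) and `#` (multi-level) wildcards. A sketch of that matching rule (illustrative logic only, not the AWS SDK):

```python
def topic_matches(topic_filter, topic):
    """Return True if an MQTT-style topic filter matches a topic.
    '+' matches exactly one level; '#' matches all remaining levels."""
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True  # multi-level wildcard swallows the rest
        if i >= len(t_parts):
            return False  # filter is deeper than the topic
        if f != "+" and f != t_parts[i]:
            return False  # literal level must match exactly
    return len(f_parts) == len(t_parts)

# A thermostat publishing to device/thermostat-42/telemetry reaches
# subscribers of the first two filters but not the third:
assert topic_matches("device/+/telemetry", "device/thermostat-42/telemetry")
assert topic_matches("device/#", "device/thermostat-42/telemetry")
assert not topic_matches("device/+", "device/thermostat-42/telemetry")
```

This is why a single rule or subscriber can fan in telemetry from many devices at once without listing each device's topic.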
What is AWS WorkSpaces?

Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.

Amazon WorkSpaces helps you eliminate the complexity in managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.

What is Amazon Elastic Kubernetes Service?

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service. EKS runs upstream Kubernetes and is certified Kubernetes conformant, so you can leverage all the benefits of open-source tooling from the community. You can also easily migrate any standard Kubernetes application to EKS without needing to refactor your code.
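"Standard Kubernetes application" here means one described by ordinary Kubernetes manifests. A minimal illustrative Deployment (the names and image below are placeholders) applies unchanged to an EKS cluster or any other conformant cluster:

```yaml
# Run three replicas of a container behind a common label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3              # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.19
          ports:
            - containerPort: 80
```

`kubectl apply -f deployment.yaml` behaves the same against EKS as against a self-managed cluster, which is what makes the no-refactor migration claim possible.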
Kubernetes is open-source software that allows you to deploy and manage containerized applications at scale. Kubernetes groups containers into logical groupings for management and discoverability, then launches them onto clusters of EC2 instances. Using Kubernetes, you can run containerized applications including microservices, batch processing workers, and platforms as a service (PaaS) using the same tool set on premises and in the cloud.

Amazon EKS provisions and scales the Kubernetes control plane, including the API servers and backend persistence layer, across multiple AWS Availability Zones for high availability and fault tolerance. Amazon EKS automatically detects and replaces unhealthy control plane nodes and provides patching for the control plane.

Without Amazon EKS, you have to run both the Kubernetes control plane and the cluster of worker nodes yourself. With Amazon EKS, you provision your worker nodes using a single command in the EKS console, CLI, or API, and AWS handles provisioning, scaling, and managing the Kubernetes control plane in a highly available and secure configuration. This removes a significant operational burden for running Kubernetes and allows you to focus on building applications instead of managing AWS infrastructure.

You can run EKS using AWS Fargate, which is serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

Amazon EKS is integrated with many AWS services to provide scalability and security for your applications. These services include Elastic Load Balancing for load distribution, IAM for authentication, Amazon VPC for isolation, and AWS CloudTrail for logging.

One of the challenges with automating deployments is the cut-over from the final stage of testing to live production. You usually need to do this quickly in order to minimize downtime.

The Blue-Green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software, you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle.

Blue-Green deployment also gives you a rapid way to roll back - if anything goes wrong, you switch the router back to your blue environment.

CloudFormation and CodeDeploy both support this deployment technique.

What is Amazon Data Lifecycle Manager?

You can use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation, retention, and deletion of snapshots taken to back up your Amazon EBS volumes. Automating snapshot management helps you to:

Protect valuable data by enforcing a regular backup schedule.
Retain backups as required by auditors or internal compliance.
Reduce storage costs by deleting outdated backups.

Using Amazon DLM means that you no longer need to remember to take your EBS snapshots, reducing the cognitive load on engineers.
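A DLM lifecycle policy's retention rule can be thought of as "keep the N most recent snapshots and delete the rest." A sketch of that pruning logic (illustrative only, not the DLM API):

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshot_times, retain_count):
    """Given snapshot creation times, return the ones a keep-N-most-recent
    retention rule would delete, oldest first."""
    newest_first = sorted(snapshot_times, reverse=True)
    return sorted(newest_first[retain_count:])

# Example: a daily snapshot for a week, retaining the 3 most recent.
start = datetime(2020, 6, 1)
snaps = [start + timedelta(days=i) for i in range(7)]
doomed = snapshots_to_delete(snaps, retain_count=3)
# The four oldest snapshots (June 1-4) are eligible for deletion.
```

DLM applies this kind of rule on a schedule you define, so creation and cleanup both happen without anyone remembering to run them.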