
Unit 5 APP IMPLEMENTATION IN CLOUD

Cloud providers Overview – Virtual Private Cloud – Scaling (Horizontal and Vertical) –
Virtual Machines, Ethernet and Switches – Docker Container – Kubernetes

Cloud Providers Overview

Cloud computing provides a means of accessing applications as utilities over the
Internet. It allows us to create, configure, and customize applications online.

What is Cloud?

The term Cloud refers to a network or the Internet. In other words, the cloud is
something that is present at a remote location. The cloud can provide services
over public and private networks, i.e., WAN, LAN, or VPN.

Applications such as e-mail, web conferencing, and customer relationship
management (CRM) execute on the cloud.

What is Cloud Computing?

Cloud computing refers to manipulating, configuring, and accessing hardware
and software resources remotely. It offers online data storage, infrastructure, and
applications.

Cloud computing offers platform independence, as the software is not required to be
installed locally on the PC. Hence, cloud computing makes our business
applications mobile and collaborative.
Basic Concepts

There are certain services and models working behind the scenes that make cloud
computing feasible and accessible to end users. The following are the working models
for cloud computing:

 Deployment Models
 Service Models

Deployment Models

Deployment models define the type of access to the cloud, i.e., where and how the
cloud is located. A cloud can have any of four types of access: public, private, hybrid,
and community.

Public Cloud

The public cloud allows systems and services to be easily accessible to the general
public. Public cloud may be less secure because of its openness.

Private Cloud

The private cloud allows systems and services to be accessible within an
organization. It is more secure because of its private nature.

Community Cloud

The community cloud allows systems and services to be accessible by a group of
organizations.
Hybrid Cloud

The hybrid cloud is a mixture of public and private cloud, in which the critical
activities are performed using private cloud while the non-critical activities are
performed using public cloud.

Service Models

Cloud computing is based on service models. These are categorized into three basic
service models:

 Infrastructure-as–a-Service (IaaS)
 Platform-as-a-Service (PaaS)
 Software-as-a-Service (SaaS)

Anything-as-a-Service (XaaS) is yet another service model, which includes
Network-as-a-Service, Business-as-a-Service, Identity-as-a-Service,
Database-as-a-Service or Strategy-as-a-Service.

Infrastructure-as-a-Service (IaaS) is the most basic level of service. Each of
the service models inherits the security and management mechanisms of the
model beneath it.

Infrastructure-as-a-Service (IaaS)

IaaS provides access to fundamental resources such as physical machines, virtual
machines, virtual storage, etc.
Platform-as-a-Service (PaaS)

PaaS provides the runtime environment for applications, development and
deployment tools, etc.

Software-as-a-Service (SaaS)

The SaaS model allows end users to use software applications as a service.

History of Cloud Computing

The concept of cloud computing came into existence in the 1950s with the
implementation of mainframe computers, accessible via thin/static clients. Since then,
cloud computing has evolved from static clients to dynamic ones, and from
software to services.

Benefits

Cloud computing has numerous advantages. Some of them are listed below:

 One can access applications as utilities over the Internet.
 One can manipulate and configure applications online at any time.
 It does not require installing software to access or manipulate cloud
applications.
 Cloud computing offers online development and deployment tools and a
programming runtime environment through the PaaS model.
 Cloud resources are available over the network in a manner that provides
platform-independent access to any type of client.
 Cloud computing offers on-demand self-service: resources can be used
without interaction with the cloud service provider.
 Cloud computing is highly cost-effective because it operates at high efficiency
with optimum utilization; it just requires an Internet connection.
 Cloud computing offers load balancing, which makes it more reliable.

Risks related to Cloud Computing

Although cloud computing is a promising innovation with various benefits in the
world of computing, it comes with risks. Some of them are discussed below:

Security and Privacy

This is the biggest concern about cloud computing. Since data management and
infrastructure management in the cloud are provided by a third party, it is always a
risk to hand over sensitive information to cloud service providers.

Although cloud computing vendors ensure highly secure, password-protected
accounts, any sign of a security breach may result in loss of customers and business.
Lock-In

It is very difficult for customers to switch from one Cloud Service Provider
(CSP) to another. This results in dependency on a particular CSP for service.

Isolation Failure

This risk involves the failure of the isolation mechanisms that separate storage,
memory, and routing between different tenants.

Management Interface Compromise

In the case of a public cloud provider, the customer management interfaces are
accessible through the Internet, which increases the risk of unauthorized access.

Insecure or Incomplete Data Deletion

It is possible that data requested for deletion may not actually get deleted. This can
happen for either of the following reasons:

 Extra copies of the data are stored but are not available at the time of deletion.
 The disk that stores the data cannot be destroyed because it also stores data of
multiple tenants.

Characteristics of Cloud Computing

There are five key characteristics of cloud computing:

On Demand Self Service

Cloud computing allows users to use web services and resources on demand. One
can log on to a website at any time and use these resources.
Broad Network Access

Since cloud computing is completely web based, it can be accessed from anywhere
and at any time.

Resource Pooling

Cloud computing allows multiple tenants to share a pool of resources: a single
physical instance of hardware, database, and basic infrastructure can be shared.

Rapid Elasticity

It is very easy to scale the resources vertically or horizontally at any time. Scaling of
resources means the ability of resources to deal with increasing or decreasing demand.

The resources being used by customers at any given point of time are automatically
monitored.
Measured Service

In this model, the cloud provider controls and monitors all aspects of the cloud
service. Resource optimization, billing, and capacity planning all depend on this
measurement.

What are cloud service providers?

Overview

Cloud service providers are companies that establish public clouds, manage private
clouds, or offer on-demand cloud computing components (also known as cloud
computing services) like Infrastructure-as-a-Service (IaaS), Platform-as-a-Service
(PaaS), and Software-as-a-Service (SaaS). Cloud services can reduce business process
costs when compared to on-premises IT.

These clouds aren’t usually deployed as a standalone infrastructure solution, but
rather as part of a hybrid cloud.

Why use a cloud provider?

Using a cloud provider is a helpful way to access computing services that you would
otherwise have to provide on your own, such as:

Infrastructure: The foundation of every computing environment. This
infrastructure could include networks, database services, data management,
data storage (known in this context as cloud storage), servers (the cloud is the
basis for serverless computing), and virtualization.

Platforms: The tools needed to create and deploy applications. These
platforms could include operating systems like Linux®, middleware, and
runtime environments.

Software: Ready-to-use applications. This software could be custom or
standard applications provided by independent service providers.

Public cloud provider vs. managed private cloud

Public cloud providers

Public cloud providers virtualize their own infrastructure, platforms, or applications
from hardware they own, and then pool all that into data lakes that they orchestrate
with management and automation software before transmitting it across the internet to
their end users.

Managed private cloud

Also known as managed cloud service providers, private cloud providers serve
customers a private cloud that's deployed, configured, and managed by someone other
than the customer. It's a cloud delivery option that helps enterprises with understaffed
or underskilled IT teams provide better private cloud services and cloud infrastructure
to users.

Certified cloud providers

There are a handful of well-known, major public cloud companies, such as Alibaba
Cloud, Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Cloud,
Oracle Cloud, and Microsoft Azure, but there are also hundreds of other cloud
computing providers all over the world.

The Red Hat Certified Cloud and Service Provider program includes hundreds of
cloud, system integrator, and managed service providers—along with software
developers and hardware manufacturers—you can use to run Red Hat products, host
physical and virtual machines, and set up private and public cloud environments.

How do I pick a cloud provider?

The best cloud for your enterprise depends on your business needs, the size of your
business, your current computing platform and IT infrastructure, and what your goals
are for the future—among other things.

For example, the first thing you might do is evaluate whether using a particular cloud
provider aligns with your enterprise strategy.

If it does, the next step is to verify what services you’ll need from your cloud to
support this strategy—what cloud technologies will you be able to handle within your
enterprise, and which should be delegated to a cloud service provider?

Having infrastructure, platform, or software that are managed for you can free your
business to serve your clients, be more efficient in overall operations, and allow more
time to look into improving or expanding your development operations (DevOps).

You can also do more than just secure your own space within your cloud; you can
choose providers who build their cloud solutions on Red Hat® Enterprise Linux.

Using a supported, enterprise open source operating system means that thousands of
developers are monitoring millions of lines of code in the Linux kernel—finding
flaws and developing fixes before errors become vulnerabilities or leaks. An entire
organization verifies those fixes and deploys patches without interrupting your
applications.

Many public cloud providers have a set of standard support contracts that include
validating active software subscriptions, resolving issues, maintaining security, and
deploying patches. Managed cloud providers' support could be relegated to simple
cloud administration or it can serve the needs of an entire IT department.

After verifying your cloud provider starts with Linux, here are some steps to help
determine which provider is right for you.
Public cloud provider vs. managed private cloud

Cost: The resources, platforms, and services public cloud providers supply are usually
charged by the hour or byte, meaning costs can fluctuate based on how much you use.
Managed private clouds, by contrast, might include more fixed contracts tied to
individual contractors or cloud admins, with only minor spikes when enterprise
activity increases.

Location: Major public cloud providers give you data access from nearly anywhere in
the world, but regional providers may help you comply with data sovereignty
regulations. With a managed private cloud, support staff close to your datacenter can
more easily maintain the physical infrastructure holding up your cloud.

Security: With a public cloud provider, there are certain innate risks that come with
not owning or managing the systems that house enterprise information, services, and
functions. With a managed private cloud, hire and partner with trustworthy people and
organizations who understand the complexities of your unique security risks and
compliance requirements.

Reliability: Many public cloud providers guarantee certain uptimes, such as 99.9%,
and various service-level agreements dictate change requests and service restoration.
Managed private cloud providers' reliability mirrors that of public cloud providers,
but it may be tied to the condition of the physical hardware your cloud runs on.

Technical specifications: The right public cloud provider should be certified to run
operating systems, storage, middleware, and management systems that integrate with
your existing systems. With a managed private cloud, every contractor's skill set is
unique, so verify that each individual has the training and certification necessary to
manage your cloud appropriately.

How do I become a cloud provider?

Becoming a cloud provider is as simple as setting up a cloud and letting someone else
use it. There are other constraints to consider—security, routes of access, self-service,
and more—but letting someone else use the cloud is the fundamental concept of being
a cloud provider.

Becoming a cloud provider can also be more effective when your environments are
certified to run the products customers already use in their datacenters.

Virtual Private Cloud

What is a virtual private cloud (VPC)?


A virtual private cloud (VPC) is a secure, isolated private cloud hosted within a public
cloud. VPC customers can run code, store data, host websites, and do anything else
they could do in an ordinary private cloud, but the private cloud is hosted remotely by
a public cloud provider. (Not all private clouds are hosted in this fashion.) VPCs
combine the scalability and convenience of public cloud computing with the data
isolation of private cloud computing.

Imagine a public cloud as a crowded restaurant, and a virtual private cloud as a
reserved table in that crowded restaurant. Even though the restaurant is full of people,
a table with a "Reserved" sign on it can only be accessed by the party who made the
a table with a "Reserved" sign on it can only be accessed by the party who made the
reservation. Similarly, a public cloud is crowded with various cloud customers
accessing computing resources – but a VPC reserves some of those resources for use
by only one customer.

What is a public cloud? What is a private cloud?

A public cloud is shared cloud infrastructure. Multiple customers of the cloud vendor
access that same infrastructure, although their data is not shared – just like every
person in a restaurant orders from the same kitchen, but they get different dishes.
Public cloud service providers include AWS, Google Cloud Platform, and Microsoft
Azure, among others.

The technical term for multiple separate customers accessing the same cloud
infrastructure is "multitenancy" (see What Is Multitenancy? to learn more).

A private cloud, however, is single-tenant. A private cloud is a cloud service that is
exclusively offered to one organization. A virtual private cloud (VPC) is a private
cloud within a public cloud; no one else shares the VPC with the VPC customer.

How is a VPC isolated within a public cloud?

A VPC isolates computing resources from the other computing resources available in
the public cloud. The key technologies for isolating a VPC from the rest of the public
cloud are:

Subnets: A subnet is a range of IP addresses within a network that are reserved so
that they're not available to everyone within the network, essentially dividing part of
the network for private use. In a VPC these are private IP addresses that are not
accessible via the public Internet, unlike typical IP addresses, which are publicly
visible.

VLAN: A LAN is a local area network, or a group of computing devices that are all
connected to each other without the use of the Internet. A VLAN is a virtual LAN.
Like a subnet, a VLAN is a way of partitioning a network, but the partitioning takes
place at a different layer within the OSI model (layer 2 instead of layer 3).

VPN: A virtual private network (VPN) uses encryption to create a private network
over the top of a public network. VPN traffic passes through publicly shared Internet
infrastructure – routers, switches, etc. – but the traffic is scrambled and not visible to
anyone.

A VPC will have a dedicated subnet and VLAN that are only accessible by the VPC
customer. This prevents anyone else within the public cloud from accessing
computing resources within the VPC – effectively placing the "Reserved" sign on the
table. The VPC customer connects via VPN to their VPC, so that data passing into
and out of the VPC is not visible to other public cloud users.

Some VPC providers offer additional customization with:


 Network Address Translation (NAT): This feature matches private IP
addresses to a public IP address for connections with the public Internet. With
NAT, a public-facing website or application could run in a VPC.
 BGP route configuration: Some providers allow customers to customize
BGP routing tables for connecting their VPC with their other infrastructure.
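
To make this concrete, here is a minimal, hedged sketch of provisioning a VPC with one private subnet using AWS's boto3 SDK (one provider among many). The region and CIDR ranges are illustrative assumptions, not values from this text, and a real deployment would also configure route tables, gateways, and security rules.

    # Hedged sketch: create a VPC and a private subnet with boto3.
    # Assumes AWS credentials are configured; region/CIDRs are illustrative.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # The VPC reserves a private IP range isolated from other tenants.
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]

    # A subnet carves out part of that range; with no route to an internet
    # gateway, it stays private, as described above.
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
    print(vpc_id, subnet["Subnet"]["SubnetId"])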

What are the advantages of using a VPC instead of a private cloud?

Scalability: Because a VPC is hosted by a public cloud provider, customers can add
more computing resources on demand.

Easy hybrid cloud deployment: It's relatively simple to connect a VPC to a public
cloud or to on-premises infrastructure via the VPN.

Better performance: Cloud-hosted websites and applications typically perform better
than those hosted on local on-premises servers.

Better security: The public cloud providers that offer VPCs often have more
resources for updating and maintaining the infrastructure, especially for small and
mid-market businesses. For large enterprises or any companies that face extremely
tight data security regulations, this is less of an advantage.

Scaling (Horizontal and Vertical)

What is Cloud Scalability?

Cloud scalability in cloud computing refers to the ability to increase or decrease IT
resources as needed to meet changing demand. Scalability is one of the hallmarks of
the cloud and the primary driver of its exploding popularity with businesses.

Data storage capacity, processing power, and networking can all be scaled using
existing cloud computing infrastructure. Better yet, scaling can be done quickly and
easily, typically with little to no disruption or downtime. Third-party cloud providers
already have all the infrastructure in place; in the past, when scaling with on-premises
physical infrastructure, the process could take weeks or months and require
tremendous expense.

Cloud scalability versus cloud elasticity

Cloud providers can offer both elastic and scalable solutions. While these two terms
sound identical, cloud scalability and elasticity are not the same.

Elasticity refers to a system’s ability to grow or shrink dynamically in response to
changing workload demands, like a sudden spike in web traffic. An elastic system
automatically adapts to match resources with demand as closely as possible, in real
time. A business that experiences variable and unpredictable workloads might seek an
elastic solution in the public cloud.
A system’s scalability, as described above, refers to its ability to increase workload
with existing hardware resources. A scalable solution enables stable, longer-term
growth in a pre-planned manner, while an elastic solution addresses more immediate,
variable shifts in demand. Elasticity and scalability in cloud computing are both
important features for a system, but the priority of one over the other depends in part
on whether your business has predictable or highly variable workloads.
Why is cloud scalable?

A scalable cloud architecture is made possible through virtualization. Unlike physical
machines, whose resources and performance are relatively fixed, virtual machines
(VMs) are highly flexible and can be easily scaled up or down. They can be
moved to a different server or hosted on multiple servers at once; workloads and
applications can be shifted to larger VMs as needed.

Third-party cloud providers also have all the vast hardware and software resources
already in place to allow for rapid scaling that an individual business could not
achieve cost-effectively on its own.

Benefits of cloud scalability

The major cloud scalability benefits are driving cloud adoption for businesses large
and small:

 Convenience: Often with just a few clicks, IT administrators can easily add
more VMs that are available without delay—and customized to the exact
needs of an organization. That saves precious time for IT staff. Instead of
spending hours and days setting up physical hardware, teams can focus on
other tasks.
 Flexibility and speed: As business needs change and grow—including
unexpected spikes in demand—cloud scalability allows IT to respond quickly.
Today, even smaller businesses have access to high-powered resources that
used to be cost prohibitive. No longer are companies tied down by obsolete
equipment—they can update systems and increase power and storage with
ease.
 Cost savings: Thanks to cloud scalability, businesses can avoid the upfront
costs of purchasing expensive equipment that could become outdated in a few
years. Through cloud providers, they pay for only what they use and minimize
waste.
 Disaster recovery: With scalable cloud computing, you can reduce disaster
recovery costs by eliminating the need to build and maintain secondary data
centers.

When to use cloud scalability

Successful businesses employ scalable business models that allow them to grow
quickly and meet changing demands. It’s no different with their IT. Cloud scalability
advantages help businesses stay nimble and competitive.
Scalability is one of the driving reasons to migrate to the cloud. Whether traffic or
workload demands increase suddenly or grow gradually over time, a scalable cloud
solution enables organizations to respond appropriately and cost-effectively to
increase storage and performance.

How to achieve cloud scalability?

Businesses have many options for how to set up a customized, scalable cloud solution
via public, private, or hybrid cloud.
There are two basic types of scalability in cloud computing: vertical and horizontal
scaling.
With vertical scaling, also known as “scaling up” or “scaling down,” you add or
subtract power to an existing cloud server by upgrading memory (RAM), storage, or
processing power (CPU). Usually this means that the scaling has an upper limit based
on the capacity of the server or machine being scaled; scaling beyond that limit often
requires downtime.

To scale horizontally (scaling in or out), you add more resources like servers to your
system to spread out the workload across machines, which in turn increases
performance and storage capacity. Horizontal scaling is especially important for
businesses with high availability services requiring minimal downtime.
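
As an illustration of the two approaches, here is a hedged boto3 sketch; the instance ID, instance type, and Auto Scaling group name are hypothetical placeholders, not anything prescribed by this text.

    # Hedged sketch of vertical vs. horizontal scaling on AWS with boto3.
    # All IDs and names below are hypothetical placeholders.
    import boto3

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    # Vertical scaling ("scaling up"): resize one server. Note the downtime:
    # the instance must be stopped before its type can be changed.
    instance = "i-0123456789abcdef0"
    ec2.stop_instances(InstanceIds=[instance])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance])
    ec2.modify_instance_attribute(
        InstanceId=instance,
        InstanceType={"Value": "m5.2xlarge"},  # move to a larger size
    )
    ec2.start_instances(InstanceIds=[instance])

    # Horizontal scaling ("scaling out"): add more instances instead,
    # with no downtime for the instances already running.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="web-asg", DesiredCapacity=6
    )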

How do you determine optimal cloud scalability?

Changing business requirements or surging demand often require changes to your
scalable cloud solution. But how much storage, memory, and processing power do you
really need? Will you scale up or out?

To determine a right-sized solution, ongoing performance testing is essential. IT
administrators must continually measure factors such as response time, number of
requests, CPU load, and memory usage. Scalability testing also measures an
application’s performance and ability to scale up or down depending on user requests.

Automation can also help optimize cloud scalability. You can determine thresholds
for usage that trigger automatic scaling so that there’s no effect on performance. You
may also consider a third-party configuration management service or tool to help
manage your scaling needs, goals and implementation.

Horizontal and Vertical Scaling Strategies

The cloud has dramatically simplified these scaling problems by making it easier to
scale up or down and out or in. Primarily, there are two ways to scale in the cloud:
horizontally or vertically.

When you scale horizontally, you are scaling out or in, which refers to the number of
provisioned resources. When you scale vertically, it’s often called scaling up or down,
which refers to the power and capacity of an individual resource.

What are the differences between horizontal and vertical scaling in the cloud?

Horizontal scaling refers to provisioning additional servers to meet your needs, often
splitting workloads between servers to limit the number of requests any individual
server is getting. Horizontal scaling in cloud computing means adding additional
instances instead of moving to a larger instance size.
Vertical scaling refers to adding more or faster CPUs, memory, or I/O resources to an
existing server, or replacing one server with a more powerful server. In a data center,
administrators traditionally achieved vertical scaling by purchasing a new, more
powerful server and discarding or repurposing the old one. Today’s cloud architects
can accomplish AWS vertical scaling and Microsoft Azure vertical scaling by
changing instance sizes. AWS and Azure cloud services have many different instance
sizes, so vertical scaling in cloud computing is possible for everything from EC2
instances to RDS databases.

Horizontal vs. Vertical Scaling Pros and Cons

Pros and cons of horizontal scaling:

Pros: Horizontal scaling is much easier to accomplish without downtime. Horizontal
scaling is also easier than vertical scaling to manage automatically. Limiting the
number of requests any instance gets at one time is good for performance, no matter
how large the instance. Provisioning additional instances also means having greater
redundancy in the rare event of an outage.

Cons: Depending on the number of instances you need, your costs may be higher.
Additionally, without a load balancer in place, your machines run the risk of being
over-utilized, which could lead to an outage. However, with public cloud platforms,
you can pay attention to discounts for Reserved Instances (RIs) if you’re able to
predict when you require more compute power.

Pros and cons of vertical scaling:

Pros: In the cloud, vertical scaling means changing the sizes of cloud resources,
rather than purchasing more, to match them to the workload. This process is known as
right sizing. For example, right sizing in AWS can refer to the CPU, memory, storage,
and networking capacity of instances and storage classes. Right sizing is one of the
most effective ways to control cloud costs. When done correctly, right sizing can help
lower costs of vertically scaled resources.

Cons: In general, vertical scaling can cost more. Why is vertical scaling expensive?
When resources aren’t right sized correctly — or at all — costs can skyrocket.
There’s also downtime to consider. Even in a cloud environment, scaling vertically
usually requires making an application unavailable for some amount of time.
Therefore, environments or applications that can’t have downtime would typically
benefit more from horizontal scalability by provisioning additional resources instead
of increasing capacity for existing resources.
Which Is Better: Horizontal or Vertical Scaling?

The decision to scale horizontally or vertically in the cloud depends upon the
requirements of your data. Remember that scaling continues to be a challenge, even in
cloud environments. All parts of your application need to scale, from the compute
resources to database and storage resources. Neglecting any piece of the scaling
puzzle can lead to unplanned downtime or worse. The best solution might be a
combination of vertical scaling to find the ideal capacity of each instance, and then
horizontal scaling to handle spikes in demand while ensuring uptime.

Types of Cloud Scalability: Manual vs. Scheduled vs. Automatic Scaling

What also matters is how you scale. Three basic ways to scale in a cloud environment
include manual scaling, scheduled scaling, and automatic scaling.

Manual Scaling
Manual scaling is just as it sounds. It requires an engineer to manage scaling up and
out or down and in. In the cloud, both vertical and horizontal scaling can be
accomplished with the push of a button, so the actual scaling isn’t terribly difficult
when compared to managing a data center.
However, because it requires a team member’s attention, manual scaling cannot take
into account all the minute-by-minute fluctuations in demand seen by a normal
application. This also can lead to human error. An individual might forget to scale
back down, leading to extra charges.

Scheduled Scaling

Scheduled scaling solves some of the problems with manual scaling. This makes it
easier to tailor your provisioning to your actual usage without requiring a team
member to make the changes manually every day.

If you know when peak activity occurs, you can schedule scaling based on your usual
demand curve. For example, you can scale out to ten instances from 5 p.m. to 10 p.m.,
scale in to two instances from 10 p.m. to 7 a.m., and then scale back out to five
instances until 5 p.m. Look for a cloud management platform with heat maps that can
visually identify such peaks and valleys of usage.
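
The schedule just described could be expressed as recurring scaling actions. The sketch below uses boto3's Auto Scaling API with a hypothetical group name; the cron expressions would need converting to UTC for a real deployment.

    # Hedged sketch: the 5 p.m. / 10 p.m. / 7 a.m. schedule above, expressed
    # as recurring AWS Auto Scaling actions. Group name is hypothetical.
    import boto3

    autoscaling = boto3.client("autoscaling")

    for name, cron, capacity in [
        ("evening-peak", "0 17 * * *", 10),  # scale out to 10 at 5 p.m.
        ("overnight", "0 22 * * *", 2),      # scale in to 2 at 10 p.m.
        ("daytime", "0 7 * * *", 5),         # back out to 5 at 7 a.m.
    ]:
        autoscaling.put_scheduled_update_group_action(
            AutoScalingGroupName="web-asg",
            ScheduledActionName=name,
            Recurrence=cron,  # cron syntax, interpreted in UTC
            DesiredCapacity=capacity,
        )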

Automatic Scaling

Automatic scaling (also known as Auto Scaling) is when your compute, database, and
storage resources scale automatically based on predefined rules. For example, when
metrics like vCPU, memory, and network utilization rates go above or below a certain
threshold, you can scale out or in.

Auto scaling makes it possible to ensure your application is always available — and
always has enough resources provisioned to prevent performance problems or outages
— without paying for far more resources than you are actually using.
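
A threshold-driven rule like the one described can be sketched as a target-tracking policy; the group name and the 60% average-CPU target below are illustrative assumptions.

    # Hedged sketch: automatic (target-tracking) scaling with boto3.
    # The group name and the 60% CPU target are assumptions.
    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="cpu-target-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,  # scale out above, scale in below
        },
    )
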
Virtual Machines

What is a virtual machine?

A virtual machine (VM) is a digital version of a physical computer. Virtual machine
software can run programs and operating systems, store data, connect to networks,
and do other computing functions, and requires maintenance such as updates and
system monitoring. Multiple VMs can be hosted on a single physical machine, often a
server, and then managed using virtual machine software. This provides flexibility for
compute resources (compute, storage, network) to be distributed among VMs as
needed, increasing overall efficiency. This architecture provides the basic building
blocks for the advanced virtualized resources we use today, including cloud
computing.

Virtual machine defined

A VM is a virtualized instance of a computer that can perform almost all of the same
functions as a computer, including running applications and operating systems.

Virtual machines run on a physical machine and access computing resources from
software called a hypervisor. The hypervisor abstracts the physical machine’s
resources into a pool that can be provisioned and distributed as needed, enabling
multiple VMs to run on a single physical machine.

What are virtual machines used for?

VMs are the basic building blocks of virtualized computing resources and play a
primary role in creating any application, tool, or environment—for virtual machines
online and on-premises. Here are a few of the more common enterprise functions of
virtual machines:

Consolidate servers

VMs can be set up as servers that host other VMs, which lets organizations reduce
sprawl by concentrating more resources onto a single physical machine.

Create development and test environments

VMs can serve as isolated environments for testing and development that include full
functionality but have no impact on the surrounding infrastructure.

Support DevOps
VMs can easily be turned off or on, migrated, and adapted, providing maximum
flexibility for development.
Enable workload migration

The flexibility and portability that VMs provide are key to increasing the velocity of
migration initiatives.

Improve disaster recovery and business continuity

Replicating systems in cloud environments using VMs can provide an extra layer of
security and certainty. Cloud environments can also be continuously updated.

Create a hybrid environment

VMs provide the foundation for creating a cloud environment alongside an on-
premises one, bringing flexibility without abandoning legacy systems.

What are virtual machines used for?

Common use cases for virtual machines on single computers include:

 Testing - Software developers often want to test their applications in different
environments. They can use virtual machines to run their applications in
various OSes on one computer. This is simpler and more cost-effective than
testing on several different physical machines.
 Running software designed for other OSes - Although certain software
applications are only available for a single platform, a VM can run software
designed for a different OS. For example, a Mac user who wants to run
software designed for Windows can run a Windows VM on their Mac host.
 Running outdated software - Some pieces of older software can’t be run in
modern OSes. Users who want to run these applications can run an old OS on
a virtual machine.
 Browser isolation - Browser isolation is the practice of 'isolating' web
browser activity away from the rest of a computer's operating system to keep
malware from affecting the computer's other files and programs. Some
browser isolation tools use VMs to establish this isolation, though this
approach can slow down browsing activity.

How does cloud computing use virtual machines?

Several cloud providers offer virtual machines to their customers. These virtual
machines typically live on powerful servers that can act as a host to multiple VMs and
can be used for a variety of reasons that wouldn’t be practical with a locally-hosted
VM. These include:

 Running SaaS applications - Software-as-a-Service, or SaaS for short, is a
cloud-based method of providing software to users, in which an application is
served to users over the Internet rather than running on their computers. Often,
it is virtual machines in the cloud that do the computation for SaaS
applications as well as delivering them to users. If the cloud provider has a
geographically distributed network edge, then the application will run closer to
the user, resulting in faster performance.
 Backing up data - Cloud-based VM services are popular for backing up data,
because the data can be accessed from anywhere. Plus, cloud VMs provide
better redundancy, require less maintenance, and generally scale better than
physical data centers. (For example, it’s relatively easy to buy an extra
gigabyte of storage space from a cloud VM provider, but much more difficult
to build a new local data server for that extra gigabyte of data.)
 Hosting services like email and access management - Hosting these services
on cloud VMs is generally faster and more cost-effective, and helps minimize
maintenance and offload security concerns as well.
 Browser isolation - Some browser isolation tools use cloud VMs to run web
browsing activity and deliver safe content to users via a secure Internet
connection.
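
As a concrete example of provisioning the kind of cloud VM described above, here is a hedged boto3 sketch; the AMI ID and instance type are placeholders that would have to be looked up for a real region and workload.

    # Hedged sketch: launch one cloud VM (an EC2 instance) with boto3.
    # The AMI ID below is a placeholder, not a real image.
    import boto3

    ec2 = boto3.client("ec2")
    result = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical machine image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(result["Instances"][0]["InstanceId"])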

Ethernet and Switches

What is an Ethernet Switch?

Ethernet switching connects wired devices such as computers, laptops, routers,
servers, and printers to a local area network (LAN). Multiple Ethernet switch ports
allow for faster connectivity and smoother access across many devices at once.

An Ethernet switch creates networks and uses multiple ports to communicate between
devices in the LAN. Ethernet switches differ from routers, which connect networks
and use only a single LAN and WAN port. A full wired and wireless corporate
infrastructure provides wired connectivity and Wi-Fi for wireless connectivity.

Hubs are similar to Ethernet switches in that connected devices on the LAN will be
wired to them, using multiple ports. The big difference is that hubs share bandwidth
equally among ports, while Ethernet switches can devote more bandwidth to certain
ports without degrading network performance. When many devices are active on a
network, Ethernet switching provides more robust performance.

Routers connect networks to other networks, most commonly connecting LANs to
wide area networks (WANs). Routers are usually placed at the gateway between
networks and route data packets along the network.

Most corporate networks use combinations of switches, routers, and hubs, and wired
and wireless technology.

What Ethernet Switches Can Do For Your Network

Ethernet switches provide many advantages when correctly installed, integrated, and
managed. These include:

1. Reduction of network downtime
2. Improved network performance and increased available bandwidth on the
network
3. Relieved strain on individual computing devices
4. Protection of the overall corporate network with more robust security
5. Lower IT capex and opex costs thanks to remote management and
consolidated wiring
6. Right-sized IT infrastructure and planning for future expansion using modular
switches

Most corporate networks support a combination of wired and wireless technologies,
including Ethernet switching as part of the wired infrastructure. Dozens of devices
can connect to a network using an Ethernet switch, and administrators can monitor
traffic, control communications among machines, securely manage user access, and
rapidly troubleshoot.

The switches come in a wide variety of options, meaning organizations can almost
always find a solution right-sized for their network. These range from basic
unmanaged network switches offering plug-and-play connectivity, to feature-rich
Gigabit Ethernet switches that perform at higher speeds than wireless options.

How Ethernet Switches Work: Terms and Functionality

Frames are sequences of information that travel over Ethernet networks to move data
between computers. An Ethernet frame includes a destination address, which is where
the data is traveling to, and a source address, which is the location of the device
sending the frame. In the standard seven-layer Open Systems Interconnection (OSI)
model for computer networking, frames are part of Layer 2, also known as the data-
link layer. Because switches operate on frames, they are sometimes known as "link
layer devices" or "Layer 2 switches."

Transparent Bridging is the most popular and common form of bridging, crucial to
Ethernet switch functionality. Using transparent bridging, a switch automatically
begins working without requiring any configuration on a switch or changes to the
computers in the network (i.e. the operation of the switch is transparent).

Address Learning -- Ethernet switches control how frames are transmitted between
switch ports, making decisions on how traffic is forwarded based on the 48-bit media
access control (MAC) addresses used in LAN standards. An Ethernet switch
can learn which devices are on which segments of the network using the source
addresses of the frames it receives.

Every port on a switch has a unique MAC address, and as frames are received on
ports, the software in the switch looks at the source address and adds it to a table of
addresses it constantly updates and maintains. (This is how a switch “discovers” what
devices are reachable on which ports.) This table is also known as a forwarding
database, which is used by the switch to make decisions on how to filter traffic to
reach certain destinations. That the Ethernet switch can “learn” in this manner makes
it possible for network administrators to add new connected endpoints to the network
without having to manually configure the switch or the endpoints.

Traffic Filtering -- Once a switch has built a database of addresses, it can smoothly
select how it filters and forwards traffic. As it learns addresses, a switch checks
frames and makes decisions based on the destination address in the frame. Switches
can also isolate traffic to only those segments needed to receive frames from senders,
ensuring that traffic does not unnecessarily flow to other ports.

Frame Flooding -- Entries in a switch’s forwarding database may drop from the list if
the switch doesn’t see any frames from a certain source over a period of time. (This
keeps the forwarding database from becoming overloaded with “stale” source
information.) If an entry is dropped—meaning it once again is unknown to the switch
—but traffic resumes from that entry at a later time, the switch will forward the frame
to all switch ports (also known as frame flooding) to search for its correct destination.
When it connects to that destination, the switch once again learns the correct port, and
frame flooding stops.
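
The learning, filtering, and flooding behavior described above can be summarized in a small simulation. This is an illustrative model of the forwarding database only, not real switch firmware.

    # Illustrative simulation of transparent bridging: address learning,
    # traffic filtering, and frame flooding. Not real switch code.
    forwarding_db = {}  # learned MAC address -> switch port

    def handle_frame(src_mac, dst_mac, in_port, all_ports):
        forwarding_db[src_mac] = in_port          # learn the sender's port
        out_port = forwarding_db.get(dst_mac)
        if out_port is None:
            # Unknown destination: flood to every port except the one
            # the frame arrived on.
            return [p for p in all_ports if p != in_port]
        if out_port == in_port:
            return []                             # filter: already local
        return [out_port]                         # forward to learned port

    ports = [1, 2, 3, 4]
    print(handle_frame("aa:aa", "bb:bb", 1, ports))  # floods: [2, 3, 4]
    print(handle_frame("bb:bb", "aa:aa", 2, ports))  # learned: [1]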

Multicast Traffic -- LANs are not only able to transmit frames to single addresses,
but also capable of sending frames to multicast addresses, which are received by
groups of endpoint destinations. Broadcast addresses are a specific form of multicast
address; they group all of the endpoint destinations in the LAN. Multicasts and
broadcasts are commonly used for functions such as dynamic address assignment, or
sending data in multimedia applications to multiple users on a network at once, such
as in online gaming. (Streaming applications such as video, which send high rates of
multicast data and generate a lot of traffic, can hog network bandwidth.)

Managed vs. Unmanaged Ethernet Switches

Unmanaged Ethernet switching refers to switches that have no user configuration;
these can just be plugged in and turned on.

Managed Ethernet switching refers to switches that can be managed and programmed
to deliver certain outcomes and perform certain tasks, from adjusting speeds and
combining users into subgroups, to monitoring network traffic.

Ethernet enables cloud connectivity through several service types:

 Ethernet Private Lines (EPLs) and Ethernet Virtual Private Lines
(EVPLs) are the top services for private cloud and inter-data center
connectivity. EPLs provide point-to-point connections, while EVPLs also
support point-to-multipoint connectivity using EVCs (Ethernet Virtual
Connections). Traffic prioritization is provided through CoS (Class of Service)
features.
 Ethernet DIA (Dedicated Internet Access) services are used primarily for
connectivity to public cloud offerings.
 E-Access to IP/MPLS VPN implementations are increasing for hybrid
Ethernet/IP VPNs that link to public services or to private clouds.
 E-LAN services are used for private cloud connectivity between on-net
enterprise sites and data centers. Metro LAN services connect sites within a
metro area, and WAN VPLS services support wide area topologies.

Ethernet-based cloud connectivity is also heating up for colocation companies (e.g.,
Equinix, Telx, etc.). Exchange services offer vendor-neutral connections among cloud
providers, content/media providers, network service operators, and enterprises.
Ethernet simplifies physical connections for exchange participants and enables virtual
interconnectivity. These capabilities facilitate new business models that disrupt the
economics of traditional wide area networks. Look for exchange ecosystems to
expand their cloud offerings during 2013.

Standards for Ethernet-based cloud connectivity continue to advance. The MEF's
Carrier Ethernet 2.0 (CE 2.0) initiative provides guidelines for cloud-ready Ethernet
services and equipment. Developments are focused on multi-network
interconnectivity, end-to-end SLAs (Service Level Agreements), application-aware
QoS (Quality of Service), and dynamic bandwidth provisioning. A new CE 2.0
certification process aims to ensure standards adherence.

Centrally Managed from the Cloud

With cloud management, thousands of switch ports can be configured and monitored
instantly over the web. Without needing a physical connection between switches, you
can remotely configure them for access devices, assign voice VLANs, control PoE,
and more, with just a few simple clicks and without on-site IT. By managing your
network through the cloud you can provision remote sites, deploy network-wide
configuration changes, and easily manage campus and distributed networks without
IT training or dedicated staff.

Docker Container

A container is a standard unit of software that packages up code and all its
dependencies so the application runs quickly and reliably from one computing
environment to another. A Docker container image is a lightweight, standalone,
executable package of software that includes everything needed to run an application:
code, runtime, system tools, system libraries and settings.

Container images become containers at runtime; in the case of Docker containers,
images become containers when they run on Docker Engine. Available for both
Linux and Windows-based applications, containerized software will always run the
same, regardless of the infrastructure. Containers isolate software from its
environment and ensure that it works uniformly despite differences between, for
instance, development and staging environments.

Docker containers that run on Docker Engine:

 Standard: Docker created the industry standard for containers, so they could
be portable anywhere
 Lightweight: Containers share the machine’s OS kernel and therefore
do not require an OS per application, driving higher server efficiencies and
reducing server and licensing costs
 Secure: Applications are safer in containers and Docker provides the strongest
default isolation capabilities in the industry

Docker container technology was launched in 2013 as an open source Docker Engine.
It leveraged existing computing concepts around containers and specifically in the
Linux world, primitives known as cgroups and namespaces. Docker’s technology is
unique because it focuses on the requirements of developers and systems operators to
separate application dependencies from infrastructure.

Success in the Linux world drove a partnership with Microsoft that brought Docker
containers and its functionality to Windows Server.

Technology available from Docker and its open source project, Moby, has been
leveraged by all major data center vendors and cloud providers. Many of these
providers are leveraging Docker for their container-native IaaS offerings. Additionally,
the leading open source serverless frameworks utilize Docker container technology.

Comparing Containers and Virtual Machines

Containers and virtual machines have similar resource isolation and allocation
benefits, but function differently because containers virtualize the operating
system instead of hardware. Containers are more portable and efficient.

CONTAINERS

Containers are an abstraction at the app layer that packages code and dependencies
together. Multiple containers can run on the same machine and share the OS kernel
with other containers, each running as isolated processes in user space. Containers
take up less space than VMs (container images are typically tens of MBs in size), can
handle more applications, and require fewer VMs and operating systems.

VIRTUAL MACHINES

Virtual machines (VMs) are an abstraction of physical hardware turning one server
into many servers. The hypervisor allows multiple VMs to run on a single machine.
Each VM includes a full copy of an operating system, the application, necessary
binaries and libraries – taking up tens of GBs. VMs can also be slow to boot.

Containers and Virtual Machines Together

Containers and VMs used together provide a great deal of flexibility in deploying
and managing applications.

As a result, container technology offers all the functionality and benefits of VMs,
including application isolation, cost-effective scalability, and disposability, plus
important additional advantages:

 Lighter weight: Unlike VMs, containers don’t carry the payload of an entire
OS instance and hypervisor; they include only the OS processes
and dependencies necessary to execute the code. Container sizes are measured
in megabytes (vs. gigabytes for some VMs), make better use of hardware
capacity, and have faster startup times.
 Greater resource efficiency: With containers, you can run several times as
many copies of an application on the same hardware as you can using VMs.
This can reduce your cloud spending.
 Improved developer productivity: Compared to VMs, containers are faster
and easier to deploy, provision and restart. This makes them ideal for use
in continuous integration and continuous delivery (CI/CD) pipelines and a
better fit for development teams adopting Agile and DevOps practices.

Docker enhanced the native Linux containerization capabilities with technologies that
enable:

 Improved—and seamless—portability: While LXC containers often
reference machine-specific configurations, Docker containers run without
modification across any desktop, data center, and cloud environment.
 Even lighter weight and more granular updates: With LXC, multiple
processes can be combined within a single container. With Docker containers,
only one process can run in each container. This makes it possible to build an
application that can continue running while one of its parts is taken down for
an update or repair.
 Automated container creation: Docker can automatically build a container
based on application source code.

 Container versioning: Docker can track versions of a container image, roll
back to previous versions, and trace who built a version and how. It can even
upload only the deltas between an existing version and a new one.
 Container reuse: Existing containers can be used as base images—essentially
like templates for building new containers.
 Shared container libraries: Developers can access an open-source registry
containing thousands of user-contributed containers.

Docker tools and terms

Some of the tools and terminology you’ll encounter when using Docker include:

Dockerfile

Every Docker container starts with a simple text file containing instructions for how
to build the Docker container image. The Dockerfile automates the process of Docker
image creation. It’s essentially a list of command-line interface (CLI) instructions
that Docker Engine will run in order to assemble the image.
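
For illustration, here is a hedged sketch that feeds a tiny Dockerfile to the Docker SDK for Python and builds an image from it; the base image and tag are arbitrary choices, not anything prescribed by this text, and a local Docker daemon is assumed.

    # Hedged sketch: build an image from Dockerfile instructions using the
    # Docker SDK for Python (pip install docker). Tag and base image are
    # illustrative; assumes a local Docker daemon is running.
    import io
    import docker

    dockerfile = b"""
    FROM python:3.12-slim
    CMD ["python", "-c", "print('hello from a container')"]
    """

    client = docker.from_env()
    image, _logs = client.images.build(
        fileobj=io.BytesIO(dockerfile),  # the Dockerfile is the whole context
        tag="demo/hello:latest",
    )
    print(image.tags)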

Docker images

Docker images contain executable application source code as well as all the tools,
libraries, and dependencies that the application code needs to run as a container.
When you run the Docker image, it becomes one instance (or multiple instances) of
the container.

It’s possible to build a Docker image from scratch, but most developers pull them
down from common repositories. Multiple Docker images can be created from a
single base image, and they’ll share the commonalities of their stack.

Docker images are made up of layers, and each layer corresponds to a version of the
image. Whenever a developer makes changes to the image, a new top layer is created,
and this top layer replaces the previous top layer as the current version of the image.
Previous layers are saved for rollbacks or to be re-used in other projects.

Each time a container is created from a Docker image, yet another new layer called
the container layer is created. Changes made to the container—such as the addition or
deletion of files—are saved to the container layer only and exist only while the
container is running. This iterative image-creation process enables increased overall
efficiency since multiple live container instances can run from just a single base
image, and when they do so, they leverage a common stack.

Docker containers

Docker containers are the live, running instances of Docker images. While Docker
images are read-only files, containers are live, ephemeral, executable content. Users
can interact with them, and administrators can adjust their settings and conditions
using docker commands.
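
Running an image as a live container can be sketched with the same SDK; the nginx image and port mapping here are illustrative.

    # Hedged sketch: start, inspect, and remove a container with the Docker
    # SDK for Python. Image name and port mapping are illustrative.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "nginx:alpine",            # the image becomes a running container
        detach=True,
        ports={"80/tcp": 8080},    # host port 8080 -> container port 80
    )
    print(container.name)
    container.stop()
    container.remove()             # the container was ephemeral; the image remains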

Docker Hub

Docker Hub is the public repository of Docker images that
calls itself the “world’s largest library and community for container images.” It holds
over 100,000 container images sourced from commercial software
vendors, open-source projects, and individual developers. It includes images that have
been produced by Docker, Inc., certified images belonging to the Docker Trusted
Registry, and many thousands of other images.

All Docker Hub users can share their images at will. They can also download
predefined base images from the Docker filesystem to use as a starting point for
any containerization project.
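
Pulling a predefined base image from Docker Hub, as described above, might look like this sketch (the image name and tag are examples):

    # Hedged sketch: pull a public base image from Docker Hub with the
    # Docker SDK for Python. Image and tag are examples.
    import docker

    client = docker.from_env()
    image = client.images.pull("python", tag="3.12-slim")
    print(image.tags)  # e.g. ['python:3.12-slim']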

Docker daemon

Docker daemon is a service running on your operating system, such as
Microsoft Windows, Apple macOS, or Linux. This service creates and manages
your Docker images for you using the commands from the client, acting as the control
center of your Docker implementation.

Docker registry

A Docker registry is a scalable open-source storage and distribution system for Docker
images. The registry enables you to track image versions in repositories, using tagging
for identification.

Docker deployment and orchestration

If you’re running only a few containers, it’s fairly simple to manage your application
within Docker Engine, the industry de facto runtime. But if your deployment
comprises thousands of containers and hundreds of services, it’s nearly impossible to
manage that workflow without the help of these purpose-built tools.

Docker Compose

If you’re building an application out of processes in multiple containers that all reside
on the same host, you can use Docker Compose to manage the application’s
architecture. Docker Compose uses a YAML file that specifies which services are
included in the application, and it can deploy and run the containers with a single
command. Using Docker Compose, you can also define persistent volumes for storage,
specify base nodes, and document and configure service dependencies.
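
A minimal sketch of that workflow follows, written from Python only to keep this document's examples in one language; the service name, image, and ports are illustrative, and the docker compose CLI is assumed to be installed.

    # Hedged sketch: write a minimal Compose file and bring the service up.
    # Service name, image, and ports are illustrative assumptions.
    from pathlib import Path
    import subprocess

    Path("docker-compose.yml").write_text("""
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
    """)

    # Equivalent to running "docker compose up -d" in this directory.
    subprocess.run(["docker", "compose", "up", "-d"], check=True)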

Kubernetes

To monitor and manage container lifecycles in more complex environments, you’ll
need to turn to a container orchestration tool. While Docker includes its own
orchestration tool (called Docker Swarm), most developers
choose Kubernetes instead.
Kubernetes is an open-source container orchestration platform descended from a
project developed for internal use at Google.
Kubernetes schedules and automates tasks integral to the management of container-
based architectures, including container deployment, updates, service discovery,
storage provisioning, load balancing, health monitoring, and more.
In addition, the open source ecosystem of tools for Kubernetes—including
Istio and Knative—enables organizations to deploy a high-productivity Platform-as-a-
Service (PaaS) for containerized applications and a faster on-ramp to serverless
computing.

Kubernetes

What can Kubernetes do for you?

With modern web services, users expect applications to be available 24/7, and
developers expect to deploy new versions of those applications several times a day.
Containerization helps package software to serve these goals, enabling applications to
be released and updated without downtime.
Kubernetes helps you make sure those containerized applications run where and when
you want, and helps them find the resources and tools they need to work. Kubernetes
is a production-ready, open source platform designed with Google's accumulated
experience in container orchestration, combined with best-of-breed ideas from the
community.

Kubernetes Basics Modules

1. Create a Kubernetes cluster
2. Deploy an app
3. Explore your app
4. Expose your app publicly
5. Scale up your app
6. Update your app


Kubernetes, also known as K8s, is an open-source system for automating deployment,
scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy
management and discovery. Kubernetes builds upon 15 years of experience of
running production workloads at Google, combined with best-of-breed ideas and
practices from the community.
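
As a sketch of such a logical unit, the following uses the official Kubernetes Python client to create a Deployment of three nginx replicas; the names and image are illustrative assumptions, and a reachable cluster plus a local kubeconfig are assumed.

    # Hedged sketch: create a Deployment (a logical unit of 3 container
    # replicas) with the official Kubernetes Python client. Names and image
    # are illustrative; assumes a cluster and a local kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:alpine")]
                ),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)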

Planet Scale

Designed on the same principles that allow Google to run billions of containers a
week, Kubernetes can scale without increasing your operations team.

Never Outgrow

Whether testing locally or running a global enterprise, Kubernetes' flexibility grows
with you to deliver your applications consistently and easily, no matter how complex
your needs are.

Run K8s Anywhere

Kubernetes is open source giving you the freedom to take advantage of on-premises,
hybrid, or public cloud infrastructure, letting you effortlessly move workloads to
where it matters to you.

Kubernetes Features

Automated rollouts and rollbacks

Kubernetes progressively rolls out changes to your application or its configuration,
while monitoring application health to ensure it doesn't kill all your instances at the
same time. If something goes wrong, Kubernetes will roll back the change for you.
Take advantage of a growing ecosystem of deployment solutions.
Service discovery and load balancing

No need to modify your application to use an unfamiliar service discovery mechanism.
Kubernetes gives Pods their own IP addresses and a single DNS name for a set of
Pods, and can load-balance across them.

Storage orchestration

Automatically mount the storage system of your choice, whether from local storage, a
public cloud provider such as GCP or AWS, or a network storage system such as NFS,
iSCSI, Gluster, Ceph, Cinder, or Flocker.

Secret and configuration management

Deploy and update secrets and application configuration without rebuilding your
image and without exposing secrets in your stack configuration.
Automatic bin packing

Automatically places containers based on their resource requirements and other
constraints, while not sacrificing availability. Mix critical and best-effort workloads in
order to drive up utilization and save even more resources.

Batch execution

In addition to services, Kubernetes can manage your batch and CI workloads,
replacing containers that fail, if desired.

IPv4/IPv6 dual-stack

Allocation of IPv4 and IPv6 addresses to Pods and Services.

Horizontal scaling

Scale your application up and down with a simple command, with a UI, or
automatically based on CPU usage.
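
A scale operation equivalent to that simple command, sketched with the same Python client (the Deployment name is carried over from the earlier hypothetical example):

    # Hedged sketch: horizontally scale the hypothetical "demo" Deployment
    # to 5 replicas, the API equivalent of "kubectl scale".
    from kubernetes import client, config

    config.load_kube_config()
    client.AppsV1Api().patch_namespaced_deployment_scale(
        name="demo",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )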

Self-healing

Restarts containers that fail, replaces and reschedules containers when nodes die, kills
containers that don't respond to your user-defined health check, and doesn't advertise
them to clients until they are ready to serve.

Designed for extensibility

Add features to your Kubernetes cluster without changing upstream source code.
