Lecture Notes Cloud Computing Unit-1 13 Sep 2023
We can define three criteria to determine whether a service is delivered in the cloud
computing style:
• The service is accessible via a Web browser (nonproprietary) or a Web services
application programming interface (API).
• Zero capital expenditure is necessary to get started.
• You pay only for what you use as you use it.
When we access an e-mail service, our data is stored on a cloud server rather than on
our own computer. The technology and infrastructure behind the cloud are invisible to
the user. Whether cloud services are based on HTTP, XML, Ruby, PHP, or other
specific technologies matters little, as long as the service is user-friendly and
functional. An individual user can connect to a cloud system from his or her own
devices, such as a desktop, laptop, or mobile phone.
Cloud computing serves small businesses with limited resources particularly well: it
gives them access to technologies that were previously out of their reach, and it helps
them convert their maintenance costs into profit. Let's see how.
With an in-house IT server, you must pay close attention and ensure that the system
has no flaws, so that it runs smoothly. In case of any technical glitch, you are
completely responsible, and repairs will demand a great deal of attention, time, and
money. In cloud computing, by contrast, the service provider takes complete
responsibility for complications and technical faults.
Cloud computing has some interesting characteristics that bring benefits to both
cloud service consumers (CSCs) and cloud service providers (CSPs). These
characteristics are:
• No up-front commitments
• On-demand access
• Nice pricing
• Simplified application acceleration and scalability
• Efficient resource allocation
• Energy efficiency
• Seamless creation and use of third-party services
(Figure: the cloud computing stack — Platform as a Service, Infrastructure as a Service, and virtualized servers.)
Let’s explore the key components of the NIST Cloud Computing Reference Model:
1. Cloud Service Models:
Infrastructure as a Service (IaaS): Provides virtualized computing resources,
such as virtual machines, storage, and networks, on demand to users (a provisioning sketch in Java follows this list).
Platform as a Service (PaaS): Offers a platform with development tools, libraries,
and services for users to build and deploy applications.
Software as a Service (SaaS): Delivers software applications over the internet,
typically accessed through web browsers, without the need for installation or
maintenance.
2. Cloud Deployment Models:
Public Cloud: Resources are owned and operated by a cloud service provider
and made available to the general public over the internet.
Private Cloud: Resources are exclusively used by a single organization,
providing greater control, security, and customization.
Hybrid Cloud: Combines public and private cloud environments, allowing data
and applications to be shared between them.
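To make IaaS-style on-demand self-service concrete, here is a minimal sketch using the AWS SDK for Java v2. The machine image ID is a placeholder, and credentials and region are assumed to be configured in the environment; treat this as an outline rather than production code.

    import software.amazon.awssdk.services.ec2.Ec2Client;
    import software.amazon.awssdk.services.ec2.model.InstanceType;
    import software.amazon.awssdk.services.ec2.model.RunInstancesRequest;
    import software.amazon.awssdk.services.ec2.model.RunInstancesResponse;

    // Request one small virtual machine from the provider, on demand.
    public class ProvisionVm {
        public static void main(String[] args) {
            try (Ec2Client ec2 = Ec2Client.create()) {
                RunInstancesRequest request = RunInstancesRequest.builder()
                        .imageId("ami-xxxxxxxx")             // placeholder machine image
                        .instanceType(InstanceType.T2_MICRO) // small pay-per-hour VM
                        .minCount(1)
                        .maxCount(1)
                        .build();
                RunInstancesResponse response = ec2.runInstances(request);
                System.out.println("Launched: " + response.instances().get(0).instanceId());
            }
        }
    }

The same request can be made through a web console, a CLI, or an API call like this one; that interchangeability is what "on-demand self-service" means in practice.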
Advantages of Cloud Computing
Strategic edge
Cloud computing offers a competitive edge over your competitors. It lets you
access the latest applications at any time, without spending your time and money on
installations.
High Speed
Cloud computing allows you to deploy your service quickly, in just a few clicks. This
faster deployment lets you get the resources required for your system within
minutes.
Reliability
Reliability is one of the biggest advantages of cloud computing. You always get
instant updates about any changes.
Mobility
Employees who are working on premises or at remote locations can easily
access all the cloud services. All they need is Internet connectivity.
Quick Deployment
Last but not least, cloud computing gives you the advantage of rapid deployment:
when you decide to use the cloud, your entire system can be fully functional within
minutes. The exact time taken depends on the kind of technologies used in your
business.
Other notable characteristics and advantages include:
• On-demand self-service
• Multi-tenancy
• Resilient computing
• Fast and effective virtualization
• Low-cost software
• Advanced online security
• Location and device independence
• Always available, scaling automatically to adjust to increases in demand
• Pay-per-use billing
• Web-based control and interfaces
• API access
Technical Issues
Cloud technology is always prone to outages and other technical issues. Even the
best cloud service providers may face this type of trouble despite maintaining high
standards of maintenance.
Downtime
Downtime should also be considered when working with cloud computing, since
your cloud provider may face power loss, poor internet connectivity, service
maintenance, and so on.
Lower Bandwidth
Many cloud storage service providers limit their users' bandwidth. If your
organization exceeds the given allowance, the additional charges can be
significant.
Lack of Support
Some cloud computing companies fail to provide proper support to their customers,
instead expecting users to rely on FAQs or online help, which can be a tedious job
for non-technical persons.
1. The Idea Phase - This phase began in the early 1960s with the emergence
of utility and grid computing and lasted until the pre-internet-bubble era. Joseph Carl
Robnett Licklider is often credited as the founder of the idea of cloud computing.
2. The Pre-cloud Phase - The pre-cloud phase originated in 1999 and extended
to 2006. In this phase, the internet was used as the mechanism to provide Application
as a Service.
3. The Cloud Phase - The much-talked-about real cloud phase started in the year
2007, when the classification of IaaS, PaaS, and SaaS got formalized. The history of
cloud computing has witnessed some very interesting breakthroughs launched by
some of the leading computer/web organizations of the world.
The idea of renting computing services by leveraging large distributed computing
facilities has been around for a long time, dating back to the days of the mainframes
in the early 1950s. Since then, technology has evolved and been refined, creating a
series of favorable conditions for the realization of cloud computing.
Milestones along this path include:
• 1951: UNIVAC I, the first mainframe
• 1960: Cray's first supercomputer
• 1969: ARPANET
• 1975: Xerox PARC invents Ethernet
• 1984: DEC's VMScluster
• 1990: Berners-Lee and Cailliau's WWW (HTTP, HTML)
• 2004: Web 2.0
• 2005: Amazon AWS (EC2, S3)
Together these trace the progression from mainframes to clusters, to grids, and finally to clouds.
When we switch on a fan or any electric device, we are hardly concerned about
where the power comes from or how it is generated. The electricity we receive at
home travels through a chain of networks, which includes power stations,
transformers, power lines, and transmission stations. Together these components
make up a 'power grid'. Likewise, 'grid computing' is an infrastructure that links
computing resources such as PCs, servers, workstations, and storage elements and
provides the mechanism required to access them.
Grid computing is middleware that coordinates disparate IT resources across a network, allowing
them to function as a whole. It is most often used in scientific research and in universities for
educational purposes. For example, suppose a group of architecture students working on different
projects requires a specific design tool, but only a couple of them have access to it. To make
the tool available to the rest of the students, it is placed on the campus network; the grid then
connects all the computers in the campus network and allows any student to use the design tool
from anywhere.
Cloud computing and grid computing are often confused; though their functions are almost similar,
their approaches differ. Let's see how they operate:
Utility computing is the process of providing services through an on-demand, pay-per-use
billing method. The customer or client has access to a virtually unlimited supply
of computing solutions over a virtual private network or over the internet, which can
be sourced and used whenever required. Grid computing, cloud computing, and
managed IT services are all based on the concept of utility computing.
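To make the pay-per-use idea concrete, the following small Java sketch meters usage and bills for exactly what was consumed. The meter names and unit rates are invented for illustration and do not correspond to any real provider's price list.

    import java.util.Map;

    // Hypothetical pay-per-use billing: cost = metered usage x unit rate.
    public class UsageBilling {
        static final Map<String, Double> RATE_PER_UNIT = Map.of(
                "cpu-hour", 0.05,   // assumed price per CPU-hour
                "gb-month", 0.02);  // assumed price per GB-month of storage

        static double bill(Map<String, Double> usage) {
            return usage.entrySet().stream()
                    .mapToDouble(e -> e.getValue() * RATE_PER_UNIT.getOrDefault(e.getKey(), 0.0))
                    .sum();
        }

        public static void main(String[] args) {
            // A customer used 120 CPU-hours and stored 50 GB for one month.
            System.out.printf("bill = $%.2f%n", bill(Map.of("cpu-hour", 120.0, "gb-month", 50.0)));
        }
    }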
Utility computing vs. cloud computing:
• Utility computing refers to the ability to meter offered services and charge customers for their exact usage. Cloud computing also works like utility computing, in that you pay only for what you use; but cloud computing can be cheaper, and a cloud-based app can be up and running in days or weeks.
• Utility computing is more favorable when performance and the selection of infrastructure are critical. Cloud computing is great and easy to use when the selection of infrastructure and performance are not critical.
• Utility computing is a good choice for less resource-demanding applications. Cloud computing is a good choice for highly resource-demanding applications.
Eras of computing
The two fundamental and dominant models of computing are sequential and parallel.
The sequential computing era began in the 1940s; the parallel (and distributed)
computing era followed it within a decade. The four key elements of computing
developed during these eras are architectures, compilers, applications, and problem-
solving environments.
(Figure: architectures, compilers, applications, and problem-solving environments evolving across the sequential and parallel computing eras, 1940-2030.)
The term distributed computing encompasses any architecture or system that allows
the computation to be broken down into units and executed concurrently on different
computing elements, whether these are processors on different nodes, processors on
the same computer, or cores within the same processor. Therefore, distributed
computing includes a wider range of systems and applications than parallel
computing and is often considered a more general term. Even though it is not a rule,
the term distributed often implies that the locations of the computing elements are
not the same and such elements might be heterogeneous in terms of hardware and
software features. Classic examples of distributed computing systems are computing
grids or Internet computing systems, which combine the widest variety of
architectures, systems, and applications in the world.
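The core idea of breaking a computation into units that execute concurrently can be sketched in a few lines of Java. Here the computing elements are simply threads on the cores of one machine, the simplest case mentioned above; the workload split is an arbitrary choice for this example.

    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Break one computation (summing a range) into units and run them concurrently.
    public class ConcurrentSum {
        static long sum(long from, long to) {
            long s = 0;
            for (long i = from; i < to; i++) s += i;
            return s;
        }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            List<Callable<Long>> units = List.of(   // four independent units of work
                    () -> sum(0, 250_000), () -> sum(250_000, 500_000),
                    () -> sum(500_000, 750_000), () -> sum(750_000, 1_000_000));
            long total = 0;
            for (Future<Long> f : pool.invokeAll(units)) total += f.get();
            System.out.println("total = " + total);
            pool.shutdown();
        }
    }

In a true distributed system the units would be dispatched to different nodes over a network rather than to local threads, but the decomposition step is the same.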
Distributed Computing
A distributed system is a network of autonomous computers that communicate with each other in
order to achieve a goal. The computers in a distributed system are independent and do not
physically share memory or processors. They communicate with each other using messages, pieces
of information transferred from one computer to another over a network. Messages can
communicate many things; for example, computers can tell other computers to execute a procedure with particular arguments.
(Figure: the layered organization of a distributed system — hardware; networking and parallel hardware; operating system; IPC primitives for control and data; middleware.)
Computers in a distributed system can have different roles. A computer's role depends on the
goal of the system and the computer's own hardware and software properties. There are two
predominant ways of organizing computers in a distributed system. The first is the client-server
architecture, and the second is the peer-to-peer architecture.
The client-server model of communication can be traced back to the introduction of UNIX in the
1970s, but perhaps the most influential use of the model is the modern World Wide Web.
The concepts of client and server are powerful functional abstractions. A server is simply a unit
that provides a service, possibly to multiple clients simultaneously, and a client is a unit that
consumes the service. The clients do not need to know the details of how the service is provided,
or how the data they are receiving is stored or calculated, and the server does not need to know
how the data is going to be used.
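The abstraction can be illustrated with a minimal Java sketch; the port number and the trivial upper-casing "service" are arbitrary choices made for this example.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    // A server that provides one service: upper-casing a line of text.
    // Clients need not know how the result is computed, and the server
    // need not know what clients do with it.
    public class EchoServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9090)) {
                while (true) {                              // serve clients one at a time
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()));
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        String request = in.readLine();     // message from the client
                        if (request != null) out.println(request.toUpperCase());
                    }
                }
            }
        }
    }

A client simply opens new Socket("localhost", 9090), writes one line, and reads the reply; neither side needs to know the other's internals.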
Peer-to-peer Systems
The client-server model is appropriate for service-oriented situations. However, there are other
computational goals for which a more equal division of labor is a better choice. The term
peer-to-peer is used to describe distributed systems in which labor is divided among all the
components of the system. All the computers send and receive data, and they all contribute some
processing power and memory. As a distributed system increases in size, its capacity of
computational resources increases. In a peer-to-peer system, all components of the system
contribute some processing power and memory to a distributed computation.
In some peer-to-peer systems, the job of maintaining the health of the network is taken on by a set
of specialized components. Such systems are not pure peer-to-peer systems, because they have
different types of components that serve different functions. The components that support a peer-
to-peer network act like scaffolding: they help the network stay connected, they maintain
information about the locations of different computers, and they help newcomers take their place
within their neighborhood.
Skype, the voice- and video-chat service, is an example of a data transfer application with a
peer-to-peer architecture.
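The symmetry of peers can be sketched in a few lines of Java: every node runs the same program, and each peer both sends and listens. The ports and loopback addressing are illustrative assumptions, not a description of how Skype itself works.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    // Every peer runs this same program: it greets the other known peers,
    // then listens for messages itself. Usage: java Peer <myPort> <peerPort>...
    public class Peer {
        public static void main(String[] args) throws Exception {
            int myPort = Integer.parseInt(args[0]);
            try (DatagramSocket socket = new DatagramSocket(myPort)) {
                for (int i = 1; i < args.length; i++) {     // send to each known peer
                    byte[] msg = ("hello from " + myPort).getBytes();
                    socket.send(new DatagramPacket(msg, msg.length,
                            InetAddress.getLoopbackAddress(), Integer.parseInt(args[i])));
                }
                byte[] buf = new byte[512];
                while (true) {                              // peers also listen, symmetrically
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    System.out.println(new String(packet.getData(), 0, packet.getLength()));
                }
            }
        }
    }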
Java RMI
Java RMI is a standard technology provided by Java for enabling RPC among distributed Java
objects. RMI defines an infrastructure allowing the invocation of methods on objects that are
located on different Java Virtual Machines (JVMs), residing either on the local node or on a
remote one.
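A minimal sketch of RMI in action follows; the Greeter interface and the registry name are illustrative, and the server and client halves, combined here for brevity, would normally run in different JVMs.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // A remote interface: methods callable from another JVM must throw RemoteException.
    interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    public class RmiSketch {
        static class GreeterImpl implements Greeter {
            public String greet(String name) { return "Hello, " + name; }
        }

        public static void main(String[] args) throws Exception {
            // Server side: export the object and register its stub under a name.
            Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new GreeterImpl(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("Greeter", stub);

            // Client side: look up the stub and invoke the method remotely.
            Greeter remote = (Greeter) LocateRegistry.getRegistry("localhost", 1099)
                                                     .lookup("Greeter");
            System.out.println(remote.greet("world"));
        }
    }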
.NET remoting
Remoting is the technology allowing for IPC among .NET applications. It provides developers
with a uniform platform for accessing remote objects from within any application developed in
any of the languages supported by .NET.
(Figure: the cloud computing stack — Applications (SaaS), such as social networks, scientific computing, and enterprise applications; Middleware (PaaS), providing frameworks for cloud application development; Hardware and OS (IaaS), providing virtual hardware, networking, OS images, and storage.)
(Figure: a parallel architecture in which a single data input stream is distributed across processors 1 through N, producing a single data output stream.)
Elasticity
Elasticity aims at matching the amount of resources allocated to a service with the
amount of resources it actually requires, avoiding over- or under-provisioning.
Over-provisioning, i.e., allocating more resources than required, should be avoided,
as the service provider often has to pay for the resources that are allocated to the
service. For example, an Amazon EC2 M4 extra-large instance costs US$0.239/hour.
If a service has two virtual machines allocated when only one is required, the service
provider wastes about US$2,095 every year (US$0.239/hour for each of the roughly
8,766 hours in a year). Hence, the service provider's expenses are higher than optimal
and their profit is reduced.
From a provider's standpoint, cloud provisioning can include the supply and assignment of
required cloud resources to the customer, for example the creation of virtual machines, the
allocation of storage capacity, and the granting of access to cloud software.
The Web is the primary interface through which cloud computing delivers its services. At present,
the Web encompasses a set of technologies and services that facilitate interactive information
sharing, collaboration, user-centered design, and application composition. This evolution has
transformed the Web into a rich platform for application development and is known as Web 2.0.
This term captures a new way in which developers architect applications and deliver services
through the Internet and provides new experience for users of these applications and services.
Examples of Web 2.0 applications are Google Documents, Google Maps, Flickr, Facebook,
Twitter, YouTube, del.icio.us, Blogger, and Wikipedia. In particular, social networking websites
benefit the most from Web 2.0.