
Cloud Computing [Unit-1]

Prepared by: Dr Pawan Kumar Goel

INTRODUCTION: Introduction to Cloud Computing – Definition of Cloud – Evolution of Cloud Computing – Underlying Principles of Parallel and Distributed Computing – Cloud Characteristics – Elasticity in Cloud – On-demand Provisioning.

What is Cloud Computing?

Cloud computing can be defined as delivering computing power (CPU, RAM, network speed, storage, operating systems, and software) as a service over a network (usually the Internet), rather than physically having the computing resources at the customer's location.

Examples: AWS, Azure, Google Cloud

Cloud computing allows renting infrastructure, runtime environments, and services on a pay-per-use basis.

Cloud computing refers to both the applications delivered as services over the Internet and the hardware and system software in the datacenters that provide those services.

Definition proposed by the U.S. National Institute of Standards and Technology (NIST):
Cloud computing is a model for enabling ubiquitous, convenient, on-demand
network access to a shared pool of configurable computing resources (e.g.,
networks, servers, storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or service provider
interaction.
Let's learn cloud computing with an example. Whenever you travel by bus or train, you buy a ticket for your destination and keep your seat until you arrive. Other passengers likewise buy tickets and travel in the same bus with you, and it hardly bothers you where they go. When your stop comes, you get off the bus, thanking the driver. Cloud computing is just like that bus: it carries data and information for different users and allows each of them to use its services at minimal cost.

Why the Name Cloud?


The term “Cloud” came from the network diagrams that network engineers used to represent the locations of various network devices and their interconnections. The shape of these diagrams resembled a cloud.

We can define three criteria to determine whether a service is delivered in the cloud computing style:
• The service is accessible via a Web browser (nonproprietary) or a Web services
application programming interface (API).
• Zero capital expenditure is necessary to get started.
• You pay only for what you use as you use it.

Why Cloud Computing?


With the increase in computer and mobile users, data storage has become a priority in all fields. Large and small businesses today thrive on their data, and they spend huge amounts of money to maintain it. Doing so requires strong IT support and a storage hub. Not all businesses can afford the high cost of in-house IT infrastructure and backup support services. For them, cloud computing is a cheaper solution. Its efficiency in data storage and computation, together with lower maintenance costs, has attracted even bigger businesses as well.



Cloud computing decreases the hardware and software demands on the user's side. The only thing the user must be able to run is the cloud computing system's interface software, which can be as simple as a web browser; the cloud network takes care of the rest. We have all experienced cloud computing at some point; some popular cloud services we have used, or are still using, are mail services like Gmail, Hotmail, and Yahoo.

While accessing an e-mail service, our data is stored on a cloud server, not on our computer. The technology and infrastructure behind the cloud are invisible. Whether cloud services are based on HTTP, XML, Ruby, PHP, or other specific technologies matters little, as long as they are user-friendly and functional. An individual user can connect to a cloud system from his or her own devices, such as a desktop, laptop, or mobile.

Cloud computing serves small businesses with limited resources effectively: it gives them access to technologies that were previously out of their reach. Cloud computing helps small businesses convert their maintenance costs into profit. Let's see how.

With an in-house IT server, you have to pay a lot of attention and ensure that there are no flaws in the system so that it runs smoothly. In case of any technical glitch you are completely responsible, and the repair will demand a lot of attention, time, and money. In cloud computing, by contrast, the service provider takes complete responsibility for complications and technical faults.

Benefits of Cloud Computing


The potential for cost saving is the major reason for the adoption of cloud services by many organizations. Cloud computing gives you the freedom to use services as required and pay only for what you use. Cloud computing has made it possible to run IT operations as an outsourced unit without much in-house resourcing.

Following are the benefits of cloud computing:

1. Lower IT infrastructure and computer costs for users


2. Improved performance
3. Fewer Maintenance issues
4. Instant software updates
5. Improved compatibility between Operating systems
6. Backup and recovery
7. Performance and Scalability
8. Increased storage capacity



9. Increased data safety

Cloud computing has some interesting characteristics that bring benefits to both
cloud service consumers (CSCs) and cloud service providers (CSPs). These
characteristics are:
• No up-front commitments
• On-demand access
• Nice pricing
• Simplified application acceleration and scalability
• Efficient resource allocation
• Energy efficiency
• Seamless creation and use of third-party services

The cloud computing reference model


A fundamental characteristic of cloud computing is the capability to deliver, on
demand, a variety of IT services that are quite diverse from each other. This variety
creates different perceptions of what cloud computing is among users. Despite this
lack of uniformity, it is possible to classify cloud computing services offerings into
three major categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service
(PaaS), and Software-as-a-Service (SaaS). These categories are related to each other
as described in the figure below, which provides an organic view of cloud computing.
We refer to this diagram as the Cloud Computing Reference Model.
The reference model for cloud computing is an abstract model that
characterizes and standardizes a cloud computing environment by partitioning
it into abstraction layers and cross-layer functions.
The Cloud Computing Reference Model provides a conceptual framework for
understanding and categorizing the various components and functions of cloud
computing. It helps define the relationships and interactions between different
cloud computing elements. The most widely recognized and used reference model
is the NIST (National Institute of Standards and Technology) Cloud Computing
Reference Architecture.



Software as a Service (SaaS): end-user applications delivered through Web 2.0 interfaces, including scientific applications, office automation, photo editing, CRM, and social networking. Examples: Google Documents, Facebook, Flickr, Salesforce.

Platform as a Service (PaaS): runtime environments for applications; development and data processing platforms. Examples: Windows Azure, Hadoop, Google AppEngine, Aneka.

Infrastructure as a Service (IaaS): virtualized servers; storage and networking. Examples: Amazon EC2, S3, RightScale, vCloud.

Figure: The Cloud Computing Reference Model

Let’s explore the key components of the NIST Cloud Computing Reference Model:
1. Cloud Service Models:
 Infrastructure as a Service (IaaS): Provides virtualized computing resources,
such as virtual machines, storage, and networks, on-demand to users.
 Platform as a Service (PaaS): Offers a platform with development tools, libraries,
and services for users to build and deploy applications.
 Software as a Service (SaaS): Delivers software applications over the internet,
typically accessed through web browsers, without the need for installation or
maintenance.
2. Cloud Deployment Models:
 Public Cloud: Resources are owned and operated by a cloud service provider
and made available to the general public over the internet.
 Private Cloud: Resources are exclusively used by a single organization,
providing greater control, security, and customization.
 Hybrid Cloud: Combines public and private cloud environments, allowing data
and applications to be shared between them.



 Community Cloud: Shared infrastructure and services are used by a specific
community or group of organizations with shared interests or requirements.
3. Essential Characteristics:
 On-Demand Self-Service: Users can provision computing resources as needed
without requiring human intervention from the service provider.
 Broad Network Access: Services are accessible over standard network protocols
and can be accessed by various devices.
 Resource Pooling: Computing resources are shared and dynamically assigned to
users based on demand, with multi-tenancy support.
 Rapid Elasticity: Resources can be scaled up or down quickly to meet changing
demands.
 Measured Service: Cloud service usage is monitored, controlled, and billed
based on specific metrics, providing transparency and cost optimization.
4. Cloud Service Orchestration:
 Refers to the management and coordination of multiple cloud services to deliver
end-to-end solutions.
 It involves integrating various services, components, and workflows to achieve
business objectives efficiently and effectively.
5. Cloud Security and Management:
 Covers the governance, security, and management aspects of cloud computing.
 It includes identity and access management, data protection, compliance,
monitoring, and service-level agreement (SLA) management.
The NIST Cloud Computing Reference Model provides a standardized framework
to understand the key components and relationships within cloud computing. It
serves as a common language for discussing and designing cloud-based solutions,
enabling interoperability and facilitating the adoption of cloud computing
technologies.


Advantages of Cloud Computing


Here are the important benefits of using cloud computing in your organization:



Cost Savings
Cost saving is the biggest benefit of cloud computing. It helps you save substantial capital costs, since it requires no physical hardware investments. You also do not need trained personnel to maintain the hardware; the buying and managing of equipment is done by the cloud service provider.

Strategic edge
Cloud computing offers a competitive edge over your competitors. It helps you access the latest applications at any time, without spending time and money on installations.

High Speed
Cloud computing allows you to deploy your service quickly, in a few clicks. This faster deployment lets you get the resources required for your system within minutes.

Back-up and restore data


Once data is stored in the cloud, it is easier to back up and recover, which is otherwise a very time-consuming process on-premises.

Automatic Software Integration


In the cloud, software integration typically occurs automatically. Therefore, you don't need to make additional efforts to customize and integrate your applications according to your preferences.

Reliability
Reliability is one of the biggest pluses of cloud computing: you are always instantly updated about changes.

Mobility
Employees who are working on premises or at remote locations can easily access all the cloud services. All they need is an Internet connection.



Unlimited storage capacity
The cloud offers almost limitless storage capacity. At any time you can quickly expand your storage capacity for a nominal monthly fee.
Collaboration
The cloud computing platform helps employees who are located in different
geographies to collaborate in a highly convenient and secure manner.

Quick Deployment
Last but not least, cloud computing gives you the advantage of rapid deployment. When you decide to use the cloud, your entire system can be fully functional within minutes, although the time taken depends on the kind of technologies used in your business.

Other Important Benefits


Apart from the above, some other advantages of cloud computing are:

• On-Demand Self-service
• Multi-tenancy
• Offers Resilient Computing
• Fast and effective virtualization
• Provide you low-cost software
• Offers advanced online security
• Location and Device Independence
• Always available, and scales automatically to adjust to the increase in demand
• Allows pay-per-use
• Web-based control & interfaces
• API access available

Disadvantages of Cloud Computing


Here, are significant challenges of using Cloud Computing:



Performance Can Vary
When you work in a cloud environment, your application runs on a server that simultaneously provides resources to other businesses. Any greedy behavior or DDoS attack on a co-tenant could affect the performance of your shared resource.

Technical Issues
Cloud technology is always prone to outages and other technical issues. Even the best cloud service providers may face this type of trouble despite maintaining high standards of maintenance.

Security Threat in the Cloud


Another drawback of working with cloud computing services is security risk. Before adopting cloud technology, you should be well aware that you will be sharing all your company's sensitive information with a third-party cloud computing service provider. Hackers might access this information.

Downtime
Downtime should also be considered when working with cloud computing, because your cloud provider may face power loss, poor internet connectivity, service maintenance, and so on.



Internet Connectivity
Good internet connectivity is a must in cloud computing: you can't access the cloud without an internet connection, and you have no other way to gather data from the cloud.

Lower Bandwidth
Many cloud storage service providers limit the bandwidth usage of their users. If your organization surpasses the given allowance, the additional charges can be significant.

Lack of Support
Cloud computing companies sometimes fail to provide proper support to customers. Moreover, they often expect users to depend on FAQs or online help, which can be tedious for non-technical persons.

THE EVOLUTION OF CLOUD COMPUTING



The evolution of cloud computing can be bifurcated into three basic phases:

1. The Idea Phase- This phase began in the early 1960s with the emergence of utility and grid computing and lasted until the pre-Internet-bubble era. Joseph Carl Robnett Licklider is widely credited with the founding idea behind cloud computing.

2. The Pre-cloud Phase- The pre-cloud phase originated in 1999 and extended to 2006. In this phase, the Internet emerged as the mechanism to provide applications as services.

3. The Cloud Phase- The much talked about real cloud phase started in the year
2007 when the classification of IaaS, PaaS, and SaaS got formalized. The history of
cloud computing has witnessed some very interesting breakthroughs launched by
some of the leading computer/web organizations of the world.
The idea of renting computing services by leveraging large distributed computing facilities has been around for a long time. It dates back to the days of the mainframes
in the early 1950s. From there on, technology has evolved and been refined. This
process has created a series of favorable conditions for the realization of cloud
computing.

The figure below provides an overview of the evolution of the distributed computing


technologies that have influenced cloud computing. In tracking the historical
evolution, we briefly review five core technologies that played an important role in
the realization of cloud computing. These technologies are distributed systems,
virtualization, Web 2.0, service orientation, and utility computing.

Figure: The evolution of distributed computing technologies, 1950s-2010s.

• 1951: UNIVAC I, the first mainframe
• 1960: Cray's first supercomputer
• 1966: Flynn's taxonomy (SISD, SIMD, MISD, MIMD)
• 1969: ARPANET
• 1970: DARPA's TCP/IP
• 1975: Xerox PARC invents Ethernet
• 1984: IEEE 802.3 Ethernet & LAN; DEC's VMScluster
• 1989: TCP/IP standardized (IETF RFC 1122)
• 1990: Berners-Lee and Cailliau create the WWW (HTTP, HTML)
• 1997: IEEE 802.11 (Wi-Fi)
• 1999: grid computing
• 2004: Web 2.0
• 2005: Amazon AWS (EC2, S3)
• 2007: Manjrasoft Aneka
• 2008: Google AppEngine
• 2010: Microsoft Azure

The progression runs from mainframes through clusters and grids to clouds.
Three major milestones have led to cloud computing: mainframe computing,


cluster computing, and grid computing.
• Mainframes. These were the first examples of large computational facilities
leveraging multiple processing units. Mainframes were powerful, highly reliable
computers specialized for large data movement and massive input/output (I/O)
operations. They were mostly used by large organizations for bulk data processing
tasks such as online transactions, enterprise resource planning, and other
operations involving the processing of significant amounts of data. Even though
mainframes cannot be considered distributed systems, they offered large
computational power by using multiple processors, which were presented as a
single entity to users. One of the most attractive features of mainframes was the
ability to be highly reliable computers that were “always on” and capable of
tolerating failures transparently. No system shutdown was required to replace
failed components, and the system could work without interruption. Batch
processing was the main application of mainframes. Their popularity and deployments have since declined, but evolved versions of such systems are still in use for transaction processing (such as online banking, airline ticket booking, supermarket and telco operations, and government services).
• Clusters. Cluster computing started as a low-cost alternative to the use of
mainframes and supercomputers. The technology advancement that created faster
and more powerful mainframes and supercomputers eventually generated an
increased availability of cheap commodity machines as a side effect. These
machines could then be connected by a high-bandwidth network and controlled
by specific software tools that manage them as a single system. Starting in the 1980s, clusters became the standard technology for parallel and high-performance computing. Built from commodity machines, they were cheaper than mainframes
and made high-performance computing available to a large number of groups,
including universities and small research labs. Cluster technology contributed
considerably to the evolution of tools and frameworks for distributed computing,
including Condor, Parallel Virtual Machine (PVM), and Message Passing Interface (MPI). One of the attractive features of clusters was that the
computational power of commodity machines could be leveraged to solve
problems that were previously manageable only on expensive supercomputers.
Moreover, clusters could be easily extended if more computational power was
required.
• Grids. Grid computing appeared in the early 1990s as an evolution of cluster
computing. In an analogy to the power grid, grid computing proposed a new
approach to access large computational power, huge storage facilities, and a
variety of services. Users can “consume” resources in the same way as they use
other utilities such as power, gas, and water. Grids initially developed as
aggregations of geographically dispersed clusters by means of Internet
connections. These clusters belonged to different organizations, and arrangements
were made among them to share the computational power. Different from a “large
cluster,” a computing grid was a dynamic aggregation of heterogeneous
computing nodes, and its scale was nationwide or even worldwide. Several
developments made possible the diffusion of computing grids: (a) clusters became
quite common resources; (b) they were often underutilized; (c) new problems were
requiring computational power that went beyond the capability of single clusters;
and (d) the improvements in networking and the diffusion of the Internet made
possible long-distance, high-bandwidth connectivity. All these elements led to the
development of grids, which now serve a multitude of users across the world.

Cloud computing is often considered the successor of grid computing. In reality, it embodies aspects of all three of these major technologies. Computing clouds are
deployed in large datacenters hosted by a single organization that provides services
to others. Clouds are characterized by the fact of having virtually infinite capacity,
being tolerant to failures, and being always on, as in the case of mainframes. In many
cases, the computing nodes that form the infrastructure of computing clouds are
commodity machines, as in the case of clusters. The services made available by a
cloud vendor are consumed on a pay-per-use basis, and clouds fully implement the utility vision introduced by grid computing.



Compare: Grid Computing vs. Cloud Computing

• Cloud computing works more as a service provider for utilizing computer resources; grid computing uses the available resources of interconnected computer systems to accomplish a common goal.
• Cloud computing is a centralized model; grid computing is a decentralized model, where computation can occur across many administrative domains.
• A cloud is a collection of computers usually owned by a single party; a grid is a collection of computers owned by multiple parties in multiple locations, connected together so that users can share the combined power of the resources.
• The cloud offers a broad range of services (web hosting, database support, and much more); the grid provides limited services.
• Cloud computing is typically provided within a single organization (e.g., Amazon); grid computing federates resources located within different organizations.

When we switch on a fan or any electric device, we are hardly concerned about where the power supply comes from or how it is generated. The electricity we receive at home travels through a chain of networks, including power stations, transformers, power lines, and transmission stations. These components together make up a “power grid”. Likewise, grid computing is an infrastructure that links computing resources such as PCs, servers, workstations, and storage elements, and provides the mechanisms required to access them.

Grid computing is middleware that coordinates disparate IT resources across a network, allowing them to function as a whole. It is most often used in scientific research and in universities for educational purposes. For example, suppose a group of architecture students working on different projects requires a specific design tool, but only a couple of them have access to it. To make the tool available to the rest of the students, it is placed on the campus network; the grid then connects all the computers in the campus network and allows any student to use the design tool from anywhere.

Cloud computing and grid computing are often confused: though their functions are almost similar, their approaches differ, as the comparison above shows.

Utility Computing vs. Cloud Computing


In the discussion of grid computing above, we saw how electricity is supplied to our homes; we also know that to keep the supply running we have to pay the bill. Utility computing works the same way: you use computing services as per your requirement and pay according to use. Utility computing suits small-scale usage and can be implemented in any server environment.

Utility computing is the process of providing services through an on-demand, pay-per-use billing method. The customer or client has access to a virtually unlimited supply of computing solutions over a virtual private network or over the internet, which can be sourced and used whenever required. Grid computing, cloud computing, and managed IT services are all based on the concept of utility computing.

• Utility computing refers to the ability to meter the offered services and charge customers for exact usage; cloud computing also works like utility computing (you pay only for what you use), but it can be cheaper, and a cloud-based app can be up and running in days or weeks.
• Utility computing users want to be in control of the geographical location of the infrastructure; in cloud computing, the provider is in complete control of the services and infrastructure.
• Utility computing is more favorable when performance and infrastructure selection are critical; cloud computing is convenient and easy to use when they are not.
• Utility computing is a good choice for less resource-demanding workloads; cloud computing is a good choice for highly resource-demanding workloads.
• Utility computing refers to a business model; cloud computing refers to the underlying IT architecture.



Through utility computing, small businesses with limited budgets can easily use software like CRM (Customer Relationship Management) without investing heavily in infrastructure to maintain their client base.

Eras of computing
The two fundamental and dominant models of computing are sequential and parallel. The sequential computing era began in the 1940s; the parallel (and distributed) computing era followed it within a decade. The four key elements of computing developed during these eras are architectures, compilers, applications, and problem-solving environments.

Underlying Principles of Parallel and Distributed Computing


• The terms parallel computing and distributed computing are often used interchangeably, even though they mean slightly different things.
• The term parallel implies a tightly coupled system, whereas distributed refers to a wider class of systems, including those that are tightly coupled.
• More precisely, the term parallel computing refers to a model in
which the computation is divided among several processors sharing
the same memory.
• The architecture of a parallel computing system is often characterized by the homogeneity of its components: each processor is of the same type and has the same capability as the others.
• The shared memory has a single address space, which is accessible
to all the processors.
• Parallel programs are then broken down into several units of
execution that can be allocated to different processors and can
communicate with each other by means of shared memory.
• Originally, parallel systems were considered to be those architectures that featured multiple processors sharing the same physical memory and that were presented as a single computer.
• Over time, these restrictions have been relaxed, and parallel systems now include all architectures based on the concept of shared memory, whether this memory is physically present or created with the support of libraries, specific hardware, and a highly efficient networking infrastructure.
• For example, a cluster whose nodes are connected through an InfiniBand network and configured with a distributed shared memory system can be considered a parallel system.
• The term distributed computing encompasses any architecture or
system that allows the computation to be broken down into units and
executed concurrently on different computing elements, whether
these are processors on different nodes, processors on the same
computer, or cores within the same processor.
• Distributed computing includes a wider range of systems and
applications than parallel computing and is often considered a more
general term.
• Even though it is not a rule, the term distributed often implies that
the locations of the computing elements are not the same and such
elements might be heterogeneous in terms of hardware and software
features.
• Classic examples of distributed computing systems are
• Computing Grids
• Internet Computing Systems


Figure: Eras of computing, 1940s-2030s. The sequential era and the parallel era each progressed through four phases: architectures, compilers, applications, and problem-solving environments.

Distributed Computing

A distributed system is a network of autonomous computers that communicate with each other in order to achieve a goal. The computers in a distributed system are independent and do not physically share memory or processors. They communicate using messages: pieces of information transferred from one computer to another over a network. Messages can communicate many things: computers can tell other computers to execute procedures with particular arguments, they can send and receive packets of data, or they can send signals that tell other computers to behave a certain way.

Figure: A layered view of a distributed system. From top to bottom: frameworks for applications, distributed programming, middleware (IPC primitives for control and data), operating system, and networking and parallel hardware.

Computers in a distributed system can have different roles. A computer's role depends on the
goal of the system and the computer's own hardware and software properties. There are two
predominant ways of organizing computers in a distributed system. The first is the client-server
architecture, and the second is the peer-to-peer architecture.

 Client/Server Systems: The client-server architecture is a way to dispense a service from a central source. There is a single server that provides a service and multiple clients that communicate with the server to consume its products. In this architecture, clients and servers have different jobs: the server's job is to respond to service requests from clients, while a client's job is to use the data provided in response in order to perform some task.

The client-server model of communication can be traced back to the introduction of UNIX in the 1970s, but perhaps the most influential use of the model is the modern World Wide Web. An example of a client-server interaction is reading the New York Times online. When the web server at www.nytimes.com is contacted by a web browsing client (like Firefox), its job is to send back the HTML of the New York Times main page. This could involve calculating personalized content based on user account information sent by the client and fetching appropriate advertisements. The job of the web browsing client is to render the HTML code sent by the server: displaying the images, arranging the content visually, showing different colors, fonts, and shapes, and allowing users to interact with the rendered web page.

The concepts of client and server are powerful functional abstractions. A server is simply a unit
that provides a service, possibly to multiple clients simultaneously, and a client is a unit that
consumes the service. The clients do not need to know the details of how the service is provided,
or how the data they are receiving is stored or calculated, and the server does not need to know
how the data is going to be used.
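
To make this division of labor concrete, here is a minimal sketch of the client-server model in Python (not part of the original notes) using the standard socket library; the host, port, and message contents are arbitrary choices for illustration.

    import socket

    # Server: responds to service requests from clients.
    def run_server(host="127.0.0.1", port=50007):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((host, port))
            srv.listen()
            conn, _addr = srv.accept()             # wait for one client
            with conn:
                request = conn.recv(1024)          # read the client's request
                conn.sendall(b"HELLO " + request)  # send back a service result

    # Client: consumes the service and uses the response.
    def run_client(host="127.0.0.1", port=50007):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((host, port))
            cli.sendall(b"client-1")               # issue a service request
            print(cli.recv(1024))                  # use the server's response

Run the server in one process and the client in another; the client knows nothing about how the server computes its response, which is exactly the abstraction described above.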

 Peer-to-peer Systems

The client-server model is appropriate for service-oriented situations. However, there are other computational goals for which a more equal division of labor is a better choice. The term peer-to-peer describes distributed systems in which labor is divided among all the components of the system. All the computers send and receive data, and they all contribute processing power and memory. As a distributed system increases in size, its capacity of computational resources increases.

In some peer-to-peer systems, the job of maintaining the health of the network is taken on by a set
of specialized components. Such systems are not pure peer-to-peer systems, because they have
different types of components that serve different functions. The components that support a peer-
to-peer network act like scaffolding: they help the network stay connected, they maintain
information about the locations of different computers, and they help newcomers take their place
within their neighborhood.
Skype, the voice- and video-chat service, is an example of a data transfer application with a peer-to-peer architecture.

PREPARED BY: DR PAWAN KUMAR GOEL


Figure: Peer-to-peer architectural style.

Technologies for distributed computing


The main technologies for distributed computing are remote procedure call (RPC), distributed object frameworks, and service-oriented computing.

Remote procedure call


RPC is the fundamental abstraction enabling the execution of procedures on a client's request. RPC extends the concept of a procedure call beyond the boundaries of a single process and a single memory address space: the called procedure and the calling procedure may be on the same system, or they may be on different systems connected by a network. The concept of RPC has been discussed since 1976 and was completely formalized by Nelson and Birrell in the early 1980s. From then on, it has not changed in its major components. Even though it is a quite old technology, RPC is still used today as a fundamental component for IPC in more complex systems.
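
As an illustration of the idea (a sketch, not any particular historical RPC protocol), Python's standard xmlrpc modules let a client invoke a procedure that actually executes in the server process; the host and port here are arbitrary.

    from xmlrpc.server import SimpleXMLRPCServer

    def add(x, y):
        return x + y

    # Server process: expose the procedure beyond its own address space.
    server = SimpleXMLRPCServer(("127.0.0.1", 8000), allow_none=True)
    server.register_function(add, "add")
    # server.serve_forever()  # uncomment to start serving requests

    # Client process: the call looks local but runs on the server.
    # import xmlrpc.client
    # proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000")
    # print(proxy.add(2, 3))  # prints 5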

Distributed object frameworks


Distributed object frameworks extend object-oriented programming systems by allowing objects
to be distributed across a heterogeneous network and provide facilities so that they can coherently
act as though they were in the same address space. Distributed object frameworks leverage the
basic mechanism introduced with RPC and extend it to enable the remote invocation of object
methods and to keep track of references to objects made available through a network connection.

Examples of Distributed object frameworks are:


Common Object Request Broker Architecture (CORBA): CORBA is a specification introduced by the Object Management Group (OMG) for providing cross-platform and cross-language interoperability among distributed components. The specification was originally designed to provide an interoperation standard that could be effectively used at the industrial level.

Distributed Component Object Model (DCOM/COM+): DCOM, later integrated into and evolved as COM+, is the solution provided by Microsoft for distributed object programming before the introduction of .NET technology. DCOM introduced a set of features allowing the use of COM components beyond process boundaries.

Java remote method invocation (RMI)

Java RMI is a standard technology provided by Java for enabling RPC among distributed Java
objects. RMI defines an infrastructure allowing the invocation of methods on objects that are
located on different Java Virtual Machines (JVMs) residing either on the local node or on a remote
one.

.NET remoting

Remoting is the technology allowing for IPC among .NET applications. It provides developers
with a uniform platform for accessing remote objects from within any application developed in
any of the languages supported by .NET.
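
None of the frameworks above is Python-based, but the core idea of remotely invoking an object's methods can be sketched with Python's standard multiprocessing.managers module; the address, authkey, and Counter class are invented for this example.

    from multiprocessing.managers import BaseManager

    class Counter:
        """An object whose methods will be invoked remotely."""
        def __init__(self):
            self.value = 0
        def increment(self):
            self.value += 1
            return self.value

    class CounterManager(BaseManager):
        pass

    CounterManager.register("get_counter", callable=Counter)

    if __name__ == "__main__":
        # Server side: host the object and serve method invocations.
        mgr = CounterManager(address=("127.0.0.1", 50000), authkey=b"secret")
        mgr.start()

        # Client side (could be another machine): obtain a proxy and
        # invoke methods; they execute in the server process.
        client = CounterManager(address=("127.0.0.1", 50000), authkey=b"secret")
        client.connect()
        counter = client.get_counter()   # proxy holding a remote reference
        print(counter.increment())       # prints 1; runs on the server
        mgr.shutdown()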

Figure: A cloud computing distributed system. From top to bottom: applications (SaaS) such as social networks, scientific computing, and enterprise applications; middleware (PaaS) providing frameworks for cloud application development; and hardware and OS (IaaS) providing virtual hardware, networking, OS images, and storage.

What is parallel processing? (Parallel Computing)


Processing multiple tasks simultaneously on multiple processors is called parallel processing. A parallel program consists of multiple active processes (tasks) simultaneously solving a given problem. A given task is divided into multiple subtasks using a divide-and-conquer technique, and each subtask is processed on a different central processing unit (CPU). Programming on a multiprocessor system using the divide-and-conquer technique is called parallel programming.
Many applications today require more computing power than a traditional sequential computer can offer. Parallel processing provides a cost-effective solution to this problem by increasing the number of CPUs in a computer and by adding an efficient communication system between them. The workload can then be shared between different processors. This setup results in higher computing power and performance than a single-processor system offers.
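
A minimal divide-and-conquer sketch in Python (illustrative, with an arbitrary task of summing squares) shows the pattern: the task is divided into subtasks, each subtask runs on a different CPU, and the partial results are combined.

    from multiprocessing import Pool

    def subtask(chunk):
        # Each worker process solves one subtask independently.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # Divide: split the task into 4 subtasks.
        chunks = [data[i::4] for i in range(4)]
        # Conquer: process each subtask on a different CPU.
        with Pool(processes=4) as pool:
            partials = pool.map(subtask, chunks)
        # Combine the partial results.
        print(sum(partials))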
The development of parallel processing is influenced by many factors; the prominent ones include the following:
• Computational requirements are ever increasing in the areas of both scientific and business
computing. The technical computing problems, which require high-speed computational power,
are related to life sciences, aerospace, geographical information systems, mechanical design and
analysis, and the like.
• Sequential architectures are reaching physical limitations as they are constrained by the speed
of light and thermodynamics laws. The speed at which sequential CPUs can operate is reaching
saturation point (no more vertical growth), and hence an alternative way to get high
computational speed is to connect multiple CPUs (opportunity for horizontal growth).
• Hardware improvements in pipelining, superscalar, and the like are nonscalable and require
sophisticated compiler technology. Developing such compiler technology is a difficult task.
• Vector processing works well for certain kinds of problems. It is suitable mostly for scientific
problems (involving lots of matrix operations) and graphical processing. It is not useful for other
areas, such as databases.
• The technology of parallel processing is mature and can be exploited commercially; there is
already significant R&D work on development tools and environments.
• Significant development in networking technology is paving the way for heterogeneous
computing.

Hardware architectures for parallel processing


The core elements of parallel processing are CPUs. Based on the number of instruction and data
streams that can be processed simultaneously, computing systems are classified into the following
four categories:
• Single-instruction, single-data (SISD) systems
• Single-instruction, multiple-data (SIMD) systems
• Multiple-instruction, single-data (MISD) systems
• Multiple-instruction, multiple-data (MIMD) systems



Figure: Single-instruction, single-data (SISD) architecture. A single instruction stream drives one processor, which transforms a single data input stream into a single data output stream.

Figure: Single-instruction, multiple-data (SIMD) architecture. A single instruction stream drives N processors, each with its own data input and data output stream.

Figure: Multiple-instruction, single-data (MISD) architecture. N instruction streams drive N processors, which all operate on a single data input stream and produce a single data output stream.

Figure: Multiple-instruction, multiple-data (MIMD) architecture. N instruction streams drive N processors, each with its own data input and data output stream.
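
The SIMD style is what array libraries exploit in practice. A small sketch using NumPy (an illustration, not part of the original notes) contrasts it with element-at-a-time SISD processing:

    import numpy as np

    data = np.arange(10_000, dtype=np.float64)

    # SISD style: one instruction processes one data element at a time.
    out_sisd = [x * 2.0 for x in data]

    # SIMD style: one logical operation is applied to many data elements
    # at once; NumPy dispatches to vectorized machine code internally.
    out_simd = data * 2.0

    assert np.allclose(out_sisd, out_simd)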

Approaches to parallel programming

A wide variety of parallel programming approaches are available. The most prominent among them are the following (a sketch of the farmer-and-worker model appears below):
• Data parallelism
• Process parallelism
• Farmer-and-worker model
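
As promised above, here is a minimal sketch of the farmer-and-worker model in Python (an illustration with invented task values): a farmer process distributes tasks through a queue, and worker processes take tasks, compute, and return results.

    from multiprocessing import Process, Queue

    def worker(tasks, results):
        while True:
            n = tasks.get()
            if n is None:          # sentinel: no more work for this worker
                break
            results.put(n * n)     # process one task and return the result

    if __name__ == "__main__":
        tasks, results = Queue(), Queue()
        workers = [Process(target=worker, args=(tasks, results))
                   for _ in range(3)]
        for w in workers:
            w.start()
        for n in range(10):        # the farmer hands out tasks
            tasks.put(n)
        for _ in workers:
            tasks.put(None)        # one stop sentinel per worker
        print(sorted(results.get() for _ in range(10)))
        for w in workers:
            w.join()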



Levels of parallelism
Levels of parallelism are decided based on the lumps of code (grain size) that can be
a potential candidate for parallelism.
Table: Levels of Parallelism

Grain Size Code Item Parallelized By

Large Separate and heavyweight process Programmer


Medium Function or procedure Programmer
Fine Loop or instruction block Parallelizing compiler
Very fine Instruction Processor

Explain the term Elasticity in terms of Cloud Computing.

In cloud computing, elasticity is defined as "the degree to which a system is able to adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible". Elasticity is a defining characteristic that differentiates cloud computing from previously proposed computing paradigms, such as grid computing. The dynamic adaptation of capacity, e.g., by altering the use of computing resources to meet a varying workload, is called "elastic computing".

Purpose

Elasticity aims at matching the amount of resource allocated to a service with the
amount of resource it actually requires, avoiding over- or under-provisioning.
Over-provisioning, i.e., allocating more resources than required, should be avoided, as the service provider often has to pay for the resources allocated to the service. For example, an Amazon EC2 M4 extra-large instance costs US$0.239/hour. If a service has allocated two virtual machines when only one is required, the service provider wastes about $2,095 every year (roughly 8,766 hours × US$0.239/hour). Hence, the service provider's expenses are higher than optimal and their profit is reduced.



Under-provisioning, i.e., allocating fewer resources than required, must also be avoided; otherwise the service cannot serve its users with good quality of service. In the above example, under-provisioning a website may make it seem slow or unreachable. Web users eventually give up on accessing it, and the service provider loses customers. In the long term, the provider's income will decrease, which also reduces their profit.
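
The matching of supply to demand can be sketched as a simple control loop. The helper functions below (get_utilization, add_instance, remove_instance) are hypothetical stand-ins for whatever monitoring and provisioning API a real cloud exposes, and the thresholds are illustrative only.

    def autoscale(instances, get_utilization, add_instance, remove_instance,
                  high=0.80, low=0.30, min_instances=1):
        """One iteration of an elastic-scaling control loop (a sketch)."""
        util = get_utilization()   # e.g., average CPU across all instances
        if util > high:
            add_instance()         # provision: demand exceeds capacity
            return instances + 1
        if util < low and instances > min_instances:
            remove_instance()      # de-provision: paying for idle capacity
            return instances - 1
        return instances           # supply already matches demand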

What does Cloud Provisioning mean?


Cloud provisioning refers to the processes for the deployment and integration of cloud computing
services within an enterprise IT infrastructure. This is a broad term that incorporates the policies,
procedures and an enterprise’s objective in sourcing cloud services and solutions from a cloud
service provider.
Cloud provisioning primarily defines how, what and when an organization will provision cloud
services. These services can be internal, public or hybrid cloud products and solutions. There are
three different delivery models:

• Dynamic/On-Demand Provisioning: The customer or requesting application is provided with resources at run time.
• User Provisioning: The user/customer adds a cloud device or service themselves.
• Post-Sales/Advanced Provisioning: The customer is provided with the resources upon contract/service signup.

From a provider's standpoint, cloud provisioning can include the supply and assignment of required cloud resources to the customer: for example, the creation of virtual machines, the allocation of storage capacity, and/or granting access to cloud software.
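
For instance, dynamic/on-demand provisioning of a virtual machine can be done programmatically. The sketch below uses the AWS boto3 SDK; the AMI ID is a placeholder, and credentials and region are assumed to come from the environment.

    import boto3

    ec2 = boto3.client("ec2")
    # Create (provision) one virtual machine at run time.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])  # the provisioned VM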

What is Web 2.0? Give examples of Web 2.0.

The Web is the primary interface through which cloud computing delivers its services. At present,
the Web encompasses a set of technologies and services that facilitate interactive information
sharing, collaboration, user-centered design, and application composition. This evolution has
transformed the Web into a rich platform for application development and is known as Web 2.0.
This term captures a new way in which developers architect applications and deliver services through the Internet, and it provides a new experience for the users of these applications and services. Examples of Web 2.0 applications are Google Documents, Google Maps, Flickr, Facebook, Twitter, YouTube, del.icio.us, Blogger, and Wikipedia. In particular, social networking websites take the biggest advantage of Web 2.0.
