Cloud Computing Unit-2 (A)

Er. Rohit Handa
Lecturer, CSE-IT Department, IBM-ICE Program, BUEST Baddi
Topic: Overview of Grid, Peer-to-Peer, Pervasive and Utility Computing technologies,
their characteristics and comparison with cloud computing
Topic-1: Client-Server Model of Computing
The client-server model of computing is a
distributed application model that partitions the
system into two entities:
o providers of a resource/service, called
servers
o the users of the resource/service, called
clients
A server machine is a host that runs one or more server programs and shares resources
with clients.
A client machine sends requests to the server through a computer network and may
make use of a client program.
A client does not share any of its resources, but requests a server's content or service
function. Clients therefore initiate communication sessions with servers which await
incoming requests.
A client-server network involves multiple clients connecting to a single, central server.
Often clients and servers communicate over a computer network on separate hardware,
but both client and server may reside in the same system.
Examples of computer applications that use the client-server model are email, network
printing, and the World Wide Web.
The client-server characteristic describes the relationship of cooperating programs in
an application.
The server component provides a function or service to one or many clients, which
initiate requests for such services.
Servers are classified by the services they provide. A web server serves web pages; a file
server serves computer files.
A shared resource may be any of the server computer's software and electronic
components, from programs and data to processors and storage devices. The sharing of
resources of a server constitutes a service.
Whether a computer is a client, a server, or both, is determined by the nature of the
application that requires the service functions. For example, a single computer can run
web server and file server software at the same time to serve different data to clients
making different kinds of requests. Client software can also communicate with server
software within the same computer.
Communication between servers, such as to synchronize data, is sometimes
called inter-server or server-to-server communication.
In general, a service is an abstraction of computer resources and a client does not have
to be concerned with how the server performs while fulfilling the request and delivering
the response.
The client only has to understand the response based on the well-known application
protocol, i.e. the content and the formatting of the data for the requested service.
Clients and servers exchange messages in a request-response messaging pattern: The
client sends a request, and the server returns a response. This exchange of messages is
an example of inter-process communication.
To communicate, the computers must have a common language, and they must follow
rules so that both the client and the server know what to expect. The language and
rules of communication are defined in a communications protocol. All client-server
protocols operate in the application layer.
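The request-response exchange described above can be made concrete with a short sketch. This is only an illustration of the pattern, not part of the original notes: the port number, the threading setup and the toy "convert to upper case" service are assumptions chosen for the example (Python is used here and in the other sketches in this unit).

# Minimal sketch of the client-server request-response pattern over TCP sockets.
import socket
import threading

HOST, PORT = "127.0.0.1", 5000                   # assumed address for this example
listener = socket.create_server((HOST, PORT))    # server socket awaiting requests

def serve_one():
    # The server waits for one request and returns one response.
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024).decode()       # the client's request
        conn.sendall(request.upper().encode())   # toy "service": upper-case the text

threading.Thread(target=serve_one, daemon=True).start()

# The client initiates the session, sends a request and reads the response.
with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(b"hello server")
    print(sock.recv(1024).decode())              # prints: HELLO SERVER
listener.close()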

Configurations in Client-Server Computing


Client-Server Computing is divided into three components, a Client Process requesting service
and a Server Process providing the requested service, with a Middleware in between them for
their interaction.
Client: A Client Machine usually manages the user-interface portion of the application,
validates data entered by the user, and dispatches requests to server programs. It is the
front-end of the application that the user sees and interacts with. Besides, the Client Process
also manages the local resources that the user interacts with such as the monitor,
keyboard, workstation, CPU and other peripherals.
Server: On the other hand, the Server Machine fulfills the client request by performing
the service requested. After the server receives requests from clients, it executes
database retrieval, updates and manages data integrity and dispatches responses to
client requests. The server-based process may run on another machine on the network;
the server then provides both file system services and application services. Or, in
some cases, another desktop machine provides the application services. The server acts
as a software engine that manages shared resources such as databases, printers,
communication links, or high-powered processors. The main aim of the Server Process
is to perform the back-end tasks that are common to similar applications.
The simplest forms of servers are disk servers and file servers. With a file server, the
client passes requests for files or file records over a network to the file server. This form
of data service requires large bandwidth and can slow a network with many users.
More advanced forms of servers are database servers, transaction servers and
application servers.
Middleware: Middleware allows applications to transparently communicate with other
programs or processes regardless of location. The key element of Middleware is NOS
(Network Operating System), which provides services such as routing, distribution,
messaging and network management. The NOS relies on communication protocols to
provide specific services. Once the physical connection has been established and
transport protocols chosen, a client-server protocol is required before the user can
access the network services. A client-server protocol dictates the manner in which
clients request information and services from a server and also how the server replies to
that request.

Characteristics and Features in Client-Server Computing


Consists of networked webs of small and powerful machines (both servers and
clients): Client-Server Computing uses local processing power, the power of the desktop
platform. It changes the way an enterprise accesses, distributes, and uses data. With this
approach, data is no longer under the tight control of senior managers and MIS
(Management Information Systems) staff; it is readily available to middle-rank
personnel and staff, who can be actively involved in decision-making and operations on
behalf of the company. The company becomes more flexible and gives a faster response
to the changing business environment outside. In addition, if one machine goes down,
the company will still function properly.
Open Systems: Another feature of Client-Server Computing is open systems, which
means you can configure your systems, both software and hardware from various
vendors as long as they stick to a common standard. In this way, companies can tailor
their systems to their particular situation and needs, picking and choosing the most cost-
effective hardware and software components to suit their tasks. For example, you can
grab the data and run it through a spreadsheet from your desktop using the brand of
computer and software tools that you're most comfortable with and get the job done in
your own way.
Modularity: Since we are mixing software and hardware of different natures together
as a whole, all the software and hardware components are actually modular in nature.
This modularity allows the system to expand and modernize to meet requirements and
needs as the company grows. You can add or remove a particular client or machine,
implement new application software and even add new hardware features
without affecting the operation and functioning of the Client-Server System as a whole.
Cost Reduction and Better Utilisation of Resources: Potential cost savings prompt
organizations to consider Client-Server Computing. The combined base price of
hardware (machines and networks) and software for Client-Server systems is often a
tenth of the cost of mainframe computing. Furthermore, another feature of
Client-Server Computing is the ability to link existing hardware and software applications
and utilise them in an efficient way.
Complexity: The environment is typically heterogeneous and multivendor; the
hardware platform and operating system of client and server are not usually the same.
The biggest challenge to successfully implementing the system is to put together this
complex system of hardware and software from multiple vendors. Therefore we need
expertise not just in software, hardware or networks, but expertise in all these fields
and an understanding of their interdependencies and interconnections. Sometimes when the
system is down, it is extremely difficult to identify the bug or mistake, as there are
several culprits that might cause the problem. Furthermore, we have to spend extra
effort and time in training IS professionals to maintain this new environment in
geographically dispersed locations.

Topic-2: Cluster Computing


A cluster is a type of parallel or distributed computer system, which consists of a
collection of interconnected standalone computers working together as a single
integrated computing resource.
Clusters are usually deployed to improve performance and availability over that of a
single computer, while typically being much more cost-effective than single computers
of comparable speed or availability.

Major Characteristics:
Tightly coupled systems
Single system image
Centralized Job management & scheduling system
Multiple computing nodes,
o low cost
o a fully functioning computer with its own memory, CPU, possibly storage
o own instance of operating system
computing nodes are connected by interconnects
o typically low cost, high bandwidth and low latency
permanent, high performance data storage
a resource manager to distribute and schedule jobs

the middleware that allows the computers to act as a distributed or parallel system
parallel applications designed to run on it
More towards parallel computing
Makes use of interconnection technologies
Processing elements generally lie within close proximity of each other
Gives the impression of a single powerful computer
Generally cost effective compared to single computers of comparable speed and
availability
Deployed to improve performance and availability over that of a single computer

Attributes of Clusters
Computer clusters may be configured for different purposes ranging from general
purpose business needs such as web-service support, to computation-intensive
scientific calculations. In either case, the cluster may use a high-availability approach.
"Load-balancing" clusters are configurations in which cluster-nodes share
computational workload to provide better overall performance. For example, a web
server cluster may assign different queries to different nodes, so the overall response
time will be optimized. However, approaches to load-balancing may significantly differ
among applications, e.g. a high-performance cluster used for scientific computations
would balance load with different algorithms from a web-server cluster which may just
use a simple round-robin method by assigning each new request to a different node.
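As an illustration of the simple round-robin method mentioned above, the following sketch assigns each new request to the next node in a fixed rotation. The node names and requests are invented for the example.

# Hedged sketch of round-robin request assignment across cluster nodes.
from itertools import cycle

nodes = cycle(["node-1", "node-2", "node-3"])    # assumed web-server nodes

def assign(request):
    # Each new request simply goes to the next node in the rotation.
    return next(nodes)

for req in ["GET /a", "GET /b", "GET /c", "GET /d"]:
    print(req, "->", assign(req))                # node-1, node-2, node-3, node-1, ...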
"Computer clusters" are used for computation-intensive purposes, rather than
handling IO-oriented operations such as web service or databases. For instance, a
computer cluster might support computational simulations of vehicle crashes or
weather.
Very tightly coupled computer clusters are designed for work that may approach
"supercomputing".
"High-availability clusters" (also known as failover clusters, or HA clusters) improve
the availability of the cluster approach. They operate by having redundant nodes, which
are then used to provide service when system components fail. HA cluster
implementations attempt to use redundancy of cluster components to eliminate single
points of failure.

Benefits
Low Cost: Customers can eliminate the cost and complexity of procuring, configuring
and operating HPC clusters with low, pay-as-you-go pricing. Further, you can optimize
costs by leveraging one of several pricing models: On Demand, Reserved or Spot
Instances.
Elasticity: You can add and remove computer resources to meet the size and time
requirements for your workloads.
Run Jobs Anytime, Anywhere: You can launch compute jobs using simple APIs or
management tools and automate workflows for maximum efficiency and scalability. You
can increase your speed of innovation by accessing computer resources in minutes
instead of spending time in queues.

Components of a Cluster
The following are the components of cluster computers:
Multiple computers (computing nodes)

Operating system of the nodes


High performance interconnect network and fast communication protocols
o With high bandwidth
o Low latency
Cluster Middleware
o To support Single System Image (SSI) and System Availability Infrastructure
o Resource management and scheduling software
Initial installation
Administration
Scheduling
Allocation of hardware
Allocation software components
Parallel programming environments and tools
o Compilers
o Parallel Virtual Machine (PVM)
o Message Passing Interface (MPI)
Applications
o Sequential
o Parallel or Distributed
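The parallel programming environments listed above include MPI. A minimal sketch of an MPI program follows; it assumes the mpi4py Python binding, which is one possible binding and not something the notes prescribe.

# Minimal MPI example using the mpi4py binding (assumed to be installed).
# Each process (rank) in the cluster job prints its identity; run e.g. with:
#   mpirun -np 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # id of this process within the job
size = comm.Get_size()      # total number of processes in the job
print(f"Hello from rank {rank} of {size} on {MPI.Get_processor_name()}")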

Classification of clusters
1. According to usage requirements
a. High Performance and High Throughput Clusters: They are used for applications
which require high computing capability.
b. High Availability Clusters: The aim is to keep the overall services of the cluster
available as much as possible, considering the possibility of failure of each hardware or
software component. They provide redundant services across multiple systems to overcome
loss of service. If a node fails, others pick up the service to keep the system
environment consistent from the point of view of the user. The switching over

should take a very short time. A subset of this type is the load-balancing cluster.
These clusters are usually used for business needs. The aim is to share processing load as
evenly as possible. No single parallel program runs across those nodes; each
node is independent, running separate software. There should be a central load-
balancing server.

2. According to the node type


a. Homogeneous clusters: In homogeneous clusters all nodes have similar
properties. Each node is much like any other. Amount of memory and
interconnects are similar.
b. Heterogeneous clusters: Nodes have different characteristics, in the sense of
memory and interconnect performance.

3. According to the hierarchy they inherit.


a. Single level (single-tier) clusters: No hierarchy of nodes is defined; any
node may be used for any purpose. The main advantage of the single-tier cluster
is its simplicity. The main disadvantage is its limited expandability.
b. Multi level (multi-tier) clusters: There is a hierarchy between nodes. There are
node sets, where each set has a specialized function.

High-performance Computing: High-performance computing (HPC) is a broad term that at


its core represents compute-intensive applications that need acceleration.
HPC Applications
Medical Imaging: CT scan (pre-processing, reconstruction), 3D ultrasound, real-time X-ray
Financial (Trading): derivatives trading, Black-Scholes model, BGM/LIBOR market model,
Monte Carlo simulations
Oil and Gas: reservoir modeling, seismic data interpretation, 3D image processing
Bioscience: gene and protein annotation (mapping SNPs to the human genome, modeling
protein families), mapping drug therapy to individuals' genes (future)
Other markets: military, data compression, coder/decoder (CODEC), search, security, etc.
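Monte Carlo simulations, listed above under financial applications, are a typical compute-intensive workload. The toy sketch below estimates pi by random sampling and spreads the work over local CPU cores; the sample counts and the use of Python's multiprocessing module are illustrative assumptions standing in for a real HPC job.

# Toy Monte Carlo estimate of pi, parallelised across CPU cores.
import random
from multiprocessing import Pool

def hits(n):
    # Count random points that fall inside the unit quarter-circle.
    return sum(random.random()**2 + random.random()**2 <= 1.0 for _ in range(n))

if __name__ == "__main__":
    chunks = [250_000] * 4                       # four independent work units
    with Pool(processes=4) as pool:
        total = sum(pool.map(hits, chunks))
    print("pi is approximately", 4 * total / sum(chunks))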

High-throughput computing (HTC)


It describes the use of many computing resources over long periods of time to
accomplish a computational task.
HPC tasks are characterized as needing large amounts of computing power for short
periods of time, whereas HTC tasks also require large amounts of computing, but for
much longer times (months and years, rather than hours and days).
The HTC field is more interested in how many jobs can be completed over a long period of
time than in how fast an individual job completes.
HPC workloads are tightly coupled parallel jobs, and as such they must execute within a
particular site with low-latency interconnects.
Conversely, HTC workloads are independent, sequential jobs that can be individually
scheduled on many different computing resources across multiple administrative
boundaries. HTC systems achieve this using various grid computing technologies and
techniques.
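To illustrate the throughput orientation of HTC, the sketch below submits many independent, sequential jobs and simply counts how many complete. A local process pool stands in for the grid scheduler an HTC system would actually use, and the job itself is invented.

# Sketch of a throughput-oriented workload: many independent, sequential jobs.
from concurrent.futures import ProcessPoolExecutor, as_completed

def job(seed):
    # An independent unit of work; no communication with other jobs.
    return sum(i * seed for i in range(100_000))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(job, s) for s in range(50)]   # 50 independent jobs
        done = sum(1 for _ in as_completed(futures))
    print(done, "jobs completed")   # throughput (jobs finished) is what matters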

Topic-3: Peer-to-Peer Computing


A peer-to-peer (abbreviated to P2P) computer network is one in which each computer
in the network can act as a client or server for the other computers in the network,
allowing shared access to various resources such as files, peripherals, and sensors
without the need for a central server.
A peer-to-peer (P2P) network is a type of decentralized and distributed network
architecture in which individual nodes in the network (called "peers") act as both
suppliers and consumers of resources, in contrast to the centralized client-server model
where client nodes request access to resources provided by central servers.
In a peer-to-peer network, tasks are shared amongst multiple interconnected peers who
each make a portion of their resources directly available to other network participants,
without the need for centralized coordination by servers.

Characteristics
A distributed system architecture, i.e., No centralized control
Clients are also servers and routers
Nodes contribute content, storage, memory, CPU
Nodes are autonomous (no administrative authority)
Ad-hoc Nature. Peers join and leave the system without direct control of any entity.
Therefore, the number and location of active peers as well as the network topology
interconnecting them are highly dynamic. The ad-hoc nature requires P2P systems to
be self-organizing. So, the network is dynamic: nodes enter and leave the network
frequently
Nodes collaborate directly with each other (not through well-known servers)
Limited Capacity and Reliability of Peers. Measurement studies [33] show that peers
do not have server-like properties: peers have much less capacity and they fail more
often. The unreliability of peers suggests that fault tolerance and adaptation
techniques should be integral parts of the P2P protocols. The limited capacity of peers
demands load-sharing and balancing among all participating peers.
Nodes have widely varying capabilities
Rationality of Peers. Computers participating in a P2P system are typically owned and
operated by autonomous and rational entities (peers). Rational peers make decisions to
maximize their own benefits. Peers may decide, for example, whether to share data,
leave the system, or forward queries. These decisions are not always in line with the
performance objectives of the system. This conflict of interest may jeopardize the growth
and the performance of the entire system. Therefore, peer rationality should be
considered in designing P2P protocols
Decentralization: One main goal of decentralization is the emphasis on users'
ownership and control of data and resources. In a fully decentralized system, all peers
assume equal roles. This makes the implementation of the P2P models difficult in
practice because there is no centralized server with a global view of all the peers in the
network or the files they provide. This is the reason why many P2P file systems are built
as hybrid approach as in the case of Napster, where there is a centralized directory of
the files but the nodes download files directly from their peers.
Scalability: An immediate benefit of decentralization is improved scalability. Scalability
is limited by factors such as the amount of centralized operations (e.g., synchronization
and coordination) that need to be performed, the amount of state that needs to be
maintained, the inherent parallelism an application exhibits, and the programming
model that is used to represent the computation.
Anonymity: An important goal of anonymity is to allow people to use systems without
concern for legal or other ramifications. A further goal is to guarantee that censorship
of digital content is not possible.
Cost of Ownership: One of the promises of P2P networks is shared ownership. Shared
ownership reduces the cost of owning the systems and the content, and the cost of
maintaining them. This is applicable to all classes of P2P systems.
Ad-hoc Connectivity: The ad-hoc nature of connectivity has a strong effect on all
classes of P2P systems. In distributed computing, the parallelized applications cannot
be executed on all systems all of the time; some of the systems will be available all of
the time, some will be available part of the time, and some will not be available at all.
P2P systems and applications in distributed computing need to be aware of this ad-hoc
nature and be able to handle systems joining and withdrawing from the pool of
available P2P systems.
Performance: Performance is a significant concern in P2P systems. P2P systems aim to
improve performance by aggregating distributed storage capacity (e.g., Napster,
Gnutella) and computing cycles (e.g., SETI@Home) of devices spread across a network.
Owing to the decentralized nature of these models, performance is influenced by three
types of resources namely: processing, storage, and networking.
Security: P2P systems share most of their security needs with common distributed
systems: trust chains between peers and shared objects, session key exchange
schemes, encryption, digital digests, and signatures. However, new security
requirements appeared with P2P systems. Some of these requirements are multi-key
encryption, sandboxing, digital rights management, reputation and accountability and
firewalls.
Transparency and Usability: In distributed systems, transparency was traditionally
associated with the ability to transparently connect distributed systems into a
seamlessly local system. The primary form of transparency was location transparency,
but other forms include transparency of access, concurrency, replication, failure,
mobility, scaling, etc. Over time, some of the transparencies were further qualified,
such as transparency for failure, by requiring distributed applications to be aware of
failures, and addressing transparency on the Internet and Web. Another form of
transparency is related to security and mobility.
Fault Tolerance: One of the primary goals of a P2P system is to avoid a central point of
failure. Although most P2P systems (pure P2P) already do this, they nevertheless are
faced with failures commonly associated with systems spanning multiple hosts and
networks: disconnections/unreachability, partitions, and node failures. It would be
desirable to continue active collaboration among the still connected peers in the
presence of such failures.

P2P Applications
File sharing (Napster, Gnutella, Kazaa)
Multiplayer games (Unreal Tournament, DOOM)
Collaborative applications (ICQ, shared whiteboard)
Distributed computation (Seti@home)
Ad-hoc networks

Types of Peer-to-peer Computing


Structured peer-to-peer networks
Unstructured peer-to-peer networks
Pure peer-to-peer systems
Hybrid peer-to-peer systems
Centralized peer-to-peer systems

Structured peer-to-peer networks


Peers are organized following specific criteria and algorithms, which lead to overlays
with specific topologies and properties.
They typically use distributed hash table (DHT) based indexing.
Structured P2P systems are appropriate for large-scale implementations due to high
scalability and some guarantees on performance (typically approximating O(log N),
where N is the number of nodes in the P2P system).
Structured P2P networks employ a globally consistent protocol to ensure that any node
can efficiently route a search to some peer that has the desired file/resource, even if the
resource is extremely rare. Such a guarantee necessitates a more structured pattern of
overlay links.
The most common type of structured P2P network implements a distributed hash
table (DHT), in which a variant of consistent hashing is used to assign ownership of
each file to a particular peer, in a way analogous to a traditional hash table's
assignment of each key to a particular array slot.
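A minimal sketch of consistent hashing follows: each key is owned by the first peer whose position on the hash ring is at or after the key's position. The peer names are invented, and real DHTs such as Chord or Kademlia add routing structures on top of this basic idea.

# Minimal consistent-hashing sketch for key-to-peer assignment.
import hashlib
from bisect import bisect_left

def h(value):
    # Map a string to a point on the ring [0, 2^32).
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % 2**32

peers = ["peer-a", "peer-b", "peer-c", "peer-d"]        # illustrative peers
ring = sorted((h(p), p) for p in peers)

def owner(key):
    idx = bisect_left(ring, (h(key), ""))               # first peer at or after the key
    return ring[idx % len(ring)][1]                     # wrap around the ring

print(owner("song.mp3"), owner("paper.pdf"))            # deterministic peer assignment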

Unstructured P2P networks


Unstructured P2P networks do not impose any structure on the overlay networks.
Peers in these networks connect in an ad-hoc fashion based on some loose set of rules.
Three Categories:
o In pure peer-to-peer systems the entire network consists solely of equipotent
peers. There is only one routing layer, as there are no preferred nodes with any
special infrastructure function.
o In centralized peer-to-peer systems, a central server is used for indexing
functions and to bootstrap the entire system. Although this has similarities with
a structured architecture, the connections between peers are not determined by
any algorithm.
o Hybrid P2P networks employ dynamic central entities, which establish a second
routing hierarchy to optimize the routing behavior of flat overlay approaches.

Centralized P2P-Napster
Napster can be classified as a centralized P2P network, where a central entity is
necessary to provide the service.
A central database maintains an index of all files that are shared by the peers currently
logged onto the Napster network.
The database can be queried by all peers to lookup the IP addresses and ports of all
peers sharing the requested file.
File transfer is decentralized, but locating content is centralized.
In 1999 Napster offered the first P2P file-sharing application and with it a real P2P rush
started.

Pure P2P
The main disadvantage of a central architecture is its single point of failure.
For this reason pure P2P networks like Gnutella 0.4 have been developed.
They are established and maintained completely without central entities.
All peers in these overlay networks are homogeneous and provide the same
functionality.
Therefore, they are very fault resistant as any peer can be removed without loss of
functionality.
Because of Gnutella's unstructured network architecture, no guarantee can be given
that content can be found.
Messages are coded in plain text and all queries have to be flooded through the
network.
This results in a significant signaling overhead and a comparable high network load.

Gnutella
Searching by flooding:
If you don't have the file you want, query 7 of your neighbors.
If they don't have it, they contact 7 of their neighbors, for a maximum hop count of 10.
Requests are flooded, but there is no tree structure.
No looping but packets may be received twice.
Reverse path forwarding
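The flooding search described above can be sketched as follows; the overlay graph, the placement of the file and the hop-limit handling are simplified assumptions for illustration.

# Sketch of Gnutella-style flooding: a query spreads to neighbours up to a
# maximum hop count; already-visited peers drop duplicate queries.
overlay = {                      # assumed neighbour lists
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "E"],
    "E": ["D"],
}
has_file = {"E"}                 # only peer E holds the requested file

def flood(start, max_hops=10):
    frontier, seen, hits = [start], {start}, []
    for _ in range(max_hops):
        nxt = []
        for peer in frontier:
            for nb in overlay[peer]:
                if nb in seen:              # duplicate query, dropped
                    continue
                seen.add(nb)
                if nb in has_file:
                    hits.append(nb)         # a query hit travels back on the reverse path
                nxt.append(nb)
        frontier = nxt
    return hits

print(flood("A"))   # ['E']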

Hybrid P2P
Hybrid approaches, like later versions of Gnutella, try to reduce network traffic by establishing a second
routing hierarchy, i.e. the Superpeer layer.
By differentiating participating nodes into Superpeers and Leaf nodes, a significant
reduction in data rate consumption can be achieved without losing the network's
complete self-organization.

Topic-4: Ubiquitous Computing


Ubiquitous computing is an advanced computing concept where computing is made to
appear everywhere and anywhere.
In contrast to desktop computing, ubiquitous computing can occur using any device, in
any location, and in any format.
A user interacts with the computer, which can exist in many different forms, including
laptop computers, tablets, terminals and phones.
The underlying technologies to support ubiquitous computing include Internet,
advanced middleware, operating system, mobile code, sensors, microprocessors,
new I/O and user interfaces, networks, mobile protocols, location and positioning
and new materials.
It is normally associated with a large number of small electronic devices (small
computers) which have computation and communication capabilities such as smart
mobile phones, contactless smart cards, handheld terminals, sensor network nodes,
Radio Frequency IDentification (RFIDs) etc. which are being used in our daily life.
These small computers are equipped with sensors and actuators, thus allowing them to
interact with the living environment. In addition, the availability of
communication functions enables data exchange between the environment and devices.
With the advent of this new technology, learning styles have progressed from electronic
learning (e-learning) to mobile learning (m-learning) and from mobile learning to
ubiquitous learning (u-learning).
Ubiquitous learning, also known as u-learning is based on ubiquitous technology. The
most significant role of ubiquitous computing technology in u-learning is to construct a
ubiquitous learning environment, which enables anyone to learn at anyplace at
anytime.
Ubiquitous computing is the method of enhancing computer use by making many
computers available throughout the physical environment, but making them effectively
invisible to the user
Ubiquitous computing, or calm technology, is a paradigm shift where technology
becomes virtually invisible in our lives.

Context Awareness
A ubiquitous computing system has to be context aware, i.e., aware of the user's state and
surroundings, and must modify its behavior based on this information.
Context covers the situational conditions associated with a user: location, surrounding
conditions (light, temperature, humidity, noise level, etc.), social activities, user
intentions, personal information, etc.

Context Aware Applications


Examples of context-aware applications include advising a driver to take a particular
route based on his location, his destination, and current traffic conditions; advising a
nurse to attend to a particular patient based on the medical telemetry being received
from all patients on a ward; and delivering a message either by cell phone or by e-mail
depending on the recipient's current context.
The individuals who benefit from context-aware applications may not be sitting with a
keyboard, mouse, and display, and may in fact be engrossed in other activities.
They may remain unaware of the computer systems working on their behalf except
when those systems interrupt them for some urgent purpose.
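A toy rule set for the message-delivery example above might look like the following; the context fields and the chosen channels are assumptions made for illustration.

# Toy context-aware delivery rule: choose the channel for a message based on
# the recipient's current context. The context fields are illustrative.
def delivery_channel(context):
    if context.get("driving"):
        return "defer"            # do not interrupt while driving
    if context.get("in_meeting"):
        return "email"            # silent channel during meetings
    if context.get("online"):
        return "instant_message"
    return "cell_phone"

print(delivery_channel({"driving": False, "in_meeting": True}))   # email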

Ubiquitous Computing / Pervasive Computing


Computing is embedded in to everyday devices.
Information access and communication is virtually everywhere.
Ubiquitous means everywhere at the same time.
Pervasive means spreading the computation power into everything around us.

Pervasive computing technologies


Pervasive computing involves three converging areas of ICT
o computing (devices): likely to assume many different forms and sizes, from
handheld units (similar to mobile phones) to near-invisible devices set into
everyday objects (like furniture and clothing).
o Communications (connectivity): This can be achieved via both wired (such as
broadband (ADSL) or Ethernet) and wireless networking technologies (such as
WiFi or Bluetooth).
o User interfaces: new user interfaces are being developed that will be capable of
sensing and supplying more information about users, and the broader
environment, to the computer for processing.

Ubiquitous/Pervasive computing goes beyond the sphere of personal computers.


It is the idea that almost any device, from clothing to tools to appliances to cars to
homes to the human body to your coffee mug, can be embedded with chips to connect
the device to an infinite network of other devices.
The goal of pervasive computing, which combines current network technologies with
wireless computing, voice recognition, Internet capability and artificial intelligence, is to
create an environment where the connectivity of devices is embedded in such a way
that the connectivity is unobtrusive and always available.

Several terms that share a common vision


- Pervasive Computing
- Ubiquitous Computing
- Ambient Intelligence
- Wearable Computing
- Context Awareness

Characteristics
Permanency: The information remains unless the learners purposely remove it.
Accessibility: The information is always available whenever the learners need to use it.
Immediacy: The information can be retrieved immediately by the learners.
Interactivity: The learners can interact with peers, teachers, and experts efficiently
and effectively through different media.
Context-awareness: The environment can adapt to the learner's real situation to
provide adequate information for the learner.
Invisibility: invisible intelligent devices and wearable computing devices.
Adaptation: adapting to device type, time, location, temperature, weather, etc.
Task dynamism: applications need to adapt to the user's environment and
uncertainties; programs need to adapt to changing goals.
Device heterogeneity and resource constraints: the technological capabilities of the
environment change, and the device itself is mobile. Constraints include physically
limited resources (e.g., battery power, network bandwidth) and variability in the
availability of resources.
Application that follows the user: this requires dynamic adaptation of applications to
changing hardware capabilities and variability in software services.
Computing in a social environment: applications will have a significant impact on
their social environment, which is full of ubiquitous sensors. Who should access
sensor data? Who owns the data from a ubiquitous computing system?

Pervasive Computing Applications


Always running and available
Composed of collaborating parts spread over the network (distributed components)
Adapt to the environment when users/devices move and reconfigure to use available
services
Users are not aware of the computing embedded in the device (transparent interaction)
Information pursues the user rather than the user pursuing the information

Smart Clothing
Sensors based on fabric e.g., monitor pulse, blood pressure, body temperature
Invisible collar microphones
Kidswear
o game console on the sleeve
o integrated GPS-driven locators
o integrated small camera

Topic-5: Grid Computing


Grid computing is a method of harnessing the power of many computers in a network
to solve problems requiring a large number of processing cycles and involving huge
amounts of data.
Grid computing helps in exploiting underutilized resources, achieving parallel CPU
capacity, and providing virtual resources for collaboration and reliability.
Grid computing combines computers from multiple administrative domains to reach a
common goal, to solve a single task, and may then disappear just as quickly.
One of the main strategies of grid computing is to use middleware to divide and
apportion pieces of a program among several computers, sometimes up to many
thousands.
Grid computing involves computation in a distributed fashion, which may also involve
the aggregation of large-scale clusters.
The size of a grid may vary from small (i.e. confined
to a network of computer workstations within a
corporation) to large (public collaborations across
many companies and networks).
The notion of a confined grid may also be known as
intra-nodes cooperation whilst the notion of a
larger, wider grid may thus refer to inter-nodes
cooperation.
Coordinating applications on Grids can be a
complex task, especially when coordinating the flow
of information across distributed computing
resources.
Grid computing is a computer network in which
each computer's resources are shared with every
other computer in the system.
Processing power, memory and data storage are all community resources that
authorized users can tap into and leverage for specific tasks.
A grid computing system can be as simple as a collection of similar computers running
on the same operating system or as complex as inter-networked systems comprised of
every computer platform you can think of.
It's a special kind of distributed computing. In distributed computing, different
computers within the same network share one or more resources. In the ideal grid
computing system, every resource is shared, turning a computer network into a
powerful supercomputer.
With the right user interface, accessing a grid computing system would look no different
than accessing a local machine's resources. Every authorized computer would have
access to enormous processing power and storage capacity.
Grid computing systems work on the principle of pooled resources ,i.e., share the load
across multiple computers to complete tasks more efficiently and quickly.

Computer's resources:
Central processing unit (CPU): A CPU is a microprocessor that performs mathematical
operations and directs data to different memory locations. Computers can have more than
one CPU.
Memory: In general, a computer's memory is a kind of temporary electronic storage.
Memory keeps relevant data close at hand for the microprocessor. Without memory, the
microprocessor would have to search and retrieve data from a more permanent storage
device such as a hard disk drive.
Storage: In grid computing terms, storage refers to permanent data storage devices like
hard disk drives or databases.

Normally, a computer can only operate within the limitations of its own resources.
There's an upper limit to how fast it can complete an operation or how much information it
can store. Most computers are upgradeable, which means it's possible to add more power
or capacity to a single computer, but that's still just an incremental increase in
performance.
Grid computing systems link computer resources together in a way that lets someone use
one computer to access and leverage the collected power of all the computers in the
system.
To the individual user, it's as if the user's computer has transformed into a
supercomputer.

In general, a grid computing system requires:


At least one computer, usually a server, which handles all the administrative duties
for the system. Many people refer to this kind of computer as a control node. Other
application and Web servers (both physical and virtual) provide specific services to the
system.
A network of computers running special grid computing network software. These
computers act both as a point of interface for the user and as the resources the system will
tap into for different applications. Grid computing systems can either include several
computers of the same make running on the same operating system (called a
homogeneous system) or different computers running on every operating system
imaginable (a heterogeneous system). The network can be anything from a hardwired
system where every computer connects to the system with physical wires to an open
system where computers connect with each other over the Internet.
A collection of computer software called middleware. The purpose of middleware is to
allow different computers to run a process or application across the entire network of
machines. Middleware is the workhorse of the grid computing system. Without it,
communication across the system would be impossible. Like software in general, there's no
single format for middleware.
If middleware is the workhorse of the grid computing system, the control node is the
dispatcher.
The control node must prioritize and schedule tasks across the network.
It's the control node's job to determine what resources each task will be able to access.
The control node must also monitor the system to make sure that it doesn't become
overloaded.
It's also important that each user connected to the network doesn't experience a drop in
his or her computer's performance.
A grid computing system should tap into unused computer resources without impacting
everything else.
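A much-simplified sketch of the control node's dispatching role is given below: it tracks node load and sends each task to the least-loaded node, refusing tasks that would overload it. The node names, load figures and threshold are invented.

# Sketch of a grid control node dispatching tasks to the least-loaded node,
# so that unused capacity is tapped without overloading any node.
nodes = {"node-1": 0.10, "node-2": 0.55, "node-3": 0.30}   # current load (0..1)

def dispatch(task, cost=0.2, limit=0.8):
    target = min(nodes, key=nodes.get)          # least-loaded node
    if nodes[target] + cost > limit:
        return None                             # queue the task: grid is saturated
    nodes[target] += cost                       # account for the new work
    return target

for t in ["render-1", "render-2", "render-3"]:
    print(t, "->", dispatch(t))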
Characteristics:
1. Large scale: a grid must be able to deal with a number of resources ranging from just a
few to millions. This raises the very serious problem of avoiding potential performance
degradation as the grid size increases.
2. Geographical distribution: grid resources may be located at distant places.
3. Heterogeneity: a grid hosts both software and hardware resources that can be very
varied ranging from data, files, software components or programs to sensors, scientific
instruments, display devices, personal digital organizers, computers, super-computers
and networks.
4. Resource sharing: resources in a grid belong to many different organizations that allow
other organizations (i.e. users) to access them. Nonlocal resources can thus be used by
applications, promoting efficiency and reducing costs.
5. Multiple administrations: each organization may establish different security and
administrative policies under which their owned resources can be accessed and used.
As a result, the already challenging network security problem is complicated even more
with the need of taking into account all different policies.
6. Resource coordination: resources in a grid must be coordinated in order to provide
aggregated computing capabilities.
7. Transparent access: a grid should be seen as a single virtual computer.
8. Dependable access: a grid must assure the delivery of services under established
Quality of Service (QoS) requirements. The need for dependable service is fundamental
since users require assurances that they will receive predictable, sustained and often
high levels of performance.
9. Consistent access: a grid must be built with standard services, protocols and
interfaces, thus hiding the heterogeneity of the resources while allowing its scalability.
Without such standards, application development and pervasive use would not be
possible.
10. Pervasive access: the grid must grant access to available resources by adapting to a
dynamic environment in which resource failure is commonplace. This does not imply
that resources are everywhere or universally available, but that the grid must tailor its
behavior so as to extract the maximum performance from the available resources.
11. Gives a feeling of desktop supercomputing: you are sitting in front of your
desktop but you are connected to a supercomputer.
12. The Grid grows and shrinks dynamically; there is no static set of
resources that we can call the Grid.
13. A cluster is not a Grid. A Grid is not owned by one person and cannot be built from one
or a single resource; one can only be part of the Grid.
14. It is a mix of open source, proprietary software/applications/databases.
15. Virtualization of underlying resources
16. Decentralized scheduling, administering and job management
17. Highly Volatile, resources join and leave the Grid at their
own will and wish.
18. Unlimited number of nodes/resources
19. The Internet is used as an information highway. The Grid is built on
top of the Internet and is used as a computing/sharing highway.

Grid Computing -Architecture


Grids provide protocols and services at five different layers
as identified in the Grid protocol architecture.
At the fabric layer, Grids provide access to different
resource types such as compute, storage and network
resources, code repositories, etc.
The connectivity layer defines core communication and authentication protocols for
easy and secure network transactions.
The resource layer defines protocols for the publication, discovery, negotiation,
monitoring, accounting and payment of sharing operations on individual resources.
The collective layer captures interactions across collections of resources.
The application layer comprises the user
applications built on top of the above protocols and
APIs that operate in VO environments.

Cloud Computing Architecture


The fabric layer contains the raw hardware level
resources, such as computer resources, storage
resources, and network resources.
The unified resource layer contains resources that
have been abstracted/encapsulated (usually by
virtualization) so that they can be exposed to upper
layer and end users as integrated resources, for
instance, a virtual computer/cluster, a logical file
system, a database system, etc.
The platform layer adds on a collection of specialized tools, middleware and services
on top of the unified resources to provide a development and/or deployment platform.
For instance, a Web hosting environment, a scheduling service, etc.
Finally, the application layer contains the applications that would run in the Clouds.

Why need Grid Computing?


Exploiting underutilized resources
Parallel CPU capacity
Virtual resources and virtual organizations for collaboration
Access to additional resources
Resource Balancing
Reliability

Distributed computing vs. Grid Computing


Distributed Computing: Refers to managing or pooling the hundreds or thousands of
computer systems which individually are more limited in their memory and processing
power.
Grid Computing: has some extra characteristics; it is concerned with the efficient utilization
of a pool of heterogeneous systems with optimal workload management, utilizing an
enterprise's entire computational resources (servers, networks, storage, and
information). Grid computing's focus on the ability to support computation across
multiple administrative domains sets it apart from traditional distributed
computing.

Virtual Organizations within Grid


A Virtual Organization: It is a community of users who work or do research in a
particular domain of interest. Examples: physicists, chemists, biologists, computer
scientists, doctors and others.
Within a virtual organization we find personnel and resources working on a common
problem and are specialized in a specific domain.
Grid is one such environment which has different kinds of Virtual organizations. This
enables collaborative computing and interactive computing within the Grid
Environment.
A dynamic set of individuals and/or institutions defined around a set of resource-
sharing rules and conditions.
The collaborations involved in Grid computing lead to the emergence of multiple
organizations that function as one unit through the use of their shared competencies
and resources for the purpose of one or more identified goals.

Examples of Virtual Organization


An industrial consortium formed to develop a feasibility study for a next-generation
supersonic aircraft undertakes a highly accurate multi-disciplinary simulation of the
entire aircraft. This simulation integrates proprietary software components developed
by different participants, with each component operating on that participant's
computers and having access to appropriate design databases and other data made
available to the consortium by its members.

Topic-6: Utility Computing


Utility computing is a business model in which one company outsources part or all of
its computer support to another company. Support in this case doesn't just mean
technical advice; it includes everything from computer processing power to data
storage.
The principle of utility computing is very simple: One company pays another company
for computing services.
The services might include hardware rental, data storage space, use of specific
computer applications or access to computer processing power.
It all depends on what the client wants and what the utility computing company can
offer.
Many utility computing companies offer bundles or packages of resources. A
comprehensive package might include all of the following:
o Computer hardware, including servers, CPUs, monitors, input devices and
network cables.
o Internet access, including Web servers and browsing software.
o Software applications that run the entire gamut of computer programs. They
could include word processing programs, e-mail clients, project-specific
applications and everything in between. Industry experts call this particular kind
of business "Software as a Service" (SaaS).
o Access to the processing power of a supercomputer. Some corporations have
hefty computational requirements. For example, a financial company might need
to process rapidly-changing data gathered from the stock market. While a normal
computer might take hours to process complicated data, a supercomputer could
complete the same task much more quickly.
o The use of a grid computing system. A grid computing system is a network of
computers running special software called middleware. The middleware detects
idle CPU processing power and allows an application running on another
computer to take advantage of it. It's useful for large computational problems
that can be divided into smaller chunks.
o Off-site data storage, which is also called cloud storage. There are many
reasons a company might want to store data off-site. If the company processes a
lot of data, it might not have the physical space to hold the data servers it needs.
An off-site backup is also a good way to protect information in case of a
catastrophe. For example, if the company's building were demolished in a fire, its
data would still exist in another location.

Utility computing rates vary depending on the utility computing company and the
requested service.
Usually, companies charge clients based on service usage rather than a flat fee.
The more a client uses services, the more fees it must pay.
Some companies bundle services together at a reduced rate, essentially selling
computer services in bulk.
Utility computing is the packaging of computing resources, such as computation,
storage and services, as a metered service.
Utility Computing can support grid computing, which has the characteristic of very large
computations or sudden peaks in demand which are supported via a large number of
computers.
"Utility computing" has usually envisioned some form of virtualization so that the
amount of storage or computing power available is considerably larger than that of a
single time-sharing computer.
Multiple servers are used on the "back end" to make this possible. These might be a
dedicated computer cluster specifically built for the purpose of being rented out, or
even an underutilized supercomputer. The technique of running a single calculation on
multiple computers is known as distributed computing.
The term "grid computing" is often used to describe a particular form of distributed
computing, where the supporting nodes are geographically distributed or
cross administrative domains. To provide utility computing services, a company can
"bundle" the resources of members of the public for sale, who might be paid with a
portion of the revenue from clients.
This model has the advantage of a low or no initial cost to acquire computer
resources; instead, computational resources are essentially rented-turning what was
previously a need to purchase products (hardware, software and network bandwidth)
into a service.
This packaging of computing services became the foundation of the shift to "On
Demand" computing, Software as a Service and Cloud Computing models that further
propagated the idea of computing, application and network as a service.
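A small sketch of metered, usage-based charging is given below; the rates and the usage record are invented and do not reflect any real provider's pricing.

# Sketch of usage-based (metered) billing for utility computing.
RATES = {"cpu_hours": 0.05, "gb_storage_month": 0.02, "gb_transfer": 0.01}

def monthly_bill(usage):
    # Charge for exactly what was consumed, line by line.
    return sum(RATES[item] * amount for item, amount in usage.items())

usage = {"cpu_hours": 1200, "gb_storage_month": 500, "gb_transfer": 300}
print(f"Total charge: ${monthly_bill(usage):.2f}")   # 60 + 10 + 3 = $73.00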
Utility computing is not a new concept, but rather has quite a long history. Among the
earliest references is John McCarthy's 1961 remark: "If computers of the kind I have advocated become the computers
of the future, then computing may someday be organized as a public utility just as
the telephone system is a public utility... The computer utility could become the
basis of a new and important industry."

Cluster V/s Cloud


Cluster vs. Grid and Cloud Computing
1. Cluster differs from Cloud and Grid in that a cluster is a group of computers connected
by a local area network (LAN), whereas cloud and grid are more wide scale and can be
geographically distributed.
2. Another way to put it is to say that a cluster is tightly coupled, whereas a Grid or a
cloud is loosely coupled.
3. Clusters are made up of machines with similar hardware, whereas clouds and grids are
made up of machines with possibly very different hardware configurations.

Grid V/s Cloud


1. Resource distribution: Cloud computing is a centralized model whereas grid
computing is a decentralized model where the computation could occur over many
administrative domains.
2. Ownership: A grid is a collection of computers which is owned by multiple parties in
multiple locations and connected together so that users can share the combined power
of resources. Whereas a cloud is a collection of computers usually owned by a single
party.
3. Examples of Clouds: Amazon Web Services (AWS), Google App Engine & Examples of
Grids: FutureGrid
Grid computing vs. Cloud computing
What?
- Grid: Grids enable access to shared computing power and storage capacity from your desktop.
- Cloud: Clouds enable access to leased computing power and storage capacity from your desktop.
Who provides the service?
- Grid: Research institutes and universities federate their services around the world through projects such as EGI-InSPIRE and the European Grid Infrastructure.
- Cloud: Large individual companies, e.g. Amazon and Microsoft, and at a smaller scale, institutes and organisations deploying open source software such as Open Slate, Eucalyptus and Open Nebula.
Who uses the service?
- Grid: Research collaborations, called "Virtual Organisations", which bring together researchers around the world working in the same field.
- Cloud: Small to medium commercial businesses or researchers with generic IT needs.
Who pays for the service?
- Grid: Governments - providers and users are usually publicly funded research organisations, for example through National Grid Initiatives.
- Cloud: The cloud provider pays for the computing resources; the user pays to use them.
Where are the computing resources?
- Grid: In computing centres distributed across different sites, countries and continents.
- Cloud: The cloud provider's private data centres, which are often centralised in a few locations with excellent network connections and cheap electrical power.
Why use them?
- Grid: You don't need to buy or maintain your own large computer centre; you can complete more work more quickly and tackle more difficult problems; you can share data with your distributed team in a secure way.
- Cloud: You don't need to buy or maintain your own personal computer centre; you can quickly access extra resources during peak work periods.
What are they useful for?
- Grid: Grids were designed to handle large sets of limited-duration jobs that produce or use large quantities of data (e.g. the LHC and life sciences).
- Cloud: Clouds best support long-term services and longer-running jobs (e.g. facebook.com).
How do they work?
- Grid: Grids are an open source technology. Resource users and providers alike can understand and contribute to the management of their grid.
- Cloud: Clouds are a proprietary technology. Only the resource provider knows exactly how their cloud manages data, job queues, security requirements and so on.
Benefits?
- Grid: Collaboration: the grid offers a federated platform for distributed and collective work. Ownership: resource providers maintain ownership of the resources they contribute to the grid. Transparency: the technologies used are open source, encouraging trust and transparency. Resilience: grids are located at multiple sites, reducing the risk in case of a failure at one site that removes significant resources from the infrastructure.
- Cloud: Flexibility: users can quickly outsource peaks of activity without long-term commitment. Reliability: the provider has a financial incentive to guarantee service availability (Amazon, for example, can provide user rebates if availability drops below 99.9%). Ease of use: relatively quick and easy for non-expert users to get started, but setting up sophisticated virtual machines to support complex applications is more difficult.
Drawbacks?
- Grid: Reliability: grids rely on distributed services maintained by distributed staff, often resulting in inconsistency in reliability across individual sites, although the service itself is always available. Complexity: grids are complicated to build and use, and currently users require some level of expertise. Commercial: grids are generally only available for not-for-profit work, and for proof of concept in the commercial sphere.
- Cloud: Generality: clouds do not offer many of the specific high-level services currently provided by grid technology. Security: users with sensitive data may be reluctant to entrust it to external providers or to providers outside their borders. Opacity: the technologies used to guarantee reliability and safety of cloud operations are not made public. Rigidity: the cloud is generally located at a single site, which increases the risk of complete cloud failure. Provider lock-in: there's a risk of being locked in to services provided by a very small group of suppliers.
When?
- Grid: The concept of grids was proposed in 1995. The Open Science Grid (OSG) started in early 1995. The EDG (European Data Grid) project began in 2001.
- Cloud: In the late 1990s Oracle and EMC offered private cloud solutions. However, the term "cloud computing" didn't gain prominence until 2007.

Utility V/s Cloud


1. Utility computing involves the renting of computing resources such as hardware,
software and network bandwidth on an as-required, on-demand basis. These might be
a dedicated computer cluster specifically built for the purpose of being rented out, or
even an under-utilized supercomputer. Utility computing refers to the ability to meter
the offered services and charge customers for exact usage.
2. Utility computing can be implemented without cloud computing. Consider a
supercomputer that rents out processing time to multiple clients. This is an example of
utility computing as users pay for resources used. However, with only one location and
no virtualization of resources, it cannot be called cloud computing.
3. Utility computing is very often connected to cloud computing as it is one of the options
for its accounting. As explained in Cloud computing infrastructure, Utility computing is
a good choice for less resource demanding applications where peak usage is expected to
be sporadic and rare.
4. Still, Utility computing does not require Cloud computing and it can be done in any
server environment. Also, it is unreasonable to meter smaller usage and economically
inefficient when applied on a smaller scale. That is why it is most often applied on cloud
hosting where large resources are being managed.

Ubiquitous V/s Cloud


1. Cloud Computing doesn't put the computer everywhere but, instead, it gives access to it
everywhere.

Peer-to-peer versus Cloud


1. P2P is a distributed architecture without the need for central coordination; participants
can simultaneously be both suppliers and consumers of resources.
2. Peers are equally privileged and equally capable participants.