Cloud Computing_CS307_unit1
COURSE No : CS307
Semester : VI
Program Outcomes
PO1 Engineering knowledge: An ability to apply knowledge of mathematics (including probability, statistics and discrete mathematics), science, and engineering for solving engineering problems and modeling.
PO2 Problem analysis: An ability to design, simulate and conduct experiments, as well as to analyze and interpret data, including hardware and software components.
PO3 Design/development of solutions: An ability to design a complex system or process to meet desired specifications and needs.
PO4 Conduct investigations of complex problems: An ability to identify, formulate, comprehend, analyze and design synthesis of the information to solve complex engineering problems and provide valid conclusions.
PO5 Modern tool usage: An ability to use the techniques, skills and modern engineering tools necessary for engineering practice.
PO6 The engineer and society: An understanding of professional, health, safety, legal, cultural and social responsibilities.
PO7 Environment and sustainability: The broad education necessary to understand the impact of engineering solutions in a global, economic and environmental context, and to demonstrate the knowledge of, and need for, sustainable development.
PO8 Ethics: Apply ethical principles, responsibility and norms of engineering practice.
PO9 Individual and team work: An ability to function on multi-disciplinary teams.
PO10 Communication: An ability to communicate and present effectively.
PO11 Project management and finance: An ability to use modern engineering tools, techniques, skills and management principles to work as a member and leader in a team, and to manage projects in multi-disciplinary environments.
PO12 Life-long learning: A recognition of the need for, and an ability to engage in, lifelong learning to resolve contemporary issues.
Course No : CS307 Course Title : Cloud Computing
Semester : VI L T P C : 3-0-2-4
Lec. No. | Learning Objective | Topics to be covered | Text Book (Ch/Sec/Pg)
1-3 | Concepts of Distributed Computing | Introduction to distributed computing | T Ch.1.1, 1.2, 1.3 / R Ch.1
4-6 | Parallel vs Distributed computing | Parallel vs Distributed computing, Elements of parallel computing | T Ch.1.4 / R Ch.2.2, 2.3
19-24 | Virtual machines | Implementation levels of virtualization, Virtualization structures/tools and mechanisms | T Ch.5.1, 5.6 / R Ch.19, 20
25-26 | Virtualization of Clusters and Data centers | Virtualization of CPU, memory and I/O devices, Virtual clusters and resource management, Virtualization for data-center automation | T Ch.5 / R Ch.21
Evaluation Scheme:
Important Instructions
1. A minimum of 75% attendance is compulsory, failing which a student will not be allowed to appear in the comprehensive examination.
2. No MAKE UP will be granted without prior permission from the Instructors. Only in genuine cases will makeup be allowed, at the discretion of the Instructors.
Policy: It shall be the responsibility of individual students to attend all sessions and to take prescribed tests, examinations, etc. A student must normally maintain a minimum of 75% attendance, without which he/she shall be disqualified from appearing in the respective examinations.
COURSE OUTCOME EXPLANATION
CO1 : KNOWLEDGE Understand different Computing Paradigms and Virtualization
CO3 : APPLICATION Understand various service delivery models of a cloud computing architecture
CO4 : ANALYSIS Demonstrate the ways in which the cloud can be programmed and deployed
CO-PO MAPPING:
CO/PO PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
CO1 3 3 2
CO2 3 2 2 2
CO3 3 3 3 3 3
CO4 3 2
CO5 2 2 2
CO6 1
WEB SOURCE REFERENCES:
DELIVERY/INSTRUCTIONAL METHODOLOGIES:
√ BOARD   √ STUD. ASSIGNMENT   √ WEB RESOURCES
√ ICT ENABLED CLASSES   √ LCD   STUD. SEMINARS
ASSESSMENT METHODOLOGIES-DIRECT:
√ TESTS/COMPRE. EXAMS   √ ASSIGNMENT   MINI/MAJOR PROJECTS
ASSESSMENT METHODOLOGIES-INDIRECT:
STUDENT FEEDBACK ON FACULTY (TWICE)
ASSESSMENT OF COURSE OUTCOMES (BY FEEDBACK, ONCE)
CO/PO JUSTIFICATION:
CO1: Understand distributed computing, Parallel vs Distributed computing and Elements of parallel computing; ability to understand and learn distributed computing; ability to identify systems, tools and technologies in distributed computing.
CO2: Understand cloud computing and building the cloud computing environment; ability to understand and learn cloud computing; ability to identify Cloud computing platforms and technologies; an ability to use the techniques for Cloud computing platforms.
CO3: An ability to design Virtual machines and Virtualization of Clusters and Data centers; demonstrate the ability for the Implementation levels of virtualization and Virtualization structures/tools and mechanisms; ability to design and simulate Virtualization of CPU, memory and I/O devices; ability to design and configure Virtual clusters and resource management; an ability to use the techniques to implement Virtualization for data-center automation.
CO4: Ability to design and interpret Programming Enterprise Clouds using Aneka; an ability to comprehend and analyze Best Practices in Architecting Cloud Applications in the AWS cloud and Building Content Delivery networks using Clouds.
Course Content:
UNIT-I
Concepts of Distributed Computing: Introduction to distributed computing, Parallel vs
Distributed computing, Elements of parallel computing, Elements of distributed computing,
Service oriented computing.
UNIT-II
Concepts of Cloud Computing: About cloud computing, building cloud computing environment,
Cloud computing platforms and technologies, System models for distributed and cloud
computing.
UNIT-III
Virtual machines and Virtualization of Clusters and Data centers: Implementation levels of
virtualization, Virtualization structures/tools and mechanisms, Virtualization of CPU, memory
and I/O devices, Virtual clusters and resource management, Virtualization for data-center
automation.
UNIT-IV
Programming Enterprise Clouds using Aneka: Introduction, Aneka Architecture, Thread
Programming using Aneka, Task Programming: using Aneka, Map Reduce Programming using
Aneka. Monitoring, Management and Applications: An Architecture for Federated Cloud
Computing, SLA Management in Cloud Computing, Performance Prediction for HPC on
Clouds, Best Practices in Architecting Cloud Applications in the AWS cloud, Building Content
Delivery networks using Clouds, Resource Cloud Mashups.
UNIT- V
Cloud Applications & Security: Scientific Applications, Business and Consumer Applications, security aspects of cloud computing.
UNIT-I
Concepts of Distributed Computing:
1. Introduction to distributed computing
2. Parallel vs Distributed computing
3. Elements of parallel computing
4. Elements of distributed computing
5. Service oriented computing
1. Introduction to distributed computing
1. Hardware:
2. Software:
3. Networks:
4. Algorithms:
Serial Computing:
Serial computing refers to the process of executing a series of computational
tasks sequentially, one after the other, on a single processor.
In this approach, only one instruction is processed at a time, and tasks must
wait for the previous ones to complete before being executed.
Traditionally, software has been written for serial computation:
A problem is broken into a discrete series of instructions.
Instructions are executed sequentially one after the other using a single
processor.
Only one instruction may execute at any time.
1. Sequential Execution:
2. Single Processor:
4. Limited Performance:
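The serial model above can be sketched in a few lines of Python (the squaring task is just a stand-in for a unit of work):

```python
# Serial execution: one task at a time, each must finish before the next starts.
def task(n):
    return n * n          # stand-in for a unit of work

results = []
for n in [1, 2, 3, 4]:    # a single instruction stream on a single processor
    results.append(task(n))

print(results)  # [1, 4, 9, 16]
```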
Advantages of Serial Computing
1. Simplicity:
2. Predictable Behavior:
Parallel Computing:
Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem.
Parallel computing example of processing payroll
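A rough sketch of the payroll idea: each worker computes pay for one employee record at the same time. The records, rate and function names below are hypothetical, used only for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

employees = [("ann", 40), ("bob", 35), ("eve", 45)]  # (name, hours) - made-up data
RATE = 20                                            # hypothetical hourly rate

def compute_pay(record):
    name, hours = record
    return name, hours * RATE       # each record is processed independently

# The executor hands records to workers that run concurrently.
with ThreadPoolExecutor() as pool:
    payroll = dict(pool.map(compute_pay, employees))

print(payroll)  # {'ann': 800, 'bob': 700, 'eve': 900}
```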
Types of Parallelism
1. Bit-level Parallelism:
It is the form of parallel computing based on increasing the processor's word size.
It reduces the number of instructions that the system must execute in order to perform a task on large-sized data.
Operations on smaller units (bits) are performed simultaneously.
For example, suppose an 8-bit processor must perform an operation on 16-bit numbers. It must first operate on the 8 lower-order bits and then on the 8 higher-order bits, so two instructions are needed. A 16-bit processor can perform the same operation with a single instruction.
Use Case: Arithmetic operations in high-speed processing.
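The 8-bit vs 16-bit example can be simulated in Python. `add16_on_8bit` is a hypothetical helper that shows the two-step sequence an 8-bit processor would need:

```python
def add16_on_8bit(a, b):
    """Add two 16-bit numbers using only 8-bit operations (two steps)."""
    a_lo, a_hi = a & 0xFF, (a >> 8) & 0xFF
    b_lo, b_hi = b & 0xFF, (b >> 8) & 0xFF
    lo = a_lo + b_lo                      # step 1: low-order 8 bits
    carry = lo >> 8
    hi = (a_hi + b_hi + carry) & 0xFF     # step 2: high-order 8 bits plus carry
    return (hi << 8) | (lo & 0xFF)

print(add16_on_8bit(300, 500))  # 800; a 16-bit processor does this in one step
```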
2. Instruction-level parallelism:
Parallelism achieved by executing multiple instructions simultaneously
within a single CPU.
Overlapping low-level operations.
3. Task Parallelism:
Tasks or threads are divided among processors, where each processor
performs a different task.
Different processors handle separate tasks that may be dependent or
independent but work collaboratively towards the same goal.
Example: In a web server, one thread handles incoming HTTP requests while
another processes database queries.
Use Case: Multithreaded programming
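A minimal threading sketch of the web-server scenario above; both "jobs" are stand-ins for real request handling and database work:

```python
import threading

results = {}

def handle_requests():                       # job 1: pretend HTTP handling
    results["requests"] = sum(range(100))

def run_queries():                           # job 2: pretend database work
    results["queries"] = len("SELECT * FROM users")

# Two *different* tasks run in parallel, cooperating toward one goal.
t1 = threading.Thread(target=handle_requests)
t2 = threading.Thread(target=run_queries)
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(results.items()))  # [('queries', 19), ('requests', 4950)]
```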
4. Data-level parallelism:
The same operation is performed on multiple data elements
simultaneously.
Large datasets are divided into smaller chunks, and each processor
works on a chunk.
Example: Applying a filter to an image where each pixel is processed in
parallel.
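The image-filter example can be sketched with a thread pool applying the same operation to every element (the pixel list and `invert` function are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

pixels = [0, 64, 128, 255]     # a tiny stand-in for image data

def invert(p):                 # the *same* operation for every element
    return 255 - p

# The executor applies invert() to the elements concurrently and
# returns results in the original order.
with ThreadPoolExecutor(max_workers=4) as pool:
    inverted = list(pool.map(invert, pixels))

print(inverted)  # [255, 191, 127, 0]
```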
5. Pipeline parallelism:
A series of computations are divided into stages, where each stage is
executed in parallel by different processors.
Tasks are organized as a pipeline, with each stage working on a different
part of the data.
Example: In video encoding, one stage compresses frames while another stage
processes audio.
Use Case: Real-time data processing, assembly lines in manufacturing.
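The video-encoding pipeline can be sketched with two threads joined by a queue; "compress" and "encode" here are trivial stand-ins for the real stages:

```python
import queue
import threading

stage1_out = queue.Queue()     # hands work from stage 1 to stage 2
encoded = []

def compress(frames):          # stage 1 (stand-in for real compression)
    for f in frames:
        stage1_out.put(f * 2)
    stage1_out.put(None)       # sentinel: no more frames

def encode():                  # stage 2 runs while stage 1 is still producing
    while (item := stage1_out.get()) is not None:
        encoded.append(item + 1)

t1 = threading.Thread(target=compress, args=([1, 2, 3],))
t2 = threading.Thread(target=encode)
t1.start(); t2.start()
t1.join(); t2.join()

print(encoded)  # [3, 5, 7]
```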
Distributed Computing:
Distributed computing is a computational model where multiple computers
(often geographically dispersed) work together to solve a common problem.
It's like having a team of individuals each contributing their skills to complete
a complex project.
Types of Distributed Computing:
2. Peer-to-Peer (P2P) Computing
A decentralized model where nodes (peers) act as both clients and
servers, sharing resources directly with each other.
The peer-to-peer computing architecture contains nodes that are equal participants in data sharing. Tasks are divided equally among all the nodes, and the nodes interact with each other as required and share resources.
Each peer can initiate or fulfill requests, and there is no central
authority.
Example:
File-sharing networks like BitTorrent.
Blockchain and cryptocurrencies like Bitcoin.
3. 3-Tier Computing:
Divides the system into three layers:
1. Presentation Layer (Client): User interface.
2. Logic Layer (Application Server): Processes user requests.
3. Data Layer (Database Server): Stores and retrieves data.
Example:
Use Cases:
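The three tiers can be mimicked with three plain functions calling downward through the layers; all names and data below are hypothetical:

```python
DATABASE = {"alice": 120.0}               # data layer: stores and retrieves data

def data_layer_get_balance(user):
    return DATABASE[user]

def logic_layer_report(user):             # logic layer: processes the request
    balance = data_layer_get_balance(user)
    return f"{user}: ${balance:.2f}"

def presentation_layer(user):             # presentation layer: user interface
    return "Balance -> " + logic_layer_report(user)

print(presentation_layer("alice"))  # Balance -> alice: $120.00
```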
Imagine a bank with branches in different cities. Each branch has
customer account information, but all branches act as a single
system.
Use Cases:
Banking systems.
E-commerce inventory systems.
5. Cloud Computing:
Provides on-demand access to computing resources (e.g., storage, servers) over
the internet.
Example:
Use Cases:
2. Parallel vs Distributed Computing
1. Definition
2. Architecture
Communication:
Parallel computing: communication is often internal to the system and faster (e.g., shared memory or interconnects).
Distributed computing: communication occurs over a network and is often slower (e.g., message passing or RPC).
3. Task Execution
4. Scalability
5. Fault Tolerance
6. Use Cases
Applications:
Parallel computing: matrix computations, simulations, image processing; weather modeling, molecular simulations.
Distributed computing: cloud computing, web services, distributed databases; big data analytics, content delivery networks (CDNs).
Illustrative Example
3. Elements of Parallel Computing
Parallel computing involves dividing a task into smaller parts and executing
them simultaneously on multiple processors to solve problems faster.
The main elements of parallel computing are:
1. Tasks
2. Processes
3. Threads
4. Communication
5. Synchronization
6. Scalability
7. Speed and Efficiency
8. Load Balancing
9. Dependencies
1. Task:
A task is a unit of work or computation that can be executed
independently.
Tasks can be fine-grained (small and numerous) or coarse-grained (larger
and fewer).
They may be independent or dependent on other tasks.
Example:

2. Processes:
Example:
Scenario: Rendering a 3D animation.
Each process renders a different frame of the animation, and all processes
run in parallel.
3. Threads:
A thread is the smallest unit of execution within a process, sharing the
process's memory space.
Threads are used for lightweight, fine-grained parallelism within a single
process.
Example:
Scenario: A web browser.
One thread handles user interactions, another renders the web page, and
a third downloads resources—all simultaneously.
4. Communication:
The exchange of information between tasks or processes, often necessary
when tasks are interdependent.
Types:
Shared Memory: Tasks communicate by accessing shared variables.
Message Passing: Tasks exchange messages, typically in distributed
systems.
Example:
Scenario: A team working on a shared document.
Team members (tasks) communicate by editing (shared memory) or
sending updates to others (message passing).
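Both communication styles can be shown in one small sketch: the threads below share a plain list (shared memory) and also pass a message through a queue (message passing). Names and values are made up for illustration.

```python
import queue
import threading

shared = []               # shared memory: both threads see this list
mailbox = queue.Queue()   # message passing: explicit send/receive
received = []

def producer():
    shared.append("edit")     # communicate via shared state
    mailbox.put("update")     # communicate via a message

def consumer():
    received.append(mailbox.get())   # blocks until a message arrives

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(shared, received)  # ['edit'] ['update']
```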
5. Synchronization:
Ensures that tasks are executed in the correct order when they depend on
each other.
Prevents race conditions and ensures data consistency.
Example:
Scenario: A bank's transaction system.
If one task withdraws money and another checks the balance,
synchronization ensures the balance reflects the withdrawal.
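The bank scenario maps directly onto a lock: with the lock held, the read-check-write on the balance happens as one unit, so no withdrawal is lost. This is a minimal sketch; the amounts are arbitrary.

```python
import threading

balance = 100
lock = threading.Lock()

def withdraw(amount):
    global balance
    with lock:                    # one thread at a time touches `balance`
        if balance >= amount:
            balance -= amount     # read-check-write happens atomically

threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 50: every withdrawal is accounted for
```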
6. Scalability:
The ability of a parallel computing system to handle increasing workloads
by adding more processors or resources.
Types:
Strong Scalability: Speedup is achieved without increasing the workload.
Weak Scalability: Speedup is achieved by increasing both workload and
resources.
Example:
Scenario: Weather simulation.
Adding more processors allows the simulation to process larger areas or
finer details.
7. Speed and Efficiency:
Example:
Scenario: Processing a dataset.
If a task takes 10 hours in serial and 2 hours in parallel with 5 processors:
Speedup = 10 / 2 = 5
Efficiency = Speedup / Number of Processors = 5 / 5 = 1 (100% efficiency)
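The arithmetic above can be wrapped in two small helper functions (the function names are ours, not from any library):

```python
def speedup(serial_time, parallel_time):
    """How many times faster the parallel run is."""
    return serial_time / parallel_time

def efficiency(serial_time, parallel_time, n_processors):
    """Fraction of ideal speedup actually achieved per processor."""
    return speedup(serial_time, parallel_time) / n_processors

print(speedup(10, 2))        # 5.0
print(efficiency(10, 2, 5))  # 1.0, i.e. 100% efficiency
```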
8. Load Balancing:
Distributing tasks evenly among processors to ensure no processor is idle
or overloaded.
Ensures maximum resource utilization and minimizes execution time.
Example:
Scenario: A web server.
Incoming user requests are distributed among multiple servers to ensure
balanced workloads.
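One simple load-balancing policy is round-robin, sketched below with a hypothetical pool of three servers: requests are dealt out in rotation so each server ends up with an equal share.

```python
servers = ["s1", "s2", "s3"]                 # hypothetical server pool
assignments = {s: [] for s in servers}

for i in range(9):                           # nine incoming requests
    target = servers[i % len(servers)]       # round-robin: rotate through servers
    assignments[target].append(i)

counts = {s: len(reqs) for s, reqs in assignments.items()}
print(counts)  # {'s1': 3, 's2': 3, 's3': 3}: evenly balanced
```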
9. Dependencies:
Some tasks depend on the results of other tasks and cannot start until
those tasks are complete.
Types:
Data Dependency: One task needs data produced by another.
Control Dependency: The execution order depends on certain conditions.
Example:
Scenario: Cooking a meal.
You can’t cook rice until you’ve boiled the water (data dependency).
4. Elements of Distributed Computing
Distributed computing involves the coordination and cooperation of multiple
autonomous systems to achieve a common goal.
1. Nodes
2. Processes
3. Communication
4. Coordination
5. Synchronization
6. Scalability
7. Fault Tolerance
8. Transparency
9. Resource Sharing
10. Concurrency
11. Latency
12. Security
1. Nodes
Example:
2. Processes
Example:
3. Communication
Example:
4. Coordination
Ensures that tasks across nodes are properly organized and executed in
the right sequence.
Role: Maintains order and consistency in a distributed system.
Example:
Multiple users edit the same document while the system
coordinates changes to prevent conflicts.
5. Synchronization
Example:
6. Scalability
Example:
7. Fault Tolerance
Example:
8. Transparency
Example:
9. Resource Sharing
Example:
10. Concurrency
Example:
11. Latency
Example:
12. Security
Example:
5. Service Oriented Computing
Service-Oriented Computing is a distributed computing paradigm where
functionality is provided as services, typically over a network.
1. Services:
2. Loose Coupling:
3. Interoperability:
4. Reusability:
5. Discoverability:
6. Statelessness:
1. Service Provider:
2. Service Registry:
Enables service discovery by clients.
3. Service Consumer:
Uses the service by locating it in the registry and invoking it via its interface.
4. Middleware:
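The provider/registry/consumer roles can be sketched with a plain dictionary standing in for the registry; all names here are hypothetical:

```python
registry = {}                                  # service registry

def publish(name, service):                    # service provider registers
    registry[name] = service

def discover(name):                            # service consumer looks up
    return registry[name]

publish("greet", lambda who: f"hello, {who}")  # provider publishes a service
greet = discover("greet")                      # consumer locates it...
print(greet("cloud"))                          # ...and invokes it via its interface
```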
1. Flexibility:
2. Scalability:
3. Cost Efficiency:
4. Interoperability:
5. Faster Development:
Applications of Service-Oriented Computing
1. E-Commerce:
2. Healthcare:
3. Banking:
UNIT-II