
COURSE : CLOUD COMPUTING

COURSE No : CS307
Semester : VI

Instructor In-charge : Dr. D Srinivasa Rao

Instructor(s) : Mr. K. Vara Prasad Rao

Academic Year : 2024-2025

Program Outcomes
PO1 Engineering knowledge: An ability to apply knowledge of mathematics (including probability, statistics and discrete mathematics), science, and engineering for solving engineering problems and modeling.
PO2 Problem analysis: An ability to design, simulate and conduct experiments, as well as to analyze and interpret data, including hardware and software components.
PO3 Design/development of solutions: An ability to design a complex system or process to meet desired specifications and needs.
PO4 Conduct investigations of complex problems: An ability to identify, formulate, comprehend, analyze and design synthesis of the information to solve complex engineering problems and provide valid conclusions.
PO5 Modern tool usage: An ability to use the techniques, skills and modern engineering tools necessary for engineering practice.
PO6 The engineer and society: An understanding of professional, health, safety, legal, cultural and social responsibilities.
PO7 Environment and sustainability: The broad education necessary to understand the impact of engineering solutions in a global, economic and environmental context, and to demonstrate the knowledge of and need for sustainable development.
PO8 Ethics: Apply ethical principles, responsibility and the norms of engineering practice.
PO9 Individual and team work: An ability to function on multi-disciplinary teams.
PO10 Communication: An ability to communicate and present effectively.
PO11 Project management and finance: An ability to use modern engineering tools, techniques, skills and management principles to work as a member and leader of a team, and to manage projects in multi-disciplinary environments.
PO12 Life-long learning: A recognition of the need for, and an ability to engage in, lifelong learning to resolve contemporary issues.

Course No : CS307 Course Title : Cloud Computing
Semester : VI L T P C : 3-0-2-4

Instructor In-charge : Dr. D Srinivasa Rao


Instructor(s) : Mr. Vara Prasad Rao
Course Objectives:
1. Understand the underlying infrastructure and architecture of clouds, techniques for enabling services and
the quality of such services.
2. Analyze various levels of services that can be achieved by cloud computing.
3. Understand the programming aspects of cloud computing using different tools and techniques.
4. Identify research related issues of cloud computing in performance, security and management.

Text book
T: Cloud Computing: Principles and Paradigms, Rajkumar Buyya, James Broberg and Andrzej M. Goscinski, Wiley, 1st Edition, 2013.

Reference book(s)
R1: Cloud Computing: A Practical Approach, Anthony T. Velte, Toby J. Velte, Robert Elsenpeter, Tata McGraw Hill, 1st Edition, 2017.
R2: Enterprise Cloud Computing, Gautam Shroff, Cambridge University Press, 1st Edition, 2010.
R3: Cloud Computing: Implementation, Management and Security, John W. Rittinghouse, James F. Ransome, CRC Press, 1st Edition, 2009.

Lecture-wise plan (Syllabus):

Lec. No. | Learning Objective | Topics to be covered | Ch/Sec (T: Text Book, R: Reference)
1-3 | Concepts of Distributed Computing | Introduction to distributed computing | T Ch.1.1, 1.2, 1.3 / R Ch.1
4-6 | Parallel vs Distributed computing | Parallel vs Distributed computing, Elements of parallel computing | T Ch.1.4 / R Ch.2.2, 2.3
7-10 | Elements of distributed computing | Elements of distributed computing, Service oriented computing | T Ch.1.4, 2.2 / R Ch.2.4, 7
11-14 | Concepts of Cloud Computing | About cloud computing, Building cloud computing environment | T Ch.2.1, 2.5 / R Ch.3, 4, 5, 6
15-18 | Cloud computing platforms and technologies | Cloud computing platforms and technologies, System models for distributed and cloud computing | T Ch.2.1-2.5 / R Ch.5, 6, 7, 8
19-24 | Virtual machines | Implementation levels of virtualization, Virtualization structures/tools and mechanisms | T Ch.5.1, 5.6 / R Ch.19, 20
25-26 | Virtualization of Clusters and Data centers | Virtualization of CPU, memory and I/O devices, Virtual clusters and resource management, Virtualization for data-center automation | T Ch.5 / R Ch.21
27-30 | Programming Enterprise Clouds using Aneka | Introduction, Aneka Architecture, Thread Programming using Aneka | T Ch.6.4, 6.5 / R Ch.23.2, 23.3
31-33 | Task Programming | Task Programming using Aneka, MapReduce Programming using Aneka, An Architecture for Federated Cloud Computing, SLA Management in Cloud Computing | T Ch.8.1, 8.2, 8.3 / R Ch.30
34-38 | Monitoring, Management and Applications | Performance Prediction for HPC on Clouds, Best Practices in Architecting Cloud Applications in the AWS cloud, Building Content Delivery Networks using Clouds, Resource Cloud Mashups | T Ch.7.1, 7.2 / R Ch.26.2
39-42 | Cloud Applications & Security | Scientific Applications, Business and Consumer Applications | T Ch.7.1, 7.2 / R Ch.26.2
43-45 | Cloud Applications & Security | Security aspects of cloud computing | T Ch.7.1 / R Ch.26.2

Evaluation Scheme:
Important Instructions
1. A minimum of 75% attendance is compulsory, failing which a student will not be allowed to
appear in the comprehensive examination.
2. No make-up will be granted without prior permission from the Instructors.
Only in genuine cases will a make-up be allowed, at the discretion of the Instructors.
Policy: It shall be the responsibility of individual students to attend all sessions and to take prescribed tests,
examinations, etc. A student must normally maintain a minimum of 75% attendance, without which he/she
shall be disqualified from appearing in the respective examinations.

Instructor In-charge: Dr. D. Srinivasa Rao

Theory Course Faculty: 1. Dr. D.Srinivasa Rao


2. Mr. Vara Prasad Rao

COURSE OUTCOME EXPLANATION
CO1 : KNOWLEDGE Understand different Computing Paradigms and Virtualization

CO2 : COMPREHENSION Learn the fundamentals of Cloud Computing

CO3 : APPLICATION Understand various service delivery models of a cloud computing architecture

CO4 : ANALYSIS Demonstrate the ways in which the cloud can be programmed and deployed

CO5 : SYNTHESIS Identify applications that can be deployed in a cloud environment

CO-PO MAPPING:
CO/PO PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
CO1 3 3 2
CO2 3 2 2 2
CO3 3 3 3 3 3
CO4 3 2
CO5 2 2 2
CO6 1

The various correlation levels are:


● “1” – Slight (Low) Correlation
● “2” – Moderate (Medium) Correlation
● “3” – Substantial (High) Correlation
● “-” indicates there is no correlation.

GAPS IN THE SYLLABUS - TO MEET INDUSTRY/PROFESSION REQUIREMENTS:

SNO DESCRIPTION PROPOSED ACTIONS


1 | Cloud computing in the real world | Students are encouraged to study various real-time cloud applications such as Kubernetes deployment and scaling of containerized applications.
2 | Case study of containerized applications | Students are motivated to learn the configuration and set-up of containerized applications.

TOPICS BEYOND SYLLABUS/ADVANCED TOPICS/DESIGN:

S.No. | ADVANCED TOPICS
1 | Enterprise load balancing and tracking resource allocation
2 | Managing and scaling AI and machine learning workloads

WEB SOURCE REFERENCES:

S.No. Web links


1 https://onlinecourses.nptel.ac.in/noc25_cs11/preview
2 https://professional.mit.edu/course-catalog/cloud-devops-continuous-transformation
3 https://web.stanford.edu/class/cs349d/

DELIVERY/INSTRUCTIONAL METHODOLOGIES:

BOARD (√) | STUD. ASSIGNMENT (√) | WEB RESOURCES
ICT ENABLED CLASSES (√) | LCD (√) | STUD. SEMINARS

ASSESSMENT METHODOLOGIES-DIRECT:

TESTS/COMPRE. EXAMS (√) | ASSIGNMENT (√) | MINI/MAJOR PROJECTS
STUD. LAB PRACTICES (√) | STUD. VIVA | STUD. SEMINARS (√)

ASSESSMENT METHODOLOGIES-INDIRECT:

STUDENT FEEDBACK ON FACULTY (TWICE) | ASSESSMENT OF COURSE OUTCOMES (BY FEEDBACK, ONCE)
ASSESSMENT OF MINI/MAJOR PROJECTS | OTHERS

CO-PO JUSTIFICATION:

CO1: Understand distributed computing, parallel vs distributed computing and the elements of parallel computing; ability to understand and learn distributed computing; ability to identify systems, tools and technologies in distributed computing.

CO2: Understand cloud computing and building a cloud computing environment; ability to understand and learn cloud computing; ability to identify cloud computing platforms and technologies; an ability to use the techniques for cloud computing platforms.

CO3: An ability to design virtual machines and virtualization of clusters and data centers; demonstrate the ability to design the implementation levels of virtualization and virtualization structures/tools and mechanisms; ability to design and simulate virtualization of CPU, memory and I/O devices; ability to design and configure virtual clusters and resource management; an ability to use the techniques to implement virtualization for data-center automation.

CO4: Ability to design and interpret programming of enterprise clouds using Aneka; an ability to comprehend and analyze best practices in architecting cloud applications in the AWS cloud and building content delivery networks using clouds.

CO5: Understanding social responsibilities using security mechanisms and other real-life cloud applications, security and scientific applications; applying ethical principles using quality of service with security features; lifelong learning of security threats in business and consumer applications, the security aspects of cloud computing and their solutions.

Course Content:
UNIT-I
Concepts of Distributed Computing: Introduction to distributed computing, Parallel vs
Distributed computing, Elements of parallel computing, Elements of distributed computing,
Service oriented computing.

UNIT-II
Concepts of Cloud Computing: About cloud computing, building cloud computing environment,
Cloud computing platforms and technologies, System models for distributed and cloud
computing.
UNIT-III
Virtual machines and Virtualization of Clusters and Data centers: Implementation levels of
virtualization, Virtualization structures/tools and mechanisms, Virtualization of CPU, memory
and I/O devices, Virtual clusters and resource management, Virtualization for data-center
automation.

UNIT-IV
Programming Enterprise Clouds using Aneka: Introduction, Aneka Architecture, Thread
Programming using Aneka, Task Programming: using Aneka, Map Reduce Programming using
Aneka. Monitoring, Management and Applications: An Architecture for Federated Cloud
Computing, SLA Management in Cloud Computing, Performance Prediction for HPC on
Clouds, Best Practices in Architecting Cloud Applications in the AWS cloud, Building Content
Delivery networks using Clouds, Resource Cloud Mashups.

UNIT- V
Cloud Applications &Security: Scientific Applications, Business and Consumer Applications,
security aspect of cloud computing.

UNIT-I
Concepts of Distributed Computing:
1. Introduction to distributed computing
2. Parallel vs Distributed computing
3. Elements of parallel computing
4. Elements of distributed computing
5. Service oriented computing

1. Introduction to distributed computing

Computing refers to the process of using computers or computational devices to perform calculations, process data, solve problems, or execute tasks.
It involves a combination of hardware (physical components) and software (programs and algorithms) working together to manipulate information.

Core Components of Computing

1. Hardware:

 Physical devices like processors, memory, and storage.


 Example: CPUs, GPUs, hard drives.

2. Software:

 Programs and applications that perform specific tasks.


 Example: Operating systems, web browsers, and games.

3. Networks:

 Systems that connect devices for communication.


 Example: The internet or local area networks (LAN).

4. Algorithms:

 Step-by-step instructions to perform tasks.


 Example: Sorting numbers in ascending order.

Serial Computing:
Serial computing refers to the process of executing a series of computational
tasks sequentially, one after the other, on a single processor.
In this approach, only one instruction is processed at a time, and tasks must
wait for the previous ones to complete before being executed.
Traditionally, software has been written for serial computation:
 A problem is broken into a discrete series of instructions.

 Instructions are executed sequentially one after the other using a single
processor.
 Only one instruction may execute at any time.

Characteristics of Serial Computing

1. Sequential Execution:

 Instructions are executed one at a time in the order they are


written.
 No overlap or parallelism in task execution.

2. Single Processor:

 Utilizes a single Central Processing Unit (CPU) to perform


computations.

3. Simple Programming Model:

 Easier to design, debug, and maintain compared to parallel


systems.

4. Limited Performance:

 Speed is constrained by the processing power of the single CPU.


 Not suitable for large-scale computations requiring high
performance.

Advantages of Serial Computing

1. Simplicity:

 Straightforward implementation with minimal complexity.

2. Predictable Behavior:

 Easier to predict execution time as tasks are performed one at a


time.

Parallel Computing:
Parallel computing is the simultaneous use of multiple compute resources to
solve a computational problem:

 A problem is broken into discrete parts that can be solved concurrently


 Each part is further broken down to a series of instructions
 Instructions from each part execute simultaneously on different
processors
 An overall control/coordination mechanism is employed

(Figure: parallel computing illustrated with a payroll-processing example.)

The computational problem should be able to:

 Be broken apart into discrete pieces of work that can be solved


simultaneously;
 Execute multiple program instructions at any moment in time;
 Be solved in less time with multiple compute resources than with a
single compute resource.

The compute resources are typically:

 A single computer with multiple processors/cores


 An arbitrary number of such computers connected by a network
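
To make the contrast with serial computing concrete, here is a minimal Python sketch (standard library only) that sums a list of numbers first serially on one processor and then in parallel with a pool of worker processes; the data size and the choice of four workers are illustrative, not taken from the text.

```python
# Illustrative sketch: the same summation done serially and in parallel.
# Assumes a multi-core machine; data size and worker count are arbitrary choices.
from multiprocessing import Pool

def partial_sum(chunk):
    """Add up one chunk of numbers (one discrete piece of the problem)."""
    return sum(chunk)

if __name__ == "__main__":
    numbers = list(range(1_000_000))

    # Serial computing: one processor walks through all instructions in order.
    serial_total = sum(numbers)

    # Parallel computing: break the problem into 4 discrete parts,
    # solve each part on a separate worker process, then combine the results.
    chunks = [numbers[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        parallel_total = sum(pool.map(partial_sum, chunks))

    assert serial_total == parallel_total
```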

Comparison with Parallel Computing

Aspect | Serial Computing | Parallel Computing
Execution | Tasks run one after the other. | Tasks run simultaneously.
Resource Usage | Uses a single processor or core. | Utilizes multiple processors or cores.
Performance | Limited to CPU speed. | Scales with the number of processors.
Complexity | Easier to implement and debug. | Requires synchronization and coordination.

Types of Parallel Computing:


Parallel computing refers to the simultaneous execution of multiple tasks or
computations to solve problems faster.
The classification of parallel computing is based on how tasks and data are
divided and executed across multiple processors.
Below are the main types:
1. Bit-level parallelism
2. Instruction-level parallelism
3. Task Parallelism
4. Data-level parallelism
5. Pipeline parallelism

1. Bit-level Parallelism:
 A form of parallel computing based on increasing the processor's word size.
 It reduces the number of instructions the system must execute to perform a task on large-sized data.
 Operations on smaller units (bits) are performed simultaneously.
 For example, suppose an 8-bit processor must perform an operation on 16-bit numbers. It must first operate on the 8 lower-order bits and then on the 8 higher-order bits, so two instructions are needed. A 16-bit processor can perform the same operation with a single instruction.

Use Case: Arithmetic operations in high-speed processing.
2. Instruction-level parallelism:
 Parallelism achieved by executing multiple instructions simultaneously
within a single CPU.
 Overlapping low-level operations.

Example: A processor executing multiple instructions (e.g., addition and


multiplication) in parallel during one clock cycle.
Use Case: High-performance computing and general-purpose processors.

3. Task Parallelism:
 Tasks or threads are divided among processors, where each processor
performs a different task.
 Different processors handle separate tasks that may be dependent or
independent but work collaboratively towards the same goal.

Example: In a web server, one thread handles incoming HTTP requests while
another processes database queries.
Use Case: Multithreaded programming
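
A minimal sketch of task parallelism in Python, loosely mirroring the web-server example above: two threads carry out different jobs at the same time (the task bodies are placeholders standing in for request handling and query processing).

```python
# Task parallelism sketch: different workers run *different* tasks concurrently.
import threading
import time

def handle_requests():
    # Placeholder for one task, e.g. serving incoming HTTP requests.
    time.sleep(0.5)
    print("request handler finished")

def process_queries():
    # Placeholder for a different task, e.g. running database queries.
    time.sleep(0.5)
    print("query processor finished")

t1 = threading.Thread(target=handle_requests)
t2 = threading.Thread(target=process_queries)
t1.start(); t2.start()   # both tasks run concurrently
t1.join(); t2.join()
```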

4. Data-level parallelism:
 The same operation is performed on multiple data elements
simultaneously.
 Large datasets are divided into smaller chunks, and each processor
works on a chunk.
Example: Applying a filter to an image where each pixel is processed in
parallel.

Use Case: Image processing and large-scale data analytics.
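
A minimal data-parallel sketch in Python: the same operation (a made-up brightness adjustment) is applied to every element of a dataset, with the elements split across a pool of worker processes by the pool itself.

```python
# Data-level parallelism sketch: the SAME operation on many data elements at once.
from multiprocessing import Pool

def brighten(pixel):
    """Apply one simple operation to one data element (a pixel value)."""
    return min(pixel + 40, 255)

if __name__ == "__main__":
    image = list(range(256)) * 1000              # stand-in for image pixel data
    with Pool(processes=4) as pool:
        brightened = pool.map(brighten, image)   # the pool splits the data into chunks
    print(brightened[:5])
```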

5. Pipeline parallelism:
 A series of computations are divided into stages, where each stage is
executed in parallel by different processors.

 Tasks are organized as a pipeline, with each stage working on a different
part of the data.

Example: In video encoding, one stage compresses frames while another stage
processes audio.
Use Case: Real-time data processing, assembly lines in manufacturing.
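
A small Python sketch of a two-stage pipeline connected by a queue, loosely following the video-encoding example: while the second stage works on items already produced, the first stage keeps producing new ones (stage names and data are invented for illustration).

```python
# Pipeline parallelism sketch: two stages work on different items at the same time.
import queue
import threading

frames = range(5)
stage1_out = queue.Queue()

def compress_stage():
    # Stage 1: produce compressed frames and hand them to the next stage.
    for frame in frames:
        stage1_out.put(f"compressed({frame})")
    stage1_out.put(None)                 # sentinel: no more work

def mux_stage():
    # Stage 2: consume stage-1 output while stage 1 is still producing.
    while True:
        item = stage1_out.get()
        if item is None:
            break
        print("muxed", item)

t1 = threading.Thread(target=compress_stage)
t2 = threading.Thread(target=mux_stage)
t1.start(); t2.start()
t1.join(); t2.join()
```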

Distributed Computing:
Distributed computing is a computational model where multiple computers
(often geographically dispersed) work together to solve a common problem.
It's like having a team of individuals each contributing their skills to complete
a complex project.

(Figure: multiple computers working together to solve a common problem.)

Types of Distributed Computing:

Distributed computing refers to a system where multiple autonomous


computers or nodes work together to achieve a common goal. These systems
share resources, collaborate on tasks, and appear as a single cohesive system
to users. Based on the architecture and application, distributed computing can
be categorized into several types:

1. Client Server Computing


2. Peer-to-Peer (P2P) Computing
3. 3-Tier Computing
4. Distributed Databases
5. Cloud Computing

1. Client Server Computing:


 The server hosts resources and services, while clients access these
resources by sending requests over a network.
 A model where clients request services, and servers provide them.
 The client server computing works with a system of request and
response. The client sends a request to the server and the server
responds with the desired information.
 The client and server should follow a common communication protocol
so they can easily interact with each other. All the communication
protocols are available at the application layer.
 A server can only accommodate a limited number of client requests at a time, so it uses a priority-based system to respond to the requests.

An example of a client-server computing system is a web server: it returns web pages to the clients that requested them.

Use Case: Web applications, online banking, and database management


systems.
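
The request-response pattern can be sketched with Python's standard-library HTTP modules: a tiny server returns a plain-text page and a client requests it over the network (the port number and message are arbitrary choices).

```python
# Client-server sketch: the client sends a request, the server responds.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Server side: respond to the client's request with a small page.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from the server")

server = HTTPServer(("localhost", 8080), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: send a request over the network and read the response.
with urllib.request.urlopen("http://localhost:8080/") as resp:
    print(resp.read().decode())

server.shutdown()
```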

2. Peer-to-Peer (P2P) Computing
 A decentralized model where nodes (peers) act as both clients and
servers, sharing resources directly with each other.
 The peer-to-peer computing architecture contains nodes that are equal participants in data sharing. All tasks are divided equally among the nodes, which interact with each other as required and share resources.
 Each peer can initiate or fulfill requests, and there is no central
authority.
Example:
 File-sharing networks like BitTorrent.
 Blockchain and cryptocurrencies like Bitcoin.

Use Case: File sharing, distributed ledgers, and collaborative platforms.

3. 3-Tier Computing:
Divides the system into three layers:
1. Presentation Layer (Client): User interface.
2. Logic Layer (Application Server): Processes user requests.
3. Data Layer (Database Server): Stores and retrieves data.

Example:

 Ordering a pizza online:

 The website (client) shows the menu.


 The order system (logic layer) processes your selection.
 The database (data layer) tracks available ingredients.

Use Cases:

 Web-based business applications.
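
A toy Python sketch of the three tiers as separate functions, following the pizza-ordering example; in a real deployment each layer would run on its own machine, and the function names and inventory data here are invented for illustration.

```python
# 3-tier sketch: presentation, logic, and data layers kept separate.
INVENTORY = {"margherita": 5, "pepperoni": 0}          # data layer: storage

def data_layer_get_stock(item):
    # Data layer: stores and retrieves data.
    return INVENTORY.get(item, 0)

def logic_layer_place_order(item):
    # Logic layer: processes the user's request using data-layer results.
    return "order accepted" if data_layer_get_stock(item) > 0 else "out of stock"

def presentation_layer(item):
    # Presentation layer: the user-facing interface.
    print(f"{item}: {logic_layer_place_order(item)}")

presentation_layer("margherita")   # order accepted
presentation_layer("pepperoni")    # out of stock
```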


4. Distributed Databases:

Data is distributed across multiple nodes but appears to users as a single


database.
Example:

 Imagine a bank with branches in different cities. Each branch has
customer account information, but all branches act as a single
system.

Use Cases:

 Banking systems.
 E-commerce inventory systems.

5. Cloud Computing:
Provides on-demand access to computing resources (e.g., storage, servers) over
the internet.
Example:

 Renting a storage unit instead of building one at home.

Use Cases:

 Scalable application hosting.


 On-demand data storage.

2. Parallel Vs Distributed Computing

Aspect | Parallel Computing | Distributed Computing
Definition | Simultaneous execution of multiple tasks on multiple processors within a single system. | Collaboration of multiple autonomous systems to achieve a common goal.
Architecture | Shared memory architecture (processors share memory and resources). | Distributed memory architecture (each node has its own memory and resources).
Location of Resources | Processors are usually in the same physical location, such as a single machine. | Systems are often geographically distributed and connected via a network.
Goal | To increase speed and efficiency by breaking tasks into smaller, simultaneous parts. | To achieve scalability, fault tolerance, and resource sharing across systems.
Communication | Processors communicate via shared memory. | Systems communicate over a network using protocols like HTTP, RPC, or MPI.
Scalability | Limited by the number of processors and shared memory. | Highly scalable, as nodes can be added to the network.
Fault Tolerance | Lower fault tolerance, as the failure of one processor can impact the system. | High fault tolerance, since nodes are independent and redundancy can be applied.
Examples | Multi-core processors executing parallel threads, GPUs performing parallel computations. | Distributed databases, cloud computing platforms, and blockchain networks.

1. Definition

Definition | Parallel: Simultaneous execution of multiple tasks on a single system with shared resources. | Distributed: Execution of tasks across multiple independent systems connected by a network.
Focus | Parallel: Speedup of computation by dividing tasks among multiple cores/processors in the same system. | Distributed: Collaboration between systems to solve a problem by sharing resources and tasks.

2. Architecture

System Setup | Parallel: Typically uses a single system with multiple processors or cores (e.g., multi-core CPU, GPU). | Distributed: Consists of multiple autonomous systems (e.g., computers, servers, or virtual machines).
Resource Sharing | Parallel: Processors share memory and storage (shared memory). | Distributed: Systems have their own memory and storage (distributed memory).
Communication | Parallel: Communication is often internal to the system and faster (e.g., shared memory or interconnects). | Distributed: Communication occurs over a network and is often slower (e.g., message passing or RPC).

3. Task Execution

Task Division | Parallel: Tasks are divided into smaller sub-tasks that run simultaneously on multiple cores/processors. | Distributed: Tasks are distributed across multiple systems, often working independently or collaboratively.
Synchronization | Parallel: High synchronization is required between threads or processes. | Distributed: Synchronization is lower but can involve coordination between systems.

4. Scalability

Scalability | Parallel: Limited by the number of processors/cores in a single system. | Distributed: Highly scalable, as additional systems can be added to the network.

5. Fault Tolerance

Fault Tolerance | Parallel: Less fault-tolerant; failure in a core can impact the entire computation. | Distributed: More fault-tolerant; failure of one system may not disrupt the entire computation.

6. Use Cases

Applications | Parallel: Matrix computations, simulations, image processing, weather modeling, molecular simulations. | Distributed: Cloud computing, web services, distributed databases, big data analytics, content delivery networks (CDNs).
Examples | Parallel: A GPU performing real-time rendering for gaming; parallelized matrix multiplication. | Distributed: Google's search engine crawling and indexing using distributed systems; Netflix's content delivery network.

Illustrative Example

Parallel: Scenario - adding two arrays of numbers. Approach - split the arrays into equal parts and assign each part to a different core of a multi-core processor.
Distributed: Scenario - a search engine processing user queries. Approach - distribute queries across servers located in different regions, each processing a portion of the data.

3. Elements of Parallel Computing

Parallel computing involves dividing a task into smaller parts and executing
them simultaneously on multiple processors to solve problems faster.
The main elements of parallel computing are:
1. Tasks
2. Processes
3. Threads
4. Communication
5. Synchronization
6. Scalability
7. Speedup and Efficiency
8. Load Balancing
9. Dependencies
1. Task:
 A task is a unit of work or computation that can be executed
independently.
 Tasks can be fine-grained (small and numerous) or coarse-grained (larger
and fewer).
 They may be independent or dependent on other tasks.

Example:

Adding two arrays of numbers, A and B, to produce a result array C.

 Task 1: Add elements A[0] and B[0] to get C[0].


 Task 2: Add elements A[1] and B[1] to get C[1].
 Tasks are executed in parallel for faster computation.
2. Processes:
 A process is an instance of a program in execution.
 Processes are independent, with separate memory spaces.
 Processes handle tasks in parallel, often running on separate processors
or cores.

Example:
 Scenario: Rendering a 3D animation.
 Each process renders a different frame of the animation, and all processes
run in parallel.

3. Threads:
 A thread is the smallest unit of execution within a process, sharing the
process's memory space.
 Threads are used for lightweight, fine-grained parallelism within a single
process.

Example:
Scenario: A web browser.
 One thread handles user interactions, another renders the web page, and
a third downloads resources—all simultaneously.

4. Communication:
 The exchange of information between tasks or processes, often necessary
when tasks are interdependent.

Types:
 Shared Memory: Tasks communicate by accessing shared variables.
 Message Passing: Tasks exchange messages, typically in distributed
systems.

Example:
Scenario: A team working on a shared document.
 Team members (tasks) communicate by editing (shared memory) or
sending updates to others (message passing).
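
The two communication styles can be sketched in Python with the multiprocessing module: one worker communicates by writing a shared variable (shared memory), another by putting a message on a queue (message passing); the values exchanged are arbitrary.

```python
# Communication sketch: shared memory vs message passing between processes.
from multiprocessing import Process, Queue, Value

def shared_memory_worker(counter):
    with counter.get_lock():
        counter.value += 1                 # communicate by writing a shared variable

def message_passing_worker(mailbox):
    mailbox.put("update from worker")      # communicate by sending a message

if __name__ == "__main__":
    counter = Value("i", 0)
    mailbox = Queue()
    procs = [Process(target=shared_memory_worker, args=(counter,)),
             Process(target=message_passing_worker, args=(mailbox,))]
    for p in procs: p.start()
    for p in procs: p.join()
    print("shared value:", counter.value)  # -> 1
    print("message:", mailbox.get())       # -> "update from worker"
```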

5. Synchronization:
 Ensures that tasks are executed in the correct order when they depend on
each other.

 Prevents race conditions and ensures data consistency.

Example:
Scenario: A bank's transaction system.
 If one task withdraws money and another checks the balance,
synchronization ensures the balance reflects the withdrawal.
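
A minimal Python sketch of the bank scenario: a lock ensures that checking the balance and withdrawing happen as one step, so two concurrent withdrawals cannot both succeed on insufficient funds (the amounts are made up).

```python
# Synchronization sketch: a lock keeps the shared balance consistent.
import threading

balance = 100
lock = threading.Lock()

def withdraw(amount):
    global balance
    with lock:                        # only one task touches the balance at a time
        if balance >= amount:
            balance -= amount

threads = [threading.Thread(target=withdraw, args=(60,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print("final balance:", balance)      # 40: the second withdrawal is refused, never negative
```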

6. Scalability:
 The ability of a parallel computing system to handle increasing workloads
by adding more processors or resources.

Types:
 Strong Scalability: Speedup for a fixed problem size as more processors are added.
 Weak Scalability: The problem size grows along with the number of processors, keeping the work per processor roughly constant.

Example:
Scenario: Weather simulation.
 Adding more processors allows the simulation to process larger areas or
finer details.

7. Speedup and Efficiency:


 Speedup: Measures how much faster a task is completed using parallel
computing compared to serial computing.
 Efficiency: Measures how effectively resources are utilized in a parallel
system.

Example:
Scenario: Processing a dataset.
 If a task takes 10 hours in serial and 2 hours in parallel with 5 processors:
 Speedup = 10 / 2 = 5
 Efficiency = Speedup / Number of processors = 5 / 5 = 1 (100% efficiency)
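
The same arithmetic can be wrapped in two small helper functions (a sketch, not part of the text), reproducing the numbers above.

```python
# Speedup and efficiency sketch, reproducing the worked example above.
def speedup(serial_time, parallel_time):
    return serial_time / parallel_time

def efficiency(serial_time, parallel_time, num_processors):
    return speedup(serial_time, parallel_time) / num_processors

print(speedup(10, 2))          # 5.0  (10 h serial, 2 h parallel)
print(efficiency(10, 2, 5))    # 1.0  (100% efficiency with 5 processors)
```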

8. Load Balancing:
 Distributing tasks evenly among processors to ensure no processor is idle
or overloaded.
 Ensures maximum resource utilization and minimizes execution time.

Example:
Scenario: A web server.
 Incoming user requests are distributed among multiple servers to ensure
balanced workloads.

9. Dependencies:
 Some tasks depend on the results of other tasks and cannot start until
those tasks are complete.

Types:
 Data Dependency: One task needs data produced by another.
 Control Dependency: The execution order depends on certain conditions.

Example:
Scenario: Cooking a meal.
 You can’t cook rice until you’ve boiled the water (data dependency).

4. Elements of Distributed Computing
Distributed computing involves the coordination and cooperation of multiple
autonomous systems to achieve a common goal.
1. Nodes
2. Processes
3. Communication
4. Coordination
5. Synchronization
6. Scalability
7. Fault Tolerance
8. Transparency
9. Resource Sharing
10. Concurrency
11. Latency
12. Security

1. Nodes

 Nodes are the individual computing units in a distributed system.


 They can be physical machines (servers, desktops) or virtual systems
(cloud instances).
 Role: Perform computations and store data while collaborating with
other nodes.

Example:

 Scenario: A social media platform.

 Multiple servers (nodes) handle tasks like storing user profiles,


processing posts, and serving content.

2. Processes

 Independent programs running on different nodes, working together to


complete tasks.
 Role: Each process performs a part of the overall computation.

Example:

 Scenario: E-commerce website.

 One process handles user login, another processes payments, and


another manages the product database.

3. Communication

 Nodes exchange data using network protocols like HTTP, TCP/IP, or


message queues.
 Role: Enables coordination and data sharing between distributed
components.

Example:

 Scenario: Online video streaming.

 The user requests a video (communication) from a content delivery


network (CDN) server.

4. Coordination

 Ensures that tasks across nodes are properly organized and executed in
the right sequence.
 Role: Maintains order and consistency in a distributed system.

Example:

 Scenario: Collaborative editing (e.g., Google Docs).

 Multiple users edit the same document while the system
coordinates changes to prevent conflicts.

5. Synchronization

 Ensures consistency and correct order of operations across nodes,


especially when tasks are interdependent.
 Role: Prevents race conditions and ensures that tasks execute in the
intended order.

Example:

 Scenario: Distributed database updates.

 Synchronization ensures that data written to one node is


replicated accurately to other nodes.

6. Scalability

 Definition: The system's ability to handle increased workloads by adding


more nodes.
 Role: Ensures the system remains efficient as demand grows.

Example:

 Scenario: Cloud storage service (e.g., Dropbox).

 More servers are added to accommodate millions of users


uploading files.

7. Fault Tolerance

 Definition: The ability of the system to continue functioning despite


node or component failures.
 Role: Ensures reliability and availability of services.

Example:

 Scenario: Online banking system.

 If a server handling transactions fails, another server takes over


seamlessly.

8. Transparency

 Definition: Hiding the complexity of the distributed system from the


user.
 Types:

 Access Transparency: Users access resources without knowing


where they are located.
 Location Transparency: Users interact with services as if they are
local, regardless of actual location.

Example:

 Scenario: Cloud storage.

 Users upload files to the cloud without needing to know which


server stores them.

9. Resource Sharing

 Definition: Sharing hardware, software, or data across nodes in a


distributed system.
 Role: Maximizes resource utilization and reduces costs.

Example:

 Scenario: University computing clusters.

 Students and researchers share computational resources for


simulations and experiments.

10. Concurrency

 Definition: Simultaneous execution of multiple tasks across nodes.


 Role: Improves efficiency and reduces processing time.

Example:

 Scenario: Search engine indexing.

 Multiple nodes index different parts of the web concurrently to


speed up search results.

11. Latency

 Definition: The time delay in communication or task execution across


nodes.
 Role: Impacts system performance and user experience.

Example:

 Scenario: Online gaming.

 A delay in communication between players and the game server


results in lag.

12. Security

 Definition: Protecting data and communication within the distributed


system.
 Role: Ensures confidentiality, integrity, and authentication.

Example:

 Scenario: Online shopping.

 Payment data is encrypted during transmission to ensure it’s not


intercepted.

5. Service-Oriented Computing
Service-Oriented Computing is a distributed computing paradigm where
functionality is provided as services, typically over a network.

Key Concepts of Service-Oriented Computing

1. Services:

 Self-contained units of software functionality.


 Perform specific tasks or provide specific data.
 Expose their functionality through well-defined interfaces.

2. Loose Coupling:

 Services are designed to minimize dependencies between them.


 Changes in one service have minimal impact on others.

3. Interoperability:

 Services use standard protocols and data formats to communicate


(e.g., HTTP, XML, JSON).
 Allows integration across different platforms and technologies.

4. Reusability:

 Services can be reused across multiple applications.


 Reduces development effort and improves efficiency.

5. Discoverability:

 Services are registered in directories or repositories for easy


discovery and integration.
 Example: Universal Description, Discovery, and Integration
(UDDI).

6. Statelessness:

 Services do not retain client state between requests.


 Each request is processed independently.

Components of Service-Oriented Computing

1. Service Provider:

 Develops and hosts the service.


 Publishes the service details to a service registry.

2. Service Registry:

 A directory or repository where services are listed.


 Enables service discovery by clients.

3. Service Consumer:

 Uses the service by locating it in the registry and invoking it via its
interface.

4. Middleware:

 Acts as a communication layer to handle service interactions.

Example: Enterprise Service Bus (ESB).
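
A minimal sketch of the provider and consumer roles using only the Python standard library: a provider exposes a small JSON service over HTTP through a well-defined interface, and a consumer invokes it. The endpoint path, port and payload are invented for illustration, and a real service-oriented deployment would also involve a registry and middleware.

```python
# Service-oriented computing sketch: a provider exposes a service, a consumer calls it.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CatalogService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/catalog":                       # well-defined interface
            body = json.dumps({"items": ["book", "pen"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)                        # stateless response
        else:
            self.send_error(404)

# Service provider: hosts the service.
provider = HTTPServer(("localhost", 8081), CatalogService)
threading.Thread(target=provider.serve_forever, daemon=True).start()

# Service consumer: invokes the service via standard protocols (HTTP + JSON).
with urllib.request.urlopen("http://localhost:8081/catalog") as resp:
    print(json.load(resp)["items"])

provider.shutdown()
```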

Benefits of Service-Oriented Computing

1. Flexibility:

 Modular services allow for easy updates and changes.

2. Scalability:

 Services can be deployed and scaled independently.

3. Cost Efficiency:

 Reusability of services reduces development costs.

4. Interoperability:

 Standards-based communication allows integration of diverse


systems.

5. Faster Development:

 Existing services can be reused, speeding up development.

Applications of Service-Oriented Computing

1. E-Commerce:

 Modular services for catalog, payment, and delivery.

2. Healthcare:

 Integration of patient records, diagnostics, and billing systems.

3. Banking:

 Online banking platforms with services for accounts, transactions,


and loans.

4. IoT (Internet of Things):

 Services to collect and process data from connected devices.

UNIT-II

Concepts of Cloud Computing:

1. About cloud computing

2. building cloud computing environment

3. Cloud computing platforms and technologies

4. System models for distributed and cloud computing.
