
CPU Scheduling and Process Management

1. Basic Concepts:
CPU scheduling is a fundamental aspect of operating system design: through multiprogramming, the operating system keeps the CPU busy by switching it among several ready processes.

Maximum CPU Utilization


 The primary goal of CPU scheduling is to ensure maximum CPU utilization, which is
achieved through multiprogramming.
 Multiprogramming allows multiple processes to share CPU resources effectively.

CPU-I/O Burst
 CPU burst: a period during which a process executes on the CPU; I/O burst: a period during which it waits for an I/O operation to complete.
 CPU-I/O Burst Cycle: Process execution alternates between CPU bursts and I/O bursts, beginning and ending with a CPU burst.
 Since the CPU is one of the primary computing resources, efficient CPU scheduling is central to operating system performance.

CPU Bursts
 The durations of CPU bursts tend to follow an exponential or hyper-exponential distribution (see the sketch after this list).
 Characteristics of CPU bursts:
o Most CPU bursts are short.
o A few CPU bursts are long.
 Process behavior based on CPU bursts:
o I/O-bound processes: many short CPU bursts.
o CPU-bound processes: fewer, but longer, CPU bursts.
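A minimal sketch of this "many short, few long" shape, assuming a simple exponential model with an illustrative mean of 8 ms (neither the model parameters nor the thresholds come from these notes):

import random

random.seed(0)
bursts = [random.expovariate(1 / 8) for _ in range(10_000)]   # mean burst of about 8 ms (assumed)

short = sum(1 for b in bursts if b < 8)        # bursts shorter than the mean
long_tail = sum(1 for b in bursts if b > 24)   # bursts longer than three times the mean
print(f"short (<8 ms): {short / len(bursts):.0%}   long (>24 ms): {long_tail / len(bursts):.0%}")
# Prints roughly 63% short and about 5% long: many short bursts, only a few long ones.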

CPU Scheduler
 The CPU scheduler (or CPU manager) selects a process from the ready queue and allocates a CPU core to it.
 The ready queue can be ordered using different scheduling strategies.
 CPU scheduling occurs in four situations:
1. When a process switches from running to waiting state (e.g., waiting for I/O).
2. When a process switches from running to ready state (e.g., interrupted).
3. When a process switches from waiting to ready state (e.g., I/O completed).
4. When a process terminates.
 In cases 1 and 4, there is no choice: a new process must be selected.
 In cases 2 and 3, there is a choice between letting the interrupted (or newly ready) process continue and selecting another one. A scheme that schedules only in cases 1 and 4 is nonpreemptive; otherwise it is preemptive.
Preemptive vs. Nonpreemptive Scheduling
 Nonpreemptive Scheduling: Once a process is assigned to the CPU, it keeps it until it
either terminates or moves to the waiting state.
 Preemptive Scheduling: The CPU can be taken away from a running process, for example when a higher-priority process arrives or the process's time quantum expires.
 Modern operating systems (Windows, Linux, macOS, UNIX) use preemptive scheduling.

Preemptive Scheduling and Race Conditions


 Race Condition: Occurs when multiple processes (or threads) access and update shared data concurrently, and the final result depends on the order in which the accesses happen.
 Examples:
o Bank account withdrawal: if two users withdraw from the same account at the same time, both may read the same balance and one of the updates can be lost (see the sketch below).
o Printer requests: if multiple users send print jobs simultaneously, the shared queue state can be corrupted and documents mixed up.
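A minimal sketch of the bank-account race using two Python threads; the amounts and the artificial delay are assumptions added only to make the lost update reproducible:

import threading
import time

balance = 100  # shared bank balance (illustrative value)

def withdraw(amount):
    global balance
    local = balance           # read the shared balance
    time.sleep(0.01)          # widen the read-modify-write window to make the race visible
    balance = local - amount  # write back, possibly overwriting the other thread's update

t1 = threading.Thread(target=withdraw, args=(50,))
t2 = threading.Thread(target=withdraw, args=(30,))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)  # a correct result would be 20, but this prints 50 or 70 (a lost update)

Protecting the read-modify-write sequence (for example with threading.Lock) removes the race, because only one thread at a time can then perform the update.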

Dispatcher
 The dispatcher is a module responsible for switching CPU control between processes
selected by the scheduler.
 The dispatcher performs:
o Context switching
o Mode switching (from kernel mode to user mode)
o Jumping to the correct program location to resume execution
 Dispatch latency: The time taken to stop one process and start another.

Scheduling Criteria
 CPU Utilization: Keep the CPU busy as much as possible.
 Throughput: Number of processes completed per time unit.
 Turnaround Time: Time taken from submission to completion of a process.
 Waiting Time: Time spent in the ready queue waiting for execution.
 Response Time: Time from request submission to the first response.
Scheduling Algorithm Optimization Criteria
 Scheduling algorithms are designed to:
o Maximize CPU utilization and throughput.
o Minimize turnaround time, waiting time, and response time.
 The choice of a scheduling algorithm depends on system requirements and workload
characteristics.

Types of Scheduling Algorithms

1. First-Come, First-Served (FCFS)


 Implemented with a FIFO (First In, First Out) queue: the CPU is allocated to processes in the order they request it.
 Nonpreemptive: once a process gets the CPU, it keeps it until it terminates or blocks for I/O.
 Advantages:
o Simple and easy to implement.
 Disadvantages:
o Average waiting time can be high, especially when short processes arrive behind long ones.
o Convoy effect: short processes pile up behind one long CPU-bound process.

Turnaround Time Calculation


 Turnaround time (TAT) is the elapsed time between the submission of a process to the system and the completion of its execution. TAT covers the time the process spends waiting in the ready queue, its actual execution time on the CPU, and any time spent doing I/O.

1. Formula: TAT = Completion Time − Arrival Time

2. Equivalent formula (ignoring I/O time): TAT = Burst Time + Waiting Time (see the FCFS sketch below)
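A minimal sketch of these formulas under FCFS; the process names, arrival times, and burst times are invented for illustration:

# FCFS turnaround and waiting times, following the formulas above.
processes = [("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]   # (name, arrival, burst) in ms

clock = 0
for name, arrival, burst in processes:          # FCFS: serve strictly in arrival order
    start = max(clock, arrival)                 # wait for the CPU (and for the process to arrive)
    completion = start + burst
    tat = completion - arrival                  # TAT = Completion Time - Arrival Time
    waiting = tat - burst                       # equivalently, TAT = Burst Time + Waiting Time
    print(f"{name}: completion={completion}  TAT={tat}  waiting={waiting}")
    clock = completion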
2. Shortest-Job-First (SJF)
 Selects the process whose next CPU burst is the smallest (see the sketch below).
 Advantages:
o Gives the minimum average waiting time for a given set of processes.
 Disadvantages:
o The length of the next CPU burst is not known in advance and must be predicted (e.g., from previous bursts).
o Starvation: long processes may never execute if shorter ones keep arriving.
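A minimal sketch of nonpreemptive SJF selection; the burst times are invented and all processes are assumed to arrive at the same time:

# Nonpreemptive SJF: always run the ready process with the shortest predicted CPU burst.
ready = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}    # name -> predicted next CPU burst (ms)

clock, waiting = 0, {}
while ready:
    name = min(ready, key=ready.get)            # pick the smallest burst
    waiting[name] = clock                       # waiting time = start time (arrival assumed 0)
    clock += ready.pop(name)

print(waiting)                                  # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print("average waiting:", sum(waiting.values()) / len(waiting), "ms")   # 7.0 ms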

3. Shortest Remaining Time First (SRTF)


 Preemptive version of SJF.
 Re-evaluates process selection whenever a new process enters the queue.
 Provides optimal waiting time but requires precise burst time estimation.

4. Round Robin (RR)


 Each process gets the CPU for a fixed time quantum (q).
 Preemptive: If a process does not complete within q, it moves to the end of the
queue.
 Performance depends on q:
o Large q → Behaves like FCFS.
o Small q → More context switches (overhead increases).
Example (see the sketch below):
 If n processes are in the ready queue and q = 10 ms, each process gets 1/n of the CPU time in chunks of at most 10 ms at a time.
 No process waits more than (n − 1) × q time units for its next turn.
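A minimal sketch of RR with a small quantum; the burst times and q = 4 ms below are assumptions for illustration:

from collections import deque

quantum = 4                                          # q = 4 ms (assumed)
queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])    # (name, remaining burst in ms)

clock = 0
while queue:
    name, remaining = queue.popleft()
    run = min(quantum, remaining)        # run for at most one quantum
    clock += run
    remaining -= run
    if remaining:                        # quantum expired before completion:
        queue.append((name, remaining))  # preempt and move to the back of the queue
    else:
        print(f"{name} completes at t={clock} ms")
# P2 completes at t=7, P3 at t=10, and P1 (the long job) at t=30.

With a much larger quantum (say 30 ms) the same loop degenerates into FCFS, while a very small quantum would spend most of the time on context switches, which this sketch does not model.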

5. Priority Scheduling
 Each process is assigned a priority value.
 The CPU executes the highest priority process first.
 Types:
o Preemptive: a newly arrived higher-priority process preempts the currently running process.
o Nonpreemptive: the new process is placed at the head of the ready queue; the running process keeps the CPU until it terminates or blocks.

Issues in Priority Scheduling


 Starvation: Low-priority processes may never execute.
 Solution: Aging, i.e., gradually increasing the priority of processes that have been waiting for a long time (see the sketch below).
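A minimal sketch of aging; the priority scale (lower number = higher priority), the step size, and the starting priorities are all assumptions for illustration:

# Aging: periodically raise the priority of processes that keep waiting so they cannot starve.
waiting = {"P_low": 127, "P_mid": 64}        # process -> current priority (lower = higher priority)

def age(priorities, step=1, best=0):
    """Move every waiting process one step closer to the best (lowest-numbered) priority."""
    return {p: max(best, prio - step) for p, prio in priorities.items()}

for _ in range(5):                           # e.g. run once per scheduling interval
    waiting = age(waiting)

print(waiting)                               # {'P_low': 122, 'P_mid': 59}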
Multilevel Queue Scheduling
 The ready queue is divided into multiple queues based on priority or process type.
 Each queue has its own scheduling algorithm.

Example
Queue                    Priority / Scheduling Algorithm
System processes         Highest priority
Interactive processes    Round Robin
Background processes     FCFS

Multilevel Feedback Queue Scheduling


 Allows processes to move between queues based on behavior.
 Prevents starvation by adjusting priorities dynamically.
Example of a 3-Level Queue System
1. Q0 (Highest Priority): RR with 8ms time quantum.
2. Q1: RR with 16ms time quantum.
3. Q2 (Lowest Priority): FCFS.
 A new process enters Q0, where it is given an 8 ms quantum.
 If it does not finish within 8 ms, it is moved to the tail of Q1, where it is given a 16 ms quantum.
 If it still has not finished after that 16 ms, it is demoted to Q2 and scheduled FCFS (see the sketch below).
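A minimal sketch of this three-level scheme; the burst times are invented, all processes are assumed to arrive at t = 0 in Q0, and new arrivals during execution are not modeled:

from collections import deque

# Q0 (q = 8 ms), Q1 (q = 16 ms), Q2 (FCFS), as in the example above.
queues = [deque([("P1", 30), ("P2", 6), ("P3", 20)]), deque(), deque()]
quanta = [8, 16, None]                       # None means FCFS: run to completion

clock = 0
while any(queues):
    level = next(i for i, q in enumerate(queues) if q)    # highest-priority non-empty queue
    name, remaining = queues[level].popleft()
    run = remaining if quanta[level] is None else min(quanta[level], remaining)
    clock += run
    remaining -= run
    if remaining:                            # used its whole quantum: demote one level
        queues[min(level + 1, 2)].append((name, remaining))
    else:
        print(f"{name} finishes at t={clock} ms in Q{level}")
# P2 finishes at t=14 in Q0, P3 at t=50 in Q1, and P1 at t=56 in Q2.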
