Operating System Important Semester Questions
Multithreading: a technique by which a single set of code can be shared by several threads of execution, each at a different stage of execution.
Shared Memory: It is a fast inter-process communication mechanism. It allows cooperating processes to access the same pieces of data concurrently, which speeds up computation.
Process: A process is a program in execution. Process is also called as job, task or unit of work.
Context Switching: When the CPU switches from one process to another, it is called context switching. During a context switch, the system saves the state (context) of the currently running process in its PCB and loads the saved state of the process that is scheduled to run next.
Resource Management: When there are multiple users or multiple jobs running at the same time, resources must be allocated to each of them. Many different types of resources, such as file storage, CPU cycles, etc., are managed by the Operating System.
• The OS manages all kinds of resources using schedulers.
• CPU scheduling algorithms are used for better utilization of CPU.
Security & Protection: We use anti-virus for security so unauthorised users cannot access or attack the computer
system.
• The OS ensures that all access to system resources is controlled.
• The OS ensures that eternal i/O devices are protected from invalid access attempts.
Views Of Operating System:
• User View: This view focuses on the interaction and experience of the end-users. It includes
elements like the user interface (CLI or GUI), ease of use, and how users access and utilize
applications.
• System View: The system view is concerned with the internal workings of the operating system. It
encompasses aspects such as process management, memory allocation, file system management, and
resource allocation.
Real Time Operating System: In a Real Time Operating System, the task to be performed must be completed within a given time limit; it has fixed time constraints. In a Real Time Operating System the response should be fast.
Applications:
• Flight Control System
• Simulations
• Industrial control
• Military applications
List Any Four File Operations.
• Creating a file
• Writing a file
• Reading a file
• Deleting a file
• Renaming an existing file.
• Creating a copy of a file, or copying a file to another I/O device such as a printer or display
Operating system interfaces: command line based (e.g., DOS, UNIX) and GUI based (e.g., LINUX with a desktop environment)
Priority Scheduling: Priority Scheduling is the scheduling algorithm that schedules processes according to the priority assigned to each process.
Round Robin Scheduling: Round-robin (RR) is one of the algorithms employed by process and network schedulers in computing.
Average Waiting Time = (Sum of waiting times of all processes) / (Total number of processes)
Multiprogramming Vs Multiprocessing / Multitasking
• Multiprogramming allows multiple programs to utilize the CPU; multitasking is an extension of the multiprogramming system that also allows user interaction.
• Multiprogramming is based on the context-switching mechanism; multitasking is based on the time-sharing mechanism.
• Multiprogramming works in a multiuser environment, where the CPU switches between programs/processes quickly; multitasking works in a single-user environment, where the CPU switches between the processes of various programs.
• Multiprogramming takes more time to execute a process; multitasking takes less time to execute a process.
Real Time System Vs Time Sharing System
• In a real-time system, a job has to be completed within a fixed deadline; in a time-sharing system, a fixed time slice is given to each process and the processes are arranged in a queue.
• A real-time system has well-defined and fixed time constraints; a time-sharing system requires more complicated CPU scheduling algorithms.
• In a real-time system, response time is critical; in a time-sharing system, response time is less critical.
• A real-time system focuses on accomplishing a computational task before its specified deadline; a time-sharing system places the emphasis on providing a quick response to each request.
Services Provided By Operating System.
• Program Execution: The purpose of computer system is to allow the users to execute programs in an efficient
manner.
• User Interface: All operating systems have a user interface that allows users to communicate with the system.
Three types of user interfaces are available: a. Command line interface (CLI) b. Batch interface c. Graphical user
interface (GUI)
• I/O Operations: When a program is running, it may require input/output resources such as a file or devices such
as printer. So the operating system provides a service to do I/O.
• Error Detection: The operating system detects errors in the CPU and memory hardware, such as a memory error or power failure, a connection failure on a network, or lack of paper in the printer.
• Accounting: Operating system keeps track of usages of various computer resources allocated to users.
• Resource Allocation: Operating system manages resource allocation to the processes. These resources are CPU,
main memory, file storage and I/O devices.
Components/Functions Of Operating System
Process Management: A process is a program in execution. The operating system assigns the processor to the different tasks that must be performed by the computer system. The OS is responsible for creating and deleting both user and system processes, suspending and resuming processes, etc.
Main Memory Management: An operating system deals with the allocation of main memory and other
storage areas to the system programs as well as user programs and data. The OS is responsible for Keeping
track of which parts of memory are currently being used and by whom, Allocating and De-Allocating
memory space as needed.
File Management: OS deals with the management and organisation of various files in the system. The OS
is responsible for creating and deleting files, directories, Backing up files on storage media.
I/O Device Management: The Operating System performs the task of allocation and de-allocation of the devices. The I/O subsystem consists of drivers for specific hardware devices, a general device-driver interface, etc.
Write Two Use of the following OS tools
1) Device Management: Allows interaction with hardware devices through device drivers. Keeps track of each device's data and location.
2) Task scheduler: Assigns the processor to tasks that are ready for execution. Executes predefined actions automatically whenever a certain set of conditions is met.
3) Performance monitor: Monitor various activities on a computer such as CPU or memory usage.
Used to examine how programs running on their computer affect computer’s performance.
4) User Management: User management includes everything from creating a user to deleting a user on
your system. User management can be done in three ways on a Linux system.
5) Security Policy: Access Control: Security policies define who can access specific resources, enhancing
data protection by restricting unauthorized entry. Compliance: They ensure adherence to regulatory
requirements and industry standards, helping organizations maintain legal and security standards.
Write any four system calls related to file management.
System calls are an essential part of a computer's operating system that allow applications and user-level processes to request services and functionality from the kernel (the core of the operating system). These calls serve as a bridge between user-level software and the hardware resources of a computer. Common file-management system calls include create(), open(), read(), write(), close(), and unlink() (delete).
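A minimal sketch (assuming a POSIX-like system; the file name demo.txt is hypothetical) showing four of these file-management system calls, open(), write(), read(), and close(), with lseek() used to reposition within the file:

```c
/* Minimal sketch assuming POSIX system calls; "demo.txt" is a hypothetical file.
   Demonstrates open()-based creation, write(), read(), and close(). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);  /* create/open the file */
    if (fd < 0) { perror("open"); return 1; }

    write(fd, "hello\n", 6);                  /* write data to the file        */
    lseek(fd, 0, SEEK_SET);                   /* reposition to the beginning   */

    char buf[16];
    ssize_t n = read(fd, buf, sizeof(buf));   /* read the data back            */
    if (n > 0) fwrite(buf, 1, (size_t)n, stdout);

    close(fd);                                /* close the file descriptor     */
    /* unlink("demo.txt"); would delete the file afterwards. */
    return 0;
}
```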
Medium-Term Scheduler: Some operating systems have a medium-term scheduler, which can swap processes out
of memory and into secondary storage (e.g., hard disk) to free up memory for other processes. This helps in
managing the available physical memory more efficiently.
Threads And Its Types
A Threads is sometimes called as a Light Weight Process (LWP), is a basic unit of CPU utilization. It comprises a
thread Id, a program counter, a register set and a stack.
A) User Level Thread: Threads implemented at the user level are known as User Level Threads. In user-level threading, thread management is done by the application, while the kernel is not aware of the existence of the threads. Advantages of user-level threads: 1) They can run on any Operating System, 2) The user thread library is easy to port.
B) Kernel Level Thread: Threads implemented at the kernel level are known as Kernel Level Threads. Kernel threads are directly supported by the Operating System. Advantages of kernel-level threads: 1) Kernel routines themselves can be multithreaded, 2) If one thread in a process is blocked, the kernel can schedule another thread of the same process.
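A minimal sketch (assuming POSIX threads are available; compile with -pthread) showing two threads that share the same code and address space while each keeps its own ID, stack, and program counter:

```c
/* Minimal sketch assuming POSIX threads (compile with -pthread): two threads
   share the same code and address space but each has its own ID and stack. */
#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);        /* executed concurrently by both */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);  /* create two threads            */
    pthread_create(&t2, NULL, worker, &id2);

    pthread_join(t1, NULL);                   /* wait for both to finish       */
    pthread_join(t2, NULL);
    return 0;
}
```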
Multithreading Models
• One to One: Each user-level thread is mapped to a separate kernel thread. It provides more concurrency, but creating a user thread requires creating the corresponding kernel thread.
• Many to One: Many user-level threads are mapped to a single kernel thread. Thread management is done in user space, but if one thread makes a blocking system call, the entire process blocks.
• Many to Many: Many user-level threads are multiplexed onto a smaller or equal number of kernel threads, combining the advantages of the other two models.
Schedulers are special system software which handles process scheduling in various ways. Their main task is to select
the jobs to be submitted into the system and to decide which process to run.
1) Short Term Scheduler: The scheduler which selects the jobs or processes which are ready to execute from the
ready queue and allocate the CPU to one of them is called as Short Term Scheduler.
2) Medium Term Scheduler: The scheduler which removes a process from main memory and reloads it later when required is called the Medium Term Scheduler.
3) Long Term Scheduler: The scheduler which picks up job from pool and loads into main memory for execution
is called as Long Term Scheduler.
Process State Diagram
New State: The process is being created.
Ready State: The process is waiting in the ready queue to be assigned to the CPU.
Running State: The instructions of the process are being executed on the CPU.
Waiting or Blocked State: A process that cannot execute until some event occurs, such as an I/O completion, is said to be in the Waiting or Blocked State.
Terminated State: After the completion of the process, the process is terminated; this is called the terminated state of the process.
Process Control Block
A PCB stores descriptive information about a process, such as its state, program counter, memory-management information (memory allocation), allocated resources, scheduling information, event information, and the list of open files, that is required to control and manage a particular process.
The basic purpose of the PCB is to indicate the progress of a process so far.
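As an illustration only (the field names are hypothetical, not taken from any real kernel), a C sketch of the kind of information a PCB groups together:

```c
/* Illustrative sketch only: field names are hypothetical, not an actual OS
   structure. It groups the kinds of information a PCB typically records. */
#include <stddef.h>
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier                    */
    enum proc_state state;            /* current process state                 */
    uint64_t        program_counter;  /* address of the next instruction       */
    uint64_t        registers[16];    /* saved CPU register set                */
    int             priority;         /* CPU-scheduling information            */
    void           *memory_base;      /* memory-management info (allocation)   */
    size_t          memory_limit;
    int             open_files[32];   /* list of open file descriptors         */
    int             waiting_event;    /* event information (what it waits for) */
};
```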
Shared Memory Vs Message Passing
• Shared Memory: Two processes exchange data or information through a shared region; they can read and write the data in this region.
• Message Passing: In the message-passing model, the data or information is exchanged in the form of messages passed through the kernel.
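A minimal sketch (assuming a Linux/POSIX-like system) contrasting the two models: a shared-memory region created with mmap() that both parent and child can read and write, and message passing through a pipe():

```c
/* Minimal sketch assuming a Linux/POSIX-like system: an anonymous shared-memory
   region created with mmap() plus message passing over a pipe(). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Shared memory: one region visible to both parent and child. */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    /* Message passing: a pipe carries bytes from the child to the parent. */
    int fd[2];
    pipe(fd);

    if (fork() == 0) {                        /* child process                */
        *shared = 42;                         /* write into the shared region */
        write(fd[1], "hello", 6);             /* send a message (with '\0')   */
        _exit(0);
    }

    wait(NULL);                               /* parent waits for the child   */
    char buf[16];
    read(fd[0], buf, sizeof(buf));            /* receive the message          */
    printf("shared value = %d, message = %s\n", *shared, buf);
    return 0;
}
```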
Long Term Scheduler Vs Short Term Scheduler
• The long-term scheduler picks jobs from the job pool and loads them into main memory; the short-term scheduler selects a process from the ready queue and allocates the CPU to it.
• The long-term scheduler runs infrequently and controls the degree of multiprogramming; the short-term scheduler runs very frequently and must therefore be fast.
Pre-Emptive Vs Non Pre-Emptive Scheduling
A) Pre-Emptive Scheduling: It allows a higher-priority process to replace a currently running process, even if its time slot is not completed or it has not requested any I/O. If a higher-priority process enters the system, the currently running process is stopped and the CPU transfers control to the higher-priority process.
Disadvantage of Pre-Emptive Scheduling: It is complex and can lead the system to a race condition.
B) Non Pre-Emptive Scheduling: In non-pre-emptive scheduling, once the CPU is assigned to a process, the processor is not released until the completion of that process. It means that a running process keeps control of the CPU and the other allocated resources until the normal termination of that process.
Advantage of Non Pre-Emptive Scheduling: It is simple and cannot lead the system to a race condition.
A system user / designer must consider a variety of factors when developing a scheduling discipline, such as the type
of system and the user’s needs. Following are the scheduling Objectives that the user must expect.
1) Fairness: It is defined as the degree to which each process gets an equal chance to execute.
2) Maximize the Resource Utilization: The scheduling techniques should keep the resources of the system busy.
3) Response Time: A scheduler should minimize the response time for interactive user.
4) Efficiency: The scheduler must make sure that the system is utilized every second and not kept idle.
5) Turnaround: A scheduler should minimize the time batch users must wait for an output.
6) Maximize Throughput: A scheduler should maximize the number of jobs processed per unit time.
7) Enforce Priorities: If the system assigns priorities to processes, the scheduling mechanism should favour the higher-priority processes first.
CPU I/O Burst Cycles
• CPU Burst Cycle: It is a time period when the process is busy with the CPU.
• I/O Burst Cycle: It is a time period when the process is busy working with an I/O resource.
• CPU scheduling is greatly affected by how a process behaves during its Execution.
Almost all the processes continue to switch between CPU (for processing) and I/O
devices (for performing I/O) during their execution
• CPU I/O burst cycles refer to the alternating phases in a computer's operation. The
CPU processes data (CPU burst), then waits for input/output tasks to finish (I/O
burst), before resuming processing. This pattern repeats to ensure efficient data
handling and smooth functioning of the system.
Explain Scheduling Criteria.
1) CPU Utilization: The main objective is to keep CPU as busy as possible. CPU utilization can range from 0 to 100
percent.
2) Throughput: It is the number of processes that are completed per unit time. It is a measure of work done in the
system. Throughput depends on the execution time required for any process.
3) Turnaround Time: The time interval from the time of submission of a process to the time of completion of
that process is called as Turnaround Time.
4) Response Time: The time period from the submission of a request until the first response is produced is called
as response time.
5) Waiting time: It is the sum of time periods spent in the ready queue by a process.
6) Balanced Utilization: Its main aim is to keep all system resources utilized in a balanced way so that more work gets done by the system.
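In terms of measurable times, these criteria are commonly computed as: Turnaround Time = Completion Time − Arrival Time, Waiting Time = Turnaround Time − Burst Time, and Response Time = Time of first response − Arrival Time.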
Scheduling Algorithms
CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the
CPU. The algorithm used by the scheduler to carry out the selection of a process for execution is known as scheduling
algorithm.
Advantage Of FCFS:
• FCFS is well suited for batch systems, where longer waiting times for each process are often acceptable.
• Ensures fairness by serving processes in the order they arrive.
Disadvantage of FCFS:
• Average Waiting Time is very large
• Not suitable for time-sensitive or real-time systems.
Concept Of FCFS: In First Come First Served (FCFS) scheduling, the process that requests the CPU first is allocated the CPU first. It is a non-pre-emptive algorithm implemented with a FIFO ready queue. As shown in the sketch below, the waiting time of each process is the sum of the bursts of the processes that arrived before it.
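A minimal sketch (the burst times 24, 3, and 3 are hypothetical, with all processes assumed to arrive at time 0) computing FCFS waiting and turnaround times and their averages:

```c
/* Minimal FCFS sketch: hypothetical burst times, all processes assumed to
   arrive at time 0. Computes each waiting and turnaround time plus averages. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* hypothetical CPU bursts of P1..P3 */
    int n = sizeof(burst) / sizeof(burst[0]);
    int waiting = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = waiting + burst[i];  /* completion time relative to arrival 0 */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_tat  += turnaround;
        waiting    += burst[i];               /* the next process also waits for this burst */
    }
    printf("Average waiting = %.2f, average turnaround = %.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}
```

With these bursts the average waiting time works out to (0 + 24 + 27) / 3 = 17.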
Shortest Job First Scheduling Algorithm
SJF stands for Shortest Job First, it is also known as Shortest Process Next (SPN) or Shortest Request Next (SRN).
This algorithm schedules the processes according to the length of the CPU burst they require.
Advantage Of SJF:
• Minimizes Average Waiting Time
• Reduces the Average Turnaround Time.
• Provides optimal efficiency for processes with short burst times.
Disadvantage of SJF:
• It may cause long waiting times for longer processes
• It is very difficult to know the length of the next CPU burst
SRTN stands for Shortest Remaining Time Next. It is also known as Shortest Time To Go. It is a scheduling
discipline in which the next scheduling entity, job or a process is selected on the basis of the shortest remaining
execution time.
Advantage Of SRTN:
• Minimizes Average Waiting Time.
• Reduces Turnaround Time
Disadvantage of SRTN:
• Can lead to starvation for processes with longer burst times.
• Requires frequent preemption, causing high overhead.
• Internal Priority: Priorities based on measurable quantities such as time limits, memory requirements, the number of open files, etc.
• External Priority: Priorities set by criteria outside the operating system, for example seniority or the influence of a person.
Advantage of Priority:
• It is simple to use
• Prioritizes important tasks to be completed first
Disadvantage of Priority:
• Possibility of starvation for low priority tasks.
• High priority tasks might monopolize the CPU, leading to delayed execution of lower priority tasks.
The Round Robin (RR) scheduling algorithm is designed especially for Timesharing systems. RR is the pre-emptive
Version of FCFS. It is similar to FCFS but pre-emption is added to switch between processes.
Advantage of RR:
• Ensures fairness by providing equal opportunities for all processes to execute.
• Reduces response time as each process gets a small time slice, ensuring quick execution.
Disadvantage of RR:
• Care must be taken in choosing quantum value.
• Throughput in RR is low if time quantum is too small
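A minimal sketch (hypothetical bursts 24, 3, and 3, all arriving at time 0, and a time quantum of 4) simulating Round Robin and reporting the average waiting time, which comes to about 5.67 for these values:

```c
/* Minimal Round Robin sketch: hypothetical bursts, all processes arriving at
   time 0, time quantum = 4. Cycles through the ready processes and reports
   the average waiting time. */
#include <stdio.h>

#define N 3            /* number of processes (hypothetical) */
#define QUANTUM 4      /* time quantum (hypothetical)        */

int main(void) {
    int burst[N] = {24, 3, 3};               /* hypothetical CPU bursts of P1..P3 */
    int remaining[N], waiting[N];
    int time = 0, done = 0;

    for (int i = 0; i < N; i++) { remaining[i] = burst[i]; waiting[i] = 0; }

    while (done < N) {                       /* cycle through the processes       */
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time         += slice;           /* run this process for one slice    */
            remaining[i] -= slice;
            if (remaining[i] == 0) {         /* finished: waiting = turnaround - burst */
                waiting[i] = time - burst[i];
                done++;
            }
        }
    }

    double total = 0;
    for (int i = 0; i < N; i++) total += waiting[i];
    printf("Average waiting time = %.2f\n", total / N);
    return 0;
}
```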
Deadlock is a situation in a resource-allocation system in which two or more processes are in a simultaneous wait state, each one waiting for one of the others to release a resource before it can proceed. It can also be defined as the permanent blocking of a set of processes that either compete for system resources or communicate with each other.
Deadlock Handling
A deadlock in an operating system can be handled in the following four different ways: deadlock prevention, deadlock avoidance, deadlock detection and recovery, and ignoring the problem.
To prevent a deadlock, the system can use either deadlock prevention or deadlock avoidance techniques.
Conditions for Deadlock Prevention
By ensuring that at least one of the below conditions cannot hold, we can prevent the occurrence of a deadlock.
1) Mutual Exclusion: The mutual exclusion condition must hold for non-shareable resources; shareable resources do not require mutually exclusive access and thus cannot be involved in a deadlock.
2) Hold And Wait: There are two possible ways to handle this situation:
The first protocol requires each process to request all the resources it needs before its execution.
The second protocol requires that, before a process requests any additional resources, it must release all the resources that are currently allocated to it.
3) No pre-emption: If a process that is holding some resource requests another resource that cannot be
immediately allocated to it then all resources currently being held are pre-empted that is these resources are
implicitly released.
4) Circular Wait: There may exist a set (P0, P1, ..., Pn) of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, and so on, with Pn waiting for a resource held by P0. Thus there is a circular chain of two or more processes, each of them waiting for a resource held by the next member of the chain.
Deadlock Avoidance
Following are the algorithms used to avoid deadlock:
• Banker's Algorithm: It ensures the system can avoid deadlock by dynamically analyzing resource-allocation requests. It grants a request only if it determines that the system will remain in a safe state (see the safety-check sketch after this list).
• Resource Request Algorithm: It is used to check if a resource request by a process can be granted without
causing the system to enter an unsafe state or result in a deadlock.
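A minimal sketch of the Banker's safety check (the allocation, maximum, and available values are hypothetical), which searches for a safe sequence by repeatedly running any process whose remaining need can be met from the available resources:

```c
/* Minimal sketch of the Banker's safety check; the allocation, maximum, and
   available values are hypothetical. Searches for a safe sequence by running
   any process whose remaining need can be satisfied from what is available. */
#include <stdbool.h>
#include <stdio.h>

#define P 5   /* number of processes      */
#define R 3   /* number of resource types */

int main(void) {
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};  /* hypothetical */
    int max[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};  /* hypothetical */
    int avail[R]    = {3,3,2};                                    /* hypothetical */

    int need[P][R];
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Allocation */

    bool finished[P] = {false};
    int safe_seq[P], count = 0;

    while (count < P) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { can_run = false; break; }
            if (can_run) {                          /* pretend Pi runs to completion  */
                for (int j = 0; j < R; j++) avail[j] += alloc[i][j];  /* and releases */
                finished[i] = true;
                safe_seq[count++] = i;
                progressed = true;
            }
        }
        if (!progressed) { printf("System is NOT in a safe state\n"); return 0; }
    }

    printf("Safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", safe_seq[i]);
    printf("\n");
    return 0;
}
```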
Memory Partitioning
In memory partitioning, memory is divided into a number of regions or portions. Each region may hold one program to be executed. When a region is free, a program is selected from the job queue and loaded into the free region; when it terminates, the region becomes available for another program. Two memory management techniques based on dividing the address space are:
1) Paging: The logical address space of a process is divided into fixed-size blocks called pages and physical memory is divided into frames of the same size; the pages of a process are loaded into any available frames.
• Linked List (free-space management): In this approach, the free disk blocks are linked together, i.e., a free block contains a pointer to the next free block.
2) Segmentation: It is a memory management technique that implements the user's view of a program. In segmentation, the entire logical address space is considered as a collection of segments, with each segment having a number and a length.
3) Page Fault: A page fault in an operating system happens when a program needs data that isn't currently in the
main memory (RAM). The system then has to fetch that data from the slower storage, like the hard drive, which slows
down the program's execution temporarily.
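As a small illustration of paging address translation (the 4 KB page size and the address are assumptions, not from these notes), a logical address is split into a page number and an offset:

```c
/* Minimal sketch: the 4 KB page size and the logical address are assumptions.
   Splits a logical address into a page number and an offset, as paging does. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u   /* hypothetical page size in bytes */

int main(void) {
    uint32_t logical = 20000;                 /* hypothetical logical address      */
    uint32_t page    = logical / PAGE_SIZE;   /* page number = address / page size */
    uint32_t offset  = logical % PAGE_SIZE;   /* offset      = address % page size */
    printf("logical %u -> page %u, offset %u\n", logical, page, offset);
    return 0;
}
```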
Fragmentation
4) Fragmentation: As processes are loaded into and removed from memory, the free memory space is broken into little pieces; after some time, processes cannot be allocated to memory blocks because the blocks are too small, and the memory blocks remain unused. This problem is known as fragmentation.
A) Internal Fragmentation: Wasting of memory within a partition, due to a difference in size of a Partition and of
the object resident within it, is called Internal Fragmentation.
B) External Fragmentation: Wasting of memory between partitions due to scattering of the free space into a
number of discontinuous areas is called External Fragmentation.
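A minimal first-fit placement sketch (first fit is one common placement strategy for variable partitions; the partition and process sizes below are hypothetical) showing how leftover space inside each partition turns into fragmentation:

```c
/* Minimal first-fit placement sketch; partition and process sizes are
   hypothetical. Leftover space inside each hole illustrates fragmentation. */
#include <stdio.h>

int main(void) {
    int partition[] = {100, 500, 200, 300, 600};  /* free partitions in KB (hypothetical) */
    int process[]   = {212, 417, 112, 426};       /* process sizes in KB (hypothetical)   */
    int np = sizeof(partition) / sizeof(partition[0]);
    int nr = sizeof(process)   / sizeof(process[0]);

    for (int i = 0; i < nr; i++) {
        int placed = -1;
        for (int j = 0; j < np; j++) {
            if (partition[j] >= process[i]) {     /* first hole that is big enough wins  */
                placed = j;
                partition[j] -= process[i];       /* leftover space stays in the hole    */
                break;
            }
        }
        if (placed >= 0)
            printf("Process %d (%d KB) -> partition %d\n", i + 1, process[i], placed + 1);
        else
            printf("Process %d (%d KB) cannot be allocated\n", i + 1, process[i]);
    }
    return 0;
}
```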
File & Attributes of File
A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic tapes, and optical disks.
• Name: It is the easy-to-read label given to a file, allowing humans to identify and access it within a system.
• Type: This information is needed for systems that support different file types.
• Location: It refers to a location / position where the file is stored on that device.
• Size: It refers to the size of the file and the maximum allowed size is mentioned in this attribute.
• Protection: Access-control information determines who can read, write, or execute the file.
File operations & File types
There are various types of file operations some of them are classified below
There are various Types of Files some of them are classified below:
An access method describes the manner and mechanisms by which a process accesses the data/information in a file.
A) Sequential File Access: It is the simplest access method. Information in the file is processed in order, one
record after the other. This mode of access is by far the most common.
B) Indexed Sequential File Access: It is the other method of accessing a file that is built on the top of the
sequential access method. Here the data is stored sequentially with an index for efficient access, allowing both
sequential and direct access.
C) Direct File Access: It is also known as relative access. A file is made up of fixed-length logical records that allow programs to read and write records rapidly in no particular order (see the sketch after this list).
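A minimal sketch (assuming a hypothetical file records.dat made of fixed 32-byte records) contrasting sequential access with direct/relative access using fseek():

```c
/* Minimal sketch: "records.dat" is a hypothetical file of fixed 32-byte records.
   Shows sequential access followed by direct (relative) access with fseek(). */
#include <stdio.h>

#define RECORD_SIZE 32

int main(void) {
    FILE *fp = fopen("records.dat", "rb");
    if (!fp) { perror("fopen"); return 1; }

    char record[RECORD_SIZE];

    /* Sequential access: process records one after the other, in order. */
    while (fread(record, RECORD_SIZE, 1, fp) == 1) {
        /* ... process the record ... */
    }

    /* Direct (relative) access: jump straight to record number 10. */
    fseek(fp, 10L * RECORD_SIZE, SEEK_SET);
    if (fread(record, RECORD_SIZE, 1, fp) == 1) {
        /* ... process record 10 ... */
    }

    fclose(fp);
    return 0;
}
```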
Directory & Its Types
To organise files in the computer system in a systematic manner the operating system provides the concept of
directories. A directory can be defined as a way of grouping files together
There are three types of directory structures:
A) Single Level Directory Structure: It is the simplest form of directory. In a single-level directory, all files are contained in the same directory, so a unique name must be assigned to each file in the directory.
B) Two Level Directory Structure: Each user has a separate user file directory (UFD), and all UFDs are listed under a single root (master file) directory; different users can therefore have files with the same name.
C) Tree Structured Directory: This structure allows users to create their own subdirectories and to organise their files accordingly; a subdirectory contains a set of files or subdirectories.