OS Assignment 2 Answer Key
Even Semester
OPERATING SYSTEMS
1. State the critical-section problem. Discuss three solutions to the critical-section
problem.
The Critical-Section Problem:
A resource that cannot be shared between two or more processes at the same time is
called a critical resource. There may be a situation where more than one process requires
access to the critical resource. During execution, these processes can send data to the
critical resource, receive data from it, or simply query its status by sending the related
commands. A printer is a typical example of a critical (non-sharable) resource. A critical
resource can be accessed only from the critical section of a program.
Problem Setup
Two processes, P0 and P1, need to access a shared resource (critical section).
They must ensure that only one process is allowed in the critical section at a time.
Two shared variables are used: flag[] (a boolean array) and turn (an integer).
Variables:
flag[2] — an array where each index corresponds to one of the processes (0 for
P0, 1 for P1). A process sets its flag to true when it wants to enter its critical
section and to false otherwise.
turn — indicates whose turn it is to enter the critical section: 0 means it is P0's
turn, 1 means it is P1's turn.
Peterson's Algorithm:
Flag setting: When a process (e.g., P0) wants to enter the critical section, it sets its
flag to true, indicating its intention to enter.
Turn setting: The process then sets the turn variable to the index of the other
process (P0 sets turn = 1, giving priority to P1).
Waiting: After setting turn, the process waits as long as the other process wants to
enter the critical section (flag[other] == true) and it’s the other process's turn.
Entering the critical section: Once it’s safe (i.e., either the other process is not
interested in entering the critical section or it’s its turn to enter), the process enters its
critical section.
Exiting the critical section: After completing the critical section, the process sets
flag[process] = false to indicate it no longer needs access.
Code Representation
// Code for P0 (P1 is symmetric, with the indices swapped)
while (true) {
    flag[0] = true;
    turn = 1;
    while (flag[1] && turn == 1)
        ;  // busy wait
    // critical section
    flag[0] = false;
    // remainder section
}
Explanation:
Mutual Exclusion: At any given time, only one process can enter the critical section.
The turn variable ensures that only one process is allowed in at a time. When both
processes want to enter, the one whose turn it is will proceed.
Deadlock Freedom: Deadlock is avoided because turn can hold only one value at a
time; when both processes try to enter at once, turn decides which one proceeds, and
the other enters afterwards.
Starvation Freedom: Both processes are guaranteed to enter the critical section
eventually, because each time one process exits it sets its flag entry to false, allowing
the other process to proceed.
Synchronization hardware
There are several key synchronization hardware primitives that are commonly used:
1. Test-and-Set (TAS)
The Test-and-Set instruction is a simple atomic hardware operation. It checks the value of a
memory location and sets it to a new value in a single atomic operation. This helps in
implementing locks (mutexes) to ensure mutual exclusion.
2. Compare-and-Swap (CAS)
The Compare-and-Swap (CAS) is another atomic operation that compares the value of a
memory location to a given value and, if they are equal, swaps it with a new value. This
operation is essential in implementing efficient synchronization mechanisms like lock-free
algorithms and building more advanced data structures such as queues, stacks, or lists.
Semaphores:
A semaphore is an integer variable that, apart from initialization, is accessed only through
two atomic operations: wait() (also called P or down) and signal() (also called V or up).
There are two types:
1. Counting Semaphore
2. Binary Semaphore (also known as Mutex)
1. Counting Semaphore
A counting semaphore is an integer value that can range over an unrestricted domain. It is
used to manage access to a pool of resources. For example, if you have multiple identical
resources (like printers or database connections), you can use a counting semaphore to ensure
that only a certain number of processes or threads access the resources concurrently.
2. Binary Semaphore
A binary semaphore is a special case of a counting semaphore whose value can only be 0
or 1. It is primarily used for mutual exclusion (mutex), ensuring that only one process or
thread can enter its critical section at a time.
Example:
This is typically used for ensuring that only one thread or process enters the critical section.
semaphore mutex = 1;

wait(mutex);
// critical section
signal(mutex);
Wait Operation (P or Down) The wait operation (also called down) tries to decrement the
semaphore. If the semaphore value is greater than zero, it decreases the value and the process
proceeds. If the value is zero, the process is blocked (waiting) until the value becomes greater
than zero.
wait(semaphore *s) {
    while (*s <= 0)
        ;  // busy wait until the semaphore becomes positive
    *s = *s - 1;
}
Signal Operation (V or Up) The signal operation (also called up) increments the
semaphore's value. If there are any processes waiting (blocked), it unblocks one of
them by signaling.
signal(semaphore *s) {
    *s = *s + 1;
}
A classic example where semaphores are used is the Producer-Consumer problem, where
one or more producers produce items and add them to a shared buffer, while one or more
consumers remove and process those items.
Here, semaphores are used to control the access to the shared buffer (which has a limited
size).
Full: Semaphore that counts how many slots in the buffer are full.
Empty: Semaphore that counts how many slots in the buffer are empty.
Mutex: A binary semaphore to protect mutual exclusion while accessing the buffer.
semaphore mutex = 1;
semaphore empty = N;
semaphore full = 0;

// Producer
while (true) {
    produce_item();
    wait(empty);
    wait(mutex);
    insert_item_to_buffer();
    signal(mutex);
    signal(full);
}

// Consumer
while (true) {
    wait(full);
    wait(mutex);
    remove_item_from_buffer();
    signal(mutex);
    signal(empty);
    consume_item();
}
Advantages of Semaphores:
Mutual Exclusion: Semaphores give a simple way to enforce mutual exclusion on shared
resources.
Resource Management: Counting semaphores can control access to a pool of multiple
identical resources.
Synchronization: Semaphores can order events between processes, e.g., making a
consumer wait until a producer has produced an item.
Disadvantages of Semaphores:
Deadlock: If not carefully managed, semaphores can lead to deadlock, where two or more
processes are blocked forever.
Starvation: If semaphores are not used correctly, some processes may never get a chance to
execute (starvation).
Complexity: In systems with many semaphores and processes, managing synchronization can
become complex.
2. Consider the following system snapshot using the data structures in the banker's
algorithm, with resources A, B, C and D and processes P0 to P4.
Process   Max (A B C D)   Allocation (A B C D)   Available (A B C D)
P0        6 0 1 2         4 0 0 1                3 2 1 1
P1        1 7 5 0         1 1 0 0
P2        2 3 5 6         1 2 5 4
P3        1 6 5 3         0 6 3 3
P4        1 6 5 6         0 2 1 2
Using banker’s algorithm, Answer the following questions:
a) How many resources of type A, B, C and D are there?
b) What are the contents of the need matrix?
c) Is the system in a safe state?
d) If a request arrives from process P4 for additional resources of (1,2,0,0), can the
banker's algorithm grant the request immediately? Show the new system state and
other criteria.
Solution
a) How many resources of type A, B, C and D are there?
Total of each resource = total allocated + available:
A = (4+1+1+0+0) + 3 = 9; B = (0+1+2+6+2) + 2 = 13; C = (0+0+5+3+1) + 1 = 10;
D = (1+0+4+3+2) + 1 = 11.
b) What are the contents of the need matrix?
Need [i, j] = Max [i, j] – Allocation [i, j]
So, the content of Need Matrix is:
Process   Need (A B C D)
P0        2 0 1 1
P1        0 6 5 0
P2        1 1 0 2
P3        1 0 2 0
P4        1 4 4 4
c) Is the system in a safe state? Why?
The system is in a safe state, because a safe sequence exists, e.g., P0, P2, P4, P1, P3.
Starting with Available = (3,2,1,1):
P0: Need (2,0,1,1) ≤ (3,2,1,1); after P0 finishes, Available = (7,2,1,2).
P2: Need (1,1,0,2) ≤ (7,2,1,2); Available becomes (8,4,6,6).
P4: Need (1,4,4,4) ≤ (8,4,6,6); Available becomes (8,6,7,8).
P1: Need (0,6,5,0) ≤ (8,6,7,8); Available becomes (9,7,7,8).
P3: Need (1,0,2,0) ≤ (9,7,7,8); Available becomes (9,13,10,11), the total of all
resources, so every process can finish.
d) If a request from process P4 arrives for additional resources of (1,2,0,0), can
the banker's algorithm grant the request immediately? Show the new system
state and other criteria.
The request (1,2,0,0) passes the preliminary checks: it is ≤ Need(P4) = (1,4,4,4) and
≤ Available = (3,2,1,1). Pretending it is granted, the new state would be:
Available = (3,2,1,1) − (1,2,0,0) = (2,0,1,1)
Allocation(P4) = (0,2,1,2) + (1,2,0,0) = (1,4,1,2)
Need(P4) = (1,4,4,4) − (1,2,0,0) = (0,2,4,4)
Running the safety algorithm on this state: only P0 (Need (2,0,1,1) ≤ (2,0,1,1)) can
finish, after which Available = (6,0,1,2); no remaining process has Need ≤ (6,0,1,2)
(P1 and P2 need resource B, P3 and P4 need more of resource C). The resulting state
is unsafe, so the banker's algorithm cannot grant the request immediately; P4 must
wait.
3. What is Segmentation?
1. Segmentation refers to the division of a program into different segments based on its
logical structure. Each segment represents a different part of the program, such as the
code segment, data segment, stack segment, etc. These segments can have different
sizes and are independently addressable.
Segmentation vs. Paging:
1. Paging divides memory into fixed-size blocks called pages, whereas segmentation
divides memory into variable-sized segments.
2. Paging may lead to internal fragmentation within a page, while segmentation
preserves the program's logical structure, making it more natural for certain
applications.
Types of Segments:
1. Code Segment: This contains the executable code of the program. It's where
instructions are stored.
2. Data Segment: This stores global variables, constants, and static variables used by
the program. It's further divided into initialized and uninitialized sections.
3. Stack Segment: This is used for function calls, local variables, and control
information like return addresses. It grows and shrinks as functions are called and
returned.
4. Heap Segment: This segment stores dynamically allocated memory (e.g., memory
allocated via malloc in C). It grows or shrinks during program execution.
Segment Table:
1. The operating system maintains a segment table to keep track of the base address
(starting point) and the length (size) of each segment.
2. Each segment has a segment number and an offset (an address within the segment),
and the operating system uses the segment table to translate a logical address
(segment number + offset) to a physical address in memory.
Logical and Physical Addresses:
1. A logical address consists of a segment number and an offset within that segment.
2. The physical address refers to the actual location in the physical memory (RAM)
after the operating system translates the logical address using the segment table.
3. This translation ensures that the program accesses its memory correctly, and each
segment is treated as a separate entity.
Drawbacks of Segmentation:
1. External Fragmentation: Since segments can be of variable size, over time, unused
gaps of memory (called fragmentation) may arise between segments. This
fragmentation can lead to inefficient use of memory.
2. Complex Address Translation: The process of translating logical addresses to
physical addresses in a segmented system can be more complex than in paging.
3. Segment Limitations: Since segments vary in size, managing them can be more
challenging, and it can result in inefficient memory usage if segments are not
allocated or deallocated properly.
4. When will page faults occur? Describe the actions taken by the operating system
during a page fault.
A page fault occurs when a program accesses a page in virtual memory that is not
currently in physical memory (RAM). In other words, the operating system has to load the
page from secondary storage (usually a hard disk or SSD) into RAM because it’s not in
memory at the time the program tries to use it.
A page fault occurs in the following cases:
1. The page that a program needs is not in the main memory (RAM).
2. The program attempts to access a page that is either not loaded into RAM or has been
swapped out to disk (paging/swapping).
3. The page table entry for the requested page is not valid (indicating it is not in memory).
Typical situations:
Initial access: When a program first accesses a page that hasn't yet been loaded into
memory (such as when the program is first started).
Swapping out: When the system has to swap a page out of RAM to free space, and later the
program tries to access that page again.
Demand Paging: In systems using demand paging, the OS only loads a page when it is
needed. So, if a page is not yet in memory, a page fault occurs when the program accesses it.
When a page fault occurs, the operating system must handle it carefully to ensure the
program can continue executing. The sequence of actions is as follows:
1. Check the validity of the access:
o The OS first checks whether the memory access is valid. If the program tries to
access a page that it shouldn't be allowed to access (like invalid memory or a
protected page), this is a segmentation fault or an illegal access, and the OS will
terminate the process.
o If the access is valid but the page is just not in memory, the OS proceeds with
handling the page fault.
2. Locate the missing page:
o The OS checks the page table to identify the location of the missing page (usually a
page file or swap space on disk).
o The OS identifies which swap file or disk block contains the requested page.
3. Find (or free) a frame and load the page:
o The OS checks if there is any free frame (empty space) in RAM to store the page.
o If there is no free space, the OS will need to swap an existing page out of RAM to
disk to free up space. This is where the page replacement algorithm comes into play
(e.g., LRU, FIFO, or the Optimal algorithm). The OS selects a victim page, writes it
to disk (if modified), and frees its frame for the new page.
4. Update the page table:
o After loading the page into memory, the OS updates the page table to mark the page
as present in memory, recording the new physical frame for the page.
o The valid bit for the page in the page table is set to indicate that the page is now in
memory.
5. Resume the process:
o Once the page is loaded into memory and the page table updated, the OS returns
control to the program and allows it to continue execution from where it left off
(usually by restarting the instruction that caused the page fault).
o If more pages need to be swapped in or out (for example, if the program continues to
access more pages that are not in memory), the OS continues managing page faults
and swapping pages between memory and disk as needed.
Summary of the steps:
1. Trap to the OS: The CPU triggers an interrupt to inform the OS about the page fault.
2. Check Validity: The OS checks if the page access is valid.
3. Locate the Page: The OS finds where the missing page is stored on disk.
4. Free Space in RAM: The OS finds free space in memory or swaps out a page to make room.
5. Load the Page: The OS loads the page from disk into RAM.
6. Update Page Table: The page table is updated to reflect the new location of the page in
memory.
7. Resume Execution: The process continues executing, with the page now in memory.
Performance Considerations:
Page Fault Rate: If page faults occur frequently, the system spends a lot of time swapping
pages between RAM and disk, resulting in poor performance.
Disk I/O vs. CPU Execution: Disk I/O is much slower than RAM access, so a high page
fault rate can significantly slow down a program; a system that spends most of its time
servicing page faults rather than executing useful work is said to be thrashing.
Effective Page Replacement Algorithms: To reduce the impact of page faults, effective
page replacement algorithms (such as LRU (Least Recently Used)) are used to decide
which page to swap out when memory is full.