
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Even Semester
OPERATING SYSTEMS

ASSIGNMENT II: ANSWER KEY

1. State the critical section problem. Discuss three solutions to the critical section
problem.
The Critical-Section Problem:
A resource that cannot be shared by two or more processes at the same time is
called a critical resource. More than one process may require access to the critical
resource; during execution, these processes can send data to the critical resource,
receive data from it, or query its status by sending it the related commands. An
example of a critical (non-sharable) resource is a printer. A critical resource can be
accessed only from the critical section of a program.

Critical section problem:


Each process has a segment of code called a critical section. Critical section is used to
avoid race conditions on data items.
In critical section, the process may be changing common variables, updating a table,
writing a file and so on.
At any moment at most one process can execute in critical section.
A critical section is a piece of code that accesses a shared resource (data structure or a
device) that must not be concurrently accessed by more than one thread of execution.
The critical-section problem is to design a protocol that the processes can use to
cooperate. A Critical Section Environment contains:
 Entry Section: Code requesting entry into the critical section.
 Critical Section: Code in which only one process can execute at any one time.
 Exit Section: The end of the critical section, releasing or allowing others in.
A solution to the critical-section problem must satisfy three requirements:
1) Mutual Exclusion: If process Pi is executing in its critical section, then no other process
can be executing in its critical section.
2) Progress: If no process is executing in its critical section and there exist some processes
that wish to enter their critical sections, then only those processes that are not executing in
their remainder sections can participate in the decision on which will enter its critical
section next, and this selection cannot be postponed indefinitely.
3) Bounded Waiting: When a process requests access to a critical section, the decision that
grants it access may not be delayed indefinitely. A process may not be denied access
because of starvation or deadlock. A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a request to
enter its critical section and before that request is granted (i.e., all requesters must
eventually be let into the critical section). There are two general ways of handling
critical sections in operating systems:
1) Preemptive kernel: allows a kernel-mode process to be preempted (i.e.,
interrupted) during execution.
2) Non-preemptive kernel: does not allow a kernel-mode process to be preempted during
execution; the process runs until it exits kernel mode, blocks, or voluntarily yields control of
the CPU. This approach is helpful in avoiding race conditions on kernel data structures.
Peterson's Solution:
Peterson's solution is a classic algorithm used to solve the critical section problem for two
processes. It ensures mutual exclusion, meaning that only one process can be in its critical section at a
time, while preventing deadlock and starvation.

Problem Setup

 Two processes, P0 and P1, need to access a shared resource (critical section).
 They must ensure that only one process is allowed in the critical section at a time.
 Two shared variables are used: flag[] (a boolean array) and turn (an integer).

Variables:

flag[2]— This is an array where each index corresponds to one of the processes (0 for
P0, 1 for P1). The flag is set to true when a process wants to enter its critical section
and false otherwise.

turn — This variable indicates whose turn it is to enter the critical section. If it's set to
0, it means P0's turn, and if set to 1, it means P1's turn.

Peterson's Algorithm:

The solution involves the following steps:

Flag setting: When a process (e.g., P0) wants to enter the critical section, it sets its
flag to true, indicating its intention to enter.

Turn setting: The process then sets the turn variable to the index of the other
process (e.g., P0 sets turn = 1, giving priority to P1).

Waiting: After setting turn, the process waits as long as the other process wants to
enter the critical section (flag[other] == true) and it’s the other process's turn.

Entering the critical section: Once it’s safe (i.e., either the other process is not
interested in entering the critical section or it’s its turn to enter), the process enters its
critical section.

Exiting the critical section: After completing the critical section, the process sets
flag[process] = false to indicate it no longer needs access.

Code Representation

// Process P0 (P1 runs the symmetric code with the indices 0 and 1 swapped)
while (true) {
    flag[0] = true;              // P0 announces its intent to enter
    turn = 1;                    // give priority to P1
    while (flag[1] && turn == 1)
        ;                        // busy wait
    // critical section
    flag[0] = false;             // exit section
    // remainder section
}

Explanation:

Mutual Exclusion: At any given time, only one process can enter the critical section.
The turn variable ensures that only one process is allowed in at a time. When both
processes want to enter, the one whose turn it is will proceed.

Deadlock Freedom: Deadlock is avoided because each process alternates the turn
variable. When one process enters the critical section, it ensures that the other process
gets a chance later.

Starvation Freedom: Both processes are guaranteed to enter the critical section
eventually because each time one process exits, it sets flag[] to false, allowing the
other process to proceed.

This solution is only suitable for two processes.

Synchronization hardware

Synchronization hardware refers to the specialized hardware mechanisms provided by
the computer system to manage the coordination between multiple processes or threads. The
goal is to ensure mutual exclusion, prevent race conditions, and avoid deadlock when
multiple processes or threads access shared resources concurrently. These hardware
mechanisms help improve the efficiency and safety of multi-threaded or multi-processing
systems.

There are several key synchronization hardware primitives that are commonly used:

1. Test-and-Set (TAS)

The Test-and-Set instruction is a simple atomic hardware operation. It checks the value of a
memory location and sets it to a new value in a single atomic operation. This helps in
implementing locks (mutexes) to ensure mutual exclusion.

boolean test_and_set(lock)     // executed as one atomic operation
    old = lock
    lock = 1
    return old

// acquiring the lock:
while (test_and_set(lock) == 1)
    ;                          // lock was 1: go back and retry
// critical section
lock = 0                       // release the lock

2. Compare-and-Swap (CAS)
The Compare-and-Swap (CAS) is another atomic operation that compares the value of a
memory location to a given value and, if they are equal, swaps it with a new value. This
operation is essential in implementing efficient synchronization mechanisms like lock-free
algorithms and building more advanced data structures such as queues, stacks, or lists.

 Operation:

1. It reads the value at a memory address.


2. It compares this value with a given value (expected value).
3. If they match, the memory is updated with a new value.
4. The operation returns the original value (before the swap), allowing the process to
check if the CAS was successful.

CAS(address, expected, new_value)   // executed as one atomic operation
    if *address == expected
        *address = new_value
        return true
    else
        return false
Semaphore

A semaphore is a synchronization primitive used to control access to a shared
resource in concurrent programming, ensuring mutual exclusion and helping solve
synchronization problems such as race conditions. A semaphore is essentially an integer
variable that is used for signaling between processes or threads. It is a low-level
mechanism for inter-process or inter-thread communication.

There are two types of semaphores:

1. Counting Semaphore
2. Binary Semaphore (also known as Mutex)

1. Counting Semaphore

A counting semaphore is an integer value that can range over an unrestricted domain. It is
used to manage access to a pool of resources. For example, if you have multiple identical
resources (like printers or database connections), you can use a counting semaphore to ensure
that only a certain number of processes or threads access the resources concurrently.

Initialization: A counting semaphore is initialized with a value representing the
number of available resources.

Operations: The two main operations on a semaphore are:


o Wait (P or down): Decreases the value of the semaphore. If the value is greater than
0, the process continues. If the value is 0, the process is blocked until the value
becomes greater than 0.
o Signal (V or up): Increases the value of the semaphore, potentially unblocking a
waiting process.

2. Binary Semaphore (Mutex)

A binary semaphore is a special case of a counting semaphore where the value can only be 0
or 1. It is primarily used for mutual exclusion (mutex), ensuring that only one process or
thread can enter its critical section at a time.

 Initialization: A binary semaphore is initialized to 1 (indicating the resource is available).


 Operations:

o Wait (P or down): If the semaphore value is 1, it is changed to 0, and the process
enters the critical section. If the value is already 0, the process is blocked until it
becomes 1.
o Signal (V or up): The semaphore value is set back to 1, potentially unblocking a
waiting process.

Example:

This is typically used for ensuring that only one thread or process enters the critical section.

semaphore mutex = 1;
wait(mutex);
Critical section
signal(mutex);

Semaphore Operations in Detail

Wait Operation (P or Down) The wait operation (also called down) tries to decrement the
semaphore. If the semaphore value is greater than zero, it decreases the value and the process
proceeds. If the value is zero, the process is blocked (waiting) until the value becomes greater
than zero.

wait(semaphore s) {
    while (s <= 0)
        ;          // busy wait
    s = s - 1;
}
Signal Operation (V or Up) The signal operation (also called up) increments the
semaphore's value. If there are any processes waiting (blocked), it unblocks one of
them by signaling.

semaphore signal(semaphore s) {
s = s + 1;
}

Example of Semaphore Use Cases

1. Producer-Consumer Problem (Bounded Buffer)

A classic example where semaphores are used is the Producer-Consumer problem, where
one or more producers produce items and add them to a shared buffer, while one or more
consumers remove and process those items.

Here, semaphores are used to control the access to the shared buffer (which has a limited
size).

 Full: Semaphore that counts how many slots in the buffer are full.
 Empty: Semaphore that counts how many slots in the buffer are empty.
 Mutex: A binary semaphore to protect mutual exclusion while accessing the buffer.

semaphore mutex = 1;
semaphore empty = N;
semaphore full = 0;
// Producer
while (true) {
produce_item();
wait(empty);
wait(mutex);
insert_item_to_buffer();
signal(mutex);
signal(full);
}
// Consumer
while (true) {
wait(full);
wait(mutex);
remove_item_from_buffer();
signal(mutex);
signal(empty);
consume_item();
}

Advantages of Semaphores:

 Simple Concept: Semaphores provide a simple and easy-to-understand synchronization
mechanism.
 Flexibility: They can be used for a wide variety of synchronization problems, such as mutual
exclusion, producer-consumer problems, etc.
 Efficiency: Semaphores are very efficient in terms of the system resources they consume.

Disadvantages of Semaphores:

 Deadlock: If not carefully managed, semaphores can lead to deadlock, where two or more
processes are blocked forever.
 Starvation: If semaphores are not used correctly, some processes may never get a chance to
execute (starvation).
 Complexity: In systems with many semaphores and processes, managing synchronization can
become complex.

2.Consider the following system snapshot using data structures in the banker’s
algorithm, with resources A, B, C and D and process P0 to P4.
        Max         Allocation   Available
        A B C D     A B C D      A B C D
P0      6 0 1 2     4 0 0 1      3 2 1 1
P1      1 7 5 0     1 1 0 0
P2      2 3 5 6     1 2 5 4
P3      1 6 5 3     0 6 3 3
P4      1 6 5 6     0 2 1 2
Using banker’s algorithm, Answer the following questions:
a) How many resources of type A, B, C and D are there?
b) What are the contents of the need matrix?
c) Is the system in a safe state?
d) If a requests from process P4 arrives for additional resources of (1,2,0,0), can the
banker’s algorithm grant the request immediately? Show the new system state and
other criteria.
Solution
a) How many resources of type A, B, C and D are there?
A = 9; B = 13; C = 10; D = 11 (total of each type = sum of the Allocation column + Available).
b) What are the contents of the need matrix?
Need [i, j] = Max [i, j] – Allocation [i, j]
So, the content of Need Matrix is:

Process
Need
ABCD
P0 2011
P1 0650
P2 1102
P3 1020
P4 1444
c) Is the system in a safe state? Why?
The system is in a safe state: starting from Available = (3, 2, 1, 1), the processes
can finish in the sequence P0, P2, P4, P1, P3. At each step, the chosen process's
Need is covered by the current Work vector, and when it finishes, its Allocation is
returned to Work.
d) If a request from process P4 arrives for additional resources of (1,2,0,0), can
the banker's algorithm grant the request immediately? Show the new system
state and other criteria.
The request (1,2,0,0) does not exceed P4's Need (1,4,4,4) and does not exceed
Available (3,2,1,1), so it can be pretend-granted. The provisional state is:
Available = (2,0,1,1); Allocation of P4 = (1,4,1,2); Need of P4 = (0,2,4,4); all other rows unchanged.
Running the safety algorithm on this state: only P0's Need (2,0,1,1) is covered by
Work = (2,0,1,1). After P0 finishes, Work = (6,0,1,2), and none of P1, P2, P3, P4
has its Need covered (each requires more of B or C than is available). The provisional
state is therefore unsafe, so the banker's algorithm cannot grant the request
immediately; P4 must wait.

3.Explain the basic concepts of segmentation.


In the context of operating systems (OS), segmentation is a memory management
scheme that divides a program's memory into variable-sized segments, such as code, data,
and stack, instead of dividing memory into fixed-sized blocks like in paging.

Basic Concepts of Segmentation in Operating Systems:

What is Segmentation?

1. Segmentation refers to the division of a program into different segments based on its
logical structure. Each segment represents a different part of the program, such as the
code segment, data segment, stack segment, etc. These segments can have different
sizes and are independently addressable.

Segmentation vs. Paging:

1. Paging divides memory into fixed-size blocks called pages, whereas segmentation
divides memory into variable-sized segments.
2. Paging may lead to fragmentation within a page, while segmentation allows the
program's logical structure to be maintained, making it more efficient for certain
applications.

Types of Segments: A program's memory can be divided into several types of
segments, including:

1. Code Segment: This contains the executable code of the program. It's where
instructions are stored.
2. Data Segment: This stores global variables, constants, and static variables used by
the program. It's further divided into initialized and uninitialized sections.
3. Stack Segment: This is used for function calls, local variables, and control
information like return addresses. It grows and shrinks as functions are called and
returned.
4. Heap Segment: This segment stores dynamically allocated memory (e.g., memory
allocated via malloc in C). It grows or shrinks during program execution.

Segment Table:

1. The operating system maintains a segment table to keep track of the base address
(starting point) and the length (size) of each segment.
2. Each segment has a segment number and an offset (an address within the segment),
and the operating system uses the segment table to translate a logical address
(segment number + offset) to a physical address in memory.

Logical and Physical Address:

1. A logical address consists of a segment number and an offset within that segment.
2. The physical address refers to the actual location in the physical memory (RAM)
after the operating system translates the logical address using the segment table.
3. This translation ensures that the program accesses its memory correctly, and each
segment is treated as a separate entity.
Benefits of Segmentation:

1. Logical View: Segmentation provides a logical view of memory because it organizes
memory into meaningful chunks like code, data, and stack, rather than fixed-size
blocks.
2. Protection: Segments can be protected individually. For example, the OS can prevent
a program from accessing its stack or code segment if it’s not supposed to.
3. Dynamic Size Allocation: Segments can grow or shrink dynamically, making it more
flexible than paging.

Problems with Segmentation:

1. External Fragmentation: Since segments can be of variable size, over time, unused
gaps of memory (called fragmentation) may arise between segments. This
fragmentation can lead to inefficient use of memory.
2. Complex Address Translation: The process of translating logical addresses to
physical addresses in a segmented system can be more complex than in paging.
3. Segment Limitations: Since segments vary in size, managing them can be more
challenging, and it can result in inefficient memory usage if segments are not
allocated or deallocated properly.

Segmentation with Paging:

1. Paged Segmentation: To overcome the problem of external fragmentation, some
systems use a combination of both techniques, where each segment is further divided
into pages. This approach allows the benefits of both segmentation (logical division)
and paging (no external fragmentation).

How Segmentation Works:

I. Logical Address: A logical address in a segmented system is represented as:
Logical Address = (Segment Number, Offset)
II. Segment Table Lookup: The OS uses the segment number to find the corresponding entry
in the segment table, which gives the base address of that segment and its size.
III. Physical Address Calculation: The physical address is calculated by adding the offset to the
base address provided by the segment table.

4.When page faults will occur? Describe the actions taken by operating system during
page fault.

A page fault occurs when a program accesses a page in virtual memory that is not
currently in physical memory (RAM). In other words, the operating system has to load the
page from secondary storage (usually a hard disk or SSD) into RAM because it’s not in
memory at the time the program tries to use it.

This situation happens when:

1. The page that a program needs is not in the main memory (RAM).
2. The program attempts to access a page that is either not loaded into RAM or has been
swapped out to disk (paging/swapping).
3. The page table entry for the requested page is not valid (indicating it is not in memory).

Conditions for a Page Fault:

Page faults typically happen under these conditions:

 Initial access: When a program first accesses a page and it hasn't been loaded into memory
yet (such as when the program is first started).
 Swapping out: When the system has to swap a page out of RAM to free space, and later the
program tries to access that page again.
 Demand Paging: In systems using demand paging, the OS only loads a page when it is
needed. So, if a page is not yet in memory, a page fault occurs when the program accesses it.

Actions Taken by the Operating System During a Page Fault

When a page fault occurs, the operating system must handle it carefully to ensure the
program can continue executing. The sequence of actions is as follows:

Trap to the Operating System:

o When a page fault happens, a trap or interrupt is triggered by the hardware,
transferring control to the operating system's page fault handler.

Check if the Access is Valid:

o The OS first checks if the memory access is valid. For example, if the program tries
to access a page that it shouldn't be allowed to access (like accessing invalid memory
or a protected page), this is a segmentation fault or an illegal access. In this case, the
OS will terminate the process.
o If the access is valid but the page is just not in memory, the OS proceeds with
handling the page fault.

Locate the Page on Disk (Secondary Storage):

o The OS checks the page table to identify the location of the missing page (usually a
page file or swap space on disk).
o The OS identifies which swap file or disk block contains the requested page.

Find a Free Frame in RAM:

o The OS checks if there is any free frame (empty space) in RAM to store the page.
o If there is no free space, the OS will need to swap out an existing page from RAM to
disk to free up space. This is where the page replacement algorithm comes into play
(e.g., LRU, FIFO, or Optimal algorithm). The OS selects a page to swap out, writes it
to disk (if modified), and frees up space for the new page.

Read the Page from Disk into RAM:

o The OS loads the required page from the disk into the allocated free frame in RAM.
This is typically a slow operation compared to accessing RAM because disk access
speeds are slower.

Update the Page Table:

o After loading the page into memory, the OS updates the page table to mark the page
as present in memory, providing the new physical address for the page.
o The valid bit for the page in the page table is set to indicate that the page is now in
memory.

Resume Program Execution:

o Once the page is loaded into memory and the page table updated, the OS restores
control to the program and allows it to continue execution from where it left off (this
is often done by resuming the instruction that caused the page fault).

Repeat the Process if Necessary:

o If more pages need to be swapped in or out (for example, if the program continues to
access more pages that are not in memory), the OS continues managing page faults
and swapping pages between memory and disk as needed.

Summary of Page Fault Handling Process:

1. Trap to the OS: The CPU triggers an interrupt to inform the OS about the page fault.
2. Check Validity: The OS checks if the page access is valid.
3. Locate the Page: The OS finds where the missing page is stored on disk.
4. Free Space in RAM: The OS finds free space in memory or swaps out a page to make room.
5. Load the Page: The OS loads the page from disk into RAM.
6. Update Page Table: The page table is updated to reflect the new location of the page in
memory.
7. Resume Execution: The process continues executing, with the page now in memory.

Performance Considerations:

 Page Fault Rate: If page faults occur frequently, the system spends a lot of time swapping
pages between RAM and disk, resulting in poor performance.
 Disk I/O vs. CPU Execution: Disk I/O is much slower than RAM access, so a high page
fault rate can significantly slow down a program, often called thrashing.
 Effective Page Replacement Algorithms: To reduce the impact of page faults, effective
page replacement algorithms (such as LRU (Least Recently Used)) are used to decide
which page to swap out when memory is full.
