Inter-Process Communication and Synchronization
Introduction and Semaphore
Prof. Mahadik
Introduction
• An important and fundamental feature in modern
operating systems is concurrent execution of
processes/threads. This feature is essential for the
realization of multiprogramming, multiprocessing,
distributed systems, and client-server model of
computation.
• Concurrency encompasses many design issues
including communication and synchronization among
processes, sharing of and contention for resources.
• In this discussion we will look at the various design
issues/problems and the wide variety of solutions
available.
Introduction
• There are generally three forms of interprocess interaction:
• Interprocess synchronization: the set of protocols and mechanisms used to preserve system integrity and consistency when concurrent processes share resources that are serially reusable.
• Interprocess signaling: the exchange of timing signals among concurrent processes, used to coordinate their collective progress.
• Interprocess communication: concurrent cooperating processes must communicate to exchange data, report progress, and accumulate collective results.
Interactions among processes
• In a multi-process application these are the various degrees
of interaction:
1. Competing processes: the processes themselves do not share anything, but the OS has to share system resources, such as disks, files, or printers, among these processes "competing" for them.
Co-operating processes: results of one or more processes may be needed by another process.
2. Co-operation by sharing: example: sharing of an I/O buffer; this leads to the concept of a critical section. (indirect)
3. Co-operation by communication: example: typically no data sharing, but co-ordination through synchronization becomes essential in certain applications. (direct)
Interactions ...(contd.)
• Among the three kinds of interactions indicated by 1, 2 and 3 above:
• 1 is at the system level; potential problems: deadlock and starvation.
• 2 is at the process level; the significant problem is realizing mutual exclusion.
• 3 is more a synchronization problem.
• We will study mutual exclusion and synchronization here, and defer deadlock and starvation to a later time.
Synchronised communications
–send and receive operations blocking
•sender is suspended until receiving process does a
corresponding read
•receiver suspended until a message is sent for it to receive
–properties :
•effective confirmation of receipt for sender
•at most one message can be outstanding for any process pair
•no buffer space problems
•easy to implement, with low overhead
–disadvantages :
•sending process might want to continue after its send
operation without waiting for confirmation of receipt
•receiving process might want to do something else if no
message is waiting to be received
•Asynchronous :
–send and receive operations non-blocking
•sender continues when no corresponding receive outstanding
•receiver continues when no message has been sent
–properties :
•messages need to be buffered until they are received
•amount of buffer space to allocate can be problematic
•a process running amok could clog the system with
messages if not careful
•often very convenient rather than be forced to wait
–particularly for senders
•can increase concurrency
–receivers can poll for messages
•i.e. do a test-receive every so often to see if any messages waiting
•interrupt and signal programming more difficult
•preferable alternative perhaps to have a blocking receive in a separate
thread
Race condition
• A race condition arises when two or more processes read and write shared data concurrently and the final result depends on the exact order in which their accesses interleave.
Mutual exclusion problem
• Successful use of concurrency among processes
requires the ability to define critical sections and
enforce mutual exclusion.
• Critical section: that part of the process code that affects the shared resource.
• Mutual exclusion: in the use of a shared resource
is provided by making its access mutually exclusive
among the processes that share the resource.
• This is also known as the Critical Section (CS)
problem.
Mutual exclusion
• Any facility that provides mutual exclusion should
meet these requirements:
1. No assumption regarding the relative speeds of
the processes.
2. A process is in its CS for a finite time only.
3. Only one process allowed in the CS.
4. Process requesting access to CS should not wait
indefinitely.
5. A process waiting to enter the CS must not block a process inside the CS, or any other process.
Software Solutions: Algorithm 1
Process 0:
    ...
    while turn != 0 do
        nothing;            // busy waiting
    <critical section>
    turn = 1;
    ...

Process 1:
    ...
    while turn != 1 do
        nothing;            // busy waiting
    <critical section>
    turn = 0;
    ...

Problems: strict alternation, busy waiting.
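Algorithm 1 can be exercised with two real threads; the following is a minimal Python sketch (the CPython interpreter lock makes simple loads and stores effectively atomic, which the algorithm assumes of the hardware). The shared `counter` is an illustrative stand-in for the guarded resource.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the spin loops hand over

turn = 0                     # shared turn variable from the algorithm above
counter = 0                  # stands in for the shared resource
N = 200

def process(i):
    global turn, counter
    for _ in range(N):
        while turn != i:     # busy waiting until it is our turn
            pass
        counter += 1         # <critical section>
        turn = 1 - i         # hand the turn to the other process

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
assert counter == 2 * N      # mutual exclusion held
```

Note how the strict alternation shows up: neither thread can enter its critical section twice in a row, even if the other has nothing to do.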
Algorithm 2
Process 0:
    ...
    flag[0] = TRUE;
    while flag[1] do nothing;
    <critical section>
    flag[0] = FALSE;

Process 1:
    ...
    flag[1] = TRUE;
    while flag[0] do nothing;
    <critical section>
    flag[1] = FALSE;

Problem: potential for deadlock. If both processes set their flags before either tests the other's, each loops forever; a process failing within its CS likewise blocks the other permanently.
Algorithm 3
• Combines the shared variables of algorithms 1 and 2 (this is Peterson's algorithm).
• Process Pi (with j denoting the other process):

    do {
        flag[i] = true;
        turn = j;
        while (flag[j] and turn == j)
            ;               // busy waiting
        <critical section>
        flag[i] = false;
        <remainder section>
    } while (1);

• Solves the critical-section problem for two processes.
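The algorithm can be run with two real threads; here is a minimal Python sketch (CPython's interpreter lock gives the sequentially consistent loads and stores the algorithm assumes, which real hardware does not always guarantee). The shared `counter` stands in for the protected resource.

```python
import sys
import threading

sys.setswitchinterval(1e-4)       # switch threads often during the spin loops

flag = [False, False]             # flag[i]: process i wants to enter
turn = 0                          # whose turn it is to defer
counter = 0
N = 200

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True            # announce intent to enter
        turn = j                  # politely yield priority to the other
        while flag[j] and turn == j:
            pass                  # busy waiting
        counter += 1              # <critical section>
        flag[i] = False           # exit; <remainder section> follows

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
assert counter == 2 * N           # no increments were lost
```

Unlike algorithm 1, a process that is alone in wanting the CS enters immediately: the `turn` variable only matters when both are contending.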
Need for interprocess
synchronization
• Processes have access to a common address space with shared variables, used for a number of purposes.
• Unrestricted updating of shared variables leads to inconsistencies.
• Take the example of a keyboard process and a display process.
Semaphore
• First introduced by Dijkstra.
• The semaphore mechanism consists of two operations, i.e. wait and signal.
• wait(s): decrements the value of its argument semaphore s.
• signal(s): increments the value of its argument semaphore s.
• A semaphore whose variable is allowed to take only the values 0 (busy) and 1 (free) is called a binary semaphore.
• A semaphore whose variable is allowed to take any non-negative integer value is called a general semaphore.
Semaphore
• wait(s):   while not (s > 0) do {keep testing};  s = s - 1;
• signal(s): s = s + 1;
Example of Mutual Exclusion with semaphore
Semaphore
process p2;
begin
  while true do
  begin
    wait(mutex);
    <critical section>;
    signal(mutex);
    other_p2_processing;
  end {while}
end; {p2}

process p3;
begin
  while true do
  begin
    wait(mutex);
    <critical section>;
    signal(mutex);
    other_p3_processing;
  end {while}
end; {p3}

{parent process}
begin
  mutex := 1; {free}
  initiate p1, p2, p3;
end; {parent}
Semaphore
Time | p1 | p2 | p3 | Mutex (1=free, 0=busy) | In CS : attempting to enter
-----+----+----+----+------------------------+----------------------------
M1   | -  | -  | -  | 1                      | -  :  -
H/W support for Mutual exclusion
• Pessimistic and optimistic concurrency control:
Pessimistic:
1. Lock everything that could possibly interfere.
2. Update the global variable.
3. Unlock the part locked in step 1.
Optimistic:
1. Read the value of the global variable and prepare a tentative local update based on that value. (The global remains accessible to others.)
2. Compare the current value of the global variable to the value used to prepare the tentative update. If it is unchanged, apply the tentative local update to the global variable. Otherwise discard the tentative update and repeat from step 1.
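The optimistic protocol can be sketched around a compare-and-swap primitive. Python has no direct hardware CAS, so the `compare_and_swap` method below simulates one with a lock; that lock stands in for hardware atomicity only, and the retry loop in `optimistic_add` is the part that mirrors steps 1 and 2 above. All names here are illustrative.

```python
import threading

class SharedCell:
    """A shared global variable with a simulated atomic compare-and-swap."""
    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()   # simulates hardware atomicity only

    def compare_and_swap(self, expected, new):
        """Atomically: if value == expected, store new and report success."""
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

def optimistic_add(cell, delta):
    while True:
        old = cell.value                          # 1. read; prepare tentative update
        if cell.compare_and_swap(old, old + delta):
            return                                # 2. unchanged: update applied
        # another process got in first: discard the tentative update and retry

cell = SharedCell()
workers = [threading.Thread(
               target=lambda: [optimistic_add(cell, 1) for _ in range(1000)])
           for _ in range(4)]
for t in workers: t.start()
for t in workers: t.join()
assert cell.value == 4000                         # no update was lost
```

Note that the global is never locked against readers while a tentative update is being prepared, which is exactly the optimistic bet: conflicts are detected and retried rather than prevented.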
H/W support for Mutual exclusion
• Disable/Enable interrupts:
The basic idea is for the process wishing to enter its critical section to obtain exclusive use of the guarded resources, and subsequently release them for the use of others. On a uniprocessor, disabling interrupts prevents the process from being preempted while inside the CS:
DI : disable interrupts
critical section : use guarded resources
EI : enable interrupts
H/W support for Mutual exclusion
• Test-and-Set (TS) instructions: these are intended as direct h/w support for mutual exclusion.
• They are designed to resolve conflicts among processes by making it possible for only one process at a time to enter its CS.
• The idea is to set a global control variable to FREE when the guarded shared resource is available.
• In principle TS takes one operand: the address of the control variable (or a register) that acts as the semaphore.
• TS operand: test and set the operand, indivisibly:
1. Compare the value of the operand to BUSY, and set the condition codes to reflect the outcome.
2. Set the operand to BUSY.
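A spin lock built on test-and-set can be sketched as follows. Python has no TS instruction, so `test_and_set` simulates the indivisible two-step sequence with a guard lock; that guard stands in for hardware indivisibility only, and the class and method names are illustrative.

```python
import threading

class TSLock:
    def __init__(self):
        self._busy = False              # control variable: FREE when available
        self._guard = threading.Lock()  # simulates the indivisible TS instruction

    def test_and_set(self):
        with self._guard:               # steps 1 and 2 happen indivisibly
            old = self._busy            # 1. test the control variable
            self._busy = True           # 2. set it to BUSY
            return old                  # condition code: what it was before

    def acquire(self):
        while self.test_and_set():      # spin while the old value was BUSY
            pass

    def release(self):
        self._busy = False              # set the control variable back to FREE

lock = TSLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1                    # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 4000                  # mutual exclusion held
```

The key property is that only the process that observes the old value FREE proceeds; every other contender sees BUSY and keeps spinning.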
Queuing implementation of semaphore
• The busy-wait implementation has two drawbacks:
1. Indefinite postponement is possible.
2. Low efficiency, due to consumption of processor cycles by busy-waiting (blocked) processes.
[Figure: PCBs Px … Py, Pz queued on semaphore S]
Queuing implementation of semaphore
• wait(s): if not (s > 0) then suspend the caller at s, else s = s - 1;
• signal(s): if the queue at s is not empty {at least one process waiting} then resume a process from the queue at s, else s = s + 1;
• A queue of PCBs is associated with each semaphore variable.
• The affected process is suspended and its PCB is enqueued in the queue associated with the semaphore that the process is waiting on.
• Suspended processes do not consume processor cycles, so this is potentially more efficient than busy waiting.
• The signal operation is responsible for resuming a process suspended at a given semaphore.
• This implementation gives higher precedence to processes already waiting than to new arrivals (FIFO).
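The queuing wait/signal pair above can be sketched in Python, using one `threading.Event` per suspended caller as the "PCB" in the FIFO queue. The internal lock protects the count and the queue; it is an implementation detail of the sketch, and all names are illustrative.

```python
import threading
from collections import deque

class QueuingSemaphore:
    """wait/signal with a FIFO queue of suspended callers (no busy waiting)."""
    def __init__(self, s=1):
        self._s = s
        self._queue = deque()            # FIFO queue of waiting callers' events
        self._lock = threading.Lock()    # protects _s and _queue

    def wait(self):
        self._lock.acquire()
        if self._s > 0:
            self._s -= 1                 # resource free: take it and proceed
            self._lock.release()
        else:
            ev = threading.Event()
            self._queue.append(ev)       # suspend the caller at s
            self._lock.release()
            ev.wait()                    # consumes no cycles while blocked

    def signal(self):
        with self._lock:
            if self._queue:
                self._queue.popleft().set()  # resume the longest waiter (FIFO)
            else:
                self._s += 1

mutex = QueuingSemaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(500):
        mutex.wait()
        counter += 1                     # critical section
        mutex.signal()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 2000
```

Note that signal hands the semaphore directly to the resumed process without incrementing s, which is what enforces the FIFO precedence over new arrivals.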
Producer-consumer problem
• Also known as the bounded-buffer problem, this is a classical example of a multi-process synchronization problem.
• The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer.
• The producer's job is to generate a piece of data, put it into the buffer, and start again.
• At the same time, the consumer is consuming the data (i.e. removing it from the buffer) one piece at a time.
• The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.
• The solution for the producer is to go to sleep if the buffer is full. The next time the consumer removes an item from the buffer, it wakes up the producer, who starts to fill the buffer again.
• In the same way, the consumer goes to sleep if it finds the buffer to be empty.
• The next time the producer puts data into the buffer, it wakes up the sleeping consumer.
Producer-consumer problem
• To solve the problem, a careless programmer might come up with a
solution shown below
• In the solution two library routines are used, sleep and wakeup. When
sleep is called, the caller is blocked until another process wakes it up
by using the wakeup routine. itemCount is the number of items in the
buffer.
int itemCount = 0
procedure producer() {
while (true) {
item = produceItem()
if (itemCount == BUFFER_SIZE) {
sleep()
}
putItemIntoBuffer(item)
itemCount = itemCount + 1
if (itemCount == 1) {
wakeup(consumer)
}
}
}
Producer-consumer problem
procedure consumer() {
while (true) {
if (itemCount == 0) {
sleep()
}
item = removeItemFromBuffer()
itemCount = itemCount - 1
if (itemCount == BUFFER_SIZE - 1) {
wakeup(producer)
}
consumeItem(item)
}
}
• The problem with this solution:
1. The consumer has just read the variable itemCount,
noticed it's zero and is just about to move inside the if-
block.
2. Just before calling sleep, the consumer is interrupted and
the producer is resumed.
3. The producer creates an item, puts it into the buffer, and
increases itemCount.
4. Because the buffer was empty prior to the last addition,
the producer tries to wake up the consumer.
5. Unfortunately the consumer wasn't yet sleeping, and the
wakeup call is lost. The next time the consumer runs, it
goes to sleep and will never be awakened.
6. The producer will loop until the buffer is full, after which it
will also go to sleep.
7. Since both processes will sleep forever, we have run into a deadlock.
Implementation using semaphores
• Semaphores solve the problem of lost
wakeup calls
• In the solution below we use two semaphores,
full and empty, to solve the problem. Full is
incremented and empty decremented when a
new item has been put into the buffer
• If the producer tries to decrement empty while
its value is zero, the producer is put to sleep.
The next time an item is consumed, empty is
incremented and the producer wakes up
Implementation using semaphores
semaphore full = 0
semaphore empty = BUFFER_SIZE
procedure producer() {
while (true) {
item = produceItem()
down(empty)
putItemIntoBuffer(item)
up(full)
}
}
procedure consumer() {
while (true) {
down(full)
item = removeItemFromBuffer()
up(empty)
consumeItem(item)
}
}
• This works as long as there is a single producer and a single consumer; with several of either, a mutex is also needed to guard the buffer operations themselves.
Implementation using semaphores
semaphore mutex = 1
semaphore full = 0
semaphore empty = BUFFER_SIZE
procedure producer() {
while (true) {
item = produceItem()
down(empty)
down(mutex)
putItemIntoBuffer(item)
up(mutex)
up(full)
}
}
procedure consumer() {
while (true) {
down(full)
down(mutex)
item = removeItemFromBuffer()
up(mutex)
up(empty)
consumeItem(item)
}
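The three-semaphore solution maps directly onto Python's `threading.Semaphore`, with `acquire` playing down and `release` playing up. The following is a minimal runnable sketch of that mapping; `BUFFER_SIZE`, `N`, and the `deque` buffer are illustrative choices.

```python
import threading
from collections import deque

BUFFER_SIZE = 4
buffer = deque()
mutex = threading.Semaphore(1)            # guards the buffer itself
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts occupied slots
N = 20
consumed = []

def producer():
    for item in range(N):
        empty.acquire()                   # down(empty): wait for a free slot
        mutex.acquire()                   # down(mutex)
        buffer.append(item)               # putItemIntoBuffer(item)
        mutex.release()                   # up(mutex)
        full.release()                    # up(full): one more item available

def consumer():
    for _ in range(N):
        full.acquire()                    # down(full): wait for an item
        mutex.acquire()                   # down(mutex)
        consumed.append(buffer.popleft()) # removeItemFromBuffer()
        mutex.release()                   # up(mutex)
        empty.release()                   # up(empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
assert consumed == list(range(N))         # every item delivered, in order
```

The down(empty)/down(mutex) ordering matters: taking the mutex first and then blocking on empty would let a sleeping producer hold the mutex and deadlock the consumer.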
Critical regions and conditional
critical regions
• Critical regions and conditional critical regions (CCRs) were proposed as a
means of hiding the complexity of semaphore programming.
• Note that the programmer must leave the data structure in a consistent state before executing await, as well as before exiting the region.
• CCRs are difficult to implement. Programmers may invent any condition on the
shared data.
• All conditions have to be tested when any process leaves the region.
Monitors
• The monitor is a predecessor of the "class" concept.
• Initially it was implemented as a programming-language construct and more recently as a library; the latter made the monitor facility available for general use with any programming language.
• A monitor consists of procedures, initialization sequences, and local data. The local data is accessible only through the monitor's procedures. Only one process can be executing in a monitor at a time; other processes that need the monitor wait, suspended.
Monitors
monitor monitor-name
{
shared variable declarations
procedure body P1 (…) {
. . .}
procedure body P2 (…) {
. . .}
procedure body Pn (…) {
. . .}
{
initialization code
}
}
Monitors
• To allow a process to wait within the monitor, a condition
variable must be declared, as
condition x, y;
• Condition variable can only be used with the operations
wait and signal.
– The operation
x.wait();
means that the process invoking this operation is
suspended until another process invokes
x.signal();
– The x.signal operation resumes exactly one suspended
process. If no process is suspended, then the signal
operation has no effect.
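Python's `threading.Condition` behaves much like the condition variables above, so a monitor can be sketched as a class whose lock admits one thread at a time, with `wait()`/`notify()` playing x.wait/x.signal. The bounded-buffer monitor below is an illustrative sketch; all names are this example's own.

```python
import threading

class BoundedBufferMonitor:
    """Monitor sketch: the lock admits one thread at a time; the Condition
    objects play the role of condition variables x.wait()/x.signal()."""
    def __init__(self, size):
        self._items = []
        self._size = size
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)   # condition x
        self._not_empty = threading.Condition(self._lock)  # condition y

    def put(self, item):
        with self._lock:                   # enter the monitor
            while len(self._items) == self._size:
                self._not_full.wait()      # x.wait(): suspend, free the monitor
            self._items.append(item)
            self._not_empty.notify()       # y.signal(): resume one waiter

    def get(self):
        with self._lock:
            while not self._items:
                self._not_empty.wait()     # y.wait()
            item = self._items.pop(0)
            self._not_full.notify()        # x.signal()
            return item

m = BoundedBufferMonitor(2)
out = []
c = threading.Thread(target=lambda: [out.append(m.get()) for _ in range(10)])
c.start()
for i in range(10):
    m.put(i)
c.join()
assert out == list(range(10))
```

The `while` (rather than `if`) around each wait re-checks the condition after resumption, matching the rule that a signalled process must re-verify the shared state before proceeding.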
Schematic View of a Monitor
Monitor With Condition
Variables
Message passing
• Both synchronization and communication
requirements are taken care of by this
mechanism.
• Moreover, this mechanism extends naturally to synchronization among distributed processes.
• Basic primitives are:
send (destination, message);
receive ( source, message);
Issues in message passing
• Send and receive: could be blocking or non-blocking:
– Blocking send: when a process sends a message it blocks until
the message is received at the destination.
– Non-blocking send: After sending a message the sender
proceeds with its processing without waiting for it to reach the
destination.
– Blocking receive: When a process executes a receive it waits
blocked until the receive is completed and the required message
is received.
– Non-blocking receive: The process executing the receive
proceeds without waiting for the message(!).
• Blocking Receive/non-blocking send is a common combination.
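The common blocking-receive/non-blocking-send combination can be sketched with Python's standard queue module, where `put` returns immediately (for an unbounded queue) and `get` blocks until a message arrives. The `send`/`receive` wrappers and the per-process `mailbox` are assumptions of this sketch, not a real OS API.

```python
import queue
import threading

def send(destination, message):
    destination.put(message)       # non-blocking send: sender continues at once

def receive(source):
    return source.get()            # blocking receive: waits until a message arrives

mailbox = queue.Queue()            # stands in for the receiver's mailbox

def worker():
    send(mailbox, "done")          # sender proceeds without waiting for receipt

t = threading.Thread(target=worker)
t.start()
assert receive(mailbox) == "done"  # receiver blocks here until the send lands
t.join()
```

Swapping `get()` for `get(timeout=...)` or `get_nowait()` would turn this into the non-blocking receive variant discussed earlier.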