Chapter 3
Concurrency and Process Synchronization
[Figure 3.1 Bounded Buffer Problem: producer and consumer processes share a bounded buffer,
drawing slots from an empty pool and filling a full pool]
* Producer process:
while (1) {
/* produce an item in next_item_produced */
while (counter == BUFFER_SIZE)
; /* do nothing */
buffer[in] = next_item_produced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
* Consumer process:
while (1) {
while (counter == 0)
; /* do nothing */
next_item_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
}
* The statements:
+ counter = counter + 1;
+ counter = counter - 1;
must be executed atomically.
Algorithm 3.1
Here, the important point is that when one process is accessing shared modifiable data in its
critical section, no other process is allowed to execute in its critical section. Thus, the
execution of critical sections by the processes must be mutually exclusive in time, as shown in
Figure 3.2. The structure for a process to enter its critical section is given below:
Structure of process P i
do {
entry section
critical section
exit section
remainder section
} while(1);
Algorithm 3.2
Mutual exclusion needs to be enforced only when processes access shared modifiable data; when
processes are performing operations that do not conflict with one another, they can be allowed to
proceed concurrently. When a process is accessing shared modifiable data, the process is said to
be in its critical section.
Shared data:
boolean lock = false;
Process P i
do {
while(TestAndSet(&lock)); /* busy wait */
critical section
lock = false; /* release lock */
remainder section
} while(1);
In this solution, we use a single shared lock variable, initially 0 (false). When a process wants to
enter its critical section, it first tests the lock. If the lock is 0, the process sets it to 1 and then
enters the critical section. If the lock is already 1, the process simply waits until the lock
becomes 0. Thus, a 0 means that no process is in its critical section, and 1 means hold your horses:
some process is in its critical section. The flaw in testing and setting the lock non-atomically is
best explained by example. Suppose process A sees that the lock is 0. Before it can set the lock to
1, another process B is scheduled, runs, and sets the lock to 1. When process A runs again, it will
also set the lock to 1, and two processes will be in their critical sections simultaneously.
* Strict Alternation
In this proposed solution, the integer variable 'turn' keeps track of whose turn it is to enter the
critical section. Initially, process A inspects turn, finds it to be 0, and enters its critical section.
Process B also finds it to be 0 and sits in a loop continually testing 'turn' to see when it becomes
1. Continuously testing a variable while waiting for some value to appear is called busy waiting.
Taking turns is not a good idea when one of the processes is much slower than the other. Suppose
process 0 finishes its critical section quickly, so both processes are now in their non-critical
sections. This situation violates condition 3 mentioned above.
Both 'sleep' and 'wakeup' system calls have one parameter that represents a memory address used
to match up 'sleeps' and 'wakeups' .
As an example of how sleep and wakeup system calls are used, consider the producer-consumer
problem, also known as the bounded buffer problem.
Two processes share a common, fixed-size (bounded) buffer. The producer puts information into
the buffer and the consumer takes information out.
Trouble arises when
(i) The producer wants to put new data in the buffer, but the buffer is already full.
Solution: the producer goes to sleep, to be awakened when the consumer has removed data.
(ii) The consumer wants to remove data from the buffer, but the buffer is already empty.
Solution: the consumer goes to sleep until the producer puts some data in the buffer and wakes
the consumer up.
This approach also leads to the same race conditions we have seen in earlier approaches. A race
condition can occur because access to 'counter' is unconstrained. The essence of the
problem is that a wakeup call, sent to a process that is not sleeping, is lost. The algorithm for
producer/consumer using sleep and wakeup is given in Algorithm 3.3.
void producer( )
{
int item;
while(TRUE){ /* loop */
item = item_produce(); /* produce item to be placed in the buffer */
if (counter == N) sleep(); /* check if buffer is full */
put_item(item); /* put item in buffer */
counter = counter + 1; /* increment counter */
if (counter == 1) wakeup(consumer); /* buffer was empty */
}
}
void consumer( )
{
int item;
while(TRUE){ /* loop */
if (counter == 0) sleep(); /* check if buffer is empty */
item = take_item(); /* take item from buffer */
counter = counter - 1; /* decrement counter */
if (counter == N - 1) wakeup(producer); /* buffer was full */
item_consume(item);
}
}
Algorithm 3.3
Some of the primitives that can be used for inter process communication are as follows:
• Message passing is the lowest-level of these IPC primitives (the 'RISC' of IPC)
• Remote procedure call (RPC) supports process interaction at the language level, with type checking
• Transactions support operations and their synchronization on shared objects
selective receive:
select
receive var_list from source_id1;
|receive var_list from source_id2;
|receive var_list from source_id3;
end
The send and receive calls are normally implemented as operating system calls accessible from
many programming language environments.
Some of the characteristics used in message passing primitives are mentioned below:
• blocking vs. non-blocking
• buffered vs. unbuffered
• reliable vs. unreliable
• fixed-size vs. variable-size messages
• direct vs. indirect communication
          Blocking                                    Non-Blocking
send      Returns control to the user only after      Returns control as soon as the
          the message has been sent, or until an      message is queued or copied.
          acknowledgment has been received.
receive   Returns only after the message has          Signals willingness to receive a
          been received.                              message; buffer is ready.
[Figure: blocking send and receive primitives with time-outs]
Indirect Communication:
* Mailboxes and Ports
Message-based communications can occur directly between a sender and a receiver, or
intermediate queues can be used, as shown in Figure 3.7. In the case of ports, the sender and
receiver each specify the queue rather than the process with which they are communicating, as
shown in Figure 3.8.
A mailbox is a message queue that may be used by multiple senders and receivers. In a
distributed system, the receivers can be dispersed across many computers, thus requiring two
transmissions for most messages: from the sender to the mailbox and from the mailbox to
the receiver. Giving each receiver a port solves this problem; a port is defined as a mailbox used by
multiple senders and a single receiver. In distributed computing systems, each server ordinarily
has a port at which it receives requests from many clients.
where remote_process executes the desired procedure, input_parameters are inputs to the remote
procedure, and output_parameters are the results returned to the caller.
Remote procedure calls present a number of challenges. Ideally, a remote procedure call should
be indistinguishable from a local procedure call as far as the user is concerned. Call-by-value is
easily implemented by sending copies of the data via messages. Call-by-reference is
problematic because RPC crosses address spaces; internal data formats may be quite different
in networks of heterogeneous machines, thus requiring complex conversions and considerable
overhead. With local procedure calls we assume error-free transmission, but remote calls must
deal with faults and transmission loss.
Ports:
In the case of ports there can be multiple senders but only one receiver, and access to the port is
passed between processes in the form of capabilities.
[Figure: messages sent with Send(P, Msg3) are held in a FIFO queue at port P]
Mutual exclusion can be enforced using message passing. Consider the blocking receive
primitive and the non-blocking send primitive. A shared mailbox, mutex, can be used by
the concurrent processes to send and receive. Initially the mailbox contains a single message with
null content. A process trying to enter its critical section first attempts to receive a
message. If the mailbox is empty, the process is blocked. Once a message is acquired by a
process, it performs its critical section and then places the message back into the mailbox.
A solution to the producer/consumer (with bounded buffer) problem using messages is shown
below in Algorithm 3.4.
/* Producer-Consumer problem using N messages */
#define N 100 /* number of slots in the buffer */
void producer( )
{
int item;
message msg; /* buffer for messages */
while(TRUE){
item = item_produce(); /* produce item to be placed in the buffer */
receive(consumer, &msg); /* wait for an empty to arrive */
construct_message(&msg, item); /* construct a message to be sent */
send(consumer, &msg); /* send item to consumer */
}
}
void consumer( )
{
int item, i;
message msg; /* buffer for messages */
for (i = 0; i < N; i++) send(producer, &msg); /* send N empties */
while(TRUE) {
receive(producer, &msg); /* get message containing item */
item = extract_item(&msg); /* extract item from message */
send(producer, &msg); /* send back empty reply */
item_consume(item); /* do something with the item */
}
}
Algorithm 3.4
* Reader/Writer Problem
The reader/writer problem is defined as follows: a data area is shared among a number of
processes. The data can be a memory area or a disk file. There can be a number of processes that
only read the data (readers) and a number of processes that only write the data (writers). The
conditions are: only one writer can write to the shared data area at a time; if a writer is writing,
no reader may read the shared area at that instant; and any number of readers may simultaneously
read the shared data. The reader/writer problem using message passing is given in Algorithm
3.5.
void reader(int i)
{
message m;
while(true)
{
m = i;
send(readrequest, m);
receive(mbox[i], m);
PERFORM_READ();
m = i;
send(finished, m);
}
}
void writer(int j)
{
message m;
while(true)
{
m = j;
send(writerequest, m);
receive(mbox[j], m);
PERFORM_WRITE();
m = j;
send(finished, m);
}
}
void check_status()
{
while(true)
{
if(count > 0)
{
if(!empty(finished))
{
receive(finished, m_1);
count++;
}
else if(!empty(writerequest))
{
receive(writerequest, m_1);
writer_id = m_1.id;
count = count - 100;
}
else
{
receive(readrequest, m_1);
count--;
send(m_1.id, "DONE");
}
}
if (count == 0)
{
send(writer_id, "DONE");
receive(finished, m_1);
count = 100;
}
while(count < 0)
{
receive(finished, m_1);
count++;
}
}
}
Algorithm 3.5
3.5 Semaphores
In the design of an operating system it is necessary to allow concurrent execution of a collection of
cooperating sequential processes. Two processes can cooperate by means of signals,
so that a process can be held at a specific point until it receives a specific signal. For this purpose
a special tool called a semaphore is used. A semaphore S is a protected nonnegative integer
variable that can be accessed or changed only by the two atomic (indivisible or uninterruptible)
operations P and V, which operate on the semaphore as given below:
P(s) : [
while(s == 0) { wait };
s = s - 1;
]
V(s) : [
s = s + 1;
]
• The P operation decrements the semaphore value
• The V operation increments the semaphore value
A semaphore can be initialized to a nonnegative integer value. Binary semaphores are used to
implement mutual exclusion. A binary semaphore can assume the values 0 or 1 and is easy to
implement. Counting semaphores, also called general semaphores, can assume any non-
negative integer value. Counting semaphores are particularly useful when a resource is to be
allocated from a pool of like resources. The semaphore is initialized to the number of resources in
the pool. Each P operation decrements the semaphore by 1, indicating that another resource has
been removed from the pool and is in use by a process. Each V operation increments the
semaphore by 1, indicating that a process has returned a resource to the pool and the resource
may be reallocated to another process. If a P operation is attempted when the semaphore has been
decremented to 0, then the process must wait until a resource is returned to the pool by a V
operation. The structure for mutual exclusion using a counting semaphore is given below:
struct semaphore {
int count;
queueType queue;
};
void get(semaphore s)
{ s.count--;
if (s.count < 0)
{enter this process in s.queue;
block this process
}
}
void release(semaphore s)
{ s.count++;
if (s.count <= 0)
{ remove a process P from s.queue;
enter process P on ready list;
}
}
3.5.1 Implementing semaphore operations P and V
Semaphore operations can also be implemented in the operating system to avoid busy
waiting. A semaphore is implemented as a protected variable and a queue in which processes can
wait for V operations. When a process attempts a P operation on a semaphore whose current
value is zero, the process relinquishes the processor and blocks itself to await a V operation on
the semaphore. A first-in-first-out queue discipline or other disciplines, including priority
queueing, can be employed. The operating system then reassigns the processor to the next
ready process.
The process in the semaphore queue eventually moves to the head of the queue. Hence the next V
operation removes the process from the semaphore queue and places it on the ready list. Of
course, processes attempting simultaneous P and V operations on a semaphore are guaranteed
exclusive access to the semaphore by the operating system. In the case of uniprocessor systems,
the indivisibility of P and V can be ensured by disabling interrupts while P and V operations are
manipulating a semaphore. This prevents the processor from being preempted until the
manipulation is complete (the point at which interrupts are again enabled).
The producer-consumer relationship with a bounded buffer, implemented using semaphores, is
given below in Algorithm 3.6.
void producer( )
{
int item;
while(TRUE){ /* TRUE is the constant 1 */
item = item_produce(); /* produce item to be placed in the buffer */
down(&empty); /* decrement empty count */
down(&mutex); /* enter the critical region */
put_item(item); /* place the new item in the buffer */
up(&mutex); /* leave critical region */
up(&full); /* increment count of full slots */
}
}
void consumer( )
{
int item;
while(TRUE){ /* infinite loop */
down(&full); /* decrement full count */
down(&mutex); /* enter the critical region */
item = take_item(); /* take item from the buffer */
up(&mutex); /* leave critical region */
up(&empty); /* increment count of empty slots */
item_consume(item); /* consume the item removed from the buffer */
}
}
Algorithm 3.6
The solution for the reader/writer problem (readers have priority) using wait and signal is
given below in Algorithm 3.7.
void writer()
{
while(true)
{
wait(wait_sem);
PERFORM_WRITE();
signal(wait_sem);
}
}
void main()
{
count_read = 0;
parbegin(reader, writer);
}
Algorithm 3.7
* Dining Philosophers Problem
The correctness condition is that a hungry philosopher should not face indefinite waits when he
or she decides to eat. A solution which does not suffer from deadlock or starvation is required.
Deadlock can be avoided using some standard deadlock avoidance approach. However, some
scheme to avoid starvation is also required. Consider a simple solution using philosopher
processes of the following form:
Repeat
Wait till the left fork becomes available;
Lift the left fork;
Wait till the right fork becomes available;
Lift the right fork;
{eat }
{think}
forever
This solution can suffer from deadlock. If all philosophers simultaneously lift their left forks,
none will be able to lift the right fork. Deadlock can be avoided by modifying the philosopher
process such that if the right fork is not available, it puts down the left fork and repeats the
attempt to eat at some time in the future. However, this solution suffers from starvation because
the same situation may occur again.
The solution given below avoids deadlocks because a philosopher picks up forks only if both
forks are available simultaneously. This ensures that at least some philosophers can eat at
any time. Since the availability of forks is checked within a CS, race conditions cannot arise. The
solution does have some deficiencies. Although eating is performed outside the CS, so that many
philosophers can eat concurrently, the availability of forks is checked inside a CS, hence only one
philosopher can make this check at any time. Philosophers who are not neighbours should be able
to perform these checks concurrently. The solution also suffers from busy waiting.
Repeat
If both forks are available then
Lift both forks;
{eat}
put down both forks;
{think}
forever
Algorithm 3.9
3.6 Monitors
The semaphore approach, in particular, has many weaknesses. If P is omitted, then mutual
exclusion is not enforced. If V is omitted, then tasks waiting for P operations could be
deadlocked. Once the P begins, the user cannot back out and take an alternate course of action
while the semaphore remains in use. A task may wait on only one semaphore at a time; this could
lead to deadlock in resource allocation situations. The potential applications for concurrent
programming are numerous. Obviously, operating systems themselves are good examples of
concurrent systems. A monitor is a language construct which provides abstraction and
encapsulation support for building large concurrent systems. Monitors are used to implement
operations on shared data and synchronization between processes which use shared data. A
monitor contains both the data and procedures needed to perform allocation of particular serially
reusable shared resource or group of serially reusable shared resources. To accomplish a resource
allocation function, a process must call a particular monitor entry. Many processes may want to
enter the monitor at various times. But mutual exclusion is rigidly enforced at the monitor
boundary. Only one process at a time is allowed to enter. Processes desiring to enter the monitor
when it is already in use must wait. This waiting is automatically managed by the monitor. Calls
on procedures of a monitor are serviced in FIFO manner to satisfy the bounded wait property.
This feature is implemented by maintaining a queue of processes which wish to execute monitor
procedures. A monitor is defined as a type in a program. It consists of four components:
1. Declaration of shared data
2. Initialization of shared data
3. Operations on shared data
4. Synchronization statements
The first three components resemble the corresponding components in a C++ or Java class. The
synchronization statements provide support for synchronization between processes using a monitor.
A program can create one or more instances of a monitor type. Each instance is a monitor
variable. When a monitor variable is instantiated, the initialization code is executed to initialize
its shared data. Operations over shared data are coded as procedures in the monitor type. These
operations can be invoked by processes using the syntax:
<monitor name>.<operation name>
where <monitor name> is the name of a monitor variable. Events are called conditions in
monitors. A monitor may contain declarations of a few conditions. While executing a monitor
procedure, a process may decide to wait till a specific condition occurs. This process is blocked
on the condition. It is activated when the condition occurs. Occurrence of a condition is
explicitly signaled by some process while executing a monitor procedure. Thus both blocking on
a condition and signaling of its occurrence happen inside monitor procedures. Occurrence of a
condition activates one process blocked on it; it has no effect if no process is blocked on it.
Processes blocked in a monitor can be divided into two groups. Some processes are blocked on
conditions; others are waiting to enter the monitor, i.e. waiting to start execution of some
monitor procedure. To ensure bounded waits in the system, the implementation of a monitor
favors processes blocked on conditions over those wishing to enter the monitor. The monitor
solution for readers/writers is given in Algorithm 3.10. The solution for the dining philosophers
problem using monitors is given in Algorithm 3.11.
monitor reader_writer {
int number_readers = 0;
boolean busy = FALSE;
condition READY_TO_READ, READY_TO_WRITE;
readstart{
if (busy) READY_TO_READ.wait;
number_readers = number_readers + 1;
READY_TO_READ.signal; /* let other waiting readers in */
};
readfinish{
number_readers = number_readers - 1;
if (number_readers == 0) READY_TO_WRITE.signal;
};
writestart{
if ((number_readers != 0) || busy)
READY_TO_WRITE.wait;
busy = TRUE;
};
writefinish{
busy = FALSE;
if (READY_TO_WRITE.queue)
READY_TO_WRITE.signal;
else
READY_TO_READ.signal;
};
};
Algorithm 3.10
monitor dp
{
enum {thinking, hungry, eating} state[5];
condition self[5];
void pickup(int i) {
state[i] = hungry;
test(i);
if (state[i] != eating)
self[i].wait();
}
void putdown(int i) {
state[i] = thinking;
test((i + 4) % 5); /* test left neighbour */
test((i + 1) % 5); /* test right neighbour */
}
void test(int i) {
if ((state[(i + 4) % 5] != eating) &&
(state[i] == hungry) &&
(state[(i + 1) % 5] != eating)) {
state[i] = eating;
self[i].signal();
}
}
void init() {
for (int i = 0; i < 5; i++)
state[i] = thinking;
}
} /* end of monitor */
dp.pickup(i)
….
eat
….
dp.putdown(i)
Exercises
3.1 What does the term co-operating process mean? Explain. Are peer threads co-operating in
this way?
3.2 Mention the basic requirements for execution of concurrent processes.
3.3 What is a race condition in the case of concurrent execution of processes?
3.4 Define what a critical section is and explain at least two techniques used to solve
synchronization problems in critical sections.
3.5 Describe the dining philosophers' problem. Write C++ pseudo-code to implement a
solution to the problem that is free from deadlock (explain how deadlocks are
avoided).
3.6 What is a semaphore? Define the wait operation wait(S) and the wakeup operation
signal(S). Give an explanation and an example of when and how they are used.
3.7 Explain how the method of critical regions is related to semaphores, and what are the
advantages of semaphores?
3.8 What is the distinction between blocking and nonblocking with respect to messages?
3.9 Explain what you understand by the term monitor.
3.10 Consider the following variation of two-phase locking. Each time a process fails to get a
lock and has to release all its locks and start over, it increases its age by 1. As soon as a
process reaches age 10, all other processes that are in phase 1 have to release all their
locks and wait (they do not start trying to get locks again). Then this process (the one with
age 10) starts trying to get locks, but when it cannot get a lock it just waits for it. When this
process completes phase 1 and enters phase 2, all the other processes in phase 1 can
start trying to get locks again. Does this algorithm prevent deadlock? Does it prevent
starvation? Explain your reasoning for each.
3.11 Show that, if the wait() and signal() operations are not atomic, then mutual exclusion may
be violated.
3.12 What are the methods (operations) on a lock? Write the (correct) C/C++ code for the lock
you have implemented during the laboratory work.
*****