
Concurrency and Process Synchronization 31

Chapter 3
Concurrency and Process Synchronization


3.1 Need for Concurrent Process Synchronization
3.2 Cooperating Processes
3.3 Critical Regions and Critical Section Problem
3.4 Inter Process Communication
3.5 Semaphores
3.6 Monitors
Exercises

3.1 Need for Concurrent Process Synchronization


Processes frequently need to communicate with one another, so well-structured communication
mechanisms, not based on interrupts, are needed among processes. In operating systems,
processes that work together share some common storage (main memory, a file, etc.) that each
process can read and write. When two or more processes read or write shared data and the final
result depends on exactly when each one runs, the situation is called a race condition. When
concurrently executing threads share data, their operations and processing must be synchronized
to avoid race conditions on the shared data: only one 'customer' thread at a time should be
allowed to examine and update the shared variable.
Race conditions are also possible inside operating systems themselves. If the ready queue is
implemented as a linked list and is manipulated during the handling of an interrupt, then further
interrupts must be disabled until the first one completes. If interrupts are not disabled, the
linked list could become corrupt.
Processes are said to be concurrent if they exist at the same time. Concurrent processes can
run completely independently of one another, or they can be asynchronous, which means that
they occasionally require synchronization and cooperation.

3.2 Cooperating Processes


Any process that shares data with other processes is a cooperating process. A cooperating
process can affect other processes or be affected by them. There are many reasons why the need
for cooperative execution of processes arises.
- Information or Data Sharing
Since many users may be using the same piece of data, file, or other information, concurrent
execution of the processes must be allowed.
- Computing Speed
If certain operations can logically be performed in parallel, then computers can physically
perform them in parallel, even if the level of parallelism is thousands or perhaps millions of
concurrent activities. This can yield significant improvements in computation speed over what
is possible with sequential computers.
- Modularity
Parallel processing is interesting and complex for several reasons. It is quite difficult and time
consuming to determine which activities or processes can and cannot be executed in parallel,
and parallel programs are much more difficult to debug than sequential programs.
- User Convenience
- User Convenience
Concurrent execution of processes requires techniques that allow the cooperating processes to
communicate with one another and synchronize their activities.

* The Bounded Buffer Producers and Consumers Problem


Let us use the producer/consumer problem to elaborate the concurrent execution of
cooperating processes, as shown in Figure 3.1.
Two processes, a producer and a consumer, execute concurrently. The producer process
produces information, which is consumed by the consumer process.
For concurrent execution of these two processes, a buffer of items is available that can be filled
by the producer process and emptied by the consumer process. The producer produces one item
at a time, and the consumer consumes one item at a time. For proper synchronization of these
two processes, the consumer process must not try to consume an item that has not yet been
produced by the producer process; the consumer process must therefore wait until an item is
produced.
The buffer may be unbounded, meaning there is no limit on its size. The producer can always
produce new items and put them in the buffer, but the consumer may have to wait for new items
if the buffer is empty.
With a bounded buffer, a buffer of fixed size is available. The producer must wait if the buffer
is full, and the consumer must wait if the buffer is empty.
A shared-memory solution for the bounded buffer problem is given in Algorithm 3.1 below. The
shared variable is counter. Producer and consumer assume a fixed buffer size, i.e., a finite
number of slots. The requirement is to design an algorithm that suspends the producer when the
buffer is full, suspends the consumer when the buffer is empty, and ensures that only one process
at a time manipulates the buffer, so there are no race conditions or lost updates.

[Figure: a bounded buffer between producer and consumer; the producer fills slots taken from the empty pool and the consumer drains the full pool]

Figure 3.1 Bounded Buffer Problem

/* Producer/Consumer with Bounded Buffer Problem */


#define BUFFER_SIZE 100
int buffer[BUFFER_SIZE];          /* the shared buffer */
int in = 0, out = 0;              /* next free slot / next full slot */
int counter = 0;                  /* number of items in the buffer */

/* Producer process */
while (1) {
    /* produce an item in next_item_produced */
    while (counter == BUFFER_SIZE)
        ;                         /* do nothing: buffer is full */
    buffer[in] = next_item_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

/* Consumer process */
while (1) {
    while (counter == 0)
        ;                         /* do nothing: buffer is empty */
    next_item_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The statements
    counter++;
    counter--;
must be executed atomically.

Algorithm 3.1
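The danger with the shared counter can be made concrete. The sketch below (illustrative names, not the chapter's code) plays out one interleaving of two counter++ operations as explicit load-modify-store steps; one increment is lost.

```c
/* Sketch: why counter++ is a race. The two "threads" below perform
   counter++ as explicit load-modify-store steps; the interleaving
   shown loses thread A's update. All names are illustrative. */
static int counter = 0;

int lost_update_demo(void)
{
    int reg_a, reg_b;          /* the two threads' CPU registers */

    counter = 0;
    reg_a = counter;           /* thread A loads 0                 */
    reg_b = counter;           /* preempt: thread B also loads 0   */
    reg_a = reg_a + 1;         /* thread A computes 1              */
    reg_b = reg_b + 1;         /* thread B computes 1              */
    counter = reg_a;           /* thread A stores 1                */
    counter = reg_b;           /* thread B stores 1: A's increment is lost */

    return counter;            /* 1, although counter++ ran twice  */
}
```

A real scheduler may or may not produce this interleaving on any given run; the point is that nothing prevents it.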

3.3 Critical Section Problem


The key to preventing trouble involving shared storage is to find some way to prohibit more than
one process from reading and writing the shared data simultaneously. The part of the program
where the shared memory is accessed is called the Critical Section. To avoid race conditions and
inconsistent results, one must identify the code that forms a critical section in each process or
thread. The characteristic properties of such code are:
• Code that references one or more variables in a "read-update-write" fashion while any of those
variables are possibly being altered by another process or thread.
• Code that alters one or more variables that are possibly being referenced in "read-update-write"
fashion by another process or thread.
• Code that uses a data structure while any part of it is possibly being altered by another process
or thread.
• Code that alters any part of a data structure while it is possibly in use by another process or
thread.

Here, the important point is that when one process is executing on shared modifiable data in its
critical section, no other process is to be allowed to execute in its own critical section. Thus, the
execution of critical sections by the processes must be mutually exclusive in time, as shown in
Figure 3.2. The structure for a process to enter its critical section is given as under:

Structure of process Pi
do {
    entry section
    critical section
    exit section
    remainder section
} while (1);
[Figure: a process executes Wait(S) to enter the critical section and Signal(S) on exit]

Figure 3.2 Mutual Exclusion with critical section


3.3.1 Critical section for n processes
Let us consider an algorithm that solves the critical section problem for n processes. It is
known as the bakery algorithm, because this kind of scheduling is used in bakeries.
Before entering its critical section, a process receives a number; the holder of the smallest
number enters the critical section. If processes Pi and Pj receive the same number, then Pi is
served first if i < j, else Pj is served first. The bakery algorithm is given below in
Algorithm 3.2.
It can be observed that mutual exclusion holds. Suppose Pi is in its critical section and Pk is
trying to enter its own. When process Pk executes the second while statement for j == i, it
finds number[i] != 0 and (number[i], i) < (number[k], k). So process Pk will continue to loop
in the while statement until process Pi leaves its critical section.
Notation: tickets are ordered lexicographically as pairs (ticket #, process id #):
• (a,b) < (c,d) if a < c, or if a = c and b < d
• max(a0, …, an-1) is a number k such that k ≥ ai for i = 0, …, n - 1
• Shared data:
boolean select[n];
int number[n];
The data structures are initialized to false and 0, respectively.

do {
    select[i] = true;
    number[i] = max(number[0], number[1], …, number[n - 1]) + 1;
    select[i] = false;
    for (j = 0; j < n; j++) {
        while (select[j]);  /* busy waiting */
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
    }
    critical section
    number[i] = 0;
    remainder section
} while (1);

Algorithm 3.2
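The bakery algorithm rests on two small operations: the lexicographic comparison of (ticket, id) pairs and the max over the ticket array. A minimal C sketch of both follows; the helper names lex_less and max_ticket are ours, not from the text.

```c
/* Sketch of the two helper operations the bakery algorithm relies on.
   lex_less and max_ticket are illustrative names. */

/* (a,b) < (c,d)  iff  a < c, or a == c and b < d */
int lex_less(int a, int b, int c, int d)
{
    return (a < c) || (a == c && b < d);
}

/* max(number[0..n-1]); the next ticket issued is this value + 1 */
int max_ticket(const int number[], int n)
{
    int k = 0;
    for (int i = 0; i < n; i++)
        if (number[i] > k)
            k = number[i];
    return k;
}
```

Ties in ticket numbers are broken by process id, which is exactly the second clause of lex_less.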

3.3.2 Mutual Exclusion


Mutual exclusion is a way of making sure that if one process is using a shared modifiable
datum, the other processes are excluded from doing the same thing. Formally, while one process
accesses the shared variable, all other processes desiring to do so at the same instant are kept
waiting; when that process has finished with the shared variable, one of the waiting processes
is allowed to proceed. In this manner, each process accessing the shared data (variables)
excludes all others from doing so simultaneously. Without mutual exclusion, the results of
execution are not consistent.

Mutual exclusion needs to be enforced only when processes access shared modifiable data; when
processes are performing operations that do not conflict with one another, they can be allowed
to proceed concurrently. When a process is accessing shared modifiable data, the process is
said to be in its critical section.

3.3.3 Mutual Exclusion Conditions


If we could arrange activities such that no two processes were ever in their critical sections
simultaneously, we could avoid race conditions. The following conditions should hold to have a
good solution for the critical section problem (mutual exclusion).

• Only one process at a time can be in the Critical Section (CS).
• Only processes competing for a CS are involved in resolving who enters the CS.
• Once a process attempts to enter its CS, it cannot be postponed indefinitely.
• After requesting entry, only a bounded number of other processes may enter before the
requesting process.
• No deadlock or starvation.
• A process must not be delayed access to a critical section when there is no other process
using it.
• A process remains inside its critical section for a finite time only.
• No assumptions are made about relative process speeds or the number of processes.

3.3.4 Schemes to achieve Mutual Exclusion

The mutual exclusion problem is to devise an entry protocol that keeps two or more processes
or threads from being in their critical sections at the same time. The following schemes can be
used for mutual exclusion, i.e., to ensure that while one process is updating shared modifiable
data in its critical section, no other process is allowed to enter its own critical section.
* Disabling Interrupts (Hardware Solution)
Each process disables all interrupts just after entering its critical section and re-enables them
just before leaving it. With interrupts turned off, the CPU cannot be switched to another
process; hence no other process will enter its critical section, and mutual exclusion is
achieved. Disabling interrupts is sometimes a useful technique within the kernel of an operating
system, but it is not appropriate as a general mutual exclusion mechanism for user processes,
because it is inappropriate to give a user process the power to turn off interrupts. A further
problem with such busy-wait approaches is that starvation is possible: when a process leaves a
critical section and more than one process is waiting, the selection of a waiting process is
arbitrary, so some process could be denied access indefinitely. Deadlock is also possible: if a
low-priority process P1 enters its critical section and a high-priority process P2 then preempts
P1 but cannot enter its own critical section, P1 is never dispatched again and P2 spins forever.
A related hardware approach uses an atomic TestAndSet instruction; the structure of a process
using it is given below:

• Shared data:
boolean lock = false;
• Process Pi
do {
    while (TestAndSet(lock));   /* busy wait */
    critical section
    lock = false;               /* release lock */
    remainder section
} while (1);
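C11 provides atomic_flag, whose atomic_flag_test_and_set operation has exactly the TestAndSet semantics assumed above. A hedged sketch of a spinlock built on it follows; the function names are illustrative, and the single-threaded demo only checks the lock protocol, not real contention.

```c
#include <stdatomic.h>

/* Sketch: a TestAndSet-style spinlock on C11's atomic_flag.
   atomic_flag_test_and_set atomically sets the flag and returns its
   previous value - the TestAndSet of the text. Names are ours. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;                        /* busy wait: lock was already held */
}

void release(void)
{
    atomic_flag_clear(&lock);    /* lock = false */
}

/* Single-threaded sanity check of the protocol. */
int acquire_release_demo(void)
{
    acquire();                   /* flag was clear, so this returns at once */
    int was_held = atomic_flag_test_and_set(&lock);  /* 1: already held */
    release();
    return was_held;
}
```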

* Lock Variable (Software Solution)


In this solution, we consider a single shared lock variable, initially 0. When a process wants to
enter its critical section, it first tests the lock. If the lock is 0, the process sets it to 1
and then enters the critical section. If the lock is already 1, the process just waits until the
lock variable becomes 0. Thus, a 0 means that no process is in its critical section, and a 1
means hold your horses: some process is in its critical section. The flaw in this proposal is
best explained by example. Suppose process A sees that the lock is 0. Before it can set the lock
to 1, another process B is scheduled, runs, and sets the lock to 1. When process A runs again,
it will also set the lock to 1, and the two processes will be in their critical sections
simultaneously.

* Strict Alternation
In this proposed solution, an integer variable 'turn' keeps track of whose turn it is to enter the
critical section. Initially, process A inspects turn, finds it to be 0, and enters its critical
section. Process B also finds it to be 0 and sits in a loop, continually testing 'turn' to see
when it becomes 1. Continuously testing a variable while waiting for some value to appear is
called busy waiting. Taking turns is not a good idea when one of the processes is much slower
than the other. Suppose process 0 finishes its critical section quickly, so both processes are
now in their non-critical sections; process 0 cannot enter again until process 1 has taken its
turn. This violates the condition above that a process must not be delayed access to a critical
section when no other process is using it.

* Using system calls 'sleep' and 'wakeup'

Basically, what the solutions above do is this: when a process wants to enter its critical
section, it checks whether entry is allowed. If it is not, the process goes into a tight loop
and waits (i.e., busy waits) until it is allowed to enter. This approach wastes CPU time.
Now consider a pair of interprocess communication primitives: sleep and wakeup.
• Sleep
o A system call that causes the caller to block, that is, to be suspended until some other
process wakes it up.
• Wakeup
o A system call that wakes up a sleeping process.

Both 'sleep' and 'wakeup' system calls have one parameter that represents a memory address used
to match up 'sleeps' and 'wakeups'.

As an example of how the sleep and wakeup system calls are used, consider the producer-consumer
problem, also known as the bounded buffer problem.
Two processes share a common, fixed-size (bounded) buffer. The producer puts information into
the buffer and the consumer takes information out.
Trouble arises when
(i) The producer wants to put new data in the buffer, but the buffer is already full.
Solution: the producer goes to sleep, to be awakened when the consumer has removed data.
(ii) The consumer wants to remove data from the buffer, but the buffer is already empty.
Solution: the consumer goes to sleep until the producer puts some data in the buffer and wakes
the consumer up.
This approach leads to the same race conditions we have seen in earlier approaches, because
access to 'counter' is unconstrained. The essence of the problem is that a wakeup call, sent to
a process that is not sleeping, is lost. The algorithm for producer/consumer using sleep and
wakeup is given in Algorithm 3.3.

/* Producer Consumer problem with sleep and wakeup */


#define N 100                        /* number of slots in the buffer */

int counter = 0;                     /* number of items in the buffer */

void producer(void)
{
    int item;
    while (TRUE) {                   /* loop */
        item = item_produce();       /* produce item to be placed in the buffer */
        if (counter == N) sleep();   /* buffer is full: go to sleep */
        put_item(item);              /* put item in buffer */
        counter = counter + 1;       /* increment counter */
        if (counter == 1) wakeup(consumer);      /* buffer was empty: wake consumer */
    }
}

void consumer(void)
{
    int item;

    while (TRUE) {                   /* loop */
        if (counter == 0) sleep();   /* buffer is empty: go to sleep */
        item = take_item();          /* take item from buffer */
        counter = counter - 1;       /* decrement counter */
        if (counter == N - 1) wakeup(producer);  /* buffer was full: wake producer */
        item_consume(item);
    }
}

Algorithm 3.3
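The lost wakeup can be traced step by step. In the illustrative sketch below (a flag-based model of sleep/wakeup, with names of our choosing), the consumer tests the counter, is preempted before it can sleep, the producer's wakeup finds no sleeper and is lost, and the consumer then sleeps forever.

```c
/* Sketch of the lost-wakeup interleaving in Algorithm 3.3, modelled
   with flags instead of real processes. All names are illustrative. */
static int count = 0;              /* items in the buffer */
static int consumer_sleeping = 0;

static void wakeup_consumer(void)
{
    if (consumer_sleeping)
        consumer_sleeping = 0;     /* wake the sleeper */
    /* else: nothing records the call - the wakeup is lost */
}

/* Returns 1 if the consumer ends up asleep despite a full buffer slot. */
int lost_wakeup_demo(void)
{
    int consumer_saw_empty;

    count = 0;
    consumer_sleeping = 0;
    consumer_saw_empty = (count == 0);  /* consumer tests the counter... */
                                        /* ...and is preempted here      */
    count = count + 1;                  /* producer inserts an item      */
    wakeup_consumer();                  /* counter was 0, so the producer
                                           issues a wakeup - it is lost,
                                           nobody is sleeping yet        */
    if (consumer_saw_empty)
        consumer_sleeping = 1;          /* consumer finally sleeps, and
                                           will never be woken           */
    return consumer_sleeping;
}
```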

3.4 Inter Process Communication


The scheme above uses a shared buffer, as implemented in the producer/consumer solutions shown
earlier. When shared memory is limited, cooperating processes can instead communicate through an
interprocess communication (IPC) facility provided by the operating system, as shown in Figure
3.3. IPC provides a way for processes to communicate and synchronize without sharing an address
space, which is useful in distributed processing. IPC primitives such as message passing, remote
procedure call (RPC), and transactions can be used. Communicating a message between two
processes implies some level of synchronization between the two: the receiver cannot receive a
message until it has been sent by the sender process.

[Figure: processes lacking shared memory communicate by sending messages]

Figure 3.3 IPC: Solution for limited shared memory

Some of the primitives that can be used for interprocess communication are as follows:
• Message passing is the most primitive (the "RISC") of the IPC mechanisms
• Remote procedure call (RPC) supports process interaction at the language level, with type
checking
• Transactions support operations on shared objects and their synchronization
3.4.1 Message Passing


This is one of the approaches used for interprocess communication. With the trend toward
distributed systems, there has been a surge of interest in message-based interprocess
communication, discussed below. Processes generally send and receive messages by using calls
such as:
send expression_list to destination_identifier;
receive variable_list from source_identifier;

Another variation can be as follows:


guarded receive:

receive variable_list from source_id when B;

selective receive:

select
receive var_list from source_id1;
|receive var_list from source_id2;
|receive var_list from source_id3;
end

The send and receive calls are normally implemented as operating system calls accessible from
many programming language environments.
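As a concrete, if simplified, illustration of send and receive, the sketch below implements a one-directional message channel on a POSIX pipe. channel_init, send_msg, and receive_msg are our names, not standard calls, and the demo assumes a POSIX system.

```c
#include <string.h>
#include <unistd.h>

/* Sketch: a one-directional send/receive channel on a POSIX pipe.
   channel_init, send_msg, and receive_msg are illustrative names,
   not the text's primitives. */
static int chan[2];                  /* chan[0] = read end, chan[1] = write end */

int channel_init(void)
{
    return pipe(chan);               /* 0 on success */
}

int send_msg(const char *msg)
{
    /* transmit the message bytes, including the terminating NUL */
    return (int)write(chan[1], msg, strlen(msg) + 1);
}

int receive_msg(char *buf, int cap)
{
    /* a blocking receive: read waits until the sender has written */
    return (int)read(chan[0], buf, (size_t)cap);
}
```

Between separate processes the same calls work after fork(), which is the usual setting for this pattern.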

* Characteristics of Message-Passing Primitives


A blocking send must wait for the receiver to receive the message. With a nonblocking send, the
sender can continue with other processing even if the receiver has not yet received the message;
this requires a message buffering mechanism to hold the message until the receiver receives it.
A blocking send is an example of synchronous communication; a nonblocking send is an example of
asynchronous communication. Asynchronous communication with nonblocking sends increases the
level of concurrency. For example, if a user gives a command to a print server, the system
buffers the information until the print server is ready to receive it, and the user who sent the
request can continue execution without having to wait for the print server.

In a distributed environment, transmissions may be lost. The sender and receiver can cooperate
by using an acknowledgement protocol to confirm that each transmission has been properly
received. The sender can use a time-out mechanism while waiting for an acknowledgement message
from the receiver; on time-out, if the acknowledgement has not been received, the sender can
retransmit the message. Message passing systems can identify each new message with a sequence
number, and the receiver can examine the sequence numbers of the messages it receives. If an
acknowledgement is lost and the sender retransmits a message, it assigns the retransmitted
message the same sequence number as the originally transmitted message. If the receiver detects
several messages with the same sequence number, it accepts and keeps only one of them.

One problem in distributed systems with send/receive message passing is naming processes
explicitly, so that send and receive calls reference the proper processes. Process creation and
destruction can be coordinated through some centralized naming mechanism, but this incurs
considerable transmission overhead as individual machines request permission to use new names.
An alternative approach is to have each computer ensure unique process names for its own
processes; processes may then be addressed by combining the computer name with the process name.
This requires centralized control only in determining a unique name for each computer in the
distributed system, which does not incur significant overhead because computers are added to and
removed from networks infrequently.
Some of the characteristics used in message passing primitives are mentioned below:
• blocking vs. non-blocking
• buffered vs. unbuffered
• reliable vs. unreliable
• fixed-size vs. variable-size messages
• direct vs. indirect communication

* Blocking vs. Non-Blocking Primitives

            Blocking                                  Non-Blocking
send        Returns control to the user only after    Returns control as soon as the
            the message has been sent, or until an    message is queued or copied.
            acknowledgement has been received.
receive     Returns only after a message has been     Signals willingness to receive a
            received.                                 message; the buffer is ready.
problems    Reduces concurrency.                      Needs buffering; may still block;
                                                      deadlocks; tricky to program.

Buffered vs. Unbuffered Primitives


• An asynchronous send is never delayed, so it may get arbitrarily ahead of the receive.
• In this case, however, messages need to be buffered.
• If no buffering is available, the operations become blocking, and the processes are
synchronized on the operations: a rendezvous.

[Figure: the caller's invoke and the receiver's accept meet at a rendezvous; input parameters are copied in, the entry executes, and output parameters are copied back]

Figure 3.4 Asynchronous Send / Receive

Reliable vs. Unreliable Primitives


• Transmission problems exist, such as corruption, loss, duplication, and reordering.
• A recovery mechanism is required.
• For reliable transmission, acknowledgements are required for messages.

[Figure: the sender starts a time-out after Send and retransmits if no acknowledgement is received before it expires]

Figure 3.5 Reliable communication with timeout interval

Direct vs. Indirect Communication


The setup for direct communication is shown in Figure 3.6. The structure for sending messages
can be as follows:
Send (P, message)
Receive (Q, message)

The other variation can be:


Send (P, message)
Receive (var, message)

[Figure: clients C1 and C2 send directly to server S with Send(S, msg1) and Send(S, msg2); the server issues Receive(&client_id, &msg)]

Figure 3.6 Direct Communication

Indirect Communication:
* Mailboxes and Ports
Message-based communications can occur directly between a sender and a receiver, or
intermediate queues can be used as shown in Figure 3.7. In case of ports, the sender and receiver
each specify the queue rather than the process with which they are communicating as shown in
Figure 3.8.
A mailbox is a message queue that may be used by multiple senders and receivers. In a
distributed system, the receivers may be dispersed across many computers, requiring two
transmissions for most messages: from the sender to the mailbox and from the mailbox to the
receiver. Giving each receiver a port solves this problem; a port is defined as a mailbox used
by multiple senders and a single receiver. In distributed computing systems, each server
ordinarily has a port at which it receives requests from many clients.
* Remote Procedure Calls


Remote procedure calls (RPC) were introduced to provide a structured, high-level approach to
interprocess communication in distributed systems. A remote procedure call allows a process on
one system to call a procedure in a process on another system. The calling process blocks
awaiting the return from the called procedure on the remote system; the calling process then
resumes execution from the point immediately following the call.
The called procedure and the caller reside in separate processes with separate address spaces,
so there is no notion of shared global variables as there is with regular procedures within a
single process. Therefore RPC transfers information strictly through the parameters of the call.
RPC can be implemented on top of an existing send/receive mechanism. The call to the remote
procedure can be as follows:
send (remote_process, input_parameters);
receive (remote_process, output_parameters);

where remote_process executes the desired procedure, input_parameters are the inputs to the
remote procedure, and output_parameters are the results returned to the caller.
Remote procedure calls present a number of challenges. Ideally, a remote procedure call should
be indistinguishable from a local procedure call as far as the user is concerned. Call-by-value
is easily implemented by sending copies of the data via messages. Call-by-reference is awkward
because RPC crosses address spaces; internal data formats may also be quite different in
networks of heterogeneous machines, requiring complex conversions and considerable overhead.
With local procedure calls we assume error-free transmission, but remote calls must deal with
faults and transmission loss.
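The marshalling implied by call-by-value RPC can be sketched locally. In the illustration below the "remote" procedure runs in the same process, so only the marshal, execute, and unmarshal steps are shown; all names (rpc_msg, marshal, remote_add, rpc_add) are ours.

```c
/* Sketch: the marshal/execute/unmarshal steps of a call-by-value RPC,
   with the "remote" procedure run locally for illustration. */
struct rpc_msg { int a, b, result; };

static void marshal(struct rpc_msg *m, int a, int b)
{
    m->a = a;                  /* copy input parameters into the message */
    m->b = b;
}

static void remote_add(struct rpc_msg *m)
{
    m->result = m->a + m->b;   /* the called procedure, notionally in its
                                  own address space */
}

int rpc_add(int a, int b)
{
    struct rpc_msg m;
    marshal(&m, a, b);         /* send(remote_process, input_parameters);    */
    remote_add(&m);            /* the remote process executes the procedure  */
    return m.result;           /* receive(remote_process, output_parameters); */
}
```

Because only copies travel in the message, changes the callee makes to its parameters never reach the caller's variables, which is why call-by-reference is the hard case.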

[Figure: several senders deposit messages in mailbox M with Send; receivers obtain them with Receive(M, &msg)]

Figure 3.7 Send / Receive Message using Mailbox

* Ports:
In the case of ports there can be multiple senders but only one receiver, and access to the port
is passed between processes in the form of capabilities.

[Figure: multiple processes Send(P, Msg) into port P, a FIFO queue read by a single receiving process]

Figure 3.8 Send / Receive Message using ports


Mutual exclusion can be enforced using message passing. Consider the blocking receive primitive
and the nonblocking send primitive. A shared mailbox, mutex, can be used by the concurrent
processes to send and receive. Initially the mailbox contains a single message with null
content. A process trying to enter the critical section first attempts to receive a message; if
the mailbox is empty, the process blocks. Once a process has acquired the message, it performs
its critical section and then places the message back into the mailbox.
A solution to the producer/consumer problem (with bounded buffer) using messages is shown
below in Algorithm 3.4.
/* Producer Consumer problem using N messages */
#define N 100                            /* number of slots in the buffer */

void producer(void)
{
    int item;
    message msg;                         /* buffer for messages */
    while (TRUE) {
        item = item_produce();           /* produce item to be placed in the buffer */
        receive(consumer, &msg);         /* wait for an empty to arrive */
        construct_message(&msg, item);   /* construct a message to send */
        send(consumer, &msg);            /* send item to consumer */
    }
}

void consumer(void)
{
    int item, i;
    message msg;                         /* buffer for messages */
    for (i = 0; i < N; i++)
        send(producer, &msg);            /* send N empties */
    while (TRUE) {
        receive(producer, &msg);         /* get message containing item */
        item = retrieve_item(&msg);      /* extract item from message */
        send(producer, &msg);            /* send back an empty reply */
        item_consume(item);              /* do something with the item */
    }
}
Algorithm 3.4
* Reader/Writer Problem
The reader/writer problem is defined as follows. A data area, such as a memory region or a disk
file, is shared among a number of processes. Some processes only read the data (readers) and
some only write the data (writers). The applicable conditions are: only one writer at a time may
write to the shared data area; if a writer is writing, no reader may read the shared area at
that instant; and any number of readers may simultaneously read the shared data. The
reader/writer problem using message passing is given in Algorithm 3.5.

/* Reader/Writer problem using message passing */

void reader(int i)
{
    message m;
    while (true)
    {
        m = i;
        send(read_request, m);
        receive(mbox[i], m);
        PERFORM_READ();
        m = i;
        send(finished, m);
    }
}

void writer(int j)
{
    message m;
    while (true)
    {
        m = j;
        send(write_request, m);
        receive(mbox[j], m);
        PERFORM_WRITE();
        m = j;
        send(finished, m);
    }
}

void check_status()
{
    int count = 100;   /* maximum number of simultaneous readers */
    while (true)
    {
        if (count > 0)
        {
            if (!empty(finished))
            {
                receive(finished, m_1);
                count++;
            }
            else if (!empty(write_request))
            {
                receive(write_request, m_1);
                writer_id = m_1.id;
                count = count - 100;
            }
            else
            {
                receive(read_request, m_1);
                count--;
                send(m_1.id, "DONE");
            }
        }
        if (count == 0)
        {
            send(writer_id, "DONE");
            receive(finished, m_1);
            count = 100;
        }
        while (count < 0)
        {
            receive(finished, m_1);
            count++;
        }
    }
}
Algorithm 3.5
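The count bookkeeping in check_status can be checked in isolation. Assuming the conventional initialization count = 100 (the maximum number of simultaneous readers), the sketch below shows why a writer's subtraction of 100 excludes all readers.

```c
/* Sketch of check_status's bookkeeping, assuming the conventional
   initialization count = 100 (the maximum number of readers). */
#define MAX_READERS 100

int writer_excludes_readers(void)
{
    int count = MAX_READERS;

    count--;                 /* first reader admitted            */
    count--;                 /* second reader admitted           */
    count += 2;              /* both readers send 'finished'     */
    count -= MAX_READERS;    /* a write request takes all 100    */

    /* count == 0: no reader can be admitted while the writer runs */
    return count == 0;
}
```

Had readers still been active, count would have gone negative, and check_status's final while loop drains their 'finished' messages before the writer is signalled.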

3.5 Semaphores
In the design of an operating system it is necessary to allow concurrent execution of a
collection of cooperating sequential processes. Two processes can cooperate by means of signals,
so that a process can be held at a specific point until it receives a specific signal. For this
purpose a special tool called a semaphore is used. A semaphore S is a protected nonnegative
integer variable that can only be accessed or changed by the two atomic (indivisible,
uninterruptible) operations P and V, which operate on a semaphore as follows:
P(s) : [
    while (s == 0) { wait };
    s = s - 1;
]
V(s) : [
    s = s + 1;
]
• The P operation decrements the semaphore value.
• The V operation increments the semaphore value.
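POSIX semaphores map directly onto P and V: sem_wait is P and sem_post is V. A small sketch follows, assuming a POSIX system (link with -pthread on Linux).

```c
#include <semaphore.h>

/* Sketch: P and V expressed with POSIX semaphores (sem_wait is P,
   sem_post is V). Assumes a POSIX system. */
int pv_demo(void)
{
    sem_t s;
    int v;

    sem_init(&s, 0, 2);   /* semaphore initialized to 2 resources  */
    sem_wait(&s);         /* P: 2 -> 1 (would block only at 0)     */
    sem_wait(&s);         /* P: 1 -> 0                             */
    sem_post(&s);         /* V: 0 -> 1, would wake a blocked waiter */
    sem_getvalue(&s, &v); /* observe the current value             */
    sem_destroy(&s);
    return v;
}
```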
A semaphore can be initialized to any nonnegative integer value. Binary semaphores, which can
assume only the values 0 and 1, are used to implement mutual exclusion and are easy to
implement. Counting semaphores, also called general semaphores, can assume any nonnegative
integer value. Counting semaphores are particularly useful when a resource is to be allocated
from a pool of like resources. The semaphore is initialized to the number of resources in the
pool. Each P operation decrements the semaphore by 1, indicating that another resource has been
removed from the pool and is in use by a process. Each V operation increments the semaphore by
1, indicating that a process has returned a resource to the pool and the resource may be
reallocated to another process. If a P operation is attempted when the semaphore has been
decremented to 0, the process must wait until a resource is returned to the pool by a V
operation. The structure for mutual exclusion using a counting semaphore is given below:
struct semaphore {
    int count;
    queueType queue;
};

void get(semaphore s)        /* the P operation */
{
    s.count--;
    if (s.count < 0) {
        enter this process in s.queue;
        block this process;
    }
}

void release(semaphore s)    /* the V operation */
{
    s.count++;
    if (s.count <= 0) {
        remove a process P from s.queue;
        enter process P on the ready list;
    }
}
3.5.1 Implementing semaphores P and V
Semaphore operations can also be implemented within the operating system to avoid busy waiting.
A semaphore is implemented as a protected variable and a queue in which processes can wait for V
operations. When a process attempts a P operation on a semaphore whose current value is zero,
the process relinquishes the processor and blocks itself to await a V operation on the
semaphore. A first-in-first-out queue discipline or another discipline, including priority
queueing, can be employed. The operating system then reassigns the processor to the next ready
process.
A process in the semaphore queue eventually moves to the head of the queue; the next V operation
then removes the process from the semaphore queue and places it on the ready list. Of course,
processes attempting simultaneous P and V operations on a semaphore are guaranteed exclusive
access to the semaphore by the operating system. On uniprocessor systems, the indivisibility of
P and V can be ensured by disabling interrupts while P and V operations are manipulating a
semaphore; this prevents the processor from being usurped until the manipulation is complete
(the point at which interrupts are re-enabled).
Producer-consumer relationship with bounded buffer implemented using semaphores is
mentioned below in Algorithm 3.6.
/* Producer-Consumer problem using semaphores */

#define N 100                  /* number of slots in the buffer */

typedef int semaphore;         /* semaphore (special kind of int variable) */
semaphore mutex = 1;           /* controls access to the critical region */
semaphore empty = N;           /* counts empty buffer slots */
semaphore full = 0;            /* counts full buffer slots */

void producer(void)
{
    int item;
    while (TRUE) {             /* TRUE is the constant 1 */
        item = item_produce(); /* produce item to be placed in the buffer */
        down(&empty);          /* decrement empty count */
        down(&mutex);          /* enter the critical region */
        put_item(item);        /* place the new item in the buffer */
        up(&mutex);            /* leave the critical region */
        up(&full);             /* increment count of full slots */
    }
}

void consumer(void)
{
    int item;
    while (TRUE) {             /* infinite loop */
        down(&full);           /* decrement full count */
        down(&mutex);          /* enter the critical region */
        item = take_item();    /* take item from the buffer */
        up(&mutex);            /* leave the critical region */
        up(&empty);            /* increment count of empty slots */
        item_consume(item);    /* consume the item removed from the buffer */
    }
}
Algorithm 3.6
The solution to the Reader/Writer problem (readers have priority) using wait and signal is
given below in Algorithm 3.7.
/* Reader/Writer problem (readers have priority) using wait and signal */

int count_read;
semaphore k = 1, wait_sem = 1;

void reader()
{
    while (true)
    {
        wait(k);
        count_read++;
        if (count_read == 1)
            wait(wait_sem);        /* first reader locks out writers */
        signal(k);
        PERFORM_READ();
        wait(k);
        count_read--;
        if (count_read == 0)
            signal(wait_sem);      /* last reader lets writers in */
        signal(k);
    }
}

void writer()
{
    while (true)
    {
        wait(wait_sem);
        PERFORM_WRITE();
        signal(wait_sem);
    }
}

void main()
{
    count_read = 0;
    parbegin(reader, writer);
}
Algorithm 3.7

Dining Philosophers Problem

A group of philosophers sit around a table discussing philosophical issues. A philosopher is in
one of two states, thinking or eating: if a philosopher is not eating, he must be thinking. A
plate of food is kept in front of each philosopher, and a fork is available to the right and left
of each philosopher's plate as shown in Figure 3.9. To start eating, a philosopher must pick up
two forks (one in each hand). The problem is to design a set of processes to represent the
philosophers such that each philosopher can eat periodically and none remains hungry.
Figure 3.9 Dining Philosopher’s Problem
The correctness condition is that a hungry philosopher should not face indefinite waits when (s)he
decides to eat. A solution which suffers from neither deadlock nor starvation is required.
Deadlock can be avoided using a standard deadlock avoidance approach; however, some scheme to
avoid starvation is also required. Consider a simple solution using philosopher processes of the
form given below:
repeat
    wait till the left fork becomes available;
    lift the left fork;
    wait till the right fork becomes available;
    lift the right fork;
    {eat}
    {think}
forever
This solution can suffer from deadlock: if all philosophers simultaneously lift their left forks,
none will be able to lift a right fork. Deadlock can be avoided by modifying the philosopher
process so that, if the right fork is not available, it puts down the left fork and retries the
attempt to eat at some time in the future. However, this solution suffers from starvation because
the same situation may occur again and again.
The solution given below avoids deadlocks because a philosopher picks up forks only if both are
available simultaneously. This ensures that at least some philosophers can eat at any time. Since
the availability of forks is checked within a critical section (CS), race conditions cannot
arise, and eating is performed outside the CS so that many philosophers can eat concurrently. The
solution does, however, have deficiencies. Because availability of forks is checked inside a CS,
only one philosopher can make this check at any time, even though philosophers who are not
neighbours should be able to perform these checks concurrently. The solution also suffers from
busy waits.
repeat
    if both forks are available then
        lift both forks;
        {eat}
        put down both forks;
    {think}
forever
The solution to the Dining Philosophers problem using semaphores is given in Algorithm 3.9.

/* Solution to the Dining Philosophers problem using semaphores */

#define N        5             /* number of philosophers */
#define LEFT     (i+N-1)%N     /* number of i's left neighbour */
#define RIGHT    (i+1)%N       /* number of i's right neighbour */
#define THINKING 0             /* philosopher is thinking */
#define HUNGRY   1             /* philosopher is trying to get forks */
#define EATING   2             /* philosopher is eating */

typedef int semaphore;         /* semaphores are special nonnegative integer variables */
int state[N];                  /* array to keep the status of each philosopher */
semaphore mutex = 1;           /* mutual exclusion for critical regions */
semaphore s[N];                /* one semaphore for each philosopher */

void philosopher(int i)        /* i: philosopher number, from 0 to N-1 */
{
    while (TRUE) {             /* repeat forever */
        think();               /* philosopher is thinking */
        acquire_forks(i);      /* acquire two forks or block */
        eat();                 /* philosopher is eating */
        release_forks(i);      /* put both forks back on the table */
    }
}

void acquire_forks(int i)      /* i: philosopher number, from 0 to N-1 */
{
    down(&mutex);              /* enter critical section */
    state[i] = HUNGRY;         /* record that philosopher i is hungry */
    test(i);                   /* try to acquire two forks */
    up(&mutex);                /* exit critical section */
    down(&s[i]);               /* block if the forks were not acquired */
}

void release_forks(int i)      /* i: philosopher number, from 0 to N-1 */
{
    down(&mutex);              /* enter critical section */
    state[i] = THINKING;       /* record that philosopher i is thinking */
    test(LEFT);                /* check if the left neighbour can eat now */
    test(RIGHT);               /* check if the right neighbour can eat now */
    up(&mutex);                /* exit critical section */
}

void test(int i)               /* i: philosopher number, from 0 to N-1 */
{
    if (state[i] == HUNGRY && state[LEFT] != EATING && state[RIGHT] != EATING) {
        state[i] = EATING;     /* both forks were free: start eating */
        up(&s[i]);             /* unblock philosopher i if necessary */
    }
}
Algorithm 3.9
3.6 Monitors
The semaphore approach, in particular, has many weaknesses. If P is omitted, then mutual
exclusion is not enforced. If V is omitted, then tasks blocked on P operations could be
deadlocked. Once a P begins, the user cannot back out and take an alternate course of action
while the semaphore remains in use. A task may wait on only one semaphore at a time; this could
lead to deadlock in resource allocation situations. The potential applications for concurrent
programming are numerous, and operating systems themselves are good examples of concurrent
systems. A monitor is a language construct which provides abstraction and encapsulation support
for building large concurrent systems. Monitors are used to implement operations on shared data
and synchronization between processes which use shared data. A monitor contains both the data and
the procedures needed to perform allocation of a particular serially reusable shared resource or
group of serially reusable shared resources. To accomplish a resource allocation function, a
process must call a particular monitor entry. Many processes may want to enter the monitor at
various times, but mutual exclusion is rigidly enforced at the monitor boundary: only one process
at a time is allowed to enter. Processes desiring to enter the monitor when it is already in use
must wait, and this waiting is automatically managed by the monitor. Calls on procedures of a
monitor are serviced in FIFO order to satisfy the bounded wait property. This feature is
implemented by maintaining a queue of processes which wish to execute monitor procedures. A
monitor is defined as a type in a program. It consists of four components:
1. Declaration of shared data
2. Initialization of shared data
3. Operations on shared data
4. Synchronization statements
The first three components resemble the corresponding components of a C++ or Java class. The
synchronization statements provide support for synchronization between processes using a monitor.
A program can create one or more instances of a monitor type; each instance is a monitor
variable. When a monitor variable is instantiated, the initialization code is executed to
initialize its shared data. Operations on shared data are coded as procedures in the monitor
type. These operations can be invoked by processes using the syntax:
<monitor name>.<operation name>
where <monitor name> is the name of a monitor variable. Events are called conditions in
monitors. A monitor may contain declarations of a few conditions. While executing a monitor
procedure, a process may decide to wait till a specific condition occurs; the process is then
blocked on that condition. It is activated when the condition occurs. Occurrence of a condition
is explicitly signaled by some process while executing a monitor procedure; thus both blocking on
a condition and signaling of its occurrence happen inside monitor procedures. Occurrence of a
condition activates one process blocked on it; it has no effect if no process is blocked on it.
Processes blocked in a monitor can be divided into two groups: some processes are blocked on
conditions, while others are waiting to enter the monitor, i.e. waiting to start execution of
some monitor procedure. To ensure bounded waits in the system, the implementation of a monitor
favors processes blocked on conditions over those wishing to enter the monitor. The monitor
solution for readers/writers is given in Algorithm 3.10, and the solution for the dining
philosophers problem using monitors is given in Algorithm 3.11.
/* Reader/Writer problem using monitors */

monitor readersandwriters {
    int number_readers = 0;
    boolean busy = FALSE;
    condition READY_TO_READ, READY_TO_WRITE;

public:
    readstart {
        if (busy || READY_TO_READ.queue) READY_TO_READ.wait;
        number_readers = number_readers + 1;
        READY_TO_READ.signal;
    };

    readfinish {
        number_readers = number_readers - 1;
        if (number_readers == 0) READY_TO_WRITE.signal;
    };

    writestart {
        if (number_readers != 0 || busy)
            READY_TO_WRITE.wait;
        busy = TRUE;
    };

    writefinish {
        busy = FALSE;
        if (READY_TO_WRITE.queue)
            READY_TO_WRITE.signal;
        else
            READY_TO_READ.signal;
    };
};
Algorithm 3.10
/* Monitor solution to the Dining Philosophers problem */

monitor dp
{
    enum {thinking, hungry, eating} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = hungry;
        test(i);
        if (state[i] != eating)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = thinking;
        test((i+4) % 5);           /* left neighbour */
        test((i+1) % 5);           /* right neighbour */
    }

    void test(int i) {
        if (state[(i+4) % 5] != eating &&
            state[i] == hungry &&
            state[(i+1) % 5] != eating) {
            state[i] = eating;
            self[i].signal();
        }
    }

    void init() {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
} /* end of monitor */

Each philosopher i invokes the monitor operations in the sequence:

dp.pickup(i);
...
eat
...
dp.putdown(i);

Exercises
3.1 What does the term co-operating process mean? Explain. Are peer threads co-operating in
this way?
3.2 Mention the basic requirements for the execution of concurrent processes.
3.3 What is a race condition in the case of concurrent execution of processes?
3.4 Define what a critical section is and explain at least two techniques used to solve
synchronization problems in critical sections.
3.5 Describe the dining philosophers' problem. Write C++ pseudo-code to implement a
solution to the problem that is free from deadlock (explain how deadlocks are avoided).
3.6 What is a semaphore? Define the wait operation wait(S) and the wakeup operation
signal(S). Give an explanation and an example of when and how they are used.
3.7 Explain how the method of critical regions is related to semaphores, and what are the
advantages of semaphores?
3.8 What is the distinction between blocking and nonblocking with respect to messages?
3.9 Explain what you understand by the term monitor.
3.10 Consider the following variation of two-phase locking. Each time a process fails to get a
lock and has to release all its locks and start over, it increases its age by 1. As soon as a
process reaches age 10, all other processes that are in phase 1 have to release all their
locks and wait (they do not start trying to get locks again). Then this process (the one with
age 10) starts trying to get locks, but when it cannot get a lock it just waits for it. When
this process completes phase 1 and enters phase 2, all the other processes in phase 1 can
start trying to get locks again. Does this algorithm prevent deadlock? Does it prevent
starvation? Explain your reasoning for each.
3.11 Show that, if the wait() and signal() operations are not atomic, mutual exclusion may be
violated.
3.12 What are the methods (operations) on a lock? Write the (correct) C/C++ code for the lock
you have implemented during the laboratory work.
*****