Module III
Process Synchronization:
❖ Synchronization:
❖ The critical section problem;
❖ Peterson’s solution;
❖ Synchronization hardware;
❖ Semaphores;
❖ Classical problems of synchronization;
❖ Monitors.
Deadlocks:
❖ Deadlocks; System model;
❖ Deadlock characterization;
❖ Methods for handling deadlocks;
❖ Deadlock prevention;
❖ Deadlock avoidance; Deadlock detection and recovery from deadlock.
MODULE 3
PROCESS SYNCHRONIZATION
A cooperating process is one that can affect or be affected by other processes executing
in the system. Cooperating processes can either directly share a logical address space
(that is, both code and data) or be allowed to share data only through files or messages.
Concurrent access to shared data may result in data inconsistency. To maintain data
consistency, various mechanisms are required to ensure the orderly execution of
cooperating processes that share a logical address space.
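The producer routine (a sketch in the usual shared-counter form, assuming the standard bounded-buffer declarations buffer[BUFFER_SIZE], in, out, counter, and a local variable nextProduced) is:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing: the buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

The consumer counterpart, which removes items and decrements counter, is: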
while (true) {
    while (counter == 0)
        ;   /* do nothing: buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}
Race Condition
Although the producer and consumer routines shown above are correct separately, they
may not function correctly when executed concurrently.
Illustration:
Suppose that the value of the variable counter is currently 5 and that the producer and
consumer processes execute the statements "counter++" and "counter--" concurrently.
The value of the variable counter may be 4, 5, or 6 but the only correct result is counter
== 5, which is generated correctly if the producer and consumer execute separately.
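One interleaving that produces such an incorrect result (a reconstruction of the usual timeline, with counter++ and counter-- expanded into their lower-level register operations) is:

T0: producer executes register1 = counter          {register1 = 5}
T1: producer executes register1 = register1 + 1    {register1 = 6}
T2: consumer executes register2 = counter          {register2 = 5}
T3: consumer executes register2 = register2 - 1    {register2 = 4}
T4: producer executes counter = register1          {counter = 6}
T5: consumer executes counter = register2          {counter = 4}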
Note: We arrive at the incorrect state "counter == 4", indicating that four buffers are
full, when, in fact, five buffers are full. If we reversed the order of the statements at T4
and T5, we would arrive at the incorrect state "counter == 6".
Definition - Race Condition: A situation where several processes access and
manipulate the same data concurrently, and the outcome of the execution depends on
the particular order in which the access takes place, is called a race condition.
To guard against the race condition, we must ensure that only one process at a time can
manipulate the variable counter. To make such a guarantee, the processes must be
synchronized in some way.
THE CRITICAL-SECTION PROBLEM
Each process has a segment of code, called a critical section, in which the process may be
changing common variables, updating a table, writing a file, and so on. When one process is
executing in its critical section, no other process is allowed to execute in its critical section.
Each process must request permission to enter its critical section. The section of code
implementing this request is the entry section.
The critical section may be followed by an exit section. The remaining code is the
remainder section.
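The general structure of a typical process Pi can therefore be sketched as follows (the section names are placeholders for the actual code):

do {
    entry section
    critical section
    exit section
    remainder section
} while (TRUE);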
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in their
remainder sections can participate in deciding which will enter its critical section next,
and this selection cannot be postponed indefinitely.
3. Bounded waiting: There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a request
to enter its critical section and before that request is granted.
PETERSON'S SOLUTION
Peterson's solution is restricted to two processes that alternate execution between their
critical sections and remainder sections. The processes are numbered P0 and P1, or Pi and
Pj, where j = 1 - i.
Peterson's solution requires the two processes to share two data items:
int turn;
boolean flag[2];
• turn: The variable turn indicates whose turn it is to enter its critical section. Ex:
if turn == i, then process Pi is allowed to execute in its criticalsection
• flag: The flag array is used to indicate if a process is ready to enter its critical
section. Ex: if flag [i] is true, this value indicates that Pi is ready to enter its critical
section.
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;   // do nothing
    // critical section
    flag[i] = FALSE;
    // remainder section
} while (TRUE);
• To enter the critical section, process Pi first sets flag[i] to be true and then sets
turn to the value j, thereby asserting that if the other process wishes to enter the
critical section, it can do so.
• If both processes try to enter at the same time, turn will be set to both i and j at
roughly the same time. Only one of these assignments will last; the other will
occur but will be overwritten immediately.
• The eventual value of turn determines which of the two processes is allowed to
enter its critical section first.
SYNCHRONIZATION HARDWARE
Hardware-based solutions to the critical-section problem are based on the idea of locking:
a process must acquire a lock before entering its critical section and release it when it
exits, as in the general structure below.
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
Definition:
boolean TestAndSet(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
Figure: The definition of the TestAndSet () instruction.
do {
    while (TestAndSet(&lock))
        ;   // do nothing
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
Figure: Mutual-exclusion implementation with TestAndSet ()
• The Swap() instruction operates on the contents of two words; it is defined as shown
below.
Definition:
void Swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
Figure: The definition of the Swap ( ) instruction
• The Swap() instruction is executed atomically. If the machine supports the Swap() instruction,
then mutual exclusion can be provided as follows.
• A global Boolean variable lock is declared and is initialized to false. In addition, each
process has a local Boolean variable key. The structure of process Pi is shown in below
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
The following algorithm, which uses TestAndSet(), also satisfies the bounded-waiting
requirement. The common data structures are:
boolean waiting[n];
boolean lock;
do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
    // critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
    // remainder section
} while (TRUE);
SEMAPHORE
A semaphore S is an integer variable that, apart from initialization, is accessed only through
two standard atomic operations: wait() and signal().
wait(S) {
    while (S <= 0)
        ;   // no-op
    S--;
}
signal(S) {
    S++;
}
All modifications to the integer value of the semaphore in the wait () and signal()
operations must be executed indivisibly. That is, when one process modifies the
semaphore value, no other process can simultaneously modify that same semaphore
value.
Binary semaphore
The value of a binary semaphore can range only between 0 and1.
Binary semaphores are known as mutex locks, as they are locks that provide mutual
exclusion. Binary semaphores to deal with the critical-section problem for multiple
processes. Then processes share a semaphore, mutex, initialized to1
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);
Counting semaphore
The value of a counting semaphore can range over an unrestricteddomain.
Counting semaphores can be used to control access to a given resource consisting
of a finite number ofinstances.
The semaphore is initialized to the number of resources available. Each process
that wishes to use a resource performs a wait() operation on the semaphore. When
a process releases a resource, it performs a signal()operation.
When the count for the semaphore goes to 0, all resources are being used. After
that, processes that wish to use a resource will block until the count becomes
greater than 0.
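As a sketch (using the wait() and signal() operations defined above, and a hypothetical semaphore named resources guarding N identical instances), each process would bracket its use of an instance like this:

/* semaphore resources initialized to N, the number of available instances */
wait(resources);      /* blocks (or spins) when all N instances are in use */
/* ... use one instance of the resource ... */
signal(resources);    /* return the instance to the pool */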
Implementation
The main disadvantage of the semaphore definition requires busywaiting.
While a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the entry code.
This continual looping is clearly a problem in a real multiprogramming system,
where a single CPU is shared among many processes.
Busy waiting wastes CPU cycles that some other process might be able to use
productively. This type of semaphore is also called a spinlock because the process
"spins" while waiting for thelock.
typedef struct {
    int value;
    struct process *list;
} semaphore;
Each semaphore has an integer value and a list of processes list. When a process must
wait on a semaphore, it is added to the list of processes. A signal() operation removes
one process from the list of waiting processes and awakens that process.
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
The block() operation suspends the process that invokes it. The wakeup(P)
operation resumes the execution of a blocked process P. These two operations are
provided by the operating system as basic systemcalls.
In this implementation semaphore values may be negative. If a semaphore value
is negative, its magnitude is the number of processes waiting on thatsemaphore.
Deadlock and Starvation
Consider two processes, P0 and P1, each accessing two semaphores, S and Q, both
initialized to 1:
P0                  P1
wait(S);            wait(Q);
wait(Q);            wait(S);
  .                   .
  .                   .
signal(S);          signal(Q);
signal(Q);          signal(S);
Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes
wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S),
it must wait until P0 executes signal(S). Since these signal() operations cannot be
executed, P0 and P1 are deadlocked.
Bounded-Buffer Problem
There are N buffers, each capable of holding one item.
Semaphore mutex initialized to the value 1.
Semaphore full initialized to the value 0.
Semaphore empty initialized to the value N.
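The producer and consumer structures are sketched below (in the same wait()/signal() pseudo-C style as the semaphore operations above; nextProduced and nextConsumed are the usual local item variables):

/* producer */
do {
    /* produce an item in nextProduced */
    wait(empty);        /* wait for a free buffer slot */
    wait(mutex);        /* enter the critical section on the buffer */
    /* add nextProduced to the buffer */
    signal(mutex);
    signal(full);       /* one more filled slot */
} while (TRUE);

/* consumer */
do {
    wait(full);         /* wait for a filled slot */
    wait(mutex);
    /* remove an item from the buffer into nextConsumed */
    signal(mutex);
    signal(empty);      /* one more empty slot */
    /* consume the item in nextConsumed */
} while (TRUE);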
Readers-Writers Problem
A data set is shared among a number of concurrent processes.
Readers – only read the data set; they do not perform any updates.
Writers – can both read and write.
Problem – allow multiple readers to read at the same time, but only one single writer
can access the shared data at a time.
Shared Data
Data set
Semaphore mutex initialized to 1.
Semaphore wrt initialized to 1.
Integer readcount initialized to 0.
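The writer and reader structures are sketched below (same wait()/signal() pseudo-C style; this is the first readers-writers variant, in which readers are given priority and writers may starve):

/* writer */
do {
    wait(wrt);
    /* writing is performed */
    signal(wrt);
} while (TRUE);

/* reader */
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);          /* the first reader locks out writers */
    signal(mutex);
    /* reading is performed */
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);        /* the last reader lets writers in */
    signal(mutex);
} while (TRUE);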
Dining-Philosophers Problem
Consider five philosophers who spend their lives thinking and eating. The philosophers share
a circular table surrounded by five chairs, each belonging to one philosopher. In the center of
the table is a bowl of rice, and the table is laid with five single chopsticks.
A philosopher gets hungry and tries to pick up the two chopsticks that are closest to her
(the chopsticks that are between her and her left and right neighbors). A philosopher may
pick up only one chopstick at a time. When a hungry philosopher has both her chopsticks
at the same time, she eats without releasing the chopsticks. When she is finished eating,
she puts down both chopsticks and starts thinking again.
It is a simple representation of the need to allocate several resources among several
processes in a deadlock-free and starvation-free manner.
Possible remedies to the deadlock problem are:
• Allow a philosopher to pick up her chopsticks only if both chopsticks are available.
• Use an asymmetric solution—that is, an odd-numbered philosopher picks up
first her left chopstick and then her right chopstick, whereas an even-numbered
philosopher picks up her right chopstick and then her left chopstick.
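A basic semaphore-based structure of philosopher i is sketched below (one semaphore per chopstick, semaphore chopstick[5], each element initialized to 1). Note that this simple version by itself can deadlock if all five philosophers pick up their left chopsticks at the same time, which is exactly what the remedies above address:

do {
    wait(chopstick[i]);                 /* pick up the left chopstick  */
    wait(chopstick[(i + 1) % 5]);       /* pick up the right chopstick */
    /* eat */
    signal(chopstick[i]);               /* put down the left chopstick  */
    signal(chopstick[(i + 1) % 5]);     /* put down the right chopstick */
    /* think */
} while (TRUE);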
Monitor
An abstract data type—or ADT—encapsulates data with a set of functions to operate
on that data that are independent of any specific implementation of the ADT.
A monitor type is an ADT that includes a set of programmer-defined operations that
are provided with mutual exclusion within the monitor. The monitor type also declares
the variables whose values define the state of an instance of that type, along with the
bodies of functions that operate on those variables.
The monitor construct ensures that only one process at a time is active within the
monitor.
Each philosopher i invokes the operations pickup() and putdown() in the following
sequence:
DiningPhilosophers.pickup(i);
    ...
    eat
    ...
DiningPhilosophers.putdown(i);
The ResourceAllocator monitor shown in the above Figure controls the allocation of a
single resource among competing processes.
A process that needs to access the resource in question must observe the following
sequence:
R.acquire(t);
...
access the resource;
...
R.release();
where R is an instance of type ResourceAllocator.
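The ResourceAllocator figure itself is not reproduced in these notes. Purely as an illustration, the following C/Pthreads sketch approximates that monitor for a single resource: the monitor's implicit mutual exclusion is made explicit with a mutex, a condition variable plays the role of the monitor's condition x, and the priority argument t of acquire(t) is omitted for brevity.

#include <pthread.h>

typedef struct {
    int busy;                 /* 1 while the single resource is allocated */
    pthread_mutex_t lock;     /* stands in for the monitor's mutual exclusion */
    pthread_cond_t  freed;    /* stands in for the monitor's condition variable */
} ResourceAllocator;

void ra_init(ResourceAllocator *r) {
    r->busy = 0;
    pthread_mutex_init(&r->lock, NULL);
    pthread_cond_init(&r->freed, NULL);
}

void ra_acquire(ResourceAllocator *r) {      /* corresponds to R.acquire() */
    pthread_mutex_lock(&r->lock);
    while (r->busy)                          /* wait until the resource is free */
        pthread_cond_wait(&r->freed, &r->lock);
    r->busy = 1;
    pthread_mutex_unlock(&r->lock);
}

void ra_release(ResourceAllocator *r) {      /* corresponds to R.release() */
    pthread_mutex_lock(&r->lock);
    r->busy = 0;
    pthread_cond_signal(&r->freed);          /* wake one waiting thread */
    pthread_mutex_unlock(&r->lock);
}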
The monitor concept cannot guarantee that the preceding access sequence will be
observed. In particular, the following problems can occur:
A process might access a resource without first gaining access permission to the
resource.
A process might never release a resource once it has been granted access to the
resource.
A process might attempt to release a resource that it never requested.
A process might request the same resource twice (without first releasing the
resource).
DEADLOCKS
A process requests resources; if the resources are not available at that time, the process enters a
waiting state. Sometimes, a waiting process is never again able to change state, because the
resources it has requested are held by other waiting processes. This situation is called a deadlock.
SYSTEM MODEL
Under the normal mode of operation, a process may utilize a resource in only the following
sequence:
1. Request: The process requests the resource. If the request cannot be granted
immediately, then the requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource.
A set of processes is in a deadlocked state when every process in the set is waiting for an event
that can be caused only by another process in the set. The events with which we are mainly
concerned here are resource acquisition and release. The resources may be either physical
resources or logical resources.
Deadlocks may also involve different resource types. For example, consider a system with one
printer and one DVD drive. Suppose that process Pi is holding the DVD and process Pj is holding
the printer. If Pi requests the printer and Pj requests the DVD drive, a deadlock occurs.
DEADLOCK CHARACTERIZATION
Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a non-sharable mode, that is,
only one process at a time can use the resource. If another process requests that resource,
the requesting process must be delayed until the resource has been released.
2. Hold and wait: A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular wait: A set {P0, Pl, ... , Pn} of waiting processes must exist such that Po is waiting
for a resource held by P1, P1 is waiting for a resource held by P2, ... , Pn-1 is waiting for a
resource held by Pn and Pn is waiting for a resource held by Po.
Resource-Allocation Graph
The graph consists of a set of vertices V and a set of edges E. The set of vertices V is
partitioned into two different types of nodes:
• P = {P1, P2, ...,Pn}, the set consisting of all the active processes in the system.
• R = {R1, R2, ..., Rm} the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj it signifies that process
Pi has requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi is denoted by Rj → Pi it signifies that an
instance of resource type Rj has been allocated to process Pi.
• A directed edge Pi → Rj is called a Request Edge.
• A directed edge Rj → Pi is called an Assignment Edge.
Pictorially each process Pi as a circle and each resource type Rj as a rectangle. Since resource
type Rj may have more than one instance, each instance is represented as a dot within the
rectangle.
A request edge points to only the rectangle Rj, whereas an assignment edge must also designate
one of the dots in the rectangle.
When process Pi requests an instance of resource type Rj, a request edge is inserted in the
resource-allocation graph. When this request can be fulfilled, the request edge is instantaneously
transformed to an assignment edge. When the process no longer needs access to the resource, it
releases the resource; as a result, the assignment edge is deleted.
Resource instances:
• One instance of resource type R1
• Two instances of resource type R2
• One instance of resource type R3
• Three instances of resource type R4
Process states:
• Process P1 is holding an instance of resource type R2 and is waiting for an instance of
resource type R1.
• Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an
instance of R3.
• Process P3 is holding an instance of R3.
• If each resource type has exactly one instance, then a cycle implies that a deadlock has
occurred. If the cycle involves only a set of resource types, each of which has only a
single instance, then a deadlock has occurred. Each process involved in the cycle is
deadlocked.
• If each resource type has several instances, then a cycle does not necessarily imply that
a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a
sufficient condition for the existence of deadlock.
Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held
by process P3. Process P3 is waiting for either process P1 or process P2 to release resource R2.
In addition, process P1 is waiting for process P2 to release resource R1.
Consider the resource-allocation graph in below Figure. In this example also have a cycle:
P1→R1→P3→R2→P1
However, there is no deadlock. Observe that process P4 may release its instance of resource
type R2. That resource can then be allocated to P3, breaking the cycle.
To ensure that deadlocks never occur, the system can use either deadlock prevention or a
deadlock-avoidance scheme.
Deadlock prevention provides a set of methods for ensuring that at least one of the necessary
conditions cannot hold. These methods prevent deadlocks by constraining how requests for
resources can be made.
In the absence of algorithms to detect and recover from deadlocks, the system may be in a
deadlock state yet have no way of recognizing what has happened. In this case, the undetected
deadlock will result in deterioration of the system's performance, because resources are being
held by processes that cannot run and because more and more processes, as they make requests
for resources, will enter a deadlocked state. Eventually, the system will stop functioning and will
need to be restarted manually.
DEADLOCK PREVENTION
Deadlock can be prevented by ensuring that at least one of the four necessary conditions cannot
hold.
Mutual Exclusion
• The mutual-exclusion condition must hold for non-sharable resources. Sharable resources
do not require mutually exclusive access and thus cannot be involved in a deadlock.
• Ex: Read-only files are example of a sharable resource. If several processes attempt to
open a read-only file at the same time, they can be granted simultaneous access to the file.
A process never needs to wait for a sharable resource.
• Deadlocks cannot be prevented by denying the mutual-exclusion condition, because some
resources are intrinsically non-sharable.
Hold and Wait
To ensure that the hold-and-wait condition never occurs, it must be guaranteed that, whenever
a process requests a resource, it does not hold any other resources. One protocol requires each
process to request and be allocated all its resources before it begins execution; an alternative
protocol allows a process to request resources only when it has none.
Ex:
• Consider a process that copies data from a DVD drive to a file on disk, sorts the file, and
then prints the results to a printer. If all resources must be requested at the beginning of
the process, then the process must initially request the DVD drive, disk file, and printer.
It will hold the printer for its entire execution, even though it needs the printer only at the
end.
• The second method allows the process to request initially only the DVD drive and disk
file. It copies from the DVD drive to the disk and then releases both the DVD drive and
the disk file. The process must then again request the disk file and the printer. After
copying the disk file to the printer, it releases these two resources and terminates.
No Preemption
The third necessary condition for deadlocks is that there be no preemption of resources that have
already been allocated.
To ensure that this condition does not hold, the following protocols can be used:
• If a process is holding some resources and requests another resource that cannot be
immediately allocated to it, then all resources the process is currently holding are
preempted.
• The preempted resources are added to the list of resources for which the process is waiting.
The process will be restarted only when it can regain its old resources, as well as the new
ones that it is requesting.
If a process requests some resources, first check whether they are available. If they are, allocate
them.
If they are not available, check whether they are allocated to some other process that is waiting
for additional resources. If so, preempt the desired resources from the waiting process and
allocate them to the requesting process.
If the resources are neither available nor held by a waiting process, the requesting process must
wait. While it is waiting, some of its resources may be preempted, but only if another process
requests them.
A process can be restarted only when it is allocated the new resources it is requesting and recovers
any resources that were preempted while it was waiting.
Circular Wait
One way to ensure that this condition never holds is to impose a total ordering of all resource
types and to require that each process requests resources in an increasing order of enumeration.
To illustrate, let R = {R1, R2, ..., Rm} be the set of resource types. Assign a unique integer
number to each resource type, which allows us to compare two resources and to determine whether
one precedes another in the ordering. Formally, it is defined as a one-to-one function
F: R → N, where N is the set of natural numbers.
Example: if the set of resource types R includes tape drives, disk drives, and printers, then the
function F might be defined as follows:
F (tape drive) = 1
F (disk drive) = 5
F (printer) = 12
Now consider the following protocol to prevent deadlocks. Each process can request resources
only in an increasing order of enumeration. That is, a process can initially request any number of
instances of a resource type Ri. After that, the process can request instances of resource type Rj
if and only if F(Rj) > F(Ri).
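As an illustration only (the resources and F values are the hypothetical ones above), a process that needs both the disk drive (F = 5) and the printer (F = 12) must always acquire them in that order, never the reverse:

#include <pthread.h>

/* Model the two resources as locks; F(disk drive) = 5 < F(printer) = 12,
   so every process must lock disk_drive before printer. */
pthread_mutex_t disk_drive = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t printer    = PTHREAD_MUTEX_INITIALIZER;

void copy_and_print(void) {
    pthread_mutex_lock(&disk_drive);   /* lower F value acquired first   */
    pthread_mutex_lock(&printer);      /* higher F value acquired second */
    /* ... use both resources ... */
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&disk_drive);
}

/* Because no process ever requests the disk drive while holding the printer,
   a circular wait over these two resources cannot arise. */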
DEADLOCK AVOIDANCE
Safe State
• Safe state: A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock. A system is in a safe state only if
there exists a safe sequence.
• Safe sequence: A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current
allocation state if, for each Pi, the resource requests that Pi can still make can be satisfied
by the currently available resources plus the resources held by all Pj, with j < i.
In this situation, if the resources that Pi needs are not immediately available, then Pi can wait
until all Pj have finished. When they have finished, Pi can obtain all of its needed resources,
complete its designated task, return its allocated resources, and terminate. When Pi terminates,
Pi+1 can obtain its needed resources, and so on. If no such sequence exists, then the system state
is said to be unsafe.
A safe state is not a deadlocked state. Conversely, a deadlocked state is an unsafe state. Not all
unsafe states are deadlocks, as shown in the figure: an unsafe state may lead to a deadlock. As long
as the state is safe, the operating system can avoid unsafe (and, therefore, deadlocked) states.
Resource-Allocation-Graph Algorithm
• If a resource-allocation system has only one instance of each resource type, then a
variant of the resource-allocation graph is used for deadlock avoidance.
• In addition to the request and assignment edges, a new type of edge is introduced, called
a claim edge.
• A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in
the future. This edge resembles a request edge in direction but is represented in the
graph by a dashed line.
• When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request
edge. When a resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to
a claim edge Pi → Rj.
Note that the resources must be claimed a priori in the system. That is, before process Pi starts
executing, all its claim edges must already appear in the resource-allocation graph.
We can relax this condition by allowing a claim edge Pi → Rj to be added to the graph only if
all the edges associated with process Pi are claim edges.
Now suppose that process Pi requests resource Rj. The request can be granted only if
converting the request edge Pi → Rj to an assignment edge Rj → Pi does not result in the
formation of a cycle in the resource-allocation graph.
Safety is checked by using a cycle-detection algorithm. An algorithm for detecting a cycle in
this graph requires an order of n² operations, where n is the number of processes in the
system.
• If no cycle exists, then the allocation of the resource will leave the system in a safe state.
• If a cycle is found, then the allocation will put the system in an unsafe state. In that case,
process Pi will have to wait for its requests to be satisfied.
To illustrate this algorithm, consider the resource-allocation graph as shown above. Suppose that
P2 requests R2. Although R2 is currently free, we cannot allocate it to P2, since this action will
create a cycle in the graph.
A cycle indicates that the system is in an unsafe state. If P1 requests R2, and P2 requests R1,
then a deadlock will occur.
Banker's Algorithm
The Banker’s algorithm is applicable to a resource allocation system with multiple instances of
each resource type.
• When a new process enters the system, it must declare the maximum number of instances
of each resource type that it may need. This number may not exceed the total number of
resources in the system.
• When a user requests a set of resources, the system must determine whether the allocation
of these resources will leave the system in a safe state. If it will, the resources are allocated;
otherwise, the process must wait until some other process releases enough resources.
To implement the banker's algorithm the following data structures are used.
Available: A vector of length m indicates the number of available resources of each type. If
available [j] = k, there are k instances of resource type Rj available.
Max: An n x m matrix defines the maximum demand of each process. If Max[i,j] = k,
then process Pi may request at most k instances of resource type Rj.
Allocation: An n x m matrix defines the number of resources of each type currently allocated
to each process. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
Need: An n x m matrix indicates the remaining resource need of each process. If Need[i,j] = k,
then Pi may need k more instances of Rj to complete its task.
Safety Algorithm
The safety algorithm finds out whether or not a system is in a safe state. It can be described as
follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available
and Finish[i] = false for i = 1, 2, ..., n.
2. Find an index i such that both Finish[i] == false and Needi ≤ Work. If no such i exists, go to
step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
This algorithm may require an order of m × n² operations to determine whether a state is safe.
Resource-Request Algorithm
Let Requesti be the request vector for process Pi. When a request for resources is made by
process Pi, the following actions are taken:
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available.
3. Have the system pretend to have allocated the requested resources to Pi by modifying the
state as follows:
Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi - Requesti;
If the resulting resource-allocation state is safe, the resources are allocated to Pi; if the new state
is unsafe, then Pi must wait and the old resource-allocation state is restored.
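A compact C sketch of the safety algorithm together with this resource-request check is given below (for illustration; the fixed sizes N and M, the global matrices, and the function names are assumptions made here to mirror the data structures defined above):

#include <stdbool.h>

#define N 5   /* number of processes      */
#define M 3   /* number of resource types */

int available[M];
int max[N][M], allocation[N][M], need[N][M];   /* need[i][j] = max[i][j] - allocation[i][j] */

/* Safety algorithm: returns true if the current state is safe */
bool is_safe(void) {
    int work[M];
    bool finish[N] = { false };
    for (int j = 0; j < M; j++) work[j] = available[j];

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                           /* pretend Pi runs to completion */
                for (int j = 0; j < M; j++) work[j] += allocation[i][j];
                finish[i] = true; done++; progressed = true;
            }
        }
        if (!progressed) return false;               /* no candidate: state is unsafe */
    }
    return true;
}

/* Resource-request algorithm for process i and request vector req */
bool request_resources(int i, int req[M]) {
    for (int j = 0; j < M; j++)
        if (req[j] > need[i][j]) return false;       /* error: exceeded maximum claim */
    for (int j = 0; j < M; j++)
        if (req[j] > available[j]) return false;     /* Pi must wait: not available   */
    for (int j = 0; j < M; j++) {                    /* pretend to allocate           */
        available[j]     -= req[j];
        allocation[i][j] += req[j];
        need[i][j]       -= req[j];
    }
    if (is_safe()) return true;                      /* grant the request */
    for (int j = 0; j < M; j++) {                    /* unsafe: roll back, Pi must wait */
        available[j]     += req[j];
        allocation[i][j] -= req[j];
        need[i][j]       += req[j];
    }
    return false;
}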
Example
Consider a system with five processes P0 through P4 and three resource types A, B, and C.
Resource type A has ten instances, resource type B has five instances, and resource type C
has seven instances. Suppose that, at time T0, the following snapshot of the system has been
taken:
The system is currently in a safe state. Indeed, the sequence <P1, P3, P4, P2, P0> satisfies the
safety criteria.
Suppose now that process P1 requests one additional instance of resource type A and two
instances of resource type C, so Request1 = (1,0,2). Decide whether this request can be
immediately granted.
Then pretend that this request has been fulfilled, and the following new state is arrived.
Executing safety algorithm shows that sequence <P1, P3, P4, P0, P2> satisfies safety
requirement.
DEADLOCK DETECTION
If a system does not employ either a deadlock-prevention or a deadlock avoidance algorithm,
then a deadlock situation may occur. In this environment, the system may provide:
• An algorithm that examines the state of the system to determine whether a deadlock has
occurred
• An algorithm to recover from the deadlock
• If all resources have only a single instance, then define a deadlock detection algorithm
that uses a variant of the resource-allocation graph, called a wait-for graph.
• This graph is obtained from the resource-allocation graph by removing the resource nodes
and collapsing the appropriate edges.
• An edge from Pi to Pj in a wait-for graph implies that process Pi is waiting for process Pj
to release a resource that Pi needs. An edge Pi → Pj exists in a wait-for graph if and only if
the corresponding resource-allocation graph contains two edges Pi → Rq and Rq → Pj for some
resource Rq.
Example: In below Figure, a resource-allocation graph and the corresponding wait-for graph is
presented.
• A deadlock exists in the system if and only if the wait-for graph contains a cycle. To
detect deadlocks, the system needs to maintain the wait-for graph and periodically
invoke an algorithm that searches for a cycle in the graph.
• An algorithm to detect a cycle in a graph requires an order of n² operations, where n is
the number of vertices in the graph.
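For illustration (assuming the wait-for graph is stored as a simple adjacency matrix wfg[i][j], set to 1 when Pi waits for Pj; the names and sizes here are hypothetical), a DFS-based cycle check of the kind described above could be sketched as:

#include <stdbool.h>

#define N 5                              /* number of processes */
int wfg[N][N];                           /* wfg[i][j] = 1  iff  Pi waits for Pj */

static bool dfs(int v, int color[]) {
    color[v] = 1;                        /* 1 = on the current DFS path */
    for (int w = 0; w < N; w++) {
        if (!wfg[v][w]) continue;
        if (color[w] == 1) return true;  /* back edge: a cycle (deadlock) exists */
        if (color[w] == 0 && dfs(w, color)) return true;
    }
    color[v] = 2;                        /* 2 = fully explored */
    return false;
}

bool wait_for_graph_has_cycle(void) {
    int color[N] = { 0 };                /* 0 = not yet visited */
    for (int v = 0; v < N; v++)
        if (color[v] == 0 && dfs(v, color))
            return true;
    return false;
}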
A deadlock detection algorithm that is applicable to several instances of a resource type. The
algorithm employs several time-varying data structures that are similar to those used in the
banker's algorithm.
Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available.
For i = 1, 2, ..., n, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both Finish[i] == false and Requesti ≤ Work. If no such i exists, go
to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover,
if Finish[i] == false, then process Pi is deadlocked.
The algorithm requires an order of m × n² operations to detect whether the system is in a
deadlocked state.
Consider a system with five processes P0 through P4 and three resource types A, B, and C.
Resource type A has seven instances, resource type B has two instances, and resource type
C has six instances. Suppose that, at time T0, the resource-allocation state is as follows:
After executing the algorithm, Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for
all i
Suppose now that process P2 makes one additional request for an instance of type C. The
Request matrix is modified as follows:
The system is now deadlocked. Although we can reclaim the resources held by process P0, the
number of available resources is not sufficient to fulfill the requests of the other processes.
Thus, a deadlock exists, consisting of processes P1, P2, P3, and P4.
Detection-Algorithm Usage
If deadlocks occur frequently, then the detection algorithm should be invoked frequently.
Resources allocated to deadlocked processes will be idle until the deadlock can be broken.
If detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and
so we would not be able to tell which of the many deadlocked processes “caused” the deadlock.
RECOVERY FROM DEADLOCK
When a detection algorithm determines that a deadlock exists, the system can recover from the
deadlock automatically. There are two options for breaking a deadlock: one is simply to abort
one or more processes to break the circular wait; the other is to preempt some resources from
one or more of the deadlocked processes.
Process Termination
To eliminate deadlocks by aborting a process, use one of two methods. In both methods, the
system reclaims all resources allocated to the terminated processes.
1. Abort all deadlocked processes: This method clearly will break the deadlock cycle, but
at great expense; the deadlocked processes may have computed for a long time, and the
results of these partial computations must be discarded and probably will have to be
recomputed later.
2. Abort one process at a time until the deadlock cycle is eliminated: This method
incurs considerable overhead, since after each process is aborted, a deadlock-detection
algorithm must be invoked to determine whether any processes are still deadlocked.
If the partial termination method is used, then we must determine which deadlocked process (or
processes) should be terminated. Many factors may affect which process is chosen, including the
priority of the process, how long the process has computed, how many and what types of resources
the process has used, and how many more resources the process needs in order to complete.
Resource Preemption
To eliminate deadlocks using resource preemption, resources are successively preempted from
some processes and given to other processes until the deadlock cycle is broken. Three issues need
to be addressed:
1. Selecting a victim. Which resources and which processes are to be preempted? The order of
preemption should be chosen to minimize cost.
2. Rollback. If a resource is preempted from a process, the process cannot continue with its
normal execution; it must be rolled back to some safe state and restarted from that state.
3. Starvation. How do we ensure that starvation will not occur? That is, how can we
guarantee that resources will not always be preempted from the same process?
QUESTION BANK
1. What are semaphores? Explain two primitive semaphore operations. What are its advantages?
2. Explain any one classical synchronization problem used for testing a newly proposed synchronization scheme.
3. Explain the three requirements that a solution to the critical-section problem must satisfy.
4. State Dining Philosopher’s problem and give a solution using semaphores. Write structure of
philosopher.
5. What do you mean by binary semaphore and counting semaphore? With a C struct, explain the
implementation of wait() and signal(). Explain the semaphore as a general synchronization tool.
6. Describe the term monitor. Explain the solution to the dining-philosophers problem using a monitor.
7. What are semaphores? Explain solution to producer-consumer problem using semaphores
8. What is critical section? Explain the various methods to implement process synchronization.
9. Explain the various classical synchronization problems.
10. What are deadlocks? What are its characteristics? Explain the necessary conditions forits
occurrence.
11. Define Semaphores. Explain its usage and implementation.
12. Explain Reader-Writer problem with semaphore in detail.
13. What are monitors? Explain the dining-philosophers solution using a monitor.
14. Illustrate how Reader's-Writer's problem can be solved by using semaphores.
15. Explain the process of recovery from deadlock.
16. Describe RAG:
i) With deadlock
ii) With a cycle but no deadlock
17. What is a Resource Allocation Graph (RAG)? Explain how a RAG is very useful in describing
a deadly embrace (deadlock) by considering your own example.
18. With the help of a system model, explain a deadlock and explain the necessary conditions that
must hold simultaneously in a system for a deadlock to occur.
19. Explain how deadlock can be prevented by ensuring that the four necessary conditions cannot hold.
20. Using the Banker's algorithm, determine whether the system is in a safe state.
21. How is a system recovered from deadlock? Explain the different methods used to recover from
deadlock.
22. Explain deadlock detection with algorithm and example
Additional Questions:
1. Define the terms: safe state and safe sequence. Give an algorithm to find whether or not a
system is in a safe state.
2. Find the need matrix and calculate the safe sequence by using the Banker's algorithm. State
whether the system below is safe or not safe.
Jobs    Allocation      Maximum         Available
        R1  R2  R3      R1  R2  R3      R1  R2  R3
J1      0   1   0       7   5   3
J2      2   0   0       3   2   2       3   3   2
J3      3   0   2       9   0   2
J4      2   1   1       2   2   2
J5      0   0   2       4   3   3
3. Given the memory partitions of 100 K, 500 K, 200 K, 300 K and 600 K apply first fit, best fit
and worst fit algorithms to place 212K, 417K, 112K and 426K.
4. Explain the structure of page table.
5. Assume that there are 5 processes P0 through P4 and 4 types of resources. At time T0 we
have the following state:
Process    Allocation      Maximum         Available
           A  B  C  D      A  B  C  D      A  B  C  D
P0         0  0  1  2      0  0  1  2      1  5  2  0
P1         1  0  0  0      1  7  5  0
P2         1  3  5  4      2  3  5  6
P3         0  6  3  2      0  6  5  2
P4         0  0  1  4      0  6  5  6
Apply Banker's algorithm to answer the following:
i) What is the content of need matrix?
ii) Is the system in a safe state?
iii) If a request from a process P1(0, 4, 2, 0) arrives, can it be granted?
6. Consider the following snapshot of a system. Find the need matrix and calculate the safe
sequence using the Banker's algorithm; state whether the system is safe or not safe.
Process    Allocation     Maximum        Available
           A  B  C        A  B  C        A  B  C
P0         0  0  2        0  0  4
P1         1  0  0        2  0  1
P2         1  3  5        1  3  7        1  0  2
P3         6  3  2        8  4  2
P4         1  4  3        1  5  7