CHAPTER 2

PROCESS
MANAGEMENT

PROCESS CONCEPT
STATES OF A PROCESS
PROCESS CONTROL BLOCK (PCB)
THREADS
OPERATIONS ON PROCESSES
INDEPENDENT AND COOPERATING PROCESSES
INTERPROCESS COMMUNICATION
SCHEDULING OF PROCESSES
SCHEDULING QUEUES
SCHEDULERS
CONTEXT SWITCH
SCHEDULING ALGORITHMS
ALGORITHM EVALUATION
2.1 PROCESS CONCEPT

Computer systems of the present generation have greater capabilities when compared to earlier systems. With earlier systems it was possible for us to execute only one program at a time. The executing program was allocated all the resources available in the system. However, today it is possible for us to execute multiple programs at any instant. This creates a requirement to have greater control over program execution. Thus the concept of a process is introduced.

A process is a unit of work in a modern computer system. It is basically a program in the state of execution, which includes the executable program, the program's data and related data structures such as the stack, and all other information needed to run the program. The operating system is responsible for managing the process; it should provide it with a secure and fair space for execution. This means that the program has to run free of any modification or interference from any other program or hardware device on the computer. All work done by the CPU contributes to the execution of processes.

If one process is allowed to modify another process, it could change variables, alter functions, and corrupt memory, leading to failures and unexpected behavior. The Operating System also must protect itself from other processes, because any damage one process can do to another process can also be done to the Operating System.

The Operating System needs to assure that if a process requests access to a device or a resource, it should eventually be granted that request. This should occur regardless of the demands of other processes on the machine. No one process should be able to claim resources indefinitely, blocking other processes from accessing them.

A process is the most fundamental concept for scheduling of the CPU in an Operating System. A process may be independent of other processes in the system, or may interact with some processes. An interaction may involve the sharing of data or the exchange of messages to coordinate its activities with the activities of the other processes.

The relationship between a process and a program, in simple terms, is that a program does not compete for resources like the CPU or the memory, whereas a process does. A program is basically a passive entity; it may exist on paper or may reside on the disk. It just represents a set of instructions to solve a given problem. It may be compiled or tested, but it still does not compete for the CPU, memory and other resources. A process is a dynamic activity. A process's current activity is represented by the values in the program counter and the other associated registers. Associated with a process are a number of resources available with the system. Once a user wants to execute a program, it is located on the disk and then loaded into the main memory; at this instant it becomes a process. A passive entity such as a program has now been converted into a dynamic activity, because it is then an entity which competes for CPU time. Many definitions have been given for a process, but we still define a process as "a program in execution", which competes for the CPU time and other resources.

2.2 STATES OF A PROCESS

We have discussed how the Operating System deals with one process. This process spends its time doing two things: computing and processing within its program state, and waiting for the Operating System to fulfill its requests. While the process is waiting on a device, the Operating System is not doing anything and the CPU is idle. If there is only one process on the machine, this is not a problem. But if you have more than one process to run, there is no point in the system being idle, waiting for a device to service one process's request. What the Operating System can do is, while one process waits on a device, run another process on the CPU.

The Operating System keeps track of the processes under its control by classifying them in terms of their state. We can see that processes can be in three different states: running on the CPU, ready to run, and waiting.

During its existence, a process goes through a series of discrete states. Various events can cause a process to change states. The state of the process is defined in part by the current activity of that process. Each process may be in one of the following states.

[Figure 2.1 shows the state diagram: New moves to Ready when accepted; Ready moves to Running when dispatched; Running moves back to Ready on an interrupt, to Waiting on an I-O wait, and to Terminated on exit; Waiting moves to Ready on I-O completion.]

Figure 2.1 States of a process

STATE        ACTIVITY
NEW          The process is being created.
RUNNING      The process is currently performing its activities; the instructions of the process are being executed.
WAITING      The process is waiting for the occurrence of some event, such as the completion of an I-O operation.
READY        The process is waiting to be assigned a processor.
TERMINATED   The process has finished execution.

The NEW state indicates that a passive program has just been converted into a dynamic activity called a process. In the RUNNING state the process possesses all the resources required for its execution, including the CPU. The running process executes a sequence of instructions and may go to the WAIT state when it calls on the operating system to perform an input-output operation. In a normal sequence of operations only one process may be using the resources and the CPU; however, there may be any number of processes which are ready for execution, and they are in the READY state. Once the I-O operation is complete, a control signal is sent and the process is transferred to the READY state. The READY list is maintained in priority order, and the first process in that list receives the processor. Once the execution of the process is completed, it goes to the TERMINATED state. It is important to realize that only one process can be running on any processor at any instant; however, many processes may be ready and waiting.

2.3 PROCESS CONTROL BLOCK (PCB)

Each process which has been created is identified with the help of a number of pieces of information. The information identifies the current state of the process. The required information is stored in a data structure called the Process Control Block, or the PCB. The Operating System uses the PCB to keep track of the status of the process. Some of the information which may be included in the PCB is:

OWNER                  - The user that owns the process
PROCESS ID             - A unique identifier for the process
STATE                  - The program's state
REGISTERS              - Part of the process state
PROCESS MEMORY         - Another part of the process state
DEVICES INFORMATION    - Files and devices a process holds open
SCHEDULING INFORMATION - Information for scheduling
ACCOUNTING INFORMATION - The process's resource usage

The owner field can be used to determine the process's permissions, such as which files and devices it has access to, which processes it has control over, and so on. Scheduling information is important for the operating system in determining when a program gets run. Accounting information is an administrative tool for keeping track of how a process has been behaving, and is mostly useful for charging users for resource usage and for helping administrators configure the Operating System for the processes' usage patterns. I-O information is there so the Operating System can tell at a glance whether the process has any open files or devices, and is also used to make sure that when a process exits without having closed them, the operating system will do it.

Basically, the PCB is used to keep track of any administrative details that the operating system will want to know at a glance. This includes details that must be associated with the process, as well as details that are difficult to figure out through other means. This allows the operating system to manage processes smoothly and easily. When the operating system switches from one process to another process, the data of the current process is saved in a separate area called the save area. The saved data is then used to restart the process when the process next gets the CPU.

Process number
Process state
Program counter
Registers
Memory limits
List of open files

Figure 2.2 Process Control Block

Since the PCB needs to be manipulated very frequently, in most computers a separate hardware register is maintained to contain the address of the currently used PCB. Separate hardware instructions are made available to store and retrieve information from the PCB.

2.4 THREADS

A thread basically represents a thread of execution. Threads are a way for a program to split itself into two or more simultaneously running tasks. Threads and processes differ from one operating system to another, but in general, the way that a thread is created and shares its resources is different from the way a process does.

Till now we have understood that a process is a program in the state of execution, i.e., it performs a single thread of execution. This single thread of control allows the process to execute only one task at a time.

Multiple threads can be executed in parallel on many computer systems. This multithreading generally occurs by time slicing, where a single processor switches between different threads, in which case the processing is not literally simultaneous, for the single processor is really doing only one thing at a time. This switching can happen so fast as to give the illusion of simultaneity to an end user.
For example, we can perform audio playback, type in a document, and also listen to music using a media player program. Though the user experiences these things as simultaneous, in truth the processor quickly switches back and forth between these separate processes.

2.4.1 Processes and Threads


In many respects threads operate in the same way as processes. Some of the similarities and differences are:
a. Similarities between processes and threads

1. Like processes, threads share the CPU, and only one thread is active (running) at a time.
2. Like processes, threads within a process execute sequentially.
3. Like processes, a thread can create children.
4. And like processes, if one thread is blocked, another thread can run.

b. Differences between processes and threads


1. Unlike processes, threads are not independent of one another.
2. Unlike processes, all threads can access every address in the task.
3. Unlike processes, threads are designed to assist one another. Note that processes might or might not assist one another, because processes may originate from different users.
2.4.2 Why Threads?

Following are some reasons why we use threads in designing operating systems.

1. A process with multiple threads makes a great server, for example a printer server.
2. Because threads can share common data, they do not need to use interprocess communication.
3. Because of their very nature, threads can take advantage of multiprocessors.

Threads are cheap in the sense that:

1. They only need a stack and storage for registers; therefore, threads are cheap to create.
2. Threads use very few resources of the operating system in which they are working. That is, threads do not need a new address space, global data, program code or operating system resources.
3. Context switching is fast when working with threads, because we only have to save and/or restore the PC, the SP and the registers.

But this cheapness does not come free: the biggest drawback is that there is no protection between threads.

2.5 OPERATIONS ON PROCESSES


2.5.1 Creating a process
The creation of a process requires the following steps. The order in which they are carried out is not necessarily the same in all cases.

1. Name: The name of the program which is to run as a new process must be known.
2. Process ID and Process Control Block: The system creates a new process control block, or locates an unused block in an array. This block is used to follow the execution of the program through its course, keeping track of its resources and priority. Each process control block is labeled by its PID or process identifier.
3. Locate the program: Identify the program to be executed on disk and allocate memory for the code segment in RAM.
4. Load the program.
5. Priority: A priority must be computed for the process, using a default for the type of process.
6. Schedule the process for execution.
In Unix, a new process is started with the fork() system call. It starts a new process running in the same program, starting at the statement immediately following the fork() call. After the call, both the parent (the process that called fork()) and the child are executing at the same point in the program. The child is given its own memory space, which is initialized with an exact copy of the memory space (globals, stack, heap objects) of the parent. Thus the child looks like an exact clone of the parent, and indeed, it is hard to tell them apart. The only difference is that fork() returns 0 in the child, but a non-zero value in the parent. The child process may then be overlaid with different code, so that the child process becomes a different process.

2.5.2 Process Termination

Again, from the outside there appear to be several termination mechanisms:

1. Normal exit (voluntary)
2. Error exit (voluntary)
3. Fatal error (involuntary)
4. Killed by another process (involuntary)

And again, internally the situation is simpler. In Unix terminology, there are two system calls, kill and exit, that are used. Kill (poorly named in my view) sends a signal to another process. If this signal is not caught (via the signal system call), the process is terminated. There is also an "uncatchable" signal. Exit is used for self termination and can indicate success or failure.

2.6 INDEPENDENT AND COOPERATING PROCESSES


An independent process is one that is independent of the rest of the universe. Its state is not shared in any way by any other process, and its input state alone determines its results. Such a process can stop and restart with no bad effects (only the rate of execution varies); for example, a program that sums the integers from 1 to n (the input).
2.6.1 Cooperating processes
A process is cooperating if it can affect or be affected by the other processes executing in the system. The machine must model the social structures of the people that use it: people cooperate, so the machine must support that cooperation. Cooperating processes are those that share state. Their behavior is nondeterministic: it depends on the relative execution sequence, cannot be predicted a priori, and is irreproducible.
Reasons for process cooperation are:
1. To allow information sharing
2. To make computations faster
3. To allow modularity

To illustrate the concept of cooperating processes, let us consider the producer-consumer problem, which is a common paradigm for cooperating processes. A producer process produces information that is consumed by a consumer process. To allow producer and consumer processes to run concurrently, we must keep a buffer of items that can be filled by the producer and emptied by the consumer. A producer can produce one item while the consumer is consuming another item. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced. In this situation, the consumer must wait until an item is produced.

The unbounded-buffer producer-consumer problem places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items. The bounded-buffer producer-consumer problem assumes a fixed buffer size. In this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.

The buffer may either be provided by the operating system through the use of an interprocess-communication (IPC) facility, or be explicitly coded by the application programmer with the use of shared memory. Let us illustrate a shared-memory solution to the bounded-buffer problem. The producer and consumer processes share the following variables:
#define BUFFER_SIZE 10

typedef struct {
    . . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
The shared buffer is implemented as a circular array with two logical pointers: in and out. The variable in points to the next free position in the buffer; out points to the first full position in the buffer. The buffer is empty when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) == out.

The code for the producer and consumer processes follows. The producer process has a local variable nextProduced in which the new item to be produced is stored:
while (1) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

The consumer process has a local variable nextConsumed in which the item to be consumed is stored:

while (1) {
    while (in == out)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}

This scheme allows at most BUFFER_SIZE - 1 items in the buffer at the same time.

2.7 INTERPROCESS COMMUNICATION

One of the benefits of multitasking is that several processes can be made to cooperate in order to achieve their ends. To do this, they must do one of the following.

i. Communicate: Interprocess communication (IPC) involves sending information from one process to another. This can be achieved using many methods, such as a mailbox system, a socket which behaves like a virtual communications network, or through the use of pipes. Pipes are a system construct which enables one process to open another process as if it were a file for writing or reading.
ii. Share data: A segment of memory must be available to both processes.

iii. Waiting: Some processes wait for other processes to give a signal before continuing. This is an issue of synchronization.
Interprocess communication is best provided by message passing systems, and message systems can be defined in different ways.
2.7.1 Message passing system

This system allows processes to communicate with one another without the need to use shared data. Messages sent by processes can be either of fixed or variable size. Fixed-size messages are easy to implement, whereas variable-size messages are complex to implement. For two processes to communicate, we should first establish a communication link. The link can be implemented using one of the following methods.

1. Direct or indirect communication
2. Symmetric or asymmetric communication
3. Automatic or explicit buffering
4. Send by copy or send by reference
5. Fixed-size or variable-size messages
2.7.1.1 Naming

Processes that communicate should have a way to refer to each other. They can use either direct or indirect communication.

2.7.1.2 Direct communication

In this type of communication the processes should identify the sender and the recipient explicitly. A communication of this type has the following properties.

1. A link is established automatically between every pair of processes.
2. A link is associated with exactly two processes.
3. Only one link exists between two processes.

This system establishes symmetry in addressing, i.e., both the sender and the receiver must name each other to communicate. In asymmetric addressing, only the sender names the recipient, while the recipient need not name the sender.

2.7.1.3 Indirect communication

In this type of communication the messages are sent and received through mailboxes or ports. A mailbox can be viewed as an object where messages can be stored. Each mailbox is uniquely identified. In this system, two processes can communicate only if they share a mailbox. A communication of this type has the following properties.

1. A link is established automatically between a pair of processes only if they share a mailbox.
2. A link may be associated with more than two processes.
3. A number of different links may exist between two processes, with each link corresponding to one mailbox.

Other mechanisms, such as synchronization and buffering, also shape communication. Concurrent execution of cooperating processes requires mechanisms such as the above that allow processes to communicate with one another and to synchronize their actions.

2.8 SCHEDULING OF PROCESSES

All modern computers can do several things at the same time. While running a user program, a computer can also be reading from a disk and printing on a terminal or a printer. In a multiprogramming environment, the processor switches from program to program, running each program for a small duration of time. However, at any instant of time the processor is executing only one program. The switching between programs occurs at such a fast pace that the user has the impression that he is the only person using the system, i.e., an illusion of parallelism. The processor switches back and forth between programs, keeping track of multiple activities. The entire process is shown in Figure 2.3.
[Figure 2.3 shows two processes alternating on the CPU: while Process 1 is executing, Process 2 is idle; on an interrupt, the registers of the running process are saved, the registers of the other process are reloaded, and execution resumes in that process.]

Figure 2.3 CPU switching from one process to another

2.9 SCHEDULING QUEUES

In a multiprogramming environment, as a process enters the system it is put into a queue called the job queue. This queue consists of all the processes in the system. The processes that are residing in the main memory, ready and waiting to execute, are kept in a list called the ready queue. This queue is generally stored as a linked list. There may also exist other queues in the system. For example, when a process is allocated the CPU, it executes for a certain period of time and may eventually terminate, be interrupted, or wait for the occurrence of a particular event, such as the completion of an input-output request. If there are many processes requesting an input-output operation on a device, the processes may have to wait. The list of processes waiting for a particular device forms a device queue, and each device may have its own device queue. The entire concept of scheduling can be clearly understood with the help of a diagram called the queueing diagram. The diagram uses certain symbols to represent certain elements: a rectangular box represents a queue, a circle represents a resource required by a process, and the arrows indicate the flow of processes in the system.

[Figure 2.4 shows the queueing diagram: processes flow from the ready queue to the CPU; from the CPU a process may issue an I-O request (joining an I-O queue), have its time slice expire, create a sub-process and wait for it to end, or wait for an interrupt; in each case it eventually returns to the ready queue.]

Figure 2.4 Queueing diagram representing process scheduling

Whenever a process is created, it is immediately put into the ready queue. The process waits in the ready queue until it is taken up for execution by the processor. Process execution is a sequence of events which goes through a cycle of CPU execution and I-O wait. Processes alternate back and forth between these two stages. The execution of a process begins with a CPU burst and finally terminates with a CPU burst; in between, it may alternate between CPU bursts and I-O waits, and this pattern goes on. Once the process is allocated the CPU and is executing, one of several events could occur:


1. The process could give an I-O request, which places it in an I-O queue.
2. The process could create a new sub-process and wait for its termination.
3. The process could be removed forcibly from the CPU as a result of an interrupt, and be put back in the ready queue.

As described in the first two situations, a process switches from the waiting state to the ready state, and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources de-allocated.

2.10 SCHEDULERS

The life of a process requires it to go through various scheduling queues before it completes its execution. It also requires the use of various resources during its execution; the CPU is one of the primary competing resources. The scheduling of a process is a central activity of the Operating System. The choice of the queue requires a lot of consideration. The selection of the process and its corresponding queue is done by the part of the operating system called the scheduler. Schedulers are of different types:

1. Long-term scheduler
2. Short-term scheduler

The long-term scheduler determines which jobs are admitted to the system for processing. For example, in a batch processing system there are often more jobs submitted to the system than can be executed. All the processes are spooled to a suitable storage device, such as a disk, where they are kept for execution at a later stage. The long-term scheduler selects jobs from this job pool and loads them into memory for execution. The short-term scheduler selects from the pool of processes that are already in the ready queue, and then allocates the CPU to the selected process or job.
[Figure 2.5 shows the three schedulers: the long-term scheduler admits jobs to the ready queue; the short-term scheduler dispatches processes from the ready queue to the CPU; the medium-term scheduler moves processes between the ready queue and a queue of suspended or swapped-out processes; processes waiting for I-O sit in an I-O queue.]

Figure 2.5 Types of schedulers

The primary distinction between these two schedulers is their frequency of execution. The short-term scheduler must select a new process for the CPU quite frequently. The execution of a process is normally of very short duration, perhaps only a few milliseconds. Because of the short duration of time between executions, the short-term scheduler must be very fast. The long-term scheduler, on the other hand, executes much less frequently; its execution depends on the number of jobs arriving in the system for execution.

The long-term scheduler normally controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of processes leaving the system after execution must be equal to the average rate of processes entering the system for execution. Thus the long-term scheduler may be invoked only when a job leaves the system. It is important that the long-term scheduler makes a careful selection. In general, most processes can be classified as either I-O bound or CPU bound. An I-O bound process is one that spends more time doing I-O than computations. A CPU bound process, on the other hand, performs I-O requests less frequently when compared to computations. It is important that the long-term scheduler selects a good process mix of I-O bound and CPU bound processes.

If all the processes in a system are I-O bound, then the ready queue will be almost empty and the short-term scheduler will have little work to do. If all the processes in the system are CPU bound, then the I-O queue will be empty, the devices will go unused, and again the system will be unbalanced. The system with the best performance is one which has a good combination of CPU bound and I-O bound processes.

In some Operating Systems an intermediate level of scheduling is introduced. This is taken care of by a scheduler called the medium-term scheduler; its functioning is shown in Figure 2.5. The main idea behind the medium-term scheduler is that sometimes it can be advantageous to remove processes from memory and thus reduce the degree of multiprogramming. At some later point in time the process can be reintroduced into memory, and its execution can be continued from where it left off. This scheme is called swapping, and it helps in improving the process mix.

The dispatcher is the component of the CPU scheduler that actually gives control of the CPU to the process selected by the short-term scheduler. This involves loading the registers of the process, switching to user mode, and jumping to the proper location in the user program to restart it.

2.11 GOALS OF SCHEDULING

Many goals must be considered in the design of a scheduling discipline. In particular, a scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc. Some of these goals depend on the system one is using, for example a batch system, an interactive system or a real-time system, but there are also some goals that are desirable in all systems.

2.11.1 General Goals

i. Fairness: Fairness is important under all circumstances. A scheduler makes sure that each process gets its fair share of the CPU and that no process suffers indefinite postponement. Note that giving equivalent or equal time is not necessarily fair; think of safety control and payroll at a nuclear plant.

ii. Policy enforcement: The scheduler has to make sure that the system's policy is enforced. For example, if the local policy is safety, then the safety control processes must be able to run whenever they want to, even if it means a delay in the payroll processes.

iii. Efficiency: The scheduler should keep the system (in particular, the CPU) busy one hundred percent of the time when possible. If the CPU and all the input/output devices can be kept running all the time, more work gets done per second than if some components are idle.

iv. Response time: A scheduler should minimize the response time for interactive users.

v. Turnaround: A scheduler should minimize the time batch users must wait for output.

vi. Throughput: A scheduler should maximize the number of jobs processed per unit time.

A little thought will show that some of these goals are contradictory. It can be shown that any scheduling algorithm that favors some class of jobs hurts another class of jobs. The amount of CPU time available is finite, after all.

2.12 CONTEXT SWITCH



In a multiprogramming environment it is necessary to move from one process to another. Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed and the number of registers which must be copied. Context switching is highly dependent on the hardware support provided to the operating system; the more complex the operating system, the greater the amount of time required to switch between processes.

2.13 SCHEDULING ALGORITHMS

A number of scheduling algorithms are available. Each scheduling algorithm has its own set of properties and favors one class of processes over another. In selecting an algorithm for use in a particular situation, we must consider the properties of the various algorithms. Many criteria have been suggested for comparing the various CPU scheduling algorithms. Some of the criteria are:

i. CPU utilization: The CPU utilization is the average fraction of the clock time that the CPU is busy executing user programs and performing the various operating system functions. It is always desirable to keep the CPU busy. CPU utilization may range from 0% to 100%; in a real system it should vary from 40% to 90%.

ii. Throughput: This is the number of jobs completed per unit time. For long processes this rate may be low, whereas for short processes it is higher. In order to compare the performance of various algorithms on this basis, it is necessary to present the same set of jobs to the different scheduling algorithms.

iii. Turnaround time: This is the time interval from the time of submission of a job to its completion. The turnaround time includes the sum of the times spent waiting to get into memory, waiting in the ready queue, executing with the CPU, and performing I-O.

iv. Response Time : Often a process can produce some output very early and can continue computing new results while previous results are being output to the user. Thus another measure of scheduling is the time from the submission of the request until the first response is produced. This measure is called the response time, and it is the amount of time taken to start responding, not the time taken to output that response.
v. Waiting Time : This represents the amount of time spent waiting for the allocation of various resources. It is the difference between the turnaround time and the actual execution time of the job.

It is desirable to maximize CPU utilization and throughput, and to minimize turnaround time, response time and waiting time. In some situations it is desirable to optimize the minimum or maximum values rather than the average. For interactive systems it is more important to minimize the variance in the response time than to minimize the average response time.

The main problem while deciding on scheduling algorithms is that some of the above criteria may conflict with each other. For example, fast response time may result in low CPU utilization. An increase in resource utilization obtained by increasing the number of active processes may result in increased response time. In view of all this, the general requirement is to maximize CPU utilization and throughput and to minimize turnaround, response and waiting time. There are different scheduling algorithms. Let us discuss some of them in detail.

2.13.2 First-Come First-Served (FCFS) Scheduling

This is the simplest of all scheduling schemes that are available. In this scheme the process which requests the CPU first is allocated the CPU first. The implementation of this scheduling algorithm requires us to use a natural data structure called a queue. The jobs are executed in the order of their arrival. When a process enters the ready queue, its PCB is linked on to the tail of the queue. When the CPU is free, it is allocated to the process at the head of the ready queue.

The main advantage of this policy is that it is very simple to understand and implement, and the main disadvantage is that the average waiting time is often quite long. Consider the following processes arriving for execution at time 0, with the length of the CPU burst given in microseconds.
PROCESS    EXECUTION TIME
P1         32
P2         2
P3         5

If the processes arrive in the order P1, P2, P3 then the result of their execution would be as shown in the Gantt chart below.

|  P1  |  P2  |  P3  |
0      32     34     39

The waiting time for process P1 is 0 microseconds, for process P2 it is 32 microseconds and for process P3 it is 34 microseconds. Thus the average waiting time is

(0 + 32 + 34) / 3 = 22 microseconds.
r
execution
If the processes arrive in the order P2, P3, P1 then the result of their execution would be as shown in the Gantt chart below.

|  P2  |  P3  |  P1  |
0      2      7      39
The waiting time for process P1 is 7 microseconds, for process P2 it is 0 microseconds and for process P3 it is 2 microseconds. Thus the average waiting time is

(7 + 0 + 2) / 3 = 3 microseconds.

Thus the sequence of arrival of the jobs in the system plays an important role in the amount of time the jobs have to spend waiting for execution.
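The two FCFS averages above can be checked with a short sketch, using the bursts implied by the Gantt charts (32, 2 and 5 microseconds); the function name and structure are illustrative, not from the text:

```python
def fcfs_waiting_times(bursts):
    """Given CPU bursts in arrival order (all arriving at time 0),
    return the FCFS waiting time of each process."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier arrivals finish
        clock += burst
    return waits

print(fcfs_waiting_times([32, 2, 5]))   # order P1, P2, P3 -> [0, 32, 34]
print(fcfs_waiting_times([2, 5, 32]))   # order P2, P3, P1 -> [0, 2, 7]
```

Averaging the two lists reproduces the 22 and 3 microsecond figures.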

Let us now consider the performance of the FCFS algorithm in a dynamic situation. Assume we have one CPU-bound process and many I/O-bound processes. As the processes proceed around the system, the following situation may result. The CPU-bound process will get the CPU and hold it. During this time, all the other processes will finish their I/O and move into the ready queue, waiting for the CPU. While they wait in the ready queue, the I/O devices are all idle. Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O device. All the I/O-bound processes, which have very short CPU bursts, execute quickly and move back to the I/O queues. At this point the CPU sits idle. The CPU-bound process will then move back to the ready queue and be allocated the CPU. Again, all the I/O processes end up waiting in the ready queue until the CPU-bound process is done. This results in an effect called the Convoy Effect, as all the other processes wait for the one big process to get off the CPU. This effect results in lower CPU and device utilization.
2.13.3 Shortest-Job-First (SJF) Scheduling

Another algorithm that is normally considered is the Shortest-Job-First algorithm. The main consideration of this algorithm is the time required for the execution of each process. When the CPU is available, it is assigned to the process that has the smallest next CPU execution time. If two or more processes have the same next CPU burst then FCFS is used to break the tie.

Shortest-Job-First scheduling is provably optimal, in that it gives the minimum average waiting time for a given set of processes. Moving a short process before a long one decreases the waiting time of the short process more than it increases the waiting time of the long process. Consequently, the average waiting time decreases. The main difficulty with SJF is in finding out the length of the next CPU request. For long-term job scheduling in a batch system, we can use the process-time limit. A lower value may mean faster response.

Consider the following processes arriving for execution at time 0 with the length of the CPU burst given in microseconds.

PROCESS    EXECUTION TIME
P1         5
P2         7
P3         2
P4         3
Procees Management 51

If the processes arrive in the order P1, P2, P3, P4 then the result of their execution with SJF would be as shown in the Gantt chart below.

|  P3  |  P4  |  P1  |  P2  |
0      2      5      10     17

The waiting time for process P1 is 5 microseconds, for process P2 it is 10 microseconds, for process P3 it is 0 microseconds and for process P4 it is 2 microseconds. Thus the average waiting time is

(5 + 10 + 0 + 2) / 4 = 4.25 microseconds.
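A non-preemptive SJF order can be sketched by sorting the ready processes on their next burst, using the bursts implied by the example above (the function name is illustrative):

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF for processes all arriving at time 0.
    `bursts` maps process name -> next CPU burst; ties fall back to
    insertion (FCFS) order because Python's sort is stable."""
    waits, clock = {}, 0
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = clock
        clock += burst
    return waits

w = sjf_waiting_times({"P1": 5, "P2": 7, "P3": 2, "P4": 3})
print(w)                          # {'P3': 0, 'P4': 2, 'P1': 5, 'P2': 10}
print(sum(w.values()) / len(w))   # 4.25
```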
Although SJF is optimal, it cannot be implemented at the level of short-term CPU scheduling. It is not possible to know the length of the next CPU execution time. One way is to try to approximate SJF scheduling by predicting the value of the next CPU execution time; we can then pick the process with the shortest predicted CPU execution time. The next CPU execution time is generally predicted as an exponential average of the measured lengths of the previous CPU execution times. Let tn be the length of the nth CPU execution time, and let cn+1 be our predicted value for the next CPU burst. Then, for a, 0 <= a <= 1, define

cn+1 = a * tn + (1 - a) * cn

This formula defines an exponential average. tn contains our most recent information; cn stores the past history. The parameter a controls the relative weight of recent and past history in our predictions.
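The exponential average can be written as a one-line update. In the sketch below the weight a = 0.5 and the initial guess of 10 are illustrative values, not taken from the text:

```python
def predict_next_burst(history, alpha=0.5, initial_guess=10):
    """Apply c(n+1) = alpha * t(n) + (1 - alpha) * c(n)
    over the measured bursts, returning the final prediction."""
    prediction = initial_guess
    for measured in history:
        prediction = alpha * measured + (1 - alpha) * prediction
    return prediction

# With alpha = 0.5 each step moves halfway toward the newest burst.
print(predict_next_burst([6, 4, 6, 4], alpha=0.5, initial_guess=10))  # 5.0
```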
2.13.4 Priority Scheduling

A preemptive algorithm has the ability to remove the currently executing process from execution and allocate the CPU to another process; the next process considered for execution will normally be chosen on some priority. SJF is a special case of the general priority-scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. If there are some processes with equal priority then they are scheduled using FCFS. An SJF algorithm can be seen as a priority algorithm where the priority (p) is the inverse of the predicted next CPU execution time.

The larger the CPU execution time, the lower will be the priority. Priority can be expressed using different methods. It is normally expressed as a fixed range of numbers. Some systems use small numbers to represent low priority while others use small numbers for high priority.

Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantities to compute the priority of the process. For example, time limits, memory requirements, the number of open files, and the ratio of average I/O burst to average CPU burst may be considered to compute priorities. External priorities are set by criteria that are external to the operating system. External priorities can include concepts such as the importance of the process, the type and amount of funds being paid for computer use, and the department sponsoring the work.

Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A non-preemptive algorithm will simply put the new process at the head of the ready queue.

A major problem with priority scheduling is indefinite starvation or blocking. A process that is ready to run but lacking in priority can be considered blocked, waiting for the CPU, as there may be other processes with higher priority continuously arriving in the system. This leads to the problem of indefinite starvation. A priority scheduler can leave some low priority processes waiting indefinitely for the CPU. A solution to this problem is aging: gradually increase the priority of processes that wait in the system for a long time. Over a period of time the process will gain enough priority to be executed. Thus every process will get the chance to run on the CPU within finite time limits.
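Aging can be sketched as a periodic priority boost applied to every waiting process. The increment, the cap and the process names below are illustrative assumptions, not from the text:

```python
def age_priorities(waiting, boost=1, cap=10):
    """Raise the priority of every waiting process by `boost`,
    up to `cap` (here a larger number means higher priority)."""
    return {name: min(prio + boost, cap) for name, prio in waiting.items()}

ready = {"batch_job": 1, "report": 3}
for _ in range(4):                 # four aging ticks
    ready = age_priorities(ready)
print(ready)                       # {'batch_job': 5, 'report': 7}
```

After enough ticks even the lowest-priority process reaches the cap and cannot be starved indefinitely.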

2.13.5 Round Robin (RR) Scheduling

One of the most important requirements of an interactive time-sharing system is to provide good terminal response to all the users. The Round Robin technique is mainly designed for time-sharing systems. In this technique of scheduling, a small unit of time, called a time quantum or time slice, is defined. The time slice normally varies but is generally between 10 and 100 microseconds. The ready queue, i.e., the queue that contains the jobs that are ready to be executed, is a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process. Each process will run for not more than one time slice. If a process needs more CPU time after executing for one time slice then it is relieved of the processor and is put at the end of the ready queue. If the execution of the job is completed within the provided time slice then the next process in the ready queue is allocated the CPU.

The performance of the RR algorithm depends to a large extent on the size of the time slice. If the time slice is very large, the RR policy is the same as the FCFS policy. However if the time slice is very small then the RR approach can be called processor-sharing scheduling, and it appears to the users as though each of n processes has its own processor running at 1/n th the speed of the real processor. A smaller time quantum increases the number of context switches or preemptions. Thus the time slice must be large if the number of context switches is to be reduced. Turnaround time also depends on the size of the time slice. The average turnaround time for a set of processes does not necessarily improve as the time slice increases; it can be improved if most processes finish their next CPU burst in a single time quantum.
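Round robin can be sketched with a deque acting as the circular ready queue. This is a minimal sketch for processes that are all ready at time 0; the process names and workload are illustrative:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for processes all ready at time 0.
    `bursts` maps name -> CPU time needed; returns completion times."""
    ready = deque(bursts.items())
    done, clock = {}, 0
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)
        clock += ran
        if remaining > ran:
            ready.append((name, remaining - ran))  # back to the tail
        else:
            done[name] = clock
    return done

# Five processes, all ready at time 0, with quantum = 10.
print(round_robin({"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}, 10))
# {'P1': 10, 'P3': 23, 'P4': 30, 'P5': 52, 'P2': 61}
```

A process's waiting time is its completion time minus its burst, since everything arrives at time 0.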

2.13.6 Multilevel Queue Scheduling

In an environment where a number of different users are using the system, some of the jobs may be interactive and others non-interactive batch jobs. The usual practice is to classify the processes according to the characteristics of the jobs and maintain separate process queues. Processes are divided or classified into different groups. A common division is to classify processes as either foreground (interactive) processes or background (batch) processes. These two types of processes have quite different response time requirements, and so might have different scheduling needs. Foreground processes may have greater priority over background processes. The figure below gives a rough idea of multilevel queue scheduling.

Highest Priority
    System Processes
    Interactive Processes
    Interactive Editing Processes
    Batch Processes
    User Processes
Lowest Priority

Figure 2.6 Multilevel queue scheduling


A multilevel queue scheduling algorithm partitions the ready queue into separate queues. Processes are permanently assigned to one queue, generally based on some property of the process, such as memory size or process type. Each queue has its own scheduling algorithm. Separate queues might be used for foreground and background processes. The foreground queue may be scheduled using the RR algorithm, while the background queue is scheduled by an FCFS algorithm. There can also be scheduling between the queues. This is commonly a fixed-priority preemptive scheduling. The foreground queue may have absolute priority over the background queue. Consider the example of a multilevel queue scheduling algorithm with five queues:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Each queue has absolute priority over the lower priority queues. No process in the batch queue could run unless the queues for the system processes, interactive processes and interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted. Another possibility is to time slice between the queues. Each queue gets a certain portion of the CPU time, which it can then schedule among the various processes in its queue.
2.13.7 Multilevel Feedback Queue Scheduling

In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on entry to the system. Processes do not move between the queues. If there are separate queues for foreground and background processes, processes do not move from one queue to the other, since the processes do not change their foreground or background nature. This setup is inflexible, though it has low scheduling overhead. Its effectiveness can be improved by multilevel feedback queues. In this technique a process is dynamically allowed to move between queues depending on its runtime behavior. The idea is to separate processes with different CPU requirement characteristics. If a process uses too much CPU time, it will be moved to a lower priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher priority queue. This is a form of aging that would prevent starvation.
Queue 0 : time slice = 5
Queue 1 : time slice = 12
Queue 2 : FCFS queue

Figure 2.7 Multilevel feedback queue scheduling
In general, a multilevel feedback queue scheduler is defined by the following parameters:

1. The number of queues.
2. The scheduling algorithm for each queue.
3. The method used to upgrade a process to a higher priority queue.
4. The method used to demote a process to a lower priority queue.
5. The method used to determine which queue a process will enter when that process needs service.
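A feedback queue can be sketched as demotion on quantum expiry. The queue count and quanta follow Figure 2.7; everything else (names, workload) is an illustrative assumption:

```python
from collections import deque

def mlfq_run(bursts):
    """Two RR levels with quanta 5 and 12, then an FCFS level.
    A process that exhausts its quantum is demoted one level.
    All processes are ready at time 0; returns (name, finish) pairs."""
    quanta = (5, 12)
    queues = [deque(), deque(), deque()]
    for name, burst in bursts.items():
        queues[0].append((name, burst))        # everyone enters the top queue
    finished, clock = [], 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name, remaining = queues[level].popleft()
        slice_ = remaining if level == 2 else min(quanta[level], remaining)
        clock += slice_
        if remaining > slice_:                 # quantum expired: demote
            queues[level + 1].append((name, remaining - slice_))
        else:
            finished.append((name, clock))
    return finished

print(mlfq_run({"A": 4, "B": 20}))   # [('A', 4), ('B', 24)]
```

The short job A completes in the top queue, while the long job B is pushed down through all three levels, which is exactly the separation the text describes.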

2.13.8 Multiple Processor Scheduling

One of the significant achievements in the field of computers is the use of multiple processors in a single system. Multiple processors increase the system throughput, reliability and the computing power of the system. One of the main advantages of a multiprocessor environment is the higher reliability of the system. If a processor fails during usage then the remaining processors can share the load and continue processing. Multiprocessor systems can provide a means of increasing the computing capacity of the system at a nominal cost. However, if multiple processors or CPUs are available, the scheduling problem becomes more complex. One major factor is the type of processors involved. The processors may be identical (a homogeneous system) or different (a heterogeneous system).

If the processors are different, the options are relatively limited. Each processor has its own scheduling algorithm. Processes are intrinsically typed by their structure; they must be run on a particular processor. Processes are self-segregating, and each process can schedule itself. If several identical processors are available, then load sharing can occur. It would be possible to provide a separate queue for each processor. Here, however, one processor could be idle, with an empty queue, while another processor was very busy. To prevent this, we use a common ready queue. All processes go into one queue and are scheduled on to any available processor. Another approach is to appoint one processor as scheduler for the other processors, thus creating a master-slave structure. This is asymmetric multiprocessing.

2.14 ALGORITHM EVALUATION

How do we select a CPU scheduling algorithm for a particular system? The first problem is defining the criteria to be used in selecting an algorithm. The criteria are often defined in terms of CPU utilization, response time, or throughput. To select an algorithm, we must first define the relative importance of these measures. Our criteria may include several measures, such as:

1. Maximize CPU utilization under the constraint that the maximum response time is 1 second.
2. Maximize throughput such that turnaround time is linearly proportional to total execution time.

Once the selection criteria have been defined, we want to evaluate the various algorithms under consideration. Some of the evaluation methods are:
2.14.1 Deterministic Modeling

One major class of evaluation methods is called analytic evaluation. Analytic evaluation uses the given algorithm and the system workload to produce a formula or number that evaluates the performance of the algorithm for that workload.

One type of analytic evaluation is deterministic modeling. This method takes a particular predetermined workload and defines the performance of each algorithm for that workload.

For example, assume that we have the workload shown. All five processes arrive at time 0, in the order given, with the length of the CPU-burst time given in milliseconds:

Process    Burst Time
P1         10
P2         29
P3         3
P4         7
P5         12
Consider the FCFS, SJF and RR (quantum = 10 milliseconds) scheduling algorithms for this set of processes. Which algorithm would give the minimum average waiting time?

For the FCFS algorithm, we would execute the processes as

|  P1  |  P2  |  P3  |  P4  |  P5  |
0      10     39     42     49     61

The waiting time is 0 milliseconds for process P1, 10 milliseconds for process P2, 39 milliseconds for process P3, 42 milliseconds for process P4, and 49 milliseconds for process P5.

Thus the average waiting time is (0 + 10 + 39 + 42 + 49)/5 = 28 milliseconds.
With non-preemptive SJF scheduling, we execute the processes as

|  P3  |  P4  |  P1  |  P5  |  P2  |
0      3      10     20     32     61

The waiting time is 10 milliseconds for process P1, 32 milliseconds for process P2, 0 milliseconds for process P3, 3 milliseconds for process P4, and 20 milliseconds for process P5.

Thus, the average waiting time is (10 + 32 + 0 + 3 + 20)/5 = 13 milliseconds.

With the RR algorithm, we execute the processes as

|  P1  |  P2  |  P3  |  P4  |  P5  |  P2  |  P5  |  P2  |
0      10     20     23     30     40     50     52     61

The waiting time is 0 milliseconds for process P1, 32 milliseconds for process P2, 20 milliseconds for process P3, 23 milliseconds for process P4, and 40 milliseconds for process P5.

Thus, the average waiting time is (0 + 32 + 20 + 23 + 40)/5 = 23 milliseconds.

We see that, in this case, the SJF policy results in less than one-half the average waiting time obtained with FCFS scheduling; the RR algorithm gives us an intermediate value.
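The three averages can be reproduced with a short comparison. This is a minimal sketch: the helper name and the Gantt-segment representation are not from the text:

```python
def avg_wait(schedule, bursts):
    """`schedule` is a list of (process, run_length) Gantt segments.
    With all processes arriving at time 0, each waiting time is its
    completion time minus its total burst."""
    clock, completion = 0, {}
    for name, run in schedule:
        clock += run
        completion[name] = clock        # last segment sets the completion time
    return sum(completion[p] - b for p, b in bursts.items()) / len(bursts)

bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}
fcfs = [("P1", 10), ("P2", 29), ("P3", 3), ("P4", 7), ("P5", 12)]
sjf  = [("P3", 3), ("P4", 7), ("P1", 10), ("P5", 12), ("P2", 29)]
rr   = [("P1", 10), ("P2", 10), ("P3", 3), ("P4", 7),
        ("P5", 10), ("P2", 10), ("P5", 2), ("P2", 9)]
print(avg_wait(fcfs, bursts), avg_wait(sjf, bursts), avg_wait(rr, bursts))
# 28.0 13.0 23.0
```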
Deterministic modeling is simple and fast. It gives exact numbers, allowing the algorithms to be compared. However, it requires exact numbers for input, and its answers apply only to those cases. The main uses of deterministic modeling are in describing scheduling algorithms and providing examples. In cases where we may be running the same programs over and over again and can measure the programs' processing requirements exactly, we may be able to use deterministic modeling to select a scheduling algorithm. Over a set of examples, deterministic modeling may indicate trends that can then be analyzed and proved separately. For example, it can be shown that, for the environment described (all processes and their times available at time 0), the SJF policy will always result in the minimum waiting time.

In general, however, deterministic modeling is too specific, and requires too much exact knowledge, to be useful.

2.14.2 Queuing Models

The processes that are run on many systems vary from day to day, so there is no static set of processes to use for deterministic modeling. What can be determined, however, is the distribution of CPU and I/O bursts. These distributions may be measured and then approximated or simply estimated. The result is a mathematical formula describing the probability of a particular CPU burst. Commonly, this distribution is exponential and is described by its mean. Similarly, the distribution of times when processes arrive in the system - the arrival-time distribution - must be given.

The computer system is described as a network of servers. Each server has a queue of waiting processes. The CPU is a server with its ready queue, as is the I/O system with its device queues. Knowing arrival rates and service rates, we can compute utilization, average queue length, average wait time, and so on. This area of study is called queuing-network analysis.
As an example, let n be the average queue length (excluding the process being serviced), let W be the average waiting time in the queue, and let λ be the average arrival rate for new processes in the queue (such as three processes per second). Then, we expect that during the time W that a process waits, λ × W new processes will arrive in the queue. If the system is in a steady state, then the number of processes leaving the queue must be equal to the number of processes that arrive. Thus,

n = λ × W
This equation is known as Little's formula. Little's formula is particularly useful because it is valid for any scheduling algorithm and arrival distribution. We can use Little's formula to compute one of the three variables if we know the other two. For example, if we know that seven processes arrive every second (on average), and that there are normally 14 processes in the queue, then we can compute the average waiting time per process as 2 seconds.
Queuing analysis can be useful in comparing scheduling algorithms, but it also has its limitations. At the moment, the classes of algorithms and distributions that can be handled are fairly limited. The mathematics of complicated algorithms or distributions can be difficult to work with. Thus, arrival and service distributions are often defined in unrealistic, but mathematically tractable, ways. It is also generally necessary to make a number of independent assumptions that may not be accurate. Thus, so that they will be able to compute an answer, queuing models are often only an approximation of a real system. As a result, the accuracy of the computed results may be questionable.

2.14.3 Simulations

To get a more accurate evaluation of scheduling algorithms, we can use simulations. Simulations involve programming a model of the computer system. Software data structures represent the major components of the system. The simulator has a variable representing a clock; as this variable's value is increased, the simulator modifies the system state to reflect the activities of the devices, the processes, and the scheduler. As the simulation executes, statistics that indicate algorithm performance are gathered and printed.

The data to drive the simulation can be generated in several ways. The most common method uses a random-number generator, which is programmed to generate processes, CPU-burst times, arrivals, departures, and so on, according to probability distributions. The distributions may be defined mathematically (uniform, exponential, Poisson) or empirically. If the distribution is to be defined empirically, measurements of the actual system under study are taken. The results are used to define the distribution of events in the real system, and this distribution can then be used to drive the simulation.
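Generating a synthetic workload from an exponential distribution can be sketched as follows; the mean values and the seed are illustrative assumptions:

```python
import random

def generate_workload(n, mean_burst=10.0, mean_interarrival=4.0, seed=42):
    """Draw n (arrival_time, cpu_burst) pairs from exponential
    distributions, as a distribution-driven simulation would."""
    rng = random.Random(seed)        # seeded so runs are repeatable
    clock, jobs = 0.0, []
    for _ in range(n):
        clock += rng.expovariate(1.0 / mean_interarrival)
        jobs.append((clock, rng.expovariate(1.0 / mean_burst)))
    return jobs

for arrival, burst in generate_workload(5):
    print(f"arrives at {arrival:6.2f}, burst {burst:5.2f}")
```

The resulting list of (arrival, burst) pairs can then be fed to any of the schedulers above to gather the performance statistics the text describes.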

A distribution-driven simulation may be inaccurate, however, due to relationships between successive events in the real system. The frequency distribution indicates only how many of each event occur; it does not indicate anything about the order of their occurrence. To correct this problem, we can use trace tapes. We create a trace tape by monitoring the real system and recording the sequence of actual events. This sequence is then used to drive the simulation. Trace tapes provide an excellent way to compare two algorithms on exactly the same set of real inputs. This method can produce accurate results for its inputs.

Simulations can be expensive, however, often requiring hours of computer time. A more detailed simulation provides more accurate results, but also requires more computer time. In addition, trace tapes can require large amounts of storage space. Finally, the design, coding and debugging of the simulator can be a major task.

2.14.4 Implementation

Even a simulation is of limited accuracy. The only completely accurate way to evaluate a scheduling algorithm is to code it, put it in the operating system, and see how it works. This approach puts the actual algorithm in the real system for evaluation under real operating conditions.

The major difficulty is the cost of this approach. The expense is incurred not only in coding the algorithm and modifying the operating system to support it, along with its required data structures, but also in the reaction of the users to a constantly changing operating system. Most users are not interested in building a better operating system; they merely want to get their processes executed and to use their results. A constantly changing operating system does not help the users to get their work done. A form of this method is used commonly for new computer installations. For instance, a new web facility may have simulated user loads generated against it before it "goes live", to determine any bottlenecks in the facility and to estimate how many users the system can support.

The other difficulty with any algorithm evaluation is that the environment in which the algorithm is used will change. The environment will change not only in the usual way, as new programs are written and the types of problems change, but also as a result of the performance of the scheduler. If short processes are given priority, then users may break larger processes into sets of smaller processes. If interactive processes are given priority over non-interactive processes, then users may switch to interactive use.

For example, in DEC TOPS-20, the system classified interactive and non-interactive processes automatically by looking at the amount of terminal I/O. If a process did no input or output to the terminal in a 1-minute interval, the process was classified as non-interactive and was moved to a lower priority queue. This policy resulted in a situation where one programmer modified his programs to write an arbitrary character to the terminal at regular intervals of less than 1 minute. The system gave his programs a high priority, even though the terminal output was completely meaningless.

The most flexible scheduling algorithms can be altered by the system managers or by the users. During operating-system build time, boot time, or run time, the variables used by the schedulers can be changed to reflect the expected future use of the system. The need for flexible scheduling is another instance where the separation of mechanism from policy is useful. For instance, if paychecks need to be processed and printed immediately, but are normally done as a low-priority batch job, the batch queue could be given a higher priority temporarily. Unfortunately, few operating systems allow this type of tunable scheduling.

QUESTIONS
1. What is a process?
2. What is the relationship between a program and a process?
3. Give the difference between a passive activity and a dynamic
activity?
What are the different states of a process?
5. What are the activities performed in each state of a process?
6. What is PCB? What is its functon?
7. What is a thread?
8. What do we mean by multithreading?
9. What are the similarities between processes and threads?
10. What are the differences between processes and threads?
11. Why should we use threads in designing operating systems?
12. Explain interprocess communication in detail.
13. What should processes do to communicate with each other?
14. Explain message passing systems in detail.
16. What is the information, which is included in the process control block?
17. What are the objectives of scheduling?

18. Explain how the processor switches back and forth between programs keeping
track of multiple activities, with suitable illustrations.
19. What is a scheduling queue? What is its use?
20. Explain the role of
    a. Job queue
    b. Ready queue
    c. Device queue
21. With the help of a queuing diagram show the concept of scheduling.
22. What is a scheduler?
23. What are the different types of schedulers?
24. What is the difference between a long-term scheduler and a short-term
scheduler?
25. What is the role of a medium-term scheduler?
26. Explain the role of swapping in a medium-term scheduler.
27. What are the levels of scheduling? Describe the role of the three schedulers in a multi-programming environment.
28. What are the general goals of scheduling?
29. What is context switch?
30. Explain some of the scheduling criteria.
31. Explain a. Throughput
b. Turnaround time
C. Response time
d. Waiting time
32. What are the functions of process scheduling? Describe the policies used
for process scheduling.
33. Explain First-come-first-served scheduling with a suitable example.
34. Explain the importance of the order of arrival of processes in FCFS scheduling.
35. Explain Shortest-job-first scheduling with a suitable example.
36. What does it mean for a process to be preemptive and non-preemptive?
37. Explain priority scheduling
38. What is starvation or blocking?
39. How does the concept of aging affect the execution of a process?
40. Explain the round robin scheduling with a suitable example.
41. Explain multilevel queue scheduling.
42. How does multilevel feedback queue scheduling differ from multilevel queue scheduling?

43. Explain multiple processor scheduling with a suitable example.
44. How do we select a CPU scheduling algorithm for a particular system?
45. Explain deterministic modeling for selecting a scheduling algorithm in detail.
46. Explain queuing models for selecting a scheduling algorithm in detail.
47. Explain simulation for selecting a scheduling algorithm in detail.
48. Write a note on implementing simulation models in detail.

EXAMINATION QUESTIONS

1. What is a PCB? Explain briefly. February 2002

2. What is a process? Explain the different states of a process using process


state transition diagram. February 2002
3. Explain SJF and round robin algorithms with suitable examples.
February 2002
4. What are threads? August-September 2002
5. Explain process control block (PCB) along with a block schematic.
August-September 2002
6. Consider the following set of processes with the length of CPU burst time in milliseconds:

Process    Burst Time
P1         10
P2
P3         2
P4
P5         5

Draw Gantt charts illustrating the execution of these processes using FCFS, SJF, non-preemptive priority and RR scheduling.
August-September 2002
7. What are scheduling criteria? Explain briefly. January 2003

8. Explain process state with neat diagram. January 2003


9. Discuss all the scheduling algorithms. June 2003
10. Explain process control block and its entries. June 2003

11. Calculate average waiting time. Consider the following set of processes with the length of the CPU burst time in milliseconds.

   Process   Burst Time
   P1        10
   P2        1
   P3        2
   P4        5

Draw Gantt charts illustrating the execution of these processes using
FCFS, SJF, and RR. June 2003
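The burst times in question 11 are complete, so the FCFS and SJF waiting
times can be cross-checked with a small sketch (a minimal Python
illustration; the function names are our own, and all four processes are
assumed to arrive at time 0 in the order P1..P4):

```python
def fcfs_waiting(bursts):
    """Waiting time of each process under FCFS (all arrive at time 0)."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # a process waits until all earlier ones finish
        elapsed += b
    return waits

def sjf_waiting(bursts):
    """Non-preemptive SJF: run the shortest remaining burst first."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

bursts = [10, 1, 2, 5]              # P1..P4 from question 11
print(fcfs_waiting(bursts))         # [0, 10, 11, 13] -> average 8.5 ms
print(sjf_waiting(bursts))          # [8, 0, 1, 3]    -> average 3.0 ms
```

SJF cuts the average waiting time from 8.5 ms to 3.0 ms here, which is the
point the question is designed to bring out.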

12. Explain process control block. June 2003

13. What is a process? Describe the process state with the help of a process
transition diagram. January 2004

14. Consider the following set of processes:

   Process   Burst Time
   P1        10
   P2        1
   P3        2
   P4        10
   P5        5
The processes arrived in the order P1, P2, P3, P4 and P5. For the above,
obtain the Gantt chart and waiting time using January 2004
i) FCFS.
ii) Round Robin.
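The round robin part of question 14 can be checked with a short simulation
(a hedged Python sketch; the function name is our own, all processes are
assumed to arrive at time 0, and the time quantum is left as a parameter
since the question does not state one):

```python
from collections import deque

def rr_waiting(bursts, quantum):
    """Round robin waiting times, all processes arriving at time 0."""
    n = len(bursts)
    remaining = list(bursts)     # CPU time still owed to each process
    finish = [0] * n             # completion time of each process
    ready = deque(range(n))      # ready queue, initially in arrival order
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)      # not finished: rejoin the tail of the queue
        else:
            finish[i] = clock
    # waiting time = turnaround time - burst time (arrival time is 0)
    return [finish[i] - bursts[i] for i in range(n)]

bursts = [10, 1, 2, 10, 5]       # P1..P5 from question 14
print(rr_waiting(bursts, 2))     # quantum 2 -> [16, 2, 3, 18, 15]
```

A very large quantum makes round robin degenerate into FCFS, which is a
quick sanity check on the simulation.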
15. Explain FCFS CPU Scheduling algorithm with example.
June 2004

16. What is a PCB? Draw the process transition diagram. June 2004

17. Explain different kinds of schedulers. June 2004


18. Consider the following set of processes with the length of the CPU burst time
given in milliseconds.
   Process   Burst Time   Priority
   P1        10           3
   P2        1            1
   P3        2
   P4
   P5        5            2
The processes are assumed to have arrived in the order P1, P2, P3, P4,
all at time 0.

Draw four Gantt charts illustrating the execution of these processes using
FCFS, SJF, a non-preemptive priority and RR scheduling. June 2004
19. What is the turnaround time and waiting time of each process in all the
scheduling algorithms mentioned above? June 2004
20. Describe the difference between preemptive and non-preemptive scheduling
algorithms. Nov-Dec 2004

21. What are scheduling criteria? Explain briefly. Nov-Dec 2004

22. What is a process? Describe the process states with the help of a process
transition diagram. Nov-Dec 2004

23. Explain CPU scheduling criteria. Nov-Dec 2005

24. Explain SJF CPU scheduling algorithm with an example. Nov-Dec 2005

25. Explain the terms:

a) Waiting time. Nov-Dec 2006
b) Turnaround time. Nov-Dec 2006
c) Throughput. Nov-Dec 2006

26. Explain the operations of multilevel scheduling. Nov-Dec 2006

27. Explain state transition of a process in execution. Nov-Dec 2006

28. Explain the terms preemptive and non-preemptive scheduling.
Nov-Dec 2006
29. State why strict non-preemptive scheduling is unlikely to be used in a
computer center. Nov-Dec 2006

30. What are some scheduling decisions which must be taken above the level
of CPU scheduling? Nov-Dec 2006
