

Process Scheduling (First Part)

[B. Sc. (G) Computer Science, 4th Semester, Paper: GCC4]


Notes by: Amitav Biswas (Department of Computer Sci.), Behala College
Date: 06/04/2020
1.1 Objectives
This chapter introduces the different ways processes in the ready queue are selected to run. We will focus on
the different scheduling schemes available, as well as discuss the scheduling algorithm
actually used by the Solaris system.

1.2 Chapter Outline


This chapter is divided into the following sections:
● Scheduling introduction
● First-come first-served scheduling
● Shortest-job-first scheduling
● Priority scheduling
● Round-robin scheduling
● Multi-queue scheduling
● Solaris process scheduling

1.3 Introduction
As operating systems allow multiple processes to share a single processor, the operating
system must establish a consistent scheme that chooses the next process to run on the CPU
in the event that the currently running process voluntarily or involuntarily relinquishes control
of the CPU.
An analogy for this concept (and we will be using this throughout this chapter) is the idea of
the CPU being a swing in a playground, with processes as children all lining up to use the
swing, which only one child at a time can use. All the children must take turns using the
swing in a way that effectively minimizes swing idle time and would allow all the children to
ride on the swing.
We will now introduce some scheduling concepts.

1.3.1 CPU burst – I/O burst cycle


As was discussed in an earlier chapter, a process undergoes a CPU burst – I/O burst cycle. A
CPU burst is when the process is running instructions. An I/O burst is when the process is
waiting for some I/O to finish. If a process waiting for I/O were kept on the CPU, the CPU
would sit idle until the I/O instruction finished. To make more efficient use of the CPU, the
waiting process is removed from the CPU and enters the waiting state. This allows other
processes to run their CPU bursts. When the process finishes its I/O burst, it returns to the
ready queue to wait for its turn to run its next CPU burst on the CPU.

Operating Systems 1
[Figure: CPU burst – I/O burst cycle]
Again, with our playground analogy: eventually the child on the swing will tire and will have to
rest for a while. To make sure that the swing is always in use, a different child takes his place
on the swing. After a while, the rested child goes back to the line for the swing.

1.3.2 CPU bound, I/O bound processes


A CPU bound process is a process that spends most of its execution in a CPU burst. This
would occur for those processes that are primarily computational in nature rather than
interactive. An application that computes the result of complicated mathematical functions is
CPU bound. Photo editing applications, as well as applications that render computer graphics,
are also CPU bound.
I/O bound processes, on the other hand, are processes that mostly wait for user input or take
time to produce user output. Most interactive applications, such as word processors
and spreadsheets, are usually I/O bound.
This distinction is used by the long-term scheduler to optimize CPU performance. We
will discuss the long-term scheduler later on.

1.3.3 Schedulers
Primarily, when we talk about process scheduling, we consider what is called the short-term
scheduler. The purpose of the short-term scheduler is to choose which process is to run
next on the CPU.
There are four possible times when the process selection is made.
● First, a process's CPU burst ends and the process begins to wait for I/O. As discussed earlier, this
process is removed from the CPU to allow the CPU bursts of other processes to run.
● A second scenario is when a process starts. Often, processes are created by other
processes. When you double-click an application icon, the mouse control process
spawns the application process. When invoking an application from the command line,
the command line process starts your application process. A decision must be made
on which process to run, the parent process or the child process.
● Decisions are also made when a process ends.
● A fourth scenario occurs in certain types of scheduling algorithms where the OS
itself revokes a process' turn on the CPU, in what is termed preemption. We will
discuss this in detail in a later section.
For all of these scenarios, the scheduler always tries to maximize CPU use as well as give
all the processes a chance to run. The focus of the rest of this chapter is on what
scheduling algorithms are used by the short term scheduler.
It is worth mentioning that some operating systems come with a long-term scheduler. The
long-term scheduler is responsible for fine-tuning the mix of processes in the ready queue.
As was discussed earlier, processes are either CPU bound or I/O bound. If the processes in
the ready queue are mostly CPU bound, then most of the time a lot of the processes would
be waiting for their turn on the CPU. If most of the processes are I/O bound on the other
hand, then the CPU would be mostly idle as most of the processes would be waiting for I/O to
finish instead of running on the CPU. The long-term scheduler tries to prevent these
extremes from happening. Sometimes, the long-term scheduler may temporarily swap out
certain processes from memory (effectively making these processes halt) in order to
maintain an optimal mix of CPU and I/O bound processes.

1.3.4 Preemption
Some scheduling algorithms may suddenly and forcibly evict the currently running process from the
CPU, even if that process has not yet finished its CPU burst. When this happens, we say
that the scheduling algorithm has preempted the process.
Preemption may occur when a process with a higher priority enters the ready queue. This
will be discussed in greater detail later on.

1.3.5 Context switch time


The context switch time is the time spent when a process is removed from the CPU in order
for another process to take its place. During the context switch, the OS saves the
Process Control Block information of the exiting process. After this is done, the OS loads the
values of the CPU registers from the incoming process' Process Control Block, sets up memory
locations, and jumps to the last executed instruction of the incoming process in order to pick
up where that process left off.
While this is being done, the CPU is left totally idle. In essence, the CPU does no
work while processes are being switched. This is why too much process switching
actually decreases CPU utilization, and it has a detrimental effect in some scheduling algorithms.

1.3.6 Scheduling criteria


As this chapter discusses different scheduling algorithms, we will introduce some metrics that
would characterize a scheduling algorithm's performance.
● CPU utilization – a good scheduling algorithm tries to maximize the amount of time
the CPU is doing work.
● Turn-around time – this describes how long it takes for a process to finish running.
This may not be just dependent on how long the process takes to run. At some point
the process may be waiting for I/O or idle as it waits in line in the ready queue for its
turn to run on the CPU.
● Throughput – this describes how many processes get finished in a given time. We
would like to have as many processes finished as possible. However, if the scheduler
only chooses processes that take only a short time to run, we would be maximizing
throughput at the expense of turn-around time.
● Waiting time – this is how long the process is kept idle in the ready queue.
● Response time – this describes how long it takes after a user has provided input
before the process can respond. This is more of a subjective measure. Most users are
comfortable with long program loading times but would like a very responsive
program once it is loaded. Typing on the keyboard should have a nearly instantaneous
response (the character should appear on the screen immediately) while copying a file
is expected to take some time.
● Fairness – the OS must give all the processes a chance to run on the CPU.
Certain scheduling algorithms have a tendency to favor some processes over others,
and a process may end up not running at all. This behavior is called starvation.
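To make these criteria concrete, here is a small sketch (the function name and structure are my own, not from the notes) that computes several of the metrics for a simple non-preemptive schedule, assuming all processes arrive at time 0 and run to completion in the given order:

```python
# Sketch: scheduling metrics for a non-preemptive schedule.
# Assumes all processes arrive at time 0 and run in the given order,
# with an optional context switch time between bursts.

def schedule_metrics(bursts, switch_time=0):
    """bursts: CPU burst times in the order the processes run."""
    time, busy = 0, 0
    waiting, turnaround = [], []
    for i, burst in enumerate(bursts):
        if i > 0:
            time += switch_time          # CPU idle during the context switch
        waiting.append(time)             # time spent in the ready queue
        time += burst
        busy += burst
        turnaround.append(time)          # completion time (arrival = 0)
    return {
        "cpu_utilization": busy / time,
        "avg_waiting": sum(waiting) / len(bursts),
        "avg_turnaround": sum(turnaround) / len(bursts),
        "throughput": len(bursts) / time,   # processes finished per second
    }

# With a 1-second context switch, the average waiting time rises to 7.75:
print(schedule_metrics([5, 3, 4, 6], switch_time=1))
```

Note how a nonzero context switch time lowers CPU utilization and raises waiting times, matching the earlier observation that switching too often hurts performance.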

1.3.7 Representing process execution


There are many ways to represent process execution. One of the common ways to show
which processes are running on the CPU is via a Gantt chart. For example, a Gantt chart
might show process P1 starting at time 0 and ending just before time 15, then process P2
running for 10 seconds and ending just before time 25, and so on.

Another way is through a process execution table. Here is an example process execution table:

Scheduling: First-come first-served with context switch time of 1 second.

Processes: P1 with CPU burst 4, P2 with CPU burst 5, and P3 with CPU burst 2 arriving at time 0;
P4 with CPU burst 1 arriving at time 2.

Time   CPU      Ready Queue               Comment

0      P1 (4)   P2 (5), P3 (2)            P1, P2, P3 arrive; P1 chosen to run
1      P1 (3)   P2 (5), P3 (2)
2      P1 (2)   P2 (5), P3 (2), P4 (1)    P4 arrives
3      P1 (1)   P2 (5), P3 (2), P4 (1)
4      idle     P2 (5), P3 (2), P4 (1)    P1 ends; context switch
5      P2 (5)   P3 (2), P4 (1)            P2 chosen to run next

Each row of the process execution table shows what the current state of the CPU and the
ready queue is at each second (indicated by the Time column). The CPU column indicates what
process is currently running on the CPU while the ready queue shows the processes in line,
sorted in an order described by the scheduling algorithm. The number in parentheses next to
each process indicates the burst time it has left.
The process execution table is a great way to gather our metrics. It should follow that the
process running on the CPU has its burst time left reduced at every time cycle. A process with
4 seconds of burst time should appear for 4 rows on the CPU. The waiting time of a process is
the amount of time it spends in the ready queue. For example, the waiting time of P2 is 5
seconds as it appears for 5 rows in the process execution table. The waiting time for P1 is 0
as it is immediately chosen to run on the CPU.
To keep things simple, most of the discussion will not consider context switch time,
although context switch time plays a pivotal role in some scheduling algorithms and will be
mentioned where relevant.
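The execution table above can also be generated programmatically. The following sketch (a hypothetical helper, not code from the notes) reproduces the FCFS example row by row, including the 1-second context switch:

```python
# Sketch: per-second FCFS execution table, as in the example above.
# P1 (burst 4), P2 (burst 5), P3 (burst 2) arrive at time 0;
# P4 (burst 1) arrives at time 2; context switch takes 1 second.

def fcfs_table(arrivals, switch_time=1):
    """arrivals: list of (name, burst, arrival_time) in FCFS order."""
    queue, rows = [], []
    time, cpu, switch_left = 0, None, 0
    pending = sorted(arrivals, key=lambda p: p[2])
    while pending or queue or cpu:
        # Newly arriving processes join the back of the ready queue.
        while pending and pending[0][2] <= time:
            name, burst, _ = pending.pop(0)
            queue.append([name, burst])
        # Dispatch the head of the queue if the CPU is free.
        if cpu is None and switch_left == 0 and queue:
            cpu = queue.pop(0)
        rows.append((time, cpu[0] if cpu else "idle",
                     [f"{n} ({b})" for n, b in queue]))
        if cpu:
            cpu[1] -= 1                  # one second of CPU burst consumed
            if cpu[1] == 0:
                cpu, switch_left = None, switch_time
        elif switch_left:
            switch_left -= 1             # CPU idle during the context switch
        time += 1
    return rows

for row in fcfs_table([("P1", 4, 0), ("P2", 5, 0), ("P3", 2, 0), ("P4", 1, 2)]):
    print(row)
```

The first six rows printed match the table above; the simulation then continues until P4 finishes at time 15.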

1.3.8 Determining CPU burst


In reality, the OS doesn't really know how long the CPU burst of a process is (it would have
to involve program analysis, something the OS is too busy for).
So, our figures for CPU burst are actually estimates based on that process' previous
CPU bursts. If we take t(n) to be the last CPU burst, and t'(n) to be our last CPU burst estimate,
then we can compute the next estimate t'(n+1) using the formula

t'(n+1) = a·t(n) + (1 − a)·t'(n)

Here a is a value between 0 and 1. If a = 0, then our next estimate is based only on our
previous estimate, ignoring the actual bursts. If a = 1, then we assume that the next CPU
burst will be exactly the same as the last CPU burst, ignoring all earlier history.
Often, a is set to 1/2 so that both terms have equal weight.
This formula is called the exponential average because if we repeatedly expand t'(n) in
terms of t'(n−1) and so on, we end up with the formula:

t'(n+1) = a·t(n) + (1 − a)·a·t(n−1) + (1 − a)²·a·t(n−2) + … + (1 − a)ⁿ·a·t(0) + (1 − a)ⁿ⁺¹·t'(0)
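As a quick illustration, the exponential average is easy to compute in a loop. This sketch (the function name is my own) applies the formula once per observed burst, starting from an initial guess t'(0):

```python
# Sketch: exponential-average CPU burst prediction, default a = 1/2.
def predict_next_burst(observed_bursts, guess0, a=0.5):
    """Return the estimate t'(n+1) after seeing the observed bursts."""
    estimate = guess0                         # t'(0), the initial guess
    for t in observed_bursts:
        estimate = a * t + (1 - a) * estimate  # t'(n+1) = a*t(n) + (1-a)*t'(n)
    return estimate

print(predict_next_burst([6, 4, 6, 4], guess0=10))  # → 5.0
```

With a = 1 the function simply returns the last observed burst, and with a = 0 it never moves off the initial guess, matching the two extremes described above.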

1.4 Scheduling algorithms


We will now discuss the different scheduling algorithms. Operating systems may actually use a
combination of these algorithms and may have more than one queue for ready processes.
For our examples, we will consider CPU burst times on the scale of a few seconds. In reality,
these burst times are in the millisecond range. We use seconds to make the examples
easier to visualize.

1.4.1 First-come first-served


Perhaps the simplest scheduling algorithm, first-come first-served (FCFS) simply
queues the processes in the order that they arrive. When the process on the CPU finishes
its CPU burst, the next process in line is chosen. A newly arriving process is
simply added to the end of the queue.
For example, suppose we have 4 processes P1 to P4 with burst times 5, 3, 4, and 6, arriving in
that order at time 0. Then P1 runs from time 0 to 5, P2 from 5 to 8, P3 from 8 to 12, and P4
from 12 to 18.

However, a problem comes with the FCFS algorithm. Consider a process P1 with a CPU burst
time of 100 seconds, and 4 processes P2 to P5 with burst times of 1 second each. If process
execution is done in the order P1, P2, P3, P4, P5, then P1 runs from time 0 to 100, followed
by P2, P3, P4, and P5 for one second each.

The average waiting time would be: (0 + 100 + 101 + 102 + 103) / 5 = 81.2 seconds
If, however, we made P1 run last, then P2 to P5 would each run for one second first, with P1 running from time 4 to 104.

The average waiting time would then be: (0 + 1 + 2 + 3 + 4) / 5 = 2 seconds. Thus,
having shorter-running processes run first greatly decreases waiting time.
To make things worse, if P1 enters the queue again, then the entire process execution would
again run at the speed of P1 (i.e. every other process has to wait 100 seconds before running).
This is called the convoy effect: execution runs at the speed of the slowest process. Some
scheduling algorithms would preempt P1 after a certain amount of time, giving other
processes a chance to run and improving performance.
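The two averages above are easy to verify with a short sketch (again, a hypothetical helper), computing FCFS waiting times when all processes arrive at time 0 and context switch time is ignored:

```python
# Sketch: average FCFS waiting time for the two orderings above
# (P1 with burst 100 first vs. last; no context-switch time).

def avg_waiting(bursts):
    time, total = 0, 0
    for burst in bursts:
        total += time       # waiting time = start time (all arrive at 0)
        time += burst
    return total / len(bursts)

print(avg_waiting([100, 1, 1, 1, 1]))   # long process first → 81.2
print(avg_waiting([1, 1, 1, 1, 100]))   # long process last  → 2.0
```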
