
Applied Operating System

Study Guide

Module 1
INTRODUCTION TO OPERATING SYSTEM

SUBTOPIC 1: Operating System Overview

What is an Operating System?


An Operating System (OS) is a program or system software that acts as an interface between the
user and the computer hardware and controls the execution of all kinds of programs.

Goals of an Operating System


 Execute user programs and make solving user problems easier.
 Make the computer system convenient to use.
 Use the computer hardware in an efficient manner.

Components of a Computer System


 Computer hardware – CPU, memory and I/O devices, provides the basic computing
resources.
 Application programs – are used to solve the computing problems of the users such as
word processors, games and business programs.
 Users – who utilize a computer or network service trying to solve different problems.
 Operating System – controls and coordinates the use of the hardware among the various
application programs for the various users.

What is a Kernel?
The kernel is the central part of an OS; it manages system resources and is always resident in
memory. It acts as a bridge between applications and the hardware of the computer, and it is
the first program that loads after the bootloader.

The bootloader is a program that loads the OS into memory and starts its boot-time tasks and
processes.

Common Services Offered By Almost All Operating Systems:


 User Interface
 Program Execution
 File system manipulation
 Input / Output Operations
 Communication
 Resource Allocation
 Error Detection
 Accounting
 Security and protection

1. User Interface (UI) refers to the part of an OS or device that allows a user to enter and
receive information.
Types of UI:
 Command line interface
 Batch based interface
 Graphical User Interface

2. Program Execution. The OS must have the capability to load a program into memory
and execute that program.
3. File system manipulation. Programs need to read and write data as files and directories.
The file-handling portion of the OS also allows users to create and delete files by a
specific name and extension, search for a given file, and/or list file information.
4. Input / Output Operations. A program which is currently executing may require I/O, which
may involve file or other I/O device. The OS is responsible for reading and/or writing data
from I/O devices such as disks, tapes, printers, keyboards, etc.
5. Communication. A process may need to exchange information with another process.
Processes executing on the same computer system or on different computer systems can
communicate using operating system support.
6. Resource Allocation The OS manages the different computer resources such as CPU
time, memory space, file storage space, I/O devices, etc. and allocates them to different
application programs and users.
7. Error Detection The operating system should be able to detect errors within the computer
system (CPU, memory, I/O, or user program) and take the appropriate action.
8. Job Accounting. The OS keeps track of the time and resources used by various tasks and
users; this information can be used to track resource usage for a particular user or group of users.
9. Security and Protection. Protection is any mechanism for controlling access of
processes or users to resources defined by the OS. Security is a defense of the system
against internal and external attacks (denial-of-service, worms, viruses, identity theft, theft
of service)

Types of Operating System


 Batch operating system
 Time-sharing operating systems
 Distributed operating system
 Network operating system
 Real-time operating system
 Handheld operating system

Batch Operating System (BOS)


The user of a BOS never directly interacts with the computer.
Every user prepares his or her job on an offline device, like a punch card, and submits it to the
computer operator.
 Examples: Payroll System, Bank Statements, etc.

Time-sharing Operating System
Time-sharing or multitasking is a logical extension in which the CPU switches jobs so frequently
that users can interact with each job while it is running, creating interactive computing.
• Examples: Unix, Linux, Multics and Windows

Distributed Operating System


Distributed systems use multiple central processors to serve multiple real-time applications and
multiple users. Data processing jobs are distributed among the processors accordingly.
• Examples: Telecom Network, WWW, Cloud Computing, etc.

Network Operating System (NOS)


A NOS runs on a server and provides the server the capability to manage data, users, groups,
security, applications, and other networking functions.
 Examples: Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX,
Linux, Mac OS X Server, Novell NetWare, and BSD/OS (Berkeley Software Design)

Real-time Operating System (RTOS)


RTOS is an operating system intended to serve real-time systems/applications that process data
as it comes in, mostly without buffer delay.
• Examples: LynxOS, OSE, QNX, RTLinux, VxWorks, Windows CE

Handheld Operating System


It is also known as a Mobile OS, built exclusively for a mobile device such as a smartphone,
personal digital assistant (PDA), tablet, or wearable device.
• Examples: Android, Symbian, iOS, BlackBerry OS and Windows Mobile.

SUBTOPIC 2: Computer System Organization, Architectures and Computing Environment

A computer system is made up of various components. The components can be hardware or
software. Because these systems are so massively complex, the components are organized in
layers.

The important points about computer system organization are:
 One or more CPUs, device controllers connect through common bus providing access to
shared memory.
 The I/O devices and the CPU both execute concurrently.

 Some of the processes are scheduled for the CPU and at the same time, some are
undergoing input/output operations.
 There are multiple device controllers, each in charge of a particular device such as
keyboard, mouse, printer etc.
 There is a buffer available for each of the devices. The input and output data can be stored
in these buffers. A buffer is a region of memory used to temporarily hold data while it is being
moved from one place to another.
 The data is moved from memory to the respective device buffers by the CPU for I/O
operations and then this data is moved back from the buffers to memory.
 The device controllers use an interrupt to inform the CPU that I/O operation is completed.

What is an Interrupt?
 An operating system is interrupt driven.
 Interrupt is a signal emitted by hardware or software when a process or an event needs
immediate attention.
 It temporarily diverts the processor to a high-priority event, interrupting the currently
running process; once the interrupt has been serviced, the processor returns to its previous task.

Types of Interrupts:
 Hardware interrupt
 Software interrupt

Hardware interrupt is a signal created and sent to the CPU that is caused by some action taken
by a hardware device.
Example: When a key is pressed or when the mouse is moved.

Software Interrupt arises due to illegal and erroneous use of an instruction or data. It often occurs
when an application software terminates or when it requests the operating system for some
service.
Example: stack overflow, division by zero, invalid opcode, etc. These are also called traps.

Interrupt Handling
The operating system preserves the state of the CPU by storing the registers and the program
counter. It then determines which type of interrupt has occurred:
 Polling – the operating system sends a signal to each device, asking if it has a request
 Vectored interrupt system – the requesting device sends an interrupt to the operating system.
Separate segments of code determine what action should be taken for each type of interrupt.

Operating System Operations


 Dual-mode operation allows OS to protect itself and other system components
 User mode and kernel mode
 Mode bit provided by hardware
 Provides the ability to distinguish when the system is running user code (mode bit = 1) or kernel code (mode bit = 0)
 Some instructions designated as privileged, only executable in kernel mode
 System call changes mode to kernel, return from call resets it to user

A system call is a way for programs to interact with the OS. A computer program makes a system
call when it makes a request to the OS's kernel.
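As a small illustration (not from the guide), Python's os module exposes thin wrappers over kernel system calls; each call below traps from user mode into kernel mode and back:

```python
import os

# Each os-module call below is a thin wrapper over a kernel system call:
# the process traps into kernel mode, the kernel does the work, and the
# result is returned to user mode.
pid = os.getpid()             # getpid() system call: ask the kernel for our process ID
n = os.write(1, b"hello\n")   # write() system call: write 6 bytes to file descriptor 1 (stdout)
```

Even a simple print statement ultimately reaches the hardware through such system calls.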

Single-Processor System
There is one main CPU capable of executing a general-purpose instruction set, including
instructions from user processes.

Multiprocessor System
Also known as parallel-system or multicore.
First appeared in servers and now in smartphones and tablet computers.

Computer System Architecture


 Most systems use a single general-purpose processor (PDAs through mainframes)
 Most systems have special-purpose processors as well
 Multiprocessor systems growing in use and importance
 Also known as parallel systems or tightly-coupled systems
Advantages:
1. Increased throughput. By increasing the number of processors, we expect to get more
work done in less time.
2. Economy of scale. It can cost less than equivalent multiple single-processor systems
because they can share peripherals, mass storage and power supplies.
3. Increased reliability. Functions can be distributed properly among several processors. If
one processor fails, the other processor can pick-up the task.
Two types of Multiprocessing:
 Asymmetric Multiprocessing
 Symmetric Multiprocessing

1. Asymmetric multiprocessing. Each processor is assigned a specific task. A boss
processor controls the system, and the other processors either look to the boss for
instructions or have predefined tasks (a boss-worker relationship).
2. Symmetric multiprocessing (SMP). The most commonly used, in which each processor
performs all tasks within the operating system. All processors are peers; there is no
boss-worker relationship.
The difference between symmetric and asymmetric may result from either hardware or
software.

A recent trend in CPU design is to include multiple computing cores on a single chip. Such
multiprocessor systems are termed multicore. They can be more efficient than multiple chips with
single core.
A dual-core design places two cores on the same chip. Each core has its own register set as well
as its own local cache.

Clustered System
 Like multiprocessor systems, but multiple systems working together
 Usually sharing storage via a storage-area network (SAN)
 Provides a high-availability service which survives failures
 Asymmetric clustering has one machine in hot-standby mode

 Symmetric clustering has multiple nodes running applications, monitoring each other
 Some clusters are for high-performance computing (HPC).

Computing Environment
 Office computing environment
o PCs connected to a network, terminals attached to mainframe or minicomputers
providing batch and timesharing
o Now portals allowing networked and remote systems access to same resources

 Home computing environment


o Used to be single system, then modems
o Now firewalled, networked

 Mobile computing
o Refers to computing on handheld smartphones and tablet computers.

 Distributed system
o It is a collection of physically separate, possibly heterogeneous computer systems
that are networked to provide users with access to the various resources that the
system maintains.
o Network operating system is an operating system that provides services across
the network.

Types of Distributed System:


 Client-Server Computing
o Dumb terminals succeeded by smart PCs
o Many systems now servers, responding to requests generated by clients
 Compute-server provides an interface to client to request services (i.e.
database)
 File-server provides interface for clients to store and retrieve files
 P2P does not distinguish clients and servers
o Instead all nodes are considered peers
o May each act as client, server or both
o Node must join P2P network
 Registers its service with central lookup service on network, or
 Broadcast request for service and respond to requests for service via
discovery protocol
o Examples: Napster and BitTorrent

 Virtualization
o It is a technology that allows operating systems to run as applications within other
operating systems.
o Virtualization plays a major role in cloud computing, as it provides virtual storage
and computing services to cloud clients.

o It is one member of a class of software that also includes emulation. Emulation is
used when the source CPU type is different from the target CPU type.
Example: virtual machine, OracleVirtualBox

Cloud Computing
 It is a type of computing that delivers computing, storage and even applications as a
service across a network.
 It is a logical extension of virtualization.

Types of Cloud
 Public cloud – cloud available via the Internet
 Private cloud – cloud run by a company for that company’s own use
 Hybrid cloud – cloud that includes both public and private

Cloud Computing Service Models


 Software as a service(SaaS) – one or more applications available via the Internet
 Platform as a service(PaaS) – software stack ready for application use via the Internet
 Infrastructure as a service(IaaS) – servers or storage available over the Internet

Open-Source Operating System


 Open Source operating systems are released under a license where the copyright
holder allows others to study, change as well as distribute the software to other people.
 Counter to the copy protection and Digital Rights Management (DRM) movement
 Started by Free Software Foundation (FSF), which has “copyleft” GNU Public License
(GPL)
 Examples: GNU(GNU’s Not Unix)/Linux, BSD UNIX (including core of Mac OS X), and
Sun Solaris

Applied Operating System
Study Guide

Module 2

SUBTOPIC 1: PROCESS MANAGEMENT

OBJECTIVES

Upon completion of this module, the student will be able to:

 Explain the process concept, architecture and types of computer processes;


 Describe different states of a process and the process control block;
 Discuss concurrent processes;
 Give the concept of scheduling and CPU scheduler;
 Solve CPU scheduling problems using different CPU scheduling algorithms/techniques;

PROCESS CONCEPT

• A process is a program in execution. A program by itself is not a process.


• A program is a passive entity, such as the contents of a file stored on disk, while a process
is an active entity.
• A computer system consists of a collection of processes:
• Operating system processes execute system code, and
• User processes execute user code.
• Although several processes may be associated with the same program, they are
nevertheless considered separate execution sequences.
• All processes can potentially execute concurrently with the CPU (or CPUs) multiplexing
among them (time sharing).
• A process is actually a cycle of CPU execution (CPU burst) and I/O wait (I/O burst).
Processes alternate back and forth between these two states.
• Process execution begins with a CPU burst that is followed by an I/O burst, which is
followed by another CPU burst, then another I/O burst, and so on. Eventually, the last
CPU burst will end with a system request to terminate execution.

Example:

PROCESS ARCHITECTURE

 To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned in the
program.

 When a program is loaded into memory and becomes a process, it can be divided
into four sections ─ stack, heap, text and data. The following shows a simplified layout of a
process inside main memory.

Stack – The process stack contains temporary data such as method/function parameters,
return addresses and local variables.

Heap – This is dynamically allocated memory to a process during its run time.

Text – This includes the current activity represented by the value of the Program Counter
and the contents of the processor's registers.

Data – This section contains the global and static variables.

PROCESS STATE

As a process executes, it changes state. The current activity of a process partly defines its state.
Each sequential process may be in one of following states:

1. New. The process is being created.


2. Running. The CPU is executing its instructions.
3. Waiting. The process is waiting for some event to occur (such as an I/O completion).
4. Ready. The process is waiting for the OS to assign a processor to it.
5. Terminated. The process has finished execution.

PROCESS STATE DIAGRAM

 new → ready (admitted)
 ready → running (scheduler dispatch)
 running → ready (interrupt)
 running → waiting (I/O or event wait)
 waiting → ready (I/O or event completion)
 running → terminated (exit)
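The legal transitions above can be encoded as a small lookup table; a hypothetical sketch (the state and event names come from the diagram, the code itself is only illustrative):

```python
# Legal state transitions from the process state diagram.
TRANSITIONS = {
    "new":        {"ready"},                           # admitted
    "ready":      {"running"},                         # scheduler dispatch
    "running":    {"ready", "waiting", "terminated"},  # interrupt / I/O wait / exit
    "waiting":    {"ready"},                           # I/O or event completion
    "terminated": set(),                               # final state
}

def step(state, target):
    """Move a process to `target` state, rejecting illegal transitions."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {target}")
    return target
```

For instance, `step("waiting", "running")` raises an error: a waiting process must become ready before it can be dispatched.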

PROCESS CONTROL BLOCK

Each process is represented in the operating system by a process control block (PCB) – also
called a task control block. A PCB is a data block or record containing many pieces of
information associated with a specific process, including:

1. Process state. The state may be new, ready, running, waiting, or halted.
2. Program Counter. The program counter indicates the address of the next instruction to
be executed for this process.
3. CPU Registers. These include accumulators, index registers, stack pointers, and
general-purpose registers, plus any condition-code information. Along with the program
counter, this information must be saved when an interrupt occurs, to allow the process to
be continued correctly afterward.

4. CPU Scheduling Information. This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
5. Memory Management Information. This information includes limit registers or page
tables.
6. Accounting Information. This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers, and so on.
7. I/O Status Information. This information includes outstanding I/O requests, I/O devices
(such as disks) allocated to this process, a list of open files, and so on.

• The PCB simply serves as the repository for any information that may vary from process to
process.

[Figure: a process control block, listing its fields – pointer, process state, process number,
program counter, registers, memory limits, list of open files, …]
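The PCB fields above can be sketched as a record; a hypothetical Python dataclass (real kernels use C structures, e.g. Linux's task_struct, so this is purely illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block with the fields listed above."""
    process_number: int
    state: str = "new"             # new / ready / running / waiting / terminated
    program_counter: int = 0       # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0              # CPU-scheduling information
    memory_limits: tuple = (0, 0)  # e.g. base and limit register values
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0     # accounting information

pcb = PCB(process_number=7)
```

A newly created PCB starts in the "new" state with empty register and file lists, mirroring the repository role described above.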

PROCESS CONCEPT

Example of the CPU being switched from one process to another. This is also known as a
context-switch diagram.

The sequence runs between process P0, the operating system, and process P1:

1. P0 is executing; an interrupt or system call occurs.
2. The OS saves P0's state into PCB0 (both processes are idle while the OS runs).
3. The OS reloads P1's state from PCB1; P1 executes while P0 stays idle.
4. An interrupt or system call occurs; the OS saves P1's state into PCB1.
5. The OS reloads P0's state from PCB0; P0 resumes executing.

CONCURRENT PROCESSES

• The processes in the system can execute concurrently; that is, many processes may be
multitasked on a CPU.
• A process may create several new processes, via a create-process system call, during
the course of execution. Each of these new processes may in turn create other processes.
• The creating process is the parent process whereas the new processes are the children
of that process.
• When a process creates a sub-process, the sub-process may be able to obtain its
resources directly from the OS or it may use a subset of the resources of the parent
process.

Restricting a child process to a subset of the parent’s resources prevents any process from
overloading the system by creating too many processes.

When a process creates a new process, two common implementations exist in terms of execution:

1. The parent continues to execute concurrently with its children.


2. The parent waits until all its children have terminated.

• A process terminates when it finishes its last statement and asks the operating system to
delete it using the exit system call.

• A parent may terminate the execution of one of its children for a variety of reasons, such as:
1. The child has exceeded its usage of some of the resources it has been allocated.
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the OS does not allow a child to continue if its parent
terminates. In such systems, if a process terminates, then all its children must also be
terminated by the operating system. This phenomenon is referred to as cascading
termination.

The concurrent processes executing in the OS may either be independent processes or
cooperating processes.
• A process is independent if it cannot affect or be affected by the other processes. Clearly,
any process that does not share any data (temporary or persistent) with any other process is
independent. Such a process has the following characteristics:

1. Its execution is deterministic; that is, the result of the execution depends solely on the
input state.
2. Its execution is reproducible; that is, the result of the execution will always be the same
for the same input.
3. Its execution can be stopped and restarted without causing ill effects.

• A process is cooperating if it can affect or be affected by the other processes. Clearly, any
process that shares data with other processes is a cooperating process. Such a process has
the following characteristics:

1. The results of its execution cannot be predicted in advance, since it depends on relative
execution sequence.
2. The result of its execution is nondeterministic since it will not always be the same for the
same input.

Concurrent execution of cooperating processes requires mechanisms that allow processes to
communicate with one another and to synchronize their actions.

SCHEDULING CONCEPTS

• The objective of multiprogramming is to have some process running at all times, to maximize
CPU utilization.
• Multiprogramming also increases throughput, which is the amount of work the system
accomplishes in a given time interval (for example, 17 processes per minute).

Example:
Given two processes, P0 and P1. Each process alternates CPU bursts with idle periods spent
waiting for input:

process P0: start → idle (wait for input) → idle (wait for input) → idle (wait for input) → stop
process P1: start → idle (wait for input) → idle (wait for input) → idle (wait for input) → stop

If the system runs the two processes sequentially, then CPU utilization is only 50%.

• The idea of multiprogramming is if one process is in the waiting state, then another process
which is in the ready state goes to the running state.

Example: Applying multiprogramming to the two processes, P0 and P1 – while P0 waits for
input, the CPU runs P1, and while P1 waits for input, the CPU runs P0.

Then CPU utilization increases to 100%.


• As processes enter the system, they are put into a job queue. This queue consists of all
processes in the system.
• The processes that are residing in main memory and are ready and waiting to execute are
kept on another queue which is the ready queue.
• A process migrates between the various scheduling queues throughout its lifetime. The OS
must select processes from these queues in some fashion.
• The selection process is the responsibility of the appropriate scheduler.

TYPES OF SCHEDULER

• Long-term scheduler (or Job scheduler) selects processes from the secondary
storage and loads them into memory for execution.
• The long-term scheduler executes much less frequently.

• There may be minutes between the creation of new processes in the system.
• The long-term scheduler controls the degree of multiprogramming – the number of
processes in memory.
• Because of the longer interval between executions, the long-term scheduler can afford to
take more time to select a process for execution.
• Short-term scheduler (or CPU scheduler) selects process from among the processes
that are ready to execute, and allocates the CPU to one of them.
• The short-term scheduler must select a new process for the CPU frequently.
• A process may execute for only a few milliseconds before waiting for an I/O request.
• Because of the brief time between executions, the short-term scheduler must be very
fast.
• Medium-term scheduler removes (swaps out) certain processes from memory to
lessen the degree of multiprogramming (particularly when thrashing occurs).
• At some later time, the process can be reintroduced into memory and its execution can
be continued where it left off.
• This scheme is called swapping.

SCHEDULING CONCEPTS

• Switching the CPU to another process requires some time to save the state of the old
process and to load the saved state of the new process. This task is known as a context
switch.
• Context-switch time is pure overhead, because the system does no useful work while
switching and should therefore be minimized.
• Whenever the CPU becomes idle, the OS (particularly the CPU scheduler) must select
one of the processes in the ready queue for execution.

CPU SCHEDULER

CPU scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for example,
I/O request, invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example,
completion of I/O).
4. When a process terminates.

• For circumstances 1 and 4, there is no choice in terms of scheduling. A new process (if
one exists in the ready queue) must be selected for execution. There is a choice,
however, for circumstances 2 and 3.

• When scheduling takes place only under circumstances 1 and 4, the scheduling scheme
is non-preemptive; otherwise, the scheduling scheme is preemptive.

CPU SCHEDULER

• In non-preemptive scheduling, once the CPU has been allocated to a process, the
process keeps the CPU until it releases it, either by terminating or by switching
states. No process is interrupted until it completes; after that, the processor
switches to another process.

• Preemptive scheduling works by dividing CPU time into slots given to processes. The
time slot given may or may not be enough to complete the process; when the burst time
of the process is greater than the CPU cycle, it is placed back into the ready queue and
will execute at its next chance. This scheduling is used when a process switches to the
ready state.

CPU SCHEDULING ALGORITHMS

• Different CPU-scheduling algorithms have different properties and may favour one class
of processes over another.
• Many criteria have been suggested for comparing CPU-scheduling algorithms.
• The characteristics used for comparison can make a substantial difference in the
determination of the best algorithm. The criteria should include: CPU Utilization,
Throughput, Turnaround Time, Waiting Time, and Response Time

1. CPU Utilization measures how busy the CPU is. CPU utilization may range from 0 to 100
percent. In a real system, it should range from 40% (for a lightly loaded system) to 90% (for
a heavily loaded system).

2. Throughput is the amount of work completed in a unit of time – in other words, the number
of jobs completed per unit of time. The scheduling algorithm should aim to maximize the
number of jobs processed per time unit.

3. Turnaround Time measures how long it takes to execute a process. Turnaround time is
the interval from the time of submission to the time of completion. It is the sum of the
periods spent waiting to get into memory, waiting in the ready queue, executing in the CPU,
and doing I/O.

4. Waiting Time is the time a job waits for resource allocation when several jobs are
competing in multiprogramming system. Waiting time is the total amount of time a process
spends waiting in the ready queue.

5. Response Time is the time from the submission of a request until the system makes the
first response. It is the amount of time it takes to start responding but not the time that it
takes to output that response.

A good CPU scheduling algorithm maximizes CPU utilization and throughput and minimizes
turnaround time, waiting time and response time.

• In most cases, the average measure is optimized. However, in some cases, it is desired
to optimize the minimum or maximum values, rather than the average.
• For example, to guarantee that all users get good service, it may be better to minimize
the maximum response time.
• For interactive systems (time-sharing systems), some analysts suggest that
minimizing the variance in response time is more important than minimizing the
average response time.
• A system with a reasonable and predictable response may be considered more
desirable than a system that is faster on the average, but is highly variable.

Non-preemptive:
• First-Come First-Served(FCFS)
• Shortest Job First (SJF)
• Priority Scheduling (Non-preemptive)

Preemptive:
• Shortest Remaining Time First (SRTF)
• Priority Scheduling (Preemptive)
• Round-robin (RR)

SUBTOPIC 2: CPU SCHEDULING TECHNIQUES – NON PREEMPTIVE

FIRST-COME FIRST-SERVED (FCFS)

• FCFS is the simplest CPU-scheduling algorithm.


• The process that requests the CPU first gets the CPU first.
• The FCFS algorithm is non-preemptive.

Example 1:

• Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds (ms).
• Illustrate the Gantt chart and compute for the average waiting time and average
turnaround time

Given:

Process   Arrival Time   Burst Time   Waiting Time     Turnaround Time
P1        0              5            0-0 = 0          5-0 = 5
P2        0              3            5-0 = 5          8-0 = 8
P3        0              8            8-0 = 8          16-0 = 16
P4        0              6            16-0 = 16        22-0 = 22
Average                               29/4 = 7.25 ms   51/4 = 12.75 ms

Gantt Chart:

|  P1  |  P2  |  P3  |  P4  |
0      5      8      16     22

Formulas:

• Waiting time = Start time – Arrival time
• Turnaround time = Completion time – Arrival time
• Average Waiting Time = Sum of Waiting times / No. of processes
• Average Turnaround Time = Sum of Turnaround times / No. of processes

Example 2:

Process   Arrival Time   Burst Time   Waiting Time     Turnaround Time
P1        0              5            0-0 = 0          5-0 = 5
P2        1              3            5-1 = 4          8-1 = 7
P3        2              8            8-2 = 6          16-2 = 14
P4        3              6            16-3 = 13        22-3 = 19
Average                               23/4 = 5.75 ms   45/4 = 11.25 ms

Gantt Chart:

|  P1  |  P2  |  P3  |  P4  |
0      5      8      16     22
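The FCFS computation in Example 2 can be reproduced with a short sketch (a hypothetical helper, not part of the guide); processes are served strictly in arrival order:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst) sorted by arrival time.
    Returns {name: (waiting_time, turnaround_time)}."""
    time, results = 0, {}
    for name, arrival, burst in processes:
        start = max(time, arrival)                 # CPU may idle until the job arrives
        waiting = start - arrival                  # Waiting = Start - Arrival
        time = start + burst                       # completion time of this job
        results[name] = (waiting, time - arrival)  # Turnaround = Completion - Arrival
    return results

# Example 2 from the guide:
res = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)])
avg_wait = sum(w for w, _ in res.values()) / len(res)  # 23/4 = 5.75 ms
avg_tat = sum(t for _, t in res.values()) / len(res)   # 45/4 = 11.25 ms
```

The averages match the table: 5.75 ms waiting and 11.25 ms turnaround.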

SHORTEST JOB FIRST (SJF)

• The SJF algorithm associates with each process the length of that process's next CPU burst.

• When the CPU is available, it is assigned to the process that has the smallest next
CPU burst.

• If two processes have the same length next CPU burst, FCFS scheduling is used to
break the tie.

• SJF algorithm is non-preemptive.

Example 1:

Process   Arrival Time   Burst Time   Waiting Time     Turnaround Time
P1        0              5            3-0 = 3          8-0 = 8
P2        0              3            0-0 = 0          3-0 = 3
P3        0              8            14-0 = 14        22-0 = 22
P4        0              6            8-0 = 8          14-0 = 14
Average                               25/4 = 6.25 ms   47/4 = 11.75 ms

Gantt Chart:

|  P2  |  P1  |  P4  |  P3  |
0      3      8      14     22

Example 2:

Process   Arrival Time   Burst Time   Waiting Time     Turnaround Time
P1        0              5            0-0 = 0          5-0 = 5
P2        1              3            5-1 = 4          8-1 = 7
P3        2              8            14-2 = 12        22-2 = 20
P4        3              6            8-3 = 5          14-3 = 11
Average                               21/4 = 5.25 ms   43/4 = 10.75 ms

Gantt Chart:

|  P1  |  P2  |  P4  |  P3  |
0      5      8      14     22
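Non-preemptive SJF can be sketched the same way (a hypothetical helper): at each scheduling decision, the ready process with the smallest burst wins, with FCFS breaking ties:

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns {name: (waiting_time, turnaround_time)}."""
    pending, time, results = list(processes), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                         # CPU idle: jump to the next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: (p[2], p[1]))  # smallest burst; FCFS tie-break
        name, arrival, burst = job
        results[name] = (time - arrival, time + burst - arrival)
        time += burst                         # runs to completion (non-preemptive)
        pending.remove(job)
    return results

# Example 2 from the guide:
res = sjf([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)])
```

At time 0 only P1 is ready, so it runs first even though P2 is shorter; this reproduces the Example 2 table exactly.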

PRIORITY SCHEDULING (NP)

• Priority scheduling (non-preemptive) algorithm is one of the most common scheduling
algorithms in batch systems.

• Each process is assigned a priority.

• Process with highest priority is to be executed first and so on.

• Processes with same priority are executed on FCFS basis.

• Priority can be decided based on memory requirements, time requirements or any other
resource requirement.

Example 1:

Assume the lowest priority value corresponds to the highest priority; in this case, a process
with priority value 1 has the highest priority.

Process   Arrival Time   Burst Time   Priority   Waiting Time    Turnaround Time
P1        0              5            1          0-0 = 0         5-0 = 5
P2        0              3            2          13-0 = 13       16-0 = 16
P3        0              8            1          5-0 = 5         13-0 = 13
P4        0              6            3          16-0 = 16       22-0 = 22
Average                                          34/4 = 8.5 ms   56/4 = 14 ms

Gantt Chart:

|  P1  |  P3  |  P2  |  P4  |
0      5      13     16     22
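Non-preemptive priority scheduling differs from the SJF sketch only in the selection key; a hypothetical helper reproducing Example 1 (lower value = higher priority, FCFS tie-break):

```python
def priority_np(processes):
    """Non-preemptive priority scheduling (lower number = higher priority).
    processes: list of (name, arrival, burst, priority)."""
    pending, time, results = list(processes), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                         # CPU idle: jump to the next arrival
            time = min(p[1] for p in pending)
            continue
        # best (smallest) priority; FCFS by arrival breaks ties, and min()
        # keeps list order for exact ties (P1 before P3 here)
        job = min(ready, key=lambda p: (p[3], p[1]))
        name, arrival, burst, _ = job
        results[name] = (time - arrival, time + burst - arrival)
        time += burst
        pending.remove(job)
    return results

# Example 1 from the guide (all arrive at time 0):
res = priority_np([("P1", 0, 5, 1), ("P2", 0, 3, 2), ("P3", 0, 8, 1), ("P4", 0, 6, 3)])
```

P1 and P3 share priority 1 and arrival 0, so submission order decides, matching the guide's P1 → P3 → P2 → P4 schedule.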

SUBTOPIC 3: CPU SCHEDULING TECHNIQUES–PREEMPTIVE

SHORTEST JOB FIRST (SJF)

• The SJF algorithm may be either preemptive or non-preemptive.

• A new process arriving may have a shorter next CPU burst than what is left of the currently
executing process.

• A preemptive SJF algorithm will preempt the currently executing process.

• Preemptive SJF scheduling is sometimes called Shortest Remaining Time First
scheduling.

SHORTEST REMAINING TIME FIRST (SRTF)

Example:

Process   Arrival Time   Burst Time   Waiting Time        Turnaround Time
P1        0              5            (0-0) + (4-1) = 3   8-0 = 8
P2        1              3            1-1 = 0             4-1 = 3
P3        2              8            14-2 = 12           22-2 = 20
P4        3              6            8-3 = 5             14-3 = 11
Average                               20/4 = 5 ms         42/4 = 10.5 ms

Gantt Chart:

|  P1  |  P2  |  P1  |  P4  |  P3  |
0      1      4      8      14     22
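SRTF can be simulated one millisecond at a time, re-evaluating the remaining times at every tick; a hypothetical sketch reproducing the example above:

```python
def srtf(processes):
    """Preemptive SJF (Shortest Remaining Time First), simulated in 1 ms steps.
    processes: list of (name, arrival, burst)."""
    arrival = {n: a for n, a, b in processes}
    burst = {n: b for n, a, b in processes}
    remaining, finish, time = dict(burst), {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                # CPU idle this millisecond
            continue
        run = min(ready, key=lambda n: (remaining[n], arrival[n]))
        remaining[run] -= 1          # run 1 ms, then re-evaluate (preemption point)
        time += 1
        if remaining[run] == 0:
            finish[run] = time
            del remaining[run]
    # Turnaround = Completion - Arrival; Waiting = Turnaround - Burst
    return {n: (finish[n] - arrival[n] - burst[n], finish[n] - arrival[n]) for n in finish}

res = srtf([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)])
```

At time 1, P2 arrives with a 3 ms burst while P1 still has 4 ms remaining, so P2 preempts P1, exactly as in the table.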

PRIORITY SCHEDULING (P)

• Priority scheduling can either be preemptive or non-preemptive.


• When a process arrives at the ready queue, its priority is compared with the priority of
the currently running process.
• A preemptive priority scheduling algorithm will preempt the CPU if the priority of the
newly arrived process is higher than the currently running process.

Example 3:

Process   Arrival Time   Burst Time   Priority   Waiting Time    Turnaround Time
P1        0              5            1          0-0 = 0         5-0 = 5
P2        1              3            2          13-1 = 12       16-1 = 15
P3        2              8            1          5-2 = 3         13-2 = 11
P4        3              6            3          16-3 = 13       22-3 = 19
Average                                          28/4 = 7 ms     50/4 = 12.5 ms

Gantt Chart:

|  P1  |  P3  |  P2  |  P4  |
0      5      13     16     22
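Preemptive priority scheduling can reuse the same tick-by-tick simulation, selecting on priority instead of remaining time (hypothetical sketch; lower value = higher priority):

```python
def priority_p(processes):
    """Preemptive priority scheduling, simulated in 1 ms steps.
    processes: list of (name, arrival, burst, priority); lower number = higher priority."""
    arrival = {n: a for n, a, b, p in processes}
    burst = {n: b for n, a, b, p in processes}
    prio = {n: p for n, a, b, p in processes}
    remaining, finish, time = dict(burst), {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        # a newly arrived higher-priority process preempts at the next tick;
        # equal priorities fall back to FCFS by arrival time
        run = min(ready, key=lambda n: (prio[n], arrival[n]))
        remaining[run] -= 1
        time += 1
        if remaining[run] == 0:
            finish[run] = time
            del remaining[run]
    return {n: (finish[n] - arrival[n] - burst[n], finish[n] - arrival[n]) for n in finish}

res = priority_p([("P1", 0, 5, 1), ("P2", 1, 3, 2), ("P3", 2, 8, 1), ("P4", 3, 6, 3)])
```

P3 arrives at time 2 with the same priority as P1, so it does not preempt; P1 finishes at 5 and P3 runs next, reproducing the Example 3 table.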

ROUND-ROBIN (RR) SCHEDULING

• This algorithm is specifically for time-sharing systems.

• A small unit of time, called a time quantum or time slice, is defined.

• The ready queue is treated as a circular queue.

• The CPU scheduler goes around the ready queue, allocating the CPU to each process
for a time interval of up to 1 time quantum.

• The RR algorithm is therefore preemptive.

Example:
Time Quantum = 3 ms

Process   Arrival Time   Burst Time   Waiting Time                    Turnaround Time
P1        0              5            (0-0) + (12-3) = 9              14-0 = 14
P2        1              3            3-1 = 2                         6-1 = 5
P3        2              8            (6-2) + (14-9) + (20-17) = 12   22-2 = 20
P4        3              6            (9-3) + (17-12) = 11            20-3 = 17
Average                               34/4 = 8.5 ms                   56/4 = 14 ms

Gantt Chart:

|  P1  |  P2  |  P3  |  P4  |  P1  |  P3  |  P4  |  P3  |
0      3      6      9      12     14     17     20     22
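Round-robin with a time quantum can be sketched with a circular ready queue (hypothetical helper; it assumes newly arrived processes enter the queue before a preempted process is requeued, which matches the example above):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst) sorted by arrival time.
    Returns {name: (waiting_time, turnaround_time)}."""
    arrival = {n: a for n, a, b in processes}
    burst = {n: b for n, a, b in processes}
    remaining, finish, time = dict(burst), {}, 0
    queue, pending = deque(), list(processes)

    def admit(now):              # move arrived processes into the ready queue
        for p in [p for p in pending if p[1] <= now]:
            queue.append(p[0])
            pending.remove(p)

    admit(0)
    while queue or pending:
        if not queue:            # CPU idle until the next arrival
            time = pending[0][1]
            admit(time)
        name = queue.popleft()
        run = min(quantum, remaining[name])   # up to 1 time quantum
        time += run
        remaining[name] -= run
        admit(time)              # newcomers enter before the preempted process
        if remaining[name] == 0:
            finish[name] = time
        else:
            queue.append(name)   # back of the ready queue
    return {n: (finish[n] - arrival[n] - burst[n], finish[n] - arrival[n]) for n in finish}

res = round_robin([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)], quantum=3)
```

With a 3 ms quantum this yields the schedule in the Gantt chart above: P1 P2 P3 P4 P1 P3 P4 P3.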

• The performance of the RR algorithm depends heavily on the size of the time
quantum.

• If the time quantum is too large (infinite), the RR policy degenerates into the FCFS
policy.

• If the time quantum is too small, then the effect of the context-switch time becomes a
significant overhead.

• As a general rule, 80 percent of the CPU bursts should be shorter than the time
quantum.

REFERENCES

• Silberschatz, A. (2018). Operating System Concepts, Wiley.


• Tomsho, G. (2019). Guide to Operating Systems, Cengage Learning.
