
CS330 Operating Systems Lec03


10/5/2017

Process Scheduling

Agenda
• Recap
• General scheduling mechanism
• Metrics and goals of scheduling
• Scheduling algorithms
• Multiprocessor scheduling
• Case studies: UNIX, Linux, Solaris, Windows XP
• Scheduling in thread libraries
• Evaluating scheduling algorithms

Recap
• Concept of process
  – Loosely speaking, anything that runs on the CPU
• System calls to control what a process does (other than computing)
  – Creation of a new process/thread
  – Communicating with another process/thread: message passing, shared memory, event signal
  – Saving and restoring the context of a process
• Multithreading libraries

General scheduling mechanism
• Process scheduling is the activity of selecting the process that will run next on the CPU
• If the scheduler needs to run in kernel mode, there has to be a mode switch in the currently running process before scheduling can take place
  – The mode switch is usually a result of a system call or an interrupt
  – A scheduling decision may have to be taken only on a long-latency system call and some interrupts such as the timer interrupt
• The scheduler saves the context of the currently running process, selects a process from the ready queue, and restores the context of the selected process
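The save/select/restore step described above can be sketched in a few lines. This is an illustrative model only, not any real kernel's code; the process names, the `contexts` dictionary, and the string "context" values are all made up for the example.

```python
from collections import deque

# Minimal sketch of one scheduling step: save the context of the
# currently running process, put it back on the ready queue, select
# the next process from the head of the queue, and restore its context.
def schedule(current, ready_queue, contexts):
    if current is not None:
        contexts[current] = f"saved-context-of-{current}"  # save context
        ready_queue.append(current)                        # back to ready queue
    nxt = ready_queue.popleft()                            # select next process
    restored = contexts.get(nxt)                           # restore its context
    return nxt, restored

contexts = {"B": "saved-context-of-B"}
ready = deque(["B", "C"])
nxt, ctx = schedule("A", ready, contexts)
print(nxt, ctx)  # B saved-context-of-B; "A" is now at the tail of the queue
```

A real scheduler would of course store register state rather than strings, and the selection policy at `popleft()` is exactly what the algorithms in the following slides vary.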


General scheduling mechanism
• The scheduler can be invoked in four possible circumstances
  – The currently running process goes on a long-latency system call, i.e., transitions from running to sleep state
  – A new process is created or a process completes a long-latency system call, i.e., a process transitions from created to ready or from sleep to ready
  – The currently running process terminates
  – The currently running process receives a timer interrupt
  – The first and the third cases lead to non-preemptive or co-operative scheduling
  – The remaining two cases lead to pre-emptive scheduling

Goals of process scheduling
• A scheduling algorithm can target a subset of the following
  – Maximize throughput: rate at which processes complete
  – Minimize turnaround time
    • Average, maximum, standard deviation?
  – Minimize waiting time in the ready queue
    • Direct measure of scheduler efficiency
    • Average, maximum, standard deviation?
  – Minimize response time
    • How long a process takes to produce the first result
    • Important for interactive systems
    • Average, maximum, standard deviation?
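The metrics above follow mechanically once a schedule is fixed. As a hedged illustration, with made-up numbers: each process is described by its arrival time, CPU burst, the time it first gets the CPU, and its completion time.

```python
# Illustrative completed schedule: name -> (arrival, burst, first_run, completion)
procs = {
    "P1": (0, 5, 0, 5),
    "P2": (1, 3, 5, 8),
    "P3": (2, 1, 8, 9),
}

# Turnaround = completion - arrival; waiting = turnaround - burst;
# response = first time on CPU - arrival; throughput = jobs / makespan.
turnaround = {p: done - arr for p, (arr, _, _, done) in procs.items()}
waiting    = {p: turnaround[p] - burst for p, (_, burst, _, _) in procs.items()}
response   = {p: start - arr for p, (arr, _, start, _) in procs.items()}
throughput = len(procs) / max(done for (_, _, _, done) in procs.values())

print(turnaround)  # {'P1': 5, 'P2': 7, 'P3': 7}
print(waiting)     # {'P1': 0, 'P2': 4, 'P3': 6}
print(response)    # {'P1': 0, 'P2': 4, 'P3': 6}
```

Note that under non-preemptive scheduling with a single CPU burst, response and waiting coincide, as they do here; preemption separates them.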

Scheduling algorithms: FCFS
• Non-preemptive first-come-first-serve scheduling
• May lead to a convoy effect if one process has large CPU bursts and others have small CPU bursts
• In the worst case, FCFS has an unbounded average waiting time

Scheduling algorithms: SJF
• Non-preemptive shortest next CPU burst
  – Popularly known as shortest job first scheduling
  – Provably optimal for average waiting time and average turnaround time
  – Drawback: requires knowledge about future CPU bursts
  – One popular way of estimating CPU bursts is exponential averaging: s(n+1) = a·t(n) + (1−a)·s(n)
    • Need to start with a guess for s(0), but it has little effect in the long run
  – The pre-emptive version is called shortest remaining time first (SRTF)
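Both points above can be checked on a small example. The burst lengths below are illustrative (the classic convoy case: one long burst ahead of two short ones, all arriving together); the exponential-averaging run assumes a = 0.5 and an arbitrary initial guess s(0) = 10.

```python
# Average waiting time when bursts are served in the given order:
# each process waits for the sum of the bursts served before it.
def avg_waiting(bursts):
    wait, t = 0, 0
    for b in bursts:
        wait += t
        t += b
    return wait / len(bursts)

bursts = [24, 3, 3]                 # convoy: long burst arrives first
print(avg_waiting(bursts))          # FCFS: (0 + 24 + 27) / 3 = 17.0
print(avg_waiting(sorted(bursts)))  # SJF:  (0 + 3 + 6) / 3  = 3.0

# Exponential averaging of the next CPU burst: s(n+1) = a*t(n) + (1-a)*s(n)
def next_estimate(s, t, a=0.5):
    return a * t + (1 - a) * s

s = 10.0                 # initial guess s(0)
for t in [6, 4, 6]:      # observed bursts
    s = next_estimate(s, t)
print(s)                 # 6.0: the estimate has converged toward recent bursts
```

Reordering the same bursts shortest-first cuts the average waiting time from 17.0 to 3.0, which is the convoy effect and SJF's optimality in miniature.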


Scheduling algorithms: Priority
• Each process is assigned a priority
  – Either by the kernel based on the resource usage profile of the process or externally by the user
• The process with the highest priority is scheduled
  – Can be pre-emptive or non-preemptive
• A steady flow of CPU bursts from the high-priority processes can starve the low-priority ones
  – Age-based priority modulation solves this problem
• SJF is a special-case priority scheduling policy

Scheduling algorithms: Round-robin
• Pre-emptive quantum scheduling
  – The ready queue is treated as a circular FIFO, and each process in the FIFO order is assigned a fixed time slice or quantum for execution
  – If the running process goes to sleep or terminates before the expiry of the quantum, the next process in the FIFO order is scheduled
  – If the running process's quantum expires, it is put back at the tail of the ready queue
  – The time quantum should be chosen to be larger than the context switch overhead
    • Too large a time quantum leads to FCFS scheduling
    • 80% of the CPU bursts should be shorter than the time quantum: is this thumb rule useful?
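The circular-FIFO behaviour above can be simulated directly. This is a hedged sketch with made-up burst lengths and a quantum of 2; it records each dispatch as (process, time run).

```python
from collections import deque

# Round-robin: run the head of the circular FIFO for at most one quantum;
# if the burst is unfinished when the quantum expires, requeue at the tail.
def round_robin(bursts, quantum):
    q = deque(bursts.items())
    order = []
    while q:
        name, left = q.popleft()
        run = min(left, quantum)
        order.append((name, run))
        if left > run:
            q.append((name, left - run))  # quantum expired: back to the tail
    return order

print(round_robin({"A": 5, "B": 2, "C": 3}, quantum=2))
# [('A', 2), ('B', 2), ('C', 2), ('A', 2), ('C', 1), ('A', 1)]
```

Raising the quantum to 5 or more makes the trace collapse to one dispatch per process in arrival order, i.e., FCFS, which is the degenerate case noted above.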

Scheduling algorithms: Multi-level queues
• Often it is necessary to design different scheduling policies for different types of processes
  – Foreground processes would require a scheduling policy that honours time-sharing, e.g., round-robin
  – Background processes are happy with something as simple as FCFS
• One possible implementation: maintain multiple ready queues, each having its own policy
  – One for each type of process
  – Order the queues by priority for scheduling across queues

Scheduling algorithms: Multi-level queues
• Multi-level queue scheduling
  – Round-robin scheduling across queues
  – The quantum attached to a queue is proportional to its priority
    • Processes from higher-priority queues get to run longer
  – Within a queue, the scheduling algorithm depends on the type of the processes in the queue
  – Drawback: a process cannot migrate from one queue to another even if its behavior has changed
    • The priority of a process may change over its lifetime
    • Solution: multi-level feedback queues


Scheduling algorithms: Multi-level queues
• Multi-level feedback queue scheduling: an example
  – Suppose we have three queues with three different scheduling quanta
  – A new process is always enqueued at the tail of the queue with the smallest quantum
  – If a process fails to complete its current CPU burst within the time quantum allocated for its current queue, the process is enqueued at the tail of the queue with the next higher time quantum
    • The queue with the largest time quantum could just execute non-preemptive FCFS
  – The frequency of scheduling a queue must be inversely proportional to its time quantum

Scheduling algorithms: Deadlines
• In real-time systems, usually a deadline is attached to each process
  – The scheduling algorithm needs to minimize the average, maximum, or standard deviation of the overshoot
  – Two types of processes
    • With hard deadlines (must be met)
    • With soft deadlines (minimize some function of the overshoot)
  – Tempting to sort the processes by deadline and schedule them in earliest-deadline-first order
    • No guarantee on overshoot if all the deadlines cannot be met
  – How to handle a continuous flow of processes?
    • All deadlines are not known a priori (pre-emptive EDF)
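The feedback (demotion) rule in the multi-level feedback example above can be sketched as follows. The three quanta are assumed values for illustration; the function reports which queue a process of a given burst length ends up in after successive demotions.

```python
# Assumed quanta for the three queues, smallest first (illustrative values).
QUANTA = [2, 4, 8]

# A new process starts in queue 0 (smallest quantum). Each time its burst
# fails to finish within the current queue's quantum, it is demoted to the
# queue with the next larger quantum; the last queue never demotes.
def final_queue(burst):
    level = 0
    remaining = burst
    while level < len(QUANTA) - 1 and remaining > QUANTA[level]:
        remaining -= QUANTA[level]  # ran a full quantum without finishing
        level += 1                  # demoted to the next queue
    return level

print(final_queue(1))  # 0: finishes within the 2-unit quantum
print(final_queue(5))  # 1: survives 2 units, finishes within the next 4
print(final_queue(7))  # 2: demoted twice, lands in the FCFS-like last queue
```

This captures why long CPU bursts sink toward the large-quantum queue while short interactive bursts stay in the frequently scheduled small-quantum queue.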

Scheduling algorithms: Deadlines
• How to make sure that the owner of a process submits the true deadline?
  – Possible to submit an earlier deadline than the actual one to gain priority in scheduling
    • Particularly problematic if the user knows that the scheduling algorithm is EDF
    • Any priority scheme that depends on the deadline alone will suffer
    • No solution would be perfect in this case
  – If there are too many processes, the scheduler could switch to round-robin
  – Important for the user to specify correct deadlines when the hard deadlines are mission-critical

Multiprocessor scheduling
• Two important aspects: load balancing and data affinity
• Where does the OS code run? Options:
  – A group of processors dedicated to carrying out kernel activities (known as asymmetric scheduling)
    • A single OS node simplifies OS design and improves performance
  – The OS code runs on the processor that asks for a kernel service (known as symmetric scheduling)
• Design of the ready queue
  – Centralized (one queue), distributed (one per processor), hierarchical (combination of both)
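The earliest-deadline-first ordering discussed above, and the overshoot it is judged by, fit in a few lines. The burst/deadline numbers are illustrative; the sketch assumes all processes are ready at time 0 and run non-preemptively to completion.

```python
# EDF sketch: serve ready processes in increasing deadline order and
# record each one's overshoot (completion time past its deadline, or 0).
def edf_overshoot(procs):
    t = 0
    overshoot = {}
    for name, burst, deadline in sorted(procs, key=lambda p: p[2]):
        t += burst
        overshoot[name] = max(0, t - deadline)
    return overshoot

procs = [("A", 2, 3), ("B", 3, 4), ("C", 2, 9)]
print(edf_overshoot(procs))  # {'A': 0, 'B': 1, 'C': 0}: B misses by 1
```

In this instance not all deadlines can be met (B overshoots by 1 even under EDF), illustrating the caveat above: EDF gives no guarantee on overshoot once the task set is infeasible, and a falsified earlier deadline would simply move a process ahead in the sort.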


Data affinity in multiprocessors
• A process can be scheduled on different processors during different quanta given to the process
  – Known as process migration
  – Each migration comes with the overhead of cache warm-up and possible remote memory accesses
  – A process can specify its affinity toward a processor through system calls
    • Prevents the scheduler from migrating the process to a different processor
  – A multiprocessor scheduler must be aware of data affinity and the overheads involved in migration

Load balancing in multiprocessors
• Distributed and hierarchical ready queue designs may lead to load imbalance
  – It is the scheduler's responsibility to keep all processors equally busy
  – Receiver-initiated and sender-initiated diffusion (RID and SID)
    • Also known as pull migration and push migration
    • RID is invoked on a processor whose ready queue size has dropped below a threshold; it migrates a number of processes from a processor whose ready queue size is above the threshold
    • SID is invoked on a processor whose ready queue size has gone above a threshold

Load balancing in multiprocessors
• The RID and SID algorithms can work together
  – In multiprocessor Linux, every 200 ms the SID algorithm is invoked on every processor, and the RID algorithm is invoked whenever the ready queue of a processor is empty
• Load balancing and data affinity have conflicting goals
  – Achieving both is usually challenging

Case study: UNIX
• Priority-based fixed quantum scheduling
  – The priority of a process may change during its lifetime
  – A higher priority value indicates lower priority
  – The timer interrupt handler keeps track of the CPU usage of the currently running process
  – Every one second, the priorities of all the user mode processes are updated by the timer interrupt handler
    • Set the CPU usage of each process to CPU usage/2
    • Set the priority value of each process to (base priority + CPU usage/2), where the base priority is the minimum user mode priority value; priority is thus inversely related to CPU usage
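The once-per-second update above is a simple decay rule. As a hedged sketch (the base-priority value of 50 and the usage figures are made up for illustration, not taken from any particular UNIX):

```python
BASE = 50  # assumed user-mode base priority value (illustrative)

# Per the update rule: first halve the accumulated CPU usage, then
# recompute the priority value as base + usage/2. A larger value means
# a lower scheduling priority.
def recompute(usage):
    usage = usage // 2                 # decay the accumulated CPU usage
    return usage, BASE + usage // 2    # (new usage, new priority value)

print(recompute(40))  # (20, 60): heavy CPU user drifts to low priority
print(recompute(4))   # (2, 51): light CPU user stays near the base
```

Because the usage halves every second, a process that stops consuming CPU sees its priority value decay geometrically back toward the base, which is how CPU hogs are penalized only while they remain hogs.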


Case study: UNIX
• Priority-based fixed quantum scheduling
  – A process on entering the kernel mode receives a priority value lower than the user mode base priority, indicating a higher priority than all user mode processes
  – A process about to enter the sleep state (in kernel mode) receives a predetermined priority value
    • The value depends on the reason for the sleep
    • I/O calls from lower levels of the kernel receive very high priority because these calls hold a lot of resources
    • Disk I/O always receives higher priority than a process waiting for some memory resources
    • This priority is used to schedule the process right after it returns to the ready queue at the end of the sleep (still in kernel mode)

Case study: UNIX
• Priority-based fixed quantum scheduling
  – A switch from kernel to user mode sets the process priority value to at least the user mode base priority
  – The priority of a process can be controlled by the owner of the process through the nice() call
    • Takes an integer argument, which is usually non-negative
    • The passed argument is added to the priority value of the process
    • The value of the argument is usually called the "nice value", indicating how nice the process is to the other processes
    • The nice value, the priority value, and the CPU usage value of a process are stored in the process table entry
    • Forked children inherit the nice value of the parent
  – Priority ties are broken by scheduling the process with the larger waiting time in the ready queue

Case study: Linux
• Similar to UNIX with a few differences
  – The time quantum is not fixed and is inversely related to the priority value (i.e., lower priority processes get a smaller time quantum)
  – The priority of a process is updated only after it gets at least one quantum to execute
  – Supports symmetric multiprocessing, and each processor executes the scheduling algorithm locally
    • Does not honour global priority
  – An interactive process receives a nice value of -5 if it is sleeping on I/O for a long time
    • When such a process returns to the ready queue, it gets a somewhat higher priority to improve the degree of interaction

Case study: Solaris
• Four classes of processes
  – Time sharing, interactive, system, and real-time
• A user process is classified as time sharing unless it is interactive (e.g., a GUI process) or real-time
  – Multi-level feedback queue scheduling with queues ordered by priority; a lower priority queue can be scheduled only if all higher priority queues are empty
  – The processes in the highest priority queue get the smallest time quantum (they can be switched quickly so that all high priority processes make some progress)
  – When a process returns from the sleep state to the ready state, its priority is boosted to somewhere between 50 and 59


Case study: Solaris
• The interactive class executes a similar scheduling algorithm as the time sharing class
  – The GUI processes are assigned the highest priority
• The system class includes all kernel processes
  – User processes executing in kernel mode are not in this class
  – The scheduler, system daemons, etc. are in this class
  – Each system process has a pre-determined fixed priority, based on which it gets scheduled
• Real-time processes are time-critical
  – Assigned the highest global priority

Case study: Solaris
• At the time of selecting a new process for scheduling
  – The scheduler translates all local priorities within each class into global priorities
    • Done through a pre-determined priority order among the classes
  – Selects the process with the highest global priority
    • If there are multiple such processes, all of them are chained up in a single special queue
    • This special queue is scheduled in a round-robin fashion with a fixed pre-determined time quantum assigned to each process
  – A process is pre-empted if it goes to the sleep state, its time quantum expires, or a new higher-priority process arrives

Case study: Windows XP
• Pre-emptive priority-based scheduling
  – 42 priority levels
  – Six priority classes: REALTIME, HIGH, ABOVE_NORMAL, NORMAL (the default), BELOW_NORMAL, and IDLE
  – Seven relative priority levels within each priority class: TIME_CRITICAL, HIGHEST, ABOVE_NORMAL, NORMAL (the default), BELOW_NORMAL, LOWEST, and IDLE
  – The global priority level of a process is determined based on its priority class and its priority level within the class
    • The Win32 kernel thread library allows the user to control these two parameters from the user program

Case study: Windows XP
• Pre-emptive priority-based scheduling
  – The priority has an inverse relationship with CPU usage, as in UNIX
  – A process returning to the ready queue from the sleep state undergoes a priority boost
    • The amount of boost depends on the reason for the sleep
    • A process sleeping on keyboard I/O always gets a bigger priority boost than one sleeping on disk I/O
  – Interactive processes use a different variable-quantum scheduling algorithm (next slide)


Case study: Windows XP
• Pre-emptive priority-based scheduling
  – The process currently selected on the screen is called a foreground process, and everything else is a background process
    • The foreground processes always get a bigger quantum than the background processes
    • When a background process moves to the foreground, its quantum is multiplied by a constant positive integer larger than one (usually three)

Scheduling in thread libraries
• Kernel-level threads are scheduled by the OS, and user-level threads are scheduled by the thread libraries
  – A group of user-level threads may be mapped to one or more kernel-level threads
• Process contention scope is used to carry out user-level thread scheduling within a process
  – The OS scheduler maps the user-level threads to the available kernel-level threads
  – When a kernel-level thread is scheduled, the user-level threads mapped to it share the quantum through user-level thread switching, or one of the mapped threads runs
  – User-level thread priorities are interpreted relative to the other threads of the same process only

Scheduling in thread libraries
• System contention scope
  – All threads are visible to the kernel
  – Thread priorities are interpreted relative to the global priority
  – Linux supports only system contention scope
• Contention scopes in the POSIX thread library
  – pthread_attr_getscope() and pthread_attr_setscope() for getting and setting the scheduler scope
  – Two defined scopes: PTHREAD_SCOPE_PROCESS and PTHREAD_SCOPE_SYSTEM

Scheduling in thread libraries
• The POSIX thread library allows one to specify the thread scheduling policy and thread priorities
  – Priorities are non-negative integers less than 100
  – The scheduling policy can be read and written using the pthread_attr_getschedpolicy() and pthread_attr_setschedpolicy() functions
  – The thread priorities can be read and written using the pthread_attr_getschedparam() and pthread_attr_setschedparam() functions


Evaluating scheduling algorithms
• Must evaluate a set of algorithms before picking one
  – Need to decide the optimization criterion first
  – Possible example: maximize CPU utilization under the constraint that the maximum response time is T
  – Possible example: maximize system throughput under the constraint that the turnaround time of each process is linearly proportional to its execution time
    • Short-burst processes wait less
  – Once the criterion is decided, a set of candidate algorithms must be evaluated to select the best one

Evaluating scheduling algorithms
• Analytical modeling
  – Design a mathematical model for the entire system that takes as input the description of the scheduling algorithm and the description of the processes, possibly in terms of the distribution of arrival times and the values of CPU and I/O burst lengths, or simply a sequence of CPU and I/O bursts
  – The model computes the target criterion
  – Such a model can be as simple as a single formula or as complicated as a queuing model
  – Usually difficult to capture the real-world behavior of the system, but useful for eliminating some obviously poor scheduling algorithms
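As a concrete instance of a single-formula model of the kind mentioned above, the M/M/1 queue gives closed-form waiting times from just the arrival and service rates. The rates below are illustrative; the model assumes Poisson arrivals at rate lam and exponential service at rate mu, with lam < mu for stability.

```python
# M/M/1 queue: mean time in system W = 1/(mu - lam);
# mean waiting time in the queue Wq = lam / (mu * (mu - lam)).
def mm1(lam, mu):
    assert lam < mu, "queue must be stable (arrival rate below service rate)"
    W = 1 / (mu - lam)
    Wq = lam / (mu * (mu - lam))
    return W, Wq

W, Wq = mm1(lam=4.0, mu=5.0)
print(W, Wq)  # 1.0 0.8  (W = Wq + mean service time 1/mu = 0.8 + 0.2)
```

A one-line sanity check like W = Wq + 1/mu is exactly the sort of cheap consistency test that makes analytical models useful for quickly ruling out poor designs, even when they are too idealized to predict a real system.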

Evaluating scheduling algorithms
• Simulation
  – Rigorous simulation must be carried out before selecting a few good candidates
  – A simulator is a software model of the system, but simpler than the full OS
  – The simulator can accept real-world user programs and simulate them to evaluate the target criterion
    • NachOS is a simplified OS simulator
• The final phase of the design involves incorporating the shortlisted candidate algorithms in the real OS and evaluating the target criterion
