
AOS 1-6 Compilation


Applied Operating System

Study Guide

Module 1
INTRODUCTION TO OPERATING SYSTEM

SUBTOPIC 1: Operating System Overview

What is an Operating System?


An Operating System (OS) is a program or system software that acts as an interface between the
user and the computer hardware and controls the execution of all kinds of programs.

Goals of an Operating System


 Execute user programs and make solving user problems easier.
 Make the computer system convenient to use.
 Use the computer hardware in an efficient manner.

Components of a Computer System


 Computer hardware – CPU, memory and I/O devices, provides the basic computing
resources.
 Application programs – are used to solve the computing problems of the users such as
word processors, games and business programs.
 Users – who utilize a computer or network service trying to solve different problems.
 Operating System – controls and coordinates the use of the hardware among the various
application programs for the various users.

What is a Kernel?
Kernel is the central part of an OS which manages system resources and is always resident in
memory. It also acts like a bridge between application and hardware of the computer. It is also
the first program that loads after the bootloader.

Bootloader is a program that loads and starts the boot time tasks and processes of an OS. It
also places the OS of a computer into memory.

Common Services Offered By Almost All Operating Systems:


 User Interface
 Program Execution
 File system manipulation
 Input / Output Operations
 Communication
 Resource Allocation
 Error Detection
 Accounting
 Security and protection

1. User Interface (UI) refers to the part of an OS, or device that allows a user to enter and
receive information.
Types of UI:
 Command line interface
 Batch based interface
 Graphical User Interface

2. Program Execution. The OS must have the capability to load a program into memory
and execute that program.
3. File system manipulation. Programs need to read and write files and directories. The
file-handling portion of the OS also allows users to create and delete files by a specific
name and extension, search for a given file, and/or list file information.
4. Input / Output Operations. A program which is currently executing may require I/O, which
may involve file or other I/O device. The OS is responsible for reading and/or writing data
from I/O devices such as disks, tapes, printers, keyboards, etc.
5. Communication. Processes may need to exchange information with other processes.
Processes executing on the same computer system or on different computer systems can
communicate using operating system support.
6. Resource Allocation. The OS manages the different computer resources such as CPU
time, memory space, file storage space, I/O devices, etc. and allocates them to different
application programs and users.
7. Error Detection. The operating system should be able to detect errors within the computer
system (CPU, memory, I/O, or user programs) and take the appropriate action.
8. Job Accounting. The OS keeps track of the time and resources used by various tasks and
users; this information can be used to track resource usage for a particular user or group of users.
9. Security and Protection. Protection is any mechanism for controlling access of
processes or users to resources defined by the OS. Security is a defense of the system
against internal and external attacks (denial-of-service, worms, viruses, identity theft, theft
of service)

Types of Operating System


 Batch operating system
 Time-sharing operating systems
 Distributed operating system
 Network operating system
 Real-time operating system
 Handheld operating system

Batch Operating System (BOS)


The user of a BOS never directly interacts with the computer.
Every user prepares his or her job on an offline device, such as punch cards, and submits it to
the computer operator.
 Examples: Payroll Systems, Bank Statements, etc.

Time-sharing Operating System
Time-sharing or multitasking is a logical extension of multiprogramming in which the CPU
switches among jobs so frequently that users can interact with each job while it is running,
creating interactive computing.
• Examples: Unix, Linux, Multics and Windows

Distributed Operating System


Distributed systems use multiple central processors to serve multiple real-time applications and
multiple users. Data processing jobs are distributed among the processors accordingly.
• Examples: Telecom Network, WWW, Cloud Computing, etc.

Network Operating System (NOS)


A NOS runs on a server and provides the server the capability to manage data, users, groups,
security, applications, and other networking functions.
 Examples: Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX,
Linux, Mac OS X Server, Novell NetWare, and BSD/OS (Berkeley Software Design)

Real-time Operating System (RTOS)


RTOS is an operating system intended to serve real-time systems/applications that process data
as it comes in, mostly without buffer delay.
• Examples: LynxOS, OSE, QNX, RTLinux, VxWorks, Windows CE

Handheld Operating System


It is also known as a Mobile OS, built exclusively for mobile devices such as smartphones,
personal digital assistants (PDAs), tablets, wearable devices, and other embedded systems.
• Examples: Android, Symbian, iOS, BlackBerry OS and Windows Mobile.

SUBTOPIC 2: Computer System Organization, Architectures and Computing Environment

A computer system is made up of various components. The components can be hardware or


software. Because these systems are so massively complex, the components are organized in
layers.

The important points about computer system organization are:
 One or more CPUs, device controllers connect through common bus providing access to
shared memory.
 The I/O devices and the CPU both execute concurrently.

 Some of the processes are scheduled for the CPU and at the same time, some are
undergoing input/output operations.
 There are multiple device controllers, each in charge of a particular device such as
keyboard, mouse, printer etc.
 There is buffer available for each of the devices. The input and output data can be stored
in these buffers. Buffer is a region of memory used to temporarily hold data while it is being
moved from one place to another.
 The data is moved from memory to the respective device buffers by the CPU for I/O
operations and then this data is moved back from the buffers to memory.
 The device controllers use an interrupt to inform the CPU that I/O operation is completed.

What is an Interrupt?
 An operating system is interrupt driven.
 Interrupt is a signal emitted by hardware or software when a process or an event needs
immediate attention.
 It temporarily alerts the processor to a high-priority process that requires interruption of
the current working process; afterward, the processor returns to its previous task.

Types of Interrupts:
 Hardware interrupt
 Software interrupt

Hardware interrupt is a signal created and sent to the CPU that is caused by some action taken
by a hardware device.
Example: When a key is pressed or when the mouse is moved.

A software interrupt arises from illegal or erroneous use of an instruction or data. It often occurs
when an application terminates or when it requests some service from the operating
system.
Example: stack overflow, division by zero, invalid opcode, etc. These are also called traps.
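
To make the trap concrete, here is a minimal sketch in C, assuming a POSIX system where the kernel delivers a divide-by-zero trap to the offending process as the SIGFPE signal:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_trap(int sig) {
        (void)sig;
        /* the kernel caught the hardware trap and delivered it as SIGFPE */
        write(2, "trap: SIGFPE caught\n", 20);
        _exit(1);
    }

    int main(void) {
        signal(SIGFPE, on_trap);     /* install a handler for the trap */
        volatile int zero = 0;
        printf("%d\n", 1 / zero);    /* divide by zero: the hardware raises
                                        a trap, the kernel signals us */
        return 0;                    /* never reached */
    }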

Interrupt Handling
The operating system preserves the state of the CPU by storing the registers and the program
counter. It then determines which type of interrupt has occurred:
 Polling – the operating system queries each device in turn, asking whether it has a request
 Vectored interrupt system – the requesting device sends the interrupt directly to the operating system.
Separate segments of code determine what action should be taken for each type of interrupt.

Operating System Operations


 Dual-mode operation allows OS to protect itself and other system components
 User mode and kernel mode
 Mode bit provided by hardware
 Provides the ability to distinguish when the system is running user code (mode bit 1) or kernel code (mode bit 0)
 Some instructions designated as privileged, only executable in kernel mode
 System call changes mode to kernel, return from call resets it to user

A system call is a way for programs to interact with the OS. A computer program makes a system
call when it requests a service from the OS's kernel.
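
As an illustration, here is a minimal sketch in C (POSIX assumed): the write() wrapper issues a system call, so the CPU switches to kernel mode for the duration of the kernel's work and returns to user mode afterward:

    #include <unistd.h>

    int main(void) {
        const char msg[] = "hello from user mode\n";
        /* write() traps into the kernel; file descriptor 1 is standard output */
        write(1, msg, sizeof msg - 1);
        return 0;    /* back in user mode after the call returns */
    }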

Single-Processor System
There is one main CPU capable of executing a general-purpose instruction set, including
instructions from user processes.

Multiprocessor System
Also known as parallel-system or multicore.
First appeared in servers and now in smartphones and tablet computers.

Computer System Architecture


 Most systems use a single general-purpose processor (PDAs through mainframes)
 Most systems have special-purpose processors as well
 Multiprocessors systems growing in use and importance
 Also known as parallel systems or tightly-coupled systems
Advantages:
1. Increased throughput. By increasing the number of processors, we expect to get more
work done in less time.
2. Economy of scale. Multiprocessor systems can cost less than equivalent multiple
single-processor systems because they can share peripherals, mass storage, and power supplies.
3. Increased reliability. Functions can be distributed properly among several processors; if
one processor fails, another processor can pick up the task.
Two types of Multiprocessing:
 Asymmetric Multiprocessing
 Symmetric Multiprocessing

1. Asymmetric multiprocessing. Each processor is assigned a specific task. A boss


processor controls the system and the other processors either look to the boss for
instructions or have predefined tasks. Boss-worker relationship.
2. Symmetric multiprocessing (SMP). The most commonly used. Each processor
performs all tasks within the operating system. All processors are peers; there is no
boss-worker relationship.
The difference between symmetric and asymmetric may result from either hardware or
software.

A recent trend in CPU design is to include multiple computing cores on a single chip. Such
multiprocessor systems are termed multicore. They can be more efficient than multiple chips with
single core.
In a dual-core design, two cores reside on the same chip, and each core has its own register set
as well as its own local cache.

Clustered System
 Like multiprocessor systems, but multiple systems working together
 Usually sharing storage via a storage-area network (SAN)
 Provides a high-availability service which survives failures
 Asymmetric clustering has one machine in hot-standby mode

 Symmetric clustering has multiple nodes running applications, monitoring each other
 Some clusters are for high-performance computing (HPC).

Computing Environment
 Office computing environment
o PCs connected to a network, terminals attached to mainframe or minicomputers
providing batch and timesharing
o Now portals allowing networked and remote systems access to same resources

 Home computing environment


o Used to be single system, then modems
o Now firewalled, networked

 Mobile computing
o Refers to computing on handheld smartphones and tablet computers.

 Distributed system
o It is a collection of physically separate, possibly heterogeneous computer systems
that are networked to provide users with access to the various resources that the
system maintains.
o Network operating system is an operating system that provides services across
the network.

Types of Distributed System:


 Client-Server Computing
o Dumb terminals succeeded by smart PCs
o Many systems now servers, responding to requests generated by clients
 Compute-server provides an interface to client to request services (i.e.
database)
 File-server provides interface for clients to store and retrieve files
 P2P does not distinguish clients and servers
o Instead all nodes are considered peers
o May each act as client, server or both
o Node must join P2P network
 Registers its service with central lookup service on network, or
 Broadcast request for service and respond to requests for service via
discovery protocol
o Examples: Napster and BitTorrent

 Virtualization
o It is a technology that allows operating systems to run as applications within another
operating system.
o Virtualization plays a major role in cloud computing, as it provides the virtual storage
and computing services that cloud clients consume.

o It is one member of a class of software that also includes emulation. Emulation is
used when the source CPU type is different from the target CPU type.
Example: virtual machines, Oracle VirtualBox

Cloud Computing
 It is a type of computing that delivers computing, storage and even applications as a
service across a network.
 It is a logical extension of virtualization.

Types of Cloud
 Public cloud – cloud available via the Internet
 Private cloud – cloud run by a company for that company’s own use
 Hybrid cloud – cloud that includes both public and private

Cloud Computing Service Models


 Software as a service(SaaS) – one or more applications available via the Internet
 Platform as a service(PaaS) – software stack ready for application use via the Internet
 Infrastructure as a service(IaaS) – servers or storage available over the Internet

Open-Source Operating System


 Open Source operating systems are released under a license where the copyright
holder allows others to study, change as well as distribute the software to other people.
 Counter to the copy protection and Digital Rights Management (DRM) movement
 Started by the Free Software Foundation (FSF), which has the “copyleft” GNU General
Public License (GPL)
 Examples: GNU(GNU’s Not Unix)/Linux, BSD UNIX (including core of Mac OS X), and
Sun Solaris

Applied Operating System
Study Guide

Module 2

SUBTOPIC 1: PROCESS MANAGEMENT

OBJECTIVES

Upon completion of this module, the student will be able to:

 Explain the process concept, architecture and types of computer processes;


 Describe different states of a process and the process control block;
 Discuss concurrent processes;
 Explain the concept of scheduling and the CPU scheduler;
 Solve CPU scheduling problems using different CPU scheduling algorithms/techniques;

PROCESS CONCEPT

• A process is a program in execution. A program by itself is not a process.


• A program is a passive entity, such as the contents of a file stored on disk while a process
is an active entity.
• A computer system consists of a collection of processes:
• Operating system processes execute system code, and
• User processes execute user code.
• Although several processes may be associated with the same program, they are
nevertheless considered separate execution sequences.
• All processes can potentially execute concurrently with the CPU (or CPUs) multiplexing
among them (time sharing).
• A process is actually a cycle of CPU execution (CPU burst) and I/O wait (I/O burst).
Processes alternate back and forth between these two states.
• Process execution begins with a CPU burst that is followed by an I/O burst, which is
followed by another CPU burst, then another I/O burst, and so on. Eventually, the last
CPU burst will end with a system request to terminate execution.

PROCESS ARCHITECTURE

 To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned in the
program.

 When a program is loaded into the memory and it becomes a process, it can be divided
into four sections ─ stack, heap, text and data. The figure shows a simplified layout of a
process inside main memory

Stack – The process stack contains temporary data such as method/function parameters,
return addresses, and local variables.

Heap – This is memory that is dynamically allocated to a process during its run time.

Text – This includes the current activity, represented by the value of the program counter
and the contents of the processor's registers.

Data – This section contains the global and static variables.

PROCESS STATE

As a process executes, it changes state. The current activity of a process partly defines its state.
Each sequential process may be in one of following states:

1. New. The process is being created.


2. Running. The CPU is executing its instructions.
3. Waiting. The process is waiting for some event to occur (such as an I/O completion).
4. Ready. The process is waiting for the OS to assign a processor to it.
5. Terminated. The process has finished execution.

PROCESS STATE DIAGRAM

Figure: Process state diagram. A process moves from new to ready when it is admitted; from
ready to running on a scheduler dispatch; from running back to ready on an interrupt; from
running to waiting on an I/O or event wait; from waiting back to ready on I/O or event
completion; and from running to terminated on exit.

PROCESS CONTROL BLOCK

Each process is represented in the operating system by a process control block (PCB) – also
called a task control block. A PCB is a data block or record containing many pieces of
information associated with a specific process, including:

1. Process state. The state may be new, ready, running, waiting, or halted.
2. Program Counter. The program counter indicates the address of the next instruction to
be executed for this process.
3. CPU Registers. These include accumulators, index registers, stack pointers, and
general-purpose registers, plus any condition-code information. Along with the program
counter, this information must be saved when an interrupt occurs, to allow the process to
be continued correctly afterward.

4. CPU Scheduling Information. This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
5. Memory Management Information. This information includes limit registers or page
tables.
6. Accounting Information. This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers, and so on.
7. I/O Status Information. This information includes outstanding I/O requests, I/O devices
(such as disks) allocated to this process, a list of open files, and so on.

• The PCB simply serves as the repository for any information that may vary from process to
process.

Figure: A process control block, containing a pointer, the process state, the process number,
the program counter, the registers, the memory limits, the list of open files, and so on.

PROCESS CONCEPT

Example of the CPU being switched from one process to another, also known as a context
switch:

Figure: Context switch between process P0 and process P1. While P0 is executing, an
interrupt or system call occurs; the operating system saves P0's state into PCB0 and reloads
P1's state from PCB1, after which P1 executes while P0 is idle. A later interrupt or system call
causes the OS to save P1's state into PCB1 and reload P0's state from PCB0, and P0 resumes
executing.

CONCURRENT PROCESSES

• The processes in the system can execute concurrently; that is, many processes may be
multitasked on a CPU.
• A process may create several new processes, via a create-process system call, during
the course of execution. Each of these new processes may in turn create other processes.
• The creating process is the parent process whereas the new processes are the children
of that process.
• When a process creates a sub-process, the sub-process may be able to obtain its
resources directly from the OS or it may use a subset of the resources of the parent
process.

Restricting a child process to a subset of the parent’s resources prevents any process from
overloading the system by creating too many processes.

When a process creates a new process, two common implementations exist in terms of execution:

1. The parent continues to execute concurrently with its children.


2. The parent waits until all its children have terminated.
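
A minimal sketch in C (assuming a POSIX system) of both options: fork() is the create-process system call, and waitpid() implements option 2; omitting the wait gives option 1, with parent and child running concurrently.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                  /* create-process system call */
        if (pid == 0) {                      /* child process */
            printf("child %d running\n", (int)getpid());
            _exit(0);                        /* exit system call */
        } else if (pid > 0) {                /* parent process */
            waitpid(pid, NULL, 0);           /* option 2: wait for the child */
            printf("parent: child %d done\n", (int)pid);
        }
        return 0;
    }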

• A process terminates when it finishes its last statement and asks the operating system to
delete it using the exit system call.

• A parent may terminate the execution of one of its children for a variety of reasons, such as:
1. The child has exceeded its usage of some of the resources it has been allocated.
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the OS does not allow a child to continue if its parent
terminates. In such systems, if a process terminates, then all its children must also be
terminated by the operating system. This phenomenon is referred to as cascading
termination.

The concurrent processes executing in the OS may be either independent processes or
cooperating processes.
• A process is independent if it cannot affect or be affected by the other processes. Clearly,
any process that does not share any data (temporary or persistent) with any other process is
independent. Such a process has the following characteristics:

1. Its execution is deterministic; that is, the result of the execution depends solely on the
input state.
2. Its execution is reproducible; that is, the result of the execution will always be the same
for the same input.
3. Its execution can be stopped and restarted without causing ill effects.

• A process is cooperating if it can affect or be affected by the other processes. Clearly, any
process that shares data with other processes is a cooperating process. Such a process has
the following characteristics:

1. The results of its execution cannot be predicted in advance, since it depends on relative
execution sequence.
2. The result of its execution is nondeterministic since it will not always be the same for the
same input.

Concurrent execution of cooperating processes requires mechanisms that allow processes to
communicate with one another and to synchronize their actions.

SCHEDULING CONCEPTS

• The objective of multiprogramming is to have some process running at all times, to maximize
CPU utilization.
• Multiprogramming also increases throughput, which is the amount of work the system
accomplishes in a given time interval (for example, 17 processes per minute).

Example:
Given two processes, P0 and P1, each alternating CPU execution with idle periods spent
waiting for input.

If the system runs the two processes sequentially, then CPU utilization is only 50%.

• The idea of multiprogramming is if one process is in the waiting state, then another process
which is in the ready state goes to the running state.

Example: Applying multiprogramming to the two processes, P0 and P1: whenever one process
is idle waiting for input, the CPU switches to the other.

CPU utilization then increases to 100%.


• As processes enter the system, they are put into a job queue. This queue consists of all
processes in the system.
• The processes that are residing in main memory and are ready and waiting to execute are
kept on another queue which is the ready queue.
• A process migrates between the various scheduling queues throughout its lifetime. The OS
must select processes from these queues in some fashion.
• The selection process is the responsibility of the appropriate scheduler.

TYPES OF SCHEDULER

• Long-term scheduler (or Job scheduler) selects processes from the secondary
storage and loads them into memory for execution.
• The long-term scheduler executes much less frequently.

• There may be minutes between the creation of new processes in the system.
• The long-term scheduler controls the degree of multiprogramming – the number of
processes in memory.
• Because of the longer interval between executions, the long-term scheduler can afford to
take more time to select a process for execution.
• Short-term scheduler (or CPU scheduler) selects process from among the processes
that are ready to execute, and allocates the CPU to one of them.
• The short-term scheduler must select a new process for the CPU frequently.
• A process may execute for only a few milliseconds before waiting for an I/O request.
• Because of the brief time between executions, the short-term scheduler must be very
fast.
• Medium-term scheduler removes (swaps out) certain processes from memory to
lessen the degree of multiprogramming (particularly when thrashing occurs).
• At some later time, the process can be reintroduced into memory and its execution can
be continued where it left off.
• This scheme is called swapping.

SCHEDULING CONCEPTS

• Switching the CPU to another process requires some time to save the state of the old
process and load the saved state of the new process. This task is known as a context
switch.
• Context-switch time is pure overhead, because the system does no useful work while
switching and should therefore be minimized.
• Whenever the CPU becomes idle, the OS (particularly the CPU scheduler) must select
one of the processes in the ready queue for execution.

CPU SCHEDULER

CPU scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for example,
I/O request, invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example,
completion of I/O).
4. When a process terminates.

• For circumstances 1 and 4, there is no choice in terms of scheduling. A new process (if
one exists in the ready queue) must be selected for execution. There is a choice,
however, for circumstances 2 and 3.

• When scheduling takes place only under circumstances 1 and 4, the scheduling scheme
is non-preemptive; otherwise, the scheduling scheme is preemptive.

CPU SCHEDULER

• In non-preemptive scheduling, once the CPU has been allocated to a process, the
process keeps the CPU until it releases it, either by terminating or by switching
states. No process is interrupted until it is completed; after that, the processor
switches to another process.

• Preemptive scheduling works by dividing the CPU into time slots given to processes. The
time slot given might be enough to complete the whole process, or it might not. When the
remaining burst time of a process is greater than the time slot, the process is placed back
into the ready queue and executes at its next chance. This scheduling is used when a
process switches to the ready state.

CPU SCHEDULING ALGORITHMS

• Different CPU-scheduling algorithms have different properties and may favour one class
of processes over another.
• Many criteria have been suggested for comparing CPU-scheduling algorithms.
• The characteristics used for comparison can make a substantial difference in the
determination of the best algorithm. The criteria should include: CPU Utilization,
Throughput, Turnaround Time, Waiting Time, and Response Time

1. CPU Utilization measures how busy the CPU is. CPU utilization may range from 0 to 100
percent. In a real system, it should range from 40% (for a lightly loaded system) to 90% (for
a heavily loaded system).

2. Throughput is the amount of work completed in a unit of time; in other words, the number
of processes/jobs completed per unit of time. The scheduling algorithm must try to
maximize the number of jobs processed per time unit.

3. Turnaround Time measures how long it takes to execute a process. Turnaround time is
the interval from the time of submission to the time of completion. It is the sum of the
periods spent waiting to get into memory, waiting in the ready queue, executing in the CPU,
and doing I/O.

4. Waiting Time is the time a job waits for resource allocation when several jobs are
competing in multiprogramming system. Waiting time is the total amount of time a process
spends waiting in the ready queue.

5. Response Time is the time from the submission of a request until the system makes the
first response. It is the amount of time it takes to start responding but not the time that it
takes to output that response.

A good CPU scheduling algorithm maximizes CPU utilization and throughput and minimizes
turnaround time, waiting time and response time.

• In most cases, the average measure is optimized. However, in some cases, it is desired
to optimize the minimum or maximum values, rather than the average.
• For example, to guarantee that all users get good service, it may be better to minimize
the maximum response time.
• For interactive systems (time-sharing systems), some analysts suggest that
minimizing the variance in the response time is more important than minimizing the
average response time.
• A system with a reasonable and predictable response time may be considered more
desirable than a system that is faster on average but highly variable.

Non-preemptive:
• First-Come First-Served(FCFS)
• Shortest Job First (SJF)
• Priority Scheduling (Non-preemptive)

Preemptive:
• Shortest Remaining Time First (SRTF)
• Priority Scheduling (Preemptive)
• Round-robin (RR)

SUBTOPIC 2: CPU SCHEDULING TECHNIQUES – NON PREEMPTIVE

FIRST-COME FIRST-SERVED (FCFS)

• FCFS is the simplest CPU-scheduling algorithm.


• The process that requests the CPU first gets the CPU first.
• The FCFS algorithm is non-preemptive.

Example 1:

• Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds (ms).
• Illustrate the Gantt chart and compute for the average waiting time and average
turnaround time

Given:

Process   Arrival Time   Burst Time   Waiting Time     Turnaround Time
P1        0              5            0-0 = 0          5-0 = 5
P2        0              3            5-0 = 5          8-0 = 8
P3        0              8            8-0 = 8          16-0 = 16
P4        0              6            16-0 = 16        22-0 = 22
Average                               29/4 = 7.25 ms   51/4 = 12.75 ms

Gantt Chart: P1 (0-5) | P2 (5-8) | P3 (8-16) | P4 (16-22)

Formulas:

• Waiting time = Start time – Arrival time
• Turnaround time = Completion time – Arrival time
• Average Waiting Time = Sum of Waiting times / No. of processes
• Average Turnaround Time = Sum of Turnaround times / No. of processes
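
These formulas are easy to mechanize. Here is a minimal sketch in C that replays Example 1 above (all jobs arrive at time 0; burst times 5, 3, 8, 6; served in queue order, assuming the CPU never idles):

    #include <stdio.h>

    int main(void) {
        int arrival[] = {0, 0, 0, 0};       /* Example 1 data */
        int burst[]   = {5, 3, 8, 6};
        int n = 4, clock = 0;
        double wait_sum = 0, tat_sum = 0;

        for (int i = 0; i < n; i++) {       /* FCFS: run in queue order */
            int start  = clock;             /* next job starts when the CPU frees up */
            int finish = start + burst[i];
            wait_sum += start  - arrival[i];   /* waiting = start - arrival */
            tat_sum  += finish - arrival[i];   /* turnaround = completion - arrival */
            clock = finish;
        }
        printf("avg waiting = %.2f ms, avg turnaround = %.2f ms\n",
               wait_sum / n, tat_sum / n);     /* prints 7.25 and 12.75 */
        return 0;
    }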

Example 2:

Process   Arrival Time   Burst Time   Waiting Time     Turnaround Time
P1        0              5            0-0 = 0          5-0 = 5
P2        1              3            5-1 = 4          8-1 = 7
P3        2              8            8-2 = 6          16-2 = 14
P4        3              6            16-3 = 13        22-3 = 19
Average                               23/4 = 5.75 ms   45/4 = 11.25 ms

Gantt Chart: P1 (0-5) | P2 (5-8) | P3 (8-16) | P4 (16-22)

SHORTEST JOB FIRST (SJF)

• The SJF algorithm associates with each process the length of its next CPU burst.

• When the CPU is available, it is assigned to the process that has the smallest next
CPU burst.

• If two processes have the same length next CPU burst, FCFS scheduling is used to
break the tie.

• SJF algorithm is non-preemptive.

Example 1:

Process   Arrival Time   Burst Time   Waiting Time     Turnaround Time
P1        0              5            3-0 = 3          8-0 = 8
P2        0              3            0-0 = 0          3-0 = 3
P3        0              8            14-0 = 14        22-0 = 22
P4        0              6            8-0 = 8          14-0 = 14
Average                               25/4 = 6.25 ms   47/4 = 11.75 ms

Gantt Chart: P2 (0-3) | P1 (3-8) | P4 (8-14) | P3 (14-22)

Example 2:

Process   Arrival Time   Burst Time   Waiting Time     Turnaround Time
P1        0              5            0-0 = 0          5-0 = 5
P2        1              3            5-1 = 4          8-1 = 7
P3        2              8            14-2 = 12        22-2 = 20
P4        3              6            8-3 = 5          14-3 = 11
Average                               21/4 = 5.25 ms   43/4 = 10.75 ms

Gantt Chart: P1 (0-5) | P2 (5-8) | P4 (8-14) | P3 (14-22)

PRIORITY SCHEDULING (NP)

• Priority scheduling (non-preemptive) algorithm is one of the most common scheduling
algorithms in batch systems.

• Each process is assigned a priority.

• Process with highest priority is to be executed first and so on.

• Processes with same priority are executed on FCFS basis.

• Priority can be decided based on memory requirements, time requirements or any other
resource requirement.

Example 1:

Consider the lowest priority value gets the highest priority. In this case, process with priority
value (1) is the highest priority.

Process   Arrival Time   Burst Time   Priority   Waiting Time    Turnaround Time
P1        0              5            1          0-0 = 0         5-0 = 5
P2        0              3            2          13-0 = 13       16-0 = 16
P3        0              8            1          5-0 = 5         13-0 = 13
P4        0              6            3          16-0 = 16       22-0 = 22
Average                                          34/4 = 8.5 ms   56/4 = 14 ms

Gantt Chart: P1 (0-5) | P3 (5-13) | P2 (13-16) | P4 (16-22)

SUBTOPIC 3: CPU SCHEDULING TECHNIQUES–PREEMPTIVE

SHORTEST JOB FIRST (SJF)

• The SJF algorithm may be either preemptive or non-preemptive.

• A new process arriving may have a shorter next CPU burst than what is left of the currently
executing process.

• A preemptive SJF algorithm will preempt the currently executing process.

• Preemptive SJF scheduling is sometimes called Shortest Remaining Time First
scheduling.

SHORTEST REMAINING TIME FIRST (SRTF)

Example:

Process   Arrival Time   Burst Time   Waiting Time        Turnaround Time
P1        0              5            (0-0) + (4-1) = 3   8-0 = 8
P2        1              3            1-1 = 0             4-1 = 3
P3        2              8            14-2 = 12           22-2 = 20
P4        3              6            8-3 = 5             14-3 = 11
Average                               20/4 = 5 ms         42/4 = 10.5 ms

Gantt Chart: P1 (0-1) | P2 (1-4) | P1 (4-8) | P4 (8-14) | P3 (14-22)

PRIORITY SCHEDULING (P)

• Priority scheduling can either be preemptive or non-preemptive.


• When a process arrives at the ready queue, its priority is compared with the priority of the
currently running process.
• A preemptive priority scheduling algorithm will preempt the CPU if the priority of the
newly arrived process is higher than that of the currently running process.

Example 3:

Arrival Burst Waiting Turnaround


Process Priority
Time time Time Time
P1 0 5 1 0-0 = 0 5-0 = 5
P2 1 3 2 13-1 = 12 16-1 = 15
P3 2 8 1 5-2 = 3 13-2 = 11
P4 3 6 3 16-3 = 13 22-3 = 19
28/4 50/4
Average
7 ms 12.5 ms

Gantt Chart:

ROUND-ROBIN (RR) SCHEDULING

• This algorithm is specifically for time-sharing systems.

• A small unit of time, called a time quantum or time slice, is defined.

• The ready queue is treated as a circular queue.

• The CPU scheduler goes around the ready queue, allocating the CPU to each process
for a time interval of up to 1 time quantum.

• The RR algorithm is therefore preemptive.

Example:
Time Quantum = 3ms

Process   Arrival Time   Burst Time   Waiting Time                    Turnaround Time
P1        0              5            (0-0) + (12-3) = 9              14-0 = 14
P2        1              3            3-1 = 2                         6-1 = 5
P3        2              8            (6-2) + (14-9) + (20-17) = 12   22-2 = 20
P4        3              6            (9-3) + (17-12) = 11            20-3 = 17
Average                               34/4 = 8.5 ms                   56/4 = 14 ms

Gantt Chart: P1 (0-3) | P2 (3-6) | P3 (6-9) | P4 (9-12) | P1 (12-14) | P3 (14-17) | P4 (17-20) | P3 (20-22)

• The performance of the RR algorithm depends heavily on the size of the time
quantum.

• If the time quantum is too large (infinite), the RR policy degenerates into the FCFS
policy.

• If the time quantum is too small, then the effect of the context-switch time becomes a
significant overhead.

• As a general rule, 80 percent of the CPU burst should be shorter than the time
quantum.
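
A minimal sketch in C of a round-robin simulation over the example above (quantum = 3 ms; it assumes, as the Gantt chart does, that newly arrived processes enter the ready queue ahead of the preempted one):

    #include <stdio.h>

    int main(void) {
        int arrival[] = {0, 1, 2, 3};                 /* example data above */
        int burst[]   = {5, 3, 8, 6};
        int remain[4], finish[4];
        int n = 4, q = 3, clock = 0, done = 0;
        int queue[64], head = 0, tail = 0, next = 0;

        for (int i = 0; i < n; i++) remain[i] = burst[i];
        queue[tail++] = next++;                       /* P1 arrives at time 0 */
        while (done < n) {
            int p = queue[head++];
            int run = remain[p] < q ? remain[p] : q;  /* run up to one quantum */
            clock += run;
            remain[p] -= run;
            /* processes that arrived during this slice join the queue first */
            while (next < n && arrival[next] <= clock) queue[tail++] = next++;
            if (remain[p] > 0) queue[tail++] = p;     /* preempted: requeue */
            else { finish[p] = clock; done++; }
        }
        double w = 0, t = 0;
        for (int i = 0; i < n; i++) {
            t += finish[i] - arrival[i];              /* turnaround time */
            w += finish[i] - arrival[i] - burst[i];   /* waiting time */
        }
        printf("avg waiting = %.2f ms, avg turnaround = %.2f ms\n", w / n, t / n);
        return 0;
    }

Running this reproduces the table: average waiting 8.50 ms and average turnaround 14.00 ms.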

REFERENCES

• Silberschatz, A. (2018). Operating System Concepts. Wiley.


• Tomsho, G. (2019). Guide to Operating Systems, Cengage Learning.

Applied Operating System
Study Guide

Module 3
MEMORY MANAGEMENT

SUBTOPIC 1: EARLY SYSTEM MEMORY MANAGEMENT TECHNIQUES

What is Memory management?


Memory management - is the functionality of an operating system which handles or manages
primary memory and moves processes back and forth between main memory and disk during
execution.
• It keeps track of each and every memory location, regardless of whether it is allocated to
some process or free.
• It checks how much memory is to be allocated to processes.
• It decides which process will get memory at what time.

It tracks whenever some memory gets freed or unallocated and updates the status
accordingly.

• Address binding is the process of mapping from one address space to another address
space.
• Usually, a program resides on a disk as a binary executable file. The program must then
be brought into main memory before the CPU can execute it.
• Depending on the memory management scheme, the process may be moved between
disk and memory during its execution.
• The collection of processes on the disk that are waiting to be brought into memory for
execution forms the job queue or input queue.

• Compile time address binding – If it is known at compile time where the process
will reside in memory, then absolute addresses are generated.
• For example, a physical address is embedded in the executable of the program during
compilation. Loading the executable as a process into memory is very fast, but if the
generated address space is already occupied by another process, the program crashes,
and it becomes necessary to recompile the program to change its address space.

• Load time – If it is not known at compile time where the process will reside, then
relocatable addresses are generated. The loader translates the relocatable addresses to
absolute addresses by adding the base address of the process in main memory to all
logical addresses. If the base address of the process changes, the process must be
reloaded.

• Execution time – The instructions are in memory and are being processed by the CPU.
Additional memory may be allocated and/or deallocated at this time. This is used if the
process can be moved from one memory area to another during execution (with dynamic
linking done during load or run time), e.g., compaction.

• To obtain better memory-space utilization, dynamic loading is often used.


• With dynamic loading, a routine is not loaded until it is called. All routines are kept on
disk in a relocatable load format.
• Whenever a routine is called, the relocatable linking loader is called to load the
desired routine into memory and to update the program’s address tables to reflect this
change.
• Linking is a method that helps OS to collect and merge various modules of code and
data into a single executable file.
• The file can be loaded into memory and executed.
• OS can link system-level libraries into a program that combines the libraries at load time.
• In Dynamic linking method, libraries are linked at execution time, so program code
size can remain small.
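
As a concrete illustration of run-time loading and linking, here is a minimal sketch in C using the POSIX dynamic loader (dlopen/dlsym); the library name "libm.so.6" is a Linux-specific assumption, and on Linux the program is typically compiled with -ldl:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* load the math library at run time instead of at link time */
        void *lib = dlopen("libm.so.6", RTLD_LAZY);
        if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* look up the routine only when it is actually needed */
        double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
        if (cosine) printf("cos(0) = %f\n", cosine(0.0));

        dlclose(lib);    /* the library may now be unloaded */
        return 0;
    }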

MULTI-STEP PROCESSING OF A USER PROGRAM

Figure: Multi-step processing of a user program. At compile time, the compiler or assembler
translates the source program into an object module. At load time, the linkage editor combines
it with other object modules and the system library into a load module, which the loader brings
into memory. At execution time (run time), dynamically loaded system libraries are linked in
(dynamic linking) to form the in-memory binary image.

LOGICAL AND PHYSICAL ADDRESS SPACE
• An address generated by the CPU is commonly referred to as a logical address.
• An address seen by the memory unit is commonly referred to as a physical address.
• The compile-time and load-time address binding schemes result in an environment
where the logical and physical addresses are the same.

• However, the execution-time address-binding scheme results in an environment


where the logical and physical addresses differ.
• The run-time mapping from logical to physical addresses is done by the memory
management unit (MMU), which is a hardware device.

Figure: Dynamic relocation using a relocation register. The CPU issues logical address 346;
the MMU adds the relocation register value 14000 to produce physical address 14346, which is
sent to memory.
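
A minimal sketch in C of this run-time mapping, using the figure's values:

    #include <stdio.h>

    /* the MMU adds the relocation (base) register to every logical address */
    unsigned mmu_translate(unsigned logical, unsigned relocation) {
        return relocation + logical;    /* physical = base + logical */
    }

    int main(void) {
        printf("%u\n", mmu_translate(346, 14000));   /* prints 14346 */
        return 0;
    }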

EARLY SYSTEMS MEMORY ALLOCATION TECHNIQUES


Memory allocation is the process of reserving a partial or complete portion of computer
memory for the execution of programs and processes

Main memory usually has two partitions:


• Low Memory − Operating system resides in this memory.

• High Memory − User processes are held in high memory.

MEMORY ALLOCATION TECHNIQUES
• Single Partition Allocation
• Multiple Partition Allocation
• Fixed partitions (MFT)
• Variable/Dynamic partitions (MVT)
• Relocatable Variable/Dynamic partitions (MRVT)
• Paging
• Segmentation

SINGLE PARTITION
• Single partition allocation sets aside some memory for the OS, and the user program
gets the rest.
• Use MMU to translate addresses by simple addition.
• Does not support multiprogramming.

MULTIPLE FIXED PARTITIONS


Fixed partitioning is the oldest and simplest technique used to put more than one process in
main memory.
In this partitioning, the number of (non-overlapping) partitions in RAM is fixed, but the size of
each partition may or may not be the same.
Since the size of a typical process is much smaller than the main memory, the operating system
divides main memory into a number of partitions wherein each partition may contain only one
process.
The degree of multiprogramming is bounded by the number of partitions.

When a partition is free, the operating system selects a process from the input queue and
loads it into the free partition.

When the process terminates, the partition becomes available for another process.
Example:
Assume a 32K main memory divided into the following partitions:
10K operating system
8K user partition 1
6K user partition 2
4K user partition 3
4K user partition 4
32K

• One flaw of the FIRST-FIT algorithm is that it forces other jobs (particularly those at the
latter part of the queue) to wait even though some memory partitions are free.
• An alternative to this algorithm is the BEST-FIT algorithm, which allows small jobs
to use a much larger memory partition if it is the only partition left. However, this
algorithm still wastes some valuable memory space.

Other problems with Fixed Partitions:


1. What if a process requests for more memory?
Possible Solutions:
• kill the process
• return control to the user program with an “out of memory” message
• reswap the process to a bigger partition, if the system allows dynamic relocation
2. How does the system determine the sizes of the partitions?
3. Multiple Fixed Partition Technique (MFT) results in internal and external fragmentation
which are both sources of memory waste.

The OS places jobs or processes entering memory in a job queue in a predetermined manner
(such as first-come first-served).

There are two possible ways to assign or allocate jobs to the available memory partitions:
• First-fit allocation
• Best-fit allocation
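
Before the walkthroughs of the two policies below, here is a minimal sketch in C of both allocators over the fixed partitions of the example above (8K, 6K, 4K, 4K); the function names and the KB units are illustrative assumptions:

    #include <stdio.h>

    #define NPART 4

    /* first-fit: the first free partition that is large enough */
    int first_fit(const int size[], const int busy[], int job) {
        for (int i = 0; i < NPART; i++)
            if (!busy[i] && size[i] >= job) return i;
        return -1;                        /* no fit: the job must wait */
    }

    /* best-fit: the smallest free partition that is still large enough */
    int best_fit(const int size[], const int busy[], int job) {
        int best = -1;
        for (int i = 0; i < NPART; i++)
            if (!busy[i] && size[i] >= job &&
                (best < 0 || size[i] < size[best])) best = i;
        return best;
    }

    int main(void) {
        int size[NPART] = {8, 6, 4, 4}, busy[NPART] = {0};
        int p = best_fit(size, busy, 4);  /* Job 1 (4K) */
        if (p >= 0) { busy[p] = 1; printf("Job 1 -> partition %d\n", p + 1); }
        return 0;
    }

Best fit places Job 1 (4K) in Partition 3, exactly as in the walkthrough below; first_fit would have chosen Partition 1.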

FIXED PARTITIONS – First Fit Allocation

In FIRST-FIT allocation, the first job claims the first available memory partition with space
greater than or equal to its size.
The OS doesn't search for the most appropriate partition; it simply allocates the job to the
nearest available memory partition of sufficient size.

In FIRST-FIT memory allocation, the following will be observed:


1. Assign Job 1 (4K) to User Partition 1 (8K) – since Partition 1 is the FIRST encountered
memory available that is large enough for Job 1.
2. Assign Job 2 (6K) to User Partition 2 (6K) - since Partition 1 is already occupied by Job 1.
Thus, Partition 2 is the FIRST encountered memory available that is large enough for Job 2.
3. Job 3 (6K) won't fit in any of the remaining partitions (Partitions 3 and 4). Thus, Job 3
must wait for its turn until a partition large enough for it becomes available.
4. Job 4 cannot use User Partition 3, since that would put it ahead of Job 3 and break the
FCFS rule. So it will also have to wait for its turn, even though User Partition 3 is free.

FIXED PARTITIONS – Best Fit Allocation
BEST-FIT allocation keeps the free/busy list in order by size – smallest to largest.

In this method, the OS first searches the whole memory according to the size of the given job
and allocates it to the closest-fitting free partition in the memory, making it able to use memory
efficiently.

In BEST-FIT memory allocation, the following will be observed:


1. Assign Job 1 (4K) to User Partition 3 (4K) – since Job 1 fits exactly into Partition 3.
2. Assign Job 2 (6K) to User Partition 2 (6K) – since Job 2 fits exactly into Partition 2.
3. Assign Job 3 (6K) to User Partition 1 (8K) – since Partition 1 is the closest-fitting free
partition in memory for Job 3.
4. Assign Job 4 (2K) to User Partition 4 (4K) – since Partition 4 is the closest-fitting free
partition in memory for Job 4.

FRAGMENTATION
• As processes are loaded into and removed from memory, the free memory space is broken
into little pieces. After some time, processes cannot be allocated to memory blocks
because the blocks are too small, and the memory blocks remain unused. This problem
is known as fragmentation.
• Fragmentation is of two types:

• Internal fragmentation
• External fragmentation

Internal fragmentation occurs when a partition is too big for a process. The difference between
the partition and the process is the amount of internal fragmentation.
External fragmentation occurs when a partition is available, but is too small for any waiting job.

Example 1:

Using FIRST-FIT algorithm, only Jobs 1 and 2 can enter memory at Partitions 1 and 2.

During this time:


I.F. = (8K - 4K) + (6K - 6K)
= 4K
E.F. = 4K + 4K

= 8K
Therefore:
Memory Utilization = (total process size inside partitions / total memory available)
= (4K + 6K)/22K x 100
= 45.5%

Example 2:

Using BEST-FIT algorithm
During this time:
I.F. = (8K-6K)+(6K-6K)+(4K-4K)+(4K-2K)
= 4K
E.F. = 0
(since all partitions are containing processes)
Therefore:
Memory Utilization = (total process size inside partitions / total memory available)
= (6K + 6K + 4K + 2K)/22K x 100
= 81.81%

VARIABLE PARTITIONS (MVT)


• In Multiple Variable Partition Technique (MVT), the system allows the region sizes to
vary dynamically. It is therefore possible to have a variable number of tasks in memory
simultaneously.
• Initially, the OS views memory as one large block of available memory called a hole.
When a job arrives and needs memory, the system searches for a hole large enough for
this job. If one exists, the OS allocates only as much as is needed, keeping the rest
available to satisfy future requests.
• Assume that the memory size is 256K with the OS residing at the first 40K memory
locations.
• Assume further that the following jobs are in the job queue:
• The system again follows the FCFS algorithm in scheduling processes.

Example:
Assume that all jobs arrived at the same time

Job   Memory   Run Time
1     60K      10ms
2     100K     5ms
3     30K      20ms
4     70K      8ms
5     50K      15ms

Figure: Memory maps under MVT (first-fit, FCFS). Initially, the OS occupies 0–40K, Job 1
occupies 40K–100K, Job 2 occupies 100K–200K, and Job 3 occupies 200K–230K, leaving
230K–256K free. When Job 2 finishes after 5 time units, Job 4 (70K) is allocated at
100K–170K. When Job 1 finishes after 10 time units, Job 5 (50K) is allocated at 40K–90K.

The table shows when each job started and finished. It also shows the waiting time (time
started – arrival time) and the memory left available after the job was allocated.

Job   Memory   Run Time   Time Started (ms)   Time Finished (ms)   Waiting Time (ms)   Memory available when job was allocated
1     60K      10ms       0                   10                   0                   156K
2     100K     5ms        0                   5                    0                   56K
3     30K      20ms       0                   20                   0                   26K
4     70K      8ms        5                   13                   5                   56K
5     50K      15ms       10                  25                   10                  66K

Note that using the FIRST-FIT algorithm, the last job finished at 25ms.

This example illustrates several points about MVT:

1. In general, there is at any time a set of holes, of various sizes, scattered
throughout memory.
2. When a job arrives, the operating system searches this set for a hole large
enough for the job (using the first-fit, best-fit, or worst-fit algorithm).
First Fit
• Allocate the first hole that is large enough. This algorithm is generally faster and
empty spaces tend to migrate toward higher memory. However, it tends to
exhibit external fragmentation.

Best Fit
• Allocate the smallest hole that is large enough. This algorithm produces the
smallest leftover hole. However, it may leave many holes that are too small to be
useful.
Worst Fit
• Allocate the largest hole. This algorithm produces the largest leftover hole.
However, it tends to scatter the unused portions over non-contiguous areas of
memory.

• If the hole is too large for a job, the system splits it into two: the operating system gives
one part to the arriving job and returns the other to the set of holes.
• When a job terminates, it releases its block of memory and the operating system returns
it in the set of holes.
• If the new hole is adjacent to other holes, the system merges these adjacent holes to
form one larger hole. This is also known as coalescing.
• Internal fragmentation does not exist in MVT but external fragmentation is still a
problem. It is possible to have several holes with sizes that are too small for any
pending job.
• The solution to this problem is COMPACTION. The goal is to shuffle the memory
contents to place all free memory together in one large block.

RELOCATABLE VARIABLE PARTITIONS (MRVT)

• Compaction is possible only if relocation is dynamic, and is done at execution time.

Example:
Given:

Job   Memory   Run Time
1     60K      10ms
2     100K     5ms
3     30K      20ms
4     70K      8ms
5     50K      15ms

Figure: Memory maps under MRVT (with compaction). As before, Jobs 1–3 initially occupy
40K–100K, 100K–200K, and 200K–230K. When Job 2 finishes after 5 time units, Job 4 (70K)
takes 100K–170K; compaction then moves Job 3 up to 170K–200K, merging the free space
into one 56K hole at 200K–256K, which is large enough to admit Job 5 (50K) at 200K–250K at
the same time, leaving 6K free.

The table shows when each job started and finished. It also shows the waiting time (time
started – arrival time) and the memory left available after the job was allocated.

Job   Memory   Run Time   Time Started (ms)   Time Finished (ms)   Waiting Time (ms)   Memory available when job was allocated
1     60K      10ms       0                   10                   0                   156K
2     100K     5ms        0                   5                    0                   56K
3     30K      20ms       0                   20                   0                   26K
4     70K      8ms        5                   13                   5                   56K
5     50K      15ms       5                   20                   5                   6K

Note that with compaction (MRVT), the last job finished at 20ms, earlier than the 25ms of the
FIRST-FIT scheme without compaction, thus using memory more efficiently.

SUBTOPIC 2: PAGING AND SEGMENTATION MEMORY ALLOCATION TECHNIQUES
PAGING
• MVT still suffers from external fragmentation when available memory is not
contiguous, but fragmented into many scattered blocks.
• Aside from compaction (MRVT), paging can minimize external fragmentation.
Paging permits a program's memory to be non-contiguous, thus allowing the operating
system to allocate physical memory to a program wherever possible.
• Memory paging is a memory management technique for controlling how a computer or
virtual machine's (VM's) memory resources are shared.
• This non-physical memory, which is called virtual memory, is actually a section of a
hard disk that's set up to emulate the computer's RAM.
• The portion of the hard disk that acts as physical memory is called a page file.
• In paging, the OS divides main memory into fixed-sized blocks called frames.
• The system also breaks a process into blocks called pages.
• The size of a memory frame is EQUAL to the size of a process page.
• The pages of a process may reside in different frames in main memory.
• The OS translates this logical address into a physical address in main memory where
the word actually resides. This translation process is possible through the use of a page
table.
• Every address generated by the CPU is a logical address.
• A logical address has two parts:
1. The page number (p) indicates what page the word resides.
2. The page offset (d) selects the word within the page.

logical physical
address address
Main
CPU p d f d
Memory

p
f

page table

• The page number is used as an index into the page table.
• The page table contains the base address of each page in physical memory.
• This base address is combined with the page offset to define the physical memory
address that is sent to the memory unit.

Figure: Paging example. Logical memory holds pages 0–3. The page table maps page 0 →
frame 1, page 1 → frame 4, page 2 → frame 3, and page 3 → frame 7, so physical memory
holds page 0 in frame 1, page 2 in frame 3, page 1 in frame 4, and page 3 in frame 7.

• The page size (like the frame size) is defined by the hardware. The size of a page is
typically a power of 2 varying between 512 bytes and 16 MB per page, depending on
the computer architecture.
• If the size of a logical address space is 2^m, and the page size is 2^n addressing units (bytes
or words), then the high-order m - n bits of a logical address designate the page
number, and the n low-order bits designate the page offset. Thus, the logical address
is as follows:

  page number         page offset
  p (m - n bits)      d (n bits)
Example
Main Memory Size = 32 bytes
Process Size = 16 bytes

Page or Frame Size = 4 bytes

No. of Process Pages = 4 pages


No. of MM Frames = 8 frames

• Physical memory address for a 32-byte memory (no. of frames = 8; frame size = 4 bytes):

  Physical Address Format (5 bits): A4 A3 A2 = frame no., A1 A0 = page offset

• Logical memory address for a 16-byte process (no. of pages = 4; page size = 4 bytes):

  Logical Address Format (4 bits): A3 A2 = page no., A1 A0 = page offset

Figure: Paging a 16-byte process (bytes a–p, pages 0–3) into a 32-byte main memory
(frames 0–7). The page table maps page 0 → frame 5, page 1 → frame 6, page 2 → frame 1,
and page 3 → frame 2, so frame 1 holds i–l, frame 2 holds m–p, frame 5 holds a–d, and
frame 6 holds e–h.

• Logical address 0 (a) is page 0, offset 0.
• Indexing into the page table, page 0 is in frame 5. Thus, logical address 0 (a) maps to
physical address 20.
• Physical address = frame no. x page size + offset = (5 x 4 + 0).
• Logical address 3 (d) is page 0, offset 3, and maps to physical address 23 (5 x 4 + 3).
• Logical address 4 (e) is page 1, offset 0; according to the page table, page 1 is mapped
to frame 6.
• Logical address 4 (e) therefore maps to physical address 24 (6 x 4 + 0).
• Logical address 13 (n) is page 3, offset 1, and maps to physical address 9 (2 x 4 + 1).
Figure: The same example with binary addresses. The page table maps page 00 → frame 101,
page 01 → frame 110, page 10 → frame 001, and page 11 → frame 010. When the CPU sends
logical address 01 01 (page 01, offset 01), the page table supplies frame 110, and the address
is translated to physical address 110 01.

• There is no external fragmentation in paging since the operating system can allocate
any free frame to a process that needs it.
• However, it is possible to have internal fragmentation if the memory requirements of
a process do not happen to fall on page boundaries.
• In other words, the last page may not completely fill up a frame.
• Example: Page Size = 2,048 bytes

Process Size = 72,766 bytes

No. of Pages = Process Size / Page Size = 72,766/2,048


= 36 pages (35 pages plus 1,086 bytes)

Internal Fragmentation: 2,048 - 1,086 = 962 bytes


• In the worst case, a process would need n pages plus one byte. It would be allocated n
+ 1 frames, resulting in an internal fragmentation of almost an entire frame.

SEGMENTATION
• Segmentation is a memory management technique in which each job is divided into
several segments of different sizes, one for each module that contains pieces that perform
related functions. Each segment is actually a different logical address space of the
program.
• When a process is to be executed, its corresponding segments are loaded into non-
contiguous memory, though every segment is loaded into a contiguous block of available
memory.
• Segmentation memory management works very similarly to paging, but here segments are
of variable length, whereas in paging the pages are of fixed size.
• A program segment contains the program's main function, utility functions, data structures,
and so on.
• A logical address space is a collection of segments. Each segment has a name and a
length. Addresses specify the name of the segment or its base address and the offset
within the segment.

Figure: A program's logical address space as a collection of segments: the main program,
a subroutine, Sqrt, the symbol table, and the stack.
LOGICAL ADDRESS SPACE
• The OS maintains a segment map table for every process and a list of free memory
blocks along with segment numbers, their size and corresponding memory locations in
main memory.
• For each segment, the table stores the starting address of the segment (base) and the
length of the segment (limit).

Figure: Segmentation example. The segment table:

  Segment   Limit   Base
  0         1000    1400
  1         400     6300
  2         400     4300
  3         1100    3200
  4         1000    4700

In physical memory, segment 0 occupies 1400–2400, segment 3 occupies 3200–4300,
segment 2 occupies 4300–4700, segment 4 occupies 4700–5700, and segment 1 occupies
6300–6700.

• A reference to segment 3, byte 852, is mapped to 3200 (the base of segment 3) + 852 =
4052.
• A reference to byte 1222 of segment 0 would result in a trap to the operating system
since this segment is only 1000 bytes long.
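
A minimal sketch in C of segment translation with the limit check, reproducing the two references above (the -1 return stands in for the trap):

    #include <stdio.h>

    struct segment { unsigned limit, base; };

    /* returns the physical address, or -1 to signal a trap to the OS */
    long translate(const struct segment table[], unsigned s, unsigned offset) {
        if (offset >= table[s].limit) return -1;    /* beyond segment: trap */
        return (long)table[s].base + offset;
    }

    int main(void) {
        const struct segment table[] = {            /* the segment table above */
            {1000, 1400}, {400, 6300}, {400, 4300}, {1100, 3200}, {1000, 4700}
        };
        printf("%ld\n", translate(table, 3, 852));   /* prints 4052 */
        printf("%ld\n", translate(table, 0, 1222));  /* prints -1 (trap) */
        return 0;
    }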

21
Applied Operating System
Study Guide

Module 4
VIRTUAL MEMORY

SUBTOPIC 1: VIRTUAL MEMORY OVERVIEW

What is Virtual Memory?


• Virtual Memory is a technique that allows the execution of processes that may
not be completely in memory. One major advantage of this scheme is that
programs can be larger than physical memory.
• Further, virtual memory abstracts main memory into an extremely large, uniform
array of storage, separating logical memory as viewed by the user from physical
memory.

VIRTUAL MEMORY BENEFITS


This technique frees programmers from the concerns of memory-storage limitations.
Requiring that the entire process be in main memory before it can execute limits
the size of a program to the size of physical memory.
However, an examination of real programs shows that the entire program is often
not needed in memory at the same time.
For example:
1. Programs often have code to handle unusual error conditions. Since these
errors seldom, if ever, occur in practice, this code is almost never executed.
The ability to execute a program that is only partially in memory would confer many
benefits:
1. A program would no longer be constrained by the amount of physical memory
that is available. Users would be able to write programs for an extremely large
virtual-address space.
2. Because each user program could take less physical memory, more programs
could run at the same time.
3. Less I/O would be needed to load or swap each user program into memory, so
each program would run faster.

VIRTUAL MEMORY
Virtual memory is the separation of user logical memory from physical memory.
This separation allows programmers to have a very large virtual memory when only
a small physical memory is available.

[Figure: Diagram of virtual memory that is larger than physical memory. Pages 0 through n of virtual memory are mapped through the memory map to frames in physical memory.]
DEMAND-PAGING
A demand-paging system is similar to a paging system with swapping. However,
instead of swapping the entire process into memory, the OS (in particular, the pager)
swaps only the necessary pages into memory (lazy swapping).

[Figure: Page table with valid-invalid bits in a demand-paging system. Logical memory holds pages 0-7 (A through H); only pages 0 (A), 2 (C), and 5 (F) are valid, residing in physical memory frames 4, 6, and 9. The remaining pages are marked invalid and reside in secondary storage.]

• There is an additional bit in the page table which is the valid-invalid bit.
• This bit is set to valid to indicate that a corresponding page is in memory.
• This bit is set to invalid to indicate that the corresponding page is in secondary
storage.
• If a process tries to use a page that is not in physical memory, then a page-fault
will occur. This will cause a trap to the operating system indicating an invalid
address error.
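A minimal sketch of this lookup in Python (illustrative only; the table contents come from the figure above, the names and the use of an exception for the trap are ours, and a real system performs this check in the memory-management hardware):

    # Page-table entries from the figure: page -> (frame number, valid bit)
    page_table = {0: (4, True),    1: (None, False), 2: (6, True),    3: (None, False),
                  4: (None, False), 5: (9, True),    6: (None, False), 7: (None, False)}

    def access(page):
        frame, valid = page_table[page]
        if not valid:
            raise LookupError("page fault: page %d is not in memory" % page)
        return frame

    print(access(2))    # frame 6
    access(1)           # page fault: trap to the operating system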

STEPS IN HANDLING PAGE-FAULT

• In the extreme case, the system could start executing a process with no pages in
memory. The process would immediately fault for the page with the first
instruction.
• After the first page is brought into memory, the process would continue to
execute, faulting as necessary until every page that it needed was actually in
memory.
• This is pure demand paging: never bring a page into memory until it is
required.
• The principle of locality of reference ensures that programs do not access a
new page of memory with each instruction execution.
• The effectiveness of demand paging is based on a property of computer
programs called locality of reference.
• Analysis of programs shows that most of their execution time is spent on
routines in which many instructions are executed repeatedly.
• These instructions may constitute a simple loop, nested loops, or a few
procedures or functions that repeatedly call each other.

• It is important to keep the page-fault rate low in a demand-paging system.
• Otherwise, the effective access time increases, slowing down process execution
dramatically.
• A problem occurs if there is a need to transfer a page from disk to memory but
there is no memory space available (there are no free frames). In other words,
memory is over-allocated.

PAGE-REPLACEMENT
• The operating system has several options at this point:
• It could terminate the user process. However, demand paging is the
operating system’s attempt to improve the computer system’s utilization
and throughput.
• Users should not be aware their processes are running on a paged
system – paging should be logically transparent to the user.
• Another possibility is page replacement. In this scheme, the operating
system removes or replaces one of the existing pages in memory to give
way for the incoming page.
• A page replacement algorithm is necessary to select which among the
pages currently residing in memory will be replaced.
• Page replacement takes the following approach:
• If no frame is free, the system finds one that is currently being used and
frees it.
• Freeing a frame means transferring its contents to the disk and changing
the page table (and all other tables) to indicate that the page is no longer
in memory.

[Figure: Page replacement. (1) Swap the victim page out; (2) change the victim's page-table entry to invalid; (3) swap the desired page in; (4) reset the page table for the new page.]

PAGE-REPLACEMENT
The page-fault service routine is now modified to include page replacement:
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. Otherwise, use a page-replacement algorithm to select a victim frame.
c. Write the victim page to the disk; change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables.
4. Restart the user process.

• Notice that, if no frames are free, two page transfers (one out and one in) are
required. This situation effectively doubles the page-fault service time and will
increase the effective access time accordingly.
• To reduce this overhead, a modify or dirty bit is necessary for each page or
frame.
• The modify bit for a page is set whenever any word or byte is written into,
indicating that the page has been modified.
• It is no longer necessary to swap out pages whose modify bit is 0 (there was
no modification). An incoming page may simply overwrite an unchanged
page.
• There are two major problems in the implementation of demand paging:
• Frame Allocation
How many frames will the operating system allocate to a process, particularly if
there are multiple processes?
• Page Replacement
How will the operating system select pages that are to be removed from
memory to give way for incoming pages?

SUBTOPIC 2: PAGE-REPLACEMENT ALGORITHMS


• Page-replacement algorithms are the techniques by which an OS decides which
memory pages to swap out and write to disk when a new page of memory needs to
be allocated.
• A good page-replacement algorithm is one with a low page-fault rate.
• An algorithm is evaluated by running it on a particular string of memory
references and computing the number of page faults. The string of memory
references is called the reference string.
• Common Page-replacement algorithms or techniques are First-In, First-Out
(FIFO), Optimal, and Least Recently Used (LRU) Page Algorithm.

First-In First-Out (FIFO) Algorithm
This is the simplest page-replacement algorithm. When a page must be
replaced, the oldest page is chosen.
Example:
Assume that there are three frames in memory and that the following is the
page reference string:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

[Figure: FIFO page-replacement trace for the reference string above with three frames. There are 15 page faults.]


• It is not strictly necessary to record the time when a page is brought in. The OS
simply creates a FIFO queue to hold all pages in memory. The page at the head
of the queue is replaced if a frame is needed. When a page is brought into
memory, it is inserted at the tail of the queue.
• The FIFO page-replacement algorithm is easy to understand and program.
However, its performance is not always good. The page replaced may be an
initialization module that was used a long time ago and is no longer needed.
• Notice that, even if the algorithm selects for replacement a page that is in active use,
everything still works correctly. After an active page is removed to bring in a new
one, a fault occurs almost immediately to retrieve the active page.
• Thus, a bad replacement choice increases the page-fault rate and slows process
execution, but does not cause incorrect execution.
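The FIFO example above can be reproduced with a short Python simulation (a sketch; the function name is ours):

    from collections import deque

    def fifo_page_faults(reference_string, num_frames):
        frames = deque()                   # oldest page at the left
        faults = 0
        for page in reference_string:
            if page not in frames:         # page fault
                faults += 1
                if len(frames) == num_frames:
                    frames.popleft()       # evict the oldest page
                frames.append(page)
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    print(fifo_page_faults(refs, 3))       # 15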

• As a general rule, the more frames available in physical memory, the lower the
page-fault rate.
• However, there are some instances where the page-fault rate may increase as
the number of physical memory frames increases. This is known as Belady’s
Anomaly.

First-In First-Out (FIFO) Algorithm suffers from Belady’s Anomaly.


Example:
Consider the reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

[Chart: Number of page faults versus number of frames (1 to 6) for FIFO replacement on this reference string. The fault count drops to 9 with three frames but rises to 10 with four frames, illustrating Belady's Anomaly.]
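The anomaly can be demonstrated by running a FIFO fault counter over an increasing number of frames (a self-contained Python sketch of the same simulation as above; the printed fault counts were verified by hand):

    from collections import deque

    def fifo_page_faults(reference_string, num_frames):
        frames, faults = deque(), 0
        for page in reference_string:
            if page not in frames:
                faults += 1
                if len(frames) == num_frames:
                    frames.popleft()
                frames.append(page)
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    for n in range(1, 7):
        print(n, fifo_page_faults(refs, n))   # 3 frames: 9 faults; 4 frames: 10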

Optimal Algorithm
• This algorithm has the lowest page-fault rate of all algorithms.
• In an optimal algorithm, the page that will be replaced is the one that will not be
used for the longest period of time.


[Figure: Optimal page-replacement trace for the same reference string with three frames. There are 9 page faults.]

Optimal Algorithm
• In the example, the optimal page-replacement algorithm produced only 9 page
faults as compared to the FIFO algorithm which produced 15 page faults.
• If the first three page faults were ignored (since all algorithms suffer from these
initialization page faults), then the optimal algorithm is twice as good as FIFO.
• Unfortunately, this algorithm is difficult to implement, since it requires future
knowledge of the reference string. The optimal algorithm is used mainly for
comparison studies.
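Although impractical in a real OS, the optimal policy is easy to simulate offline, since the whole reference string is known in advance (a Python sketch; the names are ours):

    def optimal_page_faults(refs, num_frames):
        frames, faults = [], 0
        for i, page in enumerate(refs):
            if page in frames:
                continue
            faults += 1
            if len(frames) < num_frames:
                frames.append(page)
                continue
            # Evict the resident page whose next use is farthest in the future
            def next_use(p):
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float('inf')        # never used again
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    print(optimal_page_faults(refs, 3))        # 9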

Least Recently Used (LRU) Algorithm


• This algorithm uses the recent past to approximate the near future. It simply
replaces the page that has not been used for the longest period of time.

[Figure: LRU page-replacement trace for the same reference string with three frames. There are 12 page faults.]

Least Recently Used (LRU) Algorithm


• The LRU algorithm produced 12 page faults. Although it is not as good as the 9
of the optimal algorithm (in fact, no other algorithm will produce less than 9
page faults for this example), it is much better than the 15 of the FIFO
algorithm.
• The LRU policy is often used as a page replacement algorithm and is considered
to be quite good. The major problem is how to implement LRU replacement.
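One simple (if inefficient) way to simulate LRU is to keep the resident pages ordered by recency of use (a Python sketch; a real implementation would rely on hardware support such as counters or a stack of page numbers):

    def lru_page_faults(refs, num_frames):
        frames, faults = [], 0                 # most recently used page at the end
        for page in refs:
            if page in frames:
                frames.remove(page)            # refresh its recency
            else:
                faults += 1
                if len(frames) == num_frames:
                    frames.pop(0)              # evict the least recently used page
            frames.append(page)
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    print(lru_page_faults(refs, 3))            # 12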

Applied Operating System
Study Guide

Module 5
MASS STORAGE MANAGEMENT

SUBTOPIC 1: MASS STORAGE STRUCTURE CONCEPTS

INTRODUCTION
• The main requirement of secondary storage is to be able to store a very large amount of
data permanently.
• One of the earliest secondary-storage media is the magnetic tape. However, it was too
slow in comparison with the access time of main memory.
• Magnetic disks provide the bulk of secondary storage for modern computer systems.

DISK STRUCTURE
• A magnetic disk system has several disk platters.

• Each disk platter has a flat circular shape, like a phonograph record.
• A magnetic material, similar to that of a magnetic tape or floppy diskette, covers its two
surfaces.
• Information is recorded on the surfaces.

• Disks are rigid metal or glass platters covered with magnetic recording material.
• The disk surface is logically divided into tracks, which are subdivided into sectors.
• The head does not actually touch the surface of a disk. Instead, it floats or flies only
microns from the disk surface, supported by a cushion of air.
• Typical hard disks have rotation speeds from 4,500 to 7,200 rpm (revolutions per
minute), and 10,000 rpm drives have also reached the market.
• The faster the rotation, the higher the transfer rate, but also the louder and hotter the
drive.
• You may need to cool a 7,200 rpm disk with an extra fan, or its life may be much
shorter. Modern hard disks read all sectors of a track in one revolution (1:1 interleave).
The rotation speed is constant.

• Head crashes can be a problem. If the head contacts the disk surface (due to power
failure, for example), the head will scrape the recording medium off the disk, destroying
data that had been there.
• Magnetic disks provide the bulk of secondary storage for modern computer systems.
• Magnetic tape was used as an early secondary-storage medium, but the access time is
much slower than for disks.

• Magnetic tapes are currently used mainly for backup, for storage of infrequently used
information, and as a medium for transferring information from one system to another.
• The system stores information by recording it magnetically on the sector under the
read-write head.
• A fixed-head system has a separate read-write head for each track. This arrangement
allows the computer to switch from track to track quickly, but it requires a large number
of heads, making the device extremely expensive.
• A movable-head system has only one read-write head per surface and the system
moves the head to access a particular track (slower but less expensive). In this system,
all the tracks on one drive that can be accessed without moving the heads are called a
cylinder.
• Floppy disks take a different approach. The disk is coated with a hard surface, so the
read-write head can rest directly on the disk surface without destroying the data.
• A 3.5" floppy diskette, one of the most commonly used floppy formats, is capable of
storing 1.44 MB of data.

• A disk normally has a device directory indicating which files are on the disk.
• The directory lists the file by name, and includes such information as where on the disk
the file is, and what are its length, type, owner, time of creation, time of last use,
protections, and so on.
• The system stores the disk directory on the device, often at some fixed disk address,
such as disk address 00001.

• Modern disks are addressed as large one-dimensional arrays of logical blocks, where
the logical block is the smallest unit of transfer.
• The size of a logical block is usually 512 bytes, although some disks can be low-level
formatted to choose a different logical block size, such as 1,024 bytes.
There are two ways of reading and writing of data on disks:
1. Constant Linear Velocity (CLV)
• In disks using CLV, the density of bits (bits/unit length) per track is uniform.
• Since the outer tracks are longer, the number of bits on the outer tracks is more than
inner tracks. Consequently, there are more sectors on the outer tracks.
• The drive increases its rotation speed as the head moves from the outer to the inner
tracks, keeping the rate of data passing under the head constant.
• This method is used in CD-ROM and DVD-ROM drives.
2. Constant Angular Velocity (CAV)
• In disks using CAV, the number of bits per track is uniform (constant number of
sectors).
• This means that the density of bits per track increases from the outer tracks to the inner tracks.
• This implies that the disk rotation speed stays constant.
• This method is used in hard disks and floppy disks.
• The time it takes to access a sector depends on three parameters, the seek time,
latency time, and transfer time.
• Seek time is the time it takes to move the read-write head to the correct track.
• Latency time is the time it takes for the sector to rotate under the head.
• Transfer time is the time it takes to actually transfer data between disk and main
memory.
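As a rough worked example of how these three parameters combine (the seek time and transfer rate below are assumed values for illustration, not figures from this module):

    RPM = 7_200
    rotation_ms = 60_000 / RPM                 # one rotation: about 8.33 ms
    avg_latency_ms = rotation_ms / 2           # on average, half a rotation: about 4.17 ms

    avg_seek_ms = 9.0                          # assumed average seek time
    transfer_ms = 512 / 50_000_000 * 1_000     # 512-byte sector at an assumed 50 MB/s

    print(avg_seek_ms + avg_latency_ms + transfer_ms)   # roughly 13 ms per access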

DISK MANAGEMENT

There are other aspects of disk management for which an operating system is responsible.
They are disk formatting, booting from disk, and bad-block recovery.
1. Disk Formatting
• Before a computer can make use of a new disk, the disk must be broken into the sectors
that the computer can understand. This process is called physical formatting.
• When formatted, a disk consists of a set of sectors on each track the head can address.
• Each sector has a header containing the sector number and space for error correcting
code (ECC).

2. Boot Block
• For a computer to start running – for instance, when it is powered up or rebooted – it
needs to have an initial program to run.
• This initial program, or bootstrap program, tends to be simple.
• The bootstrap program initializes all aspects of the system, from CPU registers to device
controllers to memory contents.
3. Bad Blocks
• Because disks have moving parts and small tolerances, they are prone to failure.
• Sometimes, the failure is complete, and the disk needs to be replaced. More frequently,
one or more blocks become unreadable or unwritable.

FREE SPACE MANAGEMENT


• Since there is only a limited amount of disk space, it is necessary to reuse the space
from deleted files for new files.
• To keep track of the free disk space, the system maintains a free-space list.
• Implementation of the free-space list:
1. Bit Vector

• A single bit represents each block.


• If the block is free, the bit is 0; otherwise, the bit is 1.

Example:
If blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27 are free, and the rest of the
blocks are allocated, then the free-space bit map or vector would be:
1 1 0 0 0 0 1 1 0 0 0 0 0 0 1 1 1 0 0 1 1 1 1 1 1 0 0 0 1 1 1 1....
The main disadvantage of this technique is that the OS often keeps this bit map in main
memory.
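Scanning the bit vector for a free block is straightforward (a Python sketch using this module's convention that 0 means free; the function name is ours):

    def first_free_block(bitmap):
        for i, bit in enumerate(bitmap):
            if bit == 0:                # 0 = free, 1 = allocated
                return i
        return -1                       # no free block

    bitmap = [1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1,
              1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1]
    print(first_free_block(bitmap))     # 2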

2. Linked List
• Another approach is to link all the free disk blocks together, keeping a pointer to the
first free block.
• This block contains a pointer to the next free disk block, and so on.
• This technique however, is not efficient because of the sequential nature of lists.

[Figure: Linked free-space list on disk (blocks 0-31). The free-space list head points to the first free block; each free block points to the next free block, and so on.]

3. Grouping
• A modification of the free-list approach is to store the addresses of n free blocks in the
first free block.
• The first n-1 of these are actually free.
• The last one is the disk address of another block containing the addresses of another n
free blocks.
• The importance of this implementation is that the addresses of a large number of free
blocks can be found quickly.

4. Counting
• Another approach is to take advantage of the fact that, generally, several contiguous
blocks may be allocated or freed simultaneously.
• Thus, rather than keeping a list of n free disk addresses, the system keeps the address
of the first free block and the number n of free contiguous blocks that follow the first
block.
• Each entry in the free-space list then consists of a disk address and a count.

ALLOCATION METHODS

• There are three methods of allocating space to files so that disk space utilization is
effective and access time of files is fast.
• Contiguous Allocation
• Linked Allocation
• Indexed Allocation

1. Contiguous Allocation
• The contiguous allocation method requires each file to occupy a set of contiguous
addresses on the disk.
• If the file is n blocks long, and starts at location b, then it occupies blocks b, b + 1, b + 2,
..., b + n - 1.
• The directory entry for each file indicates the address of the starting block and the
length of the area allocated for this file.

[Figure: Contiguous allocation of disk space (blocks 0-31). file1 occupies blocks 0-1, file2 blocks 6-7, file3 blocks 19-24, and file4 blocks 28-31.]

directory
filename  start  length
file1       0      2
file2       6      2
file3      19      6
file4      28      4


2. Linked-List Allocation
• Each file is a linked list of disk blocks; the disk blocks may be scattered anywhere on
the disk.
• The directory contains a pointer to the first and last blocks of the file.
• Each block contains a pointer to the next block.

[Figure: Linked allocation of disk space (blocks 0-31). The directory entry for file1 stores start block 9 and end block 25. Block 9 points to block 16, block 16 to block 1, block 1 to block 10, block 10 to block 25, and block 25 holds the end-of-file marker (-1).]

• There is no external fragmentation with linked-list allocation.


• Any free block on the free-space list can be used to satisfy a request.
• A file can also continue to grow as long as there are free blocks.
• A disadvantage of this technique is that the pointers also occupy precious disk space.
• If a pointer requires 4 bytes out of a 512-byte block, then 0.78 % of the disk is being
used for pointers, rather than information (for a 40 GB disk, that’s 320 MB).
• An important variation on the linked allocation method is the use of the file-allocation
table (FAT).

• This table has one entry for each disk block, and is indexed by block number.
• The OS uses this table much like a linked list.
• The directory contains the block number of the first block of the file.

• The table entry indexed by that block number then contains the block number of the next
block of the file. The chain continues until the last block, which has a special end-of-file
value in the table entry. A zero table entry indicates a free disk block.

Example:
A file consisting of disk blocks 217, 618, and 339.

[Figure: File-allocation table (FAT). The directory entry for file1 stores its name and start block 217. FAT entry 217 contains 618, entry 618 contains 339, and entry 339 contains the end-of-file value.]
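Following a FAT chain amounts to repeated table lookups (a Python sketch of the example above; representing the end-of-file value as -1 is our own choice for illustration):

    END_OF_FILE = -1

    # FAT entries from the example: 217 -> 618 -> 339 -> end of file
    fat = {217: 618, 618: 339, 339: END_OF_FILE}

    def file_blocks(start_block):
        blocks, current = [], start_block
        while current != END_OF_FILE:
            blocks.append(current)
            current = fat[current]      # next block number
        return blocks

    print(file_blocks(217))             # [217, 618, 339]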

3. Indexed Allocation
• Indexed allocation is similar to linked allocation except for the fact that the pointers to the
blocks are not scattered but are grouped in one location which is the index block.
• Each file has its own index block, which is an array of disk-block addresses. The ith
entry in the index block points to the ith block of the file.
• The directory contains the address of the index block.
• Indexed allocation supports direct access, without suffering from external fragmentation.
• However, it suffers from wasted space. The pointer overhead of the index block is
generally greater than the pointer overhead of linked allocation.
• The OS must allocate an entire index block, even if only one or two pointers will be non-
nil.

[Figure: Indexed allocation of disk space (blocks 0-31). The directory entry for file1 stores index block 19. Index block 19 contains the block addresses 9, 16, 1, 10, and 25, followed by -1 in the unused entries.]
SUBTOPIC 2: DISK SCHEDULING TECHNIQUES
• Since most jobs depend heavily on the disk for loading and input and output files, it is
important that disk service be as fast as possible.
• The OS can improve on the average disk service time by scheduling the requests for
disk access.
• If the desired disk drive and controller are available, it can service the request
immediately. However, while the drive or controller is serving one request, any
additional requests, normally from other processes, should go to a queue.
• Disk Scheduling Algorithms
• FCFS Scheduling
• SSTF Scheduling
• SCAN Scheduling
• C-SCAN Scheduling
• LOOK Scheduling
• C-LOOK Scheduling

FCFS DISK SCHEDULING


1. FCFS Scheduling
• The simplest form of scheduling is the first-come first-served (FCFS).
Example:
Assume that there are 200 tracks (0-199) and the read-write head is currently at track 53.
Consider now an ordered disk queue with requests involving tracks.

98, 183, 37, 122, 14, 124, 65, and 67

[Chart: FCFS head movement starting at track 53, servicing tracks 98, 183, 37, 122, 14, 124, 65, and 67 in queue order.]

Computing for the Total Head Movement (THM):


from 53 to 98 = 98 - 53 = 45
from 98 to 183 = 183 - 98 = 85
from 183 to 37 = 183 - 37 = 146
from 37 to 122 = 122 - 37 = 85
from 122 to 14 = 122 - 14 = 108
from 14 to 124 = 124 - 14 = 110
from 124 to 65 = 124 - 65 = 59
from 65 to 67 = 67 - 65 = 2
Total Head Movement = 640 tracks
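The total head movement for FCFS is simply the sum of the distances between successive requests (a Python sketch; the function name is ours):

    def total_head_movement(start, requests):
        movement, position = 0, start
        for track in requests:
            movement += abs(track - position)
            position = track
        return movement

    queue = [98, 183, 37, 122, 14, 124, 65, 67]
    print(total_head_movement(53, queue))      # 640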

SSTF DISK SCHEDULING
2. SSTF Scheduling
• The Shortest Seek Time First selects the request with the minimum seek time from the
current head position. This means that the system moves the head to the closest track
in the request queue.
Example:
Assume that the read-write head is currently at track 53. Consider now an ordered disk queue
with requests involving tracks: 98, 183, 37, 122, 14, 124, 65, and 67
[Chart: SSTF head movement starting at track 53, always servicing the closest pending track.]

Computing for the Total Head Movement (THM):

from 53 to 65 = 65 - 53 = 12
from 65 to 67 = 67 - 65 = 2
from 67 to 37 = 67 - 37 = 30
from 37 to 14 = 37 - 14 = 23
from 14 to 98 = 98 - 14 = 84
from 98 to 122 = 122 - 98 = 24
from 122 to 124 = 124 - 122 = 2
from 124 to 183 = 183 - 124 = 59
Total Head Movement = 236 tracks
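SSTF can be simulated by repeatedly picking the pending track closest to the current head position (a Python sketch; the names are ours):

    def sstf(start, requests):
        pending, position, movement = list(requests), start, 0
        order = []
        while pending:
            nearest = min(pending, key=lambda t: abs(t - position))
            movement += abs(nearest - position)
            position = nearest
            pending.remove(nearest)
            order.append(nearest)
        return order, movement

    order, thm = sstf(53, [98, 183, 37, 122, 14, 124, 65, 67])
    print(order)    # [65, 67, 37, 14, 98, 122, 124, 183]
    print(thm)      # 236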

• This scheduling method results in a total head movement of 236 tracks. This algorithm
would result in a substantial improvement in average disk service.
• The problem with SSTF scheduling is that it may cause starvation of some requests.
• In theory, a continual stream of requests near one another could arrive, causing the
request for a farther track to wait indefinitely.
• The SSTF algorithm, although a substantial improvement over the FCFS algorithm, is
not optimal.

SCAN DISK SCHEDULING


3. SCAN Scheduling
• The read-write head starts at one end of the disk, and moves toward the other end,
servicing requests as it reaches each track, until it gets to the other end of the disk.
• At the other end, the direction of head movement reverses and servicing continues.
• The head continuously scans the disk from end to end.
• The SCAN algorithm is sometimes called the elevator algorithm, since it is similar to
the behavior of elevators as they service requests to move from floor to floor in a
building.
Example:
Assume that there are 200 tracks (0-199) and the read-write head is currently at track 53
(going to the direction of track 0). Consider now an ordered disk queue with requests
involving tracks: 98, 183, 37, 122, 14, 124, 65, and 67.

[Chart: SCAN head movement starting at track 53, moving toward track 0 and then reversing toward track 199.]

Computing for the Total Head Movement (THM):
from 53 to 37 = 53 - 37 = 16
from 37 to 14 = 37 - 14 = 23
from 14 to 0 = 14 - 0 = 14
from 0 to 65 = 65 - 0 = 65
from 65 to 67 = 67 - 65 = 2
from 67 to 98 = 98 - 67 = 31
from 98 to 122 = 122 - 98 = 24
from 122 to 124 = 124 - 122 = 2
from 124 to 183 = 183 - 124 = 59
Total Head Movement = 236 tracks
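SCAN in the direction of track 0 can be simulated by servicing the lower tracks in descending order, touching track 0, and then servicing the higher tracks in ascending order (a Python sketch under this example's assumptions; the names are ours):

    def scan_toward_zero(start, requests, last_track=0):
        down = sorted((t for t in requests if t <= start), reverse=True)
        up = sorted(t for t in requests if t > start)
        movement, position = 0, start
        for track in down + [last_track] + up:   # run to the disk's end, then reverse
            movement += abs(track - position)
            position = track
        return movement

    print(scan_toward_zero(53, [98, 183, 37, 122, 14, 124, 65, 67]))   # 236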

C-SCAN DISK SCHEDULING


4. C-SCAN Scheduling
• The Circular-SCAN algorithm is a variant of the SCAN algorithm.
• As does SCAN scheduling, C-SCAN scheduling moves the head from one end of the
disk to the other, servicing requests as it goes.
• When it reaches the other end, however, it immediately returns to the beginning of
the disk without servicing any requests on the return trip.
• C-SCAN scheduling essentially treats the disk as though it were circular, with the last
track adjacent to the first one.

[Chart: C-SCAN head movement starting at track 53, moving toward track 199, then jumping back to track 0 and continuing upward.]

Computing for the Total Head Movement (THM):


from 53 to 65 = 65 - 53 = 12
from 65 to 67 = 67 - 65 = 2
from 67 to 98 = 98 - 67 = 31
from 98 to 122 = 122 - 98 = 24
from 122 to 124 = 124 - 122 = 2
from 124 to 183 = 183 - 124 = 59
from 183 to 199 = 199 - 183 = 16
from 199 to 0 = 199 - 0 = 199
from 0 to 14 = 14 - 0 = 14
from 14 to 37 = 37 – 14 = 23
Total Head Movement = 382 tracks

LOOK DISK SCHEDULING
5. LOOK Scheduling

• In SCAN and C-SCAN scheduling, the head always moves from one end of the disk to
the other.
• More commonly, the head is only moved as far as the last request in each direction.
• As soon as there are no requests in the current direction, the head movement is
reversed.
• These versions of SCAN and C-SCAN scheduling are called LOOK (“look” for a request
before moving in that direction) and C-LOOK scheduling.

Computing for the Total Head Movement (THM):


from 53 to 65 = 65 - 53 = 12
from 65 to 67 = 67 - 65 = 2
from 67 to 98 = 98 - 67 = 31
from 98 to 122 = 122 - 98 = 24
from 122 to 124 = 124 - 122 = 2
from 124 to 183 = 183 - 124 = 59
from 183 to 37 = 183 - 37 = 146
from 37 to 14 = 37 - 14 = 23
Total Head Movement = 299 tracks

SELECTING DISK SCHEDULING ALGORITHM
• SCAN and C-SCAN are more appropriate for systems that place a heavy load on the
disk.
• If the queue seldom has more than one outstanding request, then all scheduling
algorithms are effectively equivalent. In this case, FCFS is a reasonable algorithm (due
to its simplicity).

Applied Operating System
Study Guide

Module 6
PROTECTION AND SECURITY

SUBTOPIC 1: ROLE OF OPERATING SYSTEM IN SECURITY


Operating system plays a key role in computer system security.
• Any vulnerability at the operating system level opens the entire system to attack
• The more complex and powerful the operating system, the more likely it is to have
vulnerabilities to attack
System administrators must be on guard to arm their operating systems with all available
defenses against attack

System Survivability is the capability of a system to fulfill its mission, in a timely manner, in the
presence of attacks, failures, or accidents
Key properties of survivable systems:
• Resistance to attacks
• Recognition of attacks and resulting damage
• Recovery of essential services after an attack
• Adaptation and evolution of system defense mechanisms to lessen future attacks


LEVELS OF PROTECTION
System administrator must evaluate the risk of intrusion for each computer configuration, which
in turn depends on the level of connectivity given to the system

[Figure: A simplified comparison of the security protection required for three typical computer configurations.]

BACKUP AND RECOVERY


• Backup and recovery policies are essential for most computing systems
• Many system managers use a layered backup schedule
• Backups, with one set stored off-site, are crucial to disaster recovery
• Written policies and procedures and regular user training are essential elements of
system management
• Written security procedures should recommend:
• Frequent password changes
• Reliable backup procedures
• Guidelines for loading new software
• Compliance with software licenses
• Network safeguards
• Guidelines for monitoring network activity
• Rules for terminal access

SYSTEM SECURITY BREACHES AND SYSTEM PROTECTION
• A gap in system security can be malicious or not
• Intrusions can be classified as:
• Due to uneducated users and unauthorized access to system resources
• Purposeful disruption of the system’s operation
• Purely accidental
Examples: Hardware malfunctions, undetected errors in OS or applications, or natural disasters
• Malicious or not, a breach of security severely damages the system’s credibility

UNINTENTIONAL INTRUSIONS

• Any breach of security or modification of data that was not the result of a planned
intrusion
• Examples:
• Accidental incomplete modification of data
• When non-synchronized processes access data records and modify
some but not all of a record’s fields
• Errors due to incorrect storage of data values
Example: When the field isn’t large enough to hold the numeric value stored there

[Figure: (a) Original data value in a field large enough to hold it. If the field is too small, (b) FORTRAN replaces the data with asterisks, while (c) COBOL truncates the higher-order digits and stores only the digits that remain.]

INTENTIONAL ATTACKS
• Types of Intentional attacks:

• Intentional unauthorized access


Examples: denial of service attacks, browsing, wire tapping, repeated trials, trap doors, and
trash collection
• Viruses and worms
• Trojan Horses
• Bombs
• Blended threats

INTENTIONAL UNAUTHORIZED ACCESS


• A denial of service (DoS) attack is one in which a malicious hacker takes over
computers via the Internet and causes them to flood a target site with demands for data
and other small tasks, forcing the target to perform repeated unproductive work.
• Browsing is when unauthorized users gain access to search through secondary storage
directories or files for information they should not have the privilege to read.
• Wire tapping is when unauthorized users monitor or modify a user's transmission
• Repeated Trials refer to entering systems by guessing authentic passwords
• Trap doors refer to an unspecified and undocumented entry point to the system
• Installed by a system diagnostician or programmer for future use
• Leaves the system vulnerable to future intrusion
• Trash collection refers to the use of discarded materials such as disks, CDs, printouts,
etc., to enter the system illegally.

[Table: Average time required to guess passwords of up to ten alphabetic characters (A-Z) using brute force.]

VIRUSES
• Small programs written to alter the way a computer operates, without permission of the
user
• Must meet two criteria: It must be self-executing and self-replicating
• Usually written to attack a certain OS
• Spread via a wide variety of applications
• Macro virus works by attaching itself to a template (such as NORMAL.DOT), which in
turn is attached to word processing documents

[Figure: A file infector virus attacks a clean file (a) by attaching a small program to it (b).]

TYPES OF VIRUS

BOMBS AND BLENDED THREATS
• A logic bomb is a destructive program with a fuse: a certain triggering event (such as
a keystroke or connection with the Internet). It can spread unnoticed throughout a
network.
• Time bomb is a destructive program triggered by a specific time, such as a day of the
year
• Blended Threat combines into one program the characteristics of other attacks
Examples: virus, worm, Trojan Horse, spyware, and other malicious code into a single program

SNIFFERS AND SPOOFING


• Sniffers are programs that reside on computers attached to the network. They peruse
data packets as they pass by, examining each one for specific information.
• Spoofing is the act of disguising a communication from an unknown source as being
from a known, trusted source.
• Spoofing can apply to emails, phone calls, and websites
• Used when unauthorized users want to disguise themselves as friendly sites
(hoax sites)

SOCIAL ENGINEERING
A technique whereby system intruders gain access to information about a legitimate user to
learn active passwords by
• Looking in and around the user’s desk for a written reminder
• Trying the user logon ID as the password
• Searching logon scripts
• Telephoning friends and co-workers to learn the names of user’s family
members, pets, vacation destinations, favorite hobbies, car model, etc.
• Phishing is the act of fraudulently using email to try to get the recipient to reveal
personal data. In a phishing scam, con artists send legitimate-looking emails urging the
recipient to take action to avoid a negative consequence or to receive a reward.
• Spear-phishing is a variation of phishing in which the phisher sends fake emails to a
certain organization’s employees. It is known as spear-phishing because the attack is
much more precise and narrow, like the tip of a spear.
• Smishing is a type of phishing that involves the use of Short Message Service (SMS)
texting. In a smishing scam, people receive a legitimate-looking text message on their
phone telling them to call a specific phone number or to log on to a Web site.
• Vishing is similar to smishing except that the victims receive a voice mail telling them to
call a phone number or access a Web site.
• A rootkit is a set of programs that enables its user to gain administrator-level access to
a computer without the end user's consent or knowledge. Once it is installed, the attacker
can gain full control of the system.
• Ransomware is malware that disables a computer or smartphone until the victim
pays a fee, or ransom.

SYSTEM PROTECTION

• No single guaranteed method of protection


• System vulnerabilities include:
• File downloads, e-mail exchange
• Vulnerable firewalls
• Improperly configured Internet connections, etc.
• Need for continuous attention to security issues
• System protection is multifaceted protection methods include:
• Use of antivirus software, firewalls, restrictive access and encryption

ANTIVIRUS SOFTWARE
• Software to combat viruses that can be preventive, diagnostic, or both
• Preventive programs may calculate a checksum for each production program (see the sketch after this list)
• Diagnostic software compares file sizes, looks for replicating instructions or
unusual file activity
• Can sometimes remove the infection and leave the remainder intact
• Antivirus software is unable to repair worms, Trojan horses, or blended threats, as
these consist of malicious code in their entirety
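A preventive checksum of the kind mentioned above can be sketched with Python's standard hashlib module (the file path is hypothetical; only the hashing calls are real standard-library functions):

    import hashlib

    def file_checksum(path):
        """Compute the SHA-256 checksum of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(8192), b''):
                digest.update(chunk)
        return digest.hexdigest()

    # baseline = file_checksum('/usr/bin/someprogram')   # hypothetical path
    # A later mismatch against the stored baseline signals the file was modified.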

[Figure: (a) An uninfected file; (b) a file infected with a virus; (c) a Trojan horse or worm, which consists entirely of malicious code.]

FIREWALLS

• A set of hardware and/or software designed to protect a system by disguising its IP
address from unauthorized users
• Sits between the Internet and the internal network
• Blocks curious inquiries and potentially dangerous intrusions from outside the system

[Figure: Firewall sitting between campus networks and the Internet, filtering requests for access.]

• Mechanisms used by the firewall to perform various tasks include:


• Packet filtering
• Proxy servers
• Typical tasks of the firewall are to:
• Log activities that access the Internet
• Maintain access control based on senders’ or receivers’ IP addresses
• Maintain access control based on services that are requested
• Hide internal network from unauthorized users
• Verify that virus protection is installed and enforced
• Perform authentication based on the source of a request from the Internet

AUTHENTICATION
• Authentication is a verification that an individual trying to access a system is authorized
to do so.
• Kerberos is a network authentication protocol.
• Designed to provide strong authentication for client/server applications by using secret-
key cryptography.

[Figure: Using Kerberos, when client A attempts to access server B, the user is authenticated (a) and receives a ticket for the session (b). Once the ticket is issued, client and server can communicate at will (c). Without the ticket, access is not granted.]

ENCRYPTION
Most extreme protection method for sensitive data where data is put into a secret code
• To communicate with another system, data is encrypted, transmitted, decrypted,
and processed
• The sender encrypts the message using the receiver's public key
• The receiver must have the matching private key to decode the message
Disadvantages:
• Increases system’s overhead
• System becomes totally dependent on encryption process itself

PASSWORD MANAGEMENT
Most basic techniques used to protect hardware and software investments include:
• Good passwords
• Careful user training

Password Construction:
• Good password is unusual, memorable, and changed often
• Password files normally stored in encrypted form
• Password length has a direct effect on the ability of a password to survive
password-cracking attempts
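The effect of length is easy to quantify (a worked sketch; the attacker speed of one million guesses per second is an assumed figure for illustration):

    SECONDS_PER_DAY = 86_400
    alphabet, length = 26, 8                  # A-Z, eight characters
    guesses_per_second = 1_000_000            # assumed attacker speed

    combinations = alphabet ** length         # 208,827,064,576
    worst_case_days = combinations / guesses_per_second / SECONDS_PER_DAY
    print(round(worst_case_days, 1))          # about 2.4 days; each extra
                                              # character multiplies the work by 26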

PASSWORD CONSTRUCTION
Reliable techniques for generating a good password:
• Use minimum of eight characters, including numbers and non-alphanumeric
characters
• Create a misspelled word or join bits of phrases into a word that’s easy to
remember
• Follow a certain pattern on the keyboard
• Create acronyms from memorable sentences
• Use upper and lowercase characters if allowed
• Never use a word that’s included in any dictionary
• Dictionary attack is a method of breaking encrypted passwords
• Requirements:

• A copy of the encrypted password file


• Algorithm used to encrypt the passwords
• Prevention:
• Some operating systems “salt” user passwords with extra random bits to
make them less vulnerable to dictionary attacks
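Salting can be sketched with Python's standard library (hashlib.pbkdf2_hmac and os.urandom are real standard-library calls; the function name and parameter choices are our own):

    import hashlib, os

    def hash_password(password, salt=None):
        if salt is None:
            salt = os.urandom(16)             # 16 random salt bytes per user
        digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
        return salt, digest                   # store both alongside the account

    salt, stored = hash_password('unusual but memorable')
    # To verify a login, recompute with the stored salt and compare digests:
    assert hash_password('unusual but memorable', salt)[1] == stored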

PASSWORD ALTERNATIVES
• Use of a smart card

• A credit card-sized calculator that requires both “something you have and
something you know”
• User must type in the number that appears at that moment on the smart card
• For added protection, user then enters a secret code
• User is admitted to the system only if both number and code are validated

• Biometrics
• The science and technology of identifying individuals based on unique biological
characteristics of each person such as human face, fingerprints, hand
measurements, iris/retina, and voice prints
• Positively identifies the person being scanned
• Critical factor is reducing the margin of error
