Unit 1
OPERATING SYSTEM
SUBJECT CODE: SAE5A, SAU5B, TAB6D, SAZ4B, TAC4B
SYLLABUS
UNIT - I
Introduction: Views – Types of System – OS Structure – Operations – Services – Interface – System Calls-System
Structure – System Design and Implementation. Process Management: Process – Process Scheduling – Inter-process
Communication. CPU Scheduling: CPU Schedulers – Scheduling Criteria – Scheduling Algorithms.
UNIT - II
Process Synchronization: Critical-Section Problem – Synchronization Hardware – Semaphores – Classical Problems
of Synchronization – Monitors. Deadlocks: Characterization – Methods for Handling Deadlocks – Deadlock
Prevention – Avoidance – Detection – Recovery.
UNIT - III
Memory Management: Hardware – Address Binding – Address Space – Dynamic Loading and Linking – Swapping –
Contiguous Allocation – Segmentation – Paging – Structure of the Page Table.
UNIT - IV
Virtual Memory Management: Demand Paging – Page Replacement Algorithms – Thrashing. File System: File
Concept – Access Methods – Directory and Disk Structure – Protection – File System Structures – Allocation
Methods – Free Space Management.
UNIT - V
I/O Systems: Overview – I/O Hardware – Application I/O Interface – Kernel I/O Subsystem – Transforming I/O
Requests to Hardware Operations – Performance. System Protection: Goals – Domain – Access Matrix. System
Security: The Security Problem – Threats – Encryption – User Authentication.
UNIT – 1
OPERATING SYSTEM BASICS
Introduction: Views – Types of System – OS Structure – Operations – Services – Interface – System Calls – System
Structure – System Design and Implementation.
1.1 INTRODUCTION:
The purpose of an operating system is to provide an environment in which a user can execute programs in a
convenient and efficient manner. Because an operating system is large and complex, it must be created piece by piece. Each of
these pieces should be a well-delineated portion of the system, with carefully defined inputs, outputs, and functions.
An operating system is software that manages the computer hardware. The hardware must provide appropriate
mechanisms to ensure the correct operation of the computer system and to prevent user programs from interfering
with the proper operation of the system.
Definition:
“An operating system is a program that manages a computer's hardware. It also provides a basis for application
programs and acts as an intermediary between the computer user and the computer hardware.”
A computer system can be divided into four components: the hardware, the operating system, the application
programs, and the users, as shown in the figure.
The hardware, consisting of the Central Processing Unit (CPU), the memory, and the Input/Output (I/O) devices, provides the basic
computing resources for the system. The application programs, such as word processors, spreadsheets, compilers, and
Web browsers, define the ways in which these resources are used to solve users' computing problems.
A computer system can be viewed as consisting of hardware, software, and data. The operating system provides the means for proper
use of these resources in the operation of the computer system.
1.3 COMPUTER SYSTEM ORGANIZATION:
A modern general-purpose computer system consists of one or more CPUs and a number of device controllers
connected through a common bus that provides access to shared memory. Each device controller is in charge of a
specific type of device (for example, disk drives, audio devices, or video displays).
The CPU and the device controllers can execute in parallel, competing for memory cycles. To ensure orderly access to
the shared memory, a memory controller synchronizes access to the memory.
When the system is powered up or rebooted it needs to have an initial program to run. This initial program, or
bootstrap program, tends to be simple.
Typically, it is stored within the computer hardware in read-only memory (ROM) or electrically erasable
programmable read-only memory (EEPROM), known by the general term firmware. It initializes all aspects of the
system, from CPU registers to device controllers to memory contents. The bootstrap program must know how to load
the operating system and how to start executing that system.
The bootstrap program must locate the operating-system kernel and load it into memory. Once the kernel is loaded
and executing, it can start providing services to the system and its users. Some services are provided outside of the
kernel, by system programs that are loaded into memory at boot time to become system processes, or system daemons
that run the entire time the kernel is running.
A Kernel is a central component of an operating system. It acts as an interface between the user applications and
the hardware.
Booting is a start-up sequence that starts the operating system of a computer when it is turned on.
A Bootstrap is the program that initializes the Operating System (OS) during start-up; it is a small program that
loads and starts a larger program such as the OS.
The occurrence of an event is usually signalled by an interrupt from either the hardware or the software. Hardware
may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus. Software may
trigger an interrupt by executing a special operation called a system call (also called a monitor call).
When the CPU is interrupted, it stops what it is doing and immediately transfers execution to a fixed location. The
fixed location usually contains the starting address where the service routine for the interrupt is located. The interrupt
service routine executes; on completion, the CPU resumes the interrupted computation.
An Interrupt is a signal that gets the attention of the CPU and is usually generated when I/O is required.
A System Call is the programmatic way in which a computer program requests a service from the kernel of the
operating system it is executed on. It provides an interface between a process and operating system to allow user-
level processes to request services of the operating system.
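Python's os module exposes thin wrappers around kernel system calls, so a short sketch can show a user-level process requesting services of the kernel (the filename demo.txt is purely illustrative):

```python
import os

# Each of these Python calls is a thin wrapper around a kernel system call.
pid = os.getpid()                                          # getpid(): ask the kernel for our process ID
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)  # open() system call
os.write(fd, b"written via a system call\n")               # write() system call
os.close(fd)                                               # close() system call
print("process", pid, "wrote the file through the kernel")
os.remove("demo.txt")                                      # unlink() system call: clean up
```

Every one of these operations crosses from user mode into the kernel and back, which is exactly the interface this section describes.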
All forms of memory provide an array of bytes. Each byte has its own address. Interaction is achieved through a
sequence of load or store instructions to specific memory addresses. The load instruction moves a byte or word from
main memory to an internal register within the CPU, whereas the store instruction moves the content of a register to
main memory.
A typical instruction-execution cycle, as executed on a system with a Von Neumann architecture, first fetches an
instruction from memory and stores that instruction in the instruction register. The instruction is then decoded and
may cause operands to be fetched from memory and stored in some internal register. After the instruction on the
operands has been executed, the result may be stored back in memory.
Each storage system provides the basic functions of storing a datum and holding that datum until it is retrieved at a
later time. The main differences among the various storage systems lie in speed, cost, size, and volatility. The wide
variety of storage systems can be organized in a hierarchy (Figure 1.3) according to speed and cost.
Figure 1.3: Storage-device hierarchy
USER VIEW:
The user view of the computer varies by the interface being used. The examples are Windows XP, Vista, Windows 7,
etc. Most computer users sit in front of a personal computer (PC). In this case, the operating system is designed mostly
for ease of use, with some attention paid to resource utilization. Some users sit at a terminal connected to a mainframe
or minicomputer. In this scenario, other users are accessing the same computer through other terminals. These users
share resources and may exchange information. The operating system in this case is designed to maximize resource
utilization, ensuring that all available CPU time, memory, and I/O are used efficiently, and no individual user takes
more than their fair share. Other users sit at workstations connected to a network of other workstations and servers.
These users have dedicated resources but share resources such as networking and servers, including file, compute, and
print servers. Here, the operating system is designed to balance individual usability and resource utilization.
SYSTEM VIEW:
From the computer's point of view, the operating system is the program most closely interacting with the hardware.
An operating system manages resources such as CPU time, memory space, file storage space, and I/O devices, which
may be required to solve various problems. Therefore, the operating system acts as a manager of these resources.
Another view of the operating system is as a control program. A control program manages the execution of user
programs to prevent errors in the proper use of the computer. It is especially concerned with the operation and control
of I/O devices.
A general-purpose computer system consists of CPUs and multiple device controllers that are connected through a
common bus. Each device controller is in charge of a specific type of device. Depending on the controller, more than
one device may be attached. Seven or more devices can be attached to the small computer-systems interface (SCSI)
controller. A device controller maintains some local buffer storage and a set of special-purpose registers. The device
controller is responsible for moving the data between the peripheral devices that it controls and its local buffer storage.
Typically, operating systems have a device driver for each device controller. This device driver understands the device
controller and provides the rest of the operating system with a uniform interface to the device. To start an I/O
operation, the device driver loads the appropriate registers within the device controller. The device controller, in turn,
examines the contents of these registers to determine what action to take (such as "read a character from the
keyboard"). The controller starts the transfer of data from the device to its local buffer. Once the transfer of data is
complete, the device controller informs the device driver via an interrupt that it has finished its operation.
The device driver then returns control to the operating system, possibly returning the data or a pointer to the data if the
operation was a read. For other operations, the device driver returns status information. This form of interrupt-driven
I/O is fine for moving small amounts of data but can produce high overhead when used for bulk data movement such
as disk I/O.
Direct Memory Access (DMA) is a method that allows an input/output (I/O) device to send or receive data
directly to or from main memory, bypassing the CPU to speed up memory operations. The process is
managed by a chip known as a DMA controller (DMAC).
To solve this problem, Direct Memory Access (DMA) is used. After setting up buffers, pointers, and counters for the
I/O device, the device controller transfers an entire block of data directly to or from its own buffer storage to memory,
with no intervention by the CPU. Only one interrupt is generated per block, to tell the device driver that the operation
has completed, rather than the one interrupt per byte generated for low-speed devices. While the device controller is
performing these operations, the CPU is available to accomplish other work. Some high-end systems use switch rather
than bus architecture. On these systems, multiple components can talk to other components concurrently, rather than
competing for cycles on a shared bus. The DMA is shown more effectively in Figure which shows the interplay of all
components of a computer system.
Working of Computer System
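The savings from DMA can be made concrete with a small back-of-the-envelope calculation (the transfer and block sizes below are hypothetical, chosen only to illustrate the ratio):

```python
# Interrupts needed to move a 4 KB transfer with interrupt-driven I/O
# (one interrupt per byte) versus DMA (one interrupt per block).
transfer_bytes = 4096
block_size = 512                                 # assumed DMA block size

interrupts_per_byte_io = transfer_bytes          # one interrupt per byte
interrupts_dma = transfer_bytes // block_size    # one interrupt per block

print(interrupts_per_byte_io)  # 4096
print(interrupts_dma)          # 8
```

With one interrupt per block instead of one per byte, the CPU handles 8 interrupts instead of 4096 and is free to do other work during the transfer.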
TYPES OF OPERATING SYSTEMS:
Batch Operating System: This type of operating system does not interact with the computer directly. There is an operator who takes similar
jobs having the same requirements and groups them into batches. It is the responsibility of the operator to sort jobs with
similar needs.
Disadvantages of Batch Operating System:
(1) The computer operators should be well acquainted with batch systems.
(2) Batch systems are hard to debug.
(3) It is sometimes costly.
(4) The other jobs will have to wait for an unknown time if any job fails.
(5) In a batch operating system, the processing time for a job is difficult to predict accurately while it is in the
queue.
Multiprogramming Operating Systems can be simply described as systems in which more than one program is present in the main
memory and any one of them can be kept in execution. This is basically used for better utilization of resources.
Multi-Programming OS
Its disadvantage is that there is no facility for user interaction with the system.
Multi-Processing Operating System is a type of operating system in which more than one CPU is used for the
execution of processes. It improves the throughput of the system.
Multi-Processing OS
Due to the multiple CPUs, it can be more complex and somewhat difficult to understand.
Multitasking Operating System is simply a multiprogramming operating system with the facility of a round-
robin scheduling algorithm. It can run multiple programs simultaneously.
There are two types of Multi-Tasking Systems which are listed below.
(a) Pre-emptive Multi-Tasking
(b) Cooperative Multi-Tasking
Multitasking OS
Each task is given some time to execute so that all the tasks work smoothly. Each user gets a share of CPU time even
though they use a single system. These systems are also known as multitasking systems. The tasks can be from a single user
or from different users. The time that each task gets to execute is called a quantum. After this time interval is over, the OS
switches over to the next task.
Time-Sharing OS
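The quantum-based switching described above can be sketched as a small round-robin simulation (the task names and CPU demands are invented for illustration):

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate time sharing: each task runs for at most `quantum` time units,
    then the OS switches to the next task in the ready queue."""
    ready = deque(tasks.items())       # (name, remaining_time) pairs
    order = []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        order.append((name, run))      # the slice of CPU time this task received
        if remaining - run > 0:
            ready.append((name, remaining - run))  # not finished: back of the queue
    return order

# Three tasks with different CPU demands, quantum of 2 time units.
print(round_robin({"A": 4, "B": 2, "C": 3}, quantum=2))
# [('A', 2), ('B', 2), ('C', 2), ('A', 2), ('C', 1)]
```

Because the switches happen after every quantum, each task makes steady progress and no single task monopolizes the CPU.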
IBM VM/CMS: IBM VM/CMS is a time-sharing operating system that was first introduced in 1972. It is still
in use today, providing a virtual machine environment that allows multiple users to run their own instances of
operating systems and applications.
TSO (Time Sharing Option): TSO is a time-sharing operating system that was first introduced in the 1960s
by IBM for the IBM System/360 mainframe computer. It allowed multiple users to access the same computer
simultaneously, running their own applications.
Windows Terminal Services: Windows Terminal Services is a time-sharing operating system that allows
multiple users to access a Windows server remotely. Users can run their own applications and access shared
resources, such as printers and network storage, in real-time.
Distributed Operating System: These types of operating systems are a recent advancement in the world of computer technology and are being widely
accepted all over the world, and that too at a great pace. Various autonomous interconnected computers communicate
with each other using a shared communication network. Independent systems possess their own memory unit and
CPU. These are referred to as loosely coupled systems or distributed systems. These systems' processors differ in size
and function. The major benefit of working with these types of operating systems is that a user can access files or
software that are not actually present on his own system but on some other system connected
within this network, i.e., remote access is enabled within the devices connected in that network.
Architecture of Distributed OS
Advantages of Distributed Operating System:
(a) Failure of one system will not affect the other network communication, as all systems are independent of each other.
(b) Electronic mail increases the data exchange speed.
(c) Since resources are being shared, computation is highly fast and durable.
(d) Load on host computer reduces.
(e) These systems are easily scalable as many systems can be easily added to the network.
(f) Delay in data processing reduces.
Disadvantages of Distributed Operating System:
(a) Failure of the main network will stop the entire communication.
(b) The languages used to establish distributed systems are not well-defined yet.
(c) These types of systems are not readily available, as they are very expensive. Not only that, the underlying
software is highly complex and not well understood yet.
Network Operating System: These systems run on a server and provide the capability to manage data, users, groups, security, applications, and
other networking functions. These types of operating systems allow shared access to files, printers, security,
applications, and other networking functions over a small private network. One more important aspect of network
operating systems is that all the users are well aware of the underlying configuration, of all other users within the
network, their individual connections, etc., and that is why these computers are popularly known as tightly coupled
systems.
Network OS
Examples of Network Operating Systems are Microsoft Windows Server 2003, Microsoft Windows Server 2008,
UNIX, Linux, Mac OS X, Novell NetWare, BSD, etc.
Real-Time Operating System: These types of OSs serve real-time systems. The time interval required to process and respond to inputs is very small.
This time interval is called the response time. Real-time systems are used when there are very strict time requirements,
as in missile systems, air traffic control systems, robots, etc.
Difference between Hard Real-Time and Soft Real-Time Systems:
(a) In a hard real-time system, the size of the data file is small or medium; in a soft real-time system, the data file is large.
(b) In a hard real-time system, response time is in milliseconds; in a soft real-time system, response times are higher.
(c) In a hard real-time system, peak-load performance should be predictable; in a soft real-time system, peak load can be tolerated.
(d) A hard real-time system is very restrictive; a soft real-time system is less restrictive.
(e) In case of an error in a hard real-time system, the computation is rolled back; in a soft real-time system, the computation is rolled back to a previously established checkpoint.
(f) Examples of hard real-time systems are satellite launch and railway signalling systems; examples of soft real-time systems are DVD players, telephone switches, electronic games, etc.
(g) A hard real-time system guarantees a response within a specific deadline; a soft real-time system does not guarantee a response within a specific deadline.
(h) In a hard real-time system, a missed deadline has catastrophic or severe consequences (e.g., loss of life or property damage); in a soft real-time system, the consequences are minor (e.g., degraded performance or reduced quality).
(i) A hard real-time system is focused on processing critical tasks with high priority; a soft real-time system is focused on processing tasks with lower priority.
(j) A hard real-time system is highly predictable, with well-defined and deterministic behavior; a soft real-time system is less predictable, with behavior that may vary depending on system load or conditions.
Real-Time OS
Advantages of RTOS:
(a) Maximum Consumption: Maximum utilization of devices and systems, thus more output from all the
resources.
(b) Task Shifting: The time assigned for shifting tasks in these systems is very less. For example, in older
systems, it takes about 10 microseconds in shifting from one task to another, and in the latest systems, it takes
3 microseconds.
(c) Focus on Application: Focus on running applications and less importance on applications that are in the
queue.
(d) Real-time operating system in the embedded system: Since the size of programs is small, RTOS can also be
used in embedded systems like in transport and others.
(e) Error Free: These types of systems are error-free.
(f) Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
(a) Limited Tasks: Very few tasks run at the same time and their concentration is very less on a few applications
to avoid errors.
(b) Use heavy system resources: These systems consume heavy system resources, which are expensive as well.
(c) Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
(d) Device driver and interrupt signals: It needs specific device drivers and interrupt signals to respond to
interrupts as early as possible.
(e) Thread Priority: It is not good to set thread priority, as these systems rarely switch tasks.
Real-Time Operating Systems are used in scientific experiments, medical imaging systems, industrial control
systems, weapon systems, robots, air traffic control systems, etc.
OPERATING-SYSTEM SERVICES:
An operating system provides the following services to programs and users:
◇ User Interface
◇ Program Execution
◇ File system manipulation
◇ Input/Output Operations
◇ Communication
◇ Resource Allocation
◇ Error Detection
◇ Accounting
◇ Security and protection
A computer system can be organized in a number of different ways, which we can categorize roughly according to the
number of general-purpose processors used. The operating systems for different kinds of computer
environments are classified as single-processor systems, multiprocessor systems, and clustered systems.
Single-Processor Systems: On a single-processor system, there is one main CPU capable of executing a general-purpose instruction set, including
instructions from user processes. Almost all single-processor systems have other special-purpose processors as well.
They may come in the form of device-specific processors, such as disk, keyboard, and graphics controllers; or, on
mainframes, they may come in the form of more general-purpose processors, such as I/O processors that move data
rapidly among the components of the system. All of these special-purpose processors run a limited instruction set and
do not run user processes.
Multiprocessor Systems (also known as parallel systems or multicore systems) have two or more processors in close
communication, sharing the computer bus and the clock, memory, and peripheral devices. Multiprocessor systems first
appeared in servers and have since migrated to desktop and laptop systems. Multiple processors have also appeared on
mobile devices such as smartphones and tablet computers.
◆ Increased throughput: By increasing the number of processors, we expect to get more work done in less time.
◆ Economy of scale: Multiprocessor systems can cost less than equivalent multiple single-processor systems,
because they can share peripherals, mass storage, and power supplies. If several programs operate on the same set of
data, it is cheaper to store those data on one disk and to have all the processors share them than to have many
computers with local disks and many copies of the data.
◆ Increased reliability: If functions can be distributed properly among several processors, then the failure of one
processor will not halt the system, only slow it down. If we have ten processors and one fails, then each of the
remaining nine processors can pick up a share of the work of the failed processor. Thus, the entire system runs only 10
percent slower, rather than failing altogether. Increased reliability of a computer system is crucial in many
applications.
The ability to continue providing service proportional to the level of surviving hardware is called graceful
degradation. Some systems go beyond graceful degradation and are called fault tolerant, because they can suffer a
failure of any single component and still continue operation. Fault tolerance requires a mechanism to allow the failure
to be detected, diagnosed, and, if possible, corrected.
The multiprocessor systems are classified into two categories and they are
1. Asymmetric multiprocessor
2. Symmetric multiprocessor
1. Asymmetric multiprocessing is a scheme in which each processor is assigned a specific task. This scheme
defines a boss-worker relationship. The boss processor schedules and allocates work to the worker processors.
2. Symmetric multiprocessor (SMP), in which each processor performs all tasks within the operating system. SMP
means that all processors are peers; no boss-worker relationship exists between processors. Figure 1.5 illustrates a
typical SMP architecture. Multiprocessing adds CPUs to increase computing power. If the CPU has an integrated
memory controller, then adding CPUs can also increase the amount of memory addressable in the system.
Another type of multiprocessor system is a clustered system (Figure 1.6), which gathers together multiple CPUs.
Clustered systems differ from other multiprocessor systems in that they are composed of two or more individual systems, or
nodes, joined together. Such systems are considered loosely coupled. Each node may be a single-processor system or a
multicore system.
Clustering is usually used to provide high-availability service that is, service will continue even if one or more
systems in the cluster fail. Generally, we obtain high availability by adding a level of redundancy in the system.
A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others (over the
LAN). If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the
applications that were running on the failed machine.
The users and clients of the applications see only a brief interruption of service. Clustering can be structured
asymmetrically or symmetrically. In asymmetric clustering, one machine is in hot-standby mode while the other is
running the applications. In symmetric clustering, two or more hosts are running applications and are monitoring each
other.
1.5 OPERATING-SYSTEM STRUCTURE:
One of the most important aspects of operating systems is the ability to multiprogram. A single program cannot,
in general, keep either the CPU or the I/O devices busy at all times. Single users frequently have
multiple programs running. Multiprogramming increases CPU utilization by organizing jobs (code and data) so that
the CPU always has one to execute.
The operating system keeps several jobs in memory simultaneously. Since main memory is too small to accommodate
all jobs, the jobs are kept initially on the disk in the job pool. This pool consists of all processes residing on disk
awaiting allocation of main memory.
The set of jobs in memory can be a subset of the jobs kept in the job pool. The operating system picks and begins to
execute one of the jobs in memory. Eventually, the job may have to wait for some tasks, such as an I/O operation, to
complete.
In a non-multiprogrammed system, the CPU would sit idle. In a multiprogrammed system, the operating system
simply switches to, and executes, another job. When that job needs to wait, the CPU switches to another job, and so
on. Eventually, the first job finishes waiting and gets the CPU back.
Time sharing (or multitasking) is a logical extension of multiprogramming. In time-sharing systems, the CPU executes
multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each
program while it is running. Time sharing requires an interactive computer system, which provides direct
communication between the user and the system.
The user gives instructions to the operating system or to a program directly, using an input device such as a keyboard,
mouse, touch pad, or touch screen, and waits for immediate results on an output device. Accordingly, the response
time should be short, typically less than one second. A time-shared operating system allows many users to share the
computer simultaneously. A time-shared operating system uses CPU scheduling and multiprogramming to provide
each user with a small portion of a time-shared computer. Each user has at least one separate program in memory.
In addition, if several jobs are ready to run at the same time, the system must choose which job will run first. Making
this decision is CPU Scheduling. Finally, running multiple jobs concurrently requires that their ability to affect one
another be limited in all phases of the operating system, including process scheduling, disk storage, and memory
management.
The main advantage of the virtual-memory scheme is that it enables users to run programs that are larger than actual
Physical Memory. Further, it abstracts main memory into a large, uniform array of storage, separating Logical
Memory as viewed by the user from physical memory. This arrangement frees programmers from concern over
memory-storage limitations.
Modern operating systems are interrupt driven. If there are no processes to execute, no I/O devices to service, and no
users to whom to respond, an operating system will sit quietly, waiting for something to happen. Events are almost
always signalled by the occurrence of an interrupt or a trap. A trap (or an exception) is a software-generated interrupt
caused either by an error (for example, division by zero or invalid memory access) or by a specific request from a user
program that an operating-system service be performed.
The interrupt-driven nature of an operating system defines that system's general structure. For each type of interrupt,
separate segments of code in the operating system determine what action should be taken. An interrupt service routine
is provided to deal with the interrupt. Since the operating system and the users share the hardware and software
resources of the computer system, we need to make sure that an error in a user program can cause problems only for
the one program running.
With sharing, many processes could be adversely affected by a bug in one program. For example, if a process gets
stuck in an infinite loop, this loop could prevent the correct operation of many other processes. More subtle errors can
occur in a multiprogramming system, where one erroneous program might modify another program, the data of
another program, or even the operating system itself. Without protection against these sorts of errors, either the
computer must execute only one process at a time or all output must be suspect. A properly designed operating system
must ensure that an incorrect (or malicious) program cannot cause other programs to execute incorrectly.
In order to ensure the proper execution of the operating system, we must be able to distinguish between the execution
of operating-system code and user defined code. The approach taken by most computer systems is to provide
hardware support that allows us to differentiate among various modes of execution.
We need two separate modes of operation: user mode and kernel mode (also called supervisor mode, system mode, or
privileged mode). A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode:
kernel (0) or user (1). With the mode bit, we can distinguish between a task that is executed on behalf of the operating
system and one that is executed on behalf of the user. When the computer system is executing on behalf of a user
application, the system is in user mode.
However, when a user application requests a service from the operating system (via a system call), the system must
transition from user to kernel mode to fulfil the request. This is shown in Figure 1.8. This architectural enhancement is
useful for many other aspects of system operation as well. At system boot time, the hardware starts in kernel mode.
The operating system is then loaded and starts user applications in user mode.
Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode (that is, changes the state
of the mode bit to 0). Thus, whenever the operating system gains control of the computer, it is in kernel mode. The
system always switches to user mode (by setting the mode bit to 1) before passing control to a user program.
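The mode-bit transitions described above can be modeled with a toy sketch; the Machine class and its methods are invented for illustration and stand in for hardware behavior, not any real API:

```python
KERNEL, USER = 0, 1   # mode-bit values as described in the text

class Machine:
    def __init__(self):
        self.mode = KERNEL            # at boot, the hardware starts in kernel mode

    def run_user_program(self):
        self.mode = USER              # OS sets the mode bit to 1 before passing control

    def system_call(self, service):
        self.mode = KERNEL            # the trap switches the hardware to kernel mode
        result = f"kernel performed: {service}"
        self.mode = USER              # switch back to user mode before resuming the program
        return result

m = Machine()
assert m.mode == KERNEL               # boot: kernel mode
m.run_user_program()
assert m.mode == USER                 # user code executes in user mode
print(m.system_call("read"))          # trap in, service, trap out
assert m.mode == USER                 # back in user mode after the call
```

The point of the model is the invariant: kernel code only ever runs with the mode bit at 0, and control returns to user code only after the bit is set back to 1.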
The dual mode of operation provides us with the means for protecting the operating system from errant users and
errant users from one another. We accomplish this protection by designating some of the machine instructions that
may cause harm as privileged instructions.
The hardware allows privileged instructions to be executed only in kernel mode. If an attempt is made to execute a
privileged instruction in user mode, the hardware does not execute the instruction but rather treats it as illegal and
traps it to the operating system.
Timer:
The operating system maintains control over the CPU. We cannot allow a user program to get stuck in an infinite loop
or to fail to call system services and never return control to the operating system. To accomplish this goal, we can use
a timer. A timer can be set to interrupt the computer after a specified period. The period may be fixed (for example,
1/60 second) or variable (for example, from 1 millisecond to 1 second).
A variable timer is generally implemented by a fixed-rate clock and a counter. The operating system sets the counter.
Every time the clock ticks, the counter is decremented. When the counter reaches 0, an interrupt occurs. For instance,
a 10-bit counter with a 1-millisecond clock allows interrupts at intervals from 1 millisecond to 1,024 milliseconds, in
steps of 1 millisecond.
Before turning over control to the user, the operating system ensures that the timer is set to interrupt. If the timer
interrupts, control transfers automatically to the operating system, which may treat the interrupt as a fatal error or may
give the program more time. Clearly, instructions that modify the content of the timer are privileged. We can use the
timer to prevent a user program from running too long.
A simple technique is to initialize a counter with the amount of time that a program is allowed to run. A program with
a 7-minute time limit, for example, would have its counter initialized to 420. Every second, the timer interrupts, and
the counter is decremented by 1. As long as the counter is positive, control is returned to the user program. When the
counter becomes negative, the operating system terminates the program for exceeding the assigned time limit.
The major operations of the operating system are process management, memory management, device management
and file management.
[Figure: Major operations of the operating system: process management, memory management, device management, and file management]
Process Management:
The operating system is responsible for managing processes, i.e., assigning the processor to one process at a time.
This is known as process scheduling. The different algorithms used for process scheduling are FCFS (first come first
served), SJF (shortest job first), priority scheduling, round robin scheduling, etc.
There are many scheduling queues that are used to handle processes in process management. When the processes enter
the system, they are put into the job queue.
The processes that are ready to execute in the main memory are kept in the ready queue. The processes that are
waiting for the I/O device are kept in the device queue.
Memory Management:
Memory management plays an important part in an operating system. It deals with memory and the moving of
processes from disk to primary memory for execution and back again.
The activities performed by the operating system for memory management are –
The operating system assigns memory to the processes as required. This can be done using best fit, first fit and
worst fit algorithms.
All the memory is tracked by the operating system i.e., it notes what memory parts are in use by the processes
and which are empty.
The operating system deallocates memory from processes as required. This may happen when a process has
been terminated or if it no longer needs the memory.
Device Management:
There are many I/O devices handled by the operating system, such as the mouse, keyboard, disk drive, etc. There
are different device drivers that can be connected to the operating system to handle a specific device.
The device controller is an interface between the device and the device driver. User applications can
access all the I/O devices using the device drivers, which are device-specific codes.
File Management:
Files are used to provide a uniform view of data storage by the operating system. All the files are mapped onto
physical devices that are usually non-volatile so data is safe in the case of system failure.
The files can be accessed by the system in two ways, i.e., sequential access and direct access:
Sequential Access
The information in a file is processed in order using sequential access. The file's records are accessed one after
another. Most applications, such as editors and compilers, use sequential access.
Direct Access
In direct access or relative access, the files can be accessed in random for read and write operations. The direct
access model is based on the disk model of a file, since it allows random accesses.
Operating System Services:
An operating system provides an environment for the execution of programs. It provides certain services to
programs and to the users of those programs.
The specific services provided, of course, differ from one operating system to another, but we can identify
common classes.
These operating system services are provided for the convenience of the programmer, to make the
programming task easier. Below figure shows one view of the various operating-system services and the
communications between them.
1. User interface:
Almost all operating systems have a user interface (UI). This interface can take several forms. One is a
command-line interface (CLI), which uses text commands and a method for entering them (say, a keyboard
for typing in commands in a specific format with specific options).
Another is a batch interface, in which commands and directives to control those commands are entered into
files, and those files are executed.
Most commonly, a graphical user interface (GUI) is used. Here, the interface is a window system with a
pointing device to direct I/O, choose from menus, and make selections and a keyboard to enter text.
Some systems provide two or all three of these variations.
2. Program execution:
The system must be able to load a program into memory and to run that program. The program must be able
to end its execution, either normally or abnormally (indicating error).
3. I/O operations:
A running program may require I/O, which may involve a file or an I/O device. For specific devices, special
functions may be desired (such as recording to a CD or DVD drive or blanking a display screen).
For efficiency and protection, users usually cannot control I/O devices directly. Therefore, the operating
system must provide a means to do I/O.
4. File-system manipulation:
The file system is of particular interest. Obviously, programs need to read and write files and directories. They
also need to create and delete them by name, search for a given file, and list file information.
Finally, some operating systems include permissions management to allow or deny access to files or
directories based on file ownership. Many operating systems provide a variety of file systems, sometimes to
allow personal choice and sometimes to provide specific features or performance characteristics.
5. Communications:
Communication may occur between processes that are executing on the same computer or between processes
that are executing on different computer systems tied together by a computer network.
Communications may be implemented via shared memory, in which two or more processes read and write to a
shared section of memory, or message passing, in which packets of information in predefined formats are
moved between processes by the operating system.
6. Error detection:
The operating system needs to detect and correct errors constantly. Errors may occur in the CPU and
memory hardware (such as a memory error or a power failure), in I/O devices (such as a parity error on disk, a
connection failure on a network, or lack of paper in the printer), and in the user program (such as an arithmetic
overflow, an attempt to access an illegal memory location, or a too-great use of CPU time).
For each type of error, the operating system should take the appropriate action to ensure correct and consistent
computing.
Sometimes, it has no choice but to halt the system. At other times, it might terminate an error-causing process
or return an error code to a process for the process to detect and possibly correct.
Another set of operating system functions exists not for helping the user but rather for ensuring the efficient
operation of the system itself. Systems with multiple users can gain efficiency by sharing the computer
resources among the users.
7. Resource allocation:
When there are multiple users or multiple jobs running at the same time, resources must be allocated to each
of them. The operating system manages many different types of resources. Some (such as CPU cycles, main
memory, and file storage) may have special allocation code, whereas others (such as I/O devices) may have
much more general request and release code.
For instance, in determining how best to use the CPU, operating systems have CPU-scheduling routines that
take into account the speed of the CPU, the jobs that must be executed, the number of registers available, and
other factors.
There may also be routines to allocate printers, USB storage drives, and other peripheral devices.
8. Accounting:
We want to keep track of which users use how much and what kinds of computer resources. This record
keeping may be used for accounting (so that users can be billed) or simply for accumulating usage statistics.
Usage statistics may be a valuable tool for researchers who wish to reconfigure the system to improve
computing services.
9. Protection and security:
The owners of information stored in a multiuser or networked computer system may want to control use of
that information. When several separate processes execute concurrently, it should not be possible for one
process to interfere with the others or with the operating system itself. Protection involves ensuring that all
access to system resources is controlled.
Security of the system from outsiders is also important. Such security starts with requiring each user to
authenticate. If a system is to be protected and secure, precautions must be instituted throughout it. A chain is
only as strong as its weakest link.
Operating System Interface:
There are two fundamental approaches for users to interface with the operating system:
1. A command-line interface, or command interpreter, that allows users to directly enter commands to be performed by
the operating system.
2. A graphical user interface (GUI), through which users interact with the operating system.
(a) It gets and processes the next user request, and launches the requested programs.
(b) In some systems the CI may be incorporated directly into the kernel.
(c) More commonly the CI is a separate program that launches once the user logs in or otherwise accesses the
system.
(d) UNIX, for example, provides the user with a choice of different shells, Bourne-shell, C shell, Bourne-Again
shell, Korn shell, and others which may either be configured to launch automatically at login, or which may
be changed on the fly. Below figure shows the Bourne shell command interpreter being used on Solaris 10.
(e) Different shells provide different functionality, in terms of certain commands that are implemented directly by
the shell without launching any external programs. Most provide at least a rudimentary command
interpretation structure for use in shell script programming (loops, decision constructs, variables)
(f) An interesting distinction is the processing of wild card file naming and I/O re-direction. On UNIX systems
those details are handled by the shell, and the program which is launched sees only a list of filenames
generated by the shell from the wild cards. On a DOS system, the wild cards are passed along to the programs,
which can interpret the wild cards as the program sees fit.
A second strategy for interfacing with the operating system is through a user-friendly graphical user interface, or
GUI. Here, rather than entering commands directly via a command-line interface, users employ a mouse-based
window-and-menu system characterized by a desktop metaphor.
The user moves the mouse to position its pointer on images, or icons, on the screen (the desktop) that represent
programs, files, directories, and system functions. Depending on the mouse pointer's location, clicking a button on the
mouse can invoke a program, select a file or directory (known as a folder), or pull down a menu that contains
commands.
(a) Generally implemented as a desktop metaphor, with file folders, trash cans, and resource icons.
(b) Icons represent some item on the system, and respond accordingly when the icon is activated.
(c) First developed in the early 1970's at Xerox PARC research facility.
(d) In some systems the GUI is just a front end for activating a traditional command line interpreter running in the
background. In others the GUI is a true graphical shell in its own right.
(e) Mac has traditionally provided ONLY the GUI interface. With the advent of OSX (based partially on UNIX),
a command line interface has also become available.
(f) Because mice and keyboards are impractical for small mobile devices, these normally use a touch-screen
interface today that responds to various patterns of swipes or "gestures". When these first came out, they often
had a physical keyboard and/or a trackball of some kind built in, but today a virtual keyboard is more
commonly implemented on the touch screen.
The choice of whether to use a command-line or GUI interface is mostly one of personal preference. System
administrators who manage computers and power users who have deep knowledge of a system frequently use the
command-line interface. It is more efficient, giving faster access to the activities they need to perform. Indeed, on some
systems, only a subset of system functions is available via the GUI, leaving the less common tasks to those who are
command-line knowledgeable.
Further, command line interfaces usually make repetitive tasks easier, in part because they have their own
programmability. For example, if a frequent task requires a set of command-line steps, those steps can be recorded
into a file, and that file can be run just like a program. The program is not compiled into executable code but rather is
interpreted by the command-line interface. These shell scripts are very common on systems that are command-line
oriented, such as UNIX and Linux.
The user interface can vary from system to system and even from user to user within a system. It typically is
substantially removed from the actual system structure. The design of a useful and friendly user interface is therefore
not a direct function of the operating system. In this book, we concentrate on the fundamental problems of providing
adequate service to user programs. From the point of view of the operating system, we do not distinguish between user
programs and system programs.
System Calls:
A system call is a method for a computer program to request a service from the kernel of the operating system on
which it is running. A system call is a method of interacting with the operating system via programs. A system call is a
request from computer software to an operating system's kernel.
The Application Program Interface (API) connects the operating system's functions to user programs. It acts as a link
between the operating system and a process, allowing user-level programs to request operating system services. The
kernel can only be accessed using system calls. System calls are required for any program that uses resources.
Below are some examples of how a system call varies from a user function.
1. A system call function may create and use kernel processes to execute the asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system call with kernel-mode privilege
executes in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that are not present in the kernel
protection domain.
4. The code and data for system calls are stored in global kernel memory.
There are various situations where system calls are required in the operating system. Some of these situations
are as follows:
1. A system call is required when a file system wants to create or delete a file.
2. Network connections require system calls to send and receive data packets.
3. If you want to read or write a file, you need system calls.
4. If you want to access hardware devices, such as a printer or scanner, you need a system call.
5. System calls are used to create and manage new processes.
There are commonly five types of system calls. These are as follows:
Process Control:
Process control system calls are used to direct processes. Some process control examples include creating a process,
loading, executing, ending, aborting, and terminating a process.
File Management:
File management system calls are used to handle files. Some file management examples include creating files,
deleting files, and opening, closing, reading, and writing files.
Device Management:
Device management system calls are used to deal with devices. Some examples of device management include
reading from a device, writing to a device, getting device attributes, and releasing a device.
Information Maintenance:
Information maintenance system calls are used to maintain information. Some examples of information maintenance
include getting or setting system data and getting or setting the time or date.
Communication:
Communication system calls are used for communication. Some examples of communication include creating and
deleting communication connections and sending and receiving messages.
The operating system provides the medium for the user to communicate with the computer hardware. System software
is installed on top of the operating system. The operating system structure is the basic model that is needed to
implement an operating system. The common operating system structures are:
1. Simple Structure
2. Monolithic Structure
3. Micro-Kernel Structure
4. Hybrid Kernel Structure
5. Exokernel Structure
6. Layered Approach Structure
7. Modular Structure
8. Virtual Machines
It is the simplest operating system structure and is not well defined; it can only be used for small and limited
systems. In this structure, the interfaces and levels of functionality are not well separated; hence programs can access
I/O routines, which can cause unauthorized access to I/O routines.
The MS-DOS operating System is made up of various layers, each with its own set of functions.
These layers are:
o Application Program
o System Program
o MS-DOS device drivers
o ROM BIOS device drivers
Layering has an advantage in the MS-DOS operating system since all the levels can be defined separately and
can interact with each other when needed.
It is easier to design, maintain, and update the system if it is made in layers. So that's why limited systems
with less complexity can be constructed easily using Simple Structure.
If one user program fails, the entire operating system crashes.
The abstraction level in MS-DOS systems is low, so programs and I/O routines are visible to the end-user, so
the user can have unauthorized access.
Layering in simple structure is shown below:
In a monolithic operating system, the kernel acts as a manager by managing all things like file management,
memory management, device management, and the operational processes of the operating system. The kernel is the
heart of a computer operating system (OS). The kernel delivers basic services to all other elements of the system. It
serves as the primary interface between the operating system and the hardware. In monolithic systems, the kernel can
directly access all the resources of the operating system, including physical hardware, e.g., keyboard, mouse, etc.
The monolithic kernel is another name for the monolithic operating system. Batch processing and time-sharing
maximize the usability of a processor by multiprogramming. The monolithic kernel functions as a virtual machine by
working on top of the Operating System and controlling all hardware components. This is an outdated operating
system that was used in banks to accomplish minor activities such as batch processing and time-sharing, which
enables many people at various terminals to access the Operating System.
It is simple to design and implement because all operations are managed by kernel only, and layering is not
needed.
As services such as memory management, file management, process scheduling, etc., are implemented in the
same address space, the execution of the monolithic kernel is relatively fast as compared to normal systems.
Using the same address space saves time for address allocation for new processes and makes it faster.
Simple design and implementation.
If any service in the monolithic kernel fails, the entire System fails because, in address space, the services are
connected to each other and affect each other.
Lack of modularity makes maintenance and extensions difficult.
It is not flexible; introducing a new service requires modifying the entire kernel.
Micro-Kernel structure designs the Operating System by removing all non-essential components of the kernel. These
non-essential components of kernels are implemented as systems and user programs. Hence these implemented
systems are called as Micro-Kernels.
Each Micro-Kernel is made independently and is isolated from other Micro-Kernels. So, this makes the system more
secure and reliable. If any Micro-Kernel fails, then the remaining operating System remains untouched and works
fine.
[Figure: Microkernel operating system structure]
A hybrid kernel structure combines elements of both monolithic and microkernel structures. The core of the hybrid
kernel is a small microkernel that provides essential services, such as memory management, process scheduling, and
inter-process communication (IPC). Other operating system services are implemented as modules that run in user
space.
This approach aims to strike a balance between the performance advantages of a monolithic kernel and the stability
and modularity benefits of a microkernel.
Hybrid kernels can achieve good performance because frequently used services can be implemented as
modules in user space, which avoids the overhead of system calls.
Hybrid kernels are more modular than monolithic kernels, which makes them easier to maintain and extend.
Hybrid kernels can be tailored to specific needs by adding or removing modules.
Hybrid kernels can be more secure than monolithic kernels because modules in user space are less likely to
contain security vulnerabilities.
Hybrid kernels can be more complex than monolithic kernels because they must manage the interaction
between the microkernel and the modules in user space.
There is some performance overhead associated with IPC between the microkernel and the modules in user
space.
An exokernel is a type of operating system kernel that provides only the most basic services, such as memory
management and communication. All other services are provided by user-level processes. This makes exokernels
highly modular and flexible, but also very complex to implement.
The exokernel structure is an alternative operating system design that is quite different from traditional monolithic or
microkernel architectures. In an exokernel, the kernel provides a minimal interface to applications and exposes
hardware resources directly, allowing applications to manage these resources.
Exokernels are the most modular type of operating system structure. This makes them easy to modify and
extend.
Exokernels are the most flexible type of operating system structure. They can be used to support a wide
variety of operating system designs.
Exokernels can be more secure than other types of operating system structures because they provide a smaller
attack surface.
Exokernels can achieve good performance because they avoid the overhead of system calls.
Exokernels are the most complex type of operating system structure to implement. This is because they must
provide a low-level interface to hardware that is still secure and reliable.
There can be some performance overhead associated with the communication between user-level processes
and the exokernel.
In this type of structure, OS is divided into layers or levels. The hardware is on the bottom layer (layer 0), while the
user interface is on the top layer (layer N). These layers are arranged in a hierarchical way in which the top-level
layers use the functionalities of their lower-level levels.
In this approach, the functionalities of each layer are isolated, and abstraction is also available. In a layered structure,
debugging is easier: since it is a hierarchical model, all lower-level layers are debugged first, and then the upper layer
is checked. So, all the lower layers are already checked, and only the current layer needs to be checked.
The following are some of the key characteristics of a layered operating system structure:
Each layer is responsible for a specific set of tasks. This makes it easier to understand, develop, and maintain
the operating system.
Layers are typically arranged in a hierarchy. This means that each layer can only use the services provided by
the layers below it.
Layers are independent of each other. This means that a change to one layer should not affect the other layers.
A layered structure is highly modular, meaning that each layer is responsible for a specific set of tasks. This
makes it easier to understand, develop, and maintain the operating system.
Each layer has its functionalities, so work tasks are isolated, and abstraction is present up to some level.
Debugging is easier as lower layers are debugged, and then upper layers are checked.
In a modular operating system structure, the operating system is divided into a set of independent modules. Each
module is responsible for a specific task, such as memory management, process scheduling, or device drivers.
Modules can be loaded and unloaded dynamically, as needed.
A modular structure is highly modular, meaning that each module is independent of the others. This makes it
easier to understand, develop, and maintain the operating system.
A modular structure is very flexible. New modules can be added easily, and existing modules can be modified
or removed without affecting the rest of the operating system.
There can be some performance overhead associated with the communication between modules. This is
because modules must communicate with each other through well-defined interfaces.
A modular structure can be more complex than other types of operating system structures. This is because the
modules must be carefully designed to ensure that they interact correctly.
1.10.8 Virtual Machines:
Virtual Machines (VMs) are a form of virtualization technology that allows multiple operating systems to run on a
single physical machine simultaneously. Each virtual machine acts as an independent, isolated system with its own OS
and applications.
VMs provide a high degree of isolation between guest operating systems. This makes it difficult for malware
or other problems in one guest to affect other guests or the host system.
VMs can be used to create secure environments for running untrusted code. For example, a VM can be used to
run a web browser without risking the entire system to malware infection.
VMs can be easily moved from one physical machine to another. This makes it easy to deploy and manage
applications across multiple servers.
VMs typically have some performance overhead compared to running software directly on the hardware. This
is because the VM must emulate the hardware for the guest operating system.
VMs can be complex to manage. This is because they require additional software to be installed and
configured.
VMs can consume a significant amount of system resources. This is because they must run a guest operating
system in addition to the host operating system.
o Requirements define properties which the finished system must have, and are a necessary first step in
designing any large complex system.
User requirements are features that users care about and understand, and are written in commonly
understood vernacular. They generally do not include any implementation details, and are written
similar to the product description one might find on a sales brochure or the outside of a
shrink-wrapped box.
System requirements are written for the developers, and include more details about implementation
specifics, performance requirements, compatibility constraints, standards compliance, etc. These
requirements serve as a "contract" between the customer and the developers, (and between developers
and subcontractors), and can get quite detailed.
o Requirements for operating systems can vary greatly depending on the planned scope and usage of the system
(single-user / multi-user, specialized system / general purpose, high/low security, performance needs,
operating environment, etc.).
1.11.3 Implementation:
o Traditionally OSes were written in assembly language. This provided direct control over hardware-related
issues, but inextricably tied a particular OS to a particular HW platform.
o Recent advances in compiler efficiencies mean that most modern OSes are written in C or, more recently,
C++. Critical sections of code are still written in assembly language.
o Operating systems may be developed using emulators of the target hardware, particularly if the real hardware
is unavailable or not a suitable platform for development, (e.g. smart phones, game consoles, or other similar
devices.)
QUESTIONS AND ANSWERS
PART-A (2 MARKS)
1. What is an operating system?
An operating system is a program that manages the computer hardware. It also provides a basis for application
programs and act as an intermediary between a user of a computer and the computer hardware. It controls and
coordinates the use of the hardware among the various application programs for the various users.
2. Why is the Operating System viewed as a resource allocator & control program?
A computer system has many resources - hardware & software that may be required to solve a problem, like CPU
time, memory space, file- storage space, I/O devices & so on. The OS acts as a manager for these resources so it is
viewed as a resource allocator. The OS is viewed as a control program because it manages the execution of user
programs to prevent errors & improper use of the computer.
A more common definition is that the OS is the one program running at all times on the computer, usually called the
kernel, with all else being application programs.
3. What are batch systems?
Batch systems are quite appropriate for executing large jobs that need little interaction. The user can submit jobs and
return later for the results. It is not necessary to wait while the job is processed. Operators batched together jobs with
similar needs and ran them through the computer as a group.
4. What is multiprogramming?
Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute. Several
jobs are placed in the main memory and the processor is switched from job to job as needed to keep several jobs
advancing while keeping the peripheral devices in use.
5. What is an interactive computer system?
Interactive computer system provides direct communication between the user and the system. The user gives
instructions to the operating system or to a program directly, using a keyboard or mouse and waits for immediate
results.
6. What is time-sharing or multitasking?
Time-sharing or multitasking is a logical extension of multiprogramming. It allows many users to share the computer
simultaneously. The CPU executes multiple jobs by switching among them, but the switches occur so frequently that
the users can interact with each program while it is running.
7. What are multiprocessor systems?
Multiprocessor systems also known as parallel systems or tightly coupled systems are systems that have more than
one processor in close communication, sharing the computer bus, the clock and sometimes memory & peripheral
devices.
8. What is asymmetric multiprocessing?
Asymmetric multiprocessing: Each processor is assigned a specific task. A master processor controls the system; the
other processors look to the master for instructions or predefined tasks. It defines a master-slave relationship. An
example is SunOS Version 4.
9. What is graceful degradation?
In multiprocessor systems, failure of one processor will not halt the system, but only slow it down. If there are ten
processors and one fails, the remaining nine processors pick up the work of the failed processor. This ability to
continue providing service at a level proportional to the surviving hardware is called graceful degradation.
REVIEW QUESTIONS