
Unit-5 COA



Peripheral Devices
Input or output devices connected to a computer are called peripheral devices. These devices are designed to read information into or out of the memory unit upon command from the CPU and are considered part of the computer system. They are also called peripherals.
For example, keyboards, display units and printers are common peripheral devices.
There are three types of peripherals:
1. Input peripherals: Allow user input from the outside world to the computer. Example: keyboard, mouse etc.
2. Output peripherals: Allow information output from the computer to the outside world. Example: printer, monitor etc.
3. Input-output peripherals: Allow both input (from the outside world to the computer) and output (from the computer to the outside world). Example: touch screen etc.
Classification of Peripheral devices:
It is generally classified into 3 basic categories which are given below:
1. Input Devices:
An input device converts incoming data and instructions into a pattern of electrical signals in binary code that is comprehensible to a digital computer.
Example:
Keyboard, mouse, scanner, microphone etc.
2. Output Devices:
An output device reverses the input process, translating the digitized signals into a form intelligible to the user. Output devices are also used for sending data from one computer system to another. For some time punched-card and paper-tape readers were extensively used for input, but these have now been supplanted by more efficient devices.
Example:
Monitors, headphones, printers etc.
3. Storage Devices:
Storage devices are used to store the data the system needs to perform any operation. A storage device is one of the most essential components and also provides better compatibility.
Example:
Hard disk, magnetic tape, flash memory etc.
Advantages of Peripheral Devices:
Peripheral devices provide extra features that make operating the system easier. These are given below:
 They make it easy to take input.
 They provide specific output.
 They supply storage for information or data.
 They improve the efficiency of the system.
Interfaces
Interface is a shared boundary between two separate components of the computer system which
can be used to attach two or more components to the system for communication purposes.
There are two types of interface:
1. CPU Interface
2. I/O Interface
Let's understand the I/O interface in detail.
Input-Output Interface
Peripherals connected to a computer need special communication links for interfacing with the CPU. In a computer system, there are special hardware components between the CPU and the peripherals to control or manage the input-output transfers. These components are called input-output interface units because they provide communication links between the processor bus and the peripherals. They provide a method for transferring information between the internal system and input-output devices.

Input/Output Ports
A connection point that acts as an interface between the computer and external devices like a mouse, printer, modem, etc. is called a port. Ports are of two types −
 Internal port − It connects the motherboard to internal devices like hard disk drive, CD
drive, internal modem, etc.
 External port − It connects the motherboard to external devices like modem, mouse,
printer, flash drives, etc.
Serial Port
Serial ports transmit data sequentially, one bit at a time, so they need only one wire to transmit 8 bits. However, this also makes them slower. Serial ports are usually 9-pin or 25-pin male connectors. They are also known as COM (communication) ports or RS-232C ports.

Parallel Port
Parallel ports can send or receive 8 bits or 1 byte at a time. Parallel ports come in the form of 25-pin female connectors and are used to connect printers, scanners, external hard disk drives, etc.

USB Port
USB stands for Universal Serial Bus. It is the industry standard for short distance digital data
connection. USB port is a standardized port to connect a variety of devices like printer, camera,
keyboard, speaker, etc.
PS/2 Port
PS/2 stands for Personal System/2. It is a female 6-pin port standard that connects to a male mini-DIN cable. PS/2 was introduced by IBM to connect the mouse and keyboard to personal computers. This port is now mostly obsolete, though some IBM-compatible systems may still have it.
Infrared Port
An infrared port enables wireless exchange of data within a radius of about 10 m. Two devices with infrared ports are placed facing each other so that beams of infrared light can be used to share data.
Bluetooth Port
Bluetooth is a telecommunication specification that facilitates short-range wireless connections between phones, computers and other digital devices. A Bluetooth port enables synchronization between Bluetooth-enabled devices. There are two types of Bluetooth ports −
 Incoming − It is used to receive connection from Bluetooth devices.
 Outgoing − It is used to request connection to other Bluetooth devices.
FireWire Port
FireWire is Apple Computer’s interface standard for enabling high-speed communication over a serial bus. It is also called IEEE 1394 and is used mostly for audio and video devices like digital camcorders.

Interrupts
In computer architecture, an interrupt is a signal to the processor, emitted by hardware or software, indicating an event that needs immediate attention. It alerts the processor to a high-priority condition requiring interruption of the currently running process. For I/O devices, one of the bus control lines is dedicated to this purpose and is called the interrupt request (INTR) line; the routine that runs in response is called the Interrupt Service Routine (ISR).
TYPES OF INTERRUPTS
Although interrupts have higher priority than other signals, there are many types of interrupts. The basic types are:
1. Hardware Interrupts: If the signal to the processor comes from an external device or hardware, it is called a hardware interrupt.
Example: pressing a key on the keyboard generates a signal that is given to the processor to act upon; such interrupts are called hardware interrupts.
Hardware interrupts can be classified into two types:
 Maskable Interrupt: A hardware interrupt that can be delayed when a higher-priority interrupt has occurred.
 Non-Maskable Interrupt: A hardware interrupt that cannot be delayed and must be processed by the processor immediately.
2. Software Interrupts: Software interrupts can also be divided into two types:
 Normal Interrupts: Interrupts caused by software instructions are called normal (software) interrupts.
 Exception: An unplanned interrupt that occurs while executing a program is called an exception. For example, if during execution a value is divided by zero, an exception is raised.
NEED FOR INTERRUPTS
 The operating system is a reactive program.
1. When you give some input it performs computations and produces output, but meanwhile you can interact with the system by interrupting the running process, or you can stop one process and start another.
 This responsiveness is due to interrupts.
 Modern operating systems are interrupt driven.
INTERRUPT SERVICE ROUTINE AND ITS WORKING
The routine that gets executed when an interrupt request is made is called the interrupt service routine (ISR).
 Step 1: When the interrupt occurs, the processor is executing the i-th instruction and the program counter is pointing to the (i + 1)-th instruction.
 Step 2: The program counter value is stored on the process stack.
 Step 3: The program counter is loaded with the address of the interrupt service routine.
 Step 4: Once the interrupt service routine completes, the address on the process stack is popped and placed back into the program counter.
 Step 5: Execution resumes from the (i + 1)-th line of the COMPUTE routine.
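The steps above can be sketched as a small simulation (the names pc, stack, and ISR_ADDR are illustrative, not real hardware registers):

```python
# Toy model of how the program counter is saved and restored
# around an interrupt. All names are invented for illustration.

ISR_ADDR = 0x1000          # assumed address of the interrupt service routine

def handle_interrupt(pc, stack):
    """Called while instruction i executes; pc already points to i+1."""
    stack.append(pc)       # Step 2: push the return address on the stack
    pc = ISR_ADDR          # Step 3: load the PC with the ISR address
    # ... ISR body runs here ...
    pc = stack.pop()       # Step 4: pop the return address back into the PC
    return pc              # Step 5: execution resumes at (i + 1)

stack = []
resume_at = handle_interrupt(pc=101, stack=stack)   # interrupted at i = 100
```

Note that the stack is empty again afterwards: every interrupt pushes exactly one return address and pops it on exit.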
INTERRUPT HARDWARE
Many computers can connect two or more input and output devices; a laptop, for example, may have 3 USB slots. All these input and output devices are connected via switches, as shown.

So there is a common interrupt line for all N input/output devices, and interrupt handling works in the following manner:
1. When no interrupt is issued by the input/output devices, all the switches are open and the single line INTR is pulled up to Vdd all the way to the processor, which therefore reads logic 1.
2. When an interrupt is issued by an input/output device, the switch associated with that device is closed, so current flows through the switch and the hardware line reaching the processor, i.e. the INTR line, drops to 0 volts. This indicates to the processor that an interrupt has occurred and that it needs to identify which input/output device triggered it.
3. The value of INTR is a logical OR of the requests from the individual devices.
4. The resistor R is called a pull-up resistor because it pulls the line voltage to the high-voltage state when all switches are open (the no-interrupt state).
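The wired-OR behaviour of the common line can be modelled in a few lines of Python (an illustrative model, not real electronics): the line reads high only when every switch is open.

```python
def intr_line(requests):
    """Model of an active-low common interrupt line with a pull-up
    resistor. requests[i] is True when device i closes its switch.
    Returns 1 (pulled up to Vdd) when no device requests service,
    0 (pulled down to ground) when any device does."""
    return 0 if any(requests) else 1

# The processor sees logic 1 while all switches are open...
assert intr_line([False, False, False]) == 1
# ...and logic 0 as soon as any one device raises a request.
assert intr_line([False, True, False]) == 0
```

The `any(...)` call is exactly the "logical OR of the requests" described in point 3, inverted because the line is active-low.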
ENABLING AND DISABLING INTERRUPTS
An interrupt can stop the currently executing program temporarily and branch to an interrupt service routine.
An interrupt can occur at any time during the execution of a program.
For these reasons it may sometimes be necessary to disable interrupts and enable them later in the program. Some processors therefore provide special machine instructions, such as interrupt-enable and interrupt-disable, that perform these tasks.
Let us take an example of the recursive interrupt problem:
Assume that an interrupt has occurred and the interrupt service routine is about to be executed. The interrupt line is still high and may go low only when the interrupt routine returns. During this time the processor may think that more interrupts have arrived and need to be scheduled first, so the interrupt service routine could lead to an infinite loop of interrupts.
There are two methods to handle this situation:
 Method 1: As soon as the interrupt service routine is entered, all interrupts are disabled. The interrupts are enabled only while exiting the interrupt service routine. Note that in this case the program written for the interrupt service routine has to enable and disable the interrupts; this is done explicitly by the programmer.
 Method 2: In a simple processor, the processor itself can disable interrupts when the interrupt service routine is entered and enable them again when the routine is about to exit. This differs from the first method: there we had to explicitly write code in the interrupt service routine to enable and disable interrupts, whereas here the processor has a built-in mechanism to perform this task.
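Method 1 above can be sketched like this (the interrupts_enabled flag and both function names are made up for illustration):

```python
# Method 1 sketch: the ISR itself disables interrupts on entry and
# re-enables them just before returning, so it cannot be re-entered.

interrupts_enabled = True
serviced = []                     # record of requests we handled

def service(request):
    serviced.append(request)

def isr(request):
    global interrupts_enabled
    interrupts_enabled = False    # first action: disable interrupts
    service(request)              # handle the device; new requests wait
    interrupts_enabled = True     # last action: re-enable interrupts

isr("keyboard")
```

Because both the disable and the enable live inside the routine, the burden is on the programmer, which is exactly the distinction the text draws against Method 2.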
HANDLING MULTIPLE DEVICES
There can be scenarios where multiple input/output devices connected to the CPU raise interrupts, and since these interrupts arrive at random times, several issues arise:
 How will the processor identify the device raising the interrupt?
 How will the processor handle two simultaneous interrupts?
 Should a device be allowed to raise an interrupt while another interrupt service routine is already being executed?
How to identify the device raising the interrupt?
The status register can be used to identify the device raising an interrupt. When a device raises an interrupt it sets a specific bit to one; that bit is called the IRQ (Interrupt ReQuest) bit.
IRQs are hardware lines over which devices send interrupt signals to the microprocessor. When you add a new device to a PC you sometimes need to set its IRQ number by setting a DIP switch. In the classic example, the KIRQ bit is set to 1 when an interrupt is raised by the keyboard, and the DIRQ bit is set to 1 when the display raises one.
DISADVANTAGE: A lot of time is spent checking the IRQ bits of all the devices, considering that most devices generate interrupts at random times.
DAISY CHAIN MECHANISM
When multiple input/output devices raise interrupts simultaneously, we need a straightforward way to select which interrupt to handle, based on which request has the highest priority.
The daisy-chain interrupt handling mechanism works as follows:

 Step 1: Multiple devices try to raise an interrupt by pulling down the interrupt request line (INTR).
 Step 2: The processor realises that devices are trying to raise an interrupt, so it drives the INTA line high, that is, sets it to 1.
 Step 3: The INTA line is connected to a device, device one in this case.
1. If device one had raised an interrupt, it goes ahead and passes its identifying code to the data lines.
2. If device one had not raised an interrupt, it passes the INTA signal on to device two, and so on.
Conclusion: Priority is thus given to the device nearest to the processor. This method ensures that multiple interrupt requests are handled properly, even when all the devices are connected to a common interrupt request line.
Interrupt Latency
When an interrupt occurs, service of the interrupt by executing the ISR may not start immediately, because a context switch (and possibly other routines) must run first. The time interval between the occurrence of the interrupt and the start of execution of the ISR is called interrupt latency.
 Tswitch = time taken for the context switch
 ΣTexec = the sum of the time intervals spent executing the routines that must run before this ISR starts
 Interrupt Latency = Tswitch + ΣTexec
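The formula is simple arithmetic; with invented numbers (taking ΣTexec as the time spent in routines that run before this ISR):

```python
# Worked example with illustrative numbers (microseconds).
t_switch = 10          # Tswitch: time taken for the context switch
t_exec = [15, 10]      # times of routines that must complete first

interrupt_latency = t_switch + sum(t_exec)   # Tswitch + sum(Texec)
```

Here the ISR starts 35 µs after the interrupt occurred.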
Handling Multiple Devices:
When more than one device raises an interrupt request signal, additional information is needed to decide which device should be considered first. The following methods are used to decide which device to select: Polling, Vectored Interrupts, and Interrupt Nesting. These are explained below.
1. Polling:
In polling, the first device encountered with its IRQ bit set is the device to be serviced first, and the appropriate ISR is called to service it. It is easy to implement, but a lot of time is wasted interrogating the IRQ bits of all devices.
2. Vectored Interrupts:
In vectored interrupts, a device requesting an interrupt identifies itself directly by sending a
special code to the processor over the bus. This enables the processor to identify the device that
generated the interrupt. The special code can be the starting address of the ISR or where the ISR
is located in memory, and is called the interrupt vector.
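A vectored dispatch can be sketched as a lookup table mapping each device's code to its ISR address (all codes and addresses below are invented for illustration):

```python
# Hypothetical interrupt vector table: device code -> ISR start address.
vector_table = {
    0x21: 0x4000,   # keyboard ISR (invented code and address)
    0x2E: 0x4800,   # disk ISR (invented code and address)
}

def dispatch(device_code):
    """The device sends its code over the bus; the processor uses it
    to index the vector table and jump straight to the right ISR,
    with no per-device polling."""
    return vector_table[device_code]

isr_address = dispatch(0x21)   # the keyboard raised the interrupt
```

The lookup replaces the linear scan of IRQ bits that makes polling slow.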
3. Interrupt Nesting:
In this method, I/O devices are organized in a priority structure: an interrupt request from a higher-priority device is recognized, whereas a request from a lower-priority device is not. To implement this, each device (and the processor itself) is assigned a priority level, and the processor accepts interrupts only from devices whose priority is higher than its own current level.
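The acceptance rule for nesting reduces to a single comparison (the numbering convention here is an assumption: a larger number means higher priority):

```python
def accept_interrupt(current_priority, request_priority):
    """Nesting rule: accept a new interrupt only if the requesting
    device has strictly higher priority than the code now running."""
    return request_priority > current_priority

# A disk (priority 5) may interrupt a printer ISR (priority 2)...
assert accept_interrupt(current_priority=2, request_priority=5)
# ...but a printer may not interrupt the disk ISR.
assert not accept_interrupt(current_priority=5, request_priority=2)
```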
Interrupts and Exceptions
Exceptions and interrupts are unexpected events that disrupt the normal flow of execution of the instruction currently executing on the processor. An exception is an unexpected event from within the processor; an interrupt is an unexpected event from outside the processor. Whenever an exception or interrupt occurs, the hardware starts executing code that performs an action in response. This action may involve killing a process, outputting an error message, communicating with an external device, or horribly crashing the entire computer system by initiating a “Blue Screen of Death” and halting the CPU. The instructions responsible for this action reside in the operating system kernel, and the code that performs the action is called the interrupt handler code. We can think of the handler code as an operating system subroutine. After the handler code is executed, it may be possible to continue execution after the instruction where the exception or interrupt occurred.
Exception and Interrupt Handling:
Whenever an exception or interrupt occurs, execution transitions from user mode to kernel mode, where the exception or interrupt is handled. In detail, the following steps must be taken.
On entering the kernel, the context (the values of all CPU registers) of the currently executing process must first be saved to memory. The kernel is then ready to handle the exception/interrupt:
1. Determine the cause of the exception/interrupt.
2. Handle the exception/interrupt.
When the exception/interrupt has been handled, the kernel performs the following steps:
1. Select a process to restore and resume.
2. Restore the context of the selected process.
3. Resume execution of the selected process.
At any point in time, the values of all the registers in the CPU define the context of the CPU. Another name for the CPU context is the CPU state.
The exception/interrupt handler uses the same CPU as the currently executing process. When entering the handler, the values of all CPU registers to be used by the handler must be saved to memory; the saved register values can later be restored before resuming execution of the process.
The handler may have been invoked for a number of reasons, so it first needs to determine the cause of the exception or interrupt. Information about the cause can be stored in dedicated registers or at predefined addresses in memory.
Next, the exception or interrupt is serviced. For instance, if it was a keyboard interrupt, the key code of the key press is obtained and stored somewhere, or some other appropriate action is taken. If it was an arithmetic overflow exception, an error message may be printed or the program may be terminated.
Once the exception/interrupt has been handled, the kernel may choose to resume the same process that was executing before the exception/interrupt, or to resume execution of any other process currently in memory.
The context of the CPU can then be restored for the chosen process by reading and restoring all register values from memory.
The selected process must be resumed at the same point at which it was stopped. The address of this instruction was saved by the machine when the interrupt occurred, so it is simply a matter of retrieving this address and making the CPU continue executing at it.
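The whole sequence — save context, determine the cause, handle it, restore and resume — can be sketched as follows (the register names, causes, and handler actions are all invented for illustration):

```python
# Toy kernel-entry sequence. All structures are illustrative.

saved_contexts = {}

def handle_event(pid, registers, cause):
    """Save the interrupted process's context, dispatch on the cause,
    then restore the context of the chosen process and resume it."""
    saved_contexts[pid] = dict(registers)        # save all CPU registers
    handlers = {                                 # step 1: determine cause
        "keyboard": lambda: "read key code",
        "overflow": lambda: "report arithmetic overflow",
    }
    action = handlers[cause]()                   # step 2: handle it
    restored = saved_contexts.pop(pid)           # restore the context
    return action, restored                      # resume the process

action, regs = handle_event(pid=7,
                            registers={"pc": 0x200, "sp": 0xFF0},
                            cause="keyboard")
```

Here the same process is resumed; a real kernel could equally pop the saved context of a different process, as the text notes.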
Priority Interrupts (S/W Polling and Daisy Chaining)
In I/O Interface (Interrupt and DMA Mode), we discussed the concept behind interrupt-initiated I/O.
To summarize: when I/O devices are ready for an I/O transfer, they generate an interrupt request signal to the computer. The CPU receives this signal, suspends the instructions it is currently executing and then moves on to service that transfer request. But what if multiple devices generate interrupts simultaneously? In that case, we need a way to decide which interrupt is to be serviced first. In other words, we have to set a priority among all the devices for systematic interrupt servicing.
The scheme of defining priority among devices, so as to know which one is to be serviced first in case of simultaneous requests, is called a priority interrupt system. This can be done with either software or hardware methods.
SOFTWARE METHOD – POLLING
In this method, all interrupts are serviced by branching to the same service program. This program then checks each device to see whether it is the one generating the interrupt. The order of checking is determined by the priority to be enforced: the device with the highest priority is checked first, and then the remaining devices are checked in descending order of priority. If a device is found to be generating the interrupt, another service program is called that works specifically for that particular device.
The structure will look something like this:
if (device[0].flag)
    device[0].service();
else if (device[1].flag)
    device[1].service();
else
    /* raise an error: no device requested service */ ;
The major disadvantage of this method is that it is quite slow. To overcome this, we can use a hardware solution, one of which involves connecting the devices in series. This is called the daisy-chaining method.
HARDWARE METHOD – DAISY CHAINING
The daisy-chaining method involves connecting all the devices that can request an interrupt in a serial manner. This configuration is governed by the priority of the devices: the device with the highest priority is placed first, followed by the device with the second-highest priority, and so on. The given figure depicts this arrangement.
WORKING:
There is an interrupt request line that is common to all the devices and goes into the CPU.
 When no interrupts are pending, the line is in the HIGH state. If any of the devices raises an interrupt, it places the interrupt request line in the LOW state.
 The CPU acknowledges the interrupt request from the line and then asserts the interrupt acknowledge line in response.
 This signal is received at the PI (priority in) input of device 1.
 If the device has not requested the interrupt, it passes this signal to the next device through its PO (priority out) output. (PI = 1 & PO = 1)
 However, if the device had requested the interrupt (PI = 1 & PO = 0):
 The device consumes the acknowledge signal and blocks its further use by placing 0 at its PO (priority out) output.
 The device then places its interrupt vector address (VAD) onto the CPU’s data bus.
 The device returns its interrupt request signal to the HIGH state to indicate that its interrupt has been taken care of.
NOTE: VAD is the address of the service routine that services that device.
 If a device gets 0 at its PI input, it generates 0 at its PO output to tell the other devices that the acknowledge signal has been blocked. (PI = 0 & PO = 0)
Hence, the device having PI = 1 and PO = 0 is the highest-priority device that is requesting an interrupt. By the daisy-chain arrangement we have therefore ensured that the highest-priority interrupt gets serviced first and have established a hierarchy: the farther a device is from the first device, the lower its priority.
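The PI/PO scheme above can be simulated in a few lines (the device list and vector addresses are invented): the acknowledge signal is handed from device to device until a requester consumes it.

```python
def daisy_chain(requesting, vad):
    """Pass the acknowledge (PI) down the chain. Each non-requesting
    device forwards PI = 1 to its PO; the first requester consumes the
    signal (PO = 0) and places its vector address (VAD) on the data bus."""
    pi = 1                          # INTA asserted by the processor
    for i, wants_service in enumerate(requesting):
        if pi == 1 and wants_service:
            return vad[i]           # nearest requester wins
        # otherwise PO = PI is forwarded unchanged to the next device
    return None                     # no device was requesting

# Devices 1 and 2 request at once; device 1, being nearer the CPU, wins.
winner = daisy_chain(requesting=[True, True, False],
                     vad=[0x40, 0x44, 0x48])
```

Note how the position in the list alone decides the outcome, matching the conclusion that distance from the first device determines priority.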
Purpose of an Interrupt in Computer Organization
An interrupt is the mechanism by which modules such as I/O or memory may interrupt the CPU’s normal processing. Clicking a mouse, dragging a cursor, or printing a document are all cases where an interrupt is generated.
Why do we require interrupts?
External devices are comparatively slower than the CPU, so without interrupts the CPU would waste a lot of time waiting for external devices to catch up with its speed. This decreases the efficiency of the CPU. Interrupts are therefore required to eliminate these limitations.
With interrupts:
1. Suppose the CPU instructs the printer to print a certain document.
2. While the printer does its task, the CPU is engaged in executing other tasks.
3. When the printer is done with its work, it tells the CPU that it has finished. (The word ‘tells’ here is an interrupt: a message that the printer has completed its work successfully.)
Advantages:
 It increases the efficiency of the CPU.
 It decreases the waiting time of the CPU.
 It stops the wastage of instruction cycles.
Disadvantages:
 The CPU has to do extra work to handle interrupts and resume its previous execution of programs (in short, overhead is required to handle each interrupt request).
Difference between Hardware Interrupt and Software Interrupt
1. Hardware Interrupt:
A hardware interrupt is caused by some hardware device, such as a request to start an I/O operation, a hardware failure, or something similar. Hardware interrupts were introduced as a way to avoid wasting the processor’s valuable time in polling loops, waiting for external events. For example, an interrupt is raised when an I/O operation completes, such as reading some data into the computer from a tape drive.
2. Software Interrupt:
A software interrupt is invoked by the use of the INT instruction. This event immediately stops execution of the program and passes execution over to the INT handler. The INT handler is usually part of the operating system and determines the action to be taken. Software interrupts occur when an application program terminates or requests certain services from the operating system, for example output to the screen or executing a file.
Difference between Hardware Interrupt and Software Interrupt:
1. A hardware interrupt is generated from an external device or hardware, whereas a software interrupt is generated by an internal system of the computer.
2. A hardware interrupt does not increment the program counter; a software interrupt increments the program counter.
3. A hardware interrupt can be invoked by some external device, such as a request to start an I/O or the occurrence of a hardware failure; a software interrupt is invoked with the help of the INT instruction.
4. Hardware interrupts have lower priority than software interrupts; software interrupts have the highest priority among all interrupts.
5. A hardware interrupt is triggered by external hardware and is considered one of the ways to communicate with outside peripherals; a software interrupt is triggered by software and is considered one of the ways to communicate with the kernel or to trigger system calls, especially during error or exception handling.
6. A hardware interrupt is an asynchronous event; a software interrupt is a synchronous event.
7. Hardware interrupts are classified into two types: maskable and non-maskable interrupts. Software interrupts are classified into two types: normal interrupts and exceptions.
8. Keystrokes and mouse movements are examples of hardware interrupts; all system calls are examples of software interrupts.
Difference between Interrupt and Exception
An interrupt is one of the classes of exception. There are 4 classes of exception: interrupt, trap, fault and abort. Though interrupt belongs to exception, there are still many differences between them.
In any computer, during normal execution of a program, there can be events that cause the CPU to temporarily halt. Events like this are called interrupts. Interrupts can be caused by either software or hardware faults. Hardware interrupts are called interrupts, while software interrupts are called exceptions. Once an interrupt is raised, control is transferred to a special subroutine called the Interrupt Service Routine (ISR), which can handle the condition raised by the interrupt.
What are Trap, Fault and Abort?
1. Trap –
A trap is typically a synchronous interrupt caused by an exceptional condition (e.g., a breakpoint, division by zero, or an invalid memory access).
2. Fault –
A fault is a potentially recoverable exception, such as a page fault: the handler corrects the underlying problem, after which the faulting instruction can be re-executed.
3. Abort –
An abort is a type of exception that occurs on a severe error, such as a failed instruction fetch; it generally signals an unrecoverable condition.
What is an Interrupt?
The term interrupt is usually reserved for hardware interrupts. They are program control interruptions caused by external hardware events; here, external means external to the CPU. Hardware interrupts come from many different sources, such as the timer chip, peripheral devices (keyboard, mouse, etc.), I/O ports (serial, parallel, etc.), disk drives, the CMOS clock, and expansion cards (sound card, video card, etc.). That means hardware interrupts almost never occur due to some event related to the executing program.
Example –
An event like a key press on the keyboard by the user, or an internal hardware timer timing out, can raise this kind of interrupt and can inform the CPU that a certain device needs some attention. In such a situation the CPU stops whatever it was doing (i.e. pauses the current program), provides the service required by the device and then gets back to the normal program.
When a hardware interrupt occurs and the CPU starts the ISR, other hardware interrupts are disabled (e.g. in 80x86 machines). If you need other hardware interrupts to occur while the ISR is running, you must enable them explicitly by setting the interrupt flag (with the sti instruction). In 80x86 machines, the interrupt flag only affects hardware interrupts.
What is an Exception?
An exception is a software interrupt, which can be identified as a special handler routine. An exception can be identified as an automatically occurring trap. Generally, there are no specific instructions associated with exceptions (traps are generated using a specific instruction); an exception occurs due to an “exceptional” condition that arises during program execution.
Example –
Division by zero, execution of an illegal opcode or a memory-related fault could cause exceptions. Whenever an exception is raised, the CPU temporarily suspends the program it was executing and starts the ISR. The ISR contains what to do about the exception: it may correct the problem or, if that is not possible, abort the program gracefully by printing a suitable error message. Although no specific instruction causes an exception, an exception is always caused by some instruction. For example, the division-by-zero error can only occur during execution of a division instruction.
Difference between Interrupt and Exception:
1. Interrupts are hardware interrupts; exceptions are software interrupts.
2. The occurrence of a hardware interrupt usually disables other hardware interrupts; this is not true for exceptions.
3. Interrupts are asynchronous external requests for service (e.g., the keyboard or printer needs service); exceptions are synchronous internal requests for service based upon abnormal events (think of illegal instructions, illegal addresses, overflow, etc.).
4. Being asynchronous, interrupts can occur at any place in the program; being synchronous, exceptions occur when there is an abnormal event in your program, such as divide by zero or an illegal memory location.
5. Interrupts are normal events and shouldn’t interfere with the normal running of the computer; exceptions are abnormal events and often result in the termination of a program.
Difference between Interrupt and Polling
Interrupt:
An interrupt is a hardware mechanism by which a device notifies the CPU that it requires attention. An interrupt can take place at any time. When the CPU gets an interrupt signal through the interrupt-request line, it stops the current process and responds to the interrupt by passing control to an interrupt handler that services the device.
Polling:
Polling is not a hardware mechanism but a protocol in which the CPU steadily checks whether any device needs attention. Whereas with interrupts the device tells the processing unit that it needs servicing, with polling the processing unit keeps asking each I/O device whether it needs CPU attention: the CPU continuously checks each and every device attached to it to detect whether any device requires service.
Each device has a command-ready bit that indicates its status, i.e., whether it has a command to be executed. If the command-ready bit is set to one, the device has a command to be executed; if the bit is zero, it has none.
Let’s see the difference between interrupt and polling:

1. With an interrupt, the device notifies the CPU that it requires attention; with polling, the CPU steadily checks whether the device needs attention.
2. An interrupt is not a protocol but a hardware mechanism; polling is not a hardware mechanism but a protocol.
3. With an interrupt, the device is serviced by the interrupt handler; with polling, the device is serviced by the CPU.
4. An interrupt can take place at any time; with polling, the CPU checks the devices at regular intervals.
5. With an interrupt, the interrupt-request line is used to indicate that a device requires servicing; with polling, the command-ready bit is used to indicate that a device requires servicing.
6. With interrupts, the processor is disturbed only when a device interrupts it; with polling, the processor wastes countless cycles by repeatedly checking the command-ready bit of every device.

Modes of I/O Data Transfer


Data transfer between the CPU and I/O devices can generally be handled in three modes, which are given below:
1. Programmed I/O
2. Interrupt Initiated I/O
3. Direct Memory Access
Programmed I/O
Programmed I/O transfers are the result of I/O instructions written in the computer program.
Each data item transfer is initiated by an instruction in the program.
Usually the program controls data transfer to and from the CPU and the peripheral. Transferring data
under programmed I/O requires constant monitoring of the peripherals by the CPU.

Interrupt Initiated I/O


In the programmed I/O method the CPU stays in a program loop until the I/O unit indicates
that it is ready for data transfer. This is a time-consuming process because it keeps the processor
busy needlessly.
This problem can be overcome by using interrupt-initiated I/O. Here, when the interface
determines that the peripheral is ready for data transfer, it generates an interrupt. After receiving
the interrupt signal, the CPU stops the task it is processing, services the I/O transfer,
and then returns to its previous processing task.
Direct Memory Access
Removing the CPU from the path and letting the peripheral device manage the memory buses
directly would improve the speed of transfer. This technique is known as DMA.
In this, the interface transfers data to and from the memory through the memory bus. A DMA
controller manages the data transfer between peripherals and the memory unit.
Many hardware systems use DMA such as disk drive controllers, graphic cards, network cards
and sound cards etc. It is also used for intra chip data transfer in multicore processors. In DMA,
CPU would initiate the transfer, do other operations while the transfer is in progress and receive
an interrupt from the DMA controller when the transfer has been completed.
Programmable Peripheral interface (PPI)
A programmable peripheral interface is a multiport device. The ports may be programmed in a
variety of ways as required by the programmer. The device is very useful for interfacing
peripheral devices. The term PIA, Peripheral Interface Adapter, is also used by some
manufacturers.

I/O Interface (Interrupt and DMA Mode)


The method that is used to transfer information between internal storage and external I/O
devices is known as I/O interface. The CPU is interfaced using special communication links by
the peripherals connected to any computer system. These communication links are used to
resolve the differences between the CPU and peripherals. Special hardware components exist
between the CPU and peripherals to supervise and synchronize all the input and output transfers;
these are called interface units.
Mode of Transfer:
The binary information that is received from an external device is usually stored in the memory
unit. The information that is transferred from the CPU to the external device is originated from
the memory unit. CPU merely processes the information but the source and target is always the
memory unit. Data transfer between CPU and the I/O devices may be done in different modes.
Data transfer to and from the peripherals may be done in any of the three possible ways
1. Programmed I/O.
2. Interrupt- initiated I/O.
3. Direct memory access( DMA).
Now let’s discuss each mode one by one.
1. Programmed I/O: It is the result of the I/O instructions that are written in the
computer program. Each data item transfer is initiated by an instruction in the program. Usually
the transfer is between a CPU register and memory. In this case it requires constant monitoring
of the peripheral devices by the CPU.
Example of Programmed I/O: In this case, the I/O device does not have direct access to the
memory unit. A transfer from I/O device to memory requires the execution of several
instructions by the CPU, including an input instruction to transfer the data from device to the
CPU and store instruction to transfer the data from CPU to memory. In programmed I/O, the
CPU stays in the program loop until the I/O unit indicates that it is ready for data transfer. This
is a time consuming process since it needlessly keeps the CPU busy. This situation can be
avoided by using an interrupt facility. This is discussed below.
2. Interrupt-initiated I/O: In the above case we saw that the CPU is kept busy unnecessarily.
This situation can very well be avoided by using an interrupt-driven method for data transfer.
The interrupt facility and special commands are used to inform the interface to issue an interrupt
request signal whenever data is available from any device. In the meantime the CPU can
proceed with any other program execution. The interface meanwhile keeps monitoring the
device. Whenever it determines that the device is ready for data transfer, it initiates an
interrupt request signal to the computer. Upon detection of an external interrupt signal, the
CPU momentarily stops the task it was performing, branches to the service
program to process the I/O transfer, and then returns to the task it was originally performing.

Note: Both the methods, programmed I/O and interrupt-driven I/O, require the active
intervention of the processor to transfer data between memory and the I/O module, and any data
transfer must traverse a path through the processor. Thus both these forms of I/O suffer from two
inherent drawbacks:
1. The I/O transfer rate is limited by the speed with which the processor can test and
service a device.
2. The processor is tied up in managing an I/O transfer; a number of instructions
must be executed for each I/O transfer.
3. Direct Memory Access: The data transfer between a fast storage medium such as a
magnetic disk and the memory unit is limited by the speed of the CPU. Thus we can allow the
peripherals to communicate directly with the memory using the memory buses, removing the
intervention of the CPU. This type of data transfer technique is known as DMA or direct
memory access. During DMA the CPU is idle and has no control over the memory buses. The
DMA controller takes over the buses to manage the transfer directly between the I/O devices
and the memory unit.

Bus Request : It is used by the DMA controller to request the CPU to relinquish the control of
the buses.
Bus Grant : It is activated by the CPU to Inform the external DMA controller that the buses are
in high impedance state and the requesting DMA can take control of the buses. Once the DMA
has taken the control of the buses it transfers the data. This transfer can take place in many ways.
Types of DMA transfer using DMA controller:
Burst Transfer :
DMA returns the bus after the complete data transfer. A register is used as a byte count,
being decremented for each byte transferred, and upon the byte count reaching zero, the DMAC
will release the bus. When the DMAC operates in burst mode, the CPU is halted for the duration
of the data transfer.
Steps involved are:
1. Bus grant request time.
2. Transfer the entire block of data at the transfer rate of the device, because the device is
usually slower than the speed at which the data can be transferred to the CPU.
3. Release the control of the bus back to the CPU.
So, total time taken to transfer the N bytes
= Bus grant request time + (N) × (memory transfer rate) + Bus release control time.
Where,
X µsec = data transfer time or preparation time (words/block)
Y µsec = memory cycle time or cycle time or transfer time (words/block)
% CPU idle (Blocked) = (Y/(X+Y)) × 100
% CPU Busy = (X/(X+Y)) × 100
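Plugging illustrative numbers into the burst-mode timing and CPU-utilization formulas above makes them concrete. All timing values here are made up, and the idle fraction is computed with the intended grouping Y/(X+Y):

```python
# Burst-mode DMA timing, following the formulas in the text:
# total = bus grant request time + N * memory transfer time + bus release time
# All values below are illustrative figures in microseconds.

def burst_total_time(grant_us, n_bytes, mem_transfer_us, release_us):
    return grant_us + n_bytes * mem_transfer_us + release_us

def cpu_idle_percent(x_us, y_us):
    """% CPU idle (blocked) = Y / (X + Y) * 100, as given in the text."""
    return y_us / (x_us + y_us) * 100

total = burst_total_time(grant_us=2, n_bytes=100, mem_transfer_us=1, release_us=2)
print(total)                       # 2 + 100*1 + 2 = 104 microseconds
print(cpu_idle_percent(4, 1))      # X=4, Y=1 -> CPU blocked 20% of the time
```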
Cycle Stealing :
An alternative method in which the DMA controller transfers one word at a time, after which it must
return control of the buses to the CPU. The CPU delays its operation for only one memory
cycle to allow the direct-memory I/O transfer to “steal” one memory cycle.
Steps involved are:
1. Buffer the byte into the buffer.
2. Inform the CPU that the device has 1 byte to transfer (i.e. bus grant request).
3. Transfer the byte (at system bus speed).
4. Release the control of the bus back to the CPU.
Before moving on to transfer the next byte of data, the device performs step 1 again, so that the bus
isn’t tied up and the transfer doesn’t depend upon the transfer rate of the device.
So, for the transfer of 1 byte of data, the time taken in cycle stealing mode is
T = time required for bus grant + 1 bus cycle to transfer data + time required to release the bus;
for N bytes it will be N × T.
In cycle stealing mode we always follow the pipelining concept: while one byte is being
transferred, the device is preparing the next byte in parallel. If “the fraction of CPU time to the data
transfer time” is asked for, then cycle stealing mode is being used.
Where,
X µsec = data transfer time or preparation time (words/block)
Y µsec = memory cycle time or cycle time or transfer time (words/block)
% CPU idle (Blocked) = (Y/X) × 100
% CPU busy = (X/Y) × 100
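A similar sketch for cycle stealing, using the per-byte time T and the (Y/X) idle ratio given above (the figures are illustrative, not from real hardware):

```python
# Cycle-stealing DMA timing per the text: for one byte,
# T = bus grant time + 1 bus cycle + bus release time, and for N bytes, N * T.

def cycle_steal_time(n_bytes, grant_us, bus_cycle_us, release_us):
    t = grant_us + bus_cycle_us + release_us   # time T per byte
    return n_bytes * t                         # N x T for the whole block

def cycle_steal_idle_percent(x_us, y_us):
    """Cycle-stealing % CPU idle = (Y/X) * 100, the text's formula."""
    return y_us / x_us * 100

print(cycle_steal_time(100, 1, 1, 1))    # 100 * 3 = 300 microseconds
print(cycle_steal_idle_percent(10, 1))   # CPU blocked 10% of the time
```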
Interleaved mode: In this technique, the DMA controller takes over the system bus when the
microprocessor is not using it; the bus alternates half cycles, i.e. half cycle for DMA + half cycle for the processor.
Programmed I/O:
 CPU requests I/O operation
 I/O module performs operations.
 I/O module sets status bits
 CPU checks status bits periodically
 I/O module does not inform CPU directly
 I/O module does not interrupt CPU
 CPU may wait or come back later
 Under programmed I/O, data transfer is very much like memory access (from the CPU viewpoint)
 Each device is given a unique identifier
 CPU commands contain identifier (address)
Disadvantages of Programmed I/O:
 The problem with programmed I/O is that the processor has to wait a long time for the
I/O module of concern to be ready for either reception or transmission of data.
 The processor, while waiting, must repeatedly interrogate the status of the I/O module.
 Performance of the entire system is severely degraded.
Interrupt Driven I/O Basic Operation:
 CPU issues read command
 I/O module gets data from peripheral whilst CPU does other work
 I/O module interrupts CPU
 CPU requests data
 I/O module transfers data
Working of CPU in terms of interrupts:
 Issue read command.
 Do other work.
 Check for interrupt at end of each instruction cycle.
 If interrupted:-
–Save context (registers)
–Process interrupt 
 Fetch data & store.
 See Operating Systems notes.
Multiple Interrupts:
 Each interrupt line has a priority
 Higher priority lines can interrupt lower priority lines
 If bus mastering only current master can interrupt

Direct Memory Access (DMA)


 Interrupt-driven and programmed I/O require active CPU intervention
 Transfer rate is limited (the processor must test and service the device)
 The CPU is tied up managing the I/O transfer
 DMA is the answer: an additional module (hardware) on the bus
 The DMA controller takes over from the CPU for I/O
 The DMA module must use the bus only when the processor does not need it, or
it must force the processor to suspend operation temporarily. This technique is called
cycle stealing

DMA Operation:
 CPU tells DMA controller:-
–Read/Write
–Device address
–Starting address of memory block for data
–Amount of data to be transferred
 CPU carries on with other work
 DMA controller deals with transfer
 DMA controller sends interrupt when finished
Cycle Stealing :
In this mode the DMA controller transfers one word at a time, after which it must return control of the
buses to the CPU. The CPU merely delays its operation for one memory cycle to allow the
direct-memory I/O transfer to “steal” one memory cycle.
Structure of Input-Output Interface
The block diagram of an Input-Output Interface unit contains the following blocks :
1. Data Bus Buffer
2. Read/Write Control Logic
3. Port A, Port B register
4. Control and Status register

These are explained below.


Data Bus Buffer :
The bus buffer uses a bidirectional data bus to communicate with the CPU. All control words, data and
status information between the interface unit and the CPU are transferred through the data bus.
Port A and Port B :
Port A and Port B are used to transfer data between the Input-Output device and the interface unit.
Each port consists of a bidirectional data input buffer and a bidirectional data output buffer.
Through Port A and Port B, the interface unit connects directly with an input device and an output
device, or with a device that requires both input and output, e.g. modem, external hard drive, magnetic disk.
Control and Status Register :
The CPU gives control information to the control register; on the basis of this control information, the
interface unit controls the input and output operations between the CPU and the input-output device.
The bits present in the status register are used for checking status conditions. The status register
indicates the status of the data registers, Port A and Port B, and also records errors that may occur
during the transfer of data.
Read/Write Control Logic :
This block generates the necessary control signals for overall device operation. All commands
from the CPU are accepted through this block. It also allows the status of the interface unit to be
transferred onto the data bus. This block accepts CS, read and write control signals from the
system bus, and S0, S1 from the system address bus. The read and write signals define the
direction of data transfer over the data bus.
Read Operation: CPU <---- I/O device
Write Operation: CPU ----> I/O device
The read signal directs data transfer from the interface unit to the CPU, and the write signal directs data
transfer from the CPU to the interface unit through the data bus.
The address bus is used to select the interface unit. The two least significant lines of the address bus (A0,
A1) are connected to the select lines S0, S1. These two select input lines are used to select any one of the
four registers in the interface unit. The selection is according to the following
criteria :
Read State :

CHIP SELECT (CS) | READ | WRITE | S0 | S1 | SELECTION OF INTERFACE UNIT
0 | 0 | 1 | 0 | 0 | Port A
0 | 0 | 1 | 0 | 1 | Port B
0 | 0 | 1 | 1 | 0 | Control Register
0 | 0 | 1 | 1 | 1 | Status Register

Write State :

CHIP SELECT (CS) | READ | WRITE | S0 | S1 | SELECTION OF INTERFACE UNIT
0 | 1 | 0 | 0 | 0 | Port A
0 | 1 | 0 | 0 | 1 | Port B
0 | 1 | 0 | 1 | 0 | Control Register
0 | 1 | 0 | 1 | 1 | Status Register
Example :
 If S0, S1 = 0 1, then the Port B data register is selected for data transfer between the CPU and the I/O
device.
 If S0, S1 = 1 0, then the Control register is selected, and it stores the control information sent by
the CPU.
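The select-line decoding shown in the tables above can be sketched as a small lookup. The register names follow the text; the function itself and its CS convention (active low, as in the tables) are an illustrative sketch:

```python
# Register selection in the interface unit: the two select lines S0, S1
# (driven by address lines A0, A1) pick one of the four internal registers.

REGISTERS = {
    (0, 0): "Port A",
    (0, 1): "Port B",
    (1, 0): "Control Register",
    (1, 1): "Status Register",
}

def select_register(cs, s0, s1):
    """CS is active low here: 0 selects the chip, as in the tables above."""
    if cs != 0:
        return None                 # chip not selected, no register addressed
    return REGISTERS[(s0, s1)]

print(select_register(0, 0, 1))     # Port B
print(select_register(0, 1, 0))     # Control Register
```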
Difference between Programmed and Interrupt Initiated I/O
Data transfer between the CPU and I/O devices can be done in a variety of modes. These are the three
possible modes:
1. Programmed I/O
2. Interrupt initiated I/O
3. Direct Memory Access (DMA)
In this article we shall discuss the first two modes only.
1. Programmed I/O :
In this mode the data transfer is initiated by the instructions written in a computer program. An
input instruction is required to transfer the data from the device to the CPU, and a store instruction
is required to transfer the data from the CPU to memory. Data transfer through this mode
requires constant monitoring of the peripheral device by the CPU, which must also check for the
possibility of a new transfer once a transfer has been initiated. Thus the CPU stays in a loop until
the I/O device indicates that it is ready for data transfer. Programmed I/O is therefore a time-consuming
process that keeps the processor busy needlessly and leads to wastage of CPU
cycles.
This can be overcome by the use of an interrupt facility. This forms the basis for the Interrupt
Initiated I/O.
2. Interrupt Initiated I/O :
This mode uses the interrupt facility and special commands to inform the interface to issue the
interrupt command when data becomes available and the interface is ready for the data transfer. In
the meantime the CPU keeps on executing other tasks and need not check the flag. When the
flag is set, the interface is informed and an interrupt is initiated. This interrupt causes the CPU to
deviate from what it is doing to respond to the I/O transfer. The CPU responds to the signal by
storing the return address from the program counter (PC) into the memory stack and then
branching to the service routine that processes the I/O request. After the transfer is complete, the CPU
returns to the previous task it was executing. The branch address of the service routine can be chosen in
two ways, known as vectored and non-vectored interrupts. In a vectored interrupt, the source that
interrupts supplies the branch information to the CPU, while in the case of a non-vectored interrupt the
branch address is assigned to a fixed location in memory.

Difference between Programmed and Interrupt Initiated I/O :


PROGRAMMED I/O | INTERRUPT INITIATED I/O
Data transfer is initiated by means of instructions stored in the computer program. Whenever there is a request for I/O transfer, the instructions are executed from the program. | The I/O transfer is initiated by the interrupt command issued to the CPU.
The CPU stays in the loop to know if the device is ready for transfer and has to continuously monitor the peripheral device. | There is no need for the CPU to stay in the loop, as the interrupt command interrupts the CPU when the device is ready for data transfer.
This leads to the wastage of CPU cycles, as the CPU remains busy needlessly and thus the efficiency of the system gets reduced. | The CPU cycles are not wasted, and hence this method is more efficient.
The CPU cannot do any other work until the transfer is complete, as it has to stay in the loop to continuously monitor the peripheral device. | The CPU can do any other work until it is interrupted by the command indicating the readiness of the device for data transfer.
Its module is treated as a slow module. | Its module is faster than the programmed I/O module.
It is quite easy to program and understand. | It can be tricky and complicated to understand if one uses a low-level language.
The performance of the system is severely degraded. | The performance of the system is enhanced to some extent.
Difference between Maskable and Non Maskable Interrupt
An interrupt is an event caused by a component other than the CPU. It notifies the CPU of an
external event that requires immediate attention. Interrupts occur asynchronously. Maskable and
non-maskable interrupts are two types of interrupts.
1. Maskable Interrupt :
An interrupt that can be disabled or ignored by the instructions of the CPU is called a maskable
interrupt. These interrupts are either edge-triggered or level-triggered.
Eg:
RST6.5, RST7.5, RST5.5 of 8085
2. Non-Maskable Interrupt :
An interrupt that cannot be disabled or ignored by the instructions of the CPU is called a non-
maskable interrupt. A non-maskable interrupt is often used when response time is critical or
when an interrupt should never be disabled during normal system operation. Such uses include
reporting non-recoverable hardware errors, system debugging and profiling, and handling of
special cases like system resets.
Eg:
Trap of 8085
Difference between maskable and nonmaskable interrupt :

SR.NO | MASKABLE INTERRUPT | NON-MASKABLE INTERRUPT
1 | A maskable interrupt is a hardware interrupt that can be disabled or ignored by the instructions of the CPU. | A non-maskable interrupt is a hardware interrupt that cannot be disabled or ignored by the instructions of the CPU.
2 | When a maskable interrupt occurs, it can be handled after executing the current instruction. | When a non-maskable interrupt occurs, the current instructions and status are stored in the stack for the CPU to handle the interrupt.
3 | Maskable interrupts help to handle lower-priority tasks. | Non-maskable interrupts help to handle higher-priority tasks such as a watchdog timer.
4 | Maskable interrupts are used to interface with peripheral devices. | Non-maskable interrupts are used for emergency purposes, e.g. power failure, smoke detector etc.
5 | In maskable interrupts, response time is high. | In non-maskable interrupts, response time is low.
6 | It may be vectored or non-vectored. | All are vectored interrupts.
7 | Operation can be masked or made pending. | Operation cannot be masked or made pending.
8 | RST6.5, RST7.5 and RST5.5 of the 8085 are some common examples of maskable interrupts. | TRAP of the 8085 microprocessor is an example of a non-maskable interrupt.
Types of Interrupts and How to Handle Interrupts
What is an Interrupt?
An interrupt is a signal, from hardware or software, which has the highest priority and which the
processor should process immediately.
Types of Interrupts:
Although interrupts have higher priority than other signals, there are many types of interrupts,
but the basic types are:
1. Hardware Interrupts: If the signal to the processor comes from an external device or
hardware, it is called a hardware interrupt. Example: when we press a key on the keyboard to do
some action, this key press generates a signal which is given to the
processor to perform the action; such interrupts are called hardware interrupts. Hardware interrupts can be
classified into two types:
 Maskable Interrupt: A hardware interrupt that can be delayed when a
much higher-priority interrupt has occurred to the processor.
 Non-Maskable Interrupt: A hardware interrupt that cannot be delayed and should
be processed by the processor immediately.
2. Software Interrupts: Software interrupts can also be divided into two types:
 Normal Interrupts: Interrupts caused by software instructions
are called normal software interrupts.
 Exception: An unplanned interrupt that occurs while executing a program is called an exception.
For example, while executing a program, if we attempt to divide a value by zero, an exception
is raised.
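The masking behaviour that separates maskable from non-maskable interrupts can be sketched with a toy interrupt controller. This is illustrative only; real controllers implement the mask as a hardware register, and the method names here are made up:

```python
# Toy interrupt controller: maskable IRQ lines can be disabled by the CPU,
# while a non-maskable interrupt (NMI) is always delivered.

class InterruptController:
    def __init__(self):
        self.mask = set()            # set of masked (disabled) IRQ lines

    def disable(self, irq):
        self.mask.add(irq)           # maskable lines can be disabled...

    def deliver(self, irq, non_maskable=False):
        if non_maskable:
            return "handled"         # ...but an NMI ignores the mask entirely
        return "ignored" if irq in self.mask else "handled"

ic = InterruptController()
ic.disable(5)
print(ic.deliver(5))                       # ignored (line 5 is masked)
print(ic.deliver(5, non_maskable=True))    # handled despite the mask
```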
Classification of Interrupts According to Periodicity of Occurrence:
1. Periodic Interrupt: If interrupts occur at fixed intervals in the timeline, they are
called periodic interrupts.
2. Aperiodic Interrupt: If the occurrence of an interrupt cannot be predicted, it is
called an aperiodic interrupt.
Classification of Interrupts According to the Temporal Relationship with System Clock:
1. Synchronous Interrupt: An interrupt whose source is in phase with the system clock is called a
synchronous interrupt. In other words, these are interrupts which are dependent on the system clock.
Example: a timer service that uses the system clock.
2. Asynchronous Interrupts: If the interrupts are independent of, or not in phase with, the
system clock, they are called asynchronous interrupts.
Interrupt Handling:
We know that the instruction cycle consists of fetch, decode, execute and read/write functions. After
every instruction cycle the processor checks for interrupts to be processed. If no
interrupt is present in the system, it proceeds to the next instruction cycle, which is given by the
instruction register.
If an interrupt is present, it triggers the interrupt handler. The handler stops the
instruction currently being processed, saves its configuration in a register, and loads the
program counter of the interrupt from a location given by the interrupt vector table.
After the processor has finished processing the interrupt, the handler loads the saved instruction and its
configuration from the register, and the process resumes where it left off. This
saving of the old instruction-processing configuration and loading of the new interrupt configuration
is also called context switching.
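The save/branch/restore sequence described above can be sketched as follows. The CPU dictionary, the vector table contents and the handler names are simplified illustrations, not a real machine model:

```python
# Sketch of interrupt handling: save the interrupted context, load the ISR
# address from the interrupt vector table, run the handler, restore context.

interrupt_vector_table = {0: "clock_isr", 1: "keyboard_isr"}

def handle_interrupt(cpu, irq, handlers):
    saved = dict(cpu)                        # save context (registers, PC)
    cpu["pc"] = interrupt_vector_table[irq]  # load ISR entry from the vector table
    trace = [f"running {cpu['pc']}"]
    handlers[cpu["pc"]]()                    # process the interrupt
    cpu.update(saved)                        # restore context: resume where left off
    trace.append(f"resumed at {cpu['pc']}")
    return trace

cpu = {"pc": 100, "r0": 7}
print(handle_interrupt(cpu, 1, {"keyboard_isr": lambda: None}))
```

The two dictionary copies around the handler call are the context switch the text describes: the interrupted program's state survives the detour through the ISR untouched.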
The interrupt handler is also called an interrupt service routine (ISR). There are different
interrupt handlers to handle different interrupts. For example, the system clock
has its own interrupt handler, the keyboard has its own interrupt handler, and so on: every device
has its own interrupt handler.
The main features of the ISR are:
 Interrupts can occur at any time; they are asynchronous, and ISRs can be called for
asynchronous interrupts.
 The interrupt service mechanism can call ISRs from multiple sources.
 ISRs can handle both maskable and non-maskable interrupts. An instruction in a
program can disable or enable an interrupt handler call.
 At the beginning of its execution, an ISR disables the interrupt services of other devices. After
completion of the ISR execution, it re-initializes the interrupt services.
 Nested interrupts are allowed in an ISR, for diversion to another ISR.
Type of Interrupt Handlers:
1. First-Level Interrupt Handler (FLIH): a hard or fast interrupt handler.
These interrupt handlers have more jitter during process execution, and they mainly handle maskable
interrupts.
2. Second-Level Interrupt Handler (SLIH): a soft or slow interrupt
handler. These interrupt handlers have less jitter.
Interrupt Latency:
When an interrupt occurs, the servicing of the interrupt by executing the ISR may not start
immediately, owing to context switching. The time interval between the occurrence of the interrupt and
the start of execution of the ISR is called interrupt latency.
 Tswitch = Time taken for context switch
 ΣTexec = The sum of time interval for executing the ISR
 Interrupt Latency = Tswitch + ΣTexec
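A quick numeric check of the latency formula above (the timing figures are illustrative):

```python
# Interrupt latency per the text: latency = Tswitch + sum of the ISR
# execution intervals. Values are made-up figures in microseconds.

def interrupt_latency(t_switch_us, isr_exec_times_us):
    return t_switch_us + sum(isr_exec_times_us)

print(interrupt_latency(5, [10, 15]))   # 5 + (10 + 15) = 30 microseconds
```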
Direct Memory Access (DMA)
Direct Memory Access (DMA) transfers the block of data between the memory and peripheral
devices of the system, without the participation of the processor. The unit that controls the
activity of accessing memory directly is called a DMA controller.
The processor relinquishes the system bus for a few clock cycles. So, the DMA controller can
accomplish the task of data transfer via the system bus. In this section, we will study in brief
about DMA, DMA controller, registers, advantages and disadvantages. So let us start.
Direct Memory Access in Computer Architecture
What is DMA and Why it is used?
Direct memory access (DMA) is a mode of data transfer between the memory and I/O devices.
This happens without the involvement of the processor. We have two other methods of data
transfer, programmed I/O and interrupt-driven I/O. Let’s revise each and get acquainted
with their drawbacks.
In programmed I/O, the processor keeps on scanning whether any device is ready for data
transfer. If an I/O device is ready, the processor fully dedicates itself in transferring the data
between I/O and memory. It transfers data at a high rate, but it can’t get involved in any other
activity during data transfer. This is the major drawback of programmed I/O.
In Interrupt driven I/O, whenever the device is ready for data transfer, then it raises
an interrupt to processor. Processor completes executing its ongoing instruction and saves its
current state. It then switches to data transfer which causes a delay. Here, the processor doesn’t
keep scanning for peripherals ready for data transfer. But, it is fully involved in the data transfer
process. So, it is also not an effective way of data transfer.
The above two modes of data transfer are not useful for transferring a large block of data. But,
the DMA controller completes this task at a faster rate and is also effective for transfer of large
data block.
The DMA controller transfers the data in three modes:
1. Burst Mode: Here, once the DMA controller gains the charge of the system bus, then it
releases the system bus only after completion of data transfer. Till then the CPU has to wait for
the system buses.
2. Cycle Stealing Mode: In this mode, the DMA controller forces the CPU to stop its
operation and relinquish the control over the bus for a short term to DMA controller. After
the transfer of every byte, the DMA controller releases the bus and then again requests for the
system bus. In this way, the DMA controller steals the clock cycle for transferring every byte.
3. Transparent Mode: Here, the DMA controller takes the charge of system bus only if
the processor does not require the system bus.
Direct Memory Access Controller & it’s Working
DMA controller is a hardware unit that allows I/O devices to access memory directly without
the participation of the processor. Here, we will discuss the working of the DMA controller.
Below we have the diagram of DMA controller that explains its working:

1. Whenever an I/O device wants to transfer the data to or from memory, it sends the DMA
request (DRQ) to the DMA controller. DMA controller accepts this DRQ and asks the CPU to
hold for a few clock cycles by sending it the Hold request (HLD).
2. CPU receives the Hold request (HLD) from DMA controller and relinquishes the bus
and sends the Hold acknowledgement (HLDA) to DMA controller.
3. After receiving the Hold acknowledgement (HLDA), DMA controller acknowledges I/O
device (DACK) that the data transfer can be performed and DMA controller takes the charge of
the system bus and transfers the data to or from memory.
4. When the data transfer is accomplished, the DMA controller raises an interrupt to let the
processor know that the task of data transfer is finished; the processor can then take control over the bus
again and start processing where it left off.
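The four-step handshake above can be traced as a toy event log. The signal names (DRQ, HLD, HLDA, DACK) follow the text; the function itself is an illustration, not a hardware model:

```python
# The DRQ / HLD / HLDA / DACK handshake between the I/O device, the DMA
# controller (DMAC) and the CPU, as described in the numbered steps.

def dma_transfer(n_bytes):
    trace = [
        "device -> DMAC : DRQ (DMA request)",
        "DMAC -> CPU   : HLD (hold request)",
        "CPU  -> DMAC  : HLDA (hold acknowledge, bus relinquished)",
        "DMAC -> device: DACK (data transfer can be performed)",
        f"DMAC transfers {n_bytes} bytes over the system bus",
        "DMAC -> CPU   : interrupt (transfer finished, bus returned)",
    ]
    return trace

for step in dma_transfer(512):
    print(step)
```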
Now the DMA controller can be a separate unit that is shared by various I/O devices, or it can
also be a part of the I/O device interface.

Direct Memory Access Diagram


After exploring the working of DMA controller, let us discuss the block diagram of the DMA
controller. Below we have a block diagram of DMA controller.

Whenever a processor is requested to read or write a block of data, i.e. transfer a block of data, it
instructs the DMA controller by sending the following information.
1. The first piece of information is whether the data has to be read from memory or has to
be written to memory. The processor passes this information via the read or write control lines
between itself and the DMA controller’s control logic unit.
2. The processor also provides the starting address of/ for the data block in the memory,
from where the data block in memory has to be read or where the data block has to be written in
memory. DMA controller stores this in its address register. It is also called the starting
address register.
3. The processor also sends the word count, i.e. how many words are to be read or written.
It stores this information in the data count or the word count register.
4. Most important is the address of the I/O device that wants to read or write data. This
information is stored in the data register.
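The four items the processor hands to the DMA controller can be sketched as a register setup on a toy controller. The DMAController class and its field names mirror the registers named above but are otherwise an illustrative sketch:

```python
# Programming a toy DMA controller with the four items from the text:
# transfer direction, starting address, word count, and device address.

class DMAController:
    def __init__(self):
        self.read_write = None        # read from memory, or write to memory
        self.address_register = None  # starting address of the memory block
        self.word_count = None        # number of words to be read or written
        self.data_register = None     # address of the requesting I/O device

    def program(self, read_write, start_address, word_count, device_address):
        self.read_write = read_write
        self.address_register = start_address
        self.word_count = word_count
        self.data_register = device_address

dmac = DMAController()
dmac.program(read_write="write", start_address=0x4000,
             word_count=256, device_address=3)
print(dmac.address_register, dmac.word_count)   # 16384 256
```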
Direct Memory Access Advantages and Disadvantages
Advantages:
1. Transferring the data without the involvement of the processor speeds up the read-
write task.
2. DMA reduces the clock cycles required to read or write a block of data.
3. Implementing DMA also reduces the overhead of the processor.
Disadvantages
1. As it is a hardware unit, it would cost to implement a DMA controller in the system.
2. Cache coherence problem can occur while using DMA controller.
Key Takeaways
 DMA is an abbreviation of direct memory access.
 DMA is a method of data transfer between main memory and peripheral devices.
 The hardware unit that controls the DMA transfer is a DMA controller.
 DMA controller transfers the data to and from memory without the participation of the
processor.
 The processor provides the start address and the word count of the data block which is
transferred to or from memory to the DMA controller and frees the bus for DMA controller to
transfer the block of data.
 DMA controller transfers the data block at a faster rate, as data is directly accessed by the
I/O devices and is not required to pass through the processor, which saves clock cycles.
 DMA controller transfers the block of data to and from memory in three modes burst
mode, cycle steal mode and transparent mode.
 DMA can be configured in various ways: it can be a part of individual I/O devices, or all
the peripherals attached to the system may share the same DMA controller.
Thus the DMA controller is a convenient mode of data transfer. It is preferred over the
programmed I/O and Interrupt-driven I/O mode of data transfer.
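The register setup and block transfer described above can be sketched in code. Below is a minimal, illustrative Python model of a burst-mode transfer; the class and register names are invented for this sketch and do not correspond to any real controller's interface. The CPU programs the starting address and word count, then the controller moves the whole block without further CPU involvement.

```python
class DMAController:
    """Toy model of a DMA controller operating in burst mode (illustrative only)."""

    def __init__(self, memory):
        self.memory = memory          # shared "main memory" (a list of words)
        self.address_register = 0     # starting address of the block in memory
        self.word_count = 0           # number of words still to transfer

    def setup(self, start_address, word_count):
        # The CPU writes these registers, then releases the bus.
        self.address_register = start_address
        self.word_count = word_count

    def burst_write(self, device_buffer):
        # Device -> memory: transfer the whole block in one burst,
        # incrementing the address and decrementing the count per word.
        for word in device_buffer[: self.word_count]:
            self.memory[self.address_register] = word
            self.address_register += 1
            self.word_count -= 1


memory = [0] * 16
dma = DMAController(memory)
dma.setup(start_address=4, word_count=3)
dma.burst_write([0xAA, 0xBB, 0xCC])
print(memory[4:7])   # [170, 187, 204]
```

Note how the CPU's only involvement is the `setup` call; everything after that is the controller's work, which is the point of the DMA scheme.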
Introduction of Input-Output Processor
The DMA mode of data transfer reduces CPU’s overhead in handling I/O operations. It also
allows parallelism in CPU and I/O operations. Such parallelism is necessary to avoid wastage of
valuable CPU time while handling I/O devices whose speeds are much slower as compared to
CPU. The concept of DMA operation can be extended to relieve the CPU further from getting
involved with the execution of I/O operations. This gives rise to the development of a special-
purpose processor called the Input-Output Processor (IOP) or I/O channel.
The Input Output Processor (IOP) is just like a CPU that handles the details of I/O operations. It
is equipped with more facilities than those available in a typical DMA controller. The IOP can
fetch and execute its own instructions, which are specifically designed to characterize I/O transfers.
In addition to the I/O-related tasks, it can perform other processing tasks like arithmetic, logic,
branching and code translation. The main memory unit takes the pivotal role; the IOP
communicates with the processor by means of DMA.
The block diagram –
The Input Output Processor is a specialized processor which loads and stores data into memory
along with the execution of I/O instructions. It acts as an interface between system and devices.
It involves a sequence of events to execute I/O operations and then stores the results into
memory.
Advantages –
 The I/O devices can directly access the main memory without the intervention of the
processor in I/O-processor-based systems.
 It is used to address the problems that arise in the direct memory access method.
Q. What are the characteristics of input-output channels?
 The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to
execute I/O instructions, which gives it complete control over I/O operations. With such devices the
CPU doesn't execute I/O instructions. These instructions are stored in main memory to
be executed by a special-purpose processor in the I/O channel itself. So the CPU initiates an I/O
transfer by instructing the I/O channel to execute a program in memory. Two kinds of I/O channels
are commonly used, as can be seen in Figure (a and b).
A selector channel controls multiple high-speed devices and, at any one time, is dedicated to the
transfer of data with one of those devices. Every device is handled by a controller or I/O
interface, so the I/O channel serves in place of the CPU in controlling these I/O controllers.
A multiplexer channel can handle I/O with many devices at the same time. If the devices are slow,
a byte multiplexer is used. Let's explain this with an illustration. If we have 3 slow devices that
need to send individual bytes as:
      X1  X2  X3  X4  X5 ......
      Y1  Y2  Y3  Y4  Y5 ......
      Z1  Z2  Z3  Z4  Z5 ......
then on a byte multiplexer channel they can transmit bytes as X1 Y1 Z1 X2 Y2 Z2 X3 Y3
Z3...... For high-speed devices, blocks of data from various devices are interleaved; such a
channel is known as a block multiplexer.
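The byte interleaving above can be sketched with a simple round-robin merge of the three streams. The stream names and byte labels below are just the ones from the illustration; Python is used only to make the ordering concrete.

```python
from itertools import chain, zip_longest

# Three slow devices, each producing its own byte stream.
x = ["X1", "X2", "X3"]
y = ["Y1", "Y2", "Y3"]
z = ["Z1", "Z2", "Z3"]

# A byte multiplexer channel takes one byte from each device in turn.
interleaved = [b for b in chain.from_iterable(zip_longest(x, y, z)) if b is not None]
print(interleaved)  # ['X1', 'Y1', 'Z1', 'X2', 'Y2', 'Z2', 'X3', 'Y3', 'Z3']
```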
Input/Output Channel
A channel is an independent hardware component that coordinates all I/O to a set of controllers.
Computer systems that use I/O channel have special hardware components that handle all I/O
operations.
Channels use separate, independent and low-cost processors for their functioning, which are called
channel processors.
Channel processors are simple, but contain sufficient memory to handle all I/O tasks. When an I/O
transfer is complete or an error is detected, the channel controller communicates with the CPU
using an interrupt, and informs CPU about the error or the task completion.
Each channel supports one or more controllers or devices. Channel programs contain list of
commands to the channel itself and for various connected controllers or devices. Once the
operating system has prepared a list of I/O commands, it executes a single I/O machine
instruction to initiate the channel program, the channel then assumes control of the I/O
operations until they are completed.
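The flow described above, where the operating system prepares a command list, issues a single start-I/O instruction, and the channel runs the list to completion before interrupting, can be sketched as follows. The SEEK and READ command names and the dictionary device model are invented for this sketch, not taken from any real channel architecture.

```python
def run_channel_program(program, device):
    """Toy channel processor: executes a list of I/O commands on behalf
    of the CPU, then 'interrupts' by returning a completion status."""
    results = []
    for op, arg in program:
        if op == "SEEK":
            device["position"] = arg
        elif op == "READ":
            start = device["position"]
            results.append(device["data"][start:start + arg])
        else:
            return ("ERROR", op)       # unknown command -> error interrupt
    return ("DONE", results)           # completion interrupt to the CPU


disk = {"position": 0, "data": list(range(100))}
# The OS prepares the channel program, then issues one start-I/O instruction.
status, results = run_channel_program([("SEEK", 10), ("READ", 4)], disk)
print(status, results)  # DONE [[10, 11, 12, 13]]
```

The single call to `run_channel_program` stands in for the one I/O machine instruction the CPU executes; everything inside the function is work the channel does on its own.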
IBM 370 I/O Channel
The I/O processor in the IBM 370 computer is called a Channel. A computer system
configuration includes a number of channels which are connected to one or more I/O devices.
Categories of I/O Channels
Following are the different categories of I/O channels:
Multiplexer
The Multiplexer channel can be connected to a number of slow and medium-speed devices. It is
capable of operating a number of I/O devices simultaneously.
Selector
This channel can handle only one I/O operation at a time and is used to control one high speed
device at a time.
Block-Multiplexer
It combines the features of both multiplexer and selector channels.
The CPU can communicate directly with the channels through control lines. The following diagram
shows the word format of a channel operation.
Alternatively referred to as the input channel and I/O channel, the input/output channel is a
line of communication in a computing device. The I/O channel is the channel between the
input/output bus and memory to the CPU or a computer peripheral.
Synchronous Transmission
In synchronous transmission, data moves in a completely paired approach, in the form of chunks
or frames.  Synchronisation between the source and target is required so that the target knows
where a new byte begins, since there are no spaces included between the data.
Synchronous transmission is effective, dependable, and often utilised for transmitting a large
amount of data.  It offers real-time communication between linked devices.
An example of synchronous transmission would be the transfer of a large text file.  Before the
file is transmitted, it is first dissected into blocks of sentences.  The blocks are then transferred
over the communication link to the target location.
Because there are no beginning and end bits, the data transfer rate is quicker but there’s an
increased possibility of errors occurring.  Over time, the clocks will get out of sync, and the
target device would have the incorrect time, so some bytes could become damaged on account
of lost bits.  To resolve this issue, it’s necessary to regularly re-synchronise the clocks, as well
as to make use of check digits to ensure that the bytes are correctly received and translated.
Characteristics of Synchronous Transmission
 There are no spaces in between characters being sent.
 Timing is provided by modems or other devices at each end of the transmission.
 Special 'syn' characters go before the data being sent.
 The 'syn' characters are included between chunks of data for timing functions.
Examples of Synchronous Transmission
 Chatrooms
 Video conferencing
 Telephonic conversations
 Face-to-face interactions
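The 'syn'-character idea can be sketched as a toy framing scheme: the sender prefixes each chunk with SYN characters that the receiver strips back out. This is only an illustration of the principle; the two-SYN prefix and the strip-everything decoder are assumptions, not a real synchronous protocol.

```python
SYN = 0x16  # the ASCII 'synchronous idle' control character

def frame_sync(blocks):
    """Prefix each chunk of data with two SYN characters for timing."""
    out = []
    for block in blocks:
        out += [SYN, SYN] + block
    return out

def deframe_sync(stream):
    """Receiver side: drop the SYN characters, keep the data.
    (Toy decoder: it assumes the data itself never contains 0x16.)"""
    return [b for b in stream if b != SYN]


stream = frame_sync([[65, 66], [67, 68]])
print(stream)                # [22, 22, 65, 66, 22, 22, 67, 68]
print(deframe_sync(stream))  # [65, 66, 67, 68]
```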
Asynchronous Transmission
In asynchronous transmission, data moves in a half-paired approach, 1 byte or 1 character at a
time.  It sends the data in a constant stream of bytes.  The size of a character transmitted is 8
bits, with a start bit added at the beginning and a stop bit at the end, making it a total of 10 bits.  It
doesn't need a clock for synchronisation; rather, it utilises the start and stop bits to tell the
receiver how to translate the data.
It is straightforward, quick, cost-effective, and doesn’t need 2-way communication to function.
Characteristics of Asynchronous Transmission
 Each character is preceded by a start bit and concluded with one or more stop bits.
 There may be gaps or spaces in between characters.
Examples of Asynchronous Transmission
 Emails
 Forums
 Letters
 Radios
 Televisions
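The start/stop-bit framing described above can be sketched as follows. This is a simplified illustration of the 10-bit character frame; real UARTs also handle parity and bit timing, which are omitted here.

```python
def frame_char(byte):
    """Frame one 8-bit character: start bit (0), 8 data bits LSB-first, stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]
    return [0] + data_bits + [1]          # 10 bits total per character

def deframe_char(bits):
    """Receiver: check the start/stop bits and reassemble the byte."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(bit << i for i, bit in enumerate(bits[1:9]))


frame = frame_char(ord("A"))              # ord('A') == 65 == 0b01000001
print(frame)                 # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(deframe_char(frame))   # 65
```

Because every 8 data bits carry 2 framing bits, each character costs 10 bits on the line, which is the overhead asynchronous transmission pays for not sharing a clock.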
Synchronous and Asynchronous Transmission

Point of Comparison    | Synchronous Transmission                                 | Asynchronous Transmission
Definition             | Transmits data in the form of chunks or frames           | Transmits 1 byte or character at a time
Speed of Transmission  | Quick                                                    | Slow
Cost                   | Expensive                                                | Cost-effective
Time Interval          | Constant                                                 | Random
Gaps between the data? | Does not exist                                           | Exist
Examples               | Chat rooms, telephonic conversations, video conferencing | Email, forums, letters
Synchronous vs. Asynchronous Transmission
1. In synchronous transmission data is transmitted in the form of chunks, while in
asynchronous transmission data is transmitted one byte at a time.
2. Synchronous transmission needs a clock signal between the source and target to let the
target know of the new byte.  In comparison, with asynchronous transmission a clock signal is
not needed, because the start and stop bits attached to the data being transmitted serve
as indicators of a new byte.
3. The data transfer rate of synchronous transmission is faster since it transmits in chunks
of data, compared to asynchronous transmission which transmits one byte at a time.
4. Asynchronous transmission is straightforward and cost-effective, while synchronous
transmission is complicated and relatively pricey.
5. Synchronous transmission is systematic and necessitates lower overhead figures
compared to asynchronous transmission.
  Both synchronous and asynchronous transmission have their benefits and limitations. 
Asynchronous transmission is used for sending a small amount of data while synchronous
transmission is used for sending bulk amounts of data.  Thus, we can say that both synchronous
and asynchronous transmission are essential for the overall process of data transmission.
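The efficiency claims above can be made concrete with a quick overhead calculation. The 2 framing bits per 8-bit character follow from the 10-bit asynchronous frame; the synchronous block and header sizes below are assumed purely for illustration.

```python
def async_efficiency(data_bits=8, framing_bits=2):
    # Each character carries a start bit and a stop bit.
    return data_bits / (data_bits + framing_bits)

def sync_efficiency(block_bytes=1000, overhead_bytes=8):
    # One header/trailer amortised over the whole frame (sizes assumed).
    return block_bytes / (block_bytes + overhead_bytes)


print(f"asynchronous: {async_efficiency():.0%}")  # asynchronous: 80%
print(f"synchronous:  {sync_efficiency():.1%}")   # synchronous:  99.2%
```

The gap widens as blocks grow: the synchronous frame pays its overhead once per block, while the asynchronous stream pays it on every single character, which is why synchronous transmission suits bulk data.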
Difference between Synchronous and Asynchronous Transmission
In Synchronous Transmission, data is sent in the form of blocks or frames. This transmission is of
the full-duplex type. Synchronization between sender and receiver is compulsory. In
synchronous transmission there is no gap present between data. It is more efficient and more
reliable than asynchronous transmission for transferring a large amount of data.
Asynchronous Transmission:
In Asynchronous Transmission, data is sent in the form of bytes or characters. This transmission is
of the half-duplex type. In this transmission start bits and stop bits are added to the data. It
does not require synchronization.
NO | SYNCHRONOUS TRANSMISSION                                              | ASYNCHRONOUS TRANSMISSION
1. | Data is sent in the form of blocks or frames.                         | Data is sent in the form of bytes or characters.
2. | Synchronous transmission is fast.                                     | Asynchronous transmission is slow.
3. | Synchronous transmission is costly.                                   | Asynchronous transmission is economical.
4. | The time interval of transmission is constant.                        | The time interval of transmission is not constant; it is random.
5. | There is no gap present between data.                                 | There is a gap present between data.
6. | Efficient use of the transmission line is made.                       | The transmission line remains empty during gaps in character transmission.
7. | Needs precisely synchronized clocks for the information of new bytes. | No need for synchronized clocks, as start and stop bits signal new bytes.
Difference between Serial and Parallel Transmission
There are two methods are used for transferring data between computers which are given below:
Serial Transmission, and Parallel Transmission.
Serial Transmission:
In Serial Transmission, data bits flow sequentially over a single line from one computer to another.
In this transmission one bit flows at one clock pulse. In Serial Transmission, 8 bits are transferred
at a time, framed by a start and a stop bit.

Parallel Transmission:
In Parallel Transmission, many bits flow together simultaneously from one computer to
another computer over multiple lines. Parallel transmission is faster than serial transmission.
Parallel transmission is used for short distances.
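The speed difference can be made concrete by counting clock pulses in a toy model: serial moves one bit per pulse on a single line, while parallel moves one bit per pulse on each of several lines. The 8-line width below matches the table that follows; it is just an illustrative assumption.

```python
def serial_clocks(n_bytes):
    # One bit per clock pulse over a single line.
    return n_bytes * 8

def parallel_clocks(n_bytes, width=8):
    # 'width' bits move together on separate lines at each clock pulse.
    return n_bytes * 8 // width


print(serial_clocks(4))    # 32 clock pulses for 4 bytes
print(parallel_clocks(4))  # 4 clock pulses for 4 bytes
```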
Difference between Serial and Parallel Transmission:
S.NO | SERIAL TRANSMISSION                                        | PARALLEL TRANSMISSION
1.   | Data bits flow one after another over a single line.       | Data flows in multiple lines.
2.   | Serial transmission is cost efficient.                     | Parallel transmission is not cost efficient.
3.   | One bit is transferred at one clock pulse.                 | Eight bits are transferred at one clock pulse.
4.   | Serial transmission is slow in comparison with parallel.   | Parallel transmission is fast in comparison with serial.
5.   | Generally, serial transmission is used for long distances. | Generally, parallel transmission is used for short distances.
6.   | The circuit used in serial transmission is simple.         | The circuit used in parallel transmission is relatively complex.
Difference between Simplex, Half duplex and Full Duplex Transmission Modes
There are 3 types of transmission modes which are given below: Simplex mode, Half duplex
mode, and Full duplex mode. These are explained as following below.
1. Simplex mode:
In simplex mode, the sender can send the data but cannot receive data. It is a
unidirectional communication.
2. Half-duplex mode:
In half duplex mode, the sender can send the data and also receive the data, but only one at a time.
It is two-way directional communication, but one direction at a time.
3. Full duplex mode:
In full duplex mode, the sender can send the data and also receive the data simultaneously. It is
two-way directional communication carried out simultaneously.
Difference between Simplex, Half duplex and Full Duplex Transmission Modes:
SIMPLEX                                    | HALF DUPLEX                                        | FULL DUPLEX
Simplex mode is a uni-directional          | Half duplex mode is a two-way directional          | Full duplex mode is a two-way directional
communication.                             | communication, but one at a time.                  | communication, simultaneously.
In simplex mode, the sender can send the   | In half duplex mode, the sender can send and       | In full duplex mode, the sender can send and
data but cannot receive data.              | receive the data, but one at a time.               | receive the data simultaneously.
The simplex mode provides less performance | The half duplex mode provides less performance     | Full duplex provides better performance than
than half duplex and full duplex.          | than full duplex.                                  | simplex and half duplex mode.
Example: keyboard and monitor.             | Example: walkie-talkies.                           | Example: telephone.
Standard I/O Interfaces
 The processor bus is the bus defined by the signals on the processor chip itself. Devices that
require a very high-speed connection to the processor, such as the main memory, may be
connected directly to this bus. The motherboard usually provides another bus that can support
more devices.
 The two buses are interconnected by a circuit, which we call a bridge, that translates the
signals and protocols of one bus into those of the other.
 It is impossible to define a uniform standard for the processor bus. The structure of this bus is
closely tied to the architecture of the processor. The expansion bus is not subject to these
limitations, and therefore it can use a standardized signaling structure.
PCI Bus
 The PCI bus supports three independent address spaces: memory, I/O, and configuration.
 The I/O address space is intended for use with processors, such as the Pentium, that have a
separate I/O address space.
 However, the system designer may choose to use memory-mapped I/O even when a separate I/O
address space is available
 The configuration space is intended to give the PCI its plug-and-play capability. A 4-bit
command that accompanies the address identifies which of the three spaces is being used in a
given data transfer operation.
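The role of the 4-bit command can be sketched with a small decoder. The encodings below are the conventional PCI bus command values for reads and writes in each space; the snippet is an illustrative sketch of the decoding idea, not a hardware model.

```python
# Conventional PCI bus command encodings (subset, as defined in the PCI
# specification; listed here for illustration).
PCI_COMMANDS = {
    0b0010: ("I/O", "read"),
    0b0011: ("I/O", "write"),
    0b0110: ("memory", "read"),
    0b0111: ("memory", "write"),
    0b1010: ("configuration", "read"),
    0b1011: ("configuration", "write"),
}

def decode_command(cmd):
    """Return which address space a 4-bit PCI command selects."""
    space, direction = PCI_COMMANDS[cmd]
    return f"{direction} in the {space} space"


print(decode_command(0b0110))  # read in the memory space
```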
Universal Serial Bus (USB)
 The USB has been designed to meet several key objectives: provide a simple, low-cost, and
easy-to-use interconnection system that overcomes the difficulties due to the limited number of
I/O ports available on a computer.
 Accommodate a wide range of data transfer characteristics for I/O devices, including telephone
and Internet connections
 Enhance user convenience through a “plug-and-play” mode of operation.
 A serial transmission format has been chosen for the USB because a serial bus satisfies the
low-cost and flexibility requirements.
 Clock and data information are encoded together and transmitted as a single signal. Hence,
there are no limitations on clock frequency or distance arising from data skew.
 To accommodate a large number of devices that can be added or removed at any time, the USB
has a tree structure. Each node of the tree has a device called a hub, which acts as an
intermediate control point between the host and the I/O devices.
 At the root of the tree, a root hub connects the entire tree to the host computer
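The tree structure can be sketched as nested hubs, with the root hub connecting everything to the host. The class and device names below are invented for this illustration; real USB enumeration assigns addresses and reads descriptors, which is omitted here.

```python
class Hub:
    """A USB-style hub: an intermediate control point with attached nodes."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def enumerate_devices(node):
    """Walk the tree from the root hub, listing every attached I/O device."""
    found = []
    for child in node.children:
        if isinstance(child, Hub):
            found += enumerate_devices(child)  # recurse into a downstream hub
        else:
            found.append(child)                # a leaf I/O device
    return found


root = Hub("root hub", [
    "keyboard",
    Hub("hub 1", ["mouse", "camera"]),
])
print(enumerate_devices(root))  # ['keyboard', 'mouse', 'camera']
```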