Unit-5 COA
Peripheral Devices
Input or output devices connected to a computer are called peripheral devices. These
devices are designed to read information into or out of the memory unit upon command from the
CPU and are considered part of the computer system. These devices are also
called peripherals.
For example: Keyboards, display units and printers are common peripheral devices.
There are three types of peripherals:
1. Input peripherals: Allow user input, from the outside world to the computer. Example:
keyboard, mouse, etc.
2. Output peripherals: Allow information output, from the computer to the outside
world. Example: printer, monitor, etc.
3. Input-output peripherals: Allow both input (from the outside world to the computer) as well
as output (from the computer to the outside world). Example: touch screen, etc.
Classification of Peripheral devices:
It is generally classified into 3 basic categories which are given below:
1. Input Devices:
An input device converts incoming data and instructions into a pattern of electrical
signals in binary code that is comprehensible to a digital computer.
Example:
Keyboard, mouse, scanner, microphone etc.
2. Output Devices:
An output device generally reverses the input process, translating the
digitized signals into a form intelligible to the user. Output devices can also be used for
sending data from one computer system to another. For some time punched-card and paper-
tape readers were extensively used for input, but these have now been supplanted by more
efficient devices.
Example:
Monitors, headphones, printers etc.
3. Storage Devices:
Storage devices are used to store the data required for performing operations in the
system. A storage device is one of the most essential parts of the system and also
provides better compatibility.
Example:
Hard disk, magnetic tape, Flash memory etc.
Advantages of Peripheral Devices:
Peripheral devices provide extra features that make operating the system easier. These are
given below:
They are helpful for taking input very easily.
They also provide a specific output.
They include storage devices for storing information or data.
They also improve the efficiency of the system.
Interfaces
Interface is a shared boundary between two separate components of the computer system which
can be used to attach two or more components to the system for communication purposes.
There are two types of interface:
1. CPU Interface
2. I/O Interface
Let's understand the I/O Interface in detail.
Input-Output Interface
Peripherals connected to a computer need special communication links for interfacing with the
CPU. In a computer system, special hardware components sit between the CPU and the
peripherals to control or manage the input-output transfers. These components are called input-
output interface units because they provide communication links between the processor bus and
the peripherals. They provide a method for transferring information between the internal system and
input-output devices.
Input/Output Ports
A connection point that acts as interface between the computer and external devices like mouse,
printer, modem, etc. is called port. Ports are of two types −
Internal port − It connects the motherboard to internal devices like hard disk drive, CD
drive, internal modem, etc.
External port − It connects the motherboard to external devices like modem, mouse,
printer, flash drives, etc.
Serial Port
Serial ports transmit data sequentially, one bit at a time, so they need only one data wire to transmit 8
bits. However, this also makes them slower. Serial ports are usually 9-pin or 25-pin male
connectors. They are also known as COM (communication) ports or RS-232C ports.
Parallel Port
Parallel ports can send or receive 8 bits or 1 byte at a time. Parallel ports come in form of 25-pin
female pins and are used to connect printer, scanner, external hard disk drive, etc.
USB Port
USB stands for Universal Serial Bus. It is the industry standard for short distance digital data
connection. USB port is a standardized port to connect a variety of devices like printer, camera,
keyboard, speaker, etc.
PS-2 Port
PS/2 stands for Personal System/2. It is a female 6-pin port standard that connects to the male
mini-DIN cable. PS/2 was introduced by IBM to connect mouse and keyboard to personal
computers. This port is now mostly obsolete, though some systems compatible with IBM may
have this port.
Infrared Port
Infrared port is a port that enables wireless exchange of data within a radius of 10m. Two
devices that have infrared ports are placed facing each other so that beams of infrared lights can
be used to share data.
Bluetooth Port
Bluetooth is a telecommunication specification that facilitates wireless connection between
phones, computers and other digital devices over short range wireless connection. Bluetooth
port enables synchronization between Bluetooth-enabled devices. There are two types of
Bluetooth ports −
Incoming − It is used to receive connection from Bluetooth devices.
Outgoing − It is used to request connection to other Bluetooth devices.
FireWire Port
FireWire is Apple Computer’s interface standard for enabling high speed communication using
serial bus. It is also called IEEE 1394 and used mostly for audio and video devices like digital
camcorders.
Interrupts
In computer architecture, an interrupt is a signal to the processor emitted by hardware or
software indicating an event that needs immediate attention.
An interrupt is a signal emitted by hardware or software when a process or an event needs
immediate attention. It alerts the processor to a high-priority process requiring interruption of the
current working process. For I/O devices, one of the bus control lines is dedicated to this purpose
and is called the interrupt request line; the routine executed in response is called the Interrupt
Service Routine (ISR).
TYPES OF INTERRUPTS
Although interrupts have higher priority than other signals, there are many types of interrupts; the
basic types are:
1. Hardware Interrupts: If the signal to the processor comes from an external device or hardware, it is
called a hardware interrupt.
Example: pressing a key on the keyboard generates a signal which is given to the processor so
that it performs an action; such interrupts are called hardware interrupts.
Hardware interrupts can be classified into two types:
Maskable Interrupt: A hardware interrupt that can be delayed when a much
higher priority interrupt has occurred.
Non-Maskable Interrupt: A hardware interrupt that cannot be delayed and must be processed by
the processor immediately.
2. Software Interrupts: Software interrupts can also be divided into two types:
Normal Interrupts: The interrupts which are caused by software instructions are
called normal (software) interrupts.
Exception: An unplanned interrupt while executing a program is called an exception. For
example, if a division by zero occurs while executing a program, an exception is raised.
NEED FOR INTERRUPTS
The operating system is a reactive program:
1. When you give some input it performs computations and produces output, but
meanwhile you can interact with the system by interrupting the running process, or you can stop
it and start another process.
This reactiveness is due to interrupts.
Modern operating systems are interrupt driven.
INTERRUPT SERVICE ROUTINE AND its WORKING
The routine that gets executed when an interrupt request is made is called the interrupt service
routine (ISR). It works as follows:
Step 1: When the interrupt occurs, the processor is currently executing the i-th instruction and
the program counter is pointing to the (i + 1)-th instruction.
Step 2: The program counter value is stored on the process's stack.
Step 3: The program counter is then loaded with the address of the interrupt service routine.
Step 4: Once the interrupt service routine is completed, the address on the process's stack
is popped and placed back in the program counter.
Step 5: Execution resumes from the (i + 1)-th instruction of the interrupted (COMPUTE) routine.
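The five steps above can be traced with a toy simulation (a minimal sketch; the `cpu` dictionary and the addresses are illustrative, not a real machine):

```python
# Toy simulation of the ISR entry/exit protocol described in Steps 1-5.
# All names and addresses are illustrative, not a real ISA.

def handle_interrupt(cpu, isr_address):
    """Steps 2-3: save the PC on the stack, load the PC with the ISR address."""
    cpu["stack"].append(cpu["pc"])   # Step 2: push the return address
    cpu["pc"] = isr_address          # Step 3: jump to the ISR

def return_from_interrupt(cpu):
    """Step 4: pop the saved address back into the PC."""
    cpu["pc"] = cpu["stack"].pop()

# Step 1: the interrupt arrives while the PC points at instruction i+1.
cpu = {"pc": 101, "stack": []}       # 101 stands for the (i+1)-th instruction
handle_interrupt(cpu, isr_address=5000)
assert cpu["pc"] == 5000             # now executing the ISR
return_from_interrupt(cpu)
assert cpu["pc"] == 101              # Step 5: resume at (i+1)
```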
INTERRUPT HARDWARE
Many computers have the facility to connect two or more input and output devices; a laptop, for
example, may have 3 USB slots. All these input and output devices are connected via switches to a
common interrupt line for all N input/output devices, and interrupt handling
works in the following manner:
1. When no interrupt is issued by the input/output devices, all the switches are open and
the voltage from Vdd flows through the single INTR line and reaches the processor,
which therefore sees a logic-1 (high) voltage.
2. When an interrupt is issued by an input/output device, the switch associated with
that device is closed, so the current now passes through the switch and
the hardware line reaching the processor, i.e. the INTR line, drops to logic 0. This indicates to
the processor that an interrupt has occurred, and the processor must then identify which
input/output device triggered the interrupt.
3. The value of INTR is a logical OR of the requests from the individual devices.
4. The resistor R is called a pull-up resistor because it pulls the line voltage to the high
state when all switches are open (the no-interrupt state).
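The behaviour of the common INTR line can be modelled in a few lines (a sketch; `requests` marks which device switches are closed):

```python
# Model of the common active-low INTR line with a pull-up resistor.
# A closed switch (True) pulls the line to 0; all switches open -> pulled up to 1.

def intr_line(requests):
    """requests[i] is True if device i has closed its switch."""
    # Electrically, the line goes low if ANY switch is closed, so the
    # request the CPU sees is the logical OR of all device requests.
    any_request = any(requests)
    return 0 if any_request else 1

assert intr_line([False, False, False]) == 1  # no interrupt: line stays high
assert intr_line([False, True, False]) == 0   # one device: line pulled low
```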
ENABLING AND DISABLING INTERRUPTS
An interrupt can stop the currently executing program temporarily and branch to an interrupt
service routine.
An interrupt can occur at any time during the execution of a program.
For these reasons it may sometimes be necessary to disable interrupts and enable them
later in the program. Some processors therefore provide special machine instructions,
such as interrupt-enable and interrupt-disable, that perform these tasks.
Let us take an example of the recursive interrupt problem.
Assume that an interrupt has occurred and the interrupt service routine is about to be
executed. The interrupt line is still active and may go inactive only when the interrupt routine
returns. This may take some time, during which the processor may think that more
interrupts have arrived and need to be serviced, which can lead to an infinite loop of interrupts.
There are two methods to handle the situation:
Method 1: As soon as the interrupt service routine is entered, all interrupts are disabled.
The interrupts are enabled only while exiting the interrupt service routine. Note that in this case the
program written for the interrupt service routine has to enable and disable the interrupts;
this is done explicitly by the programmer.
Method 2: The processor itself disables the interrupts when the
interrupt service routine is entered and enables them when the interrupt service routine
is about to exit. This differs from the first method: there we had to explicitly
program the interrupt service routine to enable and disable the interrupts, whereas
here the processor has an inbuilt mechanism to perform this task.
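Method 1 can be sketched as follows; `disable_interrupts`/`enable_interrupts` are stand-ins for machine instructions such as CLI/STI, and the `state` dictionary is illustrative:

```python
# Sketch of Method 1: the ISR explicitly disables interrupts on entry
# and re-enables them just before returning, so a still-active INTR line
# cannot retrigger the same ISR recursively.

state = {"interrupts_enabled": True, "log": []}

def disable_interrupts():            # stands in for a CLI-like instruction
    state["interrupts_enabled"] = False

def enable_interrupts():             # stands in for an STI-like instruction
    state["interrupts_enabled"] = True

def interrupt_service_routine():
    disable_interrupts()             # first action inside the ISR
    state["log"].append("servicing device")   # the actual device work
    enable_interrupts()              # last action before returning

interrupt_service_routine()
assert state["interrupts_enabled"] is True
assert state["log"] == ["servicing device"]
```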
HANDLING MULTIPLE DEVICES
There can be scenarios where multiple input/output devices connected to the CPU raise
interrupts. Since these interrupts are raised at random times, several issues arise:
How will the processor identify the device raising the interrupt?
How will the processor handle two simultaneous interrupts?
Should a device be allowed to raise an interrupt while another interrupt service routine
is already being executed?
How to identify the device raising the interrupts?
The status register can be used to identify the device raising an interrupt. When a device raises an
interrupt it sets a specific bit to one; that bit is called IRQ (Interrupt ReQuest).
IRQs are hardware lines over which devices can send interrupt signals to the microprocessor.
When you add a new device to a PC you sometimes need to set its IRQ number, for example by setting
a DIP switch. Earlier, the KIRQ bit was set to 1 when an interrupt was taken from the keyboard and
the DIRQ bit was set to 1 for the display.
DISADVANTAGE: a lot of time is spent checking the IRQ bits of all the devices,
considering that most devices generate interrupts at random times.
DAISY CHAIN MECHANISM
When multiple input/output devices raise an interrupt simultaneously, it
is straightforward to select which interrupt to handle: the one with the
highest priority.
The Daisy chain interrupt handling mechanism works as follows:
Step 1: Multiple devices try to raise an interrupt by pulling down the interrupt
request line (INTR).
Step 2: The processor realises that devices are trying to raise an interrupt, so it
drives the INTA line high, i.e. sets it to 1.
Step 3: The INTA line is connected to a device — device one in this case.
1. If device one had raised an interrupt, it goes ahead and passes its identifying
code onto the data lines.
2. If device one had not raised an interrupt, it passes the INTA signal on to device two, and
so on.
Conclusion: Priority is thus given to the device nearest to the processor. This method ensures that
multiple interrupt requests are handled properly, even when all the devices are connected to a
common interrupt request line.
Interrupt Latency
When an interrupt occurs, servicing it by executing the ISR may not start
immediately, because a context switch must occur first. The time interval between the occurrence of
the interrupt and the start of execution of the ISR is called interrupt latency.
Tswitch = Time taken for context switch
ΣTexec = The sum of time interval for executing the ISR
Interrupt Latency = Tswitch + ΣTexec
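Plugging sample numbers into the latency formula above (the microsecond values are made up for illustration):

```python
# Interrupt latency = Tswitch + sum of the ISR execution intervals,
# exactly as in the formula above. Numbers are illustrative.

def interrupt_latency(t_switch_us, isr_intervals_us):
    return t_switch_us + sum(isr_intervals_us)

# e.g. a 5 us context switch plus three ISR code sections of 10, 4 and 6 us
latency = interrupt_latency(5, [10, 4, 6])
assert latency == 25   # microseconds
```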
Handling Multiple Devices:
When more than one device raises an interrupt request signal, additional information is
needed to decide which device to consider first. The following methods are used to
decide which device to select: Polling, Vectored Interrupts, and Interrupt Nesting. These are
explained below.
1. Polling:
In polling, the first device encountered with its IRQ bit set is the device to be serviced
first, and the appropriate ISR is called to service it. Polling is easy to implement, but a lot of time is
wasted interrogating the IRQ bits of all the devices.
2. Vectored Interrupts:
In vectored interrupts, a device requesting an interrupt identifies itself directly by sending a
special code to the processor over the bus. This enables the processor to identify the device that
generated the interrupt. The special code can be the starting address of the ISR or where the ISR
is located in memory, and is called the interrupt vector.
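Vectored dispatch can be sketched as a lookup table from the device's code to its ISR (the function names and vector numbers are illustrative):

```python
# Sketch of vectored interrupt dispatch: the device supplies a vector
# (here, a table key) and the processor uses it to find the ISR directly,
# with no polling. Names and codes are illustrative.

def keyboard_isr():  return "handled keyboard"
def disk_isr():      return "handled disk"

interrupt_vector_table = {
    0x21: keyboard_isr,   # vector sent by the keyboard controller
    0x2E: disk_isr,       # vector sent by the disk controller
}

def dispatch(vector):
    return interrupt_vector_table[vector]()   # direct lookup, no scanning

assert dispatch(0x21) == "handled keyboard"
assert dispatch(0x2E) == "handled disk"
```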
3. Interrupt Nesting:
In this method, I/O devices are organized in a priority structure: an interrupt request from a
higher-priority device is recognized, whereas a request from a lower-priority device is not. To
implement this, each device (and the processor itself) is assigned a priority level, and the processor
accepts interrupts only from devices whose priority is higher than its own.
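The acceptance test for nesting can be sketched as (a minimal model; a higher number means higher priority):

```python
# Sketch of interrupt nesting: a request is accepted only if its priority
# exceeds the priority level the CPU is currently running at.

def accept_interrupt(current_cpu_priority, request_priority):
    return request_priority > current_cpu_priority

# The CPU is currently servicing a priority-3 device:
assert accept_interrupt(3, 5) is True    # higher-priority device nests
assert accept_interrupt(3, 2) is False   # lower-priority device must wait
assert accept_interrupt(3, 3) is False   # equal priority does not nest
```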
Interrupts and Exceptions
Exceptions and interrupts are unexpected events that disrupt the normal flow of
execution of the instruction currently being executed by the processor. An exception is an unexpected
event from within the processor; an interrupt is an unexpected event from outside the processor.
Whenever an exception or interrupt occurs, the hardware starts executing the code that performs
an action in response to the event. This action may involve killing a process, outputting an
error message, communicating with an external device, or horribly crashing the entire computer
system by initiating a “Blue Screen of Death” and halting the CPU. The instructions responsible
for this action reside in the operating system kernel, and the code that performs this action is
called the interrupt handler code. We can think of handler code as an operating system
subroutine. After the handler code is executed, it may be possible to continue execution
after the instruction where the exception or interrupt occurred.
Exception and Interrupt Handling:
Whenever an exception or interrupt occurs, execution transitions from user mode to kernel mode,
where the exception or interrupt is handled. In detail, the following steps must be taken to
handle an exception or interrupt.
While entering the kernel, the context (the values of all CPU registers) of the currently executing
process must first be saved to memory. The kernel is now ready to handle the
exception/interrupt.
1. Determine the cause of the exception/interrupt.
2. Handle the exception/interrupt.
When the exception/interrupt has been handled, the kernel performs the following steps:
1. Select a process to restore and resume.
2. Restore the context of the selected process.
3. Resume execution of the selected process.
At any point in time, the values of all the registers in the CPU define the context of the CPU.
Another name for the CPU context is CPU state.
The exception/interrupt handler uses the same CPU as the currently executing process. When
entering the exception/interrupt handler, the values in all CPU registers to be used by the
handler must be saved to memory. The saved register values can later be
restored before resuming execution of the process.
The handler may have been invoked for a number of reasons. The handler thus needs to
determine the cause of the exception or interrupt. Information about what caused the exception
or interrupt can be stored in dedicated registers or at predefined addresses in memory.
Next, the exception or interrupt needs to be serviced. For instance, if it was a keyboard interrupt,
then the key code of the key press is obtained and stored somewhere or some other appropriate
action is taken. If it was an arithmetic overflow exception, an error message may be printed or
the program may be terminated.
The exception/interrupt has now been handled. The kernel may choose to
resume the same process that was executing prior to handling the exception/interrupt, or to resume
execution of any other process currently in memory.
The context of the CPU can now be restored for the chosen process by reading and restoring all
register values from memory.
The process selected to be resumed must be resumed at the same point at which it was stopped. The
address of this instruction was saved by the machine when the interrupt occurred, so it is simply
a matter of getting this address and making the CPU continue to execute at it.
Priority Interrupts | (S/W Polling and Daisy Chaining)
In I/O Interface (Interrupt and DMA Mode), we discussed the concept behind interrupt-
initiated I/O.
To summarize, when I/O devices are ready for I/O transfer, they generate an interrupt request
signal to the computer. The CPU receives this signal, suspends the instructions it is
currently executing, and then moves on to service that transfer request. But what if multiple devices
generate interrupts simultaneously? In that case, we need a way to decide which interrupt
is to be serviced first. In other words, we have to set a priority among all the devices for
systematic interrupt servicing.
The concept of defining the priority among devices so as to know which one is to be serviced
first in case of simultaneous requests is called priority interrupt system. This could be done with
either software or hardware methods.
SOFTWARE METHOD – POLLING
In this method, all interrupts are serviced by branching to the same service program. This
program then checks each device to see whether it is the one generating the interrupt. The order of
checking is determined by the priorities that have been set: the device with the highest priority
is checked first, and then devices are checked in descending order of priority. If a device is
found to be generating the interrupt, another service program is called which works
specifically for that particular device.
The structure will look something like this-
if (device[0].flag)
    device[0].service();
else if (device[1].flag)
    device[1].service();
/* ... and so on, in descending priority order ... */
else
    handle_spurious_interrupt();  /* no device raised the interrupt */
The major disadvantage of this method is that it is quite slow. To overcome this, we can use a
hardware solution, one of which involves connecting the devices in series. This is called the daisy-
chaining method.
HARDWARE METHOD – DAISY CHAINING
The daisy-chaining method involves connecting all the devices that can request an interrupt in a
serial manner. This configuration is governed by the priority of the devices. The device with the
highest priority is placed first followed by the second highest priority device and so on. The
given figure depicts this arrangement.
WORKING:
There is an interrupt request line which is common to all the devices and goes into the CPU.
When no interrupts are pending, the line is in HIGH state. But if any of the devices raises
an interrupt, it places the interrupt request line in the LOW state.
The CPU acknowledges this interrupt request from the line and then enables the interrupt
acknowledge line in response to the request.
This signal is received at the PI(Priority in) input of device 1.
If the device has not requested the interrupt, it passes this signal to the next device
through its PO(priority out) output. (PI = 1 & PO = 1)
However, if the device had requested the interrupt (PI = 1 & PO = 0):
The device consumes the acknowledge signal and blocks its further use by placing
0 at its PO (priority out) output.
The device then proceeds to place its interrupt vector address (VAD) onto the data
bus of the CPU.
The device returns its interrupt request signal to the HIGH state to indicate that its interrupt
has been taken care of.
NOTE: VAD is the address of the service routine which services that device.
If a device gets 0 at its PI input, it generates 0 at its PO output to tell the other devices that
the acknowledge signal has been blocked. (PI = 0 & PO = 0)
Hence, the device having PI = 1 and PO = 0 is the highest priority device that is requesting an
interrupt. Therefore, by daisy chain arrangement we have ensured that the highest priority
interrupt gets serviced first and have established a hierarchy. The farther a device is from the
first device, the lower its priority.
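The PI/PO propagation above can be traced with a small model (list order = distance from the CPU; the vector addresses are illustrative):

```python
# Trace of the daisy-chain acknowledge propagation: the INTA signal
# enters device 1's PI input; the first requesting device it reaches
# consumes it (PI = 1, PO = 0) and puts its VAD on the data bus, while
# every later device sees PI = 0. Vector addresses are illustrative.

def daisy_chain(requests, vads):
    """requests/vads are ordered from nearest to farthest from the CPU."""
    pi = 1                            # INTA from the CPU enters device 1
    for requesting, vad in zip(requests, vads):
        if pi == 1 and requesting:
            return vad                # this device wins; its PO becomes 0
        # a non-requesting device simply passes PI through (PO = PI)
    return None                       # no device was requesting

# Devices 2 and 3 request simultaneously; device 2 is closer to the CPU.
assert daisy_chain([False, True, True], [0x40, 0x44, 0x48]) == 0x44
assert daisy_chain([False, False, False], [0x40, 0x44, 0x48]) is None
```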
Purpose of an Interrupt in Computer Organization
An interrupt is the mechanism by which modules like I/O or memory may interrupt the normal
processing by the CPU. Clicking a mouse, dragging a cursor, or printing a document are all
cases where an interrupt is generated.
Why do we require interrupts?
External devices are comparatively slower than the CPU. Without interrupts, the CPU would
waste a lot of time waiting for external devices, which
decreases its efficiency. Hence, interrupts are required to eliminate this limitation.
With Interrupt:
1. Suppose the CPU instructs the printer to print a certain document.
2. While the printer does its task, the CPU is engaged in executing other tasks.
3. When the printer is done with its work, it tells the CPU that it has finished.
(The word ‘tells’ here means an interrupt, which sends a message that the printer has done its work
successfully.)
Advantages:
It increases the efficiency of CPU.
It decreases the waiting time of CPU.
Stops the wastage of instruction cycle.
Disadvantages:
The CPU has to do a lot of work to handle an interrupt and then resume its previous execution
(in short, overhead is required to handle the interrupt request).
Difference between Hardware Interrupt and Software Interrupt
1. Hardware Interrupt :
A hardware interrupt is caused by some hardware device, such as a request to start an I/O operation, a
hardware failure or something similar. Hardware interrupts were introduced as a way to avoid
wasting the processor’s valuable time in polling loops, waiting for external events.
For example, an interrupt is raised when an I/O operation is completed, such as reading some data
into the computer from a tape drive.
2. Software Interrupt :
A software interrupt is invoked by the use of the INT instruction. This event immediately stops
execution of the program and passes execution over to the INT handler. The INT handler is
usually part of the operating system and determines the action to be taken. A software interrupt
occurs when an application program terminates or requests certain services from the operating
system. For example, output to the screen, execute a file, etc.
Difference between Hardware Interrupt and Software Interrupt :
1. A hardware interrupt is generated by an external hardware device, whereas a software
interrupt is generated by executing an instruction (such as INT).
2. Hardware interrupts are asynchronous — they can arrive at any time — whereas software
interrupts are synchronous with the executing program.
3. Hardware interrupts can be maskable or non-maskable, whereas a software interrupt is
raised deliberately by the program itself.
Difference between Interrupt and Exception
An interrupt is one of the classes of exception. There are 4 classes of exception: interrupt, trap,
fault and abort. Though an interrupt belongs to the exceptions, there are still many differences between
them.
In any computer, during its normal execution of a program, there could be events that can cause
the CPU to temporarily halt. Events like this are called interrupts. Interrupts can be caused by
either software or hardware faults. Hardware interrupts are called Interrupts, while software
interrupts are called Exceptions. Once an interrupt is raised, the control is transferred to a special
sub-routine called Interrupt Service Routine (ISR), that can handle the conditions that are raised
by the interrupt.
What is Trap, Fault and Abort ?
1. Trap –
It is typically a type of synchronous interrupt caused by an exceptional condition (e.g.,
breakpoint, division by zero, invalid memory access).
2. Fault –
A fault is an exception raised by an instruction that can potentially be corrected, such as a
page fault. After the fault handler corrects the problem, the faulting instruction can be
restarted.
3. Abort –
It is a type of exception that occurs when an instruction fetch causes an error.
What is Interrupt ?
The term Interrupt is usually reserved for hardware interrupts. They are program control
interruptions caused by external hardware events. Here, external means external to the CPU.
Hardware interrupts usually come from many different sources such as timer chip, peripheral
devices (keyboards, mouse, etc.), I/O ports (serial, parallel, etc.), disk drives, CMOS clock,
expansion cards (sound card, video card, etc). That means hardware interrupts almost never
occur due to some event related to the executing program.
Example –
An event like a key press on the keyboard by the user, or an internal hardware timer timing out
can raise this kind of interrupt and can inform the CPU that a certain device needs some
attention. In a situation like that the CPU will stop whatever it was doing (i.e. pauses the current
program), provides the service required by the device and will get back to the normal program.
When hardware interrupts occur and the CPU starts the ISR, other hardware interrupts are
disabled (e.g. in 80x86 machines). If you need other hardware interrupts to occur while the ISR
is running, you must enable them explicitly by setting the interrupt flag (with the sti instruction). In
80x86 machines, the interrupt flag only affects hardware interrupts.
What is Exception ?
Exception is a software interrupt, which can be identified as a special handler routine. Exception
can be identified as an automatically occurring trap. Generally, there are no specific instructions
associated with exceptions (traps are generated using a specific instruction). So, an exception
occurs due to an “exceptional” condition that occurs during program execution.
Example –
Division by zero, execution of an illegal opcode or memory related fault could cause exceptions.
Whenever an exception is raised, the CPU temporarily suspends the program it was executing
and starts the ISR. ISR will contain what to do with the exception. It may correct the problem or
if it is not possible it may abort the program gracefully by printing a suitable error message.
Although there is no specific instruction whose purpose is to raise an exception, an exception is
always caused by the execution of some instruction. For example, the division-by-zero error can
only occur during the execution of a division instruction.
Difference between Interrupt and Exception:
Interrupts are asynchronous external requests for service (for example, the keyboard or printer
needs service). Exceptions are synchronous internal requests for service based upon abnormal
events (think of illegal instructions, illegal addresses, overflow, etc.).
Difference between Interrupt and Polling:
An interrupt can take place at any time, whereas in polling the CPU steadily polls the device at
regular intervals.
Note: Both programmed I/O and interrupt-driven I/O require the active intervention of the
processor to transfer data between memory and the I/O module, and any data transfer must
traverse a path through the processor. Thus both these forms of I/O suffer from two inherent
drawbacks:
1. The I/O transfer rate is limited by the speed with which the processor can test and
service a device.
2. The processor is tied up in managing an I/O transfer; a number of instructions must be
executed for each I/O transfer.
3. Direct Memory Access: The data transfer between fast storage media such as a
magnetic disk and the memory unit is limited by the speed of the CPU. We can instead allow the
peripherals to communicate directly with memory over the memory buses, removing the
intervention of the CPU. This data transfer technique is known as DMA, or direct
memory access. During DMA the CPU is idle and has no control over the memory buses; the
DMA controller takes over the buses to manage the transfer directly between the I/O devices
and the memory unit.
Bus Request: It is used by the DMA controller to request the CPU to relinquish control of
the buses.
Bus Grant: It is activated by the CPU to inform the external DMA controller that the buses are
in the high-impedance state and the requesting DMA controller may take control of the buses. Once the
DMA controller has taken control of the buses it transfers the data. This transfer can take place in many ways.
Types of DMA transfer using DMA controller:
Burst Transfer :
The DMA controller returns the bus after the complete data transfer. A register is used as a byte count,
being decremented for each byte transferred, and upon the byte count reaching zero the DMAC
releases the bus. When the DMAC operates in burst mode, the CPU is halted for the duration of
the data transfer.
Steps involved are:
1. Bus grant request time.
2. Transfer the entire block of data at the transfer rate of the device, because the device is
usually slower than the speed at which data can be transferred to the CPU.
3. Release control of the bus back to the CPU.
So, the total time taken to transfer N bytes
= Bus grant request time + N * (memory transfer rate) + Bus release control time.
Where,
X µsec = data transfer time or preparation time (words/block)
Y µsec = memory cycle time or cycle time or transfer time (words/block)
% CPU idle (blocked) = (Y/(X+Y)) * 100
% CPU busy = (X/(X+Y)) * 100
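A quick calculation with the burst-mode formulas above (all numbers are illustrative):

```python
# Burst mode: total time = grant time + N * per-byte transfer time
#             + release time; CPU idle fraction = Y / (X + Y).
# All numeric values below are illustrative.

def burst_total_time(grant_us, n_bytes, per_byte_us, release_us):
    return grant_us + n_bytes * per_byte_us + release_us

def cpu_idle_percent(x_us, y_us):    # X = preparation, Y = cycle time
    return 100.0 * y_us / (x_us + y_us)

assert burst_total_time(2, 100, 1, 2) == 104   # microseconds for 100 bytes
assert cpu_idle_percent(60, 40) == 40.0        # CPU blocked 40% of the time
# idle% and busy% are complementary:
assert cpu_idle_percent(60, 40) + 100.0 * 60 / (60 + 40) == 100.0
```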
Cyclic Stealing :
An alternative method in which DMA controller transfers one word at a time after which it must
return the control of the buses to the CPU. The CPU delays its operation only for one memory
cycle to allow the direct memory I/O transfer to “steal” one memory cycle.
Steps Involved are:
4. Buffer the byte into the buffer
5. Inform the CPU that the device has 1 byte to transfer (i.e. bus grant request)
6. Transfer the byte (at system bus speed)
7. Release the control of the bus back to CPU.
Before moving on transfer next byte of data, device performs step 1 again so that bus isn’t tied
up and
the transfer won’t depend upon the transfer rate of device.
So, for 1 byte of transfer of data, time taken by using cycle stealing mode (T).
= time required for bus grant + 1 bus cycle to transfer data + time required to release the bus, it
will be
NxT
In cycle stealing mode we always follow pipelining concept that when one byte is getting
transferred then Device is parallel preparing the next byte. “The fraction of CPU time to the data
transfer time” if asked then cycle stealing mode is used.
Where,
X µsec = data transfer time or preparation time (words/block)
Y µsec = memory cycle time or cycle time or transfer time (words/block)
% CPU idle (blocked) = (Y / X) × 100
% CPU busy = ((X - Y) / X) × 100
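The cycle-stealing formulas above can be sketched the same way. The helper names and the numeric values are illustrative assumptions.

```python
# Sketch of cycle-stealing timing: one word per stolen cycle, N repetitions.
# X = preparation time, Y = memory cycle time; all numbers are illustrative.

def cycle_steal_time(n_words, grant_us, cycle_us, release_us):
    """Each word pays the full grant/transfer/release cost, so total = N * T."""
    return n_words * (grant_us + cycle_us + release_us)

def cpu_idle_percent(x_us, y_us):
    """Preparation overlaps transfer, so only Y of every X microseconds is stolen."""
    return (y_us / x_us) * 100

print(cycle_steal_time(1024, 2, 0.5, 1))  # 1024 * 3.5 = 3584.0 microseconds
print(cpu_idle_percent(40, 10))           # 25.0 -> CPU blocked 25% of the time
```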
Interleaved mode: In this technique, the DMA controller takes over the system bus only when the
microprocessor is not using it; bus cycles alternate between the two, i.e. half cycle for DMA and
half cycle for the processor.
Programmed I/O:
CPU requests I/O operation
I/O module performs operations.
I/O module sets status bits
CPU checks status bits periodically
I/O module does not inform CPU directly
I/O module does not interrupt CPU
CPU may wait or come back later
Under programmed I/O, a data transfer looks very much like a memory access from the CPU's viewpoint.
Each device is given a unique identifier.
CPU commands contain this identifier (address).
Disadvantages of Programmed I/O:
The problem with programmed I/O is that the processor has to wait a long time for the
I/O module of concern to be ready for either reception or transmission of data.
The processor, while waiting, must repeatedly interrogate the status of the I/O module.
Performance of the entire system is severely degraded.
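The polling behaviour described above can be sketched as follows; `DeviceStub` and its `ready`/`read` members are hypothetical stand-ins for a real I/O module's status and data registers.

```python
# Minimal sketch of programmed I/O: the CPU busy-waits on a status bit.
# DeviceStub is a hypothetical stand-in for a real I/O module.

class DeviceStub:
    def __init__(self, data):
        self._data = list(data)

    @property
    def ready(self):             # status bit set by the I/O module
        return bool(self._data)

    def read(self):              # data register
        return self._data.pop(0)

def programmed_io_read(device, n):
    received = []
    for _ in range(n):
        while not device.ready:  # CPU repeatedly interrogates the status bit
            pass                 # wasted CPU cycles - the drawback noted above
        received.append(device.read())
    return received

dev = DeviceStub([0x41, 0x42, 0x43])
print(programmed_io_read(dev, 3))  # [65, 66, 67]
```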
Interrupt Driven I/O Basic Operation:
CPU issues read command
I/O module gets data from peripheral whilst CPU does other work
I/O module interrupts CPU
CPU requests data
I/O module transfers data
Working of CPU in terms of interrupts:
Issue read command.
Do other work.
Check for interrupt at end of each instruction cycle.
If interrupted:-
–Save context (registers)
–Process interrupt
Fetch data & store.
See Operating Systems notes.
Multiple Interrupts:
Each interrupt line has a priority
Higher priority lines can interrupt lower priority lines
With bus mastering, only the current bus master can interrupt.
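A minimal sketch of priority resolution among interrupt lines; the convention that a lower line number means higher priority is an assumption for illustration.

```python
# Sketch of priority resolution among multiple pending interrupt lines.
# Assumption: lower line number = higher priority.

def highest_priority(pending_lines):
    """Return the line to service first, or None if nothing is pending."""
    return min(pending_lines) if pending_lines else None

def can_preempt(new_line, current_line):
    """A higher-priority line may interrupt a lower-priority handler."""
    return new_line < current_line

pending = {3, 1, 5}
print(highest_priority(pending))  # 1
print(can_preempt(0, 1))          # True: line 0 preempts line 1's handler
print(can_preempt(4, 1))          # False
```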
DMA Operation:
CPU tells DMA controller:-
–Read/Write
–Device address
–Starting address of memory block for data
–Amount of data to be transferred
CPU carries on with other work
DMA controller deals with transfer
DMA controller sends interrupt when finished
Structure of Input-Output Interface
The block diagram of an Input-Output Interface unit contains the following blocks :
1. Data Bus Buffer
2. Read/Write Control Logic
3. Port A, Port B register
4. Control and Status register
Read State :
0 0 1 0 0  Port A
0 0 1 0 1  Port B
0 0 1 1 0  Control Register
0 0 1 1 1  Status Register
Write State :
0 1 0 0 0  Port A
0 1 0 0 1  Port B
0 1 0 1 0  Control Register
0 1 0 1 1  Status Register
Example :
If S1, S0 = 0 1, then the Port B data register is selected for data transfer between the CPU and
the I/O device.
If S1, S0 = 1 0, then the Control register is selected and stores the control information sent by
the CPU.
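The register selection above can be sketched as a simple decode table; the two select lines (here called S1, S0) choose which interface register is connected to the data bus.

```python
# Sketch of the register-select decode in the example above: two select
# lines S1, S0 choose which interface register talks to the data bus.

REGISTERS = {
    (0, 0): "Port A data register",
    (0, 1): "Port B data register",
    (1, 0): "Control register",
    (1, 1): "Status register",
}

def select_register(s1, s0):
    return REGISTERS[(s1, s0)]

print(select_register(0, 1))  # Port B data register
print(select_register(1, 0))  # Control register
```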
Difference between Programmed and Interrupt Initiated I/O
Data transfer between the CPU and I/O devices can be done in a variety of modes. There are three
possible modes:
1. Programmed I/O
2. Interrupt initiated I/O
3. Direct Memory Access (DMA)
In this article we shall discuss the first two modes only.
1. Programmed I/O :
In this mode the data transfer is initiated by the instructions written in a computer program. An
input instruction is required to transfer data from the device to the CPU, and a store (output)
instruction is required to transfer data from the CPU to the device. Data transfer through this
mode requires constant monitoring of the peripheral device by the CPU; once a transfer has been
initiated, the CPU must also keep checking for the possibility of a new transfer. Thus the CPU
stays in a loop until the I/O device indicates that it is ready for data transfer. Programmed I/O is
therefore a time-consuming process that keeps the processor busy needlessly and leads to
wastage of CPU cycles.
This can be overcome by the use of an interrupt facility. This forms the basis for the Interrupt
Initiated I/O.
2. Interrupt Initiated I/O :
This mode uses an interrupt facility and special commands to inform the interface to issue the
interrupt command when data becomes available and interface is ready for the data transfer. In
the meantime CPU keeps on executing other tasks and need not check for the flag. When the
flag is set, the interface is informed and an interrupt is initiated. This interrupt causes the CPU to
deviate from what it is doing to respond to the I/O transfer. The CPU responds to the signal by
storing the return address from the program counter (PC) into the memory stack and then
branches to the service routine that processes the I/O request. After the transfer is complete, the
CPU returns to the task it was previously executing. The branch address of the service routine
can be chosen in two ways, known as vectored and non-vectored interrupts. In a vectored
interrupt, the interrupting source supplies the branch information to the CPU, while in a
non-vectored interrupt the branch address is assigned to a fixed location in memory.
Programmed I/O:
- The CPU cannot do any work until the transfer is complete, as it has to stay in a loop to
continuously monitor the peripheral device.
- The performance of the system is severely degraded.
Interrupt-initiated I/O:
- The CPU can do other work until it is interrupted by a signal indicating the readiness of the
device for data transfer.
- The performance of the system is enhanced to some extent.
Difference between Maskable and Non Maskable Interrupt
An interrupt is an event caused by a component other than the CPU. It signals the CPU that an
external event requires immediate attention. Interrupts occur asynchronously. Maskable and
non-maskable interrupts are two types of interrupts.
1. Maskable Interrupt :
An interrupt that can be disabled or ignored by the instructions of the CPU is called a maskable
interrupt. These interrupts are either edge-triggered or level-triggered.
Eg:
RST 5.5, RST 6.5, RST 7.5 of the 8085
2. Non-Maskable Interrupt :
An interrupt that cannot be disabled or ignored by the instructions of the CPU is called a non-
maskable interrupt. A non-maskable interrupt is often used when response time is critical or
when an interrupt should never be disabled during normal system operation. Such uses include
reporting non-recoverable hardware errors, system debugging and profiling, and handling of
special cases like system resets.
Eg:
TRAP of the 8085
Difference between maskable and nonmaskable interrupt :
1. A maskable interrupt is a hardware interrupt that can be disabled or ignored by the
instructions of the CPU; a non-maskable interrupt is a hardware interrupt that cannot be
disabled or ignored by the instructions of the CPU.
1. Whenever an I/O device wants to transfer the data to or from memory, it sends the DMA
request (DRQ) to the DMA controller. DMA controller accepts this DRQ and asks the CPU to
hold for a few clock cycles by sending it the Hold request (HLD).
2. CPU receives the Hold request (HLD) from DMA controller and relinquishes the bus
and sends the Hold acknowledgement (HLDA) to DMA controller.
3. After receiving the Hold acknowledgement (HLDA), DMA controller acknowledges I/O
device (DACK) that the data transfer can be performed and DMA controller takes the charge of
the system bus and transfers the data to or from memory.
4. When the data transfer is accomplished, the DMA controller raises an interrupt to inform the
processor that the data transfer is finished, and the processor can take control of the bus again
and resume processing where it left off.
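The four-step handshake above can be sketched as a tiny state machine; the state names are illustrative, while the signal names (DRQ, HLDA, DACK) follow the notes.

```python
# Sketch of the four-step DMA handshake as a tiny state machine.
# Signal names follow the notes; state names are illustrative.

TRANSITIONS = {
    ("idle", "DRQ"): "hold_requested",          # device raises DMA request
    ("hold_requested", "HLDA"): "bus_granted",  # CPU relinquishes the bus
    ("bus_granted", "DACK"): "transferring",    # controller acknowledges device
    ("transferring", "DONE"): "idle",           # interrupt CPU, release bus
}

def run(events):
    state = "idle"
    for ev in events:
        state = TRANSITIONS[(state, ev)]
    return state

print(run(["DRQ", "HLDA", "DACK", "DONE"]))  # idle (one full cycle completed)
print(run(["DRQ", "HLDA"]))                  # bus_granted
```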
Now the DMA controller can be a separate unit that is shared by various I/O devices, or it can
also be a part of the I/O device interface.
Whenever a processor is requested to read or write a block of data, i.e. transfer a block of data, it
instructs the DMA controller by sending the following information.
1. The first piece of information is whether the data has to be read from memory or written to
memory. The processor passes this information via the read or write control lines between itself
and the DMA controller's control logic unit.
2. The processor also provides the starting address of/ for the data block in the memory,
from where the data block in memory has to be read or where the data block has to be written in
memory. DMA controller stores this in its address register. It is also called the starting
address register.
3. The processor also sends the word count, i.e. how many words are to be read or written.
It stores this information in the data count or the word count register.
4. The most important item is the address of the I/O device that wants to read or write data. This
information is stored in the data register.
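The four items the CPU hands to the DMA controller can be sketched as a small register file; the field and function names are illustrative, not a real controller's programming model.

```python
from dataclasses import dataclass

# Sketch of the four items the CPU hands to the DMA controller, modelled
# as a register file. Names are illustrative, not a real controller's.

@dataclass
class DMAController:
    read: bool = True        # read/write control line
    start_address: int = 0   # starting address register
    word_count: int = 0      # word (data) count register
    device_address: int = 0  # address of the requesting I/O device

def program_dma(dmac, read, start, count, device):
    dmac.read, dmac.start_address = read, start
    dmac.word_count, dmac.device_address = count, device

dmac = DMAController()
program_dma(dmac, read=True, start=0x4000, count=128, device=0x2F)
print(hex(dmac.start_address), dmac.word_count)  # 0x4000 128
```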
Direct Memory Access Advantages and Disadvantages
Advantages:
1. Transferring the data without the involvement of the processor will speed up the read-
write task.
2. DMA reduces the clock cycles required to read or write a block of data.
3. Implementing DMA also reduces the overhead of the processor.
Disadvantages
1. As it is a separate hardware unit, implementing a DMA controller adds cost to the system.
2. A cache coherence problem can occur while using a DMA controller.
Key Takeaways
DMA is an abbreviation of direct memory access.
DMA is a method of data transfer between main memory and peripheral devices.
The hardware unit that controls the DMA transfer is a DMA controller.
DMA controller transfers the data to and from memory without the participation of the
processor.
The processor provides the start address and the word count of the data block which is
transferred to or from memory to the DMA controller and frees the bus for DMA controller to
transfer the block of data.
The DMA controller transfers the data block at a faster rate, as data is moved directly between
the I/O device and memory without passing through the processor, which saves clock cycles.
The DMA controller transfers a block of data to and from memory in three modes: burst
mode, cycle stealing mode and transparent mode.
DMA can be configured in various ways: it can be a part of individual I/O devices, or all
the peripherals attached to the system may share the same DMA controller.
Thus the DMA controller is a convenient mode of data transfer. It is preferred over the
programmed I/O and Interrupt-driven I/O mode of data transfer.
Introduction of Input-Output Processor
The DMA mode of data transfer reduces CPU’s overhead in handling I/O operations. It also
allows parallelism in CPU and I/O operations. Such parallelism is necessary to avoid wastage of
valuable CPU time while handling I/O devices whose speeds are much slower as compared to
CPU. The concept of DMA operation can be extended to relieve the CPU further from getting
involved with the execution of I/O operations. This gives rise to the development of a special-
purpose processor called an Input-Output Processor (IOP) or I/O channel.
The Input Output Processor (IOP) is just like a CPU that handles the details of I/O operations. It
is equipped with more facilities than a typical DMA controller. The IOP can fetch and execute
its own instructions, which are specifically designed to characterize I/O transfers. In addition to
the I/O-related tasks, it can perform other processing tasks like arithmetic, logic, branching and
code translation. Main memory takes the pivotal role: the IOP communicates with the processor
through memory, by means of DMA.
The block diagram –
The Input Output Processor is a specialized processor which loads and stores data into memory
along with the execution of I/O instructions. It acts as an interface between system and devices.
It involves a sequence of events to execute I/O operations and then stores the results in memory.
Advantages –
In IOP-based systems, the I/O devices can directly access main memory without intervention
by the processor.
It is used to address the problems that arise in the direct memory access method.
Q. Characteristics of input- output channels?
The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to
execute I/O instructions, which gives it complete control over the I/O operation. With such
devices the CPU doesn't execute I/O instructions itself. These instructions are stored in main
memory to be executed by a special-purpose processor in the I/O channel. So the CPU initiates
an I/O transfer by instructing the I/O channel to execute a program in memory. Two kinds of
I/O channels are commonly used, as shown in Figures (a) and (b).
A selector channel controls multiple high-speed devices and at any one time is dedicated to the
transfer of data with one of those devices. Each device is handled by a controller or I/O
interface, and the I/O channel serves in place of the CPU in controlling these I/O controllers.
A multiplexer channel can handle I/O with many devices at the same time. If the devices are
slow, a byte multiplexer is used. Let's explain this with an illustration. If we have 3 slow devices
that need to send individual bytes as:
X1 X2 X3 X4 X5 ......
Y1 Y2 Y3 Y4 Y5......
Z1 Z2 Z3 Z4 Z5......
then on a byte multiplexer channel they can transmit bytes as X1 Y1 Z1 X2 Y2 Z2 X3 Y3
Z3... For high-speed devices, blocks of data from various devices are interleaved; such
channels are known as block multiplexers.
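The byte-interleaving above can be sketched in a few lines; `byte_multiplex` is a hypothetical helper that takes one byte from each device stream in turn.

```python
from itertools import chain, zip_longest

# Sketch of byte-multiplexer interleaving for the three slow devices above:
# one byte is taken from each device stream in turn.

def byte_multiplex(*streams):
    interleaved = chain.from_iterable(zip_longest(*streams))
    return [b for b in interleaved if b is not None]  # drop padding

x = ["X1", "X2", "X3"]
y = ["Y1", "Y2", "Y3"]
z = ["Z1", "Z2", "Z3"]
print(byte_multiplex(x, y, z))
# ['X1', 'Y1', 'Z1', 'X2', 'Y2', 'Z2', 'X3', 'Y3', 'Z3']
```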
Input/Output Channel
A channel is an independent hardware component that coordinates all I/O to a set of controllers.
Computer systems that use I/O channel have special hardware components that handle all I/O
operations.
Channels use separate, independent and low cost processors for its functioning which are called
Channel Processors.
Channel processors are simple, but contain sufficient memory to handle all I/O tasks. When an I/O
transfer is complete or an error is detected, the channel controller communicates with the CPU
using an interrupt, and informs CPU about the error or the task completion.
Each channel supports one or more controllers or devices. Channel programs contain a list of
commands for the channel itself and for the various connected controllers or devices. Once the
operating system has prepared a list of I/O commands, it executes a single I/O machine
instruction to initiate the channel program, the channel then assumes control of the I/O
operations until they are completed.
IBM 370 I/O Channel
The I/O processor in the IBM 370 computer is called a Channel. A computer system
configuration includes a number of channels which are connected to one or more I/O devices.
Categories of I/O Channels
Following are the different categories of I/O channels:
Multiplexer
The multiplexer channel can be connected to a number of slow and medium-speed devices. It is
capable of operating a number of I/O devices simultaneously.
Selector
This channel can handle only one I/O operation at a time and is used to control one high speed
device at a time.
Block-Multiplexer
It combines the features of both multiplexer and selector channels.
The CPU can communicate directly with the channels through control lines. The following
diagram shows the word format of a channel operation.
Alternatively referred to as the input channel and I/O channel, the input/output channel is a
line of communication in a computing device. The I/O channel is the channel between the
input/output bus and memory to the CPU or a computer peripheral.
Synchronous Transmission
In synchronous transmission, data moves in a completely paired approach, in the form of chunks
or frames. Synchronisation between the source and target is required so that the source knows
where the new byte begins, since there are no spaces included between the data.
Synchronous transmission is effective, dependable, and often utilised for transmitting a large
amount of data. It offers real-time communication between linked devices.
An example of synchronous transmission would be the transfer of a large text file. Before the
file is transmitted, it is first dissected into blocks of sentences. The blocks are then transferred
over the communication link to the target location.
Because there are no beginning and end bits, the data transfer rate is quicker but there’s an
increased possibility of errors occurring. Over time, the clocks will get out of sync, and the
target device would have the incorrect time, so some bytes could become damaged on account
of lost bits. To resolve this issue, it’s necessary to regularly re-synchronise the clocks, as well
as to make use of check digits to ensure that the bytes are correctly received and translated.
Characteristics of Synchronous Transmission
There are no spaces in between characters being sent.
Timing is provided by modems or other devices at the end of the transmission.
Special ‘syn’ characters go before the data being sent.
The syn characters are included between chunks of data for timing functions.
Examples of Synchronous Transmission
Chatrooms
Video conferencing
Telephonic conversations
Face-to-face interactions
Asynchronous Transmission
In asynchronous transmission, data moves 1 byte or 1 character at a time, sent as a steady
stream of bytes. The size of a character transmitted is 8 bits, with a start bit added at the
beginning and a stop bit at the end, making it a total of 10 bits. It doesn’t need a clock for
synchronisation; rather, it uses the start and stop bits to tell the receiver how to interpret
the data.
It is straightforward, quick, cost-effective, and doesn’t need 2-way communication to function.
Characteristics of Asynchronous Transmission
Each character is headed by a beginning bit and concluded with one or more end bits.
There may be gaps or spaces in between characters.
Examples of Asynchronous Transmission
Emails
Forums
Letters
Radios
Televisions
Definition: synchronous transmission sends data in the form of chunks or frames, whereas
asynchronous transmission sends 1 byte or character at a time.
Asynchronous Transmission:
In asynchronous transmission, data is sent in the form of bytes or characters. This transmission
is of the half-duplex type. In this transmission, start bits and stop bits are added to the data. It
does not require synchronization.
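The 10-bit framing described above (start bit + 8 data bits + stop bit) can be sketched as follows; sending the least-significant bit first is an assumption, though it matches common UART practice.

```python
# Sketch of asynchronous framing: each 8-bit character is wrapped in a
# start bit (0) and a stop bit (1), so 10 bits travel per character.

def frame_byte(value):
    data_bits = [(value >> i) & 1 for i in range(8)]  # LSB first, as on UARTs
    return [0] + data_bits + [1]                      # start + data + stop

frame = frame_byte(ord("A"))   # 'A' = 0x41 = 0b01000001
print(len(frame))              # 10
print(frame)                   # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```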
Parallel Transmission:
In parallel transmission, many bits flow together simultaneously from one computer to another.
Parallel transmission is faster than serial transmission at moving bits. Parallel transmission is
used for short distances.
Difference between serial and parallel transmission:
2. Serial transmission is cost efficient; parallel transmission is not cost efficient.
3. In serial transmission, one bit is transferred per clock pulse; in parallel transmission, eight
bits are transferred per clock pulse.
4. Serial transmission is slow in comparison to parallel transmission; parallel transmission is
fast in comparison to serial transmission.
5. Generally, serial transmission is used for long distances; parallel transmission is used for
short distances.
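The one-bit-versus-eight-bits difference above can be sketched as a clock-pulse count; this ignores framing overhead such as start and stop bits.

```python
# Sketch comparing clock pulses needed: serial moves one bit per pulse,
# parallel moves eight bits (one byte) per pulse. Framing overhead ignored.

def serial_pulses(n_bytes):
    return n_bytes * 8      # one bit per clock pulse

def parallel_pulses(n_bytes):
    return n_bytes          # eight bits per clock pulse

print(serial_pulses(100))    # 800 pulses
print(parallel_pulses(100))  # 100 pulses -> 8x fewer over a short cable
```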
Difference between Simplex, Half duplex and Full Duplex Transmission Modes
There are 3 types of transmission modes which are given below: Simplex mode, Half duplex
mode, and Full duplex mode. These are explained as following below.
1. Simplex mode:
In simplex mode, the sender can send data but cannot receive data. It is unidirectional
communication.
2. Half-duplex mode:
In half-duplex mode, the sender can both send and receive data, but only one at a time. It is
two-way directional communication, one direction at a time.
3. Full-duplex mode:
In full-duplex mode, both parties can send and receive data simultaneously. It is two-way
directional communication in both directions at once.
Simplex mode provides less performance than half duplex and full duplex; an example of
simplex mode is a keyboard and monitor.
Half duplex mode provides less performance than full duplex; an example of half duplex mode
is walkie-talkies.
Full duplex provides better performance than simplex and half duplex mode; an example of full
duplex mode is the telephone.
Standard I/O Interfaces
The processor bus is the bus defined by the signals on the processor chip itself. Devices that
require a very high speed connection to the processor, such as the main memory, may be
connected directly to this bus. The motherboard usually provides another bus that can support
more devices.
The two buses are interconnected by a circuit, which we call a bridge, that translates the
signals and protocols of one bus into those of the other.
It is impossible to define a uniform standard for the processor bus, because the structure of this
bus is closely tied to the architecture of the processor. The expansion bus is not subject to these
limitations, and therefore it can use a standardized signaling structure.
PCI Bus
The bus supports three independent address spaces: memory, I/O, and configuration.
The I/O address space is intended for use with processors, such as the Pentium, that have a
separate I/O address space.
However, the system designer may choose to use memory-mapped I/O even when a separate I/O
address space is available
The configuration space is intended to give PCI its plug-and-play capability. A 4-bit command
that accompanies the address identifies which of the three spaces is being used in a given data
transfer operation.
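As a sketch of how the 4-bit command selects an address space, the mapping below lists a few representative PCI command encodings; treat the exact codes as an assumption and consult the PCI specification before relying on them.

```python
# Sketch of how the 4-bit PCI command selects an address space.
# A few representative command encodings; verify against the PCI spec.

PCI_COMMANDS = {
    0b0010: ("I/O Read",            "I/O space"),
    0b0011: ("I/O Write",           "I/O space"),
    0b0110: ("Memory Read",         "memory space"),
    0b0111: ("Memory Write",        "memory space"),
    0b1010: ("Configuration Read",  "configuration space"),
    0b1011: ("Configuration Write", "configuration space"),
}

def address_space(command):
    return PCI_COMMANDS[command][1]

print(address_space(0b0110))  # memory space
print(address_space(0b1010))  # configuration space
```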