Principles of Digital Communication Systems and Computer Networks
Principles of Digital Communication Systems and
Computer Networks
by Dr. K.V. Prasad ISBN: 1-58450-329-7
Charles River Media 2003 (742 pages)
This is a textbook for courses on digital communication
systems, data communication, computer networks, and
mobile computing, and a comprehensive resource for anyone
pursuing a career in telecommunications and data
communication.
Table of Contents
Principles of Digital Communication Systems and Computer Networks
Preface
Part I - Digital Communication Systems
Chapter 1 - Basics of Communication Systems
Chapter 2 - Information Theory
Chapter 3 - Transmission Media
Chapter 4 - Coding of Text, Voice, Image, and Video Signals
Chapter 5 - Error Detection and Correction
Chapter 6 - Digital Encoding
Chapter 7 - Multiplexing
Chapter 8 - Multiple Access
Chapter 9 - Carrier Modulation
Chapter 10 - Issues in Communication System Design
Chapter 11 - Public Switched Telephone Network
Chapter 12 - Terrestrial Radio Communication Systems
Chapter 13 - Satellite Communication Systems
Chapter 14 - Optical Fiber Communication Systems
Part II - Data Communication Protocols and Computer Networking
Chapter 15 - Issues in Computer Networking
Chapter 16 - ISO/OSI Protocol Architecture
Chapter 17 - Local Area Networks
Chapter 18 - Wide Area Networks and X.25 Protocols
Chapter 19 - Internetworking
Chapter 20 - TCP/IP Protocol Suite
Chapter 21 - Internet Protocol (IP)
Chapter 22 - Transport Layer Protocols: TCP and UDP
Chapter 23 - Distributed Applications
Chapter 24 - The Wired Internet
Chapter 25 - Network Computing
Chapter 26 - Signaling System No. 7
Chapter 27 - Integrated Services Digital Network
Chapter 28 - Frame Relay
Chapter 29 - Asynchronous Transfer Mode
Part III - Mobile Computing and Convergence Technologies
Chapter 30 - Radio Paging
Chapter 31 - Cellular Mobile Communication Systems
Chapter 32 - Global Positioning System
Chapter 33 - Wireless Internet
Chapter 34 - Multimedia Communication over IP Networks
Chapter 35 - Computer Telephony Integration and Unified Messaging
Chapter 36 - Wireless Personal/Home Area Networks
Chapter 37 - Telecommunications Management Network
Chapter 38 - Information Security
Chapter 39 - Futuristic Technologies and Applications
Appendix A - Glossary
Appendix B - Acronyms and Abbreviations
Appendix C - Solutions to Selected Exercises
Index
List of Figures
List of Tables
List of Listings
Back Cover
Principles of Digital Communication Systems and Computer Networks is designed as a textbook for courses on digital
communication systems, data communication, computer networks, and mobile computing. Part I deals with key topics
such as information theory, transmission media, coding, error correction, multiplexing, multiple access, carrier
modulation, PSTN, and radio communication. Part II goes on to cover networking concepts, the ISO/OSI protocol
architecture, and Ethernet. Part III covers mobile computing, including radio paging, cellular mobile, GPS, CTI,
unified messaging, and multimedia communication. Helpful summaries, lists of supplementary information, references, and exercises at the
end of each chapter make the book a comprehensive resource for anyone pursuing a career in telecommunications and
data communication.
Key Features
Provides comprehensive coverage of digital communication protocols, and mobile computing
Covers digital communications systems, data communication protocols, computer networking, mobile
computing, and convergence technologies, including PSTN, RS232, Signaling System No.7, ISDN, Frame Relay,
WAP, 3G Networks, and Radio Paging
Features a glossary of important terms, a dictionary of common acronyms, as well as references, review
questions, exercises, and projects for each chapter
In Brief and Notes sections provide handy, at-a-glance information
Includes diagrams and other visual aids to reinforce textual descriptions and analyses
About the Author
Dr. K.V. Prasad is currently the Director of Technology at Innovation Communications Systems Limited, and has been
associated with the telecommunications industry for the past 16 years. He has published extensively in leading
international journals and magazines in the areas of wireless communication, computer telephony integration, software
engineering, and artificial intelligence. He also has authored and co-authored a number of programming guides.
Principles of Digital Communication Systems and
Computer Networks
K. V. Prasad
CHARLES RIVER MEDIA, INC.
Copyright 2003 Dreamtech Press
Translation Copyright 2004 by CHARLES RIVER MEDIA, INC.
All rights reserved.
Copyright 2004 by CHARLES RIVER MEDIA, INC.
All rights reserved.
No part of this publication may be reproduced in any way, stored in a retrieval system of any type, or
transmitted by any means or media, electronic or mechanical, including, but not limited to, photocopy,
recording, or scanning, without prior permission in writing from the publisher.
Acquisitions Editor: James Walsh
Cover Design: Sherry Stinson
CHARLES RIVER MEDIA, INC.
10 Downer Avenue
Hingham, Massachusetts 02043
781-740-0400
781-740-8816 (FAX)
info@charlesriver.com
http://www.charlesriver.com
K. V. K. K. Prasad. Principles of Digital Communication Systems and Computer Networks
1-58450-329-7
All brand names and product names mentioned in this book are trademarks or service marks of their respective
companies. Any omission or misuse (of any kind) of service marks or trademarks should not be regarded as
intent to infringe on the property of others. The publisher recognizes and respects all marks used by
companies, manufacturers, and developers as a means to distinguish their products.
Library of Congress Cataloging-in-Publication Data
Prasad, K. V. K. K.
Principles of digital communication systems and computer networks /
K.V.K.K. Prasad. 1st ed.
p. cm.
ISBN 1-58450-329-7 (Paperback : alk. paper)
1. Digital communications. 2. Computer networks. I. Title.
TK5103.7.P74 2003
004.6 dc22
2003021571
04 7 6 5 4 3 2 First Edition
CHARLES RIVER MEDIA titles are available for site license or bulk purchase by institutions, user groups,
corporations, etc. For additional information, please contact the Special Sales Department at 781-740-0400.
Preface
At the dawn of the twenty-first century, we are witnessing a revolution that will make the utopia of a "Global
Village" a reality. Telecommunications technology is paving the way to a world in which distance is no longer a
barrier. Telecommunication is enabling us to interact with one another irrespective of our physical locations,
bringing people of different countries, cultures, and races closer to one another.
The revolution we are witnessing is due to the fast pace of the convergence of information, communications,
and entertainment (ICE) technologies. These technologies help us to communicate and carry out business, to
entertain and educate ourselves, and to share information and knowledge. However, we are at just the
beginning of this revolution. To ensure that every individual on this planet is connected and is provided with
quality communication services is a gigantic task. This task requires dedicated professionals with a commitment
to technological excellence. These professionals will be the architects of the Global Village.
Prospective communications professionals need to have a strong foundation in digital communication and data
communication protocols, as well as an exposure to the latest technological innovations. This book aims at
providing that foundation to the students of computer science/communication engineering/information
technology.
ABOUT THE BOOK
This book lays the foundation for a career in telecommunications and data communications. It can be used as a
textbook for the following subjects:
Digital Communication Systems
Data Communication and Computer Networks
Mobile Computing
Accordingly, the book is divided into three parts.
Part 1 focuses on digital communication systems. This part covers the basic building blocks of digital
communications, such as transmission media, multiplexing, multiple access, source coding, error detecting and
correcting codes, and modulation techniques. Representative telecommunication systems such as Public
Switched Telephone Network, terrestrial radio systems, satellite radio systems, and optical communication
systems are also discussed.
Part 2 focuses on data communication protocols and computer networking. The Open Systems Interconnection
(OSI) protocol architecture and the Transmission Control Protocol/Internet Protocol (TCP/IP) architecture are
covered in detail. Representative computer networks based on international standards are covered, including
Signaling System No. 7 (SS7), Integrated Services Digital Network (ISDN), Frame Relay and Asynchronous
Transfer Mode (ATM).
Part 3 focuses on mobile computing and convergence technologies. Radio paging, cellular mobile
communications, global positioning system, wireless Internet, and wireless personal/home area networks are
covered, highlighting the technology ingredients of mobile computing. Multimedia communication over IP
networks and Computer Telephony Integration (CTI) are covered in detail, highlighting the technology
ingredients of convergence. The architecture of Telecommunications Management Network, which is gaining
importance to manage complex telecommunication networks, is presented. The last chapter gives a glimpse of
futuristic technologies.
At the end of each chapter, references, questions, exercises, and projects are given. References are books and
research papers published in professional journals as well as Web addresses. Needless to say, the Internet
continues to be the best platform to obtain the latest information on a specific topic. Just use a search engine
and enter the topic of your choice, and you will get millions of references! The Questions help the reader focus
on specific topics while reading the book. The Exercise section is exploratory and the reader is urged to do
these exercises to enhance his conceptual understanding. Final year students can choose some of the
exercises as seminar topics. In the Projects section, brief problem definitions are given based on the
requirements of the communications industry. Final year students can select some of these projects in their
areas of interest. These projects are long-term projects (of one month to three months duration). Certainly,
students need the help of their teachers in converting the project ideas into detailed requirement documents. Of
course, the reader is welcome to contact the author at kvkk.prasad@acm.org for any additional information.
Appendix A gives a glossary of important terms. Appendix B gives a list of acronyms and abbreviations.
Appendix C gives the solutions to selected exercises from various chapters of the book.
Planning a Career in Telecom/Datacom
Telecom/datacom is a very promising field, and those who choose this field will have very bright careers. This
field requires software and hardware professionals of almost all skill sets. Software development in
communication protocols is generally carried out in C or C++, though Java is picking up fast. Prospective
telecom/datacom software professionals need to gain expertise in at least two of the following skill sets:
Unix/Solaris/Linux with C and C++
Windows NT/2000/XP with C, C++, and C#
Real-time operating systems (such as pSOS, VxWorks, OS/9, and RTLinux) and C/C++
Unix/Solaris/Linux with Java (including RMI, JDBC, CORBA, EJB, Jini, and JMF)
RDBMS with frontend tools such as VB, VC++, and Java
Mobile operating systems (such as Win CE, Palm Operating System, and Symbian Operating System) and
C/C++
Markup languages such as XML, WML, XHTML, and VoiceXML and scripting languages such as
JavaScript and WMLScript
For those who have an interest in pursuing hardware-cum-software development, the following skill sets are
suggested:
Real-time operating systems (such as pSOS, VxWorks, OS/9 and RTLinux) and C/C++
VLSI design languages and tools (such as VHDL and Verilog) and C/C++
Micro-controllers (such as Intel 8051, MC 68HC11 and ARM), associated assembly language, and C/C++
Digital Signal Processors and C/C++
Along with expertise in the development environment (operating systems, programming languages, etc.),
domain expertise in communications is important for a successful career. However, communications is a vast
field, and it would be futile to try to gain expertise in every aspect. A very good understanding of the basics of
telecommunication systems and communication protocols is mandatory for everyone. After that, one needs to
have an exposure to the latest developments and then focus on some specialization. The specialization can be
in any of the following areas:
TCP/IP protocol stack
Multimedia communication over IP networks
Wireless Internet
Telecommunication switching software development
Telecommunication network management
Voice and video coding
Wireless local area networks
Wireless personal area networks
Terrestrial radio communication
Optical communication
Satellite communication
Signaling systems
Computer Telephony Integration
If one has a strong foundation in the basic concepts of telecommunications and data communications, it is not
difficult to move from one domain to another during one's career. The sole aim of this book is to cover the
fundamentals in detail and also to give an exposure to the latest developments in the application domains
listed.
Let us now begin our journey into the exciting world of communications.
Part I: Digital Communication Systems
Chapter 1: Basics of Communication Systems
Chapter 2: Information Theory
Chapter 3: Transmission Media
Chapter 4: Coding of Text, Voice, Image, and Video Signals
Chapter 5: Error Detection and Correction
Chapter 6: Digital Encoding
Chapter 7: Multiplexing
Chapter 8: Multiple Access
Chapter 9: Carrier Modulation
Chapter 10: Issues in Communication System Design
Chapter 11: Public Switched Telephone Network
Chapter 12: Terrestrial Radio Communication Systems
Chapter 13: Satellite Communication Systems
Chapter 14: Optical Fiber Communication Systems
The objective of any telecommunication system is to facilitate communication between people, who may be
sitting in adjacent rooms or located in different corners of the world. Perhaps one of them is traveling. The
information people may like to exchange can be in different forms: text, graphics, voice, or video. In
broadcasting, information is sent from a central location, and people just receive the information; they are
passive listeners (in the case of radio) or passive watchers (in the case of television). It is not just people;
devices may have to communicate with each other: a PC to a printer, a digital camera to a PC, or two devices
in a process control system.
The basic principles of all these types of communication are the same. In this part of the book, we will study the
fundamentals of digital communication: the building blocks of a communication system and the mechanisms of
coding various types of information. We will study Shannon's information theory, which laid the foundation of
digital communications. We also will cover the characteristics of various transmission media and how to utilize
a medium effectively through multiplexing and multiple access techniques. The issues involved in designing
telecommunication systems are also discussed. Finally, we will study representative telecommunication
systems using cable, terrestrial radio, satellite radio, and optical fiber as the transmission media.
This part contains 14 chapters that cover all the fundamental aspects of telecommunications and representative
telecommunication networks.
Chapter 1: Basics of Communication Systems
We begin the journey into the exciting field of telecommunications by studying the basic building blocks of a
telecommunication system. We will study the various types of communication and how the electrical signal is
impaired as it travels through the transmission medium. With the advances in digital electronics, digital
communication systems slowly are replacing analog systems. We will discuss the differences between analog
communication and digital communication.
1.1 BASIC TELECOMMUNICATION SYSTEM
A very simple telecom system is shown in Figure 1.1. At the transmitting end, there will be a source that
generates the data and a transducer that converts the data into an electrical signal. The signal is sent over a
transmission medium and, at the receiving end, the transducer again converts the electrical signal into data and
is given to the destination (sink). For example, if two people want to talk to each other using this system, the
transducer is the microphone that converts the sound waves into equivalent electrical signals. At the receiving
end, the speakers convert the electrical signal into acoustic waves. Similarly, if video is to be transmitted, the
transducers required are a video camera at the transmitting side and a monitor at the receiving side. The
medium can be copper wire. The public address system used in an auditorium is an example of such a simple
communication system.
Figure 1.1: Basic telecommunication system.
What is the problem with this system? As the electrical signal passes through the medium, the signal gets
attenuated. The attenuated signal may not be able to drive the transducer at the receiving end at all if the
distance between the sender and the receiver is large. We can, to some extent, overcome this problem by
using amplifiers along the path. The amplifier will ensure that the electrical signals are of sufficient strength to drive
the transducer.
In an electrical communication system, at the transmitting side, a transducer converts the real-life information
into an electrical signal. At the receiving side, a transducer converts the electrical signal back into real-life
information.
But we still have a problem. The transmission medium introduces noise, and this noise cannot be eliminated entirely.
So, in the above case, we amplify the signal, but at the same time, we also amplify the noise that is added to
the actual signal containing the information. Amplification alone does not solve the problem, particularly when
the system has to cover large distances.
Note: As the electrical signal passes through the transmission medium, the signal gets attenuated. In
addition, the transmission medium introduces noise and, as a result, the signal gets distorted.
The objective of designing a communication system is for the electrical signal at the transmitting end to be
reproduced at the receiving end with minimal distortion. To achieve this, different techniques are used,
depending on issues such as type of data, type of communication medium, distance to be covered, and so
forth.
The objective of designing a communication system is to reproduce the electrical signal at the receiving end
with minimal distortion.
Figure 1.2 shows a communication system used to interconnect two computers. The computers output
electrical signals directly (through the serial port, for example), and hence there is no need for a transducer.
The data can be passed directly through the communication medium to the other computer if the distance is
small (less than 100 meters).
Figure 1.2: PC-to-PC communication.
Note: The serial ports of two computers can be connected directly using a copper cable. However, due to
the signal attenuation, the distance cannot be more than 100 meters.
Figure 1.3 shows a communication system in which two PCs communicate with each other over a telephone
network. In this system, we introduced a new device called a modem (modulator-demodulator) at both ends.
The PCs send digital signals, which the modem converts into analog signals and transmits through the medium
(copper wires). At the receiving end, the modem converts the incoming analog signal into digital form and
passes it on to the PC.
Figure 1.3: PC-to-PC communication over telephone network.
Two computers can communicate with each other through the telephone network, using a modem at each end.
The modem converts the digital signals generated by the computer into analog form for transmission over the
medium at the transmitting end and the reverse at the receiving end.
Figure 1.4 shows a generic communication system. In this figure, a block "medium access processing" is
introduced. This block has various functions, depending on the requirement. In some communication systems,
the transmission medium needs to be shared by a number of users. Sometimes the user is allowed to transmit
only during certain time periods. Sometimes the user may need to send the same data to multiple users.
Additional processing needs to be done to cater to all these requirements. At the transmitting side, the source
generates information that is converted into an electrical signal. This signal, called the baseband signal, is
processed and transmitted only when it is allowed. The signal is sent on to the transmission medium through a
transmitter. At the receiving end, the receiver amplifies the signal and does the necessary operations to present
the baseband signal to the user. Any telecommunication system is a special form of this system. Consider the
following examples:
Figure 1.4: Generic communication system.
In the case of a radio communication system for broadcasting audio programs, the electrical signal is
transformed into a high-frequency signal and sent through the air (free space). A radio transmitter is used to do
this. A reverse of this transformation, converting the high-frequency signal into an audio signal, is performed
at the receiving station. Since it is a broadcasting system, many receivers can receive the information.
In a radio communication system, the electrical signal is transformed into a high-frequency signal and sent over
the air.
In a communication system in which two persons communicate with two other persons located somewhere
else, but only on one communication link, the voice signals need to be combined. We cannot mix the two voice
signals directly because it will not be possible to separate them at the receiving end. We need to "multiplex" the
two signals, using special techniques.
In a mobile communication system, a radio channel has to be shared by a number of users. Each user has to
use the radio channel for a short time during which he has to transmit his data and then wait for his next turn.
This mechanism of sharing the channel is known as multiple access.
When multiple users have to share the same transmission medium, multiple access techniques are used.
These techniques are required in both radio communication and cable communication.
Hence, depending on the type of communication, the distance to be covered, etc., a communication system will
consist of a number of elements, each element carrying out a specific function. Some important elements are:
Multiplexer: Combines the signals from different sources to transmit on the channel. At the receiving end,
a demultiplexer is used to separate the signals.
Multiple access: When two or more users share the same channel, each user has to transmit his
signal only at a specified time or using a specific frequency band.
Error detection and correction: If the channel is noisy, the received data will have errors. Detection, and
if possible correction, of the errors has to be done at the receiving end. This is done through a mechanism
called channel coding.
Source coding: If the channel has a lower bandwidth than the input signal bandwidth, the input signal has
to be processed to reduce its bandwidth so that it can be accommodated on the channel.
Switching: If a large number of users have to be provided with communication facilities, as in a telephone
network, the users are to be connected based on the numbers dialed. This is done through a mechanism
called switching.
Signaling: In a telephone network, when you dial a particular telephone number, you are telling the
network whom you want to call. This is called signaling information. The telephone switch (or exchange) will
process the signaling information to carry out the necessary operations for connecting to the called party.
The various functions to be carried out in a communication system are: multiplexing, multiple access, error
detection and correction, source coding, switching and signaling.
Note: Two voice signals cannot be mixed directly because it will not be possible to separate them at the
receiving end. The two voice signals can be transformed into different frequencies to combine them
and send over the medium.
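Multiplexing and multiple access are treated in detail in Chapters 7 and 8. As a taste of the idea, here is a minimal sketch of the time-division variant in Python; the function names and the two short "voice" sample lists are illustrative, not from the book:

```python
def tdm_multiplex(sources):
    """Interleave samples from several sources into one stream.

    Each frame carries one sample from every source in a fixed order,
    so each source effectively owns one time slot per frame.
    """
    frames = []
    for samples in zip(*sources):  # one time slot per source, per frame
        frames.extend(samples)
    return frames

def tdm_demultiplex(stream, num_sources):
    """Recover each source's samples by reading its fixed time slot."""
    return [stream[i::num_sources] for i in range(num_sources)]

voice_a = [1, 2, 3]
voice_b = [10, 20, 30]
combined = tdm_multiplex([voice_a, voice_b])   # [1, 10, 2, 20, 3, 30]
recovered = tdm_demultiplex(combined, 2)       # original two streams
```

Because each source owns a fixed time slot, the receiver separates the signals purely by position in the stream; frequency-division multiplexing, mentioned in the Note, achieves the same separation by assigning each signal a different frequency band instead.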
1.2 TYPES OF COMMUNICATION
Based on the requirements, communication can be of different types:
Point-to-point communication: In this type, communication takes place between two end points. For instance,
in the case of voice communication using telephones, there is one calling party and one called party. Hence the
communication is point-to-point.
Point-to-multipoint communication: In this type of communication, there is one sender and multiple
recipients. For example, in voice conferencing, one person will be talking but many others can listen. The
message from the sender has to be multicast to many others.
Broadcasting: In a broadcasting system, there is a central location from which information is sent to many
recipients, as in the case of audio or video broadcasting. In a broadcasting system, the listeners are passive,
and there is no reverse communication path.
Simplex communication: In simplex communication, communication is possible only in one direction. There is
one sender and one receiver; the sender and receiver cannot change roles.
Half-duplex communication: Half-duplex communication is possible in both directions between two entities
(computers or persons), but one at a time. A walkie-talkie uses this approach. The person who wants to talk
presses a talk button on his handset to start talking, and the other person's handset will be in receive mode.
When the sender finishes, he terminates it with an over message. The other person can press the talk button
and start talking. These types of systems require limited channel bandwidth, so they are low cost systems.
Full-duplex communication: In a full-duplex communication system, the two parties (the caller and the
called) can communicate simultaneously, as in a telephone system. However, note that the communication
system allows simultaneous transmission of data, but when two persons talk simultaneously, there is no
effective communication! The ability of the communication system to transport data in both directions defines
the system as full-duplex.
In simplex communication, the communication is one-way only. In half-duplex communication, communication is
both ways, but only in one direction at a time. In full-duplex communication, communication is in both directions
simultaneously.
Depending on the type of information transmitted, we have voice communication, data communication, fax
communication, and video communication systems. When various types of information are clubbed together,
we talk of multimedia communications. Even a few years ago, different information media such as voice, data,
video, etc. were transmitted separately by using their own respective methods of transmission. With the advent
of digital communication and "convergence technologies," this distinction is slowly disappearing, and
multimedia communication is becoming the order of the day.
1.3 TRANSMISSION IMPAIRMENTS
While the electrical signal is traversing over the medium, the signal will be impaired due to various factors.
These transmission impairments can be classified into three types:
a. Attenuation distortion
b. Delay distortion
c. Noise
The transmission impairments can be classified into: (a) attenuation distortion; (b) delay distortion; and (c)
noise.
The amplitude of the signal wave decreases as the signal travels through the medium. This effect is known as
attenuation distortion. Delay distortion occurs as a result of different frequency components arriving at different
times in guided media such as copper wire or coaxial cable. The third type of impairment, noise, can be
divided into the following categories:
Thermal noise
Intermodulation
Crosstalk
Impulse noise
Thermal noise: Thermal noise occurs due to the thermal agitation of electrons in a conductor. This is
distributed uniformly across the spectrum and hence called white noise. This noise cannot be eliminated and
hence, when designing telecom systems, we need to introduce some method to overcome the ill effects of
thermal noise. Thermal noise for a bandwidth of 1 Hz is obtained from the formula:
N0 = kT (watts)
where N0 is the noise power density in watts per Hz, k is Boltzmann's constant (1.38 x 10^-23 J/K), and T is
the temperature in kelvin.
Thermal noise for a bandwidth of B Hz is given by
N = kTB (watts)
If N is expressed in dB (decibels),
N = 10 log k + 10 log T + 10 log B = -228.6 dBW + 10 log T + 10 log B
Using this formula, thermal noise for a given bandwidth is calculated.
Note: Thermal noise for a bandwidth of B Hz is given by N = kTB (watts), where k is Boltzmann's constant
and T is temperature. N is generally expressed in decibels.
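The N = kTB formula is easy to check numerically. A small Python sketch (the function and variable names are ours, chosen for illustration):

```python
import math

BOLTZMANN = 1.38e-23  # Boltzmann's constant k, in joules per kelvin

def thermal_noise_watts(temp_kelvin, bandwidth_hz):
    """Thermal (white) noise power N = kTB, in watts."""
    return BOLTZMANN * temp_kelvin * bandwidth_hz

def thermal_noise_dbw(temp_kelvin, bandwidth_hz):
    """The same quantity expressed in decibel-watts: 10 log10(kTB)."""
    return 10 * math.log10(thermal_noise_watts(temp_kelvin, bandwidth_hz))

# Example: room temperature (290 K) over a 1 MHz bandwidth.
n_watts = thermal_noise_watts(290, 1e6)  # about 4.0e-15 W
n_dbw = thermal_noise_dbw(290, 1e6)      # about -144 dBW
```

The decibel form agrees with the expansion given above: -228.6 dBW (which is 10 log k) plus 10 log 290 plus 10 log 10^6 is about -144 dBW.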
Intermodulation noise: When two signals of different frequencies are sent through the medium, due to
nonlinearity of the transmitters, frequency components such as f1 + f2 and f1 - f2 are produced, which are
unwanted components and need to be filtered out.
Crosstalk: Unwanted coupling between signal paths is known as crosstalk. In the telephone network, this
coupling is quite common. As a result of this, we hear other conversations. Crosstalk needs to be eliminated by
using appropriate design techniques.
Impulse noise: This is caused by external electromagnetic disturbances such as lightning. This noise is
unpredictable. When the signal is traversing the medium, impulse noise may cause sudden bursts of errors.
This may cause a temporary disturbance in voice communication. For data communication, appropriate
methods need to be devised whereby the lost data is retransmitted.
Note: Impulse noise occurs due to external electromagnetic disturbances such as lightning. Impulse noise
causes bursts of errors.
Noise is the source of bread and butter for telecom engineers! If there were no noise, there would be no need
for telecom engineers, for we could then design perfect communication systems. Telecom engineering is all
about overcoming the effects of noise.
Noise can be divided into four categories: (a) thermal noise, (b) intermodulation noise, (c) crosstalk, and (d)
impulse noise.
1.4 ANALOG VERSUS DIGITAL TRANSMISSION
The electrical signal output from a transducer such as a microphone or a video camera is an analog signal; that
is, the amplitude of the signal varies continuously with time. Transmitting this signal (with necessary
transformations) to the receiving end results in analog transmission. However, at the receiving end, it has to be
ensured that the signal does not get distorted at all due to transmission impairments, which is very difficult.
In analog communication, the signal, whose amplitude varies continuously, is transmitted over the medium.
Reproducing the analog signal at the receiving end is very difficult due to transmission impairments. Hence,
analog communication systems are badly affected by noise.
The output of a computer is a digital signal. The digital signal has a fixed number of amplitude levels. For
instance, binary 1 can be represented by one voltage level (say, 5 volts) and binary 0 can be represented by
another level (say, 0 volt). If this signal is transmitted through the medium (of course with necessary
transformations), the receiving end needs only to detect these levels. Even if the signal is slightly impaired by
noise, there is no problem: for example, we can decide that if the signal is above 2.5 volts it is a 1, and if it is
below 2.5 volts it is a 0. Unless the signal is badly damaged, we can easily determine whether the transmitted bit
is a 1 or a 0.
The voice and video signals (the outputs of transducers) are always analog. How, then, do we take advantage of
digital transmission? Simple: convert the analog signal into digital format. This is achieved through
analog-to-digital conversion. At this point, let us assume only that it is possible to convert an analog signal into
its equivalent digital signal. We will study the details of this conversion process in later chapters.
In a digital communication system, 1s and 0s are transmitted as voltage pulses. So, even if the pulse is
distorted due to noise, it is not very difficult to detect the pulses at the receiving end. Hence, digital
communication is much more immune to noise as compared to analog communication.
Digital transmission is much more advantageous than analog transmission because digital systems are
comparatively immune to noise. Due to advances in digital electronics, digital systems have become cheaper,
as well. The advantages of digital systems are:
More reliable transmission because only discrimination between ones and zeros is required.
Less costly implementation because of the advances in digital logic chips.
Ease of combining various types of signals (voice, video, etc.).
Ease of developing secure communication systems.
Though a large number of analog communication systems are still in use, digital communication systems are
now being deployed. Also, the old analog systems are being replaced by digital systems. In this book, we focus
mainly on digital communication systems.
The advantages of digital communication are more reliable transmission, less costly implementation, ease of
multiplexing different types of signals, and secure communication.
Note: All newly developed communication systems are digital systems. Only in broadcasting
applications is analog communication still used extensively.
Summary
This chapter has presented the basic building blocks of a communication system. The information source
produces the data that is converted into an electrical signal and sent through the transmission medium. Since the
transmission medium introduces noise, additional processing is required to transmit the signal over large
distances. Also, additional processing is required if the medium is shared by a number of users.
Communication systems are of various types. Point-to-point systems provide communication between two end
points. Point-to-multipoint systems facilitate sending information simultaneously to a number of points.
Broadcasting systems facilitate sending information to a large number of points from a central location. Simplex
systems allow communication only in one direction. Half-duplex systems allow communication in both directions
but in one direction at a time. Full-duplex systems allow simultaneous communication in both directions.
Communication systems can be broadly divided into analog communication systems and digital communication
systems. In analog communication systems, the analog signal is transmitted. In a digital communication system,
even though the input signal is in analog form, it is converted into digital format and then sent through the
medium. Under noisy conditions, a digital communication system gives better performance than an analog
system.
The concepts of multiplexing and multiple access also are introduced in this chapter. The details will be
discussed in later chapters.
References
G. Kennedy and B. Davis. Electronic Communication Systems. Tata McGraw-Hill Publishing Company
Limited, 1993.
R. Horak. Communication Systems and Networks. Wiley-Dreamtech India Pvt. Ltd., 2002.
Questions
1. What are the advantages of digital communication over analog communication?
2. Explain the different types of communication systems.
3. What are the different types of transmission impairments?
4. What is multiplexing?
5. What is multiple access?
6. What is signaling?
Exercises
1. Write a program to generate a bit stream of ones and zeros.
2. Write a program to generate noise. You can use the random number generation function rand() to
generate the random numbers. The conversion of the random numbers to binary form produces a
pseudo-random noise.
3. Write a program that simulates a transmission medium. The bits at random places in the bit stream
generated (in Exercise #1) have to be modified to create the errors: a 1 has to be changed to 0 and
a 0 has to be changed to 1.
4. Chips (integrated circuits) are available for generation of noise. Identify a noise generator chip.
Answers
1. To generate a bit stream of 1s and 0s, you can write a program that takes characters as input and produces
the ASCII bit stream. A segment of the VC++ code is given in Listing C.1. The screenshot for this program
is given in Figure C.1. Please note that you need to create your own project file in VC++ and add the code
given in Listing C.1.
Figure C.1: Screenshot that displays the input text and equivalent ASCII code.
Listing C.1: To generate a bit stream of 1s and 0s.
/* Converts the text into ASCII */
CString CAsciiMy::TextToAscii(CString text)
{
    CString ret_str, temp_str;
    int length, temp_int = 0;
    length = text.GetLength();
    for (int i = 0; i < length; i++) {
        temp_int = text.GetAt(i);             // ASCII value of the character
        temp_str = ConvertBase(temp_int, 2);  // convert to a binary string
        ret_str = ret_str + temp_str;
    }
    return ret_str;
}

/* Converts an integer value to a string in the given base */
CString CAsciiMy::ConvertBase(int val, int base)
{
    CString ret_str, temp_str;
    int temp = val;
    while (temp > 0) {
        temp_str.Format("%d", temp % base);
        ret_str = temp_str + ret_str;
        temp = temp / base;
    }
    while (ret_str.GetLength() < 7)           // pad to 7 bits (ASCII)
        ret_str = "0" + ret_str;
    return ret_str;
}
2. To generate noise, you need to write a program that generates random numbers. The random numbers can
be between -1 and +1. Listing C.2 gives the VC++ code segment that does this. You need to create a
project file in the VC++ environment and add this code. The waveform of the noise generated using this
program is shown in Figure C.2.
Figure C.2: Waveform of the noise signal.
Listing C.2: To generate random numbers between -1 and +1 and display the waveform.
/* To display the signal on the screen */
int CNoise_signalDlg::NoiseFunction()
{
    CWindowDC dc(GetDlgItem(IDC_SINE));
    CRect rcClient;
    LOGBRUSH logBrush;
    logBrush.lbStyle = BS_SOLID;
    logBrush.lbColor = RGB(0, 255, 0);
    CPen pen(PS_GEOMETRIC | PS_JOIN_ROUND, 1, &logBrush);
    dc.SelectObject(&pen);
    dc.SetTextColor(RGB(255, 255, 255));
    while (continueThread) {
        m_sine.GetClientRect(rcClient);
        dc.FillSolidRect(rcClient, RGB(0, 0, 0));
        dc.MoveTo(0, rcClient.bottom / 2);
        int x, y;
        for (x = 0; x < rcClient.right; x++) {  // display the noise samples
            y = rcClient.bottom / 2 - Noise();
            dc.LineTo(x, y);
        }
        Sleep(200);
    }
    return 0;
}

/* To generate the noise signal */
int CNoise_signalDlg::Noise()
{
    int NISample;
    double NSample;
    double N2PI = 2 * TPI;
    double NWT;
    NoiseFreq = 300 + rand() % 4300;    // random frequency
    NoiseAMP = 8 + rand() % 32;         // random amplitude
    NWT = NoiseFreq * 0.00125;
    NSampleNo++;
    NSample = NoiseAMP * sin(N2PI * NWTn);
    NWTn += NWT;
    if (NWTn > 1.0)
        NWTn -= 1.0;
    NISample = (int) NSample;
    return NISample;
}
3. To simulate a transmission medium, you need to modify the bit stream at random places by converting 1 to
0 and 0 to 1. Listing C.3 gives the VC++ code segment that generates the bit stream and then introduces errors
at random places. Figure C.3 gives the screenshot that displays the original bit stream and the bit stream
with errors.
Figure C.3: Screenshot that displays the original bit stream and the bit stream with errors.
Listing C.3: To simulate a transmission medium.
/* To convert the text into a bit stream and introduce errors in
   the bit stream */
void CBitStreamDlg::OnDisplay()
{
    CString strdata, binary, s, ss, no;
    int bit, i = 0, count = 0;
    char ch;
    m_text.GetWindowText(strdata);
    for (int j = 0; j < strdata.GetLength(); j++) {
        ch = strdata[j];
        for (int k = 0; k < 8; i++, count++, k++) {
            bit = ch % 2;
            bin_val[i] = bit;
            ch = ch / 2;
            s.Format("%d", bin_val[i]);
            binary = binary + s;
        }
    }
    m_bin_data.SetWindowText(binary);
    srand((unsigned) time(NULL));       // seed the generator once
    for (int n = 0; n < 10; n++) {      // flip bits at 10 random places
        int ran_no;
        ran_no = rand() % 100;
        ss.Format("%d", ran_no);
        no = no + "," + ss;
        if (bin_val[ran_no] == 0)
            bin_val[ran_no] = 1;
        else
            bin_val[ran_no] = 0;
    }
    CString bin1;
    for (i = 0; i < 104; i++) {
        s.Format("%d", bin_val[i]);
        bin1 = bin1 + s;
    }
    m_con_text.SetWindowText(bin1);
    m_Random_no.SetWindowText(no);
}
4. Many semiconductor vendors provide integrated circuits that generate noise in the audio frequency band.
Suppliers of measurement equipment provide noise generators used for testing communication systems.
The best way to generate noise is through digital signal processors.
Project
Write a report on the history of telecommunication, listing the important milestones in the development of
telecommunication technology.
Chapter 2: Information Theory
Claude Shannon laid the foundation of information theory in 1948. His paper "A Mathematical Theory of
Communication," published in the Bell System Technical Journal, is the basis for the telecommunications
developments that have taken place during the last five decades. A good understanding of the concepts
proposed by Shannon is a must for every budding telecommunication professional. We study Shannon's
contributions to the field of modern communications in this chapter.
2.1 REQUIREMENTS OF A COMMUNICATION SYSTEM
In any communication system, there will be an information source that produces information in some form, and
an information sink absorbs the information. The communication medium connects the source and the sink.
The purpose of a communication system is to transmit the information from the source to the sink without
errors. However, the communication medium always introduces some errors because of noise. The
fundamental requirement of a communication system is to transmit the information without errors in spite of the
noise.
The requirement of a communication system is to transmit the information from the source to the sink without
errors, in spite of the fact that noise is always introduced in the communication medium.
2.1.1 The Communication System
The block diagram of a generic communication system is shown in Figure 2.1. The information source produces
symbols (such as English letters, speech, video, etc.) that are sent through the transmission medium by the
transmitter. The communication medium introduces noise, and so errors are introduced in the transmitted data.
At the receiving end, the receiver decodes the data and gives it to the information sink.
Figure 2.1: Generic communication system
As an example, consider an information source that produces two symbols A and B. The transmitter codes the
data into a bit stream. For example, A can be coded as 1 and B as 0. The stream of 1's and 0's is transmitted
through the medium. Because of noise, 1 may become 0 or 0 may become 1 at random places, as illustrated
below:
Symbols produced: A B B A A A B A B A
Bit stream produced: 1 0 0 1 1 1 0 1 0 1
Bit stream received: 1 0 0 1 1 1 1 1 0 1
At the receiver, one bit is received in error. How to ensure that the received data can be made error free?
Shannon provides the answer. The communication system given in Figure 2.1 can be expanded, as shown in
Figure 2.2.
Figure 2.2: Generic communication system as proposed by Shannon.
In a digital communication system, due to the effect of noise, errors are introduced. As a result, 1 may become
a 0 and 0 may become a 1.
In this block diagram, the information source produces the symbols, which are coded using two types of
coding (source encoding and channel encoding) and then modulated and sent over the medium. At the
receiving end, the modulated signal is demodulated, and the inverse operations of channel encoding and
source encoding (channel decoding and source decoding) are performed. Then the information is presented to
the information sink. Each block is explained below.
As proposed by Shannon, the communication system consists of source encoder, channel encoder and
modulator at the transmitting end, and demodulator, channel decoder and source decoder at the receiving end.
Information source: The information source produces the symbols. If the information source is, for example, a
microphone, the signal is in analog form. If the source is a computer, the signal is in digital form (a set of
symbols).
Source encoder: The source encoder converts the signal produced by the information source into a data
stream. If the input signal is analog, it can be converted into digital form using an analog-to-digital converter. If
the input to the source encoder is a stream of symbols, it can be converted into a stream of 1s and 0s using
some type of coding mechanism. For instance, if the source produces the symbols A and B, A can be coded as
1 and B as 0. Shannon's source coding theorem tells us how to do this coding efficiently.
Source encoding is done to reduce the redundancy in the signal. Source coding techniques can be divided into
lossless encoding techniques and lossy encoding techniques. In lossy encoding techniques, some information
is lost.
In source coding, there are two types of coding: lossless coding and lossy coding. In lossless coding, no
information is lost. When we compress our computer files using a compression technique (for instance,
WinZip), there is no loss of information. Such coding techniques are called lossless coding techniques. In lossy
coding, some information is lost while doing the source coding. As long as the loss is not significant, we can
tolerate it. When an image is converted into JPEG format, the coding is lossy coding because some information
is lost. Most of the techniques used for voice, image, and video coding are lossy coding techniques.
Note: The compression utilities we use to compress data files use lossless encoding techniques. JPEG
image compression is a lossy technique because some information is lost.
Channel encoder: If we have to decode the information correctly, even if errors are introduced in the medium,
we need to put some additional bits in the source-encoded data so that the additional information can be used
to detect and correct the errors. This process of adding bits is done by the channel encoder. Shannon's channel
coding theorem tells us how to achieve this.
In channel encoding, redundancy is introduced so that at the receiving end, the redundant bits can be used for
error detection or error correction.
Modulation: Modulation is a process of transforming the signal so that the signal can be transmitted through
the medium. We will discuss the details of modulation in a later chapter.
Demodulator: The demodulator performs the inverse operation of the modulator.
Channel decoder: The channel decoder analyzes the received bit stream and detects and corrects the errors,
if any, using the additional data introduced by the channel encoder.
Source decoder: The source decoder converts the bit stream into the actual information. If analog-to-digital
conversion is done at the source encoder, digital-to-analog conversion is done at the source decoder. If the
symbols are coded into 1s and 0s at the source encoder, the bit stream is converted back to the symbols by the
source decoder.
Information sink: The information sink absorbs the information.
The block diagram given in Figure 2.2 is the most important diagram for all communication engineers. We will
devote separate chapters to each of the blocks in this diagram.
2.2 ENTROPY OF AN INFORMATION SOURCE
What is information? How do we measure information? These are fundamental issues for which Shannon
provided the answers. We can say that we received some information if there is "decrease in uncertainty."
Consider an information source that produces two symbols A and B. The source has sent A, B, B, A, and now
we are waiting for the next symbol. Which symbol will it produce? If it produces A, the uncertainty that was
there in the waiting period is gone, and we say that "information" is produced. Note that we are using the term
"information" from a communication theory point of view; it has nothing to do with the "usefulness" of the
information.
Shannon proposed a formula to measure information. The information measure is called the entropy of the
source. If a source produces N symbols, and if all the symbols are equally likely to occur, the entropy of the
source is given by
H = log2 N bits/symbol
For example, assume that a source produces the English letters (in this chapter, we will refer to the English
letters A to Z and space, totaling 27, as symbols), and all these symbols are produced with equal probability.
In such a case, the entropy is
H = log2 27 = 4.75 bits/symbol
The information source may not produce all the symbols with equal probability. For instance, in English the
letter "E" has the highest frequency (and hence highest probability of occurrence), and the other letters occur
with different probabilities. In general, if a source produces the ith symbol with a probability of P(i), the entropy of
the source is given by
H = - Σ P(i) log2 P(i) bits/symbol
If a large text of English is analyzed and the probabilities of all symbols (or letters) are obtained and substituted
in the formula, then the entropy is
H = 4.07 bits/symbol
Note: Consider the following sentence: "I do not knw wheter this is undrstandble." In spite of the fact that a
number of letters are missing in this sentence, you can make out what the sentence is. In other
words, there is a lot of redundancy in English text.
This is called the first-order approximation for calculation of the entropy of the information source. In English,
there is a dependence of one letter on the previous letter. For instance, the letter U always occurs after the
letter Q. If we consider the probabilities of two symbols together (aa, ab, ac, ad, ..., ba, bb, and so on), then it is
called the second-order approximation. So, in second-order approximation, we have to consider the conditional
probabilities of digrams (or two symbols together). The second-order entropy of a source producing English
letters can be worked out to be
H = 3.36 bits/symbol
The third-order entropy of a source producing English letters can be worked out to be
H = 2.77 bits/symbol
As you consider the higher orders, the entropy goes down.
If a source produces the ith symbol with a probability of P(i), the entropy of the source is given by
H = - Σ P(i) log2 P(i) bits/symbol.
As another example, consider a source that produces four symbols with probabilities of 1/2, 1/4, 1/8, and 1/8,
and all symbols are independent of each other. The entropy of the source is 7/4 bits/symbol.
Note: As you consider the higher-order probabilities, the entropy of the source goes down. For example, the
third-order entropy of a source producing English letters is 2.77 bits/symbol: when three-letter
combinations are considered, each letter can on average be represented by 2.77 bits.
2.3 CHANNEL CAPACITY
Shannon introduced the concept of channel capacity, the limit at which data can be transmitted through a
medium. The errors in the transmission medium depend on the energy of the signal, the energy of the noise,
and the bandwidth of the channel. Conceptually, if the bandwidth is high, we can pump more data in the
channel. If the signal energy is high, the effect of noise is reduced. According to Shannon, the bandwidth of the
channel and the signal and noise energy are related by the formula
C = W log2 (1 + S/N) bps
where
C is the channel capacity in bits per second (bps)
W is the bandwidth of the channel in Hz
S/N is the signal-to-noise power ratio (SNR). SNR generally is measured in dB using the formula
SNR (dB) = 10 log10 (S/N)
The value of the channel capacity obtained using this formula is the theoretical maximum. As an example,
consider a voice-grade line for which W = 3100 Hz and SNR = 30 dB (i.e., a signal-to-noise ratio of 1000:1):
C = 3100 × log2 (1 + 1000) ≈ 30,898 bps
So, we cannot transmit data at a rate faster than this value in a voice-grade line.
An important point to be noted is that in the above formula, Shannon assumes only thermal noise.
To increase C, can we increase W? No, because increasing W admits more noise as well, and the SNR will be
reduced. To increase C, can we increase the signal power? Not indefinitely, because a higher signal power
results in more noise, called intermodulation noise.
The entropy of information source and channel capacity are two important concepts, based on which Shannon
proposed his theorems.
The bandwidth of the channel, signal energy, and noise energy are related by the formula C = W log2 (1 + S/N)
bps, where C is the channel capacity, W is the bandwidth, and S/N is the signal-to-noise ratio.
2.4 SHANNON'S THEOREMS
In a digital communication system, the aim of the designer is to convert any information into a digital signal,
pass it through the transmission medium and, at the receiving end, reproduce the digital signal exactly. To
achieve this objective, two important requirements are:
1. To code any type of information into digital format. Note that the world is analog: voice signals are
analog, images are analog. We need to devise mechanisms to convert analog signals into digital format.
If the source produces symbols (such as A, B), we also need to convert these symbols into a bit stream.
This coding has to be done efficiently so that the smallest number of bits is required for coding.
2. To ensure that the data sent over the channel is not corrupted. We cannot eliminate the noise
introduced on the channels, and hence we need to introduce special coding techniques to overcome the
effect of noise.
These two aspects have been addressed by Claude Shannon in his classical paper "A Mathematical Theory of
Communication" published in 1948 in Bell System Technical Journal, which gave the foundation to information
theory. Shannon addressed these two aspects through his source coding theorem and channel coding theorem.
Shannon's source coding theorem addresses how the symbols produced by a source have to be encoded
efficiently. Shannon's channel coding theorem addresses how to encode the data to overcome the effect of
noise.
2.4.1 Source Coding Theorem
The source coding theorem states that "the number of bits required to uniquely describe an information source
can be approximated to the information content as closely as desired."
Again consider the source that produces the English letters. The information content or entropy is 4.07
bits/symbol. According to Shannon's source coding theorem, the symbols can be coded in such a way that for
each symbol, 4.07 bits are required. But what should be the coding technique? Shannon does not tell us!
Shannon's theory puts only a limit on the minimum number of bits required. This is a very important limit; all
communication engineers have struggled to achieve the limit all these 50 years.
Consider a source that produces two symbols A and B with equal probability.
Symbol Probability Code Word
A 0.5 1
B 0.5 0
The two symbols can be coded as above, A is represented by 1 and B by 0. We require 1 bit/symbol.
Now consider a source that produces the same two symbols, but whose consecutive symbols are not
independent. Instead of coding A and B directly, we can code the pairs AA, AB, BA, BB. The probabilities of
these symbol pairs and the associated code words are shown here:
Symbol Probability Code Word
AA 0.45 0
AB 0.45 10
BA 0.05 110
BB 0.05 111
Here the strategy in assigning the code words is that the symbols with high probability are given short code
words and symbols with low probability are given long code words.
Note: Assigning short code words to high-probability symbols and long code words to low-probability
symbols results in efficient coding.
In this case, the average number of bits required per symbol can be calculated using the formula
L = Σ P(i) L(i)
where P(i) is the probability and L(i) is the length of the code word. For this example, the value is (1 × 0.45 + 2 ×
0.45 + 3 × 0.05 + 3 × 0.05) = 1.65 bits/symbol. The entropy of the source is 1.469 bits/symbol.
So, if the source produces the symbols in the following sequence:
A A B A B A A B B B
then source coding gives the bit stream
0 110 110 10 111
This encoding scheme on an average, requires 1.65 bits/symbol. If we code the symbols directly without taking
into consideration the probabilities, the coding scheme would be
AA 00
AB 01
BA 10
BB 11
Hence, we require 2 bits/symbol. The encoding mechanism taking the probabilities into consideration is a better
coding technique. The theoretical limit of the number of bits/symbol is the entropy, which is 1.469 bits/symbol.
The entropy of the source also determines the channel capacity.
As we keep considering the higher-order entropies, we can reduce the bits/ symbol further and perhaps achieve
the limit set by Shannon.
Based on this theory, it is estimated that English text cannot be compressed to less than 1.5 bits/symbol even if
you use sophisticated coders and decoders.
This theorem provides the basis for coding information (text, voice, video) into the minimum possible bits for
transmission over a channel. We will study the details of source coding in Chapter 4, "Coding of Text, Voice,
Image, and Video Signals."
The source coding theorem states "the number of bits required to uniquely describe an information source can
be approximated to the information content as closely as desired."
2.4.2 Channel Coding Theorem
Shannon's channel coding theorem states that "the error rate of data transmitted over a bandwidth limited noisy
channel can be reduced to an arbitrary small amount if the information rate is lower than the channel capacity."
This theorem is the basis for error correcting codes using which we can achieve error-free transmission. Again,
Shannon only specified that using good coding mechanisms, we can achieve error-free transmission, but he
did not specify what the coding mechanism should be! According to Shannon, channel coding may introduce
additional delay in transmission but, using appropriate coding techniques, we can overcome the effect of
channel noise.
Consider the example of a source producing the symbols A and B. A is coded as 1 and B as 0.
Symbols produced: A B B A B
Bit stream: 1 0 0 1 0
Now, instead of transmitting this bit stream directly, we can transmit the bit stream
111000000111000
that is, we repeat each bit three times. Now, let us assume that the received bit stream is
101000010111000
Two errors are introduced in the channel. But still, we can decode the data correctly at the receiver because we
know that the second bit should be 1 and the eighth bit should be 0 because the receiver also knows that each
bit is transmitted thrice. This is error correction. This coding is called a rate 1/3 error-correcting code. Such
codes that can correct errors are called forward error correcting (FEC) codes.
Ever since Shannon published his historical paper, there has been a tremendous amount of research in the
error correcting codes. We will discuss error detection and correction in Chapter 5 "Error Detection and
Correction".
All these 50 years, communication engineers have struggled to achieve the theoretical limits set by Shannon.
They have made considerable progress. Take the case of line modems that we use for transmission of data
over telephone lines. The evolution of line modems from V.26 (2400bps data rate, 1200Hz bandwidth), V.27
modems (4800bps data rate, 1600Hz bandwidth), V.32 modems (9600bps data rate, 2400Hz bandwidth), and
V.34 modems (28,800bps data rate, 3400Hz bandwidth) indicates the progress in source coding and channel
coding techniques using Shannon's theory as the foundation.
Shannon's channel coding theorem states that "the error rate of data transmitted over a bandwidth limited noisy
channel can be reduced to an arbitrary small amount if the information rate is lower than the channel capacity."
Note: Source coding is used mainly to reduce the redundancy in the signal, whereas channel coding is used
to introduce redundancy to overcome the effect of noise.
Summary
In this chapter, we studied Shannon's theory of communication. Shannon introduced the concept of entropy of
an information source to measure the number of bits required to represent the symbols produced by the source.
He also defined channel capacity, which is related to the bandwidth and signal-to-noise ratio. Based on these
two measures, he formulated the source coding theorem and channel coding theorem. Source coding theorem
states that "the number of bits required to uniquely describe an information source can be approximated to the
information content as closely as desired." Channel coding theorem states that "the error rate of data
transmitted over a bandwidth limited noisy channel can be reduced to an arbitrary small amount if the
information rate is lower than the channel capacity." A good conceptual understanding of these two theorems is
important for every communication engineer.
References
C.E. Shannon. "A Mathematical Theory of Communication." Bell System Technical Journal, Vol. 27, 1948.
Every communications engineer must read this paper. Shannon is considered the father of modern
communications. You have to be very good at mathematics to understand this paper.
W. Gappmair. "Claude E. Shannon: The 50th Anniversary of Information Theory." IEEE Communications
Magazine, Vol. 37, No. 4, April 1999.
This paper gives a brief biography of Shannon and a brief overview of the importance of Shannon's theory.
cm.bell-labs.com/cm/ms/what/shannonday/paper.html You can download Shannon's original paper from
this link.
Questions
1. Draw the block diagram of a communication system and explain the function of each block.
2. What is entropy of an information source? Illustrate with examples.
3. What is source coding? What is the difference between lossless coding and lossy coding?
4. Explain the concept of channel capacity with an example.
5. What is channel coding? Explain the concept of error correcting codes.
Exercises
1. A source produces 42 symbols with equal probability. Calculate the entropy of the source.
2. A source produces two symbols A and B with probabilities of 0.6 and 0.4, respectively. Calculate
the entropy of the source.
3. The ASCII code is used to represent characters in the computer. Is it an efficient coding technique
from Shannon's point of view? If not, why?
4. An information source produces English symbols (letters A to Z and space). Using the first-order
model, calculate the entropy of the information source. You need to enter a large English text with
the 27 symbols, calculate the frequency of occurrence of each symbol, and then calculate the
entropy. The answer should be close to 4.07 bits/symbol.
5. In the above example, using the second-order model, calculate the entropy. You need to calculate
the frequencies taking two symbols at a time such as aa, ab, ac, etc. The entropy should be close
to 3.36 bits/symbol.
Answers
1. For a source that produces 42 symbols with equal probability, the entropy of the source is
H = log2 42 bits/symbol = 5.39 bits/symbol
2. For a source that produces two symbols A and B with probabilities of 0.6 and 0.4, respectively, the entropy
is
H = - {0.6 log2 0.6 + 0.4 log2 0.4} = 0.971 bits/symbol
3. In ASCII, each character is represented by seven bits. The frequency of occurrence of the English letters is
not taken into consideration at all. If the frequency of occurrence is taken into consideration, then the most
frequently occurring letters have to be represented by short code words (such as 2 bits) and less frequently
occurring letters have to be represented by long code words. According to Shannon's theory, ASCII is not
an efficient coding technique.
However, note that if an efficient coding technique is followed, then a lot of additional processing is involved,
which causes delay in decoding the text.
4. You can write a program that obtains the frequency of occurrence of the English letters. The program takes
a text file as input and produces the frequency of occurrence for all the letters and spaces. You can ignore
the punctuation marks. You need to convert all letters either into capital letters or small letters. Based on the
frequencies, if you apply Shannon's formula for entropy, you will get a value close to 4.07 bits/symbol.
5. You can modify the above program to calculate the frequencies of two-letter combinations (aa, ab, ac, ...,
zy, zz). Again, if you apply the formula, you will get a value close to 3.36 bits/symbol.
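A minimal sketch of the frequency-counting program described in Exercise 4 might look like this in Python (first-order model only; the function name is illustrative):

```python
import math
from collections import Counter

def text_entropy(text):
    """First-order entropy of a text restricted to letters and space."""
    symbols = [c for c in text.upper() if c.isalpha() or c == ' ']
    counts = Counter(symbols)
    total = sum(counts.values())
    # Apply Shannon's formula using the observed relative frequencies.
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(text_entropy("HELLO WORLD"))
```

Run on a large English corpus, this should give a value close to the 4.07 bits/symbol quoted above; a short sample like the one printed here gives a smaller value because few distinct symbols appear.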
Projects
1. Simulate a digital communication system in C language. (a) Write a program to generate a continuous
bit stream of 1s and 0s. (b) Simulate the medium by changing a 1 to a 0 and a 0 to a 1 at random places
in the bit stream using a random number generator. (c) Calculate the bit error rate (= number of
errors/number of bits transmitted).
2. Develop a file compression utility using the second-order approximation described in this chapter. The
coding should be done by taking the frequencies of occurrence of combinations of two characters, as in
Exercise 5.
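The logic of the channel-simulation project (which itself asks for C) can be prototyped first in a few lines of Python; the function name and parameters here are illustrative:

```python
import random

def simulate_channel(n_bits, flip_prob, seed=1):
    """Generate a random bit stream, flip each bit with probability
    flip_prob to simulate the noisy medium, and return the measured
    bit error rate (errors / bits transmitted)."""
    rng = random.Random(seed)
    sent = [rng.randint(0, 1) for _ in range(n_bits)]
    received = [b ^ 1 if rng.random() < flip_prob else b for b in sent]
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / n_bits

print(simulate_channel(100_000, 0.01))  # close to 0.01
```

With a large enough bit stream, the measured bit error rate converges to the flip probability of the simulated medium, which is a useful sanity check before porting the logic to C.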
Chapter 3: Transmission Media
OVERVIEW
To exchange information between people separated by a distance has been a necessity throughout the history
of mankind. Long ago, people used fire for communicating over a limited distance; they used birds and
messengers for long distance communication. The postal system still provides an excellent means of sending
written communication across the globe.
Communication over long distances, not just through written text but using other media such as voice and
video, has been achieved through electrical communication. This discipline deals with conversion of information
into electrical signals and transmitting them over a distance through a transmission medium. In this chapter, we
will study the various transmission media used for electrical communication, such as twisted pair, coaxial cable,
optical fiber, and the radio. We will study the characteristics and the advantages and disadvantages of each for
practical communication systems. Free space (or radio) communication particularly provides the unique
advantage of support for mobility: the user can communicate while on the move (in a car or an airplane).
However, the radio spectrum is a precious natural resource and has to be used efficiently. We will also discuss
the issue of radio spectrum management briefly in this chapter.
3.1 TWISTED PAIR
Twisted pair gets its name because a pair of copper wires is twisted to form the transmission medium. This is
the least expensive transmission medium and hence the most widely used. This medium is used extensively in
the local underground telephone network, in Private Branch Exchanges (PBXs), and also in local area
networks (LANs).
As the electrical signal traverses the medium, it becomes attenuated, that is, the signal level will go down.
Hence, a small electronic gadget called a repeater is used every 2 to 10 kilometers. A repeater amplifies the
signal to the required level and retransmits on the medium.
The data rate supported by the twisted pair depends on the distance to be covered and the quality of the
copper. Category 5 twisted pair supports data rates in the range of 10Mbps to 100Mbps up to a distance of 100
meters.
The twisted pair of copper wires is a low-cost transmission medium used extensively in telephone networks and
local area networks.
3.2 COAXIAL CABLE
Coaxial cable is used extensively for cable TV distribution, long-distance telephone trunks, and LANs. The
cross-section of a coaxial cable used in an Ethernet local area network is shown in Figure 3.1.
Figure 3.1: Coaxial cable used in Ethernet LAN.
Coaxial cable can support a maximum data rate of 500Mbps for a distance of about 500 meters. Repeaters are
required every 1 to 10 kilometers.
Coaxial cable is used in long distance telephone networks, local area networks, and cable TV distribution
networks.
The speed of transmission in copper cable is 2.3 × 10^8 meters/second. Note that this speed is less than the
speed of light in a vacuum (3 × 10^8 meters/second).
Based on the speed of transmission and the distance (length of the cable), the propagation delay can be
calculated using the formula
Delay = distance/speed
For example, if the distance is 10 kilometers, the propagation delay is
Delay = 10,000/(2.3 × 10^8) seconds = 43.48 microseconds
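The delay formula is simple enough to capture in a one-line helper; a Python sketch (names are illustrative):

```python
def propagation_delay(distance_m, speed_mps):
    """Propagation delay = distance / speed of the signal in the medium."""
    return distance_m / speed_mps

# 10 km of copper cable at 2.3e8 m/s, as in the example above:
delay_us = propagation_delay(10_000, 2.3e8) * 1e6
print(delay_us)  # about 43.48 microseconds
```

The same helper applies to any medium once the propagation speed is known, e.g. 2 × 10^8 m/s for optical fiber or 3 × 10^8 m/s for radio in free space.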
3.3 OPTICAL FIBER
Optical fiber is now being deployed extensively and is the most preferred medium for all types of networks
because of the high data rates that can be supported. Light in a glass medium can carry more information over
large distances, as compared to electrical signals in a copper cable or a coaxial cable.
Optical fiber is the most attractive transmission medium because of its support for very high data rates and low
attenuation.
The challenge in the initial days of research on fiber was to develop glass so pure that at least 1% of the light
would be retained at the end of 1 kilometer. This feat was achieved in 1970. The recent advances in fiber have
been phenomenal; light can traverse 100km without any amplification, thanks to research in making purer
glass. With the state of the art, the loss is about 0.35dB/km at 1310 nanometers and 0.25dB/km at 1550
nanometers.
Light transmission in the fiber works on the principle that the light waves are reflected within the core and
guided to the end of the fiber, provided the angle at which the light waves are transmitted is controlled. Note
that if the angle is not proper, the light is refracted and not reflected. The fiber medium has a core and cladding,
both pure solid glass and protected by acrylate coating that surrounds the cladding.
As shown in Figure 3.2, there are two types of fiber: single mode and multimode. Single mode fiber has a small
core and allows only one ray (or mode) of light to propagate at a time. Multimode fiber, the first to be
commercialized, has a much larger core than single mode fiber and allows hundreds of rays of light to be
transmitted through the fiber simultaneously. The larger core diameter allows low-cost optical transmitters and
connectors and hence is cheaper.
Figure 3.2: Optical fiber.
Gigabits and even terabits of data can be transmitted through the fibers, and the future lies in optical fiber
networks. Currently twisted copper wire is being used for providing telephones to our homes, but soon fiber to
the home will be a reality.
The speed of transmission is 2 × 10^8 meters/second in optical fiber.
We will study optical communication systems in detail in Chapter 14, "Optical Fiber Communication Systems."
The two types of optical fiber are single mode and multimode fiber. Single-mode fiber allows only one ray (or
mode) of light to propagate at a time whereas multimode fiber allows multiple rays (or modes).
3.4 TERRESTRIAL RADIO
Free space as the medium has the main advantage that the receiver can be fixed or mobile. Free space is
called an unguided medium because the electromagnetic waves can travel freely in all directions. Depending on
the frequency of the radio waves, the propagation characteristics vary, and different frequencies are used for
different applications, based on the required propagation characteristics. Radio is used for broadcasting
extensively because a central station can transmit the program to be received by a large number of receivers
spread over a large geographical area. In this case, the transmitter transmits at a specific frequency, and all the
receivers tune to that frequency to receive the program.
Radio as a transmission medium has the main advantage that it supports mobility. In addition, installation and
maintenance of radio systems are very easy.
In two-way communication systems such as for voice, data, or video, there is a base station located at a fixed
place in the area of operation and a number of terminals. As shown in Figure 3.3, a pair of frequencies is used
for communication: one frequency for transmitting from the base station to the terminals (the downlink) and
one frequency from the terminal to the base station (the uplink). This frequency pair is called the radio channel.
Figure 3.3: Two-way communication using radio.
Note: A radio channel consists of a pair of frequencies; one frequency is used for uplink and one frequency
is used for downlink. However, in some radio systems, a single frequency is used in both directions.
Radio as the transmission medium has special characteristics that also pose special problems.
Path loss: As the distance between the base station and the terminal increases, the received signal becomes
weaker and weaker, even if there are no obstacles between the base station and the terminal. The higher the
frequency, the higher the path loss. Many models are available (such as Egli's model and the Okumura-Hata
model) to estimate path loss. To compensate for path loss, we need to use high-gain antennas and also
develop receivers of high sensitivity.
Note: Path loss causes a heavy attenuation of the radio signal. Hence, the radio receiver should be capable
of receiving very weak signals. In other words, the receiver should have high sensitivity.
Fading: Where there are obstacles between the base station and the terminal (hills, buildings, etc.), the signal
strength goes down further, which is known as fading. In densely populated urban areas, the signal can take
more than one path: one signal path can be directly from the base station to the terminal and another path can
be from the base station to a building and the signal reflected from the building and then received at the
terminal. Sometimes, there may not be a line of sight between the base station and terminal antennas, and
hence the signals received at the terminals are from different paths. The received signal is the sum of many
identical signals that differ only in phase. As a result, there will be fading of the signal, which is known as
multipath fading or Rayleigh fading.
Note: Multipath fading is predominant in mobile communication systems. The mobile phone receives
signals that traverse different paths.
Rain attenuation: Rain affects radio frequency signals, and in some frequency bands the attenuation is
severe. When designing radio systems, the effect of rain (and hence the additional path loss) needs to be
taken into consideration.
The radio spectrum is divided into different frequency bands, and each band is used for a specific application.
The details of the radio spectrum are discussed in the next section.
Note: Radio wave propagation is very complex, and a number of mathematical models have been
developed to study the propagation in free space.
3.4.1 Radio Spectrum
Electrical communication is achieved by using electromagnetic waves, that is, oscillations of electric and
magnetic fields in free space. The electromagnetic waves have two main parts: radio waves and light waves.
Distinguishing between radio waves and light waves reflects the technology used to detect them. The radio
waves are measured in frequency (Hz), and the other types of waves in terms of wavelength (meters) or energy
(electron volts).
The electromagnetic spectrum consists of the following:
Radio waves : 300GHz and lower (frequency)
Sub-millimeter waves : 100 micrometers to 1 millimeter (wavelength)
Infrared : 780 nanometers to 100 micrometers (wavelength)
Visible light : 380 nanometers to 780 nanometers (wavelength)
Ultraviolet : 10 nanometers to 380 nanometers (wavelength)
X-ray : 120eV to 120keV (energy)
Gamma rays : 120 keV and up (energy)
The radio spectrum spans from 3kHz to 300GHz. This spectrum is divided into different bands. Because of the
differences in propagation characteristics of the waves with different frequencies, and also the effect of
atmosphere and rain on these waves, different bands are used for different applications. Table 3.1 gives the
various frequency bands, the corresponding frequency ranges, and some application areas in each band.
Table 3.1: The radio frequency spectrum and typical applications

Very Low Frequency (VLF), 3kHz to 30kHz: Radio navigation, maritime mobile (communication on ships)
Low Frequency (LF), 30kHz to 300kHz: Radio navigation, maritime mobile
Medium Frequency (MF), 300kHz to 3MHz: AM radio broadcast, aeronautical mobile
High Frequency (HF), 3MHz to 30MHz: Maritime mobile, aeronautical mobile
Very High Frequency (VHF), 30MHz to 300MHz: Land mobile, FM broadcast, TV broadcast, aeronautical mobile, radio paging, trunked radio
Ultra-High Frequency (UHF), 300MHz to 1GHz: TV broadcast, mobile satellite, land mobile, radio astronomy
L band, 1GHz to 2GHz: Aeronautical radio navigation, radio astronomy, earth exploration satellites
S band, 2GHz to 4GHz: Space research, fixed satellite communication
C band, 4GHz to 8GHz: Fixed satellite communication, meteorological satellite communication
X band, 8GHz to 12GHz: Fixed satellite broadcast, space research
Ku band, 12GHz to 18GHz: Mobile and fixed satellite communication, satellite broadcast
K band, 18GHz to 27GHz: Mobile and fixed satellite communication
Ka band, 27GHz to 40GHz: Inter-satellite communication, mobile satellite communication
Millimeter, 40GHz to 300GHz: Space research, inter-satellite communication
The radio spectrum is divided into a number of bands, and each band is used for specific applications.
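Table 3.1 maps naturally onto a small lookup structure, which is essentially what the frequency-band database project at the end of this chapter asks for. A Python sketch (band boundaries taken from Table 3.1; names are illustrative):

```python
# Band boundaries from Table 3.1, in Hz (lower bound inclusive).
BANDS = [
    ("VLF", 3e3, 30e3), ("LF", 30e3, 300e3), ("MF", 300e3, 3e6),
    ("HF", 3e6, 30e6), ("VHF", 30e6, 300e6), ("UHF", 300e6, 1e9),
    ("L band", 1e9, 2e9), ("S band", 2e9, 4e9), ("C band", 4e9, 8e9),
    ("X band", 8e9, 12e9), ("Ku band", 12e9, 18e9), ("K band", 18e9, 27e9),
    ("Ka band", 27e9, 40e9), ("Millimeter", 40e9, 300e9),
]

def band_of(frequency_hz):
    """Return the name of the band containing the given frequency."""
    for name, low, high in BANDS:
        if low <= frequency_hz < high:
            return name
    return None  # outside the 3kHz to 300GHz radio spectrum

print(band_of(100e6))  # VHF (FM broadcast range)
```

For example, a 100MHz FM broadcast signal falls in the VHF band, and a 6GHz fixed satellite uplink falls in the C band.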
International Telecommunications Union (ITU) assigns specific frequency bands for each application. Every
country's telecommunications authorities in turn make policies on the use of these frequency bands. The
specific frequency bands for some typical applications are listed here:
AM radio 535 to 1605 kHz
Citizen band radio 27MHz
Cordless telephone devices 43.69 to 50 MHz
VHF TV 54 to 72 MHz, 76 to 88 MHz, 174 to 216 MHz
Aviation 118 to 137 MHz
Ham radio 144 to 148 MHz
420 to 450 MHz
UHF TV 470 to 608 MHz
614 to 806 MHz
Cellular phones 824 to 849 MHz, 869 to 894 MHz
Personal communication services 901 to 902 MHz, 930 to 931 MHz, 940 to 941 MHz
Search for extra-terrestrial intelligence 1420 to 1660 MHz
Inmarsat satellite phones 1525 to 1559 MHz, 1626.5 to 1660.5 MHz
The representative terrestrial radio systems are discussed in Chapter 12, "Terrestrial Radio Communication
Systems."
Note: Some frequency bands such as the ham radio bands and the Industrial, Scientific, and Medical (ISM)
band are free bands; no prior government approvals are required to operate radio systems in those bands.
3.5 SATELLITE RADIO
Arthur C. Clarke proposed the concept of communication satellites. A communication satellite is a relay in the
sky. If the satellite is placed at a distance of about 36,000 km above the surface of the earth, then it appears
stationary with respect to the earth because it has an orbital period of 24 hours. This orbit is called a
geostationary orbit, and the satellites are called geostationary satellites. As shown in Figure 3.4, three
geostationary communication satellites can cover the entire earth.
Figure 3.4: Three geostationary satellites covering the entire earth.
On Earth, we need satellite antennas (which are a part of the Earth stations) that point toward the satellite for
communication. A pair of frequencies is used for communication with the satellite: the frequency used from
Earth station to the satellite is called the uplink frequency, and the frequency from the satellite to the Earth
station is called the downlink frequency. The signals transmitted by an Earth station to the satellite are amplified
and then relayed back to the receiving Earth stations.
The main attraction of communication satellites is distance insensitivity. To provide communication facilities
across the continents and also to rural and remote areas where laying cables is difficult, satellite
communication will be very attractive. However, satellite communication has a disadvantage: delay. The
propagation time for the signal to travel all the way to the satellite and back is nearly 240 msec. Also, because
the signal has to travel long distances, there will be signal attenuation, and high-sensitivity receivers are
required at both the satellite and the Earth stations.
To develop networks using satellite communications, there are two types of configurations, mesh and star,
which are shown in Figure 3.5.
Figure 3.5: Satellite communication network configurations.
In mesh configuration, two Earth stations can communicate via the satellite. In this configuration, the size of the
antennas at the Earth stations is large (starting from 4.5 meters diameter).
Note: In star configuration, the size of the antenna will be very small, so the cost of an Earth station will be
low. However, the disadvantage of star configuration is that the propagation delay is high.
In star configuration, there will be a central station (called the hub) and a number of Earth stations each with a
Very Small Aperture Terminal (VSAT). When a VSAT has to communicate with another VSAT, the signal will be
sent to the satellite, the satellite will relay the signal to the hub, the hub will amplify the signal and resend it to
the satellite, and then the satellite will relay it to the other VSAT. In this configuration, the roundtrip delay will be
double that of the mesh configuration. However the advantage is that smaller Earth stations can be used. VSAT
communication is now used extensively for communication networks because of the low cost.
Satellite communication systems work in two configurations: mesh and star. In mesh configuration, Earth
stations communicate with each other directly; in star configuration, the Earth stations communicate via a
central station.
ITU allocated specific frequency bands for satellite communications. Some bands used for satellite
communications are 6/4GHz, 14/12GHz, and 17/12GHz bands. As the frequency goes up, the size of the
antenna goes down, and use of higher frequencies results in lower cost of equipment. However, the effect of
rain on signals of different frequencies varies. Rain attenuation is more in 14/ 12GHz bands as compared to
6/4GHz bands.
In addition to the geostationary satellites, low Earth orbiting satellites also are being deployed for providing
communication facilities.
In Chapter 13, "Satellite Communication Systems," we will discuss satellite communication systems in greater
detail.
Note: The size of the satellite Earth station antenna decreases as the frequency of operation increases.
Hence, the higher the frequency of operation, the smaller the antenna.
3.6 RADIO SPECTRUM MANAGEMENT
In the electromagnetic spectrum that spans from 0Hz to about 10^25 Hz (cosmic rays), the radio spectrum
spans from 3kHz to 300GHz in the VLF, LF, MF, HF, VHF, UHF, SHF, and EHF bands. The only portion of the
radio spectrum not allocated to anyone is 3 to 9kHz (rather, it is allocated to every individual for freedom of
speech).
The radio spectrum is a limited natural resource, and hence international and national authorities regulate its
use.
The International Telecommunications Union (ITU), through the World Administrative Radio Conferences
(WARC), allots frequency bands for different application areas. For administrative convenience, the world is
divided into three regions. The allocations for these regions differ to some extent. All nations are bound by
these regulations for frequency use. The WARC regulations are only broad guidelines, because a centralized
authority in every country manages the radio spectrum. In the United States, the frequency spectrum is
managed by the FCC (Federal Communications Commission). In India, the agency is WPC (Wireless Planning
and Coordination Cell) under the Government of India.
The complexity of radio spectrum management results from several factors:
There are some frequency bands for the exclusive use of governmental agencies and some for
nongovernmental agencies. Some frequency bands are shared. When the same band is shared by
different agencies for the same application or for different applications, it is necessary to ensure that there
is no interference between various systems.
When a frequency band is allocated for a particular application (e.g., cellular mobile communication), the
service can be provided by different operators in the same area. Without a coordinated effort in band
allocation, the spectrum cannot be used efficiently to support a large number of users for that service in the
same frequency band.
Because of higher user demands, the frequency band allocated for a particular service may become
congested and new bands need to be allocated. For example, in the case of mobile communications, the
900MHz band got congested, and the 1800MHz band was allocated. This type of new allocation of bands
calls for long-term planning of spectrum use.
As new application areas emerge, accommodating these applications along with the existing applications in
the required frequency bands is another challenge in spectrum management.
New technologies (better techniques for reducing bandwidth requirements, new frequency reuse
technologies, antenna technologies, etc.) lead to better utilization of spectrum. Ensuring that the new
technologies are incorporated is important in spectrum management.
Agencies will be allocated fixed frequencies for use. For various reasons, however, these frequencies may
not be used at all or may not be used efficiently. A periodic review of the use of the spectrum is also
required. A review process needs to be followed by which an application for frequency allotment will be
processed, frequencies allocated, and use monitored.
Radio spectrum management ensures that the allotted spectrum is being used efficiently, to ensure that
there is no interference between different radio systems and to allocate new frequency bands for new
services.
All these aspects make spectrum management a difficult task, and the need for an efficient spectrum
management methodology cannot be overemphasized.
3.6.1 Spectrum Management Activities
Radio spectrum management involves three major activities:
1. Spectrum assignment and selection involves recommending a specific frequency band of operation for
use in a given location. For this, extensive databases containing all the information regarding the present
uses of radio spectrum have to be developed and maintained so that the effect of the proposed
frequency band uses on existing systems can be studied. Depending on the interference considerations,
specific frequency bands can be allocated.
2. Spectrum engineering and analysis involves computations for installations of radio equipment at specific
locations and for predicting the system performance in the radio environment.
3. Spectrum planning involves long-term/emergency planning, keeping in view, among other things, the
demands for new services and technological changes.
Because all these activities involve huge computation, various national authorities are deploying computerized
spectrum management systems. Expert systems are also being developed to manage the spectrum efficiently.
Radio spectrum management involves spectrum assignment and selection, spectrum engineering and
analysis, and long-term/emergency spectrum planning for new services.
3.6.2 Cost of Spectrum
Radio spectrum is a limited natural resource, and its optimal utilization must be ensured. Government agencies
charge users for the spectrum. The charges are generally on an annual basis.
Nowadays, the government agencies are also using innovative methods for making money out of the spectrum.
The present trend is to auction the spectrum. The highest bidder will be given the spectrum for a specific
application. For the 3rd Generation (3G) wireless systems, this approach has been followed, and it turned out
that in most countries the spectrum cost is much higher than the infrastructure (equipment) cost.
Operators that obtain specific frequency bands for radio services need to pay for the cost of the spectrum.
Summary
The details of various transmission media used in telecommunications systems are presented in this chapter.
Based on the considerations of cost, data rates required, and distance to be covered, the transmission medium
has to be chosen. The transmission media options are twisted copper pair, coaxial cable, optical fiber, and
radio. Twisted pair is of low cost, but the attenuation is very high and the data rates supported are low. Because
of the low cost, it is used extensively in the telephone network, in PBX, and for LANs. Coaxial cable supports
higher data rates compared to twisted pair and is used in cable TV, LANs, and the telephone network. Optical
fiber supports very large data rates and is now the preferred medium for LANs and the telephone networks. The
main attraction of radio as the transmission medium is its support for mobility. Furthermore, installation and
maintenance of radio systems is easy because there is no need to dig below ground. Terrestrial radio systems
are used extensively in the telephone network as well as for mobile communications. Wireless LANs also are
becoming predominant nowadays. Satellite radio has the main advantage that remote and rural areas can be
connected easily. Satellite radio is also used extensively for broadcasting.
Because radio spectrum is a precious natural resource, the spectrum has to be used effectively. ITU allocates
the frequency bands for different applications. In each country, there is a government organization that
coordinates the allocation and use of the spectrum for different applications. We have studied the intricacies of
spectrum management, which include planning, allocation, and monitoring the use of the spectrum.
References
R. Horak. Communications Systems and Networks, Third Edition. Wiley-Dreamtech India Pvt. Ltd., 2002.
This book gives comprehensive coverage of many topics in telecommunication and details of the
transmission media.
http://www.inmarsat.com Inmarsat operates a worldwide mobile satellite network. You can get the details of
the satellites and the services offered from this URL.
http://www.iec.org/tutorials The Web site of International Engineering Consortium. This link provides many
tutorials on telecommunications topics.
Questions
1. List the different transmission media and their applications.
2. List the frequency bands of the radio spectrum and the applications in each frequency band.
3. What are the issues involved in radio spectrum management?
4. Some frequency bands such as for ham radio and the Industrial, Scientific and Medical (ISM) band are
not regulated, and anyone can use those bands. Debate the pros and cons of such unregulated bands.
Exercises
1. A terrestrial radio link is 30 kilometers long. Find out the propagation delay, assuming that the
speed of light is 3 × 10^8 meters/second.
2. The coaxial cable laid between two telephone switches is 40 kilometers long. Find out the
propagation delay if the speed of transmission in the coaxial cable is 2.3 × 10^8 meters/second.
3. Calculate the propagation delay in an optical fiber of 100 kilometers. The speed is 2 × 10^8
meters/second.
4. Compile the list of frequency bands used for satellites for the following applications: (a)
broadcasting, (b) telephone communications, (c) weather monitoring, (d) military applications.
5. Calculate the propagation delay from one Earth station to another Earth station in a satellite
communication network for (a) mesh configuration and (b) star configuration.
Answers
1. For a terrestrial radio link of 30 kilometers, assuming that the speed of light is 3 × 10^8 meters/second, the
propagation delay is 30,000/(3 × 10^8) seconds = 100 microseconds.
2. When the cable length is 40 kilometers and the speed of transmission in the coaxial cable is 2.3 × 10^8
meters/second, the propagation delay is 40,000/(2.3 × 10^8) seconds = 173.91 microseconds.
3. The propagation delay in an optical fiber of 100 kilometers, if the speed is 2 × 10^8 meters/second, is
100,000/(2 × 10^8) seconds = 0.5 msec.
4. In satellite communication, broadcasting and voice communication are done in the C band and Ku band.
Band Designation: Frequency Range
L band: 1 to 2 GHz
S band: 2 to 4 GHz
C band: 4 to 8 GHz
X band: 8 to 12 GHz
Ku band: 12 to 18 GHz
K band: 18 to 27 GHz
Ka band: 27 to 40 GHz
Q band: 33 to 50 GHz
U band: 40 to 60 GHz
V band: 40 to 75 GHz
W band: 75 to 110 GHz
5. In mesh configuration, the communication is from one Earth station to another Earth station. Hence, the
propagation delay is 240 msec. In star configuration, the transmission is from VSAT to the satellite, satellite
to the hub, hub to the satellite, and then satellite to the other VSAT. Hence, the propagation delay is 480
msec.
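The propagation-delay arithmetic in the answers above reduces to distance divided by speed. The calculations can be checked with a short Python script (the function name is illustrative, not from the text):

```python
def propagation_delay(distance_m, speed_m_per_s):
    """One-way propagation delay in seconds: distance / propagation speed."""
    return distance_m / speed_m_per_s

radio = propagation_delay(30_000, 3e8)     # Answer 1: 100 microseconds
coax = propagation_delay(40_000, 2.3e8)    # Answer 2: ~173.91 microseconds
fiber = propagation_delay(100_000, 2e8)    # Answer 3: 0.5 msec

print(radio * 1e6, coax * 1e6, fiber * 1e3)
```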
Projects
1. Develop a database of frequency bands used for different applications given in Table 3.1. Develop a graphical user interface to facilitate the display of the frequency bands for a given application.
2. Study the frequency bands of operation for Inmarsat satellites. Inmarsat operates a worldwide mobile satellite network.
Chapter 4: Coding of Text, Voice, Image, and Video
Signals
OVERVIEW
The information that has to be exchanged between two entities (persons or machines) in a communication
system can be in one of the following formats:
Text
Voice
Image
Video
In an electrical communication system, the information is first converted into an electrical signal. For instance, a
microphone is the transducer that converts the human voice into an analog signal. Similarly, the video camera
converts the real-life scenery into an analog signal. In a digital communication system, the first step is to
convert the analog signal into digital format using analog-to-digital conversion techniques. This digital signal
representation for various types of information is the topic of this chapter.
4.1 TEXT MESSAGES
Text messages are generally represented in ASCII (American Standard Code for Information Interchange), in
which a 7-bit code is used to represent each character. Another code form called EBCDIC (Extended Binary
Coded Decimal Interchange Code) is also used. To transmit text messages, first the text is converted into one
of these formats, and then the bit stream is converted into an electrical signal.
Using ASCII, the number of characters that can be represented is limited to 128 because only a 7-bit code is used. The ASCII code is used for representing many European languages as well. To represent Indian
languages, a standard known as Indian Standard Code for Information Interchange (ISCII) has been developed.
ISCII has both 7-bit and 8-bit representations.
ASCII is the most widely used coding scheme for representation of text in computers. ISCII is used to represent
text of Indian languages.
Note: In extended ASCII, each character is represented by 8 bits. Using 8 bits, a number of graphic characters and control characters can be represented.
Unicode has been developed to represent all the world languages. Unicode uses 16 bits to represent each
character and can be used to encode the characters of any recognized language in the world. Modern
programming languages such as Java and markup languages such as XML support Unicode.
Unicode is used to represent any world language in computers. Unicode uses 16 bits to represent each
character. Java and XML support Unicode.
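The code sizes discussed above can be observed directly using Python's built-in codecs; this is only an illustration of the 7-bit versus 16-bit representations, not part of any standard:

```python
# Encode the same character in ASCII and in 16-bit Unicode (UTF-16).
ascii_bytes = "A".encode("ascii")       # 7-bit ASCII code, stored in one byte
utf16_bytes = "A".encode("utf-16-be")   # one 16-bit Unicode code unit

print(len(ascii_bytes), len(utf16_bytes), ord("A"))  # 1 2 65
```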
It is important to note that the ASCII/Unicode coding mechanism is not the most efficient representation in Shannon's sense. If we consider the frequency of occurrence of the letters of a language and use short codewords for frequently occurring letters, the coding will be more efficient. However, more processing will be required, and more delay will result.
An early coding mechanism for text messages that exploits letter frequencies was developed by Morse. Morse code was used extensively for communication in the old days; many ships used it until May 2000. In Morse code, characters are represented by dots and dashes. Morse code is no longer used in standard communication systems.
Note: Morse code uses dots and dashes to represent various English characters. It is an efficient code because short codes are used to represent high-frequency letters and long codes are used to represent low-frequency letters. The letter E is represented by just one dot, and the letter Q by dash dash dot dash.
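The variable-length idea behind Morse code can be sketched in a few lines; the table here is deliberately partial, covering only a few letters:

```python
# Short codes for frequent letters (E, T), longer codes for rare ones (Q).
MORSE = {"E": ".", "T": "-", "A": ".-", "S": "...", "O": "---", "Q": "--.-"}

def to_morse(word):
    """Encode a word using the partial table above."""
    return " ".join(MORSE[ch] for ch in word.upper())

print(to_morse("seat"))  # ... . .- -
```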
4.2 VOICE
To transmit voice from one place to another, the speech (acoustic signal) is first converted into an electrical
signal using a transducer, the microphone. This electrical signal is an analog signal. The voice signal
corresponding to the speech "how are you" is shown in Figure 4.1. The important characteristics of the voice
signal are given here:
The voice signal occupies a bandwidth of 4kHz, i.e., the highest frequency component in the voice signal is 4kHz. Though higher frequency components are present, they are not significant, so a filter is used to remove all frequency components above 4kHz. In telephone networks, the bandwidth is limited to only 3.4kHz.
The pitch varies from person to person. Pitch is the fundamental frequency in the voice signal. In a male voice, the pitch is in the range of 50-250 Hz. In a female voice, the pitch is in the range of 200-400 Hz.
The speech sounds can be classified broadly as voiced sounds and unvoiced sounds. Signals corresponding to voiced sounds (such as the vowels a, e, i, o, u) are periodic and have high amplitude. Signals corresponding to unvoiced sounds (such as th and s) look like noise signals and have low amplitude.
The voice signal is a nonstationary signal, i.e., its characteristics (such as pitch and energy) vary with time. However, if we take small portions of the voice signal, of about 20msec duration, the signal can be considered stationary. In other words, during this small duration, the characteristics of the signal do not change much. Therefore, the pitch value can be calculated from a 20msec segment of the voice signal. However, if we take the next 20msec, the pitch may be different.
Figure 4.1: Speech waveform.
The voice signal occupies a bandwidth of 4kHz. The voice signal can be broken down into a fundamental
frequency and its harmonics. The fundamental frequency or pitch is low for a male voice and high for a female
voice.
These characteristics are used while converting the analog voice signal into digital form. Analog-to-digital
conversion of voice signals can be done using one of two techniques: waveform coding and vocoding.
Note: The characteristics of speech signals described here are used extensively in speech processing applications such as text-to-speech conversion and speech recognition. Music signals have a bandwidth of 20kHz. The techniques used for converting music signals into digital form are the same as for voice signals.
4.2.1 Waveform Coding
Waveform coding is done in such a way that the analog electrical signal can be reproduced at the receiving end
with minimum distortion. Hundreds of waveform coding techniques have been proposed by many researchers.
We will study two important waveform coding techniques: pulse code modulation (PCM) and adaptive
differential pulse code modulation (ADPCM).
Pulse Code Modulation
Pulse Code Modulation (PCM) is the first and the most widely used waveform coding technique. The ITU-T
Recommendation G.711 specifies the algorithm for coding speech in PCM format.
The PCM coding technique is based on Nyquist's theorem, which states that if a signal is sampled uniformly at a rate of at least twice its highest frequency component, it can be reconstructed without any distortion. The highest frequency component in the voice signal is 4kHz, so we need to sample the waveform at 8000 samples per second, that is, one sample every 1/8000th of a second (125 microseconds). We find the amplitude of the waveform every 125 microseconds and transmit that value instead of transmitting the analog signal as it is. The sample values are still analog values, so we "quantize" them into a fixed number of levels. As shown in Figure 4.2, if the number of quantization levels is 256, we can represent each sample by 8 bits. So, 1 second of voice signal can be represented by 8000 × 8 = 64,000 bits. Hence, for transmitting voice using PCM, we require a 64 kbps data rate. However, note that since we are approximating the sample values through quantization, there will be distortion in the reconstructed signal; this distortion is known as quantization noise.
Figure 4.2: Pulse Code Modulation.
ITU-T standard G.711 specifies the mechanism for coding of voice signals. The voice signal is band limited to
4kHz, sampled at 8000 samples per second, and each sample is represented by 8 bits. Hence, using PCM,
voice signals can be coded at 64kbps.
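The sampling and quantization steps described above can be sketched in Python on a test tone; a linear quantizer is used here for brevity, whereas G.711 itself uses logarithmic quantization:

```python
import math

FS = 8000        # samples per second (Nyquist rate for 4kHz voice)
BITS = 8         # bits per sample
LEVELS = 2 ** BITS

# One second of a 1 kHz test tone, sampled every 125 microseconds.
samples = [math.sin(2 * math.pi * 1000 * n / FS) for n in range(FS)]

# Quantize each sample into one of 256 levels (0..255).
quantized = [round((s + 1) / 2 * (LEVELS - 1)) for s in samples]

bit_rate = FS * BITS
print(bit_rate)  # 64000 bits per second
```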
In the PCM coding technique standardized by ITU in the G.711 recommendation, the nonlinear characteristic of human hearing is exploited: the ear is more sensitive to quantization noise in low-amplitude signals than to noise in large-amplitude signals. In G.711, a logarithmic (nonlinear) quantization function is applied to the speech signal, so small signals are quantized with higher precision. Two quantization functions, called A-law and µ-law, have been defined in G.711. µ-law is used in the U.S. and Japan. A-law is used in Europe and the countries that follow European standards. The speech quality produced by the PCM coding technique is called toll quality speech and is taken as the reference against which the quality of other speech coding techniques is compared.
For CD-quality audio, the sampling rate is 44.1kHz (one sample about every 23 microseconds), and each sample is coded with 16 bits. For a two-channel stereo audio stream, the bit rate required is 2 × 44,100 × 16 = 1.41Mbps.
Note: The quality of speech obtained using the PCM coding technique is called toll quality. To compare the quality of different coding techniques, toll quality speech is taken as the reference.
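The logarithmic quantization used in G.711 can be illustrated with the standard µ-law compression curve (µ = 255); this sketch shows only the companding function, not the full G.711 encoder:

```python
import math

MU = 255  # the mu parameter used in North American/Japanese PCM

def mu_law_compress(x):
    """Map x in [-1, 1] to [-1, 1], expanding small amplitudes."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

# A small signal is boosted well above its linear value before uniform
# quantization, so it ends up quantized with higher precision:
print(round(mu_law_compress(0.01), 3), round(mu_law_compress(0.5), 3))  # 0.228 0.876
```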
Adaptive Differential Pulse Code Modulation
One simple modification that can be made to PCM is to code the difference between two successive samples rather than coding the samples directly. This technique is known as differential pulse code modulation (DPCM).
Another characteristic of the voice signal that can be used is that a sample value can be predicted from past
sample values. At the transmitting side, we predict the sample value and find the difference between the
predicted value and the actual value and then send the difference value. This technique is known as adaptive
differential pulse code modulation (ADPCM). Using ADPCM, voice signals can be coded at 32kbps without any
degradation of quality as compared to PCM.
ITU-T Recommendation G.721 specifies the ADPCM coding algorithm. In ADPCM, the value of the speech sample is not transmitted; instead, the difference between the predicted value and the actual sample value is transmitted. Generally, the ADPCM coder takes PCM coded speech data and converts it to ADPCM data.
The block diagram of an ADPCM encoder is shown in Figure 4.3(a). Eight-bit µ-law PCM samples are input to the encoder and are converted into linear format. Each sample value is predicted using a prediction algorithm, and then the predicted value of the linear sample is subtracted from the actual value to generate the difference signal. Adaptive quantization is performed on this difference value to produce a 4-bit ADPCM sample value, which is transmitted. Instead of representing each sample by 8 bits, in ADPCM only 4 bits are used. At the receiving end, the decoder, shown in Figure 4.3(b), obtains the dequantized version of the difference signal. This value is added to the value generated by the adaptive predictor to produce the linear PCM coded speech, which is converted back to µ-law-based PCM coded speech.
Figure 4.3: (a) ADPCM Encoder.
Figure 4.3: (b) ADPCM Decoder.
There are many other waveform coding techniques, such as delta modulation (DM) and continuously variable slope delta modulation (CVSD). Using these, the coding rate can be reduced to 16kbps, 9.8kbps, and so on. As the coding rate is reduced, the quality of the speech also goes down. However, there are coding techniques with which good quality speech can be produced at low coding rates.
The PCM coding technique is used extensively in telephone networks. ADPCM is used in telephone networks
as well as in many radio systems such as digital enhanced cordless telecommunications (DECT).
In ADPCM, each sample is represented by 4 bits, and hence the data rate required is 32kbps. ADPCM is used
in telephone networks as well as radio systems such as DECT.
Note: Over the past 50 years, hundreds of waveform coding techniques have been developed with which data rates can be reduced to as low as 9.8kbps while still producing good quality speech.
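The difference-coding idea behind DPCM/ADPCM can be sketched with a toy coder. This is not the G.721 algorithm (which uses adaptive quantization and a more elaborate predictor); it only shows the principle of predicting each sample from the previous decoded sample and quantizing the prediction error to 4 bits:

```python
STEP = 128  # fixed quantizer step (illustrative; G.721 adapts this)

def encode(samples):
    codes, prediction = [], 0
    for s in samples:
        diff = s - prediction
        code = max(-8, min(7, round(diff / STEP)))  # 4-bit code: -8..7
        codes.append(code)
        prediction += code * STEP                   # track the decoder's state
    return codes

def decode(codes):
    out, prediction = [], 0
    for c in codes:
        prediction += c * STEP
        out.append(prediction)
    return out

original = [0, 200, 380, 520, 600, 590, 450, 200]
restored = decode(encode(original))
print(restored)
```

The restored waveform tracks the original to within one quantizer step, even though only 4 bits per sample were sent.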
4.2.2 Vocoding
A radically different method of coding speech signals was proposed by H. Dudley in 1939. He named his coder
vocoder, a term derived from VOice CODER. In a vocoder, the electrical model for speech production shown in Figure 4.4 is used. This model is called the source-filter model because the speech production mechanism is considered as two distinct entities: a filter to model the vocal tract and an excitation source. The excitation source consists of a pulse generator and a noise generator. The filter is excited by the pulse generator to produce voiced sounds (vowels) and by the noise generator to produce unvoiced sounds (consonants). The vocal tract filter is a time-varying filter, that is, the filter coefficients vary with time. As the characteristics of the voice signal vary slowly with time, for time periods on the order of 20msec, the filter coefficients can be assumed to be constant.
Figure 4.4: Electrical model of speech production.
In vocoding techniques, at the transmitter, the speech signal is divided into frames of 20msec in duration. Each
frame contains 160 samples. Each frame is analyzed to check whether it is a voiced frame or unvoiced frame
by using parameters such as energy and amplitude levels. For voiced frames, the pitch is determined. For each frame, the filter coefficients are also determined. These parameters (voiced/unvoiced classification, filter coefficients, and pitch for voiced frames) are transmitted to the receiver. At the receiving end, the speech signal is reconstructed using the electrical model of speech production. Using this approach, the data rate can be reduced to as low as 1.2kbps. However, compared to waveform coding techniques, the quality of the speech will not be very good. A number of techniques are used for calculating the filter coefficients. Linear prediction is the most widely used of these techniques.
In vocoding techniques, the electrical model of speech production is used. In this model, the vocal tract is
represented as a filter. The filter is excited by a pulse generator to produce voiced sounds and by a noise
generator to produce unvoiced sounds.
Note: The voice generated using vocoding techniques sounds very mechanical or robotic. Such a voice is called synthesized voice. Many speech synthesizers, which are integrated into robots, cameras, and such, use vocoding techniques.
Linear Prediction
The basic concept of linear prediction is that a sample of a voice signal can be approximated as a linear combination of the past samples of the signal.

If S_n is the nth speech sample, then

S_n = a_1 S_(n-1) + a_2 S_(n-2) + ... + a_P S_(n-P) + G U_n

where a_k (k = 1, ..., P) are the linear prediction coefficients, G is the gain of the vocal tract filter, and U_n is the excitation to the filter. The linear prediction coefficients (generally 8 to 12) represent the vocal tract filter coefficients. Calculating the linear prediction coefficients involves solving P linear equations. One of the most widely used methods for solving these equations is the Durbin and Levinson algorithm.
Coding of the voice signal using linear prediction analysis involves the following steps:
At the transmitting end, divide the voice signal into frames, each frame of 20msec duration. For each
frame, calculate the linear prediction coefficients and pitch and find out whether the frame is voiced or
unvoiced. Convert these values into code words and send them to the receiving end.
At the receiver, using these parameters and the speech production model, reconstruct the voice signal.
In the linear prediction technique, a voice sample is approximated as a linear combination of the past P samples. The linear prediction coefficients are calculated every 20 milliseconds and sent to the receiver, which reconstructs the speech samples using these coefficients. Using this approach, voice signals can be compressed to as low as 1.2kbps.
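The Durbin and Levinson recursion mentioned above can be sketched as follows, assuming the autocorrelation values r[0..P] of a 20msec frame have already been computed; the test data is a synthetic example, not real speech:

```python
def levinson_durbin(r, p):
    """Solve for p linear prediction coefficients from autocorrelations r."""
    a = [0.0] * (p + 1)
    e = r[0]                                 # prediction error energy
    for i in range(1, p + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / e                          # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]   # update earlier coefficients
        a = new_a
        e *= (1 - k * k)                     # residual error shrinks each order
    return a[1:], e

# Autocorrelation of a first-order process s[n] = 0.9*s[n-1] + u[n].
r = [1.0, 0.9, 0.81]
coeffs, err = levinson_durbin(r, 2)
print(coeffs)  # close to [0.9, 0.0]: the recursion recovers the model
```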
Using a linear prediction vocoder, voice signals can be compressed to as low as 1.2kbps. The quality of speech is very good at data rates down to 9.6kbps, but the voice sounds synthetic at lower data rates. Slight variations of this technique are used extensively in many practical systems, such as mobile communication systems and speech synthesizers.
Note: Variations of the LPC technique are used in many commercial systems, such as mobile communication systems and Internet telephony.
4.3 IMAGE
To transmit an image, the image is divided into a grid of pixels (or picture elements). The higher the number of pixels, the higher the resolution. Grid sizes such as 768 × 1024 and 400 × 600 are generally used in computer graphics. For black-and-white pictures, each pixel is given a certain grayscale value. If there are 256 grayscale levels, each pixel is represented by 8 bits. So, to represent a picture with a grid size of 400 × 600 pixels with each pixel of 8 bits, 240kbytes of storage is required. To represent color, the levels of the three fundamental colors (red, blue, and green) are combined. The more levels of each color used, the more shades can be represented.
In image coding, the image is divided into small grids called pixels, and each pixel is quantized. The higher the
number of pixels, the higher will be the quality of the reconstructed image.
For example, if an image is coded with a resolution of 352 × 240 pixels, and each pixel is represented by 24 bits, the size of the image is 352 × 240 × 24/8 = 247.5 kilobytes.
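The storage arithmetic used in these examples reduces to one line of code (the helper name is illustrative):

```python
def image_bytes(width, height, bits_per_pixel):
    """Uncompressed image size in bytes: pixels x bits per pixel / 8."""
    return width * height * bits_per_pixel // 8

print(image_bytes(600, 400, 8))    # 240000 bytes (the grayscale example)
print(image_bytes(352, 240, 24))   # 253440 bytes = 247.5 kilobytes (24-bit color)
```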
To store images as well as to send them through a communication medium, the image needs to be compressed. A compressed image occupies less storage space on a medium such as a hard disk or CD-ROM, and it can be transmitted faster through a communication medium.
One of the most widely used image coding formats is the JPEG format, proposed by the Joint Photographic Experts Group (JPEG) as a standard for coding of images. The block diagram of JPEG image compression is shown in Figure 4.5.
Figure 4.5: JPEG compression.
For compressing the image using the JPEG compression technique, the image is divided into blocks of 8 × 8 pixels and each block is processed using the following steps:
1. Apply the discrete cosine transform (DCT), which takes the 8 × 8 matrix and produces an 8 × 8 matrix that contains the frequency coefficients. This is similar to the Fast Fourier Transform (FFT) used in digital signal processing. The output matrix represents the image in the spatial frequency domain.
2. Quantize the frequency coefficients obtained in Step 1. This is just rounding off the values to the nearest quantization level. As a result, the quality of the image will degrade slightly.
3. Convert the quantization levels into bits. Since there is little change between consecutive frequency coefficients, the differences between the frequency coefficients are encoded instead of the coefficients themselves.
JPEG compression of an image is done in three steps: (a) division of the image into 8 × 8 blocks and application of the discrete cosine transform (DCT) to each block, (b) quantization of the frequency coefficients obtained in step (a), and (c) conversion of the quantization levels into bits. Compression ratios of 30:1 can be achieved using this technique.
Compression ratios of 30:1 can be achieved using JPEG compression. In other words, a 300kB image can be
reduced to about 10kB.
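The first two JPEG steps can be sketched on a single 8 × 8 block. Real JPEG uses a perceptually tuned quantization table and entropy coding; this sketch uses a constant quantizer step of 16 just to show the transform-then-quantize idea:

```python
import math

N = 8

def c(u):
    """Normalization factor of the DCT-II."""
    return math.sqrt(0.5) if u == 0 else 1.0

def dct_2d(block):
    """2-D DCT of one 8x8 block of pixel values."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

flat = [[100] * N for _ in range(N)]      # a flat gray block
coeffs = dct_2d(flat)
quantized = [[round(f / 16) for f in row] for row in coeffs]

# A flat block has all its energy in the DC coefficient; after
# quantization, every other coefficient is zero and compresses well.
print(round(coeffs[0][0]), quantized[0][0])  # 800 50
```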
Note: JPEG image compression is used extensively in Web page development. As compared to bitmapped files (which have a .bmp extension), JPEG images (which have a .jpg extension) occupy less space and hence can be downloaded faster when we access a Web site.
4.4 VIDEO
A video signal occupies a bandwidth of 5MHz. By the Nyquist sampling theorem, we need to sample the video signal at 10 million samples per second. If we use 8-bit PCM, the video signal requires a data rate of 80Mbps. This is a very high data rate, and this coding technique is not suitable for digital transmission of video. A number of video coding techniques have been proposed to reduce the data rate.
For video coding, the video is considered a series of frames. At least 16 frames per second are required to get
the perception of moving video. Each frame is compressed using the image compression techniques and
transmitted. Using this technique, video can be compressed to 64kbps, though the quality will not be very good.
Video encoding is an extension of image encoding. As shown in Figure 4.6, a series of images or frames, typically 16 to 30 frames, is transmitted per second. Due to persistence of vision, these discrete images appear as moving video. Accordingly, the data rate for transmission of video is the number of frames per second multiplied by the data rate for one frame. The data rate is reduced to about 64kbps in desktop video conferencing systems, where the resolution of the image and the number of frames are reduced considerably. The resulting video is generally acceptable for conducting business meetings over the Internet or corporate intranets, but not for transmission of, say, dance programs, because the video will have many jerks.
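The frames-per-second arithmetic above is simple enough to check directly; the figures used here match Exercise 3 at the end of this chapter:

```python
def video_bit_rate(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) video data rate in bits per second."""
    return width * height * bits_per_pixel * fps

rate = video_bit_rate(640, 480, 3, 30)
print(rate / 1e6)  # 27.648 Mbps
```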
Moving Picture Experts Group (MPEG) released a number of standards for video coding. The following
standards are used presently:
MPEG-2: This standard is for digital video broadcasting. The data rates are in the range of 3 to 7.5Mbps. The picture quality is much better than analog TV. This standard is used in broadcasting through direct broadcast satellites.
A variety of video compression standards have been developed. Notable among them is MPEG-2, which is
used for video broadcasting. MPEG-4 is used in video conferencing applications and HDTV for high-definition
television broadcasting.
MPEG-4: This standard is used extensively for coding, creation, and distribution of audio-visual content for
many applications because it supports a wide range of data rates. The MPEG-4 standard addresses the
following aspects:
Representing audio-visual content, called media objects.
Describing the composition of these objects to create compound media objects.
Multiplexing and synchronizing the data.
Figure 4.6: Video coding through frames and pixels.
The primitive objects can be still images, audio, text, graphics, video, or synthesized speech. Video coding between 5kbps and 10Mbps, speech coding from 1.2kbps to 24kbps, and audio (music) coding at 128kbps are possible. MP3 (MPEG Audio Layer 3), defined in the earlier MPEG-1 standard, is widely used for distribution of music at a 128kbps data rate.
For video conferencing, 384kbps and 2.048Mbps data rates are very commonly used to obtain better quality as
compared to 64kbps. Video conferencing equipment that supports these data rates is commercially available.
MPEG-4 is used in mobile communication systems for supporting video conferencing while on the move. It also
is used in video conferencing over the Internet.
In spite of the many developments in digital communication, video broadcasting continues to be analog in most countries. Many standards have been developed for digital video applications. Perhaps when optical fiber is used extensively as the transmission medium, digital video will gain popularity. The important European digital formats for video are given here:
Multimedia (CIF format): Width 360 pixels; height 288 pixels; 6.25 to 25 frames/second; bit rate without compression 7.8 to 31Mbps; with compression 1 to 3Mbps.

Video conferencing (QCIF format): Width 180 pixels; height 144 pixels; 6.25 to 25 frames/second; bit rate without compression 1.9 to 7.8Mbps; with compression 0.064 to 1Mbps.

Digital TV (ITU-R BT.601 format): Width 720 pixels; height 576 pixels; 25 frames/second; bit rate without compression 166Mbps; with compression 5 to 10Mbps.

HDTV (ITU-R BT.709 format): Width 1920 pixels; height 1250 pixels; 25 frames/second; bit rate without compression 960Mbps; with compression 20 to 40Mbps.
Note: Commercialization of digital video broadcasting has not happened very fast. It is expected that utilization of HDTV will take off in the first decade of the twenty-first century.
Summary
This chapter presented the details of coding text, voice, image, and video into digital format. For text, ASCII is
the most commonly used representation, in which seven bits represent each character. Unicode, which uses 16 bits, is now also being used for text representation. Characters of any world language can be represented using Unicode.
For audio, Pulse Code Modulation (PCM) is the most widely used coding technique. In PCM, voice is coded at
64kbps data rate by sampling the voice signal at 8000 samples per second and representing each sample by 8
bits. Using Adaptive Differential Pulse Code Modulation (ADPCM), the coding rate can be reduced to 32kbps
without any reduction in quality. Another technique used for voice coding is Linear Prediction Coding (LPC),
with which the data rate can be reduced to as low as 1.2kbps. However, as the bit rate goes down, quality goes
down. Variants of LPC are used in many applications such as mobile communications, Internet telephony, etc.
For image compression, the Joint Photographic Experts Group (JPEG) standard is used, through which
compression ratios up to 30:1 can be achieved. For video coding, the most widely used standard was
developed by Moving Picture Experts Group (MPEG). MPEG-2 is used for broadcasting. MPEG-4 defines
standards for video encoding from 5kbps to 10Mbps. MPEG-4 is used in mobile communications as well as in
multimedia communication over the Internet.
References
J. Campbell. C Programmer's Guide to Serial Communications. Prentice-Hall, Inc., 1997.
G. Karlsson. "Asynchronous Transfer of Video". IEEE Communications Magazine, Vol. 34, No. 8, August
1996.
G. K. Wallace. "The JPEG Still Picture Compression Standard". Communications of the ACM, Vol. 34, No. 4, April 1991, pp. 30-44.
D. LeGall. "MPEG: A Video Compression Standard for Multimedia Applications". Communications of the ACM, Vol. 34, No. 4, April 1991, pp. 46-58.
http://www.cdacindia.com Web site of Center for Development of Advanced Computing. You can obtain the
details of the ISCII standard from this site.
Questions
1. What are the different standards for coding of text messages?
2. What is waveform coding? Explain the PCM and ADPCM coding techniques.
3. What is a vocoder? Describe the speech production model.
4. Explain the LPC coding technique.
5. Explain the JPEG compression technique.
6. What are the salient features of the MPEG-4 standard?
Exercises
1. On your multimedia PC, record your voice and observe the speech waveform. Store the speech
data in a file and check the file size. Vary the sampling rate and quantization levels (bits per
sample), store the speech data, and observe the file sizes.
2. Install a desktop video camera on your PC and, using a software package such as Microsoft's
NetMeeting, participate in a video conference over the LAN. Observe the video quality.
3. Calculate the bit rate required for video transmission if the video is transmitted at the rate of 30 frames per second, with each frame divided into 640 × 480 pixels, and coding is done at 3 bits per pixel.
4. Describe the Indian Standard Code for Information Interchange.
5. Download freely available MP3 software and find out the compression achieved in MP3 software by
converting WAV files into MP3 files.
6. Calculate the memory required to store 100 hours of voice conversation if the coding is done using
(a) PCM at 64kbps (b) ADPCM at 32kbps and (c) LPC at 2.4kbps.
7. If the music signal is band-limited to 15 kHz, what is the minimum sampling rate required? If 12 bits
are used to represent each sample, what is the data rate?
8. An image is of size 640 × 480 pixels. Each pixel is coded using 4 bits. How much storage is required to store the image?
Answers
1. To record your voice on your multimedia PC, you can use the sound recorder available on the Windows
operating system. You can also use a more sophisticated software utility such as GoldWave
(http://www.goldwave.com). You will have the options to select the sampling rate (8kHz, 16kHz, etc.) and
the quantization levels (8 bits, 16 bits, etc.). GoldWave provides utilities to filter background noise, vary the
pitch, and so on.
2. When you use a software package such as Microsoft's NetMeeting over a LAN, the video will be transmitted
at very low bit rates, so the video appears jerky.
3. If the video is transmitted at the rate of 30 frames per second, with each frame divided into 640 × 480 pixels, and coding is done at 3 bits per pixel, the data rate is
30 × 640 × 480 × 3 bits per second = 27,648,000 bits per second = 27,648kbps = 27.648Mbps
4. The Indian Standard Code for Information Interchange (ISCII) is a standard developed by the Department of Electronics (Ministry of Information Technology), Government of India. The ISCII code is used to represent Indian languages in computers. The Center for Development of Advanced Computing (CDAC) supplies hardware and software for Indian language processing based on ISCII. You can obtain the details from the Web site http://www.cdacindia.com.
5. MP3 software can be obtained from the following sites:
http://www.dailymp3.com
http://www.mp3machine.com
http://www.mp3.com
6. To store 100 hours of voice, the memory requirements are given below if the coding is done using (a) PCM at 64kbps, (b) ADPCM at 32kbps, and (c) LPC at 2.4kbps.
a. Using PCM at a 64kbps data rate: total duration of voice conversation = 100 hours = 100 × 3600 seconds. Memory requirement = 100 × 3600 × 64 kbits = 100 × 3600 × 8 Kbytes = 2880Mbytes.
b. Using ADPCM at 32kbps: half of the PCM requirement, i.e., 1440Mbytes.
c. Using LPC at 2.4kbps: 100 × 3600 × 2.4 kbits = 100 × 3600 × 0.3 Kbytes = 108Mbytes.
7. If the music signal is band-limited to 15kHz, the minimum sampling rate required is twice the bandwidth. Hence,
Minimum sampling rate = 2 × 15kHz = 30kHz
If 12 bits are used to represent each sample, the data rate = 30,000 × 12 bits/second = 360,000 bits per second = 360kbps.
8. The image is of size 640 × 480 pixels. Each pixel is coded using 4 bits. To store the image,
Memory requirement = 640 × 480 × 4 bits = 1,228,800 bits = 153,600 bytes = 153.6Kbytes.
Projects
1. Develop a program to generate Morse code. The output of the Morse code (sounds for dot and dash) should be heard through the sound card. The duration of a dash is three times that of the dot.
2. Study the Durbin-Levinson algorithm for calculation of linear prediction coefficients. Implement the algorithm in software.
3. Develop software for compression of images using the JPEG standard.
Chapter 5: Error Detection and Correction
In a digital communication system, totally error-free transmission is not possible due to transmission
impairments. At the receiving end, there should be a mechanism for detection of the errors and if possible for
their correction. In this chapter, we will study the various techniques used for error detection and correction.
5.1 NEED FOR ERROR DETECTION AND CORRECTION
Consider a communication system in which the transmitted bit stream is
1 0 1 1 0 1 1 1 0
The transmitted electrical signal corresponding to this bit stream and the received waveform are shown in Figure 5.1. Due to the noise introduced in the transmission medium, the electrical signal is distorted. By using a threshold, the receiver determines whether a 1 or a 0 was transmitted. In this case, the receiver decodes the bit stream as
1 0 1 0 0 1 0 1 0
Figure 5.1: Errors introduced by transmission medium.
In a digital communication system, some bits are likely to be received in error due to the noise in the
communication channel. As a result, 1 may become 0 or 0 may become 1. The Bit Error Rate (BER) is a
parameter used to characterize communication systems.
At two places, the received bit is in error: 1 has become 0 in both places.
How many errors can be tolerated by a communication system? It depends on the application. For instance, if
English text is transmitted, and a few letters are received in error, it is tolerable. Studies indicate that even if
20% of the letters are missing, human beings can understand the text.
Suppose the communication system is used to transmit digitized voice from one place to another. Studies
indicate that even if the Bit Error Rate is 10⁻³, the listener will be able to understand the speech. In other
words, a voice communication system can tolerate one error for every 1000 bits transmitted.
Note: The performance of a communication system can be characterized by the Bit Error Rate (BER). If
BER is 10⁻³, there is one error per 1000 bits.
Now consider the case of a banking application. Suppose I need to transfer $100 from my account to a friend's
account through a data communication network. If the digit 1 becomes 3 due to one bit error during
transmission, then instead of $100, $300 will be deducted from my account! So, for such applications, not even
a single error can be tolerated. Hence, it is very important to detect the errors for data applications.
Errors can be classified as
Random errors
Burst errors
Random errors, as the name implies, occur at random places in the bit stream. Burst errors are caused by
sudden disturbances in the medium, such as lightning or sudden interference from nearby devices. Such
disturbances cause a sequence of consecutive bits to be in error.
Detection and correction of errors is done through channel coding. In channel coding, additional bits are added
at the transmitter end, and these additional bits are used at the receiving end to check whether the transmitted
data is received correctly or not and, if possible, to correct the errors.
5.2 ERROR DETECTION
The three widely used techniques for error detection are parity, checksum, and cyclic redundancy check (CRC).
These techniques are discussed in the following sections.
5.2.1 Parity
Parity is used in serial communication protocols whereby we transmit one character at a time. For example, if
the information bits are
1 0 1 1 0 1 0
then an additional bit is added, which is called a parity bit. The parity bit can be added in such a way that the
total number of ones becomes even. In such a case, it is called even parity. In the above bit stream, already
there are four ones, and hence a 0 is added as the parity bit. The bit stream transmitted is
1 0 1 1 0 1 0 0
In case of odd parity, the additional bit added will make the total number of ones odd. For odd parity, the
additional bit added in the above case is 1 and the transmitted bit stream is
1 0 1 1 0 1 0 1
At the receiving end, from the first 7 bits, the receiver will calculate the expected parity bit. If the received parity
and the calculated parity match, it is assumed that the character received is OK.
The parity setting can be even, odd, or none. In the case of none, the parity bit is not used and is
ignored.
It is very easy to verify that parity can detect errors only if there is an odd number of errors: if the number of
errors is 1, 3, or 5, the error can be detected. If the number of errors is even, the parity bit cannot detect the error.
Parity bit is the additional bit added to a character for error checking. In even parity, the additional bit will make
the total number of ones even. In case of odd parity, the additional bit will make the total number of ones odd.
Parity bit is used in serial communication.
5.2.2 Block Codes
The procedure used in block coding is shown in Figure 5.2. The block coder takes a block of information bits
(say 8000 bits) and generates additional bits (say, 16). The output of the block coder is the original data with
the additional 16 bits. The additional bits are called checksum or cyclic redundancy check (CRC). Block codes
can detect errors but cannot correct errors.
In block coders, a block of information bits is taken and additional bits are generated. These additional bits are
called checksum or cyclic redundancy check (CRC). Checksum or CRC is used for error detection.
Figure 5.2: Block coder.
Checksum
Suppose you want to send two characters, C and U.
The 7-bit ASCII values for these characters are
C 1 0 0 0 0 1 1
U 1 0 1 0 1 0 1
In addition to transmitting these bit streams, the binary representation of the sum of these two characters is
also sent. The value of C is 67 and the value of U is 85. The sum is 152. The binary representation of 152 is 1 0
0 1 1 0 0 0. This bit stream is also attached to the original binary stream, corresponding to C and U, while
transmitting the data.
Checksum of information bits is calculated using simple binary arithmetic. Checksum is used extensively
because its computation is very easy. However, checksum cannot detect all errors.
So, the transmitted bit stream is
1 0 0 0 0 1 1 1 0 1 0 1 0 1 1 0 0 1 1 0 0 0
At the receiving end, the checksum is again calculated. If the received checksum matches this calculated
checksum, then the receiver assumes that the received data is OK. The checksum cannot detect all the errors.
Also, if the characters are sent in a different order, i.e., if the sequence is changed, the checksum will be the
same and hence the receiver assumes that the data is correct.
However, checksum is used mainly because its computation is very easy, and it provides a reasonably good
error detection capability.
Note: Checksum is used for error detection in TCP/IP protocols to check whether packets are received
correctly. Different algorithms are used for calculation of the checksum.
Cyclic Redundancy Check
CRC is a very powerful technique for detecting errors. Hence, it is extensively used in all data communication
systems. Additional bits added to the information bits are called the CRC bits. These bits can be 16 or 32. If the
additional bits are 16, the CRC is represented as CRC-16. CRC-32 uses 32 additional bits. There are
international standards for calculation of CRC-16 and CRC-32. Since CRC calculation is very important, the C
programs to calculate the CRC are in Listings 5.1 and 5.2. When these programs are executed, the information
bits and the CRC in hexadecimal notation will be displayed.
Error detection using CRC is very simple. At the transmitting side, CRC is appended to the information bits. At
the receiving end, the receiver calculates CRC from the information bits and, if the calculated CRC matches the
received CRC, then the receiver knows that the information bits are OK.
CRC-16 and CRC-32 are the two standard algorithms used for calculation of cyclic redundancy check. The
additional CRC bits (16 and 32) are appended to the information bits at the transmitting side. At the receiving
side, the received CRC is compared with the calculated CRC. If the two match, the information bits are
considered as received correctly. If the two do not match, it indicates that there are errors in the information
bits.
Program for calculation of CRC-16
Listing 5.1: Program for calculation of CRC-16.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
unsigned long CRC = 0x0000;
unsigned long GenPolynomial = 0x8005; //Divisor for CRC-16 polynomial
void bitBybit(int bit);
int main()
{
    unsigned int MsgLength;
    int i = 0, j = 0;
    char SampleMsg[] = "Hello World";
    char tempBuffer[100];
    MsgLength = sizeof(SampleMsg) - 1;
    printf("\nActual Message: %s\n", SampleMsg);
    strcpy(tempBuffer, SampleMsg);
    tempBuffer[MsgLength] = 0x00;     //append 16 zero bits
    tempBuffer[MsgLength + 1] = 0x00;
    printf("\nAfter padding 16 0-bits to the Message:");
    for (i = 0; i < MsgLength + 2; ++i)
    {
        unsigned char ch = tempBuffer[i];
        unsigned char mask = 0x80;
        for (j = 0; j < 8; ++j)
        {
            bitBybit(ch & mask);
            mask >>= 1;
        }
        printf(" ");
    }
    printf("\n\nCalculated CRC: 0x%04lx\n\n", CRC);
    return 0;
}
void bitBybit(int bit)
{
    unsigned long firstBit = (CRC & 0x8000); //top bit before the shift
    CRC = (CRC << 1);
    if (bit)
    {
        CRC = CRC ^ 1;   //shift the message bit into the register
        printf("1");
    }
    else
    {
        printf("0");
    }
    if (firstBit)
    {
        CRC = CRC ^ GenPolynomial; //subtract (XOR) the divisor
    }
    CRC = CRC & 0xFFFF;  //keep the shift register to 16 bits
}
In this listing, the actual message to be transmitted is "Hello World". The message is padded with sixteen 0
bits, and the message bit stream is
01001000 01100101 01101100 01101100 01101111 00100000 01010111
01101111 01110010 01101100 01100100 00000000 00000000
The calculated 16-bit CRC value in hexadecimal notation is 0x70c3.
Program for calculation of CRC-32
Listing 5.2: Program for calculation of CRC-32.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
unsigned long CRC = 0x00000000UL;
unsigned long GenPolynomial = 0x04c11db7UL; //Divisor for CRC-32 polynomial
void bitBybit(int bit);
int main()
{
    unsigned int MsgLength;
    int i = 0, j = 0;
    char SampleMsg[] = "Hello World";
    char tempBuffer[100];
    MsgLength = sizeof(SampleMsg) - 1;
    printf("\nActual Message: %s\n", SampleMsg);
    strcpy(tempBuffer, SampleMsg);
    tempBuffer[MsgLength] = 0x00;     //append 32 zero bits
    tempBuffer[MsgLength + 1] = 0x00;
    tempBuffer[MsgLength + 2] = 0x00;
    tempBuffer[MsgLength + 3] = 0x00;
    printf("\nAfter padding 32 0-bits to the Message:");
    for (i = 0; i < MsgLength + 4; ++i)
    {
        unsigned char ch = tempBuffer[i];
        unsigned char mask = 0x80;
        for (j = 0; j < 8; ++j)
        {
            bitBybit(ch & mask);
            mask >>= 1;
        }
        printf(" ");
    }
    printf("\n\nCalculated CRC: 0x%08lx\n\n", CRC);
    return 0;
}
void bitBybit(int bit)
{
    unsigned long firstBit = (CRC & 0x80000000UL); //top bit before the shift
    CRC = (CRC << 1);
    if (bit)
    {
        CRC = CRC ^ 1;   //shift the message bit into the register
        printf("1");
    }
    else
    {
        printf("0");
    }
    if (firstBit)
    {
        CRC = CRC ^ GenPolynomial; //subtract (XOR) the divisor
    }
    CRC = CRC & 0xFFFFFFFFUL;  //keep the shift register to 32 bits
}
Listing 5.2 gives the C program to calculate CRC-32. In this program the message for which CRC has to be
calculated is "Hello World". The message bit stream is
01001000 01100101 01101100 01101100 01101111 00100000 01010111
01101111 01110010 01101100 01100100 00000000 00000000 00000000 00000000
The calculated CRC is 0x31d1680c.
Note: In CRC calculation, a standard polynomial is used. This polynomial is different for CRC-16 and CRC-32.
The bit stream is divided (modulo-2) by this polynomial to calculate the CRC bits.
Using error detection techniques, the receiver can detect the presence of errors. In a practical
communication system, just detection of errors does not serve much purpose, so the receiver has to
use another mechanism such as asking the transmitter to resend the data. Communication protocols
carry out this task.
5.3 ERROR CORRECTION
If the error rate is high in transmission media such as satellite channels, error-correcting codes are used that
have the capability to correct errors. Error-correcting codes introduce additional bits, resulting in a higher data
rate and bandwidth requirement. However, the advantage is that retransmissions can be reduced. Error-correcting
codes are also called forward error correction (FEC) codes.
Convolutional codes are widely used as error correction codes. The procedure for convolutional coding is
shown in Figure 5.3. The convolutional coder takes a block of information (of n bits) and generates some
additional bits (k bits). The additional k bits are derived from the information bits. The output is (n + k) bits. The
additional k bits can be used to correct the errors that occurred in the original n bits. n/(n + k) is called the rate
of the code. For instance, if 2 bits are sent for every 1 information bit, the rate is 1/2, and the coding technique
is called Rate 1/2 FEC. If 3 bits are sent for every 2 information bits, the coding technique is called Rate 2/3 FEC.
The additional bits are derived from the information bits, and hence redundancy is introduced in error-correcting
codes.
Figure 5.3: Convolutional coder.
For example, in many radio systems, error rate is very high, and so FEC is used. In Bluetooth radio systems,
Rate 1/3 FEC is used. In this scheme, each bit is transmitted three times. To transmit bits b0b1b2b3, the actual
bits transmitted using Rate 1/3 FEC are
b0b0b0b1b1b1b2b2b2b3b3b3
At the receiver, error correction is possible. If the received bit stream is
101000111000
it is very easy for the receiver to know that the second bit is received in error (the first triplet should be 111),
and the error can be corrected by taking a majority vote over each triplet.
In error correcting codes such as convolutional codes, in addition to the information bits, additional redundant
bits are transmitted that can be used for error correction at the receiving end. Error correcting codes increase
the bandwidth requirement, but they are useful in noisy channels.
A number of FEC coding schemes have been proposed that increase the delay in processing and also the
bandwidth requirement but help in error correction. Shannon laid the foundation for channel coding, and during
the last five decades, hundreds of error-correcting codes have been developed.
Note: It needs to be noted that in source coding techniques, removing the redundancy in the signal reduces
the data rate. For instance, in voice coding, low bit rate coding techniques reduce the redundancy. In
contrast, in error-correcting codes, redundancy is introduced to facilitate error correction at the
receiver.
Summary
In a communication system, transmission impairments cause errors. In many data applications, errors cannot
be tolerated, and error detection and correction are required. Error detection techniques use parity, checksum,
or cyclic redundancy check (CRC). Additional bits are added to the information bit stream at the transmitting
end. At the receiving end, the additional bits are used to check whether the information bits are received
correctly or not. CRC is the most effective way of detecting errors and is used extensively in data
communication. If the receiver detects that there are errors in the received bit stream, the receiver will ask the
sender to retransmit the data.
Error correction techniques add additional redundancy bits to the information bits so that at the receiver, the
errors can be corrected. Error correcting codes increase the data rate and hence the bandwidth requirement,
but they are needed if the channel is noisy and if retransmissions have to be avoided.
References
Dreamtech Software Team. Programming for Embedded Systems. Wiley-Dreamtech India Pvt. Ltd., 2002.
This book contains the source code for calculation of CRC as well as serial programming.
J. Campbell. C Programmer's Guide to Serial Communications. Prentice-Hall Inc., 1994.
Questions
1. Explain the need for error detection and correction.
2. What is parity? Explain with examples.
3. What is checksum? Explain with an example.
4. What is CRC? What are the different standards for CRC calculation?
5. Explain the principles of error-correcting codes with an example.
Exercises
1. Connect two PCs using an RS232 cable. Using serial communication software, experiment with
parity bits by setting even parity and odd parity.
2. "Parity can detect errors only if there is an odd number of errors". Prove this statement with an
example.
3. Calculate the even parity bit and the odd parity bit if the information bits are 1101101.
4. If the information bits are 110110110, what will be the bit stream when Rate 1/3 FEC is used for
error correction? Explain how errors are corrected using the received bit stream. What will be the
impact of bit errors introduced in the channel?
Answers
1. You can experiment with various parameters for serial communication using the Hyper Terminal program on
Windows. You need to connect two PCs using an RS232 cable. Figure C.4 shows the screenshot to set the
serial communication parameters.
Figure C.4: Screenshot to set the serial communication parameters.
To get this screen, you need to do the following:
Click on Start
Go to Programs
Go to Accessories
Go to Hyper Terminal
Double click on HYPERTERM.EXE
Enter a name for New Connection "xxxx" and Click OK
You will get a screen with the title Connect To. Select COM1 or COM2 from the Connect Using
listbox and click OK.
2. "Parity can detect errors only if there is an odd number of errors." To prove this statement, consider the bit
stream 1010101. If we add an even parity bit, the bit stream would be 10101010. If there is one error, the bit
stream may become 11101010. Now the number of ones is 5, but the parity bit is 0. Hence, you can detect
that there is an error. If there are two errors, the corrupted bit stream is 11111010. However, in this case,
the calculated parity bit is 0 and the received parity bit is also 0, so an even number of errors cannot be
detected.
3. If the information bits are 1101101, the even parity bit is 1, and the odd parity bit is 0.
4. If the information bits are 110110110, when Rate 1/3 FEC is used for error correction, the transmitted bit
stream will be
111111000111111000111111000
Suppose the received bit stream is
101011100011110001110110100
There is one bit error for every three bits. Even then at the receiver, the correct bit stream can be obtained.
However, suppose the first three bits received are 001, then the receiver will decode it as 0 though the
actual information bit is 1.
Projects
1. Write C programs to calculate CRC for the standard CRC algorithms, CRC-16 and CRC-32.
2. Write a program for error correction using Rate 1/3 FEC.
3. Studies indicate that even if 20% of the characters are received in error in a communication system for transmitting English text messages, the message can be understood. Develop software to prove this study. You need to simulate errors using a random number generation program.
Chapter 6: Digital Encoding
In a digital communication system, the first step is to convert the information into a bit stream of ones and
zeros. Then the bit stream has to be represented as an electrical signal. In this chapter, we will study the
various representations of the bit stream as an electrical signal.
6.1 REQUIREMENTS FOR DIGITAL ENCODING
Once the information is converted into a bit stream of ones and zeros, the next step is to convert the bit stream
into its electrical representation. The electrical signal representation has to be chosen carefully for the following
reasons:
The electrical representation decides the bandwidth requirement.
The electrical representation helps in clocking the beginning and ending of each bit.
Error detection can be built into the signal representation.
Noise immunity can be increased by a good electrical representation.
The complexity of the decoder can be decreased.
The bit stream is encoded into an equivalent electrical signal using digital encoding schemes. The encoding
scheme should be chosen keeping in view the bandwidth requirement, clocking, error detection capability, noise
immunity, and complexity of the decoder.
A variety of encoding schemes have been proposed that address all these issues. In all communication
systems, the standards specify which encoding technique has to be used. In this chapter, we discuss the most
widely used encoding schemes. We will refer to these encoding schemes throughout the book, and hence a
good understanding of these is most important.
6.2 CATEGORIES OF ENCODING SCHEMES
Encoding schemes can be divided into the following categories:
Unipolar encoding
Polar encoding
Bipolar encoding
Unipolar encoding: In the unipolar encoding scheme, only one voltage level is used. Binary 1 is represented
by positive voltage and binary 0 by an idle line. Because the signal will have a DC component, this scheme
cannot be used if the transmission medium is radio. This encoding scheme does not work well in noisy
conditions.
Polar encoding: In polar encoding, two voltage levels are used: a positive voltage level and a negative voltage
level. NRZ-I, NRZ-L, and Manchester encoding schemes, which we discuss in the following sections, are
examples of this encoding scheme.
Bipolar encoding: In bipolar encoding, three levels are used: a positive voltage, a negative voltage, and 0
voltage. AMI and HDB3 encoding schemes are examples of this encoding scheme.
The encoding schemes are divided into three categories: (a) unipolar encoding; (b) polar encoding; and (c)
bipolar encoding. In unipolar encoding, only one voltage level is used. In polar encoding, two voltage levels are
used. In bipolar encoding, three voltage levels are used. Both polar encoding and bipolar encoding schemes
are used in practical communication systems.
Note: The encoding scheme to be used in a particular communication system is generally standardized.
You need to follow these standards when designing your system to achieve interoperability with the
systems designed by other manufacturers.
6.3 NON-RETURN TO ZERO INVERTIVE (NRZ-I)
In NRZ-I, the signal level inverts at the beginning of the bit period when a binary 1 is transmitted; for a binary 0,
the signal holds the previous level. If the previous level was 0 volt, a binary 1 takes the signal to +V volts; if the
previous level was +V volts, a binary 1 takes it back to 0 volt.
In NRZ-I, a 1 is represented by a transition between 0 volt and +V volts, and a 0 by no transition.
6.4 NON-RETURN TO ZERO LEVEL (NRZ-L)
In NRZ-L, binary 1 is represented by positive voltage and 0 by negative voltage. This scheme, though simple,
creates a synchronization problem: a long run of ones or zeros produces no transitions in the signal, so the
receiver may lose synchronization and many bits may be lost.
In NRZ-L, 1 is represented by positive voltage and 0 by negative voltage. Synchronization is a problem in this
encoding scheme.
6.5 MANCHESTER
In Manchester encoding, there is a voltage transition in the middle of each bit period. A low-to-high transition
represents 1 and a high-to-low transition represents 0. The advantage of this scheme is that the mid-bit
transition serves as a clocking mechanism, and errors can be detected if a transition is missing. However, the
bandwidth requirement of this scheme is higher than that of the other schemes.
In Manchester encoding, 1 is represented by low-to-high voltage transition and 0 is represented by high-to-low
voltage transition. This scheme is useful for deriving the clock, and errors can be detected.
6.6 RS232 STANDARD
The RS232 interface (commonly known as a serial port) is used in embedded systems as well as to connect PC
modems. The RS232 standard specifies a data rate of 19.2kbps, but most PCs support higher data rates;
speeds up to 115.2kbps are common. Most microcontrollers and digital signal processors support the serial port.
In RS232 communication, binary 0 is represented by +V volts and binary 1 is represented by -V volts.
In RS232, 0 is represented by +V volts and 1 is represented by -V volts.
6.7 BIPOLAR ALTERNATE MARK INVERSION (BIPOLAR AMI)
In the bipolar alternate mark inversion (Bipolar AMI) encoding scheme, 0 is represented by no signal and 1 by
positive or negative voltage. Binary 1 bits must alternate in polarity. The advantage of this coding scheme is
that if a long string of ones occurs, there will be no loss of synchronization. If synchronization is lost, it is easy
to resynchronize at the transition.
In the bipolar AMI encoding scheme, 0 is represented by no signal and 1 by either positive or negative voltage.
Binary 1 bits alternate in polarity. Ease of synchronization is the main advantage of this scheme.
6.8 HIGH-DENSITY BIPOLAR 3 (HDB3)
The HDB3 encoding scheme is almost the same as AMI, except for a small change: two special pulses, called
the violation pulse (V) and the balancing pulse (B), are used when four or more consecutive zeros occur in the
bit stream. When there are four consecutive binary 0 bits, the pulses will be 000V, where V has the same
polarity as the previous non-zero pulse. However, this V pulse creates a DC component. To overcome this
problem, the B pulse is introduced: when needed, the pattern B00V is sent instead, where B is positive or
negative so as to make alternate V pulses of opposite polarity.
HDB3 encoding is similar to AMI, except that two pulses called violation pulse (V) and balancing pulse (B) are
used when consecutively four or more zeros occur in the bit stream. These pulses eliminate the DC component.
The HDB3 encoding scheme is a standard adopted in Europe and Japan.
Summary
This chapter has presented the various schemes used for digital encoding. The digital encoding scheme should
take into consideration the bandwidth requirement, immunity to noise, and synchronization. Encoding schemes
can be broadly divided into unipolar encoding, polar encoding, and bipolar encoding. In unipolar encoding, there
is only one level: binary 1 is represented by positive voltage and binary 0 by keeping the line idle. Because the
signal will have a DC component, it cannot be used in radio systems. Polar encoding uses two levels: a positive
voltage level and a negative voltage level. Non-return to zero level (NRZ-L) and non-return to zero invertive
(NRZ-I) are examples of this type of encoding. In bipolar encoding, three voltage levels are used: positive,
negative, and zero. Alternate mark inversion (AMI) and high-density bipolar 3 (HDB3) are examples of this type
of encoding. All these types of encoding schemes are used in communication systems.
References
W. Stallings. Data and Computer Communication. Fifth Edition. Prentice Hall Inc., 1999.
J. Campbell. C Programmer's Guide to Serial Communications. Prentice Hall Inc., 1997.
Questions
1. What are the requirements of digital encoding?
2. What are the three categories of digital encoding?
3. Explain polar encoding techniques.
4. Explain bipolar encoding techniques.
Exercises
1. For the bit pattern 10101010100, draw the electrical signal waveforms for NRZ-I and NRZ-L
encoding schemes.
2. For the bit pattern 10101010110, draw the electrical signal waveforms for the Manchester encoding
scheme.
3. If the bit pattern is 101010101110, how does the waveform appear on an RS232 port?
4. For the bit pattern 10101010100, draw the waveforms for Bipolar AMI and Manchester encoding
schemes.
5. Study the UART (universal asynchronous receiver-transmitter) chips used for RS232 communication.
6. Is it possible to generate 5-bit codes from an RS232 interface of your PC? If not, what is the
alternative?
Answers
1. For the bit pattern 10101010100, the NRZ-I waveform is as follows:
The NRZ-L waveform is as follows:
2. For the bit pattern 10101010110, the waveform using Manchester coding is as follows:
3. For the bit pattern 101010101110, the waveform on an RS232 port will appear as:
4. For the bit pattern 10101010100, bipolar AMI waveform is as follows:
The HDB3 waveform is as follows:
Note that the HDB3 waveform here is the same as the AMI waveform; the two differ only when four
consecutive zeros occur in the bit stream.
5. The UART (universal asynchronous receiver-transmitter) chip is used to control the serial communication port
(COM port) of the PC. The functions of this chip are converting parallel data into serial data and vice versa,
adding start and stop bits, creating a hardware interrupt when a character is received, and flow control.
Different UART chips support different speeds: the 8250 supports 9.6kbps, the 16450 supports 19.2kbps, and
the 16550 supports 115.2kbps. The 8250 and 16450 have a one-byte buffer, and the 16550 has a 16-byte buffer.
6. It is not possible to generate 5-bit codes from the RS232 interface of your PC. In RS232 communication,
there will be a start bit and a parity bit. Even if there are only five data bits, these additional bits are added. To
generate five-bit codes, you need to have special hardware.
Projects
1. Write a C program that takes a bit stream and a type of encoding scheme as inputs and produces the electrical signal waveforms for all the codes explained in this chapter.
2. A variety of codes such as CCITT2, CCITT3, CCITT5, TORFEC, etc. have been developed to represent the alphabet (A to Z). Compile a list of such codes. Write a program that takes English text as input and displays the bit stream corresponding to the various codes.
Chapter 7: Multiplexing
In a communication system, the costliest element is the transmission medium. To make the best use of the
medium, we have to ensure that the bandwidth of the channel is utilized to its fullest capacity. Multiplexing is the
technique of combining a number of channels and sending them together over a single transmission medium.
We will discuss the various multiplexing techniques in this chapter.
7.1 MULTIPLEXING AND DEMULTIPLEXING
Use of a multiplexing technique is possible if the capacity of the channel is higher than the data rates of the
individual data sources. Consider the example of a communication system in which there are three data
sources. As shown in Figure 7.1, the signals from these three sources can be combined together (multiplexed)
and sent through a single transmission channel. At the receiving end, the signals are separated
(demultiplexed).
Figure 7.1: Multiplexing and demultiplexing.
At the transmitting end, equipment known as a multiplexer (abbreviated to MUX) is required. At the receiving
end, equipment known as a demultiplexer (abbreviated to DEMUX) is required. Conceptually, multiplexing is a
very simple operation that facilitates good utilization of the channel bandwidth. The various multiplexing
techniques are described in the following sections.
A multiplexer (MUX) combines the data of different sources and sends it over the channel. At the receiving end,
the demultiplexer (DEMUX) separates the data of the different sources. Multiplexing is done when the capacity
of the channel is higher than the data rates of the individual data sources.
7.2 FREQUENCY DIVISION MULTIPLEXING
In frequency division multiplexing (FDM), the signals are translated into different frequency bands and sent over
the medium. The communication channel is divided into different frequency bands, and each band carries the
signal corresponding to one source.
Consider three data sources that produce three signals as shown in Figure 7.2. Signal #1 is translated to
frequency band #1, signal #2 is translated into frequency band #2, and so on. At the receiving end, the signals
can be demultiplexed using filters. Signal #1 can be obtained by passing the multiplexed signal through a filter
that passes only frequency band #1.
Figure 7.2: Frequency division multiplexing.
FDM is used in cable TV transmission, where signals corresponding to different TV channels are multiplexed
and sent through the cable. At the TV receiver, by applying the filter, a particular channel's signal can be
viewed. Radio and TV transmission are also done using FDM, where each broadcasting station is given a small
band in the frequency spectrum. The center frequency of this band is known as the carrier frequency.
Figure 7.3 shows how multiple voice channels can be combined using FDM.
Figure 7.3: FDM of voice channels.
Each voice channel occupies a bandwidth of 3.4kHz. However, each channel is assigned a bandwidth of 4kHz.
The second voice channel is frequency translated to the band 4 to 8 kHz. Similarly, the third voice channel is
translated to 8 to 12 kHz, and so on. Slightly higher bandwidth is assigned (4kHz instead of 3.4kHz) mainly
because it is very difficult to design filters of high accuracy. Hence an additional bandwidth, known as the guard
band, separates two successive channels.
In FDM, the signals from different sources are translated into different frequency bands at the transmitting side
and sent over the transmission medium. In cable TV, FDM is used to distribute programs of different channels
on different frequency bands. FDM is also used in audio/video broadcasting.
FDM systems are used extensively in analog communication systems. The telecommunication systems used in
telephone networks, broadcasting systems, etc. are based on FDM.
7.3 TIME DIVISION MULTIPLEXING
In synchronous time division multiplexing (TDM), the digitized signals are combined and sent over the
communication channel. Consider the case of a communication system shown in Figure 7.4. Three data
sources produce data at 64kbps using pulse code modulation (PCM). Each sample will be 8 bits, and the time
gap between two successive samples is 125 microseconds. The job of the MUX is to take the 8-bit sample value of the first channel, then the 8 bits of the second channel, then the 8 bits of the third channel, and then go back to the first channel. Since no sample should be lost, the MUX has to complete scanning all the
channels and obtain the 8-bit sample values within 125 microseconds. This combined bit stream is sent over
the communication medium. The MUX does a scanning operation to collect the data from each data source and
also ensures that no data is lost. This is known as time division multiplexing. The output of the MUX is a
continuous bit stream, the first 8 bits corresponding to Channel 1, the next 8 bits corresponding to Channel 2,
and so on.
Figure 7.4: Time division multiplexing.
In time division multiplexing, the digital data corresponding to different sources is combined and transmitted
over the medium. The MUX collects the data from each source, and the combined bit stream is sent over the
medium. The DEMUX separates the data corresponding to the individual sources.
In a telephone network, switches (or exchanges) are interconnected through trunks. These trunks use TDM for
multiplexing 32 channels. This is shown in Figure 7.4. The 32 channels are, by convention, numbered as 0 to
31. Each channel produces data at the rate of 64kbps. The MUX takes the 8 bits of each channel and produces
a bit stream at the rate of 2048 kbps (64 kbps × 32). At the receiving end, the DEMUX separates the data
corresponding to each channel. The TDM frame is also shown in Figure 7.5. The TDM frame depicts the
number of bits in each channel. Out of the 32 slots, 30 slots are used to carry voice and two slots (slot 0 and
slot 16) are used to carry synchronization and signaling information.
Figure 7.5: Time division multiplexing of voice channels.
Though TDM appears very simple, it has to be ensured that the MUX does not lose any data, and hence it has
to maintain perfect timings. At the DEMUX also, the data corresponding to each channel has to be separated
based on the timing of the bit stream. Hence, synchronization of the data is very important. Synchronization
helps in separating the bits corresponding to each channel. This TDM technique is also known as synchronous
TDM.
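The scanning operation of the MUX described above can be sketched as simple byte interleaving. This is a minimal sketch; the sample values are invented for illustration, and real multiplexers also insert framing and signaling slots as described below.

```python
# Sketch: byte-interleaving in synchronous TDM. Each source produces
# 8-bit samples; the MUX takes one sample from each channel per frame,
# and the DEMUX recovers each channel purely from slot position.

def tdm_mux(channels):
    """Interleave equal-length lists of samples into one stream."""
    frames = zip(*channels)               # one sample per channel per frame
    return [sample for frame in frames for sample in frame]

def tdm_demux(stream, num_channels):
    """Recover each channel's samples from the interleaved stream."""
    return [stream[i::num_channels] for i in range(num_channels)]

ch1, ch2, ch3 = [1, 2], [3, 4], [5, 6]
stream = tdm_mux([ch1, ch2, ch3])
print(stream)                              # [1, 3, 5, 2, 4, 6]
assert tdm_demux(stream, 3) == [ch1, ch2, ch3]
```

Note that the demultiplexer relies entirely on position within the frame, which is why synchronization is critical: if the receiver's notion of where a frame starts slips by one slot, every channel is decoded wrongly.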
In the Public Switched Telephone Network (PSTN), the switches are interconnected through trunks that use
TDM. Trunks in which 30 voice channels are multiplexed are called E1 trunks.
In North America, trunks that multiplex 24 voice channels using this TDM mechanism are known as T1 trunks or T1 carriers. The multiplexing of 24 channels is known as Level 1 multiplexing. Four such T1 carriers are multiplexed to form a T2 carrier, seven T2 carriers are multiplexed to form a T3 carrier, and six T3 carriers are multiplexed to form a T4 carrier. The various levels of multiplexing, the number of voice channels, and the data rates are given in Table 7.1. Note that at each level, additional bits are added for framing and synchronization.
Table 7.1: Digital Hierarchy of T-carriers
Level Number of Voice Channels Data Rates (Mbps)
1 24 1.544
2 96 6.312
3 672 44.736
4 4032 274.176
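The framing and synchronization overhead at each level can be checked directly from Table 7.1: each level's rate exceeds the sum of its tributaries, and the difference is the overhead. A minimal sketch:

```python
# Sketch: overhead in the T-carrier hierarchy of Table 7.1.
# Overhead = level rate minus the sum of tributary rates.

RATES_MBPS = {"T1": 1.544, "T2": 6.312, "T3": 44.736, "T4": 274.176}

def overhead(level_rate, tributary_rate, count):
    """Framing/sync overhead (Mbps) at one multiplexing level."""
    return level_rate - count * tributary_rate

print(round(overhead(RATES_MBPS["T2"], RATES_MBPS["T1"], 4), 3))  # 0.136
print(round(overhead(RATES_MBPS["T3"], RATES_MBPS["T2"], 7), 3))  # 0.552
```

For example, a T2 carrier carries four T1 carriers (4 × 1.544 = 6.176 Mbps) but runs at 6.312 Mbps, so 0.136 Mbps is overhead, as worked out in the answer to Exercise 1 below.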
Note: In a T1 carrier, the total number of voice channels is 24; there are 24 voice slots in the TDM frame. In Europe, a different digital hierarchy is followed. At the lowest level, 30 voice channels are multiplexed, and the trunk is called an E1 trunk.
7.3.1 Statistical Time Division Multiplexing
In the TDM discussed above, each data source is given a time slot in which the data corresponding to that
source is carried. If the data source has no data to transmit, that slot will be empty. To make best use of the
time slots, the data source can be assigned a time slot only if it has data to transmit. A centralized station can
assign the time slots based on the need. This mechanism is known as statistical time division multiplexing
(STDM).
In STDM, the data source is assigned a time slot only if it has data to transmit. As compared to synchronous
TDM, this is a more efficient technique because time slots are not wasted.
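The slot assignment in STDM can be sketched as follows. Because slots are no longer tied to fixed positions, each slot must carry the source's address so the demultiplexer can route the data; the frame capacity and source data here are assumptions for illustration.

```python
# Sketch: statistical TDM. Only sources with queued data get a slot,
# and each slot carries (address, data) rather than relying on
# position, since idle sources are skipped.

def stdm_frame(sources, capacity):
    """Build one frame of (address, data) pairs from active sources."""
    frame = []
    for addr, queue in sources.items():
        if queue and len(frame) < capacity:
            frame.append((addr, queue.pop(0)))
    return frame

sources = {"A": ["a1"], "B": [], "C": ["c1", "c2"]}
print(stdm_frame(sources, capacity=4))
# [('A', 'a1'), ('C', 'c1')] - B is idle, so no slot is wasted on it
```

Compare this with the synchronous TDM frame, where source B would have occupied an empty slot.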
7.4 WAVE DIVISION MULTIPLEXING
Wave division multiplexing (WDM) is used in optical fibers. In optical fiber communication, the signal
corresponding to a channel is translated to an optical frequency (generally expressed in wavelength) and
transmitted. This optical frequency is expressed by its equivalent wavelength and denoted by λ (lambda). Instead of transmitting only one signal on the fiber, if two (or more) signals are sent on the same fiber at different frequencies (or wavelengths), it is called WDM. This was demonstrated in 1994; the signal frequencies had to be separated widely, typically 1310 nm and 1550 nm. Therefore, just using the two wavelengths can double the fiber capacity.
As shown in Figure 7.6, the wave division multiplexer takes signals from different channels, translates them to
different wavelengths, and sends them through the optical fiber. Conceptually, it is the same as FDM.
Figure 7.6: Wave division multiplexing.
Wave Division Multiplexing is used in optical fibers. Data of different sources is sent through the fiber using
different wavelengths. The advantage of WDM is that the full capacity of an already laid optical fiber can be
used.
Note that in WDM, single-mode fiber carries the data of different channels in different wavelengths. The
advantage of WDM is that the full capacity of an already laid optical fiber can be increased by 16 to 32 times by
sending different channels in different wavelengths. Because each wavelength corresponds to a different color,
WDM effectively sends data corresponding to different channels in different colors.
7.4.1 Dense Wave Division Multiplexing
As the name implies, dense wave division multiplexing (DWDM) is the same as WDM except that the number of wavelengths can be much higher than 32. Multiple optical signals are combined on the same fiber to increase its capacity. For example, 128 or 256 wavelengths can be used to transmit different channels at different rates. DWDM is paving the way for making high bandwidths available without the
different channels at different rates. DWDM is paving the way for making high bandwidths available without the
need for laying extra fiber cables. Data rates up to terabits per second can be achieved on a single fiber.
DWDM is an extension of WDM: a large number of wavelengths are used to transmit data of different sources. Data rates up to terabits per second can be achieved using DWDM.
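The terabit figure quoted above follows directly from the wavelength counts. A minimal sketch, assuming an illustrative per-wavelength rate of 10 Gbps (the text does not fix a per-channel rate):

```python
# Sketch: aggregate DWDM fiber capacity = wavelengths x rate per
# wavelength. The 10 Gbps per-wavelength rate is an assumed figure
# for illustration only.

def fiber_capacity_tbps(wavelengths, gbps_per_wavelength=10):
    """Total fiber capacity in Tbps."""
    return wavelengths * gbps_per_wavelength / 1000

print(fiber_capacity_tbps(128))   # 1.28
print(fiber_capacity_tbps(256))   # 2.56
```

With 128 or 256 wavelengths, the aggregate capacity of a single already-laid fiber reaches the terabit-per-second range.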
Summary
Various multiplexing techniques have been discussed in this chapter. In frequency division multiplexing (FDM),
different channels are translated into different frequency bands and transmitted through the medium. In time
division multiplexing (TDM), the data corresponding to different channels is given separate time slots. Wave
division multiplexing (WDM) and dense WDM (DWDM) facilitate transmission of different wavelengths in the
same fiber. All these multiplexing techniques are used to combine data from different sources and to send it
over the medium. At the receiving end, demultiplexing separates the data of the different sources.
References
R. Horak. Communications Systems and Networks. Wiley-Dreamtech India Pvt. Ltd., 2002.
http://www.iec.org/online/tutorials/dwdm Link to the DWDM resources of IEC online tutorials.
http://www.cisco.com Web site of Cisco Corporation. Gives a wealth of information on different multiple
access techniques and commercial products.
Questions
1. Explain the different multiplexing techniques.
2. What is FDM? Give examples of practical systems in which FDM is used.
3. What is the difference between synchronous TDM and statistical TDM?
4. Explain the digital hierarchy of E-carriers.
5. Explain the importance of WDM and DWDM in enhancing the capacity of optical fibers.
Exercises
1. Using the digital hierarchy given in Table 7.1, calculate the overhead data bits added to the T2 carrier.
2. Carry out a survey on the use of the multiplexing techniques used in satellite
communications.
3. Study the multiplexing hierarchy used in optical fiber systems.
4. Explore the various bands used in optical fiber for WDM and DWDM.
Answers
1. The T1 carrier supports 24 voice channels with an aggregate data rate of 1.544 Mbps, out of which 8000 bps is for carrying framing information. In a T1 carrier, each frame consists of one bit for framing followed by 24 slots. Each slot contains 7 bits of voice and one bit for signaling. Four T1 carriers are multiplexed to form a T2 carrier. The data rate of a T2 carrier is 6.312 Mbps, whereas 4 × 1.544 Mbps = 6.176 Mbps. Hence the additional overhead is 0.136 Mbps.
The complete hierarchy is given in the following Table:
Level Data Rate (Mbps) No. of 64kbps Channels
T1 1.544 24
T1C 3.152 48
T2 6.312 96
T3 44.736 672
T4 274.176 4,032
2. In satellite communication, the frequency band is divided into small bands, and a number of users share it.
Hence, FDMA is used. TDMA access scheme is also used for multiple users to share the same band. In
low-earth orbiting satellite systems, CDMA technique is also used.
3. The multiplexing hierarchy used in optical fiber communication systems is given in Chapter 14.
4. C band and L band are used in optical fiber for WDM and DWDM. C band corresponds to the wavelengths 1530 to 1565 nm, and L band corresponds to 1565 to 1625 nm.
Projects
1. Implement a time division multiplexer with two PCM inputs using digital hardware.
2. Prepare a technical report on WDM and DWDM giving the details of various commercial products.
Chapter 8: Multiple Access
OVERVIEW
Multiple access is a technique used to make best use of the transmission medium. In multiple access, multiple
terminals or users share the bandwidth of the transmission medium. In this chapter, we study the various
multiple access techniques. Multiple access attains much importance in radio communications because the
radio spectrum is a precious natural resource, and making the best use of the radio bandwidth is important. We
discuss the various multiple access techniques with illustrative examples.
Multiple access is a technique with which multiple terminals share the bandwidth of the transmission medium.
Multiple access techniques are of paramount importance in radio systems where the channel bandwidth is very
limited.
8.1 FREQUENCY DIVISION MULTIPLE ACCESS
In radio communication systems, a certain frequency band is allocated for a specific use. This band is divided
into smaller bands, and this pool of bands is available for all the terminals to share. Depending on the need, the
base station allocates a frequency (the center frequency of a band) for the terminal to transmit. This is known
as frequency division multiple access (FDMA). In FDMA systems, there is a central station that allocates the
band (or frequency) to different terminals, based on their needs.
Consider the example of a mobile communication system. The system consists of a number of base stations.
As shown in Figure 8.1, a base station has a pool of frequencies (here, f1 to f4). All four frequencies are
available for sharing by the mobile terminals located around the base station. The frequency will be allocated
based on need. Mobile terminal A will be allocated the frequency f1, and then mobile terminal B will be allocated
frequency f2.
Figure 8.1: Frequency division multiple access.
In FDMA, the frequency band is divided into smaller bands, and the pool of these bands is available for a
number of radio terminals to share, based on the need. A central station allocates the bands to different
terminals based on the request.
To ensure that there is no overlap in transmission by different terminals using adjacent frequencies, a small separation is provided between adjacent bands. This separation is called the guard band.
Note: FDMA is used in many radio systems. Mobile communication systems and satellite communication systems use this access technique.
8.2 SPACE DIVISION MULTIPLE ACCESS
Because radio spectrum is a natural resource, we have to make best use of it. When a set of frequencies is
reused in another location, it is called space division multiple access (SDMA). Again, consider a mobile
communication system. A base station will be allocated a few frequencies. The same set of frequencies can be
allocated to another region, provided there is sufficient distance between the two regions.
As shown in Figure 8.2, the area covered by a mobile communication system can be divided into small regions
called cells. A cell is represented as a hexagon. In each cell, there will be a base station that will be given a
pool of frequencies. The frequencies of cell A(1) can be reused in cell A(2). We will study SDMA in more detail when we discuss cellular mobile communications in the chapter "Cellular Mobile Communication Systems."
Figure 8.2: Space division multiple access.
In SDMA, a service area is divided into small regions called cells and each cell is allocated certain frequencies.
Two cells can make use of the same set of frequencies, provided these two cells are separated by a distance
called reuse distance. SDMA is used in mobile communication systems.
Note: The main attraction of SDMA is frequency reuse. Frequencies allocated to one cell can be allocated to another cell, provided there is sufficient distance between the two cells to avoid interference. This minimum distance is called the reuse distance.
8.3 TIME DIVISION MULTIPLE ACCESS
In time division multiple access (TDMA), one frequency is shared by a number of terminals. Each station is
given a time slot in which it can transmit. The number of time slots is fixed, and the terminals transmit in the
time slots allocated to them. For example, in mobile communication systems, each frequency is divided into
eight time slots and hence eight mobile terminals use the same frequency, communicating in different time
slots.
Figure 8.3 illustrates the concept of TDMA. Terminal A transmits in time slot 1 using frequency f1. Terminal B
also transmits using frequency f1 but in a different time slot. Which terminal gets which time slot is decided by
the base station.
Figure 8.3: Time division multiple access.
In TDMA systems, the time slots can be fixed or dynamic. In fixed TDMA, each station is given a fixed time slot
(say, time slot 1 for station A, time slot 2 for station B, and so on). This results in a simple system design, but
the disadvantage is that if a station has no data to transmit, that time slot is wasted. In dynamic TDMA, a
station is assigned a time slot only when it makes a request for it. This leads to a more complex system, but the
channel is used effectively.
An important issue in TDMA systems is synchronization: each station should know precisely when it can start
transmission in its time slot. It is easy to tell terminal A that it should transmit in time slot 1, but how does
terminal A know when exactly time slot 1 starts? If it starts transmission slightly early, the data will collide with
the data in time slot 0. If it starts transmission slightly late, the data will collide with the data in time slot 2. The
complexity of TDMA lies in the synchronization. Synchronization is achieved by the central station sending a bit
pattern (101010101 pattern), and all the stations use this bit pattern to synchronize their clocks.
In TDMA, a single frequency is shared by a number of terminals. Each terminal is assigned a small time slot
during which it can transmit. The time slot assignment can be fixed, or it can be dynamic: a slot is assigned only when a terminal has data to transmit.
Note: Synchronization is a major problem in TDMA systems. To ensure that each terminal transmits only
during its time slot, very strict timings have to be followed. The central station sends a bit pattern to all
the terminals with which all the terminals synchronize their clocks.
Note: In fixed TDMA, the time slots are assigned to each terminal permanently. This results in a simple
implementation, but time slots are wasted if the terminal has no data to transmit. On the other hand,
in dynamic TDMA, time slots are assigned by a central station based on the request of a terminal.
Hence, a separate signaling slot is required to transmit requests for slots. In mobile communication
systems, dynamic TDMA is used.
8.3.1 Time Division Multiple Access-Frequency Division Duplex (TDMA-FDD)
As mentioned earlier, in radio systems, a pair of frequencies is used for communication between two stations: one uplink frequency and one downlink frequency. We can use TDMA in both directions.
Alternatively, the base station can multiplex the data in TDM mode and broadcast the data to all the terminals;
the terminals can use TDMA for communication with the base station. When two frequencies are used for
achieving full-duplex communication along with TDMA, the multiple access is referred to as TDMA-FDD.
TDMA-FDD is illustrated in Figure 8.4. The uplink frequency is f1, and the downlink frequency is f2. Terminal-to-
base station communication is achieved in time slot 4 using f1. Base station-to-terminal communication is
achieved in time slot 4 using f2.
Figure 8.4: TDMA-FDD.
In TDMA-FDD, two frequencies are used for communication between the base station and the terminals: one frequency for uplink and one for downlink.
8.3.2 Time Division Multiple Access-Time Division Duplex (TDMA-TDD)
Instead of using two frequencies for full-duplex communication, it is possible to use a single frequency for
communication in both directions between the base station and the terminal. When two-way communication is
achieved using the single frequency but in different time slots, it is known as TDMA-TDD.
As shown in Figure 8.5, the time slots are divided into two portions. The time slots in the first portion are for
communication from the base station to the terminals (downlink), and the time slots in the second portion are
for communication from the terminals to the base station (uplink). The base station will transmit the data in time
slot 2 using frequency f1. The terminal will receive the data and then switch over to transmit mode and transmit
its data in time slot 13 using frequency f1.
Figure 8.5: TDMA-TDD.
The advantage of TDMA-TDD is that a single frequency can be used for communication in both directions.
However, it requires complex electronic circuitry because the terminals and the base station have to switch to
transmit and receive modes very quickly. For illustration of the timings, the TDMA-TDD scheme used in digital
enhanced cordless telecommunications (DECT) system is shown in Figure 8.6. Here the TDMA frame is of 10 milliseconds duration, divided into 24 time slots: 12 for downlink and 12 for uplink.
Figure 8.6: TDMA-TDD in DECT.
In TDMA-TDD, a single frequency is used for communication between the base station and the terminals; downlink and uplink transmissions take place in different time slots.
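The DECT frame timing described above can be sketched as a small calculation. This is a minimal sketch; numbering the slots from 0 and placing the 12 downlink slots first are assumptions consistent with Figure 8.5.

```python
# Sketch: slot timing in the DECT TDMA-TDD frame: a 10 ms frame
# divided into 24 equal slots, the first half downlink and the
# second half uplink, all on a single frequency.

FRAME_MS = 10.0
SLOTS = 24

def slot_info(n):
    """Return (start time in ms, direction) for slot n (0-based)."""
    start = n * FRAME_MS / SLOTS
    direction = "downlink" if n < SLOTS // 2 else "uplink"
    return start, direction

print(slot_info(0))    # (0.0, 'downlink')
print(slot_info(12))   # (5.0, 'uplink')
```

Each slot lasts about 0.417 ms, and a terminal must switch from receive to transmit mode between the downlink and uplink halves of every 10 ms frame, which is why TDD demands fast-switching circuitry.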
8.4 FDMA/TDMA
In many radio systems, a combination of FDMA and TDMA is used. For example, in cellular mobile
communication systems, each base station is given a set of frequencies, and each frequency is divided into
time slots to be shared by different terminals.
The concept of FDMA/TDMA is illustrated in Figure 8.7. The base station is assigned four frequencies: f1 to f4.
Each frequency is in turn shared in TDMA mode with eight time slots. Terminal A is assigned time slot 1 of
frequency f1. The base station keeps allocating the time slots in f1 until all are exhausted. Then it allocates time
slots in f2 and so on. After some time, terminal B wants to make a call. At that time, time slot 2 in frequency f3
is free and is allocated to terminal B.
Figure 8.7: FDMA/TDMA
In FDMA/TDMA, a pool of frequencies is shared by a number of terminals. In addition, each frequency is shared
in different time slots by the terminals. FDMA/TDMA is used in mobile communication systems and satellite
communication systems.
The combination of FDMA and TDMA increases the capacity of the system. In the above example, 32
subscribers can make calls simultaneously using the eight time slots of four frequencies.
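The allocation policy described above (fill the slots of f1, then move to f2, and so on) can be sketched as follows. The data structures and function name are assumptions for illustration.

```python
# Sketch: allocating a (frequency, slot) pair in an FDMA/TDMA system
# with 4 frequencies x 8 time slots, as in Figure 8.7. The base
# station hands out the first free pair, exhausting f1 before f2.

FREQS, SLOTS = ["f1", "f2", "f3", "f4"], 8

def allocate(in_use):
    """Return the first free (frequency, slot) pair, or None if full."""
    for f in FREQS:
        for s in range(SLOTS):
            if (f, s) not in in_use:
                in_use.add((f, s))
                return f, s
    return None   # all 4 x 8 = 32 channels are busy

in_use = set()
print(allocate(in_use))   # ('f1', 0) - first caller gets slot 0 of f1
for _ in range(31):
    allocate(in_use)
print(allocate(in_use))   # None - the 33rd caller is blocked
```

This makes the capacity figure concrete: 32 subscribers can be served simultaneously, and the 33rd request is blocked until a pair is released.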
8.5 CODE DIVISION MULTIPLE ACCESS
Code division multiple access (CDMA) technology has been developed for defense applications where secure
communication is very important. In CDMA systems, a very large bandwidth channel is required, many times
more than the bandwidth occupied by the information to be transmitted. For instance, if the actual bandwidth
required is 1MHz, in CDMA systems, perhaps 80MHz is allocated. Such large bandwidths were available only
with defense organizations, and hence CDMA was used initially only for defense applications. Because the
spectrum is spread, these systems are also known as spread spectrum multiple access (SSMA) systems. In
this category, there are two types of techniques: frequency hopping and direct sequence.
In spread spectrum multiple access, a wide bandwidth channel is used. Frequency hopping and direct
sequence CDMA are the two types of SSMA techniques.
Note: CDMA requires a large radio bandwidth. Because radio spectrum is a precious natural resource, CDMA systems did not become commercially popular and were used only in defense communication systems. However, in recent years, commercial CDMA systems are being widely deployed.
Wireless local loops are the wireless links between subscriber terminals and the base stations
connected to the telephone switches. CDMA is widely used in wireless local loops.
8.5.1 Frequency Hopping (FH)
Consider a system in which 1MHz bandwidth is required to transmit the data. Instead of allocating a radio
channel of 1MHz only, a number of radio channels (say 79) will be allocated, each channel with 1MHz
bandwidth. We need a very large spectrum, 79 times that of the actual requirement. When a station has to
transmit its data, it will send the data in one channel for some time, switch over to another channel and transmit
some more data, and again switch over to another channel and so on. This is known as frequency hopping
(FH). When the transmitting station hops its frequency of transmission, only those stations that know the
hopping sequence can receive the data. This will be a secure communication system if the hopping sequence
is kept a secret between the transmitting and the receiving stations.
Frequency hopping, as used in the Bluetooth radio system, is illustrated in Figure 8.8. Here the frequency hopping is done at the rate of 1600 hops per second: every 0.625 milliseconds, the frequency of operation changes. The terminal will receive the data for 0.625 msec in frequency f1, for 0.625 msec in f20, for 0.625 msec in f32, and so on. The hopping sequence (f1, f20, f32, f41, ...) is decided between the transmitting and receiving stations and is kept secret.
Figure 8.8: Frequency hopping.
In frequency hopping (FH) systems, each packet of data is transmitted using a different frequency. A pseudo-
random sequence generation algorithm decides the sequence of hopping.
Frequency hopping is used in Global System for Mobile Communications (GSM) and Bluetooth radio systems.
Note: The Bluetooth radio system, which interconnects devices such as desktops, laptops, mobile phones, headphones, modems, and so forth within a range of 10 meters, uses the frequency hopping technique.
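The shared secret hopping sequence described above can be sketched with a seeded pseudo-random generator: transmitter and receiver seed the same generator with the same key, so both hop through the channels in the same order. The key strings and hop count are illustrative assumptions.

```python
# Sketch: a pseudo-random frequency hopping sequence over 79 channels,
# as in the text. Stations sharing the secret key reproduce the same
# sequence; a station without the key cannot follow the hops.

import random

def hop_sequence(secret_key, num_channels=79, hops=6):
    """Channel indices to hop through, derived from a shared key."""
    rng = random.Random(secret_key)       # same key -> same sequence
    return [rng.randrange(num_channels) for _ in range(hops)]

tx = hop_sequence("shared-secret")
rx = hop_sequence("shared-secret")
print(tx == rx)                             # True - receiver follows the hops
print(hop_sequence("wrong-key") == tx)      # almost certainly False
```

Real systems such as Bluetooth derive the sequence from device parameters rather than a string key, but the principle is the same: security rests on keeping the sequence generator's input secret.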
8.5.2 Direct Sequence CDMA
In direct sequence CDMA (DS-CDMA), each bit to be transmitted is represented by multiple bits. For instance,
instead of transmitting a 1, a pattern of say 16 ones and zeros is transmitted, and instead of transmitting a 0,
another pattern of 16 ones and zeros is transmitted. Effectively, we are increasing the data rate and hence the
bandwidth requirement by 16 times. The pattern of bits transmitted in place of a 1 or 0 is known as the chipping code, and the rate at which these bits (chips) are sent is the chipping rate. If the chipping code is kept a secret, only those stations that have the chipping code can decode the
information. When multiple stations have to transmit, the chipping codes will be different for each station. If they
are chosen in such a way that they are orthogonal to each other, then the data from different stations can be
pushed on to the channel simultaneously without interference.
As shown in Figure 8.9, in DS-CDMA, multiple terminals transmit on to the channel simultaneously. Because
these terminals will have different chipping codes, there will be no interference.
Figure 8.9: DS-CDMA.
CDMA systems are now being widely deployed for cellular communications as well as 3G systems for
accessing the Internet through wireless networks. CDMA systems are used in wireless local loops.
In DS-CDMA, multiple terminals transmit on the same channel simultaneously, with different chipping codes. If
the chipping code length is say 11 bits, both 1 and 0 are replaced by the 11-bit sequence of ones and zeros.
This sequence is unique for each terminal.
Note: In the IEEE 802.11 wireless local area network standard, an 11-bit chipping code is used.
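The spreading and despreading described above can be sketched end to end. This is a minimal sketch: the 4-chip Walsh codes are illustrative assumptions chosen because they are orthogonal (real systems use longer codes, such as the 11-chip code of IEEE 802.11 mentioned in the note).

```python
# Sketch: DS-CDMA with orthogonal chipping codes. Two terminals spread
# their bits with different codes and transmit simultaneously; the
# channel simply adds the signals. The receiver recovers each
# terminal's bits by correlating the summed signal with that
# terminal's code.

CODE_A = [1, 1, 1, 1]       # orthogonal codes:
CODE_B = [1, -1, 1, -1]     # dot(CODE_A, CODE_B) == 0

def spread(bits, code):
    """Replace each bit with +code (for 1) or -code (for 0)."""
    out = []
    for b in bits:
        out += [c if b else -c for c in code]
    return out

def despread(signal, code):
    """Correlate each chip group with the code to recover the bits."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(1 if corr > 0 else 0)
    return bits

# Both terminals transmit at once; the channel adds the two signals.
tx_a, tx_b = spread([1, 0], CODE_A), spread([0, 1], CODE_B)
channel = [a + b for a, b in zip(tx_a, tx_b)]
print(despread(channel, CODE_A))   # [1, 0] - terminal A's bits
print(despread(channel, CODE_B))   # [0, 1] - terminal B's bits
```

Because the codes are orthogonal, correlating with one code cancels the other terminal's contribution exactly, which is why simultaneous transmission causes no interference.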
8.6 ORTHOGONAL FREQUENCY DIVISION MULTIPLEXING (OFDM)
In CDMA systems, a single carrier is used to spread the spectrum. In orthogonal frequency division multiplexing
(OFDM), the transmission by individual terminals is split over a number of carriers. It is closer to frequency
hopping except that the transmission is done on the designated channels simultaneously.
The advantage of this scheme is that because multiple carriers are used to carry the data, multipath fading is
less compared to CDMA. OFDM is considered a big competitor to CDMA in wireless networks, though OFDM
systems are yet to be widely deployed.
In OFDM, the data is transmitted simultaneously on a number of carriers. This technique is used to overcome
multipath fading. OFDM is used in wireless local area networks.
8.7 CARRIER SENSE MULTIPLE ACCESS (CSMA)
In local area networks, the medium is either twisted copper pair or coaxial cable. This medium is shared by a
number of computers (nodes). While sharing the medium, only one node can transmit at a time. If multiple
nodes transmit simultaneously, the data is garbled. Sharing the medium is achieved through a protocol known
as Medium Access Control (MAC) protocol. The MAC protocol works as follows. When a node has to transmit
data, first it will sense the medium. If some other node is transmitting, it will keep sensing the medium until the
medium is free, and then it will start the transmission. This process of checking whether the medium is free or
not is known as carrier sense. Since multiple nodes access the medium through carrier sensing, the multiple
access is known as carrier sense multiple access.
Now consider the two nodes A and C in Figure 8.10. Node A sensed the carrier, found it free, and started the
transmission. For the data to reach the point where node C is connected, it will take some finite time. Before the
data on the medium reaches that point, node C senses the carrier, finds the carrier to be free, and then it also
sends its data. As a result, the data of A and C collide and get garbled. The garbling of data can be detected
because there will be a sudden rise in the voltage level on the cable. It is not enough for the nodes to find the carrier free; they also need to detect collisions. When a node detects a collision, it has to retransmit its data after waiting for some time once the carrier is free. If more collisions are detected, the
waiting time has to be increased. This mechanism is known as carrier sense multiple access/collision detection
(CSMA/CD). CSMA/CD is the protocol used in Ethernet LANs. A variation of CSMA/CD is CSMA/CA, in which
CA stands for collision avoidance. In CSMA/CA, collisions are avoided by reserving the medium for a specific
duration. We will study the details of these protocols in Chapter 17, "Local Area Networks," where we will study
wired and wireless LANs.
Figure 8.10: CSMA/CD.
In local area networks, the medium (cable) is shared by a number of nodes using CSMA/CD. Before
transmitting its data, a node senses the carrier to check whether the medium is free. If it is free, the node sends
its data. If the medium is not free, the node waits for a random amount of time and again senses the carrier. If
two or more nodes send their data simultaneously, it causes collision, and the nodes would need to retransmit
their data.
Note: CSMA with collision avoidance is used in wireless local area networks. To avoid collisions, time slots have to be reserved for the nodes.
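The rule that the waiting time increases with repeated collisions can be sketched as follows. One common realization, used by Ethernet, is truncated binary exponential backoff; the sketch below assumes that scheme, with the exponent capped at 10 as in IEEE 802.3.

```python
# Sketch: truncated binary exponential backoff after a CSMA/CD
# collision. After the k-th consecutive collision, a node waits a
# random number of slot times drawn from 0 .. 2^k - 1, so the
# expected wait doubles with each further collision.

import random

def backoff_slots(collisions):
    """Random wait (in slot times) after `collisions` collisions."""
    k = min(collisions, 10)          # cap the exponent (truncation)
    return random.randrange(2 ** k)  # uniform in 0 .. 2^k - 1

for c in (1, 2, 3):
    print(f"after collision {c}: wait drawn from 0..{2 ** c - 1} slots, "
          f"picked {backoff_slots(c)}")
```

The randomness is what breaks the tie: two colliding nodes are unlikely to pick the same wait, so one of them seizes the free medium first on the retry.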
Summary
Multiple access techniques provide the mechanism to share the bandwidth efficiently by multiple terminals.
Particularly in radio systems where the spectrum bandwidth is very limited, multiple access techniques are very
important. In frequency division multiple access (FDMA), a pool of frequencies is assigned to a base station,
and the terminals are allocated a specific frequency for communication. In time division multiple access
(TDMA), one frequency will be shared by different terminals in different time slots. Each terminal is assigned a
specific time slot during which the terminal has to send the data. In TDMA frequency division duplex (FDD), a
pair of frequencies is used for communication: one uplink and one downlink. In TDMA time division duplex (TDD), a single frequency is used for communication in both directions; the time slots are divided into two portions: downlink slots and uplink slots. The code division multiple access technique requires large
bandwidth, but the advantage is that it provides secure communication. In CDMA, there are two mechanisms:
frequency hopping (FH) and direct sequence. In FH, the frequency of transmission changes very fast in a
pseudo-random fashion. Only those terminals that know the sequence of hopping will be able to decode the
data. In direct sequence CDMA, instead of sending the bit stream directly, chipping codes are used to transmit
ones and zeros. In local area networks (LANs), the medium is shared by multiple nodes using carrier sense
multiple access (CSMA) protocols.
References
R.O. LaMaire et al. "Wireless LANs and Mobile Networking: Standards and Future Directions." IEEE
Communications Magazine, Vol. 34, No. 8, August 1996.
J. Karaoguz. "High Rate Personal Area Networks." IEEE Communications Magazine, Vol. 39, No. 11,
December 2001.
http://www.3gpp.org The official site of the 3G partnership program. You can get the details of the CDMA-
and TDMA-based wireless networks from this site.
http://www.cdg.org Web site of CDMA development group.
http://www.iec.org IEC web site, which has excellent online tutorials.
http://www.qualcomm.com Web site of Qualcomm Corporation. You can get information on CDMA
technologies and products from this site.
http://www.standards.ieee.org The IEEE Standards web site. The details of various multiple access
techniques used in local area networks can be obtained from this site.
Questions
1. Explain FDMA with examples.
2. What is the difference between TDMA-FDD and TDMA-TDD?
3. Explain how SDMA is used in mobile communication systems.
4. Explain frequency hopping with an example.
5. What is direct sequence CDMA?
6. Explain the variations of CSMA protocol.
Exercises
1. Explore the details of the multiple access techniques used in the following systems: (a) mobile
communication systems based on GSM standards, (b) Bluetooth radio system, (c) 802.11 wireless
local area network, (d) HiperLAN, and (e) digital enhanced cordless telecommunications (DECT)
system.
2. Prepare a technical paper on the various multiple access techniques used in satellite
communication systems.
3. Explain the multiple access technique used in mobile communication system based on the Global
System for Mobile Communications (GSM) standard.
4. In TDMA systems, the channel data rate is higher than the data rate of the information source.
Explain with an example.
Answers
1. (a) Mobile communication systems based on GSM standards use FDMA/ TDMA. (b) A Bluetooth radio
system uses frequency hopping. (c) 802.11 wireless local area network uses frequency hopping. (d)
HiperLAN uses TDMA-TDD. (e) A digital enhanced cordless telecommunications (DECT) system uses
TDMA-TDD.
2. In satellite communication systems, FDMA, TDMA, and CDMA technologies are used. The details are
given in Chapter 13.
3. In GSM, the service area is divided into small regions called cells. Each cell is assigned a number of
channels that are shared by the mobile phones in that cell. Hence, FDMA is used. In addition, each channel
is time-shared by eight mobile phones. Hence, TDMA is also used. So, the multiple access technique is
referred to as FDMA/TDMA. Each radio channel has a pair of frequencies: one for uplink and one for
downlink. Hence, the TDMA scheme is TDMA-FDD.
4. In a TDMA system, each station is given a small time slot during which it has to send its data. However, the
source will generate the data continuously. Hence, the source has to buffer the data and send it fast over
the channel when it gets its time slot. For instance, the user talks into his mobile phone continuously. But
the mobile phone gets its time slot only at regular intervals. Hence, the mobile device has to buffer the
digitized voice and send it in its time slot.
Projects
1. Make a paper design of a radio communication system that has one base station and 32 remote
stations. The base station communicates with the remotes using broadcast mode. All the remotes listen
to the base station transmission and, if the address transmitted matches its own address, the data is
decoded; otherwise it is discarded. The remotes transmit using TDMA. Design the TDMA time frame. Hint:
Access is TDMA/FDD. Because there are 32 remotes, the address length should be a minimum of 5 bits.
After carrying out the design, look for details in Chapter 12.
2. Simulate a direct sequence CDMA system. Replace 1 with an 11-bit code and replace 0 with another
11-bit code. Study how these codes have to be chosen.
Chapter 9: Carrier Modulation
When a signal is to be transmitted over a transmission medium, the signal is superimposed on a carrier, which
is a high-frequency sine wave. This is known as carrier modulation. In this chapter, we will study the various
analog and digital carrier modulation techniques. We will also discuss the criteria on which the choice of a
particular modulation technique is based.
9.1 WHAT IS MODULATION?
Modulation can be defined as superimposition of the signal containing the information on a high-frequency
carrier. If we have to transmit voice that contains frequency components up to 4kHz, we superimpose the voice
signal on a carrier of, say, 140MHz. The input voice signal is called the modulating signal. This transformation
of superimposition is called modulation, and the hardware that carries it out is called the modulator. The
output of the modulator is called the modulated signal. After any other required operations on the modulated
signal, such as filtering, the modulated carrier is sent through the transmission medium.
At the receiving end, the modulated signal is passed through a demodulator, which performs the reverse
operation of the modulator and gives out the modulating signal, which contains the original information. This process is
depicted in Figure 9.1. The modulating signal is also called the baseband signal. In a communication system,
both ends should have the capability to transmit and receive, and therefore the modulator and the demodulator
should be present at both ends. The modulator and demodulator together are called the modem.
Figure 9.1: Modulation and demodulation.
Modulation is the superimposition of a signal containing the information on a high-frequency carrier. The signal
carrying the information is called the modulating signal, and the output of the modulator is called the modulated
signal.
<Day Day Up>
<Day Day Up>
9.2 WHY MODULATION?
Suppose we want to transmit two voice channels from one place to another. If we combine the two voice
signals and transmit them on the medium, it is impossible to separate the voice conversations at the receiving
end. This is because both voice channels occupy the same frequency band, 300Hz to about 4kHz. A better way
of transmitting the two voice channels is to translate them into different frequency bands and then send
them.
Low-frequency signals have poor radiation capability and so low-frequency signals such as voice signals are
translated into high frequencies. We need to superimpose the voice signal onto a high-frequency signal to
transmit over large distances. This high-frequency signal is called the carrier, and the modulation is called
carrier modulation.
When different voice signals are modulated to different frequencies, we can transmit all these modulated
signals together. There will be no interference.
If radio is used as the transmission medium, the radio signal has to be sent through an antenna. The size of the
antenna decreases as the frequency of the signal goes up. If the voice is transmitted without superimposing it
on a high-frequency carrier, the antenna size should be 5,000 (yes, five thousand) meters!
For these reasons, modulation is an important transformation of the signal that is used in every communication
system.
To summarize, modulation allows:
Transmission of signals over large distances, because low-frequency signals have poor radiation
characteristics.
Combining a number of baseband signals and sending them through the medium, provided different
carrier frequencies are used for different baseband signals.
Use of small antennas if radio is the transmission medium.
Modulation allows transmission of signals over a large distance, because low-frequency signals have poor
radiation characteristics. It is also possible to combine a number of baseband signals and send them through
the medium.
Note: In transmission systems using radio as the medium, the higher the frequency of operation, the smaller
the antenna size. So, using a high-frequency carrier reduces the antenna size considerably.
9.3 TYPES OF MODULATION
In general, "modulation" can be defined as transformation of a signal. In Chapter 4, "Coding of Text, Voice,
Image, and Video Signals", we studied pulse code modulation (PCM) and its derivatives. It is important to note
that these are source coding techniques that convert the analog signal to digital form. In this chapter, we will
study carrier modulation, which transforms a carrier in such a way that the transformed carrier contains the
information of the modulating signal.
Many carrier modulation techniques have been proposed in the literature. Here, we will study the most
fundamental carrier modulation techniques that are used extensively in both analog and digital communication
systems.
Carrier modulation can be broadly divided into two categories:
Analog modulation
Digital modulation
The various analog modulation techniques are:
Amplitude modulation (AM)
Frequency modulation (FM)
Phase modulation (PM)
Analog modulation techniques can be broadly divided into amplitude modulation (AM), frequency modulation
(FM), and phase modulation (PM). FM and PM together are known as angle modulation techniques.
The various digital modulation techniques are:
Amplitude shift keying (ASK)
Frequency shift keying (FSK)
Phase shift keying (PSK)
The three digital modulation techniques are (a) amplitude shift keying (ASK); (b) frequency shift keying (FSK);
and (c) phase shift keying (PSK).
9.4 COMPARISON OF DIFFERENT MODULATION TECHNIQUES
Before we discuss the details of different modulation techniques, it is important to understand why so many
modulation schemes are available. The reason for having different modulation schemes is that the performance
of each modulation scheme is different. The performance criteria on which modulation techniques can be
compared are:
Bandwidth: What is the bandwidth of the modulated wave?
Noise immunity: Even if noise is added to the modulated signal on the transmission medium, can the original
modulating signal be obtained by the demodulator without much distortion?
Complexity: What is the complexity involved in implementing the modulator and demodulator? Generally, the
modulator and demodulator are implemented as hardware, though nowadays digital signal processors are used
for implementation, and hence a lot of software is also used.
The performance of a modulation scheme can be characterized by the bandwidth of the modulated signal,
immunity to noise and the complexity of the modulator/demodulator hardware.
Based on these performance criteria, a modulation technique has to be chosen for a given application.
Note: In the past, both modulator and demodulator were implemented completely in hardware. With the
advent of digital signal processors, modulator and demodulator implementations are now software
oriented.
Note: How well a modulation scheme performs on a noisy channel is characterized by the Bit Error Rate
(BER). The BER is related to the signal-to-noise ratio (SNR). For a given BER, say 10^-3, the
modulation technique that requires the least SNR is the best.
9.5 ANALOG MODULATION TECHNIQUES
Analog modulation is used extensively in broadcasting of audio and video programs, and many old
telecommunication systems are also based on analog modulation. All newly developed systems use digital
modulation. For broadcasting, analog modulation continues to play a vital role.
9.5.1 Amplitude Modulation
In AM, the amplitude of the carrier is proportional to the instantaneous amplitude of the modulating signal. The
frequency of the carrier is not changed.
In Figure 9.2(a), the modulating signal (a sine wave) is shown. Figure 9.2(b) shows the amplitude-modulated
signal. It is evident from this figure that the carrier's amplitude contains the information of the modulating signal.
It can also be seen that both the upper portion and the lower portion of the carrier amplitude contain the
information of the modulating signal.
Figure 9.2: (a) Modulating signal.
Figure 9.2: (b) Amplitude modulated signal.
If f_c is the frequency of the carrier, the carrier can be represented mathematically as
A sin(2πf_c t)
If f_m is the frequency of the modulating signal, the modulating signal is represented by
B cos(2πf_m t)
The amplitude-modulated wave is represented by
(A + B cos(2πf_m t)) sin(2πf_c t), or
A{1 + (B/A) cos(2πf_m t)} sin(2πf_c t)
The ratio B/A, denoted by m, is called the modulation index. The value (m × 100) is the modulation index
expressed as a percentage.
On expanding the above equation, we get three terms: one term with f_c, the second term with (f_c + f_m), and the
third term with (f_c - f_m).
The terms with (f_c + f_m) and (f_c - f_m) represent the sidebands. Both sidebands contain the information of the
modulating signal. The frequency component (f_c + f_m) is called the upper sideband, and the frequency
component (f_c - f_m) is called the lower sideband. Either the upper sideband alone or the lower sideband alone
can be transmitted and demodulated at the receiving end.
The bandwidth required for AM is twice the highest modulating frequency. If the modulating frequency has a
bandwidth of 15kHz (the bandwidth used in audio broadcasting), the amplitude-modulated signal requires
30kHz.
It is very easy to implement the modulator and demodulator for AM. However, as the amplitude-modulated
signal travels on the medium, noise gets added, and the amplitude changes; hence, the demodulated signal is
not an exact replica of the modulating signal; i.e., AM is not immune to noise. Another problem with AM is that
when the AM waves are transmitted, the carrier will take up most of the transmitted power, though the carrier
does not contain any information; the information is present only in the sidebands.
Amplitude modulation is used in audio broadcasting. Different audio programs are amplitude modulated,
frequency division multiplexed, and transmitted over the radio medium. In video broadcasting, a single sideband
is transmitted.
The bandwidth required for AM is twice the highest modulating frequency. In AM radio broadcasting, the
modulating signal has bandwidth of 15kHz, and hence the bandwidth of an amplitude-modulated signal is
30kHz.
9.5.2 Frequency Modulation
In frequency modulation (FM), the frequency of the carrier is varied according to the amplitude of the
modulating signal. The frequency deviation is proportional to the amplitude of the modulating signal. The
amplitude of the carrier is kept constant.
Figure 9.2(c) shows the frequency-modulated signal when the modulating signal is a sine wave as shown in
Figure 9.2(a).
Figure 9.2: (c) Frequency modulated signal.
If the carrier is represented by
A sin(2πf_c t)
and the modulating signal is represented by
B cos(2πf_m t)
the instantaneous frequency of the frequency-modulated carrier is given by
f_i = f_c + kB cos(2πf_m t)
where k is a proportionality constant.
In frequency modulation, the frequency deviation of the carrier is proportional to the amplitude of the modulating
signal.
According to Carson's rule, the bandwidth of an FM signal is twice the sum of the modulating signal frequency
and the frequency deviation. If the frequency deviation is 75kHz and the modulating signal frequency is 15kHz,
the bandwidth required is 180kHz.
As compared to AM, FM implementation is complicated and occupies more bandwidth. Because the amplitude
remains constant, FM is immune to noise.
Many audio broadcasting stations now use FM. In FM radio, the peak frequency deviation is 75kHz, and the
peak modulating frequency is 15kHz. Hence the FM bandwidth requirement is 180kHz. The channels in FM
band are separated by 200kHz. If we compare the quality of audio in AM radio and FM radio, we can easily
make out that FM radio gives much better quality. This is because FM is more immune to noise as compared to
AM. FM also is used to modulate the audio signal in TV broadcasting.
The bandwidth of the frequency-modulated signal is twice the sum of the modulating signal frequency and the
frequency deviation.
Note: In FM radio broadcasting, the peak frequency deviation is 75kHz, and the peak modulating signal
frequency is 15kHz. Hence the bandwidth of the modulated signal is 180kHz.
9.5.3 Phase Modulation
In phase modulation, the phase deviation of the carrier is proportional to the instantaneous amplitude of the
modulating signal. It is possible to obtain frequency modulation from phase modulation. The phase-modulated
signal for the modulating signal of Figure 9.2(a), shown in Figure 9.2(d), looks exactly the same as the
frequency-modulated signal in Figure 9.2(c).
Figure 9.2: (d) Phase modulated signal.
In phase modulation, the phase deviation of the carrier is proportional to the instantaneous amplitude of the
modulating signal. No practical systems use phase modulation.
9.6 DIGITAL MODULATION TECHNIQUES
The three important digital modulation techniques are
Amplitude shift keying (ASK)
Frequency shift keying (FSK)
Phase shift keying (PSK)
For a bit stream of ones and zeros, the modulating signal is shown in Figure 9.3(a). Figures 9.3(b), 9.3(c), and
9.3(d) show modulated signals using ASK, FSK, and binary PSK, respectively.
Figure 9.3: (a) Modulating signal.
Figure 9.3: (b) ASK.
Figure 9.3: (c) FSK.
Figure 9.3: (d) BPSK.
9.6.1 Amplitude Shift Keying
Amplitude shift keying (ASK) is also known as on-off keying (OOK). In ASK, two amplitudes of the carrier
represent the binary values (1 and 0). Generally, one of the amplitudes is taken as zero. Accordingly, the ASK
signal can be mathematically represented by
s(t) = A sin(2πf_c t) for binary 1
s(t) = 0 for binary 0
In amplitude shift keying (ASK), 1 and 0 are represented by two different amplitudes of the carrier. ASK is
susceptible to noise. ASK is used in optical fiber communication because the noise is less.
The bandwidth requirement of the ASK signal is given by the formula
B = (1 + r)R
where R is the bit rate and r is a constant between 0 and 1, related to the hardware implementation.
ASK is susceptible to noise and is not used on cable. It is used in optical fiber communication.
9.6.2 Frequency Shift Keying
In FSK, the binary values are represented by two different frequencies close to the carrier frequency. An FSK
signal is mathematically represented by
s(t) = A sin(2πf_1 t) for binary 1
s(t) = A sin(2πf_2 t) for binary 0
f_1 can be f_c + f_m and f_2 can be f_c - f_m, where f_c is the carrier frequency and 2f_m is the frequency deviation.
The bandwidth requirement of the FSK signal is given by
B = 2f_m + (1 + r)R
where R is the data rate and r is a constant between 0 and 1.
FSK is used widely in cable communication and also in radio communication.
In frequency shift keying (FSK), 1 and 0 are represented by two different frequencies of the carrier. FSK is used
widely in cable and radio communication systems.
9.6.3 Phase Shift Keying
The two commonly used PSK techniques are binary PSK (BPSK) and quadrature PSK (QPSK).
In PSK, the phase of the carrier represents a binary 1 or 0. In BPSK, two phases are used to represent 1 and 0.
Mathematically, a BPSK signal is represented by
s(t) = A sin(2πf_c t) for binary 1
s(t) = A sin(2πf_c t + π) for binary 0
In BPSK, binary 1 and 0 are represented by two phases of the carrier.
The phase is measured relative to the previous bit interval. The bandwidth occupied by BPSK is the same as
that of ASK.
In quadrature PSK (QPSK), two bits in the bit stream are taken, and four phases of the carrier frequency are
used to represent the four combinations of the two bits.
In quadrature phase shift keying (QPSK), different phases of the carrier are used to represent the four possible
combinations of two bits: 00, 01, 10, and 11. QPSK is used widely in radio communication systems.
The bandwidth required for a QPSK-modulated signal is half that of a BPSK-modulated signal. Phase shift
keying (BPSK and QPSK) is used extensively in radio communication systems. Different PSK techniques are
also used in mobile communication systems.
Summary
Various carrier modulation techniques are reviewed in this chapter. Carrier modulation is the technique used to
transform the signal such that many baseband signals can be multiplexed and sent over the medium for
transmitting over large distances without interference. Modulation techniques can be broadly divided into analog
modulation techniques and digital modulation techniques. Amplitude modulation (AM) and frequency
modulation (FM) are the widely used analog modulation techniques. In AM, the information is contained in the
amplitude of the carrier. In FM, the frequency deviation of the carrier contains the information. AM and FM are
used extensively in broadcasting audio and video. The important digital modulation techniques are amplitude
shift keying (ASK), frequency shift keying (FSK), and phase shift keying (PSK). In ASK, binary digits are
represented by the presence or absence of the carrier. In FSK, the binary digits are represented by two
frequencies of the carrier. In PSK, the binary values are represented by different values of the phase of the
carrier. ASK is used in optical fiber communication. FSK and PSK are used when the transmission medium is
cable or radio. When designing a communication system, the modulation scheme is chosen, keeping in mind
the bandwidth of the modulated signal, ease of implementation of the modulator/demodulator, and noise
immunity.
References
G. Kennedy and B. Davis. Electronic Communication Systems. Tata McGraw Hill Publishing Company
Limited, 1993.
S. Haykin. Communication Systems, Third Edition. John Wiley &amp; Sons, 1994.
The Web sites of digital signal processor (DSP) chip manufacturers such as Analog Devices, Motorola,
Lucent Technologies, Texas Instruments, and others provide a wealth of information on modulations and
development of modems using DSP.
Questions
Explain the need for modulation. 1.
List the various analog and digital modulation techniques. 2.
What criteria are used for comparing different modulation schemes? 3.
Explain the various analog modulation schemes. 4.
Explain the various digital modulation schemes. 5.
Exercises
1. For the bit pattern 1 0 1 1 0 1 1 1 0 0 1, draw the modulated waveform signals if the modulation
used is (a) ASK, (b) FSK, (c) BPSK.
2. Write a C program to generate a carrier of different frequencies and to generate amplitude
modulated waves if the modulating signal is a sine wave of 1kHz. Give a provision to change the
modulation index.
3. Make a comparative statement on different digital modulation techniques.
4. Find out the modulation techniques used in (a) Global System for Mobile Communication (GSM);
(b) Bluetooth; (c) IEEE 802.11 local area networks; and (d) digital subscriber lines (DSL).
5. If the bandwidth of a modulating signal is 20kHz, what is the bandwidth of the amplitude modulated
signal?
6. If the bandwidth of a modulating signal is 20kHz and the frequency deviation used in frequency
modulation is 75kHz, what is the bandwidth of the frequency-modulated signal?
Answers
1. The code segments for generation of ASK and FSK signals are given in Listing C.4. The screen shots are
given in Figure C.5 and Figure C.6.
Listing C.4: Generation of ASK and FSK waveforms.
/* To draw ASK waveform for 1 kHz tone */
void CSendDialog::ASKPlotter()
{
int n, j1, j2, var=0, mm=0;
int i=0, xr[8], bit;
char c;
wn=8*0.01745 ;
CWindowDC dc(GetDlgItem(IDC_SINE));
LOGBRUSH logBrush;
logBrush.lbStyle =BS_SOLID;
logBrush.lbColor=RGB(0,255,0);
CPen pen(PS_GEOMETRIC | PS_JOIN_ROUND,1, &logBrush);
dc.SelectObject(&pen);
dc.SetTextColor(RGB(255,255,255));
CString strData;
strData="My Sample Text";
for(int ii=0;ii<strData.GetLength();ii++)
{
c = strData[ii];
dc.TextOut(20,10,"Amplitude1 = 1V");
dc.TextOut(20,30,"Amplitude2 = 2V");
dc.TextOut(150,10,"Freq = 1200Hz ");
var=0;
xs=0;ys=0;
for(j2=0,i=0;j2<8;j2++){
bit=c%2;
xr[i]=bit;
i=i+1;
c=c/2;
}
for(i1=0;i1<=360;i1++){
ya[i1]=amp1*sin(wn*i1);
yb[i1]=amp2*sin(wn*i1);
}
for(n=0;n<8;n++){
if(xr[i-1]==1){
PlotSine1(&dc, var); i--;
}
else{
PlotSine2(&dc, var); i--;
}
var=var+45;
}
incr=0;
_sleep(200);
CRect rect;
m_sine.GetClientRect(rect);
dc.FillSolidRect(rect, RGB(0,0,0));
}// End For str.Length
m_start.SetWindowText("Send Code");
}
/* To draw the sin waveform when signal is present */
void CSendDialog::PlotSine1(CWindowDC *dc, int var)
{
if(incr!=0)
incr-=1;
flagctrl=1;
line(dc, 25, 110, 385, 110);
for( i1=var; i1<=var+45; i1=i1+1 )
{
xe=i1; ye=ya[i1];
spect[incr+=1]=ya[i1];
line(dc, 25+xs, 110-ys, 25+xe, 110-ye);
xs=xe;
ys=ye;
}
}
/* To draw the sin waveform when signal is not present */
void CSendDialog::PlotSine2(CWindowDC *dc, int var)
{
if(incr!=0)
incr-=1;
line(dc, 25, 110, 385, 110);
for( i1=var; i1<=var+45; i1=i1+1 )
{
xe=i1; ye=yb[i1];
spect[incr+=1]=yb[i1];
line(dc, 25+xs, 110-ys, 25+xe, 110-ye);
xs=xe;
ys=ye;
}
flagctrl=0;
}
/* To draw the horizontal line */
void CSendDialog::line(CWindowDC *dc, short x1, short y1, short
x2, short y2)
{
dc->BeginPath();
dc->MoveTo(x1,y1);
dc->LineTo(x2,y2);
dc->CloseFigure();
dc->EndPath();
dc->StrokePath();
}
/* To draw FSK waveform for 1 kHz tone */
void CSendDialog::FSKPlotter()
{
int n, j1, j2, var=0;
int i=0, xr[1000], bit;
char c;
wn1 = 8*0.01745 ;
wn2 = 32*0.01745;
CWindowDC dc(GetDlgItem(IDC_SINE));
LOGBRUSH logBrush;
logBrush.lbStyle = BS_SOLID;
logBrush.lbColor = RGB(0,255,0);
CPen pen(PS_GEOMETRIC | PS_JOIN_ROUND, 1, &logBrush);
dc.SelectObject(&pen);
dc.SetTextColor(RGB(255,255,255));
CString strData;
strData="My Sample Text";
for(int ii=0;ii<strData.GetLength();ii++)
{
c=strData[ii];
dc.TextOut(20,10,"Freq1 = 1200Hz");
dc.TextOut(20,30,"Freq2 = 2000Hz");
var=0;
xs=0;
ys=0;
for(j2=0,i=0;j2<8;j2++){
bit=c%2;
xr[i]=bit;
i++;
c=c/2;
}
for(i1=0;i1<=360;i1++){
ya[i1] = amp1*sin(wn1*i1);
yb[i1] = amp1*sin(wn2*i1);
}
for(n=0;n<8;n++){
if(xr[i-1]==1){
PlotSine1(&dc, var); i--;
}else{
PlotSine2(&dc, var); i--;
}
var = var + 45;
}
incr=0;
_sleep(200);
CRect rect;
m_sine.GetClientRect(rect);
dc.FillSolidRect(rect, RGB(0,0,0));
}
}
Figure C.5 shows a screenshot for ASK waveform for data.
Figure C.5: ASK waveform.
Figure C.6 shows a screenshot of an FSK waveform.
Figure C.6: FSK waveform.
2. The code segment for generation of a 1kHz sine wave is given in Listing C.5. You can use this as the
modulating signal to generate the modulated signals. The waveform is shown in Figure C.7.
Listing C.5: To generate a 1kHz sine wave.
/* To Display the signal on the screen*/
void CSingleToneDlg::Tone(int i)
{
CWindowDC dc(GetDlgItem(IDC_SINE));
CRect rcClient;
LOGBRUSH logBrush;
logBrush.lbStyle =BS_SOLID;
logBrush.lbColor=RGB(0,255,0);
CPen pen(PS_GEOMETRIC | PS_JOIN_ROUND,1, &logBrush);
dc.SelectObject(&pen);
dc.SetTextColor(RGB(255,255,255));
while(continueThread){
m_sine.GetClientRect(rcClient);
dc.FillSolidRect(rcClient, RGB(0,0,0));
dc.MoveTo(0,rcClient.bottom/2);
int x, y;
dc.MoveTo(0,rcClient.bottom/2);
for (x =0 ; x < (rcClient.right); x++) // display Input
{
y = rcClient.bottom/2 - ToneNextSample();
dc.LineTo(x, y);
}
Sleep(200);
}
}
/* To initialize the frequency*/
void CSingleToneDlg::ToneInitSystem(double Freq)
{
T2PI = 2*TPI;
TSampleNo = 0;
ToneFreq = Freq;
TWT = ToneFreq*0.000125;
TWTn = 0;
TSampleNo = 0;
}
/* To calculate the next sample value */
int CSingleToneDlg::ToneNextSample()
{
int TISample;
double TSample;
int c;
TSampleNo++;
TSample = KTONE_AMPL*sin(T2PI*TWTn);
TWTn += TWT;
if (TWTn > 1.0) TWTn -= 1.0;
TISample = (int) TSample;
return TISample;
}
Figure C.7 shows a 1kHz sine wave.
Figure C.7: 1kHz sine wave.
3. Comparison of modulation techniques is done based on the noise immunity, bandwidth requirement, error
performance, and implementation complexity of the modulator/demodulator. ASK is not immune to noise.
QPSK occupies less bandwidth, but implementation is complex. The waterfall curves given in Chapter 10
give the performance of different modulation techniques.
4. The modems used in radio systems are called radio modems, and the modems used on wired systems are
called line modems. GSM and Bluetooth are radio systems. The modulation used in GSM is GMSK (Gaussian
minimum shift keying). Bluetooth uses Gaussian FSK. There are various standards for line modems specified
by the ITU. These are
V.24, V.32, V.90, and so on.
5. If the bandwidth of the modulating signal is 20kHz, the bandwidth of the amplitude-modulated signal is
40kHz.
6. If the bandwidth of a modulating signal is 20kHz and the frequency deviation used in frequency modulation
is 75kHz, the bandwidth of the frequency modulated signal is 2(20 + 75) kHz = 190kHz.
Projects
1. Develop a software package that can be used for teaching different modulation schemes. The graphical
user interface (GUI) should (a) facilitate giving a bit stream (1010100) or a sine wave as input; (b)
enable the user to select a modulation technique (AM, FM, ASK, FSK, and so on); and (c) select the
modulation parameters such as carrier frequency for AM and FM, modulation index for AM, frequency
deviation for FM, and so forth. The bit pattern/modulating signal and the modulated signal should be
displayed as output.
2. Digital signal processors are used extensively for modulation. Using a DSP evaluation board (from
Analog Devices, Texas Instruments, Motorola, or Lucent Technologies), develop DSP software to
generate various modulations.
3. Using MATLAB, generate various modulations. You can obtain an evaluation copy of MATLAB from
http://www.mathworks.com.
4. Study the various modulation schemes used in line modems. Make a list of various ITU-T standards for
line modems, such as V.24, V.32, and so forth.
Chapter 10: Issues in Communication System Design
OVERVIEW
To design a communication system according to user requirements is a challenging task because the
requirements differ from user to user and there are many design trade-offs to be considered. The design of a
communication system has to be carried out keeping in view the following factors:
What are the information sources? Data, voice, fax, video, or all of them? Depending on the requirements,
the bandwidth requirements will be different.
What is the coverage area? The coverage area decides which transmission medium has to be chosen, and
sometimes a combination of media may be required (for instance, a combination of twisted pair and
satellite).
Is a secure network required? For defense and corporate networks, a highly secure system is required to
ensure that the information is kept confidential. Special security features have to be incorporated in such
cases.
What are the performance criteria? For data applications, particularly in banking and other financial
transactions, performance requirements are very stringent. For instance, even a one-bit error in a million
bits transmitted is not acceptable. Such criteria call for efficient modulation techniques and error
detection/correction mechanisms. On the other hand, for voice and video applications, delay should be
minimal.
What are the signaling requirements? Is a separate signaling channel/network required?
Which national/international standards have to be followed? The days of providing proprietary solutions are
gone; every communication system has to be designed according to the available international standards.
Before designing a communication system, the relevant standards need to be studied, and the design has
to be carried out.
Given an unlimited budget, we can design a world-class communication system for every user segment.
But the user is always constrained by budget. For a given budget, to design the optimal system that meets
the requirements of the user is of course the biggest challenge, and as usual, we need to consider the
various trade-offs.
In this chapter, we discuss the issues involved in designing communication systems. We also study the
important aspects in designing radio communication systems. The special attraction of radio systems is that
they provide mobility to users, but every attraction comes at a premium: radio systems pose special design
challenges.
When designing a communication system, the following requirements need to be considered: coverage area,
information sources, security issues, performance issues, signaling requirements, international/national
standards to be followed, and the cost.
10.1 DATA RATES
In a digital system, the information to be transmitted is converted into binary data (ones and zeros). In the case
of text, characters are converted into ASCII format and transmitted. In audio or video, the analog signal is
converted into a digital format and then transmitted.
To make best use of a communication channel, the data rate has to be reduced to the extent possible without
compromising quality. All information (text, graphics, voice, or video) contains redundancy, and this redundancy
can be removed using compression techniques. Use of low bit rate coding techniques (also called source
coding techniques) is very important to use the bandwidth efficiently. Particularly in radio systems, where radio
bandwidth has a price, low-bit rate coding is used extensively.
When designing a communication system, the designer has to consider the following issues related to
information data rates:
What are the information sources to the communication system: data, voice, fax, video, or a combination of
these?
How many information sources are there, and is there a need for multiplexing them before transmitting on
the channel?
How many information sources need to use the communication channel simultaneously? This determines
whether the channel bandwidth is enough, whether multiple access needs to be used, etc.
If the channel bandwidth is not sufficient to cater to the user requirements, the designer has to consider
using data compression techniques. A trade-off is possible between quality and bandwidth requirement. For
instance, voice signals can be coded at 4.8kbps, which allows many more voice channels to be
accommodated on a given communication channel, but the quality would not be as good as 64kbps
PCM-coded voice.
The services to be supported by the communication system (data, voice, fax, and video services) decide the
data rate requirement. Based on the available communication bandwidth, the designer has to consider low bit
rate coding of the various information sources.
Note: Compression techniques can be divided into two categories: (a) lossless compression techniques
and (b) lossy compression techniques. File compression utilities such as WinZip are lossless
because the original data is obtained exactly by unzipping the file. Compression techniques for voice, image,
and video are lossy because compression causes some degradation of quality.
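The lossless case can be illustrated with a short Python sketch using the standard zlib module (which implements the same DEFLATE algorithm family used by zip utilities): the redundancy in the data is what the compressor removes, and decompression restores the input bit for bit.

```python
import zlib

# Lossless compression: the original data is recovered exactly.
text = b"AAAABBBCCD" * 100          # highly redundant data compresses well
compressed = zlib.compress(text)
restored = zlib.decompress(compressed)

assert restored == text             # bit-for-bit identical: no information lost
print(len(text), len(compressed))   # the compressed form is much smaller
```

A lossy voice or video coder, by contrast, discards information the human ear or eye is unlikely to miss, so the original signal can never be recovered exactly.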
10.2 ERROR DETECTION AND CORRECTION
Source coding techniques are used to reduce the redundancy in the signal. Because the transmission
medium introduces errors, we need to devise methods so that the receiver can either detect the errors or
correct them. To achieve this, error detection techniques and error correction techniques are used. These
techniques increase the bandwidth requirement, but they provide reliable communication. Error detection is
typically done through a cyclic redundancy check (CRC) and error correction through forward error correction (FEC).
Because the communication channel introduces errors in the bit stream, error detection techniques need to be
incorporated. If errors are detected, the receiver can ask for retransmission. If retransmissions have to be
reduced, error correction techniques need to be incorporated.
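The detection side can be sketched with a CRC-32 check, using Python's standard zlib.crc32 (a real link layer would use whatever CRC polynomial its protocol mandates): the sender appends a checksum, and the receiver recomputes it over the received payload.

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    """Sender: append a 4-byte CRC-32 checksum to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Receiver: recompute the CRC and compare with the received one."""
    payload, received_crc = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received_crc

frame = frame_with_crc(b"hello, channel")
assert check_frame(frame)                          # clean frame passes

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit in transit
assert not check_frame(corrupted)                  # error detected; request retransmission
```

When the check fails, an ARQ protocol asks the sender to retransmit; FEC codes instead add enough redundancy for the receiver to repair the error itself.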
Note: In voice and video communications, a higher bit error rate can be tolerated. Even if there is one error
in 10,000 bits, there will be no perceptible difference in voice/video quality. However, for data
applications, very reliable data transfer is a must.
10.3 MODULATION TECHNIQUES
For designing an analog communication system, analog modulation techniques such as AM and FM are used.
If you are designing a digital communication system, the choices are digital modulation techniques such as
ASK, FSK, and PSK. Many variations of these basic schemes are available that give slightly different
performance characteristics. Choice of a modulation scheme needs to take into consideration the performance
requirements as well as the availability of hardware for implementing the modulator and demodulator.
In a communication system, an important design parameter is the Bit Error Rate (BER). To achieve a good BER (to
reduce the bit errors as much as possible), the Signal-to-Noise Ratio (SNR) should be high. SNR and Eb/N0 are
related by the formula

Eb/N0 = S / (N0 R)

where S is the signal power, R is the data rate, and N0 is the noise power density in watts/Hertz (Eb, the
energy per bit, equals S/R). The total noise in a signal with bandwidth of B is given by N
= N0 B. Hence,

Eb/N0 = (S/N) (B/R)
The BER can be reduced by increasing the Eb/N0 value, which can be achieved either by increasing the
bandwidth or by decreasing the data rate.
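As an illustrative calculation, the standard closed-form BER expression for coherent BPSK is Pb = Q(sqrt(2 Eb/N0)), which can be evaluated with the complementary error function; the example Eb/N0 values below are assumed for illustration.

```python
import math

def ber_bpsk(ebn0_db: float) -> float:
    """Theoretical BER of coherent BPSK: Pb = Q(sqrt(2*Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)               # convert dB to a linear ratio
    return 0.5 * math.erfc(math.sqrt(ebn0))   # Q(x) = 0.5 * erfc(x / sqrt(2))

# Walking down the waterfall curve: more Eb/N0, fewer errors.
for ebn0_db in (4, 7, 10):
    print(f"Eb/N0 = {ebn0_db:2d} dB -> BER = {ber_bpsk(ebn0_db):.2e}")
```

Each extra decibel of Eb/N0 buys a steep drop in BER, which is exactly the "waterfall" shape of the curves in Figure 10.1.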
The curves shown in Figure 10.1 give the performance of the modulation schemes. These curves are known as
waterfall curves. In designing a communication system, based on the required BER, the Eb/N0 value is
obtained for a given modulation scheme. PSK and QPSK perform better compared to ASK and FSK because,
for a given BER, the value of Eb/N0 is less, and hence good performance can be achieved with less signal
energy. Another criterion to be considered is the ease of implementing the modulator/demodulator. It is
much easier to implement ASK and FSK modulators and demodulators as compared to PSK and its variations.
Figure 10.1: Performance curves for Digital Modulation Systems.
For digital communication systems, the BER is an important design parameter. BER can be reduced by
increasing the value of Eb/N0 (where Eb is the energy per bit and N0 is the noise power density).
Note: Choosing a specific modulation technique depends on two important factors: BER performance and
the complexity of the modulator/demodulator circuitry.
10.4 PERFORMANCE CRITERIA
The performance of a digital communication system is measured in terms of the Bit Error Rate (BER). BER is
the ratio of the number of bits received in error to the total number of bits received.
Depending on the application, the BER requirements vary. For applications such as banking, a very low BER is
required, of the order of 10^-12; i.e., out of 10^12 bits, only one bit can go wrong (and even that has to be
corrected or a retransmission requested). For applications such as voice, a BER of 10^-4 is acceptable.
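A quick back-of-the-envelope calculation shows what these BER figures mean in practice; the 2 Mbps link speed below is an assumed example value.

```python
def seconds_between_errors(ber: float, data_rate_bps: float) -> float:
    """Average time between bit errors = 1 / (BER * data rate)."""
    return 1.0 / (ber * data_rate_bps)

rate = 2_000_000  # an assumed 2 Mbps link
print(seconds_between_errors(1e-4, rate))    # voice-grade BER: an error every 5 ms
print(seconds_between_errors(1e-12, rate))   # banking-grade BER: one error in ~500,000 s
```

At a voice-grade BER the link takes hundreds of hits per second without the listener noticing; at a banking-grade BER, days pass between errors on the same link.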
As we discussed in the previous section, BER is dependent on the modulation technique used. The
performance in terms of the BER is also dependent on the transmission medium. A satellite channel, for
example, is characterized by a high bit error rate (generally, around 10^-6); in such a case, a higher-layer
protocol (data link layer) has to implement error detection techniques and automatic repeat request (ARQ)
protocols such as stop-and-wait and sliding window.
Because it is not possible to achieve a completely error-free transmission, errors should be detected or
corrected using error-detection or error-correcting codes. After detection of errors, using automatic repeat
request (ARQ) protocols, the receiver has to request retransmission of data.
10.5 SECURITY ISSUES
Security is of paramount importance because systems are prone to attacks. The various security threats can be
the following:
Interruption: The intended recipient is not allowed to receive the data; this is an attack on the availability of
the system.
Interception: The intended recipient receives the data, but unauthorized persons also receive the data; this is
an attack on the confidentiality of the data.
Modification: An unauthorized person receives the data, modifies it, and then sends it to the intended
recipient; this is an attack on the integrity of the data.
Fabrication: An unauthorized person generates the data and sends it to a person; this is an attack on the
authenticity of the system.
To overcome these security threats, the data has to be encrypted. Encryption is a mechanism wherein the user
data is transformed using an encryption key. Only those who have the corresponding key can decrypt the data.
There are two possibilities: link encryption and end-to-end encryption. In link encryption, the data is
encrypted at the transmitting end of each communication link and decrypted at the receiving end of that link.
In end-to-end encryption, the user encrypts the data and sends it over the communication link, and only the
final recipient decrypts the data. To provide high security, both types of encryption can be employed. Note that
encryption does not increase the data rate (or bandwidth) requirement. The length of the encryption key decides
how safe the encryption mechanism is. Though 56- and 64-bit keys were used in earlier days, now 512- and
1024-bit keys are being used for highly secure communication systems.
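The idea of key-based transformation can be illustrated with a toy XOR stream cipher in Python (purely illustrative; real systems use standardized ciphers such as AES or RSA, and this construction must not be used in practice). Note that the ciphertext is exactly the same length as the plaintext, which is why encryption does not inflate the bandwidth requirement.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (toy construction)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Encrypt/decrypt by XOR with a key-derived stream (same operation both ways)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared secret"
plaintext = b"transfer $100 to account 42"
cipher = xor_crypt(plaintext, key)

assert cipher != plaintext                    # unreadable without the key
assert len(cipher) == len(plaintext)          # no bandwidth expansion
assert xor_crypt(cipher, key) == plaintext    # the same key recovers the data
```

Anyone intercepting the cipher bytes learns nothing useful without the key; only holders of the key can invert the transformation.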
The major security threats are: interruption, interception, modification, and fabrication. The data is encrypted at
the transmitting end to overcome these security threats. At the receiving end, the data is decrypted.
We will study security issues in detail in Chapter 38, "Information Security."
Note: For encryption, there will be an encryption algorithm and an encryption key. The encryption algorithm
specifies the procedure for modifying the data using the encryption key. The algorithm can be made
public (known to everyone), but the encryption key is kept secret.
10.6 RADIO SYSTEM DESIGN ISSUES
Radio system design poses special problems because of the special nature of the radio signal propagation. The
important design issues are:
Frequency of operation: Radio systems cannot be operated at any frequency of our choice. Frequency
allocation needs to be obtained from the centralized licensing authority of the government. Only certain bands,
such as the ham radio bands and the Industrial, Scientific, and Medical (ISM) band, are unlicensed, and anyone
can use these bands without obtaining a license from the government authorities.
Radio survey: Radio frequency propagation characteristics depend on many factors, such as the natural terrain
(presence of hills, valleys, and lakes) and artificial terrain (presence of high-rise buildings). A radio survey must
be carried out to decide where to place the antennas to achieve the maximum possible coverage. Multipath
fading causes signal degradation, so measures have to be taken to reduce its effect.
Note: The propagation characteristics differ for different frequency bands. A number of mathematical
models are available to analyze radio propagation. The natural terrain (presence of hills, lakes,
greenery) and artificial terrain (presence of tall buildings) also affect radio propagation.
Line of sight communication: Some radio systems are line of sight systems, that is, there should not be any
obstructions such as tall buildings/hills between the transmitting station and the receiving station. In the case of
broadcasting applications, the transmitting antennas have to be located at the right places to obtain the
maximum coverage. Systems such as AM broadcast systems do not have this limitation because the radio
waves are reflected by the ionosphere, and hence the range is very high.
Path loss calculations: When a signal is transmitted with a particular signal strength, the signal traverses a
large distance and becomes attenuated. The loss of signal strength due to the propagation in the atmosphere
and attenuation in the communication subsystems (such as filters and the cable connecting the radio
equipment to the antenna) is called path loss. The path loss calculations have to be done to ensure that the
minimum required signal strength is available to the receiver to decode the information content. The receiver
should be sensitive enough to decode the signals. The required BER, SNR, gain of the antenna, modulation
technique used, rain attenuation, and gain of the amplifiers used are some of the parameters considered during
the path loss calculations.
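A simple path loss calculation can be sketched with the standard free-space path loss formula, FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44; a real design adds terrain, fading, and rain margins on top of this. The transmit power, antenna gains, and losses below are assumed example values.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20log10(d_km) + 20log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def received_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, path_loss_db, misc_loss_db=0.0):
    """Simple link budget: Pr = Pt + Gt + Gr - Lpath - Lmisc (all in dB units)."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - path_loss_db - misc_loss_db

# Assumed example link: 30 km in the 2.4 GHz ISM band, 1 W transmitter,
# 12 dBi antennas at both ends, 3 dB of cable and filter losses.
loss = fspl_db(distance_km=30, freq_mhz=2400)
pr = received_power_dbm(30, 12, 12, loss, misc_loss_db=3)
print(f"path loss = {loss:.1f} dB, received power = {pr:.1f} dBm")
# The design check: pr must exceed the receiver sensitivity (e.g., -90 dBm)
# by a comfortable fade margin.
```

If the received power falls below the receiver sensitivity, the designer must increase transmit power, use higher-gain antennas, or shorten the hop.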
Note: For all radio systems, path loss calculations are very important. Based on the path loss calculations,
the receiver sensitivity, antenna gain, amplifier gains, etc. are determined when designing radio
systems.
Rain attenuation: The attenuation of the radio signals due to rain varies, depending on the frequency band.
For instance, in satellite communication, at 17/12GHz the rain attenuation is very high as compared to 6/4GHz.
This aspect has to be taken into consideration in the path loss calculations.
Radio bandwidth: Radio spectrum being a limited natural resource, the bandwidth of a radio channel has to be
fully utilized. To achieve this, efficient source coding techniques have to be used. For example, to transmit
voice over a radio channel, it is not advisable to use 64kbps PCM (though many systems still use it). A better
approach would be to use low bit rate coding techniques (such as ADPCM, LPC, or its variations) so that in a
given radio bandwidth, more voice channels can be pumped in.
Note: As the radio spectrum is a limited natural resource, the radio channel has to be fully utilized. Using low bit
rate coding of voice/video signals and choosing an efficient modulation technique are very important
in radio system design.
Radio channels: A radio channel consists of a pair of frequencies: one frequency from the base station to the end
station and one frequency from the end station to the base station. A minimum separation is required between the
uplink and downlink frequencies.
Multiple access: Radio systems use multiple access techniques to make efficient use of bandwidth. FDMA,
TDMA, and CDMA systems, as discussed earlier, have different spectrum requirements and different
complexities.
All these issues need to be kept in mind when designing radio systems.
Design of radio systems involves special issues to be addressed. These include frequency of operation, radio
propagation characteristics, path loss calculations, rain attenuation, efficient usage of radio spectrum through
low-bit rate coding of voice and video signals, and usage of multiple access techniques.
10.7 TELECOMMUNICATION STANDARDS
In telecommunication system design, the standards play a very important role. The days when organizations
used to develop proprietary interfaces and protocols are gone. Before embarking on a system design, the
designer has to look at international/national standards for the interfaces and protocols. The various standards
bodies for telecommunication/data communications are:
American National Standards Institute (ANSI)
Electronics Industries Association (EIA)
European Telecommunications Standards Institute (ETSI)
Internet Engineering Task Force (IETF)
International Organization for Standardization (ISO)
International Telecommunication Union, Telecommunication Standardization Sector (ITU-T), earlier known as
CCITT
Institute of Electrical and Electronics Engineers (IEEE)
Throughout this book, we will mention a number of relevant standards related to telecommunication systems
and interfaces as well as communication protocols. Referring to the standards documents is very important to
get an in-depth knowledge of the specifications, particularly during implementation.
Nowadays, communication system design is driven by international standards. The standards formulated by
standardization bodies such as ANSI, EIA, ETSI, IETF, ISO, ITU-T and IEEE need to be followed while
designing communication systems and protocols.
10.8 COST
The cost is the most important of the design parameters. Designing a communication system that meets all the
performance criteria at minimum cost is the major challenge for communication engineers. The choice of the
transmission medium (twisted pair, coaxial cable, radio, fiber, etc.) needs to keep the cost in view. In
communication system design, engineers do not develop each and every subsystem. The various subsystems
are procured from different vendors and integrated. In such a case, the engineer has to choose the subsystems
that meet all the performance requirements and are cost effective. Experience is the best teacher for choosing
the right subsystems.
Cost is the most important design parameter while designing communication systems. To design a system that
meets all the performance requirements at the lowest cost is the challenge for all communication engineers.
Summary
In designing a communication system, a number of issues need to be taken into consideration. These include
services to be supported such as data, voice, video, etc., which decide the data rate requirements; error
detection and correction mechanisms; type of modulation; performance criteria such as bit error rates; security
aspects; standards to be followed for the implementation; and the cost. These issues are discussed in detail in
this chapter.
References
K. Krechmer. "Standards Make the GIH Possible." IEEE Communications Magazine, Vol. 34, No. 8, August
1996. This paper gives information about the history of standardization and the various standards
authorities.
IEEE Communications Magazine, Vol. 34, No. 8, August 1996. This is a special issue on "Emerging Data
Communications Standards."
http://www.ansi.org The Web site of American National Standards Institute.
http://www.iso.org The Web site of International Organization for Standardization.
http://www.itu.int The Web site of International Telecommunications Union.
http://www.etsi.org The Web site of European Telecommunications Standards Institute.
http://www.amdah.com The Web site of Fiber Channel Association.
http://www.ietf.org The Web site of Internet Engineering Task Force.
Questions
1. List the various issues involved in telecommunication systems design.
2. What are the special issues to be considered for radio system design?
3. List some important international standardization bodies.
4. What criteria are used to compare different modulation techniques?
5. What are the common security threats? Explain the concept of encryption.
Exercises
1. Conduct a survey of the various international and national standardization bodies and prepare a
report.
2. Conduct a survey of commercial products available for planning and designing a cellular mobile
communication system.
3. Study the performance of different digital modulation schemes using Bit Error Rate (BER) and
bandwidth requirement as the performance criteria.
4. Design a local area network for your college campus. Design it based on the available international
standards. List the various options available.
Answers
1. The various standardization bodies are
American National Standards Institute (ANSI)
Advanced Television Systems Committee (ATSC)
Alliance for Telecommunications Industry Solutions (ATIS)
Cable Television Laboratories, Inc. (CableLabs)
European Committee for Standardization (CEN)
European Committee for Electrotechnical Standardization (CENELEC)
European Computer Manufacturers Association (ECMA)
Electronic Industries Alliance (EIA)
European Telecommunications Standards Institute (ETSI)
Federal Communications Commission (FCC)
International Electrotechnical Commission (IEC)
Institute of Electrical and Electronics Engineers (IEEE)
International Organization for Standardization (ISO)
Internet Society (ISOC)
International Telecommunications Union (ITU)
Telecommunications Industry Association (TIA)
World Wide Web Consortium (W3C)
A number of industry consortia and special interest groups (SIGs) formulate the telecommunications
specifications. Some such consortia and groups are
10 Gigabit Ethernet Alliance
ATM Forum
Bluetooth SIG
CDMA Development Group
DSL Forum
Enterprise Computer Technology Forum
Frame Relay Forum
HomeRF Working Group
International Multimedia Telecommunications Consortium
Infrared Data Association
Object Management Group
Personal Communications Industry Association
Personal Computer Memory Card International Association
Satellite Broadcasting and Communications Association
SONET Interoperability Forum
Society for Motion Picture & Television Engineers
WAP Forum
Wireless Ethernet Compatibility Alliance
Wireless LAN Association
2. To plan and design a cellular mobile communication system, a radio survey needs to be carried out. A radio
survey involves study of the terrain and deciding where to install the base stations. The radio propagation
characteristics need to be studied for which computer models are available. Leading mobile communication
equipment suppliers such as Motorola, Ericsson, and Nokia provide these tools.
3. BER is a design parameter. You need to find out the BER required for the communication system being
designed. For this BER, the modulation technique, which requires the least energy per bit, is the best.
However, other considerations such as the implementation complexity or cost of the modulator/demodulator
also play an important role while choosing a modulation technique.
4. The standards for various types of LANs have been formulated by the professional body IEEE. IEEE 802.3
LAN is based on the Ethernet, FDDI is for optical fiber-based LANs, and IEEE 802.11 is for wireless LANs.
You can design a backbone network based on fiber and then interconnect 802.3 LANs and wireless LANs to
obtain a campus-wide network.
Projects
1. Prepare the paper design of a communication system to provide Internet access to 50 communities that
are located within a radius of about 30 km from the district headquarters. Consider the various design
options: data rate support, transmission medium, multiple access technique, performance, delay, and of
course the cost of the system.
2. Simulate a digital communication system. The software has to (a) generate a continuous stream of bits,
(b) introduce errors by converting 1 to 0 or 0 to 1 at random places in the bit stream, and (c) modulate
the bit stream using different modulations.
3. Prepare a paper design of a communication system interconnecting all the colleges providing technical
education in your state. The system is to be used for providing distance education. From a central
location (state capital), lectures have to be broadcast with a video data rate of 384kbps, and audio and
data at 64kbps. At each college, there should be a provision to send data or voice to interact with the
professor at the central location.
4. Design a communication system that links two branch offices of an organization. The two branches are
separated by a distance of 10km. The employees of the two branches should be able to talk to each
other, exchange computer files, and also do video-conferencing using 64kbps desktop video
conferencing systems. Make appropriate assumptions while preparing the design.
Chapter 11: Public Switched Telephone Network
OVERVIEW
Ever since Alexander Graham Bell made the first telephone call, the telephone network has expanded by leaps
and bounds. The technical term for the telephone network is Public Switched Telephone Network (PSTN).
PSTN now interconnects the whole world to enable people to communicate using the most convenient and
effective means of communication: voice.
In this chapter, we will study the architecture of the PSTN and also discuss the latest technologies being
introduced in the oldest network to provide better services to telephone subscribers. The PSTN is becoming
more and more digital, and hence our description of the PSTN is oriented toward digital technology.
11.1 PSTN NETWORK ELEMENTS
The elements of PSTN are shown in Figure 11.1. The PSTN consists of:
Subscriber terminals
Local loops
Switches (or exchanges)
Trunks
Figure 11.1: Elements of PSTN.
11.1.1 The Subscriber Terminal
In its simplest form, the subscriber terminal is the ordinary telephone with a keypad to dial the numbers. There
are two types of dialing: (a) pulse dialing; and (b) DTMF dialing.
The Public Switched Telephone Network (PSTN) consists of subscriber terminals, local loops, switches or
exchanges, and trunks.
Pulse dialing: In pulse dialing, when a digit is dialed, a series of pulses is sent out. When the user dials 1, one
pulse is transmitted to the exchange; when 2 is dialed, two pulses are sent, and so on; when 0 is dialed, ten
pulses are sent. The exchange uses a pulse counter to recognize the digits. Since pulses are likely to be
distorted over the medium due to attenuation, pulse recognition accuracy is not very high. Many old switches
and telephones support only pulse dialing, though pulse dialing is slowly becoming outdated.
DTMF dialing: DTMF stands for Dual Tone Multi Frequency. DTMF dialing is also known as tone dialing or
touch-tone dialing. When a digit is dialed, a combination of two sine waves is sent. The various combinations of
tones are shown in Figure 11.2. When 1 is dialed, a combination of 697Hz and 1209Hz is sent from the
terminal to the exchange. A DTMF recognition chip is used at the exchange to decode the digits. DTMF
recognition is highly accurate and is becoming predominant. Most present-day telephones support DTMF.
Figure 11.2: DTMF digits.
The two types of dialing supported by the subscriber terminals are (a) pulse dialing and (b) tone dialing or
DTMF dialing. In pulse dialing, for each digit, a series of pulses is sent to the switch. In tone dialing, for each
digit, a combination of two sine waves is sent.
Note: DTMF dialing is more reliable as compared to pulse dialing. Pulses are likely to get distorted due to
transmission impairments, and so pulse dialing is not always reliable. On the other hand, tone
detection is very reliable, and hence DTMF dialing is now extensively used.
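The tone generation of Figure 11.2 can be sketched in a few lines of Python: each keypress is simply the sum of one row frequency and one column frequency, sampled here at the telephony rate of 8 kHz (the 100 ms tone duration is an assumed example value).

```python
import math

# Row and column frequencies of the DTMF keypad (Hz), as in Figure 11.2.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_samples(digit: str, duration_s: float = 0.1, rate: int = 8000):
    """Sum of the two sine waves for one keypress, sampled at the given rate."""
    low, high = DTMF[digit]
    n = int(duration_s * rate)
    return [math.sin(2 * math.pi * low * t / rate) +
            math.sin(2 * math.pi * high * t / rate) for t in range(n)]

samples = dtmf_samples("1")   # 697 Hz + 1209 Hz, the pair for digit 1
print(len(samples))
```

At the exchange, a DTMF decoder runs the reverse process: it detects which row tone and which column tone are present and maps the pair back to a digit.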
11.1.2 Local Loop
The local loop is a dedicated link between a subscriber terminal and the switch. The present local loop uses
twisted-pair copper wire. In the future, optical fiber is planned to provide high-bandwidth services to
subscribers. In remote and rural areas, where laying the cable is costly or infeasible (due to terrain such as
hills), radio is used. This wireless local loop (WLL) has many advantages: fast installation and low maintenance
costs. Moreover, it obviates the need for digging below the ground. Hence WLL deployment is also catching up,
even in urban areas.
The local loop is the dedicated link between the subscriber terminal and the switch. Twisted-pair copper wire is the
most widely used medium for the local loop. Nowadays, the wireless local loop is gaining popularity. In the future,
optical fiber will be used as the local loop to support very high data rates.
11.1.3 Switch
In earlier days, mechanical and electromechanical switches (Strowger and crossbar switches) were used
extensively. Present switches use digital technology. These digital switches have the capacity to support
several thousand to a few million telephones.
To cater to large areas, the switching system is organized as a hierarchy as shown in Figure 11.3. At the lowest
level of the hierarchy, the switches are called end offices or local exchanges. Above that, there will be toll
exchanges (class 4 switches), primary (class 3) switches, secondary (class 2) switches, and regional (class 1)
switches.
Figure 11.3: Hierarchical switching system of PSTN.
In a city, an exchange is designated as a toll exchange and acts as the gateway for all long distance calls.
Similarly, a few gateway switches carry calls from one nation to another. However, the billing for subscribers is
always done by the parent exchange (the exchange to which the subscriber is connected).
In PSTN, the switching system is organized as a hierarchical system. The local switches are connected to toll
exchanges. Toll exchanges are connected to primary switches which are in turn connected to secondary and
regional switches.
Note: The switch to which a subscriber is connected is called the parent switch. Billing is always done by
the parent switch.
11.1.4 Trunks
Trunks interconnect the switches. The interconnection between the switches through trunks is decided based
on traffic considerations as well as administrative considerations. Nowadays, trunks are mostly digital: speech is
converted to PCM format, multiplexed, and transmitted through the trunks. The trunks can be T1 or E1 links if
the switches are of small capacity (say, 512 ports). Depending on which switches are connected, the trunks are
categorized as intracity trunks and intercity trunks.
In the following sections, we will study the important technical aspects of local loops, switches, and trunks.
The switches are interconnected through trunks. Most of the trunks are digital and use PCM format for carrying
the voice traffic. E1 trunks carry 30 voice channels.
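The T1 and E1 rates follow directly from the frame structure: 8000 frames per second (one frame per 125 microsecond sampling interval), 8 bits per channel per frame, 24 channels plus one framing bit for T1, and 32 time slots (30 voice slots plus framing and signaling slots) for E1.

```python
# T1: 24 voice channels of 8 bits each per 125 us frame, plus 1 framing bit.
t1_bps = (24 * 8 + 1) * 8000
# E1: 32 time slots of 8 bits each (30 voice + 1 framing slot + 1 signaling slot).
e1_bps = 32 * 8 * 8000

print(t1_bps)  # 1544000 -> the familiar 1.544 Mbps T1 rate
print(e1_bps)  # 2048000 -> the familiar 2.048 Mbps E1 rate
```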
11.2 LOCAL LOOP
The local loop, which is a dedicated connection between the terminal equipment (the telephone) and the switch,
is the costliest element in the telephone network. Generally, from the switch, a cable is laid up to a distribution
box (also called a distribution point) from which individual cable pairs are taken to the individual telephone
instruments.
To reduce the cable laying work, particularly to provide telephone connections to dense areas such as high-rise
residential complexes, digital loop carrier (DLC) systems are being introduced. The DLC system is shown in
Figure 11.4. The telephone cables are distributed from the DLC system, and the DLC is connected to the digital
switch using a single high-bandwidth cable.
Figure 11.4: Digital loop carrier (DLC) in PSTN.
To reduce the installation and maintenance effort (such as finding out where a cable fault is), wireless local loops
are now being introduced. Wireless local loops (WLLs) using CDMA technology are becoming widespread. The
advantages of WLL are (a) low installation costs, because no digging is required; (b) low maintenance costs,
because equipment is present only at the two ends (the switch or the distribution point, and the terminal
equipment); (c) fast installation; and (d) the possibility of limited mobility.
Most of the present local loops using copper can support very limited data rates. To access the Internet using
the telephone network, the speed is generally limited to about 56kbps. Nowadays, user demands are increasing
for voice and video services that cannot be supported by the present local loops. Hence, fiber will be the best
choice so that very high bandwidth can be made available to subscribers, supporting services such as video
conferencing, graphics, etc. In the future, optical fiber will be the choice for the local loop. Experiments are
going on to develop plastic optical fibers that can take slight bends and support high data rates. Plastic optical
fiber would be the ideal choice for fiber to the home.
Because the data rate supported by twisted pair copper cable is limited, optical fiber will be used as the
preferred medium for local loop in the future to provide high-bandwidth services. For remote/rural areas,
wireless local loop is the preferred choice because of fast installation and low maintenance costs.
11.3 SWITCHING CONCEPTS
The PSTN operates on the circuit switching principle illustrated in Figure 11.5. When a subscriber calls another
subscriber, a circuit is established that is a concatenation of various channels on the trunks between the switch
connected to the calling subscriber and the switch connected to the called subscriber. Circuit switching
operation involves the following steps:
1. Call establishment
2. Data transfer (conversation)
3. Call disconnection
Figure 11.5: Circuit switching.
PSTN operates on circuit switching. For two subscribers to converse, a circuit is established between the two
subscribers, and after conversation, the circuit is disconnected. The circuit is a concatenation of various trunks
between the switches.
To establish and disconnect the calls, information needs to be passed from the subscriber to the switch and
also between the switches. This information is known as signaling information. In PSTN, the signaling is carried
by the same physical channel that is used to transmit voice. The signaling between two switches is carried on
the trunks. As shown in Figure 11.6, some trunks are assigned as signaling trunks and some as traffic (or
voice) trunks. This is known as channel associated signaling (CAS).
Figure 11.6: Channel associated signaling.
To establish a call and subsequently disconnect the call, information needs to be exchanged between the
subscriber terminal and the switch and also between the switches. For billing the subscriber and for network
management, information is exchanged between the switches. This information is called signaling information
and is carried on a signaling trunk.
Call processing software resides in the switch. The functions of the call processing software are:
To keep track of the subscriber terminals and to feed the dial tone when the subscriber goes off-hook (lifts
the telephone).
To collect the digits dialed by the subscriber. Note that the subscriber may dial a few digits and then
pause; the software should be capable of handling such cases as well.
Analyze the digits and switch the call to the right destination by seizing the trunk.
Feed various tones to the subscriber terminal (such as hunting, busy, call hold, etc.).
When the subscriber goes on-hook, free the trunk.
Keep track of the call records (known as CDRs or call details records) that contain call information such as
date and time when the call is made, the called party number, whether the call is local/long distance, and
duration of the call.
Based on the CDRs, do an offline analysis to generate billing information.
Note: The call details record (CDR) is generated by the switch. The CDR contains the details of all the calls
made. These details include the calling subscriber number, called subscriber number, date and time
when the call was initiated, duration of the call, and so on. Billing information is generated by
processing the CDR.
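The offline billing step can be sketched as follows; the record fields mirror those listed above, while the per-minute rates and the round-up-to-a-minute policy are assumed example values, since actual tariffs vary by operator.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallDetailRecord:
    """One CDR as generated by the switch at call completion."""
    calling_number: str
    called_number: str
    start_time: datetime
    duration_s: int
    is_long_distance: bool

def bill(cdr: CallDetailRecord, local_rate=0.01, ld_rate=0.05) -> float:
    """Offline billing from a CDR: charge per started minute (illustrative rates)."""
    minutes = -(-cdr.duration_s // 60)   # round the duration up to whole minutes
    rate = ld_rate if cdr.is_long_distance else local_rate
    return minutes * rate

cdr = CallDetailRecord("5551234", "5556789", datetime(2004, 1, 5, 9, 30), 130, True)
print(bill(cdr))   # 130 s -> 3 started minutes at the long-distance rate
```

In a real switch, millions of such records are written continuously and processed in batch by a separate billing system.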
The switch also contains the diagnostic software to carry out tests on the subscriber loop and the subscriber
terminals. This is done through special "line testing software" which feeds signals on the subscriber loop and
measures various parameters to check whether the local loop is OK or faulty.
Though circuit switching has been used extensively for many years, its disadvantage is that the communication
channels are not used efficiently. Particularly when voice is transmitted, nearly 40% of the time, the channel is
idle because of the gaps in the speech signal. Another drawback is that the signaling information is carried
using the same channels, resulting in inefficient usage of the channel. In Chapter 26, we will discuss how
signaling can be carried by a separate signaling network called Signaling System No. 7, which is being
introduced on a large scale in the PSTN.
The call processing software that resides on the switch carries out the following functions: collecting the
digits dialed by the subscriber, switching the call to the called subscriber by seizing the trunks, feeding the
various tones to the subscribers, freeing the trunks after the call is completed, and collecting statistics related
to the calls.
Note: Circuit switching is not an efficient switching mechanism because a lot of time is wasted in
establishing and disconnecting the circuit. An alternative switching mechanism is packet switching,
used in computer networks.
11.4 TRUNKING SYSTEMS
Two switches are connected together through trunks. The trunks are of different types:
1. Two-wire analog trunks, which are used to interconnect small switches.
2. Four-wire analog trunks, which are also used to interconnect small switches.
3. T1 carriers, which are digital trunks. Each T1 carrier carries 24 voice channels. In Europe, the equivalent
standard is referred to as the E1 trunk. Each E1 trunk supports 30 voice channels.
Generally, the switches are connected through T1 carriers. Data corresponding to 24 voice channels is
multiplexed to form the T1 carrier. For every 125 microseconds, the bit stream from each voice channel
consists of 8 bits, out of which 7 bits are data and one bit is control information. Hence, for each voice channel,
the total data consists of 7 × 8000 = 56,000 bps of voice and 1 × 8000 = 8000 bps of control information.
Small switches are interconnected using 2-wire or 4-wire analog trunks or digital T1 carriers. The T1 carrier
supports 24 voice channels using the Time Division Multiplexing (TDM) technique.
Note: In the T1 carrier, a frame consists of 193 bits: 192 bits corresponding to the data of 24 voice channels
and one additional bit for framing. The frame duration is 125 microseconds. Hence, the gross data rate
of the T1 carrier is 1.544 Mbps.
Figure 11.7: Carrier Frame Format.
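The per-channel and frame arithmetic above can be checked with a short calculation:

```python
# T1 frame arithmetic from the text: 24 channels x 8 bits + 1 framing bit,
# one frame every 125 microseconds (8000 frames per second).
CHANNELS = 24
BITS_PER_SAMPLE = 8
FRAMING_BITS = 1
FRAMES_PER_SECOND = 8000

frame_bits = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS
gross_rate = frame_bits * FRAMES_PER_SECOND
voice_bps_per_channel = 7 * FRAMES_PER_SECOND    # 7 data bits per sample
control_bps_per_channel = 1 * FRAMES_PER_SECOND  # 1 control bit per sample

print(frame_bits)               # 193 bits per frame
print(gross_rate)               # 1544000 bps = 1.544 Mbps
print(voice_bps_per_channel)    # 56000 bps
print(control_bps_per_channel)  # 8000 bps
```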
In major towns and cities, because of high traffic, T1 carriers will not suffice. In such cases, T2, T3, and T4
carriers are used. Higher capacity trunks are obtained by multiplexing T1 carriers. The standard for this digital
hierarchy is shown in Figure 11.8. Four T1 carriers are multiplexed to obtain a T2 carrier. Seven T2 carriers are
multiplexed to obtain a T3 carrier. Six T3 carriers are multiplexed to obtain a T4 carrier.
Figure 11.8: Trunks with Higher Capacities.
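As a sketch, the hierarchy rates and the synchronization overhead can be tabulated; the Mbps figures below are the standard North American rates:

```python
# Standard North American digital hierarchy rates in Mbps. Each
# multiplexed rate exceeds the sum of its tributaries because extra
# bits are added for synchronization.
T1 = 1.544
T2 = 6.312    # 4 x T1 = 6.176 Mbps plus overhead
T3 = 44.736   # 7 x T2 = 44.184 Mbps plus overhead
T4 = 274.176  # 6 x T3 = 268.416 Mbps plus overhead

t2_overhead = round(T2 - 4 * T1, 3)
print(t2_overhead)  # 0.136 Mbps of synchronization bits in a T2
```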
11.5 SIGNALING
In a telephone network, before the conversation takes place, a circuit has to be established. A lot of information
must be exchanged between the subscriber terminal and the switch, and between the switches, for the call to
materialize and later get disconnected. This exchange of information is known as signaling.
Four T1 carriers are multiplexed to obtain a T2 carrier. Note that the data rate of the T2 carrier is 6.312 Mbps
and not 4 × 1.544 Mbps (= 6.176 Mbps). The extra bits are added for synchronization. Similarly, extra bits are
added to form the T3 and T4 carriers. The digital hierarchy is standardized by ITU-T to provide high-capacity
trunks between large switches.
Signaling is used to indicate/exchange the following information:
Calling and called party numbers
Availability/non-availability of network resources such as trunks
Availability of the called party
Billing information
Network information such as busy trunks, faulty telephones etc.
Routing information regarding how the call has to be routed.
Special services such as calling card facility, toll-free numbers, the called party paying for the
call, etc.
In the telephone network, there are three types of signaling.
In-band signaling: When a subscriber lifts his telephone, he gets a dial tone which is fed by the switch. The
subscriber dials the called number and the switch interprets the number and finds out to which switch the called
subscriber is connected. The switch establishes a connection to the other switch and the other switch checks
whether the called subscriber is available. If she is available, the path is established and the conversation takes
place. When the caller puts back the telephone, the circuit is freed. Prior to the conversation and after the
conversation, the information exchanged is the signaling information. Normally, this signaling information is
exchanged in the same communication link in which the conversation takes place. This signaling is known as
in-band signaling. In-band signaling is simple, but creates problems because the tones corresponding to the
signaling fall in the voice band and cause disturbances to the speech.
The signaling information exchanged between the subscriber and the switch consists of dialed digits and the
various tones such as dial tone, busy tone, ring back tone, etc. This signaling information is carried on the local
loop using the in-band signaling mechanism.
Channel-Associated Signaling (CAS): Between two switches, separate channels are used for signaling and
information transfer as shown in Figure 11.6. For instance, when two switches are connected using an E1 link,
one time slot is used for signaling. This results in substantial savings as traffic channels are not used for
transferring the signaling information.
Common channel signaling (CCS): Another mechanism for signaling is to have a separate communication
network for exchanging signaling information. When two network elements have to exchange signaling
information, they use this independent network, and the actual voice conversation takes place using the voice
trunks. This mechanism (though it appears complex and calls for additional infrastructure) is extremely efficient
and is now being widely used.
An ITU-T standard called Signaling System No. 7 (SS7) is used for common channel signaling. SS7 uses
concepts of data communications, and we will discuss the details of SS7 in a later portion of the book (in
Chapter 26, "Signaling System No. 7").
Two switches in the PSTN exchange signaling information using dedicated time slots in the trunks. This is
known as channel associated signaling.
Note: In common channel signaling (CCS), a separate data communication network is used to exchange
signaling information. CCS is much more efficient and reliable compared to channel associated
signaling. An ITU-T standard called Signaling System No. 7 (SS7) is used for CCS. SS7 is now used
in PSTN, ISDN, and mobile communication systems.
Summary
In this chapter, the architecture of PSTN is presented. The PSTN consists of subscriber terminals, local loops,
switches, and trunks. The local loop is a dedicated link between the subscriber terminal and the switch.
Currently, twisted copper pair is used as the local loop, but optical fiber-based local loops are likely to be
installed in the future. The switches are interconnected through trunks. Normally digital trunks are used. The
basic digital trunk supports 30 voice channels and is called an E1 trunk. The PSTN operates in circuit switching
mode: a connection is established between two subscribers and, after the conversation, the circuit is
disconnected. The signaling information is exchanged between the subscriber and the switch as well as
between switches. The signaling used in PSTN is in-band signaling and channel associated signaling.
References
Ray Horak. Communications Systems and Networks, Third Edition. Wiley-Dreamtech India Pvt. Ltd., 2002.
Chapter 5 of this book is on PSTN.
50th Anniversary Commemorative Issue of IEEE Communications Magazine. May 2002. This issue traces
the major developments in communications.
Questions
1. What are the network elements of PSTN? Explain the function of each network element.
2. What is the difference between pulse dialing and DTMF dialing?
3. Explain circuit-switching operation.
4. What are the different types of signaling used in PSTN?
Exercises
1. Two switches need to be interconnected to carry 60 voice channels. Calculate the number of T1
carriers required.
2. Design a circuit to generate/decode DTMF tones. You can use a chip such as MITEL 8880.
3. Study the characteristics of an integrated circuit (IC) that does PCM coding and decoding.
4. The present switches are hardware driven. Nowadays, "soft switches" are being developed. Prepare
a technical paper on soft switches.
Answers
1. Each T1 carrier supports 24 voice channels. To support 60 voice channels, three T1 carriers are required.
Two T1 carriers are not sufficient as they can support only 48 voice channels. However, when 3 T1 carriers
are used, 12 voice channels remain free. It is good practice to keep some spare capacity for future
expandability.
2. You can see the datasheet of the IC 8880 from MITEL. This is a DTMF chip, and you can use the reference
circuit given in the data sheet.
3. The IC 44233 generates PCM-coded speech.
4. The PSTN is based on circuit switching. The switches contain (a) the line cards that interface with the
subscriber equipment; and (b) processor-based hardware/software to switch the calls and establish a circuit
between the two subscribers. Now the trend is to use packet switching for voice calls using computer
networking protocols. Switches that handle packet-switched voice are called soft switches.
Projects
1. Simulate a DTMF generator on a PC. The telephone keypad has to be simulated through software.
When you click on a digit, the corresponding tones have to be generated (the tone frequencies are given
in Figure 11.2) and played through the sound card of your PC.
2. Develop a telephone tapping software package. You can connect a voice/data modem in parallel to the
telephone line. When someone calls your telephone, the complete conversation has to be recorded on
the PC to which the modem is connected. You can use the Telephony API (TAPI) to develop the
software.
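As a starting point for Project 1, the dual-tone samples for a key press can be computed as below. The standard DTMF frequency pairs are used; playback through the sound card is left out:

```python
import math

# Sketch of DTMF sample generation: each key press is the sum of one
# low-group and one high-group tone (standard DTMF frequency pairs).
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_samples(key: str, duration: float = 0.1, rate: int = 8000):
    """Return 'duration' seconds of dual-tone samples at sample rate 'rate'."""
    low, high = DTMF[key]
    return [0.5 * math.sin(2 * math.pi * low * n / rate)
            + 0.5 * math.sin(2 * math.pi * high * n / rate)
            for n in range(int(duration * rate))]

samples = dtmf_samples("5")
print(len(samples))  # 800 samples for 100 ms at 8 kHz
```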
Chapter 12: Terrestrial Radio Communication Systems
Terrestrial radio as a transmission medium is attractive when the terrain does not permit laying of cables. To
cover hilly areas, areas separated by lakes, rivers, etc., terrestrial radio is used for local loops and trunks. For
providing telecommunication facilities to remote/rural areas, again radio is the best option. To avoid digging up
the roads, even in urban areas, radio systems are being used. Radio has another advantage: it provides the
capability of mobility to the users. For broadcasting applications, radio is the best choice because radio waves
travel long distances. In this chapter, we will study some representative terrestrial radio systems: broadcasting
systems, wireless local loops, cordless telecommunication systems, and trunked radio systems.
12.1 ADVANTAGES OF TERRESTRIAL RADIO SYSTEMS
As compared to the guided media such as twisted pair, coaxial cable, and optical fiber, terrestrial radio has
many advantages:
Installation of radio systems is easier compared to cable because digging can be avoided. The radio
equipment has to be installed only at the two endpoints.
Maintenance of the radio systems is also much easier as compared to cable systems. If the cable becomes
faulty, it is difficult to locate the fault and more difficult to rectify the fault.
Radio provides the most attractive feature: mobility of the user. Even if the user is moving at a fast pace in
a car or even in an airplane, communication is possible.
Radio waves can propagate over large distances. The coverage area depends on the frequency band. HF
waves can travel hundreds of kilometers, VHF and UHF systems can cover up to 40 kilometers, and
microwave systems can cover a few kilometers. With the use of repeaters, the distance can be increased.
Terrestrial radio as the transmission medium has the following advantages: easy installation, easy
maintenance, and ability to cover large distances. Another main attraction of radio is that it provides mobility to
users.
However, while designing a radio system, the following need to be taken into consideration:
Radio wave propagation is affected by many factors: the natural terrain (hills, mountains, valleys, lakes,
seashores, etc.), artificial terrain (multistoried buildings), and weather conditions (rainfall, snow, fog).
Radio waves are subject to interference with other radio systems operating in the same vicinity. Radio
waves are also affected by power generation equipment, aircraft engine noise, etc.
Radio waves are attenuated as they travel in the atmosphere. This loss of signal strength is known as path
loss. To overcome the effect of path loss, the radio receiver should have high sensitivity: it should be
capable of receiving weak signals and amplifying the signal for later decoding.
In designing radio systems, the following aspects need to be taken into consideration: propagation
characteristics, which vary with frequency and terrain; interference with other radio systems; and path loss.
Note: The path loss in a radio system is the cumulative loss due to the attenuation of the signal while
traveling in free space and the attenuation in the various subsystems such as the filters, the cable
connecting the radio equipment to the antenna, etc.
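The free-space component of the path loss can be estimated with the standard formula FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44; a sketch:

```python
import math

# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44.
# Equipment losses (filters, feeder cable to the antenna) add to this figure.
def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# A 140MHz VHF link over 30 km loses about 105 dB in free space alone:
print(round(free_space_path_loss_db(30, 140), 1))  # 104.9
```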
12.2 RADIO COMMUNICATION SYSTEM
The block diagram of a radio communication system is shown in Figure 12.1. The transmission section consists
of a baseband processor, which does the necessary filtering of the input signal by limiting the input signal
bandwidth to the required value and digitizes the signal using analog-to-digital conversion. If the input signal is
voice, the filter limits the bandwidth to 4kHz. If the input signal is video, the bandwidth will be limited to 5MHz. If
the radio communication system is a digital system, necessary source coding is also done by the baseband
processor. This signal is modulated using analog or digital modulation techniques.
Figure 12.1: Simplified block diagram of a radio communication system.
Suppose the radio communication system operates in the VHF band with a carrier frequency of 140MHz. The
baseband signal is converted into the radio frequency in two stages. In the first stage, called the intermediate
frequency (IF) stage, the signal is translated to an intermediate frequency. The most widely used standard IFs
are 455kHz, 10.7MHz, and 70MHz. In the second stage, the signal is translated to the required radio frequency
using an up-converter as shown in Figure 12.1(a). The up-converted signal is given to a power amplifier that
pumps out the modulated radio waves with the desired power level through the antenna. The antenna can be
an omnidirectional antenna, a sectoral antenna, or a directional antenna. An omnidirectional antenna radiates
equally in all directions. A sectoral antenna radiates over a fixed area, such as a 60° arc, a 120° arc, and so on. A
directional antenna transmits in a specific direction. Omnidirectional and sectoral antennas are used at base
stations, and directional antennas are used at remote stations.
At the receiving end, the signal is received by the antenna, down-converted to the IF frequency, demodulated,
and filtered, and the original signal is obtained. The baseband processor in the receiving section carries out the
necessary decoding.
In a radio system, the baseband signal is first up-converted into an intermediate frequency (IF) and then to the
desired radio frequency. Sometimes the up-conversion is done in two or more stages.
Note: Some standard IF frequencies are 455kHz, 10.7MHz, and 70MHz. HF and VHF systems generally
use an IF of 455kHz.
The antennas are broadly classified as directional antennas, sectoral antennas, and omnidirectional
antennas. A directional antenna radiates in a specific direction, and an omnidirectional antenna
radiates in all directions. Sectoral antennas radiate in a sector of 60° or 120°.
12.3 BROADCASTING SYSTEMS
Radio is the most effective medium for broadcasting audio and video. Broadcasting for large areas is done
through satellites, but for local programs within a country or state, broadcasting is done through terrestrial radio.
Unfortunately, the broadcasting systems have not evolved as fast as other communication systems. Even now,
the audio and video broadcasting systems are analog systems. Though a number of digital broadcasting
standards have been developed, they are yet to take off on a large scale. Digital broadcasting systems are now
being commercialized.
Broadcasting is one of the most important applications of radio systems. However, the present audio and video
broadcasting systems are analog systems.
12.3.1 Audio Broadcasting
In audio broadcasting systems, amplitude modulation (AM) is used. The analog audio signal with 15kHz
bandwidth is modulated using AM and then up-converted to the desired frequency band. Many such audio
programs of different broadcast stations are frequency division multiplexed (FDM) and sent over the air. The
AM radio frequency band is 550–1610 kHz. The receiving stations consist of radio receivers that can be tuned
to the desired frequency band. The received signal in that frequency band is down-converted and demodulated,
and baseband audio signal is played through the speakers. Presently, frequency modulated (FM) audio
broadcasting is becoming predominant. As compared to AM, FM gives a better performance, so the quality of
the audio from FM stations is much better. FM stations operate in the frequency band 88–108 MHz.
In AM broadcasting systems, the modulating signal of bandwidth 15kHz is amplitude modulated and
up-converted to the frequency band 550–1610 kHz. A number of audio programs are multiplexed using
frequency division multiplexing.
Note: FM broadcasting is now becoming popular. In FM radio, the frequency band of operation is 88–108
MHz. FM gives better quality audio because of its noise immunity.
12.3.2 TV Broadcasting
The TV broadcasting system is similar to the audio broadcasting system. The TV signal requires a bandwidth of
5MHz. The video signal is modulated and up-converted to the desired band and transmitted over the radio. At
the receiving end, the receiver filters out the desired signal and demodulates the signal, and the video is
displayed. VHF and UHF bands are used for TV broadcasting.
To provide very high quality video broadcasting using digital communication, efforts have been going on for the
past 20 years. In digital video broadcasting, the moving picture is divided into frames, and each frame is divided
into pixels. The number of frames and the pixels in each frame decide the resolution and hence the quality of
the video. ETSI has developed two standards: digital TV and HDTV (high-definition TV).
Digital TV standard ITU-R BT.601 uses 25 frames/second, with each frame 720 pixels wide and
576 pixels high. For this format, the uncompressed data rate is 166Mbps. Using compression techniques, the
data rate can be brought down to 5–10 Mbps.
HDTV standard ITU-R BT.709 uses 25 frames/second, with each frame 1920 pixels wide and
1250 pixels high. For this format, the uncompressed data rate is 960Mbps. Using compression, the data rate
can be brought down to 20–40 Mbps.
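The quoted uncompressed rates follow from the frame dimensions, 25 frames/second, and 16 bits per pixel (the 166 Mbps figure corresponds to a 720 × 576 frame):

```python
# Uncompressed data rate = width x height x frames/s x bits per pixel.
# 16 bits per pixel reproduces the figures quoted in the text.
def uncompressed_rate_mbps(width: int, height: int, fps: int,
                           bits_per_pixel: int = 16) -> float:
    return width * height * fps * bits_per_pixel / 1e6

print(round(uncompressed_rate_mbps(720, 576, 25)))    # 166 (digital TV)
print(round(uncompressed_rate_mbps(1920, 1250, 25)))  # 960 (HDTV)
```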
Present TV broadcasting systems operating in the VHF and UHF bands are analog systems. For digital TV
transmission, two standards, Digital TV standard and High Definition TV (HDTV) standard, have been
developed.
Note: For digital TV transmission, the moving video is considered a series of frames, and each pixel in the
frame is quantized. Using compression techniques, the data rate is reduced to a few Mbps.
12.4 WIRELESS LOCAL LOOPS
For providing telecommunication facilities, the network elements required are the switch, the trunks, the local
loops, and the subscriber terminals. The local loop is the dedicated link between the subscriber terminal and
the switch. In cities and towns, the local loop uses twisted pair as the transmission medium because the
distance between the switch and the subscriber terminal generally will be less than 5 km. Because the
subscriber density is high in cities and towns, the cost of installing a switch for subscribers within a radius of 5
km is justified. In remote and rural areas, the subscriber density will be less, the number of calls made by the
subscribers will not be very high, and the areas are separated by long distances from the nearby towns. As a
result, laying a cable from one town to another is not cost effective. Installing a switch to cater to a small
number of subscribers is also prohibitively costly.
In the telephone network, the local loop is the costliest network element. To provide telephone services to
remote and rural areas, wireless local loop is the most cost-effective alternative.
Wireless local loops can be in two configurations. Figure 12.2 shows configuration 1. A radio base station will
be connected to the switch. The base station is generally located in a town at the same premises as the switch.
A number of remote stations communicate with the base station through radio. Each remote station can be
installed in an area, and it can support anywhere between 1 and 32 telephones. The distance between the base
station and each remote generally can be up to 30 km. A base station can provide telephone facilities to
subscribers in a radius of 30 km. This configuration is used extensively for providing telephone facilities in rural
and remote areas.
Figure 12.2: Wireless local loop configuration 1.
Wireless local loop can have two configurations. In one configuration, the subscriber telephone is connected to
the switch using radio as the medium. In the other configuration, wireless connectivity is provided between the
subscriber terminal and the distribution point, and the connectivity between the switch and the distribution point
is through a wired medium.
Figure 12.3 shows configuration 2 of wireless local loops. In this configuration, a number of base stations are
connected to the switch using cable. Each base station in turn communicates with a number of remote stations.
Each remote station can support a number of telephones. In this configuration, the local loop is a combination
of wired and wireless media. This configuration is used extensively in urban areas. TDMA and CDMA
technologies are used in this configuration. The number of subscribers supported by the base station/remote
station depends on the access technology. In the following sections, some representative wireless local loop
systems are described.
Figure 12.3: Wireless local loops configuration 2.
Note: Wireless local loop is also gaining popularity in urban areas because of reduced installation and
maintenance costs.
12.4.1 Shared Radio Systems
The block diagram of a shared radio system is shown in Figure 12.4. This system can cater to a number of
communities within a radius of about 30 km from the base station. The system consists of a base station and a
number of remote stations (say, 15). Each remote station will provide one telephone connection. This remote
telephone can be used as a public call office (PCO) to be shared by a number of people. The base station
consists of a base station controller and a radio. The base station controller is connected to the PSTN through
the switch located at a town. The base station radio will have transmitters/receivers for two radio channels. Both
the channels can be used for either incoming or outgoing calls. So, the system works in the FDMA mode. When
a remote user wants to make a call, one of the free channels will be assigned to him, with one uplink frequency
and one downlink frequency. Since the base station can support only two radio channels at a time, only two
remote telephones can use the system. This is not a major problem because the traffic in rural areas is not very
high. This system is called a 2/15 shared radio system, 2 indicating the number of radio channels, and 15
indicating the number of remotes supported by the system. For each channel, 25kHz of bandwidth is allocated.
These systems operate in the VHF (150MHz and 170MHz) and UHF (400MHz) bands. Shared radio systems
are analog systems.
Figure 12.4: Shared radio system as wireless local loop.
This concept can be extended to develop higher order systems such as a 4/32 shared radio system, which will
have 4 radio channels and 32 remotes, or an 8/64 shared radio system, which will have 8 radio channels and
64 remotes.
These systems also can support data services up to a data rate of 9.6kbps. Using normal line modems, the
data can be sent in the voice channels.
In a shared radio system (SRS), a number of remote stations communicate via a central station. A few radio
channels are shared by the remotes using Frequency Division Multiple Access. SRS provides low-cost analog
wireless local loops.
12.4.2 Digital Local Loops
The digital wireless local loop using TDM/TDMA is shown in Figure 12.5. Unlike the shared radio systems, this
system uses digital communication. The system consists of a base station and a number of remotes (up to 64).
The base station is connected to the switch using a T1 trunk (to support 24 voice channels). Each remote
station can be connected to a small rural switch. The rural switch can handle up to 32 subscribers. The
advantage of having a small switch is that the local calls within a community can be switched locally without the
need for a radio. All the remotes will be within a radius of 30 km from the base station. Communication from the
base to the remote will be in TDM mode, and from remote to the base will be in TDMA mode. The salient
features of the system are as follows:
Frequency of operation: The system operates in the 800MHz frequency band. The base station transmits in
the 840–867.5 MHz band (downlink), and the remote stations transmit in the 885–912.5 MHz band (uplink).
The number of channels is 12, with a channel spacing of 2.5MHz. The modulation in both
directions can be QPSK. Alternatively, the base station can use FSK for transmission, and the remote stations
can use QPSK; this scheme has the advantage that the remote station electronics will be less complex because
demodulation of FSK is much easier than demodulation of QPSK. The base station and the remote stations transmit
a power of 2 watts. The base station uses an omni-directional antenna, and the remote stations use directional
antennas.
Voice coding: To conserve the spectrum, ADPCM can be used for voice coding. In ADPCM, the voice is
coded at 32kbps.
Access scheme: The base station transmits the data in TDM mode. The data for all the remote stations will be
multiplexed, and the multiplexed data will be broadcast. All the remote stations will receive the data, and
each remote will decode the data only if the data is meant for it by matching the received address with its own
address.
Figure 12.5: Digital wireless local loop using TDM/TDMA.
The remote stations will use the TDMA for accessing the base station. Each remote will be assigned a time slot
and transmit the data in that time slot.
Assignment of TDMA slots: Since there is less traffic in rural areas, fixed assignment of time slots results in
wasted time slots because even if there is no call, the slot will be assigned to that remote. In dynamic TDMA,
each remote will get a time slot periodically that is exclusively for signaling. When a subscriber at a remote
picks up the telephone and dials a telephone number to make a call, a request for a voice slot is sent to the
base station along with the called number. The base station checks for the availability of the called party. If
available, a time slot is allocated in which the remote can send the digitized voice.
Frame and slot sizes: There is a trade-off between buffer storage required and synchronization overhead. If
the frame size is small, the synchronization overhead will be high; if the frame size is large, buffer storage
requirement will be high. As an optimal choice, 8 msec can be used as the frame size.
The TDM/TDMA frame formats are shown in Figure 12.6. From base to remote, the 8 msec frame consists of a
signaling slot of 416 bits and 27 voice slots of 288 bits each. From remote to base, the signaling slot is also
416 bits and the voice slot 288 bits. The voice slot contains 12 bits of guard time, 20 bits of preamble and sync,
and 256 bits of 32 kbps ADPCM voice data. The guard time takes care of the propagation delay, and the
preamble and sync bits are for synchronization. Since the frame has 27 time slots, 27 subscribers connected to
different remote stations can make calls simultaneously.
Figure 12.6: TDM/TDMA frame formats.
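The frame arithmetic can be verified with a short calculation using the slot sizes given above:

```python
# TDM frame arithmetic from the text: one 416-bit signaling slot plus
# 27 voice slots of 288 bits each, every 8 ms (125 frames per second).
SIGNALING_SLOT_BITS = 416
VOICE_SLOTS = 27
VOICE_SLOT_BITS = 288    # 12 guard + 20 preamble/sync + 256 voice bits
FRAMES_PER_SECOND = 125  # one frame every 8 ms

frame_bits = SIGNALING_SLOT_BITS + VOICE_SLOTS * VOICE_SLOT_BITS
print(frame_bits)                      # 8192 bits per frame
print(frame_bits * FRAMES_PER_SECOND)  # 1024000 bps gross rate
print(256 * FRAMES_PER_SECOND)         # 32000 bps ADPCM voice per slot
```

Note that the 256 voice bits per 8 ms frame work out to exactly the 32 kbps ADPCM rate mentioned in the text.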
Because the base transmits the data in broadcast mode, all the remotes can receive the data. Each remote
synchronizes itself using the synchronization pattern and processes only the signaling slot data and the voice
slots allocated to it. The remaining slots are ignored. The remote communicates with the base in the channel
allocated to it for signaling. This type of signaling is called common channel signaling because a separate slot
is used for signaling, and the voice slots contain only voice data and no signaling information.
Digital local loops using the TDM/TDMA technique provide wireless connectivity between a number of remote
stations and a central station. The central station transmits the data in TDM broadcast mode, and the remote
stations use TDMA slots to send their data.
The digital wireless local loop systems based on TDM/TDMA provide very low cost solutions for rural/suburban
areas. Hughes Network Systems (USA), SR Telecom (Canada), and Japan Radio Company (Japan) are the
major organizations that have developed such systems.
Note: In the TDM/TDMA time slots, a separate slot is dedicated to carrying signaling information. This is
known as common channel signaling.
12.5 CORDLESS TELEPHONY
At home and the office, we use cordless telephones that provide limited mobility. ETSI developed a series of
Cordless Telephony (CT) standards to provide cordless telephony services in residential and office areas. The
first generation CT systems were analog systems, and hence the performance was poor. A second-generation
Cordless Telephony (CT-2) system can be used as a two-way cordless telephone in office and home
environments. CT-2 is an FDMA-based system that operates in the 800MHz band. CT-3 is a digital version of
CT-2 and uses TDMA-TDD as the access scheme. CT-3 operates in the 800–1000 MHz band with an overall
data rate of 640kbps. The speech data rate is 32kbps. In CT-2 and CT-3, the range is 5 meters in built-up areas
and 200 meters for line of sight. To enhance the capabilities of cordless telephone systems, ETSI developed
the DECT standards.
Cordless telephone systems provide limited mobility and a short range. To provide cordless telephone services
in residential and office environments, Cordless Telephony standards have been developed that have a
maximum range of 200 meters.
12.5.1 Digital Enhanced Cordless Telecommunications (DECT)
European Telecommunications Standards Institute (ETSI) developed the DECT standard for cordless
telephony. DECT-based handsets can be used in homes, at offices, or at public places such as shopping malls
and airports.
The DECT standard developed by ETSI has three configurations: (a) for residential operation; (b) for access to
telephone networks from public places; and (c) for use in office complexes.
In the residential configuration, an incoming telephone line in the home can be shared by up to four extensions, as shown
in Figure 12.7. There will be a DECT base station connected to the incoming telephone line that can
communicate with any of the four extensions using radio. This configuration is a cordless PBX with four
extensions.
Figure 12.7: DECT at home.
In the home configuration, a DECT system operates as a cordless PBX with one base station and up to four
extensions.
Figure 12.8: Telepoint for public access.
In the telepoint configuration of DECT, a handset is connected to a telepoint base station, which in turn is
connected to the public telephone network. When a handset is within the coverage area of the base station,
outgoing calls can be made from the handset.
In places such as airports, railway stations, and large shopping complexes, a base station (called telepoint) can
be installed. Using the DECT handset, the PSTN can be accessed through the telepoint. In such a case, it is
expected that the user will not move considerably, so the handset should be within the coverage area of the
base station. This service has the following disadvantages:
Only outgoing calls can be supported. If incoming calls are to be supported, a radio pager has to be
integrated with the handset.
Because the environments in which the telepoint base stations are installed are noisy, special
enclosures (called telepoint booths) need to be provided.
Figure 12.9: DECT micro-cellular system.
In a microcellular system, the coverage area is divided into cells as in mobile networks; each cell size is
typically about 10 meters in diameter. Because these systems operate inside buildings, the propagation
conditions require such small cells. Such systems find application in office buildings and small industrial
complexes. The cordless system using the microcellular concept is also known as a cordless PBX. Microcellular
systems are also useful in temporary locations such as construction sites, big exhibitions, etc.
Unlike the telepoint system, microcellular systems do not provide public service; they serve only closed user
groups. Each cell will have a base station. All the base stations will be connected to the switch (PBX). The PBX
will have two databases: the home database (HDB) and the visitor database (VDB). These databases contain
the information about the subscribers and their present locations. The DECT handset is connected to the base
station using radio as the medium. When the handset moves from one cell to another, the radio connection
automatically is transferred to the other base station.
In a DECT microcellular system, the service area is divided into small cells of about 10 meters diameter, and
each cell has a base station. As a person moves from one cell to another cell while talking, the handset is
connected to the nearest base station.
For all three configurations, the DECT standard specifies the frequency of operation, access mechanism,
coding schemes, and so on, which are described in the following sections.
DECT Standard
The broad specifications of DECT are as follows:
Range: The range of a DECT system is about 300 meters for a maximum peak power of 250 milliwatts.
Average power is 10 milliwatts. However, in the home and telepoint configurations, the range can be about 10
to 50 meters so that the power consumption can be less. It is also possible to extend the range up to 5
kilometers by increasing the transmit power.
Frequency band: The DECT system operates in the frequency band 1880-1900 MHz. A total of 10 carriers
can be assigned to different cells.
Voice coding: Voice is coded at 32kbps data rate using adaptive differential pulse code modulation (ADPCM).
Multiple access: The multiple access used is TDMA-TDD. The frame structure is shown in Figure 12.10. The
frame duration is 10 milliseconds. The first 12 slots are for base to remote communication, and the next 12
slots are for remote to base communication. The same frequency is used for both base to remote and remote to
base communication. The transmission data rate is 1152kbps. The channel assignment is dynamic: when a
user has to make a call, the frequency and time slot will be allocated.
Figure 12.10: DECT TDMA-TDD frame structure.
Since there are 10 carriers and each carrier has 12 duplex time slots, the total number of voice channels
supported is 120. If a telepoint base station is installed in a shopping complex, 120 persons can simultaneously make calls
through the base station. In the home configuration, this is not required. However, the advantage of a DECT
system is that the same handset used as a cordless telephone at home can be carried to the shopping complex
or to the office to make calls.
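The capacity figures quoted above can be checked with a short back-of-the-envelope calculation (a sketch only; the exact split of each slot into sync, signaling, and guard bits is not given in the text):

```python
# Checking the DECT numbers: 10 carriers, 12 duplex channels per carrier
# (24 TDD slots per 10 ms frame), 1152 kbps per carrier, 32 kbps ADPCM voice.
CARRIERS = 10
DUPLEX_CHANNELS_PER_CARRIER = 12   # TDD pairs one downlink slot with one uplink slot
SLOTS_PER_FRAME = 24
FRAME_MS = 10
CARRIER_RATE_BPS = 1_152_000
VOICE_RATE_BPS = 32_000

# Total simultaneous voice calls the system can carry.
voice_channels = CARRIERS * DUPLEX_CHANNELS_PER_CARRIER

# Raw bits per carrier per frame, and per slot.
bits_per_frame = CARRIER_RATE_BPS * FRAME_MS // 1000
bits_per_slot = bits_per_frame // SLOTS_PER_FRAME

# A 32 kbps voice stream needs 320 bits every 10 ms; the remaining bits of
# each slot are available for sync, signaling, and guard time.
voice_bits_per_frame = VOICE_RATE_BPS * FRAME_MS // 1000

print(voice_channels)        # 120
print(bits_per_slot)         # 480
print(voice_bits_per_frame)  # 320
```

The 480 bits per slot against 320 voice bits shows why the 1152kbps carrier rate is comfortably sufficient for 12 duplex 32kbps channels.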
A DECT system can be used as a wireless local loop using the telepoint configuration. In India, a number of
villages are provided with telephone facilities using the DECT system developed at IIT, Madras. The only
limitation of DECT is that mobility should not be more than 20 km/hour.
DECT operates in the 1880-1900 MHz band in TDMA/TDD mode. For voice communication, 32kbps ADPCM is
used.
Note: DECT can be used to provide wireless local loops for rural areas by increasing the radio transmit
power. The handsets can be mobile, but with the restriction that the speed should not exceed 20
km/hour.
12.6 TRUNKED RADIO SYSTEMS
Radio systems used in cities and towns to provide mobile communication for closed user groups such as taxi
operators, ambulance service operators, police, etc. are called trunked radio systems. Trunked radio systems
can also be installed by organizations that need communication among their employees in their area of
operation such as construction sites. The trunked radio system consists of a base station at a central location in
a city/town. The user terminals are mobile devices that communicate with the operator at the base station or
with another mobile device using FDMA/TDMA. The operator of the trunked radio system will be assigned a set
of frequencies that are shared by all the mobile devices. As compared to cellular mobile communication
systems, trunked radio systems are low-cost systems because the entire service area is just a single cell.
Trunked radio systems are used to provide low-cost mobile communication services to closed user groups such
as taxi operators, ambulance service operators, and police.
Note: In trunked radio systems, a city is generally covered by a single base station. The base station and
the mobile terminals communicate using FDMA/TDMA.
12.6.1 TETRA
ETSI developed a standard for trunked radio system called TETRA (Terrestrial Trunked Radio). The
configuration of TETRA is shown in Figure 12.11.
Figure 12.11: TETRA.
A TETRA system consists of TETRA nodes, which are radio base stations, to provide mobile communication
facilities for users in a service area. A trunked radio operator can interconnect a number of TETRA nodes using
cable or microwave radio. Each node can cater to a portion of a large city.
The TETRA standard has been formulated by ETSI. TETRA uses a TDMA/FDD mechanism with a 380-390 MHz
uplink and a 390-400 MHz downlink. Both circuit switching and packet switching are
supported.
The salient features of TETRA are as follows:
Frequency of operation: The uplink frequency is 380 to 390 MHz, and the downlink frequency is 390 to 400
MHz. Each channel has a bandwidth of 25kHz.
Multiple access: The system operates in TDMA/FDD. The TDMA frame is divided into four time slots that are
dynamically assigned.
Types of services: Two types of services are supported. Voice and data services use circuit switched
operation. This service uses the TDMA scheme with four slots per carrier. Packet data optimized (PDO)
services use packet switching for data communication. Data rates up to 36kbps can be achieved.
TETRA-based trunked systems are used extensively in Europe.
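The channel arithmetic implied by these figures can be sketched as follows (an illustration only: it assumes the full 10 MHz band is usable and ignores guard bands and the split of the band among operators):

```python
# Illustrative TETRA capacity arithmetic.
BAND_HZ = 10_000_000       # 380-390 MHz uplink (paired with 390-400 MHz downlink)
CHANNEL_HZ = 25_000        # 25 kHz carrier spacing
SLOTS_PER_CARRIER = 4      # TDMA slots per carrier

carriers = BAND_HZ // CHANNEL_HZ
logical_channels = carriers * SLOTS_PER_CARRIER

print(carriers)            # 400 carriers in the band
print(logical_channels)    # 1600 logical channels
```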
Summary
In this chapter, we discussed the details of representative terrestrial radio communication systems. Radio
systems provide many advantages: easy installation and maintenance, and support for mobility. Radio as
the transmission medium is used in audio and video broadcasting, in wireless local loops, in cordless telephony
applications, and for trunked radio systems. The radio broadcasting systems are mostly analog, though in the
next few years, digital broadcasting systems are likely to increase. Wireless local loops are now being deployed
extensively in both urban and rural areas. Analog wireless local loops are shared radio systems in which a few
radio channels (2 or 4 or 8) are shared by a number of remote stations (16 or 32 or 64). Digital wireless local
loops use TDMA and CDMA technologies. Low-cost local loops can be provided using these technologies.
The digital enhanced cordless telecommunications (DECT) standard developed by ETSI can be used for
cordless telephony applications at home or office or at public places such as shopping complexes. DECT
operates in the 1880-1900 MHz band and uses the TDMA-TDD scheme to support up to 120 voice channels
using the ADPCM coding scheme for voice.
Trunked radio systems provide mobile communication facilities for closed user groups such as police, taxi
operators, and ambulances. The Terrestrial Trunked Radio (TETRA) standard developed by ETSI supports
both voice and data services. TETRA offers a low-cost solution for mobile communication as compared to
cellular mobile communication systems.
References
G. Karlsson, "Asynchronous Transfer of Video," IEEE Communications Magazine, Vol. 34, No. 8, August 1996.
Proceedings of the IEE Conference on Rural Telecommunications, May 1988, London.
http://www.etsi.org The Web site of European Telecommunications Standards Institute. You can obtain the
ETSI standards referred to in this chapter from this site.
http://www.motorola.com Motorola is a leading manufacturer of trunked radio systems. You can obtain the
details from this site.
Questions
1. Draw the block diagram of a radio communication system and explain the various blocks.
2. What is a wireless local loop? Explain the architecture of analog wireless local loop systems.
3. Explain the architecture of digital wireless local loop systems based on TDMA.
4. Explain the various configurations of a DECT system. List the salient features of the DECT standard.
5. Describe the operation of a trunked radio system. What are the salient features of the TETRA standard?
Exercises
1. Prepare a technical report on digital broadcasting using HDTV.
2. Prepare a technical report on a wireless local loop using CDMA technology.
3. Prepare a technical report on the various commercial products available for trunked radio.
4. Study the details of path loss calculations for terrestrial radio systems.
5. Instead of air, water is used as the transmission medium for underwater communication. What are
the frequency bands used for this type of communication? As compared to terrestrial radio
systems, what are the major differences in underwater communication systems?
Answers
1. Digital TV transmission has many advantages: improved signal quality, bandwidth efficiency through use of
compression techniques, and effective control and management. Two standards, HDTV (High Definition TV)
and SDTV (Standard Definition TV), have been developed for digital video broadcasting. In HDTV, the
aspect ratio is 16:9. Twenty-four or 30 frames are sent per second. Each frame is divided into 1080 × 1920
pixels or 720 × 1280 pixels. In SDTV, the aspect ratio is 4:3, with 24 or 30 frames per second, and each
frame with 480 × 640 pixels.
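These frame sizes make clear why compression is essential. Assuming 24 bits per pixel (a color depth not stated in the text), the raw bit rate of an HDTV stream would be:

```python
# Raw (uncompressed) bit rate of a 1080 x 1920, 30 frames/s HDTV stream.
width, height, fps = 1920, 1080, 30
bits_per_pixel = 24          # assumed color depth (8 bits each for R, G, B)

raw_bps = width * height * fps * bits_per_pixel
print(raw_bps / 1e9)         # about 1.49 Gbps before compression
```

A rate of roughly 1.5 Gbps is far beyond any broadcast channel, which is why MPEG-style compression is used in practice.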
2. To provide wireless local loops, small base stations are installed at various locations. The subscriber
terminal (telephone instrument) communicates with the base station using CDMA access mechanism. You
can get the details of wireless local loop using CDMA from the Web site http://www.qualcomm.com.
3. Though most of the trunked radio systems are analog, the latest standard developed by the European
Telecommunications Standards Institute (ETSI) is based on digital technology; it is called TETRA
(Terrestrial Trunked Radio). You can get the details from http://www.etsi.org. Motorola is a leading supplier
of trunked radio systems. You can get product details from http://www.motorola.com.
4. Path loss calculation involves finding out the loss/gain of different network elements. The network elements
are filters, amplifiers, antennas, and the cable connecting the RF equipment with the antenna. A major
contributor to path loss is the propagation loss in the medium.
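The propagation-loss component can be sketched with the standard free-space (Friis) formula; the distance and frequency in the example are illustrative, and real terrestrial links add terrain and clutter losses on top of this:

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB, for distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Example: a 10 km link at 900 MHz.
loss = free_space_path_loss_db(10, 900)
print(round(loss, 1))   # about 111.5 dB
```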
5. Underwater communication systems are used in sports, search and rescue operations, and military
applications. As the attenuation of the electrical signal is very high in water, high-power transmitters are
required. VLF band is used in underwater communication. For instance, the phones used by divers operate
in the frequency band 30 to 35 kHz. If the transmit power is 1/2 watt, the range is about 400 yards, and if
the transmit power is 30 watts, the range is about 5 miles. You can get the details of some commercial
underwater communication systems from the site http://www.oceantechnologysystems.com.
Projects
1. Study the various radio wave propagation models (such as the Okumura-Hata model). Develop software
to calculate the path loss in a radio communication system and to carry out link analysis.
2. You are asked to design a cordless PBX for your office. Work out the details of the location of base
stations, if a DECT-based system has to be installed.
Chapter 13: Satellite Communication Systems
Ever since the first communication satellite was launched in 1962 by the United States, satellites have been
used extensively for communications. In this chapter, we will study the various applications of satellites,
frequency bands in which the satellite communication systems operate, and the multiple access techniques
used in satellite communication systems. We also will study the architecture of a representative communication
system that uses satellite as the transmission medium.
13.1 APPLICATIONS OF SATELLITES
Satellites are used for a variety of applications such as these:
Astronomy
Atmospheric studies
Communication
Navigation
Remote sensing
Search and rescue operations
Space exploration
Surveillance
Weather monitoring
In communications, satellites are used for broadcasting, providing trunks between switches of telephone
networks, providing telephone facilities for remote and rural areas, land mobile communications, marine
communication, and many other uses. Many corporate networks use satellite communication for their interoffice
communication.
Satellites also are used to send location information to people on the move (on the earth, in aircraft, or
underwater). Global Positioning System (GPS) uses 24 satellites that continuously broadcast their positional
parameters. The users have GPS receivers. The GPS receiver calculates its own positional parameters
(longitude, latitude, and altitude) based on the data received from the satellites. We will discuss the details of
GPS in Chapter 32, "Global Positioning System".
Surveillance satellites are fitted with video cameras. These satellites continuously monitor the enemy's territory
and send video data to a ground station. Surveillance satellites are used by many countries to keep track of the
activities of other countries.
Satellites are used for a variety of applications such as communication, broadcasting, surveillance, navigation,
weather monitoring, atmospheric studies, remote sensing, and space exploration.
13.2 ARCHITECTURE OF A SATELLITE COMMUNICATION SYSTEM
As discussed in Chapter 3, "Transmission Media", satellite communication systems operate in two
configurations: (a) mesh; and (b) star. In mesh configuration, two satellite terminals communicate directly with
each other. In star configuration, there will be a central station (called a hub), and remote stations communicate
via the hub. The star configuration is the most widely used configuration because of its cost-effectiveness, and
we will study the details of satellite communication systems based on star configuration in this chapter.
Communication satellites operate in two configurations: (a) mesh; and (b) star. In mesh configuration, a remote
station can communicate directly with another remote station. In star configuration, two remote stations
communicate via a central station or hub.
The architecture of a satellite communication system is shown in Figure 13.1. The system consists of two
segments:
Space segment
Ground segment
Figure 13.1: Architecture of a satellite communication system.
The space segment consists of the satellite, which has three systems: fuel system, telemetry control system,
and transponders.
Space segment: The space segment consists of the satellite, which has three main systems: (a) fuel system;
(b) satellite and telemetry control system; and (c) transponders. The fuel system is responsible for making the
satellite run for years. It has solar panels, which generate the necessary energy for the operation of the
satellite. The satellite and telemetry control system is used for sending commands to the satellite as well as for
sending the status of onboard systems to the ground stations. The transponder is the communication system,
which acts as a relay in the sky. The transponder receives the signals from the ground stations, amplifies them,
and then sends them back to the ground stations. The reception and transmission are done at two different
frequencies. The transponder needs to do the necessary frequency translation.
Ground segment: The ground segment consists of a number of Earth stations. In a star configuration network,
there will be a central station called the hub and a number of remote stations. Each remote station will have a
very small aperture terminal (VSAT), an antenna of about 0.5 meter to 1.5 meters. Along with the antenna there
will be an outdoor unit (ODU), which contains the radio hardware to receive the signal and amplify it. The radio
signal is sent to an indoor unit (IDU), which demodulates the signal and carries out the necessary baseband
processing. The IDU is connected to an end system, such as a PC, LAN, or PBX.
The central station consists of a large antenna (4.5 meters to 11 meters) along with all associated electronics to
handle a large number of VSATs. The central station also will have a Network Control Center (NCC) that does
all the management functions, such as configuring the remote stations, keeping a database of the remote
stations, monitoring the health of the remotes, traffic analysis, etc. The NCC's main responsibility is to assign
the necessary channels to various remotes based on the requirement.
Note: The central station or the hub consists of a large antenna and associated electronics to handle a large
number of VSATs. The network control center (NCC) at the hub is responsible for all management
functions to control the satellite network.
The communication path from a ground station to the satellite is called the uplink. The communication link from
the satellite to the ground station is called the downlink. Separate frequencies are used for uplink and downlink.
When a remote transmits data using an uplink frequency, the satellite transponder receives the signal, amplifies
it, converts the signal to the downlink frequency, and retransmits it. Because the signal has to travel nearly
36,000 km in each direction, the signal received by the satellite as well as the remote is very weak. As soon as
the signal is received, it has to be amplified before further processing.
Communication satellites are stationed at 36,000 km above the surface of the earth, in geostationary orbit. Two
separate frequencies are used for uplink and downlink.
Note: Due to the large distance traversed by the signals, the attenuation is very high in satellite
systems. Hence, the radio receivers at the Earth stations must be highly sensitive.
13.2.1 Frequencies of Operation
The three widely used frequency bands in satellite communication systems are C band, Ku band, and Ka band.
The higher the frequency, the smaller will be the antenna size. However, the effect of rain is greater at higher
frequencies.
The various bands of operation are:
C band: Uplink frequency band: 6GHz (5.925 to 6.425 GHz)
Downlink frequency band: 4GHz (3.7 to 4.2 GHz)
Ku band: Uplink frequency band: 14GHz (13.95 to 14.5 GHz)
Downlink frequency band: 11/12GHz (10.7-11.25 GHz, 12.2-12.75 GHz)
Ka band, with uplink frequency band of 30GHz and downlink frequency band of 20GHz, is used for
broadcasting applications. Direct broadcast satellites, which broadcast video programs directly to homes
(without the need for distribution through cable TV networks) operate in the frequency band 17/12GHz, with
uplink frequency band being 17.3 to 18.1 GHz and downlink frequency band being 11.7 to 12.2 GHz.
Because the frequency of operation is higher in the Ku band, the antenna size will be much smaller as
compared to C band antennas. However, the effect of rain is greater in Ku band than in C band. For many
years, only C band was used for satellite communication. With advances in radio components such as
amplifiers, filters, modems, and so on, the effect of rain on Ku band can be nullified by necessary amplification.
Presently, Ku band is used extensively for communication.
Communication satellites operate in different frequency bands: C band (6/4GHz), Ku band (14/12GHz), and Ka
band (30/20GHz). The higher the frequency, the smaller the antenna.
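The antenna-size argument follows from the wavelength: for a dish of a given gain, the required diameter scales with the wavelength, lambda = c / f. A quick illustration (the fixed-gain assumption is a simplification):

```python
# Why higher bands allow smaller antennas: wavelength shrinks with frequency.
C = 3e8  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    return C / freq_hz

lam_c = wavelength_m(4e9)     # C-band downlink, 4 GHz
lam_ku = wavelength_m(12e9)   # Ku-band downlink, 12 GHz

print(lam_c)            # 0.075 m
print(lam_ku)           # 0.025 m
# For the same gain, a Ku-band dish can be about 3x smaller in diameter.
```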
13.3 PROBLEMS IN SATELLITE COMMUNICATION
The main attraction of satellite communication is that it provides communication facilities to any part of the
earth: satellites are insensitive to distance. However, the problems associated with satellites are:
Propagation delay: In a star network, the total delay from one VSAT to another VSAT is nearly 0.5 seconds if
the VSAT has to communicate via the hub. This type of delay is not acceptable particularly for voice
communication, because it results in echo and talker overlap. Propagation delay also causes problems for
many data communication protocols such as TCP/IP. Special protocols need to be designed for data
communication networks that use satellites.
If the VSAT communicates directly with another VSAT, the propagation delay is nearly 0.25 seconds. We will
discuss multiple access techniques that facilitate direct communication from one VSAT to another VSAT.
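The delay figures quoted above follow directly from the geostationary altitude (treating the slant range as equal to the 36,000 km altitude, a simplification):

```python
# One-hop and double-hop delay for a geostationary satellite.
C_KM_S = 300_000          # speed of light, km/s
ALTITUDE_KM = 36_000      # approximate geostationary altitude

one_hop = 2 * ALTITUDE_KM / C_KM_S    # VSAT -> satellite -> VSAT
via_hub = 2 * one_hop                 # VSAT -> hub -> VSAT (two hops)

print(one_hop)   # 0.24 s, roughly the "0.25 seconds" quoted for direct VSAT-to-VSAT
print(via_hub)   # 0.48 s, roughly the "0.5 seconds" quoted for communication via the hub
```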
Low bandwidth: As compared to the terrestrial media, particularly the optical fiber, the bandwidth supported by
satellites is much less. Though present satellites provide much more bandwidth than the satellites of the 1970s
and 1980s, the bandwidth is nowhere comparable to the optical fiber bandwidth.
Noise: Satellite channels are affected by rain, atmospheric disturbances, etc. As a result, the performance of
satellite links is generally poor as compared to terrestrial links. If data is received with errors, the data has to be
retransmitted by the sender. To reduce retransmissions, forward error correcting (FEC) codes are implemented.
The problems associated with satellite communication are: high propagation delay, low bandwidths as
compared to terrestrial media, and noise due to the effect of rain and atmospheric disturbances.
Note: The large propagation delay in satellite networks poses problems for voice communication. High delay
causes echo and talker overlap. Echo cancellers need to be used to overcome this problem.
The TCP/IP protocol stack used in computer communication will not perform well on satellite
networks. The stack is suitably modified to overcome the problems due to propagation delay.
13.4 MULTIPLE ACCESS TECHNIQUES
A number of multiple access techniques are used in satellite communication. Based on the type of application
and the cost of the equipment, a particular multiple access technique has to be chosen. A VSAT network
operating in the star configuration can use the TDM/TDMA access scheme. The central station will multiplex the
data of all the remotes and broadcast it. All the remotes will receive the data, and the remote for which the data
is meant will decode the data; the rest of the remotes will ignore the data. Each remote will transmit in TDMA
mode in the time slot allocated to it. A signaling slot is available to each of the remotes for making a request for
a TDMA traffic slot. This mechanism is useful if the network has a large number of remotes and traffic flow is
mostly from the central station to the remotes. If direct communication from one remote to another remote is
required, the multiple access techniques discussed in the following sections are used.
VSAT networks operating in star architecture use the TDM/TDMA access mechanism. The hub multiplexes the
data of all remotes and broadcasts it. The remote stations use the TDMA slots to send their data.
13.4.1 DAMA-SCPC
In demand assigned multiple access, single channel per carrier (DAMA-SCPC), a channel is assigned to a
remote only when the remote has data to transmit. The channel assignment is done by one station that acts as
the network control center (NCC). Once the channel is assigned, the remote can directly transmit data to
another remote (as in a mesh configuration). Because the channel is assigned based on demand, the access
mechanism is DAMA. Because the data corresponding to one channel (say, voice) can be transmitted on one
carrier assigned to the remote, the transmission mechanism is SCPC. SCPC got its name due to the earlier
analog systems that used one channel per carrier. Now, it is possible to multiplex different channels and send
the data on one carrier, which is known as multi-channel per carrier (MCPC).
The configuration of the DAMA-SCPC network is shown in Figure 13.2. At both the NCC and the remote, there
will be a control channel modem and a number of modems (modem bank) to modulate and demodulate many
carriers. When a remote has to communicate with another remote, it will send a request in a TDMA request
channel. The NCC will send the control information to both the remotes, indicating the carrier assigned to each
remote using TDM broadcast mode. The control channel modem is used exclusively for requests and
commands. Once a carrier is assigned to the remote, the modem corresponding to that carrier is used to
transmit the data.
Figure 13.2: DAMA-SCPC mesh architecture.
Note: In DAMA-SCPC-based networks, each remote should have a control channel modem and a modem
bank for different carrier frequencies.
The TDM control frame and TDMA request frame formats are shown in Figure 13.3. The TDM control frame
consists of unique word (UW) and control fields that are used for framing, synchronization, and timing. These
fields are followed by a number of data slots, each slot for one remote. The data slot consists of:
Preamble to indicate the beginning of the slot
Header that contains the remote address and configuration information.
Data indicating the information for call setup and disconnection (carrier assigned).
Frame check sequence (FCS) containing the checksum.
Postamble to indicate the end of the slot.
Figure 13.3: DAMA-SCPC control and request channels.
In a satellite network using DAMA-SCPC, a remote sends a request to the network control center (NCC), and
the NCC will assign different carriers to the two remotes that need to communicate with each other.
The TDMA request channel contains a series of slots for each remote. A remote has to send its requests for
call setup using its slot. The slot structure is the same as the structure of the data slot in the TDM frame.
The TDM control frame consists of a unique word, control word, and a number of time slots. Each slot consists
of the following fields: preamble, header, data, frame check sequence, and postamble. The TDMA request
channel has the same format as the slot of the TDM control frame.
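The slot layout described above (preamble, header, data, FCS, postamble) can be sketched as a byte-level encoder. The field sizes and the choice of CRC-32 as the checksum are assumptions for illustration; the text fixes only the field order:

```python
import struct
import zlib

# Hypothetical byte layout for the slot: preamble | header | data | FCS | postamble.
PREAMBLE = b"\x7e\x7e"
POSTAMBLE = b"\x7e"

def build_slot(remote_addr: int, data: bytes) -> bytes:
    header = struct.pack(">H", remote_addr)             # 2-byte remote address
    fcs = struct.pack(">I", zlib.crc32(header + data))  # checksum over header + data
    return PREAMBLE + header + data + fcs + POSTAMBLE

def verify_slot(slot: bytes) -> bool:
    body = slot[len(PREAMBLE):-len(POSTAMBLE)]
    header_and_data, fcs = body[:-4], body[-4:]
    return struct.unpack(">I", fcs)[0] == zlib.crc32(header_and_data)

slot = build_slot(0x0042, b"CALL SETUP")
print(verify_slot(slot))   # True
```

A receiver recomputes the checksum over the header and data and compares it with the FCS field; any corruption in transit makes the comparison fail.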
The procedure for call setup is as follows:
The remote sends a request in its slot of the TDMA request channel indicating the address of the called
remote. The control channel modem is used for sending the request.
The network control center sends the control information in the TDM slot assigned to the remote indicating
the carrier assigned. The control channel modem is used for sending the command.
Using the modem for the assigned carrier, the remote sends its data to the other remote. It needs to be
noted that the data sent by a remote goes to the satellite, and the satellite broadcasts the data so that the
other remotes can receive the data.
Once the data transfer is complete, the remote sends the request for disconnection in the TDMA request
channel.
The network control center sends the command to the remote to free the modem corresponding to the
carrier assigned earlier.
The carrier assigned to the remote is now available in the pool of carriers that can be assigned to the other
remotes based on demand.
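The call setup and disconnection steps above amount to the NCC managing a pool of carriers. A minimal sketch (class and carrier names are illustrative, not from any standard):

```python
# Sketch of the NCC's carrier pool in DAMA-SCPC: carriers are assigned on
# request and returned to the pool on disconnection.
class NCC:
    def __init__(self, carriers):
        self.pool = list(carriers)   # free carriers
        self.assigned = {}           # remote -> carrier currently in use

    def request_call(self, caller, callee):
        """Assign one carrier to each remote, as in the call-setup steps."""
        if len(self.pool) < 2:
            return None              # call blocked: no free carriers
        pair = (self.pool.pop(), self.pool.pop())
        self.assigned[caller], self.assigned[callee] = pair
        return pair

    def disconnect(self, caller, callee):
        """Free both carriers so they can be assigned to other remotes."""
        for remote in (caller, callee):
            self.pool.append(self.assigned.pop(remote))

ncc = NCC(["f1", "f2", "f3", "f4"])
pair = ncc.request_call("remote-A", "remote-B")
print(pair is not None)    # True: both remotes were assigned a carrier
ncc.disconnect("remote-A", "remote-B")
print(len(ncc.pool))       # 4: carriers are back in the pool
```

Because carriers are held only for the duration of a call, a small pool can serve a much larger population of remotes, at the cost of occasional blocking when the pool is exhausted.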
This configuration is very useful if remotes need to communicate with each other directly. However, one of the
remotes has to act as a network control center.
13.4.2 TDM-SCPC
In time division multiplex, single channel per carrier (TDM-SCPC), every remote broadcasts its data in TDM
mode. Each remote is assigned a carrier frequency permanently, and so each remote will have one modulator.
However, each remote will have a bank of demodulators to demodulate the data received from other remotes.
Every remote will listen to transmissions from other remotes and decode the data meant for it based on the
address.
The attractive feature of this configuration is that there is no need for a network control center. Also, there is no
need for call setup.
Figure 13.4: TDM-SCPC mesh architecture.
The TDM frame format is shown in Figure 13.5. The format of each slot is the same as that discussed in the
previous section.
Figure 13.5: TDM-SCPC frame.
In TDM-SCPC, each remote is assigned a carrier frequency permanently, and the remote sends its data in TDM
mode using broadcasting. Every remote listens to the broadcast data and decodes the data meant for it.
Note: In TDM-SCPC-based networks, each remote will have one modulator to transmit using the carrier
frequency assigned to it and also a bank of demodulators to demodulate the signals received from
different remote stations.
13.4.3 TDMA
In TDMA, all the remotes use the same frequency for transmission. As shown in Figure 13.6, at each remote
there will be a burst modem. Each remote will transmit its data as a burst in the TDMA time slot assigned to it.
The time slot allocation is done by the network control center.
Figure 13.6: TDMA mesh architecture.
The TDMA frame format is shown in Figure 13.7. Each time slot will have the same format as discussed earlier.
Signaling from the NCC is sent in the control word. The remote obtains information about the slot allocation by
analyzing the control field. Signaling from the remotes is sent in the header field. The NCC may also allocate a
time slot to a free remote so that the remote can send its signaling information in the data portion of its slot.
Figure 13.7: TDMA frame format.
In TDMA-based networks, all remotes use the same frequency for transmission. Each remote will be assigned a
time slot by the network control center. Using a burst modem, the remote sends its data in the time slot
assigned to it.
All the access techniques we have mentioned are used in commercial systems. Because the hardware
requirements vary for each type of access technique, the total system cost also varies. Based on cost
considerations, a particular type of access mechanism is chosen.
13.5 A REPRESENTATIVE NETWORK
Using any of the multiple access techniques discussed, a variety of applications can be supported on the
satellite network. Some typical applications are:
Interconnecting the local area networks of different branch offices of an organization spread over large
distances.
Interconnecting the PBXs of different branch offices of an organization.
Providing audio/video broadcasting from a central station with provision for audio/text interaction with users
at the remote site.
A satellite-based network is the best choice to broadcast multimedia content to a large number of remote
stations which are distributed geographically. From the hub, the information can be broadcast in TDM mode,
and all the remotes will be in receive-mode. Low-speed channels are used from the remotes to the hub for
interaction with the hub location.
Figure 13.8 shows the equipment configuration for providing distance education through satellite from a central
location to a number of remotes located at different colleges/universities in a country. At the hub, there will be
video transmission equipment. Additional infrastructure such as a local area network, PBX, etc. can also be
connected to the baseband interface equipment of the satellite hub. The baseband equipment provides the
necessary interfaces to the video equipment, router connected to the LAN, PBX, etc. At each remote, there will
be video reception equipment and also PBX and LAN. A lecture can be broadcast from the hub in the TDM
mode. All the remotes can receive the lecture containing video, audio, and text. Whenever a user at a remote
needs to interact with the professor at the hub, he can make a request, and a carrier is assigned to the remote
on which he can send either text or voice.
Figure 13.8: (a) Hub configuration for video conferencing and (b) remote configuration.
Systems with similar architecture are also used to provide telemedicine services to rural areas. The central
location is connected to a hospital, and the remote terminals are installed at distant locations. Patients can
consult the doctor at the central location using video conferencing.
Note: To conserve radio spectrum, multimedia communication over satellite networks uses Voice/Video over
IP protocols, in which voice and video are encoded using low bit rate coding techniques.
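The spectrum saving from low bit rate coding can be illustrated with a rough calculation; the carrier capacity and codec rates below are assumed example values:

```python
# Rough illustration of why low-bit-rate coding matters on satellite
# links. Carrier capacity and codec rates are assumed example values.

CARRIER_BPS = 512_000   # assumed carrier capacity
PCM_BPS = 64_000        # standard 64 kbps PCM voice
CELP_BPS = 8_000        # typical low-bit-rate CELP voice

pcm_channels = CARRIER_BPS // PCM_BPS
celp_channels = CARRIER_BPS // CELP_BPS
print(f"PCM voice channels:  {pcm_channels}")   # 8
print(f"CELP voice channels: {celp_channels}")  # 64
```

The same carrier carries eight times as many voice channels with the low bit rate codec, at the cost of some voice quality and coding delay.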
Summary
Ever since the first communication satellite was launched in 1962, satellite communication systems have been
used for broadcasting and telecommunications as well as for providing location information. The attractive
feature of satellite communication is its insensitivity to distance. Hence, satellite communication is a very
cost-effective way of providing telecommunication facilities to rural and remote areas. Satellite communication
systems operate in C, Ku, and Ka bands.
Satellite communication systems operate in star and mesh configurations. In star configuration, there will be a
central station (hub) and a number of remotes. All the remotes communicate via the central station. In mesh
configuration, remotes can talk to each other directly. Star configuration is more attractive than mesh
configuration because in star configuration, small antennas called very small aperture terminals (VSATs) can
be used to reduce the cost of the remote. In star configuration, the central station broadcasts in time division
multiplex (TDM) mode to all the remotes, and remotes can transmit in time division multiple access (TDMA)
mode. The three multiple access techniques with which mesh configuration can be obtained are: DAMA-SCPC
(demand assigned multiple access-single channel per carrier), TDM-SCPC, and TDMA. With the availability of
onboard processing on satellites, satellite communication systems are now being used to provide mobile
communications as well. Satellite communications can be effectively used for applications such as distance
education and telemedicine.
References
R. Horak. Communications Systems and Networks. Wiley-Dreamtech India Pvt. Ltd., 2002.
http://www.intelsat.int Web site of Intelsat, which provides satellite services.
http://www.isro.org Web site of Indian Space Research Organization.
Questions
1. List the various applications of satellites.
2. What are the problems associated with satellite communication?
3. What is the impact of high propagation delay on voice communication?
4. What is the impact of high propagation delay on data communication?
5. What are the frequency bands of operation for communication satellites?
6. Explain the various multiple access techniques used in satellite communication.
Exercises
1. Prepare a technical report on direct broadcast satellites. Direct broadcast satellites are used for
transmitting TV programs directly to homes. These satellites operate in the 17/12GHz band.
2. Prepare a technical report on remote sensing satellites.
3. For supporting voice services on satellite-based networks, study the various voice encoding
techniques used.
4. Carry out a paper design to develop a satellite-based video surveillance system. The system has to
capture video data of a specific location and send it to an Earth station.
5. In a satellite network, the roundtrip delay is about 0.5 second. If the stop-and-wait protocol is used
on such a network for data communication, study how effective the satellite channel utilization is.
Answers
1. Direct broadcast satellites transmit TV programs directly to homes. These systems operate in the frequency
band 17/12GHz. However, due to the wide penetration of cable TV, direct broadcast satellite technology has
not taken off well in India, though DBS is extensively used in North America and Europe.
2. Remote sensing satellites are used for a variety of applications: to find out the areas in which natural
resources are available, to determine the water table under the earth's surface, to analyze the fertility of
lands, and so on. Remote sensing satellites have sensors that operate in the infrared and near-infrared
bands. The satellite imagery is sent to the ground stations. The imagery is analyzed based on the
application. Indian Space Research Organization (ISRO) launched the IRS (Indian Remote Sensing
Satellite) series satellites exclusively for remote sensing.
3. In satellite communication, efficient use of the bandwidth is very important, as satellite bandwidth is much
costlier than terrestrial bandwidth. Low bit rate coding of voice is done using ADPCM, LPC, and CELP
techniques.
4. In satellite-based video surveillance, there should be a system that captures the video and transmits it to
the ground continuously. The video camera captures the scene and digitizes it using a standard coding
technique such as MPEG-2 or MPEG-4 and transmits it to the Earth station. The minimum data rate to be
supported by the link is 1Mbps to obtain reasonably good quality video.
5. In a satellite network, the round-trip delay is about 0.5 second. If the stop-and-wait protocol is used on such a
network for data communication, the satellite channel is not used effectively. After a packet is transmitted
from one side, it takes 0.5 seconds to reach the destination (assuming that there is no other delay). Then
the acknowledgement will be sent by the other station, and it will be received by the station after another 0.5
seconds. So, effectively one packet is transmitted every one second!
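The utilization figure worked out above can be generalized: utilization is the transmission time divided by the transmission time plus the round-trip delay. The packet size and link rate below are assumed example values; the 0.5 s round trip is from the exercise:

```python
# Stop-and-wait utilization on a satellite link. Packet size and link
# rate are assumed example values; the 0.5 s round trip is from the
# exercise.

def stop_and_wait_utilization(packet_bits, link_bps, rtt_seconds):
    """Fraction of time the channel carries data when the sender must
    wait a full round trip for each acknowledgement."""
    tx_time = packet_bits / link_bps
    return tx_time / (tx_time + rtt_seconds)

# A 1,024-byte packet on a 64 kbps channel with a 0.5 s round trip.
u = stop_and_wait_utilization(1024 * 8, 64_000, 0.5)
print(f"Utilization: {u:.1%}")
```

With these values the channel is busy only about a fifth of the time, which is why sliding-window protocols are preferred on long-delay satellite links.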
Projects
1. Work out the details of the commercial equipment required to implement a satellite-based distance
education system for which the architecture is explained in this chapter. You can obtain the details of the
commercial products from the Web sites of equipment vendors such as Hughes Network Systems,
Paragea, Scientific Atlanta, etc.
2. Design and develop a communication system used in surveillance satellites. The video signal has to be
encoded at 384kbps data rate using a commercially available video codec and transmitted to the Earth
station.
Chapter 14: Optical Fiber Communication Systems
During the last two decades, there has been an exponential growth in optical fiber communication systems.
Optical fiber has many attractive features: support for very high data rates, low transmission loss, and immunity
to interference. Presently, optical fiber is used in the backbone network, but in the future, it can be extended to
subscriber premises (offices and homes). This chapter gives an overview of optical fiber communication
systems, with an emphasis on various network elements that make up the communication system.
14.1 EVOLUTION OF OPTICAL FIBER COMMUNICATION
In the 1960s, K.C. Kao and G.A. Hockham demonstrated the feasibility of transmitting information coded into
light signals through a glass fiber. However, fabrication of pure glass that can carry the light signals without loss
was achieved only in the 1970s, at Bell Laboratories. Subsequently, single-mode fiber was developed, which
has lower loss and supports higher data rates. The next milestone was the development of wave division
multiplexing (WDM), which increased the capacity of a fiber significantly. Dense wave division multiplexing
(DWDM) is the next evolutionary step which increased the capacity of fiber to terabits.
The feasibility of transmitting information coded into light signals was demonstrated in the 1960s. However,
development of pure glass to carry light signals was successfully achieved only in the 1970s.
14.1.1 Multimode Optical Fiber Communication
A communication system using the multimode optical fiber is shown in Figure 14.1. A light emitting diode (LED)
or a semiconductor laser is the light source at the transmitter. An LED is a low-power device, whereas a laser is
a high-power device; lasers are used for longer distances. To transmit a 1, the LED is switched on for the
duration of the pulse period; to transmit a 0, the LED is switched off for the duration. The photodetector at
the receiving end converts the light signal to an electrical signal. The 800 and 1300 nm bands are used for the
transmission. (Note that in optical fiber communication, we will represent the bands in wavelength and not in
Hertz). Data rates up to 140Mbps can be supported by multimode fiber systems.
Figure 14.1: Multimode optical fiber communication system.
In a multimode fiber optic system, there will be a light emitting diode or a laser at the transmitter and a
photodetector at the receiver.
Multimode optical fiber has a loss of about 0.5 dB/km. Hence, a regenerator is required every 10 km. The main
attraction of multimode fiber is that it is of low cost, and hence it is used as the medium for communication over
small distances.
The multimode fiber has a loss of about 0.5 dB/km. If the distance between the transmitter and the receiver is
more than 10 km, a regenerator is required. The regenerator converts the light signal into an electrical signal
and then back into a light signal before sending it onward on the fiber. Regenerators are very costly, which
makes them prohibitively expensive for very long distance communication. In a multimode fiber, light travels in
multiple propagation modes, and each mode travels at a different speed. As a result, the pulse received at the
receiving end is distorted. This is called multimodal dispersion.
The main attraction of multimode fiber is its low cost, which makes it a cost-effective medium for
communication over short distances.
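The 10 km regenerator spacing follows from a simple loss budget: the maximum span equals the allowable loss divided by the per-kilometer loss. The 0.5 dB/km figure is from the text; the 5 dB budget below is an assumed value chosen to match the spacing quoted above:

```python
# Regenerator spacing from a simple loss budget. The 0.5 dB/km loss is
# from the text; the 5 dB allowable loss is an assumed budget chosen to
# reproduce the 10 km spacing quoted for multimode fiber.

def max_span_km(loss_budget_db, fiber_loss_db_per_km):
    """Longest fiber run before the signal must be regenerated."""
    return loss_budget_db / fiber_loss_db_per_km

print(max_span_km(5.0, 0.5))    # multimode: 10.0 km
print(max_span_km(10.0, 0.25))  # a lower-loss fiber stretches the span
```

The same budget logic explains why lower-loss single-mode fiber, discussed next, allows much wider regenerator spacing.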
14.1.2 Single-Mode Optical Fiber Communication
A single-mode optical fiber communication system is shown in Figure 14.2. Transmission is done at the 1300
and 1550 nm wavelengths, and data rates up to 1 Gbps can be achieved. The main advantage of single-mode
fiber is its lower loss, so regenerators are required only every 40 km. A laser is used at the transmitting end and
a photodetector at the receiving end. Developments in lasers and photodetectors, as well as in the manufacture
of pure glass, are the main reasons for the dramatic increase in data rate and decrease in loss.
Figure 14.2: Single-mode optical fiber communication system.
In single-mode optical fiber communication systems, transmission is done at 1300 and 1550 nm wavelengths.
Single-mode fiber has low loss, and hence regenerators are required only every 40 km.
14.1.3 Wave Division Multiplexing Systems
Wave division multiplexing (WDM) increases the capacity of an already laid optical fiber. As shown in Figure
14.3, signals corresponding to different sources are transmitted over the fiber at different wavelengths. At the
receiving end, the demultiplexer separates these different wavelengths, and the original signals are obtained.
Up to 16 or 32 wavelengths can be multiplexed together. The next development was dense wave division
multiplexing in which 256 wavelengths can be multiplexed and sent on a single fiber. Systems that can support
terabits per second data rates have been demonstrated.
Figure 14.3: Wave Division Multiplexing.
In wave division multiplexing, signals corresponding to different sources are transmitted at different
wavelengths. Hence, the capacity of an already laid fiber can be increased significantly.
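The capacity gain from WDM is simply multiplicative, as this sketch shows; the 10 Gbps per-wavelength rate is an assumed example value:

```python
# Fiber capacity under WDM is the number of wavelengths times the
# per-wavelength data rate. The 10 Gbps figure is an assumed example.

def wdm_capacity_gbps(num_wavelengths, per_wavelength_gbps):
    return num_wavelengths * per_wavelength_gbps

print(wdm_capacity_gbps(32, 10))    # WDM with 32 wavelengths: 320 Gbps
print(wdm_capacity_gbps(256, 10))   # DWDM with 256 wavelengths: 2560 Gbps
```

With 256 wavelengths, aggregate rates move into the terabit range, consistent with the demonstrated systems mentioned above.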
The only problem with single-mode optical fiber is that regenerators are required every 40 km. The
development of the optical amplifier, a specially made fiber about 10 meters long, eliminated the need for
regenerators. These amplifiers replaced the costly regenerators, resulting in tremendous cost savings when
optical fiber is used for very long distance communication.
Note: Optical amplifiers eliminate the need for regenerators. The optical amplifier is a specially made fiber
about 10 meters long.
Using these systems (Figure 14.4), 10Tbps data rates can be transmitted over a distance of a few hundred
kilometers.
Figure 14.4: Optical communication with optical amplifier.
The various bands used in single-mode optical fiber are given in Table 14.1.
Table 14.1: Communication Bands Used in Single-Mode Fiber
Band Wavelength Range (nm)
O band 1260 to 1360
E band 1360 to 1460
S band 1460 to 1530
C band 1530 to 1565
L band 1565 to 1625
U band 1625 to 1675
Presently, C and L bands are being used due to the availability of the optical amplifiers. Developments in optical
fiber manufacturing, Raman amplification, and so forth will lead to use of the other bands in the future.
C band (1530 to 1565 nm) and L band (1565 to 1625 nm) are presently being used in single-mode optical fiber
communication due to the availability of optical amplifiers.
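Since optical bands are quoted in wavelength, converting to frequency uses f = c/λ. The snippet below converts the C band edges from Table 14.1:

```python
# Optical bands are quoted in wavelength; frequency follows from
# f = c / wavelength. Here the C band edges from Table 14.1 are
# converted to terahertz.

SPEED_OF_LIGHT_M_S = 299_792_458

def wavelength_nm_to_thz(wavelength_nm):
    return SPEED_OF_LIGHT_M_S / (wavelength_nm * 1e-9) / 1e12

low = wavelength_nm_to_thz(1565)   # longer wavelength -> lower frequency
high = wavelength_nm_to_thz(1530)
print(f"C band: {low:.1f} to {high:.1f} THz")
```

The 35 nm wide C band thus corresponds to roughly 4 THz of usable optical spectrum, which is what WDM systems slice into individual wavelengths.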
14.2 OPTICAL NETWORKS
So far, we have discussed optical communication systems for point-to-point links. These systems are
transmission systems that can carry optical signals at very high data rates over very large distances. As the
optical communication technology matured, a number of proprietary networking solutions were developed.
Subsequently, standardization activities resulted in a number of international standards to develop optical
networks.
14.2.1 SONET/SDH
SONET (synchronous optical network) is a standard developed by American National Standards Institute
(ANSI) for optical networking in North America. The International Telecommunication Union Telecommunication
Standardization Sector (ITU-T) developed a slightly different standard called synchronous digital hierarchy (SDH).
SONET/SDH standards specify how to access single-mode optical fiber using standard interfaces and how to
multiplex the digital signals using synchronous TDM. These standards specify the rate hierarchy and interfaces
for data rates from 51.84Mbps to 39.813Gbps.
Synchronous optical network (SONET) is a standard for optical networking. SONET was developed by ANSI for
North America. A slightly different standard used in Europe is called the synchronous digital hierarchy (SDH).
The signal hierarchy in SONET/SDH is shown in Table 14.2.
Table 14.2: SONET/SDH Signal Hierarchy

Optical Carrier (OC) Level   SDH Synchronous Transport Module (STM) Level   Data Rate     No. of 64kbps Channels
OC-1                         -                                              51.84Mbps     672
OC-2                         -                                              103.68Mbps    1,344
OC-3                         STM-1                                          155.52Mbps    2,016
OC-4                         -                                              207.36Mbps    2,688
OC-9                         STM-3                                          466.56Mbps    6,048
OC-12                        STM-4                                          622.08Mbps    8,064
OC-18                        STM-6                                          933.12Mbps    12,096
OC-24                        STM-8                                          1.24416Gbps   16,128
OC-36                        STM-12                                         1.86624Gbps   24,192
OC-48                        STM-16                                         2.48832Gbps   32,256
OC-96                        STM-32                                         4.976Gbps     64,512
OC-192                       STM-64                                         9.953Gbps     129,024
OC-768                       STM-256                                        39.813Gbps    516,096
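The hierarchy in Table 14.2 follows a simple rule: OC-n runs at n times the OC-1 rate of 51.84Mbps and carries n times 672 voice channels. A quick check:

```python
# The SONET/SDH hierarchy follows from the OC-1 base rate: OC-n runs at
# n x 51.84 Mbps and carries n x 672 voice channels of 64 kbps each.

OC1_RATE_MBPS = 51.84
OC1_VOICE_CHANNELS = 672

def oc_rate_mbps(n):
    return n * OC1_RATE_MBPS

def oc_voice_channels(n):
    return n * OC1_VOICE_CHANNELS

print(f"OC-3:  {oc_rate_mbps(3):.2f} Mbps, {oc_voice_channels(3)} channels")
print(f"OC-48: {oc_rate_mbps(48):.2f} Mbps, {oc_voice_channels(48)} channels")
```

Both results match the OC-3 and OC-48 rows of the table.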
A typical network based on SONET/SDH standards is shown in Figure 14.5. Though the network can operate in
any topology such as star or mesh, dual-ring topology is the preferred choice. In dual-ring topology, there will be
two fibers. One fiber will transmit in one direction and the other fiber in the opposite direction. The advantage of
this topology is that even if one link fails, the communication does not fail. Such a topology leads to survivable
networks, that is, networks that can survive even if some links fail. As shown in the figure, there can be a
backbone ring that operates, say, at OC-192/STM-64. The backbone network may interconnect major cities in a country. In
each city, there can be a ring operating at different speeds from OC-3 to OC-12. The add-drop multiplexer
(ADM) is used to insert the traffic channels into the SONET/SDH transmission pipe as well as to take out traffic
channels from the pipe. DXCs (digital cross connects) connect two rings and also do the
multiplexing/demultiplexing and switching functions. The line terminating equipment (LTE) provides user access
to the network.
Figure 14.5: SONET/SDH network.
A SONET/SDH network operates in dual-ring topology. One fiber is used for transmitting in one direction and
the other fiber for transmission in the opposite direction. This topology facilitates development of survivable
networks; the network can survive even if one link fails.
The ADM and the DXC are the two important network elements that provide the networking capability. The
functioning of ADM and DXC can be understood through the analogy of a train. When a train stops at a railway
station, some people get off and some people get on; the traffic is dropped and added. ADM does a similar
function. The traffic for a particular place (in the form of TDM slots) can get dropped and added. The input to
the ADM can be an E1 link with 30 voice channels. Some channels, say 10 voice channels, meant for a
particular station can be dropped and another 10 voice channels can be added.
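The drop-and-add operation can be sketched as a toy model; the slot assignments below are assumed example values and ignore the real TDM framing details:

```python
# Toy model of an ADM dropping and adding channels on an E1 frame of
# 30 voice slots. Slot assignments are assumed example values; the
# real TDM framing mechanics are not modeled.

def adm_drop_add(frame, drop_slots, local_traffic):
    """Return (dropped_channels, outgoing_frame): traffic for this
    station is dropped and local traffic reuses the freed slots."""
    dropped = {slot: frame[slot] for slot in drop_slots}
    outgoing = dict(frame)
    outgoing.update(local_traffic)
    return dropped, outgoing

e1_frame = {slot: f"through-{slot}" for slot in range(30)}
drop = list(range(10))                                # 10 channels end here
add = {slot: f"local-{slot}" for slot in range(10)}   # 10 channels inserted
dropped, out = adm_drop_add(e1_frame, drop, add)
print(len(dropped), "dropped;", len(out), "slots carried onward")
```

Note that the outgoing frame still carries the full 30 slots: the ADM only swaps the contents of the slots it terminates locally.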
Continuing with our train analogy, a train may stop at a railway junction, and some coaches can be detached
and some coaches can be attached to it. The detached coaches are connected to another train. The attached
coaches would have come from some other train of a different route. The DXC performs a similar function.
Traffic from different stations is multiplexed and demultiplexed, and also switching is done at a DXC.
The add-drop multiplexer (ADM) and the digital cross connect (DXC) are the two important network elements in
an optical fiber network.
14.2.2 Optical Transport Networks
Similar to a SONET/SDH network, a WDM network can be developed as shown in Figure 14.6. A WDM ring
can act as a backbone network. The backbone ring can be connected to a SONET/SDH ring or to passive
optical network (PON). The network elements in the WDM network will be OADM (optical ADM) and WXC
(WDM cross connect), which do similar functions as ADM and DXC, but in the optical wavelengths. At each
WXC, a SONET/SDH ring can be connected that serves different cities. Compared to the SONET/SDH
network, this network offers much better capacity and flexibility. The PON can be any of the present optical
networks based on star or ring topology.
An optical transport network consists of a backbone WDM ring to which SONET/SDH ring or passive optical
network (PON) can be connected. The network elements in this configuration are optical ADM and WDM cross
connect.
The network shown in Figure 14.6 has one problem that needs to be addressed. The data needs to be
converted from electrical domain to optical domain as well as from optical domain to electrical domain. This
calls for additional hardware. To avoid these conversions, all-optical networks are being developed. Research
in this direction is very promising, and commercial products are in the works.
Figure 14.6: WDM network.
14.3 BROADBAND SERVICES TO THE HOME/OFFICE
To provide broadband services to homes, offices, and so on, the Full Services Access Networks (FSAN)
group was established by network operators to evolve optical access networks providing a variety of
voice/data/video services. The conceptual architecture for such an access network is shown in Figure 14.7.
Work is in progress to define the various standards for deploying such a network.
Figure 14.7: Fiber to home/office.
The architecture consists of an optical distribution system that provides the connectivity to homes/offices, etc.
through standard interfaces. The distribution network is a collection of optical fibers and associated interface
equipment. In the case of fiber to home or fiber to office, an optical network termination (ONT) will be used to
which the end systems can be connected. In places where it is not feasible/economical to use optical fiber at
the end, twisted pair or very high speed digital subscriber line (VDSL) can be used. The optical network unit
(ONU) provides the interface between the network termination (NT) and the distribution network. The services
supported can be based on a variety of networks such as Asynchronous Transfer Mode (ATM), Frame Relay,
Internet, PSTN, and so on. The objective of FSAN is very clear: to develop a single worldwide broadband
access system.
The Full Services Access Networks (FSAN) group developed a conceptual architecture to provide broadband
multimedia services to the home/office using optical fiber as the main transmission medium.
Note: The objective of the FSAN architecture is to develop a single worldwide broadband access system. In
addition to the optical network elements, even non-optical-fiber-based systems can be integrated
into this architecture.
Summary
This chapter presented advances in optical communication systems. Initially, multimode optical fibers were
used for transmission, which required regenerators every 10 kilometers. Because regenerators are very costly,
multimode optical fiber is used for small distances. With the development of single-mode optical fiber, the data
rates increased tremendously. The loss in single-mode fiber is also very low, and hence regenerators are
required only for every 40 km. Wave division multiplexing (WDM) and dense WDM (DWDM) facilitate sending
multiple wavelengths on the same fiber. The development of optical amplifiers resulted in significant cost savings. As a
result of all these developments, optical fiber communication at terabits per second data rates is achieved.
Standardization activities for optical networks resulted in synchronous optical network (SONET)/synchronous
digital hierarchy (SDH) standards. These standards specify the multiplexing hierarchy for transmitting data at
rates up to 39.813Gbps, referred to as OC-768. In SONET/SDH, a dual-ring topology is used that provides a
very reliable communication infrastructure. A backbone ring can interconnect the major cities in a country, and
smaller rings can cover each city. These rings can be interconnected through digital cross connect (DXC)
equipment. Now work is in progress to develop standard interfaces to achieve broadband access to
homes/offices using optical fiber communication.
References
R. Ramaswami. "Optical Fiber Communication: From Transmission to Networking." IEEE Communications
Magazine, 50th Anniversary Issue, May 2002.
Y. Maeda et al. "FSAN OAN-WG and Future Issues for Broadband Optical Access Networks". IEEE
Communications Magazine, Vol. 31, No. 12, December 2001.
http://www.fsanet.net Web site of Full Services Access Networks Group.
http://www.ansi.org Web site of American National Standards Institute.
Questions
1. What are the advantages and disadvantages of multimode optical fiber communication systems?
2. What is wave division multiplexing?
3. Explain the architecture of a SONET/SDH ring.
4. What is the function of the add-drop multiplexer (ADM)?
5. Explain the conceptual architecture proposed by FSAN.
Exercises
1. Study the specifications of commercially available optical components such as lasers, LEDs, and
photodetectors.
2. Study the recent developments in DWDM and the data rates achieved.
3. Prepare a technical report on Raman amplification.
4. Prepare a technical report on the various wavelength bands used in single-mode fiber and the
issues involved in making use of these bands (availability of optical components, amplifiers, loss in
the cable, etc.).
Answers
1. You can get the details from http://www.efiber.net and http://www.fiberalliance.net.
2. Latest information on DWDM commercial products can be obtained from http://www.cisco.com,
http://www.nortelnetworks.com, http://www.ericsson.com, and http://www.siemens.de.
3. A Raman amplifier is a device that amplifies the optical signals directly. Hence, there is no need to convert
the optical signal to electrical signal, amplify it, and reconvert it into optical signal.
4. Presently C band and L band are used in optical fiber, mainly because of the availability of optical
components in these bands.
Projects
1. Carry out the paper design of a communication system that provides connectivity between all the county
seats and the state capital of your state. From each county seat, connectivity should be provided to all the
major towns in the county. Suggest design alternatives for mesh and star architectures. High-bandwidth
services such as video conferencing, high-speed Internet access, and so forth should be provided.
2. An organization has five branches in a city. Each branch has to be provided with high-speed connectivity
for interbranch communication. Propose a solution using fiber optic communication. Study the
alternatives in using (a) ring, star, or mesh topology; (b) single-mode or multimode fiber; and (c)
SONET or WDM. Choose the best alternative based on the following criteria: (a) cost; (b) future
expandability in case more branches are opened later; and (c) reliability and availability of the network.
Part II: Data Communication Protocols and Computer
Networking
Chapter 15: Issues in Computer Networking
Chapter 16: ISO/OSI Protocol Architecture
Chapter 17: Local Area Networks
Chapter 18: Wide Area Networks and X.25 Protocols
Chapter 19: Internetworking
Chapter 20: TCP/IP Protocol Suite
Chapter 21: Internet Protocol (IP)
Chapter 22: Transport Layer Protocols: TCP and UDP
Chapter 23: Distributed Applications
Chapter 24: The Wired Internet
Chapter 25: Network Computing
Chapter 26: Signaling System No. 7
Chapter 27: Integrated Services Digital Network
Chapter 28: Frame Relay
Chapter 29: Asynchronous Transfer Mode
The twentieth century's two great gifts to humankind are the PC and the Internet. The PC has now become
ubiquitous and is an integral part of the daily lives of most of us. The Internet, the network of computer networks
spreading across the globe, is now making distance irrelevant. The Internet is the platform to access
information that is available anywhere in the world through a click of the mouse. Unlike PSTN, the Internet is a
recent phenomenon, just about three decades old. But during the past three decades, the field of
computer networking has seen developments at a breathtaking pace.
For entities (people, computers, telephones, or any appliances) to communicate with each other, established
procedures are mandatory. These established procedures or protocols, fundamental to networking, are
discussed in detail in this part of the book. We will study the OSI reference model, the TCP/IP architecture, and
the various protocols for distributed applications. The technologies and standards for local and wide area
networks are also covered. We will study the developments in network computing and the exciting new
applications using this technology such as application service provisioning, dynamic distributed systems, etc.
We will also study Signaling System No. 7, Integrated Services Digital Network (ISDN), Frame Relay, and
Asynchronous Transfer Mode (ATM) systems.
Computer networks are being used extensively not just for data, but for voice, fax and video communication as
well. For designing any communication network, a good understanding of the protocols described in this part of
the book is most important for every budding telecommunications professional. This part of the book has 15
chapters, which cover the details of the data communication protocols and representative computer networks.
Chapter 15: Issues in Computer Networking
OVERVIEW
With the widespread use of computers in all walks of life, the need arose for making the computers
communicate with one another to share data and information. Circuit switching, which works well for voice
communication, is inefficient for computer communication, and a radically new approach, called packet
switching, was developed. In this chapter, we will study the concept of packet switching, which is fundamental
to computer networks. We will also study the various services and applications supported on computer
networks.
To make two computers talk to each other is a tough task. The two computers may be running different
operating systems, using different data formats, operating at different speeds, and so on. We need to establish very
detailed procedures to make computers exchange information. To tackle this big problem, we divide the
problem into small portions and tackle each portion. This philosophy has led to the layered approach to protocol
architecture; we will discuss the importance of the layered approach and protocol standards.
15.1 THE BEGINNING OF THE INTERNET
The threat of a war between the Soviet Union and the United States in the 1960s led to the development of the
Internet. The U.S. Department of Defense wanted to develop a computer network that would continue to work
even if some portions of the communication network were destroyed by the enemy.
Consider a network of four computers shown in Figure 15.1. If we use the circuit switching operation for
communication between any two computers, the procedure is:
1. Establish the connection.
2. Transfer the data.
3. Disconnect.
Figure 15.1: A computer network.
In circuit switching, a connection is established between the two users, data is transferred, and then the circuit
is disconnected. The telephone network uses circuit switching to establish voice calls between subscribers.
If the communication link is destroyed, then it is not possible to make the two computers exchange data. This is
the inherent problem in a circuit switching operation. If there are alternate communication links, the computers
can exchange data, provided the data is routed through the alternate paths, even without the user's knowledge.
This led to the concept of packet switching, a revolutionary concept that is fundamental to data
communications.
Instead of circuit switching, packet switching is used in computer communication. Packet switching has two
main advantages: communication remains reliable even if some links fail, and call setup and disconnection
are eliminated.
Using the concept of packet switching, the TCP/IP (Transmission Control Protocol/Internet Protocol) suite was
developed after the first form of the Internet, known as ARPANET (Advanced Research Projects Agency
Network), was deployed in 1969, interconnecting a few sites in the U.S.
Packet switching has two main attractions: providing reliable communications even if some communication
links fail, and efficient usage of the communication links because no time is lost for call setup and disconnection.
However, packet switching requires new protocols to be designed, and we will study these issues in detail in
this chapter.
15.2 SERVICES AND APPLICATIONS
The need for a network of computers arises mainly to share the information present in different computers. The
information can be transported electronically from one computer to another, obviating the need for physical
transportation through floppies or CD-ROMs. In addition, resources can be shared; for example, a printer
connected to a computer on the network can be accessed from other computers as well. The services that can
be supported on the network are:
Electronic mail: This is the most popular service in computer networks. People can exchange mail and the
mail can contain, in addition to the text, graphics, computer programs, and such.
File transfer: A file on one computer can be transferred to another computer electronically.
Remote login: A person at one computer can log in to another computer and access the programs or files on
the other computer. For instance, a person using a PC can log in to a mainframe computer and execute the
programs on the mainframe.
These are the basic services supported by computer networks. Using these services, many applications can be
developed, such as bibliographic services that obtain literature on a specific topic, search different computers
on a network for specific information, find out where the information is available, and then retrieve it.
The three basic services supported by computer networks are: email, file transfer and remote login.
15.3 PACKET SWITCHING CONCEPTS
Suppose you want to transfer a file from one computer to another. In packet switching, a file is divided into
small units (for instance, 1,024 bytes) called packets, and each packet is sent over the transmission medium.
At the receiving end, these packets will be put together, and the file is given to the user. The sender just gives a
command to send a file, and the recipient receives the file; the underlying operation of packetization is
transparent and is not known to the user.
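Packetization and reassembly can be sketched in a few lines of Python (the 1,024-byte payload size follows the example above; the function names are illustrative):

```python
# Split a byte stream into fixed-size packets and reassemble it.
# The 1,024-byte packet size follows the example in the text.
PACKET_SIZE = 1024

def packetize(data, size=PACKET_SIZE):
    """Divide the data into packets of at most `size` bytes."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def reassemble(packets):
    """Put the packets together at the receiving end."""
    return b"".join(packets)

file_data = b"x" * 3000           # a 3,000-byte "file"
packets = packetize(file_data)
assert len(packets) == 3          # 1,024 + 1,024 + 952 bytes
assert reassemble(packets) == file_data
```

The user sees only the whole file at either end; the splitting and joining happen inside the protocol software.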
To transmit data using this concept, we need special equipment called packet switches, as shown in Figure
15.2. A packet switch has ports to receive packets from incoming lines and ports to send packets on outgoing
lines. The packet switch receives each packet, analyzes the data fields in the packet to find the destination
address, and puts the packet in the required outgoing line. This mechanism is known as packet switching. The
packet switch should have buffers to hold the packets if they arrive at a rate faster than the rate at which the
packets can be processed. When the packets arrive on the input ports, they are kept in buffers, each packet is
analyzed by the packet switch (to find out the destination address), and based on the destination address, the
packet will be sent through one of the outgoing ports. In other words, the packet is "routed" to the appropriate
destination.
Figure 15.2: A packet switching network.
In packet switching, the data is divided into small units called packets, and each packet is sent over the
network. The packet switch analyzes the destination address in the packet and routes the packet toward the
destination.
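The receive-analyze-forward cycle described above can be sketched as follows; the routing table entries, host names, and port names are hypothetical:

```python
from collections import deque

# A minimal packet switch: packets queue in an input buffer, the switch
# reads the destination address field and forwards on the matching port.
routing_table = {"hostA": "port1", "hostB": "port2"}   # hypothetical entries

input_buffer = deque([
    {"dest": "hostB", "payload": b"hello"},
    {"dest": "hostA", "payload": b"world"},
])
output_ports = {"port1": [], "port2": []}

while input_buffer:
    packet = input_buffer.popleft()        # take the next queued packet
    port = routing_table[packet["dest"]]   # analyze the destination address
    output_ports[port].append(packet)      # place it on the outgoing line

assert [p["dest"] for p in output_ports["port2"]] == ["hostB"]
```

The buffer (here a queue) is what absorbs bursts when packets arrive faster than the switch can process them.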
This is a powerful concept. The main advantage is that each packet can take a different route to reach the
destination; it is not necessary that all packets follow the same route. If one communication link fails, packets
can take a different route; if one communication link has too much traffic, the packets can take a route with less traffic.
What are the problems?
Since each packet can travel independently, the packet has to carry the address of the destination.
Since different packets can take different routes, they may not be received in sequence at the destination,
and hence each packet has to carry a sequence number so that at the destination all the packets can be
put in sequence.
Due to transmission errors, some packets may be received with errors. Each packet must contain some
error detection mechanism.
Some packets may get lost due to intermittent problems in the medium or packet switches. There must be
some mechanism for acknowledgements: the receiver has to inform the sender whether a particular
packet is received correctly or request a retransmission of the packet. Each packet should also have the
source address in addition to the destination address.
There are a few more problems, but these are the major problems. Is packet switching worth it, then? Well, yes,
because it provides a very reliable mechanism for transfer of data provided we establish the necessary
procedures. Also, we can avoid the call setup and call disconnection procedures, saving time.
For packet switching to work, we need a packet switch. The job of a packet switch is to get the packets from an
incoming port, analyze the packet to see its destination address, and then put the packet in the outgoing port.
In the packet switch, the incoming packet is kept in a buffer (in a queue). The software takes each packet,
analyzes it, and keeps it in an outgoing buffer. This switching has to be done very fast. For fast packet
switching, the size of the packet is an important issue.
If the packet size is very large, the packet switch should have a large buffer. If the packet is small, the overhead
(additional data to be inserted, such as addresses, CRC, etc.) becomes a large fraction of each packet. The
size of the packet is therefore a design parameter. Just to get an idea of the packet sizes used in practical
networks: in Ethernet LANs, the maximum packet size is 1,526 bytes, and in X.25-based networks, it is 1,024 bytes.
A packet switch will have incoming ports and outgoing ports. The packet received on the incoming port will be
analyzed by the packet switch for the destination address, and the packet is given to one of the outgoing ports
towards the destination.
Packet switching takes two forms: virtual circuit and datagram service.
Note: The size of the packet is an important design parameter. If the packet is small, the overhead on each
packet will be very high. On the other hand, if the packet is large, a high-capacity buffer is required in
the packet switch.
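The trade-off in the note above can be made concrete with a little arithmetic. Assuming a 20-byte header (an illustrative figure, not tied to any particular protocol), the fraction of each packet consumed by overhead is:

```python
# Fraction of each packet consumed by overhead, for a hypothetical
# 20-byte header carrying addresses, sequence number, CRC, etc.
HEADER = 20

def overhead_fraction(payload_size):
    return HEADER / (HEADER + payload_size)

# Small packets: overhead dominates. Large packets: overhead is
# negligible, but the switch needs larger buffers.
assert round(overhead_fraction(100), 3) == 0.167    # about 17% overhead
assert round(overhead_fraction(1024), 3) == 0.019   # about 2% overhead
```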
15.3.1 Virtual Circuit
When a computer (source node) has to transmit some information to another computer (destination node) over
a packet network, a small packet, called the call setup packet, is sent by the source node to the destination
node. The call setup packet will take a route to reach the destination node, and the destination can send a call
accept packet. The route taken by these packets is subsequently used to transmit the data packets. A virtual
circuit is established between the transmitting and receiving nodes to transfer the data. This is similar to circuit
switching, and hence some time is lost in call setup and call disconnection. However, the advantage is that all
the data packets are received at the destination in the same sequence, so reassembly is easier. The virtual
circuit service (also known as connection-oriented service) concept is shown in Figure 15.3.
Figure 15.3: Virtual circuit (connection-oriented) service.
At the time of setting up the call, at each packet switch, the necessary buffers will be allocated, and the
outgoing port will also be allocated. When a packet arrives, the packet switch knows what to do. If the packet
switch cannot accept a call for any reason, it informs the sender so that the sender can find an alternative path.
A virtual circuit can be set up on a per-call basis or on a permanent basis. A virtual circuit set up on a per-call
basis is called a switched virtual circuit (SVC), similar to the call set up in a PSTN when we call someone else's
telephone. A virtual circuit set up on a permanent basis is called a permanent virtual circuit (PVC), which is
similar to a leased line.
Note: For creating a permanent virtual circuit (PVC), each packet switch on the route between the source
and destination has to be programmed. The PVC is a concatenation of the routes between packet
switches. A PVC is used when there is heavy traffic between two nodes so that no time is wasted in
setting up a call for each data transfer session.
X.25, Asynchronous Transfer Mode (ATM), and Frame Relay-based networks use the virtual circuit
mechanism.
In virtual circuit service, also known as connection-oriented service, first a circuit is established between the
source and destination, then data transfer takes place, and finally the circuit is disconnected. This is similar to
circuit-switching operation.
15.3.2 Datagram Service
In datagram service, there is no procedure for call setup (and hence for call disconnection as well). Each data
packet (called a datagram) is handled independently: the data is divided into packets, and each packet can take
its own route to reach the destination, as shown in Figure 15.4. However, in this scheme, the packets can reach
the destination at different times and hence not in sequence. It is the responsibility of the destination to put the
packets in order and check whether all the packets are received or whether some packets are lost. If packets
are lost, this has to be communicated to the source with a request for retransmission. The advantage of
datagram service is that there is no call setup.
Figure 15.4: Datagram service.
For data communication (such as e-mail or file transfer), datagram service works well because the channel is
used efficiently and no time is spent on call setup.
In datagram service, each packet is handled independently by the packet switch. The advantage of datagram
service is that there are no call setup and disconnection procedures, and hence it is much more efficient than a
virtual circuit.
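The destination's reordering job can be sketched as follows (sequence numbers and payloads are illustrative):

```python
# Datagrams may arrive out of order; the destination reorders them by
# sequence number and checks that none are missing.
received = [
    {"seq": 2, "payload": b"C"},
    {"seq": 0, "payload": b"A"},
    {"seq": 1, "payload": b"B"},
]

received.sort(key=lambda p: p["seq"])
# After sorting, sequence numbers should run 0, 1, 2, ... with no gaps;
# any gap would trigger a retransmission request to the source.
missing = [i for i, p in enumerate(received) if p["seq"] != i]
assert missing == []
data = b"".join(p["payload"] for p in received)
assert data == b"ABC"
```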
15.3.3 Source Routing
A third mechanism used to send packets from the source to destination is source routing. In this mechanism,
the source decides which route the packet has to take. This information is also included in the packet. Each
packet switch receives the packet and then forwards it to the next packet switch based on the information
available in the packet. For source routing to work, the source should know the complete topology of the
network (how many packet switches are there, how each packet switch is connected to other packet switches,
and so on). Hence, this mechanism is rarely used.
In source routing, the source specifies the route to be taken by the packets. However, this is difficult to
implement because the topology of the network needs to be known by the source.
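A sketch of source routing, with a hypothetical route chosen by the source and carried inside the packet itself:

```python
# In source routing, the packet carries the full route chosen by the
# source; each switch consumes the next hop from the front of the list.
packet = {"route": ["switch1", "switch2", "hostB"], "payload": b"data"}

hops_taken = []
while packet["route"]:
    next_hop = packet["route"].pop(0)   # each switch reads and removes its hop
    hops_taken.append(next_hop)

assert hops_taken == ["switch1", "switch2", "hostB"]
```

Note that the source could only build the `route` list if it already knew the full network topology, which is why the mechanism is rarely used.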
15.4 ISSUES IN COMPUTER NETWORKING
To design and develop computer networks is a challenging task because there are many issues to be
considered. Here are some of the major issues:
Geographic area: The area to be covered by the network is of foremost importance. The area of coverage can
be a single floor in a building, multiple floors in a building, a large campus with different buildings, an entire city
with offices in different locations, a country with offices in different cities, or the world with offices in different
countries. Based on the area of coverage, the communication media are chosen. A typical corporate network can
use different media: fiber for LAN, copper wire or terrestrial radio for MAN (metropolitan area network), fiber or
satellite for WAN (wide area network), and so on. In addition to the geographic coverage, the services to be
supported also determine the type of medium; voice and video communication require larger bandwidths.
Note: A wide area network can use a combination of different transmission media, such as satellite radio,
coaxial cable, and optical fiber. Generally, the speeds of wide area networks are much lower
compared to the speeds of local area networks.
Services to be supported: Nowadays, computer networks need to support not just data, but voice, fax, and
video services as well. This requirement has to be weighed against the available bandwidths and cost
considerations. The higher the bandwidth requirement, the higher the cost. For different types of services to be
supported, different application-level protocols are required.
Note: Though in the initial days of computer networks mostly data services were supported, nowadays voice
and video services are also becoming predominant. Real-time communication of voice and video over
computer networks requires additional protocols.
Security: Ensuring that the network provides secure communication is of paramount importance. To ensure
secrecy, the necessary protocols need to be implemented. To support applications such as e-commerce, security is
a must.
Different computing platforms: Computers have a wide variety of processors and operating systems.
Because of these differences, filesystems will be different, data formats will be different, and filename
conventions will be different (for instance, Unix versus Windows 9x). The protocols need to be designed in such
a way that computers with different operating systems and data formats can communicate with one another.
Error control: The transmission media introduce errors during the transmission. Also, because of congestion
in the networks, some packets may be lost. The protocols need to take care of the errors. Error detection is the
first step: the receiver has to check whether the data that has been transmitted is received correctly using some
additional bits called CRC (cyclic redundancy check). If the data is incorrect, the receiver has to ask for
retransmission of the packet. Alternatively, error-correcting codes can be used to correct a few errors at the
receiver without asking for retransmission.
Note: If errors are detected at the receiving end, the receiver has to ask for retransmission of packets. If the
transmission medium is not very reliable, this results in a lot of retransmissions, causing waste of
bandwidth. To reduce retransmissions, error correction is a good approach and is generally used in
satellite communication systems.
Flow control: As computers on a network will be of different capacities in terms of processing power, memory,
and so forth, it may happen that one computer cannot receive packets at the same speed at which
the other computer is sending them. Protocols should be designed to control the flow of packets; the receiver
may have to inform the sender, "hold on, do not send any more packets until I ask you to." This mechanism is
handled through a flow control protocol. Flow control poses special problems in high-delay networks, such as
satellite networks, where the roundtrip delay is quite high.
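In its simplest form (stop-and-wait), the sender transmits one packet and waits for the receiver's go-ahead before sending the next. The sketch below models this with a direct function call standing in for the network:

```python
# Stop-and-wait flow control: the sender transmits one packet and waits
# for the receiver's acknowledgement before sending the next one.
def receiver(packet, delivered):
    """The receiver accepts the packet and grants permission to continue."""
    delivered.append(packet)
    return "ACK"                    # "go ahead, send the next packet"

delivered = []
for seq in range(3):
    ack = receiver({"seq": seq}, delivered)
    assert ack == "ACK"             # sender proceeds only after the ACK

assert [p["seq"] for p in delivered] == [0, 1, 2]
```

On a satellite link, waiting a full roundtrip for every ACK wastes capacity, which is why windowed flow control (multiple packets in flight) is used in practice.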
Addressing: When two computers in a network have to share information, the sender has to specify to whom
the packets are addressed. The receiver has to know from where the packets have come. So, addressing
needs to be handled by the protocols.
Note: The global Internet solves the addressing problem by assigning a unique address to every machine
connected to it. This address is known as the IP address.
Type of communication: Generally, two computers need to talk to each other. Cases also arise when a
computer has to broadcast a packet to all the computers on a network. Also, there may be a packet being sent
from one computer to a selected number of computers (for video conferencing between computers). This type
of communication can be point-to-point, broadcasting, or multicasting. The protocols must have the capability to
take care of these different types of communications.
Note: Multicasting is a requirement in applications such as video conferencing. For instance, if five persons
situated at five different locations want to participate in a video conference, the data from each
location needs to be sent to the other four locations.
Signaling packets: Before the actual data transfer takes place, it may be necessary to set up the call (as in the
case of virtual circuit service) and disconnect the call. Protocols need to be designed for this to happen through
transmission of special packets.
Congestion control: The packet switch will have a limited buffer for queuing the incoming packets. If the
queue is full and the packet switch cannot take any more packets, then it results in congestion. Protocols need
to take care of congestion to increase the throughput and reduce the delay. Two alternatives can be
implemented: (a) send a control packet from the congested node to the other nodes informing them to reduce
the number of packets, or (b) inform the other nodes to follow different routes. The strategies followed for
congestion control are similar to the strategies adopted by traffic police at traffic islands.
Note: In computer networks, it is difficult to predict the traffic. Suddenly, there may be heavy traffic, as a
result of which some packet switches cannot handle incoming packets at all. In such cases, the
packets might be discarded.
Internetworking: Based on the need, different networks need to be connected together, such as a LAN and a
WAN. The protocols used in the two networks are likely to be different, and to inter-work, there must be some
machines that do the protocol conversion. This protocol conversion is achieved by entities known as routers or
gateways.
Segmentation and reassembly: When two networks are interconnected, there is a possibility that the packet
sizes supported by the two networks are different. There must be some designated machines that will take care
of the differences in packet size. The large packets have to be broken down into smaller packets (segmented)
and later put together (reassembled).
In developing computer networks using packet switching, a number of issues need to be considered. These
include the geographical area to be covered, services to be supported, security, different computing platforms,
mechanisms to control errors and the different speeds of the computers, addressing, support for real-time
communication services such as voice/video, whether networking of networks is required, etc.
Real-time communication: For data communication (such as e-mail and file transfer), there may not be a strict
limit on the delay with which a packet has to be received at the receiver. However, for real-time communication
such as fax, voice, or video over packet networks, the delay is an important parameter. For instance, once the
speech starts being played at the receiving end, all the packets should be received with constant delay;
otherwise there will be breaks in the speech. Special protocols need to be designed to handle real-time
communication.
Network management: To ensure that the computer networks operate as per the user requirements, to detect
faults, to analyze traffic on the network, to rectify problems if any, etc., we need to design protocols whereby the
networks can be managed well. Network management protocols take care of these issues.
These are just a few of the issues in computer communication. Throughout this book, we will discuss how these
issues are resolved to develop user-friendly computer networks.
15.5 NEED FOR LAYERED APPROACH AND PROTOCOL STANDARDS
A protocol can be defined as a set of conventions between two entities for communication. As discussed in the
previous section, many protocols are required in computer communication to tackle different issues. One way of
achieving computer communication is to write one monolithic piece of software implementing all the protocols. This
approach, not being modular, leads to many problems in debugging while developing the software and also in
maintenance. On the other hand, a "layered approach" leads to modularity of the software. In a layered
approach, each layer implements only some specific protocols. The layered approach has many advantages:
Every layer will perform well-defined, specific functions.
Due to changes in the standards or technology, if there are modifications in one layer's functionality or
implementation, the other layers are not affected and hence changes are easier to handle.
If necessary, a layer can be divided into sub-layers for handling different functions (as in the case of LAN).
If necessary, a layer can be eliminated or bypassed.
If the protocols for different layers are based on international standards, software or hardware can be
procured from different vendors. This multi-vendor approach has a major advantage: because of the
competition among the vendors, the prices will be competitive and, in the bargain, the end user will be
benefited.
Note: In local area networks, the datalink layer is divided into two sub-layers: the logical link control sub-layer and
the medium access control sub-layer. A LAN need not have a network layer at all. This type of flexibility
is provided by the layered approach to protocol development.
However, while deciding on the number of layers, the following points need to be kept in mind:
If the number of layers is high, there will be too much protocol overhead. Hence, the number of layers
should be optimized.
The interfaces between two adjacent layers should be minimal so that when a layer's software/hardware is
modified, the impact on the adjacent layers is minimal.
For each layer, as shown in Figure 15.5, there will be: (a) a service definition that specifies the functions of the
layer, (b) a protocol specification that specifies the precise syntax and semantics for interoperability, and (c) an
addressing or service access point to interface with the higher layer. These three form the specifications of the
protocol layer. International bodies such as the International Organization for Standardization (ISO) and the
Internet Engineering Task Force (IETF) standardize these specifications so that any equipment vendor or software
developer can develop networking hardware/software that will interoperate.
Figure 15.5: Specification of the protocol layer.
The layered approach to protocol development is a very important concept. Each layer does a specific job and
interfaces with the layers above and below it. This results in modular development of protocols for computer
communication.
To make two computers talk to each other, we run the layered software on both computers. As shown in Figure
15.6, each layer interacts with the layer above it and the layer below it. It also communicates with the peer layer
in the other machine. Each layer provides defined services to the layers above and below it.
Figure 15.6: Layer's services and protocols.
For computer communication, the layered approach has been well accepted, and the ISO/OSI protocol suite
and the TCP/IP protocol suite follow this approach. The ISO/OSI protocol suite is a seven-layer architecture,
whereas TCP/IP is a five-layer architecture. The beauty of the layered architecture will be evident when we
study the ISO/OSI protocol architecture in the next chapter.
The two important layered architectures for computer communication are: ISO/OSI architecture in which there
are 7 layers and TCP/IP architecture in which there are 5 layers.
Summary
In this chapter, we studied the important concepts of packet switching. Packet switching is fundamental to
computer networking. In packet switching, the data to be sent is divided into small packets, and each packet is
transmitted over the network. In a virtual circuit, the route to be taken by each packet is determined before the
data transfer takes place. In a virtual circuit, there will be three phases: call setup, data transfer, and call
disconnection. ATM, Frame Relay, and X.25-based networks are based on virtual circuits. In datagram service,
each packet is handled independently, and hence, there is no call setup and call disconnection. The Internet
uses datagram service.
We also studied the various issues involved in developing computer networks. Coverage area, services to be
provided to end users, computing platforms (hardware and operating systems), error control, flow control,
addressing, signaling, networking of networks, real-time communication, segmentation and reassembly,
congestion control, and network management are the main issues to be handled. To address all these issues,
protocols need to be developed for making computers talk to each other. We introduced the concept of the
layered approach to computer networking. Instead of handling all the issues together, each layer can handle
specific functions so that the software/hardware for making computers talk to each other can be developed in a
modular fashion.
References
50th Anniversary Commemorative Issue of IEEE Communications Magazine, May 2002. This issue
contains excellent articles on the history of communications as well as the papers that have influenced
communications technology developments during the past 50 years.
Larry L. Peterson and Bruce S. Davie. Computer Networks: A Systems Approach. Morgan Kaufmann
Publishers Inc., 2000. A systems approach rather than a layered approach to computer networking makes
this book very interesting.
http://www.acm.org The Web site of the Association for Computing Machinery (ACM). If you are a member
of ACM, you can access the online education portal, which gives excellent tutorials on different aspects of
computing, including computer networks.
http://www.computer.org The Web site of the IEEE Computer Society. If you are a member of the IEEE
Computer Society, you will have access to excellent online educational material on computer networking.
http://www.ieee.org The Web site of IEEE, the largest professional body of electrical and electronic
engineers. This site provides access to many standards developed by IEEE on computer networks.
Questions
1. What is packet switching? As compared to circuit switching, what are its advantages and disadvantages?
2. Explain virtual circuit service and datagram service. Compare the two services in terms of quality of service provided, reliability of service provided, and implementation complexity.
3. If you connect two PCs using a point-to-point link, what are the issues involved in providing various applications such as file transfer, chat, and e-mail?
4. When you connect three or more PCs in a network (as a local area network), list the additional issues involved for providing the applications in Question 3.
5. What is the fundamental concept in a layered approach to computer networking? Discuss the pros and cons of developing monolithic software for each application (for file transfer, e-mail, etc.).
Exercises
1. On the local area network installed in your department/organization, find out the address of the
computer on which you are working (it is called the IP address).
2. When you access a Web site through a browser, you get a message "Connecting to" followed by
the IP address of the Web server. What is the use of this address to you? Is it necessary that this
address be displayed at all? Discuss.
3. The Ethernet LAN uses a maximum packet size of 1526 bytes. An X.25 network uses a maximum
packet size of 1024 bytes. In both cases, the packet size is variable. Discuss the pros and cons of
having variable size packets as compared to fixed size packets.
4. ATM uses a fixed packet size of 53 bytes. This is a small packet as compared to Ethernet or X.25
packet sizes. Discuss the merits and disadvantages of having small fixed size packets.
Answers
1. Every computer is given an IP address. The IP address is generally assigned by your system administrator.
The screen that displays the IP address of a PC is shown in Figure C.8.
Figure C.8: IP address of a computer.
To obtain this screen, do the following:
Right click on Network Neighborhood icon and select the Properties option from the pop-up menu.
The following screen (Figure C.9) will appear:
Figure C.9: Selection of the network interface card.
In the screen seen in Figure C.9, select TCP/IP => Realtek RTL8029(AS) PCI Ethernet NIC and click
Properties. Note that you need to select the correct Ethernet network interface card installed on your
PC to obtain the properties. (Realtek is the name of the company that manufactures the Ethernet
cards.)
2. When you access a Web site through a browser, the Domain Name Service of your Internet service
provider gives the IP address of the origin server in which the resource is located. Then the TCP connection
is established between the client and the origin server. When you see the message Connecting to followed
by the IP address of the Web server, it is an indication that the DNS of your ISP has done its job. If the DNS
is down, you will not get this message and will not be able to access the URL. Note that the DNS may be
working, but still you may not be able to access the resource if the origin server is down.
3. The Ethernet LAN uses a maximum packet size of 1526 bytes. X.25 network uses a maximum packet size
of 1024 bytes. In both cases, the packet size is variable. Variable packet size leads to more processing by
the switches. Also, the switches should have variable-size buffers. However, if a large packet size is
negotiated, data transfer is fast, and protocol overhead is less. Fixed-size packets certainly can be switched
much faster. In Asynchronous Transfer Mode (ATM) networks, fixed-size packets are used.
4. ATM uses a fixed packet size of 53 bytes. This is a small packet compared to Ethernet or X.25 packet
sizes. The small fixed size allows fast packet switching and fixed-size buffers at the switches. The only
disadvantage is slightly higher overhead: out of the 53 bytes, 5 bytes are for header information.
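The overhead figure follows directly from the cell format, 5 header bytes out of every 53:

```python
# ATM cell overhead: 5 of the 53 bytes in every cell are header.
HEADER_BYTES, CELL_BYTES = 5, 53
overhead = HEADER_BYTES / CELL_BYTES
assert round(overhead * 100, 1) == 9.4   # roughly 9.4% of each cell
```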
Projects
1. Install Microsoft NetMeeting software on two systems connected over a LAN in your department/laboratory. Run the NetMeeting application and try the different services provided by this application (audio conferencing, video conferencing, whiteboard, etc.).
2. Search the Internet to find various sites that provide free networking software source code with which you can experiment. (You can try the open source sites that provide Linux-based networking software.)
Chapter 16: ISO/OSI Protocol Architecture
OVERVIEW
As we saw in the previous chapter, to make two computers talk to each other, protocols play an important role.
How do we tackle the big problem of taking care of so many issues involved in networking computers: the
speeds, formats, interfaces, applications, etc.? The famous riddle provides the solution: How do you eat an
elephant? One bite at a time!
When the problem is big, divide it into smaller problems and solve each problem. The OSI protocol architecture
does precisely that. A layered approach is followed whereby the problem is divided into seven layers. Each
layer does a specific job. This seven-layer architecture, developed by the International Organization for
Standardization (ISO), is the topic of this chapter.
16.1 OSI REFERENCE MODEL
The International Organization for Standardization (ISO) has developed the Open Systems Interconnection
(OSI) protocol architecture, which is a seven-layer architecture for computer communications. This standard is
specified in ISO 7498 and ITU-T X.200 recommendations.
The Open Systems Interconnection (OSI) protocol architecture developed by the International Organization for
Standardization (ISO) is a seven-layer architecture. This architecture is defined in ITU-T Recommendation
X.200.
During the late 1980s, a number of vendors started marketing software based on the ISO/OSI protocol suite.
However, by that time, there already was a large number of TCP/IP-based networks, and in the race between
ISO/OSI and TCP/IP, TCP/IP won without much effort.
The ISO/OSI model (also referred to as the ISO/OSI protocol suite) still is an important model to study because
it is considered as the reference model for computer communications. We can map any protocol suite on to the
ISO/OSI model for studying the functionality of the layers.
Note: In the battle between the ISO/OSI protocol architecture and the TCP/IP protocol architecture, TCP/IP won
mainly because of the large installed base of TCP/IP protocol software. Still, studying the
ISO/OSI protocol architecture is important because it is acknowledged as the reference model for
computer communication protocols.
The ISO/OSI protocol suite is shown in Figure 16.1. The seven layers are (from the bottom)
Physical layer
Datalink layer
Network layer
Transport layer
Session layer
Presentation layer
Application layer
Application Layer
Presentation Layer
Session Layer
Transport Layer
Network Layer
Datalink Layer
Physical Layer
Figure 16.1: ISO/OSI Protocol Suite.
Note: What is the rationale for seven layers? Why not six layers or eight layers? During the standardization process, there were two proposals: one with six layers, and the other with eight layers. To achieve consensus, the seven-layer architecture was finally standardized, just the average of six and eight. That is how standardization is done!
Each layer performs a specific set of functions. If we consider two systems, the protocol stack has to run on
each system to exchange useful information for a particular application. As shown in Figure 16.2, the two
application programs on the two end systems communicate with each other via the protocol suite. The application program on one end system hands its data to the topmost layer (the application layer), which adds its header information and passes it to the layer below (the presentation layer). Each layer adds its header and forwards the data to the layer below. Finally, the physical layer sends the data over the communication medium in the
form of bits. This bit stream is received by the other system, and each layer strips off the header and passes
the remaining data to the layer above it. The header information of one layer is interpreted by the corresponding
layer in the other system. This is known as peer-to-peer communication. For instance, the header added by the
transport layer is interpreted by the transport layer of the other system. So, though the two peer layers do not
communicate with each other directly, the header can be interpreted only by the peer layer. We will study the
functionality of each layer in the following sections.
Figure 16.2: Peer-to-peer communication in layered approach.
Each protocol layer adds a header and passes the packet to the layer below. Because the header is interpreted
only by the corresponding layer in the receiving system, the communication is called peer-to-peer
communication. Peer means a layer at the same level.
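The encapsulation and peer-to-peer interpretation described above can be sketched in a few lines of code. The layer names are from the OSI model, but the bracketed text headers are purely illustrative; real protocols use binary headers.

```python
def encapsulate(data: bytes, layers) -> bytes:
    """Wrap data with one header per layer; the bottom layer's header ends
    up outermost, since it is added last."""
    for layer in layers:                        # top layer first
        data = f"[{layer}]".encode() + data     # prepend this layer's header
    return data

def decapsulate(packet: bytes, layers) -> bytes:
    """Strip headers bottom-up, as the receiving stack does. Each header is
    interpreted only by the peer layer that added it."""
    for layer in reversed(layers):              # bottom layer strips first
        header = f"[{layer}]".encode()
        assert packet.startswith(header), f"peer mismatch at {layer}"
        packet = packet[len(header):]
    return packet

stack = ["application", "presentation", "session",
         "transport", "network", "datalink"]
wire = encapsulate(b"hello", stack)
assert decapsulate(wire, stack) == b"hello"
```

Note that each layer treats everything it receives from above as an opaque payload; it only prepends and later removes its own header.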
16.2 PHYSICAL LAYER
The physical layer specifies the physical interface between devices. This layer describes the mechanical,
electrical, functional, and procedural characteristics of the interface.
An example of the physical layer is Electronic Industries Association (EIA) RS232, which specifies the serial
communication interface. We connect a modem to the PC through the RS232 interface. The modem is referred
to as DCE (data circuit terminating equipment) and the PC as DTE (data terminal equipment). RS232
specifications are briefly described below to give an idea of the detail with which each layer in the protocol
architecture is specified.
Mechanical specifications of RS232 specify that a 25-pin connector should be used with details of pin
assignments. Pin 1 is for shield, pin 2 is for transmit data, pin 3 for receive data, pin 4 for request to send, pin 5
for clear to send, pin 6 for DCE ready, pin 7 for signal ground, and so on.
The physical layer specifies the mechanical, electrical, functional, and procedural characteristics of the
interface. RS232 is an example of physical layer specifications.
Electrical specifications of RS232 give the details of the voltage levels for transmitting binary one and zero, data rates, and so on. With respect to ground, a voltage more negative than −3 volts is interpreted as binary one, and a voltage more positive than +3 volts is interpreted as binary zero. The supported data rate is less than 20kbps for distances less than 15 meters. With good hardware design, higher data rates and longer distances can be supported; the standard gives only the minimum requirements for data rate and distance.
Functional specifications of RS232 give the details of the data signals, control signals, and timing signals and
how to carry out loop-back testing.
Procedural specifications of RS232 give the details of sequence of operations to be carried out for specific
applications. For instance, if a PC is connected to the modem using RS232, the procedure is as follows:
When modem (DCE) is ready, it gives the DCE ready signal.
When PC (DTE) is ready to send data, it gives the request to send signal.
DCE gives the clear to send signal indicating that data can be sent.
DTE sends the data on the transmit data line.
A detailed procedure is specified to establish the connection, transfer the data, and then to disconnect. The
procedural specifications also take care of the possible problems encountered during the operation such as
modem failure, PC failure, link failure, and so forth.
When you connect two PCs using an RS232 cable, you need to specify the communication parameters. These
parameters are the speed (300bps, 600bps, 19.2kbps, etc.), the number of data bits (seven or eight), the
number of stop bits (one or two), and the parity (even or odd). Only when the same parameters are set on both
the PCs can you establish the communication link and transfer the data.
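The effect of these parameters on throughput is easy to compute: each character on an asynchronous serial link is framed by one start bit, the data bits, an optional parity bit, and the stop bit(s). A small sketch (the function name is ours, not from any standard):

```python
def bytes_per_second(baud: int, data_bits: int = 8,
                     parity: bool = False, stop_bits: int = 1) -> float:
    """Effective character rate of an asynchronous serial link.

    Each character costs: 1 start bit + data bits + optional parity bit
    + stop bit(s) on the wire.
    """
    bits_per_char = 1 + data_bits + (1 if parity else 0) + stop_bits
    return baud / bits_per_char

# 9600 baud, 8 data bits, no parity, 1 stop bit ("8N1"):
# 10 bits per character, so 960 characters per second.
assert bytes_per_second(9600) == 960.0

# 9600 baud, 7 data bits, even parity, 2 stop bits: 11 bits per character.
assert round(bytes_per_second(9600, data_bits=7, parity=True, stop_bits=2), 1) == 872.7
```

This is why the framing overhead matters: an "8N1" link delivers only 80% of the raw baud rate as user data.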
Note: Every computer has an RS232 interface. The modem is connected to the computer through an RS232 interface. The computer is the DTE, and the modem is the DCE. The protocol described here is used for communication between the computer and the modem.
16.3 DATALINK LAYER
The physical layer's job is to push the bit stream through a defined interface such as RS232. Before pushing
the data, the link has to be activated, and it has to be maintained during the data transfer. After data transfer is
complete, the link has to be disconnected. It is possible that one computer may be sending the data very fast,
and the receiving computer may not be able to absorb the data at that rate because of its slow processing
power. In such a case, the receiving computer has to inform the sending computer to stop transmission for
some time and then resume it. This mechanism is known as flow control, which is the job of the datalink layer. The datalink layer's other function is to ensure error control; this may be in the form of sending acknowledgements or a CRC. So, the functions of the datalink layer are:
Activate the link, maintain the link, and deactivate the link
Error detection and control
Flow control
The datalink layer's job is to activate the link, maintain the link for data transfer and deactivate the link after the
data transfer is complete. Error detection and control, and flow control are also done by the datalink layer.
Some standard datalink layer protocols are high-level datalink control (HDLC), link access protocol balanced (LAPB) used in X.25 networks, and link access protocol D (LAPD) used in ISDN.
Note: In some communication protocols, error detection and flow control are handled by the transport layer.
16.3.1 High-Level Datalink Control (HDLC)
HDLC is the most widely used datalink layer protocol. Many other datalink layer protocols such as LAPB and
LAPD are derived from HDLC. We will briefly describe the HDLC protocol here.
Consider a simple case of two PCs that would like to communicate over an RS232 link. To exchange meaningful data, one of the PCs can take responsibility for controlling the operations, and the other can operate under its control. One system can act as a primary node and issue commands; the other acts as a secondary node and responds to the commands. Alternatively, both machines can issue commands and responses, in which case each node is called a combined node.
Depending on the type of node, the link can be of: (a) unbalanced configuration in which there will be one
primary node and one or more secondary nodes; or (b) balanced configuration in which there will be two
combined nodes.
The data transfer can be in one of the following three modes:
Normal response mode (NRM): This mode is used in unbalanced configuration. The primary node will
initiate the data transfer, but the secondary node can send data only on command from the primary node.
NRM is used for communication between a host computer and the terminals connected to it.
Asynchronous balanced mode (ABM): This mode is used with balanced configuration. A combined node
can initiate transmission. ABM is used extensively for point-to-point full-duplex communication.
Asynchronous response mode (ARM): This mode is used with unbalanced configuration. The primary node
will have the responsibility to initiate the link, error recovery, and logical disconnection, but the secondary
node may initiate data transmission without permission from the primary. ARM is rarely used.
The HDLC frame structure is shown in Figure 16.3. The details of the frame structure are as follows:
Flag (8 bits): The flag has the pattern 01111110 and indicates the beginning of the frame. The receiver continuously looks for this pattern to find the beginning of the frame. The frame also ends with the flag so that the receiver can detect the end of the frame as well. The actual data may contain the same bit pattern, in which case frame synchronization is lost. To overcome this, a procedure known as bit stuffing is used: whenever five consecutive ones appear in the data, a zero is inserted after them. At the receiver, after the starting flag is detected, if five ones are detected followed by a zero, that zero is removed. If six ones are detected followed by a zero, it is taken as the ending flag, and the frame is processed further. However, note that bit stuffing is not foolproof if there are bit errors.
Address (8 bits): This field identifies the secondary node that is to receive the frame. Though an address is not necessary for point-to-point connections, it is included for uniformity. Normally the address is 8 bits long, though longer addresses can be used. The address of all ones is the broadcast address.
Control (8 or 16 bits): The control field specifies the type of frame. Three types of frames are defined:
information frames which carry data of the HDLC user (layer above HDLC); supervisory frames, which provide
automatic repeat request (ARQ) information to request retransmission when data is corrupted; and unnumbered
frames, which provide link control functions. Information and supervisory frames contain 3-bit or 7-bit sequence
numbers.
Information (variable): This field contains the user information for information frames and unnumbered frames.
It should be in multiples of 8 bits.
CRC (16 or 32 bits): Excluding the flags, 16-bit or 32-bit CRC is calculated and kept in this field for error
detection.
Figure 16.3: HDLC frame format: (a) information frame format, (b) 8-bit control field format
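The bit-stuffing procedure described for the flag field can be sketched as follows; this is an illustrative model of the transmitter and receiver rules, not production HDLC code:

```python
FLAG = "01111110"

def stuff(bits: str) -> str:
    """Transmitter rule: insert a 0 after every run of five consecutive 1s,
    so the flag pattern can never appear inside the data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # the stuffed zero
            run = 0
    return "".join(out)

def unstuff(bits: str) -> str:
    """Receiver rule: drop the zero that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed zero; discard it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

data = "0111111001111101"
assert FLAG not in stuff(data)        # flag can no longer occur in the data
assert unstuff(stuff(data)) == data   # the receiver recovers the original
```

After stuffing, six consecutive ones can only occur in the flags themselves, which is what lets the receiver delimit frames.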
HDLC operation takes three steps: initialization, data transfer, and disconnection.
HDLC protocol is an example of datalink layer protocol. Many datalink layer protocols used in X.25, Frame
Relay, and Integrated Services Digital Network (ISDN) are derived from HDLC.
In the initialization phase, the node that wants to set up a link gives a command indicating the mode of
transmission (NRM, ABM, or ARM) and whether a 3-bit or 7-bit sequence number is to be used. The other node
responds with an unnumbered acknowledgement frame if it is ready to accept the connection; otherwise it sends a disconnected mode frame. If the initialization is accepted, a logical connection is established between the two nodes.
In the data transfer mode, data is exchanged between the two nodes using the information frames starting with
sequence number 0. To ensure proper flow control and error control, the N(S) and N(R) fields in the control field
are used. N(S) is the sequence number and N(R) is the acknowledgement for the information frames received
(refer to Figure 16.3b for the formats of the control field.)
Supervisory frames are also used for error control and flow control. The receive ready (RR) frame is used to
indicate the last information frame received. Hence, the sender knows that up to that frame, all frames have
been received intact. RR frame is used when there is no information frame containing the acknowledgement to
be sent. Receive-not-ready (RNR) frame is sent to acknowledge the receipt of frames up to a particular
sequence number and also to tell the sender to suspend further transmission. When the node that sent RNR is
ready again, it sends an RR frame.
The disconnect phase is initiated by one of the nodes sending a disconnect (DISC) frame. The other node responds with an unnumbered acknowledgement frame to complete the disconnection phase.
The HDLC protocol is the basis for a number of datalink layer protocols to be discussed in subsequent
chapters. Note that in the datalink layer, a number of bytes are put together and a new entity is formed, which is
referred to as a frame.
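As an aside, the 16-bit CRC carried by HDLC-family frames is commonly the CCITT polynomial computed in reflected form with an initial value of all ones and a final complement (the variant catalogued as CRC-16/X-25). A bit-by-bit sketch, assuming that variant:

```python
def fcs16(data: bytes) -> int:
    """16-bit HDLC frame check sequence (CRC-16/X-25): reflected CCITT
    polynomial 0x1021 (0x8408 reflected), initial value 0xFFFF, and a
    final ones-complement."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right one bit; XOR in the polynomial if a 1 fell off.
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

# Standard check value for this CRC variant over the ASCII digits 1-9.
assert fcs16(b"123456789") == 0x906E
```

Real implementations use a 256-entry lookup table instead of the inner bit loop, but the result is identical.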
16.4 NETWORK LAYER
The important function of the network layer is to relieve the higher layers of the need to know anything about
the underlying transmission and switching technologies. Network layer protocol provides for transfer of
information between the end systems. The functions of the network layer are:
Switching and routing of packets
Management of multiple datalinks
Negotiating with the network for priority and destination address
The functions of the network layer are switching and routing of packets. Additional functionality such as
management of multiple links and assigning priority to packets is also done at this layer.
16.5 TRANSPORT LAYER
The transport layer can provide two types of services: connection-oriented and connectionless. In connection-oriented service, a connection is established between the two end systems before the transfer of data. The
transport layer functionality is to ensure that data is received error-free, packets are received in sequence, and
that there is no duplication of packets. The transport layer also has to ensure that the required quality of service
is maintained. Quality of service can be specified in terms of bit error rate or delay. In connectionless service,
the packets are transported without any guarantee of their receipt at the other end.
The transport layer provides end-to-end reliable data transfer between two end systems. This layer ensures that
all packets are received without error and the packets are received in sequence.
The transport layer can provide either connection-oriented service or connectionless service. TCP provides
connection-oriented service and UDP provides connectionless service. Note that only TCP provides reliable
data transfer.
In TCP/IP networks, TCP is an example of a connection-oriented transport layer, and UDP is an example of a
connectionless transport layer.
Note: Quality of service parameters such as delay and throughput are important for some applications. For example, in voice communication, there should not be variation in the delay in receipt of packets. The transport layer provides the facility to specify quality of service parameters.
16.6 SESSION LAYER
The session layer specifies the mechanism for controlling the dialogue in the end systems. Session layer
functionality is as follows:
Dialogue discipline: whether the communication should be full duplex or half duplex.
Grouping: to group data into logical units.
Recovery: a mechanism for recovery in case of intermittent failures of the links.
The functionality of session layer is to provide recovery mechanism in case of intermittent failure of links and to
negotiate whether the communication should be full-duplex or half-duplex.
Because it has little work to do, many networks omit the session layer.
Note: In many practical networks, the functionality of the session layer is taken over by other layers, and the session layer is just a null layer.
16.7 PRESENTATION LAYER
In computer networks, computers may be running different operating systems having varying file systems and
file storage formats. For example, the file formats of Windows and Unix operating systems are different. Some
computers may be using ASCII for character representation, and some computers may be using EBCDIC.
Before exchanging the data for a file transfer, for instance, the two systems have to negotiate and agree on the
format to be used for exchanging the data. This is done at the presentation layer. To provide secure
communication, the data may need to be encrypted, which is also a part of this layer's job. Compression of the
data for faster data transfer is also done in this layer. Hence, the functions of the presentation layer are:
Provide for selection of data formats to be exchanged between applications
Encryption
Data compression
The functions of the presentation layer are negotiation of data formats, data encryption and data compression.
16.8 APPLICATION LAYER
The application layer provides management functions to support distributed applications such as e-mail, file
transfer, remote login, and the World Wide Web. With the widespread use of TCP/IP-based networks, applications using the ISO/OSI protocol suite are very limited. Some of the prominent application layer protocols are these:
FTAM (file transfer, access and management) for file transfer applications.
X.400 MHS (message handling systems) for electronic mail.
X.500 for directory services.
The ISO/OSI reference model is fundamental to computer communication because it serves as a reference
model for all protocol suites.
The application layer provides the functionality required for specific applications such as e-mail, file transfer and
remote login. A separate protocol is defined for each application. Examples are X.400 for e-mail, FTAM for file
transfer and X.500 for directory services.
Note: The application layer protocols of OSI are not widely used in practical networks. However, some application layer protocols used on the Internet are derived from the OSI application layer protocols. For example, the Lightweight Directory Access Protocol (LDAP) is derived from X.500.
Summary
The ISO/OSI protocol architecture is presented in this chapter. This layered model has seven layers: physical
layer, datalink layer, network layer, transport layer, session layer, presentation layer, and application layer.
Each layer has a specific functionality. The physical layer specifies the electrical, mechanical, functional, and
procedural characteristics for transmission of the bit stream. The datalink layer's job is to establish and maintain
the link and disconnect after the data transfer is complete. Error control and flow control are also done in this
layer. The network layer takes care of routing of the packets in the network. The transport layer provides an
end-to-end reliable data transfer. Even if packets are received out of sequence or with errors, the transport layer ensures that the packets are put back in sequence and received without error by informing the sender about erroneously received packets and obtaining them again. The job of
the session layer is to establish and maintain sessions. The presentation layer takes care of the presentation
formats, including compression and encryption. The application layer provides the interface to different
application programs such as e-mail, file transfer, remote login, and so on.
Two important concepts in a layered model are protocol encapsulation and peer-to-peer communication. A layer
sends its data in the form of a protocol data unit (PDU) to the layer below. The lower layer adds its own header
to the PDU (without making any changes to the PDU) and then sends it to the layer below. This mechanism is
known as protocol encapsulation. Finally when the data stream is received at the other end, the data passes
through the layers again. Each layer strips off the corresponding header and takes appropriate action based on
the information in the header. For instance, the header information of the transport layer is interpreted only by
the transport layer at the other end. Even though the transport layers on two machines do not talk to each other
directly, the header information is interpreted only by the peer layer (layer at the same level). This is known as
peer-to-peer communication.
Though there are hardly any networks that run the ISO/OSI protocol, a good understanding of this protocol is
important because it acts as a reference model for many protocol stacks.
References
Dreamtech Software Team. Programming for Embedded Systems. Wiley Dreamtech India Pvt. Ltd., 2002.
This book contains chapters on serial communication programming and protocol conversion software
development.
Joe Campbell. C Programmer's Guide to Serial Communications (Second Edition). Prentice Hall Inc., 1997.
This is an excellent book for developing a variety of applications and systems software using serial
communications. A must-read for getting good expertise in serial communication programming.
A.S. Tanenbaum. Computer Networks. Prentice Hall Inc., 1997.
W. Stallings. Data and Computer Communications (Fifth Edition). Prentice Hall Inc., 1999.
Questions
1. Explain the ISO/OSI protocol architecture.
2. Explain the serial communication protocol using the RS232 standard.
3. Explain the HDLC protocol.
4. List the application layer protocols of the OSI reference model.
Exercises
1. Standardization of various protocols by the international standards bodies is done through consensus: only when everyone accepts a proposal does it become a standard. For standardizing the ISO/OSI architecture, there were two proposals: one based on six layers and the other based on eight layers. Finally, seven layers were accepted (just the average of six and eight, for no other reason!). Develop six-layer and eight-layer architectures and study the pros and cons of each.
2. Interconnect two PCs running the Windows 9x/2000/XP operating system through RS232 and
experiment on the communication parameters.
3. Interconnect two PCs running the Linux operating system through RS232 and experiment on the
communication parameters.
Answers
1. A six-layer architecture for computer communication can be just the elimination of the session layer in the
ISO/OSI architecture. Session layer functionality is minimal and can be eliminated. An eight-layer
architecture can have an additional layer to provide security features that runs above the transport layer.
2. Windows 9x/2000/XP operating systems provide HyperTerminal, with which you can establish communication between two systems through an RS232 link.
3. In the Linux operating system, you have access to the complete source code for serial communication. You can interconnect a Linux system and a Windows system using RS232.
Projects
1. Interconnect two PCs using an RS232 cable. Write a program to transfer a file from one computer to the other through the RS232 link.
2. Interconnect two PCs using an RS232 cable and write a program for a chat application. Make the software modular to give a layered approach to software development.
3. If you have a processor-based board (such as 8051 or 8085/8086) in your microprocessor laboratory, interconnect this board with a PC using an RS232 link. You need to write the software on both the PC and the processor board to achieve communication.
Chapter 17: Local Area Networks
During the 1980s, PCs became ubiquitous in organizations. Every executive started having his or her own PC for automation of individual activities. The need arose for sharing information within the organization: sending messages, sharing files and databases, and so forth. Whether the organization is located in one building or spread over a large campus, the need for networking the computers cannot be overemphasized. Today, we hardly find a computer that is not networked. A local area network (LAN) interconnects computers over small distances, up to about 10 kilometers. In this chapter, we will study the various configurations, technologies, protocols, and standards of LANs.
17.1 THE ETHERNET LAN
In 1978, Xerox Corporation, Intel Corporation, and Digital Equipment Corporation standardized Ethernet. This
has become the most popular LAN standard. IEEE released a compatible standard, IEEE 802.3. A LAN is
represented in Figure 17.1.
Figure 17.1: Ethernet LAN.
The Ethernet local area network developed by Xerox, Intel, and DEC in 1978 became the most popular LAN
standard. IEEE released a compatible standard called IEEE 802.3.
The cable, called the Ether, is a coaxial cable about half an inch in diameter and up to 500 meters long. A resistor is added between the center wire and the shield at each end of the cable to avoid reflection of the electrical signal. The
cable is connected to a transceiver (transmitter and receiver) whose job is to transmit the signals onto the Ether
and also to sense the Ether for the presence of the signals. The transceiver is connected through a transceiver
cable (also called attachment unit interface cable) to a network card (also known as a network adapter card).
The network card is a PC add-on card that is plugged into the motherboard of the computer. The coaxial cable
used in Ethernet LAN is shown in Figure 17.2.
Figure 17.2: Coaxial cable used in an Ethernet LAN.
This architecture is known as bus architecture because all the nodes share the same communication channel.
Ethernet operates at 10Mbps data rate. When a node has data to transmit, it divides the data into packets, and
each packet is broadcast on the channel. Each node will receive the packet, and if the packet is meant for it
(based on the destination address in the packet), the packet will be processed. All other nodes will discard the
packet.
17.1.1 LAN Protocol Layers
The protocol architecture of a LAN is shown in Figure 17.3. Every node on the LAN will have hardware/software corresponding to all these layers. The bottom two protocol layers are
Physical layer
Datalink layer, which is divided into
Medium access control (MAC) sublayer
Logical link control (LLC) sublayer
Figure 17.3: LAN protocol layers.
In the LAN protocol architecture, the bottom two layers, the physical layer and the datalink layer, are defined. The datalink layer is divided into two sublayers: the medium access control sublayer and the logical link control sublayer.
Above the datalink layer, the TCP/IP protocol suite is run to provide various applications on the LAN. The
network card and the associated software in the PC provide the first two layers' functionality, and the TCP/IP
stack is integrated with every desktop operating system (Windows, Unix/Linux, etc.). While studying LANs, we
will focus only on these two layers. All the standards for LANs, which we will study in subsequent sections, will
also address only these two layers.
Physical layer: This layer specifies the encoding/decoding of signals, preamble generation and removal for synchronization, and bit transmission and reception.
MAC sublayer: The MAC sublayer governs the access to the LAN's transmission medium. A special protocol is
required because a number of nodes share the same medium. This protocol is known as the medium access
control protocol. In Ethernet, the medium access protocol is carrier sense multiple access/collision detection (CSMA/CD).
LLC sublayer: Above the MAC sublayer, the LLC layer runs. The data received from the higher layer (IP layer)
is assembled into frames, and error detection and address fields are added at the transmitter and sent to the
MAC layer. The MAC layer, when it gets a chance to send its data using CSMA/CD protocol, sends the frame
via the physical layer. At the receiver, frames are disassembled, the address is recognized, and error detection
is carried out. The LLC layer provides interface to the higher layers.
The MAC sublayer defines the protocol by which multiple nodes share the medium. In Ethernet, the MAC protocol is carrier sense multiple access/collision detection (CSMA/CD).
17.1.2 CSMA/CD Protocol
In Ethernet LANs, all the nodes share the same medium (Ether). The protocol used to share the medium is
known as carrier sense multiple access with collision detection (CSMA/CD).
When a node has to send a packet, it will broadcast it on the medium. All the nodes on the LAN will receive the
packet and check the destination address in the packet. The node whose address matches the destination
address in the packet will accept the packet. To ensure that more than one node does not broadcast its packet
on the Ether, just before transmitting a packet, each node has to first monitor the Ether to determine if any
signal is present; in other words, sense the carrier. However, there is still a problem. Take the case of two nodes (A and C) on the LAN in Figure 17.1. If node A sends a packet, it takes a finite time for the signal to reach node C. Meanwhile, node C senses the carrier, finds no activity on the Ether, and also sends a packet. The two packets collide on the medium, garbling the data. To reduce such collisions, each node waits for a random amount of time after a collision before retransmitting. A binary exponential back-off policy is used: a node waits a random amount of time after the first collision, chooses from twice that range after a second collision, four times that range after a third collision, and so on. This minimizes the probability of collisions, but collisions cannot be eliminated altogether. That is why, even though data can be transmitted on the Ethernet at the rate of 10Mbps, the effective data rate or throughput is much lower.
Note: When packets collide on the medium, each node has to follow the binary exponential back-off policy, wherein a node waits a random amount of time after the first collision, up to twice that time after a second collision, and so on.
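Assuming the standard Ethernet parameters (a 51.2 µs slot time at 10Mbps, randomization truncated after 10 doublings, and a give-up limit of 16 attempts; the chapter does not state these numbers), the back-off can be sketched as:

```python
import random

SLOT_TIME_US = 51.2   # one slot time on 10Mbps Ethernet

def backoff_slots(collisions: int) -> int:
    """Truncated binary exponential back-off: after the n-th successive
    collision, wait a random number of slot times in [0, 2**min(n, 10) - 1]."""
    if collisions > 16:
        raise RuntimeError("excessive collisions: the frame is dropped")
    return random.randrange(2 ** min(collisions, 10))

def backoff_delay_us(collisions: int) -> float:
    """Back-off delay in microseconds for the given collision count."""
    return backoff_slots(collisions) * SLOT_TIME_US

# The range of possible waits doubles after each successive collision.
assert all(backoff_slots(1) in range(2) for _ in range(100))
assert all(backoff_slots(3) in range(8) for _ in range(100))
assert backoff_delay_us(1) in (0.0, 51.2)
```

Doubling the randomization range after each collision spreads the retransmissions out under load, which is exactly why throughput degrades gracefully rather than collapsing.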
17.1.3 Ethernet Addresses
To identify each node (computer) attached to the Ethernet, each node is given a 48-bit address. This address is
also known as the hardware address or physical address because the network card on the computer carries
this address, which is given by the hardware manufacturer. Therefore, each computer is uniquely identified.
The Ethernet address can be of three types:
1. Unicast address
2. Broadcast address
3. Multicast address
When a node sends a packet with a unicast address, the packet is meant for only one node specified by that
address. When a node sends a packet with a broadcast address (all ones), the packet is meant for all the
nodes on the network. A set of nodes can be grouped together and given a multicast address. When a node
sends a packet with a multicast address, the packet is meant for all the nodes in that group. Generally, network
cards accept the unicast address and broadcast address. They can also be programmed to accept multicast
addresses.
Each computer on an Ethernet LAN is identified by a 48-bit address. This address is contained in the network
card on the computer.
Note: An Ethernet address can be of three types: unicast, broadcast, and multicast. Generally, network cards accept unicast addresses and broadcast addresses. They need to be specially programmed to accept multicast addresses.
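Distinguishing the three address types is mechanical. By IEEE 802 convention (not spelled out in the chapter), the broadcast address is all ones, and any other address whose first octet has its least-significant bit set is a group (multicast) address:

```python
def address_type(mac: str) -> str:
    """Classify a 48-bit Ethernet address given as colon-separated hex.

    The broadcast address is all ones; any other address whose first octet
    has its least-significant bit set is a multicast (group) address; the
    rest are unicast (individual) addresses.
    """
    octets = bytes(int(part, 16) for part in mac.split(":"))
    if octets == b"\xff" * 6:
        return "broadcast"
    if octets[0] & 0x01:
        return "multicast"
    return "unicast"

assert address_type("ff:ff:ff:ff:ff:ff") == "broadcast"
assert address_type("01:00:5e:00:00:01") == "multicast"
assert address_type("00:1a:2b:3c:4d:5e") == "unicast"
```

This first-octet test is what a network card's receive filter effectively performs in hardware when deciding whether to accept a frame.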
17.1.4 Ethernet Frame Format
The Ethernet frame format is shown in Figure 17.4.
Figure 17.4: Ethernet frame format.
Preamble (8 bytes): Marks the beginning of the frame.
Destination address (6 bytes): Destination address of the packet.
Source address (6 bytes): Source address of the packet.
Frame type (2 bytes): Type of the frame.
Frame data (variable): User data, which can vary from 46 bytes to 1500 bytes (the minimum keeps the total frame, excluding the preamble, at least 64 bytes).
CRC (4 bytes): 32-bit CRC computed for each frame.
The maximum allowed size of an Ethernet frame is 1526 bytes, of which 26 bytes are used for the preamble,
the header, and the CRC. Hence, 1500 bytes is the maximum allowed user data in an Ethernet frame. This is
the frame format of the Ethernet developed by Digital, Intel, and Xerox.
Above the LLC, the IP layer runs, and each node on the LAN is assigned an IP address. Above the IP layer, the
TCP layer and other application layer protocols run to provide applications such as e-mail, file transfer, and so
on.
Ethernet LANs have become widely popular. Initially, 10Mbps Ethernet LANs were used, and now 100Mbps
Ethernet LANs are common.
17.2 LAN TRANSMISSION MEDIA
For a LAN, the transmission medium can be twisted copper cable, coaxial cable, optical fiber, and radio. The
topology, data rates, and medium access protocols will differ for the different media.
LANs can be broadly categorized as baseband LANs and broadband LANs. Broadband LANs can span larger
distances, up to tens of kilometers. A broadband LAN uses frequency division multiplexing, in which multiple
channels are used for data, voice, and video. RF modems are required for communication. The channels operate
in unidirectional mode because it is difficult to design amplifiers that pass signals of one frequency in both
directions. To achieve full connectivity, two data paths are required: one frequency to transmit and one to
receive.
LANs can be broadly classified as baseband LANs and broadband LANs. In a baseband LAN, the baseband
signals are transmitted over the medium. In broadband LANs, the signals are multiplexed using frequency
division multiplexing.
17.3 LAN TOPOLOGIES
A LAN can operate in the following topologies: bus, tree, ring, and star, as shown in Figure 17.5.
Figure 17.5: LAN topologies.
Bus topology: In bus topology, the transmission medium has terminating resistance at both ends. Data
transmitted from a node will flow in both directions. Terminating resistance absorbs the data, removing it from
the bus when it reaches the end points. When a node has data to transmit, it broadcasts it over the medium,
and all the other nodes receive it.
In baseband LANs, binary data is inserted on the cable directly, and the entire bandwidth is consumed by the
signal. The signal propagates in both directions. Because of attenuation of the signal, only limited distances (a
few hundred meters per segment) are possible.
In an IEEE 802.3 LAN, 50-ohm cable of 0.4 inch diameter is used. Maximum cable length is 500 meters per
segment. This standard is called 10 BASE 5 to indicate 10Mbps data rate, baseband, and 500 meter cable
length.
For low-cost PC LANs (called cheapernet), 10 BASE 2 standard is followed.
The differences between 10 BASE 5 and 10 BASE 2 LANs are listed below.
                         10 BASE 5      10 BASE 2
Data rate                10Mbps         10Mbps
Maximum segment length   500 meters     185 meters
Network span             2500 meters    1000 meters
Nodes per segment        100            30
Node spacing             2.5 meters     0.5 meter
Cable diameter           0.4 inch       0.25 inch
Tree topology: The tree topology is shown in Figure 17.5(b). The tree is a general form of bus, or the bus is a
special case of tree. Data transmitted from a node is received by all the other nodes. There will be terminating
resistances at each of the segments.
Ring topology: The ring topology is shown in Figure 17.5(c). In a ring topology, the medium is in the form of a
ring with repeaters that connect to the nodes. In other words, repeaters are joined by the point-to-point links in a
closed loop. A repeater receives the data from one link and transmits it on the other link. Each frame travels
from the source through all other nodes back to the source where it is removed.
Note: In ring topology, if the transmission medium fails (for instance, due to a cut in the cable), the network
is down. To avoid such situations, dual-ring LANs are used, in which two cables are used to connect
the nodes.
The repeater inserts data into the medium sequentially, bit by bit. Each repeater regenerates the data and
retransmits each bit. The data packet contains the header with the destination address field. Packet removal is
done either by the specified repeater or the source, after the ring is traversed.
The various LAN topologies are bus, tree, ring, and star. Choice of a particular topology is dependent on factors
such as reliability, data rate, cost, and medium of transmission.
The repeater can be in one of the following three states:
Listen state: Scan the data for the address, for permission to retransmit; pass the data to the attached station.
Transmit state: Retransmit the data to the next repeater and modify a bit to indicate that the packet has been
copied (serves as acknowledgement).
Bypass state: Data is passed through without delay; the repeater is effectively removed from the ring.
Baseband coaxial cable, twisted pair, or fiber can be used for repeater-to-repeater links.
To recover data from a repeater and to transmit data to the next repeater, precise timing is required. The timing
is recovered from the data signals. Deviation of timing recovery is called timing jitter. Timing jitter places a
limitation on the number of repeaters in a ring.
Ring topology has the following problems:
Failure of a repeater results in the breakdown of the network.
New repeater installation is cumbersome.
Timing jitter problems have to be solved (extra hardware is required).
Star topology: In star topology (Figure 17.5(d)), there will be a central hub to which all the nodes are
connected. The central node can operate in broadcast mode or as a switch. In star networks, addition and
deletion of nodes is very easy; however, if the central hub fails, the entire network is down. The hub acts as a
repeater: data transmitted by a station is received by all stations. Hence, logically, this topology is equivalent to
a bus.
17.4 MEDIUM ACCESS CONTROL PROTOCOLS IN LANS
The MAC sublayer differs according to the topology of the LAN. In general, how do different nodes access the
medium? There are two possibilities: centralized control, in which a control station grants permission to different
nodes, and decentralized control, in which the stations collectively perform the medium access control function.
In decentralized networks, there are three categories: round robin, reservation, and contention.
Round robin: Each station is given an opportunity to transmit for a fixed time. Control of the sequence of the
nodes may be centralized or distributed. This mechanism is useful when all the stations have data to transmit.
Reservation: Each node is given a time slot (as in TDMA). Reservation of the slots may be centralized or
decentralized.
Contention: This is a distributed control mechanism wherein all stations contend for the medium. When two or
more stations contend for the medium simultaneously, it may result in collision. CSMA/CD, which was described
earlier, is an example of this protocol.
The CSMA/CD protocol is used in bus/tree and star topologies.
The CSMA/CD protocol differs based on whether the LAN is broadband or baseband. In baseband LANs,
carrier sensing is done by detecting voltage pulse train. In broadband LANs, carrier sense is done by detecting
RF carrier. For collision detection, in baseband LANs, if the signal on the cable at the transmitter tap point
exceeds a threshold, collision is detected (because collision produces voltage swings). Because of the
attenuation in the cable, a maximum length of the segment is specified in the standard, mainly to ensure
collision detection using the threshold. In broadband LANs, head-end can detect the garbled data, or a station
can do bit-by-bit comparison between transmitted and received data.
Because multiple nodes have to share the medium, the choice of the MAC protocol is very important in LANs.
Though CSMA/CD is the most popular MAC protocol, variations of this protocol have been developed to
increase the throughput.
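When a collision is detected in CSMA/CD, the colliding stations back off for a random time before retrying. The sketch below illustrates the truncated binary exponential backoff used by Ethernet (the function name and structure are ours; the slot time shown is the 512-bit-time value of 10Mbps Ethernet):

```python
import random

SLOT_TIME_US = 51.2   # one slot time for 10Mbps Ethernet (512 bit times)

def backoff_delay(attempt: int) -> float:
    """Truncated binary exponential backoff after a collision.

    After the n-th collision, a station waits k slot times, with k drawn
    uniformly from 0 .. 2**min(n, 10) - 1. After 16 failed attempts the
    frame is dropped.
    """
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = random.randrange(2 ** min(attempt, 10))
    return k * SLOT_TIME_US

random.seed(1)
for attempt in (1, 2, 3):
    print(f"collision {attempt}: wait {backoff_delay(attempt):.1f} us")
```

The doubling contention window spreads retries of many colliding stations over time, which is why CSMA/CD degrades gracefully as load increases.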
17.5 LAN STANDARDS
There is a wide variety of LANs: different topologies, different transmission media, different data rates, and so
forth. The Institute of Electrical and Electronics Engineers (IEEE) set up a committee known as the 802
committee to develop various LAN standards. These standards together are known as IEEE 802 standards.
These standards address only the physical and datalink layers of LANs. They specify the protocols to be used
in MAC and LLC sublayers, the physical layer specifications, and the physical medium to be used.
The IEEE 802 committee formulated various LAN standards. These standards address only the physical and
datalink layers of LANs. Note that the standards also specify the physical medium to be used in different LANs.
17.5.1 IEEE 802.2 Standard
IEEE 802.2 standard specifies the LLC sublayer, which provides the following services:
Unacknowledged connectionless service (type 1 service): This is a datagram service. There is no flow
control and no error control; higher layers have to take care of these issues.
Connection-mode service (type 2 service): A logical connection is set up, and flow control and error
control are provided.
Acknowledged connectionless service (type 3 service): This is a datagram service, but with
acknowledgements.
The IEEE 802.2 standard specifies the LLC sublayer specifications. This sublayer specification is common to all
IEEE standards-based LANs.
17.5.2 IEEE 802.3 Standard
Based on the popularity of Ethernet, IEEE released a compatible LAN standard that is specified in IEEE 802.3.
LANs based on the 802.3 standard have the following characteristics:
Topology: Bus, tree, or star
MAC sublayer: CSMA/CD
Physical layer can be one of the following:
Baseband coaxial cable operating at 10Mbps
Unshielded twisted pair operating at 10Mbps or 100Mbps
Shielded twisted pair operating at 100Mbps
Broadband coaxial cable operating at 10Mbps
Optical fiber operating at 10Mbps
IEEE 802.3 operating at 10Mbps has six alternatives:
10 BASE 5: 10Mbps baseband 500 meter segment length
10 BASE 2: 10Mbps baseband, 185 meter segment length (the "2" denotes roughly 200 meters)
10 BASE T: 10Mbps baseband, twisted pair
10 BROAD 36: 10Mbps broadband, 3600 meter end-to-end span (1800 meter segment)
10 BASE F: 10Mbps baseband, fiber
1 BASE T: 1Mbps baseband, twisted pair (now obsolete)
In addition, IEEE 802.3 specifies 100Mbps LAN (fast Ethernet), known as 100 BASE T.
The format of the MAC frame in IEEE 802.3 standard is slightly different from that of the Ethernet frame. The
IEEE 802.3 MAC frame format is shown in Figure 17.6.
Figure 17.6: Frame for IEEE 802.3 standard.
Preamble (7 bytes): The bit pattern 10101010, repeated seven times, is sent for the receiver to establish synchronization.
SFD (1 byte): Start frame delimiter 10101011 to indicate the actual start of the frame. This enables the receiver
to locate the first bit of the rest of the frame.
The IEEE 802.3 standard is based on the popular Ethernet LAN. The MAC frame formats of Ethernet and IEEE
802.3 are slightly different.
DA (2 or 6 bytes): Destination address. 48 bits or 16 bits (must be the same for a particular LAN). It can be a
node address, group address, or global address.
SA (2 or 6 bytes): Source addressaddress of the node that sent the frame.
Length (2 bytes): Length of the LLC data field.
LLC data (variable): Data from the LLC layer.
Pad (variable): Bytes added to ensure that frame is long enough for proper operation of the collision detection
scheme.
FCS (4 bytes): Frame check sequence is calculated based on all the bits except the preamble, SFD, and FCS
(32 bits).
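The field layout above can be made concrete with a small sketch that assembles an IEEE 802.3 frame, pads the data to the minimum length required for collision detection, and appends a CRC-32 FCS. This is illustrative only (the constants and byte order shown are ours); zlib's CRC-32 uses the same polynomial as the Ethernet FCS:

```python
import struct
import zlib

PREAMBLE = b"\xaa" * 7          # 7 bytes of 10101010
SFD      = b"\xab"              # start frame delimiter 10101011
MIN_DATA = 46                   # pads the MAC frame to the 64-byte minimum

def build_frame(dst: bytes, src: bytes, llc_data: bytes) -> bytes:
    """Assemble an IEEE 802.3 MAC frame (illustrative sketch, 6-byte addresses)."""
    pad = b"\x00" * max(0, MIN_DATA - len(llc_data))
    length = struct.pack("!H", len(llc_data))       # length of the LLC data field
    body = dst + src + length + llc_data + pad
    fcs = struct.pack("<I", zlib.crc32(body))       # CRC-32 over DA .. pad
    return PREAMBLE + SFD + body + fcs

frame = build_frame(b"\xff" * 6, b"\x00\x1a\x2b\x3c\x4d\x5e", b"hello")
print(len(frame))  # 8 (preamble + SFD) + 64 (minimum MAC frame) = 72
```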
17.5.3 IEEE 802.4 Standard
IEEE 802.4 standard-based LANs have the following characteristics:
Topology: Bus, tree, or star
MAC sublayer: Token bus
The physical layer can be one of the following:
Broadband coaxial cable at 1, 5 or 10Mbps
Carrier band coaxial cable at 1, 5 or 10Mbps
Optical fiber at 5, 10, or 20 Mbps
17.5.4 IEEE 802.5 Standard
IEEE 802.5-based LANs have the following characteristics:
Topology: Ring
MAC protocol: Token Ring
The physical layer can be one of the following:
Shielded twisted pair at 4 or 16Mbps, maximum number of repeaters is 250
Unshielded twisted pair at 4Mbps, maximum number of repeaters is 72
Because the topology of this LAN is ring, Token Ring protocol is used for MAC.
IEEE 802.5 MAC protocol: A small frame called a token circulates when all the nodes are idle. The node
wishing to transmit seizes the token by changing one bit in the token and transforming it into a start-of-frame
sequence for a data frame. The node appends the data to construct the data frame. Since there is no token on
the ring, all other nodes only listen. The data frame transmitted by the node makes a round trip and returns to
the originating node. The node will insert a new token on the ring when it has completed transmission of the
frame or the leading edge of the transmitted frame has returned to the node.
Advantages of this MAC protocol are that it provides flexible control of access to the medium and is efficient
under heavy load conditions. The disadvantages are that maintenance of the token is a problem: if the token is
lost, the ring does not operate, so one node acts as a monitor. The protocol is also inefficient under light load
conditions. The following improvements can be made to the Token Ring protocol:
Token Ring priority: Optional priority and reservation fields are included in the data frame and the token (three
bits, and hence eight levels). A node can transmit if its priority is higher than that of the received token (set in
the previous data frame). To prevent one or more nodes from having the highest priority all the time, a node that
raises the priority of a token must subsequently lower it.
Early token release: For efficient ring utilization, a transmitting node can release a token as soon as it
completes frame transmission, even if the frame header has not returned to the node.
An IEEE 802.5 LAN is based on ring topology. Hence, the MAC protocol is the Token Ring protocol. A small
frame called a token circulates around the ring when the nodes are idle. A node wishing to transmit will seize
the token and transmit its data. Since there is no token on the ring, no other node can transmit.
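The seize-and-release behavior can be sketched as a simple simulation of one token circulation (purely illustrative; the function and its interface are ours):

```python
def token_ring_round(nodes_with_data, n_nodes, start=0):
    """One circulation of the token around an n-node ring (simplified sketch).

    The token visits nodes in ring order; a node holding data seizes the
    token, 'transmits' its frame, then reinserts the token for the next
    node. Returns the order in which nodes transmitted.
    """
    order = []
    pending = set(nodes_with_data)
    for i in range(n_nodes):
        node = (start + i) % n_nodes
        if node in pending:         # seize token, send frame, release token
            order.append(node)
            pending.discard(node)
    return order

print(token_ring_round({3, 1, 4}, n_nodes=6))  # [1, 3, 4]
```

Each node gets exactly one transmission opportunity per circulation, which is why the protocol is fair and efficient under heavy load.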
17.5.5 IEEE 802.12 Standard
IEEE 802.12-based LANs have the following characteristics:
Topology: Star
MAC: Demand priority (round-robin with priority levels)
Physical layer: Unshielded twisted pair operating at 100Mbps
17.5.6 FDDI LAN
FDDI (fiber distributed data interface)-based LANs have the following characteristics:
Topology: Dual ring
MAC Protocol: Token Ring
Physical layer can be one of the following:
Optical fiber operating at 100Mbps, maximum number of repeaters is 100, and the maximum distance
between repeaters is 2 km.
Unshielded twisted pair at 100Mbps, maximum number of repeaters is 100, and maximum distance
between repeaters is 100 meters.
In a fiber distributed data interface (FDDI) LAN, the MAC protocol is similar to that of IEEE 802.5 except that
early token release strategy is followed.
FDDI MAC protocol: It has the same functionality as the IEEE 802.5 MAC protocol except that in 802.5, a bit in
the token is flipped to convert it into a data frame, whereas in FDDI, once a token is recognized, it is removed
from the ring and the data frame is then transmitted. This is done to support the higher data rate. FDDI also
follows early token release: the token is released after transmitting the data frame, without waiting to receive the
leading bit of the data frame.
17.6 LAN BRIDGE
A bridge is used to interconnect two LANs. If both LANs use the same set of protocols, the bridge need not do
any protocol conversion. However, if the two LANs run different protocols, the bridge needs to do the necessary
protocol conversion. Figure 17.7 shows the protocol conversion required. One LAN is based on IEEE 802.3
standard running CSMA/CD protocol. The other LAN is based on the IEEE 802.4 standard running the token bus
protocol. The bridge has to run both the stacks as shown in the figure. It takes a packet from the 802.3 LAN
and obtains the LLC frame. The LLC frame is then given to the 802.4 MAC protocol for transfer over the
physical medium.
Figure 17.7: LAN bridge.
A bridge interconnects two LANs. If the protocols used by the two LANs are different, the bridge will carry out
the necessary protocol conversion.
All the LANs mentioned in this section use guided media (twisted pair, coaxial cable, or optical fiber) as the
transmission medium. Another set of IEEE standards is available for wireless LANs (WLANs). WLANs have
become extremely popular in recent years. These WLAN standards are discussed in the next section.
17.7 WIRELESS LANS
Experiments on WLANs were conducted more than three decades ago. Those WLANs used a protocol known
as ALOHA for medium access. In ALOHA, any station can transmit at any time. If a collision is detected (every
station is always in receive mode), the transmitting station waits for a random time and retransmits. This is an
inefficient access protocol: because there will be a lot of collisions, the maximum channel utilization is only
about 18%. Variations of ALOHA such as slotted ALOHA and reservation ALOHA were developed, and these
protocols are the basis for CSMA/CD LANs.
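The 18% figure comes from the classical throughput formula for pure ALOHA, S = G·e^(-2G), where G is the offered load in frames per frame time; the maximum, 1/(2e) ≈ 0.184, occurs at G = 0.5. Slotted ALOHA halves the vulnerable period, doubling the peak. A quick check:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """Throughput S of pure ALOHA at offered load G (frames per frame time)."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """Slotted ALOHA halves the vulnerable period, doubling peak throughput."""
    return G * math.exp(-G)

# Peak utilization occurs at G = 0.5 (pure) and G = 1 (slotted).
print(f"pure ALOHA peak:    {pure_aloha_throughput(0.5):.3f}")    # 0.184
print(f"slotted ALOHA peak: {slotted_aloha_throughput(1.0):.3f}") # 0.368
```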
As wireless technologies matured and the need for mobile computing increased tremendously, WLANs gained
popularity. The IEEE 802.11 family of standards specifies the WLAN standards for different applications. These
WLANs can be used in offices for networking of different devices such as desktop, laptop, palmtop, printer, fax
machine, and so forth. WLANs also can be used at home to network home appliances.
The two configurations in which a WLAN can work are shown in Figure 17.8.
Figure 17.8: WLAN configurations.
In the configuration shown in Figure 17.8(a), a WLAN node (for example, a laptop with a radio and antenna)
can communicate with another node via an access point. A WLAN can contain a number of access points. In
the configuration shown in Figure 17.8(b), two nodes can communicate directly with each other without the
need for a central relay. Such a configuration is known as an ad hoc network. Two or more devices can form a
network when they come near one another, without the need for centralized control. Ad hoc networks are
very useful for data synchronization. For instance, when a mobile phone comes into the vicinity of a PDA, the
two can form a network, and the address book on the PDA can be transferred to the mobile phone.
Wireless LANs operate in two configurations. In one configuration, a node will communicate with another node
via an access point. In the other configuration, two nodes can communicate directly without any centralized
control.
WLANs based on IEEE 802.11 family standards can be used in offices and homes as well as public places
such as airports, hotels, and coffee shops.
IEEE developed the IEEE 802.11 family standards to cater to different industry segment requirements. All the
standards in this family use radio as the transmission medium.
17.7.1 IEEE 802.11 Family Standards
The IEEE 802.11 family standards cover the physical and MAC layers of wireless LANs. The LLC layer is the
same as discussed earlier. The architecture of the IEEE 802.11 standard for WLANs is shown in Figure 17.9.
Each wireless LAN node has a radio and an antenna. All the nodes running the same MAC protocol and
competing to access the same medium will form a basic service set (BSS). This BSS can interface to a
backbone LAN through an access point (AP). The backbone LAN can be a wired LAN such as Ethernet LAN.
Two or more BSSs can be interconnected through the backbone LAN.
Figure 17.9: IEEE 802.11 wireless LAN.
Note: In a wireless LAN, there will be a number of access points. These access points are interconnected
through a backbone network. The backbone network can be a high-speed network based on
Asynchronous Transfer Mode (ATM) protocols.
The physical medium specifications for 802.11 WLANs are:
Diffused infrared with a wavelength between 850 and 950 nm. The data rate supported using this medium
is 1Mbps. A 2Mbps data rate is optional.
Direct sequence spread spectrum operating in 2.4GHz ISM band. Up to seven channels each with a data
rate of 1Mbps or 2Mbps can be used.
Frequency hopping spread spectrum operating at 2.4GHz ISM band with 1Mbps data rate. A 2Mbps data
rate is optional.
Extensions to IEEE 802.11 have been developed to support higher data rates. The 802.11b standard supports
data rates up to 11Mbps at 2.4GHz, with a range of about 100 meters. Another extension,
802.11a, operates in the 5GHz frequency band and can support data rates up to 54Mbps with a range of 100
meters.
The three physical medium specifications in IEEE 802.11 wireless LANs are: (a) diffused infrared; (b) direct
sequence spread spectrum in 2.4GHz ISM band; and (c) frequency hopping spread spectrum operating in
2.4GHz ISM band.
Note: The Industrial, Scientific, and Medical (ISM) band is a license-free band. No government approvals are
required to install radio systems operating in this band.
17.7.2 Medium Access Control
The MAC protocol used in 802.11 is called CSMA/CA (carrier sense multiple access with collision avoidance).
Before transmitting, a station senses the radio medium and, if the channel is free for a period longer than a
predefined value (known as the distributed inter-frame space or DIFS), the station transmits immediately. If the
channel is busy, the node keeps sensing the channel. If it is free for a period of DIFS, it waits for another period
called the random backoff interval and then transmits its frame. When the destination receives the frame, it has
to send an acknowledgment (ACK). To send the ACK, the destination will sense the medium. If it is free for a
predefined short time (known as the short inter-frame space or SIFS), the ACK is sent. If the ACK does not
reach the station, the frame has to be retransmitted using the same procedure. A maximum of seven
retransmissions is allowed, after which the frame is discarded. Figure 17.10 depicts the CSMA/CA mechanism.
Figure 17.10: Medium access in 802.11 LAN.
The MAC protocol used in IEEE 802.11 is carrier sense multiple access/collision avoidance (CSMA/CA). A
node wishing to transmit senses the channel and, if the channel is free for more than a predefined period, it will
transmit its data. If the channel is busy, the node will wait for an additional period called the backoff interval.
17.7.3 WiFi
The IEEE 802.11b standard is popularly known as Wireless Fidelity, or WiFi for short. It has become widely
popular for wireless LANs in office environments. Proponents of this technology consider it a strong competitor
to third generation wireless networks, which also provide high data rate mobile Internet access. WiFi can be used
to provide broadband wireless Internet access as shown in Figure 17.11.
Figure 17.11: Broadband wireless access through wireless LAN.
Access Points (APs) can be installed at various locations in the city. The APs are also called "hot spots". All the
APs in a city can be interconnected through an ATM-based backbone network. As the wireless device moves
from one location to another, the mobile device is connected to the nearest AP.
The proponents of WiFi consider this architecture for providing broadband Internet access to be competitive
with third generation wireless networks, which support data rates of only up to 2Mbps. Efforts are now being made to
make WiFi a highly secure network so that such an architecture can become widespread. Singapore is the first
country to provide mobile Internet access using this approach.
IEEE 802.11b standard is popularly known as Wireless Fidelity or WiFi. The access points are known as hot
spots. By installing hot spots at various places in a city, a metropolitan area network can be developed, and this
network will be a competitor to 3G wireless networks.
Note: WiFi hot spots are being installed in many public places such as hotel lounges, airports, restaurants,
and so on to provide wireless access to the Internet.
17.7.4 Mobile IP
When a mobile device moves from one location to another, the packets have to be sent to the router to which
the mobile device is attached. Consider a scenario shown in Figure 17.12. The mobile device initially is
connected to its home network. The mobile device initiates a very large file transfer from a server located on the
Internet. The server delivers the packets to the router of the home network (known as the home agent), which in turn delivers the packets to the mobile
device. Now the mobile device is moving in a car and is reaching another ISP (foreign network). The mobile
device now has to attach to the router of the new ISP (known as foreign agent). But the packets keep getting
delivered to the home agent. The foreign agent assigns a new IP address (called the care-of address) to the
mobile device, and this address is made known to the home agent. The home agent forwards all the packets to
the foreign agent, which in turn delivers them to the mobile device. If the mobile device has to send packets to
the server, it sends them directly via the foreign agent using the care-of address. The operation of mobile
Internet access using mobile IP is as follows:
The home agent assigns an IP address to the mobile device, called the home address.
The mobile device initiates a connection to the server and the server sends the packets to the home agent
(Step 1).
As the mobile device approaches the foreign agent, the foreign agent assigns a temporary address to the
mobile device, called the care-of address. The care-of address is sent to the home agent. The home agent
forwards the packets to the foreign agent (Step 2).
The foreign agent delivers the packets to the mobile device (Step 3).
If the mobile device has to send packets to the server, it sends them to the foreign agent (Step 4). The
foreign agent sends the packet directly to the server (Step 5).
Figure 17.12: Mobile IP.
In mobile IP, the mobile device will have two addresses: the home address and the care-of address. When the mobile
device moves from one network to another network, the packets will be forwarded to the mobile device using
those addresses.
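The forwarding behavior of the home agent can be sketched with a toy binding table that maps home addresses to care-of addresses (the class and method names are ours, not from any standard API; the addresses are illustrative):

```python
class HomeAgent:
    """Toy model of a Mobile IP home agent's binding table."""

    def __init__(self):
        self.bindings = {}                 # home address -> care-of address

    def register(self, home_addr, care_of_addr):
        """Foreign agent registers the mobile's temporary care-of address."""
        self.bindings[home_addr] = care_of_addr

    def route(self, dst_home_addr):
        """Forward to the care-of address if the mobile is away; else deliver locally."""
        return self.bindings.get(dst_home_addr, dst_home_addr)

ha = HomeAgent()
ha.register("192.0.2.10", "198.51.100.7")   # mobile moved to a foreign network
print(ha.route("192.0.2.10"))               # 198.51.100.7 (forwarded to foreign agent)
print(ha.route("192.0.2.11"))               # 192.0.2.11 (still at home)
```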
17.7.5 IEEE 802.15.3 Standard
The 802.15.3 standard has been developed to meet personal and home area networking needs. This system
operates in the 2.4GHz band with a range of 10 meters. Data rates up to 55Mbps are supported. This standard
also can be used for industrial applications to network different devices in process control systems. For such
industrial applications, the operating temperature has to be from -40°C to 60°C.
17.7.6 HiperLAN2
High performance LAN (HiperLAN) is the standard developed by the European Telecommunications Standards
Institute (ETSI) for wireless LANs. The HiperLAN1 standard was developed for providing ad hoc connectivity to
wireless devices. It uses the CSMA/CA protocol. However, HiperLAN1 was not well suited for applications such
as voice/video communication, which require real-time performance. HiperLAN2 is the next standard developed
by ETSI. HiperLAN2 supports broadband multimedia services as well.
HiperLAN is a wireless LAN standard developed by ETSI. HiperLAN2 operates in the 5GHz band using TDMA-TDD.
Data rates up to 54Mbps are supported to provide broadband multimedia services.
HiperLAN can be used in offices, homes, and public places such as airports, hotels, and so forth. This standard
supports both of the configurations shown in Figure 17.8. The salient features of HiperLAN2 are as follows:
Modes of operation: There are two modes of operation: centralized and direct. In centralized mode, a mobile
device communicates with another device via the access point (Figure 17.8(a)). This mode is used in office
environments. In direct mode, two mobile devices communicate directly with each other. This mode is useful for
personal area networking at homes and offices.
Medium access control: The medium access control protocol is TDMA-TDD. The TDMA frame is of 2
milliseconds duration during which AP-to-mobile communication and mobile-to-AP communication take place.
The time slots are assigned dynamically, based on a request for a connection.
Data rates: Data rates up to 54Mbps are supported. Broadband multimedia services can be supported such as
audio and video communication and video conferencing.
Quality of service: In multimedia applications involving voice and video communication, when real-time
communication is required, the quality of service parameters are important. For instance, the user should be
able to specify that the delay for a particular communication should be less than 10 milliseconds, or the bit error
rate should be very low (one error in 100 million bits). HiperLAN2 supports such quality of service parameters
by assuring a certain maximum delay or a certain minimum throughput. Compared to the 802.11 series
standards, this QoS support is an attractive feature.
Summary
This chapter presented the details of local area network protocols and standards. LANs are used to connect
devices within a radius of about 10 km. Ethernet, developed in 1978, has become the most popular LAN.
Ethernet operates at 10Mbps using the carrier sense multiple access with collision detection (CSMA/CD) protocol
for medium access. The IEEE 802.3 standard is based on Ethernet.
The IEEE 802 committee developed a series of standards for LANs using twisted copper pair, coaxial cable,
fiber, and radio as the transmission media. The logical link control (LLC) layer, which is common to all the
standards, is specified in IEEE 802.2. The IEEE 802.11 family of standards is for wireless LANs.
Wireless LAN based on IEEE 802.11b, popularly known as WiFi (Wireless Fidelity), is the most popular LAN
and can be used to provide connectivity in both home and office environments. HiperLAN2 is the wireless LAN
standard developed by the European Telecommunications Standards Institute (ETSI). With the availability of
wireless LANs that can support up to 54Mbps data rates, wireless LANs are being considered for providing
broadband wireless Internet connectivity.
References
R.O. LeMire. "Wireless LANs and Mobile Networking: Standards and Future Directions," IEEE
Communications Magazine, Vol. 34, No. 8, August 1996.
S. Xu and T. Saadawi. "Does the IEEE 802.11 MAC Protocol Work Well in Multihop Wireless and Ad Hoc
Networks?" IEEE Communications Magazine, Vol. 39, No. 6, June 2001.
IEEE Communications Magazine, Vol. 39, No. 11, December 2001. This issue contains a number of
articles on wireless personal and home area networks.
J.P. Macker et al. "Mobile and Wireless Internet Services: Putting the Pieces Together," IEEE
Communications Magazine, Vol. 39, No. 6, June 2001.
P.S. Henry. "WiFi: What's Next?" IEEE Communications Magazine, Vol. 40, No. 12, December 2002.
J. Khun-Jush et al. "HiperLAN2: Broadband Wireless Communications at 5 GHz," IEEE Communications
Magazine, Vol. 40, No. 6, June 2002.
http://www.ieee.org Web site of IEEE. You can get the IEEE 802 standards documents from this site.
http://www.hiperlan2.com Web site of the HiperLAN2 Global Forum.
Questions
1. Explain the Ethernet local area network operation, giving the details of the medium access protocol and
the Ethernet frame format.
2. What is the difference between the Ethernet MAC frame format and the IEEE 802.3 MAC frame format?
3. What are the different topologies used for LANs?
4. What are the different MAC protocols used for LANs?
Exercise
1. Measure the throughput of the LAN installed in your office/department. You can use the software
package available on your server to find out the effective data rate or throughput. Study the effect
of traffic on the throughput by increasing the traffic (by invoking many file transfers simultaneously).
2. What are the issues related to security in wireless LANs?
3. Work out a detailed plan for installation of a new LAN on your college campus. You need to study
the topology, the expected traffic, and how to interconnect different LAN segments located in
different buildings.
4. Survey the commercially available IEEE 802.11 products.
Answers
1. You can obtain the LAN connection statistics using the procedure given below.
The screen shot given in Figure C.10 shows the LAN activity. It displays the sent packets and received
packets as well as connection statistics.
Figure C.10: Local area network connection status.
On Windows 2000 operating system, you need to do the following:
In the Start menu, select My Network Place and then right-click on it. In the pop-up menu that appears,
select Properties option. A window with the title Network connections will appear. Right-click on the icon in
the Network connections window and select the Status option from the pop-up menu.
2. Security is a major issue in wireless LANs. The 802.11 standard-based LANs do not provide complete
security of information. The security is optional and in many installations, this feature is disabled. The
encryption key is common to all the nodes and is stored as a file in the computers. If the computer is stolen,
the security key is known to the person who stole the computer. Of course, work is going on to improve the
security features of wireless LANs.
3. To plan a LAN, you need to obtain a map of your campus and find out the traffic requirements. If there are
two buildings separated by say, more than 500 meters, you need to install two LAN segments and
interconnect them. If laying the cable in a building is not feasible, you need to consider wireless LAN option.
4. You can get the details of IEEE 802.11 products from http://www.palowireless.com.
Projects
1. Simulate a home agent and a foreign agent on two LAN nodes. You need to study the details of mobile IP to take up this project.
2. Carry out a paper design to develop a wireless LAN for your university campus. You need to do a survey of the various 802.11b products available before doing the design.
3. Interconnect four PCs as a ring using RS232 cables. Develop a Token Ring LAN with these four nodes.
Chapter 18: Wide Area Networks and X.25 Protocols
Connecting computers or LANs spread over a large geographical area is now the order of the day. These wide
area networks (WANs) may be private networks connecting corporate offices spread across the country or the
globe, or they may be public networks offering data services to the public. In this chapter, we will study the
issues involved in wide area networking, with special emphasis on X.25 protocols used extensively in wide area
networks.
18.1 ISSUES IN WIDE AREA NETWORKING
When a computer network is spread over a large geographical area, some special problems are encountered
that are not present in the LANs.
WANs generally do not support very high speeds. Due to lower transmission rates, delays are likely to be
higher. If satellite radio is used as the medium, then the delay is much higher and, as a result, special care
must be taken in terms of flow control protocols.
Because of the delay, the response time also will be high. To the extent possible, protocol overheads need
to be minimized.
The communication medium in a WAN environment may not be as reliable as in LANs, and hence the error
rate is likely to be higher. This may lead to more retransmissions and more delay.
Lower transmission rates and higher delays pose problems for real-time voice and video communication.
Higher delay causes gaps in voice communication and jerky images in video communication.
Network management is more involved and complex as network elements are spread over large
geographical areas.
Wide area networks are characterized by low transmission speeds, high propagation delay, and complex
network management.
The options in transmission media for WANs are dialup/leased lines, optical fiber, and satellite radio. X.25,
Frame Relay, and Asynchronous Transfer Mode (ATM) protocols are used in WANs.
The various options for developing WANs are:
Dialup lines
Point-to-point leased lines
Switched digital networks based on Integrated Services Digital Network (ISDN)
Switched digital networks based on X.25 standard protocols
Optical fiber networks based on Frame Relay and Asynchronous Transfer Mode (ATM)
For WANs, X.25 is an important standard. X.25-based WANs have been in place since the 1980s. An overview
of the X.25 standard is presented in the following sections.
Note: X.25 protocols are still used extensively in satellite-based wide area networks. However, the protocol overhead is very high in X.25 as compared to Frame Relay.
18.2 OVERVIEW OF X.25
A typical X.25-based packet switched network is shown in Figure 18.1. The network consists of the end
systems, called data terminal equipment (DTE), and the X.25 packet switches, called the data circuit-terminating equipment (DCE). The packet switches are linked through communication media for transport of
data in the form of packets. The packet size is variable, with a maximum limit of 1024 bytes.
Figure 18.1: X.25 packet switched network.
ITU-T Recommendation X.25 (also referred to as the X.25 standard) is the specification for interface between
an end system and packet switch. Most of the public data networks (PDNs) and ISDN packet switching
networks use the X.25 standard.
An X.25 network consists of packet switches and end systems. The data transmission is done in packet format,
the maximum packet size being 1024 bytes.
Note: X.25 protocols address the first three layers of the OSI reference model: physical layer, datalink layer, and network layer.
X.25 covers the first three layers in the OSI reference model: physical layer, datalink layer, and network layer.
Physical layer: This layer specifies the physical interface between the end system (DTE) and the link to the
switching node (DCE). RS232 can be used as the physical layer.
Datalink layer: This layer ensures reliable data transfer as a sequence of frames. Link access protocol
balanced (LAPB), a subset of HDLC, is used at this layer.
Note: The datalink layer in X.25 is called link access protocol balanced (LAPB). LAPB is derived from HDLC.
Network layer: This is also called the packet layer. This layer provides the virtual circuit functionality. X.25
provides two types of virtual circuits:
Switched virtual circuit: A virtual circuit is established dynamically between two DTEs whenever required,
using call setup and call clearing procedures.
Permanent virtual circuit: This is a fixed, network-assigned virtual circuit, and hence call setup and call
clearing procedures are not required. This is equivalent to a leased line.
Protocol encapsulation in X.25 networks is shown in Figure 18.2. The data from a higher layer is passed on to
the X.25 packet layer. The packet layer inserts control information as header and makes a packet. The packet
is passed on to the datalink layer (LAPB), which adds the header and trailer and forms an LAPB frame. This
frame is passed on to the physical layer for transmission over the medium.
Figure 18.2: Protocol encapsulation in X.25.
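The encapsulation described above can be sketched in code. The following Python sketch prepends a 3-byte packet-layer header and wraps the result in a simplified LAPB frame; the exact bit layouts follow the usual X.25 conventions but are illustrative assumptions, and the checksum is a placeholder for the real CRC-16 (real LAPB also performs bit stuffing):

```python
# Illustrative sketch of X.25 encapsulation (modulo-8 operation, 3-byte packet header).
# Field layouts are simplified assumptions; real LAPB adds bit stuffing and a CRC-16.

def packet_layer(user_data: bytes, vcn: int, ps: int, pr: int) -> bytes:
    """Prepend a 3-byte X.25 data-packet header: GFI + group ID, channel ID, P(R)/P(S)."""
    gfi_group = (0b0001 << 4) | ((vcn >> 8) & 0x0F)    # GFI (Q=0, D=0, modulo 8) + 4-bit group ID
    channel = vcn & 0xFF                               # 8-bit channel ID
    control = ((pr & 0x07) << 5) | ((ps & 0x07) << 1)  # P(R), M=0, P(S), data-packet bit = 0
    return bytes([gfi_group, channel, control]) + user_data

def lapb_frame(packet: bytes) -> bytes:
    """Wrap the packet in a simplified LAPB I-frame: flag, address, control, FCS, flag."""
    FLAG, ADDR, CTRL = 0x7E, 0x03, 0x00
    fcs = sum(packet) & 0xFFFF                         # placeholder for the real frame check sequence
    return bytes([FLAG, ADDR, CTRL]) + packet + fcs.to_bytes(2, "big") + bytes([FLAG])

frame = lapb_frame(packet_layer(b"hello", vcn=0x123, ps=1, pr=0))
```

The physical layer would then transmit `frame` bit by bit over the DTE-DCE link.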
X.25 supports multiplexing: a DTE can establish up to 4095 virtual circuits simultaneously with other DTEs over a single DTE-DCE link. Each packet contains a 12-bit virtual circuit number.
X.25 supports virtual circuits. A virtual circuit is established between two end systems. The virtual circuit can be
set up on a per-call basis or it can be permanent. These are called switched virtual circuit and permanent virtual
circuit, respectively.
The procedure for establishing a call between the source DTE and the destination DTE in an X.25 network is as
follows:
1. The source DTE sends a call-request packet to its DCE. The packet includes the source address, destination address, and virtual circuit number.
2. The network routes the packet to the destination's DCE.
3. The destination's DCE sends an incoming-call packet to the destination DTE. This packet has the same format as the call-request packet but a different virtual circuit number, selected by the destination's DCE.
4. The destination DTE sends a call-accepted packet.
5. The source's DCE receives the call-accepted packet and sends a call-connected packet to the source DTE. The virtual circuit number is the same as that in the call-request packet.
6. The source DTE and destination DTE use their respective virtual circuit numbers to exchange data and control packets.
7. The source or destination sends a clear-request packet to terminate the virtual circuit and receives a clear-confirmation packet.
8. The other end receives a clear-indication packet and transmits a clear-confirmation packet.
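The three-phase exchange above can be traced with a small simulation. This sketch is an idealized model (no lost packets, hypothetical packet names taken from the text) that records who sends what during setup, data transfer, and clearing:

```python
# Minimal sketch of a switched virtual call in X.25: call setup, data transfer, call clearing.
# The DCE/network behavior is idealized; packet names follow the text.

def x25_call(dte_a_vcn: int, dte_b_vcn: int, data: list):
    """Return the packet trace for one switched virtual call between DTE A and DTE B."""
    trace = []
    # Phase 1: call setup
    trace.append(("A->network", "call-request", dte_a_vcn))
    trace.append(("network->B", "incoming-call", dte_b_vcn))   # B's DCE selects its own VCN
    trace.append(("B->network", "call-accepted", dte_b_vcn))
    trace.append(("network->A", "call-connected", dte_a_vcn))  # same VCN as the call-request
    # Phase 2: data transfer
    for payload in data:
        trace.append(("A->B", "data", payload))
    # Phase 3: call clearing
    trace.append(("A->network", "clear-request", dte_a_vcn))
    trace.append(("network->B", "clear-indication", dte_b_vcn))
    trace.append(("B->network", "clear-confirmation", dte_b_vcn))
    trace.append(("network->A", "clear-confirmation", dte_a_vcn))
    return trace

trace = x25_call(dte_a_vcn=5, dte_b_vcn=9, data=[b"hi"])
```

Note how each end refers to the call by its own locally assigned virtual circuit number, as described in steps 1 and 3.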
In an X.25 network, communication takes place in three stages: (a) call setup; (b) data transfer; and (c) call
disconnection. Hence, X.25 provides a connection-oriented service.
The protocol exchanges sets of packets for call establishment, transfer of data and control information, and call disconnection. For each type of packet, the packet format is specified. The various packets exchanged are:
Call setup packets: From DTE to DCE, call-request and call-accepted packets
From DCE to DTE, incoming-call and call-connected packets
Call clearing packets: From DTE to DCE, clear-request and clear-confirmation packets
From DCE to DTE, clear-indication and clear-confirmation packets
Data transfer packets: Data packets
Interruption packets: Interrupt and interrupt confirmation packets
Flow control packets: RR (receiver ready) packet
RNR (receiver not ready) packet
REJ (reject) packet
Reset packets: Reset request, reset confirmation packets
Restart packets: From DTE to DCE, restart-request, restart-confirmation packets
From DCE to DTE, restart-indication and restart-confirmation packets
Diagnostics packets: Packets that carry diagnostics information
Registration packets: Request-registration and request-confirmation packets
For each type of packet, the packet format is specified in the X.25 recommendations. The formats of some of the packets are discussed here.
Note: The large number of packet exchanges indicates that the protocol overhead is very high in X.25 networks. However, this is an excellent protocol to use on transmission media that are characterized by high error rates.
The different types of packets exchanged in an X.25 network are call setup packets, call clearing packets,
interruption packets, reset packets, restart packets, flow control packets, registration packets, and data
packets.
The various fields in a data packet are Q bit, virtual circuit number, send sequence number, receive sequence
number, M bit, and user data.
The format of data packet is shown in Figure 18.3(a). The fields in the data packet are:
Q (1 bit): The Q bit is not defined in the standard; it can be used to distinguish two types of data packets.
Virtual circuit number (12 bits): The 12 bits are divided into 4 bits for group ID and 8 bits for channel ID,
which together form the virtual circuit number.
P(R) and P(S): Send sequence number and receive sequence number used for flow control as in HDLC.
Figure 18.3: X.25 Packet Formats.
D bit: D = 0 for acknowledgement between the DTE and the network (for flow control), and D = 1 for acknowledgement from the remote DTE.
M bit: In X.25, two types of packets are defined: A packets and B packets. In an A packet, M = 1 and D = 0, indicating that the packet is full; that is, it is of the maximum allowable packet length. A B packet is a packet that is not an A packet. A packet sequence consists of a number of A packets followed by one B packet.
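The data-packet fields above can be made concrete by decoding a header. This sketch parses the conventional 3-byte modulo-8 header layout (GFI with Q and D bits plus group ID, then channel ID, then P(R)/M/P(S)); it is illustrative, not a full X.25 implementation:

```python
# Decode the fields of a modulo-8 X.25 data-packet header (3 bytes).
# The layout is the conventional one described in the text; example values are hypothetical.

def parse_data_header(hdr: bytes) -> dict:
    """Split a 3-byte X.25 data-packet header into its fields."""
    q = (hdr[0] >> 7) & 1          # Q bit
    d = (hdr[0] >> 6) & 1          # D bit
    group = hdr[0] & 0x0F          # 4-bit group ID
    vcn = (group << 8) | hdr[1]    # 12-bit virtual circuit number = group ID + 8-bit channel ID
    pr = (hdr[2] >> 5) & 0x07      # receive sequence number P(R)
    m = (hdr[2] >> 4) & 1          # M (more data) bit
    ps = (hdr[2] >> 1) & 0x07      # send sequence number P(S)
    return {"Q": q, "D": d, "vcn": vcn, "P(R)": pr, "M": m, "P(S)": ps}

fields = parse_data_header(bytes([0x11, 0x23, 0x52]))
# 0x52 = 0b0101_0010, i.e., P(R) = 2, M = 1, P(S) = 1
```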
The format of a control packet is shown in Figure 18.3(b). This format is used for all packets for call setup and
call disconnection. For example, the call-request packet will have additional information containing calling DTE
address length (four bits), called DTE address length (four bits), calling DTE address, called DTE address.
The format of flow control packet is shown in Figure 18.3(c). P(R) indicates the number of the next packet
expected to be received. Note that flow control is used at both layers 2 and 3 in X.25.
18.3 A SATELLITE-BASED X.25 NETWORK
X.25 is used extensively in satellite-based wide area networks. The typical architecture of the network is shown
in Figure 18.4. At the satellite hub (central station), there will be a packet switch (DCE) to which an X.25 host is
connected.
Figure 18.4: Satellite-based X.25 wide area network.
A PC add-on card plugged into the server makes the server an X.25 host. At the remotes, there are two
possible configurations. Some remotes can have X.25 packet switches, which are connected to X.25 hosts. At
other remotes, a special device called a packet assembler/disassembler (PAD) is used. This PAD, as the name
suggests, takes the data from a terminal/PC, assembles the packets in the X.25 format, and sends it over the
satellite link. The packets received from the satellite are disassembled and given to the terminal/PC.
Communication between the PAD and the terminal/PC is through RS232. Two other standards are specified for
this configuration. The X.3 standard specifies the parameters of the PAD. The X.28 standard specifies the
protocol used by the terminal/PC to communicate with the PAD. The PAD parameters such as the speed of
communication, number of data bits, and so on can be set by the terminal/PC.
When a remote terminal wants to communicate with another remote, a virtual circuit is established between the
two terminals. The source sends the packets in X.25 format, and the packet switch at the hub switches the
packet to the appropriate destination. In such a case, the X.25 host at the hub does not play any role. When some data has to be broadcast to all the VSAT terminals, however, the host is required. The host uses the broadcast address to send the packets over the satellite link, and all the remotes receive them.
Data terminals such as PCs can be connected to the X.25 network through a packet assembler/disassembler
(PAD). The PAD takes the data from the PC and assembles the packets in X.25 format to send them over the
network.
Note: The PAD parameters are specified in ITU-T Recommendation X.3. Recommendation X.28 specifies the protocol used by the PC to communicate with the PAD.
18.4 ADDRESSING IN X.25 NETWORKS
Because X.25 is used for WANs, addressing the different end systems is very important. Addressing in X.25
networks is performed based on the X.121 standard. Figure 18.5 shows the two formats for the X.121
addressing scheme.
Figure 18.5: X.121 Address Formats.
X.121 addresses will have 14 digits. In one format, there will be 4 digits of data network identification code
(DNIC) and 10 digits that can be specified by the VSAT network operator. In the second format, there will be 3
digits of data country code (DCC) and 11 digits that can be specified by the VSAT network operator. DNIC and
DCC will be given by national/international authorities.
Generally, the address/subaddress portion is divided into logical fields. For instance, in the first format, out of the 10 digits, 2 digits can be given to the state, 3 digits to the district, and the remaining digits to the hosts. A hierarchy can be developed. This complete address directory has to be stored on the host at the central station.
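A hierarchical split of this kind can be sketched as follows. The 4-digit DNIC prefix comes from the first X.121 format; the 2/3/5-digit state/district/host split is an example convention of the network operator, not part of the standard, and the sample address is hypothetical:

```python
# Sketch of a hierarchical X.121 address split (DNIC format, 14 digits total).
# DNIC is standard; the state/district/host subdivision is an example operator convention.

def parse_x121(address: str) -> dict:
    """Split a 14-digit X.121 address into DNIC and example hierarchical fields."""
    assert len(address) == 14 and address.isdigit(), "X.121 addresses have 14 digits"
    return {
        "dnic": address[:4],        # data network identification code (assigned by authorities)
        "state": address[4:6],      # example: 2-digit state code
        "district": address[6:9],   # example: 3-digit district code
        "host": address[9:],        # remaining 5 digits identify the end system
    }

fields = parse_x121("40421203700042")  # hypothetical address
```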
Though X.25 has been used extensively in public data networks (PDNs), it is a highly complicated protocol because it was developed to work on transmission media that are highly susceptible to noise. At each hop (node to node), each data packet exchange is followed by an acknowledgement packet. In addition, flow control and error control packets are exchanged. Such high overhead is suitable for links with high error rates.
With the advent of optical fiber, which is highly immune to noise, such complicated protocols are not necessary.
However, in satellite-based data networks, X.25 has been used widely to provide reliable data transfer.
In X.25 networks, the addressing of the various end systems is done per ITU-T Recommendation X.121. X.121
address will have 14 digits.
Summary
In this chapter, X.25-based wide area networking is discussed. X.25 provides a connection-oriented service.
Hence, when two systems have to communicate with each other, a virtual circuit is established, data
transferred, and then the circuit is disconnected. X.25 is a robust protocol and will work even if the transmission
medium is not very reliable. X.25 addresses the first three layers of the OSI protocol architecture. The physical
layer specifies the interface between the DTE and DCE. The datalink layer is called the LAPB (link access
protocol balanced), which is derived from HDLC. The network layer (also called the packet layer) will take care
of the virtual circuit establishment. Flow control is done at both layers 2 and 3.
When an asynchronous terminal has to communicate with an X.25 network, a packet assembler disassembler
(PAD) is used that takes the data from the terminal and formats it into X.25 packets to send over the X.25
network. Similarly, it disassembles the packets from the X.25 network and presents them to the terminal. In
X.25 networks, addressing is done based on the X.121 standard, which specifies a 14-digit address for each
system.
Though X.25 is a robust protocol that can be used in networks with unreliable transmission media, as media have become reliable (for example, optical fiber), X.25 has lost its popularity. However, it is still used extensively in satellite-based data networks.
References
W. Stallings. Data and Computer Communications. Prentice-Hall, Inc., 1999.
http://www.farsite.co.uk This site provides useful information on X.25 commercial products.
Questions
1. List the various issues in wide area networking.
2. Describe the architecture of an X.25-based wide area network.
3. Explain the call setup, data transfer, and call disconnection procedures in an X.25 network.
4. Describe the formats of X.121 addressing.
5. Explain the X.25 packet formats.
Exercises
1. Study the X.25 switch, PAD, and X.25 add-on cards supplied by various vendors.
2. Make a comparison of the protocols used in X.25 and Frame Relay. Explain why X.25 is called a
heavyweight protocol.
3. Design an addressing scheme based on X.121 for a nationwide network connecting various state
governments, district offices, and central government departments.
4. List the important PAD parameters.
Answers
1. You can obtain the details of X.25 products from the site http://www.nationaldatamux.com.
2. In X.25, there will be a number of packet transfers between the switches mainly for acknowledgements, flow
control, and error control. Hence, the protocol overhead is very high as compared to Frame Relay, which is
mainly used in optical fiber systems in which there are fewer errors.
3. The addressing based on X.121 for a nationwide network should be done in a hierarchical fashion. X.121
address will have 14 digits, out of which 3 digits are for country code. Out of the remaining 11 digits, 2 digits
can be assigned to the state, 2 digits for each district, and the remaining 7 digits for the different end
systems.
4. There are 18 PAD parameters per X.3 standard. The important parameters are baud rate, local echo mode,
idle timer, line delete, line display, character editing, input flow control, and discard output.
Projects
1. Simulate a PAD on a PC. The PAD has to communicate with the PC through an RS232 link. From the PC, you should be able to set the PAD parameters.
2. For an X.25-based wide area network connecting all the universities in the country, you have been asked to design an addressing scheme. Design the addressing scheme based on the X.121 formats. You can divide the address/subaddress field into three portions: the first portion for the state code, the second for the district code, and the third for the university ID. Think of an alternative that does not use the geographical location. (You can divide the universities into different categories such as technical universities, agricultural universities, and so on.)
Chapter 19: Internetworking
Developing a LAN or a WAN is straightforward, but networking different networks is not. Each network has its own protocols, packet sizes and formats, speeds, addressing schemes, and so on. Internetworking, or
networking of networks is certainly a challenging task, and in this chapter we will discuss the issues involved in
internetworking.
19.1 ISSUES IN INTERNETWORKING
Consider the simple case of connecting two networks as shown in Figure 19.1. Network A is a LAN based on
Ethernet, and network B is a WAN based on X.25 protocols. It is not possible for a node on the LAN to transmit
a packet that can be understood by a node on the WAN because of the following:
The addressing formats are different.
The packet sizes are different.
The medium access protocols are different.
The speeds of operation are different.
The protocols used for acknowledgements, flow control, error control, and so on are different.
Figure 19.1: Internetworking with a router.
To achieve connectivity among different networks, we need to solve the problem of interconnecting heterogeneous networks; this is known as internetworking. Two different networks can be connected using a router (or gateway) as shown in Figure 19.1. The connected network as a whole is referred to as an internet (small i) and each network as a subnetwork (or subnet) of the internet. The router operates at layer 3 of the OSI reference model. The router does the necessary protocol translation to make the two subnetworks talk to each other. Note that router is the name used by equipment vendors; a router is referred to as a gateway in the documents of the Internet standards.
Interconnection of heterogeneous networks is called internetworking. A router or gateway is used to
interconnect two networks.
Note: The router that interconnects two networks does the necessary protocol conversion. The router operates at layer 3 of the OSI reference model.
The functions of the router are as follows:
Accept the packet from the subnetwork A.
Translate the packet to a format understood by subnetwork B.
Transmit the packet to B.
Similarly, the packets from subnetwork B are transferred to subnetwork A after necessary translation or protocol
conversion.
Now, consider the internet shown in Figure 19.2, where a number of subnetworks are connected together
through routers. A node on subnetwork A has to send packets to a node on subnetwork F. For the packet to
reach F, the packet has to traverse through other routers. A router's job is also to transfer the packet to an
appropriate router so that the packet reaches the destination. To achieve this, each router has to keep a table,
known as the routing table.
Figure 19.2: An internet.
This routing table decides to which router (or network) the packet has to be sent next.
Note: Every router has a routing table. This routing table is used to route the packet to the next router on the network. The routing table is updated periodically to take care of changes in the topology of the network and the traffic on the network.
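The routing-table idea can be illustrated with a minimal next-hop lookup for an internet like that of Figure 19.2. The subnetwork names and table entries here are hypothetical; real routers match destination network prefixes and keep these tables up to date via routing protocols:

```python
# Minimal sketch of next-hop routing. Subnet and router names are hypothetical
# examples for an internet like Figure 19.2; real tables hold network prefixes.

ROUTING_TABLE_R1 = {          # routing table held by one router, R1
    "subnet-A": "direct",     # subnetwork A is directly attached
    "subnet-B": "direct",
    "subnet-F": "R3",         # packets for subnetwork F are forwarded to router R3
}

def next_hop(table: dict, dest_subnet: str) -> str:
    """Return the next router (or 'direct') for a destination subnetwork."""
    return table.get(dest_subnet, "default-router")  # fall back to a default route

hop = next_hop(ROUTING_TABLE_R1, "subnet-F")
```

Each router along the path repeats this lookup until the packet reaches a router to which the destination subnetwork is directly attached.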
The requirements of internetworking are:
1. Links between subnetworks.
2. Routing and delivery of packets.
3. Accounting to keep track of use of various networks and routers.
4. The ability to accommodate:
   a. Different addressing schemes
   b. Different packet sizes
   c. Different network access mechanisms
   d. Different timeouts
   e. Different error recovery mechanisms
   f. Different status report mechanisms
   g. Different routing techniques
   h. Connection-oriented and connectionless services
To achieve internetworking, the router is the most important element. To meet all the above requirements, the
router has to perform protocol conversion. In addition, the following are required:
To take care of the differing protocols above the datalink layer, we need another layer of protocol that runs
on each router and end system, known as the Internet Protocol (IP).
Because the addressing formats differ, we need a universal addressing scheme to address each machine
uniquely. Each machine on the internet is given a unique address, known as an IP address.
For two end systems on the different networks to transfer packets irrespective of the underlying networks,
we need another layer of communication software that has to run on each end system (or host). This
protocol that handles the transportation of packets transparently is known as Transmission Control Protocol
(TCP).
Above the TCP, we need to run different sets of protocols to provide different applications for the end users. We need a separate protocol for each application: for file transfer, electronic mail, remote login, Web access, and so on.
These protocols form the TCP/IP protocol suite. The TCP/IP protocol suite is the heart of the world-wide
Internet.
Internetworking is a complex task because different networks have different packet sizes, different address
formats, different access protocols, different timeouts, and so on.
Note: The TCP/IP protocol stack runs on each and every end system, whereas the IP protocol runs on every router on the Internet.
Another issue in internetworking is how to connect two or more subnetworks when some subnetworks support
connection-oriented service and some networks support connectionless service.
Connection-oriented internetworking: When two networks, both supporting connection-oriented service,
have to be connected, it is called connection-oriented internetworking. In this case, the router that is used to
connect the two subnetworks appears as a node to the subnetworks to which it is attached. A logical
connection is established between the two networks, and this connection is a concatenation of the sequence of
logical connections across the subnetworks.
When two networks, both supporting connection-oriented service, are interconnected, it is called connection-
oriented internetworking.
Connectionless internetworking: Consider a situation in which a number of subnetworks are to be
interconnected, but some subnetworks support connection-oriented service and some subnetworks support
connectionless service. In this situation, the internet can provide datagram service. Each packet is treated
independently and routed from the source node to the destination node via the routers and the networks. At
each router, an independent routing decision is made. The advantage of this internetworking is that it provides
a flexible service because underlying networks can be a mix of connectionless and connection-oriented
services. Implementation using this approach is quite easy.
In connectionless internetworking, each packet is handled independently and routed from the source to the
destination via routers.
Note: Connectionless internetworking provides lots of flexibility because the networks can be a mix of both connection-oriented and connectionless services. The global Internet provides connectionless internetworking.
The Internet Protocol (IP) and ISO/OSI Connectionless Network Protocol (CLNP) provide connectionless
internetworking service (datagram service). However, note that the problems associated with the datagram
service have to be taken care of: packets may be lost, packets may arrive with different delays (and hence out of sequence), and some packets may be received more than once. These problems are taken care of by the TCP layer.
Figure 19.3 illustrates the protocols that need to run on each node and the router in an internetworking
scenario. Two LANs are interconnected through a WAN. Each node on the LAN runs the TCP/IP protocol suite
(including the application layer software), and the WAN runs the protocols based on X.25 standards. The router
has to run both the LAN and WAN protocols. In addition, it runs the IP software: it takes the packet from the LAN, converts it into packets that can be understood by the WAN protocols, and transmits them on the WAN.
Similarly, packets received from the other side are converted into a format that can be understood by the LAN.
Figure 19.3 depicts the protocol stacks that run on the LAN nodes and the two routers.
Figure 19.3: Internetworking LANs through a WAN.
When a LAN is connected to a WAN, the router interconnecting the LAN and WAN has to run the protocol
stacks of both LAN and WAN and do the necessary protocol translation.
19.2 THE INTERNETTHE GLOBAL NETWORK OF NETWORKS
The Internet is a global network of computer networks. The TCP/IP protocol suite ensures that the diverse
networks (LANs, corporate WANs, public data networks, etc.) can be internetworked for people to exchange
information.
The entire TCP/IP protocol suite is the result of researchers working in universities and research laboratories.
There is no central administrative authority to control the Internet. To ensure that the protocol suite meets the
growing requirements, new developments have to take place and new protocols need to be defined. This
activity is controlled by the Internet Architecture Board (IAB), which consists of a chairperson and a number of
volunteers. The IAB has two arms: the Internet Research Task Force (IRTF) and the Internet Engineering Task Force (IETF). IRTF coordinates the research efforts. IETF, consisting of a number of working groups,
coordinates the engineering issues. Any new protocol to be implemented has to be approved by IETF. For each
protocol, an RFC (Request for Comments) document is released by IETF.
The TCP/IP protocol stack provides the capability to interconnect different networks to form a global network of
networks, called the Internet. All protocols to be used in the Internet have to be ratified by the Internet
Engineering Task Force (IETF).
The entire Internet spanning the entire planet is just an extension of the small internet shown in Figure 19.2.
The TCP/IP protocol suite enables any machine on the Internet to communicate with any other machine. The
Internet is very dynamic: every day, a number of networks are connected, and some are disconnected as well.
In spite of this dynamic nature, how do we get connected to any machine and access a Web site? It is the IP
and the TCP that do the trick. If internetworking still appears a mystery, we will unravel the mystery in the next
chapter, where we will study the TCP/IP protocol suite in detail.
Summary
To connect two or more networks is a challenging task because each network has its own protocols. In this
chapter, we studied how a router can be used to network heterogeneous networks. The router's function is to
do the necessary protocol conversion. The Internet, a global network interconnecting millions of networks, also
uses the same mechanism. The IP and TCP protocols provide the means of achieving global connectivity. The
Internet Protocol (IP) has to run on each and every router and each and every end system. The Transmission
Control Protocol (TCP) has to run on each and every end system. The router is introduced in this chapter. We
will study the details in later chapters.
References
V. Cerf and R. Kahn. "A protocol for Packet Network Interconnection". IEEE Transactions on
Communications, COM-22, Vol. 5, May 1974. This paper is written by the two persons who laid the
foundation for the global Internet.
D.E. Comer and D.L. Stevens. Internetworking with TCP/IP, Vol. III: Client/Server Programming and
Applications, BSD Socket Version. Prentice Hall Inc., Englewood Cliffs, N.J., 1993. This book gives the
complete software for internetworking using TCP/IP.
http://www.ietf.org The Web site of IETF. You can obtain the RFCs (Request for Comments) that give the
complete details of the Internet protocols from this site.
Questions
1. What are the various issues involved in networking of heterogeneous networks?
2. What are the functions of a gateway or router?
3. Draw a diagram that depicts the protocol stacks that need to run on the end systems (hosts) and the routers when an Ethernet LAN is connected to an X.25 WAN.
Exercises
1. In your department/organization, if there are two LANs, study how they are connected and what
network elements (bridge/router) interconnect the two LANs.
2. Cisco Corporation is one of the leading suppliers of routers. Study the various internetworking
products supplied by them. You can get the information from http://www.cisco.com.
Answers
1. The type of network element used to interconnect two LANs depends on the protocols used in the two
LANs. If both LANs run the same protocols, there is no need for any protocol conversion. If the LANs use
different protocols, a network element needs to do the protocol conversion.
2. Earlier routers handled only the protocol conversion needed for data applications. Nowadays, routers are
capable of handling voice/fax/video services as well. Cisco's AVVID (Architecture for Voice, Video, and
Integrated Data) supports multimedia applications, and the routers are capable of protocol conversion
between the PSTN and the Internet. Routers now support IP Version 6.
Projects
1. The IP (Internet Protocol) is the heart of internetworking. Study the source code for the IP
implementation on a Linux system.
2. Develop a protocol converter that takes RS232 data in serial format and converts it into Ethernet format.
The software has to read the data from a serial port on the PC and convert the data into Ethernet
packets.
3. Embed the software developed in Project #2 in a processor-based system. You can use an embedded
operating system such as Embedded Linux or RTLinux.
Chapter 20: TCP/IP Protocol Suite
OVERVIEW
The TCP/IP protocol suite was developed during the initial days of research on the Internet and evolved over
the years into a simple yet efficient architecture for computer networking. The ISO/OSI architecture, developed
subsequently, has not caught on very well because the Internet spread very fast, and the large installed base
of TCP/IP-based networks could not be replaced with the ISO/OSI protocol suite.
The TCP/IP protocol suite is now an integral part of most operating systems, making every computer network-
ready. Even very small embedded systems are being provided with TCP/IP support to make them network
enabled. These systems include Web cameras, Web TVs, and so on. In this chapter, we will study the TCP/IP
architecture. A thorough understanding of this architecture is a must for everyone who is interested in the field
of computer networking.
20.1 TCP/IP PROTOCOL SUITE
The TCP/IP protocol suite was developed as part of the United States Department of Defense's ARPAnet
(Advanced Research Projects Agency Network) project, but the standards are publicly available. Due to the
fast spread of the Internet, the TCP/IP protocol suite has a very large installed base. The TCP/IP software is
now an integral part of most operating systems, including Unix, Linux, and Windows. The TCP/IP stack is also
being embedded into systems running real-time operating systems such as VxWorks, RTLinux, and OS/9 and
handheld operating systems such as Embedded XP, Palm OS, and Symbian OS.
The TCP/IP protocol suite is depicted in Figure 20.1. It consists of five layers:
Physical layer
Datalink layer (also referred to as the network access layer)
Internet Protocol (IP) layer
Transport layer (TCP layer and UDP layer)
Application layer
Figure 20.1: TCP/IP protocol suite.
The TCP/IP protocol suite consists of five layers: physical layer, datalink layer (also referred to as the network
access layer), Internet Protocol (IP) layer, transport layer (TCP and UDP), and application layer.
Physical layer: This layer defines the characteristics of the transmission such as data rate and signal-encoding
scheme.
Datalink layer: This layer defines the logical interface between the end system and the subnetwork.
Internet Protocol (IP) layer: This layer routes the data from source to destination through routers. Addressing
is an integral part of the routing mechanism. IP provides an unreliable service: the packets may be lost, arrive
out of order, or have variable delay. The IP layer runs on every end system and every router.
Transport layer: This layer is also called the host-to-host layer because it provides end-to-end data transfer
service between two hosts (or end systems) connected to the Internet. Because the IP layer does not provide a
reliable service, it is the responsibility of the transport layer to incorporate reliability through acknowledgments,
retransmissions, and so on. The transport layer software runs on every end system.
For applications that require reliable data transfer (such as most data applications), a connection-oriented
transport protocol called Transmission Control Protocol (TCP) is defined. For connectionless service, the User
Datagram Protocol (UDP) is defined. Applications such as network management that do not need very reliable
packet transfer use the UDP layer.
Application layer: This layer differs from application to application. Two processes on two end systems
communicate using the application layer as the interface.
As in the OSI architecture, peer-to-peer communication applies to the TCP/IP architecture as well. The
application process (such as one transferring a file) generates an application byte stream, which is divided into
TCP segments and sent to the IP layer. Each TCP segment is encapsulated in an IP datagram and sent to the
datalink layer, where the IP datagram is encapsulated in a datalink layer frame. Since the datalink layer can be
subdivided into the LLC and MAC layers, the IP datagram is encapsulated in an LLC frame and then passed
on to the MAC layer. The MAC frame is sent over the physical medium. At the destination, each layer strips off
the header, does the necessary processing based on the information in the header, and passes the remaining
portion of the data to the higher layer. This mechanism for protocol encapsulation is depicted in Figure 20.2.
Figure 20.2: Protocol encapsulation in TCP/IP.
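The encapsulation sequence can be illustrated with a toy sketch. The header layouts below are simplified stand-ins for illustration only, not the real TCP, IP, or MAC wire formats:

```python
# Toy sketch of TCP/IP encapsulation: each layer prepends its own header.
# The header layouts here are hypothetical placeholders, not the real formats.

def tcp_segment(app_data: bytes, src_port: int, dst_port: int) -> bytes:
    # Hypothetical 4-byte TCP header: source and destination ports only
    return src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big") + app_data

def ip_datagram(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    # Hypothetical 8-byte IP header: dotted-quad addresses packed as 4 bytes each
    pack = lambda a: bytes(int(x) for x in a.split("."))
    return pack(src_ip) + pack(dst_ip) + segment

def mac_frame(datagram: bytes) -> bytes:
    # Hypothetical MAC framing: a 2-byte length field as a stand-in header
    return len(datagram).to_bytes(2, "big") + datagram

data = b"HELLO"
frame = mac_frame(ip_datagram(tcp_segment(data, 1025, 80), "10.0.0.1", "10.0.0.2"))

# At the destination, each layer strips its header in reverse order
datagram = frame[2:]     # MAC layer removes its framing
segment = datagram[8:]   # IP layer removes its header
payload = segment[4:]    # TCP layer removes its header
assert payload == data
```

The nesting mirrors Figure 20.2: the application data sits innermost, wrapped successively by the TCP, IP, and MAC headers.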
The complete TCP/IP protocol stack is shown in Figure 20.3, indicating the various application layer protocols:
Simple Mail Transfer Protocol (SMTP), for electronic mail containing ASCII text.
Multipurpose Internet Mail Extensions (MIME), for electronic mail with multimedia content.
File Transfer Protocol (FTP) for file transfer.
Telnet for remote login.
Hypertext Transfer Protocol (HTTP) for World Wide Web service.
Figure 20.3: TCP/IP protocol stack.
In the TCP/IP protocol stack, the various application layer protocols are SMTP for e-mail, FTP for file transfer,
Telnet for remote login, and HTTP for World Wide Web service.
In addition, the following protocols are also depicted:
Border Gateway Protocol (BGP), a routing protocol to exchange routing information between routers.
Exterior Gateway Protocol (EGP), another routing protocol.
Internet Control Message Protocol (ICMP), which is at the same level as IP but uses IP service.
Simple Network Management Protocol (SNMP) for network management. Note that SNMP uses UDP and
not TCP.
Note: The TCP and IP layer software runs on every end system. The IP layer software runs on every router.
20.2 OPERATION OF TCP AND IP
Consider the internet shown in Figure 20.4. Each end system will be running the TCP/IP protocol stack,
including the application layer software. Each router will be running the IP layer software. If the networks use
different protocols (e.g., one is an Ethernet LAN and another is an X.25 WAN), the router will do the necessary
protocol conversion as well. End system A wants to transfer a file to end system B.
Figure 20.4: TCP/IP operation in an internet.
When two end systems have to exchange data using the TCP/IP protocol stack, a TCP connection is
established, and the data transfer takes place. Though the IP layer does not provide a reliable service, the
TCP layer ensures end-to-end reliable transfer of data through error detection and retransmission of packets.
Each end system must have a unique address; this is the IP address. In addition, the process in end system A
should establish a connection with the process running in end system B to transfer the file. To identify the
process, another address is used, known as the port address. For each application, a specific port address is
specified. When port 1 of A wishes to exchange data with port 2 on B, the procedure is:
1. The process on A gives the message to its TCP: send to B, port 2.
2. TCP on A gives the message to its IP: send to host B. (Note: the IP layer need not know the port of B.)
3. IP on A gives the message to the datalink layer with instructions to send it to router X.
4. Router X examines the IP address and routes the message to B.
5. B receives the packet, each layer strips off the header, and finally the message is delivered to the
process at port 2.
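On most systems, these steps are hidden behind the sockets API: the application simply hands its message to TCP. A minimal sketch, using a loopback listener to stand in for end system B (the message text is illustrative):

```python
import socket
import threading

# Sketch of step 1: the process on A hands its message to TCP via the
# sockets API; IP routing and header stripping happen below this interface.
# A loopback listener stands in for end system B here.

def end_system_b(listener: socket.socket, received: list) -> None:
    conn, _ = listener.accept()           # B accepts the TCP connection
    received.append(conn.recv(1024))      # message delivered to the process at B
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

received: list = []
t = threading.Thread(target=end_system_b, args=(listener, received))
t.start()

with socket.create_connection(("127.0.0.1", port)) as s:
    s.sendall(b"send to B port 2")        # process on A gives message to TCP

t.join()
listener.close()
assert received == [b"send to B port 2"]
```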
Note: Though we talk about a TCP connection, there is no real connection between the end systems; it is a
virtual connection. In other words, a TCP connection is only an abstraction.
20.3 INTERNET PROTOCOL (IP)
Internet Protocol (IP) is the protocol that enables various networks to talk to each other. IP defines the data
formats for transferring data between various networks, and it also specifies the addressing and routing
mechanisms. The service delivered by IP is an unreliable connectionless packet service. The service is
unreliable because there is no guarantee that the packets will be delivered: packets may be lost if there is
congestion, though a best effort is made for the delivery. The packets may not be received in sequence,
packets may be duplicated, and packets may arrive at the destination with variable delay. The service is
connectionless because each packet is handled independently. IP defines the rules for discarding packets,
generating error messages, and how hosts and routers should process the packets.
The main functions of the IP layer are addressing and routing. Each machine is given an IP address that is
unique on the network. The destination address in the IP datagram is used to route the packet from the source
to the destination.
IP is implemented as software. This software must run on every end system and on every router in any internet
using the TCP/IP protocol suite.
In Figure 20.4, the router X may deliver the packet to network Q directly or it may deliver it to router Y, which in
turn delivers to network Q. So, the packets may take different routes and arrive at the end system B out of
sequence. It is the TCP layer that takes care of presenting the data in proper format to the application layer.
Note: It is important to note that the IP layer does not provide a reliable service. The packets may be lost on
the route from the source to the destination if there is congestion in the network. It is the responsibility of the
TCP layer to ask for retransmissions and ensure that all the packets are received at the destination.
20.4 TRANSMISSION CONTROL PROTOCOL (TCP)
It is the job of the transport layer protocol to ensure that the data is delivered to the application layer without
any errors. The functions of the transport layer are:
To check whether the packets are received in sequence or not. If they are not in sequence, they have to be
arranged in sequence.
To check whether each packet is received without errors, using the checksum. If packets are received in
error, the TCP layer has to ask for retransmissions.
To check whether all packets are received or whether some packets are lost. One of the routers may drop
(discard) a packet because its buffer is full, or the router itself may become faulty. If packets are lost, the
TCP layer has to ask the other end system to retransmit the packet. Dropping a packet is generally due to
congestion on the network.
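The error check mentioned above uses the Internet checksum: the 16-bit one's-complement sum of the segment taken as 16-bit words. A minimal sketch, ignoring the TCP pseudo-header that the real computation also covers:

```python
# Internet checksum sketch: 16-bit one's-complement sum with carry folding.
# (The real TCP checksum also covers a pseudo-header; that is omitted here.)

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += int.from_bytes(data[i:i + 2], "big")
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF                        # one's complement of the sum

segment = b"HELO"                                 # illustrative even-length payload
cksum = internet_checksum(segment)

# Receiver check: the checksum over data plus the checksum field must be zero
assert internet_checksum(segment + cksum.to_bytes(2, "big")) == 0
```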
The TCP layer provides end-to-end reliable transfer of data by taking care of flow control and error control. If
packets are received in error, retransmission is requested. If packets are received out of order, they are put in
sequence. It appears to the application layer as though everything is fine, but the TCP layer needs to do a lot of
work to achieve this.
Sometimes, one system may send the packets very fast, and the router or end system may not be able to
receive the packets at that speed. The transport layer handles this through flow control.
It is the job of the transport layer to provide an end-to-end reliable transfer of data even if the underlying IP
layer does not provide reliable service. The Transmission Control Protocol (TCP) does all these functions,
through flow control and acknowledgements.
Note: In TCP/IP networks, it is not possible to ensure that all the packets are received at the destination with
constant delay; the delay may vary from packet to packet. Hence, TCP/IP networks do not guarantee a
desired quality of service. This characteristic poses problems for transferring real-time data such as voice or
video over TCP/IP networks.
20.4.1 Flow Control and Acknowledgements
To provide a reliable transmission, the acknowledgement policy is used. The two protocols for this mechanism
are the stop-and-wait protocol and the sliding window protocol. These protocols take care of lost packets, flow
control, and error detection.
Stop-and-Wait Protocol
When the source (end system A) sends the first packet to the destination (end system B), B sends an
acknowledgment packet. Then A sends the second packet, and B sends the acknowledgement. This is a very
simple protocol, but what happens if a packet or its acknowledgement is lost? To handle this, A sends the first
packet and then starts a timer. The destination, after receiving the packet, sends an acknowledgement. If the
acknowledgement is received before the timer expires, the source sends the next packet and resets the timer.
If the packet sent by the source is lost, or if the acknowledgement sent by the destination is lost, the timer
expires, and the source resends the packet.
In the stop-and-wait protocol, the source sends a packet, and only after the acknowledgement is received from
the destination is the next packet sent. This is a simple protocol, but it results in long delays, and the
bandwidth is not used efficiently.
Figure 20.5: Stop-and-wait protocol.
This protocol is very simple to implement. However, the drawback is that the throughput will be very poor and
the channel bandwidth is not used efficiently. For instance, if this protocol is used in a satellite network, A will
send a packet, and after one second it will receive the acknowledgment. During that one second, the satellite
channel is free, and the channel is not used effectively. A refinement to this protocol is the sliding window
protocol.
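The timeout-and-retransmit behaviour of stop-and-wait can be simulated with a short sketch; the loss rate, packet count, and seed below are illustrative:

```python
import random

# Simulation sketch of stop-and-wait over a lossy channel: the sender
# retransmits whenever its timer would expire, modelled here as the packet
# or its acknowledgement being lost in transit.

def lossy_round_trip(loss_rate: float, rng: random.Random) -> bool:
    # True if both the packet and its ACK survive the channel
    return rng.random() > loss_rate and rng.random() > loss_rate

def stop_and_wait(n_packets: int, loss_rate: float = 0.2, seed: int = 1) -> int:
    rng = random.Random(seed)
    transmissions = 0
    for _ in range(n_packets):
        while True:                           # resend until the ACK arrives
            transmissions += 1
            if lossy_round_trip(loss_rate, rng):
                break                         # ACK received before timeout
    return transmissions

# Every packet eventually gets through, at the cost of extra transmissions
total = stop_and_wait(10)
assert total >= 10
```

Each loss costs a full round-trip time before the retransmission, which is why the idle time is so damaging on high-delay links such as satellite channels.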
Sliding Window Protocol
In this protocol, the source sends a certain number of packets without waiting for the acknowledgements. The
destination receives the packets and sends an acknowledgement for each packet. The source will have a timer
for each packet and keeps track of the unacknowledged packets. If the timer expires for a particular packet and
the acknowledgement is not received, that packet will be resent. This way, the throughput on the network can
be increased substantially.
One parameter in this protocol is the window size. If the sliding window size is seven, the source can send up
to seven packets without waiting for an acknowledgement. The destination can send a single
acknowledgement after receiving all seven packets. If the destination has not received packet four, it can send
an acknowledgement indicating that packets up to packet three were received. As shown in Figure 20.6, if B
sends ACK 3, the source knows that packets up to packet three were received correctly, and it sends all the
packets from four onwards again. Another option in the sliding window protocol is how to send the
acknowledgements. A positive acknowledgement can be sent, indicating that all packets up to packet #n are
received. Alternatively, a negative acknowledgement may be sent, indicating that packet #n is not received.
Figure 20.6: Sliding window protocol.
In sliding window protocol, a certain number of packets (say seven) are sent by the source without waiting for
an acknowledgement. The destination can send a single acknowledgement for packet 3 indicating that 3
packets are received. Using this approach, the number of acknowledgements can be reduced, and the
throughput can be increased.
The sliding window protocol also addresses flow control. If the destination cannot receive packets at the speed
with which the source sends them, the destination can control the packet flow by withholding
acknowledgements. Using this simple protocol, the TCP layer takes care of flow control and error control and
informs the source that the packets are being received. We will discuss the details of the IP and TCP layers in
the next two chapters.
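The cumulative-acknowledgement scheme described above, where the source resends everything from the first unacknowledged packet (often called go-back-N), can be sketched as follows; the window size and loss pattern are illustrative:

```python
# Go-back-N sketch of the sliding window idea: up to `window` packets are
# outstanding; a cumulative ACK for packet n acknowledges everything up to n,
# and after a loss the sender resends from the first unacknowledged packet.

def go_back_n(n_packets: int, window: int, lost: set) -> int:
    base = 0              # first unacknowledged packet
    transmissions = 0
    drops = set(lost)     # each listed packet is lost once, then gets through
    while base < n_packets:
        acked_up_to = base
        for pkt in range(base, min(base + window, n_packets)):
            transmissions += 1
            if pkt in drops:
                drops.discard(pkt)   # this copy is lost; retransmit will pass
                break                # packets after the loss must be resent
            acked_up_to = pkt + 1    # cumulative ACK advances
        base = acked_up_to           # slide the window forward
    return transmissions

# 10 packets, window of 7, no losses: exactly one transmission per packet
assert go_back_n(10, 7, lost=set()) == 10
# Losing packet 4 once forces packets from 4 onwards to be resent
assert go_back_n(10, 7, lost={4}) == 11
```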
20.4.2 Congestion Control
On the Internet, many connections get established and closed, so the traffic on the Internet is difficult to
predict. If the traffic suddenly goes up, there will be congestion in the network and, as a result, some packets
may be discarded by the routers. Every host has to have some discipline in transmitting its packets; there is no
point in pushing packets onto the network if there is congestion.
In TCP/IP networks, congestion control is done through an additive-increase, multiplicative-decrease
mechanism. To start with, a congestion window size is fixed. If there is congestion, the window size is reduced
to half. If the congestion is reduced, the window size is increased by one.
In TCP, congestion control is done through a mechanism called additive increase/multiplicative decrease. A
congestion window size is fixed at the beginning of the transmission, such as 16 packets. If there is sudden
congestion and a packet loss is detected, TCP reduces the congestion window size to 8. If another packet
loss is detected, the window size is reduced to 4, and then to 2. The decrease is multiplicative.
If the congestion is reduced on the network, and acknowledgements for the packets are being received by the
source, for each acknowledgement received, the TCP increases the window size by 1. If 4 was the earlier
window size, it becomes 5, then 6 and so on. The increase is additive.
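This additive-increase/multiplicative-decrease behaviour can be sketched as a small loop over loss and acknowledgement events:

```python
# Sketch of TCP's additive-increase/multiplicative-decrease (AIMD):
# the congestion window halves on a detected loss and grows by one
# packet per acknowledgement otherwise.

def aimd(events: list, cwnd: int = 16, floor: int = 1) -> list:
    history = []
    for event in events:                  # each event is "ack" or "loss"
        if event == "loss":
            cwnd = max(cwnd // 2, floor)  # multiplicative decrease
        else:
            cwnd += 1                     # additive increase
        history.append(cwnd)
    return history

# Starting at 16: two losses halve it to 8 and then 4; ACKs grow it linearly
assert aimd(["loss", "loss", "ack", "ack", "ack"]) == [8, 4, 5, 6, 7]
```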
This simple mechanism for congestion control ensures that the transmission channel is used effectively.
20.5 USER DATAGRAM PROTOCOL (UDP)
TCP provides a reliable service by taking care of error control and flow control. However, the processing
required for the TCP layer is very high, and hence TCP is called a heavyweight protocol. In some applications,
such as real-time voice/video communication and network management, such high processing requirements
create problems. For such applications, another transport protocol is used: the User Datagram Protocol
(UDP). UDP provides a connectionless service. It sends the packets to the destination one after the other,
without caring whether they are received correctly or not. It is the job of the application layer to take care of the
problems associated with the lack of acknowledgements and error control. Simple Network Management
Protocol (SNMP), which is used for network management, runs above UDP.
To provide reliable service, the TCP layer does a lot of processing, and hence it is called a heavyweight
protocol. UDP is another transport layer protocol; it provides connectionless service, and processing of UDP
packets is much faster.
Note: In applications such as real-time voice communication, if a packet is lost, there is no point in asking for a
retransmission because it causes a lot of delay. For such applications, UDP is a better choice than TCP.
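UDP's fire-and-forget style is visible directly in the sockets API. A minimal sketch, using a loopback receiver as the destination (delivery over loopback is dependable, which is not true of UDP in general):

```python
import socket

# Sketch of UDP's connectionless service: datagrams are handed to the
# network one by one, with no connection setup and no acknowledgements.
# A loopback receiver stands in for the remote destination here.

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for chunk in (b"part1", b"part2"):
    send_sock.sendto(chunk, addr)         # fire and forget: no ACK expected

first, _ = recv_sock.recvfrom(1024)       # datagram boundaries are preserved
send_sock.close()
recv_sock.close()
assert first == b"part1"
```

Note that there is no `connect`, no retransmission, and no ordering guarantee: on a real network, either datagram could be lost or reordered, and the application would have to cope.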
20.6 TCP/IP OVER SATELLITE LINKS
The TCP/IP protocol stack can be used in any network; the transmission medium can be cable, optical fiber,
terrestrial radio, or satellite radio. However, when TCP/IP is used in satellite networks, the stack poses
problems due to the characteristics of satellite channels. The problems are as follows:
The satellite channel has a large propagation delay. Large delay causes timeouts in the flow control
protocol. The source assumes that the packets have not reached the destination and resends the packets.
As a result, the destination receives duplicate packets. This causes congestion in the network.
The satellite channels have a larger Bit Error Rate (BER) than the terrestrial channels. As a result, packet
losses will result in more retransmissions. When retransmission of packets is required, TCP automatically
reduces the window size, though the network is not congested. As a result, the throughput of the channel
goes down.
Satellite communication systems are characterized by large propagation delay and high Bit Error Rate. In such
systems, using TCP/IP creates problems because of the timeouts for acknowledgements and retransmissions.
TCP/IP protocols are suitably modified to work on satellite channels.
To overcome these problems, a number of solutions are proposed, which include the following:
To improve the link performance, error-correcting codes are used. Errors can be corrected at the
destination, and retransmissions can be reduced.
Instead of using the flow control protocols at the transport layer, these protocols can be implemented at the
datalink layer, so that the TCP layer does not reduce the window size.
Instead of using a default window size of 16 bits in the TCP segment, 32 bits can be used to increase the
throughput.
For bulk transfer of information from the source to the destination, multiple TCP connections can be
established.
Another interesting technique used is called spoofing. A small piece of software will run at the source,
generating the acknowledgements locally. The local TCP layer is cheated by the spoofing software. The
spoofing software in turn receives the actual acknowledgement from the destination and discards it. If a
packet is to be retransmitted because it was received in error at the destination, the spoofing software
requests the TCP layer to resend the packet.
Many improvements in the TCP/IP protocol layers are required for its use in satellite networks.
20.7 INTERPLANETARY (IPN) INTERNET
The Internet, as we know it, is a network of connected networks spread across the earth. The optical
fiber-based backbone (the set of high-capacity, high-availability communication links between network traffic
hubs) of the Internet supports very high data rates with negligible delay and negligible error rates, and
continuous connectivity is assured. If there is loss of packets, it implies congestion of the network.
Now, imagine Internets on other planets and spacecraft in transit. How do we go about interconnecting these
internets with the Earth's Internet? Or think of having an Internet service provider to the entire solar system.
The brainchild of Vinton Cerf, the Interplanetary Internet (InterPlaNet or IPN) aims at achieving precisely this.
IPN's objective is to define the architecture and protocols that permit interoperation of the Internet on the earth
with remotely located internets situated on other planets and on spacecraft in transit; in other words, to build
an Internet of internets.
The deep space communication channels are characterized by high data loss due to errors, transient link
outages, asymmetric data rates, unidirectional channels, and power-constrained end systems. Developing
protocols that work in this type of communication environment is a technological challenge; such protocols will
also lead to better solutions for systems on the earth.
The three basic objectives of the IPN are these:
Deploy low delay internets on other planets and remote spacecraft.
Connect these distributed (or disconnected) internets through an interplanetary backbone that can handle
the high delay.
Create gateways and relays to interface between high delay and low delay environments.
The TCP/IP protocol suite cannot be used for the IPN for the following reasons:
Communication capacity is very expensive, every bit counts, and so protocol overhead has to be
minimized.
Interactive protocols do not work and so:
Reliable sequential delivery takes too long.
Negotiation is not practical.
It is difficult to implement flow control and congestion control protocols.
Retransmission for error recovery is expensive.
Protocols need to be connectionless.
The proposed IPN architecture is shown in Figure 20.7. The thrust areas for implementation of this architecture
are:
Deployment of internets on various planets and spacecraft
Inter-internet protocols
Interplanetary gateways (IG)
Stable backbone
Security of the user data and the backbone
Figure 20.7: Architecture of Interplanetary Internet.
Presently work is underway by IPNSIG (Interplanetary Internet Special Interest Group: http://www.ipnsig.org) to
define and test the new set of protocols and the backbone architecture. The work includes defining new layers
in the place of TCP and IP, using RF instead of fiber for the backbone network, as well as the addressing
issues. (A few years from now, if you have to send mail to someone, you may need to specify .earth or .mars
extension to refer to the internets of Earth and Mars.)
The vision of Vinton Cerf, the Interplanetary Internet will interconnect the various internets located on the
Earth, other planets, and spacecraft. To develop this network, new protocols need to be designed because the
TCP/IP protocol stack does not work due to enormous propagation delays, low speeds, and the need for
connectionless services.
The time frame for the IPN protocol testing is 2003+ for Mars Internet. It is proposed to use schools as test
beds for testing the new protocols. Though the IPN may be of practical use perhaps 20 years from now, it is
expected that the outcome of this research will solve many of the high-delay problems encountered on the
networks on the earth itself.
Summary
This chapter presented an overview of the TCP/IP protocol stack. Above the physical and datalink layers, the
Internet Protocol (IP) layer takes care of addressing and routing. IP provides a connectionless service, and
there is no guarantee that all the packets will be received. The packets also may be received out of sequence.
It is the job of transport layer protocol to take care of these problems. The transport layer provides end-to-end
reliable service by taking care of flow control, error control, and acknowledgements. Above the TCP layer,
different application layer protocols will be running, such as Simple Mail Transfer Protocol (SMTP) for e-mail,
File Transfer Protocol (FTP) for transferring files, and Hypertext Transfer Protocol (HTTP) for the World Wide
Web. The user datagram protocol (UDP) also runs above the IP, but it provides a connectionless service. Since
the processing involved in UDP is less, it is used for network management and real-time communication
applications. The TCP/IP protocol stack presents problems when used in satellite networks because satellite
networks have high propagation delay. The TCP layer has to be suitably modified for use in satellite networks.
Another innovative project is the Interplanetary Internet, which plans for the interconnection of internets on
different planets and spacecraft. The results of this research will help improve the TCP/IP protocol stack
performance in high-delay networks.
References
L.L. Peterson and B.S. Davie. Computer Networks: A Systems Approach. Morgan Kaufmann Publishers
Inc., CA, 2000. This book gives a systems approach, rather than a layered approach, to computer networks.
A.S. Tanenbaum. Computer Networks. Prentice Hall, Inc., NJ, 1996. This book gives a layered approach to
describe the computer networking protocols.
N. Ghani and S. Dixit. "TCP/IP Enhancements for Satellite Networks". IEEE Communications Magazine,
Vol. 37, No. 7, July 1999.
http://www.ietf.org The Requests for Comments (RFCs) that give the complete details of the TCP/IP
protocol stack can be obtained from this site. Each protocol specification gives the complete details of
implementation.
http://www.ipnsig.org Web site of Interplanetary Internet.
Questions
1. Explain the functions of different layers in the TCP/IP protocol architecture.
2. Explain the operation of TCP and IP.
3. IP does not provide a reliable service, but TCP provides end-to-end reliable service. How?
4. What are the limitations of the TCP/IP protocol stack?
5. Differentiate between TCP and UDP.
6. List the problems associated with running the TCP/IP protocol stack in a satellite network.
7. Explain how congestion is controlled in TCP/IP networks.
Exercises
1. Write a technical report on Interplanetary Internet.
2. Prepare a technical report on running the TCP/IP protocol stack on a satellite network.
3. Two systems, A and B, are connected by a point-to-point link, but the communication is only from A
to B. Work out a mechanism to transfer a file from A to B using UDP as the transport protocol.
4. Discuss the benefits of using UDP for data applications if the transmission link is very reliable and
if there is no congestion on the network.
5. Compare the performance of stop-and-wait protocol and sliding window protocol in terms of delay
and throughput.
Answers
1. You can obtain the details of Interplanetary Internet at http://www.ipnsig.org.
2. The TCP/IP protocol stack does not perform well on a satellite network because of the large propagation
delay. There will be timeouts before the acknowledgement is received, so packets are retransmitted by the
sender though the packets are received at the other end. This causes unnecessary traffic on the network.
To overcome these problems, spoofing and link accelerators are used.
3. When the communication is one-way only, the TCP protocol cannot be used at the transport layer because
acknowledgements cannot be sent in the reverse direction. In such a case, the connectionless transport
protocol UDP has to be used. To transfer a file, the file has to be divided into UDP datagrams and sent over
the communication link. At the receiving end, the datagrams have to be assembled by the application layer
protocol. It is possible that some of the datagrams are received with errors, but retransmission cannot be
done because of lack of the reverse link. The receiver has to check every datagram for errors, and if there
is an error even in a single packet, the whole file is discarded. The sender may send the file multiple times
so that at least once all the datagrams are received without error.
4. If the transmission medium is very reliable, the packets will be received correctly. Also, if there is no
congestion, the packets are likely to be received without variable delay and in sequence. Hence, UDP
provides a fast data transfer, and the transmission medium is utilized effectively.
5. Stop-and-wait protocol is very inefficient because the communication channel bandwidth is not utilized well.
After the first packet is sent, an acknowledgement has to be received, and then only the second packet can
be sent. On the other hand, in sliding window protocol, a number of packets can be sent without waiting for
acknowledgements. Hence, channel utilization is better if sliding window protocol is used.
Projects
1. Interconnect two PCs using an RS232 link. Simulate a high delay in the network, run the TCP/IP protocol
stack, and observe the throughput to study the impact of the delay on the TCP/IP stack.
2. Develop software for spoofing: when the acknowledgement receipt is delayed from the other machine,
an acknowledgement can be locally generated to cheat the TCP layer.
Chapter 21: Internet Protocol (IP)
The Internet Protocol (IP) is the heart of the Internet. Networks running different protocols are connected
together to form the global network because of IP. In this chapter, we will study IP Version 4, which presently
runs on most routers and end systems. We will also discuss IP Version 6, the next version of IP, which is being
deployed and will become predominant in the coming years. We will also cover various routing protocols, and
details of the Internet Control Message Protocol (ICMP) are presented as well.
21.1 OVERVIEW OF INTERNET PROTOCOL
Internet Protocol (IP) is the protocol that enables various networks to talk to each other. IP defines the data
formats for transferring data between various networks. It also specifies the addressing and routing
mechanisms. The service delivered by IP is unreliable connectionless packet service. The service is unreliable
because there is no guarantee that the packets will be delivered: packets may be lost if there is congestion,
though "best-effort" is made for the delivery. The packets may not be received in sequence, packets may be
duplicated, and packets may arrive at the destination with variable delay. The service is connectionless
because each packet is handled independently. IP defines the rules for discarding packets, generating error
messages, and how hosts and routers should process the packets.
The functions of the IP layer are addressing and routing. IP provides a connectionless service: each packet is
handled independently by the router.
IP is implemented as software. This software must run on every end system and on every router in any internet
that uses the TCP/IP protocol suite.
Note: The IP software runs on every router as well as on every end system.
21.2 INTERNET ADDRESSING SCHEME
Each end system on the network has to be uniquely identified. For this, the addressing scheme is very
important. Since each end system is a node on a network, the addressing scheme should be such that the
address contains both an ID for the network and an ID for the host. This scheme is followed in the IP
addressing scheme. Each node on a TCP/IP network is identified by a 32-bit address. The address consists of
the network ID and the host ID. IP address can be of five formats, as shown in Figure 21.1.
Figure 21.1: IP address formats.
In IP Version 4, each system on the network is given a unique 32-bit IP address. The address consists of
network ID and the host ID.
Class A addresses: Class A addressing is used when a site contains a small number of networks, and each
network has many nodes (more than 65,536). Seven bits are used for network ID and 24 bits for host ID. A
class A address has 0 in the first bit.
The maximum number of class A networks can be 126 (the network addresses 0 and 127 are reserved). Each
network can accommodate up to (2^24 - 2) hosts. Note that two host addresses are reserved (all zeros and all
ones).
The IP addresses are divided into five classes: A, B, C, D, and E. The number of bits assigned to the network
ID field and the host ID field are different in each class.
Class B addresses: Class B addressing is used when a site has a medium number of networks and each
network has more than 256 but less than 65,536 hosts. Fourteen bits are allocated for network ID and 16 bits
for the host ID. A class B address has 10 for the first two bits.
Class C addresses: Class C addressing is used when a site has a large number of networks with each
network having fewer than 256 hosts. Twenty-one bits are allocated to network ID and 8 bits to host ID. A class
C address has 110 for the first three bits.
Class D addresses: These addresses are used when multicasting is required, such as when a datagram has
to be sent to multiple hosts simultaneously.
Class E addresses: These addresses are reserved for future use.
In the IP address, if the host address bits are all zeros, the IP address represents the network address. If the
host address bits are all ones, the IP address is the broadcast address: the packet is addressed to all hosts on
the network.
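The class of an address can be determined from its leading bits alone, which is equivalent to a range test on the first octet; a minimal sketch (the function name is illustrative):

```python
def address_class(ip):
    """Classify a dotted-decimal IPv4 address by its leading bits:
    0 -> A, 10 -> B, 110 -> C, 1110 -> D, 1111 -> E."""
    first = int(ip.split('.')[0])
    if first < 128:
        return 'A'
    if first < 192:
        return 'B'
    if first < 224:
        return 'C'
    if first < 240:
        return 'D'
    return 'E'
```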
A host may be connected to more than one network; a router always is. Such computers are known as
multihomed hosts. These hosts need multiple IP addresses, one for each of the machine's network
connections. Hence, an IP address really identifies a network connection rather than a machine.
When the sender wants to communicate over a network but does not know the network ID, network ID is set to
all zeros. When that network sends a reply, it contains its network ID, which is recorded by the sender for future
use.
Some of the drawbacks of this addressing scheme are:
In a class C network, if the number of hosts increases to more than 256, the whole addressing scheme has
to be changed because network ID has to change. This calls for lots of work for the system administrator.
If a host is disconnected from one network and connected to another network, the IP address has to
change.
Note: The addressing format of IP Version 4 is not an efficient addressing scheme because changing from
one class of address to another class is very difficult.
An important point to be noted while transmitting the IP address is that integers are sent most significant byte
first ("Big-Endian style") so that all the machines can interpret the correct address.
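Python's struct module can illustrate this byte ordering; '!' selects network byte order, and the address used here is purely illustrative:

```python
import struct

# The 32-bit address 254.127.129.170 as an integer.
addr = (254 << 24) | (127 << 16) | (129 << 8) | 170

# '!' selects network byte order: most significant byte first on the wire.
wire = struct.pack('!I', addr)
# wire is b'\xfe\x7f\x81\xaa', i.e. the byte 254 is transmitted first.
```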
If a datagram has to be sent to multiple hosts, a multicast addressing scheme specified by class D addresses is
used. IP multicast addresses can be assigned by a central authority (called well-known addresses) or
temporarily created (called transient multicast groups). Multicast addressing is useful for applications such as
audio and video conferencing.
In a class D addressing scheme, the first four bits are 1110. The remaining 28 bits identify the multicast
address. Obviously, this address can be used only in the destination IP address and not in the source IP
address.
The multicast address is used to send a datagram to multiple hosts. This addressing mechanism is required for
applications such as audio/video conferencing.
An IP multicast address is mapped to the Ethernet multicast address by placing the lower 23 bits of the IP
address into the low-order 23 bits of the Ethernet multicast address. However, note that this mapping is not
unique, because five bits in the IP address are ignored. Hence, it is possible that some hosts on the Ethernet
that are not part of the multicast group may receive a datagram erroneously, and it is the host's responsibility to
discard the datagram.
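The 23-bit mapping just described can be sketched as follows; note how two different IP multicast addresses map to the same Ethernet address, which is exactly the ambiguity mentioned above (the fixed prefix 01-00-5E is the standard Ethernet multicast prefix):

```python
def multicast_mac(ip):
    """Map an IPv4 multicast address to its Ethernet multicast MAC:
    the fixed prefix 01-00-5E followed by the low 23 bits of the IP
    address. The top 5 bits of the 28-bit multicast group are ignored,
    so the mapping is not unique."""
    octets = [int(x) for x in ip.split('.')]
    low23 = ((octets[1] & 0x7F) << 16) | (octets[2] << 8) | octets[3]
    mac = [0x01, 0x00, 0x5E,
           (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF]
    return '-'.join('%02X' % b for b in mac)
```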
Special routers called multicast routers are used to route datagrams with multicast addresses. Internet's
multicast backbone (MBONE) has multicast routers that route the multicast traffic over the Internet. If a router
does not support multicast routing, the multicast datagram is encapsulated in the normal unicast IP datagram,
and the receiver has to interpret the multicast address.
21.2.1 Dotted Decimal Notation
Because it is difficult for us to read the IP address if it is written in 32-bit format, dotted decimal notation is
used. If the IP address is 11111110 01111111 10000001 10101010,
it can be represented as 254.127.129.170 for easy readability.
Internet Network Information Center (InterNIC) is the central authority to assign IP addresses. InterNIC gives
the network ID and the organization is free to assign host addresses. Many organizations assign local IP
addresses without obtaining the network ID from InterNIC. This is OK if the network remains isolated, but if it is
connected to the Internet later on, there may be an address clash.
The 32-bit IP address is represented as a.b.c.d where the values of a, b, c and d can be between 0 and 255.
This notation is called dotted decimal notation.
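Converting a 32-bit address to this a.b.c.d notation is a matter of extracting the four bytes; a minimal sketch (the function name is illustrative):

```python
def to_dotted(addr32):
    """Render a 32-bit IP address in dotted decimal notation (a.b.c.d),
    extracting one byte at a time from the most significant end."""
    return '.'.join(str((addr32 >> shift) & 0xFF) for shift in (24, 16, 8, 0))
```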
21.3 ADDRESS RESOLUTION PROTOCOL
Consider a case when a router connected to a LAN receives a packet for one of the nodes on the LAN. The
packet is routed up to the router based on the network ID. How does the router send the packet to that specific
node? Remember, the packet has to be sent to the node using the physical address, and the received packet
contains only the IP address.
Address resolution protocol (ARP) is used when the router does not know the physical address of the node to
which the packet has to be sent. The router broadcasts an ARP request containing the node's IP address, and
the node with that IP address responds with its physical address.
The ARP solves this problem. The router connected to the LAN sends a broadcast message with the IP
address of the node as the destination address. All the nodes receive this packet, and the node with that IP
address responds with its physical address. Subsequently, the packets will be transmitted to that node with that
physical address. Since broadcasting is a costly proposition, the router can keep a cache in which the physical
addresses corresponding to the IP addresses are stored for subsequent use. Since a physical address may
change, the router keeps each cache entry only for a certain period of time, after which it is discarded and the
address has to be resolved again.
In the Ethernet frame, the ARP message is embedded in the data field. The type field of the Ethernet frame
contains the code 0806 (hex) to indicate that it is an ARP message.
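The ARP cache with expiring entries described above might be sketched as follows; the class name, the timeout value, and the addresses in the test are illustrative, not part of the protocol specification:

```python
import time

class ArpCache:
    """Cache of IP address -> physical address bindings with an expiry
    time, so that repeated ARP broadcasts are avoided."""

    def __init__(self, ttl=300):
        self.ttl = ttl
        self.table = {}  # ip -> (mac, time the entry was stored)

    def add(self, ip, mac):
        self.table[ip] = (mac, time.time())

    def lookup(self, ip):
        entry = self.table.get(ip)
        if entry is None:
            return None  # not cached: an ARP request must be broadcast
        mac, stored = entry
        if time.time() - stored > self.ttl:
            del self.table[ip]  # stale: the physical address may have changed
            return None
        return mac
```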
21.4 REVERSE ADDRESS RESOLUTION PROTOCOL
Machines that do not have secondary storage (diskless machines) do not know their IP addresses. At the time
of bootup, the machine has to get its own IP address. However, the physical address of the machine is known,
because it is a part of the hardware, the network card. As you can see, this problem is the opposite of the
earlier problem, and hence reverse address resolution protocol (RARP) was developed.
In reverse address resolution protocol, a server stores the IP addresses and the corresponding physical
(hardware) addresses of the nodes. This protocol is used when the nodes are diskless machines that do not
know their IP addresses.
In RARP, the machine that wants to find out its IP address broadcasts a packet with its own physical address. A
RARP server, which stores the IP addresses corresponding to all the physical addresses (in secondary
storage), receives the packet and then sends a reply to the machine with the information on its IP address. A
requirement for RARP to succeed is that a designated RARP server should be present that contains the lookup
table. Sometimes, for providing a reliable service, primary and secondary RARP servers will be installed in
LANs. If the primary RARP server fails, the secondary RARP server will take over and provide the IP addresses
to the nodes on the LAN.
21.5 IP DATAGRAM FORMAT
The IP that runs on most of the hosts and routers on the Internet is IP Version 4 (IPv4). Gradually, the IP
software will be upgraded to IP Version 6 (IPv6). We will first study the IPv4 and then study IPv6.
The basic unit of data transfer is called a datagram in IP Version 4 and a packet in IP Version 6. We will use the
words datagram and packet interchangeably in the following discussion. The datagram of IPv4 has a header
and the data fields. The detailed format of the datagram is shown in Figure 21.2. Each of the fields is explained below.
Figure 21.2: IPv4 datagram format.
Version number (4 bits): Version number of the IP. The version presently running in most of the systems is
Version 4. Version 6 is now slowly being deployed. This field ensures that the correct version of the software is
used to process the datagram.
Header length (4 bits): Length of the IP header in 32-bit words. Note that some fields in the header are
variable, and so the IP datagram header length may vary. The minimum length of the header is 20 bytes, and
so the minimum value of this field is 5.
Service type (8 bits): These bits specify how the datagram has to be handled by systems. The first three bits
specify the precedence, with 0 indicating normal precedence and 7 network control. Most routers ignore these
bits, but if all the routers implement them, these bits can be used for giving precedence to control information
such as congestion control messages. The 4th, 5th, and 6th bits are called the D, T, and R bits. Setting the D bit
is to request low delay, setting the T bit is to request high throughput, and setting the R bit is to request high
reliability. However, it is only a request; there is no guarantee that the request will be honored. Note that these
bits set the quality of service (QoS) parameters, but there is no guarantee that the required QoS will be
provided. That is why we keep hearing the sentence "IP does not guarantee a desired QoS".
Length (16 bits): Total length of the datagram in bytes including header and data. The length of the data field
is calculated by subtracting the header length from the value of this field. Since 16 bits is allotted to this field,
the maximum size of an IP datagram is limited to 65,535 bytes.
Note that the maximum IP datagram size is much larger than what many networks can accommodate; an
Ethernet LAN, for instance, can handle only 1526 bytes in one frame. In such a case, the datagram has to be
fragmented and sent over the network.
The minimum datagram size that every host and router must handle is 576 bytes. Each fragment contains most
of the original datagram header. The fragmentation is done at routers, but the reassembly is done at the
destination.
Note: The minimum datagram size every host and router must handle is 576 bytes. If larger datagrams are
received, they may need to be fragmented and reassembled.
Identification (16 bits): Unique ID to identify the datagram. This field lets the destination know which fragment
belongs to which datagram. The source address, destination address, and identification together uniquely
identify the datagram on the Internet.
Flags (3 bits): If the first bit, called the do-not-fragment bit, is set to 1, this is to indicate that the datagram
should not be fragmented. If a router cannot handle the datagram without fragmenting it, the datagram is
discarded, and an error message is sent to the source. The second bit is called the more-fragments bit and is
set to 0 to indicate that this is the last fragment. The third bit remains unused.
The time-to-live field is used to ensure that a datagram does not go round and round in the network without
reaching its destination. Every router decrements this field by 1, and if a router receives a datagram with this
field value of 0, the datagram is discarded.
Fragment offset (13 bits): Specifies the offset of the fragment in the datagram, measured in units of 8 octets
(one octet is 8 bits), starting at offset 0. The destination receives all the fragments and reassembles them using
the offset value, starting with 0 and proceeding to the highest value.
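Fragmentation at a router and reassembly at the destination, using offsets in units of 8 octets, can be sketched as follows; the dictionary field names are illustrative:

```python
def fragment(data, mtu_payload):
    """Split datagram data into fragments. Each fragment's offset is
    recorded in units of 8 octets, so every fragment except the last
    must carry a multiple of 8 bytes."""
    assert mtu_payload % 8 == 0
    frags = []
    for start in range(0, len(data), mtu_payload):
        chunk = data[start:start + mtu_payload]
        more = (start + mtu_payload) < len(data)  # the more-fragments bit
        frags.append({'offset': start // 8, 'mf': more, 'data': chunk})
    return frags

def reassemble(frags):
    """The destination sorts fragments by offset and concatenates them,
    so the fragments may arrive in any order."""
    return b''.join(f['data'] for f in sorted(frags, key=lambda f: f['offset']))
```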
Time-to-live (8 bits): It may happen that a packet on the Internet goes round and round without reaching the
destination, particularly when dynamic routing algorithms are used. To avoid such unnecessary traffic, this field
is very useful. It contains the number of hops a packet can travel. At every router, this field is decremented by
1; either the packet reaches the destination before the field becomes 0 or, if it reaches 0 earlier, the packet is
discarded. A typical default hop count is 64: the packet can then traverse at most 64 routers.
Note: Some packets may be discarded by routers, which is why IP is said to provide an unreliable
service. TCP has to take care of the lost datagrams by asking for retransmission.
Protocol (8 bits): The protocol field specifies which higher layer protocol is encapsulated in the data area of
the IP packet. The higher layer protocol can be TCP or UDP.
Header checksum (16 bits): The bit pattern is considered as a 16-bit integer, and these bit patterns are added
using one's complement arithmetic. The one's complement of the result is taken as the header checksum. Note
that checksum is calculated only for the header, and this value has to be calculated at each router because
some of the fields in the header (such as time-to-live) change at each router. For the calculation of the
checksum, this field itself is taken to be all zeros.
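The one's complement checksum computation can be sketched as follows; the 20-byte header used in the usage note is a commonly used worked example, not taken from this chapter:

```python
def header_checksum(header):
    """One's-complement checksum of an IPv4 header: the header is viewed
    as 16-bit integers, summed with end-around carry, and complemented.
    The checksum field itself must be zero in `header`; an even header
    length is assumed, which always holds for IP headers."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF
```

For the header 45 00 00 73 00 00 40 00 40 11 00 00 c0 a8 00 01 c0 a8 00 c7 (checksum field zeroed), the function returns 0xB861, which is the value the sender would place in the checksum field.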
Source IP address (32 bits): This field contains the IP address of the source that is sending the datagram.
Destination IP address (32 bits): This field contains the IP address of the final destination.
Options: This is a variable-length field. This field contains data for network testing and debugging. The format
for the option field is option code of one octet followed by data for the option. These options are for operations
such as recording the route of a datagram, timestamping a datagram along the route, and source routing that
specifies the route to be taken by the datagram.
When the record route option is set, each router adds its IP address in the options field and then forwards it.
When the source route option is set and the IP addresses of all the hops are mentioned in the options field, the
datagram takes only that route. This is not of any significance from an end user point of view but network
administrators use it for finding the throughput of the specific paths. However, one needs to know the topology
of the network to specify the source routing.
When the timestamp option is set, each router inserts its IP address (32 bits, which again is optional) and
timestamp (32 bits) in the options field, which can be analyzed at the destination.
Padding (variable): To make the IP header an exact multiple of 32 bits, padding bits are added if required.
Data (variable): This is a variable field whose length is specified in the datagram header. It should be an
integer multiple of 8 bits.
21.6 SUBNET ADDRESSING
The IP addressing scheme discussed earlier has a main problem: it is likely that the IP addresses will get
exhausted fast. To overcome this problem, subnet addressing and supernet addressing are used.
In subnet addressing, a site that has a number of networks will have a single IP address. Consider a site with
three physical networks. Instead of assigning three IP addresses, only one IP address can be assigned.
In subnetting, the IP address is divided into two portions: the network portion and the local portion. The network
portion is used by the external network, and the local portion is managed by the local administrator. The local
portion is divided into the physical network portion and the host portion. In essence, we are creating a
hierarchical network with the Internet as part of the address and the local part containing the physical address
portion and the host portion. For instance, in decimal notation, the first two fields can be used for the Internet
portion, the third field for local network ID, and the fourth for the host, so that of the 32 bits, 16 bits are for
network ID, 8 bits are for the local network ID, and 8 bits are for the host ID.
To make better use of the IP addresses, subnet addressing is used. In subnet addressing, a site that has a
number of networks uses a single IP address. A packet received with the subnet addressing is routed to the
correct destination using a subnet mask.
For the other routers to know that a subnet addressing scheme is used at a site, there must be a way to
convey this. This is done through a 32-bit subnet mask. Bits in the subnet
mask are set to 1 if the subnet treats the corresponding IP address bit as network address bit and 0 if the
subnet does not treat the corresponding IP address bit as a network address bit. As an example, a site may
have the first 16 bits for the network ID, 8 bits for the local network ID, and the remaining 8 bits for the host ID.
In such a case, the subnet mask is 11111111 11111111 11111111 00000000, which is represented in dotted
decimal notation as 255.255.255.0.
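Applying a subnet mask is a bitwise AND between the address and the mask; a minimal sketch with hypothetical addresses:

```python
def network_part(ip, mask):
    """Bitwise-AND a dotted-decimal address with its subnet mask to get
    the network (and subnet) portion used for routing."""
    ip_oct = [int(x) for x in ip.split('.')]
    mask_oct = [int(x) for x in mask.split('.')]
    return '.'.join(str(a & m) for a, m in zip(ip_oct, mask_oct))
```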
21.7 SUPERNET ADDRESSING
In the Internet addressing scheme, it is likely that the class B addresses get exhausted faster than class A or
class C addresses. To overcome this problem, the supernet addressing scheme is introduced. Instead of
allocating a class B address, a site can be allocated a chunk of class C addresses. At the site, the physical
networks can be allocated these class C addresses.
In supernet addressing, a site is allocated a block of class C addresses instead of one class B address.
However, this results in a large routing table. To solve this problem, classless inter-domain routing technique is
used.
In addition to making better use of address space, this supernet addressing scheme allows a hierarchy to be
developed at a site. For instance, an Internet service provider (ISP) can be given a block of class C addresses
that it can use to create a supernet with the networks of the ISP's subscribers.
Instead of one class B address, if a large number of class C addresses are given to a site, the routing table
becomes very large. To solve this problem, the classless inter-domain routing (CIDR) technique is used, in
which the routing table contains the entries in the format (network address, count)
where network address is the starting of the IP address block and count is the number of IP addresses in that
block.
If each ISP is given a block of class C addresses with one entry in the routing table, the routing to that ISP
becomes very easy.
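A routing table with (network address, count) entries, where count is the number of IP addresses in the block, can be looked up as sketched below; the addresses in the test are hypothetical:

```python
def lookup(routing_table, ip):
    """Return the (network address, count) entry whose block contains ip,
    where count is the number of IP addresses in the block; one entry can
    thus summarize a whole chunk of class C networks."""
    def to_int(addr):
        a, b, c, d = (int(x) for x in addr.split('.'))
        return (a << 24) | (b << 16) | (c << 8) | d
    n = to_int(ip)
    for network, count in routing_table:
        base = to_int(network)
        if base <= n < base + count:
            return network, count
    return None
```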
21.8 LIMITATIONS OF IP VERSION 4
The IP as we know it has been running since the late 1970s. During those early days of the Internet, there were no
PCs, and computers were either super computers, mainframes, or minis. With the advent of PCs, there has
been a tremendous growth in the use of computers and the need to network them, and above all to be on the
Internet to access worldwide resources. In the 1990s, the need was felt to revise the IP protocol to deal with the
exponential growth of the Internet, to provide new services that require better security, and to provide real-time
services for audio and video conferencing. IP Version 4 has the following limitations:
The main drawback of IP Version 4 is its limited address space due to the address length of 32 bits. Nearly
4 billion addresses are possible with this address length, which appears very high for a world population of 6
billion, a large percentage of whom, in the developing world, have never seen a computer. But
now we want every TV to be connected to the Internet and we want Internet-enabled appliances such as
refrigerators, cameras, and so on. This makes the present address length of 32 bits insufficient, and it
needs to be expanded.
The present IP format does not provide the necessary mechanisms to transmit audio and video packets
that require priority processing at the routers so that they can be received at the destination with constant
delay, not variable delay. The Internet is being used extensively for voice and video communications, and
the need for change in the format of the IP datagram is urgent.
Applications such as e-commerce require high security, both in terms of maintaining secrecy during
transmission and authenticating the sender. IP Version 4 has very limited security features.
The IP datagram has a fixed header with variable options, because of which each router has to do a great deal
of processing; this calls for high processing power at the routers and introduces considerable delay.
The drawbacks of IP Version 4 are limited addressing space, inability to support real-time communication
services such as voice and video, lack of enough security features, and the need for high-processing power at
the routers to analyze the IP datagram header.
Due to these limitations of IP Version 4, IP Version 6 has been developed. IP Version 5 has been used on an
experimental basis for some time, but it was not widely deployed.
21.9 FEATURES OF IP VERSION 6
The important features of IP Version 6 are:
Increased address space: instead of 32 bits, IPv6 uses an address length of 128 bits.
Increased security features.
Modified header format to reduce processing at the routers.
Capability for resource allocation to support real-time audio and video applications.
Support for unicast, multicast, and anycast addressing formats.
New options to provide additional facilities.
Compatibility with IPv4. However, translator software is required for conversion of IPv4 datagrams into IPv6
packets.
In IP Version 6, the address length is 128 bits. In the future, every desktop, laptop, mobile phone, TV set, etc.
can be given a unique IP address.
Note: IP Version 6 provides backward compatibility with IP Version 4 because a very large number of
routers have IP Version 4 software, and it will be many more years before all routers run IP Version 6
software.
21.9.1 IPv6 Packet Format
In IPv4, the header contains many fields because of which the router has to do lots of processing. In IPv6,
some header fields have been dropped or made optional for faster processing of the packets by the router.
Optional headers (called extension headers) have been included that provide greater flexibility. Flow labeling
capability has been added for real-time transmission applications such as audio and video.
The IPv6 packet consists of the IPv6 header, a number of optional headers, and the data. The IPv6 packet
general format is shown in Figure 21.3, and the packet header format is shown in Figure 21.4.
Figure 21.3: IPv6 packet general format.
Figure 21.4: IPv6 packet header.
In IP Version 6, there are many optional headers. Optional headers provide the necessary flexibility. If these are
absent, the processing of the header information can be done very fast at the routers.
Version (4 bits): Specifies the version number 6.
Priority or traffic class (8 bits): This field specifies the type of data being sent in the data field. Priority is
indicated through this field.
Flow label (20 bits): Audio and video data needs to be handled as special packets for real-time applications.
This field specifies the special handling of such type of data.
Payload length (16-bit unsigned integer): Length, in octets, of the rest of the packet following the IPv6
header; extension headers are considered part of the payload.
Next header (8 bits): This field identifies the type of header following the IPv6 header.
Hop limit (8-bit unsigned integer): This field is decremented by 1 at each node that forwards the packet.
When the value in this field becomes 0, the packet is discarded.
Source address (128 bits): This field specifies the source address of the packet.
Destination address (128 bits): This field specifies the destination address of the packet.
IPv6 extension headers: Depending on the requirement, the IPv6 header can be followed by a number of
optional headers, known as extension headers. Extension headers are not normally processed by intermediate
nodes, the exception being the hop-by-hop options header; if this header is present, it must immediately follow
the IPv6 header.
Extension headers must be processed in the order given in Figure 21.5. Figure 21.6 gives the format of
different extension headers from which the functionality of each header is obvious, along with the number of
bits required for each field.
Figure 21.5: IPv6 packet with all extension headers.
Figure 21.6: IPv6 routing header formats.
If no extension headers are present, the IP packet has an IPv6 header with the next header field containing
TCP, followed by the TCP header and data.
The optional headers are called extension headers. If no extension headers are present, the IP packet has the
IP header with the next header field as TCP.
If the routing header is to be included as an extension header, the IPv6 header contains Routing in the next
header field, followed by the routing header. The next header field of the routing header contains TCP. The
routing header is then followed by the TCP header and the data.
The IPv6 format appears complicated, with very long address fields and many extension headers. However,
this reduces a lot of the processing burden on the routers, and so the routers can do the packet switching very
fast. IPv6 will provide the necessary functionality for real-time audio and video transmission as well as improved
security.
However, very few routers and end systems are presently running IPv6 software, and it will be many more years
before IPv6 becomes universal. Microsoft Windows XP supports IPv6, and soon most of the operating systems
and routers will be upgraded to support IPv6.
21.10 ROUTING PROTOCOLS
When a packet arrives at a router, the router has to make a decision as to which node/router the packet has to
be sent to. This is done using the routing protocols.
When a router receives a packet, the header information has to be analyzed to find the destination address,
and the packet has to be forwarded toward the destination. The router does this job with the help of the routing
tables.
Each end system and router maintains a routing table. The routing table can be static or dynamic. In static
routing, each system maintains a routing table that does not change often. In dynamic routing, the routing table
is updated periodically. Recording of the route taken by a packet can be supported: each router appends its
IP address to the datagram. This is used mainly for testing and debugging of the network. While routing the
packet, the decision has to be made with the following issues in mind:
Datagram lifetime
Segmentation and reassembly
Error control
Flow control
Datagram lifetime: If dynamic routing is used, the possibility exists for a datagram to loop indefinitely through
the Internet. To avoid this, each datagram can be marked with a lifetime, typically a hop count. When a
datagram passes through a router, the count is decreased. Alternatively, a true measure of time can be used,
which requires a global clocking mechanism. When a datagram with a hop count of zero is received by a
router, the datagram is discarded.
If there is congestion in the network, the router may forward the packet through an alternate route. However, if
source routing is specified in the packet, then the packet has to be routed according to these instructions.
Segmentation and reassembly: When a router receives a datagram that has to be routed to a network that
supports a smaller packet size, the datagram has to be broken into small fragments; this is known as
segmentation. Segmentation is done by routers. Reassembly is done by end systems and not by routers,
because reassembly at a router would require a large buffer and would force all fragments to pass through the
same router, making dynamic routing impossible.
Error control: Datagrams may be discarded by a router due to lifetime expiration, congestion, or error in the
datagram. Hence, IP does not guarantee successful delivery of datagrams.
Flow control: If a router receives datagrams at a very fast rate and cannot process the datagrams at that rate,
the router may send flow control packets requesting reduced data flow.
Consider an internet in which a number of networks are interconnected through routers. When a datagram has
to be sent from one node on one network to a node on another network, the path to be taken by the datagram
is decided by the routing algorithms. Each router receives the datagram and makes a decision as to the next
hop. This is a complicated mechanism because:
If the IP datagram specifies the option for source routing, the datagram has to be routed per the
instructions.
The datagram length may vary and, based on the length, the datagram may need to be forwarded to
different routers because not all routers may be able to handle large datagrams. (Though they can
fragment it, it may result in too much delay, or the do-not-fragment bit may have been set.)
The network may get congested, and some routes may be experiencing traffic jams. In such cases, to
ensure no packet loss, alternate routes may need to be chosen.
The path to be chosen has to be the shortest, to reduce load on the network and the delay.
In this section, we will study the routing protocols. Note that, though routing is the function of a router, hosts
also may be involved in routing under special circumstances. For instance, a network may have two routers that
provide connections to other networks. A host on this network can forward the datagram to one of the routers.
(The issue is which router to choose.) Similarly, multihomed hosts are involved in routing decisions. In the following
subsections, we will discuss routing as applied to the routers.
21.10.1 Direct Delivery
Consider a LAN running TCP/IP protocol suite. Delivering an IP datagram to a node is straightforward; it does
not involve any routers. Because the IP address has the network ID as a field, and the source and the
destination have the same network ID, the datagram is delivered directly to the destination. This is known as
direct delivery.
If the source and the destination have the same network ID, the datagram is directly delivered to the
destination. This is known as direct delivery.
Consider a case in which a LAN is connected to the Internet through a router. After the datagram travels across
the Internet and reaches the final destination's router, the direct delivery mechanism is used for handing over
the datagram to the destination.
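As a rough sketch, the direct-delivery test just compares network IDs. A minimal Python illustration (the addresses and the /24 prefix length below are hypothetical):

```python
import ipaddress

def same_network(src: str, dst: str, prefix: int) -> bool:
    """Direct delivery applies when source and destination share a network ID."""
    net_src = ipaddress.ip_network(f"{src}/{prefix}", strict=False)
    net_dst = ipaddress.ip_network(f"{dst}/{prefix}", strict=False)
    return net_src.network_address == net_dst.network_address

# Two hosts on the same /24 network: deliver directly on the LAN.
print(same_network("192.168.1.10", "192.168.1.77", 24))   # True
# Different network IDs: hand the datagram to a router instead.
print(same_network("192.168.1.10", "10.0.0.5", 24))       # False
```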
21.10.2 Indirect Delivery
On the Internet, between the source and the destination, there are many routers. Each router has to get the
datagram and forward it to the next router toward the destination. This is known as indirect delivery. The routing
has to be such that finally there should be one router that will deliver the datagram to the destination using the
direct delivery mechanism.
In indirect delivery, between the source and the destination, there may be a number of routers. In such a case,
each router has to forward the datagram to another router. This is known as indirect delivery.
There are a number of mechanisms for indirect routing.
Routing Tables and Next-Hop Routing
Each router and host can have a table, known as the routing table, that contains the destination network
address (not the host address) and the router to which the datagram has to be forwarded. When a datagram
arrives, the network ID portion is examined, and the routing table is consulted to find the next router to which
the datagram is to be forwarded. This mechanism reduces the size of the routing table because we need not
store the destination host addresses; only the network addresses are required. Each entry in a routing table has
the format:
Destination network IP address: IP address of the next router to which the datagram is to be forwarded.
The next router is called the next-hop router, and routing based on this type of routing table is called next-hop
routing. If the routing table does not contain an entry for a destination network address, the datagram is sent to
a default router.
Each entry in a routing table will be in the format, destination network IP address: IP address of the next router
to which the datagram is to be forwarded.
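A next-hop lookup can be sketched in a few lines of Python. The table entries and router addresses below are hypothetical, but the structure follows the entry format described above: network address in, next-hop router out, with a default router as the fallback:

```python
import ipaddress

# Hypothetical routing table: destination network -> next-hop router address.
ROUTING_TABLE = {
    ipaddress.ip_network("10.1.0.0/16"): "192.168.0.2",
    ipaddress.ip_network("10.2.0.0/16"): "192.168.0.3",
}
DEFAULT_ROUTER = "192.168.0.1"

def next_hop(destination: str) -> str:
    """Return the next-hop router for a destination host address."""
    addr = ipaddress.ip_address(destination)
    for network, router in ROUTING_TABLE.items():
        if addr in network:          # match on the network ID, not the host address
            return router
    return DEFAULT_ROUTER            # no matching entry: use the default router

print(next_hop("10.1.4.9"))    # 192.168.0.2
print(next_hop("172.16.0.1"))  # 192.168.0.1 (default)
```

Note that only network addresses are stored, which is exactly what keeps the table small.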
Adaptive Routing
In this routing mechanism, the routing of packets is based on information about failure of links and congestion.
The advantages of this mechanism are improved performance of the network and less congestion. However,
the disadvantages are processing burden on routers and traffic burden on network to send routing information
and network status. Information about the failure of links and congestion is shared between routers passing the
necessary messages.
In adaptive routing, the routing tables are periodically updated based on the traffic information. The routers
need to exchange this traffic information periodically.
The first-generation Internet (ARPAnet) implemented an adaptive algorithm using estimated delay as the
criterion. Each node sends its estimated delay (or queue length) to all its neighbors. The line speed is not
considered, so this is not an accurate algorithm. The routing algorithm was subsequently modified in the
second-generation Internet, and an adaptive algorithm based on actual delay was implemented. Instead of
queue length, delay is measured: when a packet arrives, it is timestamped, and when it is retransmitted, the
time is noted. Hence, delay is calculated exactly. The average delay is calculated every 10 seconds and sent to
all neighboring nodes (known as flooding). The disadvantage of this routing algorithm is that every node tries
to get the best route. The routing algorithm has since been refined further: it uses a line utilization parameter,
and instead of the best route, a threshold value is used so that every node will not try to get the best route.
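The second-generation measurement scheme (timestamp on arrival, note the time on retransmission, average every 10 seconds) can be sketched as follows. This is an illustrative model only, not ARPAnet code; the class name and interface are invented for the sketch:

```python
import time

class DelayMonitor:
    """Track per-packet delay through a router; report the average every interval."""
    def __init__(self, interval: float = 10.0):
        self.interval = interval
        self.delays = []
        self.last_report = time.monotonic()

    def packet_arrived(self) -> float:
        return time.monotonic()          # timestamp recorded on arrival

    def packet_sent(self, arrival_ts: float) -> None:
        # Delay = retransmission time minus arrival time, measured exactly.
        self.delays.append(time.monotonic() - arrival_ts)

    def maybe_report(self):
        """Every `interval` seconds, return the average delay to flood to neighbors."""
        now = time.monotonic()
        if now - self.last_report >= self.interval and self.delays:
            avg = sum(self.delays) / len(self.delays)
            self.delays.clear()
            self.last_report = now
            return avg                   # value sent to all neighboring nodes
        return None

monitor = DelayMonitor()
ts = monitor.packet_arrived()
monitor.packet_sent(ts)   # delay recorded; reported in the next 10-second cycle
```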
For the routing algorithms, the concept of autonomous systems is important. An internet connected by
homogeneous routers, with all routers under the administrative control of a single entity, is called an
autonomous system. An autonomous system can use a routing protocol of its choice, but when autonomous
systems are interconnected (as in the Internet), a well-defined protocol is a must.
21.11 AUTONOMOUS SYSTEM
A site may have a large number of networks within one administrative control. Such a system is called an
autonomous system. This system can be connected to the external Internet through one or more routers. To
provide efficient routing, the entire system appears as a single entity on the external Internet and not as multiple
networks. Within the system, how the routing is to be done can be decided by the administrative authority. To
provide the necessary routing information to the external Internet, one or more routers will be acting as the
interface.
An autonomous system is a network of computer networks within one administrative control. An autonomous
system can use its own routing protocols.
Each autonomous system is given an autonomous system number by a central authority.
21.12 ROUTING WITHIN AUTONOMOUS SYSTEMS
If the autonomous system consists of a small number of networks, the administrator can manually change the
routing tables. If the system is very large and changes in topology (addition and deletion of networks) are
frequent, an automatic procedure is required. Routing protocols used within an autonomous system are called
interior gateway protocols (IGPs). A number of IGPs are in use because autonomous systems vary widely: one
may be a small system with a few LANs, another a WAN with a large number of networks. Open shortest path
first (OSPF) is the most efficient protocol used for routing within an autonomous system. In OSPF, routing is
done based on the shortest path; the shortest path between two nodes can be calculated using criteria such as
distance, dollar cost, and so on.
Routing protocols used within the autonomous system are called interior gateway protocols. Open shortest path
first (OSPF) is one such protocol.
21.12.1 Open Shortest Path First (OSPF)
Interior router protocol (IRP) passes routing information between routers within an autonomous system. Open
shortest path first (OSPF) protocol is used as the interior router protocol in TCP/IP networks. OSPF routing is a
dynamic routing technique. As the name implies, the OSPF protocol computes a route through the Internet that
incurs the least cost. It is analogous to a salesman finding the shortest path from one location to another
location. The shortest path can be calculated based on the distance to be traveled or based on the cost of
travel. Similarly, the cost of routing the packets can be based on a measure that can be configured by the
administrator: it can be a function of delay, data rate, or simply dollar cost. Each router maintains a database
of costs. Any change in costs is shared with other routers.
In OSPF protocol, the shortest path is calculated from the source to the destination, and routing is done using
this path. The criteria for calculation of the shortest path can be distance or cost.
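Shortest-path computation of this kind is typically done with Dijkstra's algorithm. A sketch follows, with a hypothetical four-router topology; the link costs are arbitrary and could stand for delay, data rate, or dollar cost:

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm: least-cost route from `source` to every other router.
    `graph` maps router -> {neighbor: link cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                     # stale heap entry, already improved
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Hypothetical topology of four routers A-D.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(shortest_paths(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Note that the direct A-C link (cost 4) loses to the two-hop path A-B-C (cost 3): the "shortest" path is the least-cost one, not the fewest hops.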
21.12.2 Flooding
Flooding is another protocol that can be used for routers to share the routing information. In flooding, each
router sends its routing information to all the other routers to which it is directly connected. This information is
shared again by these routers with their neighbors. This is a very simple and easy-to-implement protocol.
However, the traffic on the network will be very high, particularly if the flooding is done very frequently.
In flooding, each router sends the packets to all the other routers connected to it. Routing information is shared
among the routers using this protocol.
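The mechanism can be sketched as follows; the router names and link list are hypothetical. Note how the message count exceeds the number of routers, which is exactly the traffic overhead mentioned above:

```python
from collections import deque

def flood(neighbors, origin, update):
    """Flood a routing update: each router passes it to every directly connected
    neighbor; routers discard updates they have already seen."""
    seen = {origin}
    received = {origin: update}
    queue = deque([origin])
    messages = 0
    while queue:
        router = queue.popleft()
        for peer in neighbors[router]:
            messages += 1                  # one transmission per link use
            if peer not in seen:           # duplicate copies are ignored
                seen.add(peer)
                received[peer] = update
                queue.append(peer)
    return received, messages

links = {"R1": ["R2", "R3"], "R2": ["R1", "R3"],
         "R3": ["R1", "R2", "R4"], "R4": ["R3"]}
received, messages = flood(links, "R1", "net 10.1.0.0 reachable")
print(sorted(received))   # ['R1', 'R2', 'R3', 'R4']
print(messages)           # 8: twice the number of routers, for one update
```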
21.13 ROUTING BETWEEN AUTONOMOUS SYSTEMS
The Internet is a network of a large number of autonomous systems. A host connected to one autonomous
system can communicate with another host connected to another autonomous system only when the routing
information is shared between the two autonomous systems. The protocols used for sharing the routing
information are exterior gateway protocol (EGP) and border gateway protocol (BGP).
On the Internet, routing between two autonomous systems is done through exterior gateway protocol (EGP) or
border gateway protocol (BGP).
21.13.1 Exterior Gateway Protocol
As shown in Figure 21.7, two routers each belonging to a different autonomous system share routing
information using the exterior gateway protocol (EGP). These routers are called exterior routers. Conceptually,
EGP works as seen in Figure 21.7.
A router acquires a neighbor by sending a hello message. The neighbor responds to the hello.
The two routers periodically exchange messages to check whether the neighbor is alive. The Internet being
very dynamic, there is no guarantee that today's neighbor is there tomorrow. Whenever there is any
change in the routing information within the autonomous system, the information is conveyed to the other
router through routing update messages.
Figure 21.7: Exterior gateway protocol.
EGP has the following restrictions:
An exterior router conveys only reachability information. It does not ensure that the path is the shortest.
An exterior router can announce only the routing information to reach the networks within the autonomous
system, not some other autonomous system.
EGP can advertise only one path to a given network, even if alternate paths are available.
The EGP message consists of a header and parameters fields. The header has the following fields:
Version (8 bits): Specifies the version number of EGP.
Type (8 bits): Specifies the type of message. The message types are:
Neighbor acquisition messages (type 3)
Neighbor reachability messages (type 5)
Poll request messages (type 2)
Routing update messages (type 1)
Error
Code (16 bits): Specifies the subtype of the message. For example, type 3 messages have the following
codes:
Acquisition request (code 0) to request for neighborhood
Acquisition confirm (code 1) to accept neighborhood
Acquisition refusal (code 2) to refuse neighborhood
Cease request (code 3), request for termination of neighborhood
Cease confirm (code 4), confirmation to cease request
Neighbor reachability messages (type 5) messages have the following codes:
Hello request message (code 0)
I heard you message (code 1)
Periodically, the neighbors exchange the routing tables using the type 1 message format, with code 0. The
parameters field contains a list of all routers on the network and the distance to the destination for each.
Using this simple protocol, the Internet attained its great flexibility. An autonomous system can enter into an
arrangement with a neighbor to share the routing information without the need for a central authority to control
the connectivity between various internets. However, because of the limitations mentioned, a new protocol,
border gateway protocol, has been developed.
In EGP, two routers establish a neighborhood relationship and keep exchanging the routing information. Hence,
there is no need for a centralized system to send routing information to different routers.
21.13.2 Border Gateway Protocol
Routers connected to different autonomous systems can exchange routing information using BGP, the latest
version being BGP Version 4. As compared to EGP, BGP is a complex protocol. We will illustrate the
functioning of BGP through an example.
Figure 21.8 shows how BGP is used between autonomous systems. Assume that AS#1 and AS#2 are two
autonomous systems that need to share routing information. AS#2 is connected to two other autonomous
systems: AS#3 and AS#4. AS#1 will have one router designated as a BGP speaker, which has the authority to
advertise its routing information with other routers. Similarly, AS#2 will have a BGP speaker.
Figure 21.8: Border gateway protocol.
In BGP, each autonomous system will have a BGP speaker that is authorized to advertise its routing
information. This BGP speaker sends information about all the autonomous systems that can be reached via its
routers.
The BGP speaker of AS#2 will send the reachability information to the BGP speaker of AS#1. This reachability
information includes all the other autonomous systems that can be reached via its routers: in our example, the
autonomous systems AS#3 and AS#4. It is this feature that makes BGP complex, but the advantage is that
AS#1 obtains the complete reachability information.
It may happen that, because of the failure of a link between AS#2 and AS#3, for example, AS#3 becomes
temporarily unreachable. BGP also gives negative advertisements by sending information about the routes
that have been withdrawn.
It is BGP that makes the Internet a dynamic network. When new networks are added to the Internet, a user
can access them, via BGP, without a centralized administrative authority.
21.14 INTERNET CONTROL MESSAGE PROTOCOL
The Internet is a dynamic network. Networks get connected and removed without the knowledge of the
senders, who can be anywhere on the earth. Also, each router functions autonomously, routing the datagrams
based on the destination IP address and the routing table. It is likely that a datagram cannot be forwarded or
delivered to its destination, or due to congestion the datagram may have to be dropped. There must be a
procedure to report errors to the source whenever there is a problem. The Internet Control Message Protocol
(ICMP) is used for this purpose. Every IP implementation must support ICMP also. ICMP messages are a part
of the data field of the IP datagram.
An ICMP message, which is encapsulated in the data field of the IP datagram, consists of three fields:
Message type to identify the type of message (8 bits)
Code field to provide further information on the message type (8 bits)
Checksum field (16 bits), which uses the same algorithm as the IP checksum
In addition, an ICMP message contains the first 64 data bits of the datagram that caused the problem.
The ICMP type fields and corresponding message types are given in the following table:
Type Field ICMP Message Type
0 Echo reply
3 Destination unreachable
4 Source quench
5 Change the route (redirect)
8 Echo request
11 Time to live expired for datagram
12 Parameter problem on datagram
13 Timestamp request
14 Timestamp reply
17 Address mask request
18 Address mask reply
Echo request and echo reply messages are used to test the reachability of a system. Source sends an echo
request to the receiver, and the receiver sends an echo reply to the sender. The request can contain an
optional data field that is returned along with the echo reply. When an echo reply is received by the sender, this
is an indication that the entire route to the destination is OK, and the destination is reachable. The system
command ping is used to send the ICMP echo requests and display the reply messages.
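As an illustration of how an echo request is put together, the sketch below builds the ICMP header (type 8, code 0) and its one's-complement checksum. Actually transmitting it requires a raw socket and usually administrative privileges, which are omitted here:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of one's-complement, as used by ICMP."""
    if len(data) % 2:
        data += b"\x00"                          # pad to a 16-bit boundary
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Build an ICMP echo request: type 8, code 0, then checksum, id, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum field zeroed
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

packet = echo_request(ident=1, seq=1)
# Verification property: the checksum over the complete message comes out zero.
print(icmp_checksum(packet) == 0)   # True
```

The echo reply uses the same layout with type 0, and returns the optional payload unchanged, which is how ping matches replies to requests.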
Reports of an unreachable destination are sent by a router to the source. The possible reason is also sent in
the ICMP message. The reason can be the following:
Network unreachable
Host unreachable
Protocol unreachable
Port unreachable
Fragmentation needed but do-not-fragment bit is set
Source routing failed
Destination network unknown
Source host isolated
Communication with the destination network prohibited for administrative reasons
Network unreachable for type of service, and host unreachable for the type of service
The source quench message is sent by the router to the source when congestion is experienced. This message
is to inform the source to reduce its rate of datagrams because the router cannot handle such a high speed.
When a router cannot handle the incoming datagrams, it discards the datagram and sends the source quench
message to the source.
A change the route (redirect) message is sent by a router to the host. Generally, it is the router's function and
responsibility to update its routes, and hosts keep minimal routing information. However, when a router detects
that a host is using a nonoptimal route, it sends an ICMP message to the host to change the route. This
message contains the IP address of the router the host has to use for the routing.
ICMP is used to report errors to the source. Information such as network or host unreachable, time-to-live field
expired, fragmentation needed but do-not-fragment bit is set, and so on, are sent to the source using this
protocol.
A time-to-live-expired message is sent by a router when the hop count becomes zero. As mentioned earlier, to
ensure that datagrams do not keep on circulating between routers endlessly, this hop count is introduced.
When hop count becomes zero, the datagram is discarded and the message is sent to the source. This
message is also sent when fragment reassembly time exceeds the threshold value.
For any other problem because of which the datagram has to be discarded, the router sends a parameter
problem message to the source.
Timestamp request and reply messages are used between systems to obtain the time information. Because
each system acts independently, there is no mechanism for synchronizing the clocks. A system can send a
timestamp request to another system and obtain the timestamp reply. This information can be used to compute
the delays on the network and also to synchronize the clocks of the two systems.
Subnet address mask request and reply messages are exchanged between machines. In some IP addresses, a
portion of the host address corresponds to the subnet address. The information required to interpret this
address is represented by 32 bits called the subnet mask. For example, if a host wants to know the subnet
mask used by a LAN, the host sends a subnet address mask request to the router on the LAN or broadcasts
the message on the LAN. The subnet address mask reply will contain the subnet address mask.
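Applying a subnet mask is a bitwise AND of the address with the mask. A small sketch with a hypothetical address and mask:

```python
def apply_mask(addr: str, mask: str) -> str:
    """AND an IPv4 address with a 32-bit subnet mask to get the subnet address."""
    addr_octets = [int(o) for o in addr.split(".")]
    mask_octets = [int(o) for o in mask.split(".")]
    return ".".join(str(a & m) for a, m in zip(addr_octets, mask_octets))

# Hypothetical reply to an address mask request: mask 255.255.255.192.
print(apply_mask("192.168.10.77", "255.255.255.192"))  # 192.168.10.64
```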
In the next chapter, we will study the details of the TCP layer that runs above the IP layer.
Note: Every IP implementation must support ICMP. ICMP messages are included in the data field of the IP
datagram.
Summary
The Internet Protocol, the heart of the global Internet, is presented in this chapter. The present version of IP,
running on end systems and routers, is IP Version 4. The two main functions of IP are addressing and routing.
IP Version 4 has an address length of 32 bits. Each system is given a 32-bit IP address that uniquely identifies
the system globally. This addressing scheme can cover at most 4 billion addresses. This turns out to be a small
number for the future, particularly when we would like to connect even consumer items such as TVs, mobile
phones, and such to the Internet. The latest version of IP, IP Version 6, has an address length of 128 bits.
In addition to the address length, IP Version 4 has limitations for handling secure applications and real-time
applications. The detailed formats of IP Version 4 and Version 6 are presented, which bring out the salient
features of both versions.
Another important function of IP is routing. We studied the routing protocols within an autonomous system and
between autonomous systems. An autonomous system is a network within the administrative control of an
organization. Routers within an autonomous system can share routing information using protocols such as open
shortest path first (OSPF) and flooding. Routers connected to different autonomous systems can share routing
information using exterior gateway protocol or border gateway protocol.
The Internet Control Message Protocol (ICMP) is used to report errors and to send management information
between routers. ICMP messages are sent as part of the IP datagram. The details of ICMP are also presented
in this chapter.
References
W. R. Stevens. TCP/IP Illustrated Vol. I: The Protocols, Addison Wesley, Reading, MA, 1994. This book
gives a complete description of TCP/IP and its implementation in Berkeley Unix.
D.E. Comer and D.L. Stevens. Internetworking with TCP/IP Vol. III: Client/Server Programming and
Applications. BSD Socket Version. Prentice Hall, Englewood Cliffs, N.J., 1993.
Questions
1. Describe the format of an IP Version 4 datagram.
2. What are the limitations of IP Version 4, and how does IP Version 6 address these limitations?
3. Describe the format of an IP Version 6 packet.
4. Describe the protocols used for routing within autonomous systems.
5. Describe the protocols used for routing between autonomous systems.
6. Explain the ICMP protocol and its functionality.
Exercises
1. Find out the IP address of your computer.
2. Calculate the total number of addresses supported by class B IP address format. Note that 14 bits
are used for network ID and 16 bits for host ID.
3. Calculate the total number of addresses supported by class C IP address format. Note that 24 bits
are used for network ID and 8 bits for host ID.
4. Calculate the maximum number of addresses supported by IP Version 6.
5. Write a technical paper on IP Version 6.
Answers
1. You can use the procedure described in Exercise #1 of Chapter 15 to obtain the IP address of your
computer.
2. In class B IP address format, 14 bits are used for network ID and 16 bits for host ID. Hence, 2^14 networks
can be addressed, and each network can have 2^16 hosts.
3. In class C IP address format, 24 bits are used for network ID and 8 bits for host ID. Hence, 2^24 networks
can be addressed, and each network can have 2^8 hosts.
4. The maximum number of addresses supported by IP Version 6 is 2^128, that is,
340,282,366,920,938,463,463,374,607,431,768,211,456.
5. You can obtain the RFC from the site http://www.ietf.org.
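The figure in answer 4 is one address per combination of 128 bits, which can be checked directly:

```python
# IPv6 addresses are 128 bits long, so the address space is 2**128.
total = 2 ** 128
print(total)  # 340282366920938463463374607431768211456
```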
Projects
1. Write a program that captures all the packets transmitted over the LAN and displays the source IP
address and the destination IP address of each packet. You can use the packet driver software that
comes with the Windows Device Driver Kit to develop this program.
2. Extend the software written in Project #1 so that all the packets corresponding to a particular
destination IP address are stored in a file. Use this software to find out the passwords of different users
of the LAN. Because this software is nothing but hacking software, obtain the permission of your system
administrator before testing it. You can refine this software to develop a firewall that filters out all the
packets with a specific IP address.
Chapter 22: Transport Layer Protocols - TCP and UDP
As we saw in the previous chapter, IP does not provide a reliable service: it does not guarantee successful
delivery of packets. However, from an end user's point of view, reliable service is a must; when you transfer a
file, you want the file to be delivered intact at the destination. The TCP layer provides the mechanism for
ensuring reliable service between two end systems. In this chapter, we will study the details of TCP. We will
also discuss UDP, another transport layer protocol that provides less overhead than TCP.
22.1 SERVICES OF THE TRANSPORT LAYER
When we use a network for a service (such as a file transfer), we expect reliable service so that the file reaches
the destination intact. However, IP does not provide reliable service. The packets may be lost, they may arrive
out of order, they might arrive with variable delay, and so on. The transport layer takes care of all these
problems to provide end-to-end transmission; it offers a reliable data transfer mechanism to the higher
layers.
Suppose you want to transfer a file from one system to another system. You invoke a file transfer application on
your system. The networking software first has to establish a connection with the other end system, transfer the
file, and then remove the connection. The transport layer provides the necessary functionality to establish the
connection, transfer the packets to the other system reliably even if there are packet losses because of the IP's
limitations, and then remove the connection. The transport layer protocol does this job by providing the
following services:
Type of service
Quality of service
Data transfer
Connection management
Expedited delivery
Status reporting
Type of service: The service can be connection-oriented or connectionless. In connection-oriented service,
flow control and error control are incorporated so that the service is more reliable and sequence of packets is
maintained. However, there is an overhead to connection establishment. In connectionless service, there are
no overheads for connection establishment and so it is efficient, but reliable data transfer is not guaranteed.
Connectionless service is used for applications such as telemetry, real-time applications such as audio/video
communication, and so on. On the Internet, TCP provides connection-oriented service, and UDP provides
connectionless service.
Quality of service: Depending on the application, the quality of service parameters have to be defined, such as
acceptable error levels, desired average and maximum delay, desired average and minimum throughput, and
priority levels. Note that IP also provides the services of priority, delay, and such. However, in the TCP/IP
networks, the required quality of service is not guaranteed. The quality of service parameters are very important
when transmitting voice or video data because delay, packet loss, and so on will have an impact on the quality
of the speech or video.
Note: The quality of service parameters can be set and will be honored in systems using Asynchronous
Transfer Mode (ATM) and Frame Relay.
Data transfer: This service defines whether the data transfer is full duplex, half duplex, or simplex. Transfer of
data and control information is the function of this service.
The communication can be full duplex, half duplex, or simplex. In some communication systems, the
communication can be only one way (simplex). In such a case, it is not possible to transmit acknowledgements
at all. UDP can be used in such systems. An example of such a system is a satellite communication system in
which the VSATs can receive data but cannot transmit.
Connection management: In connection-oriented service, the transport layer is responsible for establishment
and termination of the connection. In case of abrupt termination of a connection, data in transit may be lost. In
case of graceful termination, termination is prevented until data has been completely delivered.
The services provided by the transport layer are connection management and data transfer. In addition, this
layer handles issues related to quality of service, status reporting, and delivery of urgent data.
Expedited delivery: When some urgent data is to be sent, the interrupt mechanism is used to transfer the
urgent data.
Status reports: Status reporting of performance characteristics is also done by this layer. The performance
characteristics are delay, throughput, addresses (network address or port address), current timer values, and
degradation in quality of service.
Note: In TCP/IP networks, though quality of service parameters can be set, the quality of service cannot be
guaranteed.
<Day Day Up>
<Day Day Up>
22.2 TRANSMISSION CONTROL PROTOCOL
Transmission Control Protocol (TCP) is a connection-oriented protocol that provides reliable communication
across an internet. TCP is equivalent to ISO transport layer protocol.
The units of data exchanged between two end systems are called TCP segments. Ordinarily, TCP waits for
sufficient data to be accumulated to create a TCP segment. This TCP segment is given to the IP layer. The
TCP user (the application layer) can request the TCP to transmit data with a push flag. At the receiving end,
TCP will deliver the data in the same manner. This mechanism is known as data stream push.
When urgent data is to be transmitted, urgent data signaling is used, which is a means of informing the
destination TCP user that urgent data is being transmitted. It is up to the destination user to determine
appropriate action. In the TCP segment, there is a field called urgent pointer to indicate that the data is urgent.
When a TCP connection is to be established between two end systems, commands are used from the TCP
user (the higher layer protocol), which are known as TCP service request primitives. Responses, which are
known as TCP service response primitives, are sent from the TCP to the TCP user.
The TCP service request primitives are:
"Fully specified passive open": Listen for connection attempt at specified security and precedence from
specified destination. The arguments are: source port, destination port, destination address, "time out",
"time out action", "precedence", and "security range". The parameters in quotation marks are optional.
"Active open": Request connection at a particular level of security and precedence to a specified
destination.
"Active open with data": Request connection as in "Active Open" and transmit data with the request.
"Send": Transfer data across the named connection.
"Close": Close connection gracefully.
"Abort": Close connection abruptly.
"Status": Query connection status.
The TCP service response primitives (issued by TCP to local TCP user) are:
"Open ID": Inform TCP user of the connection name assigned to pending connection request.
"Open failure": Reports failure of an active open request.
"Deliver": Reports arrival of data.
"Closing": Reports that remote TCP user has issued a close and that all data sent by remote TCP user is
delivered.
"Terminate": Reports that connection has been terminated. Reason for termination is provided.
"Response": Reports current status of connection.
"Error": Reports internal error.
Whenever a connection has to be established, the service request primitives are sent to the TCP, and the TCP
sends the service response primitives. If there are no errors, a connection is established with the other TCP
peer running on another machine, and data transfer takes place. After successful transfer of data, the
application layer is informed by the TCP layer that the data has been successfully transferred.
The TCP layer provides a connection-oriented service. A TCP connection is established between two end
systems, and a series of service request primitives and service response primitives is exchanged for
connection management, data transfer, and error reporting.
22.2.1 Virtual Circuit Connection
Before two end systems transfer data, a TCP connection is established. This is a virtual circuit connection. A
connection is established between the TCP protocol ports, which identify the ultimate destination within the
machines. This connection is full duplex, so the data flows in both directions. Note that the TCP connection is
only an abstraction; it is not a real connection because the IP datagrams take different routes to reach the
destination.
Each connection is identified by a pair of end points. Each end point is defined by the host number and the port
number. The host number is the IP address, and the port number is a predefined small integer. The port
number identifies a process running on the end system; there is no physical port. For instance, a connection can
be represented by the two end points
A TCP connection is identified by a pair of end points. Each end point is defined by the IP address (host
number) and the port number. Port number is a predefined small integer.
(125.34.5.7, 25) and (230.16.4.23, 53)
It is possible to establish multiple connections between two end points as shown in Figure 22.1. For instance,
one of the above end points can have another connection such as
(125.34.5.7, 25) and (240.5.4.2, 53)
Figure 22.1: Multiple TCP connections.
Hence, a given TCP port number can be shared by multiple connections.
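This sharing can be illustrated with the end points used above. Demultiplexing works on the full pair of end points, so the two connections stay distinct even though they share the local end point (a toy sketch, not real socket code):

```python
# A TCP connection is identified by its pair of end points:
# ((host IP, port), (host IP, port)).
connections = {
    (("125.34.5.7", 25), ("230.16.4.23", 53)),
    (("125.34.5.7", 25), ("240.5.4.2", 53)),
}

# An arriving segment is matched on the full pair of end points, so sharing
# local port 25 at 125.34.5.7 causes no ambiguity.
segment_endpoints = (("125.34.5.7", 25), ("240.5.4.2", 53))
print(segment_endpoints in connections)  # True
```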
22.2.2 TCP Segment Format
The TCP segment consists of the TCP header and protocol data unit (PDU) data. The format of the TCP
header is shown in Figure 22.2. The minimum header length is 20 bytes.
Figure 22.2: TCP segment format.
Source port address (16 bits): The port number that identifies the application program of the source.
Destination port address (16 bits): The port number that identifies the application program of the destination.
Sequence number (32 bits): The sequence number of the TCP segment.
Acknowledgement number (32 bits): The sequence number of the next octet that the source of this segment
expects to receive; it acknowledges all preceding octets.
Header length (4 bits): The length of the header in 32-bit words. This is required because the header length
varies due to the presence of the options field.
Reserved (6 bits): Reserved for future use.
Code bits (6 bits): Specify the purpose and content of the segment. The six bits are URG, ACK, PSH, RST,
SYN and FIN. If these bits are set, the following information is conveyed:
URG Urgent pointer field is valid.
ACK Acknowledgment field is valid.
PSH The segment requests a push.
RST Reset the connection.
SYN Synchronize the sequence numbers.
FIN Sender has reached end of its byte stream.
Window (16 bits): Specifies the buffer size, which indicates how much data the receiver is ready to accept,
beginning with the byte indicated in the ACK field. This is flow control information.
Checksum (16 bits): The checksum is calculated over the header fields and the data fields. For checksum
computation, a pseudoheader is used. The pseudoheader consists of 96 bits: 32 bits of source IP address,
32 bits of destination IP address, 8 bits of zeros, 8 bits of protocol, and 16 bits of TCP length. This data is
obtained from the IP layer software, and the checksum is calculated by taking the one's complement sum of
16-bit words and then the one's complement of the result. The protocol field specifies which transport protocol
is carried; for TCP, the value is 6.
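The one's complement arithmetic described above can be sketched in Java. This is a minimal illustration of the folding step only; a real implementation would run it over the pseudoheader, the TCP header (with the checksum field zeroed), and the data.

```java
public class InternetChecksum {
    // One's-complement sum of 16-bit words, then one's complement of the
    // result, as used by the TCP (and optional UDP) checksum.
    public static int checksum(byte[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int hi = data[i] & 0xFF;
            int lo = (i + 1 < data.length) ? data[i + 1] & 0xFF : 0; // pad odd length
            sum += (hi << 8) | lo;
        }
        while ((sum >> 16) != 0)              // fold carries back into the low 16 bits
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (int) (~sum & 0xFFFF);         // one's complement of the sum
    }
}
```

A receiver verifies a segment by summing the same words including the transmitted checksum; a correct segment yields all ones.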
The TCP segment header consists of the following fields: source port address, destination port address,
sequence number, acknowledgement number, header length, reserved bits, code bits, window size, checksum,
urgent pointer and options. The minimum header length is 20 bytes.
Urgent pointer (16 bits): This field is used to indicate that the data segment is urgent. The destination has to
process this segment even if there are other segments to be processed. This is required in such applications as
remote login, when the user has to abort a program without waiting any further.
Options, if any (variable): This field is used to negotiate the maximum TCP segment size. The TCP software
can indicate the maximum segment size (MSS) in this field. Otherwise, a segment size of 536 bytes is used.
This value is 576 bytes of default IP datagram minus the TCP and IP header lengths.
Padding (variable): Zeros added so that the header ends on a 32-bit boundary.
Data (variable): User data, which is of variable length.
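As a small illustration of the field layout described above, the following hypothetical Java sketch extracts two of the fixed fields from a raw 20-byte header.

```java
import java.nio.ByteBuffer;

public class TcpHeaderParse {
    // Destination port occupies bytes 2-3 of the TCP header.
    public static int destPort(byte[] header) {
        return ByteBuffer.wrap(header).getShort(2) & 0xFFFF;
    }
    // The header length field is the high nibble of byte 12 and counts
    // 32-bit words, so multiply by 4 to get bytes.
    public static int headerLengthBytes(byte[] header) {
        return ((header[12] >> 4) & 0x0F) * 4;
    }
}
```

For a minimum-size header the length field holds 5, giving the 20 bytes mentioned above.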
22.2.3 TCP Mechanism
For data transfer, there will be three phases: connection establishment, data transfer, and connection
termination.
Connection Establishment
The connection is determined by the source and destination ports. Some of the important currently assigned
TCP port numbers are given in the following table:
Port number Application Description
21 FTP File Transfer Protocol
23 Telnet remote login
25 SMTP Simple Mail Transfer Protocol
37 time time
42 nameserver hostname server
53 domain Domain Name Server
79 Finger Finger
80 HTTP World Wide Web
103 x400 X.400 messaging service
113 auth authentication service
Only one TCP connection is established between a pair of ports. However, one port can support multiple
connections, each with a different partner port as shown in Figure 22.1. Handshaking is used for establishing
the connection using the following procedure:
Sender sends a connection request (SYN) with an initial sequence number.
Receiver responds with its own connection request (SYN) with the ACK flag set.
Sender responds with the ACK flag set.
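The handshake can be expressed in terms of the code bits defined earlier in this section. The following sketch uses the conventional bit positions of the TCP header flags to list the flags carried by the three segments; it is an illustration, not protocol code.

```java
public class Handshake {
    // Code-bit masks in their TCP header positions (FIN is the lowest bit).
    public static final int FIN = 1, SYN = 2, RST = 4, PSH = 8, ACK = 16, URG = 32;

    // The three segments of the three-way handshake, as flag masks:
    // SYN, then SYN+ACK, then ACK.
    public static int[] threeWay() {
        return new int[] { SYN, SYN | ACK, ACK };
    }
}
```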
Data Transfer
The data is transferred in segments but viewed as a byte stream. Every byte is numbered modulo 2^32.
Each segment contains the sequence number of the first byte in the data field. Flow control is specified in the
number of bytes. Data is buffered by the sender and the receiver, and when to construct a segment is at the
discretion of the TCP software (except when the PSH flag is used). For priority data, the URG flag is used. If a
segment arrives at a host and the segment is not meant for it, a segment with the RST flag set is sent in reply.
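Because every byte is numbered modulo 2^32, sequence-number arithmetic must wrap around. A minimal Java sketch of the wrapping addition and the usual signed-difference comparison:

```java
public class SeqNum {
    static final long MOD = 1L << 32;

    // Advance a sequence number by n bytes, wrapping modulo 2^32.
    public static long add(long seq, long n) {
        return (seq + n) % MOD;
    }

    // "a comes before b" even across the wraparound point: take the
    // 32-bit difference and interpret it as a signed quantity.
    public static boolean before(long a, long b) {
        return (int) (a - b) < 0;
    }
}
```

With this comparison, a sequence number just past the wrap (e.g., 5) correctly counts as later than one just before it (e.g., 2^32 - 16).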
Connection Termination
Each TCP user issues a close primitive. TCP sets the FIN flag on the last segment. If the user issues an abort
primitive, termination is abrupt: all data in the buffers is discarded, and a segment with the RST flag set is sent.
To implement the TCP, there are a few implementation options, which are briefly discussed.
Send policy: When does the TCP layer start sending the data to the layer below? One option is to buffer the
data and construct a larger TCP segment. The other option is not to buffer the data but to construct a TCP
segment as soon as some data is available to be sent to the layer below.
Delivery policy: After the data is received from the layer below, when to transfer the data to the upper layer
(the TCP user) is another issue. One option is to buffer the TCP segment and send to the TCP user. The other
option is to send without buffering. When to deliver is a performance consideration.
A TCP connection is established between two TCP ports on two end systems. Only one connection can be
established between a pair of ports. However, one port can support multiple connections, each with a different
partner port.
Accept policy: If segments are received out of sequence, there are two options:
In-order: accept only segments received in order, discard segments that are received out of order. This is a
simple implementation but a burden on the network.
In-window: accept all segments that are within the receive window. This reduces the number of
transmissions, but a buffering scheme is required at the hosts.
Retransmit policy: Three retransmission strategies are possible.
First only: The sender maintains a timer. If ACK is received, the corresponding segment is removed from
the buffer and the timer is reset. If the timer expires, the segment at the front of the queue is retransmitted
and the timer is reset.
Batch: The sender maintains one timer for retransmission for the entire queue. If ACK is received,
segments are removed from the buffer and the timer is reset. If the timer expires, all segments in the queue
are retransmitted and the timer is reset.
Individual: The sender maintains one timer for each segment in the queue. If ACK is received, the
segments are removed from the queue and the timers are reset. If any timer expires, the corresponding
segment is retransmitted individually and the timer is reset.
Acknowledgement policy: There are two options here.
Immediate: When data is accepted, immediately transmit empty segment with ACK number. This is simple
but involves extra transmissions.
Cumulative: When data is accepted, wait for outbound segment with data in which ACK is also sent (called
piggybacking the ACK). To avoid long delay, a window timer is set. If the timer expires before ACK is sent,
an empty segment containing ACK is transmitted. This involves more processing but is used extensively
due to a smaller number of transmissions.
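The "first only" retransmit policy described above can be sketched as a queue guarded by a single conceptual timer. Segments are represented by plain labels here; this is an illustration of the bookkeeping, not a working TCP.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class FirstOnlyRetransmit {
    // One timer guards only the segment at the head of the queue.
    private final Deque<String> queue = new ArrayDeque<>();

    public void send(String segment) {          // segment enters the queue
        queue.addLast(segment);
    }
    public void ackFront() {                    // ACK removes the head; timer resets
        queue.pollFirst();
    }
    public String onTimerExpiry() {             // expiry resends only the head
        return queue.peekFirst();
    }
}
```

Under the "batch" policy the expiry handler would resend the whole queue; under "individual", each segment would carry its own timer.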
Implementation of TCP in software gives the software developer these choices. Based on the delay
considerations and bandwidth considerations, one choice may be better than the other.
Note: The port number is a predefined integer for each application that runs above the TCP layer. Some
important port numbers are: 21 for FTP, 25 for SMTP, and 80 for HTTP.
22.3 USER DATAGRAM PROTOCOL
User datagram protocol (UDP) is another transport layer protocol that can be run above the IP layer. In the
place of TCP, UDP can be used to transport a message from one machine to another machine, using the IP as
the underlying protocol. UDP provides an unreliable connectionless delivery service. It is unreliable because
packets may be lost, arrive out of sequence, or be duplicated. It is the higher layer software that has to take
care of these problems. The advantage of UDP is that it has low protocol overhead compared to TCP. A UDP
message is called a user datagram.
The format of the user datagram is shown in Figure 22.3.
Figure 22.3: UDP datagram format.
Source port address (16 bits): Specifies the protocol port address from which data is originating. This is an
optional field. If present, this is to indicate the address to which a response has to be sent by the destination
port. If absent, the field should contain all zeros.
Destination port address (16 bits): Specifies the protocol port address to which data is intended.
Message length (16 bits): Specifies the length of the user datagram, including the header and data in octets.
Checksum (16 bits): The data is divided into 16-bit words, the one's complement sum is computed, and its
one's complement is taken. However, checksum calculation is optional, and can be avoided by keeping zeros in
this field to reduce computational overhead.
Data (variable length): The actual data of the user datagram.
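The fixed 8-byte UDP header described above can be assembled as follows. This is a sketch; the port numbers in the test are arbitrary examples, and the checksum is left as zeros, i.e., not computed.

```java
import java.nio.ByteBuffer;

public class UdpHeader {
    // Build the 8-byte UDP header: source port, destination port,
    // message length (header + data), checksum (0 = not computed).
    public static byte[] build(int srcPort, int dstPort, int dataLen) {
        ByteBuffer b = ByteBuffer.allocate(8);   // big-endian, i.e., network order
        b.putShort((short) srcPort);
        b.putShort((short) dstPort);
        b.putShort((short) (8 + dataLen));       // length includes the header itself
        b.putShort((short) 0);                   // optional checksum left as zeros
        return b.array();
    }
}
```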
This user datagram is encapsulated in the IP datagram's data field and then encapsulated in the frame of the
datalink layer and sent over the network.
Though UDP is unreliable, it reduces the protocol overhead as compared to TCP. Hence, UDP is used for
network management. Simple Network Management Protocol (SNMP) runs above the UDP. UDP is also used
for voice/fax/video communication on IP networks to enable real-time transmission. This aspect is addressed in
the chapter on multimedia communication over IP networks.
Another use of UDP is in networks where it is not possible to implement acknowledgements. For example,
consider a satellite network (shown in Figure 22.4) working in star configuration with a hub (central location)
and a number of receive-only VSATs. The server located at the hub (central station) has to transmit a file to the
VSATs. It is not possible to use the TCP protocol in such a network because the VSAT cannot send an
acknowledgement. We can use UDP to transmit the packets one after another from the computer at the hub;
the computer at the VSAT location assembles all the UDP packets to get the file. The application layer running
above UDP has to take care of assembling the file by joining the UDP datagrams in sequence.
Figure 22.4: File transfer from hub to receive-only VSATs using UDP.
The UDP datagram has the following fields: source port address, destination port address, message length,
checksum, and data. The checksum field is optional.
Note: The header of UDP is much smaller than the header of TCP. As a result, processing of UDP user
datagrams is much faster.
22.4 TRANSPORT LAYER SECURITY
The TCP/IP stack does not provide the necessary security features, and security has become the most
important issue, particularly when the Internet has to be used for e-commerce transactions. Users type in their
credit card numbers, bank account numbers, and other confidential personal information. This data can be
stolen and misused. When confidential documents are transmitted over the Internet, it has to be ensured that
unauthorized persons do not receive them. To achieve the desired security, a special layer of software runs on
top of TCP to provide privacy and confidentiality of the data over the Internet. This layer is known as transport
layer security (TLS); its predecessor is the Secure Sockets Layer (SSL).
TLS consists of two sublayers: the TLS record protocol, which runs above TCP, and the TLS handshake
protocol, which runs above the TLS record protocol. The TLS record protocol provides connection security
using cryptographic techniques such as DES (Data Encryption Standard). For each session, a unique key is
generated, and all the data is encrypted using the chosen
algorithm. The key is valid only for that session. For a new session, a new key has to be generated. The key is
negotiated between the two systems (for example, client and server) using the TLS handshake protocol.
The process for providing secure communication is as follows: When a client and server have to exchange
information using a secure communication link, the TLS handshake protocol enables the client and the server
to authenticate each other and negotiate the encryption algorithm and encryption keys before the application
process transmits or receives the first byte of data. Once the client and the server agree on the algorithm and
the keys, the TLS protocol encrypts the data with the TLS record protocol and passes the encrypted data to the
transport layer.
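In Java, the platform's JSSE provider implements TLS; the following sketch merely inspects which protocol versions the default provider enables. Opening a real secure connection would instead go through SSLSocketFactory; this fragment only illustrates that TLS is a configurable layer set up before any application data flows.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class TlsInfo {
    // List the protocol versions enabled by the platform's default
    // TLS provider (e.g., "TLSv1.2", "TLSv1.3" on a modern JDK).
    public static String[] enabledProtocols() {
        try {
            SSLContext ctx = SSLContext.getDefault();
            SSLEngine engine = ctx.createSSLEngine();
            return engine.getEnabledProtocols();
        } catch (Exception e) {
            return new String[0];   // no default provider available
        }
    }
}
```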
Above the transport layer protocol (TCP or UDP), the application layer protocol will be running. We will discuss
various application layer protocols in the next chapter.
To provide security while transferring the data, a layer called transport layer security (TLS) is introduced
between the transport layer and the application layer. TLS consists of two sublayers: TLS record protocol and
TLS handshake protocol.
Note: To provide secure communication, the two important features to be incorporated are authentication
and encryption of data. Authentication ensures the genuineness of the users. Encryption transforms
the bit stream using an encryption key, and the data can be decoded only if the encryption key is
known to the receiver.
Summary
This chapter presented the details of the transport layer protocols used on the Internet. The transport layer is
an end-to-end protocol; it is the responsibility of the transport layer to ensure that all the packets are put in
sequence, to retransmit the packets if there are errors, and to report the status information. The Transmission
Control Protocol (TCP) is a transport layer protocol that provides connection-oriented service. Between two end
systems, a virtual connection is established by specifying the IP address and the port address. After the
connection is established, data transfer takes place, and then the connection is removed. To provide the
necessary security features, transport layer security (TLS) is another layer of software that can be run to
provide the necessary encryption of data. The user datagram protocol (UDP) provides a connectionless
service. The UDP datagram can contain only the source address, destination address, message length, and
message. As a result, the UDP is a very light-weight protocol, and high processing power is not required to
analyze the UDP header. Hence, UDP is used for applications such as real-time voice/video communication.
The formats of TCP segment and UDP datagram are also presented in this chapter.
References
W.R. Stevens. TCP/IP Illustrated, Vol. I; The Protocols. Addison Wesley, Reading, MA, 1994. This book
gives a complete description of TCP and its implementation in Berkeley Unix.
D.E. Comer and D.L. Stevens. Internetworking with TCP/IP, Vol. III: Client/Server Programming and
Applications, BSD Socket Version. Prentice Hall, Englewood Cliffs, NJ, 1993.
http://www.ietf.org IETF home page. You can obtain the Requests for Comments (RFCs) that give the
complete details of the protocols from this site.
Questions
1. Explain the services of the transport layer protocol.
2. Describe the TCP segment format.
3. Describe the UDP datagram format.
4. Explain the differences between TCP and UDP.
5. Explain the transport layer security mechanism.
6. Explain why TCP is not well suited for real-time communication.
Exercises
1. Obtain the source code for TCP/IP stack implementation from the Internet and study the code.
2. Write a Java program to implement the UDP datagram.
3. What is the silly window syndrome?
4. In a satellite communication system, a file has to be transferred from the central station to a
number of VSATs, but the VSATs are receive-only, and there is no communication from VSAT to
the hub. Work out a procedure for the file transfer.
5. Study the intricacies of the sliding-window protocol used in the TCP layer.
Answers
1. The open source for TCP/IP protocol stack is available with the Linux operating system.
2. The Java code for implementation of UDP server and UDP client are given in Listing C.6 and Listing C.7,
respectively. The server software is used to transfer a file to the client. The server divides the file into UDP
datagrams and sends it. The client will receive each datagram and assemble the file. This code can be
tested on a LAN environment.
Listing C.6: UDP server software.
import java.net.*;
public class UDPServer
{
public static DatagramSocket ds;
public static int buffer_size=10;
public static int serverport=555;
public static int clientport=444;
public static byte buffer[]=new byte[buffer_size];
public static void Server() throws Exception
{
int pos=0;
byte b[] = { 'H','e','l','l','o'};
ds.send(new DatagramPacket
(b, b.length, InetAddress.getLocalHost(), clientport));
}
public static void main(String args[])
{
try{
System.out.println("Server is ready");
ds=new DatagramSocket(serverport);
Server();
}catch(Exception e){ }
}
}
Listing C.7: UDP Client software.
import java.net.*;
public class UDPClient
{
public static DatagramSocket ds;
public static int buffer_size=5;
public static int serverport=555;
public static int clientport=444;
public static byte buffer[]=new byte[buffer_size];
public static void Client() throws Exception
{
while(true)
{
DatagramPacket dp = new DatagramPacket(buffer, buffer.length);
ds.receive(dp);
byte b[] = dp.getData();
for(int i=0;i<b.length;i++)
System.out.print((char)b[i] + " ");
}
}
public static void main(String args[])
{
try{
System.out.println("Client is ready");
ds=new DatagramSocket(clientport);
Client();
}catch(Exception e){ }
}
}
3. In the sliding window protocol used in TCP, the receiver must advertise its window size. The receiver may
advertise a small window size for various reasons, such as its buffer being nearly full. In such a case, the
sender has to transmit small segments. This results in inefficient utilization of the bandwidth. To avoid this
problem, the receiver may delay advertising a new window size, or the sender may delay sending the data
when the window size is small.
4. In a satellite communication system, if the VSATs are receive only, it is not possible for VSAT to send an
acknowledgement to the server located at the hub. In such a case, the server at the hub has to use the
UDP as the transport layer to transmit the file. The server software will divide the file into UDP segments
and broadcast each datagram. The datagram contains the VSAT address as the destination address. The
VSAT will receive the datagrams and assemble the file.
5. In sliding window protocol used in the TCP layer, the receiver has to advertise the window size, and the
sender must adhere to this size. This may cause the silly window syndrome.
Projects
1. Using open source TCP/IP software, develop a LAN analyzer. The LAN analyzer has to capture each
packet that is broadcast by the nodes and calculate the number of packets transmitted per second. It
also has to display the traffic matrix, which indicates the number of packets transmitted from one node to
another node. You need to analyze each packet for its source address and destination address.
2. Write software that captures the packets being transmitted on the LAN and checks whether the user
data portion of the packet contains a keyword. The GUI should facilitate giving a keyword. For instance,
if the keyword is specified as "Professor," the software has to check whether the word "Professor" is
present in the user data portion of the packet.
Chapter 23: Distributed Applications
All the applications available on the Internet (electronic mail, file transfer, remote login, World Wide Web, and
so on) are based on simple application layer protocols that run on the TCP/IP protocols. In this chapter, we will
study the most widely used application layer protocols: SMTP and MIME for electronic mail, HTTP for the World
Wide Web, and LDAP for directory services. We also will discuss SNMP for network management.
23.1 SIMPLE MAIL TRANSFER PROTOCOL (SMTP)
Simple Mail Transfer Protocol (SMTP) is used for electronic mail. E-mail consists of two parts: header and
message body. The header is of the format
To: kvkk.Prasad@ieee.org
Cc: wdt@bol.net.in
Sub: hello
Bcc: icsplted@hd1.vsnl.net.in
The body consists of the actual message and attachments, if any. In general, the attachments can be text or
binary files, graphics, and audio or video clips. However, SMTP supports only text messages.
SMTP uses information written on the envelope (message header) but does not look at the contents. However,
SMTP handles e-mail with the restriction that the message character set must be 7-bit ASCII. MIME gives
SMTP the capability to add multimedia features, as discussed in the next section. SMTP adds log information
to the start of the delivered message that indicates the path the message took for delivery at the destination.
Note: Every e-mail message contains a header that gives the details of the path traveled by the message
to reach the destination.
The SMTP mechanism is shown in Figure 23.1. There will be an SMTP sender and an SMTP receiver. A TCP
connection is first established between the sender and the receiver.
Figure 23.1: SMTP.
SMTP is the application layer protocol for e-mail. SMTP handles mail with the restriction that the message
should contain only ASCII text. Multimedia mail messages are handled by MIME, an extension to SMTP.
SMTP sender: A user composes the mail message consisting of the header and the body and gives the
message to the SMTP sender. The TCP connection between the sender and the receiver uses port 25 on the
target host. When a mail is for multiple hosts, the SMTP sender sends mail to the destinations and deletes it
from the queue, transfers all copies using a single connection, and then closes the connection.
If the destination is unreachable (a TCP connection cannot be established, the host is unreachable, or the
message has a wrong address), the message is requeued for later delivery, and the SMTP sender gives up
after a certain number of retries and returns an error message if attempts to retransmit fail.
Note that in the protocol, the SMTP sender's responsibility ends when the message is transferred to the SMTP
receiver (not when the user sees the message).
SMTP receiver: SMTP receiver accepts each message from the SMTP sender and puts the mail in the
corresponding mailbox. If the SMTP receiver is a forwarding machine, it keeps the message in the outgoing
queue. SMTP receiver should be capable of handling errors such as disk full or wrong user address.
The format for the text message used in SMTP consists of the message ID, a header containing the date, from,
to, cc, and bcc addresses, and the message body in ASCII text. This format is specified in RFC 822.
SMTP protocol: The SMTP protocol uses the TCP connection between the sender and the receiver to transfer
the messages. After the messages are transferred, the sender and receiver can switch roles: the sender can
become the receiver to get the messages. SMTP does not include an acknowledgement to the message
originator.
The format to be used for text messages in SMTP is specified in RFC 822. The format consists of:
Message ID: a unique identifier
Header: date, from, subject, to, cc, bcc
Message body in ASCII text
Note: Web-based mail is used extensively mainly because it is provided as a free service by a number of
service providers such as Hotmail and Yahoo!. In Web-based mailing systems, the HTTP protocol is
used to connect to the server.
Application software to generate mail messages is not part of SMTP. It is an application program that runs
above the SMTP.
SMTP commands are sent by the SMTP sender and replies are sent by the SMTP receiver.
SMTP works as follows: A TCP connection is established between the sender and receiver using port 25.
SMTP sender and receiver exchange a series of commands/responses for connection setup, mail transfer, and
disconnection. Sender and receiver can switch roles for exchange of the mail.
Some SMTP commands are:
HELO <space><domain><CR><LF> to send identification
RCPT <space>To:<path><CR><LF> identifies recipient of the mail
DATA <CR><LF> to transfer message text
QUIT <CR><LF> to close TCP connection
TURN <CR><LF> reverse role of sender and receiver
MAIL <space>FROM:<path><CR><LF> to identify originator
(CR stands for carriage return and LF for line feed.)
Some SMTP replies corresponding to these commands are given below.
Positive completion reply messages
220 <domain> Service ready
250 Requested mail action okay, completed the task
251 User not local, will forward mail
Positive intermediate reply
354 Start mail input; end with <CR><LF>.<CR><LF>
Transient negative replies
450 Requested mail action not taken, mailbox not available
451 Requested action not taken, insufficient disk space
Permanent negative replies
500 Syntax error, command unrecognized
502 Command not implemented
Using the above commands and replies, a typical SMTP session would be as follows. The session consists of
three phases: connection setup, mail transfer, and disconnection.
Connection setup:
1. Sender opens a TCP connection with the target host.
2. Once the connection is established, the receiver identifies itself: 220 service ready.
3. Sender identifies itself with the HELO command.
4. Receiver accepts the sender's identification: 250 OK.
Mail transfer:
1. The MAIL command identifies the originator of the message.
2. The RCPT command identifies the recipients of the message.
3. The DATA command transfers the message text.
If the receiver accepts the mail, 250 is sent.
If the receiver has to forward the mail, 251 is sent.
If the mailbox does not exist, 550 is sent.
Disconnection:
1. Sender sends the QUIT command and waits for the reply.
2. Sender initiates the TCP close operation.
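The commands exchanged in such a session are plain ASCII lines terminated by <CR><LF>. The following sketch formats them as strings; the framing follows the command list above, and the argument values used in the test are hypothetical.

```java
public class SmtpCommands {
    static final String CRLF = "\r\n";

    // Format the basic SMTP commands of a session (RFC 821 style framing).
    public static String helo(String domain)   { return "HELO " + domain + CRLF; }
    public static String mailFrom(String path) { return "MAIL FROM:<" + path + ">" + CRLF; }
    public static String rcptTo(String path)   { return "RCPT TO:<" + path + ">" + CRLF; }
    public static String data()                { return "DATA" + CRLF; }
    public static String quit()                { return "QUIT" + CRLF; }
}
```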
Because SMTP messages have to be in the format specified in RFC 822, the SMTP mails will have the
following limitations:
Executable files cannot be transmitted as attachments.
Text other than 7-bit ASCII cannot be transmitted.
SMTP servers may reject a mail message if it exceeds a certain size.
If two machines use different codes (such as ASCII and EBCDIC), translation problems are encountered.
SMTP cannot operate with X.400 MHS (message handling system), the standards specified for electronic
mail in the ISO/OSI protocol suite.
There are nonstandard implementations of SMTP/RFC 822:
Deletion, addition, reordering of <CR> and <LF>
Truncation or wrapping lines longer than 76 characters
Padding of lines to the same length
Conversion of a Tab to multiple spaces
To overcome these limitations, the MIME protocol has been defined and runs above the SMTP.
Note: The X.400 message handling system is the standard application layer software for e-mail in the ISO/OSI
protocol suite. It is a very sophisticated protocol that can handle registered mail, acknowledgments,
and so on. SMTP, as the name implies, is simple and hence has limited features.
23.2 MULTIPURPOSE INTERNET MAIL EXTENSION (MIME)
MIME is defined in RFC 1521 and 1522. The specifications include:
Five new message header fields that provide information about the body of the message.
A number of content formats to support multimedia mail.
The five header fields are:
MIME-Version: 1.0 (indicates that the message conforms to RFCs 1521 and 1522).
Content type that describes data contained in the body.
Content transfer encoding that indicates the type of transformation that has been used to represent the
body of the message.
Content ID to uniquely identify MIME content types.
Content description: plain text description of object, such as audio, text, or video clip.
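Assembled together, the five header fields might appear as follows for a plain-text part. The Content-ID value is a made-up example; the field names are the standard MIME header names.

```java
public class MimeHeaders {
    // Compose the five MIME header fields for a hypothetical plain-text part.
    public static String headers() {
        return "MIME-Version: 1.0\r\n"
             + "Content-Type: text/plain; charset=us-ascii\r\n"
             + "Content-Transfer-Encoding: 7bit\r\n"
             + "Content-ID: <part1@example.org>\r\n"
             + "Content-Description: plain text body\r\n";
    }
}
```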
Some of the MIME content types are listed here.
Type Subtype Description
Text Plain Unformatted ASCII text
Image GIF Still image in GIF format
Video MPEG Video in MPEG format