
UNIT - IV

Transport Layer
Introduction to Transport Layer:

● The main role of the transport layer is to provide communication services directly to
the application processes running on different hosts.
● The transport layer offers end-to-end connections between two processes on remote
hosts. It takes data from the upper layer (the application layer), breaks it into smaller
segments, numbers the bytes, and hands the segments to the lower layer (the network
layer) for delivery through the network routers.
● The transport layer is implemented in the operating system and is often referred to as
the heart of the OSI model.
● The transport layer provides logical communication between application processes
running on different hosts. Although the application processes on different hosts are not
physically connected, they use the logical communication provided by the transport
layer to send messages to each other.

● The transport layer protocols are implemented in the end systems but not in the network
routers.

● A computer network provides more than one protocol to the network applications. For
example, TCP and UDP are two transport layer protocols that each provide a different
set of services to the applications that use them.

● All transport layer protocols provide a multiplexing/demultiplexing service. A transport
protocol may also provide other services such as reliable data transfer, bandwidth
guarantees, and delay guarantees.

● Each application in the application layer can send a message by using TCP or UDP.
The application communicates by using either of these two protocols.

Both TCP and UDP then communicate with the Internet Protocol in the internet layer. The
applications can both read from and write to the transport layer. Therefore, we can say that
communication is a two-way process.

Functions of Transport Layer

● This layer is the first one that breaks the information supplied by the application
layer into smaller units called segments. It numbers every byte in each segment and
keeps track of them.
● This layer ensures that data is received in the same sequence in which it was sent.
● This layer provides end-to-end delivery of data between hosts, which may or may not
belong to the same subnet.

● All server processes intended to communicate over the network are equipped
with well-known Transport Service Access Points (TSAPs), also known as port numbers.
End-to-End Communication

A process on one host identifies its peer on a remote host by means of TSAPs, also known as
port numbers. TSAPs are well defined, and a process trying to communicate with its peer
knows its peer's TSAP in advance.

For example, when a DHCP client wants to communicate with a remote DHCP server, it
always sends to port number 67. When a DNS client wants to communicate with a remote
DNS server, it always sends to port number 53 (UDP).

Responsibilities of transport layer:

1. This layer breaks messages into packets.
2. It performs error recovery if the lower layers are not adequately error free.
3. It performs flow control if it is not done adequately at the network layer.
4. This layer is responsible for setting up and releasing connections across the network.
5. It is responsible for process-to-process delivery, i.e., the delivery of a packet, part of
a message, from one process to another.

The client–server paradigm is used for process-to-process communication.

A process on the local host, called a client, needs services from a process usually on a
remote host, called a server.

The services that a transport protocol can provide are often constrained by the service model of the
underlying network layer protocol.

A transport protocol can offer reliable data transfer service to an application even when the
underlying network protocol is unreliable, that is, even when the network protocol loses,
garbles, or duplicates packets.

The transport layer requires a transport layer address, called a port number, to select among
multiple processes running on the destination host.

The source port number is used for the reply and the destination port number for delivery.
Port numbers from 0 to 65,535 are used in the Internet.
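To make the port idea concrete, here is a minimal sketch (Python standard library; the
loopback address and ports 9001/9002 are hypothetical choices for the example) in which two
processes on one host are distinguished purely by the destination port number:

```python
import socket

# Two independent "server" sockets on one host, each bound to its own port.
logger = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
logger.bind(("127.0.0.1", 9001))    # process 1 listens on port 9001

metrics = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
metrics.bind(("127.0.0.1", 9002))   # process 2 listens on port 9002

# A client selects the destination process purely by the destination port.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"to logger", ("127.0.0.1", 9001))
client.sendto(b"to metrics", ("127.0.0.1", 9002))

# Each datagram is delivered to the socket bound to its destination port; the
# source port in the returned address is what a reply would be sent to.
print(logger.recvfrom(1024))
print(metrics.recvfrom(1024))
```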

i) Service Provided to the Upper Layer:

The transport layer should provide service to the higher-level protocols.

The transport entity provides services to transport service users, which might be
application processes.

The services provided by the transport layer are:

1. Type of service
2. Quality of service
3. Data transfer
4. User interface
5. Connection management
6. Expedited delivery
7. Status reporting
8. Security

Type of Service:

It provides two types of service: connection-oriented and connectionless (datagram) service.

A connection-oriented service provides for establishment, maintenance, and termination of a
logical connection between the transport service users.

The connection-oriented service allows for connection-related features such as flow control,
error control, and sequenced delivery.

Quality of Service:

The transport layer protocol entity should allow a transport service user to specify the
quality of transmission service to be provided.

The quality of service parameters include:

1. Error and loss levels
2. Desired average and maximum delay
3. Throughput
4. Priority level
5. Resilience

The error and loss level measures the number of lost or garbled messages as a fraction of the
total sent.

The desired average and maximum delay measure the time between a message being sent by the
transport user on the source machine and its being received by the transport user on the
destination machine.

The throughput parameter measures the number of bytes of user data transferred per second.

The priority level parameter provides a way for a transport user to indicate that some of its
connections are more important than others.

High-priority connections get serviced before low-priority ones.

Data transfer: this service transfers data between two transport entities; both user data and
control data must be transferred. Full-duplex, half-duplex, and simplex modes may also be
offered.

User interface: a user interface at the application layer level should be provided.

Connection management:

If connection-oriented service is provided, the transport entity is responsible for
establishing and terminating connections.

A symmetric connection procedure should be provided, which allows either transport service
user to initiate connection establishment.

Expedited packet delivery:

It performs fast delivery of data packets

Status reporting: it provides reporting on addresses, the performance characteristics of a
connection, the class of protocol in use, and current timer values.

Security: it provides a variety of security services, including encryption and decryption of
data.

The transport entity may be capable of routing through secure links or nodes if such a
service is available from the transmission facility.

ii)Transport Service Primitives:


To allow users to access the transport service, the transport layer must provide some operations to
application programs, that is, a transport service interface.

To get an idea of what a transport service might be like, consider the five primitives listed in Fig.
Transport layer primitives. This transport interface is truly bare bones, but it gives the essential flavor of
what a connection-oriented transport interface has to do. It allows application programs to establish, use,
and then release connections, which is sufficient for many applications. To see how these primitives
might be used, consider an application with a server and a number of remote clients. To start with, the
server executes a LISTEN primitive, typically by calling a library procedure that makes a system call
that blocks the server until a client turns up.

Fig. Transport layer service primitives

When a client wants to talk to the server, it executes a CONNECT primitive. The transport entity carries
out this primitive by blocking the caller and sending a packet to the server. Encapsulated in the payload of
this packet is a transport layer message for the server’s transport entity. A quick note on terminology is
now in order. For lack of a better term, we will use the term segment for messages sent from transport
entity to transport entity. TCP, UDP and other Internet protocols use this term. Some older protocols used
the ungainly name TPDU (Transport Protocol Data Unit). That term is not used much anymore,
but you may see it in older papers and books. Thus, segments (exchanged by the transport
layer) are contained in packets (exchanged by the network layer). In turn, these packets are
contained in frames
(exchanged by the data link layer). When a frame arrives, the data link layer processes the frame header
and, if the destination address matches for local delivery, passes the contents of the frame payload field
up to the network entity. The network entity similarly processes the packet header and then passes the
contents of the packet payload up to the transport entity.

The client's CONNECT call causes a CONNECTION REQUEST segment to be sent to the server. When
it arrives, the transport entity checks to see that the server is blocked on a LISTEN (i.e., is interested in
handling requests). If so, it then unblocks the server and sends a CONNECTION ACCEPTED segment
back to the client. When this segment arrives, the client is unblocked and the connection is established.
Data can now be exchanged using the SEND and RECEIVE primitives. In the simplest form, either party
can do a (blocking) RECEIVE to wait for the other party to do a SEND. When the segment arrives, the
receiver is unblocked. It can then process the segment and send a reply. As long as both sides can keep
track of whose turn it is to send, this scheme works fine.

When a connection is no longer needed, it must be released to free up table space within the two transport
entities. Disconnection has two variants: asymmetric and symmetric. In the asymmetric variant, either
transport user can issue a DISCONNECT primitive, which results in a DISCONNECT segment being
sent to the remote transport entity. Upon its arrival, the connection is released.
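These primitives map closely onto the Berkeley socket API. The following sketch (Python; the
loopback address and port 6000 are hypothetical) shows one possible correspondence, with the
socket call implementing each primitive noted in the comments; server() and client() would
run in separate processes:

```python
import socket

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 6000))
    s.listen(1)                       # LISTEN: wait for a client to turn up
    conn, addr = s.accept()           # blocks until the connection is established
    data = conn.recv(1024)            # RECEIVE: block for the peer's SEND
    conn.sendall(b"reply: " + data)   # SEND
    conn.close()                      # DISCONNECT (this direction)
    s.close()

def client():
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", 6000))    # CONNECT: blocks until CONNECTION ACCEPTED
    c.sendall(b"hello")               # SEND
    print(c.recv(1024))               # RECEIVE
    c.close()                         # DISCONNECT (this direction)
```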

Elements of Transport Protocol:

In the transport layer, the larger number of connections that must be managed and variations in the
bandwidth each connection may receive make the idea of dedicating many buffers to each one less
attractive. In the following sections, we will examine all of these important issues, and others.

a.Addressing:

When an application (e.g., a user) process wishes to set up a connection to a remote application process, it
must specify which one to connect to. (Connectionless transport has the same problem: to whom should
each message be sent?) The method normally used is to define transport addresses to which processes can
listen for connection requests. In the Internet, these endpoints are called ports. We will use the generic
term TSAP (Transport Service Access Point) to mean a specific endpoint in the transport layer. The
analogous endpoints in the network layer (i.e., network layer addresses) are not-surprisingly called
NSAPs (Network Service Access Points). IP addresses are examples of NSAPs.

Application processes, both clients and servers, can attach themselves to a local TSAP to establish a
connection to a remote TSAP. These connections run through NSAPs on each host, as shown. The
purpose of having TSAPs is that in some networks, each computer has a single NSAP, so some way is
needed to distinguish multiple transport endpoints that share that NSAP.

Transport connection scenario:

1. The time-of-day server process on host 2 attaches itself to TSAP 51 to wait for an
incoming call. For this it can use the LISTEN call.
2. An application process on host 1 wants to find out the time of day, so it issues a
connect request specifying TSAP 13 as the source and TSAP 51 as the destination.
3. A transport connection is established between the application process on host 1 and
server 1 on host 2.
4. The application process then sends a request for the time of day.
5. The time server process responds with the current time.
6. The transport connection is then released.

b. Connection Establishment:

The problem is how to establish a connection when the subnet can lose, store, and duplicate
packets.

Example:

1. A user establishes a connection with a bank.
2. The user sends a message to the bank to transfer a large amount of money to the account
of a not entirely trustworthy person.
3. The user then releases the connection.

Suppose each packet in this scenario is duplicated and stored in the subnet.

After the connection has been released, all the stored packets pop out of the subnet and
arrive at the destination in order, asking the bank to establish a new connection, transfer
the money (again), and release the connection.

Hand shaking scenario for establishing a connection:

Normal operation:

1. Host 1 selects a sequence number 'x' and sends a CONNECTION REQUEST TPDU containing it to
host 2. Host 2 replies with an ACK TPDU acknowledging 'x' and announcing its own initial
sequence number 'y'.
2. Host 1 acknowledges host 2's choice of an initial sequence number in the first
data TPDU that it sends.
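A minimal sketch of this normal-operation exchange; the randomly chosen initial sequence
numbers play the roles of 'x' and 'y':

```python
import random

def three_way_handshake():
    x = random.randrange(2**32)                 # host 1 picks initial sequence number x
    print(f"H1 -> H2: CONNECTION REQUEST (seq={x})")
    y = random.randrange(2**32)                 # host 2 picks its own initial number y
    print(f"H2 -> H1: ACK (ack={x}, seq={y})")  # acknowledges x, announces y
    print(f"H1 -> H2: DATA (seq={x}, ack={y})") # first data TPDU acknowledges y

three_way_handshake()
```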

Old duplicate:

The first TPDU is a delayed duplicate connection request from an old connection. This
TPDU arrives at host 2 without host 1's knowledge.

Host 2 sends an ACK TPDU to host 1, asking for verification.

When host 1 rejects host 2's attempt to establish a connection, host 2 realizes that it was
tricked by a delayed duplicate and abandons the connection, so the delayed duplicate does no
damage.

Figure: old duplicate CR

Duplicate CR and duplicate ACK:

When both a delayed CONNECTION REQUEST and a delayed ACK are floating around in the subnet,
host 2 gets the delayed connection request and replies to it.

When the second delayed TPDU arrives at host 2, the fact that 'z' has been acknowledged
rather than 'y' tells host 2 that this too is an old duplicate.

c. Connection Release:

Connection release is easier than connection establishment. There are two types:

1. Symmetric release
2. Asymmetric release

Asymmetric release is abrupt and may result in data loss. One way to avoid data loss is to
use symmetric release, in which each direction is released independently of the other.

Figure: abrupt disconnection with loss of data.

After a connection is established, host 1 sends a TPDU that arrives properly at host 2.
Then host 1 sends another TPDU.

Unfortunately, host 2 issues a DISCONNECT before the second TPDU arrives. The result is that
the connection is released and the data in the second TPDU is lost.

As shown above, abrupt release can lose data; a more sophisticated release protocol is
required to avoid data loss.

Four protocol scenario for releasing a connection:

Normal case: the three-way handshake.

In the figure above, one of the users sends a DR (DISCONNECTION REQUEST) TPDU to initiate the
connection release.

When it arrives, the recipient sends back a DR TPDU and starts a timer.

When this DR arrives, the original sender sends back an ACK TPDU and releases the connection.

Finally, when the ACK TPDU arrives, the receiver also releases the connection.

Final ACK lost:

If the final ACK TPDU is lost, the situation is saved by the timer. When the timer expires,
the connection is released anyway.

Response lost:

If the DR is lost, the user initiating the disconnection will not receive the expected
response and will time out.

The second time, no TPDUs are lost and all TPDUs are delivered correctly and on time.

Response lost and subsequent DRs lost:

d. Buffering and Flow Control:


Having examined connection establishment and release in some detail, let us now look at how
connections are managed while they are in use. The key issues are error control and flow control. Error
control is ensuring that the data is delivered with the desired level of reliability, usually that all of the data
is delivered without any errors. Flow control is keeping a fast transmitter from overrunning a slow
receiver. Both of these issues have come up before, when we studied the data link layer. The solutions
that are used at the transport layer are the same mechanisms that we studied in Chap. 3. As a very brief
recap:
1. A frame carries an error-detecting code (e.g., a CRC or checksum) that is used to check if the
information was correctly received.

2. A frame carries a sequence number to identify itself and is retransmitted by the sender until it receives
an acknowledgement of successful receipt from the receiver. This is called ARQ (Automatic Repeat
Request).
3. There is a maximum number of frames that the sender will allow to be outstanding at any time,
pausing if the receiver is not acknowledging frames quickly enough. If this maximum is one packet the
protocol is called stop-and-wait. Larger windows enable pipelining and improve performance on long,
fast links.
4. The sliding window protocol combines these features and is also used to support bidirectional data
transfer.
Given that transport protocols generally use larger sliding windows, we will look at the issue of buffering
data more carefully. Since a host may have many connections, each of which is treated separately, it may
need a substantial amount of buffering for the sliding windows. The buffers are needed at both the sender
and the receiver. Certainly they are needed at the sender to hold all transmitted but as yet
unacknowledged segments. They are needed there because these segments may be lost and need to be
retransmitted.
There still remains the question of how to organize the buffer pool. If most segments are nearly the same
size, it is natural to organize the buffers as a pool of identically sized buffers, with one segment per
buffer. However, if there is wide variation in segment size, from short requests for Web pages to large
packets in peer-to-peer file transfers, a pool of fixed-sized buffers presents problems. If the buffer size is
chosen to be equal to the largest possible segment, space will be wasted whenever a short segment
arrives. If the buffer size is chosen to be less than the maximum segment size, multiple buffers will be
needed for long segments, with the attendant complexity.
Another approach to the buffer size problem is to use variable-sized buffers. The advantage here is
better memory utilization, at the price of more complicated buffer management. A third possibility is to
dedicate a single large circular buffer per connection. This system is simple and elegant and does not
depend on segment sizes, but makes good use of memory only when the connections are heavily loaded.
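A minimal sketch of the third option, a single circular buffer per connection (Python; the
64 KB capacity is an arbitrary choice). Note that write() accepts only as many bytes as
currently fit, which is exactly the number a receiver would advertise for flow control:

```python
class CircularBuffer:
    def __init__(self, capacity=65536):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.head = 0          # index of the next byte to read
        self.used = 0          # bytes currently buffered

    def free(self):
        return self.capacity - self.used

    def write(self, data: bytes) -> int:
        n = min(len(data), self.free())      # never overrun the buffer
        for i in range(n):
            self.buf[(self.head + self.used + i) % self.capacity] = data[i]
        self.used += n
        return n                             # bytes accepted

    def read(self, n: int) -> bytes:
        n = min(n, self.used)
        out = bytes(self.buf[(self.head + i) % self.capacity] for i in range(n))
        self.head = (self.head + n) % self.capacity
        self.used -= n
        return out
```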

As connections are opened and closed and as the traffic pattern changes, the sender and receiver need to
dynamically adjust their buffer allocations. Consequently, the transport protocol should allow a sending
host to request buffer space at the other end. Buffers could be allocated per connection, or collectively,
for all the connections running between the two hosts. Alternatively, the receiver, knowing its buffer
situation (but not knowing the offered traffic) could tell the sender ‘‘I have reserved X buffers for you.’’
If the number of open connections should increase, it may be necessary for an allocation to
be reduced, so the protocol should provide for this possibility.

e. Multiplexing:
Multiplexing, or sharing several conversations over connections, virtual circuits, and
physical links, plays a role in several layers of the network architecture. In the transport
layer, the need for multiplexing can arise in a number of ways. For example, if only one
network address is available on a host, all transport connections on that machine have to use
it. When a segment comes in, some way is needed to tell which process to give it to. This
situation is called multiplexing.
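A minimal sketch of this demultiplexing step (Python; the bound port numbers are
hypothetical): segments share one network address, and the destination port alone decides
which process queue receives the payload.

```python
from queue import Queue

process_queues = {53: Queue(), 80: Queue()}   # one queue per process/bound port

def demultiplex(segment):
    """segment is modeled as (src_port, dst_port, payload)."""
    src_port, dst_port, payload = segment
    q = process_queues.get(dst_port)
    if q is None:
        print(f"no process bound to port {dst_port}; segment dropped")
    else:
        q.put((src_port, payload))            # hand the payload to the right process

demultiplex((40001, 80, b"GET / HTTP/1.1"))
print(process_queues[80].get())               # (40001, b'GET / HTTP/1.1')
```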

Multiplexing can also be useful in the transport layer for another reason. Suppose, for example, that a host
has multiple network paths that it can use. If a user needs more bandwidth or more reliability than one of
the network paths can provide, a way out is to have a connection that distributes the traffic among
multiple network paths on a round-robin basis. This modus operandi is called inverse multiplexing.

f. Crash Recovery:

If hosts and routers are subject to crashes or connections are long-lived (e.g., large software or media
downloads), recovery from these crashes becomes an issue. If the transport entity is entirely within the
hosts, recovery from network and router crashes is straightforward. The transport entities expect lost
segments all the time and know how to cope with them by using retransmissions. A more troublesome
problem is how to recover from host crashes. In particular, it may be desirable for clients to be able to
continue working when servers crash and quickly reboot. To illustrate the difficulty, let us assume that one
host, the client, is sending a long file to another host, the file server, using a simple stop-and-wait protocol.
The transport layer on the server just passes the incoming segments to the transport user, one by one.
Partway through the transmission, the server crashes. When it comes back up, its tables are reinitialized, so
it no longer knows precisely where it was. In an attempt to recover its previous status, the server might
send a broadcast segment to all other hosts, announcing that it has just crashed and requesting that its
clients inform it of the status of all open connections. Each client can be in one of two states: one segment
outstanding, S1, or no segments outstanding, S0. Based on only this state information, the client must
decide whether to retransmit the most recent segment. At first glance, it would seem obvious: the client
should retransmit if and only if it has an unacknowledged segment outstanding (i.e., is in state S1) when it
learns of the crash. However, a closer inspection reveals difficulties with this naive approach.
Consider, for example, the situation in which the server’s transport entity first sends an acknowledgement
and then, when the acknowledgement has been sent, writes to the application process. Writing a segment
onto the output stream and sending an acknowledgement are two distinct events that cannot be done
simultaneously. If a crash occurs after the acknowledgement has been sent but before the write has been
fully completed, the client will receive the acknowledgement and thus be in state S0 when the crash
recovery announcement arrives. The client will therefore not retransmit, (incorrectly) thinking that the
segment has arrived. This decision by the client leads to a missing segment. At this point you may be
thinking: ‘‘That problem can be solved easily. All you have to do is reprogram the transport entity to first
do the write and then send the acknowledgement.’’ Try again.
Imagine that the write has been done but the crash occurs before the acknowledgement can be sent. The
client will be in state S1 and thus retransmit, leading to an undetected duplicate segment in the output
stream to the server application process. No matter how the client and server are programmed, there are
always situations where the protocol fails to recover properly. The server can be programmed in one of two
ways: acknowledge first or write first. The client can be programmed in one of four ways: always
retransmit the last segment, never retransmit the last segment, retransmit only in state S0, or retransmit
only in state S1. This gives eight combinations, but as we shall see, for each combination there is some set
of events that makes the protocol fail.
Three events are possible at the server: sending an acknowledgement (A), writing to the output process
(W), and crashing (C). The three events can occur in six different orderings: AC(W), AWC, C(AW),
C(WA), WAC, and WC(A), where the parentheses are used to indicate that neither A nor W can follow C
(i.e., once it has crashed, it has crashed). Figure 6-18 shows all eight combinations of client and server
strategies and the valid event sequences for each one. Notice that for each strategy there is some sequence
of events that causes the protocol to fail. For example, if the client always retransmits, the AWC event will
generate an undetected duplicate, even though the other two events work properly.
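The exhaustive argument above is small enough to check mechanically. This sketch enumerates
the six server event orderings against the four client strategies and reports, for each
strategy, the orderings under which a segment ends up lost or duplicated:

```python
orderings = {          # events executed before the crash, per ordering
    "AC(W)": "A", "AWC": "AW", "C(AW)": "", "C(WA)": "", "WAC": "WA", "WC(A)": "W",
}
strategies = {
    "always retransmit": lambda state: True,
    "never retransmit":  lambda state: False,
    "retransmit in S0":  lambda state: state == "S0",
    "retransmit in S1":  lambda state: state == "S1",
}

for name, decide in strategies.items():
    failures = []
    for order, executed in orderings.items():
        state = "S0" if "A" in executed else "S1"   # did the ack arrive before the crash?
        writes = ("W" in executed) + decide(state)  # original write + possible retransmission
        if writes != 1:                             # 0 = segment lost, 2 = duplicate
            failures.append(f"{order}: {'lost' if writes == 0 else 'duplicate'}")
    print(f"{name} -> fails on {failures}")
```

Running it confirms the text: every strategy fails for at least one ordering, e.g. the
always-retransmit client duplicates the segment under AWC.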


User Datagram Protocol (UDP):

It is a simple, datagram-oriented transport layer protocol, used in place of TCP where its
simplicity suffices. UDP is a connectionless protocol that provides no reliability or flow
control mechanism. It also has no error recovery procedures.

Several application layer protocols such as TFTP (trivial file transfer protocol) and RPC use UDP.

UDP makes use of the port concept to direct each datagram to the proper upper-layer
application.

UDP serves as a simple interface to the IP.

The port number identifies the sending process and the receiving process.

The UDP datagram contains a source port number and a destination port number. The source port
number identifies the port of the sending application process.

The destination port number identifies the receiving process on the destination host machine.
The UDP length field is the length of the UDP header and the UDP data in bytes. The minimum
value for this field is 8 bytes.

The UDP checksum covers the UDP header and the UDP data. Both UDP and TCP include a 12-byte
pseudo-header with the datagram just for the checksum computation.

The pseudo-header includes certain fields from the IP header; its purpose is to let UDP
double-check that the data has arrived at the correct destination.
The UDP checksum is an end-to-end checksum. It is calculated by the sender, and then verified
by the receiver. It is designed to catch any modification of the UDP header or data anywhere
between sender and receiver.
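A sketch of this computation for IPv4 (Python; the addresses, ports, and payload are made
up): the checksum is the 16-bit one's-complement of the one's-complement sum over the 12-byte
pseudo-header, the UDP header (with its checksum field set to zero), and the data.

```python
import socket, struct

def ones_complement_sum(data: bytes) -> int:
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total > 0xFFFF:             # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> int:
    length = 8 + len(payload)         # UDP header (8 bytes) + data
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
              struct.pack("!BBH", 0, 17, length))  # zero, protocol 17 = UDP, length
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)  # checksum field = 0
    return ~ones_complement_sum(pseudo + header + payload) & 0xFFFF

print(hex(udp_checksum("192.0.2.1", "192.0.2.2", 40000, 53, b"hello")))
```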
Applications
1. UDP is used for some route updating protocols such as RIP.
2. It is used for multicasting.
3. It is suitable for processes that have their own internal flow and error control
mechanisms.

Transmission Control Protocol (TCP):

TCP is a connection-oriented protocol and reliable in nature; the term connection-oriented
means the two applications using TCP must establish a TCP connection with each other before
they can exchange data.

It determines how to break application data into segments, accepts packets from the network
layer, manages flow control, and retransmits lost data, because it is meant to provide
error-free data transmission.

TCP Services:

TCP and UDP use the same network layer (IP). TCP provides a connection-oriented, reliable,
byte-stream service.

It does not support multicasting or broadcasting. The application data is broken into what
TCP considers the best-sized chunks to send. The unit of information passed by TCP to IP is
called a segment.

When TCP sends a segment it maintains a timer, waiting for the other end to acknowledge
reception of the segment. If an acknowledgment is not received in time, the segment is
retransmitted.

When TCP receives data from the other end of the connection, it sends an acknowledgment. TCP
maintains a checksum on its header and data.

TCP segments are transmitted as IP datagrams; since IP datagrams can arrive out of order, TCP
segments can arrive out of order.

Since IP datagrams can get duplicated, a receiving TCP must discard duplicate data.


TCP also provides flow control. Each end of a TCP connection has a finite amount of buffer
space.

A receiving TCP only allows the other end to send as much data as the receiver has buffers
for. This prevents a fast host from overrunning the buffers on a slower host.

A TCP connection is a byte stream, not a message stream: a stream of 8-bit bytes is exchanged
across the TCP connection between the two applications.

There are no record markers automatically inserted by TCP; this is called byte-stream
service. TCP does not interpret the contents of the bytes at all. It has no idea whether the
data bytes being exchanged are binary data, ASCII characters, or anything else.
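A small demonstration of the byte-stream point (Python, loopback; port 6100 is arbitrary):
two separate sendall() calls on one side may well be returned by a single recv() on the
other, because TCP preserves no message boundaries. The coalescing is not guaranteed, only
permitted.

```python
import socket, threading, time

def server(listener):
    conn, _ = listener.accept()
    time.sleep(0.2)                    # let both client writes land first
    print(conn.recv(1024))             # often prints b'firstsecond' as one chunk
    conn.close()
    listener.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 6100))
listener.listen(1)
threading.Thread(target=server, args=(listener,)).start()

c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(("127.0.0.1", 6100))
c.sendall(b"first")                    # two separate "messages"...
c.sendall(b"second")                   # ...with no boundary preserved by TCP
time.sleep(0.5)
c.close()
```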

TCP Header Format:

The TCP data is encapsulated in an IP datagram, as shown in the figure:

Figure: TCP header

Source Port: it specifies the application sending the segment. This is different from the IP
address, which specifies an internet address (16 bits).

Destination Port: it identifies the receiving application. Port numbers below 256 are called
well-known ports and have been assigned to commonly used applications, for example:

FTP – 21
TELNET – 23
DNS – 53

Sequence Number: each byte in the stream is numbered using the sequence number (it wraps back
to 0 after 2^32 − 1).

Acknowledgement Number: this field holds the sequence number of the next data byte that the
sender of the acknowledgement expects to receive, and is valid only if the ACK bit is set. If
the ACK bit is not set, this field has no effect.

Header length: it specifies the length of the TCP header in 32-bit words.

Reserved: this field is reserved for future use and must be set to 0 (zero).

Flag bits: one or more of these flags may be set at the same time:

URG: the urgent pointer is valid if this bit is set to 1.

ACK: this bit is set to 1 to indicate that the acknowledgement number is valid.

PSH: the receiver should pass this data to the application as soon as possible.

RST: this flag is used to reset the connection; it is also used to reject an invalid segment.

SYN: synchronize sequence numbers to initiate a connection. The connection request
has SYN = 1 and ACK = 0 to indicate that the piggyback acknowledgement field is not in use.

FIN: the FIN bit is used to release a connection. It specifies that the sender has finished
sending data.

Window size: this field is used to control the flow of data.

Checksum: used for transport layer error detection.

Urgent pointer: if the URG flag bit is set, the segment contains urgent data, meaning the
receiving TCP entity must deliver it to the higher layers immediately.

Options: this field is of variable size; it may be used to provide functions that are not
covered by the fixed header.

Data: data field size is variable. It contains user data.
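To tie the fields together, here is a sketch that packs and parses the fixed 20-byte TCP
header with Python's struct module; the segment below is hand-built for the example, not
captured traffic.

```python
import struct

def parse_tcp_header(raw: bytes):
    (src_port, dst_port, seq, ack,
     off_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", raw[:20])
    header_len = (off_flags >> 12) * 4        # data offset is given in 32-bit words
    flags = off_flags & 0x3F                  # URG|ACK|PSH|RST|SYN|FIN bits
    return dict(src_port=src_port, dst_port=dst_port, seq=seq, ack=ack,
                header_len=header_len, flags=flags,
                window=window, checksum=checksum, urgent=urgent)

# A SYN segment from port 40000 to port 80: seq = 100, ack = 0, SYN flag (0x02) only.
segment = struct.pack("!HHIIHHHH", 40000, 80, 100, 0,
                      (5 << 12) | 0x02, 65535, 0, 0)   # 5 words = 20-byte header
print(parse_tcp_header(segment))
```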

TCP provides full-duplex service; therefore each end of a connection must maintain a sequence
number for the data flowing in each direction.

TCP can be described as a sliding window protocol without selective or negative
acknowledgements.

The TCP header length is given in 32-bit words. This information is needed because the
options field is of variable length; with a 4-bit field, TCP is limited to a 60-byte header.

TCP flow control is handled using a variable-size sliding window. The window is the number of
bytes, starting with the one specified by the acknowledgment number field, that the receiver
is willing to accept.

TCP Sliding Window and Flow Control:


Flow control is a technique whose primary purpose is to properly match the
transmission rate of sender to that of the receiver and the network.
It is important for the transmission to be high enough rates to ensure good performance,
but also protect again over whelming the network(or)receiving host.
Flow control is not same as the congestion control. Congestion control is primary
concerned with a sustained overload of network intermediate devices such as IP routers
TCP uses the window field, as the primary means for flow control. During the data
transferphase,thewindowfieldisusedtoadjusttherateofflowofthebytestream
between communicating TCP’s.

In the figure above there is a 4-byte sliding window moving from left to right; the window
"slides" as bytes in the stream are sent and acknowledged.


A TCP sliding window provides more efficient use of network bandwidth than positive
acknowledgement and retransmission (PAR) because it enables hosts to send multiple bytes or
packets before waiting for an acknowledgement.
In TCP, the receiver specifies the current window size in every packet. Because TCP provides
a byte-stream connection, window sizes are expressed in bytes.
This means that a window is the number of bytes that the sender is allowed to send before
waiting for an acknowledgement.

Note: window size > 0 means the connection is set up and data can be transferred;
window size = 0 means the sender must stop sending data.
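A minimal sketch of the sender side of this scheme (Python; the 4-byte window and the
ever-acknowledging receiver are artificial simplifications): the sender may have at most the
advertised window's worth of unacknowledged bytes outstanding.

```python
def sender(stream: bytes, acks):
    """`acks` yields (ack_number, advertised_window) pairs from the receiver."""
    base = 0            # oldest unacknowledged byte
    next_byte = 0       # next byte to send
    window = 4          # receiver's advertised window, in bytes
    while base < len(stream):
        # send while the window permits more unacknowledged bytes outstanding
        while next_byte < len(stream) and next_byte - base < window:
            print(f"send byte {next_byte}: {stream[next_byte:next_byte+1]}")
            next_byte += 1
        ack, window = next(acks)   # window may shrink or grow (0 = stop sending)
        base = max(base, ack)      # slide the window forward past acked bytes

def fake_acks():
    # A toy receiver: acknowledges 4 bytes at a time, always advertising 4.
    n = 0
    while True:
        n += 4
        yield n, 4

sender(b"HELLODATA", fake_acks())
```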
TCP congestion control:

Congestion can occur when multiple input streams arrive at a router whose output capacity is
less than the sum of the inputs.

Congestion can also occur when data arrives on a fast LAN and goes out over a slower WAN.

The router R1 is labeled the bottleneck because it is the congestion point: router R1 can
receive packets from the LAN on its left faster than they can be sent out over the WAN on its
right.

When router R2 puts the received packets onto the LAN on its right, they maintain the same
spacing as they did on the WAN on its left, even though the bandwidth of the LAN is higher.

Slow start is the way to initiate data flow across a connection. Congestion avoidance is a
way to deal with lost packets.
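A minimal sketch of the slow start idea (the threshold and round count are arbitrary): the
congestion window starts at one segment and doubles each round-trip time until it reaches a
threshold, after which growth becomes linear (congestion avoidance).

```python
def grow_cwnd(rounds=8, ssthresh=16):
    cwnd = 1                     # congestion window, in segments
    for rtt in range(rounds):
        phase = "slow start" if cwnd < ssthresh else "congestion avoidance"
        print(f"RTT {rtt}: cwnd = {cwnd:2d} segments ({phase})")
        if cwnd < ssthresh:
            cwnd *= 2            # exponential growth: doubles once per RTT
        else:
            cwnd += 1            # linear growth: one extra segment per RTT

grow_cwnd()
```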

Congestion

Congestion: when too many packets rush to a node or a part of the network, the network
performance degrades; this situation is called congestion.


When too many packets are dumped into the subnet and traffic increases, the network is no
longer able to cope and begins losing packets; at very high traffic, performance collapses
completely and almost no packets are delivered.

Congestion control is the process of keeping the number of packets in the network below the
level at which performance falls off.

Congestion control makes sure that the subnet is able to carry the offered traffic, so
congestion control is a different process from flow control.

The effect of congestion on the throughput of a network is shown above.

Ideally, as offered load increases, throughput also increases. In practice, throughput drops
as offered load increases because the buffers at each node fill up and nodes start discarding
packets.
The source stations must then retransmit the discarded packets in addition to the new packets.

Under these circumstances, network utilization, and hence performance, falls off.

Causes of Congestion:

1. At any node, if streams of packets suddenly begin arriving on three or more links and all
need the same output link, a queue of packets builds up for the outgoing channel.
2. Another cause of congestion is slow processor speed. If the router's CPU is slow at
performing tasks such as queuing buffers and updating tables, queues build up even though
the line capacity is not fully utilized.
3. The bandwidth of the links is also important. The links used must have high enough
bandwidth to avoid congestion.
4. Any mismatch between parts of the system also causes congestion: upgrading the processor
speed but not the links, or vice versa, will cause congestion.
5. Congestion also arises when the arrival of packets is not uniform, i.e., when the traffic
is bursty.

Congestion Control:

Congestion control refers to the techniques used to control or prevent congestion. Congestion control
techniques can be broadly classified into two categories:

Open Loop Congestion Control


Open loop congestion control policies are applied to prevent congestion before it happens. The congestion
control is handled either by the source or the destination.

Policies adopted by open loop congestion control:


1. Retransmission policy:
This policy governs the retransmission of packets. If the sender believes that a sent packet
has been lost or corrupted, the packet needs to be retransmitted, and this retransmission may
increase the congestion in the network. To prevent congestion, retransmission timers must be
designed both to prevent congestion and to optimize efficiency.

2. Window policy:
The type of window at the sender side may also affect congestion. With a Go-Back-N window,
several packets are resent even though some of them may have been received successfully at
the receiver side; this duplication may increase the congestion in the network and make it
worse. Therefore, a selective repeat window should be adopted, as it resends only the
specific packets that may have been lost.
3. Discarding policy:
With a good discarding policy, routers may prevent congestion by partially discarding
corrupted or less sensitive packets while still maintaining the quality of the message. In
the case of audio file transmission, routers can discard less sensitive packets to prevent
congestion while maintaining the quality of the audio.
4. Acknowledgment policy:
Since acknowledgements are also part of the load on the network, the acknowledgment policy
imposed by the receiver may also affect congestion. Several approaches can be used to prevent
congestion related to acknowledgments: the receiver can send an acknowledgement for N packets
rather than sending an acknowledgement for each single packet, or it can send an
acknowledgment only if it has a packet to send or a timer expires.
5. Admission policy:
In the admission policy, a mechanism is used to prevent congestion before it occurs. Switches
in a flow should first check the resource requirements of a flow before transmitting it
further. If there is a chance of congestion, or there is already congestion in the network, a
router should refuse to establish a virtual-circuit connection to prevent further congestion.
All the above policies are adopted to prevent congestion before it happens in the network.

Closed Loop Congestion Control:


Closed loop congestion control technique is used to treat or alleviate congestion after it happens. Several
techniques are used by different protocols; some of them are:

1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from its
upstream node. This may cause the upstream node or nodes to become congested and to reject
data from the nodes above them. Backpressure is a node-to-node congestion control technique
that propagates in the opposite direction of the data flow. The backpressure technique can be
applied only to virtual circuits, where each node has information about its upstream node.

In the diagram above, the 3rd node is congested and stops receiving packets; as a result, the
2nd node may become congested because its output data flow slows down. Similarly, the 1st
node may become congested and inform the source to slow down.

2. Choke packet technique:
The choke packet technique is applicable to both virtual-circuit and datagram subnets. A
choke packet is a packet sent by a node to the source to inform it of congestion. Each router
monitors its resources and the utilization of each of its output lines. Whenever the resource
utilization exceeds a threshold value set by the administrator, the router sends a choke
packet directly to the source, giving it feedback to reduce the traffic. The intermediate
nodes through which the packets have traveled are not warned about the congestion.

3. Implicit signaling:
In implicit signaling, there is no communication between the congested node or nodes and the
source. The source guesses that there is congestion somewhere in the network. For example,
when a sender sends several packets and there is no acknowledgment for a while, one
assumption is that there is congestion.
4. Explicit signaling:
In explicit signaling, if a node experiences congestion it can explicitly send a packet to
the source or destination to inform it of the congestion. The difference between the choke
packet technique and explicit signaling is that in explicit signaling the signal is included
in the packets that carry data, rather than in a separate packet as in the choke packet
technique.
Explicit signaling can occur in either the forward or the backward direction.
• Forward signaling: the signal is sent in the direction of the congestion. The destination
is warned about the congestion, and the receiver adopts policies to prevent further
congestion.
• Backward signaling: the signal is sent in the opposite direction of the congestion. The
source is warned about the congestion and needs to slow down.
Performance Issues:
Performance issues are very important in computer networks. The following five aspects help
us understand network performance:
1. Performance problems.
2. Measuring network performance.
3. System design for better performance.
4. Fast segment processing.
5. Protocols for future high performance networks.
Performance Problems in Computer Networks:
These problems may arise for various reasons, such as congestion and structural resource
imbalance.
Network Performance Measurement
The basic steps used to improve performance are:
• Measure the relevant network parameters and performance.
• Understand what is happening.
• Change one parameter at a time.

Measuring network performance and parameters has many pitfalls; a few of them are listed:
• Make sure that the sample size is large enough.
• Make sure that the samples are representative.
• Be sure that nothing unexpected is going on during your tests.
• Understand what you are measuring.
• Be careful about extrapolating the results.

System Design for better Performance


• CPU speed is more important than network speed.
• Reduce the packet count to reduce software overhead.
• Minimize context switches.
• Minimize copying.
• You can buy more bandwidth but not lower delay.
• Avoiding congestion is better than recovering from it.
• Avoid timeouts.
