Visvesvaraya Technological University
"Jnana Sangama", Belagavi, Karnataka-590018
A Presentation Report On
“FREE NETWORK CONGESTION CONTROL”
Submitted in partial fulfillment of the requirements for the Award of Degree of
BACHELOR OF ENGINEERING
In
COMPUTER SCIENCE AND ENGINEERING
Submitted by
Dr. Devika G
Assistant Professor
Department of Computer Science
CERTIFICATE
1……………………… ……………………..
2………………………… ………………………..
ABSTRACT
Computer graphics is a field of computer science that deals with creating, manipulating,
and representing visual content using computers. It encompasses a wide range of
techniques and applications, from simple 2D drawings to complex 3D simulations and virtual
reality.
The main aim of this mini project is to illustrate the concepts and usage of congestion
control in OpenGL. In data networking and queueing theory, network congestion occurs when
a link or node carries so much data that its quality of service deteriorates. When more
packets were sent than intermediate routers could handle, the routers discarded many
packets, expecting the end points of the network to retransmit the information.
When this packet loss occurred, the end points sent extra packets that repeated the
information lost, doubling the data rate sent, exactly the opposite of what should be done during
congestion. This pushed the entire network into a 'congestion collapse' in which most packets
were lost and the resulting throughput was negligible. The project uses input devices
such as the mouse and keyboard to interact with the program.
ACKNOWLEDGEMENT
Last but not least, I would also like to thank my parents and friends for their moral support.
MANOJKUMAR M JIDDI
(4GK21CS024)
DECLARATION
MANOJKUMAR M JIDDI
(4GK21CS024)
CONTENTS
PARTICULARS
1. Introduction
4. Implementation
6. Results
7. Conclusion
8. References
CHAPTER -1
INTRODUCTION
Computer graphics is a fascinating field that sits at the intersection of computer science,
visual art, and mathematics, enabling the creation and manipulation of images and visual
effects through computational processes. It has profoundly impacted numerous industries,
including entertainment, engineering, medicine, education, and more, transforming the way we
interact with and perceive digital media. This introduction delves into the origins, core
concepts, applications, and future directions of computer graphics, offering a comprehensive
overview of its significance and evolution.
The origins of computer graphics can be traced back to the mid-20th century when early
computers were primarily used for numerical calculations. The idea of using computers for
graphical representation emerged as researchers sought new ways to visualize data and interact
with machines. One of the pioneering moments in the history of computer graphics was Ivan
Sutherland's creation of Sketchpad in 1963. Sketchpad was an innovative computer program
that allowed users to interact with graphical objects directly using a light pen, laying the
foundation for graphical user interfaces and computer-aided design (CAD) systems.
The rapid evolution of computer hardware has played a crucial role in the advancement
of computer graphics. Graphics Processing Units (GPUs), specialized hardware designed to
accelerate the rendering process, have become ubiquitous in modern computing. GPUs are
capable of performing parallel processing, allowing them to handle the massive computational
demands of rendering complex scenes. This technological progress has enabled the
development of increasingly sophisticated graphics applications, from immersive virtual reality
environments to cutting-edge scientific visualizations.
In conclusion, computer graphics is a dynamic and ever-evolving field that has had a
profound impact on numerous aspects of modern life. From its early beginnings with Sketchpad
to the sophisticated rendering techniques of today, computer graphics has continuously pushed
the boundaries of what is visually possible. Its applications are vast and varied, influencing
industries as diverse as entertainment, medicine, engineering, and education. As technology
continues to advance, computer graphics will undoubtedly play a central role in shaping the
future of digital media and interactive experiences.
CHAPTER -2
LITERATURE SURVEY
The concept of network congestion control dates back to the early days of computer
networks, with the pioneering work of Kleinrock in the 1960s on queueing theory and packet-
switched networks. The primary goal of congestion control is to manage the data transmission
rate of senders to avoid overwhelming the network, which can lead to packet loss, increased
latency, and reduced throughput. The seminal work by Jacobson in the late 1980s introduced
the first practical congestion control algorithm, TCP Tahoe, which laid the foundation for
subsequent developments in the field.
TCP Tahoe and its successors, such as TCP Reno and TCP NewReno, employ a
combination of additive increase multiplicative decrease (AIMD), slow start, and fast
retransmit mechanisms to control congestion. These algorithms have been extensively studied
and refined over the years, leading to a deeper understanding of their performance
characteristics and limitations. One notable limitation is their suboptimal performance in high-
bandwidth, long-distance networks, often referred to as "long fat networks" (LFNs). To address
these challenges, a plethora of alternative congestion control algorithms have been proposed.
DCCP incorporates features from both TCP and UDP, offering a flexible framework
for real-time applications that require timely delivery but can tolerate some packet loss. The
development and analysis of DCCP have contributed to a broader understanding of congestion
control in diverse network environments.
Another significant area of research in free network congestion control is the use of
active queue management (AQM) techniques. AQM schemes, such as Random Early Detection
(RED) and its variants, aim to control congestion by preemptively dropping packets or marking
them with ECN before the queue becomes full. This proactive approach helps to smooth traffic
flow and maintain lower queue lengths, thereby reducing latency and jitter. The effectiveness
of AQM techniques has been extensively studied through both theoretical analysis and
empirical evaluations.
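The RED behaviour described above can be sketched as a short C fragment (a minimal illustration; the threshold values, queue weight, and function names here are our assumptions, not part of the report):

```c
#include <stddef.h>

/* Exponentially weighted moving average of the queue length,
 * as used by RED to smooth out transient bursts. */
double red_update_avg(double avg, double qlen, double weight)
{
    return (1.0 - weight) * avg + weight * qlen;
}

/* Probability of dropping (or ECN-marking) an arriving packet,
 * given the averaged queue length and the RED thresholds. */
double red_drop_prob(double avg, double min_th, double max_th, double max_p)
{
    if (avg < min_th)
        return 0.0;     /* queue short: never drop              */
    if (avg >= max_th)
        return 1.0;     /* queue long: always drop or mark      */
    /* between the thresholds the probability grows linearly */
    return max_p * (avg - min_th) / (max_th - min_th);
}
```

Dropping with a probability that rises with the averaged queue length is what lets RED signal congestion early while the instantaneous queue still has headroom.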
The advent of software-defined networking (SDN) has opened new avenues for
congestion control research. SDN decouples the control plane from the data plane, allowing
for more centralized and flexible management of network resources. Researchers have
explored various SDN-based congestion control mechanisms that leverage global network
visibility and programmability. These mechanisms can dynamically adjust routing and
resource allocation to mitigate congestion, leading to more efficient and adaptive network
operations.
In the realm of wireless networks, congestion control presents unique challenges due to
the dynamic and heterogeneous nature of wireless communication. Traditional congestion
control algorithms often perform poorly in wireless environments, prompting the development
of specialized approaches. Techniques such as rate adaptation, cross-layer optimization, and
cooperative communication have been proposed to enhance congestion control in wireless
networks. These approaches consider factors like signal strength, interference, and mobility,
offering more robust solutions for wireless congestion management.
Recent advancements in machine learning and artificial intelligence (AI) have also
influenced the field of congestion control. AI-driven congestion control algorithms leverage
predictive analytics and real-time data to make more informed decisions about traffic
management. For example, reinforcement learning techniques have been applied to optimize
congestion control strategies dynamically. These approaches have shown promise in achieving
better performance and adaptability compared to traditional algorithms.
The literature on free network congestion control also encompasses studies on fairness
and resource allocation. Fairness in congestion control ensures that all network users receive a
fair share of resources, preventing scenarios where some users monopolize bandwidth at the
expense of others. Algorithms like TCP Vegas and FAST TCP incorporate fairness
considerations into their design, aiming to balance efficiency and equity. Additionally, game-
theoretic models have been employed to analyze and design congestion control mechanisms
that promote fair resource allocation in competitive network environments.
In summary, the literature on free network congestion control is rich and diverse,
reflecting the complexity and importance of this field. From early theoretical models to
advanced AI-driven techniques, researchers have made significant strides in understanding and
addressing the challenges of network congestion. The ongoing evolution of network
technologies and the increasing demand for high-performance communication services ensure
that congestion control will remain a vital area of research and innovation in the years to come.
CHAPTER -3
SYSTEM REQUIREMENTS
Firstly, the system must have robust mechanisms for detecting network congestion. This
involves monitoring various network metrics such as packet loss, delay variations, and
bandwidth utilization. Effective congestion detection can be achieved through techniques like
packet marking, using Explicit Congestion Notification (ECN), or analyzing changes in round-
trip times. The system should be able to promptly identify when the network is approaching or
experiencing congestion to take necessary actions to mitigate it.
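The round-trip-time technique mentioned above can be sketched as follows (an illustrative fragment in C; the smoothing constant and the struct and function names are our assumptions, not the project's code):

```c
/* Detect incipient congestion by watching the smoothed RTT drift
 * above the minimum RTT observed so far on the path. */

typedef struct {
    double srtt;     /* smoothed RTT estimate (seconds)   */
    double min_rtt;  /* smallest RTT seen on this path    */
} rtt_monitor;

void rtt_init(rtt_monitor *m, double first_sample)
{
    m->srtt = first_sample;
    m->min_rtt = first_sample;
}

/* EWMA update, as in classic TCP RTT estimation (alpha = 1/8). */
void rtt_sample(rtt_monitor *m, double sample)
{
    const double alpha = 0.125;
    m->srtt = (1.0 - alpha) * m->srtt + alpha * sample;
    if (sample < m->min_rtt)
        m->min_rtt = sample;
}

/* Infer congestion when the smoothed RTT exceeds the path minimum
 * by more than a chosen factor: queueing delay is building up. */
int rtt_congested(const rtt_monitor *m, double factor)
{
    return m->srtt > factor * m->min_rtt;
}
```

Because only queueing inflates the RTT above its path minimum, this check fires before packets are actually lost, which is exactly the prompt detection the requirement asks for.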
Once congestion is detected, the system needs a reliable notification mechanism to inform
relevant network nodes or endpoints. This can be implemented through packet marking or
direct signaling methods, ensuring that both the sender and receiver are aware of the network
conditions. Timely and accurate notifications are crucial for the system to respond effectively
to congestion and prevent further deterioration of network performance.
The core functionality of the system revolves around dynamic rate adjustment. The system
must be capable of adapting data transmission rates based on real-time network conditions.
This involves increasing the transmission rate when the network is underutilized and reducing
it when congestion is detected. The system should support a variety of congestion control
algorithms, such as TCP Tahoe, TCP Reno, Cubic, and BBR, to cater to different types of
networks and applications.
Hardware Requirements:
➢ Graphics system
➢ Pentium P4 with 256 MB of RAM (minimum)
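The dynamic rate adjustment described above can be sketched as a Reno-style window update (a minimal C illustration; the constants and function names are our assumptions, not the project's code):

```c
/* One step of the sender's window growth: slow start below ssthresh
 * (one segment per ACK, i.e. exponential per RTT), congestion
 * avoidance above it (roughly one segment per RTT). */
double cwnd_on_ack(double cwnd, double ssthresh)
{
    if (cwnd < ssthresh)
        return cwnd + 1.0;        /* slow start               */
    return cwnd + 1.0 / cwnd;     /* additive increase        */
}

/* On a loss signal, halve the window (multiplicative decrease),
 * but never let it fall below one segment. */
double cwnd_on_loss(double cwnd)
{
    double next = cwnd / 2.0;
    return next < 1.0 ? 1.0 : next;
}
```

The asymmetry between the gentle increase and the sharp halving is what lets AIMD-style senders probe for spare bandwidth while backing off quickly when the network signals congestion.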
CHAPTER -4
INTRODUCTION TO OPENGL
You can control modes independently of each other; that is, setting one mode
doesn't affect whether other modes are set. Primitives are specified, modes are
set, and other OpenGL operations are described by issuing commands in the form
of function calls.
Commands are always processed in the order in which they are received, although
there may be an indeterminate delay before a command takes effect. This means
that each primitive is drawn completely before any subsequent command takes
effect. It also means that state-querying commands return data that's consistent
with complete execution of all previously issued OpenGL commands.
4.3: Basic OpenGL Operation
The figure shown below gives an abstract, high-level block diagram of how
OpenGL processes data. In the diagram, commands enter from the left and
proceed through what can be thought of as a processing pipeline. Some
commands specify geometric objects to be drawn, and others control how the
objects are handled during the various processing stages.
As shown by the first block in the diagram, rather than having all commands
proceed immediately through the pipeline, you can choose to accumulate some
of them in a display list for processing at a later time.
The final stage of the pipeline is the per-fragment operations stage, which performs
the last operations on the data before it is stored as pixels in the frame buffer.
These operations include conditional updates to the frame buffer based on incoming
and previously stored z-values (for z-buffering) and blending of incoming pixel
colors with stored colors, as well as masking and other logical operations on pixel values.
All elements of OpenGL state, including the contents of the texture memory and
even of the frame buffer, can be obtained by an OpenGL application.
CHAPTER -5
SYSTEM DESIGN
Principles
Key Components
1. Congestion Detection:
o Metrics Collection: Each node monitors queue lengths, packet loss, delay,
and throughput.
o Thresholds: Nodes use thresholds to detect congestion. For example, if the
queue length exceeds a certain value, congestion is inferred.
2. Congestion Mitigation:
o Rate Limiting: Nodes adjust their transmission rates to reduce congestion.
o Traffic Shaping: Nodes smooth traffic flows to prevent bursts.
o Prioritization: Nodes prioritize critical traffic over less important traffic.
3. Communication Protocols:
o Periodic Updates: Nodes regularly share their congestion status with
neighbors.
o Event-Driven Updates: Nodes send immediate updates when significant
congestion changes occur.
o Gossip Protocols: Information spreads through the network as nodes
randomly share updates with a subset of their neighbors
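The threshold-based congestion detection and event-driven updates listed above can be sketched together as follows (an illustrative C fragment with hysteresis added so the status does not flap; the high/low marks and all names are our assumptions, not part of the design):

```c
/* A node's advertised congestion status, driven by queue length:
 * it is raised when the queue passes a high-water mark and cleared
 * only after the queue falls back below a low-water mark. */

typedef struct {
    int high_mark;   /* queue length that raises the congested flag */
    int low_mark;    /* queue length that clears it again           */
    int congested;   /* current advertised status (0 or 1)          */
} node_status;

/* Update the status from the current queue length. Returns 1 when
 * the status changed, i.e. when an event-driven update should be
 * sent to the node's neighbours. */
int node_observe_queue(node_status *n, int qlen)
{
    int before = n->congested;
    if (qlen >= n->high_mark)
        n->congested = 1;
    else if (qlen <= n->low_mark)
        n->congested = 0;
    return n->congested != before;
}
```

The gap between the two marks prevents a queue hovering near a single threshold from generating a storm of status updates.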
3. glutCreateWindow(): opens the OpenGL window and displays the title at the top of
the window.
19. glTranslatef(): used to translate or move the rotation centre from one point to another
in three dimensions.
➢ Use the right mouse button to open the menu and choose an option accordingly.
➢ Q -> Quit
CHAPTER -6
IMPLEMENTATION
6.1: End-to-End Congestion Control
In this approach, the endpoints (i.e., the sender and receiver) detect and respond to congestion
without requiring intermediate nodes to perform congestion control. The most common method
is through TCP (Transmission Control Protocol) variants, which adjust the sending rate based
on network feedback.
Key Techniques:
➢ TCP Reno: Adjusts the congestion window size based on packet loss signals, using
algorithms like slow start, congestion avoidance, fast retransmit, and fast recovery.
➢ TCP Vegas: Uses RTT (Round-Trip Time) measurements to detect congestion early
and adjust the sending rate before packet loss occurs.
6.2: Delay-Based Congestion Control
These methods use network delay as a signal for congestion. The idea is to detect congestion
before packet loss occurs by observing increases in RTT.
Key Techniques:
➢ TCP Vegas: Measures RTT to adjust the sending rate, aiming to prevent congestion by
reacting to increases in delay.
➢ FAST TCP: A variant of TCP that also uses RTT measurements to adjust the sending
rate more aggressively and accurately.
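The Vegas-style comparison of expected and actual throughput can be sketched as follows (an illustrative C fragment; the alpha/beta band follows the usual Vegas formulation, but the exact values and the function name are our assumptions):

```c
/* Vegas-style delay-based window update: the gap between expected
 * throughput (cwnd / base RTT) and actual throughput (cwnd / RTT),
 * scaled back to segments, estimates how many of this flow's
 * segments are sitting in router queues. */
double vegas_update(double cwnd, double base_rtt, double rtt,
                    double alpha, double beta)
{
    double expected = cwnd / base_rtt;               /* segs/sec   */
    double actual   = cwnd / rtt;
    double diff     = (expected - actual) * base_rtt; /* queued segs */

    if (diff < alpha)
        return cwnd + 1.0;   /* little queueing: probe for more  */
    if (diff > beta)
        return cwnd - 1.0;   /* queue building: back off early   */
    return cwnd;             /* within the target band: hold     */
}
```

Because the decision uses RTT inflation rather than loss, the sender backs off while queues are still short, which is the early reaction the text describes.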
6.3: Explicit Congestion Notification (ECN)
ECN is a mechanism by which network routers mark packets instead of dropping them to signal
congestion. Endpoints then adjust their sending rates based on these marks.
6.4: Active Queue Management (AQM)
Routers manage their packet queues to control congestion proactively by dropping or marking
packets before the queue becomes full.
Key Techniques:
➢ RED (Random Early Detection): Drops packets randomly before the queue is full to
signal congestion early, encouraging endpoints to reduce their sending rates.
➢ CoDel (Controlled Delay): Drops packets based on the time they have spent in the
queue, aiming to maintain a target delay.
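The CoDel rule described above can be sketched as a simplified drop decision (an illustrative C fragment; real CoDel also ramps up its drop rate after the first drop, which is omitted here, and the 5 ms / 100 ms values are its usual defaults used as assumptions):

```c
/* Simplified CoDel: drop a packet once the per-packet sojourn time
 * (time spent in the queue) has stayed above the target delay for
 * a full interval. */

typedef struct {
    double target;       /* acceptable queueing delay (s), e.g. 0.005 */
    double interval;     /* how long delay must persist (s), e.g. 0.1 */
    double first_above;  /* when sojourn first exceeded target, or a
                            negative sentinel if it currently doesn't */
} codel_state;

/* Called for each dequeued packet with its sojourn time and the
 * current clock; returns 1 if this packet should be dropped. */
int codel_should_drop(codel_state *c, double sojourn, double now)
{
    if (sojourn < c->target) {
        c->first_above = -1.0;    /* delay acceptable: reset       */
        return 0;
    }
    if (c->first_above < 0.0) {
        c->first_above = now;     /* start the grace interval      */
        return 0;
    }
    return now - c->first_above >= c->interval;
}
```

Keying on time-in-queue rather than queue length is what lets CoDel tolerate short bursts while still bounding standing delay.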
6.5: Rate-Based Congestion Control
This approach involves dynamically adjusting the sending rate based on feedback from the
network or the receiver.
Key Techniques:
➢ XCP (eXplicit Control Protocol): Routers provide explicit feedback about the level
of congestion, and senders adjust their rates accordingly.
➢ RCP (Rate Control Protocol): Routers compute and communicate a fair rate for each
flow to the senders.
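The explicit-feedback idea behind RCP can be sketched as a simplified per-flow rate update (an illustration only; the actual RCP equation also accounts for queue drain time, and the gain constant and names here are our assumptions):

```c
/* A router advertises one fair rate to every flow, nudging it up
 * when the link has spare capacity and down when it is overloaded.
 * capacity and input_rate are in bits per second. */
double rcp_fair_rate(double capacity, double input_rate, int flows,
                     double prev_rate, double gain)
{
    double spare = capacity - input_rate;        /* unused capacity */
    double rate  = prev_rate + gain * spare / flows;
    if (rate < 0.0)
        rate = 0.0;
    if (rate > capacity)
        rate = capacity;
    return rate;
}
```

Because every flow receives the same advertised rate, new flows reach their fair share in a few RTTs instead of probing for it packet by packet as TCP does.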
CODE
➢ Header Includes and Globals: Includes the headers the program needs: <windows.h>
for the Windows build, <string.h> for string handling, <stdarg.h> for variable-argument
lists, <stdio.h> for input/output, and <glut.h> for OpenGL/GLUT, and declares the
global state used by the animation.
#include <windows.h>
#include <string.h>
#include <stdarg.h>
#include <stdio.h>
#include <glut.h>

void *font;
void *currentfont;
static double x = 0.0, x1 = -3.5, x2 = 3.5, y1 = -1.4, y2 = -1.4, x3 = -3.5, y3 = 1.3;
static double move = -60;
static bool goDown = false, goup = false, down = false, congested = false, remote = false;
➢ setFont Function: Sets the font for drawing characters. Currently, it always sets the
font to GLUT_BITMAP_TIMES_ROMAN_24 regardless of the font parameter.
Model and simulation testing for free network congestion control involves creating a virtual
environment to evaluate the performance and effectiveness of the proposed congestion control
mechanisms. This process typically begins with developing a comprehensive model that
represents the network, including its topology, traffic patterns, and key components such as
routers, endpoints, and control algorithms. The simulation environment is built to replicate
real-world network conditions as closely as possible. This includes defining the network
topology with nodes and connections, specifying traffic generation patterns to mimic various
types of data flows, and implementing the congestion control algorithms under test. Tools like
ns-3, OMNeT++, or Mininet are commonly used for network simulation due to their flexibility
and detailed modeling capabilities.
Within the simulation, different scenarios are created to test the congestion control
mechanisms. These scenarios might include varying traffic loads, changes in network topology,
and the introduction of network failures or attacks. The goal is to observe how the system reacts
under different conditions and to measure key performance indicators such as throughput,
latency, packet loss, and fairness among users.
The simulation also involves incorporating Active Queue Management (AQM) techniques and
Explicit Congestion Notification (ECN) mechanisms to manage packet queues and signal
congestion. By monitoring the performance of these techniques within the simulation,
researchers can fine-tune their parameters for optimal results.
Data collected from the simulations provide insights into the strengths and weaknesses of the
congestion control strategies. Metrics such as the average and peak throughput, latency
distribution, packet loss rates, and the efficiency of congestion signaling are analyzed. This
analysis helps in identifying areas where the system performs well and where it may need
improvement.
Ultimately, model and simulation testing enables a thorough evaluation of the congestion
control system in a controlled and reproducible environment. This process helps ensure that
the system is robust, efficient, and capable of maintaining high network performance under a
variety of conditions.
Sl. No. | Test Function              | Outcome                         | Inference
1       | Bandwidth Utilization Test | High utilization achieved       | Network efficiently utilizes bandwidth
2       | Latency Measurement        | Low latency observed            | Congestion control minimizes delay
3       | Packet Loss Test           | Minimal packet loss             | Effective congestion avoidance
4       | Throughput Analysis        | Consistent high throughput      | Stable network performance
5       | Fairness Check             | Equal resource distribution     | Fair allocation among users
6       | Scalability Test           | Performance degrades gracefully | Scalable congestion control mechanism
7       | Bufferbloat Detection      | No significant bufferbloat      | Adequate buffer management
CHAPTER -7
RESULTS
Congestion control aims to regulate network traffic to avoid congestion collapse, ensure
fair resource allocation, and maintain high throughput and low latency. Key components
include:
➢ Flow Control: adjusting the rate at which data is sent to avoid overwhelming network
nodes.
➢ Feedback Mechanisms: using feedback from the network (e.g., packet loss, delays) to
adjust the sending rate.
Figure 7.1: Represents an external network or the internet. Likely a central server or service
within the network
Figure 7.2: The server is directly connected to router R4. The office is connected to router
R1. The hacker is connected to router R5. Data flows between components through the routers.
Routers manage traffic, ensuring efficient communication.
Figure 7.3: The connection to the “HACKER” suggests a need for robust security measures.
The cloud service should protect against unauthorized access and monitor potential threats.
CONCLUSION
Free network congestion control focuses on evaluating effectiveness in managing traffic
flow and ensuring fair resource allocation, considering regulatory impacts, technological
advancements, and user experience improvements, and on identifying future directions for
innovation and research. Effective congestion control improves network performance,
minimizes delays, and enhances the user experience.
Congestion control is essential for keeping computer networks running smoothly. It helps
prevent network overloads by managing the flow of data, ensuring that information gets where
it needs to go without delays or loss. Effective congestion control improves network
performance and reliability, making sure that users have a stable and efficient connection. By
using these techniques, networks can handle high traffic volumes and continue to operate
effectively.
REFERENCES
1. http://jerome.jouvie.free.fr/OpenGl/Lessons/Lesson3.php
2. http://google.com
3. http://opengl.org
4. "Computer Networks: A Systems Approach" by Larry L. Peterson and Bruce S.
5. "Traffic and Congestion Control in IP/TCP Networks" by Raghupathy Sivakumar,
Prathima Agrawal, and Mohan Gurusamy
6. Donald D. Hearn, M. Pauline Baker and Warren Carithers: Computer Graphics with
OpenGL, 4th Edition, Pearson, 2014
7. S. Sridhar, Digital Image Processing, Second Edition, Oxford University Press, 2016.