
International Journal of Computer Science Trends and Technology (IJCST) – Volume 9 Issue 4, Jul-Aug 2021

RESEARCH ARTICLE OPEN ACCESS

Investigating Web Server Load Balancing Using the SSL Back-End Forwarding Method
Rayan Soqati
Department of Computer Science and Engineering
ABSTRACT
Cluster-based data centres consisting of three tiers (Web server, application server, and database server) are used to host complex Web services such as e-commerce applications. The application server handles dynamic and sensitive Web content that needs protection from eavesdropping, tampering, and forgery. Although the Secure Sockets Layer (SSL) is the most popular protocol for providing a secure channel between a client and a cluster-based network server, its high overhead degrades server performance considerably and thus affects server scalability. Therefore, improving the performance of SSL-enabled network servers is critical for designing scalable and high-performance data centres. To improve the performance of application servers, the proposed back-end forwarding scheme can further enhance performance through better load balancing. The SSL back-end forwarding scheme can reduce average latency by about 40 percent and improve throughput across a variety of workloads.
Keywords: Secure Sockets Layer, Web clusters, load balancing, protection from eavesdropping

I. INTRODUCTION
Load balancing refers to efficiently distributing incoming network traffic across a group of backend servers, also known as a server farm or server pool. Server load balancing provides scalability and high availability for applications, Web sites and cloud services by monitoring the health of servers, evenly distributing loads across servers, and maintaining session persistence and a seamless user experience in the event that one or more servers become overburdened or unresponsive.

Load balancing is a staple solution in virtually every data centre. However, today's application delivery controllers (ADCs) represent a considerable evolution from simple server load balancing methods. A load balancer acts as the "traffic cop" sitting in front of your servers, routing client requests across all servers capable of fulfilling those requests in a manner that maximizes speed and capacity utilization and ensures that no one server is overworked, which could degrade performance, as shown in Fig 1. If a single server goes down, the load balancer redirects traffic to the remaining online servers. When a new server is added to the server group, the load balancer automatically starts to send requests to it. In this manner, a load balancer performs the following functions:

• Distributes client requests or network load efficiently across multiple servers
• Ensures high availability and reliability by sending requests only to servers that are online
• Provides the flexibility to add or subtract servers as demand dictates
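Taken together, these three functions can be sketched in a few lines of code. The snippet below is a minimal illustration only, not a description of any particular product; the class and method names (LoadBalancer, route, mark_down, and so on) are invented for this example. It hands out requests in rotation, skips servers a health monitor has marked offline, and lets servers be added or removed as demand dictates:

```python
class LoadBalancer:
    """Minimal sketch of the three functions listed above:
    distribute requests, route only to online servers, grow/shrink the pool."""

    def __init__(self, servers):
        self.servers = list(servers)      # the server pool (farm)
        self.online = set(self.servers)   # a health monitor updates this set
        self.index = 0                    # current rotation position

    def add_server(self, server):
        """New servers automatically start receiving requests."""
        self.servers.append(server)
        self.online.add(server)

    def remove_server(self, server):
        self.servers.remove(server)
        self.online.discard(server)

    def mark_down(self, server):
        """Health check failed: stop forwarding requests to this server."""
        self.online.discard(server)

    def mark_up(self, server):
        self.online.add(server)

    def route(self, request):
        """Return the next online server in rotation for this request."""
        if not self.online:
            raise RuntimeError("no servers available")
        while True:
            server = self.servers[self.index % len(self.servers)]
            self.index += 1
            if server in self.online:
                return server
```

For instance, after `lb = LoadBalancer(["web1", "web2", "web3"])` and `lb.mark_down("web2")`, successive `lb.route(...)` calls alternate between web1 and web3 until the health monitor calls `mark_up("web2")` again.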

Fig 1. Classic load balancer architecture (load dispatcher)

To reach high availability, the load balancer must monitor the servers to avoid forwarding requests to overloaded or dead servers. Several different load balancing methods are available to choose from. When working with servers that differ significantly in processing speed and memory, one might want to use a method such as Ratio or Weighted Least Connections. Load balancing calculations can be localized to each pool (member-based calculation) or they may apply to

ISSN: 2347-8578 www.ijcstjournal.org Page 32



all pools of which a server is a member (node-based calculation).

II. SERVER LOAD BALANCING TECHNIQUES

2.1 Round Robin

This is the default load balancing method. Round Robin mode passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced.
Usage:
Round Robin mode works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory.

2.2 Ratio (member) and Ratio (node)

The BIG-IP system distributes connections among pool members or nodes in a static rotation according to ratio weights that you define. In this case, the number of connections that each system receives over time is proportionate to the ratio weight you defined for each pool member or node. You set a ratio weight when you create each pool member or node.
Usage:
These are static load balancing methods, basing distribution on user-specified ratio weights that are proportional to the capacity of the servers.

2.3 Dynamic Ratio (member) and Dynamic Ratio (node)

The Dynamic Ratio methods select a server based on various aspects of real-time server performance analysis. These methods are similar to the Ratio methods, except that with Dynamic Ratio methods the ratio weights are system-generated, and the values of the ratio weights are not static. These methods are based on continuous monitoring of the servers, and the ratio weights are therefore continually changing.
Usage:
The Dynamic Ratio methods are used specifically for load balancing traffic to Real Networks Real System Server platforms, Windows platforms equipped with Windows Management Instrumentation (WMI), or any server equipped with an SNMP agent such as the UC Davis SNMP agent or the Windows 2000 Server SNMP agent.

2.4 Fastest (node) and Fastest (Application)

The Fastest methods select a server based on the least number of current sessions. The following rules apply to the Fastest load balancing methods:
• These methods require that you assign both a Layer 7 and a TCP type of profile to the virtual server.
• If a Layer 7 profile is not configured, the virtual server falls back to Least Connections load balancing mode.
Usage:
The Fastest methods are useful in environments where nodes are distributed across separate logical networks.

2.5 Least Connections (member) and Least Connections (node)

The Least Connections methods are relatively simple in that the BIG-IP system passes a new connection to the pool member or node that has the least number of active connections.
Note: If the One Connect feature is enabled, the Least Connections methods do not include idle connections in the calculations when selecting a pool member or node. The Least Connections methods use only active connections in their calculations.
Usage:
The Least Connections methods function best in environments where the servers have similar capabilities. Otherwise, some amount of latency can occur. For example, consider the case where a pool has two servers of differing capacities, A and B. Server A has 95 active connections with a connection limit of 100, while server B has 96 active connections with a much larger connection limit of 500. In this case, the Least Connections method selects server A, the server with the lowest number of active connections, even though that server is close to reaching capacity. If you have servers with varying capacities, consider using the Weighted Least Connections methods instead.

2.6 Weighted Least Connections (member) and Weighted Least Connections (node)

Like the Least Connections methods, these load balancing methods select pool members or nodes based on the number of active connections. However, the


Weighted Least Connections methods also base their selections on server capacity.

The Weighted Least Connections (member) method specifies that the system uses the value you specify in Connection Limit to establish a proportional algorithm for each pool member. The system bases the load balancing decision on that proportion and the number of current connections to that pool member. For example, member_a has 20 connections and its connection limit is 100, so it is at 20% of capacity. Similarly, member_b has 20 connections and its connection limit is 200, so it is at 10% of capacity. In this case, the system selects member_b. This algorithm requires all pool members to have a non-zero connection limit specified.

The Weighted Least Connections (node) method specifies that the system uses the value you specify in the node's Connection Limit setting and the number of current connections to a node to establish a proportional algorithm. This algorithm requires all nodes used by pool members to have a non-zero connection limit specified. If all servers have equal capacity, these load balancing methods behave in the same way as the Least Connections methods.
Note: If the One Connect feature is enabled, the Weighted Least Connections methods do not include idle connections in the calculations when selecting a pool member or node. The Weighted Least Connections methods use only active connections in their calculations.
Usage:
Weighted Least Connections methods work best in environments where the servers have differing capacities. For example, if two servers have the same number of active connections but one server has more capacity than the other, the BIG-IP system calculates the percentage of capacity being used on each server and uses that percentage in its calculations.

2.7 Observed (member) and Observed (node)

With the Observed methods, nodes are ranked based on the number of connections. The Observed methods track the number of Layer 4 connections to each node over time and create a ratio for load balancing. The need for the Observed methods is rare, and they are not recommended for large pools.

2.8 Predictive (member) and Predictive (node)

The Predictive methods use the ranking methods used by the Observed methods, where servers are rated according to the number of current connections. However, with the Predictive methods, the BIG-IP system analyzes the trend of the ranking over time, determining whether a node's performance is currently improving or declining. The servers with performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. The need for the Predictive methods is rare, and they are not recommended for large pools.

2.9 Least Sessions

The Least Sessions method selects the server that currently has the least number of entries in the persistence table. Use of this load balancing method requires that the virtual server reference a type of profile that tracks persistence connections, such as the Source Address Affinity or Universal profile type. The Least Sessions method works best in environments where the servers or other equipment that the user is load balancing have similar capabilities.

2.10 L3 Address

This method functions in the same way as the Least Connections methods. It is not recommended for large pools and is incompatible with cookie persistence.

III. PROBLEM ISSUES

Usually at this point, a problem arises: how does a load balancer decide which host to send the connection to? And what happens if the selected host is not working? If the selected host is not working, it doesn't respond to the client request and the connection attempt eventually times out and fails. This is obviously not a preferred circumstance, as it doesn't ensure high availability. That's why most load balancing technology includes some level of health monitoring that determines whether a host is actually available before attempting to send connections to it. There are multiple levels of health monitoring, each with increasing granularity and focus. A basic monitor would simply PING the host itself. If the host does not respond to PING, it is a good assumption that any services defined on the host are probably down and should be removed from the cluster of available services. Unfortunately,


even if the host responds to PING, it doesn't necessarily mean the service itself is working. Therefore most devices can do "service PINGs" of some kind, ranging from simple TCP connections all the way to interacting with the application via a scripted or intelligent interaction. These higher-level health monitors not only provide greater confidence in the availability of the actual services (as opposed to the host), but they also allow the load balancer to differentiate between multiple services on a single host. The load balancer understands that while one service might be unavailable, other services on the same host might be working just fine and should still be considered valid destinations for user traffic. When the load balancer decides which host to send a connection request to, each virtual server has a specific dedicated cluster of services (listing the hosts that offer that service) which makes up the list of possibilities, as shown in Fig 2. Additionally, the health monitoring modifies that list to produce a list of "currently available" hosts that provide the indicated service. It is this modified list from which the load balancer chooses the host that will receive a new connection.

Fig 2: Load balancing comprises four basic concepts: virtual servers, clusters, services and hosts

Deciding the exact host depends on the load balancing algorithm associated with that particular cluster. The most common is simple round-robin, where the load balancer simply goes down the list starting at the top and allocates each new connection to the next host; when it reaches the bottom of the list, it simply starts again at the top. While this is simple and very predictable, it assumes that all connections will have a similar load and duration on the back-end host, which is not always true. More advanced algorithms use things like current-connection counts, host utilization, and even real-world response times for existing traffic to the host in order to pick the most appropriate host from the available cluster services. Sufficiently advanced load balancing systems will also be able to synthesize health monitoring information with load balancing algorithms to include an understanding of service dependency. This is the case when a single host has multiple services, all of which are necessary to complete the user's request. A common example would be in e-commerce situations where a single host provides both standard HTTP services (port 80) as well as HTTPS (SSL/TLS at port 443). In many of these circumstances, you don't want a user going to a host that has one service operational, but not the other. In other words, if the HTTPS service should fail on a host, you also want that host's HTTP service to be taken out of the cluster list of available services. This functionality is increasingly important as HTTP-like services become more differentiated with XML and scripting.

Connection maintenance
If the user is trying to utilize a long-lived TCP connection (telnet, FTP, and more) that doesn't immediately close, the load balancer must ensure that multiple data packets carried across that connection do not get load balanced to other available service hosts. This is connection maintenance and requires two key capabilities: 1) the ability to keep track of open connections and the host service they belong to; and 2) the ability to continue to monitor that connection so the connection table can be updated when the connection closes. This is rather standard fare for most load balancers.

Persistence
Increasingly more common, however, is when the client uses multiple short-lived TCP connections (for example, HTTP) to accomplish a single task. In some cases, like standard web browsing, it doesn't matter, and each new request can go to any of the back-end service hosts; however, there are many more instances (XML, e-commerce "shopping cart," HTTPS, and so on) where it is extremely important that multiple connections from the same user go to the same back-end service host and not be load balanced. This concept is called persistence, or server affinity. There are multiple ways to address this depending on the protocol and the desired results. For example, in modern HTTP transactions, the server


can specify a "keep-alive" connection, which turns those multiple short-lived connections into a single long-lived connection that can be handled just like the other long-lived connections. However, this provides little relief. Even worse, as the use of web services increases, keeping all of these connections open longer than necessary would strain the resources of the entire system. In these cases, most load balancers provide other mechanisms for creating artificial server affinity. One of the most basic forms of persistence is source-address affinity. This involves simply recording the source IP address of incoming requests and the service host they were load balanced to, and making all future transactions go to the same host. This is also an easy way to deal with application dependency, as it can be applied across all virtual servers and all services. In practice, however, the wide-spread use of proxy servers on the Internet and internally in enterprise networks renders this form of persistence almost useless; in theory it works, but proxy servers inherently hide many users behind a single IP address, resulting in none of those users being load balanced after the first user's request, essentially nullifying the load balancing capability. Today, the intelligence of load balancer-based devices allows organizations to actually open up the data packets and create persistence tables for virtually anything within them. This enables them to use much more unique and identifiable information, such as user name, to maintain persistence. However, organizations must take care to ensure that this identifiable client information will be present in every request made, as any packets without it will not be persisted and will be load balanced again, most likely breaking the application.

Server load balancing is essential to keep resources properly distributed in a virtual infrastructure. If the infrastructure is expanding to a private cloud, which is an automated environment, virtual machine load balancing becomes even more critical.

With any virtualization platform, a private cloud requires virtual machines (VMs) that can live-migrate anywhere to balance resource loads. The most common load-balancing services are Microsoft System Center Virtual Machine Manager's Performance and Resource Optimization feature and VMware's Distributed Resource Scheduler (DRS). Most virtualization administrators already rely on some degree of server load balancing in their infrastructure, so you're probably closer to private cloud computing than you may think.

But when server load balancing doesn't work correctly, a virtual infrastructure can suffer from painful performance problems. There will be a check box with a Connected option next to the disk drives inside the VM configuration screen; avoid selecting the box unless you need disk data transferred to a VM. Connecting the disk drive creates a dependency between a VM and the physical disk, which can in turn cause load balancing to fail. When disk drives are not used, disconnect them, or server loads may not be balanced.

Affinity and anti-affinity
Affinity in the virtual world refers to how VMs can be configured to always (or never) collocate on the same virtual host. By configuring affinity rules, we prevent both domain controllers from residing on the same host and, if a host experiences a failure, both from going down. VMware and Microsoft allow configuring VMs to follow (or not follow) one another as they live migrate. But users shouldn't use these features unless they're absolutely necessary, because affinity rules create dependencies between VMs that can affect server load balancing. It is advisable to steer clear of affinity unless it is absolutely needed.

Resource restrictions
Resource restrictions protect virtual machines from others that overuse resources. One can limit the resources that a VM is allowed to consume. One can also reserve a minimum quantity of resources that a VM must always have available. Both settings are great when resources are tight, but they also create dependencies that can cause server load balancing to fail -- or make it more difficult for a load-balancing service to do its job.

Unnecessarily powerful VMs
This one's a rookie mistake. Most of us are used to the notion of nearly unlimited physical resources for Windows. It's been years since servers lacked the processing power or RAM to support a workload. The idea of "Just give it lots of RAM and plenty of processors" tends to seep into our virtual infrastructure as well. The problem with this line of thinking is that unnecessarily powerful VMs consume lots of resources. When machines use too many processors or too much RAM, target host servers aren't powerful enough to


support the VM's configuration. As a result, the machine can't fail over or is limited to specific targets where it can fail over.

Start with one processor per virtual machine and as little RAM as possible, then work upward. That way the server load-balancing service can allocate resources only where they're most needed -- and none go to waste.

Most of us remember that it's necessary to have storage for the VM files themselves, but we sometimes forget about the other storage requirements: Raw Device Mappings for a VMware virtual machine or pass-through drives for a Hyper-V machine. Storage connections are always on a per-host basis, which means that every host must be correctly masked and zoned so VMs can see their storage. If not, server load balancing suffers, because VMs and their resources can't migrate to the target host.

Disabling load balancing
Some admins don't realize that VM load balancing is still considered an advanced capability. As a result, they haven't created a cluster in their vSphere data center or haven't enabled DRS. For a Hyper-V infrastructure, both System Center Virtual Machine Manager and System Center Operations Manager are required for automated server load balancing to work. My final and somewhat tongue-in-cheek recommendation: if we intend to use server load balancing, then the capability should be turned on.

IV. RELATED WORK

Anoop Reddy [1] developed a system to protect applications from session stealing/hijacking attacks by tracking and blocking anomalies in end point characteristics. In this proposal, systems and methods for protection against session stealing are described. In embodiments of the present solution, a device intermediary to the client and the server may identify first properties of the client and associate the first properties with the session key. When the device receives a subsequent request comprising the session key, the device matches the associated first properties with second properties of the second device that is sending the subsequent request. If there is a match, the subsequent request is transmitted to the server. Otherwise, the subsequent request is rejected.

Dipesh Gupta and Hardeep Singh [2] proposed SSL session sharing based web cluster load balancing. Internet users increase the traffic on the servers, and server security is the major concern with which the users' privacy needs to be protected. TLS (Transport Layer Security) is a widely deployed protocol that establishes a secure channel between communicating parties over the internet, but TLS/SSL has a huge impact on a web server's performance, degrading it by a considerable amount. When a TLS/SSL session is generated, it is broadcast to all servers in the cluster, so that session reuse can be used to save time in negotiation. The TLS handshake and session resumption occur at the server end, so if a client requests again later and its session has not expired, it can rejoin its own session without renegotiating, which saves the session initialization time. Ultimately, a new load balancing cluster design is proposed that shares TLS sessions within the cluster to effectively improve the performance of a TLS web cluster. The web cluster server shares the sessions of users within the cluster. Another technique for improving the latency and throughput of the server, SSL/TLS with the back-end forwarding technique, is compared and analysed. The traditional method has flaws in the load balancing of the server, but the new technique implemented on the server improves performance during high load. The results are reviewed with 16 and 32 node cluster systems. With the new technique, the latency of the system has been decreased by 40% and the throughput of the system is considerably better than the classical balancing technique.

According to De Grande [3], dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on the shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that are produced initially and/or during run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, non-managed background processes resulting from the non-dedication of shared resources. Due to the dynamic execution characteristics of the elements that compose distributed simulation applications, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases the overhead of resources and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Due to the


relevance of dynamically balancing load for distributed simulations, many balancing approaches have been proposed in order to offer a sub-optimal balancing solution, but they are limited to certain simulation aspects, specific to determined applications, or unaware of HLA-based simulation characteristics. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised, adopting a hierarchical architecture. First, in order to enable the development of such balancing schemes, a migration technique is also employed to perform reliable and low-latency simulation load transfers. Then, a centralized balancing scheme is designed; this scheme employs local and cluster monitoring mechanisms in order to observe the distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load and minimize imbalances. As a measure to overcome the drawbacks of this scheme, such as bottlenecks, overheads, global synchronization, and a single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of and the reaction to load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction into the load redistribution algorithm. The balancing systems developed in this way successfully improved the use of shared resources and increased distributed simulations' performance.

K Kungumaraj and T Ravichandran [4] proposed a load balancing system for distributed systems, which consist of independent workstations usually connected by a local area network. The load balancing system puts forward a new proposal to balance the server load in the distributed system. The load balancing system is a set of substitute buffers that share the server load when the load exceeds its limit. The proposed technique gives an effective way to overcome the load balancing problem. Serving more client requests is the main aim of every web server, but due to some unexpected load, the server performance may degrade. To overcome these issues, the network provides an efficient way to distribute the work among the sub servers, which are also known as proxy servers. Allocating work to the sub servers by their response time is the proposed technique. The secure socket layer with load balancing scheme has been introduced to overcome those server load problems. Storing and serving effectively and securely is important, so the desired algorithms for load distribution and security enhancement, named the Secure Socket Layer with Load Balancing and RSA Security algorithms respectively, are implemented. Calculating the response time of each request from the clients has been done by sending an empty packet over the network to all the sub servers, and the response time for each sub server is calculated using queuing theory. In this load balancing system, SSL based load distribution schemes have been introduced for better performance.

In systems and methods for supporting an SNMP request over a cluster [5], the present disclosure is directed towards systems and methods for supporting Simple Network Management Protocol (SNMP) request operations over clustered networking devices. The system includes a cluster that includes a plurality of intermediary devices and an SNMP agent executing on a first intermediary device of the plurality of intermediary devices. The SNMP agent receives an SNMP GETNEXT request for an entity. Responsive to receipt of the SNMP GETNEXT request, the SNMP agent requests a next entity from each intermediary device of the plurality of intermediary devices of the cluster. To respond to the SNMP request, the SNMP agent selects a lexicographically minimum entity. The SNMP agent may select the lexicographically minimum entity from a plurality of next entities received via responses from each intermediary device of the plurality of intermediary devices.

Branko Radojević [6] analysed issues with load balancing algorithms in hosted (cloud) environments. In order to provide valuable information and influence the decision-making process of a load balancer, thus maintaining optimal load balancing in hosted (or cloud) environments, it is not enough just to provide information from the networking part of the computer system or from an external load balancer. Load balancing models and algorithms proposed in the literature or applied in open-source or commercial load balancers rely either on session switching at the application layer, packet switching at the network layer, or processor load balancing. The analysis of detected issues for those load balancing algorithms is presented in this paper, as a preparation phase for a new load balancing model (algorithm) proposition. The new algorithm incorporates information from virtualized computer environments and end-user experience in order to be able to proactively influence load balancing decisions or reactively change decisions when handling critical situations.

Archana B. Saxena and Deepti Sharma [7] proposed an analysis of a threshold based centralized load balancing policy for heterogeneous machines. Heterogeneous machines can be significantly better than homogeneous machines, but for that an effective workload distribution policy is required. Maximum


realization of the performance can be achieved when the system designer overcomes load imbalance conditions within the system. Load distribution and load balancing policies together can reduce total execution time and increase system throughput. The paper provides an algorithmic analysis of a threshold-based job allocation and load balancing policy for heterogeneous systems, in which all incoming jobs are judiciously and transparently distributed among the sharing nodes on the basis of job requirements and processor capability, to maximize performance and reduce execution time. A brief discussion of the job allocation, transfer, and location policies is given, with an explanation of how the load imbalance condition is resolved within the system. A flow of the scheme is given with essential code, and an analysis of the algorithm shows how it performs better.

P. Rafiq and J. Kann [8] proposed methods for self-load-balancing access gateways. The invention is directed towards systems and methods for self-load-balancing access gateways, which include a master access gateway that receives load metrics and capabilities from a plurality of access gateways. The master access gateway also receives requests to determine whether a request to start a new session should be redirected to another access gateway, and it uses the load metrics and capabilities to select an access gateway to service the request.

D. Goel and J. R. Kurma [9] described systems and methods for link load balancing of a plurality of Internet links by a multi-core intermediary device. The method may include load balancing network traffic across the plurality of Internet links by a multi-core device that is intermediary to a plurality of devices and a plurality of Internet links, with the multi-core device providing persistence of network traffic to a selected Internet link based on a persistence type. A first core of the multi-core device receives a packet to be transmitted via an Internet link to be selected from the plurality of Internet links. The first core sends a request for persistence information to a second core, responsive to identifying that the second core is the owner core of a session for persistence based on the persistence type. The first core then receives the persistence information from the second core and determines to transmit the packet over the previously selected Internet link based on the persistence information received.

T. Abdelzaher and K. Shin [10] observe that the Internet is undergoing substantial changes from a communication and browsing infrastructure to a medium for conducting business and marketing a myriad of services. The World Wide Web provides a uniform and widely accepted application interface used by these services to reach multitudes of clients. These changes place the Web server at the centre of a gradually emerging e-service infrastructure with increasing requirements for service quality and reliability guarantees in an unpredictable and highly dynamic environment. Their paper describes performance control of a Web server using classical feedback control theory, applied to achieve overload protection, performance guarantees, and service differentiation in the presence of load unpredictability, and shows that feedback control theory offers a promising analytic foundation for providing service differentiation and performance guarantees. They demonstrate how a general Web server may be modelled for purposes of performance control, present the equivalents of sensors and actuators, formulate a simple feedback loop, describe how it can leverage real-time scheduling and feedback control theory to achieve per-class response-time and throughput guarantees, and evaluate the efficacy of the scheme on an experimental testbed using the most popular Web server, Apache. Experimental results indicate that control-theoretic techniques offer a sound way of achieving the desired performance in performance-critical Internet applications. The QoS (Quality-of-Service) management solutions can be implemented either in middleware that is transparent to the server or as a library called by the server code.

J. H. Kim and G. S. Choi [11] proposed a load balancing scheme for cluster-based secure network servers. Although the secure sockets layer (SSL) is the most popular protocol for providing a secure channel between a client and a cluster-based network server, its high overhead degrades the server performance considerably and thus affects the server scalability. Therefore, improving the performance of SSL-enabled network servers is critical for designing scalable and high-performance data centres. The paper examines the impact of SSL offering and SSL-session-aware distribution in cluster-based network servers and proposes a backend forwarding scheme, called SSL_WITH_BF, that employs a low-overhead user-level communication mechanism such as VIA to achieve good load balance among the server nodes. Three distribution models for network servers, Round Robin (RR), SSL_With_Session, and SSL_WITH_BF, are compared through simulation. The experimental results with 16-node and 32-node cluster configurations show that, while session reuse in SSL_With_Session is critical to improve the performance of application servers, the proposed backend forwarding scheme can further enhance the


performance due to better load balancing. The SSL_With_BF scheme can minimize the average latency by about 40% and improve throughput across a variety of workloads.

Mohit Aron, Peter Druschel, and Willy Zwaenepoel [12] proposed a resource management framework for providing predictable quality of service (QoS) in Web servers. The framework allows Web server and proxy operators to ensure a probabilistic minimal QoS level, expressed as an average request rate, for a certain class of requests (called a service), irrespective of the load imposed by other requests. A measurement-based admission control framework determines whether a service can be hosted on a given server or proxy, based on the measured statistics of the resource consumption and the desired QoS levels of all the co-located services. In addition, they present a feedback-based resource scheduling framework that ensures that QoS levels are maintained among admitted, co-located services. Experimental results obtained with a prototype implementation of the framework on trace-based workloads show its effectiveness in providing the desired QoS levels with high confidence, while achieving high average utilization of the hardware.

Suresha and Jayant R. Haritsa [13] proposed techniques for reducing dynamic Web page construction times. Many web sites incorporate dynamic web pages to deliver customized content to their users; however, dynamic pages result in increased user response times due to their construction overheads. They proposed mechanisms for reducing these overheads by utilizing the excess capacity with which web servers are typically provisioned. Specifically, they present a caching technique that integrates fragment caching with anticipatory page pre-generation in order to deliver dynamic pages faster during normal operating conditions, with a feedback mechanism used to tune the page pre-generation process to match the current system load. The experimental results from a detailed simulation study of the technique indicate that, given a fixed cache budget, page construction speedups of more than fifty percent can be consistently achieved compared to a pure fragment caching approach. The hybrid approach reduces dynamic web page construction times by integrating fragment caching with page pre-generation, utilizing the spare capacity with which web servers are typically provisioned, and through the use of a simple linear feedback mechanism it ensures that the peak-load performance is no worse than that of pure fragment caching. A detailed study of the hybrid approach is presented over a range of cacheability levels and prediction accuracies for a given cache budget. Experimental results show that an even 50-50 partitioning between the page cache and the fragment cache works very well across all environments; with this partitioning, over a fifty percent reduction in server latencies is achieved compared to fragment caching. The approach thus achieves both the long-term benefit of fragment caching and the immediate benefit of anticipatory page pre-generation. An investigation can be done into the performance effects of pre-generating a set of pages rather than just a single page.

J. Guitart, D. Carrera, V. Beltran, and J. Torres [14] proposed session-based adaptive overload control for secure dynamic Web applications. As dynamic web content and security capabilities become popular on current web sites, the performance demand on the application servers that host the sites increases, sometimes leading these servers to overload. As a result, response times may grow to unacceptable levels, and the server may saturate or even crash. The paper presents a session-based adaptive overload control mechanism based on SSL (Secure Socket Layer) connection differentiation and admission control. SSL connection differentiation is a key factor because the cost of establishing a new SSL connection is much greater than that of establishing a resumed SSL connection, which reuses an existing SSL session on the server. Considering this large difference, they implemented an admission control algorithm that prioritizes resumed SSL connections, to maximize performance in session-based environments, and dynamically limits the number of new SSL connections accepted, depending on the available resources and the current number of connections in the system, in order to avoid server overload. To allow resumed SSL connections to be differentiated from new SSL connections, they proposed a possible extension of the Java Secure Sockets Extension (JSSE) API. Their evaluation on a Tomcat server demonstrates the benefit of the proposal for preventing server overload.

T. Abdelzaher and K. Shin [15] proposed mechanisms and policies for supporting HTTP/1.1 persistent connections in cluster-based Web servers that employ content-based request distribution. They present two mechanisms for the efficient, content-based distribution of HTTP/1.1 requests among the back-end nodes of a cluster server. A trace-driven simulation shows that these mechanisms, combined with an extension of the locality-aware request distribution (LARD) policy, are effective in yielding scalable performance for HTTP/1.1 requests. They implemented the simpler of the two mechanisms, back-end forwarding. Measurements of this mechanism in connection with extended LARD on a prototype


cluster, driven with traces from actual Web servers, confirm the simulation results. The throughput of the prototype is up to four times better than that achieved by conventional weighted round-robin request distribution; in addition, throughput with persistent connections is up to 26% better than without them.

J. Brendel, C. J. Kring, Z. Liu, and C. C. Marino [16] proposed a World-Wide-Web server with delayed resource binding for resource-based load balancing on a distributed-resource multi-node network. A multi-node server transmits World-Wide-Web pages to network-based browser clients. A load balancer receives all requests from the clients because they use a virtual address for the entire site. The load balancer makes a connection with the client and waits for the URL, which specifies the requested resource; it defers load balancing until the location of the requested resource is known. The connection and the URL request are then passed from the load balancer to a second node that holds the requested resource. The load balancer replays the initial connection packet sequence to the second node but modifies the address to that of the second node. The network software is modified to generate the physical network address of the second node, but then changes the destination address back to the virtual address. The second node transmits the requested resource directly to the client, with the virtual address as its source. Since all requests are first received by the load balancer, which determines the physical location of the requested resource, the nodes may contain different resources, and the entire contents of the web site need not be mirrored onto all nodes. Network bottlenecks are avoided, since the nodes transmit large files back to the client directly, bypassing the load balancer, and client browsers can cache the virtual address even though nodes with different physical addresses service the requests.

Deniz Ersoz, Mazin S. Yousif, and Chita R. Das [17] proposed characterizing the network traffic in a cluster-based, multi-tier data centre. With the increasing use of various Web-based services, the design of high-performance, scalable, and dependable data centres has become a critical issue. Recent studies show that a clustered, multi-tier architecture is a cost-effective approach to designing such servers. Since these servers are highly distributed and complex, understanding the workloads driving them is crucial for the success of the ongoing research to improve them. In view of this, there has been a significant amount of work to characterize the workloads of Web-based services; however, all of the previous studies focus on a high-level view of these servers and analyse the request-based or session-based characteristics of the workloads. The paper focuses on the characteristics of the network behaviour within a clustered, multi-tier data centre. Using a real implementation of a clustered three-tier data centre, they analyse the arrival rate and the inter-arrival time distribution of the requests to individual server nodes, the network traffic between the tiers, and the average size of the messages exchanged between the tiers. The main results of the study are: (1) in most cases, the request inter-arrival rates follow a log-normal distribution, and self-similarity exists when the data centre is heavily loaded; (2) message sizes can be modelled by a log-normal distribution; and (3) service times fit reasonably well with a Pareto distribution and show heavy-tailed behaviour at heavy loads.

V. PROPOSED METHOD

The proposed system is designed to increase throughput and balance the servers under different workloads. The traditional method has flaws in balancing the server load, but the new technique implemented on the server improves performance during high load. The secure socket layer with load balancing scheme has been introduced to overcome server load problems. Since storing and serving content effectively and securely is important, the desired algorithms, named Secure Socket Layer with Load Balancing and RSA Security, are implemented for load distribution and security enhancement, respectively. The results are reviewed with 16-node and 32-node cluster systems. With the new technique, the latency of the system is decreased by about 40%, and the throughput of the system is considerably better than that of the classical balancing technique. We provide an algorithmic analysis of a threshold-based job allocation and load balancing policy for a heterogeneous system, in which all incoming jobs are judiciously and transparently distributed among the sharing nodes on the basis of job requirements and processor capability, for the maximization of performance and a reduction in execution time.

VI. CONCLUSION

The performance implications of the SSL protocol for providing a secure service in a cluster-based application server will be investigated, and a back-end forwarding scheme is proposed for improving server performance through better load balance. The proposed scheme exploits the underlying user-level communication in order to minimize the intra-cluster communication overhead. The proposed system will be more robust in handling variable file sizes.
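To make the back-end forwarding idea concrete, the following sketch simulates a front end that keeps SSL session affinity (so resumed handshakes stay cheap) but lets an overloaded session owner hand work to the least-loaded peer over the intra-cluster channel. It is a minimal illustration under assumed names: `Node`, `Distributor`, and `overload_threshold` are hypothetical and do not come from the SSL_WITH_BF implementation.

```python
# Sketch of SSL-session-aware dispatch with back-end forwarding.
# All identifiers here are illustrative inventions, not code from
# the SSL_WITH_BF scheme itself.

class Node:
    """One application-server node in the cluster."""
    def __init__(self, name):
        self.name = name
        self.load = 0          # requests currently assigned to this node
        self.sessions = set()  # SSL session IDs this node has negotiated

class Distributor:
    """Front-end dispatcher combining SSL session reuse with forwarding."""
    def __init__(self, nodes, overload_threshold):
        self.nodes = nodes
        self.threshold = overload_threshold
        self.rr = 0  # round-robin cursor for requests with no session yet

    def dispatch(self, session_id):
        # 1. SSL session reuse: route the request to the node that already
        #    holds this session, so only a cheap resumed handshake is needed.
        owner = next((n for n in self.nodes if session_id in n.sessions), None)
        if owner is None:
            # New session: plain round robin, full handshake on the owner.
            owner = self.nodes[self.rr % len(self.nodes)]
            self.rr += 1
            owner.sessions.add(session_id)
        # 2. Back-end forwarding: an overloaded owner still terminates SSL,
        #    but hands the decrypted request to the least-loaded peer over
        #    the low-overhead user-level intra-cluster channel.
        target = owner
        if owner.load >= self.threshold:
            candidate = min(self.nodes, key=lambda n: n.load)
            if candidate.load < owner.load:
                target = candidate
        target.load += 1
        return owner, target

# A burst of five requests on one SSL session, with a low threshold:
cluster = [Node(f"node{i}") for i in range(4)]
front_end = Distributor(cluster, overload_threshold=2)
for _ in range(5):
    owner, worker = front_end.dispatch("sess-A")
```

In this toy run the first node remains the owner of the SSL session throughout, preserving session reuse, while requests beyond the threshold are executed by idle peers; that separation of handshake locality from work placement is the load-balance benefit the back-end forwarding scheme targets.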


REFERENCES

[1] Anoop Reddy, Rama Rao Katta, Bhanu Prakash Valluri, Craig Anderson, Ratnesh Singh Thakur: Protect applications from session stealing/hijacking attacks by tracking and blocking anomalies in end point characteristics, published on November 26, 2011 in international and national patent collections, No. WO2015179310.
[2] Dipesh Gupta, Hardeep Singh: Review on TLS or SSL session sharing based web cluster load balancing. International Journal of Research in Engineering and Technology, Vol. 03, Issue 11, Nov. 2014.
[3] Robson Eduardo De Grande: Dynamic load balancing schemes for large-scale HLA-based simulations. In proceedings of the 15th IEEE/ACM International Symposium on Distributed Simulation and Real Time Applications, September 2011, pp. 4-11.
[4] S. Annamalaisami, R. Holla: Systems and methods for supporting a SNMP request over a cluster, Citrix Systems, published on Dec 19, 2013, No. WO2013188780 A1.
[5] K. Kungumaraj, T. Ravichandran: Load balancing as a strategy learning task. Scholarly Journal of Scientific Research and Essay (SJSRE), Vol. 1(2), April 2012, pp. 30-34.
[6] Branko Radojević: Analysis of issues with load balancing algorithms in hosted (cloud) environments. In proceedings of the 34th International Convention MIPRO, May 23-27, 2011, pp. 416-420.
[7] Archana B. Saxena and Deepti Sharma: Analysis of threshold based centralized load balancing policy for heterogeneous machines. International Journal of Advanced Information Technology (IJAIT), Vol. 1, No. 5, October 2011.
[8] P. Rafiq, J. Kann: Systems and methods for self-load balancing access gateways, published on May 19, 2015.
[9] D. Goel, J. R. Kurma, Citrix Systems, Inc.: Systems and methods for link load balancing on a multi-core device, published on Jan 12, 2012.
[10] T. Abdelzaher, K. Shin and N. Bhatti: Performance guarantees for Web server end-systems: a control-theoretical approach. IEEE Transactions on Parallel and Distributed Systems, Vol. 13(1), pp. 80-96, January 2002.
[11] J. H. Kim, G. S. Choi, C. R. Das: A load balancing scheme for cluster-based secure network servers. In proceedings of the IEEE International Conference on Cluster Computing, September 2005, pp. 1-10.
[12] Mohit Aron, Peter Druschel, Willy Zwaenepoel: A resource management framework for providing predictable quality of service (QoS) in Web servers. Available online: www.researchgate.net/publication/228537697
[13] Suresha and Jayant R. Haritsa: Techniques for reducing dynamic Web page construction times. Lecture Notes in Computer Science, Volume 3007, pp. 722-731. Available online: link.springer.com/chapter/10.1007
[14] J. Guitart, D. Carrera, V. Beltran, J. Torres: Session-based adaptive overload control for secure dynamic Web applications. In proceedings of the International Conference on Parallel Processing (ICPP), June 2005, pp. 341-349.
[15] T. Abdelzaher, K. Shin and N. Bhatti: Efficient support for P-HTTP in cluster-based Web servers. In proceedings of the USENIX Annual Technical Conference, Monterey, California, USA, June 6-11, 1999.
[16] J. Brendel, C. J. Kring, Z. Liu, C. C. Marino: World-wide-web server with delayed resource-binding for resource-based load balancing on a distributed resource multi-node network, published on Jun 30, 1998, Patent No. 5,774,660.
[17] Deniz Ersoz, Mazin S. Yousif and Chita R. Das: Characterizing network traffic in a cluster-based, multi-tier data centre. In proceedings of the 27th International Conference on Distributed Computing Systems (ICDCS '07), June 2007, p. 59.
