Load Balancing explained


In computer networking, load balancing is a technique (usually performed by load balancers) to spread work across multiple computers, processes, hard disks or other resources, in order to achieve optimal resource utilization and reduce computing time.

Introduction


A load balancer can be used to increase the capacity of a server farm beyond that of a single server. It can also allow the service to continue even when individual servers are down due to failure or maintenance.

A load balancer consists of a virtual server (also referred to as vserver or VIP) which, in turn, consists of an IP address and port. This virtual server is bound to a number of physical services running on the physical servers in a server farm. These physical services contain the physical server's IP address and port. A client sends a request to the virtual server, which in turn selects a physical server in the server farm and directs this request to the selected physical server. Load balancers are sometimes referred to as "directors"; while originally a marketing name chosen by various companies, it also reflects the load balancer's role in managing connections between clients and servers.
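As a concrete illustration of the virtual-server concept, here is a minimal sketch in Python (the names VirtualServer and PhysicalService, the addresses, and the random selection are illustrative assumptions, not any vendor's API): a virtual server is an IP address and port bound to a set of physical services, and each client request is directed to one of them.

```python
from dataclasses import dataclass, field
from typing import List
import random

@dataclass
class PhysicalService:
    ip: str      # IP address of the physical server
    port: int    # port the physical service listens on

@dataclass
class VirtualServer:
    ip: str                                    # the virtual IP (VIP) clients connect to
    port: int
    services: List[PhysicalService] = field(default_factory=list)

    def select(self) -> PhysicalService:
        # A real load balancer would use one of the methods described later
        # (round-robin, least connection, ...); random choice keeps the sketch short.
        return random.choice(self.services)

vip = VirtualServer("203.0.113.10", 80, [
    PhysicalService("10.0.0.1", 8080),
    PhysicalService("10.0.0.2", 8080),
])
print(vip.select())  # the physical service this client request would be directed to
```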

Different virtual servers can be configured for different sets of physical services, such as TCP and UDP services in general. Protocol- or application-specific virtual servers that may be supported include HTTP, FTP, SSL, SSL BRIDGE, SSL TCP, NNTP, SIP, and DNS.

The load balancing methods (listed below) manage the selection of an appropriate physical server in a server farm.

Load balancers also perform server monitoring of services in a web server farm. If a service fails, the load balancer continues to balance load across the remaining services that are UP. If all the servers bound to a virtual server fail, requests may be sent to a backup virtual server (if configured) or optionally redirected to a configured URL, for example a page on a local or remote server that provides information about the site maintenance or outage.

Among the server types that may be load balanced are:
  • Server farms
  • Caches
  • Firewalls
  • Intrusion detection systems
  • SSL offload or compression appliances
  • Content Inspection servers (such as anti-virus, anti-spam)
In Global Server Load Balancing (GSLB) (also known as Global Traffic Management) the load balancer distributes load to a geographically distributed set of server farms based on health, server load or proximity.

Load balancer features


  • SSL Offload and Acceleration: SSL processing can be a heavy burden on a web server's resources, especially the CPU, and end users may see slow responses (or, at the very least, the servers spend many cycles doing work they weren't designed to do). To address this, a load balancer capable of handling SSL offloading in specialized hardware can terminate the SSL connections, reducing the burden on the web servers so that performance does not degrade for end users.
  • Distributed Denial of Service Attack (DDoS) Protection: load balancers can provide features such as SYN cookies and delayed-binding (the back-end servers don't see the client until it finishes its TCP handshake) to mitigate SYN flood attacks and generally offload work from the servers to a more efficient platform.
  • HTTP Compression: reduces the amount of data to be transferred for HTTP objects by using gzip compression, which is available in all modern web browsers.
  • TCP Offload: different vendors use different terms for this, but the idea is that normally each HTTP request from each client is a different TCP connection. This feature utilizes HTTP/1.1 to consolidate multiple HTTP requests from multiple clients into a single TCP socket to the back-end servers.
  • TCP Buffering: the load balancer can buffer responses from the server and spoon-feed the data out to slow clients, allowing the server to move on to other tasks.
  • HTTP Caching: the load balancer can store static content so that some requests can be handled without contacting the web servers.
  • Content Filtering: some load balancers can arbitrarily modify traffic on the way through.
  • HTTP Security: some load balancers can hide HTTP error pages, remove server identification headers from HTTP responses, and encrypt cookies so end users can't manipulate them.
  • Priority Queuing: also known as rate shaping, the ability to give different priority to different traffic.
  • Content Aware Switching: most load balancers can send requests to different servers based on the URL being requested (see the sketch after this list).
  • Global Server Load Balancing: directing traffic to the best datacenter on a global or national basis
  • Link Load Balancing: monitor and load balance traffic between multiple ISP links
  • Client Authentication: authenticate users against a variety of authentication sources before allowing them access to a website.
  • Web Application Firewall: enforces a layer-7 security policy to secure HTTP and HTTPS websites.
  • Web Acceleration: sometimes this just entails HTTP caching and compression but some vendors provide additional acceleration features.
  • Spam Filtering: at least one load balancer allows the use of an IP reputation database to refuse mail from known spammers even before sending the messages to other spam filters being load balanced.
  • Programmatic Traffic Manipulation: at least one load balancer allows the use of a scripting language to allow custom load balancing methods, arbitrary traffic manipulations, and more.
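To illustrate the content-aware switching feature mentioned above, the hypothetical sketch below routes requests to different server pools based on the requested URL path (the pool names and path prefixes are assumptions made for the example, not any vendor's configuration syntax):

```python
# Hypothetical content-aware switching: pick a server pool by URL path prefix.
POOLS = {
    "/images/": ["img-server-1", "img-server-2"],   # static content pool
    "/api/":    ["app-server-1", "app-server-2"],   # application pool
}
DEFAULT_POOL = ["web-server-1", "web-server-2"]

def choose_pool(url_path: str) -> list:
    for prefix, pool in POOLS.items():
        if url_path.startswith(prefix):
            return pool
    return DEFAULT_POOL

print(choose_pool("/images/logo.png"))  # -> image servers
print(choose_pool("/index.html"))       # -> default web servers
```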


Persistence


Persistence can be configured on a virtual server; once a server is selected, subsequent requests from the client are directed to the same server. Persistence is sometimes necessary in applications where client state is maintained on the server, and in other cases it can simply provide better performance (as one server may have data cached related to a particular user while other servers may not).

But reliance on persistence can cause problems if the persistence fails. Further, persistence mechanisms can vary across load balancer implementations. An alternative method of managing persistence is to store state information in a shared database, which can be accessed by all real servers, and to link this information to a client with a small token such as a cookie, which is sent in every client request. Of course this can be less efficient than persistence provided by a load balancer.

The easiest and most widely supported persistence method is source-address affinity: when the load balancer makes a load balancing decision, it creates a record in memory noting which server that remote IP address was sent to, and future requests from the same IP address are sent to the same server. A timeout is usually specified so these entries do not have to be stored forever. The problem with this persistence method is that some users (notably AOL users) access the internet through a cluster of proxy servers, which means their source address may change from request to request. In addition, the typical home internet user has a dynamic IP address assigned by their ISP, and it is possible for that address to change before they finish a web transaction. Finally, for internal corporate applications, many users might come from a few source addresses (e.g. internal NAT addresses), which limits the granularity of the load balancing decisions made with this persistence method: huge groups of users may come from one IP address and thus be sent to the same server, even if it is overloaded.
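A minimal sketch of source-address affinity, assuming an in-memory table keyed by client IP with a configurable timeout and plain round-robin for clients with no (or expired) entry; the server addresses and timeout value are placeholders:

```python
import itertools
import time

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # placeholder real servers
PERSISTENCE_TIMEOUT = 300                        # seconds an affinity entry stays valid
_round_robin = itertools.cycle(SERVERS)          # balancing method for new clients
_affinity = {}                                   # client IP -> (server, last-used timestamp)

def pick_server(client_ip: str) -> str:
    now = time.time()
    entry = _affinity.get(client_ip)
    if entry and now - entry[1] < PERSISTENCE_TIMEOUT:
        server = entry[0]                 # known client: stick to the same server
    else:
        server = next(_round_robin)       # new or expired: make a fresh balancing decision
    _affinity[client_ip] = (server, now)  # create or refresh the in-memory record
    return server
```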

The best load balancers support multiple persistence methods so that no single failure breaks server persistence. For HTTP traffic, on load balancers that support it, cookie persistence is commonly used as the primary persistence method, as it is the most robust persistence solution for this type of traffic. High-end load balancers can insert the cookie into HTTP responses automatically; the browser re-sends the cookie on the next request, allowing the load balancer to read it and direct the request to the same real server. If a browser does not allow or support cookies, a fallback persistence method such as source-address affinity is usually used.
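A rough sketch of cookie-insert persistence with a source-address fallback, assuming the load balancer can read and add HTTP headers; the cookie name SERVERID and the hashing fallback are choices made for this example:

```python
import hashlib

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
COOKIE_NAME = "SERVERID"   # arbitrary cookie name for this example

def route_request(headers: dict, client_ip: str):
    """Return (chosen server, response headers the load balancer should add)."""
    for part in headers.get("Cookie", "").split(";"):
        name, _, value = part.strip().partition("=")
        if name == COOKIE_NAME and value in SERVERS:
            return value, {}              # valid cookie: same real server again
    # Fallback (e.g. cookies disabled): source-address affinity via hashing.
    server = SERVERS[int(hashlib.md5(client_ip.encode()).hexdigest(), 16) % len(SERVERS)]
    return server, {"Set-Cookie": f"{COOKIE_NAME}={server}; Path=/"}
```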

Methods


The following methods, supported by some load balancers, decide which real server a client request is relayed to. If persistence is in use, these methods apply only to the first request from a new client; after that, the persistence method overrides them, which means that with persistence the load can end up less than evenly balanced.
  • Least connection: a request is sent to the server with the fewest active connections at the moment
  • Round-robin: requests are sent to servers in a sequential and circular pattern -- server1, server2, server3, ..., serverN, server1, ...
  • Fastest: server responsiveness is dynamically measured and requests are sent to the server with the fastest current response time.
In addition, ratios can usually be assigned to servers so that some servers get a greater share of requests than others, typically when the servers have different capacities. Advanced methods like Fastest make ratios unnecessary, since they measure actual server performance.
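A minimal sketch of these methods (round-robin, least connection, and a ratio-weighted variant); the server names, weights and connection counters are placeholders rather than any product's API:

```python
import itertools
from collections import defaultdict

SERVERS = ["web1", "web2", "web3"]
RATIOS = {"web1": 3, "web2": 1, "web3": 1}   # example weights for unequal servers
active = defaultdict(int)                    # current connection count per server

_cycle = itertools.cycle(SERVERS)

def round_robin() -> str:
    return next(_cycle)

def least_connection() -> str:
    return min(SERVERS, key=lambda s: active[s])

def weighted_least_connection() -> str:
    # Favour servers with few active connections relative to their assigned ratio.
    return min(SERVERS, key=lambda s: active[s] / RATIOS[s])
```

In practice, active[s] would be incremented when a connection is assigned to a server and decremented when that connection closes.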

Finally some load balancers support the idea of priority activation -- when the number of available servers drops below a certain number, other standby servers can be brought online (possibly serving as backups for a number of different applications).

Web server methods


One major issue for large Internet sites is how to handle the load from the large number of visitors they receive; this is routinely encountered as a scalability problem as a site grows. There are several ways to accomplish load balancing; one example is the Wikimedia Foundation and its projects. In June 2004 its load was balanced using a combination of:
  • Round robin DNS distributed page requests evenly to one of three Squid cache servers.
  • Squid cache servers used response time measurements to distribute page requests between seven web servers. In addition, the Squid servers cached pages and delivered about 75% of all pages without ever asking a web server for help.
  • The PHP scripts that run on the web servers distribute load to one of several database servers depending on the type of request, with updates going to a master database server and some database queries going to one or more slave database servers.
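A simplified sketch of that kind of read/write split (in Python rather than PHP; the database host names are placeholders, and a production setup would also handle connection pooling and replication lag):

```python
import random

MASTER = "db-master.internal"                            # placeholder host names
SLAVES = ["db-slave-1.internal", "db-slave-2.internal"]

def pick_database(query: str) -> str:
    # Updates must go to the master; read-only queries can be spread over the slaves.
    is_write = query.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE", "REPLACE"))
    return MASTER if is_write else random.choice(SLAVES)
```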
Alternative methods include the use of layer 4 routers and, for Linux, the Linux Virtual Server, an advanced open source load balancing solution for network services. Other load balancing reverse proxies for UNIX systems include XLB, HAProxy, Balance, Pen and Pound. With the appropriate modules, the Apache, Lighttpd and Nginx web servers can also act as reverse proxies. Fastream IQ Reverse Proxy is a scalable and robust reverse proxy for Windows 2000/XP/2003/Vista. Network Load Balancing Services is a Microsoft proprietary clustering and load balancing implementation.

Networking and redundancy


When providing an online service, it is important to always have access to the Internet. Relying on a single provider leaves you at their mercy: the provider may fail, or peak daytime traffic may overload the circuit and slow the network. Downtime and slow connections can cause clients to go elsewhere, perhaps to your competitors for better service.

Many sites are turning to a multi-homed scenario: multiple connections to the Internet via multiple providers, to deliver a reliable, high-throughput service. Multi-homed networks are increasing in popularity because they give networks better reliability and performance for the end user. Reliability improves because the network is protected if one of the Internet links or access routers fails; performance increases because the network's bandwidth to the Internet is the sum of the pipes available through the different access links.

Monitoring


Server monitoring checks the state of a server by periodically probing a specified destination and taking appropriate action based on the response. Monitors - also known as keep-alives - specify the type of request sent to the server and the expected response. The load balancing system sends periodic requests to the server, and the response must be received within the configured response timeout. If the configured number of probes fail, the server is marked "DOWN" and the next probe is sent after the configured down time. The destination of the probe may be different from the server's IP address and port. A load balancer may support multiple monitors; when a service is bound to multiple monitors, the status of the service is derived from the results of all of them.

Monitors can be simple, like pinging a server or checking whether a certain port is open, or they can be advanced and application-specific, such as performing an FTP directory listing or requesting an HTTP page and matching a string in the response. The better load balancers support external monitors that can be written in a variety of programming languages, so that an in-depth evaluation of the server's proper operation can be performed.
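A minimal sketch of such monitors, assuming a TCP port-open probe and an HTTP content-match probe, with the probe count and timeouts exposed as configurable parameters:

```python
import socket
import urllib.request

def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Simple keep-alive: is the port open?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_check(url: str, expected: str, timeout: float = 2.0) -> bool:
    """Application-specific monitor: request a page and match a string in the body."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return expected in response.read().decode("utf-8", errors="replace")
    except OSError:
        return False

def is_up(host: str, port: int = 80, probes: int = 3) -> bool:
    """Mark the server DOWN only if all of the configured number of probes fail."""
    return any(tcp_check(host, port) for _ in range(probes))
```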

CPE


For end-user CPE (as opposed to ISPs, public hosts or other telco infrastructure), load balancing can be used to provide resilience or a redundant backup in case one part of the WAN access fails. For example, a company with an ADSL line might have a secondary ADSL line, or any other kind of Internet feed. Internet traffic may be evenly distributed across both connections, giving more total net bandwidth, or traffic policies may determine that specific types of data use a specific connection. If one connection fails, all traffic can automatically be routed via the other connection. This is also known as Link Load Balancing.
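A rough sketch of that kind of policy-based link selection with failover (the link names, traffic classes and health check are placeholders for illustration):

```python
LINKS = ["adsl-primary", "adsl-secondary"]
POLICY = {"voip": "adsl-primary", "bulk": "adsl-secondary"}   # illustrative traffic policy

def link_is_up(link: str) -> bool:
    # Placeholder: a real CPE would probe the ISP gateway or a remote host.
    return True

def choose_link(traffic_class: str) -> str:
    preferred = POLICY.get(traffic_class, LINKS[0])
    if link_is_up(preferred):
        return preferred
    # Failover: send the traffic over any remaining healthy link.
    for link in LINKS:
        if link != preferred and link_is_up(link):
            return link
    return preferred   # nothing healthy; keep trying the preferred link
```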

Vendors


  • A10 Networks
  • .vantronix | secure systems
  • Astaro
  • Astrocom
  • Barracuda Networks
  • CAI Networks
  • Celestix Networks
  • Cisco
  • Citrix
  • Coyote Point Systems
  • Crescendo Networks
  • DBAM Systems
  • Elfiq Networks
  • Exceliance
  • FatPipe Networks
  • F5 Networks
  • Foundry Networks
  • Inlab Software
  • jetNEXUS
  • Juniper Networks
  • KEMP Technologies
  • Nortel
  • PePLink Multi-WAN Routers
  • Parallel Computers Technology, Inc. (PCTI)
  • Proto Co Networking
  • Radware
  • Strangeloop Networks
  • RouterStudio
  • Xrio (Q-Balancer range of load balancers)
  • Zeus Technology
  • Sentral Systems Ltd
  • ASPSERVEUR (load balancing as ASP services)

