Overview
Today, many web applications face high traffic demands, and traffic spikes can overload servers and degrade the user experience.
The HAProxy Enterprise software load balancer spreads traffic across a pool of healthy servers, allowing you to scale out your capacity for handling concurrent requests. You can then easily meet demand while also improving performance and availability.
![Traffic flow with a load balancer](https://cdn.haproxy.com/documentation/hapee/1-8r2/assets/with-a-loadbalancer-d62b460dfabe2f4cffbb814c3411911adc891979e9ee6e05705d6531263428ff.png)
You can seamlessly integrate HAProxy Enterprise with your existing infrastructure, either:

- at the edge of a network to replace traditional, hardware load balancers,
- in the cloud to replace expensive virtual load balancers,
- or inside a container network as a sidecar proxy.
The job of a load balancer
A load balancer sits in front of your web servers and receives requests directly from clients before relaying them to one of your servers. In this way, it can distribute requests evenly, allowing the work to be shared. This prevents any backend server from becoming overworked and, as a result, your servers operate more efficiently. A load balancer differs from a web or application server in that it does not host your web application directly. Instead, its job is to spread the work across your cluster of servers.
Because all requests pass through the load balancer on their way to a server, the load balancer becomes the ideal place to redirect clients, inspect for malicious behavior, and generate traffic statistics, among other duties. HAProxy Enterprise provides security and management features in addition to load balancing. You can use it to load balance any TCP/IP service, including databases, message queues, mail servers, and IoT devices.
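As a minimal sketch of this layout (the frontend name, server names, addresses, and ports are hypothetical), the configuration below defines a frontend that accepts client connections on port 80 and spreads requests evenly across a pool of three servers:

```haproxy
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    # Clients connect here instead of reaching the servers directly
    bind :80
    default_backend webservers

backend webservers
    # Distribute requests evenly across the pool
    balance roundrobin
    server web1 192.168.1.10:80 check
    server web2 192.168.1.11:80 check
    server web3 192.168.1.12:80 check
```

The `check` keyword enables the health checking described below, so a server that stops responding is taken out of rotation automatically.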
HAProxy Enterprise functionalities
HAProxy Enterprise offers traffic rate-limiting, health checks, switching rules (ACLs), a Web Application Firewall, application-layer DDoS attack protection, SSL termination, HTTP compression, and best-in-class observability.
The following table presents the main features of the HAProxy Enterprise load balancer in more detail:
| Feature | Description |
| --- | --- |
| Rate limiting | To keep resource usage fair, you can stop a client from making too many requests during a window of time (see the first sketch after this table). |
| Health checks | HAProxy Enterprise monitors the health of web servers and backend servers to ensure they can handle requests. It removes unhealthy servers from the pool and puts them back in place once they're up and running. |
| Switching rules (ACLs) | You can filter and direct traffic in real time through conditional statements (ACLs). |
| Web Application Firewall | The HAProxy Enterprise Web Application Firewall (WAF) stops attacks against web applications. It supports three modes. |
| Application-Layer DDoS Attack Protection | HAProxy Enterprise mitigates today's threats through real-time behavioral analysis. |
| SSL termination | Maintaining SSL certificates across a pool of servers is tedious, error-prone, and a waste of processing power on application or web servers. With SSL termination, or SSL offloading, you perform all encryption and decryption at the edge of your network (see the second sketch after this table). |
| HTTP compression | Save network bandwidth and reduce latency by compressing the body of a response before it's relayed to the client. |
| Observability | Analyze live metrics, monitor threat protection, or disable servers depending on their status with the Real-time Dashboard. |
| Data Plane API | You can leverage the Data Plane API to configure the load balancer programmatically through a REST interface. |
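For instance, rate limiting is typically built on a stick table that tracks each client's request rate. The sketch below is illustrative only; the table size, time window, and threshold are arbitrary choices, not recommendations:

```haproxy
frontend www
    mode http
    bind :80
    # Track each client IP and its HTTP request rate over a 10-second window
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    # Reject clients exceeding 20 requests per 10 seconds with a 429 response
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }
    default_backend webservers
```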
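Likewise, SSL termination, HTTP compression, and active health checks can be sketched roughly as follows (the certificate path, health-check URL, and server addresses are placeholders):

```haproxy
frontend www
    mode http
    # Terminate TLS here so backend servers receive plain HTTP
    bind :443 ssl crt /etc/hapee/certs/www.example.com.pem
    # Compress text-based response bodies before relaying them to clients
    compression algo gzip
    compression type text/html text/plain text/css application/javascript
    default_backend webservers

backend webservers
    mode http
    # Periodically probe a health endpoint; failing servers leave the pool
    option httpchk GET /health
    server web1 192.168.1.10:80 check
    server web2 192.168.1.11:80 check
```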
HAProxy Enterprise architecture
HAProxy Enterprise integrates seamlessly with your existing infrastructure. Internally, it is composed of frontends, ACLs, default and conditional backends, and servers.
It routes traffic to any number of pools of servers, which can consist of physical servers, VMs, Kubernetes pods, containers, and so on.
The following table presents the main components of the HAProxy Enterprise load balancer in more detail:
| Component | Description |
| --- | --- |
| Seamless integration | HAProxy Enterprise stands as a reverse proxy in front of your backend servers and integrates seamlessly with your network infrastructure. |
| Frontend | A frontend exposes a website to the Internet, for instance, www.example.com. |
| Binds | A bind defines the IP addresses and ports that clients can connect to. You can associate multiple binds with a frontend, for example, one for HTTP requests and another for HTTPS requests. |
| ACLs | You can test various conditions through Access Control Lists (ACLs) and perform a given action based on those tests. You can easily create complex conditions with logic operators (AND, OR, NOT). See the sketch after this table. |
| Default backend | A backend is a group of servers that handle requests in a load-balanced fashion. The default backend is the pool of servers to send traffic to if requests do not match any ACL. |
| Conditional backends | A conditional backend is a pool of servers to send traffic to if requests match an ACL. |
| Server | A server defines the IP address and port of an actual server that will be load-balanced and process client requests. |
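Putting these components together, a hedged configuration sketch might look like the following (the hostnames, addresses, and certificate path are hypothetical): a frontend with two binds, an ACL that routes API traffic to a conditional backend, and a default backend for everything else.

```haproxy
frontend www
    mode http
    # Two binds: one for HTTP, one for HTTPS (certificate path is a placeholder)
    bind :80
    bind :443 ssl crt /etc/hapee/certs/www.example.com.pem
    # ACL: does the Host header start with "api."?
    acl is_api hdr_beg(host) -i api.
    # Conditional backend: requests matching the ACL go to the API pool
    use_backend apiservers if is_api
    # Default backend: everything else
    default_backend webservers

backend apiservers
    mode http
    balance roundrobin
    server api1 192.168.1.20:8080 check
    server api2 192.168.1.21:8080 check

backend webservers
    mode http
    balance roundrobin
    server web1 192.168.1.10:80 check
    server web2 192.168.1.11:80 check
```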
Next up
Hardware Recommendations