- HAPEE 1.6r2 User Guide
Compressing HTTP Traffic
HTTP compression is a technique that allows a server to compress the body of a response before forwarding it to a client.
Its main purpose is to reduce the size of the response, limiting the number of bytes sent out and enabling faster delivery.
When a client sends an HTTP request, it announces the compression algorithms it supports using the Accept-Encoding header field. The server can then compress its response using any algorithm the client supports.
Since HAProxy usually sits between clients and servers, it can compress a response on behalf of a server that did not compress it for a client that supports compression.
HAProxy can work in two modes:
- compression offloading: HAProxy compresses all eligible streams on behalf of the servers
- compression catch-up: HAProxy compresses only the streams where the client supports compression but the server did not compress the response
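As a sketch, the difference between the two modes comes down to the compression offload keyword (the backend names and algorithm below are illustrative):

```
# Catch-up (default): HAProxy compresses a response only when the
# client supports compression and the server did not compress it.
backend b_catchup
    compression algo gzip

# Offloading: HAProxy strips Accept-Encoding from the request so the
# server never compresses, then compresses all eligible responses itself.
backend b_offload
    compression algo gzip
    compression offload
```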
HAProxy disables HTTP compression when:
- the request does not advertise a supported compression algorithm in the Accept-Encoding header
- the response message is not HTTP/1.1
- the HTTP status code is not 200
- the response header Transfer-Encoding contains chunked (Temporary Workaround)
- the response contains neither a Content-Length header nor a Transfer-Encoding whose last value is chunked
- the response contains a Content-Type header whose first value starts with multipart
- the response contains the no-transform value in the Cache-control header
- User-Agent matches Mozilla/4 unless it is MSIE 6 with XP SP2, or MSIE 7 and later
- the response contains a Content-Encoding header, indicating that the response is already compressed
SLZ is a fast and memory-less stream compressor which produces an output that can be decompressed with zlib or gzip.
HAPEE is compiled with libslz, a compression library that requires very little memory, to optimize performance.
The following directives from the global section apply to stateless compression:
- maxcomprate <number>: sets the maximum per-process input compression rate to <number> kilobytes per second.
- maxcompcpuusage <number>: sets the maximum per-process CPU usage (in percent) that HAProxy can reach before it stops compressing new responses. The default value is 100, which means no limitation.
- tune.comp.maxlevel <number>: sets the maximum compression level. With libslz, only two values are valid: 0, which disables compression, and 1, which enables it. The default value is 1.
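For example, a global section limiting the resources spent on compression might look like this (the values are illustrative, not recommendations):

```
global
    # Cap the input compression rate at 24576 KB/s per process
    maxcomprate 24576
    # Stop compressing new responses above 50% CPU usage
    maxcompcpuusage 50
    # With libslz, 1 enables compression (the default)
    tune.comp.maxlevel 1
```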
The following directives from the backend section apply to stateless compression:
- compression algo <algorithm> ...: sets the list of supported compression algorithms. Multiple algorithms can serve as arguments: HAProxy applies the first one that matches what the client advertised in its Accept-Encoding header field.
The following algorithms are currently supported:
- deflate: applies deflate compression with the zlib format. Not recommended.
- gzip: applies gzip compression.
- identity: does not alter content at all; used for debugging purposes only.
- raw-deflate: applies deflate compression without the zlib wrapper. May be used as an alternative to deflate.
- compression type <mime-types>: sets the MIME types to which compression applies, matched against the server-side Content-Type HTTP header field. If not set, HAProxy tries to compress all responses that are not already compressed by the server.
- compression offload: in this mode, HAProxy removes the Accept-Encoding header field from the request before forwarding it to the server. Thus, HAProxy compresses the response instead of the server.
1. Enable compression in front of application web servers that cannot compress dynamically generated content:

```
backend b_myapp
    [...]
    compression algo gzip
```
2. Enable compression in front of a web service for all text based content:
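A plausible sketch for this case combines compression algo with compression type (the MIME type list below is illustrative, not exhaustive):

```
backend b_webservice
    [...]
    compression algo gzip
    # Only compress text-based content types
    compression type text/html text/plain text/css application/javascript
```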
3. Offload compression for a farm where a broken gzip or deflate implementation is running:

```
backend b_myapp
    [...]
    compression algo gzip
    compression offload
```