Compressing HTTP Traffic
HTTP compression is a technique that allows a server to compress the body of a response before sending it to a client.
Its main purpose is to reduce the size of the response, limiting the number of bytes sent out and enabling faster delivery.
When a client sends an HTTP request, it announces the compression algorithms it supports using the Accept-Encoding header field. The server can compress its response using any algorithm supported by the client.
Since HAProxy usually sits between clients and servers, it can compress responses on behalf of a server that does not do so itself, for any client that supports compression.
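A typical negotiation looks like this (the host, sizes, and content are illustrative):

```
GET /index.html HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Content-Length: 3421
```

The client offers gzip and deflate; the response carries a Content-Encoding header naming the algorithm actually applied.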
Using Traffic Compression
HAProxy can work in two modes:
Compression offloading: HAProxy compresses all streams on behalf of the servers
Compression catch-up: HAProxy compresses only the streams where the client supports compression but the server did not compress the response
HAProxy disables HTTP compression when:
The request does not advertise a supported compression algorithm in the Accept-Encoding header
The response message is not HTTP/1.1
The HTTP status code is not 200
The response header Transfer-Encoding contains chunked (Temporary Workaround)
The response contains neither a Content-Length header nor a Transfer-Encoding whose last value is chunked
The response contains a Content-Type header whose first value starts with multipart
The response contains the no-transform value in the Cache-Control header
The User-Agent matches Mozilla/4, unless it is MSIE 6 with XP SP2, or MSIE 7 and later
The response contains a Content-Encoding header, indicating that the response is already compressed
The compression does not rewrite ETag headers and does not emit the Warning header.
Configuring HAPEE for HTTP Compression
SLZ is a fast, memory-less stream compressor that produces output decompressible with zlib or gzip.
HAPEE is compiled with libslz, a compression library that requires very little memory, in order to optimize performance.
The following directives from the global section apply to stateless compression:
maxcomprate <number>
Sets the maximum per-process input compression rate to <number> kilobytes per second.
maxcompcpuusage <number>
Sets the maximum per-process CPU usage (in percent) that HAProxy can reach before it stops compressing new responses. The default value is 100, which means no limitation.
tune.comp.maxlevel <number>
Sets the maximum compression level. With libslz, only two values are valid: 0, which means no compression, and 1, which means compression enabled. The default value is 1.
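Taken together, the global settings above might be tuned as follows (the numeric values are illustrative, not recommendations):

```
global
    maxcomprate 10000
    maxcompcpuusage 85
    tune.comp.maxlevel 1
```

Capping the compression rate and CPU usage protects the proxy under load: once a limit is reached, new responses pass through uncompressed rather than degrading service.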
The following directives from the defaults, frontend, listen, or backend sections apply to stateless compression:
compression algo <algorithm> ...
Sets the list of supported compression algorithms. Several algorithms can serve as arguments: HAProxy applies the first one that matches the client's Accept-Encoding header field. The following algorithms are currently supported: identity, gzip, deflate, and raw-deflate.
compression type <mime type> ...
Sets the MIME types to which compression applies, matched against the server-side Content-Type HTTP header field. If not set, HAProxy tries to compress all responses not already compressed by the server.
compression offload
In this mode, HAProxy removes the Accept-Encoding header field from the request before forwarding it to the server. As a result, HAProxy, and not the server, compresses the response.
Enable compression in front of application web servers that cannot compress dynamically generated content:
backend b_myapp
    [...]
    compression algo gzip
Enable compression in front of a web service for all text based content:
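A configuration along these lines could serve that purpose (the backend name and MIME-type list are illustrative; adjust them to the content your service actually emits):

```
backend b_webservice
    [...]
    compression algo gzip
    compression type text/html text/plain text/css application/javascript
```

Listing explicit types keeps HAProxy from spending CPU on content that is already compact, such as images or archives.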
Offload compression for a farm where a broken gzip or deflate implementation is running:
backend b_myapp
    [...]
    compression algo gzip
    compression offload