HAProxy 2.6 is now available!

As always, the community behind HAProxy made it possible to bring the enhancements in this release. Whether developing new functionality, fixing issues, writing documentation, QA testing, hosting CI environments, or submitting bug reports, members of our community continue to drive the project forward. If you’d like to join the effort, you can find us on GitHub, Slack, Discourse, and the HAProxy mailing list.

Register for the webinar HAProxy 2.6 Features Roundup to learn more about this release and participate in a live Q&A with our experts.

In the following sections, you will find a list of changes included in this version.

HTTP/3 over QUIC
Generic hash load balancing algorithm
SSL and TLS
Authentication
Runtime API
Master CLI
Lua
Listing configuration keywords
Protocol updates
Seamless reloads
New fetches and converters
Variables
Performance tuning
Contributors

HTTP/3 over QUIC

This version of HAProxy adds experimental support for HTTP/3 over QUIC, which is a novel approach to transmitting HTTP messages over UDP instead of TCP. The benefits include fewer round trips between the client and server when establishing a TLS connection, better protection against denial-of-service attacks, and improved connection migration when the user switches between networks.

In the example configuration below, we enable HTTP/3 over QUIC by setting a bind line that listens for client connections on UDP port 443. The prefix quic4@ sets the protocol. Also note that we return an alt-svc HTTP header, which instructs the client’s browser to switch to the new protocol for subsequent requests. In other words, the first request will be HTTP/2, but any after that will be HTTP/3.
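
A minimal sketch of such a configuration follows; the certificate path, proxy names, and the alt-svc max-age are placeholders rather than values from the release notes:

    frontend web
        # regular TCP listener for HTTP/1.1 and HTTP/2
        bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
        # experimental QUIC listener on UDP port 443
        bind quic4@:443 ssl crt /etc/haproxy/certs/site.pem alpn h3
        # advertise HTTP/3 so the browser switches protocols on its next request
        http-after-response set-header alt-svc 'h3=":443"; ma=60'
        default_backend servers

    backend servers
        server s1 192.168.0.10:80 check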

Something else to know is that HAProxy supports stateless reset packets with QUIC, but you must set the global directive cluster-secret, which HAProxy uses to derive a stateless reset token. The token protects against malicious actors sending spoofed reset packets.
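
For example, with an arbitrary secret phrase of your choosing (the value shown here is only a placeholder):

    global
        cluster-secret my-random-secret-phrase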

You’ll need to compile HAProxy with a few new options, including the USE_QUIC flag, and also link to a QUIC-compatible version of OpenSSL, such as the one found here. Want to try this out? Check out our HTTP/3 demo project.
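
A build invocation might look roughly like this sketch, assuming a QUIC-compatible OpenSSL fork installed under /opt/quictls (the paths and build target are assumptions for illustration):

    make TARGET=linux-glibc USE_OPENSSL=1 USE_QUIC=1 \
         SSL_INC=/opt/quictls/include SSL_LIB=/opt/quictls/lib \
         LDFLAGS="-Wl,-rpath,/opt/quictls/lib"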

Generic hash load balancing algorithm

You can use the new load balancing algorithm, hash, in place of the existing, more specific hash algorithms source, uri, hdr, url_param, and rdp-cookie. The new algorithm is generic, allowing you to pass in a sample fetch expression that supplies the data used to calculate the hash.

In the example below, the pathq fetch returns the URL path and query string for the data to hash:
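
The backend name and server addresses in this sketch are placeholders:

    backend servers
        # requests with the same path and query string hash to the same server
        balance hash pathq
        server s1 192.168.0.10:80 check
        server s2 192.168.0.11:80 check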

SSL and TLS

You can compile HAProxy against OpenSSL 3.0, the latest branch of the OpenSSL library.

Authentication

To authenticate clients with client certificates, you set the ca-file parameter on your bind line to indicate which certificate authority (CA) to use to verify the certificate. This parameter now accepts a directory path, allowing you to load multiple CA files so that you can verify certificates that were signed by different authorities.
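
For example, assuming your CA certificates live in a directory such as /etc/haproxy/client-cas (path and certificate file are placeholders):

    frontend web
        # load every CA certificate found in the directory to verify client certificates
        bind :443 ssl crt /etc/haproxy/certs/site.pem verify required ca-file /etc/haproxy/client-cas/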

Similarly, the ca-file parameter on a server line in a backend now accepts a directory path, allowing you to load multiple CAs to verify a server’s SSL certificate. In this case, you can also specify @system-ca to load your system’s list of trusted CAs.
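
A sketch of the server-side case, with a placeholder backend and server address:

    backend servers
        # verify the server's certificate against the system's trusted CA bundle
        server s1 192.168.0.10:443 ssl verify required ca-file @system-ca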

Runtime API

A new Runtime API command, show ssl providers, available when HAProxy is compiled against OpenSSL 3.0, returns the list of providers loaded into OpenSSL. A provider implements the cryptographic algorithms. You can load other providers via the OpenSSL configuration file, whose path you can find by running openssl version -d.
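
For example, assuming the Runtime API socket is exposed at /var/run/haproxy.sock (an assumption; use whatever stats socket path you configured):

    $ echo "show ssl providers" | socat stdio /var/run/haproxy.sock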

Next, the Runtime API’s dynamic server feature, which was introduced in HAProxy 2.4 and got expanded keyword support in HAProxy 2.5, is no longer experimental. Recall that the dynamic server functions let you create servers on the fly without reloading the HAProxy configuration.

Also, you can now set the check and check-ssl parameters when creating servers, which were unsupported in prior versions. Note that when enabling health checks with these parameters, HAProxy is not yet able to implicitly inherit the SSL or Proxy Protocol configuration of the server line, so you must explicitly use check-ssl and check-send-proxy, even if the health check port is not overridden.
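
As a sketch, assuming a backend named be_servers and the Runtime API socket at /var/run/haproxy.sock (both assumptions), you could add an SSL server with an SSL health check and then bring it out of maintenance:

    $ echo "add server be_servers/s3 192.168.0.12:443 ssl verify none check check-ssl" | socat stdio /var/run/haproxy.sock
    $ echo "enable server be_servers/s3" | socat stdio /var/run/haproxy.sock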

Master CLI

The Master CLI provides an interface for working with the HAProxy worker processes. You can learn more about it in the blog post Get to Know the HAProxy Process Manager. The CLI received several new commands:

Command Description
prompt Begins an interactive session with the CLI.
expert-mode [on|off] Activates expert mode for every worker accessed from the Master CLI.
experimental-mode [on|off] Activates experimental mode for every worker accessed from the Master CLI.
mcli-debug-mode [on|off] Enables a special mode in the Master CLI that exposes the keywords normally meant for a worker, allowing you to debug the master process. Once activated, you can list the newly available keywords with "help". Combined with "experimental-mode" or "expert-mode", it enables even more keywords.

The starting point is the prompt command, which starts an interactive session. Once in a session, you can enable expert, experimental, and master CLI debug modes. Then, send Runtime API commands to one of the worker processes. Some Runtime API commands become available only in one of the aforementioned modes.

Here is an example:
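
This sketch assumes the master socket was configured with -S /var/run/haproxy-master.sock; the @1 prefix routes a command to worker number 1:

    $ socat /var/run/haproxy-master.sock readline
    prompt
    master> show proc
    master> experimental-mode on
    master> expert-mode on
    master> mcli-debug-mode on
    master> @1 show info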

Lua

When extending HAProxy with a custom Lua module, you can now update an SSL certificate in the memory of the current HAProxy process by using the CertCache class. In the snippet below, the certificate and key are hardcoded in the Lua file, but in practice you could fetch these using the HTTP client or receive them from HAProxy variables, for example.
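
A minimal Lua sketch follows; the certificate path must match a certificate already loaded by HAProxy, and the PEM contents are placeholders:

    -- replace an already-loaded certificate in the memory of the running process
    core.register_task(function()
        CertCache.set{
            filename = "/etc/haproxy/certs/site.pem",  -- an existing, loaded certificate (assumption)
            crt = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
            key = "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
        }
    end)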

Also, the HTTP client that was added in version 2.5, which lets you make non-blocking HTTP calls from Lua, now supports two new parameters: dst for setting the destination address and timeout for setting a timeout server value. Setting dst overrides the IP address and port in the url parameter, but keeps the path. Below, the destination URL becomes http://127.0.1.1:8001/test.
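
A sketch of that call, wrapped in a Lua service whose name is an assumption:

    core.register_service("fetch-test", "http", function(applet)
        local httpclient = core.httpclient()
        -- dst overrides the address and port from url, but the /test path is kept,
        -- so the request effectively goes to http://127.0.1.1:8001/test
        local res = httpclient:get{
            url     = "http://example.com/test",
            dst     = "127.0.1.1:8001",
            timeout = 5000  -- "timeout server" for this request, in milliseconds
        }
        applet:set_status(res and res.status or 503)
        applet:start_response()
        applet:send(res and res.body or "")
    end)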

However, since it supports HAProxy bind-style addresses, a more interesting use case is to set dst to a UNIX socket. For example, you could query the Docker API, which listens at the UNIX socket /var/run/docker.sock, to fetch a JSON-formatted list of running containers from within your Lua code, as shown in the next snippet:
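
A sketch of such a service (the service name is an assumption; unix@ is HAProxy's bind-style prefix for UNIX sockets):

    core.register_service("docker-containers", "http", function(applet)
        local httpclient = core.httpclient()
        -- talk to the Docker daemon over its UNIX socket instead of TCP
        local res = httpclient:get{
            url = "http://localhost/containers/json",
            dst = "unix@/var/run/docker.sock"
        }
        applet:set_status(res and res.status or 503)
        applet:add_header("content-type", "application/json")
        applet:start_response()
        applet:send(res and res.body or "")
    end)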

This is functionally equivalent to calling the Docker API with curl:
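
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json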

Furthermore, new global directives in the HAProxy configuration affect the httpclient class:

Global directive Description
httpclient.ssl.ca-file <cafile> Defines the CA file used to verify the server certificate. It takes the same parameters as the ca-file option on a server line. When this option is not set, the value defaults to "@system-ca", which tries to load the system's CA store; if that fails, SSL is disabled for the httpclient. When the option is set explicitly, a failure to load the file triggers a configuration error instead.
httpclient.ssl.verify [none|required] Works the same way as the verify option on server lines. If set to "none", server certificates are not verified. When this option is not set, the value defaults to "required"; if verification cannot be set up, SSL is disabled for the httpclient. When the option is set explicitly, a failure triggers a configuration error instead.
httpclient.resolvers.id <resolvers id> Defines the resolvers section the httpclient uses for DNS resolution. It defaults to the resolvers section named "default"; if that section is not found and this option is not set, resolving is simply disabled. When the option is set explicitly, a missing section triggers a configuration error.
httpclient.resolvers.prefer <ipv4|ipv6> Chooses which family of IP addresses to prefer when resolving, which is convenient when IPv6 is not available on your network. The default is "ipv6".

Listing configuration keywords

Have you ever wanted to know whether a configuration keyword is supported in the version of HAProxy you’re running? You can now ask HAProxy to return a list. Keywords are sorted into classes, so first get the list of classes by passing the -dKhelp argument to HAProxy, along with the quiet (-q), validation check (-c), and configuration file (-f) arguments:
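
For example, assuming your configuration lives at /etc/haproxy/haproxy.cfg:

    $ haproxy -dKhelp -q -c -f /etc/haproxy/haproxy.cfg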

Then get a list of keywords, for example:
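
This sketch passes the class name all, which should be among the classes reported by the previous command, to dump every registered keyword:

    $ haproxy -dKall -q -c -f /etc/haproxy/haproxy.cfg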

Protocol updates

A new global directive, h1-accept-payload-with-any-method, allows clients using HTTP/1.0 to send a payload with GET, HEAD, and DELETE requests. The HTTP/1.0 specification was never clear on how to handle payloads with these types of requests, and proxy implementations vary in their interpretation, which can lead to request smuggling attacks. HAProxy uniformly rejects these requests for that reason, but the new directive lets you turn off this safeguard if you need to support specific clients.
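
If you do need that behavior, the directive simply goes in your global section:

    global
        # accept payloads on HTTP/1.0 GET, HEAD, and DELETE requests
        h1-accept-payload-with-any-method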

Seamless reloads

HAProxy has supported seamless reloads since version 1.8, which means you can use systemctl reload haproxy to update the HAProxy configuration without dropping any active connections, even under very high utilization. Listening sockets transfer over to the new worker process during the reload. The only things you had to do were make sure that master-worker mode was enabled, by including the -W flag when starting HAProxy, and add the parameter expose-fd listeners to a stats socket directive in the global section of your configuration:
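
The socket path and permissions in this sketch are placeholders:

    global
        stats socket /var/run/haproxy.sock mode 600 level admin expose-fd listeners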

Now you no longer need to do even that: seamless reloads work without any effort on your part. You can omit the expose-fd listeners parameter, and the -W flag is already included in the systemd service file in the HAProxy repository.

New fetches and converters

Two new fetches help pinpoint why a request was terminated. The last_rule_file fetch returns the name of the configuration file containing the final rule that was matched during stream analysis, and the last_rule_line fetch returns its line number. Add these to a custom log format to capture which rule in your configuration stopped the request.

In the next example, a custom log format includes these new fetches:
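
The frontend, the deny rule, and the surrounding log fields in this sketch are placeholders; only the two %[...] expressions matter here:

    frontend www
        bind :80
        http-request deny if { path_beg /admin }
        # record the file and line of the last rule evaluated for the stream
        log-format "%ci:%cp [%tr] %ft %ST %B last_rule=%[last_rule_file]:%[last_rule_line]"
        default_backend servers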

Next, the new add_item converter concatenates fields and returns a string. The advantage this has over the existing concat converter is that it will place a delimiter, such as a semicolon, between fields, and check whether the field exists to avoid appending a trailing delimiter at the end of the string.

In the example below, the add_item converter sets an HTTP cookie with the Expires and Secure attributes, which are separated by semicolons.
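
A sketch based on the converter's documented syntax follows; the cookie name, the X-Cookie-Expires header, and the variable names are assumptions rather than values from the release notes:

    backend servers
        # txn.expires is assumed to hold a value such as "Expires=<http-date>",
        # taken here from a header the server may or may not send
        http-response set-var(txn.expires) res.hdr(X-Cookie-Expires)
        # start from the bare cookie, then append each attribute with a ";" delimiter;
        # add_item skips the delimiter when the variable is missing or empty
        http-response set-var(txn.cookie) str(sessionid=abc123)
        http-response set-var(txn.cookie) 'var(txn.cookie),add_item(";",txn.expires)'
        http-response set-var(txn.cookie) 'var(txn.cookie),add_item(";",,"Secure")'
        http-response set-header Set-Cookie %[var(txn.cookie)]
        server s1 192.168.0.10:80 check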

Variables

Variables let you store information about a request or response and reference that information within logical statements elsewhere in your configuration file. HAProxy 2.6 makes it simple to check whether a variable already exists or already has a value before trying to set it again. All tcp- and http- set-var actions, such as http-request set-var and tcp-request content set-var, now accept an optional condition parameter.

For example, if you wanted to set a variable named token to the value of an HTTP header named X-Token, but fall back to setting it to the value of a URL parameter named token if the header doesn’t exist, you could use the ifnotset condition to check whether the variable already received a value from the first case before trying to set it again:
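
The frontend name in this sketch is a placeholder:

    frontend www
        bind :80
        # take the token from the X-Token header if present...
        http-request set-var(txn.token) req.hdr(X-Token)
        # ...otherwise fall back to the "token" URL parameter
        http-request set-var(txn.token,ifnotset) url_param(token)
        default_backend servers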

You can use the following built-in conditions:

Condition Sets the new value when…
ifexists the variable was previously created with a set-var call.
ifnotexists the variable has not been created yet with a set-var call.
ifempty the current value is empty. This applies for nonscalar types (strings, binary data).
ifnotempty the current value is not empty. This applies for nonscalar types (strings, binary data).
ifset the variable has been set and unset-var has not been called. A variable that does not exist is also considered unset.
ifnotset the variable has not been set or unset-var was called.
ifgt the variable’s existing value is greater than the new value.
iflt the variable’s existing value is less than the new value.

Performance tuning

HAProxy 2.6 brings several new ways to improve load balancing performance:

  • Adding fd-hard-limit to the global section of your configuration enforces a cap on the number of file descriptors that HAProxy will use, even when the system allows many more, which protects you from consuming too much memory. If you set a global maxconn higher than this limit, maxconn is lowered to fit within it (see the configuration sketch after this list). Learn about setting maximum connections.
  • The new global directive close-spread-time lets you close idle connections gradually over a window of time rather than all at once, which avoids a rush of clients reconnecting at the same moment. For best results, set it lower than the hard-stop-after directive.
  • HAProxy’s task scheduler code and the code that dequeues connections awaiting an available server got a performance boost. The code, which uses multithreading, was optimized to bypass thread locking, allowing server queue management to become much more scalable.
  • The connection stream code has been refactored to simplify it and reduce the number of layers. Although more work in this area is underway, the result will be a more linear architecture, resulting in fewer bugs and easier maintenance.
  • At startup, HAProxy inspects the CPU topology of the machine and, if a multi-socket machine is detected, pins itself to the CPUs of a single node to avoid the performance penalty of inter-socket bus latency. If this heuristic hurts performance in your environment, you can disable it with the no numa-cpu-mapping directive.
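
The tunable directives above all live in the global section; the values in this sketch are arbitrary examples, not recommendations:

    global
        # hard cap on file descriptors, regardless of system limits (example value)
        fd-hard-limit 100000
        # spread the closing of idle connections over 10 seconds during a soft stop
        close-spread-time 10s
        # uncomment to disable the automatic single-node CPU affinity
        # no numa-cpu-mapping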

Contributors

We would like to thank each and every contributor who was involved in this release. Contributors help in various forms such as discussing design choices, testing development releases, reporting detailed bugs, helping users on Discourse and the mailing list, managing issue trackers and CI, classifying Coverity reports, maintaining the documentation, operating some of the infrastructure components used by the project, reviewing patches, and contributing code.
