HAProxy 2.7 is now available!
Register for the webinar HAProxy 2.7 Feature Roundup to learn more about this release and participate in a live Q&A with our experts.
Once again, the latest HAProxy update features improvements across the board, upgrading old features and introducing some new ones. New elements in this release include:
- the debut of traffic shaping to control client upload and download speeds
- an improvement to health check performance to reduce CPU load
- updated layer 7 retries that reuse idle HTTP connections even for first client requests
- stick table locking efficiency improvements
- the introduction of stick table data shards to accelerate the processing of large datasets
- a range of new converter and Runtime API command additions
- as well as other small updates to Lua script passing and Master CLI control
What a list!
As always, these improvements are only possible thanks to the support of the incredible HAProxy Community, from discussions over the mailing list to lively debate on the HAProxy GitHub project. Each community member is invaluable in providing code for new functionality and bug fixes, QA testing, documentation updates, bug reports, advice and suggestions, and much more. The project would not exist without you! If joining this vibrant community is of interest, it can be found on GitHub, Slack, Discourse, and the HAProxy mailing list.
New feature: Traffic shaping
HAProxy has a new traffic shaping feature that lets you limit the speed at which clients can upload or download data. For example, this allows you to limit the maximum download speed of a file to 5 Mbps even for clients that have faster connections. Or conversely, you can slow a client’s upload speed. Through traffic shaping, you can apply a bandwidth limit for each individual HTTP stream, meaning that each stream gets its own bandwidth allotment, or set a limit that applies to a particular client’s IP address or collectively to all clients accessing a backend.
The filter bwlim-out directive and http-response set-bandwidth-limit together set download speeds, while filter bwlim-in and http-request set-bandwidth-limit set upload speeds. The filters can specify a stick table to enforce limits based on the keys in the table, such as a client's IP address or the ID of a backend. A nice property of these filters is that the bandwidth limits need not be fixed constants in the configuration; you can derive them from data collected from your traffic. For example, a video service could use the contents of an HTTP header provided by the server to set the appropriate bandwidth limit for a given video, preventing agents that prefetch large parts of the content from consuming too much network bandwidth.
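As a sketch of how these pieces fit together — the backend name, table sizes, filter names, and the 625 KB/s limit (roughly 5 Mbps) are illustrative assumptions:

```
backend videos
  # Track each client's download rate by source IP
  stick-table type ip size 100k expire 1h store bytes_out_rate(1s)

  # Shared limit: cap each client IP at about 5 Mbps of download bandwidth
  filter bwlim-out by-ip limit 625k key src table videos

  # Per-stream limit: each HTTP stream gets its own 10 MB/s allotment
  filter bwlim-out per-stream default-limit 10m default-period 1s

  # Activate the limits on responses
  http-response set-bandwidth-limit by-ip
  http-response set-bandwidth-limit per-stream

  server s1 192.0.2.10:80
```

The set-bandwidth-limit action can also take a limit expression, which is how a header value from the server could drive the limit dynamically.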
Read more about Traffic Shaping.
Overcoming the 64 threads barrier
Modern, massively multi-core CPUs allow us to build a product that packs a lot of features inside a single computer process, validating the choice made years ago to adopt a one-thread-per-core model that takes advantage of those cores. However, due to the fast, atomic operations involved in many places, HAProxy was previously limited to 64 threads, and therefore 64 CPU cores, on 64-bit machines. This limit has now been raised to 4096 threads by the introduction of thread groups.
A thread group, which you create with the thread-group directive in the global section of your configuration, lets you assign a range of threads, for example 1-64, to a group and then use that group on a bind line in your configuration. You can define up to 64 groups of 64 threads each.
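A minimal sketch of two groups on a 128-core machine — the thread counts and bind address are assumptions for illustration:

```
global
  nbthread 128
  thread-groups 2
  thread-group 1 1-64
  thread-group 2 65-128

frontend www
  # Spread accepted connections across both groups
  bind :8080 thread 1/all
  bind :8080 thread 2/all
```

Each bind line here accepts connections on its own group's threads, so the two groups never compete for the same listener.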
In addition to taking better advantage of available threads, thread groups help to limit the number of threads that compete to handle incoming connections, thereby reducing contention. Thread groups also deal much better with non-uniform memory architecture machines (NUMA) that have multiple CPU sockets or processors with uneven access to the L3 cache, where performance gains of up to 4x were observed in the lab.
Better performing health checks
Server health checks became more efficient in this release. Traditionally, HAProxy checks its connectivity to servers at a defined interval. Previously, when HAProxy completed a check, it placed the next scheduled health check into a queue for any available thread to pick up. Combined with the increased thread count, this caused a thundering herd problem in which many threads awoke to compete for the task.
Now, to reduce contention, HAProxy keeps the recurring work with the same thread. As a failsafe to prevent a thread from becoming overloaded, before starting the next health check the thread compares its workload to see if there’s another, less busy thread available. If so, it hands the task over to that thread. Overall, allowing one thread to own health checks has reduced CPU load and latency.
Revisiting HTTP reuse with L7 retries
Since HAProxy introduced layer 7 retries in version 2.0, HAProxy can repeat its attempt to send an HTTP request to a server when its connection to that server breaks mid-communication. That makes it possible to use idle connections more aggressively, comfortable in the knowledge that if an idle connection suddenly closes, HAProxy can retry. This release capitalizes on that by changing the http-reuse safe directive to reuse idle connections even for a client's first request, as long as retries are enabled for broken connections via the retry-on directive in the backend.
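A backend along these lines enables the new behavior (the server address and the exact set of retry-on events are assumptions for illustration):

```
backend servers
  # Reuse idle connections even for a client's first request...
  http-reuse safe
  # ...because retries on broken connections make it safe to do so
  retry-on conn-failure empty-response
  server s1 192.0.2.10:80
```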
QUIC and HTTP/3
The QUIC stack in HAProxy continues to evolve and has received numerous fixes and improvements to remain future-proof, such as support for QUICv2, complying with the QUIC Compatible Version Negotiation draft-08, CUBIC congestion control algorithm, and much more (252 commits in total). All these improvements and fixes were progressively backported to 2.6 as they stabilized. Many more are coming, and with 2.7 released, much less will be backported to 2.6, which will now mostly focus on stability fixes only.
Stick tables use more efficient locking
Given that reads are more common than writes, when accessing a stick table HAProxy now uses an rwlock, which allows multiple threads to read from the table simultaneously but only a single thread at a time to write. This replaces the spinlock, which had enforced exclusive access for both reads and writes, and unlocks performance that had previously been lost to threads waiting to acquire the lock. Performance gains of up to 11 times the initial request rate were observed on a 24-core system making intense use of stick tables and track-sc rules.
Sharding stick table data sent to peers
While many of you are aware that you can use a peers section in an HAProxy configuration to share stick table data between load balancers in an active-standby setup, did you know that you can also use it to share data with agents that process the data?
When using an external agent that collects and processes stick table data, a challenge can be the volume of that data. You can now split a stick table’s data into subsets, called shards, before distributing the shards among different stick table peers. This helps divide the work of processing a large dataset.
The shards directive sets the number of shards to create, while the shard argument on a peer in a peers section assigns that peer to a shard; keys are distributed among the shards by a hash. All stick tables associated with the peers section will be affected. In the example below, data is split into two shards so that half of the data goes to the first peer and half goes to the other.
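A sketch of such a peers section — the peer names, addresses, and stick table definition are assumptions (note that the local peer's name must match the machine's hostname or the name given with -L):

```
peers mypeers
  shards 2
  peer lb1 192.0.2.1:10000
  peer agent1 192.0.2.11:10000 shard 1
  peer agent2 192.0.2.12:10000 shard 2

backend webservers
  stick-table type ip size 1m expire 30m store http_req_rate(10s) peers mypeers
```

With this configuration, agent1 receives only the entries hashed to shard 1 and agent2 only those hashed to shard 2.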
You can use the Runtime API’s show table command to view the contents of a stick table.
SSL usability improvements
HAProxy 2.7 improves two of its bind directive options, ca-ignore-err and crt-ignore-err, which set a list of SSL certificate errors to ignore. Previously, you would define a list of numeric error IDs here; now you can specify their human-readable names instead, for which the OpenSSL site provides a list of error codes. Similarly, the new x509_v_err_str() converter converts a numeric error ID to its human-readable constant, which is useful for logs.
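For instance, a frontend might ignore expired client certificates by name and capture the verification result in readable form (certificate paths and the capture length are assumptions):

```
frontend www
  bind :443 ssl crt /etc/haproxy/certs/site.pem ca-file /etc/haproxy/ca.pem verify optional crt-ignore-err X509_V_ERR_CERT_HAS_EXPIRED
  # Log the human-readable verification result for the client certificate
  http-request capture ssl_c_verify,x509_v_err_str len 64
```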
Building HAProxy with QUIC relies on an underlying SSL library that supports QUIC. Meeting this requirement will become progressively easier: HAProxy 2.7 adds experimental support for LibreSSL 3.6, as well as initial, but incomplete, support for the WolfSSL library.
Pass arguments to Lua scripts
HAProxy 2.7 supports passing optional arguments to Lua scripts via the lua-load and lua-load-per-thread directives. This facilitates passing initial settings to your scripts from your HAProxy configuration, without needing to modify the script's hardcoded values or pass values via environment variables.
In your /etc/haproxy/haproxy.cfg file, pass arguments to the script:
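For example (the script path and argument values are assumptions for illustration):

```
global
  lua-load /etc/haproxy/scripts/log-args.lua db.example.local 5432 debug
```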
In your Lua file, use the table.pack function to retrieve the script's arguments. The three dots passed to table.pack signify that the function accepts a variable number of arguments, which are stored in the variable args. The example script below adds a new action, invoked as http-request lua.log-args, that simply prints the arguments to the HAProxy log file.
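A sketch of such a script (the action name log-args is an assumption consistent with the usage above):

```lua
-- Collect the arguments given on the lua-load line into a table;
-- args.n holds the number of arguments received.
local args = table.pack(...)

-- Register an action usable as "http-request lua.log-args"
core.register_action("log-args", { "http-req" }, function(txn)
    for i = 1, args.n do
        core.Info(string.format("lua-load arg %d: %s", i, tostring(args[i])))
    end
end)
```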
In this trivial example, those values are printed to the HAProxy log (e.g. /var/log/haproxy.log) when the new action is called.
The following converters have been added:
| Converter | Description |
| --- | --- |
| table_expire(<table>[,<default_value>]) | Returns the remaining time before the given key expires in the table. |
| table_idle(<table>) | Returns the time the given key has remained idle since the last time it was updated. |
| host_only | Takes a string that contains a Host header value and removes its port. |
| port_only | Takes a string that contains a Host header value and returns only its integer port. |
| x509_v_err_str | Converts a numeric value to its corresponding X509_V_ERR constant name, which is useful for setting ACL expressions based on different client certificate errors (expired certificate, revoked certificate, etc.) when working with multiple versions of OpenSSL. |
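As one illustration, host_only lets you route on the Host header whether or not the client included a port — a sketch with assumed names:

```
frontend www
  bind :80
  # "example.com:8080" and "example.com" both select the same backend
  use_backend %[req.hdr(host),host_only,lower]
```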
This version of HAProxy adds a new Runtime API command, add ssl ca-file, which adds a CA certificate to a ca-file.
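Assuming a runtime socket at /var/run/haproxy.sock and socat installed, usage might look like the following; as with other runtime SSL updates, the change must then be committed:

```
$ echo -e "add ssl ca-file /etc/haproxy/ca.pem <<\n$(cat extra-ca.pem)\n" | socat stdio /var/run/haproxy.sock
$ echo "commit ssl ca-file /etc/haproxy/ca.pem" | socat stdio /var/run/haproxy.sock
```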
The Master CLI, used to interact with HAProxy’s worker processes, has an updated reload command that will now wait for the reload to complete and then show the status of the newly forked process.
Note that this command will close all connections to the Master CLI.
There is also a new command, show startup-logs, available when HAProxy has been compiled with the USE_SHM_OPEN=1 flag. This command shows the HAProxy startup messages, including reload attempts and any errors.
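Assuming a typical Linux build from source, the flag is passed at compile time alongside the usual build options:

```
$ make -j$(nproc) TARGET=linux-glibc USE_OPENSSL=1 USE_SHM_OPEN=1
```

With such a build, issuing show startup-logs on the Master CLI returns the most recent startup or reload output.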
In order to always serve its users faster, the development team continues to improve the suite of debugging tools.
- A new anonymization feature was added to the CLI and configuration file so that users can safely share sanitized versions of their configuration or live sessions dumps without revealing instance names that could disclose customer names.
- Ring buffers can now be file-backed so that logs can be kept locally or traces can be dumped until the very last event before an expected crash.
- Traces can now be enabled from within the configuration, saving users from having to script that after boot.
- Memory usage tracking is now even more accurate and can focus on specific pools.
- Inter-task communication can now be traced in memory and unrolled from crash dumps to locate a bug faster.
The following updates apply to this version of HAProxy:
- The bind-process directive has been removed.
- The process argument on a bind line has been removed.
HAProxy 2.7 would not have been possible without a long list of contributors, all providing invaluable corrections and moments of inspiration. Contribution to the project comes in all manner of forms, from design choice discussions, bug reporting, testing development releases, maintaining documentation, to assisting users on both Discourse and the mailing list, classifying Coverity reports, reviewing patches, and contributing code. With a contributor list too long to include here, please know that the community appreciates each and every one of you who made 2.7 possible!
Earlier in November, the HAProxy community gathered in Paris, France, for HAProxyConf 2022. The conference lineup gave us three days of learning, including workshops, keynotes, technical talks, and use cases from HAProxy core developers, open-source users, and enterprise customers. HAProxy engineers presented the latest features and announced the launch of HAProxy Fusion Control Plane. Meanwhile, HAProxy users and customers presented their incredible innovations and performance benchmarks using HAProxy in high-scale deployments. We have already started planning HAProxyConf 2023. If you would like to present on what you have achieved with HAProxy, contact submission(at)haproxy.com.