HAProxy Technologies is excited to announce the release of HAProxy 2.0, bringing features critical for cloud-native and containerized environments, while retaining its industry-leading performance and reliability.
HAProxy 2.0 adds a powerful set of core features as well as completely new functionality that further improves its seamless support for integration into modern architectures. This includes Layer 7 retries, Prometheus metrics, traffic shadowing, polyglot extensibility, and gRPC support. In conjunction with this release, we are also introducing the HAProxy Kubernetes Ingress Controller and the powerful HAProxy Data Plane API which provides a modern REST API for configuring and managing HAProxy. Read the release announcement here.
When HAProxy 1.8 was released in November 2017, it introduced features including Hitless Reloads, DNS Service Discovery, Dynamic Scaling with the Runtime API, and HTTP/2 at the edge. These advancements moved HAProxy along the path of supporting a variety of architectures at any scale and in any environment, while also allowing it to maintain its position as the world’s fastest software load balancer.
Since then, many important changes have happened within the core project itself, such as changing the release cadence from an annual to a biannual release cycle. The project has opened up issue submissions on its HAProxy GitHub account. This has allowed our community to continue to flourish and we’re excited to be a part of such a strong corps of contributors.
The HAProxy community provides code submissions covering new functionality and bug fixes, quality assurance testing, continuous integration environments, bug reports, and much more. Everyone has done their part to make this release possible! If you’d like to join this amazing community, you can find it on Slack, Discourse, and the HAProxy mailing list.
This release improves upon capabilities that fit the unique conditions of cloud and container environments. HAProxy 2.0 is an LTS release.
In addition, the inaugural community conference, HAProxyConf, will take place in Amsterdam, Netherlands on November 12-13, 2019. With many interesting talk suggestions already received, we are looking at an amazing conference and we hope to see you there!
We’ve put together a complete HAProxy 2.0 configuration, which allows you to follow along and get started with the latest features right away. You will find the latest Docker images here. We’ll also be hosting webinars to cover the HAProxy 2.0 release, the Data Plane API, and the Kubernetes Ingress Controller. Sign up here.
In this post, we’ll give you an overview of the following updates included in this release.
Cloud-Native Threading & Logging
Tuning HAProxy for optimal performance is now even easier. Since version 1.8, you’ve been able to set the nbthread directive to instruct HAProxy to operate across multiple threads, which allows you to make better use of multi-core processor machines. HAProxy now automatically configures this for you. It will, out of the box, set the number of worker threads to match the machine’s number of available CPU cores. That means that HAProxy can scale to accommodate any environment with less manual configuration.
You can still configure this yourself with the nbthread directive, but this makes the task simpler. It also removes the burden of tuning this in cloud environments where machine instance sizes may be heterogeneous. On systems where HAProxy cannot retrieve the CPU affinity information, it will default to a single thread.
This also simplifies the bind line, as it no longer requires you to specify a process setting. Connections will be distributed to the threads with the fewest active connections. Also, two new build parameters have been added, MAX_THREADS and MAX_PROCS, which avoid needlessly allocating huge structs. This can be very helpful on embedded devices that do not need to support MAX_THREADS=64.
Logging is now easier to adapt to containerized environments. You can log directly to stdout and stderr, or to a file descriptor. Use the following syntax:
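For example, a minimal global section that sends logs to stdout (the raw format drops the syslog header, which suits container log collectors; the facility shown is one common choice):

```
global
    log stdout format raw daemon
    # or write to a file descriptor directly, e.g. stderr:
    # log fd@2 local0
```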
HTTP Representation (HTX)
The Native HTTP Representation (HTX) was introduced with HAProxy 1.9 and it laid the foundation that will allow HAProxy to continue to provide best-in-class performance while accelerating cutting-edge feature delivery for modern environments. Many of the latest features, such as end-to-end HTTP/2, gRPC, and Layer 7 retries, are powered by HTX.
HTX creates an internal, native representation of the HTTP protocol(s). It creates strongly typed, well-delineated header fields and allows for gaps and out-of-order fields. Modifying headers now consists simply of marking the old one as deleted and appending the new one to the end. This provides easy manipulation of any representation of the HTTP protocol, allows HAProxy to maintain consistent semantics from end-to-end, and provides higher performance when translating HTTP/2 to HTTP/1.1 or vice versa.
With HTX in place, any future HTTP protocols will be easier to integrate. It has matured since its introduction and starting in 2.0 it will be enabled by default.
With HTX now being on by default, HAProxy officially supports end-to-end HTTP/2. Here’s an example of how to configure it with TLS offloading. The server lines include the alpn parameter, which specifies a list of protocols that can be used, in order of preference:
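A sketch of end-to-end HTTP/2 with TLS on both legs (certificate paths and addresses are placeholders):

```
frontend fe_main
    mode http
    bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
    default_backend be_main

backend be_main
    mode http
    server server1 192.168.1.10:443 ssl verify none alpn h2,http/1.1
```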
You can also use HTTP/2 without TLS. Remove any ssl and verify parameters from the bind and server lines. Then swap alpn h2 for proto h2 and HAProxy will use only the given protocol.
gRPC
HAProxy 2.0 delivers full support for the open-source RPC framework, gRPC. It allows for bidirectional streaming of data, detection of gRPC messages, and logging of gRPC traffic. The gRPC protocol is a modern, high-performance RPC framework that can run in any environment. Using Protocol Buffers, it’s able to serialize messages into a binary format that’s compact and potentially more efficient than JSON.
To begin using gRPC in HAProxy, you just need to set up a standard end-to-end HTTP/2 configuration. Here, we’re using the alpn parameter to enable HTTP/2 over TLS:
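A sketch (the certificate path and gRPC server address are placeholders; proto h2 on the server line speaks clear-text HTTP/2 to the backend):

```
frontend fe_grpc
    mode http
    bind :443 ssl crt /etc/haproxy/certs/grpc.pem alpn h2
    default_backend be_grpc

backend be_grpc
    mode http
    server grpc1 10.0.0.10:3000 proto h2
```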
Standard ACLs apply and allow for path-based matching, as shown:
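For instance, routing one gRPC service to a dedicated backend by path prefix (the service and backend names are illustrative):

```
frontend fe_grpc
    mode http
    bind :443 ssl crt /etc/haproxy/certs/grpc.pem alpn h2
    # gRPC request paths take the form /<package>.<Service>/<Method>
    acl is_greeter path_beg /helloworld.Greeter/
    use_backend be_greeter if is_greeter
    default_backend be_grpc
```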
Additionally, two new converters, protobuf and ungrpc, have been introduced that let you extract the raw Protocol Buffers messages.
Layer 7 Retries
Reducing downtime often involves having smart contingency mechanisms in place. HAProxy has, since its inception, supported retrying a failed TCP connection by including the option redispatch directive. With HAProxy 2.0, it can also retry from another server at Layer 7 for failed HTTP requests. The new configuration directive, retry-on, can be used in a backend section. The number of attempts at retrying can be specified using the retries directive. It is important that you know how your application behaves with Layer 7 retries enabled. Caution must be exercised when retrying requests such as POST requests. In our examples, we have disabled POST requests from being retried using:
http-request disable-l7-retry if METH_POST
It supports a variety of error types to allow for granular control. Otherwise, you can specify all-retryable-errors, which will retry the request for any error that is considered retriable. The full list of retry-on options is below:
conn-failure: Retry when the connection or the TLS handshake failed. This is the default.
empty-response: Retry when the server connection was closed after part of the request was sent and nothing was received from the server. This type of failure may be caused by the request timeout on the server side, poor network conditions, or a server crash or restart while processing the request.
junk-response: Retry when the server returned something not looking like a complete HTTP response. This includes partial response headers as well as non-HTTP contents. It is usually a bad idea to retry on such events, which may be caused by a configuration issue such as having the wrong server port or by the request being rejected because it is potentially harmful to the server (a buffer overflow attack, for example).
response-timeout: The server timeout struck while waiting for the server to respond. This may be caused by poor network conditions, the reuse of an idle connection that has expired, or the request being extremely expensive to process. It is generally a bad idea to retry on such events on servers dealing with heavy database processing (full scans, etc.) as it may amplify denial-of-service attacks.
0rtt-rejected: Retry requests that were sent over TLS Early Data (0-RTT) and rejected by the server. These requests are generally considered safe to retry.
<status>: Retry on a specific HTTP status code, among 404 (Not Found), 408 (Request Timeout), 425 (Too Early), 500 (Server Error), 501 (Not Implemented), 502 (Bad Gateway), 503 (Service Unavailable), and 504 (Gateway Timeout).
all-retryable-errors: Retry for any error that is considered retriable. This is the same as if you specified conn-failure, empty-response, junk-response, response-timeout, 0rtt-rejected, 500, 502, 503, and 504.
HAProxy 2.0 also introduces a new http-request action called disable-l7-retry that allows you to disable any attempt to retry the request if it fails for any reason other than a connection failure. This can be useful, for example, to make sure that POST requests aren’t retried.
Here’s an example configuration that activates Layer 7 retries:
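A minimal sketch (addresses are placeholders):

```
backend be_main
    mode http
    retries 3
    retry-on all-retryable-errors
    # never replay non-idempotent POST requests
    http-request disable-l7-retry if METH_POST
    server server1 192.168.1.10:80 check
    server server2 192.168.1.11:80 check
```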
Data Plane API
In today’s cloud-native landscape, ephemeral services are born and die quickly, deployments happen continuously, and configuration needs to be refreshed constantly. The new Data Plane API provides a modern REST API for configuring HAProxy on the fly. You can now dynamically add and remove frontends, backends, and servers. You can create ACL rules, insert HTTP routing directives, set IP and port bindings, and much more. The API updates the configuration file as needed, reloading the HAProxy process when necessary.
HAProxy has proven itself to be dynamic and extensible with its built-in Lua support and its Stream Processing Offload Engine. The new Data Plane API takes that a step further by providing true dynamic configuration management. The API daemon runs as a sidecar process, which HAProxy can manage using the program directive in the new Process Manager. The HAProxy Data Plane API supports transactions, which allow multiple changes to be applied simultaneously. This gives you confidence that updates are atomic.
API Specification: https://www.haproxy.com/documentation/dataplaneapi/latest/
Blog post: The New HAProxy Data Plane API
Process Manager
Several of the exciting innovations happening involve components that run as sidecar processes alongside HAProxy, such as the Data Plane API and any Stream Processing Offload Agents (SPOAs). Clearly, there’s a benefit to having central orchestration to control the lifecycle of these processes.
This release introduces support for the new Process Manager. It allows you to specify external binaries that HAProxy will start and manage directly under its master/worker mode. After enabling master/worker mode, either by including the -Ws flag on the command line or by adding a master-worker directive to the global section of the HAProxy configuration, you can tell HAProxy to start external programs by using the following syntax:
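The general shape is a program section naming the command to run:

```
program <name>
    command <executable> [arguments]
```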
For example, to have HAProxy handle the startup of the Data Plane API, you would add it as a command in a program section, like this:
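A sketch, assuming the Data Plane API binary lives at the path shown (flags and paths vary by installation):

```
program dataplane-api
    command /usr/local/bin/dataplaneapi --host 127.0.0.1 --port 5555
```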
You can view a list of running commands by issuing a show proc command to the Runtime API:
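For example, against the master socket (created by starting HAProxy with -W -S; the socket path here is an assumption):

```
echo "show proc" | socat stdio /var/run/haproxy-master.sock
```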
Polyglot Extensibility
The Stream Processing Offload Engine (SPOE) and Stream Processing Offload Protocol (SPOP) were introduced in HAProxy 1.7. The goal was to create the extension points necessary to build upon HAProxy using any programming language. The initial examples were all C-based. Over time, the community saw a need to show how SPOE can be extended in any language, and a variety of libraries and examples were contributed. This opens the door to as many developers as possible.
In collaboration with our community, we’re excited to announce that libraries and examples are available in the following languages and platforms:
Traffic Shadowing
Traffic shadowing, or mirroring, allows you to duplicate requests from one environment to another. This is helpful in instances where you would like to send a percentage of production traffic to a testing or staging environment to vet a release before it’s fully deployed. The new Traffic Shadowing daemon is written as a Stream Processing Offload Agent (SPOA) and takes advantage of HAProxy’s SPOE, which allows you to extend HAProxy using any programming language.
The Traffic Shadowing SPOA can be launched and managed using the Process Manager, as shown:
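A sketch of launching the mirror agent under the Process Manager and attaching the SPOE filter in a frontend (the binary path, target URL, and file names are assumptions):

```
program mirror
    command /usr/local/bin/spoa-mirror --runtime 0 --mirror-url http://staging.example.com:8080

frontend fe_main
    bind :80
    filter spoe engine mirror config mirror.cfg
    default_backend be_main
```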
Above, we specified config mirror.cfg on the filter spoe line. Here is an example of how mirror.cfg might look:
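A sketch of what mirror.cfg might contain, following the SPOE configuration format (the agent backend name and timeouts are assumptions to adapt to your deployment):

```
[mirror]
spoe-agent mirror
    log global
    messages mirror
    option var-prefix mirror
    timeout hello      500ms
    timeout idle       5s
    timeout processing 5ms
    use-backend be_mirror_agents

spoe-message mirror
    args arg_method=method arg_path=url arg_ver=req.ver arg_hdrs=req.hdrs_bin arg_body=req.body
    event on-frontend-http-request
```

The be_mirror_agents backend, defined in the main HAProxy configuration, would point at the address where the mirror agent listens, for example with a line like server mirror1 127.0.0.1:12345.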
Kubernetes Ingress Controller
Since February 2017, an HAProxy Ingress Controller for Kubernetes has been provided by community contributor, Joao Morais. HAProxy Technologies contributed features, such as DNS service discovery, and watched the evolution of the project. There was a need, however, for an ingress controller that’s developed and supported directly by HAProxy Technologies.
The new HAProxy Kubernetes Ingress Controller provides a high-performance ingress for your Kubernetes-hosted applications. It supports TLS offloading, Layer 7 routing, rate limiting, whitelisting, and the best-in-class performance that HAProxy is renowned for. Ingresses can be configured through either ConfigMap resources or annotations and there’s support for defining secrets for storing TLS certificates.
Blog post: Dissecting the HAProxy Kubernetes Ingress Controller
Prometheus Metrics
HAProxy now has native support for exposing metrics to Prometheus. Prometheus is an open-source systems monitoring and alerting toolkit that was originally built at SoundCloud. Its adoption has been widespread and it inspires an active community.
To begin using the Prometheus exporter, you must first compile HAProxy with the component by using the EXTRA_OBJS variable. An example make command would be:
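In the 2.0 source tree the exporter lives under contrib, so the build looks roughly like this (TARGET and any optional USE_* flags depend on your platform):

```
make TARGET=linux-glibc EXTRA_OBJS="contrib/prometheus-exporter/service-prometheus.o"
```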
Activate the exporter within your HAProxy configuration by adding an http-request use-service directive, like so:
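A sketch of a dedicated stats frontend serving the metrics (the port and path are common choices, not requirements):

```
frontend stats
    bind *:8404
    http-request use-service prometheus-exporter if { path /metrics }
    stats enable
    stats uri /stats
    stats refresh 10s
```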
Read more about the Prometheus integration on the blog post: HAProxy Exposes a Prometheus Metrics Endpoint.
Peers & Stick Tables Improvements
HAProxy allows the propagation of stick table data to other HAProxy nodes using the Peers Protocol. HAProxy 2.0 introduces several improvements to the Peers Protocol including:
Stick tables defined directly within a peers section
A new Runtime API command: show peers
New stick table counters: gpc1 and gpc1_rate
A new stick table data type: server_name
Heartbeat messages and SSL-encrypted communication between peers
A node now sends a heartbeat message to its peers after a three-second period of inactivity. If there isn’t any activity within a five-second period, the peer is considered dead, the connection is closed, and reconnection is attempted.
The peers section has been expanded to allow using the default-server configuration directive. It also now supports having stick tables directly within it. This means that you no longer need to use dummy backends, as previously recommended when dealing with many different stick tables.
In the following example, we define a stick table directly inside a peers section and encrypt traffic between nodes using SSL:
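A sketch (peer names, addresses, and the certificate path are placeholders; the local peer’s server line carries no address, and its name must match the name HAProxy was started with via -L):

```
peers mypeers
    bind :10001 ssl crt /etc/haproxy/certs/peers.pem
    default-server ssl verify none
    server lb1
    server lb2 192.168.1.3:10001
    table sticktable type ip size 100k expire 30s store http_req_rate(10s)
```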
To reference this stick table within a frontend, you would then specify the following:
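For example, to track client request rates against such a table from a frontend (the peers and table names are illustrative; peers-section tables are referenced as <peers-name>/<table-name>):

```
frontend fe_main
    bind :80
    # track each source IP in the stick table shared by the peers section
    http-request track-sc0 src table mypeers/sticktable
    default_backend be_main
```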
Using the Runtime API, you can now get information about the various peers connections using show peers.
The stick table counters gpc1 and gpc1_rate are additional, general-purpose counters that can be incremented using configuration logic. A new stick table data type, server_name, was added. It functions the same as server_id, except that the server’s name is exchanged over the wire in addition to its ID. To learn how to take advantage of stick tables, check out our blog post: Introduction to HAProxy Stick Tables.
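As an illustration, a gpc1 counter can be incremented from response-inspection logic (the table layout and the condition are illustrative):

```
backend be_main
    stick-table type ip size 100k expire 30m store gpc1,gpc1_rate(10s)
    http-request track-sc0 src
    # bump the general-purpose counter whenever this client triggers a server error
    http-response sc-inc-gpc1(0) if { status ge 500 }
    server server1 192.168.1.10:80 check
```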
Power of Two Random Choices Algorithm
In the 1.9 release, a new load-balancing algorithm was added, called random. It chooses a random number as the key for the consistent hashing function. Random load balancing can be useful with large farms or when servers are frequently added or removed, as it may avoid the hammering effect that could result from leastconn in this situation. It also respects server weights, and dynamic weight changes and server additions take effect immediately.
The hash-balance-factor directive can be used to further improve the fairness of the load balancing, especially in situations where servers show highly variable response times. When setting balance to random, the argument <draws> indicates that HAProxy should draw that many random servers and then select the one that is least loaded. Drawing two servers even has a name: the Power of Two Random Choices algorithm.
Specify Power of Two load balancing within your backend as follows:
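A sketch of a backend using two random draws (server addresses are placeholders):

```
backend be_main
    balance random(2)
    server server1 192.168.1.10:80 check
    server server2 192.168.1.11:80 check
    server server3 192.168.1.12:80 check
```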
You can read more about this in our blog post Test Driving “Power of Two Random Choices” Load Balancing.
Log Distribution & Sampling
When dealing with a high volume of logs, sampling can be extremely beneficial, giving you a random insight into the traffic. Typically, this sampling would need to be performed by a syslog server such as rsyslog. With HAProxy 2.0, you can now do sampling directly within HAProxy by using the log directive’s sample parameter. Multiple log directives with sample parameters can be specified simultaneously.
To get started, configure logging as follows:
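A sketch matching the three log lines described below (the target addresses are placeholders):

```
global
    # 1. send all local0 logs to stderr
    log stderr local0
    # 2. log one request out of every 10 to this target
    log 127.0.0.1:10001 sample 1:10 local0
    # 3. of every 11 requests, log requests 2, 3, and 8 through 11
    log 127.0.0.2:10002 sample 2-3,8-11:11 local0
```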
The first log line configures all local0 logs to be sent to stderr. The second log line configures logging to 127.0.0.1:10001 at a sampled rate. One out of 10 requests would be logged to this source. Sending 100 requests while incrementing the URL parameter i results in the following log entries:
The third log line configures logging to 127.0.0.2 on port 10002 at a sampled rate. For every 11 requests, it will log requests 2, 3, and 8-11. Sending 100 requests while incrementing the URL parameter i results in the following log entries:
Built-in Automatic Profiling
HAProxy now features the profiling.tasks directive, which can be specified in the global section. It takes the parameters auto, on, or off, and defaults to auto.
When set to auto, the profiling automatically switches on when the process starts to suffer from an average latency of 1000 microseconds or higher, as reported in the avg_loop_us activity field and automatically turns off when the latency returns below 990 microseconds. This value is an average over the last 1024 loops. So, it does not vary quickly and tends to smooth out short spikes. It may also spontaneously trigger from time to time on overloaded systems, containers, or virtual machines, or when the system swaps—which must absolutely never happen on a load balancer.
To view the activity, you can use the show activity Runtime API command, as shown:
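For example, with the Runtime API exposed on a Unix socket (the socket path is an assumption that depends on your stats socket configuration):

```
echo "show activity" | socat stdio /var/run/haproxy.sock
```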
To view the status of profiling, use the show profiling Runtime API command:
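This command is issued over the stats socket in the same way (the path shown is an assumption):

```
echo "show profiling" | socat stdio /var/run/haproxy.sock
```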
Profiling exposes the following fetches, which can be captured in the HAProxy log:
date_us: The microseconds part of the date.
cpu_calls: The number of calls to the task processing the stream or current request since it was allocated. It is reset for each new request on the same connection.
cpu_ns_avg: The average number of nanoseconds spent in each call to the task processing the stream or current request.
cpu_ns_tot: The total number of nanoseconds spent in the calls to the task processing the stream or current request.
lat_ns_avg: The average number of nanoseconds spent between the moment the task handling the stream is woken up and the moment it is effectively called.
lat_ns_tot: The total number of nanoseconds spent between the moment the task handling the stream is woken up and the moment it is effectively called.
To use these in the logs, you would extend the default HTTP log-format, like so:
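A sketch that appends the profiling fetches to a typical HTTP log format (the base format mirrors the standard httplog layout):

```
defaults
    mode http
    log global
    log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r cpu_calls:%[cpu_calls] cpu_ns_tot:%[cpu_ns_tot] cpu_ns_avg:%[cpu_ns_avg] lat_ns_tot:%[lat_ns_tot] lat_ns_avg:%[lat_ns_avg]"
```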
Or, extend the default TCP log-format:
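A similar sketch for TCP mode, extending the standard tcplog layout:

```
defaults
    mode tcp
    log global
    log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq cpu_calls:%[cpu_calls] cpu_ns_tot:%[cpu_ns_tot] lat_ns_tot:%[lat_ns_tot]"
```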
Enhanced TCP Fast Open
HAProxy now has end-to-end support for TCP Fast Open (TFO), enabling clients to send a request and receive a response during the TCP three-way handshake. The benefit of this is that you save one round-trip after the first connection.
HAProxy has supported TFO on the frontend since version 1.5. Version 2.0 enhances this by adding TFO for connections to backend servers on systems that support it. This requires Linux kernel 4.11 or newer. Add the tfo parameter to a server line. Be sure to enable retries with the retry-on directive, or the request won’t be retried on failure.
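A sketch of a backend enabling TFO toward a server (the address is a placeholder):

```
backend be_main
    retries 3
    retry-on conn-failure empty-response response-timeout
    server server1 192.168.1.10:80 tfo check
```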
New Request Actions
As part of this release, several new http-request and tcp-request actions were introduced. Here is a breakdown of these new actions with their descriptions.
http-request do-resolve(<var>,<resolvers>,[ipv4,ipv6]) <expr>
Performs DNS resolution of the output of <expr> and stores the result in the variable <var>. It uses the DNS servers defined in the resolvers section identified by <resolvers>.
http-request disable-l7-retry
Disables any attempt to retry the request if it fails for any reason other than a connection failure. This can be useful, for example, to make sure POST requests aren’t retried upon failure.
tcp-request content do-resolve(<var>,<resolvers>,[ipv4,ipv6]) <expr>
Performs DNS resolution of the output of <expr> and stores the result in the variable <var>. It uses the DNS servers defined in the resolvers section identified by <resolvers>.
tcp-request content set-dst <expr>
Used to set the destination IP address to the value of the specified expression.
tcp-request content set-dst-port <expr>
Used to set the destination port to the value of the specified expression.
http-request replace-uri <match-regex> <replace-fmt>
This matches the regular expression in the URI part of the request according to <match-regex> and replaces it with the <replace-fmt> argument.
The http-request do-resolve and tcp-request content do-resolve actions warrant further explanation. They allow you to resolve a DNS hostname and store the result in an HAProxy variable. Consider the following example:
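A sketch based on the walkthrough that follows (the mydns resolvers section is assumed to be defined elsewhere):

```
frontend fe_main
    bind :80
    # resolve the Host header and store the resulting address in txn.dstip
    http-request do-resolve(txn.dstip,mydns,ipv4) hdr(Host),lower
    default_backend be_main

backend be_main
    # point the connection at whatever address the lookup returned
    http-request set-dst var(txn.dstip)
    server clear 0.0.0.0:80
```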
Here, we’re using http-request do-resolve to perform a DNS query on the hostname found in the Host request header. The nameserver(s) referenced in the mydns resolvers section (not shown) will return the IP address associated with that hostname, and HAProxy will then store it in the variable txn.dstip. The http-request set-dst line in the be_main backend updates the server address with this variable.
This is beneficial in split-horizon DNS environments, wherein the DNS server will return different results, such as publicly routable or internal-only addresses, depending on the client’s (that is, the load balancer’s) source IP address. So, you could have Dev and Prod load balancers that receive different DNS records when they call do-resolve. This is much more dynamic than the at-runtime DNS resolution available in HAProxy (i.e. using the resolvers parameter on the server line), which is typically set to hold onto a DNS result for a period of time. As such, it’s also suitable for other scenarios involving highly dynamic environments, such as where upstream servers are ephemeral.
New Converters
Converters allow you to transform data within HAProxy and typically follow a fetch. The following converters have been added in HAProxy 2.0:
aes_gcm_dec: Decrypts the raw byte input using the AES128-GCM, AES192-GCM, or AES256-GCM algorithm.
protobuf: Extracts the raw field of an input binary sample representation of a Protocol Buffers message.
ungrpc: Extracts the raw field of an input binary sample representation of a gRPC message.
New Fetches
Fetches in HAProxy provide a source of information from either an internal state or from layers 4, 5, 6, and 7. New fetches that you can expect to see in this release include:
ssl_fc_client_random: Returns the client random of the front connection when the incoming connection was made over an SSL/TLS transport layer. It is useful to decrypt traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or BoringSSL.
ssl_fc_server_random: Returns the server random of the front connection when the incoming connection was made over an SSL/TLS transport layer. It is useful to decrypt traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or BoringSSL.
ssl_bc_client_random: Returns the client random of the back connection when the connection was made over an SSL/TLS transport layer. It is useful to decrypt traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or BoringSSL.
ssl_bc_server_random: Returns the server random of the back connection when the connection was made over an SSL/TLS transport layer. It is useful to decrypt traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or BoringSSL.
The following miscellaneous improvements have been made:
SSL/TLS Ticket Keys
TLS session tickets help to speed up session resumption for clients that support them. HAProxy 2.0 adds support for AES256-bit ticket keys, which can be specified either in a file or through the Runtime API.
Core Dump – Ease of Use
A new global directive, set-dumpable, has been added, which aids in enabling core dumps. It has been known to be a pain to get a core dump when enabling the user/group settings (which disable the dumpable flag on Linux), when using a chroot, and/or when HAProxy is started by a service management tool that requires complex operations just to raise the core dump limit. This directive makes it much easier to retrieve a core file.
SOCKS4 Support
Two new server keywords have been introduced: socks4, which can be used for communicating with servers within a backend over SOCKS4, and check-via-socks4, which enables health checking over SOCKS4.
LTS Support for 1.9 Features
HAProxy 2.0 brings LTS support for the aforementioned features, as well as the following features that were introduced or improved upon during the 1.9 release:
Small Object Cache with an increased caching size of up to 2GB, set with the max-object-size setting. The total-max-size setting determines the total size of the cache and can be increased up to 4095MB.
New fetches that report either an internal state or information from layers 4, 5, 6, and 7.
New converters that allow you to transform data within HAProxy.
HTTP 103 (Early Hints), which asks the browser to preload resources.
Server Queue Priority Control, which lets you prioritize some queued connections over others.
Connection pooling to backend servers.
The resolvers section supports using resolv.conf by specifying parse-resolv-conf.
The busy-polling directive allows the reduction of request processing latency by 30-100 microseconds on machines using frequency scaling or supporting deep idle states.
The Server class gained the ability to change a server’s maxconn value.
The TXN class gained the ability to adjust a connection’s priority within the server queue.
There is a new StickTable class that allows access to the content of a stick-table by key and allows dumping its content.
Regression testing of the HAProxy code using varnishtest.
HAProxy 2.1 Preview
HAProxy 2.1 will build on the foundation that has been laid in HAProxy 1.9 and 2.0. Some of the exciting features planned are:
Dynamic SSL Certificate Updates
Prometheus exporter improvements
HAProxy remains at the forefront of performance and innovation because of the commitment of the open-source community and the staff at HAProxy Technologies. We’re excited to bring you this news of the 2.0 release! In addition to the features included in this version, it paves the way for many exciting updates, which, with our new release cadence, you’ll see more frequently.
It immediately brings support for end-to-end HTTP/2, gRPC, Layer 7 Retries, traffic shadowing, connection pooling on the server side, a Process Manager, the Power of Two Random Choices Algorithm, and a Prometheus Exporter. Of course, one of the most powerful additions is the new Data Plane API, which allows you to dynamically configure HAProxy using RESTful HTTP calls.
Our enterprise customers have been able to benefit from many of these features for the last few months, as a majority of them have already been backported directly into the HAProxy Enterprise 1.9r1 release. Our philosophy is to provide value to the open-source community first and then rapidly integrate features into Enterprise, which has a focus on stability. You can compare versions on the Community vs Enterprise page.
Keep apprised of the latest news by subscribing to this blog! You can also follow us on Twitter and join us on Slack. Want to learn more about HAProxy Enterprise? Contact us or sign up for a free trial today!