[ANNOUNCE] HAProxy 1.6.0 released
Yesterday, on the 13th of October, Willy announced the release of HAProxy 1.6.0, after 16 months of development!
The first piece of good news is that the release cycle is getting a bit faster, and we aim to keep it that way.
A total of 1,156 commits from 59 people have been merged since the release of 1.5.0, 16 months ago.
Please find the official announcement here: [ANNOUNCE] haproxy-1.6.0 now released!
In his mail, Willy detailed all the features that have been added to this release. The purpose of this blog article is to highlight a few of them, providing the benefits and some configuration examples.
NOTE: Most of the features below were already backported and integrated into our HAProxy Enterprise and ALOHA products.
HAProxy Enterprise is our Open Source version of HAProxy, based on the HAProxy community stable branch, into which we backport many features from the development branch and which we package to make the most stable, reliable, advanced and secure version of HAProxy. It also comes with third-party software to fill the gap between a simple HAProxy process and a load-balancer (VRRP, syslog, SNMP, Route Health Injection, etc.). As the cherry on the cake, we provide “enterprise” support on top of it.
NOTE 2: the list of new features introduced here is not exhaustive. Examples are proposed in a quick and dirty way to show you how to get started with each feature. Don’t run these examples in production 🙂
It’s 2015, let’s use QUOTE in configuration file
Those who have been using HAProxy for a long time will be happy to know that, with 1.6, the ‘\ ‘ (backslash-space) sequence is just a painful old memory 🙂
We can now write:
reqirep "^Host: www.(.*)" "Host: foobar\1"
or
option httpchk GET / "HTTP/1.1\r\nHost: www.domain.com\r\nConnection: close"
Lua Scripting
Maybe the biggest change that occurred is the integration of Lua.
Quote from Lua’s website: “Lua is a powerful, fast, lightweight, embeddable scripting language.”
Basically, everyone now has the ability to extend HAProxy by writing and running their own Lua scripts. No need to write C code, maintain patches, etc.
If some Lua snippets become very popular, we may write the equivalent feature in C and make it available in HAProxy mainline.
One of the biggest challenges Thierry faced when integrating Lua was giving it the ability to process Lua code and manage network sockets in a non-blocking way.
HAProxy requires Lua 5.3 or above.
With Lua, we can add new functions to the following HAProxy elements:
- Service
- Action
- Sample-fetch
- Converter
Compiling HAProxy and Lua
Installing LUA 5.3
cd /usr/src
curl -R -O http://www.lua.org/ftp/lua-5.3.0.tar.gz
tar zxf lua-5.3.0.tar.gz
cd lua-5.3.0
make linux
sudo make INSTALL_TOP=/opt/lua53 install
LUA 5.3 library and include files are now installed in /opt/lua53.
Compiling HAProxy with Lua support
make TARGET=linux2628 USE_OPENSSL=1 USE_PCRE=1 USE_LUA=1 LUA_LIB=/opt/lua53/lib/ LUA_INC=/opt/lua53/include/
HAProxy/Lua simple Hello world example
A simple Hello world! in Lua could be written like this:
- The lua code in a file called hello_world.lua:
core.register_service("hello_world", "tcp", function(applet)
    applet:send("hello world\n")
end)
- The haproxy configuration
global
    lua-load hello_world.lua

listen proxy
    bind 127.0.0.1:10001
    tcp-request content use-service lua.hello_world
More with Lua
Please read the doc and ask your questions on HAProxy’s ML (no registration needed): haproxy@formilux.org.
Of course, we’ll write more articles later on this blog about Lua integration.
Captures
HAProxy’s running context is very important when writing configuration. In HAProxy, each context is isolated, i.e. you can’t use a request header when processing the response.
With HAProxy 1.6, this is now possible: you can declare capture slots, store data in them and use that data at any time during a session.
defaults
    mode http

frontend f_myapp
    bind :9001
    declare capture request len 32   # id=0 to store Host header
    declare capture request len 64   # id=1 to store User-Agent header
    http-request capture req.hdr(Host) id 0
    http-request capture req.hdr(User-Agent) id 1
    default_backend b_myapp

backend b_myapp
    http-response set-header Your-Host %[capture.req.hdr(0)]
    http-response set-header Your-User-Agent %[capture.req.hdr(1)]
    server s1 10.0.0.3:4444 check
Two new headers are inserted in the response:
Your-Host: 127.0.0.1:9001
Your-User-Agent: curl/7.44.0
Multiprocess, peers and stick-tables
In 1.5, we introduced “peers” to synchronize stick-table content between HAProxy servers. This feature was not compatible with multi-process mode.
In 1.6, we can now synchronize the content of a table as long as it is stuck to a single process. This allows creating configurations for massive SSL processing that point to a single backend stuck on a single process, where we can use stick tables and synchronize their content.
peers article
    peer itchy 127.0.0.1:1023

global
    pidfile /tmp/haproxy.pid
    nbproc 3

defaults
    mode http

frontend f_scalessl
    bind-process 1,2
    bind :9001 ssl crt /home/bassmann/haproxy/ssl/server.pem
    default_backend bk_lo

backend bk_lo
    bind-process 1,2
    server f_myapp unix@/tmp/f_myapp send-proxy-v2

frontend f_myapp
    bind-process 3
    bind unix@/tmp/f_myapp accept-proxy
    default_backend b_myapp

backend b_myapp
    bind-process 3
    stick-table type ip size 10k peers article
    stick on src
    server s1 10.0.0.3:4444 check
Log
log-tag
It is now possible to set a syslog tag per process, frontend or backend. The purpose is to ease the job of syslog servers when classifying logs.
If no log-tag is provided, the default value is the program name.
Example applied to the configuration snippet right above:
frontend f_scalessl
    log-tag SSL
[...]

frontend f_myapp
    log-tag CLEAR
[...]
New log format variables
New log format variables have appeared:
- %HM: HTTP method (ex: POST)
- %HP: HTTP request URI without query string (path)
- %HQ: HTTP request URI query string (ex: ?bar=baz)
- %HU: HTTP request URI (ex: /foo?bar=baz)
- %HV: HTTP version (ex: HTTP/1.0)
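For example, these variables can be combined in a custom log-format line. A minimal sketch (the format string below is only an illustration, not a recommendation):

frontend f_myapp
    bind :9001
    # log the method, path, query string and HTTP version as separate fields
    log-format "%ci:%cp [%t] %ft %HM %HP %HQ %HV %ST %B"
    default_backend b_myapp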
Server IP resolution using DNS at runtime
In 1.5 and before, HAProxy performed DNS resolution when parsing the configuration, in a synchronous mode and through the libc (hence the /etc/resolv.conf file).
Now, HAProxy can perform DNS resolution at runtime, in an asynchronous way, and update server IPs on the fly. This is very convenient in environments like Docker or Amazon Web Services where server IPs can change at any time.
Configuration example applied to Docker, where dnsmasq is used as an interface between the /etc/hosts file (where Docker stores server IPs) and HAProxy:
resolvers docker
    nameserver dnsmasq 127.0.0.1:53

defaults
    mode http
    log global
    option httplog

frontend f_myapp
    bind :80
    default_backend b_myapp

backend b_myapp
    server s1 nginx1:80 check resolvers docker resolve-prefer ipv4
Then, let’s restart s1 with the command “docker restart nginx1” and let’s have a look at the magic in the logs:
(...) haproxy[15]: b_myapp/nginx1 changed its IP from 172.16.0.4 to 172.16.0.6 by docker/dnsmasq.
HTTP rules
New HTTP rules have appeared:
- http-request: capture, set-method, set-uri, set-map, set-var, track-scX, sc-inc-gpc0, sc-set-gpt0, silent-drop
- http-response: capture, set-map, set-var, sc-inc-gpc0, sc-set-gpt0, silent-drop, redirect
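As a quick illustration of two of these rules, here is a hedged sketch (the file path and the URL paths are hypothetical):

frontend f_example
    bind :8080
    # silently drop clients listed in an abuse file, without sending any response
    http-request silent-drop if { src -f /etc/haproxy/abusers.lst }
    # set-uri takes a log-format string; here we send admin URLs to a maintenance page
    http-request set-uri /maintenance.html if { path_beg /admin }
    default_backend b_myapp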
Variables
We often used HTTP header fields to store temporary data in HAProxy. With 1.6, we can now define variables.
A variable is available within a scope: session, transaction (request or response), request, or response.
A variable name is prefixed by its scope (sess, txn, req, res), a dot ‘.’ and a name composed of ‘a-z’, ‘A-Z’, ‘0-9’ and ‘_’.
Let’s rewrite the capture example using variables:
global
    # variables memory consumption, in bytes
    tune.vars.global-max-size 1048576
    tune.vars.reqres-max-size 512
    tune.vars.sess-max-size 2048
    tune.vars.txn-max-size 256

defaults
    mode http

frontend f_myapp
    bind :9001
    http-request set-var(txn.host) req.hdr(Host)
    http-request set-var(txn.ua) req.hdr(User-Agent)
    default_backend b_myapp

backend b_myapp
    http-response set-header Your-Host %[var(txn.host)]
    http-response set-header Your-User-Agent %[var(txn.ua)]
    server s1 10.0.0.3:4444 check
Mailers
HAProxy can now send emails when a server’s state changes (mainly when it goes DOWN), so your sysadmins/devops won’t sleep anymore :). Previously, it could only log such events.
mailers mymailers
    mailer smtp1 192.168.0.1:587
    mailer smtp2 192.168.0.2:587

backend mybackend
    mode tcp
    balance roundrobin
    email-alert mailers mymailers
    email-alert from test1@horms.org
    email-alert to test2@horms.org
    server srv1 192.168.0.30:80
    server srv2 192.168.0.31:80
Processing of HTTP request body
Up to and including 1.5, HAProxy could only process HTTP request headers. It can now access the request body.
Simply enable the statement below in your frontend or backend to give HAProxy this ability:
option http-buffer-request
Protection against slow-POST attacks
Slow-POST attacks are similar to Slowloris attacks, except that the HTTP headers are sent quickly while the request body is sent very slowly.
Once option http-buffer-request is enabled, the timeout http-request parameter also applies to the POSTed data.
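A minimal sketch combining the two statements (the timeout value is just an example):

frontend f_myapp
    bind :9001
    option http-buffer-request
    # headers AND body must now be fully received within 10 seconds
    timeout http-request 10s
    default_backend b_myapp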
Fetch methods
A few new fetch methods now exist to play with the body: req.body, req.body_param, req.body_len, req.body_size, etc.
A short example to detect the “SELECT *” string in a POST request body, and of course deny it:
defaults
    mode http

frontend f_mywaf
    bind :9001
    option http-buffer-request
    http-request deny if { req.body -m reg "SELECT \*" }
    default_backend b_myapp

backend b_myapp
    server s1 10.0.0.3:4444 check
New converters
1.5 introduced converters, but only very few of them were available.
1.6 adds many more. The list is too long to reproduce here, but let’s mention the most important ones: json, in_table, field, regsub, table_* (to access counters from stick-tables), etc.
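As a quick, hypothetical illustration of two of them applied inside header rewrites:

frontend f_myapp
    bind :9001
    # field(): extract the first path component ("/foo/bar" -> "foo")
    http-request set-header X-App %[path,field(2,/)]
    # json(): escape the User-Agent so it can safely be embedded in a JSON document
    http-request set-header X-UA-Json %[req.fhdr(User-Agent),json]
    default_backend b_myapp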
Device Identification
Through our company, we have some customers who want HAProxy to be able to detect the device type and its characteristics and report them to the backend server.
We received contributions from two companies that are experts in this domain: 51Degrees and DeviceAtlas.
You can now load these libraries into HAProxy in order to fully qualify a client’s capabilities and set headers your application server can rely on to adapt the content delivered to the client, or that a Varnish cache server can use to cache multiple flavors of the same object based on client capabilities.
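As a hedged hint, the 1.6 Makefile exposes dedicated build options for each library; the source paths below are assumptions, adjust them to where you unpacked the vendor code:

# build with the DeviceAtlas C API (path is an assumption)
make TARGET=linux2628 USE_PCRE=1 USE_DEVICEATLAS=1 DEVICEATLAS_SRC=/usr/src/deviceatlas/Src

# or build with the 51Degrees pattern library (path is an assumption)
make TARGET=linux2628 USE_PCRE=1 USE_51DEGREES=1 51DEGREES_SRC=/usr/src/51Degrees/src/pattern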
More on this blog later on how to integrate each product.
Seamless server states
Prior to 1.6, when reloaded, HAProxy considered all servers to be UP until the first health check was performed.
Since 1.6, we can dump the server states into a flat file right before performing the reload and let the new process know where the states are stored. That way, the old and new processes own exactly the same server states (hence “seamless”).
The following information is reported:
- server IP address when resolved by DNS
- operational state (UP/DOWN/…)
- administrative state (MAINT/DRAIN/…)
- Weight (including slowstart relative weight)
- health check status
- rise / fall current counter
- check state (ENABLED/PAUSED/…)
- agent check state (ENABLED/PAUSED/…)
The states can be applied globally (to all servers found) or per backend.
Example:
Simple HAProxy configuration:
global
    stats socket /tmp/socket
    server-state-file /tmp/server_state

backend bk
    load-server-state-from-file global
    server s1 10.0.0.3:4444 check weight 11
    server s2 10.0.0.4:4444 check weight 12
Before reloading HAProxy, we save the server states using the following command:
socat /tmp/socket - <<< "show servers state" > /tmp/server_state
Here is the content of /tmp/server_state file:
1
# <field names skipped for the blog article>
1 bk 1 s1 10.0.0.3 2 0 11 11 4 6 3 4 6 0 0
1 bk 2 s2 10.0.0.4 2 0 12 12 4 6 3 4 6 0 0
Now, let’s proceed with reload as usual.
Of course, the best option is to export the server states using the init script.
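A minimal reload wrapper sketch, reusing the paths from the example above (your init script and haproxy paths may differ):

# dump the current server states, then start a new process while the old one finishes
socat /tmp/socket - <<< "show servers state" > /tmp/server_state
haproxy -f /etc/haproxy/haproxy.cfg -p /tmp/haproxy.pid -sf $(cat /tmp/haproxy.pid)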
External check
HAProxy can now run an external script to perform complex health checks.
Just be aware of the security implications when enabling this feature!
Configuration:
global
    external-check

backend b_myapp
    external-check path "/usr/bin:/bin"
    external-check command /bin/true
    server s1 10.0.0.3:4444 check
TLS / SSL
NOTE: some of the features introduced here may require a recent OpenSSL library.
Detection of ECDSA-able clients
This has already been documented by Nenad on this blog: Serving ECC and RSA certificates on same IP with HAProxy
SSL certificate forgery on the fly
Since 1.6, HAProxy can forge SSL certificates on the fly!
Yes, you can use HAProxy with your company’s CA to inspect content.
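A minimal sketch, assuming a default certificate and your company’s signing CA (both file paths are hypothetical); HAProxy then generates and signs a certificate matching the requested SNI on the fly:

frontend f_tls_inspect
    # default.pem is served when no certificate can be generated
    bind :4443 ssl crt /etc/haproxy/ssl/default.pem generate-certificates ca-sign-file /etc/haproxy/ssl/company-ca.pem
    default_backend b_inspect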
Support of Certificate Transparency (RFC6962) TLS extension
When loading PEM files, HAProxy also checks for the presence of a file at the same path suffixed by “.sctl”. If such a file is found, support for the Certificate Transparency (RFC6962) TLS extension is enabled.
The file must contain a valid Signed Certificate Timestamp List, as described in the RFC. The file is parsed to check basic syntax, but no signatures are verified.
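For example, assuming a certificate at /etc/haproxy/ssl/www.pem (hypothetical path), placing the SCT list next to it is enough:

frontend f_www
    # if /etc/haproxy/ssl/www.pem.sctl exists, its SCT list is sent during the TLS handshake
    bind :443 ssl crt /etc/haproxy/ssl/www.pem
    default_backend b_myapp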
TLS Tickets key load through stats socket
A new stats socket command is available to update TLS ticket keys at runtime. The new key is used for encryption/decryption while the old ones are used for decryption only.
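A hedged sketch of both sides (file paths and the key value are placeholders; double-check the exact runtime syntax against the 1.6 management guide):

frontend f_www
    # the file contains base64-encoded 48-byte keys, one per line
    bind :443 ssl crt /etc/haproxy/ssl/www.pem tls-ticket-keys /etc/haproxy/tls_ticket_keys
    default_backend b_myapp

Then, at runtime, a new key can be pushed over the stats socket:

echo "set ssl tls-key /etc/haproxy/tls_ticket_keys <new-base64-encoded-key>" | socat /tmp/socket -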
Server side SNI
Many application servers now take advantage of the Server Name Indication (SNI) TLS extension.
Example:
backend b_myapp_ssl
    mode http
    server s1 10.0.0.3:4444 check ssl sni req.hdr(Host)
Peers v2
The peers protocol, used to synchronize stick-table data between HAProxy servers, can now synchronize more than just the sample and the server-id.
It can synchronize all the tracked counters. Note that each node pushes its local counters to its peers, so this must be used for safe reloads and server failover only.
Don’t expect 10 HAProxy servers to sync and aggregate counters in real time.
That said, the protocol has been extended to support different data types, so we may see more features relying on it soon 😉
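As a hedged sketch reusing the “article” peers section from the multiprocess example above, a table that stores tracked counters will have those counters pushed to the peer as well:

backend b_myapp
    # store a general purpose counter and the HTTP request count, synchronized via peers
    stick-table type ip size 10k peers article store gpc0,http_req_cnt
    http-request track-sc0 src
    server s1 10.0.0.3:4444 check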
HTTP connection sharing
On the road to HTTP/2, HAProxy must be able to support connection pooling.
And on the road to connection pools, we now have the ability to share a server-side connection between multiple clients.
A server-side connection may be used by multiple clients, until the owner of the session (the client-side connection) dies. Then new connections may be established.
The new keyword is http-reuse, and it offers different levels of connection sharing:
- never: no connections are shared
- safe: the first request of each client is sent over its own connection, subsequent requests may use another connection. This works like regular HTTP keep-alive.
- aggressive: requests are sent to connections that have proved to reliably support connection reuse (no quick connection close after a response has been sent).
- always: requests are sent to established connections whatever happens. If the server closed the connection in the meantime, the request is lost and the client must resend it.
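Configuration is a single backend keyword; a minimal sketch:

backend b_myapp
    # share idle server-side connections between clients when it is safe to do so
    http-reuse safe
    server s1 10.0.0.3:4444 check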
Get rid of 408 in logs
Simply use the new option http-ignore-probes.
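For example, in a defaults or frontend section:

defaults
    mode http
    # don't log or return a 408 for connections that never send a complete request
    option http-ignore-probes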
Baptiste, just a few comments so that people take up good habits from the start:
1) don’t use double-quotes for regex, use single quotes. Just like in shell scripting, double quoting supports environment variables prefixed with the dollar sign ($) which is often used in regex. People may get trapped over the long term. It would even be better to train people to always use single quotes by default.
2) Lua 5.3.1 has been out for the last 4 months, better train people to apply fixes. Newer versions appear here: http://www.lua.org/ftp/
Otherwise it looks good. Thanks!
Regarding dynamic name resolution, how does HAProxy behave when a DNS record returns multiple A/AAAA records? Will it treat them as separate servers and load balance accordingly, or it will only use one of them. The former would be ideal when using service discovery protocols like Consul or SkyDNS (no need for reloading config on changes).
Also I’m wondering whether adding support for SRV records (https://en.wikipedia.org/wiki/SRV_record) could open a window for new possibilities (controlling weight, for example).
HAProxy takes only one IP per server, even when the DNS response contains many records. The first IP matching the address family is used, unless the current IP is found in the response.
We’re planning to use SRV records later; we first need to study the impact of this type of record and the information it provides.
Regarding seamless server states, according to my testing with version 1.6.3, the administrative state DRAIN is not preserved after a reload.
The MAINT administrative state and also the weight are preserved, but I am not sure if “set state to DRAIN” is the same as “set weight 0”, so that I could use weight instead of DRAIN.
Also, maybe related somehow, the DRAIN state is not highlighted in blue on the stats page as it was in version 1.5.x.
Quick question, what command line equivalent do you recommend to compile haproxy 1.6.3 with lua 5.3.0 on macosx? Thanks
We have no macosx users here so I don’t know, but you’ll find some of them on HAProxy’s ML.
No problem, I figured it out and posted my solution in case it helps somebody else.
I think I’ve found the correct one so I’ll reply to myself in case someone is struggling too. I’m on El Capitan btw:
> Compile LUA 5.3.0
wget http://www.lua.org/ftp/lua-5.3.0.tar.gz
tar xvzf lua-5.3.0.tar.gz
cd lua-5.3.0
make macosx
make INSTALL_TOP=/a/folder install
> Compile Haproxy with Lua
make TARGET=generic USE_OPENSSL=1 USE_PCRE=1 USE_LUA=1 LUA_LIB=/a/folder/lib LUA_INC=/a/folder/include SSL_INC=/usr/local/opt/openssl/include SSL_LIB=/usr/local/opt/openssl/lib
I had to specify the path to OpenSSL. Even with USE_OPENSSL set to null I had this error:
In file included from src/hlua.c:38:
include/proto/ssl_sock.h:24:10: fatal error: ‘openssl/ssl.h’ file not
found
#include
^
1 error generated.
make: *** [src/hlua.o] Error 1
How do I get alerted when a back end comes back up? It seems to only alert on down atm.
I am planning to use “http-reuse always” in the backend section. I want to know the risks involved.
Also, you said that “if the server was closing the connection in the mean time, the request is lost and the client must resend it”, what do you mean by client here? The connection was between haproxy and the server.
I would like to know how I can set up FTP forwarding, please help! Thanks a lot!
I am planning to limit HTTP requests to 1.5 req/sec for my website. How can I achieve this?
It depends on whether you want to queue extra requests or reject them. Also, the measure will be an integer number per second (either 1 or 2, but not 1.5), or you can enforce a limit of 3 every 2 seconds. If you just want to drop requests once the limit is reached, you can do this using an ACL and fe_req_rate(). If you want to slow down without dropping, then this is doable at the connection level (but then you will have to assume one request per connection) using “rate-limit sessions” in the frontend.
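As a hedged sketch of both approaches (the rate is rounded up to 2/s since fractional rates can't be expressed directly):

frontend f_www
    bind :80
    # option 1: reject requests once the frontend exceeds 2 requests per second
    http-request deny if { fe_req_rate gt 2 }
    # option 2: slow down new connections instead of rejecting (one request per connection assumed)
    rate-limit sessions 2
    default_backend b_myapp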