HAProxy is a load balancer, this is a fact. It is primarily used to route traffic to servers in order to ensure application reliability.
Most of the time, sessions are stored locally on a server, which means that if you want to split client traffic across multiple servers, you have to ensure each user is redirected to the server which manages their session (if that server is available, of course). HAProxy can do this in many ways: we call it persistence.
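As a quick illustration, here is a minimal sketch of cookie-based persistence in HTTP mode (the backend name, server names and addresses are placeholders, not part of the setup discussed below):

backend bk_app
 balance roundrobin
 # HAProxy inserts a SERVERID cookie so each client keeps hitting the server that set it
 cookie SERVERID insert indirect nocache
 server s1 10.0.0.1:80 check cookie s1
 server s2 10.0.0.2:80 check cookie s2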
Thanks to persistence, we usually say that any application can be load-balanced… which is true in 99% of the cases. In very rare cases, the application can’t be load-balanced at all: there might be a lock somewhere in the code, or some other good reason…
In such cases, to ensure high availability, we build "active/passive" clusters, where only one node can be active at a time.
HAProxy can be used in different ways to emulate an active/passive clustering mode, and this is the purpose of today’s article.
Bear in mind that by "active/passive", I mean that 100% of the users must be forwarded to the same server. And if a failover occurs, they must all follow it at the same time!
Let’s use one HAProxy with a couple of servers, s1 and s2.
When starting up, s1 is master and s2 is used as backup:
 -------------
 |  HAProxy  |
 -------------
    |       `
    |active  ` backup
    |         `
 ------      ------
 | s1 |      | s2 |
 ------      ------
Automatic failover and failback
The configuration below makes HAProxy use s1 when it is available, and fail over to s2 otherwise (if s2 is available, of course):
defaults
 mode http
 option http-server-close
 timeout client 20s
 timeout server 20s
 timeout connect 4s

frontend ft_app
 bind 10.0.0.100:80 name app
 default_backend bk_app

backend bk_app
 server s1 10.0.0.1:80 check
 server s2 10.0.0.2:80 check backup
The most important keyword above is "backup", on the s2 configuration line.
Unfortunately, as soon as s1 comes back up, all the traffic will fail back to it again. This can be acceptable for web applications, but not for an active/passive cluster.
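As a side note, how quickly the failover happens depends on the health check settings. A minimal sketch of tuning them on the server lines (the values here are purely illustrative):

backend bk_app
 # probe every 2s, mark DOWN after 3 failed checks, UP again after 2 successful ones
 server s1 10.0.0.1:80 check inter 2s fall 3 rise 2
 server s2 10.0.0.2:80 check inter 2s fall 3 rise 2 backup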
Automatic failover without failback
The configuration below makes HAProxy use s1 when it is available, and fail over to s2 otherwise (if s2 is available, of course).
Once a failover has occurred, no failback will be performed automatically, thanks to the stick table:
peers LB
 peer LB1 10.0.0.98:1234
 peer LB2 10.0.0.99:1234

defaults
 mode http
 option http-server-close
 timeout client 20s
 timeout server 20s
 timeout connect 4s

frontend ft_app
 bind 10.0.0.100:80 name app
 default_backend bk_app

backend bk_app
 stick-table type ip size 2 nopurge peers LB
 stick on dst
 server s1 10.0.0.1:80 check
 server s2 10.0.0.2:80 check backup
The stick table will maintain persistence based on the destination IP address (10.0.0.100 in this case):
show table bk_app
# table: bk_app, type: ip, size:20480, used:1
0x869154: key=10.0.0.100 use=0 exp=0 server_id=1
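The output above can be obtained from HAProxy’s runtime (stats) socket. A minimal sketch, assuming a stats socket is declared in the global section and socat is installed (the socket path is just an example):

global
 stats socket /var/run/haproxy.sock level admin

echo "show table bk_app" | socat stdio /var/run/haproxy.sock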
With such a configuration, you can trigger a failback by disabling s2 for a few seconds.
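For example, this could be done through the same runtime socket, using the standard disable/enable server commands (the socket path is still an example):

echo "disable server bk_app/s2" | socat stdio /var/run/haproxy.sock
# wait a few seconds: s1 takes the traffic back and the stick table entry is updated
echo "enable server bk_app/s2" | socat stdio /var/run/haproxy.sock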