HAProxy Enterprise Documentation 1.6r1

DNS Round-robin

To create an active-active cluster of load balancers, you can use DNS round-robin to send traffic to each load balancer in a rotation.

To remove unhealthy instances of HAProxy Enterprise from the round-robin rotation, you need a way to detect failure. If your DNS server supports health checking of upstream servers, you can use that feature to remove unhealthy instances. In this section, we assume that your DNS server does not support health checking. Instead, you will configure the HAProxy Enterprise VRRP module to assign a virtual IP address to each load balancer. If one of your load balancers fails, its virtual IP address transfers to a healthy load balancer instance.

A side effect of this failover is that your healthy instance, which had been receiving its own share of traffic already, will temporarily take on more traffic until the other instance recovers.

In this guide, we set up two load balancers, both actively receiving traffic.

Configure VRRP

Install the VRRP module using your system's package manager on both instances of HAProxy Enterprise that will participate in your load balancer cluster:

$ # On Debian/Ubuntu
$ sudo apt-get install hapee-1.6r1-vrrp
$ # On CentOS/RedHat/Oracle
$ sudo yum install hapee-1.6r1-vrrp

On the first load balancer, edit the file /etc/hapee-extras/hapee-vrrp.cfg. By default, this file is configured for an active-standby cluster and it looks like this:

# Check for the presence of the SSH daemon. That way, if SSH dies, we prefer
# the other node which remains remotely manageable.

vrrp_script chk_sshd {
   script "pkill -0 sshd"          # pkill -0 is cheaper than pidof
   interval 5                      # check every 5 seconds
   weight -4                       # remove 4 points of prio if missing
   fall 2                          # check twice before setting down
   rise 1                          # check once before setting up
}

# Check for the presence of the load balancer daemon (hapee-lb) itself. The
# weight is higher than for SSHD, because if one node only has SSHD and the
# other one only has the LB running, we prefer the LB.

vrrp_script chk_lb {
   script "pkill -0 hapee-lb"      # pkill -0 is cheaper than pidof
   interval 1                      # check every second
   weight 6                        # add 6 points of prio if present
   fall 2                          # check twice before setting down
   rise 1                          # check once before setting up
}

# This is an example of how it would be possible to check if the LB sees some
# operational servers, and to use the result to decide to be primary or backup.
# The "/are-you-ok" url should be referenced as a "monitor-uri" in hapee-lb,
# and this vrrp_script should be referenced in the "track_script" block of the
# concerned VRRP instances.

vrrp_script chk_servers {
   script "echo 'GET /are-you-ok' | nc 127.1 8080 | grep -q '200 OK'"
   interval 2                      # check every 2 seconds
   weight 2                        # add 2 points of prio if OK
   fall 2                          # check twice before setting down
   rise 2                          # check twice before setting up
}
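For the chk_servers script to return a useful result, the load balancer must expose the /are-you-ok URL via a monitor-uri, as the comment above notes. A minimal sketch of such a listener, assuming a backend named web_servers (the frontend name and port here are illustrative, not part of the default configuration):

```
# Health-check listener for the chk_servers VRRP script.
frontend monitoring
   mode http
   bind 127.0.0.1:8080
   monitor-uri /are-you-ok
   # Report failure when no server in the backend is up,
   # so VRRP priority drops on a node with no usable servers.
   acl no_live_servers nbsrv(web_servers) lt 1
   monitor fail if no_live_servers
```

With this in place, the nc check in chk_servers receives a 200 OK only while at least one backend server is operational.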

vrrp_instance vrrp_1 {
   interface eth0                  # or bond0 or whatever prod interface
   state MASTER                    # try to be primary (MASTER) without waiting
   virtual_router_id 51            # use a distinct value for each instance
   priority 101                    # 101 on primary, 100 on backup
   virtual_ipaddress_excluded {
         55.55.55.55               # your shared service IP address(es)
   }
   track_interface {
      eth0 weight -2               # interfaces to monitor
      # eth1 weight -2
   }
   track_script {
      chk_sshd
      chk_lb
   }
}

Replace the vrrp_instance block with the following contents.

  • Replace the interface lines with the name of the network interface on which this server receives traffic.

  • Replace the IP addresses listed in the virtual_ipaddress_excluded blocks with addresses you'd like to use to receive traffic. These new addresses should fall within the interface's IP subnet, but should not be assigned to any server already.

vrrp_instance vrrp_1 {
   interface enp0s8              # Change network interface name
   state MASTER
   virtual_router_id 51
   priority 101
   virtual_ipaddress_excluded {
      192.168.50.10               # NEW IP address
   }
   track_interface {
      enp0s8 weight -2            # Change network interface name
   }
   track_script {
      chk_sshd
      chk_lb
   }
}

vrrp_instance vrrp_2 {
   interface enp0s8               # Change network interface name
   state BACKUP
   virtual_router_id 52
   priority 100
   virtual_ipaddress_excluded {
      192.168.50.11                # NEW IP address
   }
   track_interface {
      enp0s8 weight -2             # Change network interface name
   }
   track_script {
      chk_sshd
      chk_lb
   }
}

This creates two virtual IP addresses: 192.168.50.10 and 192.168.50.11. The first instance has its state set to MASTER, while the second is set to BACKUP. This load balancer therefore owns the first IP address, and binds to the second only as a backup, that is, when that address's primary owner fails.

On the other load balancer, configure the VRRP module in a similar way, but swap the state values so that the first vrrp_instance block is set to BACKUP, while the second is set to MASTER. Also, swap the priority values. In this way, each load balancer is a backup for the other. If one fails, its virtual IP will transfer to the other node, which will at that point answer requests from both addresses until the failed instance recovers.

vrrp_instance vrrp_1 {
   interface enp0s8              # Change network interface name
   state BACKUP
   virtual_router_id 51
   priority 100
   virtual_ipaddress_excluded {
      192.168.50.10               # NEW IP address
   }
   track_interface {
      enp0s8 weight -2            # Change network interface name
   }
   track_script {
      chk_sshd
      chk_lb
   }
}

vrrp_instance vrrp_2 {
   interface enp0s8               # Change network interface name
   state MASTER
   virtual_router_id 52
   priority 101
   virtual_ipaddress_excluded {
      192.168.50.11                # NEW IP address
   }
   track_interface {
      enp0s8 weight -2             # Change network interface name
   }
   track_script {
      chk_sshd
      chk_lb
   }
}

Start the VRRP service on both servers, and enable it so that it starts at boot:

$ sudo systemctl start hapee-1.6r1-vrrp
$ sudo systemctl enable hapee-1.6r1-vrrp
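Once the service is running on both nodes, you can verify which node currently holds each virtual IP by looking for the addresses in the interface listing. A quick sketch, using the example addresses from this guide:

```shell
# Report which of the guide's example virtual IPs are bound on this node.
# Run on each node: in normal operation, each node holds exactly one VIP.
for vip in 192.168.50.10 192.168.50.11; do
    if ip addr show | grep -q "inet $vip/"; then
        echo "$vip: assigned to this node"
    else
        echo "$vip: not on this node"
    fi
done
```

If one node reports both addresses, the other node has failed (or its VRRP service is not running) and failover has occurred.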

Edit your HAProxy Enterprise configuration on both load balancers to listen on the virtual IP addresses. Note that you must append the transparent parameter to each bind line, which allows HAProxy to bind an address even if it does not currently belong to the local machine. This is necessary because each virtual IP floats to whichever load balancer is active.

frontend myfrontend
   mode http
   bind 192.168.50.10:80 transparent
   bind 192.168.50.11:80 transparent
   default_backend web_servers

backend web_servers
   server s1 192.168.50.20:80

Alternatively, if you prefer not to add transparent to every bind line, you can set the kernel parameter net.ipv4.ip_nonlocal_bind. Edit the file /etc/sysctl.conf and add the following line:

net.ipv4.ip_nonlocal_bind=1

Then run sudo sysctl -p to reload the configuration.
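To confirm that the setting took effect, read the value back, either with sysctl or directly from /proc:

```shell
# Read the current value; after 'sysctl -p' this should print 1.
cat /proc/sys/net/ipv4/ip_nonlocal_bind
```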

As a third option, you can listen on all IP addresses assigned to the server by replacing the address with an asterisk or omitting the IP portion altogether:

frontend myfrontend
   mode http
   bind *:80
   default_backend web_servers

In essence, we have created an active-active cluster by creating two active-standby clusters, one in each direction. Each load balancer has an active IP address, but also serves as a standby server for the other IP address. Refer to the section on Failover triggers to understand the VRRP configuration better.

Configure DNS Round-robin

DNS allows you to rotate which IP address is returned in response to a DNS query. When clients request your service by its domain name, they will receive the IP address of the next load balancer in the list. Clients tend to cache DNS results, so once a client receives a DNS answer, it will likely continue making requests to the same load balancer until the DNS answer expires.

Create an A record for each load balancer IP address. For example, consider the following DNS zone file:

$ORIGIN example.local.

@       3600 IN SOA dns1.example.local. admin.example.local. (
            2017042745 ; serial
            1800       ; refresh (30 minutes)
            900        ; retry (15 minutes)
            1209600    ; expire (2 weeks)
            60         ; minimum (1 minute)
         )

        3600 IN NS dns1.example.local.

@     60 IN A     192.168.50.10
@     60 IN A     192.168.50.11
www   60 IN CNAME @

Depending on your DNS server, you may need to enable load balancing of the A records explicitly. It is also a good idea to set a short TTL on these records, as above, so that answers do not stay cached in intermediate nameservers for long.
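How A-record rotation is enabled depends on the server. In BIND, for example, you can request cyclic ordering of the answers with an rrset-order statement in named.conf (a sketch; the zone name matches the example above, and depending on your BIND version cyclic ordering may already be the default):

```
options {
    rrset-order {
        class IN type A name "example.local" order cyclic;
    };
};
```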


Next up

Route Health Injection