HAProxy Enterprise Documentation 1.7

External Mode

You can run the ingress controller on a server outside of your Kubernetes cluster. Doing so can reduce the number of proxies and load balancers necessary for routing traffic into your cluster.

In this scenario, we use the following components:

• HAProxy Kubernetes Ingress Controller: The ingress controller runs as a standalone process outside of your Kubernetes cluster.
• Project Calico: A network plugin for Kubernetes. It supports BGP peering, which allows pods inside your Kubernetes cluster to share their IP addresses with a server outside of the cluster.
• BIRD Internet Routing Daemon: A software-defined router. It receives routes from Project Calico and makes them available to the ingress controller.

Set up your Kubernetes Cluster with Project Calico

Follow these steps to set up your Kubernetes cluster. This solution relies on using Project Calico as the network plugin in your Kubernetes cluster. Project Calico supports BGP peering, which allows pods inside your Kubernetes cluster to share their IP addresses with a server outside of the cluster.

  1. Follow the Install Docker Engine guide to install Docker on the Linux server where you would like to install your first Kubernetes node.

  2. Set the cgroup driver for the Docker service to be systemd. Create the file /etc/docker/daemon.json and add the following contents to it:

    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

    Then restart the Docker service:

    $ sudo systemctl restart docker
    
  3. Disable swap, as required by the kubelet service.

    $ sudo swapoff -a
    
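    Note that swapoff disables swap only until the next reboot. To keep swap disabled permanently, you can also comment out any swap entries in /etc/fstab; a sketch, assuming a standard fstab layout:

```shell
# Back up /etc/fstab, then comment out lines that mount swap so
# the setting survives a reboot.
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```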
  4. Follow the Installing kubeadm guide to install the kubeadm, kubectl, and kubelet packages.
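    For reference, the installation on Ubuntu looked like the following at the time of writing; the package repository details change over time, so prefer the commands in the Installing kubeadm guide if these differ:

```shell
# Add the Kubernetes apt repository, then install the three packages.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
   sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# Hold the packages so routine upgrades do not break the cluster.
sudo apt-mark hold kubelet kubeadm kubectl
```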

  5. Call kubeadm init to install Kubernetes on this server.

    $ sudo kubeadm init --pod-network-cidr=172.16.0.0/16
    
    • The --pod-network-cidr argument sets the range of IP addresses to use for the pod network. The control plane node assigns each new pod an IP address in this range.
    • If your server has more than one network interface, add the --apiserver-advertise-address argument to specify the IP address that the Kubernetes API listens on. Otherwise, it uses the default network interface.

    Refer to the kubeadm init documentation for more information about each argument.

    After running this command, the output shows the token you will need to join other nodes to the cluster. Save this token for later.

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 172.31.24.118:6443 --token rncm5q.w9nv225jb9i053yz \
            --discovery-token-ca-cert-hash sha256:f52d344283070d2d132e16ec969b9159ea511ff5d2b0724e55ec0d99c43f5e79
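
    If you lose the join command, or the token expires (by default, tokens are valid for 24 hours), you can generate a new one on the control plane node:

```shell
# Prints a fresh "kubeadm join ..." command, including a new
# token and the discovery CA certificate hash.
sudo kubeadm token create --print-join-command
```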
    
  6. After the installation, copy the kube config file to the root user’s home directory. This allows you to connect to the Kubernetes API using kubectl.

    $ sudo mkdir -p /root/.kube
    $ sudo cp /etc/kubernetes/admin.conf /root/.kube/config
    $ sudo chown root:root /root/.kube/config
    
  7. Install Project Calico as the network plugin in your Kubernetes cluster. First, install its operator:

    $ sudo kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

    Refer to the Project Calico Quickstart guide for detailed instructions.

  8. Create a new YAML file named calico-installation.yaml that defines an Installation custom resource. This will configure Project Calico as the network plugin and enable BGP peering. The cidr line should match the IP range you defined with the --pod-network-cidr argument when calling kubeadm init.

    # This section includes base Calico installation configuration.
    # For more information, see: https://docs.projectcalico.org/v3.19/reference/installation/api#operator.tigera.io/v1.Installation
    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
      # Configures Calico networking.
      calicoNetwork:
        bgp: Enabled
    
        # Note: The ipPools section cannot be modified post-install.
        ipPools:
        - blockSize: 26
          cidr: 172.16.0.0/16
          encapsulation: IPIP
          natOutgoing: Enabled
          nodeSelector: all()

    Apply the file to your cluster using kubectl apply.

    $ sudo kubectl apply -f ./calico-installation.yaml
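    The operator then rolls out the Calico components, typically into the calico-system namespace. You can watch the rollout and wait for all pods to report Running:

```shell
# Watch the Calico pods come up; press Ctrl-C once they are Running.
sudo kubectl get pods -n calico-system --watch
```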
    
  9. Install the calicoctl command-line tool and copy it to the /usr/local/bin directory. We will use this to finish the setup of Project Calico.
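    A sketch of the download, assuming calicoctl v3.19.1; adjust the version to match your Calico installation:

```shell
# Download the calicoctl binary, make it executable, and move it
# into the PATH.
curl -L -o calicoctl \
   https://github.com/projectcalico/calicoctl/releases/download/v3.19.1/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/
```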

  10. Create the directory /etc/calico and then add a file named calicoctl.cfg to it. Add the following contents to the file, which configure calicoctl to connect to the Kubernetes API:

    apiVersion: projectcalico.org/v3
    kind: CalicoAPIConfig
    metadata:
    spec:
      datastoreType: "kubernetes"
      kubeconfig: "/root/.kube/config"
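
    To confirm that calicoctl can reach the Kubernetes datastore, list the nodes in the cluster:

```shell
# If the configuration file is correct, this prints the cluster's nodes.
sudo calicoctl get nodes
```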
  11. Create a file named calico-bgp.yaml. Add the following contents to it, which enable BGP peering.

    Be sure to change the peerIP field to be the IP address of the server where you will run the ingress controller.

    apiVersion: projectcalico.org/v3
    kind: BGPConfiguration
    metadata:
      name: default
    spec:
      logSeverityScreen: Info
      nodeToNodeMeshEnabled: true
      asNumber: 65000
    ---
    apiVersion: projectcalico.org/v3
    kind: BGPPeer
    metadata:
      name: my-global-peer
    spec:
      peerIP: 172.31.25.187
      asNumber: 65000
    • The asNumber field defines the BGP autonomous system (AS) number you wish to use.
    • The peerIP field should be the IP address of the server where you will install the ingress controller.

    Apply it with the calicoctl apply command:

    $ sudo calicoctl apply -f ./calico-bgp.yaml
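    You can confirm that both resources were created:

```shell
# List the BGP configuration and the global peer defined above.
sudo calicoctl get bgpConfiguration
sudo calicoctl get bgpPeer
```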
  12. Create an empty ConfigMap resource in your cluster, which the ingress controller requires.

    $ sudo kubectl create configmap haproxy-kubernetes-ingress
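    The ingress controller will look for this ConfigMap as default/haproxy-kubernetes-ingress, so verify that it was created in the default namespace:

```shell
# Should list the haproxy-kubernetes-ingress ConfigMap.
sudo kubectl get configmap haproxy-kubernetes-ingress -n default
```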
  13. If you call calicoctl node status, the Info column should say Connection refused. This is because we have not configured the ingress controller server yet.

    $ sudo calicoctl node status
    
    Calico process is running.
    
    IPv4 BGP status
    +---------------+-----------+-------+----------+--------------------------------+
    | PEER ADDRESS  | PEER TYPE | STATE |  SINCE   |              INFO              |
    +---------------+-----------+-------+----------+--------------------------------+
    | 172.31.25.187 | global    | start | 22:36:42 | Connect Socket: Connection     |
    |               |           |       |          | refused                        |
    +---------------+-----------+-------+----------+--------------------------------+
    
    IPv6 BGP status
    No IPv6 peers found.
    
  14. To add a worker node to the cluster, repeat steps 1-4 on another server to install Docker and the kubeadm, kubectl, and kubelet packages. Then call the kubeadm join command that was displayed after you ran kubeadm init:

    $ sudo kubeadm join 172.31.24.118:6443 --token rncm5q.w9nv225jb9i053yz \
         --discovery-token-ca-cert-hash sha256:f52d344283070d2d132e16ec969b9159ea511ff5d2b0724e55ec0d99c43f5e79
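
    Back on the control plane server, the new worker should appear and eventually reach the Ready state:

```shell
# Lists all nodes in the cluster with their status and roles.
sudo kubectl get nodes
```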
    

Install the ingress controller outside of your cluster

On a separate server not joined to your Kubernetes cluster, follow these steps to install the HAProxy Kubernetes Ingress Controller as a standalone process.

  1. Copy the kube config file to this server and store it in the root user’s home directory. The ingress controller will use this to connect to the Kubernetes API.

    $ sudo mkdir -p /root/.kube
    $ sudo cp admin.conf /root/.kube/config
    $ sudo chown -R root:root /root/.kube
    
  2. Install the HAProxy package for your Linux distribution. For Ubuntu, use these commands:

    $ sudo add-apt-repository -y ppa:vbernat/haproxy-2.4
    $ sudo apt update
    $ sudo apt install -y haproxy
    
  3. Stop and disable the HAProxy service.

    $ sudo systemctl stop haproxy
    $ sudo systemctl disable haproxy
    
  4. Call the setcap command to allow HAProxy to bind to ports 80 and 443:

    $ sudo setcap cap_net_bind_service=+ep /usr/sbin/haproxy
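    You can verify that the capability was applied:

```shell
# Shows the capabilities set on the HAProxy binary; the output
# should include cap_net_bind_service.
sudo getcap /usr/sbin/haproxy
```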
    
  5. Download the ingress controller from the project’s GitHub Releases page.

    Extract it and then copy it to the /usr/local/bin directory.

    Example:

    $ wget https://github.com/haproxytech/kubernetes-ingress/releases/download/v1.6.7/haproxy-ingress-controller_1.6.7_Linux_x86_64.tar.gz
    $ tar -xzvf haproxy-ingress-controller_1.6.7_Linux_x86_64.tar.gz
    $ sudo cp ./haproxy-ingress-controller /usr/local/bin/
    
  6. Create a systemd service file at /lib/systemd/system/haproxy-ingress.service. Add the following to it:

    [Unit]
    Description="HAProxy Kubernetes Ingress Controller"
    Documentation=https://www.haproxy.com/
    Requires=network-online.target
    After=network-online.target
    
    [Service]
    Type=simple
    User=root
    Group=root
    ExecStart=/usr/local/bin/haproxy-ingress-controller --external --configmap=default/haproxy-kubernetes-ingress --program=/usr/sbin/haproxy --disable-ipv6 --ipv4-bind-address=0.0.0.0 --http-bind-port=80
    ExecReload=/bin/kill --signal HUP $MAINPID
    KillMode=process
    KillSignal=SIGTERM
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
  7. Enable and start the service.

    $ sudo systemctl enable haproxy-ingress
    $ sudo systemctl start haproxy-ingress
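    To confirm that the controller started cleanly, check the service status and inspect its recent log output:

```shell
# Show the service state, then the last 50 log lines from the
# ingress controller.
sudo systemctl status haproxy-ingress
sudo journalctl -u haproxy-ingress --no-pager -n 50
```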
    

Install the BIRD Internet Routing Daemon

To enable the ingress controller to route requests to pods in your Kubernetes cluster, it must get routing information via BGP from the Project Calico network plugin. To do that, install the BIRD Internet Routing Daemon, which acts as a software-defined router that adds IP routes to the server where the ingress controller is running.

  1. On the ingress controller server, install BIRD.

    $ sudo add-apt-repository -y ppa:cz.nic-labs/bird
    $ sudo apt update
    $ sudo apt install bird
    
  2. Edit the file named bird.conf in the /etc/bird directory. Replace its contents with the following, but change:

    • the router id to the current server’s IP address.
    • the local line’s IP address in each protocol section to the current server’s IP address.
    • the neighbor line in each protocol section to the IP address of a node in your Kubernetes cluster. One of these should be the control plane server’s IP address.

    router id 172.31.25.187;
    log syslog all;
    
    # control plane node
    protocol bgp {
       local 172.31.25.187 as 65000;
       neighbor 172.31.24.118 as 65000;
       direct;
       import filter {
          if ( net ~ [ 172.16.0.0/16{26,26} ] ) then accept;
       };
       export none;
    }
    
    # worker node
    protocol bgp {
       local 172.31.25.187 as 65000;
       neighbor 172.31.23.65 as 65000;
       direct;
       import filter {
          if ( net ~ [ 172.16.0.0/16{26,26} ] ) then accept;
       };
       export none;
    }
    
    # Inserts routes into the kernel routing table
    protocol kernel {
       scan time 60;
       export all;
    }
    
    # Gets information about network interfaces from the kernel
    protocol device {
       scan time 60;
    }
    • The router id line is the IP address of this ingress controller server.
    • Each protocol bgp section connects BIRD to a Kubernetes node via BGP. Each is considered a neighbor. Set the local field to the same value as router id. Set neighbor to the node’s IP address. This example uses 65000 as the Autonomous System number, but you can choose a different value.
  3. Enable and start the BIRD service.

    $ sudo systemctl enable bird
    $ sudo systemctl restart bird
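    Once the BGP sessions come up, BIRD inserts routes for the pod network blocks into this server's kernel routing table. You can inspect them:

```shell
# Show the routes BIRD has learned, then confirm they reached the
# kernel (the pod network in this guide is 172.16.0.0/16).
sudo birdc show route
ip route | grep 172.16.
```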
    
  4. After completing these steps, the ingress controller is configured to communicate with your Kubernetes cluster and, once you’ve added an Ingress resource using kubectl, it can route traffic to pods.

    Be sure to allow the servers to communicate by adding rules to your firewall.

    On the control plane server, calling calicoctl node status should show that connections to the ingress controller and any worker nodes have been established.

    $ sudo calicoctl node status
    
    Calico process is running.
    
    IPv4 BGP status
    +---------------+-------------------+-------+----------+-------------+
    | PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
    +---------------+-------------------+-------+----------+-------------+
    | 172.31.25.187 | global            | up    | 23:06:44 | Established |
    | 172.31.23.65  | node-to-node mesh | up    | 23:12:00 | Established |
    +---------------+-------------------+-------+----------+-------------+
    
    IPv6 BGP status
    No IPv6 peers found.
    

    On the ingress controller server, calling sudo birdc show protocols should show that a connection has been established to the control plane server and any worker nodes.

    $ sudo birdc show protocols
    
    BIRD 1.6.8 ready.
    name     proto    table    state  since       info
    bgp1     BGP      master   up     22:38:44    Established
    bgp2     BGP      master   up     22:38:43    Established
    kernel1  Kernel   master   up     22:38:43
    device1  Device   master   up     22:38:43
    

Next up

Amazon EKS