Enable external mode for an on-premises Kubernetes installation
In this scenario, we deploy a custom Kubernetes installation that uses Project Calico as its Container Network Interface (CNI) plugin. A CNI plugin is responsible for defining the virtual network that pods use to communicate with one another. Because a pod network is typically accessible only to Kubernetes pods, we need a way to bridge this network with a public-facing, external network.
Project Calico has the ability to perform BGP peering between the pod network and an external network, allowing us to install and run the ingress controller external to Kubernetes, while still receiving IP route advertisements that enable it to relay traffic to pods.
We will use the following components:
Component | Description |
---|---|
HAProxy Kubernetes Ingress Controller | The ingress controller runs as a standalone process outside of your Kubernetes cluster. |
Project Calico | A network plugin for Kubernetes. It supports BGP peering, which allows pods inside your Kubernetes cluster to share their IP addresses with a server outside of the cluster. |
BIRD Internet Routing Daemon | A software-defined router. It receives routes from Project Calico and makes them available to the ingress controller. |
Prepare servers for Kubernetes
Deploy Linux servers that will host your Kubernetes components.
You will need:
- a control plane server: one Linux server to run the Kubernetes control plane, which manages the cluster and hosts the Kubernetes API.
- worker nodes: one or more Linux servers to act as Kubernetes worker nodes, which host pods.
- ingress controller server: one Linux server to run the HAProxy Kubernetes Ingress Controller.
On the control plane server and worker nodes, perform these steps:
-
Follow the Install Docker Engine guide to install Docker and containerd on the server. The containerd daemon will serve as the container runtime in Kubernetes.
-
By default, the containerd configuration file, /etc/containerd/config.toml, disables the Container Runtime Interface (CRI) that Kubernetes needs. To fix this, delete the file, then restart the service:

$ sudo rm /etc/containerd/config.toml
$ sudo systemctl restart containerd
-
Disable swap, as required by the Kubernetes kubelet service.

$ sudo swapoff -a
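Note that `swapoff -a` disables swap only until the next reboot. A common companion step is to comment out swap entries in /etc/fstab so the setting persists. The sketch below demonstrates the `sed` pattern against a scratch file with made-up contents; once you trust the result, point it at the real /etc/fstab (with sudo, and after taking a backup):

```shell
# Sketch: persistently disable swap by commenting out swap entries.
# Shown against a scratch copy; the fstab contents here are illustrative.
FSTAB="${FSTAB:-$(mktemp)}"
printf '%s\n' 'UUID=abcd / ext4 defaults 0 1' \
              '/swap.img none swap sw 0 0' > "$FSTAB"
sed -i '/\sswap\s/ s/^/#/' "$FSTAB"   # comment every line with a swap entry
grep '^#' "$FSTAB"                    # review: the swap entry is now commented
```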
-
Follow the Installing kubeadm guide to install the `kubeadm`, `kubectl`, and `kubelet` packages. We will use the `kubeadm` tool to install Kubernetes.
Configure the Kubernetes control plane server
At least one server must become the central management server, otherwise known as the control plane. On that server, perform the following additional steps:
-
Call `kubeadm init` to install Kubernetes on this server. Replace the value of `--apiserver-advertise-address` with your server's IP address.

$ sudo kubeadm init \
    --pod-network-cidr 172.16.0.0/16 \
    --apiserver-advertise-address 192.168.56.10
Argument | Description |
---|---|
--pod-network-cidr | Sets the range of IP addresses to use for the pod network. Each new pod will receive an IP address in this range. The IP range 172.16.0.0/16 allows up to 65534 unique IP addresses and will suffice for most installations. |
--apiserver-advertise-address | Add this optional argument if your server has more than one IP address assigned to it, to specify the address on which the Kubernetes API should listen. |

Refer to the kubeadm init documentation for more information about these and other arguments.
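The arithmetic behind the /16 choice can be checked directly: a /16 contains 2^(32-16) = 65536 addresses in total, which Calico later carves into /26 blocks of 64 addresses apiece (the `blockSize: 26` setting in the Calico Installation resource used later in this guide). A quick shell sketch:

```shell
# Address-count arithmetic for the pod network (pure arithmetic; no cluster needed).
prefix=16                            # from --pod-network-cidr 172.16.0.0/16
block=26                             # Calico's blockSize
total=$((2 ** (32 - prefix)))        # total addresses in the /16
per_block=$((2 ** (32 - block)))     # addresses per /26 block
blocks=$((total / per_block))        # /26 blocks Calico can hand out to nodes
echo "$total addresses, $blocks blocks of $per_block addresses each"
# prints: 65536 addresses, 1024 blocks of 64 addresses each
```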
After running the command, the output shows the `kubeadm join` command you will use to add worker nodes to the cluster. Copy it for later. For example:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.10:6443 --token n8jsqb.5gxbylf6zx4o61cy \
    --discovery-token-ca-cert-hash sha256:ce4dfb0efa64a0bb9071268c7a94258a9fef56be89e909a21f16f2528d8c880b
-
After the installation, a kubeconfig file is created at /etc/kubernetes/admin.conf. Copy this file to the root user's home directory. This allows you to connect to the Kubernetes API using `kubectl`, and we will configure Project Calico to use this kubeconfig file too.

$ sudo mkdir /root/.kube
$ sudo cp -i /etc/kubernetes/admin.conf /root/.kube/config
$ sudo chown root:root /root/.kube/config
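As a quick sanity check (a sketch, not part of the official procedure), confirm the copied kubeconfig is readable before moving on; both `kubectl` and `calicoctl` depend on it:

```shell
# Sanity check: kubectl and calicoctl can only reach the API if the
# kubeconfig exists and is readable. The path is overridable for testing.
KUBECONFIG_PATH="${KUBECONFIG_PATH:-/root/.kube/config}"
if [ -r "$KUBECONFIG_PATH" ]; then
  echo "kubeconfig OK: $KUBECONFIG_PATH"
else
  echo "kubeconfig missing or unreadable: $KUBECONFIG_PATH"
fi
```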
-
Optional: If the server has more than one IP address assigned to it, you must configure the Kubernetes kubelet service to use the correct one. Write the IP address to the file /etc/default/kubelet and then restart the service. For example:

$ echo "KUBELET_EXTRA_ARGS=--node-ip=192.168.56.10" | sudo tee /etc/default/kubelet
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
-
Install the Project Calico operator in your Kubernetes cluster by using the command below. Refer to the Project Calico Quickstart guide for detailed instructions.
$ sudo kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
-
Create the directory /etc/calico and add a file named calico-installation.yaml.

$ sudo mkdir -p /etc/calico
$ sudo touch /etc/calico/calico-installation.yaml

Add the following contents to the file to define an Installation custom resource that will install the Project Calico CNI plugin and enable BGP peering with networks outside of the pod network. Set the cidr line to match the IP range you chose for the --pod-network-cidr argument when calling kubeadm init.

# This section includes base Calico installation configuration.
# For more information, see: https://docs.projectcalico.org/v3.19/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    bgp: Enabled
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 172.16.0.0/16
      encapsulation: IPIP
      natOutgoing: Enabled
      nodeSelector: all()
Apply the file to your cluster using `kubectl apply`.

$ sudo kubectl apply -f /etc/calico/calico-installation.yaml
-
Install the calicoctl command-line tool: download the binary as described in the Project Calico documentation, then copy it to the /usr/local/bin directory. We will use it to finish the setup of Project Calico.

$ sudo cp ./calicoctl /usr/local/bin
-
Create a file named /etc/calico/calicoctl.cfg.

$ sudo touch /etc/calico/calicoctl.cfg

Add the following contents to the file, which configure calicoctl to connect to your Kubernetes cluster using the kubeconfig file in the root user's home directory.

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"
-
Create a file named /etc/calico/calico-bgp.yaml.

$ sudo touch /etc/calico/calico-bgp.yaml

Add the following to it to enable BGP peering with your external network. Change the peerIP field to the IP address of the server where you will run the ingress controller; this is an address on the external network, not on the pod network.

apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 65000
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: my-global-peer
spec:
  peerIP: 192.168.56.11
  asNumber: 65000
Argument | Description |
---|---|
asNumber | Defines the BGP autonomous system (AS) number you wish to use. |
peerIP | Defines the IP address of the server where you will install the ingress controller. |

Apply it with the `calicoctl apply` command:

$ sudo calicoctl apply -f /etc/calico/calico-bgp.yaml
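The example AS number 65000 falls in the 16-bit private range, 64512 through 65534, which RFC 6996 reserves for private use such as this internal peering. A small helper (hypothetical, for illustration only) to validate your own choice:

```shell
# Check that an AS number falls in the 16-bit private range (64512-65534)
# reserved by RFC 6996 for private use, as 65000 does.
is_private_asn() {
  [ "$1" -ge 64512 ] && [ "$1" -le 65534 ]
}
if is_private_asn 65000; then
  echo "65000 is a private ASN, suitable for internal peering"
fi
```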
-
Create an empty ConfigMap resource in your cluster, which the ingress controller requires upon startup.
$ sudo kubectl create configmap haproxy-kubernetes-ingress
-
To verify the setup, call `calicoctl node status`. The Info column should show a connection error, such as "Connection refused" or "No route to host". This is expected, because we have not yet configured the ingress controller server to act as the neighboring BGP peer.
$ sudo calicoctl node status
Calico process is running.
IPv4 BGP status
+---------------+-----------+-------+----------+--------------------------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+---------------+-----------+-------+----------+--------------------------------+
| 192.168.56.11 | global | start | 22:53:20 | Connect Socket: No route to |
| | | | | host |
+---------------+-----------+-------+----------+--------------------------------+
IPv6 BGP status
No IPv6 peers found.
Configure the Kubernetes worker nodes
Kubernetes worker nodes host pods. On each server that you wish to register as a worker node in the Kubernetes cluster, after following the steps in the Prepare servers for Kubernetes section, perform these additional steps:
-
On the control plane server, get the `kubeadm join` command by calling `kubeadm token create --print-join-command`. Copy it and run it on the worker node server. For example:

$ sudo kubeadm join 192.168.56.10:6443 \
    --token jqfhgn.bgvy9xko70q82awu \
    --discovery-token-ca-cert-hash sha256:ce4dfb0efa64a0bb9071268c7a94258a9fef56be89e909a21f16f2528d8c880b

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
-
Optional: If the server has more than one IP address assigned to it, you must configure the Kubernetes kubelet service to use the correct one. Write the IP address to the file /etc/default/kubelet and then restart the service. For example:

$ echo "KUBELET_EXTRA_ARGS=--node-ip=192.168.56.11" | sudo tee /etc/default/kubelet
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
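If you are unsure which address to pass as `--node-ip`, a helper along these lines (hypothetical; the parsing is a sketch) can pull the first global IPv4 address out of `ip -o -4 addr show` output. Verify the result before writing it to /etc/default/kubelet:

```shell
# Hypothetical helper: extract the first global (non-loopback) IPv4
# address from `ip -o -4 addr show` output, as a candidate for --node-ip.
first_global_ipv4() {
  awk '/scope global/ { sub(/\/.*/, "", $4); print $4; exit }'
}

# Demonstrated against canned output; on a real node you would run:
#   ip -o -4 addr show | first_global_ipv4
printf '%s\n' \
  '1: lo    inet 127.0.0.1/8 scope host lo' \
  '2: eth0    inet 192.168.56.11/24 brd 192.168.56.255 scope global eth0' \
  | first_global_ipv4
# prints: 192.168.56.11
```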
Install the ingress controller outside of your cluster
On a separate server not joined to your Kubernetes cluster, follow these steps to install the HAProxy Kubernetes Ingress Controller as a standalone process.
-
Copy the kubeconfig file to this server and store it in the root user's home directory. The ingress controller will use this to connect to the Kubernetes API.

$ sudo mkdir -p /root/.kube
$ sudo cp admin.conf /root/.kube/config
$ sudo chown -R root:root /root/.kube
-
Install the HAProxy package for your Linux distribution. For Ubuntu, use these commands:

$ sudo add-apt-repository -y ppa:vbernat/haproxy-
$ sudo apt update
$ sudo apt install -y haproxy
-
Stop and disable the HAProxy service. The ingress controller will manage the HAProxy process itself.

$ sudo systemctl stop haproxy
$ sudo systemctl disable haproxy
-
Call the `setcap` command to allow HAProxy to bind to ports 80 and 443:

$ sudo setcap cap_net_bind_service=+ep /usr/sbin/haproxy
-
Download the ingress controller from the project's GitHub Releases page. Extract it, then copy it to the /usr/local/bin directory. For example:

$ wget https://github.com/haproxytech/kubernetes-ingress/releases/download/v1.8.8/haproxy-ingress-controller_1.8.8_Linux_x86_64.tar.gz
$ tar -xzvf haproxy-ingress-controller_1.8.8_Linux_x86_64.tar.gz
$ sudo cp ./haproxy-ingress-controller /usr/local/bin/
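Before installing a downloaded release, it is good practice to verify its SHA-256 checksum. The sketch below is generic; the expected hash is a placeholder and should come from the project's published checksums:

```shell
# Sketch: verify a downloaded tarball against an expected SHA-256 before
# installing it. The expected value here is a placeholder.
verify_sha256() {
  tarball="$1"; expected="$2"
  actual=$(sha256sum "$tarball" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
  else
    echo "checksum MISMATCH: got $actual"
  fi
}
# Usage (hash shown is a placeholder, not a real release hash):
# verify_sha256 haproxy-ingress-controller_1.8.8_Linux_x86_64.tar.gz "<expected-hash>"
```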
-
Create a systemd service file at /lib/systemd/system/haproxy-ingress.service. Add the following to it:

[Unit]
Description="HAProxy Kubernetes Ingress Controller"
Documentation=https://www.haproxy.com/
Requires=network-online.target
After=network-online.target

[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/local/bin/haproxy-ingress-controller --external --configmap=default/haproxy-kubernetes-ingress --program=/usr/sbin/haproxy --disable-ipv6 --ipv4-bind-address=0.0.0.0 --http-bind-port=80
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGTERM
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
-
Enable and start the service.

$ sudo systemctl enable haproxy-ingress
$ sudo systemctl start haproxy-ingress
Install the BIRD Internet Routing Daemon
To enable the ingress controller to route requests to pods in your Kubernetes cluster, it must get routing information via BGP from the Project Calico network plugin. To do that, install the BIRD Internet Routing Daemon, which acts as a software-defined router that adds IP routes to the ingress controller server.
-
On the ingress controller server, install BIRD.

$ sudo add-apt-repository -y ppa:cz.nic-labs/bird
$ sudo apt update
$ sudo apt install bird
-
Create a file named bird.conf in the /etc/bird directory. Add the following contents to it, but change:

- the router id to the current server's IP address. This is the IP address of the ingress controller server.
- the local line's IP address in each protocol section to the current server's IP address. Again, this is the IP address of the ingress controller server.
- the neighbor line in each protocol bgp section to the IP address of a node in your Kubernetes cluster. One of these should be the control plane server's IP address.
- the import filter to match the pod network's IP range that you set earlier with kubeadm init.

router id 192.168.56.11;
log syslog all;

# control plane node
protocol bgp {
  local 192.168.56.11 as 65000;
  neighbor 192.168.56.10 as 65000;
  direct;
  import filter {
    if ( net ~ [ 172.16.0.0/16{26,26} ] ) then accept;
  };
  export none;
}

# worker node
protocol bgp {
  local 192.168.56.11 as 65000;
  neighbor 192.168.56.12 as 65000;
  direct;
  import filter {
    if ( net ~ [ 172.16.0.0/16{26,26} ] ) then accept;
  };
  export none;
}

# Inserts routes into the kernel routing table
protocol kernel {
  scan time 60;
  export all;
}

# Gets information about network interfaces from the kernel
protocol device {
  scan time 60;
}

Each protocol bgp section connects BIRD to a Kubernetes node via iBGP; each node is considered a neighbor. This example uses 65000 as the autonomous system (AS) number, but you can choose a different value.
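Before starting BIRD, a rough pre-flight check can catch a missing stanza. This is a simple grep sketch, not a parser; BIRD itself can validate the file's syntax with `bird -p -c /etc/bird/bird.conf` once installed:

```shell
# Rough pre-flight check (grep, not a parser): confirm bird.conf contains
# the stanzas this guide relies on. The path argument eases testing.
check_bird_conf() {
  conf="${1:-/etc/bird/bird.conf}"
  for stanza in 'router id' 'protocol bgp' 'protocol kernel' 'protocol device'; do
    if grep -q "$stanza" "$conf" 2>/dev/null; then
      echo "found: $stanza"
    else
      echo "MISSING: $stanza"
    fi
  done
}
# check_bird_conf /etc/bird/bird.conf
```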
-
Enable and start the BIRD service.

$ sudo systemctl enable bird
$ sudo systemctl restart bird
-
After completing these steps, the ingress controller is configured to communicate with your Kubernetes cluster and, once you've added an Ingress resource using `kubectl`, it can route traffic to pods. Be sure to allow the servers to communicate by adding rules to your firewall.

On the ingress controller server, calling `sudo birdc show protocols` should show that connections have been established with the control plane server and any worker nodes.

$ sudo birdc show protocols
BIRD 1.6.8 ready.
name     proto    table    state  since     info
bgp1     BGP      master   up     22:38:44  Established
bgp2     BGP      master   up     22:38:43  Established
kernel1  Kernel   master   up     22:38:43
device1  Device   master   up     22:38:43
On the control plane server, calling `calicoctl node status` should show that BGP peering has been established with the ingress controller, which has a peer type of global, and with any worker nodes, which are connected through the Project Calico node-to-node mesh.

$ sudo calicoctl node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  | PEER TYPE         | STATE | SINCE    | INFO        |
+---------------+-------------------+-------+----------+-------------+
| 192.168.56.11 | global            | up    | 23:06:44 | Established |
| 192.168.56.12 | node-to-node mesh | up    | 23:12:00 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
Next up
Installation with Amazon EKS