Installation

External mode on-premises

Available since version 1.7

In this scenario, we deploy a custom Kubernetes installation that uses Project Calico as its Container Networking Interface (CNI) plugin. A CNI plugin is responsible for defining the virtual network that pods use to communicate with one another. Because a pod network is typically accessible only to Kubernetes pods, we need a way to bridge this network with a public-facing, external network.

Project Calico can perform BGP peering between the pod network and an external network, allowing us to install and run the ingress controller outside of Kubernetes while still receiving the IP route advertisements it needs to relay traffic to pods.

We will use the following components:

  • HAProxy Enterprise Kubernetes Ingress Controller: The ingress controller runs as a standalone process outside of your Kubernetes cluster.
  • Project Calico: A network plugin for Kubernetes. It supports BGP peering, which allows pods inside your Kubernetes cluster to share their IP addresses with a server outside of the cluster.
  • BIRD Internet Routing Daemon: A software-defined router. It receives routes from Project Calico and makes them available to the ingress controller.

Prepare servers for Kubernetes

Deploy Linux servers that will host your Kubernetes components.

You will need:

  • a control plane server: one Linux server to run the Kubernetes control plane, which manages the cluster and hosts the Kubernetes API.
  • worker nodes: one or more Linux servers to act as Kubernetes worker nodes, which host pods.
  • ingress controller server: one Linux server to run the HAProxy Enterprise Kubernetes Ingress Controller.

On the control plane server and worker nodes, perform these steps:

  1. Follow the Install Docker Engine guide to install Docker and Containerd on the server. Containerd will serve as the container runtime in Kubernetes.
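
    For example, on an Ubuntu or Debian server, Docker's convenience script is one way to install both Docker Engine and Containerd. This is only a sketch; the Install Docker Engine guide covers other methods and distributions.

    nix
    # Example only (Debian/Ubuntu): installs Docker Engine and containerd
    # using Docker's convenience script.
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh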

  2. By default, the Containerd configuration file, /etc/containerd/config.toml, disables the Container Runtime Interface (CRI) plugin that Kubernetes needs. We also need to enable Systemd cgroups because kubeadm installs the Kubernetes service, kubelet, as a Systemd service. The easiest method is to generate a default configuration file and then make changes to it using sed, the search-and-replace tool.

    nix
    containerd config default | sudo tee /etc/containerd/config.toml
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    sudo systemctl restart containerd
  3. Disable swap, as required by the Kubernetes kubelet service.

    nix
    sudo swapoff -a
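
    Note that swapoff -a disables swap only until the next reboot. To keep swap disabled persistently, you can also comment out any swap entries in /etc/fstab; for example:

    nix
    # Example only: comments out swap entries so swap stays off after a reboot.
    sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab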
  4. Follow the Installing kubeadm guide to install the kubeadm, kubectl, and kubelet packages. We will use the kubeadm tool to install Kubernetes.
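
    On Ubuntu or Debian, the steps in that guide boil down to something like the following. The Kubernetes minor version in the repository URL is only an example; use the version recommended by the guide.

    nix
    # Example only (Debian/Ubuntu): adjust the Kubernetes version (v1.28 here) as needed.
    sudo apt-get update
    sudo apt-get install -y apt-transport-https ca-certificates curl gpg
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl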

Configure the Kubernetes control plane server

At least one server must become the central management server, otherwise known as the control plane. On that server, perform the following additional steps:

  1. Call kubeadm init to install Kubernetes on this server. Replace the value of --apiserver-advertise-address with your server’s IP address. Set --pod-network-cidr to the IP range you want to use for your Kubernetes cluster’s private network. Be sure that this range does not overlap with other IP ranges already in use on your network.

    nix
    sudo kubeadm init \
    --cri-socket unix:///run/containerd/containerd.sock \
    --pod-network-cidr 172.16.0.0/16 \
    --apiserver-advertise-address 192.168.56.10
    • --cri-socket: Sets the path to the Containerd CRI socket.
    • --pod-network-cidr: Sets the range of IP addresses to use for the pod network. Each new pod receives an IP address in this range. The range 172.16.0.0/16 allows up to 65534 unique IP addresses.
    • --apiserver-advertise-address: Optional. If your server has more than one IP address assigned to it, use this argument to specify the address on which the Kubernetes API should listen.

    Refer to the kubeadm init documentation for more information about these and other arguments.

  2. After the installation, a kubeconfig file is created at /etc/kubernetes/admin.conf, which contains the settings for connecting to the new Kubernetes cluster. Copy it to your home directory and to the root user’s home directory. This lets you connect to the Kubernetes API using kubectl, and we will later configure Project Calico, which connects as root, to use this kubeconfig file as well.

    nix
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    sudo mkdir -p /root/.kube
    sudo cp -i /etc/kubernetes/admin.conf /root/.kube/config
    sudo chown root:root /root/.kube/config
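
    To confirm that kubectl can now reach the Kubernetes API with this kubeconfig, you can run, for example:

    nix
    kubectl cluster-info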
  3. Optional: If the server has more than one IP address assigned to it, you must configure the Kubernetes kubelet service to use the correct one. In the file /etc/default/kubelet (or /etc/sysconfig/kubelet), set the --node-ip argument to your server’s IP address. It’s also a good idea to set the path to the Containerd socket explicitly via the --container-runtime-endpoint argument. Then restart the service.

    nix
    sudo touch /etc/default/kubelet
    echo "KUBELET_EXTRA_ARGS=--node-ip=192.168.56.10 --container-runtime-endpoint=unix:///run/containerd/containerd.sock" | sudo tee /etc/default/kubelet
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
  4. At this point, the Kubernetes control plane should be running. You can use the kubectl get pods command to check that the pods are running successfully. It is normal for the coredns pods to be in the Pending state at this time.

    nix
    kubectl get pods -A
    output
    text
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-787d4945fb-8p8nz 0/1 Pending 0 3m41s
    kube-system coredns-787d4945fb-dngw9 0/1 Pending 0 3m41s
    kube-system etcd-controlplane 1/1 Running 0 3m52s
    kube-system kube-apiserver-controlplane 1/1 Running 0 3m52s
    kube-system kube-controller-manager-controlplane 1/1 Running 0 3m54s
    kube-system kube-proxy-rk7wg 1/1 Running 0 3m41s
    kube-system kube-scheduler-controlplane 1/1 Running 0 3m57s
  5. Install the Project Calico Container Network Interface (CNI) plugin. We’ll use the Project Calico plugin because it supports BGP peering, which we’ll need for connecting the ingress controller to the Kubernetes cluster’s private network.

    Refer to the Project Calico Quickstart guide for instructions on installing the operator and custom resource definitions. Note that by default Project Calico expects a pod network CIDR of 192.168.0.0/16. Since we are using 172.16.0.0/16 instead, edit the custom-resources.yaml file before installing it.

    In the example below, we change the spec.calicoNetwork.ipPools.cidr field to 172.16.0.0/16:

    custom-resources.yaml
    yaml
    # This section includes base Calico installation configuration.
    # For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
      # Configures Calico networking.
      calicoNetwork:
        bgp: Enabled
        # Note: The ipPools section cannot be modified post-install.
        ipPools:
        - blockSize: 26
          cidr: 172.16.0.0/16
          encapsulation: VXLANCrossSubnet
          natOutgoing: Enabled
          nodeSelector: all()
    ---
    # This section configures the Calico API server.
    # For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
    apiVersion: operator.tigera.io/v1
    kind: APIServer
    metadata:
      name: default
    spec: {}
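
    The quickstart guide has you install the Tigera operator before creating these custom resources. If you have not already done that, the step looks something like the following; the Calico release in the URL is only an example, so use the version given in the quickstart guide.

    nix
    # Example only: match the Calico release in the URL to the quickstart guide.
    kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml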

    Then create the resources:

    nix
    kubectl create -f ./custom-resources.yaml
  6. Download the calicoctl command-line tool.
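
    For example, you can fetch the binary from the Project Calico GitHub releases; the version and architecture shown here are only examples, so adjust them for your environment.

    nix
    # Example only: pick the calicoctl release and architecture that match your cluster.
    curl -L -o calicoctl https://github.com/projectcalico/calico/releases/download/v3.26.1/calicoctl-linux-amd64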

    Copy it to the /usr/local/bin directory and set its permissions to make it executable:

    nix
    sudo cp ./calicoctl /usr/local/bin
    sudo chmod +x /usr/local/bin/calicoctl
  7. Create a file named /etc/calico/calicoctl.cfg.

    nix
    sudo mkdir /etc/calico
    sudo touch /etc/calico/calicoctl.cfg

    Add the following contents to it. This configures calicoctl to connect to your Kubernetes cluster using the kubeconfig file in the root user’s home directory.

    calicoctl.cfg
    yaml
    apiVersion: projectcalico.org/v3
    kind: CalicoAPIConfig
    metadata:
    spec:
      datastoreType: "kubernetes"
      kubeconfig: "/root/.kube/config"
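
    To check that calicoctl can reach the Kubernetes datastore with this configuration, you can run, for example:

    nix
    sudo calicoctl get nodes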
  8. Create a file named /etc/calico/calico-bgp.yaml.

    nix
    sudo touch /etc/calico/calico-bgp.yaml

    Add the following to it to enable BGP peering with your external network. Change the peerIP field to the IP address of the server where you will run the ingress controller.

    calico-bgp.yaml
    yaml
    apiVersion: projectcalico.org/v3
    kind: BGPConfiguration
    metadata:
      name: default
    spec:
      logSeverityScreen: Info
      nodeToNodeMeshEnabled: true
      asNumber: 65000
    ---
    # ingress controller server
    apiVersion: projectcalico.org/v3
    kind: BGPPeer
    metadata:
      name: my-global-peer
    spec:
      peerIP: 192.168.56.13
      asNumber: 65000
    • asNumber: Defines the BGP autonomous system (AS) number you wish to use.
    • peerIP: Defines the IP address of the server where you will install the ingress controller.

    Apply it with the calicoctl apply command:

    nix
    sudo calicoctl apply -f /etc/calico/calico-bgp.yaml
  9. Create an empty ConfigMap resource in your cluster, which the ingress controller requires upon startup.

    nix
    sudo kubectl create configmap haproxy-kubernetes-ingress
  10. To verify the setup, call calicoctl node status. The Info column should show a connection error, such as Connection refused or No route to host. This is expected because we have not yet configured the ingress controller server to act as the neighboring BGP peer.

    nix
    sudo calicoctl node status
    output
    text
    Calico process is running.
    IPv4 BGP status
    +---------------+-----------+-------+----------+--------------------------------+
    | PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
    +---------------+-----------+-------+----------+--------------------------------+
    | 192.168.56.13 | global | start | 22:53:20 | Connect Socket: No route to |
    | | | | | host |
    +---------------+-----------+-------+----------+--------------------------------+
    IPv6 BGP status
    No IPv6 peers found.

Configure the Kubernetes worker nodes

Kubernetes worker nodes host pods. On each server that you wish to register as a worker node in the Kubernetes cluster, after following the steps in the Prepare servers for Kubernetes section, perform these additional steps:

  1. On the control plane server, call kubeadm token create --print-join-command, which shows the kubeadm join command you need to join a server to the cluster.

    For example:

    nix
    kubeadm token create --print-join-command

    Copy its output and run it on the worker node server:

    nix
    sudo kubeadm join 192.168.56.10:6443 \
    --token jqfhgn.bgvy9xko70q82awu \
    --discovery-token-ca-cert-hash sha256:ce4dfb0efa64a0bb9071268c7a94258a9fef56be89e909a21f16f2528d8c880b
    output
    text
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  2. Optional: If the server has more than one IP address assigned to it, you must configure the Kubernetes kubelet service to use the correct one. In the file /etc/default/kubelet (or /etc/sysconfig/kubelet), set the --node-ip argument to your server’s IP address. It’s also a good idea to set the path to the Containerd socket explicitly via the --container-runtime-endpoint argument. Then restart the service.

    nix
    sudo touch /etc/default/kubelet
    echo "KUBELET_EXTRA_ARGS=--node-ip=192.168.56.11 --container-runtime-endpoint=unix:///run/containerd/containerd.sock" | sudo tee /etc/default/kubelet
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
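
After a worker node has joined and its kubelet is configured, you can confirm from the control plane server that the node is registered. For example:

nix
kubectl get nodes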

Install the ingress controller outside of your cluster

On a separate server not joined to your Kubernetes cluster, follow these steps to install the HAProxy Enterprise Kubernetes Ingress Controller as a standalone process.

  1. Copy the kubeconfig file to this server and store it in the root user’s home directory. The ingress controller will use this to connect to the Kubernetes API.
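
    One way to transfer the file from the control plane server is with scp. This is only an example; it assumes SSH access to the control plane server as a user who can read /etc/kubernetes/admin.conf, so adjust the user and address for your environment.

    nix
    # Example only: copy admin.conf from the control plane server to the current directory.
    scp root@192.168.56.10:/etc/kubernetes/admin.conf .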

    nix
    sudo mkdir -p /root/.kube
    sudo cp admin.conf /root/.kube/config
    sudo chown -R root:root /root/.kube
  2. HAProxy Enterprise Kubernetes Ingress Controller is compatible with a specific version of HAProxy Enterprise. Use the chart below to install the correct version.

    • Ingress controller 1.11: HAProxy Enterprise 2.8
    • Ingress controller 1.10: HAProxy Enterprise 2.7
    • Ingress controller 1.9: HAProxy Enterprise 2.6
    • Ingress controller 1.8: HAProxy Enterprise 2.5
    • Ingress controller 1.7: HAProxy Enterprise 2.4
  3. Install the ingress controller:

    On Debian or Ubuntu:

    nix
    sudo apt install -y hapee-2.8r1-kubernetes-ingress

    On RHEL or CentOS:

    nix
    sudo yum install -y hapee-2.8r1-kubernetes-ingress
  4. Edit the file /etc/hapee-2.8/kubernetes-ingress.yml and set the controller.kubeconfig field to /root/.kube/config.

    kubernetes-ingress.yml
    yaml
    controller:
      kubeconfig: /root/.kube/config
  5. Restart the ingress controller service and verify that it started. The service name includes the HAProxy Enterprise version you installed; for version 2.8:

    nix
    sudo systemctl restart hapee-2.8-kubernetes-ingress
    sudo systemctl status hapee-2.8-kubernetes-ingress
    output
    text
    ● hapee-2.8-kubernetes-ingress.service - HAPEE KUBERNETES INGRESS
    Loaded: loaded (/lib/systemd/system/hapee-2.8-kubernetes-ingress.service; disabled; vendor preset: enabled)
    Active: active (running) since Fri 2023-01-13 22:18:28 UTC; 6s ago

Configure the BIRD Internet Routing Daemon

To route requests to pods in your Kubernetes cluster, the ingress controller must receive routing information via BGP from the Project Calico network plugin. To do that, configure the BIRD Internet Routing Daemon, a software-defined router that adds IP routes to the ingress controller server. For HAProxy Enterprise, you install it via the hapee-extras-route package.

  1. Install the route package:

    nix
    sudo apt install -y hapee-extras-route
  2. Edit the file named /etc/hapee-extras/hapee-route.cfg. Add the following contents to it, but change:

    • the router id to the current server’s IP address, that is, the IP address of the ingress controller server.
    • the local line’s IP address in each protocol section to the current server’s IP address; again, this is the IP address of the ingress controller server.
    • the neighbor line in each protocol bgp section to the IP address of a node in your Kubernetes cluster; one of these should be the control plane server’s IP address.
    • the import filter so that it matches the pod network’s IP range that you set earlier with kubeadm init.

Supported BIRD version

Only BIRD 1.x is supported at this time. Do not use BIRD 2.x syntax in the configuration file.

hapee-route.cfg
router id 192.168.56.13;
log syslog all;

# control plane node
protocol bgp {
  local 192.168.56.13 as 65000;
  neighbor 192.168.56.10 as 65000;
  direct;
  import filter {
    if ( net ~ [ 172.16.0.0/16{26,26} ] ) then accept;
  };
  export none;
}

# worker node
protocol bgp {
  local 192.168.56.13 as 65000;
  neighbor 192.168.56.11 as 65000;
  direct;
  import filter {
    if ( net ~ [ 172.16.0.0/16{26,26} ] ) then accept;
  };
  export none;
}

# Inserts routes into the kernel routing table
protocol kernel {
  scan time 60;
  export all;
}

# Gets information about network interfaces from the kernel
protocol device {
  scan time 60;
}

Each protocol bgp section connects BIRD to a Kubernetes node via iBGP. Each is considered a neighbor. This example uses 65000 as the Autonomous System number, but you can choose a different value.

  3. Restart the route service.

    nix
    sudo systemctl restart hapee-extras-route
  4. After completing these steps, the ingress controller is configured to communicate with your Kubernetes cluster and, once you’ve added an Ingress resource using kubectl, it can route traffic to pods. Learn about creating Ingress resources for routing traffic in the section Use HAProxy Kubernetes Ingress Controller to route HTTP traffic.

    Be sure to allow the servers to communicate by adding rules to your firewall.
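
    For example, BGP peering between the ingress controller server and the Kubernetes nodes uses TCP port 179. A minimal sketch using firewalld follows; adapt it to whatever firewall you use, and also allow any other traffic your setup needs, such as the Kubernetes API on port 6443.

    nix
    # Example only (firewalld): allow BGP traffic on each server.
    sudo firewall-cmd --permanent --add-port=179/tcp
    sudo firewall-cmd --reload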

    On the ingress controller server, calling /opt/hapee-extras/bin/hapee-route-cli show protocols should show that connections have been established with the control plane server (bgp1) and any worker nodes (bgp2).

    nix
    sudo /opt/hapee-extras/bin/hapee-route-cli show protocols
    output
    text
    BIRD 1.6.8 ready.
    name proto table state since info
    bgp1 BGP master up 21:53:46 Established
    bgp2 BGP master up 21:53:47 Established
    kernel1 Kernel master up 21:53:45
    device1 Device master up 21:53:45
    static1 Static master up 21:53:45
    vol1 Volatile master up 21:53:45

    On the control plane server, calling calicoctl node status should show that BGP peering has been established with the ingress controller, which has a peer type of global, and any worker nodes, which are connected through the Project Calico node-to-node mesh.

    nix
    sudo calicoctl node status
    output
    text
    Calico process is running.
    IPv4 BGP status
    +---------------+-------------------+-------+----------+-------------+
    | PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
    +---------------+-------------------+-------+----------+-------------+
    | 192.168.56.13 | global | up | 23:06:44 | Established |
    | 192.168.56.11 | node-to-node mesh | up | 23:12:00 | Established |
    +---------------+-------------------+-------+----------+-------------+
    IPv6 BGP status
    No IPv6 peers found.
