Multi-Tenant Kubernetes Clusters With the HAProxy Kubernetes Ingress Controller

Learn how to use the HAProxy Kubernetes Ingress Controller when hosting multiple tenants in your cluster and how to configure namespaces, access controls, and resource quotas.

This is the third post in a series about HAProxy’s role in building a modern systems architecture that relies on cloud-native technology such as Docker containers and Kubernetes. Containers have revolutionized how software is deployed, allowing the microservice pattern to flourish and enabling self-healing, autoscaling applications. HAProxy is an intelligent load balancer that adds high performance, observability, security, and many other features to the mix.

It’s a rare bird, a Kubernetes cluster that serves only a single tenant. In the wild, you’ll likely encounter clusters where tenants are packed in close: QA and Dev, Team A and Team B, a Java application and a .NET application. Environments, teams, and technology stacks all stake their claims on the same resources. It’s essential that you plan ahead for multiple tenants: set up the proper namespaces, define access controls, set resource quotas, and configure ingress routing.

Sharing resources in a Kubernetes cluster is a logical way to save money on the cost of infrastructure. In this post, we’ll share tips for setting up multiple tenants and, in particular, how to configure the HAProxy Kubernetes Ingress Controller to serve traffic to multiple tenants.

Learn more by registering for our webinar: “HAProxy Skills Lab: Building a Multi-tenant Kubernetes Cluster”

Want to supercharge your ingress routing with HAProxy? Download our free eBook now!

Namespaces Are Key

A Kubernetes namespace groups objects inside of a shared scope, providing a sandbox where objects created by one tenant don’t overlap with objects created by another. Take, for example, a Dev and a QA environment. You can host both environments inside of a single Kubernetes cluster where they can share server resources, yet remain oblivious to one another. Each environment, or “tenant”, can duplicate your entire application stack. Building walls around each tenant avoids accidentally exposing an experimental Dev service within the QA environment, deleting the wrong object, or applying a breaking change to the wrong application.

Declare a new namespace by adding a YAML file that defines a Namespace object, like this:

apiVersion: v1
kind: Namespace
metadata:
  name: dev

In this instance, the namespace is named dev. Use kubectl to apply this change to your cluster:

$ kubectl apply -f dev-namespace.yaml
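Alternatively, the same Namespace object can be created imperatively:

$ kubectl create namespace dev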

Once created, add objects to the namespace by referencing their name within the object’s metadata. In the following example, a ConfigMap object is added to the dev namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
  namespace: dev
data:
  foo: 'bar'

Only objects within the same dev namespace will have access to this ConfigMap. Also, when using the kubectl command-line utility to view objects, you will need to include the --namespace argument or else the returned list will come up empty:

$ kubectl get configmaps --namespace=dev
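If you find yourself working in one namespace most of the time, you can make it the default for your current kubectl context so that you don’t have to pass --namespace on every command:

$ kubectl config set-context --current --namespace=dev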

Use the kubectl get namespaces command to view all of your defined namespaces:

$ kubectl get namespaces
NAME      STATUS   AGE
default   Active   4m33s
dev       Active   2m38s

Managing User Access to a Namespace

Once you’ve defined a namespace, you can configure role-based access control (RBAC) to limit who has access to it. Out of the box, there are already a few roles defined, including admin, edit, and view. In the following sections, a new user login is created and given the edit role in the dev namespace, which gives it read/write access to that namespace only.
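If you’re curious about exactly what a built-in role grants before binding it, you can inspect it with kubectl:

$ kubectl describe clusterrole edit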

Add a RoleBinding

First, create a new RoleBinding object that assigns the edit role to a user named bob.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-edit
  namespace: dev # grants permissions within the "dev" namespace
subjects:
- kind: User
  name: bob # permissions for a user named bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit # read/write access
  apiGroup: rbac.authorization.k8s.io

The edit ClusterRole is already defined and can be scoped to the dev namespace by setting the namespace metadata field. Who is Bob? It’s a user who isn’t represented as an object per se (there is no User object in Kubernetes), but who will authenticate to the cluster using a client certificate that contains a CN field set to bob. You can also grant permissions to a group of users. In the following RoleBinding object, a group named dev-group is granted edit access to the dev namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-edit
  namespace: dev # grants permissions within the "dev" namespace
subjects:
- kind: Group
  name: dev-group # permissions for the group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit # read/write access
  apiGroup: rbac.authorization.k8s.io

For group permissions, when you create the client certificate, its O field must match the subject name, which is dev-group in this case. Use kubectl to create the RoleBinding in Kubernetes (substitute whatever filename you saved the manifest as):

$ kubectl apply -f dev-edit-rolebinding.yaml

Create a client certificate

The next step is to create a certificate signing request (CSR) for a new client certificate. There are a number of tools that you can use to do this, such as the openssl command-line utility. In the following example, I use openssl to create a CSR for a user named bob with a group of dev-group to demonstrate setting the CN and O fields:

# Create a certificate signing request with CN=bob and O=dev-group
# This creates bob.csr and bob.key
$ openssl req -newkey rsa:2048 -nodes -keyout bob.key -out bob.csr -subj "/CN=bob/O=dev-group"

Next, you’ll need to sign the CSR with your cluster’s CA certificate in order to get a client certificate. I’m using Minikube in my test lab, so I could sign the certificate signing request with Minikube’s CA certificate and key, which can be found in the ~/.minikube directory. I would use the following openssl command to create a client certificate named bob.crt.

# Sign it with the cluster's CA certificate
# This creates bob.crt
$ openssl x509 -req -in bob.csr -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key -CAcreateserial -out bob.crt -days 1000

Another way to sign the CSR and get a bob.crt file is to use the Kubernetes Certificates API, wherein you create a CertificateSigningRequest object. You will need to store the CSR data in a YAML file as a base64-encoded string and then apply the YAML file to your cluster, so it’s easiest to do it from the command line, like this:

$ cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: bob
spec:
  request: $(cat bob.csr | base64 | tr -d '\n')
  usages:
  - digital signature
EOF

Then, approve the CSR:

$ kubectl certificate approve bob

You can then download the signed certificate with the kubectl get csr command:

$ kubectl get csr bob -o jsonpath='{.status.certificate}' | base64 --decode > bob.crt
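It’s worth confirming that the signed certificate carries the expected subject fields, since the CN and O values are what Kubernetes maps to the user and group names referenced in your RoleBinding:

# The subject should include CN=bob and O=dev-group
$ openssl x509 -in bob.crt -noout -subject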

Add a cluster context

Next, add a new cluster context that lets you log in as bob, using the bob certificate.

$ kubectl config set-credentials bob --client-certificate=bob.crt --client-key=bob.key
$ kubectl config set-context minikube-bob --cluster=minikube --user=bob
$ kubectl config use-context minikube-bob

You’re now using the minikube-bob context to access your Minikube Kubernetes cluster. If you try accessing or creating objects in the dev namespace, it will work, but you’ll get an error if you try to access an object in any other namespace.

$ kubectl get pods --namespace=dev
NAME                   READY   STATUS    RESTARTS   AGE
app-66d9457bf5-vpbnw   1/1     Running   1          22h
$ kubectl get pods --namespace=default
Error from server (Forbidden): pods is forbidden: User "bob" cannot list resource "pods" in API group "" in the namespace "default"

You can switch back to the normal Minikube context, which has admin privileges, like this:

$ kubectl config use-context minikube
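From the admin context, you can also verify a user’s effective permissions without switching contexts by using impersonation:

$ kubectl auth can-i list pods --namespace=dev --as=bob
yes
$ kubectl auth can-i list pods --namespace=default --as=bob
no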

An Ingress Controller that Watches a Namespace

Now that you’ve created a namespace and given limited access to it, let’s see how to manage HTTP traffic going into the environment. The HAProxy Kubernetes Ingress Controller brings the power of HAProxy to Kubernetes, allowing you to leverage its high performance, reliability, and security.

Be sure to switch back to the normal admin context before going further. Without any special configuration, the HAProxy Kubernetes Ingress Controller will watch over all namespaces. When a pod is added or removed anywhere within the cluster, the controller is notified, which means that any of your teams can use it for ingress traffic routing. That’s great news if you want to set up routing quickly for all of your teams (i.e. tenants). However, there are a few reasons why you may decide to deploy multiple ingress controllers.

For one thing, multiple ingress controllers let you apply a walled garden approach for each tenant. By giving each tenant a distinct ingress controller, you can:

  1. collect distinct HAProxy metrics per tenant, such as request rates and error rates.

  2. set rate limits per tenant to prevent “noisy neighbors” syndrome.

  3. define custom timeouts per tenant to accommodate varying SLAs (a configuration sketch follows this list).

  4. reuse the same URL paths to keep your applications consistent between tenants.
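For example, per-tenant timeouts come down to giving each controller its own configuration. The sketch below assumes a controller for the dev tenant that reads its settings from a ConfigMap named haproxy-configmap in the dev namespace; the ConfigMap name depends on how the controller was installed, and timeout-client is one of the controller’s documented options (check your version’s documentation for the full list):

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-configmap  # name assumed here; match it to your controller's --configmap argument
  namespace: dev
data:
  timeout-client: "50s"    # client inactivity timeout for this tenant's controller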

When you deploy an HAProxy Kubernetes Ingress Controller using Helm, add --namespace-whitelist to the controller.extraArgs field to set the namespace to watch, as shown:

$ helm install onlydev haproxytech/kubernetes-ingress \
--set-string "controller.extraArgs={--namespace-whitelist=dev}"

You can specify more than one namespace to watch:

$ helm install onlydev haproxytech/kubernetes-ingress \
--set-string "controller.extraArgs={--namespace-whitelist=dev-team-a,--namespace-whitelist=dev-team-b}"

This Ingress object is created within the dev namespace and, therefore, is picked up automatically by an ingress controller that has its whitelist set to dev:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: dev
spec:
  rules:
  - http:
      paths:
      - path: /app-service
        backend:
          serviceName: app-service
          servicePort: 80

You could create an identical Ingress object in your qa namespace, but it won’t route through this particular ingress controller because its namespace is different. Each ingress controller can be exposed on its own IP address so that each tenant can be given its own subdomain. It won’t be possible for one tenant’s traffic to mix with that of another. Note that you can use NetworkPolicy objects to restrict access between services inside the cluster.
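To illustrate that last point, the following NetworkPolicy is a minimal sketch that allows pods in the dev namespace to receive traffic only from other pods in the same namespace. It only takes effect if your cluster’s network plugin enforces NetworkPolicy, and you would also need a rule permitting traffic from the namespace where the ingress controller itself runs:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: dev
spec:
  podSelector: {}        # applies to every pod in the dev namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # accept traffic only from pods in this same namespace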

An Ingress Controller That You Target

Another way to manage ingress routing is to use ingress classes. Whereas --namespace-whitelist tells the ingress controller to watch a specific namespace for changes, an ingress class flips that responsibility around, giving an Ingress object a chance to target the controller it wants by name. To set this up, assign a class to the controller when you install it. With Helm, the controller.ingressClass value sets the controller’s --ingress.class argument, which here is intranet:

$ helm install intranet haproxytech/kubernetes-ingress \
--set controller.ingressClass=intranet

Maybe this ingress controller exposes services only to the company’s intranet. You may have another ingress controller that has a class of public, exposing services to external customers, for example. An Ingress object targets its desired controller by setting its haproxy.org/ingress.class annotation, as shown:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress-internal
  namespace: default
  annotations:
    haproxy.org/ingress.class: "intranet"
spec:
  rules:
  - http:
      paths:
      - path: /app-service
        backend:
          serviceName: app-service-internal
          servicePort: 80
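An Ingress meant for external customers would set the annotation to "public" instead and be served by a second controller installed the same way (the release name here is only an example):

$ helm install public haproxytech/kubernetes-ingress \
    --set controller.ingressClass=public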

This puts control into the hands of your service developers. They can choose which ingress controller to use, and it gives them a greater degree of autonomy. You can even use this with multiple tenants if you don’t mind giving them a common IP address for accessing their services.

Resource Quotas

As a final tip, each namespace can be assigned its own allotment of resources. For example, QA might be allocated more or less CPU and memory than Dev. This lets you prioritize which tenants receive the resources, or lets you simply keep things equal for everybody. If you don’t do this, then you risk one tenant utilizing more than their fair share and leaving other tenants squabbling over the scraps.

It is essential, then, that every pod defines how much CPU and memory it needs so that Kubernetes knows when a tenant is about to exceed its resource limits. We won’t go into detail about this, but it is accomplished by setting requests and limits on a pod, which you can learn more about on the Managing Compute Resources page. You can also create defaults for a namespace, in case a pod doesn’t set its own limits, by creating a LimitRange object, as sketched below.
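For example, a LimitRange like the one below (a minimal sketch with arbitrary values) assigns default requests and limits to any container in the dev namespace that doesn’t declare its own, so the quota described next can still be enforced:

apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
  - type: Container
    default:             # limits applied when a container doesn't set its own
      cpu: 500m
      memory: 256Mi
    defaultRequest:      # requests applied when a container doesn't set its own
      cpu: 250m
      memory: 128Mi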

Let’s cover how to set the resource quota for a namespace, which determines the cap on resources. If a tenant requests more resources than what you’ve allowed here, their objects won’t be created. Resource quotas let you restrict:

  • the total CPU that can be used

  • the total memory that can be used

  • the amount of hard drive storage that can be used

  • the number of objects that can be created

The following ResourceQuota object sets limits for CPU and memory in the dev namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-resources
  namespace: dev
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi

Use kubectl to apply the quota:

$ kubectl apply -f dev-quota.yaml

Then, you can view how much has been used so far:

$ kubectl describe resourcequota dev-resources -n dev
Name:            dev-resources
Namespace:       dev
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     2
limits.memory    0     2Gi
requests.cpu     500m  1
requests.memory  50Mi  1Gi
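Quotas aren’t limited to CPU and memory. The same mechanism can cap object counts per namespace; the values below are only illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-object-counts
  namespace: dev
spec:
  hard:
    pods: "20"
    services: "10"
    configmaps: "20"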

Quotas will help keep tenants from overusing resources and allow you to see how much a particular tenant has used so far, which is great when planning whether you need to expand the cluster. It’s a vital step when planning for multiple tenants.

Conclusion

In this blog post, you learned some tips for managing multiple tenants that share resources within a Kubernetes cluster. The HAProxy Kubernetes Ingress Controller lets you whitelist certain namespaces to watch so that each namespace’s traffic can be routed through a specific controller. You can also target specific ingress controllers by using ingress classes.

When setting up multiple tenants, it pays to configure RBAC and to give teams access to only their respective namespaces, which you can accomplish by using client certificates. You should also consider setting resource quotas to prevent a tenant from using more than their fair share of CPU and memory.

If you enjoyed this post and want to see more like it, subscribe to this blog! You can also follow us on Twitter and join the conversation on Slack.

The Enterprise version of the ingress controller combines HAProxy, the world’s fastest and most widely used open-source software load balancer, with enterprise-class features, services, and premium support. Contact us to learn more and sign up for a free trial.
