Rolling Updates & Blue-Green Deployments With Kubernetes & HAProxy

The HAProxy Kubernetes Ingress Controller supports two popular deployment patterns for updating applications in Kubernetes: rolling updates and blue-green deployments.

This is the second post in a series about HAProxy’s role in building a modern systems architecture that relies on cloud-native technology such as Docker containers and Kubernetes. Containers have revolutionized how software is deployed, allowing the microservice pattern to flourish and enabling self-healing, autoscaling applications. HAProxy is an intelligent load balancer that adds high performance, observability, security, and many other features to the mix.

Learn more by registering for our webinar “HAProxy Skills Lab: Deployment Patterns in Kubernetes Using the HAProxy Kubernetes Ingress Controller”.

So, you have deployed your application to Kubernetes and it’s running flawlessly. The next important question is: how should you deploy the next version of it safely? How can you replace the existing pods without disrupting traffic? And how does routing traffic through the HAProxy Kubernetes Ingress Controller affect the rollout?

Kubernetes accommodates a wide range of deployment methods. We’ll cover two that guarantee a safe rollout while keeping the ability to revert if necessary:

  • Rolling updates have first-class support in Kubernetes and allow you to phase in a new version gradually;

  • Blue-green deployments avoid having two versions at play at the same time by swapping one set of pods for another.

The HAProxy Kubernetes Ingress Controller is powered by the world’s fastest and most widely used software load balancer. Known to provide the utmost performance, observability, and security, it is the most efficient way to route traffic into a Kubernetes cluster. It automatically detects changes within your Kubernetes infrastructure and ensures accurate distribution of traffic to healthy pods. Its design prevents downtime even when there are rapid configuration changes. It supports both deployment patterns and reliably exposes the correct pods to clients.

Deploy the HAProxy Kubernetes Ingress Controller

In this blog post, I use Minikube to start up a simple Kubernetes cluster on my workstation. Minikube requires a hypervisor, such as VirtualBox, to be installed. Once it’s up and running, you will be able to expose services running inside the Kubernetes cluster at the IP address 192.168.99.100.

After installing and starting Minikube, deploy the HAProxy Kubernetes Ingress Controller, which is responsible for routing traffic into your Kubernetes cluster. You can either install the open-source version or the Enterprise version, which is built upon HAProxy Enterprise. It adds features such as a Web Application Firewall, which is essential for stopping application-layer attacks.

By default, the Ingress Controller assumes that you want to configure SSL. If you prefer to try things without SSL, then download its YAML file and modify its ConfigMap so that ssl-redirect is OFF.

haproxy-ingress.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-configmap
  namespace: default
data:
  ssl-redirect: "OFF"

Rolling Updates

A rolling update offers a way to deploy the new version of your application gradually across your cluster. It replaces pods in several phases. For example, you might replace 25% of the pods during the first phase, another 25% during the next, and so on until all are upgraded. Because the pods are not replaced all at once, both versions will be live, at least for a short time, during the rollout.

Did you know?

Because a rolling update creates the potential for two versions of your application to be deployed simultaneously, make sure that any upstream databases and services are compatible with both versions.

This deployment model enjoys first-class support in Kubernetes with baked-in YAML configuration options. Here’s how it works:

  1. Version 1 of your application is already deployed.

  2. Push version 2 of your application to your container image repository.

  3. Update the version number in the Deployment object’s definition.

  4. Apply the change with kubectl.

  5. Kubernetes staggers the rollout of the new version across your pods.

  6. The HAProxy Kubernetes Ingress Controller detects when the new pods are live. It automatically updates its proxy configuration, routing traffic away from the old pods and towards the new ones.

A rolling update avoids downtime by replacing existing pods incrementally. If the new pods introduce an error that stops them from starting up, Kubernetes pauses the rollout. Because Kubernetes keeps a minimum number of pods running throughout, some are always available to serve traffic. However, this requires that you’ve added a readiness check to your pods so that Kubernetes knows when they are truly ready to receive traffic.
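Kubernetes lets you control the pace of a rolling update through the Deployment’s strategy section. As a minimal sketch (the percentages shown are Kubernetes’ defaults and are illustrative, not values used in the example that follows), you could add the following under spec:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # at most a quarter of the desired pods may be unavailable at once
      maxSurge: 25%        # at most a quarter more than the desired count may be created during the rollout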

Deploy the original application

Kubernetes enables rolling updates by default. An update begins when you change your Deployment resource’s YAML file and then use kubectl apply. Consider the following definition, which deploys version 1 of an application. Note that I am using the errm/versions Docker image because it displays the version of the application when you browse its webpage, which makes it easy to see which version you’re running.

app.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app
  name: app
spec:
  replicas: 5
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: errm/versions:0.0.1
        ports:
        - containerPort: 3000
        readinessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1

The readinessProbe section tells Kubernetes to send an HTTP request to the application five seconds after it has started, and then every five seconds thereafter. No traffic is sent to the pod until a successful response is returned. This is key to preventing downtime.

Did you know?

Consider tagging your container images with version numbers, rather than using a tag like latest. This allows you to keep track of the versions that are deployed and manage the release of new versions.

Next, define a Service object that will categorize the pods into a single group that the Ingress Controller will watch:

app-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    run: app
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000

Next, define an Ingress object. This configures how the HAProxy Ingress Controller will route traffic to the pods:

ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: default
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: app-service
          servicePort: 80

Use kubectl apply to deploy the pods, service and ingress:

$ kubectl apply -f app.yaml -f app-service.yaml -f ingress.yaml

Version 1 of your application is now deployed. Run the following command to see which port the HAProxy Kubernetes Ingress Controller has mapped to port 80:

$ kubectl get svc haproxy-ingress -n haproxy-controller
NAME              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                                      AGE
haproxy-ingress   NodePort   10.101.75.28   <none>        80:31179/TCP,443:31923/TCP,1024:30430/TCP   98s

In this case, the application is exposed on port 31179. Visit the Minikube IP address, http://192.168.99.100:31179, in your browser to see it.
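If you prefer the command line, a quick curl against the same address should confirm that version 1 is live (the exact response body depends on the errm/versions image):

$ curl http://192.168.99.100:31179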

[Screenshot: Version 1 web page]

Let’s see how to upgrade it to version 2 next.

Upgrade using a rolling update

After you have pushed a new version of your application to your container repository, trigger a rolling update by increasing the version number of the image property under the Deployment definition’s spec.template.spec.containers. This tells Kubernetes that the current, desired version of your application has changed. In our example, since we’re using a pre-built image, version 2 is already available in the Docker Hub repository.

app.yaml

image: errm/versions:0.0.2

Then, use kubectl apply to start the rollout:

$ kubectl apply -f app.yaml

You can check the status of the rollout by using the kubectl rollout status command:

$ kubectl rollout status deployment app
deployment "app" successfully rolled out

Once completed, you can access the application again at the same URL, http://192.168.99.100:31179. It shows you a new web page signifying that version 2 has been deployed.

[Screenshot: Version 2 web page]

If you decide that the new version is faulty, you can revert to the previous one by using the kubectl rollout undo command, like this:

$ kubectl rollout undo deployment app
deployment.extensions/app rolled back
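If you want to see which revisions are available before rolling back, kubectl keeps a history for each Deployment; you can also pass --to-revision to kubectl rollout undo to target a specific one:

$ kubectl rollout history deployment app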

The HAProxy Kubernetes Ingress Controller detects pod changes quickly and can switch back and forth between versions without dropping connections. Rolling updates aren’t the only way to accomplish highly available services, though. In the next section, you’ll learn about blue-green deployments, which update all pods simultaneously.

Blue-Green Deployments

A blue-green deployment lets you replace an existing version of your application across all pods at once. The name, blue-green, was coined in the book Continuous Delivery by Jez Humble and David Farley. Here’s how it works:

  1. Version 1 of your application is already deployed.

  2. Push version 2 of your application to your container image repository.

  3. Deploy version 2 of your application to a new group of pods. Pods for both version 1 and version 2 are now running in parallel. However, only version 1 is exposed to external clients.

  4. Run internal testing on version 2 and make sure it is ready to go live.

  5. Flip a switch and the ingress controller in front of your cluster stops routing traffic to the version 1 pods and starts routing it to the version 2 pods.

This deployment pattern has a few advantages over a rolling update. For one, at no time are two versions of your application accessible to external clients at the same time. So, all users will receive the same client-side JavaScript files and be routed to a version of the application that supports the API calls within those files. It also simplifies upstream dependencies, such as database schemas.

Another advantage is that it gives you time to test the new version in a production environment before it goes live. You control how long to wait before making the switch. Meanwhile, you can verify that the application and its dependencies function normally.

On the other hand, a blue-green deployment is all-or-nothing. Unlike a rolling update, you aren’t able to roll out the new version gradually. All users will receive the update at the same time, although existing sessions will be allowed to finish their work on the old instances. So the stakes are a bit higher: everything should work once you initiate the change. It also requires allocating more server resources, since you will need to run two copies of every pod.

Luckily, the rollback procedure is just as easy: You simply flip the switch again and the previous version is swapped back into place. That’s because the old version is still running on the old pods. It is simply that traffic is no longer being routed to them. When you’re confident that the new version is here to stay, you can decommission those pods.
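When that time comes, deleting the old Deployment is a one-liner. Here, app-blue is the name used in the example that follows:

$ kubectl delete deployment app-blue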

You’ll need to set up your original application in a slightly different way when you expect to use a blue-green deployment. There is more emphasis on using Kubernetes metadata labels, which will become clear in the next section.

Deploy the original application

Consider the following definition, which deploys version 1 of your application. Note its spec.selector section, which specifies a label called version:

app-v1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app
  name: app-blue
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app
      version: 0.0.1
  template:
    metadata:
      labels:
        run: app
        version: 0.0.1
    spec:
      containers:
      - name: app
        image: errm/versions:0.0.1
        ports:
        - containerPort: 3000

The Deployment object defines a spec.selector section that matches the labels in the spec.template.metadata section. This is how a Deployment tags pods and keeps track of them, and it is the key to setting up a blue-green deployment. By using different labels, you can deploy multiple versions of the same application. Here, the spec.selector.matchLabels property is set to run=app,version=0.0.1. For convenience and simplicity, the version label should match the version tag of your Docker image.
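One way to convince yourself that the labels line up is to list the pods matching the selector; this sketch uses the label values from the definition above:

$ kubectl get pods -l run=app,version=0.0.1 --show-labels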

The following Service definition targets that same selector:

app-service-bg.yaml

apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    run: app
    version: 0.0.1
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000

Next, use the following Ingress definition to expose the version 1 pods to the world. It registers a route with the HAProxy Kubernetes Ingress Controller:

ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: default
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: app-service
          servicePort: 80

Apply everything using kubectl:

$ kubectl apply -f app-v1.yaml -f app-service-bg.yaml -f ingress.yaml

At this point, you can access the application at the HTTP port exposed by the Ingress Controller: http://192.168.99.100:31179. Now, let’s see how to use a blue-green deployment to upgrade the version.

Upgrade using a blue-green deployment

Now that the blue version (i.e. version 1) is released, create a green version of your Deployment object that will deploy version 2. The YAML will be the same, except that you increase the value of the version label, as well as the Docker image tag. Also note that the name of the deployment is changed from app-blue to app-green, since you cannot have two Deployments with the same name that target different pods.

app-v2.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app
  name: app-green
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app
      version: 0.0.2
  template:
    metadata:
      labels:
        run: app
        version: 0.0.2
    spec:
      containers:
      - name: app
        image: errm/versions:0.0.2
        ports:
        - containerPort: 3000

Apply it with kubectl:

$ kubectl apply -f app-v2.yaml
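If you want to confirm that the new pods came up alongside the old ones, listing them by the shared run label shows both groups, with the LABELS column distinguishing the versions:

$ kubectl get pods -l run=app --show-labels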

At this point, both blue (version 1) and green (version 2) are deployed. Only the blue instance is receiving traffic, though. To make the switch, update your Service definition’s version selector so that it points to the new version:

app-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    run: app
    version: 0.0.2
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000

Apply it with kubectl:

$ kubectl apply -f app-service.yaml

Check the application again and you will see that the new version is live. If you need to roll back to the earlier version, simply change the Service definition’s selector back and reapply it. The HAProxy Kubernetes Ingress Controller detects these changes almost instantly and you can swap back and forth to your heart’s content. There’s no downtime during the cutover. Established TCP connections will finish normally on the instance where they began.
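If you’d rather not edit the YAML file each time you flip, patching the Service in place is an equivalent approach. This sketch swaps the selector back to version 1, using the values from the definitions above:

$ kubectl patch service app-service -p '{"spec":{"selector":{"run":"app","version":"0.0.1"}}}'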

Testing the new pods

You can also test the new version before it’s released by registering a different ingress route that exposes the application to a new URL path. First, create another Service definition called test-service:

test-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: test-service
  annotations:
    haproxy.org/path-rewrite: /
spec:
  selector:
    run: app
    version: 0.0.2
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000

Note that we are including the path-rewrite annotation, which rewrites the URL /test to / before it reaches the pod. Then, add a new route to your existing Ingress object that exposes this service at the URL path /test, as shown:

ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: default
  annotations:
    haproxy.org/ingress.class: "development"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: app-service
          servicePort: 80
      - path: /test
        backend:
          serviceName: test-service
          servicePort: 80

This lets you check your application by visiting /test in your browser.
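For example, assuming the Ingress above is picked up by your controller, a request to the /test path should be answered by the version 2 pods, while the root path continues to serve whatever app-service points to:

$ curl http://192.168.99.100:31179/test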

Conclusion

The HAProxy Kubernetes Ingress Controller is powered by the legendary HAProxy. Known to provide the utmost performance, observability, and security, it features many benefits including SSL termination, rate limiting, and IP whitelisting. When you deploy the ingress controller into your cluster, it’s important to consider how your applications will be upgraded later. Two popular methods are rolling updates and blue-green deployments.

Rolling updates, which have first-class support in Kubernetes, allow you to phase in a new version gradually. Blue-green deployments avoid the complexity of having two versions at play at the same time and give you a chance to test the change before going live. In either case, the HAProxy Kubernetes Ingress Controller detects these changes quickly and maintains uptime throughout.

If you enjoyed this post and want to see more like it, subscribe to this blog! You can also follow us on Twitter and join the conversation on Slack.

The Enterprise version of the ingress controller combines HAProxy, the world’s fastest and most widely used open-source software load balancer and application delivery controller, with enterprise-class features, services and premium support. Contact us to learn more and sign up for a free trial.
