Announcing HAProxy Kubernetes Ingress Controller 1.5

We’ve released version 1.5 of the HAProxy Kubernetes Ingress Controller. This version unlocks access to the raw HAProxy configuration language for power users to gain more control. You can also enable mutual TLS authentication between the ingress controller and services, enforce Basic authentication, and return custom error pages to users. This version also enhances the controller’s internals, resulting in a more efficient HAProxy configuration, and behind the scenes, HAProxy has been upgraded to version 2.3.

It also provides a way to run the ingress controller outside of your Kubernetes cluster while still monitoring the cluster for changes to pods. Some teams prefer this approach as they transition their applications to the container platform. You can launch the controller on a separate server and give it access to the pod network.

This release would not have been possible without the hard work of everyone involved in the open-source community, both on GitHub and Slack.

External Ingress Controller

According to the Kubernetes documentation:

Applications running in a Kubernetes cluster find and communicate with each other, and the outside world, through the Service abstraction.

The ingress controller itself runs behind a Kubernetes Service, which relies on kube-proxy to route requests to the pods where the application is running, on whichever node that is. However, some users wish to bypass the Kubernetes Service layer so that requests cross fewer proxies and packet filters. One possible solution is to run the ingress controller and its HAProxy instance outside of the cluster, like a more traditional reverse proxy.

While this solution trades off scalability—which is one of the main reasons to run applications inside Kubernetes—it does offer a smoother way to migrate existing infrastructure to Kubernetes. You can start to migrate applications into Kubernetes, but keep HAProxy instances outside of it, along with the high availability solution you choose (e.g. Keepalived).

Running an ingress controller outside of the Kubernetes cluster can also be quite useful for debugging, proof-of-concept, CI/CD pipelines, etc., for those who prefer to avoid containers. All you need is:

  • the HAProxy Kubernetes Ingress Controller binary;

  • the HAProxy binary;

  • a Kubeconfig file to access your Kubernetes cluster;

  • a network configuration that allows the HAProxy instances to route traffic to the pod network.

The last requirement can be as simple as adding a route like this to your server, where <node-ip> is the IP address of one of the cluster nodes reachable from where HAProxy is running:

$ ip route add <pod-network> via <node-ip>

To run the ingress controller outside of Kubernetes, you can start it using the following command:

$ ./kubernetes-ingress -e \
--configmap=default/haproxy-kubernetes-ingress \
--program=/usr/bin/haproxy \
--disable-ipv6 \
--ipv4-bind-address=10.0.3.100 \
--http-bind-port=8080 \
--https-bind-port=8443

All of the controller arguments are documented here.

Service Mutual TLS Authentication

This version adds support for mutual TLS authentication (mTLS) between the ingress controller and the backend servers to which it routes traffic. Mutual TLS authentication is not a protocol, but rather a security practice that ensures that both parties in a TLS session—the client and the server—trust each other. When the client and server communicate over TLS, the client verifies the server’s identity by checking that its TLS certificate was signed by a known Certificate Authority (CA); simultaneously, the client sends its own certificate to the server so that the server can verify the client’s identity. That way, both sides can inspect the other’s certificate before agreeing to continue. In this case, the ingress controller is the client and the backend servers play the server role.

For this purpose, we have added the server-ca and server-crt annotations, which you can apply at the Ingress, Service, or ConfigMap levels depending on whether the configured certificate applies to all or only some services. The server-ca annotation holds a Kubernetes secret that contains a CA certificate used to verify the backend server’s TLS certificate. The server-crt annotation holds a Kubernetes secret that contains a client certificate that the ingress controller will present to the server.
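The secrets referenced by these annotations must hold the certificates in PEM format. Here is a hedged sketch of one way to produce them: the file names, common names, and secret names below are hypothetical examples, not values mandated by the controller.

```shell
# Hypothetical sketch: generate a CA and a client certificate for mTLS.
# Create a CA key and a self-signed CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 365 -subj "/CN=example-ca"

# Create a client key and a certificate signing request, then sign it with the CA.
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
  -subj "/CN=haproxy-ingress"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out client.crt -days 365

# Load the results into Kubernetes secrets (names match the example annotations):
# kubectl create secret generic server-tls-secret -n default --from-file=tls.crt=ca.crt
# kubectl create secret generic client-tls-secret -n default \
#   --from-file=tls.crt=client.crt --from-file=tls.key=client.key
```

The CA certificate goes into the secret used by server-ca, while the client certificate and key go into the secret used by server-crt.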

Here is an example that demonstrates setting these annotations in a Service definition:

apiVersion: v1
kind: Service
metadata:
  labels:
    run: web
  name: web
  annotations:
    haproxy.org/server-ca: "default/server-tls-secret"
    haproxy.org/server-crt: "default/client-tls-secret"
# ... other service settings...

If you don’t want to verify the server’s certificate, you can use the older server-ssl annotation to establish a TLS connection to the server without certificate verification.

Basic Authentication

The ingress controller’s position in front of your services makes it the ideal place to implement security measures like authentication. That’s because it fulfills the role of an API gateway, where all traffic flows through it. You can offload complex work to this layer and it will cover all of your services.

In version 1.5, we’ve added support for Basic HTTP authentication. This form of authentication displays a login prompt whenever a client first tries to access a service. To enable it, set the auth-type annotation to basic-auth on either your ConfigMap or Ingress definition. Then add the auth-secret annotation, which specifies a Kubernetes secret that maps usernames to their passwords, where each password is encrypted (hashed) and then base64 encoded.
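As a hedged sketch of creating such a secret (the username, password, and secret name are hypothetical), you can generate a crypt-style password hash with openssl and store it with kubectl, which base64-encodes literal values for you:

```shell
# Hypothetical sketch: create the "logins" secret with one user.
# Generate a SHA-512 crypt hash of the password.
hash=$(openssl passwd -6 'S3cretPassw0rd')
echo "$hash"

# Store it in a Kubernetes secret; kubectl base64-encodes the value:
# kubectl create secret generic logins -n default --from-literal=admin="$hash"
```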

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
  annotations:
    haproxy.org/ssl-redirect: "true"
    haproxy.org/ssl-redirect-code: "301"
    haproxy.org/ssl-certificate: "default/tls-secret"
    haproxy.org/auth-type: basic-auth
    haproxy.org/auth-secret: "default/logins"
# ... other ingress settings...

Although the passwords are encrypted when stored, Basic authentication credentials are passed in the clear when used, so you should always enable TLS to encrypt communication when using it.

Config Snippets

One of the benefits of using an ingress controller is that it hides the details of how the underlying load balancer is configured. Instead of editing HAProxy’s configuration file, haproxy.cfg, by hand, you can simply add annotations to your Kubernetes Ingress, Service, or ConfigMap files and apply them with kubectl. This annotation syntax is succinct. It boils down the load balancer functionality to its essence. For instance, to enable SSL with a redirect from HTTP to HTTPS, you would add these annotations to your Ingress definition:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
  annotations:
    haproxy.org/ssl-redirect: "true"
    haproxy.org/ssl-redirect-code: "301"
    haproxy.org/ssl-certificate: "default/tls-secret"
# ... other ingress settings...

This simplicity makes configuring advanced load balancing features convenient for users. In fact, you don’t even need to learn the HAProxy configuration language to use the HAProxy Kubernetes Ingress Controller. The annotations provide good enough coverage for most use cases. Sometimes, the annotations do extra work behind the scenes, such as how the ssl-certificate annotation will not only apply the certificate but also monitor its status and update the binding if it changes.

However, HAProxy power users prefer having the full configuration language at their disposal. In this version of the ingress controller, you can now insert raw HAProxy directives into the underlying configuration, which unlocks features that have not been exposed as annotations. Use global-config-snippet in the ingress controller’s ConfigMap to add directives to HAProxy’s global settings, which is where you would apply rules that affect all routes and services.

In the following example, we add four global directives: ssl-default-bind-options, ssl-default-bind-ciphers, tune.ssl.default-dh-param, and tune.bufsize, which are just a few of the settings available:

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress
  namespace: default
data:
  global-config-snippet: |
    ssl-default-bind-options prefer-client-ciphers no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    tune.ssl.default-dh-param 2048
    tune.bufsize 32768

You can also apply directives to a backend in the HAProxy configuration. A backend defines a pool of servers to which requests are routed. The ingress controller abstracts away this concept, but Kubernetes Services map to backends, and the two are roughly analogous. Add a backend-config-snippet annotation to a Service definition, as shown below:

apiVersion: v1
kind: Service
metadata:
  labels:
    run: web
  name: web
  annotations:
    haproxy.org/backend-config-snippet: |
      stick-table type binary size 1000 store http_req_rate(5s)
      http-request track-sc0 url32+src
      http-request deny if { url32+src,table_http_req_rate() gt 50 }
# ... other service settings...

Here, we add a stick-table that tracks request rates keyed on a hash of the client’s IP address and the URL they’ve requested (url32+src). From this, we track each client’s per-URL request rate using http-request track-sc0 and enforce a rate limit using http-request deny. These directives are available only with access to HAProxy’s powerful configuration language. The ingress controller already provides the rate-limit-requests annotation to enable rate limiting based on IP address, but with config snippets you have more control, such as setting different rate limits for different URLs.

You can also place this annotation into an Ingress definition, in which case it applies to all services that the ingress references. Or, add it to your ingress controller’s ConfigMap, in which case it applies to all services. You choose the scope that best fits your use case.

There isn’t a way to add snippets to a frontend in the HAProxy configuration. The ingress controller manages routing rules and other settings in the frontend section, so pushing raw configuration there could conflict with the controller’s internal operations.

Custom Error Pages

Under some circumstances, HAProxy generates certain HTTP errors. For example, it replies with a 403 Forbidden response when a request is denied by an http-request deny rule, or with a 503 Service Unavailable response when no server is available to handle the request. The new --configmap-errorfile controller argument lets you return a custom error message for a given HTTP response status code.

First, add a ConfigMap that defines the HTML you’d like to return for a given status:

apiVersion: v1
kind: ConfigMap
metadata:
  name: customerrors
  namespace: default
data:
  503: |-
    HTTP/1.0 503 Service Unavailable
    Cache-Control: no-cache
    Connection: close
    Content-Type: text/html

    <html><body><h1>Oops, that's embarrassing!</h1>
    <p>There are no servers available to handle your request.</p>
    </body></html>

Then, pass the --configmap-errorfile argument with the name of the ConfigMap when creating the ingress controller:

args:
  - --configmap-errorfile=default/customerrors

Other Annotations

In addition to the main features discussed above, several other annotations were added:

  • src-ip-header: Set the source IP from an HTTP request header rather than from the L3 connection. This is particularly useful if the ingress controller is behind a cloud load balancer or any other component that changes the original source IP. This annotation affects all access control mechanisms based on source IP (whitelisting, blacklisting, rate limiting).

  • send-proxy-protocol: Use the PROXY protocol when connecting to backend servers.

  • request-redirect: Redirect the HTTP request to the specified host and port by updating the HTTP Location header. You can set the redirection code with the request-redirect-code annotation.

  • rate-limit-status-code: Set the status code to return when rate limiting has been triggered.

  • ssl-redirect-port: Set the port to use when redirecting to HTTPS.

  • cors-enable: Enables CORS rules for the corresponding Ingress traffic.

  • cors-allow-origin: Sets the Access-Control-Allow-Origin response header to tell browsers which origin is allowed to access the requested resource.

  • cors-allow-methods: Sets the Access-Control-Allow-Methods response header to tell browsers which HTTP methods are allowed when accessing the requested resource.

  • cors-allow-credentials: Sets the Access-Control-Allow-Credentials response header to tell browsers whether credentials can be used to access the requested resource.

  • cors-allow-headers: Sets the Access-Control-Allow-Headers response header to tell browsers which HTTP headers can be used when accessing the requested resource.

  • cors-max-age: Sets the Access-Control-Max-Age response header to tell browsers how long the result of a preflight request can be cached.
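As a hedged sketch, several of the CORS annotations could be combined in an Ingress definition like this (the origin, methods, headers, and max-age values here are hypothetical examples; check the annotation reference for the exact accepted formats):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
  annotations:
    haproxy.org/cors-enable: "true"
    haproxy.org/cors-allow-origin: "https://example.com"
    haproxy.org/cors-allow-methods: "GET, POST, OPTIONS"
    haproxy.org/cors-allow-headers: "Content-Type, Authorization"
    haproxy.org/cors-allow-credentials: "true"
    haproxy.org/cors-max-age: "24h"
# ... other ingress settings...
```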

Internal Enhancements

An important part of the work during this release was to enhance the ingress controller’s internals. One aspect of that effort was to update the code to rely more on HAProxy Map files since doing so greatly simplifies the configuration that’s rendered.

Take a look at the configuration below, which was generated before the enhancements. It’s too verbose: there is a use_backend rule for every ingress rule, and long combinations of ACLs on the http-request deny and http-request capture lines:

frontend http
  mode http
  bind 0.0.0.0:80 name bind_1
  bind :::80 v4v6 name bind_2
  http-request set-var(txn.host) req.hdr(Host),field(1,:),lower
  http-request set-var(txn.path) path
  http-request set-var(txn.base) base
  http-request deny deny_status 403 if { var(txn.host),concat(,txn.path) -m beg -f /etc/haproxy/maps/16510262515213450.lst } { src -f /etc/haproxy/maps/7895261178644353572.lst } or { var(txn.host) -f /etc/haproxy/maps/16510262515213450.lst } { src -f /etc/haproxy/maps/7895261178644353572.lst } or { var(txn.path) -m beg -f /etc/haproxy/maps/16510262515213450.lst } { src -f /etc/haproxy/maps/7895261178644353572.lst }
  http-request capture "hdr(Referer)" len 128 if { var(txn.host),concat(,txn.path) -m beg -f /etc/haproxy/maps/18288779858306557702.lst } or { var(txn.host) -f /etc/haproxy/maps/18288779858306557702.lst } or { var(txn.path) -m beg -f /etc/haproxy/maps/18288779858306557702.lst }
  http-request capture "hdr(User-Agent)" len 128 if { var(txn.host),concat(,txn.path) -m beg -f /etc/haproxy/maps/15330672981640189476.lst } or { var(txn.host) -f /etc/haproxy/maps/15330672981640189476.lst } or { var(txn.path) -m beg -f /etc/haproxy/maps/15330672981640189476.lst }
  use_backend echo-echo-3-http-echo-8080 if { var(txn.host) echo.k8s.local } { var(txn.path) -m beg /echo-3 }
  use_backend echo-echo-2-http-echo-8080 if { var(txn.host) echo.k8s.local } { var(txn.path) -m beg /echo-2 }
  use_backend echo-echo-3-http-echo-8080 if { var(txn.host) echo-3.k8s.local }
  use_backend echo-echo-2-http-echo-8080 if { var(txn.host) echo-2.k8s.local }
  use_backend echo-echo-1-http-echo-8443 if { var(txn.host) echo-1.k8s.local }
  use_backend echo-echo-3-http-echo-8080 if { var(txn.path) -m beg /echo-3 }
  use_backend echo-echo-2-http-echo-8080 if { var(txn.path) -m beg /echo-2 }
  default_backend default-haproxy-1-4-kubernetes-ingress-default-backend-8080

With version 1.5, the same inputs create a dramatically different configuration:

frontend http
  mode http
  bind 0.0.0.0:80 name bind_1
  bind :::80 name bind_2 v4v6
  http-request set-var(txn.base) base
  http-request set-var(txn.path) path
  http-request set-var(txn.host) req.hdr(Host),field(1,:),lower,map(/etc/haproxy/maps/host.map)
  http-request set-var(txn.host) req.hdr(Host),field(1,:),regsub(^[^.]*,,),lower,map(/etc/haproxy/maps/host.map,'') if !{ var(txn.host) -m found }
  http-request set-var(txn.match) var(txn.host),concat(,txn.path,),map(/etc/haproxy/maps/path-exact.map)
  http-request set-var(txn.match) var(txn.host),concat(,txn.path,),map_beg(/etc/haproxy/maps/path-prefix.map) if !{ var(txn.match) -m found }
  http-request deny deny_status 403 if { var(txn.match) -m dom 819381936 } { src -f /etc/haproxy/maps/blacklist-2602162148.map }
  http-request capture "hdr(Referer)" len 128 if { var(txn.match) -m dom 4205828474 }
  http-request capture "hdr(User-Agent)" len 128 if { var(txn.match) -m dom 2786470064 }
  use_backend %[var(txn.match),field(1,.)]
  default_backend default-haproxy-kubernetes-ingress-default-backend-8080

The use_backend and http-request lines now rely on lookups in HAProxy Map files, which makes the configuration both more readable and more efficient. And since HAProxy Map files can be updated at runtime, the controller triggers fewer reloads.

Here’s how the example above works for routing to the correct service based on the host header:

  • First, a lookup is done in the Host map file to find if there is an exact match of the requested Host header.

  • If not found, a second lookup is done to find if there is a match for the domain of the requested Host header.

  • Next, a lookup is done for the concatenation of the “host/path” to find an exact match.

  • If not found, a second lookup is done to find a “prefix” match.

  • At this stage there is either no match—in which case the default backend will be served—or else the result of the match will be in the following format:

BackendName.ruleID1.ruleID2.ruleID3

which will be used to select the corresponding backend and HAProxy rules.

For example:

echo.k8s.local/echo-2 echo-echo-2-http-echo-8080.4205828474.2786470064

During this release cycle, the controller code was refactored and reorganized into separate modules (store, annotations, HAProxy rules/maps, etc.) for easier maintenance and contributions going forward.

Last but not least, unit and end-to-end tests were added.

Contributors

We’d like to thank the code contributors who helped make this version possible:

Arash Haghighat

NEW FEATURE

Christopher Ruwe

CLEANUP

Dario Tranchitella

BUG FIX BUILD CLEANUP DOCUMENTATION NEW FEATURE OPTIMIZATION REORGANIZATION TESTS

Ivan Matmati

BUG FIX DOCUMENTATION NEW FEATURE

Moemen Mhedhbi

BUG FIX BUILD CLEANUP DOCUMENTATION NEW FEATURE OPTIMIZATION REFACTOR REORGANIZATION TESTS

Robert Maticevic

BUG FIX CLEANUP NEW FEATURE

Trevor Nichols

BUG FIX

Zlatko Bratkovic

BUG FIX BUILD CLEANUP DOCUMENTATION NEW FEATURE TESTS

Conclusion

The HAProxy Kubernetes Ingress Controller 1.5 release brings some exciting features that let you control the underlying configuration, enable authentication mechanisms, define custom error pages, and even run it outside of the Kubernetes cluster. We plan for the next release to leverage even more of HAProxy’s capabilities via Kubernetes Custom Resource Definitions. Stay tuned!

Want to stay up to date on similar topics? Subscribe to our blog! You can also follow us on Twitter and join the conversation on Slack.
