The New HAProxy Data Plane API: Two Examples of Programmatic Configuration

Use the HAProxy Data Plane API to manage your load balancer configuration dynamically using HTTP commands.

Designing for high availability nearly always means having a strong proxying/load balancing layer. A proxy provides essential services, such as:

  • detection and removal of failed servers

  • connection queuing

  • offloading of TLS encryption

  • compression

  • caching

The challenge is keeping your configurations up to date, which is especially daunting as services move into containers and those containers become ephemeral. The new HAProxy Data Plane API, introduced with HAProxy 2.0, addresses this by letting you fully configure HAProxy through a modern REST API.

The HAProxy Data Plane API complements HAProxy’s flexible configuration language, which provides the building blocks to define both simple and complex routing rules. It is also the perfect addition to the existing Runtime API, which enables you to start, stop and drain traffic from servers, change server weights, and manage health checks.
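
For a sense of how the Runtime API is used, here is a minimal sketch that drains a server and lowers its weight over the stats socket. The socket path and the test_backend/server1 names are placeholders for illustration, and it assumes a stats socket has been enabled in your configuration.

$ echo "set server test_backend/server1 state drain" | \
socat stdio /var/run/haproxy.sock

$ echo "set weight test_backend/server1 50" | \
socat stdio /var/run/haproxy.sock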

The new Data Plane API gives you the ability to dynamically add and configure frontends, backends, and traffic routing logic. You can also use it to manage stick table rules, update logging endpoints, and create SPOE filters. In essence, almost the entire load balancer can be configured using HTTP commands. In this blog post, you’ll see how to get started using it.

Managing a Configuration

Typically when configuring HAProxy, you would manually edit its configuration file, /etc/haproxy/haproxy.cfg. This one file is used for managing all of the load balancer’s functionality. It is split primarily into frontend sections, which define the public-facing IP addresses that clients connect to, and backend sections, which hold the upstream servers where traffic is routed. There’s much more that you can do, including setting global settings that affect the running process, setting default values, adding traffic behavior analysis with stick tables, reading map files, defining filtering rules with ACLs, and plenty more.
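
As a point of reference, here is a small, illustrative haproxy.cfg with one frontend and one backend; the names, ports, and timeouts are placeholders rather than recommendations.

global
  maxconn 2000

defaults
  mode http
  timeout connect 5s
  timeout client 30s
  timeout server 30s

frontend www
  bind :80
  default_backend web_servers

backend web_servers
  balance roundrobin
  server server1 127.0.0.1:8080 check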

While editing the file by hand is fairly straightforward, it isn’t always expedient, especially when operating dozens or even hundreds of services and proxies. For example, in a service mesh, HAProxy operates as a sidecar that is paired with each of your microservices. This allows all traffic to flow from proxy to proxy, essentially abstracting away the network and its fickleness from the adjacent application services. Proxies at this level can add retry logic, authorization, and TLS encryption to your services. However, the number of proxies grows quickly, since there is a proxy per service.

In this scenario, having the ability to call an HTTP API to dynamically update a fleet of proxy definitions is essential. In a service mesh, control-plane software supervises the proxies and dynamically calls configuration APIs. The HAProxy Data Plane API allows HAProxy to integrate with these platforms. What’s more, the API utilizes the Runtime API to make changes that don’t require a reload whenever possible.

Did you know?

The Data Plane API uses the Go packages config-parser and client-native to parse the HAProxy configuration and call Runtime API commands, respectively. You can use these in your own projects to integrate with HAProxy.

Configuring HAProxy Dynamically

There’s a lot that you can do with the Data Plane API. In this section, you’ll see how to create a backend with servers and a frontend that routes traffic to it. First, follow the official documentation for installing and setting up the API.
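
As a rough sketch, starting the API as a standalone process looks something like the following. The flags and paths shown here are assumptions that can vary between releases, so check dataplaneapi --help and the documentation for your version. The admin:mypassword credentials used throughout this post come from a userlist section defined in the HAProxy configuration.

$ dataplaneapi --port 5555 \
--haproxy-bin /usr/sbin/haproxy \
--config-file /etc/haproxy/haproxy.cfg \
--reload-cmd "systemctl reload haproxy" \
--reload-delay 5 \
--userlist dataplaneapi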

Then, once you have it installed and running, call GET on the /v2/services/haproxy/configuration/backends endpoint to see the backend sections that are already defined, like this:

$ curl --get --user admin:mypassword \
http://localhost:5555/v2/services/haproxy/configuration/backends

If you want to add a new backend, call the same endpoint with POST. There are two ways to make state changes: either as individual invocations or by batching commands inside a transaction. Because we want to make several related changes, let’s start by creating a transaction.

Call the /v2/services/haproxy/transactions endpoint to create a new transaction. This requires a version parameter in the URL, but the commands inside the transaction don’t need one. Whenever a POST, PUT, or DELETE command is called, a version must be included, which is then stamped onto the HAProxy configuration file. This ensures that if multiple clients are using the API, they’ll avoid conflicts. If the version you pass doesn’t match the version stamped onto the configuration file, you’ll get an error. When using a transaction, that version is specified up front when creating the transaction.
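
The API reports the current configuration version in the _version field of its JSON responses (you’ll see it in the transaction output below). If you’d rather query it directly, the v2 specification also documents a configuration version endpoint; the exact path shown here is based on that specification, so verify it against your installed release:

$ curl --get --user admin:mypassword \
http://localhost:5555/v2/services/haproxy/configuration/version

With the version in hand, create the transaction: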

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
http://localhost:5555/v2/services/haproxy/transactions?version=1

You’ll find the transaction’s ID in the JSON document that’s returned:

{"_version":5,"id":"9663c384-5052-4776-a968-abcef032aeef","status":"in_progress"}

Next, use the /v2/services/haproxy/configuration/backends endpoint to create a new backend, sending the transaction ID as a URL parameter:

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
-d '{"name": "test_backend", "mode":"http", "balance": {"algorithm":"roundrobin"}, "httpchk": {"method": "HEAD", "uri": "/", "version": "HTTP/1.1"}}' \
http://localhost:5555/v2/services/haproxy/configuration/backends?transaction_id=9663c384-5052-4776-a968-abcef032aeef

Then call the /v2/services/haproxy/configuration/servers endpoint to add servers to the backend:

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
-d '{"name": "server1", "address": "127.0.0.1", "port": 8080, "check": "enabled", "maxconn": 30, "weight": 100}' \
"http://localhost:5555/v2/services/haproxy/configuration/servers?backend=test_backend&transaction_id=9663c384-5052-4776-a968-abcef032aeef"

Next, add a frontend by using the /v2/services/haproxy/configuration/frontends endpoint:

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
-d '{"name": "test_frontend", "mode": "http", "default_backend": "test_backend", "maxconn": 2000}' \
http://localhost:5555/v2/services/haproxy/configuration/frontends?transaction_id=9663c384-5052-4776-a968-abcef032aeef

This frontend doesn’t have any bind statements yet. Add one by using the /v2/services/haproxy/configuration/binds endpoint, as shown:

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
-d '{"name": "http", "address": "*", "port": 80}' \
"http://localhost:5555/v2/services/haproxy/configuration/binds?frontend=test_frontend&transaction_id=9663c384-5052-4776-a968-abcef032aeef"

Then, to commit the transaction and apply all changes, invoke the /v2/services/haproxy/transactions/[transaction ID] endpoint with PUT, like this:

$ curl -X PUT --user admin:mypassword \
-H "Content-Type: application/json" \
http://localhost:5555/v2/services/haproxy/transactions/9663c384-5052-4776-a968-abcef032aeef

Here’s what the configuration looks like now:

frontend test_frontend
  mode http
  maxconn 2000
  bind *:80 name http
  default_backend test_backend

backend test_backend
  mode http
  balance roundrobin
  option httpchk HEAD / HTTP/1.1
  server server1 127.0.0.1:8080 check maxconn 30 weight 100

This load balancer is ready to receive traffic and forward it on to the upstream server.
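
If you want to double check the result, you can list the frontends through the API and, assuming a service is actually listening on 127.0.0.1:8080, send a test request through the new frontend:

$ curl --get --user admin:mypassword \
http://localhost:5555/v2/services/haproxy/configuration/frontends

$ curl -I http://localhost:80/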

Did you know?

Since the Data Plane API specification uses OpenAPI, you can use it to generate client code in many supported programming languages.
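
As an illustration, if you save the API’s OpenAPI specification to a local file, a generator such as openapi-generator can build a client library from it. The file name, generator choice, and output directory below are placeholders:

$ openapi-generator-cli generate \
-i dataplaneapi-specification.yaml \
-g go \
-o ./haproxy-dataplane-client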

During this exercise, we batched all of the commands inside a transaction. You can also invoke them one by one. In that case, instead of including a URL parameter called transaction_id, you’d include one called version, which the API increments after each successful change.
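
As a hypothetical sketch (not part of the sequence above), adding another server to test_backend outside of a transaction would look like this; the server name, port, and version number are placeholders, and the version must match the number currently stamped onto the configuration file:

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
-d '{"name": "server2", "address": "127.0.0.1", "port": 8082, "check": "enabled"}' \
"http://localhost:5555/v2/services/haproxy/configuration/servers?backend=test_backend&version=2"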

Another Example

You’ve now seen the simplicity and power of the HAProxy Data Plane API. With a few HTTP commands, you’re able to dynamically change the configuration. Let’s look at another example. In this case, we’ll create an ACL that checks whether the Host header is example.com. If it is, a use_backend line will route the request to a different backend named example_servers. We’ll also add an http-request deny rule that rejects any request for the URL path /admin.php unless the client’s source IP address is within the 192.168.50.20/24 network.

First, use the /v2/services/haproxy/transactions endpoint to create a new transaction and get its ID:

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
http://localhost:5555/v2/services/haproxy/transactions?version=2
{"_version":2,"id":"7d0d6737-655e-4489-92eb-6d29cdd69827","status":"in_progress"}

Then use the /v2/services/haproxy/configuration/backends endpoint, along with the transaction’s ID, to create a new backend named example_servers:

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
-d '{"name": "example_servers", "mode":"http", "balance": {"algorithm":"roundrobin"}}' \
http://localhost:5555/v2/services/haproxy/configuration/backends?transaction_id=7d0d6737-655e-4489-92eb-6d29cdd69827

Use the /v2/services/haproxy/configuration/servers endpoint to add a server to the backend:

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
-d '{"name": "server1", "address": "127.0.0.1", "port": 8081, "check": "enabled", "maxconn": 30, "weight": 100}' \
"http://localhost:5555/v2/services/haproxy/configuration/servers?backend=example_servers&transaction_id=7d0d6737-655e-4489-92eb-6d29cdd69827"

Use the /v2/services/haproxy/configuration/acls endpoint to define an ACL named is_example that checks whether the Host header has a value of example.com:

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
-d '{"index": 0, "acl_name": "is_example", "criterion": "req.hdr(Host)", "value": "example.com"}' \
"http://localhost:5555/v2/services/haproxy/configuration/acls?parent_type=frontend&parent_name=test_frontend&transaction_id=7d0d6737-655e-4489-92eb-6d29cdd69827"

Use the /v2/services/haproxy/configuration/backend_switching_rules endpoint to add a use_backend line that evaluates the is_example ACL:

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
-d '{"index": 0, "cond": "if", "cond_test": "is_example", "name": "example_servers"}' \
"http://localhost:5555/v2/services/haproxy/configuration/backend_switching_rules?frontend=test_frontend&transaction_id=7d0d6737-655e-4489-92eb-6d29cdd69827"

Use the /v2/services/haproxy/configuration/http_request_rules endpoint to add an http-request deny rule that checks whether the path is /admin.php and the client’s source IP is not within the 192.168.50.20/24 network:

$ curl -X POST --user admin:mypassword \
-H "Content-Type: application/json" \
-d '{"index": 0, "cond": "if", "cond_test": "{ path /admin.php } !{ src 192.168.50.20/24 }", "type": "deny"}' \
"http://localhost:5555/v2/services/haproxy/configuration/http_request_rules?parent_type=frontend&parent_name=test_frontend&transaction_id=7d0d6737-655e-4489-92eb-6d29cdd69827"

Then commit the transaction for the changes to take effect:

$ curl -X PUT --user admin:mypassword \
-H "Content-Type: application/json" \
http://localhost:5555/v2/services/haproxy/transactions/7d0d6737-655e-4489-92eb-6d29cdd69827

Your HAProxy configuration now looks like this:

frontend test_frontend
  mode http
  maxconn 2000
  bind *:80 name http
  acl is_example req.hdr(Host) example.com
  http-request deny deny_status 0 if { path /admin.php } !{ src 192.168.50.20/24 }
  use_backend example_servers if is_example
  default_backend test_backend

backend example_servers
  mode http
  balance roundrobin
  server server1 127.0.0.1:8081 check maxconn 30 weight 100

backend test_backend
  mode http
  balance roundrobin
  option httpchk HEAD / HTTP/1.1
  server server1 127.0.0.1:8080 check maxconn 30 weight 100
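
To spot check the new rules, you can send a couple of test requests through the frontend. This assumes services are listening on the backend ports and that your client’s source address falls outside the allowed 192.168.50.20/24 range:

$ curl -I -H "Host: example.com" http://localhost:80/

$ curl -I http://localhost:80/admin.php

The first request should be routed to example_servers because the is_example ACL matches, while the second should be denied (HTTP 403 by default) by the http-request deny rule.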

Conclusion

In this blog post, you took a tour of the HAProxy Data Plane API, which allows you to fully configure HAProxy using a modern REST API. More information can be found in the official documentation. This rounds out a trio of powerful features that includes HAProxy’s flexible configuration language and the Runtime API. The Data Plane API opens the door to a number of use cases, most notably using HAProxy as a proxy layer within a service mesh.

We’re excited about the future of using the new API to build out sophisticated partnerships and features. HAProxy continues to provide high performance and resilience in any environment and at any scale.

If you enjoyed this article and want to keep up to date on similar topics, subscribe to this blog. You can also follow us on Twitter and join the conversation on Slack. HAProxy Enterprise makes it easy to get up and running with the Data Plane API since it can be installed as a convenient system package. It also includes a robust and cutting-edge codebase, an enterprise suite of add-ons, expert support, and professional services. Want to learn more? Contact us today and sign up for a free trial.
