The HAProxy Process Manager allows you to start external programs that are managed under HAProxy.
Not everything is compiled directly into HAProxy’s C code. Some components are written in other programming languages and run alongside the load balancer. These components can include agents built using HAProxy’s Stream Processing Offload Engine (SPOE)—SPOE allows polyglot extensibility, which is to say extending HAProxy with any programming language—and daemons such as the HAProxy Data Plane API, which is written in Go. Did you know that HAProxy has a feature that lets it start, stop and reload these components? It can host them as workers under its own main process, tying their lifetimes to its own. Before we get to that, let’s see how to configure HAProxy as a service.
When you start HAProxy with the -W argument (or -Ws if running it as a service under Systemd), it enables master-worker mode. Master-worker mode spins up a main process and one or more worker processes under it, making it possible to spread HAProxy’s duties among several workers while exposing a single communication endpoint—the master process—for interacting with the load balancer as a whole. For example, you can send a reload command to the master process and it will recreate the workers. This design is ideal for hosting HAProxy under a modern service manager like Systemd.
The system packages for Ubuntu or Debian add the -Ws argument by default to the Systemd unit file. After starting HAProxy, check its status and you’ll find a list of running processes: one master and one worker:
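On a Debian-based system, the status output looks something like the following; the PIDs, paths, and uptime shown here are illustrative and will differ on your machine:

```
$ sudo systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
     Active: active (running)
   Main PID: 1106 (haproxy)
      Tasks: 2
     CGroup: /system.slice/haproxy.service
             ├─1106 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
             └─1108 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
```

The first process is the master; the second, forked beneath it, is the worker that actually handles traffic.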
In this model, reloads are architecturally simpler: make a change to your HAProxy configuration and then trigger a reload with systemctl reload haproxy. The master process will create a new worker, send it the new configuration, and then gracefully shut down the old worker. You can learn more about the rationale behind the master-worker model by watching William Lallemand’s presentation, HAProxy Process Management. William Lallemand is one of the core HAProxy engineers, and he describes the history of process management as it relates to HAProxy.
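A typical reload sequence looks like this; the configuration path is the Debian default and may differ on your system:

```
# Validate the new configuration first, then reload without dropping connections
$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid
$ sudo systemctl reload haproxy
```

Validating with -c before reloading catches syntax errors before they reach the master process.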
The master-worker model simplifies the management of HAProxy’s running processes, but it also opened the door to a new feature that was introduced in HAProxy 2.0: the HAProxy Process Manager. The Process Manager lets you configure external applications that start and stop with HAProxy and are controlled by the HAProxy master process.
The HAProxy Process Manager
You can set an application to run when HAProxy starts by adding a program section to your HAProxy configuration. We often use this to start the Data Plane API, which is a RESTful HTTP service for configuring HAProxy at runtime. The API is a binary that runs outside of HAProxy, so by using the Process Manager, we don’t need to install a Systemd unit file to control its lifetime. When HAProxy starts, so will the API.
Here’s an example that shows how to run the HAProxy Data Plane API using the Process Manager:
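A minimal program section might look like the following sketch; the binary path, listening address, and port are assumptions to adapt to your installation:

```
program api
   command /usr/local/bin/dataplaneapi --host 127.0.0.1 --port 5555
   no option start-on-reload
```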
The command directive sets the program to run and can take any number of arguments, which are passed to the external program. Here, we do not want to stop and recreate the API whenever HAProxy reloads, since the API will itself be reloading HAProxy at various times to manage its configuration, so we include the no option start-on-reload directive. Other applications, such as Stream Processing Offload Agents (SPOAs) whose configuration gets loaded into HAProxy, do better if recreated each time and would omit this directive.
Add as many program sections as you need; each contains a single program to run. Besides using it for the Data Plane API, the Process Manager is convenient for launching SPOAs, which stream load balancer data to external applications. For example, you can start the HAProxy Traffic Shadowing agent by including it in a program section, as shown in the following snippet:
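This sketch assumes the spoa-mirror binary is installed at /usr/local/bin and that mirrored traffic should be replayed to a test endpoint of your choosing; check spoa-mirror --help for the exact flags your build supports:

```
program mirror
   command /usr/local/bin/spoa-mirror --runtime 0 --mirror-url http://test.local:8080/
```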
Learn more about the Traffic Shadowing agent in our blog post HAProxy Traffic Mirroring for Real-World Testing.
The Master CLI
When you start HAProxy, add the -S argument to expose the Master CLI, which is an interface for interacting with worker processes. It uses the following syntax:
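The -S argument takes a bind address, which is either a UNIX socket path or an IP address and port, optionally followed by bind options. The socket path below is illustrative:

```
$ haproxy -W -S /var/run/haproxy-master.sock -f /etc/haproxy/haproxy.cfg
```

If you run HAProxy under Systemd, add the -S argument to the ExecStart line of the unit file instead.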
This is different from the HAProxy Runtime API, which aims to control HAProxy’s dynamic features, such as stick tables and map files. Here, you’re given a smaller set of commands, but ones relevant for managing worker processes. You’ll find all of the available options by invoking the help command:
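You can send commands to the master CLI socket with a tool like socat; the socket path here matches the -S argument used when starting HAProxy and is illustrative:

```
$ echo "help" | socat /var/run/haproxy-master.sock -
```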
Then issue the show proc command to get a list of processes:
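The output looks something along these lines; the PIDs, program name, and uptimes are illustrative:

```
$ echo "show proc" | socat /var/run/haproxy-master.sock -
#<PID>          <type>          <relative PID>  <reloads>       <uptime>
1162            master          0               0               0d00h02m07s
# workers
1164            worker          1               0               0d00h02m07s
# programs
1163            api             -               0               0d00h02m07s
```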
The HAProxy processes are listed as workers, with programs listed separately. By default, there’s only one worker, but you can control this by setting nbproc in the global section of your configuration. For example, if you set nbproc to 3, then three workers would be listed. Using nbproc to create multiple HAProxy workers allows you to pin specific functions, such as TLS termination, to specific processes, but enabling multiple threads on a single process accomplishes the same result and is easier to manage. Learn more about the differences between multiple processes and multiple threads in our blog post Multithreading in HAProxy.
You can execute Runtime API commands against individual workers by prefixing the command with the relative PID. For example, use @1 to invoke commands against the first worker:
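For example, this sends the Runtime API’s show info command to the worker with relative PID 1; the socket path is illustrative:

```
$ echo "@1 show info" | socat /var/run/haproxy-master.sock -
```

The response is the same output you would get from querying that worker’s own Runtime API socket directly.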
Although you can’t use this technique to invoke commands against non-HAProxy processes, such as our Data Plane API program, it can be very useful just to see the list of running programs and know that they started as expected.
Over the years, how HAProxy forks workers has evolved, becoming better suited for modern service managers like Systemd and opening the door to features like the HAProxy Process Manager, which lets you run arbitrary programs. The Process Manager uses a simple configuration syntax, the program section, which can be used to start any number of external applications in support of HAProxy. It’s often used to start the HAProxy Data Plane API, but it’s also convenient for running SPOAs like the Traffic Shadowing agent.
HAProxy Enterprise is the industry-leading software load balancer. It powers modern application delivery at any scale and in any environment, providing the utmost performance, observability and security. Organizations harness its cutting-edge features and enterprise suite of add-ons, backed by authoritative expert support and professional services. Ready to learn more? Contact us and sign up for a free trial!