HAProxy Technologies – https://www.haproxy.com

How to Enable Health Checks in HAProxy
Tue, 14 Sep 2021

HAProxy provides active, passive, and agent health checks.

HAProxy makes your web applications highly available by spreading requests across a pool of backend servers. If one or even several servers fail, clients can still use your app as long as there are other servers still running.

The caveat is, HAProxy needs to know which servers are healthy. That’s why health checks are crucial. Health checks automatically detect when a server becomes unresponsive or begins to return errors; HAProxy can then temporarily remove that server from the pool until it begins to act normally again. Without health checks, HAProxy has no way of knowing when a server has become dysfunctional.

Note: Health checks complement other fail-safe measures in HAProxy such as retries and redispatches. Read our blog post HAProxy Layer 7 Retries and Chaos Engineering to learn more.

You have access to three types of health checks: active, passive, and agent. Let’s learn about each one.

Active Health Checks

The simplest solution is to poll your backend servers by attempting to connect at a defined interval. This is known as an active health check. If HAProxy doesn’t get a response back, it determines that the server is unhealthy and after a certain number of failed connections, it removes the server from the rotation.

If you want to keep the default settings, configuring an active health check involves simply adding a check parameter to a server line in a backend. In the following example, we’ve enabled active health checks for each server:
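A minimal backend along these lines might look like this (server names and addresses are illustrative):

```haproxy
backend webservers
  balance roundrobin
  server web1 192.168.50.2:80 check
  server web2 192.168.50.3:80 check
  server web3 192.168.50.4:80 check
```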

HAProxy will try to establish a TCP connection every two seconds. After three failed connections, the server is removed, temporarily, until HAProxy gets at least two successful connections, after which it reinstates the server into the backend. You can customize these settings, changing the interval, number of failed checks that trigger a removal, or the number of successful checks that reinstate the server.

The inter parameter changes the interval between checks; it defaults to two seconds. The fall parameter sets how many failed checks are allowed; it defaults to three. The rise parameter sets how many passing checks there must be before returning a previously failed server to the rotation; it defaults to two. In the example below, we’ve set new values:
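For instance, this sketch checks every five seconds, removes a server after three failures, and requires five passing checks before reinstating it (values and addresses are illustrative):

```haproxy
backend webservers
  server web1 192.168.50.2:80 check inter 5s fall 3 rise 5
```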

While attempting to connect helps determine whether an application is up and running, it can’t tell you whether the app is behaving normally. For web applications, you can switch to using an HTTP health check instead. An HTTP health check sends an HTTP request and expects a successful response in the 2xx or 3xx range, such as 200 OK or 302 Found.

Just add option httpchk to the backend, as shown:
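For example (the backend shown is illustrative):

```haproxy
backend webservers
  option httpchk
  server web1 192.168.50.2:80 check
```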

By default, HAProxy makes a GET request to the URL path /, but you can change that by adding an http-check send line. Below, we send a GET request to the URL path /health. A common technique is to program the /health endpoint to do a thorough check of your application and its dependencies and then return a single successful response if everything looks good.
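A sketch using the http-check send directive (the /health endpoint is an assumed convention in your application, not something HAProxy provides):

```haproxy
backend webservers
  option httpchk
  http-check send meth GET uri /health
  server web1 192.168.50.2:80 check
```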

To send a POST request with a JSON body, use this form, which includes a Content-Type request header and a message body:
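Something along these lines (the URL path and JSON payload are illustrative):

```haproxy
backend webservers
  option httpchk
  http-check send meth POST uri /health hdr Content-Type application/json body '{"command":"check"}'
  server web1 192.168.50.2:80 check
```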

While it is a common pattern to have the server do a thorough check on its end, you can also configure HAProxy to perform several checks. In the example below, we define two checks, both of which must be successful. Each block starts with http-check connect.
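A sketch of two consecutive checks, the first against the web app itself and the second against a separate health port (port 8080 and the URL paths are illustrative):

```haproxy
backend webservers
  option httpchk
  http-check connect
  http-check send meth GET uri /health
  http-check expect status 200
  http-check connect port 8080
  http-check send meth GET uri /internal-health
  http-check expect status 200
  server web1 192.168.50.2:80 check
```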

The http-check connect directive also lets you connect to the server using SSL and specify the protocol, such as HTTP/2, by using ALPN, as shown below:
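For example, to check over TLS and negotiate HTTP/2 via ALPN (certificate verification is disabled here only to keep the sketch simple):

```haproxy
backend webservers
  option httpchk
  http-check connect ssl alpn h2,http/1.1
  http-check send meth GET uri /health
  server web1 192.168.50.2:443 check verify none
```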

Something else that you can do is tell HAProxy to expect a certain status code to be returned or that a string should be included in the HTTP response body. Use the http-check expect directive with either the status or string keyword. In the following example, the application must return a 200 OK response status to be considered healthy:
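A minimal sketch of that check (backend and addresses are illustrative):

```haproxy
backend webservers
  option httpchk
  http-check send meth GET uri /health
  http-check expect status 200
  server web1 192.168.50.2:80 check
```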

Or, you can require the response body to contain a case-sensitive string, such as success:
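For example:

```haproxy
backend webservers
  option httpchk
  http-check send meth GET uri /health
  http-check expect string success
  server web1 192.168.50.2:80 check
```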

HAProxy also supports other protocol-specific health checks for LDAP, MySQL, PostgreSQL, Redis, and SMTP.

Passive Health Checks

Whereas an active health check continually polls the server with either a TCP connection or an HTTP request, a passive health check monitors live traffic for errors. You can enable this mode by adding the check, observe, error-limit, and on-error parameters to a server line, as shown below:
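A sketch of such a server line (the error limit of 10 is an arbitrary example value):

```haproxy
backend webservers
  option httpchk
  server web1 192.168.50.2:80 check observe layer7 error-limit 10 on-error mark-down
```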

Set the observe parameter to layer4 to monitor all TCP connections for problems or to layer7 to watch all HTTP responses for errors. Successful responses are those with an HTTP status code in the range 100-499, or equal to 501 or 505. The error-limit parameter sets how many consecutive requests can have errors before the on-error rule kicks in. Here, the rule marks the server as down.

Passive health checks always coexist with active health checks, with the latter doing its normal polling while also being responsible for reviving a server after it has been marked as down by a passive health check. In other words, you get both types of checking simultaneously. The benefit of that is that you will detect when only a part of your web application is malfunctioning, even if the active health check URL isn’t targeting that part. For example, if active health checks monitor the /health URL, but actual clients are getting errors on the /cart URL, HAProxy will detect that.

Beware that the active health checks will revive the server sooner or later, even if the /cart URL is still malfunctioning. One way to keep an unhealthy server down for longer is to require more successful checks before reviving it, by setting the rise parameter higher. Another solution is to turn your passive health check into a full-blown circuit breaker by adding the slowstart parameter, which works well for backend services. We show how to do that in the blog post Circuit Breaking in HAProxy.

Agent Health Checks

While actively polling servers and observing live traffic are great ways to detect failures, they don’t give you a rich sense of a server’s overall state. For example, you can’t easily tell how much CPU load is being placed on a server or whether it’s running dangerously low on disk space.

With HAProxy, you can communicate with an external agent, which is software running on the server that’s separate from the application being load balanced. Since the agent has full access to the system, it can check the machine’s vitals more closely.

Check the sample project in GitHub to see a working example.

External agents can do more than just respond back with a binary up or down status. They can send signals to HAProxy that update its state, such as:

  • mark the server as up or down
  • put the server into maintenance mode
  • change the amount of traffic flowing to the server
  • increase or decrease the maximum number of clients that can connect concurrently

The agent will invoke an action when it detects a particular condition on the server. The communication protocol between the agent and HAProxy is simply ASCII text sent over a TCP connection, which makes it easy to write your own external agent program. The agent might send back any of the following (note that the end-of-line character, \n, is required):

Agent sends back Result
down\n The server is put into the down state
up\n The server is put into the up state
maint\n The server is put into maintenance mode
ready\n The server is taken out of maintenance mode
50%\n The server’s weight is halved
maxconn:10\n The server’s maximum connections is set to 10

On the HAProxy side, add an agent-check parameter to enable communication with the agent program.
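A sketch of a server line wired up to an agent (the agent address and port are assumptions about where your agent listens):

```haproxy
backend webservers
  server web1 192.168.50.2:80 check weight 100 agent-check agent-inter 5s agent-addr 192.168.50.2 agent-port 9999
```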

There are a few other parameters shown here, so let’s describe them. Use agent-inter to set the interval of the checks. Set the agent-addr and agent-port parameters to the IP address and port where the agent is listening. Using an external agent gives you flexibility in how a server is checked and provides more ways to react. For example, instead of shutting off a server, you might decide to simply dial back the amount of traffic it receives.

The HAProxy Enterprise Real-time Dashboard

When you operate a non-trivial infrastructure, it soon becomes obvious that you need a consolidated view of your system. HAProxy Enterprise has a dashboard, called the Real-time Dashboard, where you can observe the current status of all of your services.

HAProxy Enterprise Real-time Dashboard

Having a central management dashboard makes health monitoring much easier. You can easily filter the list and each server can be enabled and disabled with a button click. You can also apply changes to batches of servers without needing to update each one individually.

Conclusion

In this post, you learned how HAProxy provides three types of health checks: active health checks, passive health checks, and agent health checks. Enabling health checks ensures that users aren’t affected by malfunctioning servers.

Want to stay up to date on similar topics? Subscribe to this blog! You can also follow us on Twitter and join the conversation on Slack.

HAProxy Enterprise powers modern application delivery at any scale and in any environment, providing the utmost performance, observability, and security for your critical services. Organizations harness its cutting edge features and enterprise suite of add-ons, which are backed by authoritative, expert support and professional services. Ready to learn more? Sign up for a free trial.

Rate Limiting with the HAProxy Kubernetes Ingress Controller
Wed, 08 Sep 2021

Add IP-by-IP rate limiting to the HAProxy Kubernetes Ingress Controller.

DDoS (distributed denial of service) events occur when an attacker or group of attackers flood your application or API with disruptive traffic, hoping to exhaust its resources and prevent it from functioning properly. Bots and scrapers, too, can misbehave, making far more requests than is reasonable.

In this blog, we’ve covered several ways that you can use overall rate limiting to mitigate the effects of these kinds of events, but the HAProxy Kubernetes Ingress Controller offers even more fine-grained control to fend off DDoS attacks using several annotations that can help you build a powerful first line of defense on an IP-by-IP basis.

Rate Limit Requests

The most important annotation to understand is rate-limit-requests. This setting is an integer that defines the maximum number of requests that will be accepted from a source IP address during the rate-limit-period, which defaults to one second.

This is accomplished with the following annotation:
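For example, an Ingress definition might carry the annotation like this (the resource name, host, and service are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    haproxy.org/rate-limit-requests: "10"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```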

By adding this annotation to your config, any single IP address is limited to 10 requests per second; beyond that, its requests are denied with a 403 status code.

Rate Limit Period

Adding an annotation for rate-limit-period lets you specify a custom time period for your rate limits. The default is 1 second, which you could explicitly set with a string annotation of 1s. A period of one minute is set using the string 1m.
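For example, to allow 100 requests per minute per client (values are illustrative):

```yaml
metadata:
  annotations:
    haproxy.org/rate-limit-requests: "100"
    haproxy.org/rate-limit-period: "1m"
```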

Behind the scenes, a stick-table is created to track the rate of requests. The stick table name is composed of the string Ratelimit- plus the rate-limit-period expressed in milliseconds. For example, if the rate-limit-period is set to two seconds, the name of the table will be Ratelimit-2000.

Custom Status Codes

To return a status code other than 403, add an annotation for rate-limit-status-code.

This sets the status code to return when rate limiting has been triggered. It’s a standard HTTP status code and defaults to HTTP 403. Here, we set it to the HTTP 429 Too Many Requests response status code:
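A sketch of the annotation:

```yaml
metadata:
  annotations:
    haproxy.org/rate-limit-requests: "10"
    haproxy.org/rate-limit-status-code: "429"
```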

Rate Limit Size

One final annotation we’ll take a look at is rate-limit-size, which is the number of IP address entries in the stick table. By default, the rate limit stick table is limited to 100,000 entries, meaning that 100,000 individual IP addresses are being tracked at any time. When this value is exceeded, the oldest entries in the table are removed and new addresses are added.

The rate-limit-size annotation is expressed as an integer.
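For example, to track up to one million client addresses (the value is illustrative):

```yaml
metadata:
  annotations:
    haproxy.org/rate-limit-size: "1000000"
```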

Conclusion

In this blog post, you learned that you can fine tune your HAProxy Kubernetes Ingress Controller’s configuration to leverage some powerful annotations to protect your services and APIs.

Want to know when more content like this is published? Subscribe to our blog or follow us on Twitter. You can also join the conversation on Slack.

September/2021 – CVE-2021-40346: Duplicate ‘Content-Length’ Header Fixed
Tue, 07 Sep 2021

If you are using HAProxy 2.0 or newer, it is important that you update to the latest version. A vulnerability was found that makes it possible for an attacker to bypass the check for a duplicate HTTP Content-Length header, permitting a request smuggling attack or a response-splitting attack. Our analysis confirmed that the duplication is achieved by making use of the memory layout of HAProxy’s internal representation of an HTTP message to slip a select character from the header’s name to its value. Due to the difficulty in executing such an attack, the risk is low.

Affected Versions and Remediation

The following section lists the affected versions and their fixed versions. We recommend that you upgrade if you are using any of these.

Affected Version Fixed Version
HAProxy 2.0 2.0.25
HAProxy 2.2 2.2.17
HAProxy 2.3 2.3.14
HAProxy 2.4 2.4.4
HAProxy Enterprise 2.0r1 2.0r1-235.1230
HAProxy Enterprise 2.1r1 2.1r1-238.625
HAProxy Enterprise 2.2r1 2.2r1-241.505
HAProxy Enterprise 2.3r1 2.3r1-242.345
HAProxy Kubernetes Ingress Controller 1.6 1.6.7
HAProxy Enterprise Kubernetes Ingress Controller 1.6 1.6.7
HAProxy ALOHA 11.5 11.5.13
HAProxy ALOHA 12.5 12.5.5
HAProxy ALOHA 13.0 13.0.7

Workarounds

If you are not able to update right away, you can apply the following rules to mitigate the issues. These should be added to your frontend.
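The two deny rules below are the mitigation itself; the frontend name, bind line, and backend are illustrative placeholders for your own configuration:

```haproxy
frontend myfrontend
  bind :80
  http-request deny if { req.hdr_cnt(content-length) gt 1 }
  http-response deny if { res.hdr_cnt(content-length) gt 1 }
  default_backend webservers
```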

These lines reject requests or responses that have more than one Content-Length header.

Testing Your HAProxy Configuration
Tue, 31 Aug 2021

Learn how to test your HAProxy Configuration.

Properly testing your HAProxy configuration file is a simple, yet crucial part of administering your load balancer. Remembering to run one simple command after making a change to your configuration file can save you from unintentionally stopping your load balancer and bringing down your services.

For the impatient, here’s a simple command to test a configuration file:
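Assuming your configuration lives at the default path:

```shell
haproxy -c -f /etc/haproxy/haproxy.cfg
```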

The ‘-c‘ flag enables “check mode”, which tells HAProxy to parse and validate the configuration without actually starting or restarting HAProxy.

Tell Me More

How is this different from using the configuration check that’s built into the systemd reload command?

Invoking HAProxy in check mode has no effect upon the load balancer’s state.  If your HAProxy is not running, testing the file in this manner will not start it, even momentarily.  If your load balancer is running, it will not interrupt it. This lets you perform ad hoc tests of changes you’d like to make, without interfering with or interrupting your services.

Let’s look at what happens when you use the different ways systemd starts and restarts HAProxy:

$ sudo systemctl start haproxy On startup, the HAProxy process reads your configuration file one time and stores its parameters in memory.  After that, it doesn’t touch the file again until you tell it to. Not needing to refer to the file on disk lets HAProxy run incredibly fast.
$ sudo systemctl restart haproxy HAProxy stops immediately, killing any active connections.  It then attempts to start HAProxy with the specified configuration file.  If that file has errors, HAProxy will not start.  This is probably not what you want to happen.
$ sudo systemctl reload haproxy Hitless reloads let the active connections gracefully finish using the old configuration while bringing up new connections with the new config. If you attempt a reload with a broken config, it will give an error, but will not interrupt the previously running service.
$ sudo systemctl stop haproxy HAProxy stops.  The configuration file is not read.

Getting into the habit of using hitless reloads when you roll in a new config goes a long way towards avoiding unintended interruptions, but having check mode at your disposal adds a layer of fine-grained control, especially if you are incorporating external scripts.

Calling HAProxy Directly

Any user can start an HAProxy process to create a load balancer, as long as the configuration file isn’t trying to access any privileged ports, files or sockets. Let’s create a configuration with a simple typo, to see what HAProxy does when it encounters an error.

Paste the following into a file called test.cfg in your server’s /tmp directory:
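Here is one possible configuration containing a deliberate typo, chekc instead of check, on the final line (addresses are illustrative):

```haproxy
global
  daemon
  maxconn 32

defaults
  mode http
  timeout connect 5s
  timeout client 1m
  timeout server 1m

backend webservers
  server web1 192.168.50.2:80 chekc
```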

Test this with:
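```shell
haproxy -c -f /tmp/test.cfg
```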

You’ll get the following error message:

As you can see from the output, it tells you the file and line number of the first fatal error it encountered, [/tmp/test.cfg:12], where it broke on “unknown keyword ‘chekc'”. Recent versions of HAProxy will even suggest a correction: “did you mean ‘check’ maybe?”

Testing a Running Proxy’s Config

Testing a throwaway config is one thing, but when you make a change to a running service’s configuration file, how should you test? The procedure is exactly the same:
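Run check mode against the live configuration file, using sudo if the file or the certificates it references are readable only by root:

```shell
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```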

This will have no effect on the running HAProxy process, as long as you have the check mode (-c) flag in place.

Conclusion

In this article, you learned a simple command for testing HAProxy configuration files and also a bit about how HAProxy starts, restarts, and reloads.

You may never need to use check mode, as HAProxy’s systemd service script performs its own tests and won’t let you start with a bad config, but using check mode is a powerful technique to have at your fingertips when debugging, scripting, or incorporating HAProxy’s startup into another system.

HAProxy Enterprise powers modern application delivery at any scale and in any environment, providing the utmost performance, observability, and security for your critical services. Organizations harness its cutting edge features and enterprise suite of add-ons, which are backed by authoritative, expert support and professional services. Ready to learn more? Sign up for a free trial. Want to know when more content like this is published? Subscribe to our blog or follow us on Twitter. You can also join the conversation on Slack.

The HAProxy APIs
Tue, 24 Aug 2021

The HAProxy load balancer provides a set of APIs for configuring it programmatically.

Although many people enjoy the simplicity of configuring their HAProxy load balancer by directly editing its configuration file, /etc/haproxy/haproxy.cfg, others want a way to do it without logging into the server. Or, they want a way that integrates with bespoke software. For example, they want to add pools of servers to the load balancer programmatically as a part of their CI/CD deployment pipeline.

HAProxy and HAProxy Enterprise provide two APIs for managing your load balancer remotely or programmatically: the HAProxy Runtime API and the HAProxy Data Plane API. Why are there two? What purpose does each serve? In this blog post, you’ll learn about them and find resources for getting started.

The HAProxy Runtime API

The HAProxy Runtime API is the older of the two, introduced in 2016 with HAProxy version 1.7. Since then, it has grown to cover many of the load balancer’s features. You can use it to enable and disable servers, stop and start health check probes, set ACL values, inspect stick tables, view statistics, and more. Built directly into the codebase of HAProxy, its version releases are intertwined into the releases of the load balancer itself.

There are two ways to expose the API: you can either have it listen on a UNIX domain socket like /var/run/haproxy/api.sock, in which case you can access it only while logged into the server, or you can publish it on a TCP/IP address. To use a socket, add a stats socket line to the global section of your configuration file:
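For example (the socket path and permissions here are a common convention, not a requirement):

```haproxy
global
  stats socket /var/run/haproxy/api.sock mode 660 level admin
```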

To use a TCP/IP address, change the stats socket line so that it sets an IP address and port instead of a file path:
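For example, to listen on localhost port 9999 (the address and port are illustrative):

```haproxy
global
  stats socket ipv4@127.0.0.1:9999 level admin
```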

You send commands by echoing them to the listening socket or address. For example, to see a list of available functions, you would echo the help command and pipe it to a tool like socat:
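Assuming the socket path from the earlier global section:

```shell
echo "help" | socat stdio /var/run/haproxy/api.sock
```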

If using a TCP/IP address, the command looks like this:
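```shell
echo "help" | socat stdio tcp4-connect:127.0.0.1:9999
```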

All of the Runtime API’s commands use this form. For example, to drain traffic from a server you would echo the set server command:
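Assuming a backend named webservers containing a server named web1:

```shell
echo "set server webservers/web1 state drain" | socat stdio /var/run/haproxy/api.sock
```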

The defining trait of the Runtime API is that all configuration changes are applied in memory, but do not alter the HAProxy configuration file on disk. The reason for that is simple. For better performance and security, HAProxy never reads from the filesystem after completing its initial startup. During startup, it reads the haproxy.cfg file, along with any other supporting files like SSL certificates and map files, and then keeps a representation of those files in memory. The Runtime API modifies those in-memory representations only. So, after calling an API function, you should not expect to see any change to the files on disk, but HAProxy’s representation of those files will have been updated.

The advantage of making changes in memory only is that you don’t need to reload the load balancer for the changes to take effect. Avoiding a reload helps speed up operations and uses fewer computer resources. The disadvantage is that the changes are not durable. Restarting or reloading the load balancer loses all changes you’ve made with the Runtime API. There is a way to save and restore some of the data by using state files, which are files into which HAProxy dumps the current state of your servers including each server’s IP address, weight and drain status. However, this doesn’t cover the full range of changes that you could have made with the API.

The Runtime API works well for making quick changes on the fly like enabling and disabling servers and changing health check probes; it’s also convenient for viewing statistics and data related to servers, stick tables, maps, and peers. The most recent release of HAProxy adds the ability to add and remove servers via the API. However, to persist your changes, you’ll need to pair it with a mechanism that writes the changes to disk. As you’ll see, the Data Plane API, which is the spiritual successor to the Runtime API, fulfills that role. It writes changes to disk and also calls the Runtime API itself when a given operation is available.

Learn more about the HAProxy Runtime API.

The HAProxy Data Plane API

The HAProxy Data Plane API was released in 2019 at the same time as HAProxy 2.0, but it has its own codebase and follows its own release schedule. Not being dependent on HAProxy’s runtime, the Data Plane API is a self-hosted RESTful HTTP service. A RESTful architecture means that HTTP verbs like GET, PUT, POST and DELETE change the behavior of an API endpoint to either read, update, create or delete a configuration setting.

The Data Plane API is designed to be embedded into other software. It’s built upon the OpenAPI standard, which makes it compatible with a variety of code-generation tools that you can use to generate client-side code in a number of languages. That makes it easier to integrate into your own program.

It ships as a program that you run alongside HAProxy; you can download it from its GitHub page. HAProxy Enterprise ships it as a system package for extra convenience. Being a separate program has some advantages. For one thing, having its versions be independent from HAProxy allows bug fixes and new features to be published without needing to deploy a newer version of the load balancer. The API is written in the Go programming language, which many would argue is a simpler language than C, making it accessible to more developers who want to contribute.

With the Data Plane API, you can build an HAProxy configuration nearly from the ground up. You can add pools of servers, define frontend listeners, update ACLs and maps, manage SSL certificates, and more. Recent releases have added support for powerful features like service discovery with Consul and AWS EC2. API calls update the configuration file on disk so that they persist past a restart. However, the Data Plane API invokes the Runtime API where possible to avoid a reload.

Read the official documentation to learn how to set it up. Basically, you will create the Data Plane API configuration file and then, when starting the program, point to that file with the -f flag:
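A sketch, assuming the binary is named dataplaneapi and the configuration file lives alongside HAProxy’s (both assumptions; check the documentation for your install):

```shell
dataplaneapi -f /etc/haproxy/dataplaneapi.yml
```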

Commands typically return or take JSON objects. For example, the info function returns JSON that describes the API’s version:
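For example, using curl with HTTP basic authentication (the credentials and the port are whatever you configured when setting up the API; 5555 is a common default):

```shell
curl -s -u admin:adminpwd http://localhost:5555/v2/info
```

The response is a JSON object describing, among other things, the API’s version and build date.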

The API supports batching multiple operations together into a transaction and then executing them as an atomic operation. Transactions let you make larger changes at once, such as to configure a frontend and corresponding backend at the same time.

To manage an HAProxy instance, you would just need to run the API next to it.

Learn more about the HAProxy Data Plane API.

Conclusion

By harnessing the Runtime API or the Data Plane API, you can manage your HAProxy configuration without editing its configuration file by hand. The Runtime API is great for changing the state of your servers, SSL certificates, map files, and stick tables without requiring a reload. Use the Data Plane API to build a configuration from the ground up using RESTful HTTP commands. Behind the scenes, the Data Plane API calls the Runtime API whenever it can to avoid a reload. Together, these APIs make it possible to integrate HAProxy into a variety of automation, CI/CD, and observability tools.

Want to learn more about HAProxy Enterprise, the load balancer that powers modern application delivery at any scale and in any environment? Sign up for a free trial.

Subscribe to our blog or follow us on Twitter to be notified when more content like this is published. You can also join the conversation on Slack.

[On-Demand Webinar] Run the HAProxy Kubernetes Ingress Controller in External Mode
Fri, 20 Aug 2021

Traditionally, connecting clients to services in Kubernetes has required multiple layers of proxies. For example, an external load balancer will route traffic to your nodes. Then, kube-proxy passes the message to the node running an ingress controller. The ingress controller then relays the message to the correct pod. Having multiple layers of proxies like this can add latency and complicates troubleshooting.

The HAProxy Kubernetes Ingress Controller now supports running as an external load balancer itself, reducing latency by cutting out the extra layers and routing traffic directly to the pod.

Join our webinar to learn more about running the ingress controller in external mode.

Speaker: Baptiste Assmann

REGISTER NOW

Install HAProxy on Ubuntu
Wed, 18 Aug 2021

The post Install HAProxy on Ubuntu appeared first on HAProxy Technologies.


Learn how to install HAProxy on Ubuntu 20.04.

 

Ubuntu 20.04 is a great choice for installing your HAProxy software load balancer. It’s a free Linux operating system that’s fast, secure, and best of all, it’s easy to use.
One of the features that makes Ubuntu so accessible to even the newest of users is its package manager, apt, the Advanced Package Tool. apt lets you easily install and remove packages and dependencies, generally without having to worry about the nitty-gritty details of file paths, libraries, compilers, and version conflicts.
It’s so easy that installing HAProxy on a brand new Ubuntu box can be as simple as a one-line apt command:
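That one-liner is simply the stock package install. As a sketch (the `haproxy` package is the one in Ubuntu's default repositories):

```shell
# Install HAProxy from the standard Ubuntu repositories
sudo apt install -y haproxy
```

This installs whatever version Ubuntu packages for your release.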

But wait, there’s more!™

Yes, that simple command will quickly and easily set you up with HAProxy, but you’ll find that the version you’ve just installed probably lags behind the current release by a minor version number or two, sometimes as much as a major version number.

The version you get with apt out of the box will be stable and secure, but it’s going to lack some of the cool new features you’ve been reading about, such as FIX protocol support, HTTP/2 WebSockets, or Dynamic SSL Certificate Storage.

You can compile the source code yourself, but that can be a lot of extra steps. Fortunately, Vincent Bernat has done all of the hard work and released something called a PPA, or Personal Package Archive. A custom PPA tells Ubuntu to use a software source outside its normal channels and install a custom package. In the case of HAProxy, which is already provided by the official sources, once the PPA is installed, apt will use this custom package over the default.

Install the latest HAProxy using a PPA

Install Ubuntu 20.04 Server. For testing, I’m using a virtual machine running on my laptop. Everything we do here should work equally well on bare metal or any cloud provider.

Head over to haproxy.debian.net, where you can select the install instructions for your OS. At the time of this writing, the latest version was 2.4. Select the options for Ubuntu Focal (20.04 LTS) and HAProxy 2.4-stable (LTS). This will bring you to a page listing the commands you need to run:

The first command installs the software-properties-common package which helps you manage any PPAs you install. It’s probably already installed, but running it again ensures that it’s available. The second command puts the PPA into the list of software sources.
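From memory, the commands that page lists for HAProxy 2.4 on Focal look roughly like the following — verify them against haproxy.debian.net before running, since the PPA name encodes the branch you chose:

```shell
# Make sure the PPA management tooling is available
sudo apt install --no-install-recommends software-properties-common
# Add Vincent Bernat's HAProxy 2.4 PPA to the software sources
sudo add-apt-repository ppa:vbernat/haproxy-2.4
```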

We’re now ready to install the very latest HAProxy:
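The install command pins the package to the 2.4 branch (the pin syntax is explained just below):

```shell
# Install the latest HAProxy from the 2.4 branch
sudo apt install haproxy=2.4.\*
```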

Adding =2.4.\* to the end tells apt that we want to stay on the latest version of HAProxy in the 2.4 branch, so if there are future updates in that branch, you’ll get them when you do an apt upgrade.

To show what you’ve installed:
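Asking the binary for its version is the quickest check:

```shell
# Print the installed HAProxy version
haproxy -v
```

You can also run `apt-cache policy haproxy` to confirm which software source the package came from.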

Do an update and upgrade, to ensure you have the latest software packages and any security fixes:
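The standard pair of commands:

```shell
# Refresh the package lists, then apply any pending upgrades
sudo apt update
sudo apt upgrade -y
```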

You now have a fully up-to-date Ubuntu system running the latest version of HAProxy with a stock configuration located at /etc/haproxy/haproxy.cfg; you’re ready to start customizing.

On the HAProxy blog, head over to HAProxy Configuration Basics: Load Balance Your Servers. It will walk you through the steps to balance your traffic like a pro.

Conclusion

In this post, you learned how to install the open-source edition of HAProxy on one of the most popular and most powerful operating systems around, Ubuntu 20.04. You installed a custom PPA, or Personal Package Archive, to give you complete control over the version, ensuring that future updates remain within your desired branch.

HAProxy Enterprise is the world’s fastest and most widely used software load balancer. It powers modern application delivery at any scale and in any environment, providing the utmost performance, observability, and security for your critical services. Organizations harness its cutting-edge features and enterprise suite of add-ons, which are backed by authoritative, expert support and professional services. Ready to learn more? Sign up for a free trial.

Want to know when more content like this is published? Subscribe to our blog or follow us on Twitter. You can also join the conversation on Slack.

August/2021 – HAProxy 2.0+ HTTP/2 Vulnerabilities Fixed https://www.haproxy.com/blog/august-2021-haproxy-2-0-http-2-vulnerabilities-fixed/ https://www.haproxy.com/blog/august-2021-haproxy-2-0-http-2-vulnerabilities-fixed/#comments Mon, 16 Aug 2021 20:58:38 +0000 https://www.haproxy.com/?p=415491 A vulnerability was found that makes it possible to abuse the HTTP/2 parser, allowing an attacker to prepend hostnames to a request, append top-level domains to an existing domain, and inject invalid characters through the :method pseudo-header.

The post August/2021 – HAProxy 2.0+ HTTP/2 Vulnerabilities Fixed appeared first on HAProxy Technologies.


If you are using HAProxy 2.0 or newer, it is important that you update to the latest version. A vulnerability was found that makes it possible to abuse the HTTP/2 parser, allowing an attacker to prepend hostnames to a request, append top-level domains to an existing domain, and inject invalid characters through the :method pseudo-header. Willy Tarreau has also announced this on the mailing list here.

Timeline

5 August 2021 19:00 UTC – James Kettle, in his presentation HTTP/2: The Sequel is Always Worse, describes a new class of vulnerabilities that could enable desync attacks on some proxy servers during HTTP/2 to HTTP/1 translation. This presentation prompts Willy Tarreau, chief maintainer of HAProxy, to assess whether HAProxy is affected.

6 August 2021 00:43 UTC – A link to the article is shared with the HTTP working group.

6 August 2021 02:00 UTC – Willy reads the article and begins checking the code.

6 August 2021 03:43 UTC – Willy concludes that HAProxy is fine and responds to the HTTP working group that he suspects this would only affect older implementations, as modern ones are expected to be safe in this regard.

7 August 2021 22:19 UTC – HAProxy contributor Tim Düsterhus contacts Willy reporting that he found that :path and :scheme can be abused, but only when the backend is HTTP/2, which differs from the findings in the article. He also supplies a reproducer.

8 August 2021 05:28 UTC – Willy responds to Tim with a proposed fix for review and testing. A few exchanges happen over the weekend on cosmetic details of the patch.

10 August 2021 20:15 UTC – Tim reports that he could also abuse the :method field, in this case assuming a vulnerable HTTP/1 server is located behind HAProxy.

10 August 2021 20:28 UTC – Willy proposes a fix for the method but overlooks a detail, resulting in a longer discussion with Tim.

11 August 2021 15:15 UTC – After a mistaken belief that it is possible to conduct a smuggling attack, the decision is taken to contact distro maintainers and keep the issue under embargo until the end of the week, with a release planned for the following Tuesday. Distro maintainers instantly respond favorably, confirming that they can deliver timely fixes if patches are provided quickly.

12 August 2021 07:30 UTC – Willy shares a preliminary batch of tested backports with distros, though the details of the impact on each branch are still under investigation, and the descriptions of the patches are still being edited as knowledge of the problem is refined.

16 August 2021 08:40 UTC – Willy signals distros that there is, after all, no request smuggling attack; it was a mistaken analysis resulting from modifications needed to test the request injection. Commit messages are updated.

17 August 2021 15:00 UTC – The embargo is lifted, stable versions with the fix are released to the public.

Vulnerability Overview

Tim Düsterhus discovered several issues within HAProxy’s handling of HTTP/2. The first allows an attacker to abuse the scheme pseudo-header (normally http or https) to prepend a hostname prefix to requests that are forwarded to HTTP/2 backend servers, causing the web servers to see a different hostname from the one that HAProxy saw and possibly bypassing some filtering performed at the proxy layer. When a web server is in HTTP/1 mode, the scheme is dropped, so it has no effect.

The second issue allows the :path pseudo-header to appear as part of the host header if it does not start with / or *. An attacker could use this to append a top-level domain to the existing domain and bypass hostname checks. Similarly, this one will be ignored if the request is forwarded to an HTTP/1 backend server, so only HTTP/2 to HTTP/2 communications are affected.

The third is a corner case within the current HTTP/2 specification, which makes the host header prevail over the :authority, causing rules applied to the host header to not necessarily match what the backend server sees once the authority is used to reconstruct the host header (or the host header dropped depending on the version, but this is the same). The new HTTP/2 specification, which is still being written, updates this by mandating that the host is always ignored when an authority is present, which will solve this problem in the long term.

Affected Versions & Remediation

The following section lists the affected versions, fixed version, and potential workarounds where available. We recommend that you upgrade immediately if you are using any of these.

Affected Versions Fixed Versions
HAProxy 2.0 versions <= 2.0.23 HAProxy 2.0.24
HAProxy 2.2 versions <= 2.2.15 HAProxy 2.2.16
HAProxy 2.3 versions <= 2.3.12 HAProxy 2.3.13
HAProxy 2.4 versions <= 2.4.2 HAProxy 2.4.3
HAProxy Enterprise 2.0r1 versions <= 1.0.0-234.1213 HAProxy Enterprise 2.0r1 >= 234.1215
HAProxy Enterprise 2.2r1 versions <= 1.0.0-240.455 HAProxy Enterprise 2.2r1 >= 240.490
HAProxy Enterprise 2.3r1 versions <= 1.0.0-239.297 HAProxy Enterprise 2.3r1 >= 241.329
HAProxy ALOHA 11 versions <= 11.5.10 HAProxy ALOHA 11.5.11
HAProxy ALOHA 12 versions <= 12.5.2 HAProxy ALOHA 12.5.3
HAProxy ALOHA 13 versions <= 13.0.4 HAProxy ALOHA 13.0.5

 

Workarounds

It’s important to note that the only issue that affects HTTP/1 communication with backend servers involves the use of an HTTP/1 server that fails to properly validate a request line. At the time of writing, we’re not aware of any such server failing to properly parse an HTTP/1 request among the mainstream servers such as Apache, NGINX, or Varnish. This means that most users who cannot upgrade can simply disable HTTP/2 communications with their backend servers if it was enabled. This is done by making sure that neither proto h2 nor alpn h2 is present on the server lines.

Users who are using less common servers and who fear these might process invalid requests should consider the alternative workarounds below.

If you are not able to update right away, you can apply the following rules to mitigate the issues. These should be added to your frontend.

Alternatively, you can remove alpn h2,http/1.1 from your bind line and add the following to your defaults section:

Acknowledgements

We would like to thank Tim Düsterhus for his efforts in identifying these issues and his responsible disclosure of them.

How to Run HAProxy with Docker https://www.haproxy.com/blog/how-to-run-haproxy-with-docker/ https://www.haproxy.com/blog/how-to-run-haproxy-with-docker/#comments Mon, 09 Aug 2021 09:00:18 +0000 https://www.haproxy.com/?p=414381   Can you run HAProxy as a Docker container? Yes! Did you even need to ask? Docker is ubiquitous these days and you’ll find that many applications have been Docker-ized; the HAProxy load balancer is no exception. Pardon the cliché, but HAProxy was born for this. As a standalone service that runs on Linux, porting […]

The post How to Run HAProxy with Docker appeared first on HAProxy Technologies.


 

Can you run HAProxy as a Docker container? Yes! Did you even need to ask? Docker is ubiquitous these days and you’ll find that many applications have been Docker-ized; the HAProxy load balancer is no exception. Pardon the cliché, but HAProxy was born for this. As a standalone service that runs on Linux, porting it to Docker certainly seemed natural.

Why would you want to run your load balancer inside of a Docker container? Are there performance penalties when doing so? Will it introduce any security issues?

In this blog post, you’ll learn why you might consider running HAProxy inside a container and what the ramifications could be. Then you’ll see how to do it. Note that we are covering how to run HAProxy, not the HAProxy Kubernetes Ingress Controller.

HAProxy Technologies builds its own set of Docker images under its haproxytech namespace. These are updated regularly with the latest patches and security updates. I will be using those images in this blog post. You’ll find them on Docker Hub.

The commands I demonstrate were performed on a Linux workstation, but will work just as well when using Docker Desktop for Windows or Docker Desktop for Mac.

The benefits of Docker

Do you want the ability to run HAProxy without needing to compile it, install dependencies, or otherwise alter your system?

Docker containers bring considerable benefits, chief among them being less ceremony around installation and execution. Docker allows you to drop a container onto a host system and instantly get a running service—no install scripts, no installing C libraries. The service is completely contained within the container and all you need to do is start it and then map a TCP port to it. When you deploy a container, you gain the ability to run an entire application complete with its runtime environment without ever actually installing it onto the host system.

Lifecycle management becomes standardized too. Starting, stopping, and removing a container are as easy as calling one-line docker commands. That in turn makes deployment a repeatable and testable process. It also lends itself to easier software upgrades.

The performance impact of running Docker

You want your load balancer to be fast, with no added latency from the environment. So, the question is, what is the impact of running HAProxy inside of a container?

In terms of CPU overhead, it helps to remember that, unlike a virtual machine, Docker does not require a layer of virtualization on top of the host operating system. A container runs on the host’s kernel and is basically just another process, albeit one with better isolation from other processes running on the host (it uses namespaces to accomplish this). It should come as little surprise then that a study by researchers at IBM found that the CPU overhead of using Docker is negligible.

Networking is another story. By default, Docker lets you access the services running inside containers by creating a bridge network to the host. This does incur latency due to the network address translation (NAT) that must happen between the container’s local network and the host’s bridge network. In the same IBM study cited before, the researchers found that Docker’s NAT doubled latency from roughly 35 µs to 70 µs for a 100-byte request from the client and 200-byte response from the application.

On the other hand, bridge networks are useful because they allow you to isolate groups of containers into a container-only network and expose only some of those containers to the host, which is handy for reducing the number of IP addresses required on your host’s network (think about the number of IPs required to run hundreds or possibly thousands of containers). If you’re interested in learning more about how networking works in Docker, there’s a deep dive available from the Docker team that you can watch on YouTube.

If you require very low latency you can switch to using Docker’s host network feature, which allows your container to share the same network as the host, cutting out the need for NAT. Then again, that doesn’t touch on what to do if you want to run Docker Swarm or Kubernetes, which use overlay networks, for which different network drivers like Project Calico and Cilium have solutions. However, that is outside the scope of this article.

In short, unless you require very low latency, you should be fine sticking with the default bridge networking option. Just be sure to test it out and see if you’re getting the throughput you need.

Security considerations of running Docker

You may be concerned by the fact that many Docker containers run their service as root, and that this root user is the same root user as on the host system. Concerns about a container breakout are legitimate, and HAProxy runs as root too. To put your mind at ease, however: HAProxy requires root access only so that it can bind to restricted TCP ports like 80 and 443. Once it has finished its startup, it drops its root privileges and runs as an unprivileged user.

People also weigh the risk that a container may be malicious. This is a good reason to stick with the haproxytech Docker images, which are curated by HAProxy Technologies.

Run HAProxy with Docker

We’ll create three instances of a web application, one instance of HAProxy, and a bridge network to join them together. So, once you’ve installed Docker, use the following command to create a new bridge network in Docker:
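A minimal form of that command (the network name `mynetwork` is my own choice, reused throughout the rest of this walkthrough):

```shell
# Create a user-defined bridge network for the containers to share
docker network create --driver=bridge mynetwork
```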

Then use the docker run command to create and run three instances of the web application. In this example, I use the Docker image jmalloc/echo-server. It’s a simple web app that returns back the details of the HTTP requests that you send to it.
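A sketch of the three commands, using the server names that the load balancer configuration refers to later (the `mynetwork` name assumes the bridge network was created with that name):

```shell
# Start three echo-server containers, each with a unique name,
# attached to the shared bridge network
docker run -d --name web1 --net mynetwork jmalloc/echo-server
docker run -d --name web2 --net mynetwork jmalloc/echo-server
docker run -d --name web3 --net mynetwork jmalloc/echo-server
```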

Notice that we assign each one a unique name and attach it to the bridge network we created. You should now have three web applications running, which you can verify by calling the docker ps command:

These containers listen on their own port 8080, but we did not map those ports to the host, so they are not routable. We’ll relay traffic to these containers via the HAProxy load balancer. Next, let’s add HAProxy in front of them. Create a file named haproxy.cfg in the current directory and add the following to it:
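A sketch of a configuration that matches the notes below — the ports, server names, and Runtime API socket are as described in this post, while the timeouts and socket path are placeholder choices of mine:

```
global
  stats socket /var/run/api.sock user haproxy group haproxy mode 660 level admin expose-fd listeners

defaults
  mode http
  timeout client 10s
  timeout connect 5s
  timeout server 10s

frontend stats
  bind *:8404
  stats enable
  stats uri /
  stats refresh 5s

frontend myfrontend
  bind :80
  default_backend webservers

backend webservers
  server web1 web1:8080 check
  server web2 web2:8080 check
  server web3 web3:8080 check
```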

A few things to note:

  • In the global section, the stats socket line enables the HAProxy Runtime API and also enables seamless reloads of HAProxy.
  • The first frontend listens on port 8404 and enables the HAProxy Stats dashboard, which displays live statistics about your load balancer.
  • The other frontend listens on port 80 and dispatches requests to one of the three web applications listed in the webservers backend.
  • Instead of using the IP address of each web app, we’re using their hostnames web1, web2, and web3. You can use this type of DNS-based routing when you create a Docker bridge network as we’ve done.

Next, create and run an HAProxy container and map its port 80 to the same port on the host by including the -p argument. Also map port 8404 for the HAProxy Stats page:
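Assuming haproxy.cfg is in the current directory, the command might look like this (the image tag and the in-container config path are assumptions based on the haproxytech images published on Docker Hub):

```shell
# Run HAProxy on the shared bridge network, mounting the local
# config directory read-only and publishing ports 80 and 8404
docker run -d --name haproxy --net mynetwork \
  -v "$(pwd):/usr/local/etc/haproxy:ro" \
  -p 80:80 -p 8404:8404 \
  haproxytech/haproxy-alpine:2.4
```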

Calling docker ps afterwards shows that HAProxy is running:

You can access the echo-server web application at http://localhost. Each request to it will be load balanced by HAProxy. Also, you can see the HAProxy Stats page at http://localhost:8404.

If you make a change to your haproxy.cfg file, you can reload the load balancer—without disrupting traffic—by calling the docker kill command:
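docker kill sends a signal to the container's main process rather than necessarily terminating it; sending SIGHUP tells the HAProxy master process to reload its configuration without dropping connections:

```shell
# Signal the HAProxy container to reload its configuration
docker kill -s HUP haproxy
```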

To delete the containers and network, run the docker stop, docker rm, and docker network rm commands:
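For example, using the container and network names from above:

```shell
# Stop and remove all containers, then remove the bridge network
docker stop web1 web2 web3 haproxy
docker rm web1 web2 web3 haproxy
docker network rm mynetwork
```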

Conclusion

In this blog post, you learned how running HAProxy inside of a Docker container can simplify its deployment and lifecycle management. Docker provides a standardized way for deploying applications, making the process repeatable and testable. While the CPU overhead of running Docker is negligible, it can incur extra network latency, but the impact of that depends on your use case and throughput needs.

To run HAProxy, simply create an HAProxy configuration file and then call the docker run command with the name of the HAProxy Docker image. HAProxy Technologies supplies up-to-date Docker images on Docker Hub.

HAProxy Enterprise powers modern application delivery at any scale and in any environment, providing the utmost performance, observability, and security for your critical services. Organizations harness its cutting edge features and enterprise suite of add-ons, which are backed by authoritative, expert support and professional services. Ready to learn more? Sign up for a free trial.

Want to know when more content like this is published? Subscribe to our blog or follow us on Twitter. You can also join the conversation on Slack.

AWS EC2 Service Discovery with HAProxy https://www.haproxy.com/blog/aws-ec2-service-discovery-with-haproxy/ https://www.haproxy.com/blog/aws-ec2-service-discovery-with-haproxy/#comments Mon, 26 Jul 2021 13:33:34 +0000 https://www.haproxy.com/?p=413231 AWS Auto Scaling groups are a powerful tool for creating scaling plans for your application. They let you dynamically create a group of EC2 instances that will maintain a consistent and predictable level of service. HAProxy’s Data Plane API adds a cloud-native method known as Service Discovery to add or remove these instances within a […]

The post AWS EC2 Service Discovery with HAProxy appeared first on HAProxy Technologies.


AWS Auto Scaling groups are a powerful tool for creating scaling plans for your application. They let you dynamically create a group of EC2 instances that will maintain a consistent and predictable level of service. HAProxy’s Data Plane API adds a cloud-native method known as Service Discovery to add or remove these instances within a backend in your proxy as scaling events occur.

In this article, we’ll take a look at the steps used to integrate this functionality into your workflow. We’ll create an AWS Auto Scaling group using a custom launch template. Then we’ll add an HAProxy Enterprise instance in the form of a preconfigured AMI. Finally, we’ll use the Data Plane API to enable service discovery.

Of course, you have the option of using the community version of HAProxy, though its installation and configuration are outside the scope of this blog post.

To begin, log in to your AWS Console and select AWS Services, All Services, EC2.

Create a Launch Template

Launch Templates define the parameters that are the same across all of your instances. You can think of them as a generic description of each of the servers that make up your application, such as the machine image and instance type, without the parameters that are unique to an instance, such as its hostname.

When instances are created from the launch template, the HAProxy Data Plane API watches its Virtual Private Cloud (VPC) for specially tagged instances to add to the load balancer. This is known as service discovery.

Let’s create a new launch template that will serve as the blueprint for creating application servers.

In the AWS console, select Instances > Launch Templates > Create launch template.

Use the following values to create a simple template:

Key Value
Launch template name MyTemplate
Template version description A prod webserver for MyApp
Auto Scaling guidance Selected [X]

There are two types of tags that you can define on the template creation screen: Template Tags and Resource Tags. Template tags attach to the template itself, while resource tags are passed on to the created instances. Add the following resource tags to the template. These tags tell the HAProxy Data Plane API that the instances created from this template are to be discovered and that these instances are to be used as servers in a backend.

Tag Value
HAProxy:Service:Name MyApp
HAProxy:Service:Port 80

It is critical that the tags themselves are named exactly as shown here for the service discovery to succeed. The value you assign to the HAProxy:Service:Name tag is up to you. The port specified in HAProxy:Service:Port is the port that your application listens on.

The launch template we’re creating will deploy the free trial of Develatio’s ExpressJS AMI. Of course, you can use any AMI and application of your choosing. We’re using this machine image, as it comes preconfigured with a simple ExpressJS application that by default serves a page that will let you know that everything is working. These instances will listen on port 80, which we have indicated with the HAProxy:Service:Port tag above.

The t2.micro instance is a free tier eligible machine that can run our simple backend service.

Key Value
Amazon Machine Image (AMI) Develatio ExpressJS AMI
Instance Type t2.micro

Create or select an existing key pair to use for securely communicating with your servers. Carefully save this key for later use, as it is not possible to retrieve it from the AWS console later.

Note: You may need to adjust the filesystem permissions to 600 for this to work with SSH.

Select the Virtual Private Cloud (VPC) into which AWS will launch the ExpressJS servers and the default security group that allows HAProxy Enterprise to route requests to them.

Create Auto Scaling Group

Now that you’ve defined a launch template, let’s create an auto scaling group that will handle creating new instances based on that template.

From the left-hand menu, select Auto Scaling groups and click Create Auto Scaling group. Give your Auto Scaling group a name such as MyASG.

Select the template you just created, MyTemplate, from the list. You will see that you have the option to select a version of the template. If you have just created your template, you will likely have only one version, Default (1). If you modify your template at any time, you will need to update this selection to the correct version.

Auto Scaling groups have the option of using either “On-Demand” or “Spot” instances. Spot instances are significantly cheaper than on-demand instances:

A Spot Instance is an instance that uses spare EC2 capacity that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly. The hourly price for a Spot Instance is called a Spot price. The Spot price of each instance type in each Availability Zone is set by Amazon EC2, and is adjusted gradually based on the long-term supply of and demand for Spot Instances.

As we did not specify a preference during template creation, we have the option here to mix Spot and On-Demand instance types.

Next, select the default VPC and subnet, and choose “No load balancer”. For this role, we will be using HAProxy Enterprise as our load balancer.

Specify the number of instances in the Auto Scaling group. For this exercise, specify “1” for each of the desired, minimum and maximum capacity settings.

AWS Security Groups are a virtual firewall for your EC2 instances to control incoming and outgoing traffic. For the servers in our application pool, we will use the default security group created by AWS.

Once you have created your Auto Scaling group, AWS automatically launches it.

The HAProxy Enterprise Load Balancer

Next, let’s add an HAProxy Enterprise AMI as a load balancer. This instance is created outside of the Auto Scaling group, and will automatically create its own security group. Specify the same key pair as before.

From the EC2 Dashboard, select Instances and click Launch Instances.

Search for “HAProxy Enterprise Ubuntu” in the AMI search field to see the official HAProxy Enterprise machine images and choose the HAProxy Enterprise image built on Ubuntu 20.04. If you are testing in a non-production environment, you could specify t2.large as the instance type, but for anything more we recommend either C (Compute) or M (Machine, general purpose) types, size large or xlarge at minimum, or N-type (Network Optimized) for heavier workloads.

When the instance is started, connect using SSH:
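Something along these lines, using the key pair you saved earlier — the `ubuntu` login user is the convention for Ubuntu-based AMIs, and the IP placeholder is your instance's public address:

```shell
# SSH requires the key file to be readable only by its owner
chmod 600 mykey.pem
ssh -i mykey.pem ubuntu@<instance-public-ip>
```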

We need to add a user to the Data Plane API before we can use it. As we’ll be using an encrypted password to connect, ensure that the mkpasswd command from the whois package is installed:
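On Ubuntu, mkpasswd ships in the whois package; it can then generate a SHA-512 hash of the password you intend to use:

```shell
# Install the package that provides mkpasswd, then hash a password
sudo apt install whois
mkpasswd -m sha-512 'choose-a-strong-password'
```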

Create a new section in your configuration file that includes this encrypted password:
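A sketch of such a section — the userlist and user names here are illustrative, and the hash placeholder stands in for the mkpasswd output:

```
userlist dataplane-users
  user admin password $6$<hash-from-mkpasswd>
```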

Save and close the file. Restart the HAProxy Enterprise service:

Enable Service Discovery

To enable service discovery, we will need to add a pair of AWS authentication credentials.

From the top-right section of the AWS console, click on your username and select “My Security Credentials”. Open the “Access Keys” tab.

Create a new access key and download it to your computer. These credentials come in two parts, an access key ID, as well as the secret access key. Please note that if you lose the secret access key, there is no way to retrieve it from AWS.

To enable service discovery, run the following curl command from the HAProxy instance, substituting your own username, password, region and AWS credentials in the example below:
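The original command isn't reproduced here, but a request of roughly this shape targets the Data Plane API's AWS service discovery endpoint. Treat the port, path, and JSON field names as assumptions to verify against the Data Plane API specification for your version:

```shell
# Register AWS credentials with the Data Plane API's
# service discovery endpoint (field names assumed)
curl -X POST \
  --user admin:choose-a-strong-password \
  -H "Content-Type: application/json" \
  -d '{"access_key_id": "<your-access-key-id>", "secret_access_key": "<your-secret-access-key>", "region": "us-east-1", "enabled": true, "retry_timeout": 10}' \
  http://localhost:5555/v2/service_discovery/aws
```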

This command sends configuration and authentication data to the Data Plane API’s AWS service discovery endpoint. Upon successful registration, it returns a JSON result that will look like the following:

Re-open your configuration file. You will notice that a new backend has been created.

As the AWS Auto Scaling group has only one member, this backend has one server enabled out of the ten specified by "server_slots_base": 10.

At this point, you can use this backend as you would any other. You can use the Data Plane API to programmatically create a frontend and add it as the default backend. Here, we’ll make a quick manual edit to add it as the backend for a frontend called public_web_servers:

After making a manual edit, reload HAProxy:

Conclusion

In this simple example, we’ve set up an AWS Auto Scaling group that maintains a pool of servers for an application.

We’ve also set up HAProxy Enterprise and enabled EC2 instance service discovery within the HAProxy Data Plane API. This created a new HAProxy backend that will automatically add new servers and drop deleted servers as changes occur within the Auto Scaling group. Once created, this backend can be used like any standard HAProxy backend.

Our example set up a pool with a single instance by default, but this could easily scale to a dozen or a hundred instances with almost no added complexity: simply configure your Auto Scaling group to use any number of instances.

Want to know when more content like this is published? Subscribe to our blog or follow us on Twitter. You can also join the conversation on Slack.
