Log Forwarding with HAProxy and Syslog


Developing a strategy for collecting application-level logs requires stepping back and looking at the big picture. Engineers developing the applications may only see logging at its ground level: the code that writes the event to the log—for example, a function that captures the message Warning: An interesting event has occurred! But where does that message go from there? What path does it travel to get to its destination?

In a sufficiently large architecture where you have many services and applications, possibly spanning multiple data centers and maybe even touching both on-prem and cloud-hosted environments, you may want to designate log collection points at different tiers. These collection points would ingest data from all the applications in a network, for example, or from all within a data center, before forwarding it to a higher tier such as a log aggregation server that is responsible for reporting on all data globally. That way, only the mid-tier collection points would need to have knowledge of where the log aggregation server lives, while the applications would need to know only about the mid-tier collection points, simplifying administration if the network ever changes.

In this blog post, I will describe HAProxy’s log forwarding support for the Syslog protocol, which is a standardized logging protocol baked into many devices, software, and programming languages. HAProxy can act as a log forwarder, standing in as a collection point that receives logs and relays them to a higher tier. Its ability to act as a remote Syslog server and client, along with its support for both UDP and TCP, make it a convenient choice for teams that host a variety of applications and environments.

Log forwarding over UDP

In version 2.3, HAProxy introduced a feature for receiving Syslog messages and forwarding them to another server. HAProxy can listen on either a TCP or UDP port, or both, and then send the message out via TCP or UDP. The UDP configuration is the simplest, so let’s start with that.

In the HAProxy configuration snippet below, a section named log-forward listens for incoming messages and forwards them to a server at 172.25.0.12.
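A minimal sketch of such a section (the section name is illustrative):

   log-forward syslog-lb
      bind 0.0.0.0:514
      dgram-bind 0.0.0.0:514
      log 172.25.0.12:514 local0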

The bind line tells HAProxy to listen for incoming messages over TCP, and the dgram-bind line tells it to listen for incoming messages over UDP. By specifying 0.0.0.0 as the address, it listens on all available IP addresses assigned to the load balancer. We’re listening and sending using port 514, which is the standard Syslog port for both UDP and TCP. The facility code, local0, is used only if the received log event doesn’t already set a facility.

HAProxy supports two formats of the Syslog protocol, the older RFC3164 and the newer RFC5424, and it will relay whichever format it receives. However, you can choose which one to relay by setting format on the log line.
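For example, to force the newer format on outgoing messages, you could write the log line as:

   log 172.25.0.12:514 format rfc5424 local0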

In this example, messages are sent out using the UDP protocol. The benefit of UDP is that it’s a fire-and-forget protocol, meaning that HAProxy doesn’t need to wait for a response from the target server, which avoids any slowdowns in performance. The disadvantage is that there is no guarantee that all log messages will arrive at their destination, such as if the destination server were to go offline. HAProxy isn’t able to health-check the server to know if it’s down. In many cases, losing non-critical logs here and there isn’t too important, but if you require 100% message delivery, consider sending outgoing messages over TCP instead.

Log forwarding over TCP

TCP guarantees that messages will reach their destination, but it expects HAProxy to wait for an acknowledgement from the server. To avoid slowdowns, HAProxy performs this waiting in the background, storing queued up Syslog messages in a buffer until they’re sent and acknowledged.

In the configuration snippet below, we utilize a ring section to define a circular buffer in memory for storing log messages until they’re sent. The log line in the log-forward section then refers to this ring section rather than an IP address as in the UDP example.
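A sketch of how the two sections fit together, reusing the addresses from the UDP example (the names, sizes, and timeouts are illustrative):

   ring logbuffer
      description "buffer for forwarded Syslog messages"
      format rfc5424
      maxlen 1200
      size 32764
      timeout connect 5s
      timeout server 10s
      server logserver 172.25.0.12:514

   log-forward syslog-lb
      bind 0.0.0.0:514
      dgram-bind 0.0.0.0:514
      log ring@logbuffer local0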

Together, the ring and log-forward sections enable relaying Syslog messages over TCP without creating a bottleneck. You can play with the maxlen and size parameters to change the maximum size of a Syslog message and the overall size of the buffer. Timeouts ensure that HAProxy won’t wait forever on an unresponsive server.

Conclusion

In this blog post, you learned how HAProxy can act as a log collection point that ingests logs from multiple applications and then forwards them to a centralized log aggregation server. HAProxy’s ability to listen for both UDP and TCP Syslog messages helps it to integrate with a variety of software. By creating a tier of log collection points, you can simplify administration of your logging infrastructure.

Interested to know when we publish content like this? Subscribe to our blog! You can also follow us on Twitter and join the conversation on Slack.

Preserve Stick Table Data When Reloading HAProxy


With HAProxy situated in front of their servers, many people leverage it as a frontline component for enabling extra security and observability for their networks. HAProxy provides a way to monitor the number of TCP connections, the rate of HTTP requests, the number of application errors and the like, which you can use to detect anomalous behavior, enforce rate limits, and catch application-related problems early.

Behind the scenes, an in-memory storage called stick tables keeps track of this data. Stick tables associate a key, which is typically the client’s IP address, with counters. These counters represent any of the abovementioned signals, around which you can build custom policies in order to take action when a counter exceeds a threshold. For example, you might send a Too Many Requests error when a user’s request rate goes too high.

There’s just one potential problem. If you make changes to your HAProxy configuration file, you then need to reload HAProxy so that the changes take effect. However, a reload clears away all of your stick table data! The good news is that there is a simple way to preserve this data during a reload, which we’ll cover in this blog post.

Preserve stick table data with peers

Preserving stick table data comes down to defining peers in your configuration.

What are peers? When you operate two or more HAProxy instances for redundancy, you often need to synchronize stick table data between them. For example, in an active-standby setup, where one load balancer actively receives traffic while the other is on standby, you would want to synchronize them so that if the standby instance needs to take over, it has a copy of the data. Each load balancer instance that shares stick table data is called a peer.

You configure peers by adding a peers section to your HAProxy configuration.

TIP: HAProxy Enterprise supports active-active scenarios too, where both load balancers receive traffic simultaneously and aggregate their data. A peers section alone, by contrast, overwrites data from one peer to the other and is suitable for active-standby scenarios only.

As it happens, using a peers section doubles as a way to preserve stick table data during a reload, and it works even if you operate only one load balancer.

First, add a peers section to your configuration. For illustration purposes, let’s assume you have only one load balancer. You would list it like this:
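A sketch, assuming the server’s hostname is garfield and that it listens for peer traffic on port 10000 (both illustrative):

   peers mypeers
      peer garfield 192.168.56.20:10000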

Inside the peers section, each peer line indicates a server participating in data synchronization. In this case, there’s only one, the current server. The first argument is the server’s hostname, which is garfield here. Then, the server’s IP address, which can be either a localhost address like 127.0.0.1 or the address at which other peers can access the server, such as 192.168.56.20. The load balancer listens at the designated port for incoming data, which is 10000 here.

It’s important that the first argument matches the server’s hostname. If that’s difficult to arrange, you can instead set the localpeer directive in the global section of your configuration to use a different name, as shown below.
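A sketch, using an illustrative peer name in place of the hostname:

   global
      localpeer mypeer1

   peers mypeers
      peer mypeer1 192.168.56.20:10000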

Next, add a peers argument to your stick table declaration.
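For example (the table’s type, size, and counters are illustrative):

   backend webservers
      stick-table type ip size 1m expire 30m peers mypeers store http_req_rate(10s)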

Your stick table will now be preserved during a reload. By reload, I mean this command:
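Assuming you manage HAProxy with systemd:

   $ sudo systemctl reload haproxy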

Note that this will not work if you do a hard restart of HAProxy, such as:
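Again assuming systemd:

   $ sudo systemctl restart haproxy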

To see data currently stored in a stick table, use the Runtime API’s show table command.
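For example, assuming the Runtime API is exposed on the socket /var/run/haproxy.sock and the table is attached to the webservers backend (both illustrative):

   $ echo "show table webservers" | sudo socat stdio /var/run/haproxy.sock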

Conclusion

In this blog post, you learned how defining a peers section in your configuration enables HAProxy to retain stick table data during a reload. This ensures that the policies you define for rate limiting and the like will continue to operate normally.

Interested to know when we publish content like this? Subscribe to our blog! You can also follow us on Twitter and join the conversation on Slack.

To learn more about stick tables, sign up for the on-demand webinar, Introduction to HAProxy Stick Tables.

Announcing HAProxy Data Plane API 2.6


In HAProxy Data Plane API version 2.6, we continued expanding support for HAProxy configuration keywords. That has been the priority for this release cycle, and it will remain so in the next one, as we work toward complete feature parity with both the HAProxy configuration and the Runtime API. This will enable you to use the HAProxy Data Plane API to configure HAProxy without any gaps in functionality.

With that in mind, we also implemented quality-of-life improvements, namely adding a health check endpoint that returns the status of the HAProxy process. We also upgraded to Go 1.18 and updated all of the dependencies we use so that users can benefit from bug and security fixes.

Extended keyword support

We updated the HAProxy Data Plane API to cover more HAProxy configuration keywords, with the goal of making the API a full-fledged way to configure HAProxy. In this section, you’ll see everything we’ve added.

The ring section

Since HAProxy 2.2, you’ve had the ability to define a section called ring, which creates a FIFO buffer in memory where you can store HAProxy’s logs temporarily before forwarding them to a remote syslog server. The benefit of using a ring buffer is that it allows you to forward log messages to a remote syslog server using TCP. A traditional log line in an HAProxy configuration forwards logs using the UDP protocol. Relaying logs over UDP avoids any slowdowns, since UDP is a connectionless protocol, but the downside is that there is no guarantee that every log message will arrive at its destination. UDP is fire and forget.

However, by defining a ring section you can buffer log messages and then communicate with a syslog server over a TCP port without slowing down HAProxy, since the log forwarding happens in the background. TCP is a connection-oriented protocol and guarantees delivery of all messages. In the snippet below, we define a ring section for buffering HAProxy logs.
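A sketch of such a section (the names, sizes, and server address are illustrative):

   ring myring
      description "HAProxy log buffer"
      format rfc3164
      maxlen 1200
      size 32764
      timeout connect 5s
      timeout server 10s
      server mysyslogsrv 192.168.50.40:514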

Then in the global section of your configuration, your log line would specify the ring buffer instead of an IP address and port:
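Continuing the sketch above:

   global
      log ring@myring local0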

You can now create a ring section through the API by using the /services/haproxy/configuration/rings endpoint, as shown below.
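A sketch of the call, assuming the API listens on localhost:5555 with the basic-auth credentials admin/adminpwd, that the JSON field names mirror the configuration keywords, and that the version query parameter matches your current configuration version (all of these values are illustrative):

   $ curl -X POST --user admin:adminpwd \
      -H "Content-Type: application/json" \
      -d '{"name": "myring", "format": "rfc3164", "maxlen": 1200, "size": 32764}' \
      "http://localhost:5555/v2/services/haproxy/configuration/rings?version=1"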

Then add a server to it by using the /services/haproxy/configuration/servers endpoint, setting the URL parameter parent_type to ring and parent_name to the name of the ring section.
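Continuing with the same illustrative values:

   $ curl -X POST --user admin:adminpwd \
      -H "Content-Type: application/json" \
      -d '{"name": "mysyslogsrv", "address": "192.168.50.40", "port": 514}' \
      "http://localhost:5555/v2/services/haproxy/configuration/servers?parent_type=ring&parent_name=myring&version=2"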

DEPRECATION WARNING: To remain backward compatible, the server endpoint will still have the backend query string parameter, but it will be removed in a future version. Going forward, please use the parent_type and parent_name query string parameters to specify the section to which a server resource belongs.

Add the required log line to your global section by invoking the /services/haproxy/configuration/log_targets endpoint.
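A sketch, assuming the log_target model accepts the index, address, and facility fields shown here:

   $ curl -X POST --user admin:adminpwd \
      -H "Content-Type: application/json" \
      -d '{"index": 0, "address": "ring@myring", "facility": "local0"}' \
      "http://localhost:5555/v2/services/haproxy/configuration/log_targets?parent_type=global&version=3"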

The log-forward section

NOTE: An earlier version of this post recommended using the log-forward section for load balancing, but it has been amended to indicate that the use case is primarily log forwarding, since health checking is not yet available.

While the ring section pertains to forwarding HAProxy’s own logs, another section named log-forward pertains to forwarding log messages for other applications to a list of syslog servers. This gets its own section in order to support relaying syslog messages over UDP, while in general HAProxy is a TCP-based proxy. The log-forward section was introduced in HAProxy 2.3 and looks like this:
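A sketch of such a section (the name, ports, and server addresses are illustrative):

   log-forward syslog-forwarder
      bind 0.0.0.0:514
      dgram-bind 0.0.0.0:514
      log 192.168.50.41:514 local0
      log 192.168.50.42:514 local0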

You can create a log-forward section through the API by using the new /services/haproxy/configuration/log_forwards endpoint.
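Reusing the illustrative credentials and port from above:

   $ curl -X POST --user admin:adminpwd \
      -H "Content-Type: application/json" \
      -d '{"name": "syslog-forwarder"}' \
      "http://localhost:5555/v2/services/haproxy/configuration/log_forwards?version=4"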

Then add a bind line for receiving syslog traffic over TCP and a dgram_bind line for receiving it over UDP by calling the /services/haproxy/configuration/binds and /services/haproxy/configuration/dgram_binds endpoints:
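For example:

   $ curl -X POST --user admin:adminpwd \
      -H "Content-Type: application/json" \
      -d '{"name": "tcp514", "address": "0.0.0.0", "port": 514}' \
      "http://localhost:5555/v2/services/haproxy/configuration/binds?parent_type=log_forward&parent_name=syslog-forwarder&version=5"

   $ curl -X POST --user admin:adminpwd \
      -H "Content-Type: application/json" \
      -d '{"name": "udp514", "address": "0.0.0.0", "port": 514}' \
      "http://localhost:5555/v2/services/haproxy/configuration/dgram_binds?parent_type=log_forward&parent_name=syslog-forwarder&version=6"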

DEPRECATION WARNING: To remain backward compatible, the bind resource will still have the frontend query string parameter, but it will be removed in a future version. Going forward, please use the parent_type and parent_name query string parameters to specify the section to which a bind resource belongs.

Finally, add the log servers to which you will forward traffic by invoking the /services/haproxy/configuration/log_targets endpoint.

Global section keywords

With version 2.6, we’re happy to announce that the /services/haproxy/configuration/global endpoint supports all options for the HAProxy global section.

Process management and security:

  • default-path
  • description
  • expose-experimental-directives
  • gid
  • grace
  • insecure-fork-wanted
  • insecure-setuid-wanted
  • issuers-chain-path
  • h2-workaround-bogus-websocket-clients
  • log-tag
  • lua-load-per-thread
  • mworker-max-reloads
  • node
  • numa-cpu-mapping
  • pp2-never-send-local
  • presetenv
  • resetenv
  • uid
  • ulimit-n
  • set-dumpable
  • set-var
  • setenv
  • unix-bind
  • unsetenv
  • strict-limits

SSL tuning options:

  • ssl-default-bind-ciphers
  • ssl-default-bind-ciphersuites
  • ssl-default-bind-curves
  • ssl-default-bind-options
  • ssl-default-server-ciphers
  • ssl-default-server-ciphersuites
  • ssl-default-server-options
  • ssl-dh-param-file
  • ssl-server-verify
  • ssl-skip-self-issued-ca

Performance tuning:

  • busy-polling
  • max-spread-checks
  • maxconnrate
  • maxcomprate
  • maxcompcpuusage
  • maxpipes
  • maxsessrate
  • maxsslconn
  • maxsslrate
  • maxzlibmem
  • noepoll
  • nokqueue
  • noevports
  • nopoll
  • nosplice
  • nogetaddrinfo
  • noreuseport
  • profiling.tasks
  • spread-checks
  • ssl-engine

Defaults, frontend and backend keywords

Version 2.6 of the HAProxy Data Plane API also brings support for all option keywords that can be configured in the defaults, backend, and frontend sections via the /services/haproxy/configuration/defaults, /services/haproxy/configuration/frontends, and /services/haproxy/configuration/backends API endpoints:

  • option abortonclose
  • option checkcache
  • option http-ignore-probes
  • option http-no-delay
  • option http-use-proxy-header
  • option httpslog
  • option independent-streams
  • option nolinger
  • option originalto
  • option persist
  • option prefer-last-server
  • option socket-stats
  • option splice-auto
  • option splice-request
  • option splice-response
  • option spop-check
  • option srvtcpka
  • option tcp-smart-accept
  • option tcp-smart-connect
  • option tcpka
  • option transparent

In addition, we’ve extended the stats object that you can configure in a frontend or backend section to include the following options:

  • stats auth
  • stats http-request
  • stats realm

For the backend and the defaults sections we added support for the server TCP keep-alive options:

  • srvtcpka-cnt
  • srvtcpka-idle
  • srvtcpka-intvl

For the frontend and the defaults sections we added support for the client TCP keep-alive options:

  • clitcpka-cnt
  • clitcpka-idle
  • clitcpka-intvl

The http-after-response directive

HAProxy 2.6 introduced the http-after-response directive, which applies an action to all responses, even those generated by HAProxy and returned without involving a backend server. People use it to attach HTTP headers to a response when HAProxy returns a redirect, for example. The older http-response directive applies only when the backend server sends the response.

With HAProxy Data Plane API 2.6, you can configure http-after-response directives similarly to how you would configure http-response directives, only using the /services/haproxy/configuration/http_after_response_rules endpoint.
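A sketch of the call, assuming a frontend named www and that the rule model uses the hdr_name and hdr_format fields shown here (credentials, port, and version are illustrative, as above):

   $ curl -X POST --user admin:adminpwd \
      -H "Content-Type: application/json" \
      -d '{"index": 0, "type": "set-header", "hdr_name": "Strict-Transport-Security", "hdr_format": "max-age=31536000"}' \
      "http://localhost:5555/v2/services/haproxy/configuration/http_after_response_rules?parent_type=frontend&parent_name=www&version=7"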

This will create the following line in your frontend section in the HAProxy configuration file:
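Continuing the sketch:

   frontend www
      http-after-response set-header Strict-Transport-Security "max-age=31536000"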

Health endpoint

A new endpoint, /health, returns a value indicating whether HAProxy is up and running.

To enable this feature, you must configure the status_cmd field in the HAProxy Data Plane API configuration file (e.g. /etc/haproxy/dataplaneapi.hcl), as shown below.

HCL
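A sketch, assuming status_cmd lives in the reload block of dataplaneapi.hcl alongside reload_cmd and restart_cmd (the paths and commands are illustrative):

   haproxy {
     config_file = "/etc/haproxy/haproxy.cfg"
     haproxy_bin = "/usr/sbin/haproxy"

     reload {
       reload_cmd  = "systemctl reload haproxy"
       restart_cmd = "systemctl restart haproxy"
       status_cmd  = "systemctl status haproxy"
     }
   }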

YAML
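The equivalent sketch in YAML:

   haproxy:
     config_file: /etc/haproxy/haproxy.cfg
     haproxy_bin: /usr/sbin/haproxy
     reload:
       reload_cmd: systemctl reload haproxy
       restart_cmd: systemctl restart haproxy
       status_cmd: systemctl status haproxy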

Library updates and bug fixes

In addition to the already mentioned features implemented in HAProxy Data Plane API, the 2.6 version brings some stability and quality of life improvements, along with bug fixes.

One of the bigger updates is that the HAProxy Data Plane API project has been migrated to Go 1.18 along with the underlying libraries config-parser and client-native. This allows us to get all the new features of the Go language to improve our codebase, along with some optimizations and, of course, security fixes.

We’ve also updated the go-swagger library we use for code generation to the latest version, which received many bug fixes but introduced some breaking changes for our models package in the client-native project. So, the client-native package has been upgraded to v4. This is important news for all of our contributors, since they will need to update their dev environments for both Go 1.18 and go-swagger 0.29.0.

In addition to that, we did a thorough pass of all the external dependencies used by the HAProxy Data Plane API project and updated those dependencies where needed to benefit from all the bug and security fixes.

Contributors

We’d like to thank the code contributors who helped make this version possible!

Contributor Area
Amel Husic FEATURE
Andjelko Horvat BUG FEATURE
Andjelko Iharos BUG FEATURE
Dinko Korunic BUG
Connor Edwards BUG
Dario Tranchitella BUG CLEANUP FEATURE TEST
Goran Galinec BUG CLEANUP FEATURE
Mark CLEANUP
Marko Juraga BUG BUILD CLEANUP DOC FEATURE
Moemen MHEDHBI FEATURE
Norwin Schnyder BUG FEATURE
Robert Maticevic BUG BUILD CLEANUP FEATURE REORG
Seena Fallah BUG FEATURE
Zlatko Bratkovic BUG BUILD CLEANUP FEATURE

HAProxyConf 2022 Call for Papers Now Open


HAProxyConf 2022 aims to bring together system administrators, security professionals, developers, and business leaders to share how they have used HAProxy to implement high availability, boost security, shift to the cloud, or improve observability. We’re calling out to you, yes YOU, to submit your talk for this year’s event!

When is the conference?

HAProxyConf 2022 will happen November 8 and 9.

Where will the conference be?

We will be in beautiful Paris, France.

When is the due date for my talk proposal?

The deadline is September 5.

Check the Call for Papers webpage to get the full details and submit your talk.

Looking for inspiration? Check out last year’s conference videos!

Custom Resources with HAProxy Kubernetes Ingress Controller


HAProxy Kubernetes Ingress Controller provides custom resources named Backend, Defaults, and Global that let you manage ingress controller settings more efficiently. To start using them right away, check the documentation for steps and examples. In this blog post, you’ll learn why custom resources are such a powerful feature and see tips for getting the most out of them.

Custom resources explained

Every Kubernetes cluster comes with a set of standard resource types like pods, services, and deployments. If you wanted to see a list of them, you could connect to your Kubernetes cluster and run the command kubectl api-resources:

Kubernetes can be extended with new types, called custom resources. To install the HAProxy Kubernetes Ingress Controller custom resources, you would call kubectl apply with the URL of each resource’s definition:
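A sketch of those commands; the exact URLs depend on the controller version, so treat these paths as illustrative and check the documentation for the current ones:

   $ kubectl apply -f https://raw.githubusercontent.com/haproxytech/kubernetes-ingress/master/crs/definition/global.yaml
   $ kubectl apply -f https://raw.githubusercontent.com/haproxytech/kubernetes-ingress/master/crs/definition/defaults.yaml
   $ kubectl apply -f https://raw.githubusercontent.com/haproxytech/kubernetes-ingress/master/crs/definition/backend.yaml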

Afterwards, you’ll see them as new entries in the list of resource types:

Or to list only resources that are custom, call kubectl get crd:

With the resource definitions added to your cluster, you can then create instances of those types. For example, to create a new Global resource, you would first create a YAML file for it:

example-global.yaml
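A sketch of the resource; the apiVersion matches the controller’s core.haproxy.org group, but the exact spec layout may differ between versions, so consult the CRD schema:

   apiVersion: core.haproxy.org/v1alpha1
   kind: Global
   metadata:
     name: example-global
     namespace: haproxy-controller
   spec:
     maxconn: 60000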

Then apply it with kubectl:
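   $ kubectl apply -f example-global.yaml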

The Global resource controls process-level settings for the ingress controller, such as the maximum number of concurrent connections it will accept, here set to 60,000.

Custom resources can be listed, described, applied and deleted using Kubernetes tools like kubectl, just like standard resources. Below, we list Global resources and then describe, or in other words display the attributes of, the example-global Global resource:
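For example (the namespace is illustrative; use the one where you created the resource):

   $ kubectl get globals -n haproxy-controller
   $ kubectl describe global example-global -n haproxy-controller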

To apply the settings contained within the Global resource to your HAProxy Kubernetes Ingress Controller, overwrite the kubernetes-ingress ConfigMap resource and set its cr-global key to the namespace and name of your custom resource:
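A sketch of the ConfigMap (the namespace is illustrative):

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: kubernetes-ingress
     namespace: haproxy-controller
   data:
     cr-global: haproxy-controller/example-global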

Then apply it with kubectl:
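Assuming the ConfigMap above was saved as kubernetes-ingress-configmap.yaml:

   $ kubectl apply -f kubernetes-ingress-configmap.yaml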

The benefits of custom resources

As you’ve seen, by installing custom resources definitions like Global, you implement new types in your Kubernetes cluster. Custom resources offer a number of benefits.

For one, they promote a clearer mental model by grouping related properties into an object. In other words, rather than putting an ingress controller’s global settings into a ConfigMap—after all, ConfigMap is a very generic type of thing—you put them into a resource named Global. With such a name, it becomes much easier to reason about where these settings fit in the overall scheme of your cluster.

Because custom resources are Kubernetes resources, cluster administrators can control who may create them, for example granting permission only to other cluster administrators through Kubernetes’s role-based access control (RBAC). You could define Role or ClusterRole objects to determine which users can create Global objects. This promotes a separation of concerns between cluster administrators and other users.

As mentioned, you can use familiar Kubernetes tools like kubectl to manage custom resources. This makes it simple to control their lifecycle. When you no longer need a group of global settings, simply call kubectl delete to remove it. The HAProxy Kubernetes Ingress Controller is notified of such events and will update its underlying HAProxy configuration automatically.

Finally, a custom resource allows for a more expressive syntax than run-of-the-mill annotations. That’s because, while annotations are only key-value pairs, properties inside a custom resource can be arrays or objects. So, you can have lists of properties or nested properties to express complex settings. The resource is validated as a whole, to make sure that all properties make sense together.

Conclusion

In this article, you learned that the HAProxy Kubernetes Ingress Controller provides a set of custom resources that includes Global, Defaults, and Backend, which you can use to manage ingress controller settings. After installing the resource definitions, you can create any number of these types of objects, and they behave just like the standard resource types.

Custom resources have a number of benefits, including an easier mental model, simpler reusability and access control, better property validation, and support for Kubernetes-native tools like kubectl. Check out the documentation to learn more!

Interested to know when we publish content like this? Subscribe to our blog! You can also follow us on Twitter and join the conversation on Slack.

[Conference] Black Hat US 2022
HAProxy Technologies is excited to announce its presence at the 25th Black Hat USA. After a quarter century of bringing together the top minds of the cybersecurity community, Black Hat 2022 will take place from August 6 to 11, both virtually and in person in Las Vegas, Nevada.
The event begins with four days of training sessions led by experts from around the globe; guests can then visit HAProxy Technologies’ booth in the Business Hall on the 10th and 11th. The Business Hall provides networking opportunities with thousands of InfoSec professionals, as well as the opportunity to evaluate a broad range of security products.

Admission also includes access to a range of Sponsored Sessions, where you will find our principal solutions architect Nenad Merdanovic giving a talk on “Application Delivery at the Core of a Multilayered Security Model“:

In recent years, application delivery requirements have rapidly evolved to combat the increasing size and sophistication of security threats. Organizations need a multi-layered approach to security, where measures are implemented at every layer, from the edge all the way to the application.

By combining its products, HAProxy Technologies provides the industry’s first end-to-end application delivery platform that simplifies security measures like DDoS protection, bot management, and web application firewall filtering. Techniques like machine learning allow continuous tuning. In this talk, you will learn how the components of the platform work together to secure an application or API.

Please get in touch via the contact form if you’d like to book time at the booth to speak with us regarding all things security. Our fingers are twitching over the keyboard at everything to be learnt at this event, and we can’t wait to see you there!

Announcing HAProxy Kubernetes Ingress Controller 1.8


We’re proud to announce the release of version 1.8 of the HAProxy Kubernetes Ingress Controller!

In this release, we added support for full rootless mode, Prometheus metrics for the controller itself, and examples that are synchronized with our Helm chart. In this blog post, you will learn more about the changes in this version.

Register for our webinar to learn more about this release.

Running unprivileged

Prior to this version, the ingress controller process ran with elevated privileges (root) inside its Docker container. The reason why we had to default to root privileges was to be able to bind to privileged ports (those below 1024), such as the standard HTTP and HTTPS ports 80 and 443. However, containers that run with elevated privileges pose a risk, since it becomes easier to escape the container sandbox and access the host system. In fact, some hosted Kubernetes providers such as Google Kubernetes Engine (GKE) Autopilot even disallow running privileged containers altogether.

Workarounds exist, including having a sidecar container running sysctl net.ipv4.ip_unprivileged_port_start=0 next to the ingress controller container. However, GKE Autopilot would not allow that either since it requires a privileged container to run sysctl.

Now, we no longer run a privileged container by default. Instead, we drop privileges to UID 1000 and GID 1000 for all processes in the container, including the S6, ingress controller, and HAProxy processes. We grant the HAProxy binary the CAP_NET_BIND_SERVICE capability via the setcap command when building the ingress controller Docker image so that it can bind to the desired ports:
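A sketch of the relevant Dockerfile step (the binary path is illustrative):

   RUN setcap 'cap_net_bind_service=+ep' /usr/local/sbin/haproxy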

This is possibly a breaking change for older host systems such as Debian Stretch, Ubuntu Wily, and Ubuntu Xenial, because their kernels disable some features needed by the advanced multi-layered unification filesystem (AUFS), which had been the preferred storage driver for Docker 18.06 and older. Most Docker / Kubernetes installations have already moved to the OverlayFS or OverlayFS2 storage driver, making this problem mostly a legacy systems issue.

HAProxy Kubernetes Ingress Controller 1.8 permits running unprivileged, and the latest Helm chart uses that mode by default.

Diagnostic pprof data

The ingress controller exposes an endpoint for viewing pprof diagnostic data.

[Image: Diagnostic pprof data]

You enable it with the controller argument --pprof. With version 1.8, an additional argument, --controller-port, lets you change the endpoint’s port. By default, if pprof is enabled, it listens at port 6060, which is compatible with version 1.7. This port is also shared with the new Prometheus metrics endpoint, which is described in the next section.

To enable this feature when using Helm, pass the controller.extraArgs parameter:
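A sketch, assuming the chart is installed from the haproxytech repository under the release name shown (both illustrative):

   $ helm upgrade --install haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
      --set 'controller.extraArgs={--pprof}'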

Access the pprof diagnostics page at the path /debug/pprof.

Prometheus metrics

This version adds a new endpoint for collecting Prometheus metrics that are specific to the inner workings of the controller, such as the amount of memory allocated and CPU time spent.

[Image: Prometheus metrics for the controller]

Metrics for the load balancing portion of the controller have been available since version 1.0, which you can access on port 1024 at the URL path /metrics, but this version includes metrics for the controller itself. Access the new metrics on port 6060 at the URL path /metrics.

In order to enable them, two new controller arguments were introduced:

  • --controller-port – the port where the endpoint is exposed (defaults to 6060)
  • --prometheus – enables the Prometheus metrics endpoint

Here is an example, using Helm:
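Reusing the illustrative chart and release names from above:

   $ helm upgrade --install haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
      --set 'controller.extraArgs={--prometheus,--controller-port=6060}'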

Default backend

The ingress controller routes traffic to a configured set of pods by evaluating routing rules defined in Ingress objects. However, if no ingress rule matches, the request will go to a default backend, which displays a 404 error message. Until now, the --default-backend-service controller argument pointed to the pod that ran the default backend application. However, that approach required running yet another service in your cluster.

With this release, another option is available. If you omit --default-backend-service, the controller creates a default backend that is part of the controller itself. If you run the ingress controller in external mode, you can set --default-backend-port to define the port where this default backend listens. If you don’t set it, port 6061 is used.

Examples

The project’s GitHub repository now contains a collection of examples. The examples are synced and produce the same setup as when you use Helm (same namespaces, service names, etc.), which is especially useful for learning how to create everything by hand with kubectl and then comparing it with Helm’s setup. A shortcut for initializing the examples is to invoke the command make example.

Additional improvements

Several other improvements join this release:

  • You can restrict access to your applications by setting the blacklist or whitelist annotations. These now accept pattern files that store the IP addresses, which makes it easier to support long lists of IP addresses.
  • Support for the new annotation client-strict-sni has been added. It returns a TLS error if no certificate is found for the client SNI.
  • Support for a default ingress class has been added.

Contributions

We’d like to thank the code contributors who helped make this version possible!

Moemen MHEDHBI TEST OPTIM REORG BUILD FEATURE BUG DOC
Ivan Matmati REORG TEST BUG FEATURE
Dinko Korunic FEATURE BUILD
Zlatko Bratkovic BUG CLEANUP FEATURE DOC BUILD
Nico Braun FEATURE
Davor Kapsa DOC
Petr Studeny BUG
Jonas Weber FEATURE
Frank Villaro-Dixon BUILD
Daniel Lenar BUG
Jakub Granieczny BUG
LarsBingBong BUG

Announcing HAProxy 2.6


HAProxy 2.6 is now available!

As always, the community behind HAProxy made it possible to bring the enhancements in this release. Whether developing new functionality, fixing issues, writing documentation, QA testing, hosting CI environments, or submitting bug reports, members of our community continue to drive the project forward. If you’d like to join the effort, you can find us on GitHub, Slack, Discourse, and the HAProxy mailing list.

Register for the webinar HAProxy 2.6 Features Roundup to learn more about this release and participate in a live Q&A with our experts.

In the following sections, you will find a list of changes included in this version.

HTTP/3 over QUIC
Generic hash load balancing algorithm
SSL and TLS
Authentication
Runtime API
Master CLI
Lua
Listing configuration keywords
Protocol updates
Seamless reloads
New fetches and converters
Variables
Performance tuning
Contributors

HTTP/3 over QUIC

This version of HAProxy adds experimental support for HTTP/3 over QUIC, which is a novel approach to transmitting HTTP messages over UDP instead of TCP. The benefits include fewer round trips between the client and server when establishing a TLS connection, better protection against denial-of-service attacks, and improved connection migration when the user switches between networks.

In the example configuration below, we enable HTTP/3 over QUIC by setting a bind line that listens for client connections on UDP port 443. The prefix quic4@ sets the protocol. Also note that we return an alt-svc HTTP header, which instructs the client’s browser to switch to the new protocol for subsequent requests. In other words, the first request will be HTTP/2, but any after that will be HTTP/3.
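A sketch of such a frontend (the certificate path and the alt-svc max-age value are illustrative):

   frontend www
      bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
      bind quic4@:443 ssl crt /etc/haproxy/certs/site.pem alpn h3
      http-response set-header alt-svc 'h3=":443"; ma=900'
      default_backend webservers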

Something else to know is that HAProxy supports stateless reset packets with QUIC, but you must set the global directive cluster-secret, which HAProxy uses to derive a stateless reset token. The token protects against malicious actors sending spoofed reset packets.
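For example (the secret value is illustrative):

   global
      cluster-secret my-secret-phrase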

You’ll need to compile HAProxy with a few new options, including the USE_QUIC flag, and also link to a QUIC-compatible version of OpenSSL, such as the quictls fork. Want to try this out? Check out our HTTP/3 demo project.
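A sketch of the build, assuming quictls is installed under /opt/quictls (the paths are illustrative):

   $ make TARGET=linux-glibc USE_OPENSSL=1 USE_QUIC=1 \
          SSL_INC=/opt/quictls/include SSL_LIB=/opt/quictls/lib \
          LDFLAGS="-Wl,-rpath,/opt/quictls/lib"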

Generic hash load balancing algorithm

You can use the new load balancing algorithm, hash, in place of the existing, more specific hash algorithms source, uri, hdr, url_param, and rdp-cookie. The new algorithm is generic, thus allowing you to pass in a sample fetch of the data used to calculate the hash.

In the example below, the pathq fetch returns the URL path and query string for the data to hash:
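A sketch (the server addresses are illustrative):

   backend webservers
      balance hash pathq
      server web1 192.168.1.10:80 check
      server web2 192.168.1.11:80 check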

SSL and TLS

You can compile HAProxy against OpenSSL 3.0, the latest branch of the OpenSSL library.

Authentication

To authenticate clients with client certificates, you set the ca-file parameter on your bind line to indicate which certificate authority (CA) to use to verify the certificate. This parameter now accepts a directory path, allowing you to load multiple CA files so that you can verify certificates that were signed by different authorities.

Similarly, the ca-file parameter on a server line in a backend now accepts a directory path, allowing you to load multiple CAs to verify a server’s SSL certificate. In this case, you can also specify @system-ca to load your system’s list of trusted CAs.

Runtime API

A new Runtime API command, show ssl providers, available when HAProxy was compiled against OpenSSL 3.0, returns a list of providers loaded into OpenSSL. A provider implements the cryptographic algorithms. You can load other providers via the OpenSSL configuration file, which you can find the path for by running openssl version -d.
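For example, assuming the Runtime API socket is at /var/run/haproxy.sock (illustrative):

   $ echo "show ssl providers" | sudo socat stdio /var/run/haproxy.sock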

Next, the Runtime API’s dynamic server feature, which was introduced in HAProxy 2.4 and got expanded keyword support in HAProxy 2.5, is no longer experimental. Recall that the dynamic server functions let you create servers on the fly without reloading the HAProxy configuration.

Also, you can now set the check and check-ssl parameters when creating servers, which were unsupported in prior versions. Note that when enabling health checks with these parameters, HAProxy is not yet able to implicitly inherit the SSL or Proxy Protocol configuration of the server line, so you must explicitly use check-ssl and check-send-proxy, even if the health check port is not overridden.
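A sketch, reusing the illustrative socket path and a backend named webservers (the server name and address are also illustrative):

   $ echo "add server webservers/web3 192.168.1.12:443 ssl verify none check-ssl check" | sudo socat stdio /var/run/haproxy.sock
   $ echo "enable server webservers/web3" | sudo socat stdio /var/run/haproxy.sock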

Master CLI

The Master CLI provides an interface for working with the HAProxy worker processes. You can learn more about it in the blog post Get to Know the HAProxy Process Manager. The CLI received several new commands:

Command Description
prompt Begins an interactive session with the CLI.
expert-mode [on|off] Activates expert mode for every worker accessed from the Master CLI.
experimental-mode [on|off] Activates experimental mode for every worker accessed from the Master CLI.
mcli-debug-mode [on|off] Enables a special mode in the Master CLI that exposes keywords that were meant for a worker, allowing you to debug the master process. Once activated, you can list the newly available keywords with “help”. Combined with “experimental-mode” or “expert-mode”, it enables even more keywords.

The starting point is the prompt command, which starts an interactive session. Once in a session, you can enable expert, experimental, and master CLI debug modes. Then, send Runtime API commands to one of the worker processes. Some Runtime API commands become available only in one of the aforementioned modes.

Here is an example:
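Assuming HAProxy was started with a master socket at /var/run/haproxy-master.sock (the path is illustrative):

   $ sudo socat readline /var/run/haproxy-master.sock
   prompt
   master> expert-mode on
   master> mcli-debug-mode on
   master> @1 show sess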

Lua

When extending HAProxy with a custom Lua module, you can now update an SSL certificate in the memory of the current HAProxy process by using the CertCache class. In the snippet below, the certificate and key are hardcoded in the Lua file, but in practice you could fetch these using the HTTP client or receive them from HAProxy variables, for example.
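A sketch of such a module; the PEM contents are placeholders, and the action name and file path are illustrative:

   core.register_action("update-cert", { "http-req" }, function(txn)
      -- The PEM contents below are placeholders for a real certificate and key.
      local crt = [[
   -----BEGIN CERTIFICATE-----
   ...
   -----END CERTIFICATE-----
   ]]
      local key = [[
   -----BEGIN PRIVATE KEY-----
   ...
   -----END PRIVATE KEY-----
   ]]
      -- Replace the certificate stored in memory under this filename.
      CertCache.set{ filename = "/etc/haproxy/certs/site.pem", crt = crt, key = key }
   end)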

Also, the HTTP client that was added in version 2.5, which lets you make non-blocking HTTP calls from Lua, now supports two new parameters: dst for setting the destination address and timeout for setting a timeout server value. Setting dst overrides the IP address and port in the url parameter, but keeps the path. Below, the destination URL becomes http://127.0.1.1:8001/test.

However, since it supports HAProxy bind-style addresses, a more interesting use case is to set dst to a UNIX socket. For example, you could query the Docker API, which listens at the UNIX socket /var/run/docker.sock, to fetch a JSON-formatted list of running containers from within your Lua code, as shown in the next snippet:
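A sketch of a service doing exactly that; the service name and timeout value are illustrative:

   core.register_service("docker-ps", "http", function(applet)
      local httpclient = core.httpclient()
      -- dst overrides the host in url but keeps the path.
      local response = httpclient:get{
         url = "http://localhost/containers/json",
         dst = "unix@/var/run/docker.sock",
         timeout = 5000
      }
      applet:set_status(200)
      applet:add_header("content-type", "application/json")
      applet:start_response()
      applet:send(response.body or "[]")
   end)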

This is functionally equivalent to calling the Docker API with curl:
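   $ curl --unix-socket /var/run/docker.sock http://localhost/containers/json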

Furthermore, new global directives in the HAProxy configuration affect the httpclient class:

Global directive Description
httpclient.ssl.ca-file <cafile> Defines the CA file used to verify the server certificate. It takes the same parameters as the ca-file option on the server line. By default, when this option is not used, the value is "@system-ca", which tries to load the CA of the system; if that fails, SSL is disabled for the httpclient. However, when this option is explicitly set, a failure to load the file triggers a configuration error.
httpclient.ssl.verify [none|required] Works the same way as the verify option on server lines. If set to "none", server certificates are not verified. By default, when this option is not used, the value is "required"; if loading fails, SSL is disabled for the httpclient. However, when this option is explicitly set, a failure triggers a configuration error.
httpclient.resolvers.id <resolvers id> Defines the resolvers section that the httpclient will use to resolve hostnames. The default is the resolvers section with the ID "default". By default, if this option is not used and that section is not found, resolving is simply disabled. However, when this option is explicitly set, a failure to load the section triggers a configuration error.
httpclient.resolvers.prefer <ipv4|ipv6> Lets you choose which family of IP addresses you prefer when resolving, which is convenient when IPv6 is not available on your network. The default is "ipv6".

Listing configuration keywords

Have you ever wanted to know whether a configuration keyword is supported in the version of HAProxy you’re running? You can now ask HAProxy to return to you a list. Keywords are sorted into classes, so first get the list of classes by passing the -dKhelp argument to HAProxy, along with the quiet (-q), validation check (-c) and configuration file (-f) arguments:
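For example (the configuration path is illustrative):

   $ haproxy -q -c -f /etc/haproxy/haproxy.cfg -dKhelp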

Then get a list of keywords, for example:
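Assuming cfg is one of the class names printed by -dKhelp (the class name here is illustrative):

   $ haproxy -q -c -f /etc/haproxy/haproxy.cfg -dKcfg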

Protocol updates

A new global directive, h1-accept-payload-with-any-method, allows clients using HTTP/1.0 to send a payload with GET, HEAD, and DELETE requests. The HTTP/1.0 specification had not been clear on how to handle payloads with these types of requests and proxy implementations vary on the interpretation, which could lead to request smuggling attacks. HAProxy uniformly rejects these requests for that reason, but the new option allows you to turn off this safeguard if you need to support specific clients.

Seamless reloads

Since HAProxy 1.8, HAProxy has had seamless reloads, which means you can use systemctl reload haproxy to update the HAProxy configuration without dropping any active connections, even under very high utilization. Listening sockets transfer over to the new worker process during the reload. The only thing you had to do was make sure that master-worker mode was enabled by including the -W flag when starting HAProxy and add the parameter expose-fd listeners to a stats socket directive in the global section of your configuration:
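For reference, that configuration looked like this (the socket path is illustrative):

   global
      stats socket /var/run/haproxy.sock mode 600 level admin expose-fd listeners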

Now, you no longer need to do even that. Seamless reloads will work without any effort on your part. You can omit the expose-fd listeners parameter and the -W flag is already included in the Systemd service file in the HAProxy repository.

New fetches and converters

Two new fetches help pinpoint why a request was terminated. The last_rule_file fetch returns the name of the configuration file containing the final rule that was matched during stream analysis and the last_rule_line returns the line number. Add these to a custom log format to capture which rule in your configuration stopped the request.

In the next example, a custom log format includes these new fetches:
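A sketch (the surrounding format string is illustrative):

   frontend www
      bind :80
      log-format "%ci:%cp [%tr] %ft %ST %B terminated-by=%[last_rule_file]:%[last_rule_line]"
      default_backend webservers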

Next, the new add_item converter concatenates fields and returns a string. The advantage this has over the existing concat converter is that it will place a delimiter, such as a semicolon, between fields, and check whether the field exists to avoid appending a trailing delimiter at the end of the string.

In the example below, the add_item converter sets an HTTP cookie with the Expires and Secure attributes, which are separated by semicolons.
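A sketch, assuming the variable txn.expires holds a previously computed Expires attribute and txn.cookie holds the base cookie value (all names and values are illustrative):

   frontend www
      http-request set-var(txn.cookie) str(sessionid=abc123)
      http-after-response set-header Set-Cookie "%[var(txn.cookie),add_item(';',txn.expires),add_item(';',,'Secure')]"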

Variables

Variables let you store information about a request or response and reference that information within logical statements elsewhere in your configuration file. HAProxy 2.6 makes it simple to check whether a variable already exists or already has a value before trying to set it again. All tcp- and http- set-var actions, such as http-request set-var and tcp-request content set-var, now support the new parameter.

For example, if you wanted to set a variable named token to the value of an HTTP header named X-Token, but fall back to setting it to the value of a URL parameter named token if the header doesn’t exist, you could use the condition ifnotset to check whether the variable has a value from the first case before trying to set it again:
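A sketch of those two rules:

   frontend www
      http-request set-var(txn.token) req.hdr(X-Token)
      http-request set-var(txn.token,ifnotset) url_param(token)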

You can use the following, built-in conditions:

Condition Sets the new value when…
ifexists the variable was previously created with a set-var call.
ifnotexists the variable has not been created yet with a set-var call.
ifempty the current value is empty. This applies for nonscalar types (strings, binary data).
ifnotempty the current value is not empty. This applies for nonscalar types (strings, binary data).
ifset the variable has been set and unset-var has not been called. A variable that does not exist is also considered unset.
ifnotset the variable has not been set or unset-var was called.
ifgt the variable’s existing value is greater than the new value.
iflt the variable’s existing value is less than the new value.

Performance tuning

HAProxy 2.6 brings new ways to improve load balancing performance:

  • Adding fd-hard-limit to the global section of your configuration will enforce a cap on the number of file descriptors that HAProxy will use, even when the system allows many more, which protects you from consuming too much memory. If you set a global maxconn setting higher than this, the maxconn will adapt to this hard limit. Learn about setting maximum connections.
  • The new global directive close-spread-time lets you close idle connections gradually over a period of time, rather than all at once, which had caused reconnecting clients to rush against the process. For best results, you should set this lower than the hard-stop-after directive.
  • HAProxy’s task scheduler code and the code that dequeues connections awaiting an available server got a performance boost. The code, which uses multithreading, was optimized to bypass thread locking, allowing server queue management to become much more scalable.
  • The connection stream code has been refactored to simplify it and reduce the number of layers. Although more work in this area is underway, the result will be a more linear architecture, resulting in fewer bugs and easier maintenance.
  • At startup, HAProxy inspects the CPU topology of the machine and, if a multi-socket machine is detected, sets an affinity to run on the CPUs of a single node, so as not to suffer the performance penalties caused by inter-socket bus latency. However, if this causes inferior performance, you can set the no numa-cpu-mapping directive.

Contributors

We would like to thank each and every contributor who was involved in this release. Contributors help in various forms such as discussing design choices, testing development releases, reporting detailed bugs, helping users on Discourse and the mailing list, managing issue trackers and CI, classifying Coverity reports, maintaining the documentation, operating some of the infrastructure components used by the project, reviewing patches, and contributing code.

[On-Demand Webinar] Using CRDs to Improve Quality of Life in Kubernetes
Tuesday, July 12th, 2022
US: 12 noon EDT, 11 am CDT, 10 am MDT, 9 am PST
EU: 6 pm CEST, 7 pm EEST
Global: 4 pm UTC

The HAProxy Kubernetes Ingress Controller now integrates even better with the Kubernetes ecosystem.

By providing custom resources (CRDs) that represent its underlying load balancer settings, you can manage health check properties, tune process-level options, apply default timeouts and control other important settings in a Kubernetes-native way.

In this webinar, you’ll learn how custom resources simplify ingress controller management and allow you to interact with the controller using familiar tools.

We’ll also go through the new features and improvements in the latest release of HAProxy Kubernetes Ingress Controller 1.8.


[On-Demand Webinar] HAProxy 2.6 Feature Roundup
Version 2.6 of the world’s fastest and most widely used software load balancer has been released! Packed into it are important changes that improve performance, security, and extensibility. Watch this webinar to learn about the newest features and updates.

The presenter was Sebastien Gross. He covered:

 

  • Support for HTTP/3 over QUIC
  • New load balancing algorithm
  • Updates to SSL and client certificate authentication
  • New keywords
  • Runtime API updates
  • Lua HTTP client

 

