HAProxy Fusion: New External Load Balancing & Multi-Cluster Routing Features
Wed, 24 Apr 2024

Recently, we added powerful new K8s features to HAProxy Fusion Control Plane—enabling service discovery in any Kubernetes or Consul environment without complex, technical workarounds.

We've covered the headlining features in our HAProxy Fusion Control Plane 1.2 LTS release blog. But while service discovery, external load balancing, and multi-cluster routing are undeniably beneficial, context helps us understand their impact. 

Here are some key takeaways from our latest Kubernetes developments and why they matter.

HAProxy already has an Ingress Controller. Why start building K8s routing around HAProxy Fusion Control Plane? 

While we've long offered HAProxy Enterprise Kubernetes Ingress Controller to help organizations manage load balancing and routing in Kubernetes, some use cases have required technical workarounds. Some users have also desired a K8s solution that mirrors their familiar experiences with load balancers. As a result, we've redefined Kubernetes load balancing with HAProxy Fusion for the following reasons:

  • The Ingress API wasn't designed to functionally emulate a typical load balancer and doesn't support external load balancing.

  • Ingress-based solutions can have a steeper learning curve than load balancers, and users don't want to manage individualized routing for hundreds of services.  

  • Organizations that favor flexibility and choose to run K8s in public clouds (due to Ingress limitations) are often forced to use restrictive, automatically provisioned load balancers. Those who also need to deploy on-premises face major hurdles.

External load balancing for public clouds is relatively simple—with public cloud service integration being a key advantage—but instantly becomes more complicated in an on-premises environment. Such on-premises integrations simply didn't exist until now.

Few (if any) solutions were available to tackle external load balancing for bare metal K8s. And if you wanted to bring your external load balancer to the public cloud, the need for an ingress controller plus an application load balancer (ALB) quickly inflated operational costs.

All users can now manage traffic however they want with HAProxy Fusion 1.2. Our latest release brings K8s service discovery, external load balancing, and multi-cluster routing to HAProxy Enterprise. You no longer need an ingress controller to optimize K8s application delivery—and in some cases, HAProxy Fusion-managed load balancing is easier.

Easing the pain of common K8s deployment challenges

Routing external traffic into your on-premises Kubernetes cluster can be tricky. It's hard to expose pods and services spread across your entire infrastructure, because you don’t have the automated integration with external load balancers that public clouds brought to the Kubernetes world.

Here's what Spectro Cloud's 2023 State of Production Kubernetes report found:

  • 98% of the report's 333 IT and AppDev stakeholders face one or more of K8s' most prominent deployment and management challenges.

  • Most enterprises operate 10+ Kubernetes clusters in multiple hosting environments, and 14% of those enterprises manage over 100 clusters!

  • 83% of interviewees had two or more distributions across different services and vendors (including AWS EKS-D, Red Hat OpenShift, etc.).

The takeaway? Kubernetes remains universally challenging to deploy and manage effectively. Meanwhile, the increasing scale and complexity of K8s distributions are magnifying the issues organizations are grappling with.

Plus, Kubernetes adoption is through the roof as containerization gathers steam. The value of developing a multi-cluster, deployment-agnostic K8s load balancing solution is immense—as is the urgency.

HAProxy helps solve common challenges around bring-your-own external load balancing practices, network bridging, pod management, and testing.

It all starts with service discovery

Without successfully exposing your pod services (or understanding your deployment topology), it's tough to set up traffic routing and load balancing. This also prevents services from being dynamically aware of one another without hard coding or tedious endpoint configuration. A dynamic approach is crucial since K8s pods and their IP addresses are ephemeral—or short-lived. Service discovery solves these issues, but not all approaches are equal.

Kubernetes setups without a load balancer commonly enable service discovery through a service registry. This happens on the client side and can complicate the logic needed for pod and container awareness. HAProxy's service discovery is server-side, since the load balancer does the work of connecting to services and retrieving information on active pods.

Understanding the HAProxy advantage 

Service discovery now lives in HAProxy Fusion Control Plane within a dedicated UI tab, though the underlying Kubernetes API powers that function. HAProxy Fusion links to the K8s API, which lets HAProxy Enterprise dynamically update service configurations and automatically push those updates to your cluster(s). Using HAProxy Enterprise instances to route traffic, in conjunction with HAProxy Fusion Control Plane, has some unique advantages:

  • Layer 4 (TCP) and Layer 7 (HTTP) load balancing without having to separately manage Ingress services or Gateway API services

  • Centralized management and observability

  • Easier configuration language without complicated annotations

  • Multi-cluster routing

HAProxy Enterprise can now perform external load balancing for on-premises Kubernetes applications, configured via HAProxy Fusion Control Plane. HAProxy Fusion is aware of your Kubernetes infrastructure, and HAProxy Enterprise can sit inside or outside of your Kubernetes cluster.

More traffic, fewer problems

HAProxy Enterprise also treats TCP traffic as a first-class citizen and includes powerful multi-layered security features:

  • Web application firewall

  • Bot management

  • Rate limiting

  • …and more

External load balancing (and multi-cluster routing) leverages normal HAProxy Enterprise instances for traffic management. We can now automatically update load balancer configurations for backend pods running behind them. 

Aside from external load balancing, HAProxy Fusion Control Plane and HAProxy Enterprise play well with cloud-based Kubernetes clusters. In instances where you're otherwise paying for multiple load balancing services (as with AWS, for example), this tandem can help cut costs. Greater overall simplicity, speed, and consolidation are critical wins for users operating within a complex application environment.

Leverage external load balancing and multi-cluster routing in any environment

Automated scaling, unique IP and hostname assignments, and service reporting are major tenets of HAProxy’s external load balancing. So, how do the pieces fit together?

HAProxy Enterprise uses IP Address Management (IPAM) to manage blocks of available IP addresses. We can automatically grab load balancer service objects and create a public IP bind using that information. Administrators can create their own IPAM definitions within their K8s configuration and then create a load balancer service. The load balancer status, IP, binds, and servers are available in K8s and your HAProxy Enterprise configuration. This closely mirrors the external load balancing experience in a public cloud environment.
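In practice, the cluster-side trigger is a standard Kubernetes Service of type LoadBalancer. A minimal sketch, where the name, selector, and ports are placeholders (the IPAM definition itself lives in HAProxy Fusion and isn't shown here):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app            # hypothetical service name
    spec:
      type: LoadBalancer      # HAProxy Fusion picks up services of this type
      selector:
        app: my-app
      ports:
        - port: 443           # external port exposed on the allocated IP
          targetPort: 8443    # pod port that receives the traffic

Once applied, the allocated address appears in the service's status field, matching the behavior described above.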

Running a single K8s cluster has always been cumbersome for organizations that value high availability, A/B testing, blue/green deployments, or multi-region flexibility. The statistics shared earlier also show just how pervasive multi-cluster setups are within Kubernetes.

HAProxy’s multi-cluster routing is based around one central requirement: you need to load balance between multiple K8s clusters that are active/active, active/passive, or spread across multiple regions. Here's how HAProxy Fusion and HAProxy Enterprise support a few important use cases.

Multi-cluster routing example #1: multiple simultaneous clusters

Organizations often want to balance network traffic across multiple clusters. These clusters could be running the same version of an application within clusters across different availability zones. HAProxy Enterprise and HAProxy Fusion let you run your load balancer instances in either active/active or active/passive mode, depending on your needs.

This setup is pretty straightforward: 

  1. Your HAProxy Enterprise instance and Kubernetes clusters are contained within one region. 

  2. HAProxy Enterprise routes traffic between two or more clusters and their pods using standard load-balancing mechanisms.

Multi-cluster routing example #2: A/B testing and blue/green deployments

Organizations often use A/B testing to compare two versions of something to see what performs better. For applications, this involves sending one portion of users to Cluster 1 (for Test A) and Cluster 2 (for Test B) where different app versions are waiting for them. 

Blue/green deployments work quite similarly, but we're transitioning traffic gradually from one application version to another. This only happens once the second cluster is ready to accept traffic. As a result, you can avoid downtime and switch between applications as needed.

Multi-cluster routing example #3: multi-region failover

Having a global Kubernetes infrastructure is highly desirable, but stretching a single cluster across multiple regions isn't readily possible. Networking, storage, and other factors can complicate deployments—highlighting the need for a solution. Having more clusters and pods at your disposal means unwavering uptime, which is exactly what this setup is geared towards.

Each region can run one or more Kubernetes clusters with HAProxy instances in front of them. Should one of the Kubernetes clusters fail, HAProxy can automatically send traffic to the other cluster, without disrupting user traffic. The one tradeoff is slightly higher latency before you recover your services.

Learn more about the power of external load balancing and multi-cluster routing

Thanks to automated service discovery in HAProxy Fusion and seamless integration with HAProxy Enterprise, we can now address many common pain points associated with Kubernetes deployments. On-premises external load balancing must be as easy as it is for public clouds, and load balancing between clusters is critical for uptime, scalability, and testing purposes. Our latest updates deliver these capabilities to our customers. 

External load balancing and multi-cluster routing are standard in HAProxy Enterprise. HAProxy Fusion ships with HAProxy Enterprise at no added cost, unlocking these powerful new functions. 

However, we have so much more to talk about! Check out our webinar to dive even deeper.

HAProxy is Resilient to the HTTP/2 CONTINUATION Flood
Tue, 23 Apr 2024

A recent vulnerability in the HTTP/2 protocol could allow denial-of-service (DoS) attacks by exploiting the protocol's CONTINUATION frame to flood web servers, reverse proxies, or other software processing HTTP/2 traffic.

After rigorous testing, we have confirmed that our implementation of the HTTP/2 protocol can effectively handle the CONTINUATION Flood. Considering HAProxy was built from the ground up to withstand DoS attacks, its resilience to the HTTP/2 CONTINUATION Flood is no surprise. We will continue to monitor, but the supported versions of our products are not vulnerable to the known attack vectors.

What’s an HTTP/2 CONTINUATION Flood?

To understand how an HTTP/2 CONTINUATION Flood functions, we first need to understand how the HTTP/2 protocol facilitates multiplexing.

The HTTP/2 protocol is designed for multiplexing, a capability facilitated by the breakdown of communication into smaller units known as “frames”. HTTP/2 frames are the smallest unit of communication in the protocol, each serving a specific purpose, such as carrying headers or data. Each frame embeds a unique stream ID that identifies a stream of communication between two peers. Multiple streams can be opened in parallel between a client and a server—and this is called “multiplexing”. All streams and frames are exchanged over the same TCP connection.

HTTP/2 supports many frame types, but for the purpose of this discussion, we’ll focus on two: HEADERS and CONTINUATION.

In order to better understand how this works, let’s explore how an HTTP/2 connection is established:

  1. The client and server establish a TCP connection.

  2. The client and server negotiate an SSL connection.

  3. The client and server exchange a SETTINGS frame to negotiate various parameters of the HTTP/2 connection, including “SETTINGS_MAX_FRAME_SIZE”.

  4. The client sends a HEADERS frame with HTTP header fields.

  5. If the HTTP/2 header size is bigger than the negotiated “SETTINGS_MAX_FRAME_SIZE”, a CONTINUATION frame is sent with additional header fields.

  6. Repeat step five until the whole HTTP/2 header is transmitted.

  7. The last CONTINUATION frame has a special flag set: END_HEADERS, which means the client sent the data it was supposed to.

All HEADERS and CONTINUATION frames are sent using the same stream ID, as mentioned above.

There’s an element of fun here: the client can decide the amount of data it sends in the HEADERS and CONTINUATION frames. In an extreme example, a client may send 8192 frames (1 HEADERS and 8191 CONTINUATION), each containing 1 byte of data, to send an 8KB request HTTP header. This situation initiates a flood. Despite being impolite, this flood is legitimate.

As outlined in the steps above, a server should wait until it gets the END_HEADERS flag. However, in the context of an attack, the perpetrator withholds this flag and continuously sends CONTINUATION frames with dummy data. The attacker’s goal is to exhaust the server’s memory and kernel resources to kill the process with an Out Of Memory error. Voilà!

Why is HAProxy resilient to the HTTP/2 CONTINUATION Flood?

Fortunately, HAProxy’s implementation of the HTTP/2 protocol is resilient to the HTTP/2 CONTINUATION Flood. Let’s reexamine the steps above to understand how this is handled by HAProxy:

  1. The client and server establish a TCP connection.

  2. The client and server negotiate an SSL connection.

  3. HTTP/2 SETTINGS is negotiated, and the SETTINGS_MAX_FRAME_SIZE is set to HAProxy’s “tune.bufsize” (16KB by default, which is also the minimal value for this H2 setting).

  4. The attacker sends the HEADERS frame. HAProxy allocates a buffer dedicated to this new stream ID and copies the data in there.

  5. The attacker floods CONTINUATION frames, and HAProxy writes the data in the dedicated buffer for this stream ID.

  6. The dedicated buffer is full (but the attacker is continuously flooding HAProxy).

  7. Because the buffer is full and does not contain (yet) a full and valid HTTP request, HAProxy returns an HTTP 400 status code error message and closes the stream ID.

During that time, the server is not aware that an attacker is trying to abuse the service. That’s the beauty of a reverse proxy (but we’ll go into deeper details in the next section).

An attacker may assume that a CONTINUATION Flood could overwhelm HAProxy, especially if the CONTINUATION frames send just 1 byte of data. However, HAProxy is known for its high performance, capable of handling 1,000,000 CONTINUATION frames per second per CPU core (depending on your type of CPU). Ultimately, the attacker is unable to overwhelm HAProxy, making HAProxy the reliable solution users can trust when they expose their applications on the internet.

We can take it a step further and completely disconnect a TCP connection when we consider a client is abusing the H2 protocol. Set the "tune.h2.fe.glitches-threshold" parameter to detect this (and other kinds of H2 attacks, like the Rapid Reset Attack). HAProxy’s flexibility and observability mindset allow you to conveniently log the glitches counter associated with clients by using the "fc_glitches" fetch output to help you set the parameter above.
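A hedged sketch of how these pieces might fit together (the threshold value and frontend details are assumptions, and the HAPROXY_HTTP_LOG_FMT shortcut assumes a recent HAProxy version):

    global
        # Close connections from clients that accumulate too many
        # HTTP/2 protocol glitches; the threshold here is illustrative.
        tune.h2.fe.glitches-threshold 100

    frontend www
        bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
        mode http
        # Append the per-connection glitch counter to every log line.
        log-format "${HAPROXY_HTTP_LOG_FMT} glitches:%[fc_glitches]"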

How can HAProxy, as a reverse proxy, protect my application?

HAProxy's innate ability to operate as a reverse proxy offers a formidable defense against attacks like HTTP/2 CONTINUATION Flood. In HTTP terms, a reverse proxy is a gateway standing between the client and server, breaking the communication into two isolated connections.

1. One connection with the client

2. One connection with the server

When a connection is formed, the client communicates directly with HAProxy rather than the server. The client can only see HAProxy and its robust defense. The server is shielded behind HAProxy, and its vulnerabilities remain unexposed to the Internet.

This means HAProxy is well-positioned to defend against threats before they reach the server. HAProxy's security capabilities bolster its resilience against attackers, ensuring that servers are safe.

As a reverse proxy, HAProxy can handle an HTTP/2 CONTINUATION Flood without the server being aware that an attack is taking place. Therefore, if you're using HAProxy in front of applications vulnerable to HTTP/2 CONTINUATION Flood attacks, there's less pressure to update components.

If your components are vulnerable to the flood, you have two options: fix them very quickly or install HAProxy in front of them.

Conclusion

HAProxy engineers worked alongside the community in shaping the design of the HTTP/2 protocol, ensuring that when it came time for implementation, HAProxy would be future-proofed against potential threats like the HTTP/2 CONTINUATION Flood.

Users love HAProxy because it’s reliable, performant, and flexible—and this is demonstrated in its resilience to DoS attacks. With HAProxy, there is no compromise—you get all three.

HAProxy is trusted by leading companies and cloud providers publishing services and APIs on the Internet. Its resilient, high-performance architecture and robust, peer-reviewed open source codebase make it one of the most trustworthy layers in your application delivery stack.

Sharing HAProxy’s Kubernetes Story at KubeCon Europe 2024
Thu, 28 Mar 2024

“HAProxy is an awesome load balancer,” was the common refrain on the expo floor at KubeCon Europe 2024, “but what does HAProxy do with Kubernetes?”

I’m so glad you asked! Let me just scan your badge…

I still get tingles thinking about HAProxy taking the top spot in the G2 Winter 2024 Grid® Report for Container Networking. Even so, three days immersed in the enthusiastic press of CNCF’s flagship European event (this year in Paris, France) was enough to show the size of the opportunity still before us. The love for HAProxy was strong – with hundreds upon hundreds descending on the HAProxy booth to share their stories and grab a T-shirt – but for many longtime fans, HAProxy’s Kubernetes story was a new one.

Our week at KubeCon Europe, nestled in the cozy rectangle of booth G29, reminded me exactly why HAProxy sponsors and attends community events like this one. Firstly, we gain so much from meeting our users: the passion to do more and keep smashing expectations; camaraderie over shared war stories from the app delivery trenches; and ideas for how to make things simpler and more satisfying for our users.

Secondly, the chance to tell our own story. HAProxy is the world’s fastest and most widely used software load balancer, but for many of the 12,000-odd Kubernetes aficionados streaming through the expo hall, this was as far as they had got. They were ready to unfold the next chapter. And what a chapter it’s been lately, with several updates to our Kubernetes solutions!

To begin with, our products have long been available in container images that you can deploy in Kubernetes (or any other container orchestration platform). If you need a load balancer, web application firewall (WAF), API gateway, or ingress controller in a containerized form factor, HAProxy has you covered. We designed our products with a lightweight software-first approach, so you can expect high performance and efficiency with none of the downsides that come from squashing an appliance form factor (uncomfortably) into a container.

HAProxy Kubernetes Ingress Controller reached version 1.11 earlier this month (March 2024), bringing more robust support for Custom Resource Definitions (CRDs), rootless containers for more advanced security, and full QUIC support. HAProxy Kubernetes Ingress Controller has been downloaded more than 50 million times on Docker Hub, and the enterprise edition adds a powerful WAF, enterprise administration features, and our expert support.

For many users, we find that an ingress controller is not always the best (or only) solution. From Kubernetes experts migrating from a public cloud to an on-premises deployment (who suddenly need to think about external load balancing), to those facing a major adjustment moving from load balancers to ingress controllers: we saw an opportunity to offer a simple, automated solution to the cases where an ingress controller doesn’t cut it. Enter HAProxy Enterprise and HAProxy Fusion – working together to provide load balancing and centralized management, monitoring, and automation. HAProxy Fusion adds service discovery for Kubernetes, automating the process of delivering scalable external load balancing, multi-cluster routing, and external IP address allocation for dynamic backend services. You can learn more in our on-demand webinar, External Load Balancing and Multi-Cluster Routing for Kubernetes.

HAProxy’s Kubernetes story seemed to be a gripping page-turner for the hundreds of fans flocking to our booth. But I also came away with a big grin because of the charm on display from the community. Here are some of my favorite things said by those who dropped by HAProxy’s booth:

  • “HAProxy saved our lives last week!”

  • “Who do I have to [redacted] to get that sweet backpack?”

  • “Your T-shirt is the best at KubeCon.”

  • “I think I can fit into the child’s T-shirt. It’s cute!”

  • “You know you can pay someone in sponsor services to fold your T-shirts? You don’t have to fold them yourselves.”

You know what? We don’t mind folding hundreds of custom-designed HAProxy T-shirts to give to our fans at events like KubeCon. We owe our success to our community; folding your T-shirts ourselves is the least we can do. From your load balancer configuration to a perfectly hand-pressed crease, you’ll always get our best.

See you at the next one.

Announcing HAProxy Kubernetes Ingress Controller 1.11
Wed, 06 Mar 2024

HAProxy Kubernetes Ingress Controller 1.11 is now available. For our enterprise customers, HAProxy Enterprise Kubernetes Ingress Controller 1.11 is coming soon and will incorporate the same features. In this release, we enhanced security through the adoption of rootless containers, graduated our custom resource definitions to v1, made them easier to manage, and introduced support for the QUIC protocol.

Additionally, we've simplified version compatibility with HAProxy and included a reload/restart module to log and manage configuration changes better. These advancements are designed to provide a more secure, efficient, and user-friendly platform for managing ingress traffic in Kubernetes environments. In this blog post, you will learn more about HAProxy Kubernetes Ingress Controller 1.11 changes.

Version compatibility with HAProxy 

We're simplifying how we version our Ingress Controller to make it easier to understand and keep up with future updates. HAProxy Kubernetes Ingress Controller 1.11 uses HAProxy 2.8, but this will be the last time that the two version numbers are different. Starting with our next release, the version number of the Ingress Controller will match the version of HAProxy it uses. The next version number will be 3.0, and it will match up with HAProxy 3.0. This update eliminates confusion regarding the association between the Ingress Controller and HAProxy versions.

Custom Resource Definitions (CRDs) v1 (Backend, Defaults, and Global)

In HAProxy Kubernetes Ingress Controller 1.11, we've upgraded our Custom Resource Definitions (CRDs) to v1. This upgrade brings full support for all HAProxy configuration options.

Additionally, we've updated the group of definitions to ingress.v1.haproxy.org. While putting v1 into the name of the CRD group might seem unconventional, it's a strategic choice designed to accommodate future versions. Kubernetes architecture is structured to avoid breaking changes in CRDs, as they are stored collectively. This approach works well for operators, where typically only one version runs in a cluster. However, for an ingress controller that might run multiple versions simultaneously, it's crucial to ensure that newer versions (such as a potential version 2) maintain existing configurations.

In environments with multiple teams, it's common for updates not to happen simultaneously across the board. Since custom resources ultimately interact with a unified cluster API, supporting multiple versions (even those with breaking changes) is essential and must not cause issues. 

Using webhooks is not an applicable solution in this context, as the challenge lies in managing versions with breaking changes within the Kubernetes API itself. This update ensures that the Ingress Controller can evolve without impacting the broader cluster operation, maintaining stability and compatibility across different team deployments.

Avoid problems early with v1

A significant improvement over the alpha versions in the latest update is the adoption of Common Expression Language (CEL) validation in v1. This enhancement allows for thorough validation of configurations before they are inserted into Kubernetes, such as during a kubectl apply command.

This preemptive validation step helps avoid configurations that disrupt the controller by catching errors before the data is even saved in the Kubernetes API. This update brings the Ingress Controller's behavior in line with the HAProxy Data Plane API, which relies on the same definitions for operation. This ensures a more stable and reliable setup by catching errors early in the configuration process.

Transitioning from Alpha versions

In 1.11, the v1alpha1 and v1alpha2 CRD versions are now deprecated. While they remain supported for now, it's important to note that upcoming releases, starting with version 3.0, may no longer support these versions. If you are currently using v1alpha2 (noting that v1alpha1 was already deprecated in version 1.10), you will need to make some changes. 

Specifically, you'll need to update the group for your Custom Resource Definitions (Backend, Defaults, and Global) and adjust your Role-Based Access Control (RBAC) rules to accommodate this new group. This step is crucial for ensuring your configurations remain compatible with future versions of our software.
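For illustration, a Global resource under the new group might look like the following sketch (the metadata values are placeholders, and the old group name in the comment is a recollection; check it against your existing manifests):

    # Before: apiVersion: core.haproxy.org/v1alpha2
    apiVersion: ingress.v1.haproxy.org/v1
    kind: Global
    metadata:
      name: globalconfig
      namespace: haproxy-controller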

Enhanced CRD management

In HAProxy Kubernetes Ingress Controller 1.11, we have introduced a simpler way to install and update our Custom Resource Definitions (CRDs). A new command-line option, --job-check-crd, has been added, enabling users to install or update definitions easily. This enhancement, which uses a Kubernetes Job to perform the update, ensures a smoother operation for managing CRDs and has been backported to all maintained versions for broader support.

For Helm users (available at Helm charts), this improvement is applied automatically.

QUIC (Quick UDP Internet Connections) support

With version 1.11, we are introducing support for QUIC, a transport layer network protocol, which will be enabled automatically for users using certificates and TLS. 

In instances where binding to the UDP port is not feasible or desired, you can turn off this feature simply by using the --disable-quic option.

Additionally, the options --quic-bind-port and --quic-announce-port allow you to tailor the QUIC protocol's port settings to your specific requirements. With --quic-bind-port, you can designate the precise port for QUIC binding. While this level of customization may only be necessary for some, it offers essential flexibility in environments with stricter policies where UDP may not be enabled in the same way as TCP.

To ensure clients can transition from HTTP/2 (or HTTP) to QUIC, controllers must announce which port the client can connect to, and the --quic-announce-port option allows you to do just that. Additionally, the --quic-alt-svc-max-age option lets you define how long the connection has to upgrade. These features aim to simplify your network management, enhancing performance and security with QUIC.
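Putting those flags together, the controller's container arguments might look like this sketch (the port and age values are illustrative, not asserted defaults):

    args:
      - --quic-bind-port=8443      # UDP port HAProxy binds for QUIC
      - --quic-announce-port=443   # port advertised to clients for the upgrade
      - --quic-alt-svc-max-age=60  # seconds clients may cache the upgrade hint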

Advancing security with rootless containers

The S6 Overlay, our chosen init system, has been updated to the latest version, v3. This update is part of our ongoing commitment to enhance the security and functionality of the Ingress Controller container image, making it fully rootless and ensuring it operates seamlessly in read-only environments.

With this update, as a consequence of going fully rootless, we have changed the default ports for HTTP and HTTPS from the standard 80 and 443 to 8080 and 8443, respectively. This internal modification is designed not to disrupt existing setups, as the binding of external ports is contingent upon your specific configurations and the nature of your deployment. This adjustment is particularly noteworthy for those utilizing the controller in external mode, where it is employed directly as an application rather than as a Docker image. 

To revert to the original port settings, you can use the --https-bind-port and --http-bind-port arguments to specify your preferred bind ports.

Additional changes are also needed when defining a Deployment. As shown in the sketch after this list, the security context must have:

  • runAsNonRoot set to true

  • allowPrivilegeEscalation set to false

  • seccompProfile needs to be defined and set to RuntimeDefault type. 
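In Deployment YAML terms, that translates to something like this sketch within the pod template (the container name is a placeholder):

    containers:
      - name: kubernetes-ingress        # hypothetical container name
        securityContext:
          runAsNonRoot: true
          allowPrivilegeEscalation: false
          seccompProfile:
            type: RuntimeDefault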

For Helm users, these changes are applied automatically.

GitHub Container Registry (GHCR) images

Docker images for our Ingress Controller are available on Docker Hub, where they've proudly surpassed 50 million downloads! Additionally, we started offering images on GitHub too. Those who wish to test the latest developments can also use nightly images.

You can find these packages at our HAProxy Technologies Kubernetes Ingress GitHub repository.

To pull an image, simply use the command docker pull ghcr.io/haproxytech/kubernetes-ingress:1.11.0.

Controller port

We're making it easier for users to leverage pprof and Prometheus with our controller. To access these tools, you'll need to use the --controller-port option, which is set to 6060 by default.

While this isn't a change in how things operate, we're clarifying the process to ensure you know exactly where to find these resources.

Upgraded logging for clearer insights and better control

We've introduced a reload/restart module to the Kubernetes Ingress Controller, enhancing transparency around configuration changes. This addition makes it easier to understand when, how, and why configuration changes were made throughout the controller's lifecycle, offering greater insight into the system's operation.

Logging in HAProxy Kubernetes Ingress Controller 1.11 includes unique transaction IDs in all log messages, where implementation is possible. This enhancement aids in accurately matching any failed transactions to their causes, whether due to incorrect settings or conditions. This makes it easier to find and understand the root cause of any problem by offering more precise information for troubleshooting and reporting.

These updates offer more clarity and control of your system's performance and security.

Notable additions

Disabling configuration snippets

The disable-config-snippets option allows you to turn off configuration snippets. This option accepts a comma-separated list, with possible values including backend, frontend, global, and all. You can combine these options in any way you need, with all conveniently disabling all snippets. 

Config snippet validation

While we recommend using custom resource definitions, we understand the necessity and convenience of config snippets for specific scenarios. Since config snippets are integrated into the configuration precisely as they are provided, they require careful consideration to ensure they do not disrupt the overall configuration. To address this, we've enhanced the controller's resilience to errors in snippets, allowing for a more forgiving and robust handling of configurations. This improvement aims to provide you with peace of mind, knowing that minor mistakes won't compromise your entire system.

log-format-tcp

The log-format-tcp command sets the log format string for TCP traffic. It only applies to the TCP configmap specified by the command line option --configmap-tcp-services.

allow-list and deny-list

The terms whitelist and blacklist have been deprecated in favor of allow-list and deny-list, respectively. It's important to note that while these terms are still operational, we plan to phase them out in the future. We encourage you to start using allow-list and deny-list in your configurations to ensure a smooth transition when the older terms are eventually removed.

standalone-backend

The standalone-backend annotation has been introduced, enabling each ingress object to create a separate backend. While this approach may not be the standard practice, it offers enhanced customization for specific paths. 

Documentation

You can find HAProxy documentation here. For those looking for our Ingress Controller documentation, it's readily available on HAProxy's Kubernetes Ingress documentation page, as well as on the official GitHub repository.

These resources are designed to support you with detailed information and guides, ensuring a smooth experience with HAProxy and the HAProxy Kubernetes Ingress Controller.

Contributions

We’d like to thank the code contributors who helped make this version possible!

  • Hélène Durand: BUG, CLEANUP, BUILD, DOC, FEATURE, TEST

  • Ivan Matmati: FEATURE, DOC, BUG

  • Vincent Gramer: BUG, DOC, FEATURE, OPTIM

  • Dinko Korunic: BUG, BUILD, FEATURE

  • Dario Tranchitella: TEST

  • Fabiano Parente: BUG

  • Alexis Vachette: BUG

  • Conrad Hoffmann: BUG

  • Michal Zielonka: BUG

  • Zlatko Bratkovic: BUG, BUILD, CLEANUP, DOC, FEATURE, TEST

Conclusion 

HAProxy Kubernetes Ingress Controller 1.11 represents our commitment to delivering a secure, efficient, and user-friendly platform for managing ingress traffic. By embracing rootless containers, enhancing Custom Resource Definitions (CRDs) management as we graduate our CRDs to v1, and introducing QUIC protocol support, we are setting new standards for security and performance in Kubernetes solutions. Introducing a reload/restart module ensures users can manage configuration changes with greater clarity and control.

Looking ahead, we're focused on offering features that maximize HAProxy's benefits within Kubernetes, aiming for an even more powerful, scalable, and secure application delivery system.

Software Load Balancers vs Appliances: Better Performance & Consistency With HAProxy
Thu, 15 Feb 2024

Software load balancers and load balancing appliances have become indispensable components within a healthy application infrastructure. Scalability, security, observability, and reliability are more critical than ever as companies push harder towards 99.999% availability. Accordingly, traffic management is key to protecting servers and ensuring uptime.

Vendors have offered load balancers in different form factors to serve evolving infrastructure needs. Unfortunately, these solutions aren't always optimized for their intended use cases—nor do they repackage effectively into other form factors. These shortcomings make modernization more challenging and complicate mixed environment deployments.

We'll take a look at some core differences between software load balancers and load balancing appliances, and explain how HAProxy’s unique approach to building a dedicated, software-first load balancer helps users address common pain points when selecting the right form factor.

What are software load balancers?

]]> ]]> "Pure" software load balancers are applications made to run on top of an underlying operating system. Some software load balancers come packaged with specific distributions (such as Ubuntu, Debian, or CentOS), while others ship as supplemental add-ons for web servers, firewalls, and network interfaces (such as Microsoft NLB). Many software load balancers are also open source. 

Detached from an OS, software load balancers can deploy anywhere, scale horizontally with streamlined procurement, leverage whatever compute power is available to them, and keep upfront costs low. Skilled teams can take advantage of their flexibility and deep configurability to get the best performance and the perfect fit for their environment.

However, performance can be limited if sufficient capacity is not available, operational costs can rise proportionately with scaling out, and the skill requirements can be higher than some IT generalists are comfortable with, requiring knowledge of server/container management and networking.

What are load balancing appliances?

First, the term "load balancing appliance" is somewhat of a misnomer. Despite the usual association of the word “appliance” with hardware, load balancing appliances can be either hardware or virtual. In any case, appliances come with a pre-packaged OS, user interface and API, network stack, templates for integrations, plus other useful components. They're typically ready to use apart from minimal initial setup requirements.

Hardware appliances are rack-mounted devices with standardized or specialized internal components—such as optimized CPUs and advanced chipsets like Intel's QAT accelerators. These provide precisely tuneable and predictable performance. However, even virtualized workloads (discussed next) running on commercial off-the-shelf servers and cloud servers can leverage those components. 

Virtual load balancing appliances are pre-built virtual machines (VMs) paired with a specific hypervisor (such as KVM, Hyper-V, and VMWare ESX). Virtual appliances come with their own OS. They often share licensing requirements with their hardware counterparts. Plus, virtual appliances can offer nearly complete feature parity with hardware appliances while providing more flexibility in deployment and pricing models.

However, cost savings and scalability can be limited in comparison with software load balancers. Getting up and running is expensive if you have to purchase specialized hardware, and prices reflect the efforts necessary to build proprietary appliances. Scaling depends on rack space, power, throughput, and concurrent connection requirements. Once connection limits (SSL/TLS included) are reached, you typically need to upgrade your appliances or buy more licenses to keep up.

Why is choosing the right load balancer form factor challenging?

Two main load balancing options lie before us: software or appliance (hardware or virtual). For those wanting the utmost flexibility, scalability, and efficiency in modern application delivery architectures, software load balancing seems like the best choice. However, organizations often face obstacles to adopting a software load balancer. Two challenges often surface:

  1. Disappointing experiences with low-performing virtual appliances (particularly those converted from hardware appliances) sometimes shape perceptions of what’s possible with software load balancers, which can be incredibly fast. 

  2. Inconsistencies between software and appliance load balancers add friction to migrations and management overhead when running mixed deployments. Teams accustomed to load balancing appliances sometimes have a hard time adapting to "pure software plus OS" load balancers. 

Consequently, many organizations are constrained by infrastructure that poorly fits their goals. HAProxy's unique approach to load balancing side-steps these obstacles to help teams build high-performing solutions they actually want.

Solving challenge #1: maximizing performance from software load balancers

Based on poor performance outcomes with virtual appliances, some organizations are reluctant to adopt true software load balancers. This concern that software load balancers won't perform well often starts with vendors, who convert their hardware appliances into virtual load balancers that don't fully leverage underlying computing power and infrastructure. 

Other vendors adapt one core function (like a web server) into another with mixed results. This leads to performance issues and may cause organizations to avoid software load balancers altogether. The lines separating virtual load balancers and software load balancers have become blurred, adding to the confusion over which solution works best in a given scenario. 

HAProxy solves this problem by being a software load balancer, first and foremost. In fact, we're the world's fastest and most widely used software load balancer! This avoids the common appliance-to-software conversion issue that introduces performance compromises, and equally benefits our load balancing appliances that are based on our high-performing software load balancer.

HAProxy delivers dedicated load balancing functionality that's designed to do one job incredibly well—handling upwards of 5 billion daily requests for DoubleVerify, supporting over 2 million HTTPS requests per second on one Amazon Graviton2 instance, and delivering lower latency than alternatives.

HAProxy takes full advantage of underlying computing power and infrastructure. We can squeeze maximum performance from available CPU and memory through caching and multithreading. HAProxy’s fast SSL/TLS performance is highly efficient on a 2-core server and scales up appropriately when running on powerful Intel QAT processors with TLS acceleration. Meanwhile, features like compression and traffic shaping reduce network bandwidth consumption. But that's just a snapshot of what our load balancing products offer. Check out our HAProxy Enterprise datasheet to dive even deeper.

Solving challenge #2: reducing friction of migrations and mixed deployments

It's reasonable to assume that a vendor-controlled ecosystem means greater consistency and product integration, but this isn't always the case. A common pain point is discovering that software load balancers and load balancing appliances function completely differently even when they come from the same supplier.

Capabilities, configurations, interfaces, automations, and monitoring approaches vary and therefore introduce complexity. Clashing code bases and design requirements for each form factor can result in very different products, undermining the effectiveness of these offerings and the overall vendor ecosystem. And what if you need to manage a mix of software and appliance-based load balancers, or migrate from one form factor to another? The added uncertainty that comes with vendor fragmentation can make these processes more trouble than they're worth—discouraging organizations from innovating and adopting the form factor that's right for them. 

HAProxy addresses this issue by providing consistent features, configuration options, and APIs, whether you're using HAProxy Enterprise (a software load balancer), HAProxy ALOHA (a load balancer appliance), or a combination of both. Because HAProxy solutions share similar design principles and the same core codebase, organizations are free to migrate, modernize, and innovate without worrying about major inconsistencies.

This is ideal for mixed environments where HAProxy Enterprise instances operate alongside HAProxy ALOHA appliances. For example, we might place HAProxy ALOHA in front of HAProxy Enterprise to provide scalability through Layer 4 load balancing and protection against various types of volumetric attacks. 

Then there's migration. HAProxy asks you to convert your old configurations (like iRules or Content Switching Policies, for example) over to a unified HAProxy configuration just once, helping you get up and running more quickly. These conversions are straightforward and result in human-readable configurations that are less verbose. Future configuration changes are therefore easier to make. This simplicity also eases the transition from HAProxy ALOHA appliances to HAProxy Enterprise instances, if that becomes necessary later. Forget about changing multiple network settings or following complex transition processes.
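To give a feel for such a conversion, here is a rough sketch of a simple path-routing iRule and an HAProxy equivalent (all names are invented for illustration):

    # F5 iRule (roughly):
    #   when HTTP_REQUEST {
    #     if { [HTTP::uri] starts_with "/api" } { pool api_pool }
    #   }
    # A possible HAProxy equivalent:
    frontend www
        bind :80
        use_backend api if { path_beg /api }
        default_backend web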

HAProxy removes the obstacles between you and your ideal load balancing form factor

Whether you're looking for a software load balancer, an appliance, or both, HAProxy has you covered. Our software-first approach negates many common challenges that organizations face around performance and management overhead. This means users can choose the right form factor for their infrastructure and application stack without worrying about performance or the pain of switching form factor later on. 

The question isn't whether software load balancers or appliances are better. We give you the power to choose the solution that works best for you, without the usual compromises. To learn more about the benefits of performance and flexibility, check out our blog post.

Protect Against Netscaler Vulnerability CitrixBleed
Fri, 12 Jan 2024

CitrixBleed, or CVE-2023-4966, is now an infamous security vulnerability affecting Citrix NetScaler that allows attackers to hijack user sessions by stealing session authentication tokens.

Unfortunately, it has affected many NetScaler customers including Xfinity, which lost data for 36 million customers as a result of CitrixBleed.

There is no way to protect against CitrixBleed by configuring the NetScaler WAF to detect and block it. The vulnerability affects the NetScaler appliance itself, so you must update every instance and also kill all existing sessions to patch the vulnerability. This is far from ideal and many customers are looking for easier ways to protect themselves.

In this post, we will show how you can use an HAProxy Enterprise load balancer to protect against CitrixBleed by placing it in front of your NetScaler instance(s).

Background on the CitrixBleed exploit

Let’s explore how CitrixBleed works at a high level. If an attacker sends a request with an HTTP Host header over 24,812 characters in length to a vulnerable endpoint in NetScaler, the appliance will dump the memory contents back to the attacker. One of the vulnerable endpoints is the path for OAuth discovery: /oauth/idp/.well-known/openid-configuration.

The resulting memory dump might contain information about sessions currently saved in the appliance. This information could allow the attacker to access the NetScaler appliance without any additional credentials. 

This happens because the HTTP Host header generates the payload for NetScaler’s OAuth configuration but lacks sufficient checks to prevent buffer overflow. 

BleepingComputer has a good article with a description of the attack.

Using HAProxy to protect against CitrixBleed

By default, HAProxy blocks requests with headers that are too long, so it is not affected by this issue. We can therefore use HAProxy to protect NetScaler from the CitrixBleed vulnerability.

In this example, we will: 

  1. set up HAProxy Enterprise in front of a Citrix NetScaler

  2. protect your infrastructure by default with HAProxy Enterprise

  3. configure HAProxy Enterprise to accept larger headers for maximum compatibility with NetScaler

  4. configure HAProxy Enterprise to reject and log any HTTP Host header over 24k characters (safely below the 24,812 character limit)

  5. identify attacks using the observability tools in HAProxy Fusion Control Plane.

Set up HAProxy Enterprise in front of Citrix NetScaler

In this architecture, we use HAProxy Enterprise to receive and filter external traffic first, passing only safe and legitimate traffic to NetScaler.

In addition, I always add the HAProxy Fusion Control Plane because it greatly simplifies the setup and provides default observability, allowing us to review any possible attack attempts.

Protect your infrastructure by default with HAProxy Enterprise

HAProxy Enterprise’s lower limit for HTTP headers means the NetScaler is now protected from attacks hoping to exploit the CitrixBleed vulnerability. HAProxy will reject long HTTP headers with a 400 status code.

However, since we want to log and block the attacks, instead of rejecting them outright, we will create some additional configuration on HAProxy Enterprise.

Configure HAProxy Enterprise to accept larger headers

For maximum compatibility with NetScaler’s acceptance of larger HTTP headers, we will configure HAProxy Enterprise also to accept larger headers, overriding the default.

Global section

Increase the tune.bufsize value to allow larger headers. This is technically not needed at all, since HAProxy would just reject the long headers by default (with a 400 status code), but we want to be able to log the attacks as well, instead of rejecting them outright.
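The embedded configuration snippet isn't reproduced here; as a minimal sketch of the idea, where the exact buffer size is an assumption (it just needs to exceed the roughly 25KB attack payload):

    global
        # Raise the request buffer beyond the default 16KB so the
        # oversized Host header can be parsed, matched, and logged
        # rather than rejected outright with a 400.
        tune.bufsize 65536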

Configure HAProxy Enterprise to reject and log any HTTP Host header over 24k characters

While HAProxy protects against this attack by default, for logging purposes, we will create a specific deny rule that applies an easily identifiable status code when rejecting these requests.

Load Balancing configuration
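Likewise, as a sketch of what this section might look like, assuming HAProxy terminates HTTPS and forwards to a NetScaler backend (names and addresses are placeholders):

    frontend fe_main
        bind :443 ssl crt /etc/haproxy/certs/site.pem
        mode http
        # Reject Host headers over 24,000 characters with a distinctive 413.
        http-request deny deny_status 413 if { req.hdr(host),length gt 24000 }
        default_backend be_netscaler

    backend be_netscaler
        mode http
        server netscaler1 192.0.2.10:443 ssl verify none check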

Notice this line:
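From the sketch above:

    http-request deny deny_status 413 if { req.hdr(host),length gt 24000 }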

This line does two things:

  1. Detects any request where the Host header is over 24,000 characters long (this is close to the 24,812 buffer limit that NetScaler suffers from and large enough to consider a request as malicious) 

  2. Rejects those requests with the status code “413 Payload Too Large”.

In this case, we are protecting the Host header only, but HAProxy’s defaults protect the backend with other headers as well.

You can choose your own status code, but 413 is useful because it’s unique enough to be very easy to spot in your logs – see how we do that below.
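A quick way to exercise the rule from a shell, as a sketch (the hostname is a placeholder, and the command generates a 25,000-character Host header inline):

    # Expect "413" back from HAProxy; the request never reaches NetScaler.
    curl -s -o /dev/null -w "%{http_code}\n" \
      -H "Host: $(head -c 25000 /dev/zero | tr '\0' 'a')" \
      https://lb.example.com/oauth/idp/.well-known/openid-configuration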

Identify attacks using HAProxy Fusion Control Plane

HAProxy Fusion provides a single pane of glass for centralized management, monitoring, and automation of HAProxy Enterprise. With HAProxy Fusion installed, you can easily monitor for any attacks detected by your HAProxy Enterprise deployment.

Let’s use the HAProxy Fusion UI and look into the Request Explorer for requests where the response code is 413 and the URI is the vulnerable OAuth URL.

Here, we can easily identify the matching requests and see the request details and logs.

Summary

In this blog post, you’ve learned how to use HAProxy to block possible CitrixBleed attacks and how to monitor for any attacks using HAProxy Fusion.

December 2023 - CVE-2023-45539: HAProxy Accepts # as Part of the URI Component Fixed
Tue, 09 Jan 2024

We have received questions regarding CVE-2023-45539 issued in November 2023. The versions of our products released on Monday, 21 August 2023 to fix CVE-2023-40225 also fixed the vulnerability in CVE-2023-45539. Users who updated HAProxy in response to CVE-2023-40225 do not need to take further action.

HAProxy before 2.8.2 accepts # as part of the URI component, which might allow remote attackers to obtain sensitive information or have unspecified other impact upon misinterpretation of a path_end rule, such as routing index.html#.png to a static server.

In some cases the "path" sample fetch function incorrectly accepts '#' as part of the path component. This can in some cases lead to misrouted requests for rules that would apply on the suffix:

 use_backend static if { path_end .png .jpg .gif .css .js }

Nowadays most popular web servers such as Apache and NGINX will not accept invalid requests such as this, but other, non-compliant servers might.

Previously, HAProxy accepted # as part of the path by default and would reject it with the "normalize" rules. With this update, we reject it by default. However, it is still possible to accept it using "option accept-invalid-http-request"; if this applies to you, please reach out to Support as we would like to understand your use case.

If you are using an affected product, you should upgrade to the fixed version or apply the workaround configuration detailed below.

We would like to thank Seth Manesse and Paul Plasil who reported that the "path" sample fetch function incorrectly accepts '#' as part of the path component.

Affected versions and remediation

HAProxy Technologies released new versions of HAProxy, HAProxy Enterprise, HAProxy ALOHA, and HAProxy Kubernetes Ingress Controller on Monday, 21 August 2023. These releases patched the vulnerability described in CVE-2023-45539.

Users of the affected products should upgrade to the fixed version as soon as possible.

Users of Amazon AMIs and Azure VHDs: please note that cloud images have been updated with this patch.

Affected version → fixed version:

  • HAProxy 2.8 → 2.8.2

  • HAProxy 2.7 → 2.7.10

  • HAProxy 2.6 → 2.6.15

  • HAProxy 2.4 → 2.4.24

  • HAProxy 2.2 → 2.2.31

  • HAProxy 2.0 → 2.0.33

  • HAProxy Enterprise 2.7r1 → 2.7r1-300.867

  • HAProxy Enterprise 2.6r1 → 2.6r1-292.1120

  • HAProxy Enterprise 2.5r1 → 2.5r1-288.805

  • HAProxy Enterprise 2.4r1 → 2.4r1-288.1158

  • HAProxy Enterprise 2.2r1 → 2.2r1-257.1005

  • HAProxy Enterprise 2.0r1 → 2.0r1-250.1592

  • HAProxy ALOHA 15.0 → 15.0.6

  • HAProxy ALOHA 14.5 → 14.5.12

  • HAProxy ALOHA 14.0 → 14.0.17

  • HAProxy ALOHA 13.5 → 13.5.24

  • HAProxy ALOHA 12.5 → 12.5.23

  • HAProxy Kubernetes Ingress Controller 1.10 → v1.10.7

  • HAProxy Kubernetes Ingress Controller 1.9 → v1.9.10

  • HAProxy Kubernetes Ingress Controller 1.8 → not maintained anymore

  • HAProxy Kubernetes Ingress Controller 1.7 → not maintained anymore

  • HAProxy Enterprise Kubernetes Ingress Controller 1.9 → v1.9.12-ee1

  • HAProxy Enterprise Kubernetes Ingress Controller 1.8 → v1.8.12-ee7

  • HAProxy Enterprise Kubernetes Ingress Controller 1.7 → v1.7.12-ee4

Workaround

If you are not able to update right away, this behavior can be selectively configured using "normalize-uri fragment-encode" and "normalize-uri fragment-strip".
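As a sketch, either directive can be applied in a frontend; which one to choose depends on whether the fragment should be percent-encoded or dropped entirely:

    frontend www
        bind :80
        # Percent-encode any '#' so it cannot confuse path matching:
        http-request normalize-uri fragment-encode
        # Or, alternatively, strip the fragment from the URI entirely:
        # http-request normalize-uri fragment-strip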

Support

If you are an HAProxy Enterprise, HAProxy ALOHA, or HAProxy Enterprise Kubernetes Ingress Controller customer and have questions about upgrading to the latest version or applying the configuration workaround detailed above, please get in touch with the HAProxy support team.

]]> December 2023 - CVE-2023-45539: HAProxy Accepts # as Part of the URI Component Fixed appeared first on HAProxy Technologies.]]>
<![CDATA[Web App Security vs. API Security: Unified Approaches Reign Supreme]]> https://www.haproxy.com/blog/web-app-security-vs-api-security-unified-approaches-reign-supreme Wed, 20 Dec 2023 00:00:00 +0000 https://www.haproxy.com/blog/web-app-security-vs-api-security-unified-approaches-reign-supreme ]]> Every day, organizations face external threats as a consequence of exposing their services over the internet. An estimated 2,200+ attacks occur in a 24-hour period—or one attack every 39 seconds. Add the fact that an average data breach (one of many potential consequences of poor security) costs companies $4.45 million, and the need for strong security is impossible to ignore. 

Web application and API security is key to protecting your infrastructure, data, and users. Plus, bolstering security can increase application performance by maintaining high availability and blocking DoS attempts that would interrupt service.

While there are core differences between web apps and APIs that influence security implementation, a unified security strategy is crucial. In this blog, we'll discuss why both types of security appear different yet are inherently linked given evolving best practices. Unified approaches remain the most effective.

Why do web application security and API security look different?

At first glance, the overlap between web application and API security might not seem clear. Differences in clients, vulnerabilities, and OWASP categorization obscure the similarities.

Differences in clients

First come the differences in clients. Developers design web applications for humans first—frontend interactions are prioritized as part of the user experience. Conversely, APIs let two software components communicate with each other using requests and responses. These interactions occur on the backend, and while API calls often stem from user actions, creators design APIs for computer consumption.

Differences in vulnerabilities

Common web app vulnerabilities

Web application security involves protecting websites, applications, and any associated APIs from various threats. These risks multiply as your application scales and endpoints are added. 

The threat landscape is quite vast for web apps. It's also always evolving as technology changes and attackers find new exploits. It's the responsibility of developers, security teams, organizations, and their vendors to proactively counteract these threats. Here are some of the major security threats that web apps currently face:

  • DDoS and DoS attacks are some of the most common attacks today, and work by interrupting or overwhelming the resources supporting online services. DDoS attacks leverage a distributed network of devices that impact their targets by overloading server CPUs, consuming all available memory, or consuming all available bandwidth via excessive payloads. DoS attacks can achieve similar results but typically originate from a single source.

  • Cross-site scripting (XSS) attacks work by injecting malicious client-side scripts into trusted webpages. These scripts are typically JavaScript snippets, but they can also include HTML or any other code that executes on the frontend. Web browsers mistake the altered scripts for trusted, legitimate code. XSS attacks are especially dangerous since they can access session tokens, cookies, and other sensitive information.

  • SQL injection attacks use malicious SQL queries to impact databases. Attackers can read sensitive information from the database, alter information, and even delete it.

  • Zero-day vulnerabilities are unknown to the developers of the impacted web applications until disclosed, and they are potentially very serious because fixes aren't immediately available. These threats have sparked the emergence of Google's Project Zero and other cybersecurity watchdogs that help uncover critical application vulnerabilities.

  • Misconfigurations often sneak into production—through a lack of hardening, component mismanagement, unoptimized security settings, poor error handling, and more. 

  • Server-side request forgery (SSRF) attacks abuse server functionality to read or modify internal resources. Attackers can either use their own URLs or alter existing URLs the server-side code fetches or updates. This gives them privileged access to private networks, configurations, hidden databases, or the ability to send requests to internal services.

Common API vulnerabilities

API attack surfaces tend to be large. This is because APIs are accessible to a wide range of client devices, and therefore have more endpoints. API data also comes from a variety of sources, which makes validation that much more important in catching malicious code and preventing third-party abuse. Here are some of the major security threats that APIs face:

  • API abuse describes any malicious or accidental API usage that compromises sensitive systems, scrapes data, or overwhelms applications through request spam. In turn, bots can launch DDoS attacks and account takeovers en masse using stolen information. Abuse also occurs when friendly services (or API consumers) poll API servers too frequently and request excessive payloads.

  • Broken authentication describes any situation where the client identity verification process is faulty or compromised. For example, HTTP authentication, API key authentication, or OAuth authentication measures may not be working properly—enabling unrestricted access or privilege escalation within internal systems.

  • Broken authorization takes many forms, occurring at the object, object property, and function levels. Attackers can exploit vulnerable endpoints to make unauthorized calls (often to internal administrative APIs), manipulate objects, leak data, and destroy data via permissions escalations. 

  • Unrestricted resource consumption often results from DoS attacks, which in turn raises operational costs. CPU consumption, memory usage, bandwidth use, and storage needs are vital API performance indicators. When a server is maxing out its resources, this can indicate poor provisioning, flawed API design, or a security vulnerability.

OWASP tracks vulnerabilities differently

The OWASP Foundation has worked collaboratively with industry professionals to promote secure coding and identify threats for over 20 years. The Foundation releases an updated OWASP Top 10 critical security risks list every three or four years. 

Traditionally, these lists have focused on web application security. However, the explosive popularity of APIs has necessitated the creation of a separate Top 10 API Security Risks. The OWASP Foundation has recognized that while overlap does exist, API developers must account for unique vulnerabilities.

Perceptions of frontend vs. backend

We use the term "web apps" to describe any static or dynamic software running on a server that's rendered within a web browser. Many websites are also web apps at their core, given the dynamic and interactive components they contain. Accordingly, web apps rely on client-side code and server-side code that executes on load and in response to user inputs. 

While a web app's frontend components form an interactive interface (and are therefore top of mind), web app security conversations should always focus on backend security. This is where security teams do the majority of their work. 

Meanwhile, APIs let two software components communicate with each other using requests and responses. Each relies on specific protocols that govern how data moves over the network. Plus, APIs mainly handle east-west traffic, which describes traffic flowing between backend services. 

This contrasts with web applications, which normally handle north-south traffic—or client-server communication flowing in and out of a network. Backend security is therefore paramount for APIs, as there's no frontend component. 

However, these perceptions are potentially risky and undermine the inherent links between modern web app and API security. Let's explore these similarities and why they're so important.

Why we should think of web app security and API security in the same way

The need for strong security and hardening is the same for web apps and APIs. Companies want to avoid the downtime, data leaks, financial costs, and reputational damage that result from major security incidents. They also want to protect internal systems against intentional or accidental abuse. 

Why the urgency? Fifty-eight percent of respondents to Traceable's 2023 State of API Security Report stated that APIs extend attack surfaces across every layer of their tech stacks. Given the increasing dependence of web apps on APIs, these protections are essential. Teams need to limit consumption, prevent abuse, and stop intrusion attempts in their tracks before attacks become impactful.

Principles

As a result, security principles are similar. Teams want to protect critical backend computing resources like available CPU cores and memory—or networking capacity like available bandwidth. Then comes the actual data behind every web application and API, which sits (ideally) within a safely guarded database governed by role-based access control (RBAC) and other restrictions.

Organizational structures

Beyond that, the same team often works on both the web app backend and the API, since these two components are so deeply intertwined. Both web app security and API security are almost always backend-focused. This is a reflection of modern microservices application architectures, which tightly couple web app frontends with backend APIs. These can't be separated. Consequently, it's only logical to follow a more unified approach to security.

Vendor solutions

And vendors are evolving accordingly. It's becoming increasingly common to offer packaged solutions that bolster security on web app and API backends. This strategy also replaces piecemeal security approaches that can otherwise form silos and further increase complexity. 

How to provide unified security for web applications and APIs

Unified strategies

Best practices now dictate that a unified strategy for web application and API security is essential. The following strategies apply to both web apps and APIs:

  • Secure application architecture – Design your applications and APIs to negate threats tailored to their environments or use cases. This can include network isolation, identity and access management, and any other principles promoting security fundamentals.

  • Coding best practices – Write your code to be readable, concise, efficient, documented, testable, and as vulnerability-free as possible. Principles like input validation, sanitization, secure data transmission, regular patching, and least privilege are key. So too are authentication and authorization. OWASP maintains its referential Authentication Cheat Sheet, while the OpenAPI Specification is a great starting point for API developers. 

  • Shift-left security – Start testing for bugs and vulnerabilities as early as possible in the software development lifecycle before they can reach production. This shortens the QA process while cutting costs, since production bugs are six times more expensive to fix than those in the design phase. 

  • Threat intelligence – Incorporate systems and solutions that collect and analyze usage data. This helps teams understand a threatening actor's motives, targets, and behaviors. 

  • Traffic routing, monitoring, and filtering – Implement a load balancer and API gateway to effectively distribute client traffic to available servers, avoiding overloads. Detect anomalous behaviors and suspicious requests and block them in conjunction with ACLs. 

  • Incident response – When a problem (breach, slowdown, outage) occurs, take quick steps to identify the source and implement fixes that cause minimal disruptions. Leverage automation tools and AI to determine the best remediation procedures.

Unified reporting and responsibility

Development, security, and operations (DevSecOps) teams have overlapping goals and responsibilities when it comes to security, but unfortunately, many companies treat them as separate entities. These teams absolutely must be on the same page to succeed in their security mission. The following tenets hold true:

  • Avoid silos – Teams need to communicate and share the responsibility of security for web apps and APIs. Don't let one business unit shoulder the burden of security, and welcome ideas and solutions that encourage collaboration. This requires cultural buy-in. 

  • Avoid fragmented data and systems – Centralization is key to keeping DevSecOps teams informed and unified in their approach. Usage metrics, threat intelligence data, and more should be shared and easily accessible from one location. Internal systems should be shared, barring relevant access restrictions, and user-friendly.

  • Avoid blame shifting – While many organizations focus on the "who" in a security incident, time is better spent preventing future issues via a systematic approach. Conduct an internal audit to uncover infrastructure or policy weaknesses and address them.

Unified security solutions

Why piece together a patchwork suite of security products when one will suffice? An ideal security solution should be chock full of features that counteract the top threats facing web apps and APIs. These features include the following:

  • DDoS protection – Prevent globally-coordinated attacks, botnets, and traffic spikes from sapping CPU, memory, and bandwidth. Preventing downtime is crucial to maintaining high availability. 

  • Bot management – Lock out harmful non-human traffic and prevent abuse, while still allowing beneficial crawlers such as Googlebot and Bingbot. Use behavioral indicators, weighting, and scoring to identify bad traffic before it reaches backend servers. 

  • Rate limiting – Control how often a client can make API calls, and at what volumes, to prevent purposeful or accidental abuse. This prevents users from exhausting system resources through mechanisms such as DoS attacks. Web apps can also throttle user activity and limit request frequency to combat bots and deter malicious behavior (see the sketch after this list).  

  • Web application firewall (WAF) – Harness an intermediate layer of security that protects web apps and APIs against cross-site scripting, SQL injection attacks, and requests with malicious payloads. Set customizable rules to allow only approved traffic through to your servers. 

  • Observability – Wherever possible, leverage a “single pane of glass” that allows you to monitor and manage your entire infrastructure. Black box solutions leave you operating in the dark. Observability lets you make decisions based on real-time data and performance indicators.
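To illustrate the rate limiting capability above, here is a minimal sketch of an HAProxy configuration that tracks each client IP address and rejects clients exceeding a threshold; the names, sizes, and limits are illustrative assumptions, not a prescribed policy:

 frontend fe_main
     bind :80
     # Track each client IP in a stick table holding a 10-second request rate
     stick-table type ip size 100k expire 30s store http_req_rate(10s)
     http-request track-sc0 src
     # Reject clients making more than 20 requests per 10 seconds
     http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }
     default_backend servers

 backend servers
     server s1 192.0.2.10:80 check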

HAProxy provides unified security for applications and APIs

HAProxy has your security needs covered in any deployment environment—whether you're running complex web applications or high-volume APIs (or both, in all likelihood!). As a high-performance software load balancer, reverse proxy, and API gateway, HAProxy Enterprise uses a consistent set of rules and interfaces and a multi-layered approach to distribute good traffic to available servers while blocking bad traffic from getting through. 

HAProxy Enterprise includes a high-performance WAF, DDoS protection, bot management, rate limiting, and other features that address common security needs for web apps and APIs.

Our unified security approach also brings teams together. HAProxy Fusion Control Plane gives DevSecOps teams one home to operate within. Teams can manage their web app and API security policies, and their distributed load balancer and WAF layer, from a single graphical interface. They can also integrate the HAProxy Fusion API with their automation and security information and event management (SIEM) systems.

Want to learn more about HAProxy security solutions? Check out our dedicated solution page, or dive deeper with our Multi-Layered Security Webinar.

]]> Web App Security vs. API Security: Unified Approaches Reign Supreme appeared first on HAProxy Technologies.]]>
<![CDATA[Rate limiting based on AWS VPC ID]]> https://www.haproxy.com/blog/rate-limiting-based-on-aws-vpc-id Thu, 14 Dec 2023 00:00:00 +0000 https://www.haproxy.com/blog/rate-limiting-based-on-aws-vpc-id ]]> Managing incoming web traffic for your applications is essential to ensuring optimal performance, preventing abuse, and maintaining the security of your cloud infrastructure. 

To accomplish this, one of the tools HAProxy Enterprise users have at their disposal is rate limiting—the practice of preventing clients from making too many requests and using system resources unfairly.

In this blog post, we show how you can implement rate limiting based on the ID of the Virtual Private Cloud in Amazon Web Services using HAProxy Enterprise.

Understanding AWS Virtual Private Clouds

What is a VPC?

A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. A VPC runs in an isolated environment from other virtual networks in the AWS Cloud and is required when creating Elastic Compute Cloud (EC2) instances.

Much like a traditional network that you would operate in your own data center, a VPC can host web applications and APIs. Running your services on a VPC means reaping the benefits of cloud environments—flexibility, scalability, and resource allocation.

In some situations, customers configure VPC Peering, which connects two different VPCs to open a network between them. Sometimes one of these VPCs is beyond your control, and the peered VPCs may have duplicate network address spaces. This means that the traditional way of rate limiting based on source IP will not work. In those cases, you can use the VPC ID (a unique ID used to identify and manage the cloud network within your AWS account) to configure rate limiting in HAProxy Enterprise.

Understanding rate limiting

Challenges

Managing request rates in your AWS Virtual Private Cloud is crucial to address several challenges:

  • Unrestricted access may lead to users consuming more than their fair share of resources, impacting the overall performance of your applications.

  • Uncontrolled requests can expose vulnerabilities and compromise system security.

  • Unmonitored usage can result in unexpected spikes in cloud expenses.

HAProxy rate limiting

HAProxy Enterprise functions as a reverse proxy, offering rate limiting among a suite of other features to manage the rate of requests flowing into your VPC through the load balancer itself. Its flexible deployment and elastic scalability mean it can easily run in AWS while adapting to sudden changes in traffic demands. 

HAProxy Enterprise's rate limiting lets you combine ACLs, stick tables, and maps to implement a solution with granular control and dynamic adaptability.

HAProxy Enterprise rate limiting can be used to protect against DDoS and brute force attacks, enforce fair resource access, optimize backend server performance, and control cloud costs.

For users who want to take their rate limiting in large-scale deployments to the next level, consider HAProxy Fusion Control Plane and HAProxy Enterprise for high performance and simplified management in your AWS environments.

HAProxy Enterprise configuration overview

The configuration guide below outlines setting up rate limiting with HAProxy Enterprise for AWS VPC. This is accomplished by extracting the VPC ID from the request to enforce rate limiting.

Any request that exceeds the specified rate limit receives an HTTP status 429 - Too Many Requests response. These denied requests are tracked and stored in stick tables for each VPC ID, along with an aggregated stick table for monitoring overall request rates.

Let’s get started.

Step one: define rate limits

First, define the rate limits in rates.map. These limits are keyed by VPC ID and path.

Each line of the map pairs a key (the VPC ID concatenated with a path prefix) with a requests-per-minute limit. An example of what this could look like would be (the VPC ID below is a hypothetical placeholder):
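 # Key: <vpc-id><path-prefix>    Value: requests per minute
 vpce-0a1b2c3d4e5f67890/api 30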

In this case, requests for a URL that begins with /api and that originate from the given VPC will be limited to 30 requests per minute. The "begins with" part is defined later, when we use the map_beg function. The "per minute" time range is defined later, when we create the stick tables.

Step two: extract VPC ID

In your frontend, configure the load balancer to accept the Proxy Protocol by adding the accept-proxy argument to the bind line:
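A minimal sketch, assuming a frontend named fe_main listening on port 443 (names, ports, and TLS details are illustrative):

 frontend fe_main
     # accept-proxy tells HAProxy to expect the Proxy Protocol header from AWS
     bind :443 accept-proxy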

Extract the VPC ID from the TCP connection using fc_pp_tlv and store it in the variable txn.vpce_id:
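A sketch of the extraction; the bytes(1) offset, an assumption based on the TLV format described below, skips the one-byte subtype that AWS places before the value:

 http-request set-var(txn.vpce_id) fc_pp_tlv(0xEA),bytes(1)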

The fc_pp_tlv fetch method reads Type-Length-Value (TLV) vectors from a Proxy Protocol header at the start of a TCP connection. AWS uses the Proxy Protocol to send a TLV that contains the VPC ID. To read it, call fc_pp_tlv with the hexadecimal value 0xEA, according to the AWS documentation on the Proxy Protocol. Then use the bytes converter to extract the TLV value containing the VPC ID.

Step three: construct rate limiting keys

Construct keys for rate limiting and request tracking using the extracted VPC ID. 
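A sketch of how these keys could be built; because the concat converter references variables rather than fetches, the path and source address are copied into variables first (all variable names are illustrative):

 # Capture the request path and client address for use with concat
 http-request set-var(txn.path) path
 http-request set-var(txn.src) src
 # Rate limit lookup key: <vpc-id><path>
 http-request set-var(txn.vpcratekey) var(txn.vpce_id),concat(,txn.path)
 # Request tracking key: <vpc-id><path><client-ip>
 http-request set-var(txn.vpctrackkey) var(txn.vpce_id),concat(,txn.path),concat(,txn.src)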

The first key, stored in the variable txn.vpcratekey, concatenates the VPC ID and the requested URL path. Its job is to map back to one of the rate limits in the rates.map file. The second key, txn.vpctrackkey, concatenates the VPC ID, the requested URL path, and the client's source IP address. Its job is to keep track of each client's request rate for a given URL path and origin VPC.

Step four: find rate limit in the map file

Next, configure HAProxy Enterprise to look up the rate limit for the current request in the rates.map file using the key. In this example, our rates.map file is in the /var/lib/dataplaneapi/storage/maps directory because we're using HAProxy Fusion, and that's where it stores map files. Adjust this line for where you've stored the rates.map file.
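A sketch of the lookup; the second argument to map_beg is a default returned when no entry matches (adjust it to your policy), and txn.rate_limit is an illustrative variable name:

 http-request set-var(txn.rate_limit) var(txn.vpcratekey),map_beg(/var/lib/dataplaneapi/storage/maps/rates.map,0)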

Step five: configure backends for request rate tracking

Configure the backend with a stick table to track request rates based on the virtual private cloud ID. In our example, we are using HAProxy Fusion Control Plane to aggregate rate data automatically. In our configuration, we will write data to the ratebyvpc table and read from the ratebyvpc.agg table. HAProxy Fusion automatically aggregates the limits from all HAProxy Enterprise instances into the aggregated table.
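A sketch of the two table definitions, assuming string-typed keys and the one-minute rate window mentioned in step one; the key length, table size, and expiration are illustrative, and the .agg table is populated by HAProxy Fusion:

 backend ratebyvpc
     stick-table type string len 128 size 100k expire 5m store http_req_rate(1m)

 backend ratebyvpc.agg
     stick-table type string len 128 size 100k expire 5m store http_req_rate(1m)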

Step six: track current request

Then we’ll begin monitoring each client's request rate by using the key for request tracking, which you'll recall is a combination of the VPC ID, requested URL path, and the client's source IP address. This information will be stored in the ratebyvpc stick table.

Add to your frontend:
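A sketch of the tracking rule, using the tracking key built in step three:

 # Count this request against the <vpc-id><path><client-ip> entry in ratebyvpc
 http-request track-sc0 var(txn.vpctrackkey) table ratebyvpc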

Step seven: find current rate

Find the client's current rate for the VPC and path combination from the aggregated data in the ratebyvpc.agg table.
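A sketch that reads the aggregated rate into a variable (txn.request_rate is an illustrative name):

 http-request set-var(txn.request_rate) var(txn.vpctrackkey),table_http_req_rate(ratebyvpc.agg)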

Step eight: check for rate abuse and deny requests

Use an ACL to check whether the difference between the rate limit and the current rate is less than 0. If it is, deny the request with a 429 response.
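A sketch of the check, using the illustrative variable names from the previous steps:

 # Deny when (rate limit - current rate) drops below zero
 acl rate_abuse var(txn.rate_limit),sub(txn.request_rate) lt 0
 http-request deny deny_status 429 if rate_abuse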

Conclusion

By following these steps to implement HAProxy Enterprise’s rate limiting, you can effectively mitigate challenges like resource overconsumption, security vulnerabilities, and unexpected cost spikes in your AWS VPCs.

]]> Rate limiting based on AWS VPC ID appeared first on HAProxy Technologies.]]>
<![CDATA[Scalable Load Balancing & Security Made Simple at AWS re:Invent 2023]]> https://www.haproxy.com/blog/scalable-load-balancing-and-security-made-simple-on-aws Wed, 13 Dec 2023 00:00:00 +0000 https://www.haproxy.com/blog/scalable-load-balancing-and-security-made-simple-on-aws ]]> It seemed like only yesterday that we were in Las Vegas for Black Hat USA, but we soon found ourselves back in the vibrant city for AWS re:Invent 2023. This time, we were a gold sponsor for Amazon’s global cloud-computing event, showcasing how HAProxy eliminates the challenges associated with load balancing large-scale deployments on AWS.

While our booth was brimming with attendees on the show floor, we also took to the stage with our Lightning Talk, “Scalable load balancing and security made simple on AWS.” 

In this presentation, Jakub Suchy, Director of Sales Engineering at HAProxy Technologies, highlighted the challenges that arise with high-scale load balancing on AWS, including increasing costs, latency issues, and the complexity of managing sprawl. Our solution: consolidating multiple layers into a single high-performance load balancing layer, enhanced by centralized management, monitoring, and automation with HAProxy Fusion Control Plane.

Watch the full presentation below, and learn how HAProxy Fusion and HAProxy Enterprise enable simple and scalable load balancing and security on AWS, ultimately reducing latency and improving operational efficiency.

]]> Scalable Load Balancing & Security Made Simple at AWS re:Invent 2023 appeared first on HAProxy Technologies.]]>