February 2026 — CVE-2026-26080 and CVE-2026-26081: QUIC denial of service
https://www.haproxy.com/blog/cves-2026-quic-denial-of-service (Thu, 12 Feb 2026 09:00:00 +0000)

The latest versions of HAProxy Community, HAProxy Enterprise, and HAProxy ALOHA fix two vulnerabilities in the QUIC library. These issues could allow a remote attacker to cause a denial of service. The vulnerabilities involve malformed packets that can crash the HAProxy process through an integer underflow or an infinite loop.

If you use an affected product with the QUIC component enabled, you should update to a fixed version as soon as possible. Instructions are provided below on how to determine if your HAProxy installation is using QUIC. If you cannot yet update, you can temporarily work around this issue by disabling the QUIC component.

Vulnerability details

  • CVE Identifiers: CVE-2026-26080 and CVE-2026-26081

  • CVSSv3.1 Score: 7.5 (High)

  • CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

  • Reported by: Asim Viladi Oglu Manizada

Description

Two separate issues were found in how HAProxy processes QUIC packets:

  • Token length underflow (CVE-2026-26081): This affects versions 3.0 (ALOHA 16.5) and later. A remote, unauthenticated attacker can cause a process crash. This happens by sending a malformed QUIC Initial packet that causes an integer underflow during token validation.

  • Truncated varint loop (CVE-2026-26080): This affects versions 3.2 (ALOHA 17.0) and later. An attacker can cause a denial of service. By sending a QUIC packet with a truncated varint, the frame parser enters an infinite loop until the system watchdog terminates the process.

Repeated attacks can cause a sustained denial of service for your environment.

Affected versions and remediation

HAProxy Technologies released new versions of its products on Thursday, February 12, 2026, to patch these vulnerabilities.

CVE-2026-26081 (Token length underflow)

  • HAProxy Community / Performance Packages: affected versions 3.0 and later; fixed in 3.0.16, 3.1.14, 3.2.12, and 3.3.3

  • HAProxy Enterprise: affected versions 3.0 and later; fixed in hapee-lb-3.0r1-1.0.0-351.929, hapee-lb-3.1r1-1.0.0-355.744, and hapee-lb-3.2r1-1.0.0-365.548

  • HAProxy ALOHA: affected versions 16.5 and later; fixed in 16.5.30, 17.0.18, and 17.5.16

CVE-2026-26080 (Truncated varint loop)

  • HAProxy Community / Performance Packages: affected versions 3.2 and later; fixed in 3.2.12 and 3.3.3

  • HAProxy Enterprise: affected versions 3.2 and later; fixed in hapee-lb-3.2r1-1.0.0-365.548

  • HAProxy ALOHA: affected versions 17.0 and later; fixed in 17.0.18 and 17.5.16

Test if you’re affected

Users of affected products can determine if the QUIC component is enabled on their HAProxy installation and whether they are affected:

For a single installation (test a single config file):

grep -iE "quic" /path/to/haproxy/config && echo "WARNING: QUIC may be enabled" || echo "QUIC not enabled"

For multiple installations (test each config file in folder):

grep -irE "quic" /path/to/haproxy/folder && echo "WARNING: QUIC may be enabled" || echo "QUIC not enabled"

A response containing “QUIC may be enabled” indicates your HAProxy installation is potentially affected, and you need to manually review and disable any QUIC listeners. The fastest method is to use the global keyword tune.quic.listen off (for version 3.3) or no-quic (3.2 and below).
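For reference, here is a minimal sketch of that global-section mitigation (keep your existing global settings and use only the keyword that matches your version):

global
    # Temporary mitigation only; upgrading to a fixed version is the real fix.
    # On HAProxy 3.2 and below:
    no-quic
    # On HAProxy 3.3, use this instead:
    # tune.quic.listen off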

Update instructions

Users of affected products should update immediately by pulling the latest image or package for their release track.

  • HAProxy Enterprise users can find update instructions in the customer portal.

  • HAProxy ALOHA users should follow the standard firmware update procedure in the documentation.

  • HAProxy Community users should compile from the latest source or update via their distribution's package manager or available images.

Support

If you are an HAProxy customer and have questions about this advisory or the update process, please contact our support team via the Customer Portal.

Zero crashes, zero compromises: inside the HAProxy security audit
https://www.haproxy.com/blog/haproxy-security-audit-results (Mon, 09 Feb 2026 15:00:00 +0000)

An in-depth look at the recent audit by Almond ITSEF, validating HAProxy’s architectural resilience and defining the shared responsibility of secure configuration.

Trust is the currency of the modern web. When you are the engine behind the world’s most demanding applications, "trust" isn't a marketing slogan—it’s an engineering requirement.

At HAProxy Technologies, we have always believed that high performance must never come at the cost of security or correctness. But believing in your own code isn’t enough. You need objective, adversarial validation. That's why we were glad to hear that ANSSI, the French cybersecurity agency, had commissioned a rigorous security audit of HAProxy (performed by Almond ITSEF) as part of its efforts to support the security assessment of open source software. The audit focused on source code analysis, fuzzing, and dynamic penetration testing.

The results are in. After weeks of intense stress testing, code analysis, and fuzzing, the auditors reached a clear verdict: HAProxy 3.2.5 is a mature, secure product that is reliable for production.

While we are incredibly proud of the results, we are equally grateful for the "operational findings" and the recommendations that highlight the importance of configuration in security. Here is a transparent look at what the auditors found and what it means for your infrastructure.

Unshakeable stability: 25 days of fuzzing, zero crashes

The most significant takeaway from the audit was the exceptional stability of the HAProxy core. The auditors didn't just review code; they hammered it.

The team performed extensive "fuzzing" by feeding the system massive amounts of malformed, garbage, and malicious data. They primarily targeted the HAProxy network request handling and internal sockets. This testing went on for days, and in the case of internal sockets, up to 25 days.

The result? Zero bugs. Zero crashes.

For software that manages mission-critical traffic, handling millions of requests per second, this level of resilience is paramount. It confirms that the core logic of HAProxy is built to withstand not just standard traffic, but the chaotic and malicious noise of the open internet.

Validating the architecture

Beyond the stress tests, the audit validated several key architectural choices that differentiate HAProxy from other load balancers.

Process isolation

The report praised HAProxy’s "defense-in-depth" strategy. We isolate the privileged "master" process (which handles administrative tasks, spawns processes, and retains system capabilities) from the unprivileged "worker" process (which handles the actual untrusted network traffic). 

By strictly separating these roles, HAProxy ensures that even if a worker were compromised by malicious traffic, the attacker would find themselves trapped in a container with zero system capabilities.

Custom memory management

Sometimes, we get asked why we use custom memory structures (pools) rather than standard system libraries (malloc). The answer has always been performance. Our custom allocators eliminate the locking overhead and fragmentation of general-purpose libraries, allowing for predictable, ultra-low latency.

However, custom code often introduces risk. That is why this audit was so critical: static analysis confirmed that our custom implementation is not just faster, but robust and secure, identifying no memory corruption vulnerabilities.

Clean code

The auditors found zero vulnerabilities in the HAProxy source code itself. The only vulnerability identified was in a third-party dependency (mjson), which had already been patched in a subsequent update and shared with the upstream project.

A case for shared responsibility

No software is perfect, and no audit is complete without findings. The report highlighted risks that lie not in the software’s flaws, but in operational configuration.

This brings us to a crucial concept: Shared Responsibility. We provide a bulletproof engine, but the user sits in the driver's seat. The audit highlighted a few areas where "default" behaviors prioritize compatibility over strict security, requiring administrators to be intentional with their config.

We believe in transparency, so we are highlighting these operational recommendations to provide guidance, much of which experienced HAProxy users will recognize as standard configuration best practice.

1. The ACL "bypass" myth

The auditors noted that Access Control Lists (ACLs) based on URL paths could be bypassed using URL encoding (e.g., accessing /login by sending /log%69n). While this may appear to be a security gap, it’s actually a result of HAProxy’s commitment to transparency. As a proxy, HAProxy’s primary job is to deliver traffic exactly as it’s received. Since a backend server might technically treat /login and /log%69n as distinct resources, HAProxy doesn't normalize them by default to avoid breaking legitimate, unique application logic.

If your backend decodes these characters and you need to enforce stricter controls, you have three main paths forward:

  1. Adopt a positive security model: Instead of trying to block "bad" paths (which are easy to alias), switch to an "Allow" list that only permits known-good URLs and blocks everything else.

  2. Manual normalization: For specific use cases, you can use the normalize-uri directive to choose which types of normalization to apply to percent-encoded characters before they hit your ACL logic, depending on how your application and platform interpret them (see the sketch after this list).

  3. Enterprise WAF: If you prefer "turnkey" protection, the HAProxy Enterprise WAF automatically handles this normalization, sitting in front of the logic to decode payloads safely.
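Here is a minimal, hypothetical sketch of options 1 and 2 combined (the frontend name, certificate path, allowed paths, and backend are placeholders; pick only the normalizers that match how your application actually interprets URIs):

frontend fe_main
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # Option 2: normalize the URI before any ACL is evaluated
    http-request normalize-uri percent-decode-unreserved
    http-request normalize-uri path-merge-slashes
    http-request normalize-uri path-strip-dotdot
    # Option 1: positive security model; only known-good paths are allowed
    acl allowed_path path_beg /login /static/ /api/
    http-request deny deny_status 403 if !allowed_path
    default_backend be_app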

The positive security model is a standard best practice and the only safe way to deal with URLs. The fact that the auditors unknowingly adopted an unsafe approach here made us think about how to emit new warnings when detecting such bad patterns, maybe by categorizing actions. This ongoing feedback loop within the community helps us continue to improve and refine a decades-old project.

2. Stats page access

The report noted that the Stats page uses Basic Auth and, if not configured with TLS, sends credentials in cleartext. It also reveals the HAProxy version number by default.

It’s important to remember that the Stats page is a legacy developer tool designed to be extremely lightweight. It isn't enabled by default, and its simplicity is a feature, not a bug. It’s meant to provide quick visibility without heavy dependencies. We appreciate the comment on the relevance of displaying the version by default. This is historical, and there's an option to hide it, but we're considering switching the default to hide it and provide an option to display it, as it can sometimes help tech teams quickly spot anomalies.

The stats page doesn’t reveal much truly sensitive data by default, so if you want to expose your stats like many technical sites do, including haproxy.org, you can easily enable it. However, if you configure it to expose information that you consider sensitive (e.g., IP addresses), then you should absolutely secure it.

The page doesn't natively handle advanced encryption or modern auth, so if you need to access it, follow these best practices (a minimal configuration sketch follows this list):

  • Use a strong password for access

  • Wrap the Stats page in a secured listener that enforces TLS and rate limiting.

  • Only access the page through a secure tunnel like a VPN or SSH.
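A minimal sketch of such a hardened stats listener (the port, certificate path, credentials, and management subnet are placeholders):

listen stats
    # Serve the stats page only over TLS on a dedicated port
    bind :8404 ssl crt /etc/haproxy/certs/stats.pem
    stats enable
    stats uri /stats
    stats hide-version
    # Strong credentials; better yet, only reach this through a VPN or SSH tunnel
    stats auth admin:use-a-long-random-password
    # Optionally restrict access to a management network
    acl mgmt src 10.0.0.0/8
    http-request deny unless mgmt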

For larger environments, HAProxy Fusion offers a more modern approach. Instead of checking individual raw stats pages, HAProxy Fusion provides a centralized, RBAC-secured control plane. This gives you high-level observability across your entire fleet.

3. Startup stability

The auditors identified that specific malformed configuration values (like tune.maxpollevents) could cause a segmentation fault during startup.

While these were startup issues that did not affect live runtime traffic, they were identified and fixed immediately, and the fix was released the week following the preliminary report. This is the power of open source and active maintenance—issues are found and squashed rapidly.

Power, trust, and freedom

This audit reinforces the core pillars of our approach:

  • Power: Power is not just speed, but also the ability to withstand pressure. The exhaustive fuzzing tests prove that HAProxy is an engine built not just to run fast, but to run without disruption.

  • Trust: The fact that the auditors found zero vulnerabilities in the source code is a massive validation, but it isn't a coincidence. It is a testament to our Open Source DNA. Trust is earned through transparency, peer review, the continuous scrutiny of a global community, and professional security researchers.

  • Freedom: The "findings" regarding configuration remind us that HAProxy offers infinite flexibility. You have the freedom to configure it exactly as your infrastructure needs, but that freedom requires understanding your configuration choices.

Conclusion: deploy with confidence

The audit concludes that HAProxy 3.2 is "very mature" and "reliable for production".

We are committed to maintaining these high standards. We don't claim our code is flawless (no serious developer does). But we do claim that our focus on extreme performance never compromises our secure coding practices.

Next steps for users:

  • Upgrade: We recommend all users upgrade to the latest HAProxy 3.2+ to benefit from the latest hardening and fixes.

  • Review: Audit your own configurations. Are you using "Deny" rules on paths? Consider switching to the standard positive security model.

  • Explore: If the complexity of manual hardening feels daunting, explore HAProxy One. It provides the same robust engine but adds the guardrails to simplify security at scale.

How Dartmouth avoided vendor lock-in and implemented LBaaS with HAProxy One
https://www.haproxy.com/blog/how-dartmouth-implemented-lbaas-with-haproxy-one (Thu, 05 Feb 2026 00:00:00 +0000)

History is everywhere at Dartmouth College, and while the campus is steeped in tradition, its IT infrastructure can’t afford to get stuck in the past. In an institution where world-class research and undergraduate studies intersect, technology must be fast, invisible, and – above all – reliable.

That reliability was put to the test when Dartmouth’s load balancing vendor was acquired twice in five years, as Avi Networks moved to VMware and VMware moved to Broadcom. Speaking at HAProxyConf 2025, Dartmouth infrastructure engineers Curt David Barthel and Kevin Doerr described how they began to see what they called “rising license costs without apparent value, and declining vendor support subsequent to acquisition after acquisition.”

It was clear that they were beginning to pay more for less — and it was time for a change.

After conducting thorough research, interviews, and demonstrations, Dartmouth settled on the best path forward: HAProxy One, the world’s fastest application delivery and security platform. 

For Dartmouth, it wasn’t just a migration; it was an opportunity to innovate on its existing infrastructure. They leveraged the platform’s deep observability and automation to architect a custom Load Balancing as a Service (LBaaS) solution.

Today, that platform is fully automated and self-service, making life easier for 50+ users across various departments and functions. Dartmouth’s journey serves as a technical blueprint for those hoping to make the switch from Avi to HAProxy One.

Was history repeating itself?

As an undergraduate at Dartmouth, you’re likely to be taught that history doesn’t repeat itself — but sometimes it rhymes. 

Infrastructure changes were not new to the Dartmouth IT team. For roughly 20 years, the team managed its infrastructure using F5 Global and Local Traffic Managers. Later, they layered a software load balancing solution from Avi Networks on top of their F5 environment.

However, the landscape shifted as Avi was acquired by VMware, which was subsequently acquired by Broadcom. The changes led to rising licensing costs and declining vendor support. The solution began to feel like a closed ecosystem, forcing Dartmouth into a state of vendor lock-in that limited its architectural freedom.

Ultimately, the team identified three "deal-breakers" that made their legacy environment unsustainable:

  1. Vendor lock-in: Today’s multi-cloud and hybrid cloud environments demand a platform-agnostic infrastructure. Yet, Dartmouth’s existing software was moving in the opposite direction — becoming increasingly tied to a specific vendor's ecosystem (VMware).

  2. Rising costs & constrained scaling: The licensing model was no longer aligned with Dartmouth’s needs. Increases in traffic often triggered disproportionately high costs, while complex licensing tiers made it difficult for the team to scale or innovate creatively.

  3. Automation roadblocks: To provide true "Load Balancing as a Service," the team needed a robust, template-driven workflow. The existing API didn't support the level of deep automation and auditability required to offer users a truly self-service experience.

Meeting new criteria

The Dartmouth team followed a dictum from the famous UCLA basketball coach, John Wooden: “Be quick — but don’t hurry.” 

The team had established a high level of service for its users, and they wanted to maintain and improve on that. So they set out their requirements carefully, including:

  • Comprehensive load balancing: Robust support for both L4 and L7 traffic.

  • API-first control plane: A solution that offers total data plane management through a modern, programmable interface.

  • Deep automation: Built-in features to support a GitOps-style workflow.

  • Modern orchestration: Native service discovery for Kubernetes environments.

  • Extensibility: The ability to customize and extend the platform to meet unique institutional needs.

To find the right partner, Dartmouth conducted an extensive evaluation of top vendors, including product demonstrations and customer reference interviews. HAProxy stood out for “less grandiose marketing” and the ability to run on-premises, in addition to cloud-native implementation.

HAProxy One met every current requirement and supported future plans. The platform was found to be cost-effective and to feature excellent support. 

"We interviewed many vendors, and HAProxy came out on top, particularly with the top-notch support model. It's beyond remarkable — it's unparalleled. Having that wealth of expertise is absolutely invaluable."

Building Rome in a few days

To replace their legacy environment, the Dartmouth team didn't just install new software; they engineered a robust, automated platform. 

The deployment was centered around HAProxy Fusion Control Plane, integrating essential networking components like IP address management (IPAM), global server load balancing (GSLB), and the virtual router redundancy protocol (VRRP). To maintain consistency with their existing operations, they also implemented custom TCP and HTTP log formats using the common log format (CLF).

The team then worked with their existing configuration manifests, in YAML format, which are sent to a Git repo to specify each user’s configuration options. This is all driven by a master Ansible playbook. 

At the heart of this new system is a GitOps-driven workflow that makes infrastructure changes nearly invisible to the end user. The process follows a highly structured pipeline:

  1. User input: Power users submit their requirements through a simple, standardized front end.

  2. Manifest creation: These requirements are captured in YAML-formatted configuration manifests and committed to a Git repository.

  3. Automation pipeline: Each commit triggers a Jenkins pipeline that launches a master Ansible playbook.

  4. Configuration generation: Ansible uses Jinja2 templates to transform the YAML data into a valid, human-readable HAProxy configuration file.

  5. Centralized deployment: The playbook authenticates to the HAProxy Fusion Control Plane via API and pushes the configuration to HAProxy Fusion as a single, centralized update.

  6. Data plane synchronization: HAProxy Fusion then distributes and synchronizes the configuration across the entire fleet of HAProxy Enterprise data plane nodes, ensuring consistent, high-availability deployment at scale.

This modular approach provides Dartmouth with a "plug-and-play" level of flexibility. While the team is not deploying a web application firewall (WAF) at go-live, the framework is already in place to support it. When they are ready to activate the HAProxy Enterprise WAF, the process will be streamlined. Once the initial migration is complete, adding security layers will be as simple as activating a pre-tested template.

Observability without complexity

A big win for the IT team was the clear separation of responsibilities. Users are granted read-only access to HAProxy Fusion, allowing them to track the status of their requests and view their specific configurations in real time. Meanwhile, the IT team retains central control over the control plane, ensuring security and stability across the entire institution.

With every configuration change fully logged and auditable, troubleshooting has shifted from a manual "guessing game" to a data-driven process. Combined with HAProxy’s highly responsive support, Dartmouth now has a load-balancing environment that is not only faster and more cost-effective but significantly easier to manage.

Keys to the new city

Sometimes it’s seemingly small things that turn out to be crucial to success. What made Dartmouth’s transition to HAProxy work so well? 

The team manages more than 1,100 load balancer manifests, all of which were confirmed and validated against the new automation framework well before “go-live.” Specific “power” users were trained to use the HAProxy Fusion GUI, preparing them in advance for system deployment. 

The old architecture and the new one have been run side-by-side, so migration only requires a simple CNAME switch. If issues arise, users can fall back to the previous implementation, and behavior between the two systems can be easily compared in a real, “live fire” environment.

The team cited several critical success factors, including:

  • The HAProxy Slack channel for support, with unparalleled responsiveness and a highly capable team

  • A developer team at HAProxy that is consistently available and responsive

  • Power user engagement and trust through early testing and implementation

Every feature from the Avi environment has now been implemented on HAProxy One — and in the process, Dartmouth has been able to introduce new capabilities that didn’t exist before. The response to date has been very strong. Power users say, “This looks great. This is much better than what we used to have.”

Ultimately, Dartmouth didn’t just swap vendors; they built a platform that puts them back in control. By prioritizing automation and architectural freedom, the team has moved past the cycle of rising costs and closed ecosystems. They now have a high-performance, self-service environment that is reliable, cost-effective, and ready to scale whenever they are.

Properly securing OpenClaw with authentication
https://www.haproxy.com/blog/properly-securing-openclaw-with-authentication (Tue, 03 Feb 2026 08:24:00 +0000)

OpenClaw (née MoltBot, née ClawdBot) is taking over the world. Everyone is spinning up their own, either on a VPS or their own Mac mini.

But here's the problem: OpenClaw is brand new, and its security posture is mostly unknown. Security researchers have already found thousands of publicly available instances exposing everything from credentials to private messages.

While OpenClaw has a Gateway component — the UI and WebSocket that controls access — there are serious issues with its password/token-based authentication:

  • Until recently, you could skip authentication entirely on localhost.

  • The GET URL token authentication mechanism is questionable for such young code.

  • Trust needs to be earned, not assumed.

In this post, we'll secure OpenClaw using a battle-tested method with HAProxy.

The plan: implement HAProxy’s HTTP Basic Authentication

HAProxy’s HTTP Basic Authentication is a robust method for securing access to production systems with a username/password combination. In this guide, we’ll do the following:

  1. Install HAProxy

  2. Configure HAProxy with automatic TLS, basic auth, and rate limiting

  3. Install OpenClaw and authenticate access using the basic auth credentials

We'll cover running OpenClaw on a VPS first. In a follow-up, we'll tackle Mac mini deployments with secure remote access (think Tailscale, but entirely self-hosted). 

We'll also add smart rate limiting: anyone who sends more than 5 unauthorized requests within 120 seconds is blocked for 1 minute. The clever part? They'll see a 401 Unauthorized instead of 429 Too Many Requests, so attackers won't even know they've been rate-limited.
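To make that concrete before we walk through the steps, here is a simplified, hypothetical sketch of the idea. The post's working configuration lives in the .cfg files referenced below; the names, paths, thresholds, and backend port here are illustrative assumptions, and the exact block duration may differ from the post's setup:

userlist openclaw_users
    # Use a strong password; a hashed password is preferable in production
    user admin insecure-password change-me

frontend fe_openclaw
    bind :443 ssl crt /etc/haproxy/certs/
    # Track source IPs and count unauthorized requests over a 120-second window
    stick-table type ip size 100k expire 10m store gpc0,gpc0_rate(120s)
    http-request track-sc0 src
    # Stealth blocking: once the rate is exceeded, keep answering 401 rather than 429
    http-request deny deny_status 401 if { sc0_gpc0_rate gt 5 }
    # Count the failed attempt, then challenge for credentials
    http-request sc-inc-gpc0(0) if !{ http_auth(openclaw_users) }
    http-request auth realm OpenClaw if !{ http_auth(openclaw_users) }
    default_backend be_openclaw

backend be_openclaw
    # OpenClaw gateway listening locally (address and port are assumptions)
    server openclaw 127.0.0.1:8080 check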

You'll need two checklist items to get started:

  1. A VPS running anywhere with Ubuntu 24.04, and a public IP address

  2. A domain/subdomain pointing DNS to the VPS public IP

To see everything in action, visit our live demo link and experiment.

Building it yourself

1) Install the HAProxy image

First, we'll install the high-performance HAProxy image:

[Code snippet: blog20260203-1.sh]

We now have HAProxy 3.3 installed, with the high-performance AWS-LC library and full ACME support for automatic TLS certificates. Now, we just need to apply the configuration to make it work.

2) Configure HAProxy

Edit /etc/haproxy/haproxy.cfg and insert the following lines into the global section. This will set us up to use automatic TLS:

[Code snippet: blog20260203-2.cfg]

Now let’s add configuration for automatic TLS using Let’s Encrypt. Edit the last line for your own domain:

[Code snippet: blog20260203-3.cfg]

Next, we'll take care of the basic HAProxy configuration items. Don’t forget to change the line starting with ssl-f-use to use the correct subdomain alias from the my_files section:

[Code snippet: blog20260203-4.cfg]

3) Restart HAProxy

Restart HAProxy to apply the updated configuration:

[Code snippet: blog20260203-5.sh]

Next, edit the HAProxy systemd file to make it automatically write certificates to disk. Run the following command:

[Code snippet: blog20260203-6.sh]

You're now ready to insert the following line under [Service]:

[Code snippet: blog20260203-7.cfg]

Finally, reload systemd:

[Code snippet: blog20260203-8.sh]

4) Install OpenClaw and access it securely

You're now ready to install OpenClaw:

[Code snippet: blog20260203-9.sh]

That’s it! You can now run the following command:

[Code snippet: blog20260203-10.sh]

This process will give you your personal access token. This is still needed for proper authentication inside OpenClaw itself.

You can now visit https://subdomain.example.com/?token=<gateway token>. When doing this for the first time, you'll have to provide a username and password.

You can also configure your macOS app to talk to this OpenClaw instance. Just insert the username and password directly into the WebSocket URL, as shown below:
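(The original post illustrates this step with a screenshot. In plain-URL form it looks roughly like the following, with placeholder values: the credentials are the basic auth pair configured in HAProxy, and the token is the gateway token from the previous step.)

wss://username:password@subdomain.example.com/?token=<gateway token>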

One more thing

Check your rate limiting occasionally to see who's knocking at your door:

[Code snippet: blog20260203-11.sh]

You might be surprised how many bots are already scanning for OpenClaw instances. That 401 response is working hard. Any line item where gpc0 is higher than 5 has been limited.

What if you accidentally lock yourself out? Simply run this command, where <key> is your IP address:

[Code snippet: blog20260203-12.sh]

Secure from the start

You now have an OpenClaw instance that's actually secure, not just "hopefully secure." Here's what's protecting you:

  • Defense in depth – You're not relying on OpenClaw's young authentication code. HAProxy handles the security layer with battle-tested HTTP Basic Auth that's been protecting production systems for decades.

  • Stealth rate limiting – Attackers hitting your instance will see authentication failures, not rate limit errors. They won't know they've been blocked, which means they'll waste time and resources before giving up.

  • Automatic TLS – Let's Encrypt handles your certificates with zero manual intervention. No expired certs, no security warnings, no hassle.

If you need more authentication methods or additional security layers, check out the HAProxy Enterprise load balancer. When you’re ready to control your deployment at scale, use HAProxy Fusion for centralized management, observability, and automation.

Stay safe and keep learning!

Universal Mesh in action: how PayPal solved multi-cloud complexity with HAProxy
https://www.haproxy.com/blog/how-paypal-solved-multi-cloud-complexity-with-haproxy (Thu, 15 Jan 2026 00:00:00 +0000)

The hardest part of modern infrastructure isn’t choosing your deployment environments — it’s bridging communication between them. Large enterprises are constantly facing the challenge of keeping everything connected, secure, and fast when their infrastructures are spread across different clouds and on-premises systems.

PayPal faces this challenge every day, managing a global infrastructure that processes $1.6 trillion in annual payments across 436 million active accounts. Their environment is a complex mix of on-premises data centers and three major cloud providers (AWS, GCP, and Azure). With over 3,500 applications in service — some modern, others still relying on HTTP/1.1 — they dealt with overlapping CIDR / IP addresses, where multiple business units used the same private IP address ranges, and inconsistent cloud-native tools that made seamless communication difficult.

To solve this, they didn't just patch their network; they built a Universal Mesh with HAProxy Enterprise load balancer and HAProxy Fusion Control Plane. This unified connectivity fabric, known internally as Project Meridian, supersedes earlier mesh technologies to provide a holistic framework for internal and external application delivery. Meridian serves as a universal translator across conflicting networks, creating a multi-tenant solution that eliminates the need to reinvent access patterns for every cloud provider.

In their recent HAProxyConf presentation, Senior Staff Network Engineers Kalaiyarasan Manoharan and Siddhartha Mukkamala detailed PayPal’s transformation. Here are the seven key steps they took to master multi-cloud networking.

1. Identify core challenges

The PayPal environment presented a number of challenges that demanded a unified solution: 

  • Connectivity. The core PayPal business and its business units, such as Braintree, Venmo, and Zettle, had applications spread across AWS, Azure, and GCP, with no unified way to communicate between them or share core services. 

  • Overlapping CIDR / IP addresses. Most business units used the same network ranges/subnets, making direct routing impossible. Overlap in private IP address space and subnets necessitated the routing of traffic over the public internet to connect services across different clouds, as there was no way to distinguish between identical internal addresses within different business units.

  • Exposing services. Without a private path, services often had to communicate over the public internet, which increased latency and expanded the attack surface.

  • Visibility. There was no "single pane of glass" to view end-to-end traffic flows, making troubleshooting a nightmare. 

Any solution had to address these challenges, making inter-service communication faster, easier, and more secure, with improved observability. 

2. Specify the architectural approach

PayPal’s goal was to create a "reusable solution that can abstract the complexity of the cloud providers." They envisioned a connectivity fabric that would provide a simple and unified way for business units to communicate securely, regardless of where any given service or data resource was hosted.

The project was split into two main components:

  • Inner Meridian: Handles private connectivity between internal business units and internal cloud services.

  • Outer Meridian: Manages connectivity to external partners, SaaS providers, and AI models, such as GCP Gemini.

This simple division split the challenges involved in the overall solution into two manageable buckets.

3. Build a non-overlapping IP fabric

The most significant hurdle for Project Meridian to overcome was the overlapping CIDR / IP addresses. This overlap drove PayPal to expose many endpoints over the public internet. Project Meridian pulls these endpoints off the public grid.

How did they do it? Instead of re-IPing thousands of servers (a multi-year nightmare), PayPal's engineers created a neutral zone using the 198.18.0.0/15 IP address range (defined in RFC 5735). This special-use range is designated for testing and is not routable over the public internet. This allowed them to leave the internal IP addresses alone and translate them only at the edge. 

By building their "Meridian Edge Services Fabric" with this non-overlapping range, they created a private "bridge" that allowed all business units to communicate without re-addressing their entire existing infrastructure.

Furthermore, HAProxy Enterprise’s ability to perform Source Network Address Translation (SNAT) allows Meridian to create a virtual network across incompatible existing networks. NATing makes traffic from outside a network appear as if it originated locally, without any changes to an application’s network configuration. 

This clever move created a private, non-overlapping, intermediary network layer with its own unique IP space. This allows PayPal to connect all the disparate cloud environments, without needing to “re-address” existing infrastructure. 
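As a simplified illustration of that idea (the addresses, names, and TLS settings below are invented, not PayPal's published configuration), HAProxy can source-NAT outbound traffic into the neutral range with the source keyword on a backend:

backend be_bu2_app2
    # Traffic leaving this Meridian Edge appears to originate from the
    # non-overlapping 198.18.0.0/15 fabric instead of the BU's private range
    source 198.18.10.5
    server app2 10.24.0.20:8443 ssl verify required ca-file /etc/haproxy/internal-ca.pem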

4. Deploy HAProxy Enterprise as the multi-cloud gateway

While PayPal initially explored cloud-native services, they soon realized they needed a more flexible, vendor-agnostic tool. They chose HAProxy Enterprise as the core component because it provided a unified, multi-tenant solution that works the same way in AWS as it does in GCP, Azure, or on-premises.

They deployed HAProxy Enterprise clusters, known as Meridian Edges, across different clouds and regions for each business unit to ensure high availability. These edges handle the heavy lifting: SSL termination, protocol translation (converting HTTP/1.1 to modern HTTP/2), and Source Network Address Translation (SNAT) to bridge the different IP ranges.

5. Implement smart routing

With the CIDR problem solved, PayPal needed a way to route traffic to the correct application. Traditional DNS propagation is too slow for dynamic cloud environments. Instead of relying on complex DNS subdomains, they adopted a simple and effective strategy that leverages HAProxy Enterprise’s powerful path-based routing capabilities.

By moving routing logic out of DNS and into the mesh (HAProxy), PayPal decoupled service location from network location. This is a hallmark of Universal Mesh architecture.

For example, a request destined for "App 2" in "Business Unit 2" is sent to a unified endpoint, such as example.paypal.com/bu2/app2. The HAProxy Enterprise-powered Meridian Edge at the source receives the request and terminates the SSL. Using a dynamic map file, HAProxy Enterprise performs a high-performance lookup of the URI path to determine the exact destination Meridian Edge. This allows for granular, intelligent traffic steering without the administrative overhead of managing thousands of individual DNS records. 

The destination HAProxy Enterprise instance rewrites the intended URI path and forwards the request to the internal application, making the entire process seamless for the end services: “the Meridian Edge Service Fabric is an entirely private path.” 
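A simplified sketch of this pattern (the map contents, names, and addresses are illustrative, not PayPal's actual configuration):

frontend meridian_edge
    bind :443 ssl crt /etc/haproxy/certs/
    # The map file holds lines such as:  /bu2/app2  be_bu2_app2
    use_backend %[path,map_beg(/etc/haproxy/maps/meridian.map)]
    # be_default (not shown) handles paths with no map entry
    default_backend be_default

backend be_bu2_app2
    # Rewrite the unified path before forwarding to the internal application
    http-request set-path %[path,regsub(^/bu2/app2/,/)]
    server app2 198.18.20.10:443 ssl verify required ca-file /etc/haproxy/internal-ca.pem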

6. Centralize observability and control 

To manage this distributed network of HAProxy Enterprise clusters, PayPal uses HAProxy Fusion as its management layer. This provides a "single pane of glass" where engineers can look up a unique correlation ID to see exactly how a request performed at every hop—from the network round-trip time to the application response time.

This provides clear evidence of where a bottleneck actually exists, leading to faster resolution.

7. Measure the results and build forward 

The impact of Project Meridian has been transformative for PayPal:

  • 24% latency reduction: By redirecting traffic away from the public CDN path and onto the private fabric with persistent HTTP/2 connections, they achieved a significant performance improvement.

  • Enhanced security: Moving applications to an entirely private path significantly reduced their external attack surface.

  • Operational efficiency: Service onboarding is now much faster. Once a service is in the Meridian directory, other units can connect to it easily without weeks of manual firewall tickets.

Conclusion

With Meridian, all three major public cloud providers, as well as any in-house assets that PayPal controls, function as a single, unified set of services and resources. A payments API in AWS can communicate with a risk API in GCP and then a compliance API in Azure, eliminating the need to generate traffic across the public internet. Most enterprise companies can only be envious of such an effective solution. 

As Siddhartha concluded, “Building that private connectivity between the business units is especially hard when there is an IP address overlap. We partnered with HAProxy, which helped us provide consistent connectivity across cloud providers.”

And PayPal isn't finished yet. They are currently working on a self-service automation model and partnering with HAProxy to implement advanced service discovery. This will further accelerate PayPal’s ability to innovate across its global footprint.

PayPal’s Meridian is a powerful real-world use case of Universal Mesh succeeding at enterprise scale. Universal Mesh is a unified connectivity fabric designed to solve the challenges of traditional networking and fractured connectivity models. It is an emergent architectural pattern that provides a holistic framework for application delivery, superseding earlier mesh technologies by addressing a broader scope of problems with a more elegant and scalable design.

Announcing HAProxy Kubernetes Ingress Controller 3.2
https://www.haproxy.com/blog/announcing-haproxy-kubernetes-ingress-controller-3-2 (Tue, 13 Jan 2026 08:00:00 +0000)

We’re excited to announce the simultaneous releases of HAProxy Kubernetes Ingress Controller 3.2 and HAProxy Enterprise Kubernetes Ingress Controller 3.2! All new features described here apply to both products.

These releases introduce user-defined annotations, a new frontend CRD, and other minor improvements, and we’ll cover these in detail below. Visit our documentation to view the full release notes.

If you have questions about how to replace Ingress NGINX or how to migrate from Ingress to Gateway API, you can skip to the FAQs.

Version compatibility with HAProxy 

HAProxy Kubernetes Ingress Controller 3.2 is built with HAProxy 3.2.

New to HAProxy Kubernetes Ingress Controller?

HAProxy Kubernetes Ingress Controller is a free, open-source product providing high-performance Kubernetes-native application routing for the Ingress API. It supports HTTP/S and TCP (via CRD), and is built on HAProxy’s legendary performance, flexibility, and reliability. Additionally, it provides a low-risk migration path to Gateway API via the HAProxy Unified Gateway (beta). 

HAProxy Enterprise Kubernetes Ingress Controller provides secure, high-performance Kubernetes-native application routing for the Ingress API. It combines the flexibility of the open-source HAProxy Kubernetes Ingress Controller with an integrated web application firewall (WAF) and world-class support.

What’s new?

  • User-defined annotations. Benefit: add new annotations independent of the release pipeline and make use of more HAProxy features. Impact: rapid feature adoption and modernization; simple support for Ingress NGINX annotations.

  • Frontend CRD. Benefit: flexibly configure HAProxy frontend sections and validate changes to K8s resources. Impact: simplified configuration and added flexibility.

These enhancements make the community and enterprise products even more flexible, and enable simpler migration from existing Ingress NGINX deployments (EOL in March 2026) to HAProxy Kubernetes Ingress Controller. For an immediate, step-by-step technical transition plan from Ingress NGINX to HAProxy Kubernetes Ingress Controller, see our Ingress NGINX migration assistant.

Ready to upgrade?

When you're ready to start the upgrade process, view the upgrade instructions for your product in our documentation.

User-defined annotations

User-defined annotations are annotations with full validation that users can create for the frontend and backend sections of the HAProxy configuration that the ingress controller generates. These annotations are CRD driven, and allow you to limit their scope to certain resources. They're powerful, unlocking all previously unavailable HAProxy options through custom templates. They also bundle in safety through validation rules you define. 

User-defined annotations are especially useful when migrating to HAProxy Kubernetes Ingress Controller. If any annotation is missing, you can easily recreate it without tethering yourself to our release schedule.

HAProxy offers an extensive number of powerful load balancing options, all detailed within our Configuration Manual. The best and most reliable way to expose them is through secure deployment methods like user-defined annotations that still fully expose HAProxy's standard settings. 

How do user-defined annotations compare to CRDs?

Both annotations and CRDs have validation and can represent almost everything HAProxy offers. However, CRDs don't offer the same level of granularity that custom annotations do.

User-defined annotations vs. regular annotations

Security

The most important difference between user-defined annotations and regular annotations involves security. With user-defined annotations, there's a clear separation between internal teams that define them and those that consume them. 

When an administrator defines an annotation through a custom resource, they can define and limit its usage. This can be achieved by limiting annotations on certain HAProxy sections, namespaces, Services, or Ingress, as needed. If a specific service or group needs a little more configuration freedom, administrators can create a team-specific custom annotation.

Developers and teams 

Teams receive a complete list of available annotations from their admin or admin group. If they need additional annotations, team members can send new requests to their admin(s).

Validation

User-defined annotations have validation. You'll use Common Expression Language (CEL) to write these rules, which can be lenient or strict, simple or complex. Stricter rules help minimize the risk of misconfigurations.

Delivery speed

While the number of supported annotations has steadily grown alongside this project, no two deployments are identical. Company A needs different customization than Company B. While this project's goal is to consider every use case and setup, covering all scenarios with limited resources and time isn't possible. 

Luckily, user-defined annotations reduce the need for new annotations to be accepted, developed, and released. You can simply create a new annotation, deploy it, and start using it immediately.

Monitoring

We all read logs — right? When a user configures a user-defined annotation, validation runs and any error messages are written to the log, so the user can quickly see why an annotation wasn't accepted. User-defined annotations offer an added advantage here, too: even if validation fails, the annotation will still appear in your configuration as a comment in the frontend or backend, alongside the error messages (also written as comments) that explain what went wrong.

How can I distinguish user-defined annotations from 'regular' ones?

The official HAProxy annotations can have ingress.kubernetes.io, haproxy.org, and haproxy.com prefixes. User-defined annotations can have any prefix you define. For example, a well-known corporation at example.com can use an example.com prefix. Let's now tackle how to define the structure.

How to enable user-defined annotations?

The HAProxy Kubernetes Ingress Controller must be started with the following command line argument: --custom-validation-rules=<namespace>/<crd-name>.

If you’re using helm to manage HAProxy Kubernetes Ingress Controller, you may be using a custom values file (using -f <values file>). In this case, ensure you have the following path covered:

[Code snippet: blog20251218-12.cfg]

User-defined annotation examples

We'll begin defining our annotations by including a prefix we want to use:

[Code snippet: blog20251218-01.yaml]

This prefix indicates that the annotations HAProxy Kubernetes Ingress Controller will process are user-defined:

[Code snippet: blog20251218-02.yaml]

We use standard Golang templating for the templates parameter, so any complex templating can be used. We write our rules in Common Expression Language (CEL). Once this is applied, a log confirmation message will appear:

ValidationRules haproxy-controller/example-validationrules accepted and set [example.com]

How to use user-defined annotations

Within Service, Ingress, or ConfigMap, we'll simply add our annotation metadata:

[Code snippet: blog20251218-03.yaml]

After applying the annotation(s), we'll see the following in our configuration file:

[Code snippet: blog20251218-04.cfg]

Working with more complex annotations

The user-defined annotations feature also enables you to create more complex annotations. The json type is highly useful in this scenario:

[Code snippet: blog20251218-05.cfg]

Since rules and templates can be sophisticated, HAProxy Kubernetes Ingress Controller now supports multi-line annotations. If your template consists of a multi-line string, HAProxy Kubernetes Ingress Controller will create multiple lines in the configuration using the same annotation:

[Code snippet: blog20251218-06.cfg]

Predefined variables

While using templates, the following variables are also available:

BACKEND, NAMESPACE, INGRESS, SERVICE, POD_NAME, POD_NAMESPACE, and POD_IP

[Code snippet: blog20251218-07.cfg]

Options for defining rules

[Code snippet: blog20251218-08.cfg]

User-defined frontend annotations

Since Kubernetes lacks a Frontend object, you can instead define frontend annotations in your HAProxy Kubernetes Ingress Controller ConfigMap. This exists as an annotation of ConfigMap — not as a key-value pair.

[Code snippet: blog20251218-09.yaml]

User-defined backend annotations

Users can also define backend annotations in three ways: via ConfigMap, Service, or Ingress. These annotations come with some caveats: 

  • Annotations made with ConfigMap will be applied to each supported backend. 

  • Annotations made with Service will be applied only on the specified service.

  • Annotations made with Ingress will be applied on services used in Ingress.

What happens when you try to use the same annotation in multiple places? 

  • Service annotations have the highest priority. 

  • If Service annotations don't exist, Ingress annotations will be applied next. 

  • If neither Service nor Ingress annotations exist, ConfigMap annotations will be applied next.

To dive even deeper into user annotations, check out our user annotations documentation.

Frontend Custom Resources

Similar to backend CRDs, you can now use Custom Resources to further configure the essential frontend sections that should always exist within your HAProxy configuration — such as HTTP, HTTPS, and STATS. We make this distinction since TCP frontend sections are created and managed solely through their own TCP CRDs, by comparison.

It's important to note that frontend CRDs should only be available to administrators, since they impact all traffic in the controller.

To start using them, you'll need to specify which resource is connected to a specific frontend. There are three new values you can use for frontend Custom Resources:

  • cr-frontend-http

    • Configures the HTTP frontend in your HAProxy configuration

  • cr-frontend-https

    • Configures the HTTPS frontend in your HAProxy configuration

  • cr-frontend-stats

    • Configures the STATS frontend in your HAProxy configuration

You can configure these specific frontend CRDs within Ingress Controller's ConfigMap:

[Code snippet: blog20251218-10.yaml]

All available options contained within the frontend section of HAProxy can be configured using frontend CRDs. But what happens with any predefined values? All CRD values are merged with values that already exist. For example, CRD values will come first for binds, http-request rules, and for all lists in general. Afterwards, HAProxy Kubernetes Ingress Controller will append its own values on top of everything else.

[Code snippet: blog20251218-11.yaml]

Minor improvements

HAProxy Kubernetes Ingress Controller 3.2 and HAProxy Enterprise Kubernetes Ingress Controller 3.2 add the following enhancements:

  • Backend names are now more readable than before

    • Each backend previously consisted of a namespace title, service name, and port number (or name) in the <namespace>_<service>_<port> format. The new format is <namespace>_svc_<service>_<port>. This enables more finely-grained statistical analysis, since it's now easier to separate namespace title and service name. 

  • The admin port is now the only way of fetching pprof and Prometheus data. This helps protect sensitive stats data.

  • A new generate-certificates-signer annotation will automatically generate TLS certificates signed by a provided CA secret for incoming connections. This uses the generate-certificates and ca-sign-file HAProxy bind options (see the sketch after this list).

  • We've added a new --disable-ingress-status-update flag. When set, the controller will skip updating the loadBalancer status field in managed Ingress resources.

  • HAProxy Kubernetes Ingress Controller has moved from OpenSSL to AWS-LC for added security, faster SSL/TLS cryptography, and higher throughput with low latency.
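For reference, the underlying HAProxy mechanism for the new certificate-signing annotation looks roughly like this on a bind line (paths are placeholders; the controller generates the equivalent configuration for you when the annotation is set):

frontend https
    # default.pem is served when no generated certificate applies;
    # ca.pem is the CA used to sign certificates generated on the fly for incoming connections
    bind :443 ssl crt /etc/haproxy/default.pem generate-certificates ca-sign-file /etc/haproxy/ca.pem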

Deprecations

There are several planned feature deprecations for the next version of HAProxy Kubernetes Ingress Controller (version 3.4). 

First, we're removing support for CRDs in the ingress.v1.haproxy.org group. Those are CRDs for backends, defaults, globals, and TCPs. However, all of these have had ingress.v3.haproxy.org alternatives already available since HAProxy Kubernetes Ingress Controller 3.0.

  • With the available binary released on GitHub, we can use --input-file and --output-file to convert your resources from v1 to v3. You can use a simple Terminal command to begin converting:
    ./haproxy-ingress-controller --input-file=global-full-v1.yaml --output-file=global-full-v3.yaml

To better unify functionality across multiple products (especially HAProxy Unified Gateway), most of the annotations we currently use will also be deprecated in favor of using Custom Resources. To ensure continuity and provide a simple migration from annotations to CRDs, we'll release a tool that converts the output of annotations into CRDs. We'll make this available to community and enterprise users in 2026.

Contributions

HAProxy Kubernetes Ingress Controller's development thrives on community feedback and feature input. We’d like to thank the code contributors who helped make this version possible!

  • Hélène Durand: FEATURE, BUG, BUILD, DOC, OPTIM, TEST

  • Ivan Matmati: BUG, FEATURE, TEST, DOC

  • Dario Tranchitella: BUG

  • Dinko Korunić: FEATURE

  • Philipp Hossner: BUG, FEATURE

  • SF97: BUG, BUILD

  • Fabiano Parente: FEATURE

  • Saba Orkoshneli: CLEANUP

  • Vladyslav Riabyk: FEATURE

  • Zlatko Bratkovic: BUG, FEATURE, TEST, BUILD, CLEANUP, DOC, OPTIM, REORG

FAQs and what’s next

Can I replace Ingress NGINX with HAProxy Kubernetes Ingress Controller?

Ingress NGINX is officially reaching end of life in March 2026, after which planned releases, bug fixes, security updates, and feature development will stop. We're here to help teams replace Ingress NGINX and ensure continuity.

HAProxy Kubernetes Ingress Controller is the easiest, most immediate, and most direct production-ready replacement for teams facing a tight migration deadline. While not a 100% drop-in replacement, a robust annotation system — including the new user-defined annotations and our Ingress NGINX Migration Assistant — makes it simple to achieve equivalent functionality for a stress-free switchover. HAProxy Kubernetes Ingress Controller also offers superior speed, stability, and advanced features to level up your existing Ingress setup.

To learn more about migration, we encourage you to watch our on-demand webinar and contact us with any questions.

Can I migrate from Ingress to Gateway API?

For teams considering migrating from Ingress to Gateway API, the new Kubernetes-native standard for traffic management, HAProxy makes it simple. 

  1. First, HAProxy Kubernetes Ingress Controller users will be able to migrate easily to the new HAProxy Unified Gateway, maintaining their existing Ingress rules (feature coming in 2026). 

  2. Second, HAProxy Unified Gateway users will be able to gradually migrate from Ingress to Gateway API within the same product for consistent management.

HAProxy Unified Gateway is a free, open-source product providing unified, high-performance, Kubernetes-native application routing for both Gateway API and Ingress. HAProxy Unified Gateway provides flexible protocol support, role-based access control, and a low-risk, gradual migration path for organizations moving from Ingress to Gateway API. Combined with HAProxy’s legendary performance and reliability, these key features support the needs of modern applications and evolving organizations.

Can I manage all my traffic with HAProxy – in Kubernetes and other environments too?

HAProxy One — the world's fastest application delivery and security platform — provides universal traffic management with a data plane and control plane that are completely infra-agnostic. For Kubernetes users it currently enables intelligent external load balancing, multi-cluster routing, direct-to-pod load balancing, and the groundbreaking universal mesh. In 2026, we're adding built-in support for both Gateway API and Ingress via HAProxy Fusion Control Plane. These enhancements will enable HAProxy One to provide comprehensive Kubernetes routing and load balancing as part of its universal traffic management. 

Development for HAProxy Enterprise Kubernetes Ingress Controller will continue, with version 3.4 planned for 2026. Existing users can keep using HAProxy Enterprise Kubernetes Ingress Controller, or upgrade to HAProxy One for universal traffic management, intelligent multi-layered security, and a centralized control plane that works across all environments. 

To learn more, check out our Kubernetes solution.

Conclusion 

HAProxy Kubernetes Ingress Controller 3.2 and HAProxy Enterprise Kubernetes Ingress Controller 3.2 are even more powerful, while simplifying migration from alternatives such as Ingress NGINX. User-defined annotations and frontend CRDs enable faster feature adoption and modernization, and more flexible configuration. We hope you enjoy using these new features!

To learn more about HAProxy Kubernetes Ingress Controller, follow our blog and browse our documentation. To take HAProxy Enterprise Kubernetes Ingress Controller for a test drive, contact us.

If you want to explore additional Kubernetes capabilities in HAProxy — such as external load balancing and multi-cluster routing — check out our on-demand webinar.

]]> Announcing HAProxy Kubernetes Ingress Controller 3.2 appeared first on HAProxy Technologies.]]>
<![CDATA[How LinkedIn modernized its massive traffic stack with HAProxy]]> https://www.haproxy.com/blog/how-linkedin-modernized-its-massive-traffic-stack-with-haproxy Thu, 18 Dec 2025 00:00:00 +0000 https://www.haproxy.com/blog/how-linkedin-modernized-its-massive-traffic-stack-with-haproxy ]]> Connecting nearly a billion professionals is no small feat. It requires an infrastructure that puts the user experience above everything else. At LinkedIn, this principle created a massive engineering challenge: delivering a fast, consistent experience across various use cases, from the social feed to real-time messaging and enterprise tools.

In a deep-dive presentation at HAProxyConf, Sanjay Singh and Sri Ram Bathina from LinkedIn’s Traffic Infra team shared their journey to modernize the company’s edge layer. Facing rapid growth and changing technical needs, LinkedIn made the strategic decision to redesign its traffic stack around HAProxy.

The engineering principles driving this decision (simplicity, fault isolation, and raw performance) are just as relevant today, as infrastructures grow more complex.

Here is a look at why they moved away from their legacy solution, how they evaluated the competition, and the dramatic performance gains they achieved with HAProxy.

The challenge: a legacy stack hitting its limits

For years, LinkedIn’s traffic stack relied on Apache Traffic Server (ATS). This system acted as the bridge between user devices and LinkedIn’s services.

To make it work for their specific needs, the team had heavily modified ATS with over 30 custom plugins. These plugins handled everything from business logic to authentication and security.

While this architecture worked well for a while, three major drivers forced the team to re-evaluate their setup:

  1. Organic growth: As the LinkedIn member base grew, so did queries per second (QPS). Scaling the fleet horizontally (adding more servers) was becoming inefficient and expensive.

  2. Business diversification: New products, such as LinkedIn Learning and Sales Navigator, brought complex requirements, including strict consistency for payments and geo-specific routing.

  3. Technological advancement: The team needed to support next-generation protocols, such as HTTP/2, HTTP/3, and gRPC, to keep up with the industry.

The hardware bottleneck

The legacy stack was becoming a bottleneck for growth. Scaling the ATS fleet wasn't simple; it had a cascading effect on downstream services, meaning bottlenecks shifted to other spots in the system.

The most telling challenge came during a hardware upgrade. The team upgraded to AMD 64-core machines, expecting a significant performance boost, but the upgrade in computing power only reduced their fleet size by about 12%.

This proved that simply throwing more hardware at the problem wasn't the answer—the software itself had to change.

The danger of complexity

Reliance on custom C/C++ plugins also created a fragile environment. Because these plugins functioned as shared libraries, they didn't offer good fault isolation. If a developer introduced a bug in one plugin, it could crash the entire proxy and take down the site. LinkedIn needed a solution that offered better reliability, higher performance, and native ways to handle their complex routing rules without writing so much custom code.

The evaluation: why HAProxy won

The LinkedIn team didn't just pick a new tool at random. They created a strict "wishlist" for their next proxy. It had to be:

  • Open source with a strong community.

  • Highly performant to handle LinkedIn's massive scale.

  • Feature-rich—offering native constructs to model routing so they could write less custom code.

  • Future-proof with support for modern protocols.

They evaluated several top competitors in the industry, including Zuul, Nginx, and Envoy, before HAProxy emerged as the clear winner by checking every box on the wishlist.

It offered the right balance of performance and community support. Crucially, its long-term support (LTS) release cycle fit LinkedIn's operational model perfectly, allowing them to focus on business logic rather than constant upgrades.

As Sri Ram Bathina noted, "We anticipate that it's going to drastically reduce our fleet size and get a lot of gains in performance, cost, and the amount of effort required to manage our fleet".

The decision was driven by four key advantages:

1. Unmatched performance

The LinkedIn team conducted benchmarking to measure end-to-end average latency using a 1KB payload and a simulated 1ms upstream delay. The goal was to see how much load (in requests per second) the proxy could take before latency exceeded 10 milliseconds.

The results were stark:

  • Legacy (ATS): Latency spiked above 10ms at just 4,500 RPS.

  • Envoy: Hit the limit at 13,000 RPS.

  • HAProxy: Maintained low latency up to 55,000 RPS.

HAProxy outperformed the competition by a huge margin. For LinkedIn, this kind of efficiency means they can handle more traffic with fewer servers, solving the scaling issues affecting their legacy stack.

2. Simplifying configuration

One of the biggest pain points with the old stack was how complicated the rules were.

To route traffic for a specific frontend, the team had to create a new rule for every single path. This resulted in a staggering 16,000 routing rules. Managing that many rules is a nightmare for operations and invites human error.

HAProxy solved this with its native map files and pattern fetching. By fetching patterns from a file and applying them to a single rule, they projected a massive reduction in complexity. They could go from 16,000 rules down to just 250—essentially one rule per backend.

Sri Ram explained the impact simply: "This would make our operations very simple... I only have to change that one rule instead of changing it in multiple places".
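
As a rough illustration of this pattern (a minimal sketch with hypothetical paths and backend names, not LinkedIn's actual configuration), a single use_backend rule combined with a map file might look like this:

frontend fe_main
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # paths.map holds one "path-prefix backend" pair per line, for example:
    #   /jobs        be_jobs
    #   /messaging   be_messaging
    use_backend %[path,map_beg(/etc/haproxy/maps/paths.map,be_default)]

backend be_jobs
    server s1 10.0.0.10:8080 check

backend be_default
    server s2 10.0.0.20:8080 check

Adding a new route then only means adding a line to the map file; the single routing rule itself never changes.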

3. Native extensibility and fault isolation

LinkedIn has some very specific routing needs. One example is "member sticky routing". To ensure a consistent experience (such as read-after-write consistency), the system tries to route a user to the same data center every time.

With HAProxy, the team prototyped this logic using a simple Lua script. They could fetch the data center information and route the request in just two lines of config.
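
Here is a minimal sketch of what such a prototype could look like (the cookie name, file paths, and backend names are hypothetical; this is not LinkedIn's actual script). A Lua sample fetch exposes the member's data center, and the configuration routes on it:

-- member_dc.lua: return a data center hint read from a request cookie
core.register_fetches("member_dc", function(txn)
    local dc = txn.f:req_cook("member_dc")  -- hypothetical cookie set by the application
    if dc == nil or dc == "" then
        return "default"
    end
    return dc
end)

With that fetch registered, the routing itself stays at roughly two lines of configuration:

global
    lua-load /etc/haproxy/member_dc.lua

frontend fe_main
    bind :443
    use_backend %[lua.member_dc,map(/etc/haproxy/maps/dc.map,be_default)]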

Furthermore, HAProxy’s Stream Processing Offload Agent (SPOA) provided the fault isolation they desperately needed. They can now offload processing—like anti-abuse checks—to external agents. If those agents fail, the core proxy keeps running smoothly.

4. Future-proofing

Finally, the move to HAProxy solves the "catch-up" problem. The legacy stack struggled to support modern protocols, which threatened to slow down the development of next-gen applications at LinkedIn.

HAProxy provided immediate, out-of-the-box support for HTTP/2, gRPC, WebSockets, and more. This ensures LinkedIn’s infrastructure isn't just fixing today's problems, but is ready for the future of the web.

Conclusion

By moving to HAProxy, LinkedIn has not only replaced a component; it has fundamentally modernized its edge. The team moved from a complex, plugin-heavy architecture that struggled to scale to a streamlined, high-performance stack that is easier to manage and ready for the next generation of web protocols.

While LinkedIn achieved this with the free open source version of HAProxy, enterprise customers can easily achieve this level of performance (and more) using HAProxy One, the world’s fastest application delivery and security platform.

HAProxy One combines the performance, reliability, and flexibility of our open source core (HAProxy) with the capabilities of a unified enterprise platform. Its next-generation security layers are powered by threat intelligence from HAProxy Edge, enhanced by machine learning, and optimized with real-world operational feedback.

The platform consists of a flexible data plane (HAProxy Enterprise), a scalable control plane (HAProxy Fusion), and a secure edge network (HAProxy Edge), which together enable multi-cloud load balancing as a service (LBaaS), web app and API protection, API/AI gateways, Kubernetes networking, application delivery network (ADN), and end-to-end observability.

As Sri Ram Bathina concluded in their talk, "We love HAProxy... It is going to help capture our routing requirements very easily and minimize the amount of code we need to write."

Contact us today to schedule a consultation and demo to see how HAProxy One can modernize your infrastructure.


]]> How LinkedIn modernized its massive traffic stack with HAProxy appeared first on HAProxy Technologies.]]>
<![CDATA[Fresh from AWS re:Invent: Supercharging HAProxy Community with AWS-LC Performance Packages]]> https://www.haproxy.com/blog/fresh-from-aws-reinvent-supercharging-haproxy-community-with-aws-lc-performance-packages Fri, 12 Dec 2025 11:29:00 +0000 https://www.haproxy.com/blog/fresh-from-aws-reinvent-supercharging-haproxy-community-with-aws-lc-performance-packages ]]> The timing couldn’t have been better.

Last week, the tech world descended on Las Vegas for AWS re:Invent. It was the perfect venue to talk about cloud infrastructure, scale, and the future of application delivery. While we enjoyed talking shop at our booth, we didn't just bring swag and demos; we brought a significant performance improvement for our open-source community.

We were proud to announce the release of HAProxy 3.3 along with a game-changer for high-performance setups: HAProxy Community Performance Packages. These are pre-compiled, install-ready packages built not with the standard OpenSSL library (as found in most OS distributions), but with the new and lightning-fast AWS-LC.

Why does this matter? Because in the world of CPU-intensive work (like processing TLS connections), time is money, and efficiency is everything. We have done the heavy lifting to bundle HAProxy with the most performant library, ensuring you get maximum throughput right out of the box and linear scaling with additional CPU cores.

Our open-source commitment: performance is non-negotiable

At HAProxy, we have always been obsessed with efficiency. Our philosophy is simple: load balancers should not be a bottleneck. However, the SSL/TLS landscape has evolved significantly in recent years.

OpenSSL has long served as the industry standard, providing stability and security. With the transition to OpenSSL 3, the project focused on enhancing modularity and security architecture. While these are valuable goals for the broader ecosystem, the architectural changes introduced trade-offs in specific high-load environments.

Our internal research found that in multi-threaded configurations, the new architecture can face scalability challenges due to lock contention and atomic operations. In scenarios involving high-volume handshakes, performance can plateau rather than scaling linearly with CPU cores.

For a community that relies on HAProxy for speed, we needed a solution that could fully utilize modern hardware. We published a detailed research paper, "The State of SSL Stacks," which analyzes these behaviors and explores alternatives.

Enter AWS-LC: the speed you need

We evaluated several alternatives, including WolfSSL, LibreSSL, and BoringSSL. But the standout performer for general-purpose, high-scale deployments was AWS-LC.

AWS-LC is a general-purpose cryptographic library maintained by the AWS Cryptography team. It is open-source, based on code from the Google BoringSSL project and the OpenSSL project, and it aggressively targets both security and performance.

When we benchmarked HAProxy built with AWS-LC against other SSL stacks, the results were clear:

  • Massive Throughput: In our testing of end-to-end encryption with TLS resumption on a 64-core Graviton4 instance, we achieved over 180,000 end-to-end connections per second using AWS-LC.

  • Significant Gains: This represents a performance increase of approximately 50% over OpenSSL 1.1.1w and significantly outperforms OpenSSL 3.x versions.

  • Linear Scaling: This library scales linearly. When you add more CPU cores, you actually achieve greater performance rather than encountering diminishing returns due to software locks.

A collaboration of code and community

We didn't just pick a library off the shelf; we collaborated.

During our deep testing of AWS-LC, we identified a build configuration nuance where the build system wasn't defaulting to the C11 standard, which disabled certain atomic operations crucial for performance. We reported this to the AWS-LC team, and their response was exactly what you hope for in open source: fast, receptive, and effective.

They fixed the oversight quickly, allowing the library to utilize modern atomic operations instead of locks. We would like to extend a huge thank you to the AWS team for helping us push the boundaries of what is possible on modern hardware.

Why we built the performance packages

Here is the reality for most users: You know that switching SSL/TLS libraries could make your load balancer faster. But actually doing it? That’s hard. It usually requires:

  • Downloading source code.

  • Managing complex dependencies.

  • Compiling HAProxy manually.

  • Maintaining that custom build forever.

That is a high barrier to entry. We believe that every HAProxy user deserves access to the best possible performance. You shouldn't have to be a compilation expert to get it.

So, we did the work for you. We are now providing official HAProxy Community Performance Packages. These are pre-packaged for your distribution (currently available for Ubuntu 24.04, Debian 12, and Debian 13). You can install them via apt just like you would any standard package.

This also aligns with the latest release of HAProxy 3.3!

HAProxy 3.3: more than just speed

While the new SSL/TLS library is the fuel injector under the hood, the engine itself, HAProxy 3.3, has received significant upgrades. This release is packed with features designed for modern infrastructure:

  • Kernel TLS (KTLS): For those who are more performance-focused, we’ve added support for offloading symmetric encryption to the Linux kernel, saving memory copies and CPU cycles.

  • QUIC on the Backend (Experimental): You can now connect to backend servers using HTTP/3 over QUIC. This future-proofs your infrastructure as more internal services move toward QUIC for reduced latency.

  • ACME DNS-01 Support: We’ve expanded our Let’s Encrypt integration. HAProxy can now handle DNS-01 challenges, allowing you to validate domain ownership via DNS TXT records rather than just HTTP files.

  • Persistent Stats: Observability is critical. In HAProxy 3.3, you can store statistics in shared memory. This means that if you reload HAProxy to apply a configuration change, you won't lose your metrics history.

Get started today

This announcement represents a bridge between two worlds: the freedom of community open-source and the simplicity of commercially supported packages.

Whether we met you on the floor at AWS re:Invent or you are reading this from your office today, the performance upgrade is ready for you.

Give it a try, and let us know what you see in your own benchmarks.

]]> Fresh from AWS re:Invent: Supercharging HAProxy Community with AWS-LC Performance Packages appeared first on HAProxy Technologies.]]>
<![CDATA[Sanitizing HTTP/1: a technical deep dive into HAProxy’s HTX abstraction layer]]> https://www.haproxy.com/blog/sanitizing-http1-a-technical-deep-dive-into-haproxys-htx-abstraction-layer Thu, 11 Dec 2025 00:00:00 +0000 https://www.haproxy.com/blog/sanitizing-http1-a-technical-deep-dive-into-haproxys-htx-abstraction-layer ]]> HTTP/1.1 is a text-based protocol where the message framing is mixed with its semantics, making it easy to parse incorrectly. The boundaries between messages are very weak because there is no clear delimiter between them. Thus, HTTP/1.1 parsers are especially vulnerable to request smuggling attacks.

In older HAProxy versions, HTTP/1.1 parsing was performed "in-place" on top of the raw TCP data. This was not an issue as long as connection keep-alive and heavy header manipulation were not supported. However, once connections were reused to process several requests and more and more header manipulations were added (it is not uncommon to see configurations with hundreds of http-request rules), performance and security became concerns. In addition, supporting HTTP/2 raised compatibility challenges between the two protocols, especially around the ability to transcode one version into the other.

To be performant, secure, and future-proof, a new approach had to be envisioned. This is what we achieved by using our own internal HTTP representation, called the HTX.

HTX is the internal name for the HTTP abstraction layer in HAProxy. It serves as the interface between HAProxy's low-level layers (particularly the HTTP multiplexers, which parse and convert the different HTTP versions) and the application layer. It standardizes the representation of messages across the different versions of HTTP.

Thanks to its design, almost all attacks that have appeared on HTTP/1 since the arrival of HTX have had no effect on HAProxy.

How HAProxy handled HTTP/1 before HTX

Originally, HAProxy was a TCP proxy and L4 load balancer. The HTTP/1 processing was added on top of it to be light, handle one request per connection, and only perform a few modifications. Over time, the trend has changed, and using HAProxy as an HTTP/1 proxy/load balancer has become the main usage, with more complex configurations and increasingly expensive HTTP processing. 

To protect HAProxy and the servers behind it from attacks against the HTTP/1.1 protocol, costly and complex manipulations were mandatory, making processing even more expensive and maintenance harder. The limits of the pre-HTX model were reached, mainly because of its design:

  • We directly received HTTP/1 from the socket into a buffer, and on output, we directly emitted this buffer to another socket.

  • The buffer therefore contained requests with all their flaws and variations (extra spaces, etc). The start and end of headers were indexed on the fly, and the various analyzers had to account for all possible variations (upper/lower case header names, spaces after the :, spaces at the end of the line, lone line feeds (LF) instead of carriage return line feed (CRLF), forbidden characters).

  • Rewriting a header required anticipating the change in size (up or down), and potentially deleting a trailing CR and LF if a header was removed, or inserting them if one was added. These modifications also required updating the header index so that subsequent analyzers remained well synchronized. Some rewrites could insert CRLFs haphazardly ("hacks"), resulting in headers not being detected by subsequent stages because they were not indexed. The same applied to checks.

  • Data transferred with chunked encoding appeared in raw form, including the chunk framing and optional chunk extensions, so it was not possible to perform simple string searches without accounting for the chunking. This is notably why the http-buffer-request directive only processed the first chunk.

  • For very light and historical uses (with minimal header handling, a model closer to early HTTP/1.0), this way of operating was quite efficient, since everything received was sent back with very little analysis.

  • With the arrival of keep-alive, which required parsing many headers, performing many more checks (Content-Length vs. Transfer-Encoding, host vs. authority, connection, upgrade, etc.), and making even more changes (adaptation between close vs. keep-alive sides), the simplicity of the original model became a liability. All advanced parsing work had to be redone at each processing stage, often multiple times per request and response.

How HAProxy handled HTTP/2 before HTX

With the arrival of HTTP/2, HAProxy faced a completely different paradigm: a text-based protocol with no real framing (HTTP/1) versus a binary protocol with well-defined framing (HTTP/2). The main challenge was to find a way to add HTTP/2 support while keeping it compatible with HAProxy's HTTP processing stack.

The HTTP/2 support was originally implemented as a protocol conversion layer between internal HTTP/1 and external HTTP/2. However, this raised several security issues due to the ambiguities of the conversion, and it also came with an unfortunate extra cost. The model was as follows:

  • On input, a block of HTTP/2 data was received and decoded. HEADERS frames were decompressed via the HPACK algorithm and produced a list of headers as (name, value) pairs. This list was then used to fabricate an HTTP/1.1 request by combining the method, URI, and adding the headers with : after the names and CRLFs after the values. This already posed several problems, because H2 is binary-transparent, meaning it is technically possible to encode : and CRLF in field values, making header injection possible if not enough care was taken.

  • The reconstructed URL was generally an absolute URL because the elements provided by the client (method, scheme, authority, path) were concatenated, and URL matchings in the configurations no longer worked (e.g., url_beg/static).

  • Data received in DATA frames resulted in HTTP chunks if the Content-Length was not announced. Similarly, it was difficult to analyze request chunks when needed (e.g., impossible to perform state modifications in HTTP/2 on the stats page, which used POST).

  • On output, the HTTP/2 converter had to re-parse HTTP/1.1 headers to fabricate a HEADERS frame. The parser used was simpler because it was assumed that HAProxy could be trusted to send valid protocol, but this caused many problems with error messages (errorfiles), responses forged in Lua, and the cache, which could sometimes store content that had undergone few protocol checks. Furthermore, to read the data, the parser also had to account for chunking and emit a DATA frame for each chunk. As a result, an HTTP/1 response sent back over HTTP/2 had to be parsed twice: once by the HTTP/1 parser for analysis, and a second time by the HTTP/1 parser integrated into the HTTP/2 converter.

  • The HTTP/2 converter also had to infer the correct operating mode for the response based on the Connection header and the announced HTTP version (HTTP/1.0 vs HTTP/1.1). It also struggled with edge cases involving Transfer-Encoding combined with Content-Length, as well as cases with no announced length where closing the server-side connection signaled the end of the response. Trailers were not supported because they were too risky to implement in this conversion model.

  • HTTP/2 was not implemented on the server side because of the increase in special cases to handle and the difficulty in converting these responses into valid HTTP/1. Furthermore, it quickly became clear that with this model, it was impossible to maintain a correct level of performance by doing end-to-end H2 because it required four conversions for each exchange.

The HTX: common internal representation for different HTTP versions

Since the "legacy" model had shown its limits, we logically decided to transition to a more rational model. Continuing to base all internal operations on HTTP/1 was clearly a hindrance to the adoption of HTTP/2 and any other future versions of HTTP. The HTX was born from this thinking: to achieve a common internal representation for all current or future versions of HTTP.

Taking the previous example, here is how a request is transmitted today from an HTTP/2 client to an HTTP/1.1 server.

If we look more closely at what happens on the H2-to-HTX conversion side, we now obtain a structured message that has nothing to do with the previous HTTP/1 version.

It is at the moment of sending the request to the server that the conversion to HTTP/1.1 occurs, and this is done using the standardized information.

How HTX works: a technical deep-dive

The HTX is a structured representation of an HTTP message, intended to serve as a common foundation for all versions of HTTP within HAProxy. Internally, an HTX message is stored in a buffer. It consists of a metadata part containing information about the message, followed by a set of blocks, each holding a portion of the parsed message. This eliminates formatting differences tied to specific HTTP versions and standardizes the data. For example, header names are stored in lowercase, and spaces at the beginning and end of header values are removed, as are the CRLFs at the end of each header line.

Because an HTX structure is limited to the size of a buffer, only part of a large HTTP message may be present in HTX at any one time. An HTX message can be thought of as a pipe: as parsing progresses, new blocks are appended; HAProxy processes them (header rewriting, body compression, etc.); and they are then removed on the output side to be formatted and sent to the remote peer.

Organization of HTX blocks

HTX blocks are stored in a contiguous memory area, an array of blocks. Each block is divided into two parts. The first part, the block index, contains information related to the block: its type, its size, and the address of its content in the array. These indexes are stored starting from the end of the block array. The second part, meanwhile, contains the block data and is stored starting from the beginning of the array.

The block indexes remain ordered and stored linearly. We use a positive position to identify a block index. This position can then be converted into an address relative to the beginning of the block array.

  • "head" is the position of the oldest block index

  • "tail" is that of the newest

  • The part corresponding to the block data is a memory space that can "wrap" and is located at the beginning of the block array. The block data is not directly accessible; one must go through the index of the corresponding block to know the address of its data, relative to the beginning of the block array.

    When the free space between the index area and the data area is too small to store the data of a new block, we restart from the beginning to find free space. The advantage of managing block data as a circular memory area is to optimize the use of available space when blocks are manipulated (for example, when a header is deleted) or when the blocks are simply consumed to be formatted and sent to the remote peer.

    However, this sometimes requires a defragmentation step when the blocks become too fragmented and free space must be recovered to continue processing. Concretely, the data can be arranged in two different ways:

    • Contiguous and ordered: with two possible free spaces, before and after the data. To preserve the order of the blocks as much as possible, additions are made primarily at the end of the array. The gaps between the blocks are not directly reusable.

    • Scattered and unordered: where the only usable space for inserting new blocks is located after the most recent data.

    Defragmentation is necessary when the usable free space becomes too small. In this case, the block data are realigned at the beginning of the array to obtain the largest possible contiguous free space. The gaps between the block data are thus recovered. During defragmentation, unused block indexes are erased, and the index array is also defragmented.

    Structure of HTX block indexes

    A block index contains the following information about the block:

    • A 32-bit field: 4 bits for the block type and 28 bits for the size of the block data.

    • A 32-bit address of the block data, relative to the beginning of the block array.

    Block types overview

    Among the different types of HTX blocks, we find the elements that constitute an HTTP message:

    • A start-line: (method + uri) for a request, or (status + reason) for a response.

    • A header or a trailer: a (name + value) pair.

    • Data.

    There are also internal block types used to mark the end of headers and trailers, or to mark a block as unused (for example, when it has been deleted).

    Encoding details

    Depending on the type, the information of a block will be encoded differently:

    Header or trailer block:
    0b 0000 0000 0000 0000 0000 0000 0000 0000
       ---- ------------------------ ---------
       type  value length (1MB max)  name length (256B max)
     Start-line or data block:
    0b 0000 0000 0000 0000 0000 0000 0000 0000
       ---- ----------------------------------
       type      data length (256 MB max)
     "End of headers" or "end of trailers" marker:
    0b 0000 0000 0000 0000 0000 0000 0000 0001
       ---- ----------------------------------
       type         always set to 1    
    Unused:
    0b 0000 0000 0000 0000 0000 0000 0000 0000
       ---- ----------------------------------
       type         always set to 0 
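
    To make this encoding concrete, here is a small, self-contained C sketch (purely illustrative: the struct, enum, and helper names are hypothetical and do not match HAProxy's actual source) that extracts the type and lengths from a 32-bit info word laid out as described above:

    #include <stdint.h>
    #include <stdio.h>

    struct blk_index {
        uint32_t addr;   /* data address, relative to the start of the block array */
        uint32_t info;   /* 4-bit type in the top bits + type-specific length fields */
    };

    /* Illustrative type values only; the real identifiers and values differ. */
    enum blk_type { BLK_UNUSED = 0, BLK_START_LINE, BLK_HEADER, BLK_EOH,
                    BLK_DATA, BLK_TRAILER, BLK_EOT };

    static unsigned blk_get_type(uint32_t info)      { return info >> 28; }             /* top 4 bits */
    static unsigned blk_hdr_name_len(uint32_t info)  { return info & 0xffu; }           /* 8 bits: 256B max */
    static unsigned blk_hdr_value_len(uint32_t info) { return (info >> 8) & 0xfffffu; } /* 20 bits: 1MB max */
    static unsigned blk_data_len(uint32_t info)      { return info & 0x0fffffffu; }     /* 28 bits: 256MB max */

    int main(void)
    {
        /* A header block for name="host" (4 bytes) and value="example.com" (11 bytes). */
        struct blk_index hdr  = { .addr = 0,  .info = ((uint32_t)BLK_HEADER << 28) | (11u << 8) | 4u };
        /* A data block carrying 5 bytes of payload. */
        struct blk_index data = { .addr = 64, .info = ((uint32_t)BLK_DATA << 28) | 5u };

        printf("header block: type=%u name_len=%u value_len=%u\n",
               blk_get_type(hdr.info), blk_hdr_name_len(hdr.info), blk_hdr_value_len(hdr.info));
        printf("data block:   type=%u len=%u\n",
               blk_get_type(data.info), blk_data_len(data.info));
        return 0;
    }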

    Block ordering

    An HTX message is typically composed of the following blocks, in this order:

    1. A start-line (request or response)

    2. Zero or more header blocks

    3. An end-of-headers marker

    4. Zero or more data blocks (HTTP)

    5. Zero or more trailer blocks

    6. An end-of-trailers marker (optional, but always present if there is at least one trailer block)

    7. Zero or more data blocks (TUNNEL)

    Responses with interim status codes

    In the case of responses, when there are interim responses (1xx), the first three blocks can be repeated before having the final response (2xx, 3xx, 4xx, or 5xx). In all cases, whether for requests or responses, the start-line, headers, and end-of-headers marker always remain grouped. This is true at the time of parsing, but also at the time of message formatting. The same applies to trailers and the end-of-trailers marker.
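
    As a simplified, hand-written illustration (not actual HAProxy output), a small HTTP/1.1 request such as:

    POST /api/items HTTP/1.1
    Host: www.example.com
    Content-Length: 5

    hello

    would conceptually be carried as the following sequence of HTX blocks, with the header names lowercased and the framing details moved into metadata:

    1. A start-line: method=POST, uri=/api/items, flags indicating a request with an announced content length
    2. A header block: name="host", value="www.example.com"
    3. A header block: name="content-length", value="5"
    4. An end-of-headers marker
    5. A data block: "hello"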

    Structure of HTX block data

    HTX block data comes in several forms, each representing a specific part of an HTTP message. The following sections describe how each block type is structured and used.

    The start-line

    The start line is the first block of an HTX message. The data in this block is structured. In HTTP/1.1, it is directly extracted from the first line of the request or response. In H2 and H3, it comes from the pseudo-headers (:method, :scheme, :authority, :path). Furthermore, because this block is emitted after all message headers have been parsed, it also contains information about the message itself, in the form of flags. For example, it can indicate whether the message is a request or a response, whether a size was announced via the Content-Length header, and so on.

    Headers and trailers

    Header data and trailers are stored the same way in HTX. There are two different block types to simplify internal header processing, but from the HTX perspective, there is no real difference, apart from their position in the message. Header blocks always come after a start-line and before an end-of-headers marker, while trailers always come after any data and are terminated by an end-of-trailers marker. In both cases, it is a {name, value} pair.

    Data

    The message payload is stored as data blocks, outside of any transfer encoding. Thus, in HTTP/1.1, the chunked formatting disappears. The same blocks are also used to store data exchanged in an HTTP tunnel.

    End-of-headers or end-of-trailers markers

    These are blocks without data. Depending on their type, they mark the end of the headers or the end of the trailers of a message. The end-of-headers marker is mandatory and marks the separation between the headers and the rest of the message. The end-of-trailers marker is optional; it marks the end of the HTTP message and can be followed by data exchanged in an HTTP tunnel.

    The benefits of using HTX in HAProxy

    Simplified and more secure manipulation

    As header names are normalized and indexed, searching is straightforward; it is enough to iterate over the blocks and compare them without having to re-parse them. There are relatively few headers per request or response (very often less than ten per request, between one and two dozen per response), so any additional indexing would be superfluous. This simplifies operations by eliminating the need to preserve relative positions between all headers.

    Rewrites are also well-controlled. The name of a header is normally not modified, and the value that needs modification (e.g., cookie modification) is already perfectly delimited and cleared of leading/trailing spaces and CRLFs. The API prevents insertion of forbidden characters such as NUL, CR, or LF. As a result, it is effectively impossible to leave CRLFs in a value accidentally, and each HTX header corresponds to exactly one outgoing header.

    In HTTP/1, the CRLF at the end of each line is automatically added by the protocol conversion, so the user or analyzer working on HTX has no direct control. Except in cases of conversion bugs, this design makes it impossible to pass one header for another or inject portions of a request or response to cause smuggling. Risks of bugs in the analysis or application layer are minimized because this layer no longer needs to manage available space for modifications; HTX handles it automatically, making the API trivial to use.

    Regarding the data, its delimitation is performed by the output converter. A block emitted in HTTP/1 will lead to the creation of a chunk, while the same block emitted in HTTP/2 will lead to one or more DATA frames. The total size of the converted data must match the announced size, otherwise an error is reported. Here too, the user has no control, so there is no risk of confusion about the output protocol. We again benefit from the intrinsic knowledge of the remaining data to be transmitted, avoiding the misinterpretation of boundary formatting that could lead to exploitation.

    This design is why nearly all HTTP/1 attacks since HTX’s introduction have had no effect on HAProxy. A few early issues affected the first versions due to missing checks in the conversion stage (e.g., spaces in the name of an HTTP/2 or HTTP/3 method), but these had very limited impact.

    Closer to RFCs

    Modern HTTP specifications (RFC911x) carefully separate the semantic part from the protocol part. Thus, all HTTP versions are subject to the same rules described in RFC9110, and each version also has its own constraints. For example, HTTP/1 describes how the body length of a message is determined based on the method used, the status code, and the presence or absence of a Transfer-Encoding header, while HTTP/2 and HTTP/3 do not have this last header. Conversely, HTTP/2 and HTTP/3 are not allowed to let a Connection header pass and have specific rules regarding mandatory headers depending on the method, which may also be subject to negotiation through optional extensions (e.g., RFC8441 to carry WebSocket over HTTP/2).

    These checks were tedious to implement in the analysis layer because every one of them had to be conditioned on the protocol version to know what to verify. Errors are even easier to make on the response path: an HTTP/2 request forwarded over HTTP/1 produces an HTTP/1 response that must be re-encoded into HTTP/2, and it is easy to mistakenly rely on the request version rather than the response version for a given operation. All this code was therefore considered very sensitive and received very few improvements for fear of making it fragile.

    The HTX made it possible to push the protocol checks out to the ends, into the converters, and to leave only semantic checks in the analyzers, which operate on the HTX representation. Each protocol converter is thus freer with its checks and can stick to its own rules, without the risk of another protocol inheriting them as a side effect.

    More easily extensible

    Adding support for new protocols only requires writing the new converters, potentially by drawing inspiration from another similar protocol. There is no longer any need to modify the analyzers or the core semantic layer. This is how HTTP/3 support on top of the QUIC layer was added so quickly—with fewer than 3000 lines of code, some of which came from the HTTP/2 implementation (and were later shared between them). Indeed, in practice, implementing a new protocol mostly comes down to writing the HTX transcoding code to/from this protocol and nothing else.

    Support for the FastCGI protocol was introduced in much the same way, since this protocol is primarily just a different representation of HTTP. As a result, the codebase is now well positioned to accommodate new experimental versions of protocols while remaining maintainable. For example, when HTTP/4 eventually takes shape, the HTTP/3 code will probably be reused and adapted to form the beginning of HTTP/4, thus preserving a proven basis that can evolve alongside the protocol and be ready with a functional version once the protocol is ratified.

    This approach keeps the focus on the essential, namely the protocol itself and its interoperability with the outside world rather than on the impacts it could have on the entire codebase. By comparison, the first implementation of HTTP/2, starting from an already ratified protocol, took more than a year to complete.

    Conclusion

    The HTX enables us, among other things, to free ourselves from the details related to the different versions of HTTP in the core of HAProxy, allowing the conversion layers to handle them on input and output. HAProxy is thus capable of enabling clients and servers to communicate regardless of the HTTP versions used on each side. 

    Ultimately, this abstraction immunizes HAProxy against numerous classes of bugs affecting HTTP/1. In HAProxy, HTTP/1 is not processed for analysis (rewriting, routing, or searching). Once the translation into HTX is done, HAProxy is no longer subject to HTTP/1 attacks. The focus is therefore concentrated solely on the conversion layer.

    For example, a well-known request smuggling attack, which involves sending an HTTP/1 request with both a Content-Length header and a Transfer-Encoding header in order to hide a second request in the payload of the first, is not possible in HTX by design. Information related to data size is extracted from the message and stored as metadata.

    While HAProxy cannot block every possible request smuggling attack (consisting of hiding one request inside another), it will not be vulnerable to them and will prevent a certain number of them from being sent to servers, precisely because with the switch to HTX, the HTTP message undergoes normalization, and HAProxy can only transmit what it understands.

    HTTP/1.1 is indeed an ambiguous and complicated protocol to parse correctly. Before HTX, HAProxy was probably affected by some of these known attacks, and a lot of time was spent resolving parsing and processing bugs. Today, this no longer happens. This is a significant benefit brought by the switch to HTX.

    ]]> Sanitizing HTTP/1: a technical deep dive into HAProxy’s HTX abstraction layer appeared first on HAProxy Technologies.]]>
    <![CDATA[HAProxy Enterprise WAF Protects Against React2Shell (CVE-2025-55182)]]> https://www.haproxy.com/blog/react2shell-cve-2025-55182-mitigation-haproxy Mon, 08 Dec 2025 09:12:00 +0000 https://www.haproxy.com/blog/react2shell-cve-2025-55182-mitigation-haproxy ]]> Executive summary (TL;DR)

    At a glance

    • The issue: A critical remote code execution (RCE) vulnerability, dubbed "React2Shell," affects React Server Components and Next.js (CVE-2025-55182).

    • Severity: 10.0 (Critical).

    • Status: Active exploitation observed in the wild; public proof-of-concept code is available.

    HAProxy protection

    • HAProxy Enterprise WAF: Customers using the HAProxy Enterprise WAF, powered by the Intelligent WAF Engine, are protected against most attack vectors. Refined rulesets covering remaining edge cases are available.

    • HAProxy Community Edition: We provide sample ACLs based on best recommendations and known attack vectors.

    • Immediate action required: If you are running React or Next.js behind HAProxy, update your WAF rulesets immediately and plan to patch your backend applications.

    What is CVE-2025-55182 (React2Shell)?

    On December 3, 2025, the React team announced a critical security vulnerability in React Server Components (RSC). Identified as CVE-2025-55182 (and covering the now-duplicate CVE-2025-66478), this flaw allows unauthenticated attackers to execute arbitrary JavaScript code on backend servers.

    Technical impact:

    The vulnerability stems from insecure deserialization within the RSC "Flight" protocol, which is used for client-server communication. By sending a specially crafted HTTP request payload, an attacker can manipulate how React decodes data, influencing server-side execution logic.

    Because the flaw exists in the default configuration of affected applications — including those built with standard frameworks like Next.js — deployments are immediately at risk without requiring any developer code changes.

    CVSS v3 Score: 10.0 (Critical).

    Affected versions:

    • React: Versions 19.0, 19.1.0, 19.1.1, and 19.2.0.

    • Next.js: Versions 15.x and 16.x using App Router.

    • Other Frameworks: Any library bundling the vulnerable react-server implementation (e.g., Waku, RedwoodSDK).

    How HAProxy One protects your infrastructure

    While patching the upstream application is the ultimate remediation, HAProxy One provides a multi-layered security platform that stops attacks at the edge of your network, providing a critical first line of defense. You can stop the attack before it ever reaches your vulnerable servers.

    1. Managed protection with HAProxy Edge

    For customers using HAProxy Edge, our managed Application Delivery Network (ADN), no immediate action is required on your part to enable protection. Your traffic is already being filtered through our global network, which is regularly updated with the latest threat intelligence and WAF rulesets. This ensures you have the best protection available while you plan your backend patching strategy.

    2. Automatic protection with HAProxy Enterprise WAF

    For customers using HAProxy Enterprise WAF only (see below), protection against this exploit is available via our latest rule updates.

    Our initial testing confirmed that the HAProxy Enterprise WAF already blocked most identified malicious payloads associated with this vulnerability. To ensure comprehensive coverage, our security team has refined the ruleset to handle specific edge cases derived from global traffic analysis on HAProxy Edge. This threat intelligence, enhanced by machine learning, ensures your protection evolves as fast as the threat landscape.

    Action required:

    Update your WAF rulesets immediately to ensure you have the latest protections released on December 5th. Follow these instructions to update quickly.

    3. Moderate protection with CRS mode (ModSecurity)

    For customers using HAProxy Enterprise WAF in the OWASP CRS compatibility mode, or the standalone lb-modsecurity module, protection against this vulnerability depends on your active rule version. This is due to the signature-based approach that the OWASP Core Rule Set provides. We advise customers to use the latest stable CRS v4 ruleset and ensure rules REQUEST-920-PROTOCOL-ENFORCEMENT, REQUEST-934-APPLICATION-ATTACK-GENERIC, and REQUEST-949-BLOCKING-EVALUATION are enabled.

    If you are an HAProxy customer and are unsure about what protections you may have or best practices, please contact support.

    Protections with HAProxy Community Edition

    Below is a sample configuration for Community Edition users. It is based on industry recommendations and the known attack vectors, and it is expected to provide reasonable protection, but it may not cover all edge cases. We will update this as we learn more.

    A basic example of a recommended ACL is provided here:
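
    The sketch below is purely illustrative (the matched header, content type, and body pattern are placeholders rather than official signatures for CVE-2025-55182); it buffers the request body and denies requests that resemble React/Next.js server action calls carrying a suspicious payload marker:

    frontend fe_web
        bind :443 ssl crt /etc/haproxy/certs/site.pem
        option http-buffer-request

        # Placeholder indicators -- replace with the indicators published for CVE-2025-55182
        acl is_rsc_action req.hdr(next-action) -m found
        acl is_multipart  req.hdr(content-type) -m sub multipart/form-data
        acl susp_payload  req.body -m sub REPLACE_WITH_PUBLISHED_PATTERN

        http-request deny if is_rsc_action is_multipart susp_payload

        default_backend be_app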

    Additional defensive measures

    While HAProxy mitigates the immediate risk, we recommend a multi-layered security strategy:

    1. Patch the source: Apply the official fixes immediately. The React team has released versions 19.0.1, 19.1.2, and 19.2.1 to address this issue.

    2. Monitor logs: Watch your HAProxy logs for a spike in HTTP 403 errors, which indicates the WAF is actively blocking exploitation attempts.

    3. Audit your environment: Recent data suggests up to 39% of cloud environments may contain vulnerable instances. Ensure you have identified all public-facing applications running Next.js or React.

    Conclusion

    Vulnerabilities like React2Shell highlight the volatility of the modern threat landscape. With threat actors operationalizing exploits within hours of disclosure, relying solely on patching backend applications leaves a dangerous window of exposure.

    HAProxy One provides the robust, multi-layered security needed to “virtually patch” vulnerabilities instantly. By leveraging the intelligence derived from our global traffic, our WAF rulesets evolve in real time to protect your infrastructure.


    ]]> HAProxy Enterprise WAF Protects Against React2Shell (CVE-2025-55182) appeared first on HAProxy Technologies.]]>