Load balancing VMware Horizon's UDP and TCP traffic: a guide with HAProxy
https://www.haproxy.com/blog/load-balancing-vmware-horizons-udp-and-tcp
Fri, 27 Feb 2026 09:59:00 +0000

If you’ve worked with VMware Horizon (now Omnissa Horizon), you know it’s a common way for enterprise users to connect to remote desktops. But for IT engineers and DevOps teams? It’s a whole different story. Horizon’s custom protocols and complex connection requirements make load balancing a bit tricky.

With its recent sale to Omnissa, the technology hasn’t changed—but neither has the headache of managing it effectively. Let’s break down the problem and explain why Horizon can be such a beast to work with… and how HAProxy can help.

What Is Omnissa Horizon?

Horizon is a remote desktop solution that provides users with secure access to their desktops and applications from virtually anywhere. It is known for its performance, flexibility, and enterprise-level capabilities. Here’s how a typical Horizon session works:

  1. Client Authentication: The client initiates a TCP connection to the server for authentication.

  2. Server Response: The server responds with details about which backend server the client should connect to.

  3. Session Establishment: The client establishes one TCP connection and two UDP connections to the designated backend server.

The problem? In order to maintain session integrity, all three connections must be routed to the same backend server. But Horizon’s protocol doesn’t make this easy. The custom protocol relies on a mix of TCP and UDP, which have fundamentally different characteristics, creating unique challenges for load balancing.

Why Load Balancing Omnissa Horizon Is So Difficult

The Multi-Connection Challenge

Since these connections belong to the same client session, they must route to the same backend server. A single misrouted connection can disrupt the entire session. For a load balancer, this is easier said than done.

The Problem with UDP

UDP is stateless, which means it doesn’t maintain any session information between the client and server. This is in stark contrast to TCP, which ensures state through its connection-oriented protocol. Horizon’s use of UDP complicates things further because:

  • There’s no built-in mechanism to track sessions.

  • Load balancers can’t use traditional stateful methods to ensure all connections from a client go to the same server.

  • Maintaining session stickiness for UDP typically requires workarounds that add complexity (like an external data source).

Traditional Load Balancing Falls Short

Most load balancers rely on session stickiness (or affinity) to route traffic consistently. In TCP, this is often achieved with in-memory client-server mappings, such as with HAProxy's stick tables feature. However, since UDP is stateless and doesn't track sessions like TCP does, stick tables do not support UDP. Keeping everything coordinated without explicit session tracking feels like solving a puzzle without all the pieces—and that’s where the frustration starts. 

This is why Omnissa (formerly VMware) suggests using their “Unified Access Gateway” (UAG) appliance to handle the connections. While this simplifies one problem, it adds another layer of cost and complexity to your network. You may still need the UAG for a more comprehensive Omnissa deployment, but it would be great if there were a simpler, cleaner, and more efficient solution.

This leaves engineers with a critical question: How do you achieve session stickiness for a stateless protocol? This is where HAProxy offers an elegant solution.

Enter HAProxy: A Stateless Approach to Stickiness

HAProxy’s balance source algorithm is the key to solving the Horizon multi-protocol challenge. This approach uses consistent hashing to achieve session stickiness without relying on stateful mechanisms like stick tables. From the documentation:

“The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. This ensures that the same client IP address will always reach the same server as long as no server goes down or up.” 

Here’s how it works:

  1. Hashing Client IP: HAProxy computes a hash of the client’s source IP address.

  2. Mapping to Backend Servers: The hash is mapped to a specific backend server in the pool.

  3. Consistency Across Connections: The same client IP will always map to the same backend server.

This deterministic, stateless approach ensures that all connections from a client—whether TCP or UDP—are routed to the same server, preserving session integrity.

Why Stateless Stickiness Works

The beauty of HAProxy’s solution lies in its simplicity and efficiency: it has low overhead, works for both protocols, and is tolerant of changes. Changes to the server pool may cause some connections to rebalance, but those clients will be redirected consistently, as noted in the documentation:

“If the hash result changes due to the number of running servers changing, many clients will be directed to a different server.”

It is super efficient because there is no need for in-memory storage or synchronization between load balancers. The same algorithm works seamlessly for both TCP and UDP. 

This stateless method doesn’t just solve the problem; it does so elegantly, reducing complexity and improving reliability.

Implementing HAProxy for Omnissa Horizon

While the configuration is relatively straightforward, we will need the HAProxy Enterprise UDP Module to provide UDP load balancing. This module is included in HAProxy Enterprise, which adds additional enterprise functionality and ultra-low-latency security layers on top of our open-source core.

Implementation Overview

So, how easy is it to implement? Just a few lines of configuration will get you what you need. You start by defining your frontend and backend, and then add the “magic”:

  1. Define Your Frontend and Backend: The frontend section handles incoming connections, while the backend defines how traffic is distributed to servers.

  2. Enable Balance Source: The balance source directive ensures that HAProxy computes a hash of the client’s IP and maps it to a backend server.

  3. Optimize Health Checks: Include the check keyword for backend servers to enable health checks. This ensures that only healthy servers receive traffic.

  4. UDP Load Balancing: The UDP module in the enterprise edition is necessary for UDP load balancing, and uses the udp-lb keyword. 

Here’s what a basic configuration might look like for the custom “Blast” protocol:
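The listing below is a sketch rather than a drop-in configuration: server names, addresses, and the use of Blast's port 8443 are illustrative assumptions, and the udp-lb section requires the HAProxy Enterprise UDP module.

```haproxy
# TCP side: Blast authentication and data channel (port assumed).
frontend fe_blast_tcp
    mode tcp
    bind :8443
    default_backend be_blast_tcp

backend be_blast_tcp
    mode tcp
    balance source          # hash the client's source IP
    hash-type consistent    # minimize remapping when the pool changes
    server srv1 192.168.1.101:8443 check
    server srv2 192.168.1.102:8443 check

# UDP side: the udp-lb section is HAProxy Enterprise UDP module syntax.
udp-lb blast_udp
    dgram-bind :8443
    balance source          # same hash, so UDP lands on the same server
    server srv1 192.168.1.101:8443
    server srv2 192.168.1.102:8443
```

Because both the TCP backend and the UDP section hash the same source IP, a given client deterministically reaches the same server over both protocols.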

This setup ensures that all incoming connections—whether TCP or UDP—are mapped to the same backend server based on the client’s IP address. The hash-type consistent option minimizes disruption during server pool changes.

This approach is elegant in its simplicity. We use minimal configuration, but we still get a solid approach to session stickiness. It is also incredibly performant, keeping memory usage and CPU demands low. Best of all, it is highly reliable, with consistent hashing ensuring stable session persistence, even when servers are added or removed.

Refined health tracking & balancing UAG

While the basic configuration above works well, there are a few refinements and adjustments that can be added for a more comprehensive solution. In production-grade Omnissa Horizon environments, HAProxy is typically deployed in front of Unified Access Gateways (UAGs) rather than directly in front of internal Connection Servers. 

This architecture places HAProxy at the edge to manage incoming external traffic before it enters the DMZ, ensuring that UAGs (which act as hardened proxies for internal VDI operations) remain secure and performant. There are a few key refinements we can add for this production-ready setup:

Synchronized health tracking

While basic port checks verify network connectivity, they do not guarantee that the underlying Horizon application services are healthy. To solve this, use a dedicated health check backend (such as be_uag_https) that targets a path like /favicon.ico over HTTPS. This lets HAProxy verify that the relevant UAG and Connection Server services are fully functional, not just that the port is open.

Long-lived session persistence

Omnissa Horizon sessions are notably long-lived, with a default maximum duration of 10 hours. Standard load balancer timeouts are often too aggressive, potentially severing active virtual desktop connections during a typical workday. To ensure stability, HAProxy can be configured with extended timeout server and timeout client settings of 10 hours for all Blast and PCoIP backends. This aligns the load balancer’s persistence with the application’s session lifecycle, ensuring that even if a user is momentarily idle, their secondary protocols remain pinned to the correct UAG node.
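As a sketch, aligning the load balancer with Horizon's session lifecycle might look like the following; the connect timeout value is an assumption, while the 10-hour client/server timeouts come from the default session duration discussed above.

```haproxy
# Illustrative: extend idle timeouts to match Horizon's 10-hour
# maximum session duration so active desktops are never severed.
defaults
    mode tcp
    timeout connect 5s
    timeout client  10h   # idle time allowed on the client side
    timeout server  10h   # idle time allowed on the server side
```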

Edge security and SSL bridging

For external-facing deployments, HAProxy should serve as the first line of defense using advanced security features like WAF (Web Application Firewall) and Brute Force Detection on the initial authentication endpoints. This protects the environment from credential-stuffing and application-layer attacks before they ever reach the UAG. 

Furthermore, because UAGs require end-to-end encryption for security, HAProxy should be configured for SSL Bridging. It is important to use the same SSL certificate on both the HAProxy virtual service and the UAG nodes.

This is crucial because the UAGs use fingerprinting for the certificate used for incoming requests, meaning the certificate presented by the HAProxy load balancer and the certificate on the UAG's outside interface must be the same to prevent certificate mismatch errors during the session handoff between the primary authentication and secondary display protocols.

Sample configuration with UAG load balancing & advanced health tracking

In this refined setup, the be_uag_https backend does the heavy lifting. All other backends simply "watch" its status. See the Omnissa documentation for a full list of port requirements for the different services within Unified Access Gateway.
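A sketch of this pattern follows; addresses, ports, and backend names are illustrative assumptions rather than a complete production configuration.

```haproxy
# Single source of truth: an HTTPS check against each UAG node.
backend be_uag_https
    mode http
    option httpchk GET /favicon.ico
    server srv1 192.168.1.101:443 check ssl verify none
    server srv2 192.168.1.102:443 check ssl verify none

# Secondary backends inherit health state via "track" instead of
# sending their own probes.
backend be_blast_tcp
    mode tcp
    balance source
    hash-type consistent
    server srv1 192.168.1.101:8443 track be_uag_https/srv1
    server srv2 192.168.1.102:8443 track be_uag_https/srv2
```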

Understanding the track Directive and Timing

When you use the track keyword, the secondary servers inherit the state of the target. They don’t send their own health check packets, which keeps everything synchronized: if srv1 fails the favicon check, it is marked down for Blast TCP, Blast UDP, and PCoIP UDP at the same instant.

This prevents the "zombie session" issue. Without tracking, a user might be connected via TCP while their UDP media stream is hitting a dead server.

This centralized tracking approach transforms your health checks from a series of fragmented probes into a unified "source of truth" for your infrastructure. By anchoring every protocol to a single HTTP health check, you eliminate the risk of partial failures: a server can no longer appear healthy for UDP while its TCP services are failing, and the client's entire session remains synchronized.

It's a configuration that's both more robust and significantly lighter on your backend resources, providing the stability required for high-performance virtual desktop environments.

Advanced Options in HAProxy 3.0+

HAProxy 3.0 introduced enhancements that make this approach even better. It offers more granular control over hashing, allowing you to specify the hash key (e.g., source IP or source+port). This is particularly useful when client IP addresses may overlap, or when the same set of servers is listed in a different order on different load balancers.

We can also include hash-balance-factor, which will help keep any individual server from being overloaded. From the documentation:

“Specifying a "hash-balance-factor" for a server with "hash-type consistent" enables an algorithm that prevents any one server from getting too many requests at once, even if some hash buckets receive many more requests than others. 

[...]

If the first-choice server is disqualified, the algorithm will choose another server based on the request hash, until a server with additional capacity is found.”

Finally, we can adjust the hash function to be used for the hash-type consistent option. This defaults to sdbm, but there are 4 functions and an optional none if you want to manually hash it yourself. See the documentation for details on these functions.

Sample configuration using advanced options:
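One possible shape for such a configuration is shown below; the choice of the djb2 function, a balance factor of 150, and hash-key addr are illustrative assumptions, not recommendations.

```haproxy
backend be_blast_tcp
    mode tcp
    balance source
    hash-type consistent djb2   # pick an alternative hash function
    hash-balance-factor 150     # cap any server at 150% of the average load
    # hash-key addr keys servers by address, so the hash ring does not
    # depend on the order in which servers are listed.
    server srv1 192.168.1.101:8443 check hash-key addr
    server srv2 192.168.1.102:8443 check hash-key addr
```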

These features improve flexibility and reduce the risk of uneven traffic distribution across backend servers.

Coordination Without Coordination

The genius of HAProxy’s solution lies in its stateless design. By relying on consistent hashing, it achieves an elegant solution that many would assume requires complex session tracking or external databases. This approach is not only efficient but also scalable.

The result? A system that feels like it’s maintaining state without actually doing so. It’s like a magician revealing their trick—it’s simpler than it looks, but still impressive.

Understanding Omnissa Horizon’s challenges is half the battle. Implementing a solution can be surprisingly straightforward with HAProxy. You can ensure reliable load balancing for even the most complex protocols by leveraging stateless stickiness through consistent hashing.

This setup not only solves the Horizon problem but also demonstrates the power of HAProxy as a versatile tool for DevOps and IT engineers. Whether you’re managing legacy applications or cutting-edge deployments, HAProxy has the features to make your life easier.


Securing 80,000 transactions per second at Infobip with HAProxy Enterprise WAF
https://www.haproxy.com/blog/securing-80000-transactions-per-second-at-infobip-with-haproxy-enterprise-waf
Fri, 27 Feb 2026 00:00:00 +0000

The average cost of a security breach reached nearly $4.4 million in 2025, according to the Cost of a Data Breach Report. To proactively address this substantial financial and security risk, Infobip, a global cloud communications platform, used HAProxy Enterprise to implement a security and uptime framework that is both highly modular and highly performant.

Infobip has 62 data centers spread across the globe — and operates each data center with everything it needs to run independently of others. There are no reliability dependencies between data centers, and if one or more go down, the others automatically pick up the slack. 

The company processes enormous volumes of traffic, peaking at over 80,000 transactions per second during events such as Black Friday. These transactions pass through HAProxy Enterprise with the integrated HAProxy Enterprise WAF.

To protect its applications and meet strict customer compliance requirements, Infobip needed a Web Application Firewall (WAF). However, finding a solution that could meet their demanding technical and business needs was a significant challenge. 

At HAProxyConf, engineers from Infobip shared the story of their search and how they ultimately found success with the next-gen HAProxy Enterprise WAF, powered by the Intelligent WAF Engine. Their journey highlights the critical need for a WAF that delivers security without compromising on performance, accuracy, or manageability. 

The challenge: finding a scalable WAF for a global, high-performance infrastructure

Infobip’s requirements for a WAF were stringent. Their globally distributed infrastructure, with scores of independent data centers, meant that any solution had to be scalable and easy to manage centrally. Furthermore, due to demanding client SLAs, Infobip had to keep any new latency to an almost invisible level.  

Additional security — with no added latency? This strict requirement immediately excluded many traditional WAFs, which are often slow and inefficient.

The team evaluated several options:

  • Cloud-based WAFs were not a good fit. Concerns included whether vendors had a presence in all of Infobip's regions and the need to classify the WAF provider as a data sub-processor, which they wanted to avoid. 

  • Hardware appliances were also ruled out. Scalability was lacking, management was a challenge, and costs were high. 

  • Virtual appliances didn’t meet Infobip’s operational approach, which runs everything possible in containers for consistency, security, and ease of management. 

Since Infobip was already a happy user of HAProxy Enterprise for load balancing and SSL termination, they decided to put HAProxy Enterprise WAF to the test. 

The evaluation: the Intelligent WAF Engine provides a breakthrough

Infobip’s initial tests involved two distinct WAF engines: one based on ModSecurity and the HAProxy Advanced WAF (which has since been succeeded by the HAProxy Enterprise WAF). The results were mixed, highlighting the "WAF trade-off" with either option:

  • The Advanced WAF was extremely fast but proved too aggressive for their web portal, leading to false positives.

  • The ModSecurity WAF handled the portal well but introduced unacceptable latency on high-throughput APIs.

Infobip needed one solution that could handle both use cases, without the trade-offs. Fortunately, during the evaluation period, HAProxy Technologies launched the next-generation HAProxy Enterprise WAF, powered by the Intelligent WAF Engine.

This new WAF is designed to address the complexities and demands of modern application environments and the advanced threats they face — and is distinguished by its exceptional balanced accuracy, simple management, and ultra-low latency and resource usage. The Intelligent WAF Engine represents a technical breakthrough by moving beyond static lists and regex-based attack signatures to a non-signature-based detection system.

By employing threat intelligence from HAProxy Edge’s 60+ billion daily requests, enhanced by machine learning, the Intelligent WAF Engine delivers:

  • Exceptional accuracy: A 98.5% balanced accuracy rate in an open source WAF benchmark, significantly outperforming the industry average of 90%.

  • Ultra-low latency: Under 1ms of added latency, even when handling complex traffic.

  • Simple management: Easy to set up and manage with out-of-the-box behavior suitable for most deployments.

  • 100% privacy: No external connection, and no third-party data processing.

A notable feature of the HAProxy Enterprise WAF is the optional OWASP Core Rule Set (CRS) compatibility mode, for organizations that require OWASP CRS support for specific use cases or compliance. When enabled, this mode achieves on average 15X lower latency than the ModSecurity WAF using the OWASP CRS — even under mixed traffic conditions.

This next-generation WAF solved Infobip's core problem, providing the ultra-low latency needed for API traffic and the exceptional accuracy required for their web portal, with an efficient and privacy-first operating model.

The implementation: a phased, automated rollout

Infobip had a solution to their challenging security and performance requirements in hand. Now they "just" needed to deploy it — and keep it updated — safely and securely.

Infobip devised a careful, automated rollout plan across all 62 of their data centers:

  1. Deploy in learning mode: The team first deployed HAProxy Enterprise WAF in a non-blocking learning mode. This allowed them to learn traffic patterns and fine-tune rules without impacting production. To ensure rock-solid reliability, they configured a “circuit breaker” to automatically disable the WAF if CPU usage ever spiked, choosing availability over security during the initial learning phase. (NB: No spike occurred.) 

  2. Enable protection path-by-path: Due to Infobip's use of a microservices architecture, they had the ability to enable blocking mode on an application-by-application basis. The team would analyze the WAF traffic for a specific path (e.g., /sms), ensure there were no false positives, and then switch that path to protection mode. This gave them the opportunity to monitor again in production, then move to the next application. 

  3. Automate with dynamic updates: Infobip manages all configurations centrally and deploys updates globally within 15 minutes. When a new application comes online, they simply update a map file that is automatically downloaded by HAProxy Enterprise instances, avoiding a full reload or redeployment, and the latency hiccups that would cause. This highlights the simple yet powerful setup and management framework that HAProxy Enterprise provides.
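Infobip's exact configuration isn't shown, but the map-file pattern in step 3 can be sketched with core HAProxy primitives. The file path, the map contents, and the idea of storing the looked-up mode in a variable are all hypothetical illustrations.

```haproxy
frontend fe_api
    bind :443 ssl crt /etc/haproxy/site.pem
    # waf-mode.map contains lines such as:
    #   /sms  block
    #   /mms  learn
    # Unknown paths fall back to the non-blocking "learn" default.
    http-request set-var(txn.waf_mode) path,map_beg(/etc/haproxy/waf-mode.map,learn)
```

A Runtime API command such as `set map /etc/haproxy/waf-mode.map /sms block` then updates the mapping in place, with no reload required.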

During Infobip’s presentation, the audience asked, “After setting up an app, do you still need much fine-tuning of WAF rules?” to which Juraj Ban replied, “No. Not anymore.”

The result: security + performance, without compromise

By implementing HAProxy Enterprise WAF, Infobip achieved its goal of strengthening its security posture without sacrificing performance. After the initial fine-tuning, they have experienced virtually no false positives and have met or exceeded all customer compliance requirements.

The project was so successful that Infobip’s Chief Information Security Officer, Andro Galinović, provided a powerful endorsement:

Infobip's story is a testament to how a modern, intelligent WAF can solve the complex security challenges of a global, high-performance platform. By choosing HAProxy Enterprise, they gained a solution that is not only fast and accurate but also flexible enough to fit seamlessly into their highly automated, container-based environment.


Omnissa Horizon alternative: how HAProxy solves UDP load balancing
https://www.haproxy.com/blog/omnissa-horizon-alternative
Wed, 25 Feb 2026 14:00:00 +0000

The grace period is over. Your Horizon environment needs a new home, and your legacy load balancer isn't coming with you. You need a better Omnissa Horizon alternative.

Omnissa's separation from Broadcom has disrupted VDI routing for many organizations, and vSphere 7's October 2025 end-of-life has made the situation more urgent. If you're planning to replace Omnissa Horizon infrastructure right now, you're facing a choice: replicate the old expensive architecture or use this forced refresh to fix what wasn't working.

Legacy ADCs were never built for this protocol

Omnissa Horizon runs on Blast Extreme, a UDP-heavy protocol that creates a coordination nightmare for traditional load balancers. Every user session requires three simultaneous connections: one TCP channel for authentication, plus two UDP streams for display and audio. All three must hit the same backend server, or the session dies.

Legacy ADCs (Application Delivery Controllers) solve this with brute force: massive in-memory "coordination tables" that track every connection state. This approach was already inefficient, but in a forced migration scenario, it becomes a budget killer. You're looking at hardware refresh quotes that rival your new Omnissa licensing costs, just to track state for a protocol that was built on stateless UDP in the first place.

There's a better approach that eliminates this architectural bottleneck entirely.

HAProxy stateless coordination

HAProxy solves the Blast routing challenge with consistent hashing (balance source) for TCP and UDP load balancing, a stateless algorithm that maps client IPs to backend servers deterministically.

Here's why this matters for your migration:

Traditional ADC                       HAProxy Enterprise
-----------------------------------   -----------------------------------------------
Stores connection state in memory     Uses pure math, no state to sync
Requires hardware overprovisioning    Scales horizontally on commodity infrastructure
Cost scales with capacity             Cost scales per HAProxy Enterprise instance

With HAProxy, you get superior Blast performance, eliminate hardware refresh CAPEX, and free up budget to offset rising vSphere costs.

Stateless stickiness in action

When a Horizon client connects, HAProxy hashes the client's source IP. That hash deterministically maps to the same backend server, which means the TCP auth connection and both UDP streams all route to the same destination, without storing session tables.

There is no state to replicate across HA pairs, no memory tuning for peak user counts, and no licensing tiers based on "connections per second."
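In configuration terms, the whole mechanism reduces to a couple of directives per backend; the backend name, server addresses, and port below are assumptions for illustration.

```haproxy
backend be_horizon
    mode tcp
    balance source        # deterministic: hash(client IP) -> server
    hash-type consistent  # pool changes remap only a fraction of clients
    server vdi1 10.0.0.11:8443 check
    server vdi2 10.0.0.12:8443 check
```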

Build strategically: get more than a VDI Gateway

Migrating to HAProxy as your Omnissa Horizon alternative doesn't have to be purely defensive spending. There's a broader infrastructure problem you can solve at the same time.

Most organizations today suffer from application delivery fragmentation. You're running legacy ADCs for VDI and web apps, separate API gateways for microservices, service mesh overlays for Kubernetes, and different tools for different clouds. 

Each silo has its own management plane, monitoring stack, and security policy language. Troubleshooting a user complaint that spans "VDI → Kubernetes app → external API" requires logging into four different systems.

By choosing HAProxy for your Omnissa migration, you're automatically placing the cornerstone of a Universal Mesh architecture into your infrastructure.

What Universal Mesh means in practice

The same HAProxy Enterprise instance handling your Blast traffic can:

  • Route north-south traffic (users → VDI pools)

  • Route east-west traffic (VDI → backend databases, internal APIs)

  • Serve as your Kubernetes Ingress Controller (containerized apps)

  • Act as your API Gateway (external partner integrations)

All managed through HAProxy Fusion Control Plane: one UI, one config model, one observability platform.

Migration path: tactical fix to strategic foundation

Phase 1 (weeks 1-4): solve the immediate crisis

  • Deploy HAProxy Enterprise as your Omnissa Horizon gateway through HAProxy Fusion Control Plane

  • Configure balance source with consistent hashing for stateless UDP routing

  • Migrate user traffic off the legacy ADC

Phase 2 (months 2-6): consolidate adjacent workloads

  • Route your web application traffic through the same HAProxy layer

  • Migrate API gateway functions to HAProxy Enterprise (you already own it)

  • Route Kubernetes traffic through HAProxy Enterprise

Phase 3 (6-12 months): full Universal Mesh

  • Federate HAProxy Enterprise instances across clouds

  • Establish unified policy for mTLS, rate limiting, and WAF

  • Retire the last legacy ADC appliances

By this point, you will have addressed the immediate Horizon crisis and consolidated your application delivery infrastructure. Instead of managing separate systems for VDI, API gateway, and Kubernetes ingress, you're running a unified data plane. The operational benefit shows up in troubleshooting: when you can trace a user issue from VDI through containerized apps to external APIs in a single interface, you're solving problems in minutes instead of hours.

This moment matters

The Omnissa migration is forcing you to make decisions now, but the consequences of those decisions will compound for years.

Choosing the path of least resistance (buying another expensive ADC because "it's what we know") might leave companies having this same conversation in a few years when the next vendor changeup occurs. 

The technical complexity of the Omnissa migration is real. But the path through it doesn't have to be complicated.

Ready to escape the ADC vendor lock-in?

Talk to our solutions team about architecting your Omnissa environment on HAProxy Enterprise and building the foundation for a Universal Mesh that grows with you.

Don't panic: a low-risk strategy for Ingress NGINX retirement
https://www.haproxy.com/blog/low-risk-strategy-for-ingress-nginx-retirement
Thu, 19 Feb 2026 09:00:00 +0000

The Ingress NGINX project is winding down. For many organizations, this means planning a migration for critical infrastructure.

While the HAProxy Kubernetes Ingress Controller is the natural successor for these workloads, a "rip and replace" strategy isn’t always viable. You might have complex configurations, customized annotations, or deployment freezes that make a sudden switch risky.

There's a lower-risk path: Place HAProxy in front of your existing Ingress NGINX deployment. 

By leveraging the HAProxy One platform approach, you can bridge your legacy Ingress NGINX setup and your future infrastructure without downtime. This buys you time while adding immediate security and observability benefits.

Taking a "shield and shift" approach

This strategy mirrors the architecture we've previously recommended for vulnerability protection (like CitrixBleed). Deploy HAProxy Enterprise as your edge layer, sitting in front of your current Ingress NGINX controller. You wrap your existing ingress with enterprise-grade security and visibility, without touching your working NGINX configurations.

This approach leverages a unified data plane. HAProxy Enterprise at the edge creates a protective layer that's consistent with your future HAProxy Kubernetes Ingress Controller. The HAProxy One platform uses the same high-performance engine at the edge and within Kubernetes, unlike disparate solutions that force you to maintain different configurations and skill sets.

The security policies, rate limits, and observability metrics you configure at the edge today translate directly to your Kubernetes clusters tomorrow. No relearning. No translation. 

1. Immediate security hardening

Legacy software becomes a security liability over time. An HAProxy edge layer acts as a security filter. You can apply rate limiting, bot management, and enterprise WAF rules to sanitize traffic before it reaches the deprecated controller.

2. Better visibility into your traffic

Migration anxiety comes from blindness. HAProxy Fusion unifies the management of your external edge gateways and internal Kubernetes controllers.

HAProxy Fusion provides a single pane of glass for all traffic flows—even those heading to your legacy Ingress NGINX controller. It allows you to visualize service dependencies and automate the routing changes required for the migration, turning a manual, error-prone switchover into a managed workflow.

3. Migrate one service at a time

This is the operational advantage. Once HAProxy Enterprise handles your ingress traffic, you don't need to cut everything over at once.

Configure HAProxy Enterprise to route most traffic to your existing Ingress NGINX setup. Then carve out specific paths, domains, or services to route to a new, parallel HAProxy Kubernetes Ingress Controller deployment.

Migrate service by service, pod by pod, or region by region. Test a new configuration in production with real traffic. If it works, great. If not, revert the routing without redeploying your cluster.

Configuration example

The setup is straightforward. Configure your edge HAProxy Enterprise to listen on your public IP address and forward traffic to your Ingress NGINX service's internal IP address.

Here's a simplified routing configuration:
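The following sketch shows the idea; the IP addresses, certificate path, and the /new-service carve-out are illustrative assumptions.

```haproxy
# Hypothetical edge layer: send everything to the existing Ingress NGINX
# service, carving out one path for the new HAProxy ingress controller.
frontend fe_edge
    mode http
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    use_backend be_haproxy_ingress if { path_beg /new-service }
    default_backend be_nginx_ingress

backend be_nginx_ingress
    mode http
    server nginx-ingress 10.0.0.10:443 ssl verify none check

backend be_haproxy_ingress
    mode http
    server haproxy-ic 10.0.0.20:443 ssl verify none check
```

Moving a service to the new controller is then a one-line routing change at the edge, and reverting it is equally simple.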

Looking ahead: Gateway API support

This architecture isn't just a stopgap. It's infrastructure that scales with you.

As Kubernetes networking moves toward the Gateway API, a flexible edge routing layer lets you adopt new standards at your own pace. We're developing HAProxy Unified Gateway to support both Ingress and Gateway API standards—giving you a single platform that evolves with the ecosystem.

Stabilize your environment now. Migrate on your timeline. The configuration knowledge you build today (the routing logic, security policies, and operational patterns) carries forward. You're not buying time to delay a painful migration. You're building the foundation for your next-generation infrastructure, one service at a time.

Getting help

You don't have to migrate alone:

  • Community Support: Join our Slack to discuss migration strategies with other users

  • Documentation: We're releasing migration tutorials and annotation mapping guides soon

  • Enterprise Support: If you need hands-on help for critical workloads, our support and sales teams can help you architect a safe transition with HAProxy Fusion and HAProxy Enterprise

Don't panic: a low-risk strategy for Ingress NGINX retirement appeared first on HAProxy Technologies.
<![CDATA[February 2026 — CVE-2026-26080 and CVE-2026-26081: QUIC denial of service]]> https://www.haproxy.com/blog/cves-2026-quic-denial-of-service Thu, 12 Feb 2026 09:00:00 +0000 https://www.haproxy.com/blog/cves-2026-quic-denial-of-service ]]> The latest versions of HAProxy Community, HAProxy Enterprise, and HAProxy ALOHA fix two vulnerabilities in the QUIC library. These issues could allow a remote attacker to cause a denial of service. The vulnerabilities involve malformed packets that can crash the HAProxy process through an integer underflow or an infinite loop.

If you use an affected product with the QUIC component enabled, you should update to a fixed version as soon as possible. Instructions are provided below on how to determine whether your HAProxy installation is using QUIC. If you cannot update yet, you can temporarily work around the issue by disabling the QUIC component.

Vulnerability details

  • CVE Identifiers: CVE-2026-26080 and CVE-2026-26081

  • CVSSv3.1 Score: 7.5 (High)

  • CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

  • Reported by: Asim Viladi Oglu Manizada

Description

Two separate issues were found in how HAProxy processes QUIC packets:

  • Token length underflow (CVE-2026-26081): This affects versions 3.0 (ALOHA 16.5) and later. A remote, unauthenticated attacker can cause a process crash. This happens by sending a malformed QUIC Initial packet that causes an integer underflow during token validation.

  • Truncated varint loop (CVE-2026-26080): This affects versions 3.2 (ALOHA 17.0) and later. An attacker can cause a denial of service. By sending a QUIC packet with a truncated varint, the frame parser enters an infinite loop until the system watchdog terminates the process.

Repeated attacks can sustain a lasting denial of service in your environment.

Affected versions and remediation

HAProxy Technologies released new versions of its products on Thursday, February 12, 2026, to patch these vulnerabilities.

CVE-2026-26081 (Token length underflow)

  • HAProxy Community / Performance Packages: affected 3.0 and later; fixed in 3.0.16, 3.1.14, 3.2.12, and 3.3.3

  • HAProxy Enterprise: affected 3.0 and later; fixed in hapee-lb-3.0r1-1.0.0-351.929, hapee-lb-3.1r1-1.0.0-355.744, and hapee-lb-3.2r1-1.0.0-365.548

  • HAProxy ALOHA: affected 16.5 and later; fixed in 16.5.30, 17.0.18, and 17.5.16

CVE-2026-26080 (Truncated varint loop)

  • HAProxy Community / Performance Packages: affected 3.2 and later; fixed in 3.2.12 and 3.3.3

  • HAProxy Enterprise: affected 3.2 and later; fixed in hapee-lb-3.2r1-1.0.0-365.548

  • HAProxy ALOHA: affected 17.0 and later; fixed in 17.0.18 and 17.5.16

Test if you’re affected

Users of affected products can determine if the QUIC component is enabled on their HAProxy installation and whether they are affected:

For a single installation (test a single config file):

grep -iE "quic" /path/to/haproxy/config && echo "WARNING: QUIC may be enabled" || echo "QUIC not enabled"

For multiple installations (test each config file in folder):

grep -irE "quic" /path/to/haproxy/folder && echo "WARNING: QUIC may be enabled" || echo "QUIC not enabled"

A response containing “QUIC may be enabled” indicates your HAProxy installation is potentially affected, and you should manually review and disable any QUIC listeners. The fastest method is to use the global keyword tune.quic.listen off (for version 3.3) or no-quic (3.2 and below).
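For illustration, using the keywords named above, the relevant global-section fragment might look like this (version-dependent, as noted):

```
global
    # HAProxy 3.3: leave QUIC compiled in, but stop listening for it
    tune.quic.listen off
    # HAProxy 3.2 and below: use this keyword instead
    # no-quic
```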

Update instructions

Users of affected products should update immediately by pulling the latest image or package for their release track.

  • HAProxy Enterprise users can find update instructions in the customer portal.

  • HAProxy ALOHA users should follow the standard firmware update procedure in your documentation.

  • HAProxy Community users should compile from the latest source or update via their distribution's package manager or available images.

Support

If you are an HAProxy customer and have questions about this advisory or the update process, please contact our support team via the Customer Portal.

<![CDATA[Zero crashes, zero compromises: inside the HAProxy security audit]]> https://www.haproxy.com/blog/haproxy-security-audit-results Mon, 09 Feb 2026 15:00:00 +0000 https://www.haproxy.com/blog/haproxy-security-audit-results ]]> An in-depth look at the recent audit by Almond ITSEF, validating HAProxy’s architectural resilience and defining the shared responsibility of secure configuration.

Trust is the currency of the modern web. When you are the engine behind the world’s most demanding applications, "trust" isn't a marketing slogan—it’s an engineering requirement.

At HAProxy Technologies, we have always believed that high performance must never come at the cost of security or correctness. But believing in your own code isn’t enough. You need objective, adversarial validation. That's why we were glad to hear that ANSSI, the French cybersecurity agency, commissioned a rigorous security audit of HAProxy (performed by Almond ITSEF), focused on source code analysis, fuzzing, and dynamic penetration testing, as part of its efforts to support the security assessment of open source software.

The results are in. After weeks of intense stress testing, code analysis, and fuzzing, the auditors reached a clear verdict: HAProxy 3.2.5 is a mature, secure product that is reliable for production.

While we are incredibly proud of the results, we are equally grateful for the "operational findings" and the recommendations that highlight the importance of configuration in security. Here is a transparent look at what the auditors found and what it means for your infrastructure.

Unshakeable stability: 25 days of fuzzing, zero crashes

The most significant takeaway from the audit was the exceptional stability of the HAProxy core. The auditors didn't just review code; they hammered it.

The team performed extensive "fuzzing" by feeding the system massive amounts of malformed, garbage, and malicious data. They primarily targeted the HAProxy network request handling and internal sockets. This testing went on for days, and in the case of internal sockets, up to 25 days.

The result? Zero bugs. Zero crashes.

For software that manages mission-critical traffic, handling millions of requests per second, this level of resilience is paramount. It confirms that the core logic of HAProxy is built to withstand not just standard traffic, but the chaotic and malicious noise of the open internet.

Validating the architecture

Beyond the stress tests, the audit validated several key architectural choices that differentiate HAProxy from other load balancers.

Process isolation

The report praised HAProxy’s "defense-in-depth" strategy. We isolate the privileged "master" process (which handles administrative tasks, spawns processes, and retains system capabilities) from the unprivileged "worker" process (which handles the actual untrusted network traffic). 

By strictly separating these roles, HAProxy ensures that even if a worker were compromised by malicious traffic, the attacker would find themselves trapped in a container with zero system capabilities.

Custom memory management

Sometimes, we get asked why we use custom memory structures (pools) rather than standard system libraries (malloc). The answer has always been performance. Our custom allocators eliminate the locking overhead and fragmentation of general-purpose libraries, allowing for predictable, ultra-low latency.

However, custom code often introduces risk. That is why this audit was so critical: static analysis confirmed that our custom implementation is not just faster, but robust and secure, identifying no memory corruption vulnerabilities.

Clean code

The auditors found zero vulnerabilities in the HAProxy source code itself. The only vulnerability identified was in a third-party dependency (mjson), which had already been patched in a subsequent update and shared with the upstream project.

A case for shared responsibility

No software is perfect, and no audit is complete without findings. The report highlighted risks that lie not in the software’s flaws, but in operational configuration.

This brings us to a crucial concept: Shared Responsibility. We provide a bulletproof engine, but the user sits in the driver's seat. The audit highlighted a few areas where "default" behaviors prioritize compatibility over strict security, requiring administrators to be intentional with their config.

We believe in transparency, so we are highlighting these operational recommendations to provide guidance, much of which experienced HAProxy users will recognize as standard configuration best practice.

1. The ACL "bypass" myth

The auditors noted that Access Control Lists (ACLs) based on URL paths could be bypassed using URL encoding (e.g., accessing /login by sending /log%69n). While this may appear to be a security gap, it’s actually a result of HAProxy’s commitment to transparency. As a proxy, HAProxy’s primary job is to deliver traffic exactly as it’s received. Since a backend server might technically treat /login and /log%69n as distinct resources, HAProxy doesn't normalize them by default to avoid breaking legitimate, unique application logic.

If your backend decodes these characters and you need to enforce stricter controls, you have three main paths forward:

  1. Adopt a positive security model: Instead of trying to block "bad" paths (which are easy to alias), switch to an "Allow" list that only permits known-good URLs and blocks everything else.

  2. Manual normalization: For specific use cases, you can use the normalize-uri directive to choose which types of normalization to apply to percent-encoded characters before they hit your ACL logic (depending on the application's type and operating system).

  3. Enterprise WAF: If you prefer "turnkey" protection, the HAProxy Enterprise WAF automatically handles this normalization, decoding payloads safely before they reach your ACL logic.
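As a sketch of options 1 and 2 combined (paths, names, and certificate locations here are hypothetical), normalization runs before the ACLs so that /log%69n and /login are evaluated identically:

```
frontend fe_main
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # Decode unreserved percent-encoded characters (%69 -> i) and
    # collapse duplicate slashes before path ACLs are evaluated
    http-request normalize-uri percent-decode-unreserved
    http-request normalize-uri path-merge-slashes
    # Positive security model: permit known-good prefixes, deny the rest
    acl allowed_path path_beg /login /static /api
    http-request deny status 403 unless allowed_path
    default_backend be_app
```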

The positive security model is a standard best practice and the only safe way to deal with URLs. The fact that the auditors unknowingly adopted an unsafe approach here made us think about how to emit new warnings when detecting such bad patterns, maybe by categorizing actions. This ongoing feedback loop within the community helps us continue to improve and refine a decades-old project.

2. Stats page access

The report noted that the Stats page uses Basic Auth and, if not configured with TLS, sends credentials in cleartext. It also reveals the HAProxy version number by default.

It’s important to remember that the Stats page is a legacy developer tool designed to be extremely lightweight. It isn't enabled by default, and its simplicity is a feature, not a bug. It’s meant to provide quick visibility without heavy dependencies. We appreciate the comment on the relevance of displaying the version by default. This is historical, and there's an option to hide it, but we're considering switching the default to hide it and provide an option to display it, as it can sometimes help tech teams quickly spot anomalies.

The stats page doesn’t reveal much truly sensitive data by default, so if you want to expose your stats like many technical sites do, including haproxy.org, you can easily enable it. However, if you configure it to expose information that you consider sensitive (e.g., IP addresses), then you should absolutely secure it.

The page doesn't natively handle advanced encryption or modern auth, so if you need to access it, follow these best practices:

  • Use a strong password for access

  • Wrap the Stats page in a secured listener that enforces TLS and rate limiting.

  • Only access the page through a secure tunnel like a VPN or SSH.
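A minimal sketch of such a hardened listener (the certificate path, port, and credentials are placeholders, not recommendations from the audit):

```
listen stats
    # Local-only bind: reach it through an SSH tunnel or VPN
    bind 127.0.0.1:8404 ssl crt /etc/haproxy/certs/stats.pem
    stats enable
    stats uri /stats
    # Basic Auth is sent over TLS only; pick a strong password
    stats auth admin:use-a-strong-password
    # Don't advertise the HAProxy version
    stats hide-version
```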

For larger environments, HAProxy Fusion offers a more modern approach. Instead of checking individual raw stats pages, HAProxy Fusion provides a centralized, RBAC-secured control plane. This gives you high-level observability across your entire fleet.

3. Startup stability

The auditors identified that specific malformed configuration values (like tune.maxpollevents) could cause a segmentation fault during startup.

While these were startup issues that did not affect live runtime traffic, the issue was identified and fixed immediately, and the fix was released the week following the preliminary report. This is the power of open source and active maintenance—issues are found and squashed rapidly.

Power, trust, and freedom

This audit reinforces the core pillars of our approach:

  • Power: Power is not just speed, but also the ability to withstand pressure. The exhaustive fuzzing tests prove that HAProxy is an engine built not just to run fast, but to run without disruption.

  • Trust: The fact that the auditors found zero vulnerabilities in the source code is a massive validation, but it isn't a coincidence. It is a testament to our Open Source DNA. Trust is earned through transparency, peer review, the continuous scrutiny of a global community, and professional security researchers.

  • Freedom: The "findings" regarding configuration remind us that HAProxy offers infinite flexibility. You have the freedom to configure it exactly as your infrastructure needs, but that freedom requires understanding your configuration choices.

Conclusion: deploy with confidence

The audit concludes that HAProxy 3.2 is "very mature" and "reliable for production".

We are committed to maintaining these high standards. We don't claim our code is flawless (no serious developer does). But we do claim that our focus on extreme performance never compromises our secure coding practices.

Next steps for users:

  • Upgrade: We recommend all users upgrade to the latest HAProxy 3.2+ to benefit from the latest hardening and fixes.

  • Review: Audit your own configurations. Are you using "Deny" rules on paths? Consider switching to the standard positive security model.

  • Explore: If the complexity of manual hardening feels daunting, explore HAProxy One. It provides the same robust engine but adds the guardrails to simplify security at scale.

<![CDATA[How Dartmouth avoided vendor lock-in and implemented LBaaS with HAProxy One]]> https://www.haproxy.com/blog/how-dartmouth-implemented-lbaas-with-haproxy-one Thu, 05 Feb 2026 00:00:00 +0000 https://www.haproxy.com/blog/how-dartmouth-implemented-lbaas-with-haproxy-one ]]> History is everywhere at Dartmouth College, and while the campus is steeped in tradition, its IT infrastructure can’t afford to get stuck in the past. In an institution where world-class research and undergraduate studies intersect, technology must be fast, invisible, and – above all – reliable.

That reliability was put to the test when Dartmouth’s load balancing vendor was acquired twice in five years, as Avi Networks moved to VMware and VMware moved to Broadcom. Speaking at HAProxyConf 2025, Dartmouth infrastructure engineers Curt David Barthel and Kevin Doerr described how they began to see what they called “rising license costs without apparent value, and declining vendor support subsequent to acquisition after acquisition.”

It was clear that they were beginning to pay more for less — and it was time for a change.

After conducting thorough research, interviews, and demonstrations, Dartmouth settled on the best path forward: HAProxy One, the world’s fastest application delivery and security platform. 

For Dartmouth, it wasn’t just a migration; it was an opportunity to innovate on its existing infrastructure. They leveraged the platform’s deep observability and automation to architect a custom Load Balancing as a Service (LBaaS) solution.

Today, that platform is fully automated and self-service, making life easier for 50+ users across various departments and functions. Dartmouth’s journey serves as a technical blueprint for those hoping to make the switch from Avi to HAProxy One.

Was history repeating itself?

As an undergraduate at Dartmouth, you’re likely to be taught that history doesn’t repeat itself — but sometimes it rhymes. 

Infrastructure changes were not new to the Dartmouth IT team. For roughly 20 years, the team managed its infrastructure using F5 Global and Local Traffic Managers. Later, they layered a software load balancing solution from Avi Networks on top of their F5 environment.

However, the landscape shifted as Avi was acquired by VMware, which was subsequently acquired by Broadcom. The changes led to rising licensing costs and declining vendor support. The solution began to feel like a closed ecosystem, forcing Dartmouth into a state of vendor lock-in that limited its architectural freedom.

Ultimately, the team identified three "deal-breakers" that made their legacy environment unsustainable:

  1. Vendor lock-in: Today’s multi-cloud and hybrid cloud environments demand a platform-agnostic infrastructure. Yet, Dartmouth’s existing software was moving in the opposite direction — becoming increasingly tied to a specific vendor's ecosystem (VMware).

  2. Rising costs & constrained scaling: The licensing model was no longer aligned with Dartmouth’s needs. Increases in traffic often triggered disproportionately high costs, while complex licensing tiers made it difficult for the team to scale or innovate creatively.

  3. Automation roadblocks: To provide true "Load Balancing as a Service," the team needed a robust, template-driven workflow. The existing API didn't support the level of deep automation and auditability required to offer users a truly self-service experience.

Meeting new criteria

The Dartmouth team followed a dictum from the famous UCLA basketball coach, John Wooden: “Be quick — but don’t hurry.” 

They had established a high level of service for their users, and they wanted to maintain and improve on it. So they set out their requirements carefully, including:

  • Comprehensive load balancing: Robust support for both L4 and L7 traffic.

  • API-first control plane: A solution that offers total data plane management through a modern, programmable interface.

  • Deep automation: Built-in features to support a GitOps-style workflow.

  • Modern orchestration: Native service discovery for Kubernetes environments.

  • Extensibility: The ability to customize and extend the platform to meet unique institutional needs.

To find the right partner, Dartmouth conducted an extensive evaluation in which top vendors demonstrated their products, supplemented by customer reference interviews. HAProxy stood out for “less grandiose marketing” and the ability to run on-premises in addition to cloud-native implementation.

HAProxy One met every current requirement and supported future plans. The platform was found to be cost-effective and to feature excellent support. 

"We interviewed many vendors, and HAProxy came out on top, particularly with the top-notch support model. It's beyond remarkable — it's unparalleled. Having that wealth of expertise is absolutely invaluable."

Building Rome in a few days

To replace their legacy environment, the Dartmouth team didn't just install new software; they engineered a robust, automated platform. 

The deployment was centered around HAProxy Fusion Control Plane, integrating essential networking components like IP address management (IPAM), global server load balancing (GSLB), and the virtual router redundancy protocol (VRRP). To maintain consistency with their existing operations, they also implemented custom TCP and HTTP log formats using the common log format (CLF).

The team then worked with their existing configuration manifests, in YAML format, which are sent to a Git repo to specify each user’s configuration options. This is all driven by a master Ansible playbook. 

At the heart of this new system is a GitOps-driven workflow that makes infrastructure changes nearly invisible to the end user. The process follows a highly structured pipeline:

  1. User input: Power users submit their requirements through a simple, standardized front end.

  2. Manifest creation: These requirements are captured in YAML-formatted configuration manifests and committed to a Git repository.

  3. Automation pipeline: Each commit triggers a Jenkins pipeline that launches a master Ansible playbook.

  4. Configuration generation: Ansible uses Jinja2 templates to transform the YAML data into a valid, human-readable HAProxy configuration file.

  5. Centralized deployment: The playbook authenticates to the HAProxy Fusion Control Plane via API and pushes the configuration to HAProxy Fusion as a single, centralized update.

  6. Data plane synchronization: HAProxy Fusion then distributes and synchronizes the configuration across the entire fleet of HAProxy Enterprise data plane nodes, ensuring consistent, high-availability deployment at scale.

This modular approach provides Dartmouth with a "plug-and-play" level of flexibility. While the team is not deploying a web application firewall (WAF) at go-live, the framework is already in place to support it. When they are ready to activate the HAProxy Enterprise WAF, the process will be streamlined. Once the initial migration is complete, adding security layers will be as simple as activating a pre-tested template.

Observability without complexity

A big win for the IT team was the clear separation of responsibilities. Users are granted read-only access to HAProxy Fusion, allowing them to track the status of their requests and view their specific configurations in real time. Meanwhile, the IT team retains central control over the control plane, ensuring security and stability across the entire institution.

With every configuration change fully logged and auditable, troubleshooting has shifted from a manual "guessing game" to a data-driven process. Combined with HAProxy’s highly responsive support, Dartmouth now has a load-balancing environment that is not only faster and more cost-effective but significantly easier to manage.

Keys to the new city

Sometimes it’s seemingly small things that turn out to be crucial to success. What made Dartmouth’s transition to HAProxy work so well? 

The team manages more than 1,100 load balancer manifests, all of which were confirmed and validated against the new automation framework well before “go-live.” Specific “power” users were trained to use the HAProxy Fusion GUI, preparing them in advance for system deployment. 

The old architecture and the new one have been run side-by-side, so migration only requires a simple CNAME switch. If issues arise, users can fall back to the previous implementation, and behavior between the two systems can be easily compared in a real, “live fire” environment.

The team cited several critical success factors, including:

  • The HAProxy Slack channel for support, with unparalleled responsiveness and a highly capable team

  • A developer team at HAProxy that is consistently available and responsive

  • Power user engagement and trust through early testing and implementation

Every feature from the Avi environment has now been implemented on HAProxy One — and in the process, Dartmouth has been able to introduce new capabilities that didn’t exist before. The response to date has been very strong. Power users say, “This looks great. This is much better than what we used to have.”

Ultimately, Dartmouth didn’t just swap vendors; they built a platform that puts them back in control. By prioritizing automation and architectural freedom, the team has moved past the cycle of rising costs and closed ecosystems. They now have a high-performance, self-service environment that is reliable, cost-effective, and ready to scale whenever they are.

<![CDATA[Properly securing OpenClaw with authentication]]> https://www.haproxy.com/blog/properly-securing-openclaw-with-authentication Tue, 03 Feb 2026 08:24:00 +0000 https://www.haproxy.com/blog/properly-securing-openclaw-with-authentication ]]> OpenClaw (née MoltBot, née ClawdBot) is taking over the world. Everyone is spinning up their own instance, either on a VPS or on a Mac mini.

But here's the problem: OpenClaw is brand new, and its security posture is mostly unknown. Security researchers have already found thousands of publicly available instances exposing everything from credentials to private messages.

While OpenClaw has a Gateway component — the UI and WebSocket that controls access — there are serious issues with its password/token-based authentication:

  • Until recently, you could skip authentication entirely on localhost.

  • The GET URL token authentication mechanism is questionable for such young code.

  • Trust needs to be earned, not assumed.

In this post, we'll secure OpenClaw using a battle-tested method with HAProxy.

The plan: implement HAProxy’s HTTP Basic Authentication

HAProxy’s HTTP Basic Authentication is a robust method for securing access to production systems with a username/password combination. In this guide, we’ll do the following:

  1. Install HAProxy

  2. Configure HAProxy with automatic TLS, basic auth, and rate limiting

  3. Install OpenClaw and authenticate access using the basic auth credentials

We'll cover running OpenClaw on a VPS first. In a follow-up, we'll tackle Mac mini deployments with secure remote access (think Tailscale, but entirely self-hosted). 

We'll also add smart rate limiting: anyone who sends more than 5 unauthorized requests within 120 seconds is blocked for 1 minute. The clever part? They'll see a 401 Unauthorized instead of 429 Too Many Requests, so attackers won't even know they've been rate-limited.
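A sketch of this pattern using a stick table (the userlist name, credentials, and certificate path are hypothetical placeholders, not the article's actual snippets):

```
userlist openclaw_users
    user admin insecure-password change-me

frontend fe_openclaw
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # Track each client IP and count failed logins over 120 seconds
    stick-table type ip size 100k expire 2m store gpc0,gpc0_rate(120s)
    http-request track-sc0 src
    acl auth_ok http_auth(openclaw_users)
    http-request sc-inc-gpc0(0) if !auth_ok
    # More than 5 failures in the window? Answer 401 even to valid
    # credentials, so the attacker can't tell they were rate-limited.
    acl abuser sc0_gpc0_rate gt 5
    http-request auth realm OpenClaw if abuser || !auth_ok
    default_backend be_openclaw
```

Because blocked clients are challenged with `http-request auth` rather than denied with a 429, the response is indistinguishable from an ordinary failed login.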

You'll need two things to get started:

  1. A VPS running anywhere with Ubuntu 24.04, and a public IP address

  2. A domain/subdomain pointing DNS to the VPS public IP

To see everything in action, visit our live demo link and experiment.

Building it yourself

1) Install the HAProxy image

First, we'll install the high-performance HAProxy image:

[code: blog20260203-1.sh]

We now have HAProxy 3.3 installed, with the high-performance AWS-LC library and full ACME support for automatic TLS certificates. Now, we just need to apply the configuration to make it work.

2) Configure HAProxy

Edit /etc/haproxy/haproxy.cfg and insert the following lines into the global section. This will set us up to use automatic TLS:

[code: blog20260203-2.cfg]

Now let’s add configuration for automatic TLS using Let’s Encrypt. Edit the last line for your own domain:

[code: blog20260203-3.cfg]

Next, we'll take care of the basic HAProxy configuration items. Don’t forget to change the line starting with ssl-f-use to use the correct subdomain alias from the my_files section:

[code: blog20260203-4.cfg]

3) Restart HAProxy

Restart HAProxy to apply the updated configuration:

[code: blog20260203-5.sh]

Next, edit the HAProxy systemd file to make it automatically write certificates to disk. Run the following command:

[code: blog20260203-6.sh]

You're now ready to insert the following line under [Service]:

[code: blog20260203-7.cfg]

Finally, reload systemd:

[code: blog20260203-8.sh]

4) Install OpenClaw and access it securely

You're now ready to install OpenClaw:

[code: blog20260203-9.sh]

That’s it! You can now run the following command:

[code: blog20260203-10.sh]

This process will give you your personal access token. This is still needed for proper authentication inside OpenClaw itself.

You can now visit https://subdomain.example.com/?token=<gateway token>. When doing this for the first time, you'll have to provide a username and password.

You can also configure your macOS app to talk to this OpenClaw instance. Just insert the username and password directly into the Websocket URL, as shown below:

One more thing

Check your rate limiting occasionally to see who's knocking at your door:

[code: blog20260203-11.sh]

You might be surprised how many bots are already scanning for OpenClaw instances. That 401 response is working hard. Any line item where gpc0 is higher than 5 has been limited.

What if you accidentally lock yourself out? Simply run this command, where <key> is your IP address:

[code: blog20260203-12.sh]

Secure from the start

You now have an OpenClaw instance that's actually secure, not just "hopefully secure." Here's what's protecting you:

  • Defense in depth – You're not relying on OpenClaw's young authentication code. HAProxy handles the security layer with battle-tested HTTP Basic Auth that's been protecting production systems for decades.

  • Stealth rate limiting – Attackers hitting your instance will see authentication failures, not rate limit errors. They won't know they've been blocked, which means they'll waste time and resources before giving up.

  • Automatic TLS – Let's Encrypt handles your certificates with zero manual intervention. No expired certs, no security warnings, no hassle.

If you need more authentication methods or additional security layers, check out the HAProxy Enterprise load balancer. When you’re ready to control your deployment at scale, use HAProxy Fusion for centralized management, observability, and automation.

Stay safe and keep learning!

<![CDATA[Universal Mesh in action: how PayPal solved multi-cloud complexity with HAProxy]]> https://www.haproxy.com/blog/how-paypal-solved-multi-cloud-complexity-with-haproxy Thu, 15 Jan 2026 00:00:00 +0000 https://www.haproxy.com/blog/how-paypal-solved-multi-cloud-complexity-with-haproxy ]]> The hardest part of modern infrastructure isn’t choosing your deployment environments — it’s bridging communication between them. Large enterprises are constantly facing the challenge of keeping everything connected, secure, and fast when their infrastructures are spread across different clouds and on-premises systems.

PayPal faces this challenge every day, managing a global infrastructure that processes $1.6 trillion in annual payments across 436 million active accounts. Their environment is a complex mix of on-premises data centers and three major cloud providers (AWS, GCP, and Azure). With over 3,500 applications in service — some modern, others still relying on HTTP/1.1 — they dealt with overlapping CIDR / IP addresses, where multiple business units used the same private IP address ranges, and inconsistent cloud-native tools that made seamless communication difficult.

To solve this, they didn't just patch their network; they built a Universal Mesh with HAProxy Enterprise load balancer and HAProxy Fusion Control Plane. This unified connectivity fabric, known internally as Project Meridian, supersedes earlier mesh technologies to provide a holistic framework for internal and external application delivery. Meridian serves as a universal translator across conflicting networks, creating a multi-tenant solution that eliminates the need to reinvent access patterns for every cloud provider.

In their recent HAProxyConf presentation, Senior Staff Network Engineers Kalaiyarasan Manoharan and Siddhartha Mukkamala detailed PayPal’s transformation. Here are the seven key steps they took to master multi-cloud networking.

]]> 1. Identify core challenges

The PayPal environment presented a number of challenges that demanded a unified solution: 

  • Connectivity. The core PayPal business and its business units, such as Braintree, Venmo, and Zettle, had applications spread across AWS, Azure, and GCP, with no unified way to communicate between them or share core services. 

  • Overlapping CIDR / IP addresses. Most business units used the same private network ranges and subnets, making direct routing impossible. With no way to distinguish identical internal addresses across business units, services in different clouds had to be connected over the public internet.

  • Exposing services. Without a private path, services often had to communicate over the public internet, which increased latency and expanded the attack surface.

  • Visibility. There was no "single pane of glass" to view end-to-end traffic flows, making troubleshooting a nightmare. 

Any solution had to address these challenges, making inter-service communication faster, easier, and more secure, with improved observability. 

2. Specify the architectural approach

PayPal’s goal was to create a "reusable solution that can abstract the complexity of the cloud providers." They envisioned a connectivity fabric that would provide a simple and unified way for business units to communicate securely, regardless of where any given service or data resource was hosted.

]]> ]]> The project was split into two main components:

  • Inner Meridian: Handles private connectivity between internal business units and internal cloud services.

  • Outer Meridian: Manages connectivity to external partners, SaaS providers, and AI models, such as GCP Gemini.

This split carved the challenges involved in the overall solution into two manageable buckets. 

3. Build a non-overlapping IP fabric

]]> ]]> The most significant hurdle for Project Meridian to overcome was the overlapping CIDR / IP addresses. This overlap drove PayPal to expose many endpoints over the public internet. Project Meridian pulls these endpoints off the public grid. 

How did they do it? Instead of re-IPing thousands of servers (a multi-year nightmare), PayPal's engineers created a neutral zone using the 198.18.0.0/15 IP address range (defined in RFC 5735). This special-use range is designated for testing and is not routable over the public internet. This allowed them to leave the internal IP addresses alone and translate them only at the edge. 

By building their "Meridian Edge Services Fabric" with this non-overlapping range, they created a private "bridge" that allowed all business units to communicate without re-addressing their entire existing infrastructure.

Furthermore, HAProxy Enterprise’s ability to perform Source Network Address Translation (SNAT) allows Meridian to create a virtual network across incompatible existing networks. NATing makes traffic from outside a network appear as if it originated locally, without any changes to an application’s network configuration. 

This clever move created a private, non-overlapping, intermediary network layer with its own unique IP space. This allows PayPal to connect all the disparate cloud environments, without needing to “re-address” existing infrastructure. 
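To make the SNAT idea concrete, here is a minimal, hypothetical HAProxy sketch (addresses and names are illustrative assumptions, not PayPal's actual configuration). The `source` directive rewrites the outgoing client address, so traffic entering the destination network appears to originate from the neutral 198.18.0.0/15 range rather than from a conflicting private subnet:

```haproxy
# Hypothetical sketch: traffic forwarded into a business unit's network is
# source-NATed into the non-overlapping 198.18.0.0/15 "neutral zone", so the
# destination never sees an ambiguous private address.
backend bu2_app
    source 198.18.10.5            # egress address inside the neutral range
    server app1 10.24.0.10:8443 check
```

Because only the edge performs the translation, the applications on either side keep their existing addressing untouched.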

4. Deploy HAProxy Enterprise as the multi-cloud gateway

While PayPal initially explored cloud-native services, they soon realized they needed a more flexible, vendor-agnostic tool. They chose HAProxy Enterprise as the core component because it provided a unified, multi-tenant solution that works the same way in AWS as it does in GCP, Azure, or on-premises.

]]> ]]> They deployed HAProxy Enterprise clusters, known as Meridian Edges, across different clouds and regions for each business unit to ensure high availability. These edges handle the heavy lifting: SSL termination, protocol translation (converting HTTP/1.1 to modern HTTP/2), and Source Network Address Translation (SNAT) to bridge the different IP ranges.
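As a rough illustration of the protocol-translation step (a sketch with illustrative names and paths, not PayPal's configuration), HAProxy can accept HTTP/1.1 on the client side and negotiate HTTP/2 toward the destination via ALPN:

```haproxy
frontend edge_in
    # accept both HTTP/2 and HTTP/1.1 from clients
    bind :443 ssl crt /etc/haproxy/certs/edge.pem alpn h2,http/1.1
    default_backend edge_out

backend edge_out
    # "alpn h2" negotiates HTTP/2 on the server side even when the client
    # connected over HTTP/1.1; HAProxy translates between the two versions
    server remote_edge 198.18.20.10:443 ssl verify required ca-file /etc/haproxy/ca.pem alpn h2
```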

]]> 5. Implement smart routing

With the CIDR problem solved, PayPal needed a way to route traffic to the correct application. Traditional DNS propagation is too slow for dynamic cloud environments. Instead of relying on complex DNS subdomains, they adopted a simple and effective strategy that leverages HAProxy Enterprise’s powerful path-based routing capabilities.

By moving routing logic out of DNS and into the mesh (HAProxy), PayPal decoupled service location from network location. This is a hallmark of Universal Mesh architecture.

For example, a request destined for "App 2" in "Business Unit 2" is sent to a unified endpoint, such as example.paypal.com/bu2/app2. The HAProxy Enterprise-powered Meridian Edge at the source receives the request and terminates the SSL. Using a dynamic map file, HAProxy Enterprise performs a high-performance lookup of the URI path to determine the exact destination Meridian Edge. This allows for granular, intelligent traffic steering without the administrative overhead of managing thousands of individual DNS records. 

The destination HAProxy Enterprise instance rewrites the intended URI path and forwards the request to the internal application, making the entire process seamless for the end services: “the Meridian Edge Service Fabric is an entirely private path.” 
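A simplified sketch of this routing pattern (paths, backend names, and addresses are illustrative assumptions, not PayPal's actual setup) combines a map-file lookup with a path rewrite:

```haproxy
# /etc/haproxy/maps/meridian.map  (URI path prefix -> backend name)
#   /bu2/app2  bu2_edge

frontend meridian_edge
    bind :443 ssl crt /etc/haproxy/certs/
    # prefix lookup of the request path; unmatched requests fall through
    # to a default backend
    use_backend %[path,map_beg(/etc/haproxy/maps/meridian.map,default_edge)]

backend bu2_edge
    # strip the /bu2/app2 routing prefix before handing off to the app
    http-request replace-path ^/bu2/app2(.*) \1
    server edge 198.18.30.4:443 ssl verify required ca-file /etc/haproxy/ca.pem

backend default_edge
    http-request deny
```

The map file can be updated at runtime, which is what makes this faster to change than DNS records.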

6. Centralize observability and control 

To manage this distributed network of HAProxy Enterprise clusters, PayPal uses HAProxy Fusion as its management layer. This provides a "single pane of glass" where engineers can look up a unique correlation ID to see exactly how a request performed at every hop—from the network round-trip time to the application response time.

This provides clear evidence of where a bottleneck actually exists, leading to faster resolution.

7. Measure the results and build forward 

The impact of Project Meridian has been transformative for PayPal:

  • 24% latency reduction: By redirecting traffic away from the public CDN path and onto the private fabric with persistent HTTP/2 connections, they achieved a significant performance improvement.

  • Enhanced security: Moving applications to an entirely private path significantly reduced their external attack surface.

  • Operational efficiency: Service onboarding is now much faster. Once a service is in the Meridian directory, other units can connect to it easily without weeks of manual firewall tickets.

Conclusion

With Meridian, all three major public cloud providers, as well as any in-house assets that PayPal controls, function as a single, unified set of services and resources. A payments API in AWS can communicate with a risk API in GCP and then a compliance API in Azure, with no traffic ever crossing the public internet. Most enterprise companies can only envy such an effective solution. 

As Siddhartha concluded, “Building that private connectivity between the business units is especially hard when there is an IP address overlap. We partnered with HAProxy, which helped us provide consistent connectivity across cloud providers.”

And PayPal isn't finished yet. They are currently working on a self-service automation model and partnering with HAProxy to implement advanced service discovery. This will further accelerate PayPal’s ability to innovate across its global footprint.

PayPal’s Meridian is a powerful real-world use case of Universal Mesh succeeding at enterprise scale. Universal Mesh is a unified connectivity fabric designed to solve the challenges of traditional networking and fractured connectivity models. It is an emergent architectural pattern that provides a holistic framework for application delivery, superseding earlier mesh technologies by addressing a broader scope of problems with a more elegant and scalable design.

]]> Universal Mesh in action: how PayPal solved multi-cloud complexity with HAProxy appeared first on HAProxy Technologies.]]>
<![CDATA[Announcing HAProxy Kubernetes Ingress Controller 3.2]]> https://www.haproxy.com/blog/announcing-haproxy-kubernetes-ingress-controller-3-2 Tue, 13 Jan 2026 08:00:00 +0000 https://www.haproxy.com/blog/announcing-haproxy-kubernetes-ingress-controller-3-2 ]]> We’re excited to announce the simultaneous releases of HAProxy Kubernetes Ingress Controller 3.2 and HAProxy Enterprise Kubernetes Ingress Controller 3.2! All new features described here apply to both products.

These releases introduce user-defined annotations, a new frontend CRD, and other minor improvements, and we’ll cover these in detail below. Visit our documentation to view the full release notes.

If you have questions about how to replace Ingress NGINX or how to migrate from Ingress to Gateway API, you can skip to the FAQs.

Version compatibility with HAProxy 

HAProxy Kubernetes Ingress Controller 3.2 is built with HAProxy 3.2.

New to HAProxy Kubernetes Ingress Controller?

HAProxy Kubernetes Ingress Controller is a free, open-source product providing high-performance Kubernetes-native application routing for the Ingress API. It supports HTTP/S and TCP (via CRD), and is built on HAProxy’s legendary performance, flexibility, and reliability. Additionally, it provides a low-risk migration path to Gateway API via the HAProxy Unified Gateway (beta). 

HAProxy Enterprise Kubernetes Ingress Controller provides secure, high-performance Kubernetes-native application routing for the Ingress API. It combines the flexibility of the open-source HAProxy Kubernetes Ingress Controller with an integrated web application firewall (WAF) and world-class support.

What’s new?

Feature

Benefit

Impact

User-defined annotations

Add new annotations independent of release pipeline and make use of more HAProxy features

Rapid feature adoption and modernization; simple support for Ingress NGINX annotations

Frontend CRD

Flexibly configure HAProxy frontend sections, validate changes to K8s resources

Simplified configuration and added flexibility

These enhancements make the community and enterprise products even more flexible, and enable simpler migration from existing Ingress NGINX deployments (EOL in March 2026) to HAProxy Kubernetes Ingress Controller. For an immediate, step-by-step technical transition plan from Ingress NGINX to HAProxy Kubernetes Ingress Controller, see our Ingress NGINX migration assistant.

Ready to upgrade?

When you're ready to start the upgrade process, view the upgrade instructions for your product:

User-defined annotations

]]> ]]> User-defined annotations are annotations with full validation that users can create for the frontend and backend sections of the HAProxy configuration that the ingress controller generates. These annotations are CRD driven, and allow you to limit their scope to certain resources. They're powerful, unlocking all previously unavailable HAProxy options through custom templates. They also bundle in safety through validation rules you define. 

User-defined annotations are especially useful when migrating to HAProxy Kubernetes Ingress Controller. If any annotation is missing, you can easily recreate it without tethering yourself to our release schedule.

HAProxy offers an extensive number of powerful load balancing options, all detailed within our Configuration Manual. The best and most reliable way to surface them is through secure mechanisms like user-defined annotations, which still give you full access to HAProxy's standard settings. 

How do user-defined annotations compare to CRDs?

Both annotations and CRDs have validation and can represent almost everything HAProxy offers. However, CRDs don't offer the same level of granularity that custom annotations do.

User-defined annotations vs. regular annotations

Security

The most important difference between user-defined annotations and regular annotations involves security. With user-defined annotations, there's a clear separation between internal teams that define them and those that consume them. 

When an administrator defines an annotation through a custom resource, they can define and limit its usage. This can be achieved by limiting annotations on certain HAProxy sections, namespaces, Services, or Ingress, as needed. If a specific service or group needs a little more configuration freedom, administrators can create a team-specific custom annotation.

Developers and teams 

Teams receive a complete list of available annotations from their admin or admin group. If they need additional annotations, team members can send new requests to their admin(s).

Validation

User-defined annotations have validation. You'll use Common Expression Language (CEL) to write these rules, which can be lenient or strict, simple or complex. Stricter rules help minimize the risk of misconfigurations.

Delivery speed

While the number of supported annotations has steadily grown alongside this project, no two deployments are identical. Company A needs different customization than Company B. While this project's goal is to consider every use case and setup, covering all scenarios with limited resources and time isn't possible. 

Luckily, user-defined annotations reduce the need for new annotations to be accepted, developed, and released. You can simply create a new annotation, deploy it, and start using it immediately.

Monitoring

We all read logs, right? When a user configures a user-defined annotation, validation runs and any error messages display in the log, so the user quickly sees when an annotation is rejected and why. User-defined annotations offer an added advantage, too: even if validation fails, the annotation still appears in your configuration as a comment in the frontend or backend, alongside the error messages (also rendered as comments), helping explain what went wrong.

How can I distinguish user-defined annotations from 'regular' ones?

The official HAProxy annotations can have ingress.kubernetes.io, haproxy.org, and haproxy.com prefixes. User-defined annotations can have any prefix you define. For example, a well-known corporation at example.com can use an example.com prefix. Let's now tackle how to define the structure.

How to enable user-defined annotations?

The HAProxy Kubernetes Ingress Controller must be started with the following command line argument: --custom-validation-rules=<namespace>/<crd-name>.

If you’re using Helm to manage HAProxy Kubernetes Ingress Controller, you may be passing a custom values file (with -f <values file>). In this case, ensure the following path is covered:
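For illustration, a values override along these lines would pass the flag. The controller.extraArgs key and the resource name are assumptions here (the name matches the example resource used later in this post), so verify both against your chart version's values schema:

```yaml
# values.yaml override (controller.extraArgs is an assumed chart key;
# check your chart version's documented values)
controller:
  extraArgs:
    - --custom-validation-rules=haproxy-controller/example-validationrules
```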

]]> blog20251218-12.cfg]]> User-defined annotation examples

We'll begin defining our annotations by including a prefix we want to use:

]]> blog20251218-01.yaml]]> This prefix indicates that the annotations HAProxy Kubernetes Ingress Controller will process are user-defined:

]]> blog20251218-02.yaml]]> We use standard Golang templating for the templates parameter, so any complex templating can be used. We write our rules in Common Expression Language (CEL). Once this is applied, a log confirmation message will appear:

ValidationRules haproxy-controller/example-validationrules accepted and set [example.com]

How to use user-defined annotations

Within Service, Ingress, or ConfigMap, we'll simply add our annotation metadata:

]]> blog20251218-03.yaml]]> After applying the annotation(s), we'll see the following in our configuration file:

]]> blog20251218-04.cfg]]> Working with more complex annotations

The user-defined annotations feature also enables you to create more complex annotations. The json type is highly useful in this scenario:

]]> blog20251218-05.cfg]]> Since rules and templates can be sophisticated, HAProxy Kubernetes Ingress Controller now supports multi-line annotations. If your template consists of a multi-line string, HAProxy Kubernetes Ingress Controller will create multiple lines in the configuration using the same annotation:

]]> blog20251218-06.cfg]]> Predefined variables

While using templates, the following variables are also available:

BACKEND, NAMESPACE, INGRESS, SERVICE, POD_NAME, POD_NAMESPACE, and POD_IP

]]> blog20251218-07.cfg]]> Options for defining rules]]> blog20251218-08.cfg]]> User-defined frontend annotations

Since Kubernetes lacks a Frontend object, you can instead define frontend annotations in your HAProxy Kubernetes Ingress Controller ConfigMap. This exists as an annotation of ConfigMap — not as a key-value pair.

]]> blog20251218-09.yaml]]> User-defined backend annotations

Users can also define backend annotations in three ways: via ConfigMap, Service, or Ingress. These annotations come with some caveats: 

  • Annotations made with ConfigMap will be applied to each supported backend. 

  • Annotations made with Service will be applied only on the specified service.

  • Annotations made with Ingress will be applied on services used in Ingress.

]]> What happens when you try to use the same annotation in multiple places? 

  • Service annotations have the highest priority. 

  • If Service annotations don't exist, Ingress annotations will be applied next. 

  • If neither Service nor Ingress annotations exist, ConfigMap annotations will be applied next.

To dive even deeper into user annotations, check out our user annotations documentation.

Frontend Custom Resources

]]> ]]> Similar to backend CRDs, you can now use Custom Resources to further configure the essential frontend sections that should always exist within your HAProxy configuration, such as HTTP, HTTPS, and STATS. (TCP frontend sections, by comparison, are created and managed solely through their own TCP CRDs.)

It's important to note that frontend CRDs should only be available to administrators, since they impact all traffic in the controller.

To start using them, you'll need to specify which resource is connected to a specific frontend. There are three new values you can use for frontend Custom Resources:

  • cr-frontend-http

    • Configures the HTTP frontend in your HAProxy configuration

  • cr-frontend-https

    • Configures the HTTPS frontend in your HAProxy configuration

  • cr-frontend-stats

    • Configures the STATS frontend in your HAProxy configuration

You can configure these specific frontend CRDs within Ingress Controller's ConfigMap:

]]> blog20251218-10.yaml]]> All available options contained within the frontend section of HAProxy can be configured using frontend CRDs. But what happens with any predefined values? All CRD values are merged with values that already exist. For example, CRD values will come first for binds, http-request rules, and for all lists in general. Afterwards, HAProxy Kubernetes Ingress Controller will append its own values on top of everything else.

]]> blog20251218-11.yaml]]> Minor improvements

HAProxy Kubernetes Ingress Controller 3.2 and HAProxy Enterprise Kubernetes Ingress Controller 3.2 add the following enhancements:

  • Backend names are now more readable than before

    • Each backend previously consisted of a namespace title, service name, and port number (or name) in the <namespace>_<service>_<port> format. The new format is <namespace>_svc_<service>_<port>. This enables finer-grained statistical analysis, since it's now easier to separate the namespace title from the service name. 

  • The admin port is now the only way of fetching pprof and Prometheus data. This helps protect sensitive stats data.

  • A new generate-certificates-signer annotation will automatically generate TLS certificates signed by a provided CA secret for incoming connections. This uses the generate-certificates and ca-sign-file HAProxy bind options.

  • We've added a new --disable-ingress-status-update flag. When set, the controller will skip updating the loadBalancer status field in managed Ingress resources.

  • HAProxy Kubernetes Ingress Controller has moved from OpenSSL to AWS-LC for added security, faster SSL/TLS cryptography, and higher throughput with low latency.
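For context on the certificate-generation item in the list above, the two underlying HAProxy bind options are used together like this sketch (certificate paths are illustrative):

```haproxy
frontend https
    # generate-certificates creates a certificate on the fly for each
    # requested SNI, signed by the CA certificate and key supplied via
    # ca-sign-file; the crt certificate remains the fallback default
    bind :443 ssl crt /etc/haproxy/default.pem generate-certificates ca-sign-file /etc/haproxy/ca.pem
```

The new annotation wires these options up for you from a CA stored in a Kubernetes secret.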

Deprecations

There are several planned feature deprecations for the next version of HAProxy Kubernetes Ingress Controller (version 3.4). 

First, we're removing support for CRDs in the ingress.v1.haproxy.org group: the CRDs for backends, defaults, globals, and TCPs. All of these have had ingress.v3.haproxy.org alternatives available since HAProxy Kubernetes Ingress Controller 3.0.

  • Using the binary released on GitHub, you can pass --input-file and --output-file to convert your resources from v1 to v3 with a simple terminal command:
    ./haproxy-ingress-controller --input-file=global-full-v1.yaml --output-file=global-full-v3.yaml

To better unify functionality across multiple products (especially HAProxy Unified Gateway), most of the annotations we currently use will also be deprecated in favor of using Custom Resources. To ensure continuity and provide a simple migration from annotations to CRDs, we'll release a tool that converts the output of annotations into CRDs. We'll make this available to community and enterprise users in 2026.

Contributions

]]> ]]> HAProxy Kubernetes Ingress Controller's development thrives on community feedback and feature input. We’d like to thank the code contributors who helped make this version possible!

Contributor

Area

Hélène Durand

FEATURE, BUG, BUILD, DOC, OPTIM, TEST

Ivan Matmati

BUG, FEATURE, TEST, DOC

Dario Tranchitella

BUG

Dinko Korunić

FEATURE

Philipp Hossner

BUG, FEATURE

SF97

BUG, BUILD

Fabiano Parente

FEATURE

Saba Orkoshneli

CLEANUP

Vladyslav Riabyk

FEATURE

Zlatko Bratkovic

BUG, FEATURE, TEST, BUILD, CLEANUP, DOC, OPTIM, REORG

FAQs and what’s next

Can I replace Ingress NGINX with HAProxy Kubernetes Ingress Controller?

Ingress NGINX is officially reaching end of life in March 2026, after which planned releases, bug fixes, security updates, and feature development will stop. We're here to help teams replace Ingress NGINX and ensure continuity.

HAProxy Kubernetes Ingress Controller is the easiest, most immediate, and most direct production-ready replacement for teams facing a tight migration deadline. While not a 100% drop-in replacement, its robust annotation system (including the new user-defined annotations) and our Ingress NGINX Migration Assistant make it simple to achieve equivalent functionality for a stress-free switchover. HAProxy Kubernetes Ingress Controller also offers superior speed, stability, and advanced features to level up your existing Ingress setup.

To learn more about migration, we encourage you to watch our on-demand webinar and contact us with any questions.

Can I migrate from Ingress to Gateway API?

For teams considering migrating from Ingress to Gateway API, the new Kubernetes-native standard for traffic management, HAProxy makes it simple. 

  1. First, HAProxy Kubernetes Ingress Controller users will be able to migrate easily to the new HAProxy Unified Gateway, maintaining their existing Ingress rules (feature coming in 2026). 

  2. Second, HAProxy Unified Gateway users will be able to gradually migrate from Ingress to Gateway API within the same product for consistent management.

HAProxy Unified Gateway is a free, open-source product providing unified, high-performance, Kubernetes-native application routing for both Gateway API and Ingress. HAProxy Unified Gateway provides flexible protocol support, role-based access control, and a low-risk, gradual migration path for organizations moving from Ingress to Gateway API. Combined with HAProxy’s legendary performance and reliability, these key features support the needs of modern applications and evolving organizations.

Can I manage all my traffic with HAProxy – in Kubernetes and other environments too?

HAProxy One — the world's fastest application delivery and security platform — provides universal traffic management with a data plane and control plane that are completely infra-agnostic. For Kubernetes users it currently enables intelligent external load balancing, multi-cluster routing, direct-to-pod load balancing, and the groundbreaking universal mesh. In 2026, we're adding built-in support for both Gateway API and Ingress via HAProxy Fusion Control Plane. These enhancements will enable HAProxy One to provide comprehensive Kubernetes routing and load balancing as part of its universal traffic management. 

Development for HAProxy Enterprise Kubernetes Ingress Controller will continue, with version 3.4 planned for 2026. Existing users can keep using HAProxy Enterprise Kubernetes Ingress Controller, or upgrade to HAProxy One for universal traffic management, intelligent multi-layered security, and a centralized control plane that works across all environments. 

To learn more, check out our Kubernetes solution.

Conclusion 

HAProxy Kubernetes Ingress Controller 3.2 and HAProxy Enterprise Kubernetes Ingress Controller 3.2 are even more powerful, while simplifying migration from alternatives such as Ingress NGINX. User-defined annotations and frontend CRDs enable faster feature adoption and modernization, and more flexible configuration. We hope you enjoy using these new features!

To learn more about HAProxy Kubernetes Ingress Controller, follow our blog and browse our documentation. To take HAProxy Enterprise Kubernetes Ingress Controller for a test drive, contact us.

If you want to explore additional Kubernetes capabilities in HAProxy — such as external load balancing and multi-cluster routing — check out our on-demand webinar.

]]> Announcing HAProxy Kubernetes Ingress Controller 3.2 appeared first on HAProxy Technologies.]]>