HAProxy Technologies 2026. All rights reserved. https://www.haproxy.com/feed

Announcing HAProxy Unified Gateway 1.0
https://www.haproxy.com/blog/announcing-haproxy-unified-gateway-1-0 — Tue, 24 Mar 2026

Today at KubeCon Amsterdam, we are announcing the 1.0 release of HAProxy Unified Gateway, incorporating valuable community feedback from our beta users. HAProxy Unified Gateway delivers unified, high-performance, cloud-native application routing backed by an open-source community with 25+ years of experience.

New to HAProxy Unified Gateway?

HAProxy Unified Gateway is a free, open-source project maintained by HAProxy Technologies that enables Kubernetes-native traffic management covering Gateway API specifications 1.3, 1.4, and 1.5.

Organizations are moving more workloads to Kubernetes, which requires routing methods that handle diverse applications and organizational complexity. The Gateway API standard provides a solution through role-based access control, enabling a clear separation of concerns among infrastructure providers, cluster operators, and application developers.

HAProxy Unified Gateway provides flexible protocol support, role-based access control, and a future path for unified Ingress support coming later in 2026. It brings exceptional performance and efficiency to Kubernetes traffic management because it is built on HAProxy, the world’s fastest software load balancer. By adopting HAProxy Unified Gateway, teams have the freedom to scale exceptionally well with class-leading performance and efficiency on any infrastructure.

What's new in HAProxy Unified Gateway 1.0

The 1.0 release focuses on operational simplicity, expanded routing capabilities, and deep integration with HAProxy configuration features.

| Feature | Benefit | Impact |
| --- | --- | --- |
| HAProxy 3.2 LTS core | Provides a highly efficient, low-resource data plane for high-volume traffic. | Delivers improved performance, security, and reliability to Kubernetes. |
| Comprehensive HAProxy configuration | Manage Backend, Defaults, and Global settings natively via CRDs. | Define and maintain advanced traffic policies alongside your application code. |
| Expanded route and listener support | Route HTTP, HTTPS, and TLS traffic. | Seamlessly support multiple applications. |
| Enhanced dynamic scaling | Dynamically updates servers based on Kubernetes API endpoints. | Ensures smooth traffic flow by reducing reloads and eliminating dropped connections during scaling. |
| Metrics and session persistence | Adds built-in performance monitoring and sticky session routing. | Increases visibility and reliability for stateful, mission-critical services. |
| Helm chart installation | Automates the deployment and upgrading of the gateway and CRDs. | Accelerates integration into existing continuous integration and delivery pipelines. |

Scaling Kubernetes routing (and more) with the HAProxy One platform

As your infrastructure grows, Kubernetes traffic management becomes just one piece of a much larger puzzle. To truly simplify, scale, and secure modern applications in any environment, organizations require a unified approach.

This is where HAProxy One comes in.

While HAProxy Unified Gateway provides developers and cluster operators with the flexibility of the Gateway API, enterprise teams managing mission-critical workloads often require advanced security, global observability, and centralized management. To meet these needs, organizations can seamlessly extend their architecture with our HAProxy One enterprise platform, which includes the following:

  • HAProxy Enterprise: Upgrading to our enterprise data plane adds multi-layered security — including an integrated web application firewall (WAF), bot management, and advanced threat intelligence — without sacrificing our signature ultra-low latency.

  • HAProxy Fusion Control Plane: HAProxy Fusion provides centralized management, observability, and automation across your entire deployment. Looking ahead, Gateway API support is coming to HAProxy Fusion, which will empower platform engineering teams to manage both traditional infrastructure and Kubernetes-native routing from a single, authoritative control plane.

Core technology built on HAProxy 3.2 LTS

HAProxy Unified Gateway 1.0 is built on the stable and powerful HAProxy 3.2 LTS core. This foundation ensures that your Kubernetes traffic management benefits from the extensive performance enhancements, security updates, and reliability improvements introduced in the 3.2 long-term support release. It provides a highly efficient data plane that minimizes resource consumption while reliably routing high volumes of traffic.

Comprehensive HAProxy configuration support

To give you more control over your traffic routing, this release introduces three new Custom Resource Definitions (CRDs) — Global, Defaults, and Backend — that let you manage HAProxy configuration natively within Kubernetes. The Global CRD lets you override HAProxy's global section, adjusting connection limits, TLS cipher suites, DH parameters, and process-level tuning. The Defaults CRD lets you customize the timeout and connection-handling values that every frontend and backend in the cluster inherits. The Backend CRD lets you configure load balancing behavior, health checks, and server options for individual services.

Frontends are generated automatically from your Gateway and Route resources — that's the Gateway API model — so there is no Frontend CRD. Instead, your frontend configuration is expressed declaratively through the Gateway spec. 

In the beta release, operators who needed to override Global or Defaults settings had to inject auxiliary configuration files via ConfigMap volume mounts, a workaround that sat outside Kubernetes' resource model and required modifying the controller Deployment to wire up. With these CRDs, the same overrides are expressed as standard Kubernetes objects, applied with kubectl apply, and picked up by the controller automatically.
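To make the model concrete, here is a minimal sketch of what a Backend override might look like as a Kubernetes object. The API group, version, and field names below are illustrative assumptions, not taken from this release announcement; consult the HAProxy Unified Gateway CRD reference for the actual schema.

```yaml
# Hypothetical Backend CRD — apiVersion, kind, and spec fields are
# illustrative assumptions about the schema, shown only to convey the idea.
apiVersion: haproxy.example/v1
kind: Backend
metadata:
  name: checkout-backend
  namespace: shop
spec:
  # Load balancing behavior for the Service this Backend targets
  balance:
    algorithm: leastconn
  # Active health checks against each discovered pod
  healthCheck:
    interval: 2s
    rise: 2
    fall: 3
  # Per-server options applied to every endpoint
  serverOptions:
    maxconn: 500
```

Because this is a standard Kubernetes object, it can be applied with `kubectl apply`, stored in version control, and reconciled by GitOps tooling like any other manifest.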

Expanded route and listener support

HAProxy Unified Gateway 1.0 expands its routing capabilities to cover the full range of production traffic patterns. It supports HTTPRoute for HTTP and HTTPS traffic with TLS termination, and TLSRoute for TLS passthrough — where HAProxy reads the SNI hostname from the TLS ClientHello to route the connection without decrypting it. 

Because both listener types attach to the same Gateway definition, platform teams can establish port exposure and TLS policy once, and application teams can add or change routes independently — without modifying the underlying infrastructure.
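The split described above can be sketched with standard Gateway API resources. Hostnames, certificate names, and Service names below are placeholders, and note that TLSRoute is defined in the Gateway API experimental channel.

```yaml
# Platform team: one Gateway with an HTTPS listener (TLS terminated)
# and a TLS listener (SNI passthrough). All names are examples.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  gatewayClassName: haproxy
  listeners:
    - name: https
      port: 443
      protocol: HTTPS
      hostname: "app.example.com"
      tls:
        mode: Terminate
        certificateRefs:
          - name: app-example-com-cert
    - name: tls-passthrough
      port: 8443
      protocol: TLS
      hostname: "secure.example.com"
      tls:
        mode: Passthrough
---
# Application team: routes attach to the Gateway independently,
# without touching the listener or TLS policy above.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: shared-gateway
  hostnames: ["app.example.com"]
  rules:
    - backendRefs:
        - name: app-svc
          port: 8080
```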

Enhanced dynamic scaling

Managing dynamic environments is simpler with improved scaling mechanisms in version 1.0. In the beta release, every pod scale event — whether triggered by a Horizontal Pod Autoscaler or a manual kubectl scale — could require a full HAProxy reload to update the backend server pool. HAProxy Unified Gateway 1.0 eliminates this by registering a watch against the Kubernetes Endpoints API for every Service referenced by a route. When pod counts change, the controller detects the updated IP addresses and reconciles server entries in the running HAProxy process directly via the Runtime API, without triggering a reload. 

The result is that active connections are never dropped during scale-out, and scale-in events drain gracefully — connections to departing pods are allowed to complete before those servers are removed from the pool.
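The controller's reload-free reconciliation can be approximated by hand with HAProxy's Runtime API. The commands below are standard Runtime API commands, but the socket path and backend/server names are examples, and this transcript is a sketch of the mechanism rather than what the controller literally executes.

```
# Illustrative only — socket path and backend/server names are examples.
# Repoint an existing server slot at a new pod IP without a reload:
echo "set server be_app/srv1 addr 10.244.1.17 port 8080" | \
  socat stdio /var/run/haproxy.sock

# Drain a server before scale-in so in-flight connections complete:
echo "set server be_app/srv2 state drain" | \
  socat stdio /var/run/haproxy.sock

# Inspect the resulting server state:
echo "show servers state be_app" | socat stdio /var/run/haproxy.sock
```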

New operational features for metrics and session persistence

Observability and traffic consistency are critical for production workloads. This release adds built-in metrics (with a Prometheus-compatible endpoint), allowing operators to monitor gateway performance and application health directly. It also introduces session persistence, ensuring that users maintain a continuous connection to the same backend pod when required by stateful applications. These additions provide the visibility and reliability needed to confidently run mission-critical services in Kubernetes.
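If the endpoint follows the usual Prometheus conventions, scraping it might look like the fragment below. The job name, Service address, namespace, and port are assumptions for illustration; check the gateway's documentation for the actual metrics endpoint.

```yaml
# Hypothetical Prometheus scrape config — target address and port are
# placeholders, not documented values from this release.
scrape_configs:
  - job_name: haproxy-unified-gateway
    metrics_path: /metrics
    static_configs:
      - targets: ["haproxy-unified-gateway.haproxy.svc:9101"]
```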

Simplified installation and lifecycle management

To help you deploy and manage the gateway with ease, we have introduced support for Helm charts. This streamlines the installation process and enables the automatic deployment and upgrading of Custom Resource Definitions. By standardizing on Helm, infrastructure teams can integrate HAProxy Unified Gateway into their existing continuous integration and delivery pipelines quickly and reliably.

Configuration example

In this example, a platform team is integrating HAProxy Unified Gateway into their GitOps-based delivery pipeline, which uses Flux and ArgoCD with Helm chart manifests stored in version control. Rather than managing a sequence of raw YAML files and re-applying them at upgrade time, they adopt the official HAProxy Unified Gateway Helm chart, which handles CRD installation, namespace creation, RBAC, and controller deployment in a single, repeatable operation. A values.yaml file checked into their repo tracks the configuration.

Step 1 — Add the HAProxy Helm chart repository

blog20260324-01.sh

Step 2 — Install Gateway API CRDs (prerequisite)

blog20260324-02.sh

Step 3 — Install HAProxy Unified Gateway

blog20260324-04.sh

Step 4 — Verify the installation

blog20260324-05.sh

Step 5 — Upgrade to a new Helm chart version

The Helm chart version (specified with --version) and the HAProxy Unified Gateway application version are independent. The chart's appVersion field tracks the HAProxy Unified Gateway release and is updated automatically by HAProxy Technologies whenever a new version of HAProxy Unified Gateway is published to Docker Hub.

To upgrade the chart, run helm upgrade with the new chart version:

blog20260324-06.sh

If your values.yaml sets image.tag explicitly (as in Step 3), that value takes precedence over appVersion and will pin the HAProxy Unified Gateway image regardless of chart updates. To upgrade to a new HAProxy Unified Gateway release, update image.tag in values.yaml to the desired version before running helm upgrade. To track appVersion automatically instead, remove the image.tag override from your values.yaml.

The Helm chart automatically applies any updated CRD schemas before rolling out the new controller Deployment, ensuring that CRDs stay in sync with the controller version throughout the upgrade lifecycle.

Summary

The helm install command replaces the multi-step kubectl sequence from the on-premises installation guide. With crds.install: true, the chart installs the Global, Defaults, and Backend CRDs as part of the release and tracks them in the Helm release metadata. The gatewayClass.create: true flag creates the haproxy GatewayClass automatically, so the cluster is ready to accept Gateway resources immediately after installation. The values.yaml serves as a declarative record of the deployment's configuration, which can be committed to version control and reconciled by GitOps tooling on every change.
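Pulling these settings together, the values.yaml checked into the GitOps repo might look like the sketch below. Only crds.install, gatewayClass.create, and image.tag are named in this post; the surrounding key structure is an assumption about the chart's layout.

```yaml
# values.yaml — keys beyond crds.install, gatewayClass.create, and
# image.tag are illustrative assumptions about the chart's structure.
crds:
  install: true          # chart manages CRD install and upgrade
gatewayClass:
  create: true           # creates the "haproxy" GatewayClass
image:
  tag: "1.0.0"           # pins the gateway version; remove to track appVersion
```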

Ready to upgrade or try?

HAProxy Unified Gateway remains a free, open-source product dedicated to the Kubernetes community and the adoption of the Gateway API standard.

You can find the Docker image for HAProxy Unified Gateway on Docker Hub and the Helm Charts on GitHub. We encourage you to participate and contribute to the community project on GitHub.

Announcing HAProxy Unified Gateway 1.0 appeared first on HAProxy Technologies.
Back to fundamentals: 7 insights from Kelsey Hightower at HAProxyConf
https://www.haproxy.com/blog/back-to-fundamentals-7-insights-from-kelsey-hightower-at-haproxyconf — Fri, 20 Mar 2026

Building trust through open source, why model context protocol (MCP) is a gift to the proxy community, and more.

Early in his career, Kelsey Hightower made a bet. The load balancer his team was running was consuming too much memory, and he was convinced he knew the fix. He told his manager: “If it doesn’t work, fire me. But I think I can make it work.” The fix was HAProxy. It was a story he shared publicly for the first time at HAProxyConf 2025, where he delivered a keynote address, “The Fundamentals.”

Hightower has been one of the most thoughtful voices in technology for decades, with significant contributions to open source software, particularly Kubernetes. We were delighted that he accepted our invitation to deliver the keynote address at HAProxyConf 2025, and that he also joined a lively panel discussion later the same day. We’ve drawn on both sessions to share his key insights here.

1. The primacy of fundamentals

Hightower argued that the basis of a valuable career in technology lies in mastering fundamental principles. He warned that some professionals “have no idea why they’re doing what they’re doing. They’re just assigned the Jira ticket, and off they go like little robots.” This can lead to stagnation, so a 20-year career might look more like “20 years of one-year experience.”

Hightower challenged the very notion of “legacy software.” “Let me guess what your company is doing,” he said. “You take data in, do something in the middle, and write it to a database... Most of these fundamentals revolve around that.” Whether code is COBOL, Fortran, or the latest “modern” language, the underlying pattern of data processing remains the same.

“The people who understand the fundamentals tend to be the most creative because they can see the low-level details so they can rearrange things to match whatever they need,” Hightower explained. It’s a capability that’s rarer than it sounds.

Becoming such a creative individual is a strategic blueprint for career resilience. "Those who understand these fundamentals do really unique things to make things work,” he points out. “They build really cool data pipelines. They can manipulate any protocol and translate it to another, kind of like this whole HAProxy thing.”

2. “Understanding” as a first-class product

Hightower sees “understanding” as a first-class product to be created and distributed. He criticizes those who use technical jargon to signal their own expertise: “Sure, you look smart,” he asserts, “but that doesn’t make the other person feel smart.”

Early in his career, Hightower was a junior engineer on a team that was “using a particular popular load balancer at the time, and it was using too much memory per request… I’m in the corner on my laptop, figuring out how to swap out the popular proxy at the time for this little small one, this little ‘HAProxy’ thing.…”

“One day, I bet my career and said, ‘Hey, listen, if it doesn’t work, fire me. But I think I can make it work…’” continued Hightower. He subbed in HAProxy for the load balancer that was burning too much memory. And HAProxy did the job, keeping memory usage consistently low for days on end.

“I think,” he continued, “that’s when I earned my technologist stripes. It wasn’t the fact that I was able to explore new technology. It was the fact that I was able to curate it for the specific use case at hand. And I understood what it meant to put my reputation on the line.”

Later, at Google, he spent six months writing “Kubernetes the Hard Way,” a comprehensive tutorial that forces the user to manually perform every step of setting up a cluster, rather than writing an automation tool. His reasoning: the more people who understood every nuance of Kubernetes, the more creative contributors the project would attract.

3. The “NoCode” manifesto

Presented with any new trend, Hightower’s first question is simple: “Does it provide any value to me?” This ensures that real-world utility wins out over hype.

If a company “makes a billion dollars with three servers”, then “Kubernetes offers zero value to you... SCP (Secure Copy Protocol) is all you need.” For AI, he cautioned against “jumping to a complex and expensive large language model (LLM) to analyze structured data that could simply be put in a database.” Look for an efficient library to, for instance, convert JSON to XML; using an LLM is simply “wasting your money.”

Hightower has built a satirical GitHub repository called NoCode to make the point. The theme: “The best way to write secure and reliable applications is to write nothing, deploy anywhere.” The project’s contribution guide clearly states, “All changes are welcome as long as no code is involved.” NoCode is so popular that engineering directors have even asked him to take it down because of the “distractions it’s been causing their team.”

Practitioners should leverage existing, battle-hardened solutions rather than reinventing the wheel. HAProxy is a prime example: “If you see someone implementing, like, proxy features, you say, hey, there’s this thing called HAProxy. You can put it there, and it does all of these things that you have on the roadmap.”

4. Don’t become a “junior human”

Hightower asserts that technical excellence is inseparable from personal growth, intellectual curiosity, and empathetic human interaction. When he achieved the title of Distinguished Engineer at Google, others asked how to follow in his footsteps. In response, he warns: “You don’t want to spend your whole career chasing becoming a Senior or Distinguished Engineer and remaining a junior human being.”

To become a “senior human,” begin with a simple question: “Why?” But intellectual curiosity also requires emotional courage. He points to the vulnerability inherent in a code review: “You know how much courage it takes to submit a PR? Because it’s going to be judged by your peers.” Even a seemingly technical process is steeped in emotion and social risk.

He continued with a challenge to management: “Think about the ways that you kill curiosity in your company, in your team.” This leads to employees who learn to stop trying and just do the “bare minimum not to get fired” — the kind of career stultification that Hightower had warned against.

5. Open source as a relay race

In the open source world, Hightower recommends collective responsibility, succession planning, and direct financial support for projects. Initially, “I thought these projects were about a marathon,” he confesses, “like I will be running this race forever, and I only needed to learn to pace myself.” This approach leads to burnout and project abandonment.

“Now I believe that that’s false. It’s more of a relay race,” he continued. “You need to be thinking about who you will hand that baton to.” He had begun the confd project to solve a specific problem in Docker, but “it wasn’t very extensible.” When HashiCorp built a competing project, Consul Template, Hightower felt not upset, but validated. He had witnessed his idea outgrow him.

Hightower then made an unambiguous case for funding. “If we want these projects to exist, you have to be willing to pay for them,” he states plainly. In the “relay race” model, all participants, including users, have a role to play.

6. The new shape of automation

Hightower said that the move from imperative, script-based automation to declarative, intent-based systems changes the very nature of an engineer’s work and the requirements for security and observability. In the imperative model, the engineer tells the system how to do something. In the declarative model, the engineer tells the system what they want, and the system figures out how to achieve it.

Because automated systems can scale problems just as fast as they scale solutions, Hightower argues that the real key to modern infrastructure isn't perfect automation, but correlation.

He describes the introduction of the “trace ID” in microservices as a “revolution” because it acts as the digital paper trail for automated actions. By passing a trace ID through the entire stack, engineers can link a downstream effect (such as a slow query) back to the specific upstream intent that triggered it — transforming hours of manual troubleshooting into a simple search for the root cause.

7. MCP as a “gift to the proxy community”

Hightower sardonically frames model context protocol (MCP) as “a gift to the proxy community.” He argues that AI, rather than making established technologies obsolete, is creating an opportunity for mature, fundamental tools such as proxies to reassert their value.

He notes that AI is “not a cheap thing to run,” with costly GPU cycles and high electricity consumption. MCP — the emerging standard for letting LLMs call external tools — is not a “new magical construct,” but a simple API that uses existing technologies such as HTTP and JSON-RPC.

“Let’s not pretend we’re talking about a whole new paradigm,” he states clearly. “People are starting to make the same mistakes because they forgot the fundamentals.” The MCP specification has “nothing about permissions. There’s nothing about headers. There’s nothing about exchange and scope tokens we’ve been doing for 20 years.”

The immaturity and lack of security of the nascent AI/MCP ecosystem is a “gift to the proxy community… If you try to use it as is, you’re going to be in the news — for the wrong reasons.” A mature proxy can be placed in front of an insecure MCP endpoint to provide the authentication, authorization, rate limiting, and input validation that the protocol itself lacks.

The enduring power of “why?”

Many of the insights Hightower shared flow from a single, foundational practice: the relentless asking of “Why?”

Why does this system work the way it does? Why does this community matter? Why are we repeating old mistakes with new technology?

The result is a commitment to curiosity, to critical thinking, and to the first principles that govern not only software, but also people’s professional lives. Stepping back from the “how” of the daily Jira ticket and connecting with the “why” that drives one’s work is not a distraction; it’s actually the most valuable work we can do.

Hightower’s themes — mastering fundamentals, building trust through open source, using the right tool for the job — are ideas HAProxy has been building on for over two decades. The emerging AI and MCP landscape is the latest test of those principles. If you’re thinking about how to secure and govern AI traffic in your infrastructure, explore how HAProxy One addresses the AI gateway challenge.

Back to fundamentals: 7 insights from Kelsey Hightower at HAProxyConf appeared first on HAProxy Technologies.
Announcing HAProxy Fusion 2.0
https://www.haproxy.com/blog/announcing-haproxy-fusion-2-0 — Mon, 16 Mar 2026

Today, we announce the release of HAProxy Fusion 2.0. This release marks a generational leap for the authoritative control plane that orchestrates HAProxy Enterprise’s high-performance application delivery and security. With a combination of new headliner features, structural changes, and improvements to the performance of the underlying API, HAProxy Fusion has jumped from version 1.3 to version 2.0.

HAProxy Fusion 2.0 enables modern security management, cloud-native deployment and service discovery, and numerous enhancements to automation, access management, and scalability that will propel HAProxy One — and the innovative applications that depend on it — into a new era.

New to HAProxy Fusion?

HAProxy Fusion provides full-lifecycle management, monitoring, and automation of multi-cluster, multi-cloud, and multi-team HAProxy Enterprise deployments. HAProxy Fusion combines a high-performance control plane with a modern GUI and API, enterprise administration, a comprehensive observability suite, and infrastructure integrations including AWS, Kubernetes, Consul, and Prometheus. 

Together, this flexible data plane, scalable control plane, and secure edge network form HAProxy One: the world’s fastest application delivery and security platform that is the G2 category leader in load balancing, API management, container networking, DDoS protection, and web application firewall (WAF). 

To learn more, contact our team for a demonstration.

What’s new in HAProxy Fusion 2.0

This release introduces significant enhancements to security, automation, and scale, and support for HAProxy Enterprise load balancer versions 3.1 and 3.2.

Upgrade to HAProxy Fusion 2.0

When you're ready to start the upgrade process, please carefully read our HAProxy Fusion upgrade documentation (customer login required).

Modern security management

HAProxy Fusion 2.0 introduces a unified “security control plane” to orchestrate the multi-layered security capabilities in HAProxy Enterprise. This architecture combines the next-gen performance of HAProxy Enterprise’s security layers — powered by threat intelligence enhanced by machine learning — with a next-gen security UX. 

This powerful combination makes it simple to implement common security patterns (such as Web App and API Protection), or add edge security to complex traffic management solutions (such as Universal Mesh and Load Balancing as a Service (LBaaS)), while providing easy access to flexible building blocks and deep customization for those who need it.

Centralized security policy

HAProxy Fusion includes centralized security policy to orchestrate the multi-layered security capabilities of HAProxy Enterprise, in any environment or form factor, including: 

  • HAProxy Enterprise Bot Management Module, powered by the new Threat Detection Engine, which uses reputational and behavioral signals to accurately identify humans, verified bots (such as search engine and AI crawlers), and malicious bots; and detect and label complex and high-impact threats, including application layer DDoS attacks, brute force attacks, web scrapers, and vulnerability scanners.

  • HAProxy Enterprise WAF, powered by the Intelligent WAF Engine, which detects and mitigates application attacks, such as SQL injection, XSS, CSRF, and more.

  • HAProxy Enterprise’s security building blocks, such as the Global Profiling Engine (GPE), ACLs, CAPTCHA Module, allow-lists and deny-lists, and more.

Security Profiles

HAProxy Fusion makes centralized security policy fast and easy to deploy with “Security Profiles”. Security Profiles provide preset security policies that administrators can apply in just a few clicks to simplify configuration and secure traffic into new applications. 

HAProxy Fusion provides a default Security Profile to help administrators to get started quickly. The default Security Profile includes intelligent presets suitable for common application types. Administrators can easily create customized Security Profiles, tailored to particular use cases, that can be reused or further customized as new use cases emerge.

Security Profiles in HAProxy Fusion 2.0

Threat-Response Matrix

HAProxy Fusion’s Security Profiles make it simple to create and customize full-spectrum security policies with an intuitive visual policy builder called the “Threat-Response Matrix”. Part of HAProxy Fusion’s modern web GUI, the Threat-Response Matrix enables administrators to orchestrate the multi-layered security capabilities in HAProxy Enterprise without requiring detailed knowledge of HAProxy’s configuration language or the underlying modules.

Using the Threat-Response Matrix, administrators can: 

  • combine Monitored Signals and Decisions, using a response framework based on simple thresholds and standard logical operators; 

  • view and apply a recommended Decision for each Monitored Signal (recommendations provided by HAProxy Fusion);

  • see a clear visual representation of how the Monitored Signals and Decisions are connected; 

  • see how a new Security Profile will affect real-time traffic in Learning Mode; 

  • seamlessly toggle between Learning Mode and Enforcement Mode when a Security Profile is ready for production traffic.

Threat-Response Matrix in HAProxy Fusion 2.0

Enhanced service discovery

HAProxy Fusion 2.0 introduces deep support for Consul Enterprise, including partitions, namespaces, and the key-value store. This enhanced service discovery natively understands complex Consul and Consul Enterprise architectures.

This release adds variable and map transformers, allowing users to extract specific Consul and Kubernetes metadata and map them directly to HAProxy configuration directives. This includes Consul tags and meta key-value pairs, and Kubernetes annotations, version tags, and canary labels.

Conditional automation also allows for logic-based configuration generation. These enhancements enable true multi-tenancy, allowing HAProxy Enterprise deployments to securely manage traffic for disparate teams across complex architectures.

Kubernetes and Consul service discovery in HAProxy Fusion 2.0

Native Kubernetes deployment

HAProxy Fusion 2.0 introduces the HAProxy Fusion Operator, which allows the control plane to be deployed natively inside Kubernetes clusters, as part of our broader Kubernetes solution. The HAProxy Fusion Operator deploys directly into your cluster via a manifest applied using kubectl.

The operator automates image configuration and orchestrates essential services. This fully provisions the control plane and its databases in under five minutes.

Full-lifecycle automation

HAProxy Fusion 2.0 introduces an official Terraform Provider and enhanced Ansible Playbook support specifically for managing HAProxy Fusion resources.

Administrators can now declare the desired state of their HAProxy Enterprise clusters, groups, and configurations. This enables granular configuration as code, effectively managing individual configuration objects like frontends and backends.

Zero-touch user provisioning

HAProxy Fusion 2.0 enables automatic role mapping. Administrators can configure HAProxy Fusion to read group claims from the OpenID Connect (OIDC) token and automatically assign the corresponding internal RBAC role.

This dynamically translates Identity Provider groups to HAProxy Fusion roles, automating onboarding and offboarding. This integration ensures users immediately have the correct permissions upon login.

Mapping HAProxy Fusion roles to OIDC roles in HAProxy Fusion 2.0

High-performance API and enhanced GUI

The new HAProxy Fusion API v2 is re-engineered for higher performance at scale. It is designed to handle hyperscale bursts without increasing latency. The API supports order-of-magnitude larger configurations and a significantly higher number of frontends and backends.

Additionally, the user interface has been reorganized to create a more intuitive workflow. Configuration fields are now logically grouped by section into tabs. Frontend and backend templates include tabs for general properties, performance and stability, traffic management, and security and advanced settings.

Extended product lifecycle

Starting with HAProxy Fusion 2.0, every release is now a Long-Term Support (LTS) version. This provides a standardized lifecycle of two years of active support followed by six months of migration support, during which customers will be guided by our support team to upgrade their infrastructure to the latest version before the end of the support period.

This extended commitment offers the stability and predictability enterprise teams need to plan infrastructure updates on their own terms and maximize the return on investment for each deployment.

Try HAProxy Fusion 2.0

If you haven’t tried the power of HAProxy Fusion, this is the perfect time to schedule a demo with our team. We’ll talk you through the basics of how to manage, observe, and automate your HAProxy Enterprise deployment, and show you how HAProxy Fusion 2.0 takes things to the next level, with modern security management, cloud-native deployment and service discovery, full-lifecycle automation, and zero-touch user provisioning. SecOps and DevOps teams — this one’s for you!

There has never been a better time to start using HAProxy Fusion. Request a demo or visit our documentation to begin your upgrade.

The post Announcing HAProxy Fusion 2.0 appeared first on HAProxy Technologies.
Streamlining your NIS2 and DORA compliance solution with HAProxy (Fri, 13 Mar 2026)
https://www.haproxy.com/blog/dora-and-nis2-compliance-solution

The EU's cybersecurity mandate is in effect. Here’s how HAProxy supports your regulatory requirements.

With NIS2 and DORA now in effect, EU organizations face a fundamental shift in how they approach security. Compliance is now a standard that must be built into every layer of your environment, from your hardware and OS to your software configuration.

HAProxy alone doesn't make an organization compliant, yet it serves as a critical technical component of a strong security strategy. By providing multi-layered security at the application layer, HAProxy Enterprise helps teams meet the technical expectations of these mandates as part of their broader security infrastructure.

NIS2 vs DORA: what do they require?

The European Union introduced NIS2 and DORA with a clear mission: to protect the essential services that EU citizens rely on every day. From power grids and telecommunications to local governments and banking systems, these critical sectors are increasingly targeted by ransomware, DDoS attacks, and other sophisticated threats.

Both frameworks mandate that organizations raise their cybersecurity and operational resilience standards to ensure they remain secure, reliable, and operationally stable.

NIS2: strengthening cybersecurity across essential services

NIS2 (Network and Information Security Directive 2) builds upon the original NIS Directive, introducing stricter requirements and encompassing a broader range of sectors. 

It mandates that organizations implement appropriate and proportionate technical, operational, and organizational measures to manage cybersecurity risks.

In practice, this means:

  • Conducting regular risk assessments to find and fix vulnerabilities.

  • Establishing transparent processes for incident detection and reporting within strict deadlines.

  • Ensuring business continuity and recovery capabilities.

  • Managing supply chain security and implementing safeguards such as encryption and access control.

  • Making executive management accountable for security decisions.

NIS2 applies to critical entities in industries such as energy, transport, healthcare, digital infrastructure, public administration, food production, and manufacturing.

DORA: what financial institutions must prove

While NIS2 casts a wide net, the Digital Operational Resilience Act (DORA) focuses specifically on financial institutions and their technology providers.

DORA requires a comprehensive framework for operational resilience, ensuring financial entities can withstand, respond to, and recover from all information and communication technology (ICT) related incidents. To comply, organizations must:

  • Establish a formal framework for managing information and communication technology risks.

  • Implement continuous monitoring and threat intelligence.

  • Conduct advanced digital operational resilience testing.

  • Maintain robust incident classification and reporting.

  • Oversee and audit critical third-party providers.

This regulation affects banks, insurers, investment firms, payment processors, crypto-asset providers, and the cloud or information and communication technology vendors that serve them.

What are the penalties under NIS2 and DORA?

Non-compliance penalties under NIS2 and DORA are intentionally severe, establishing one of the most stringent accountability frameworks in the world. Non-compliance with NIS2 can result in fines reaching up to €10 million or 2% of global turnover. Similarly, DORA imposes fines of up to 2% of global annual turnover, with the potential for daily penalties for continued non-compliance.

Summary of NIS2 and DORA requirements

Who it applies to

  • NIS2: Essential and important entities in sectors such as energy, healthcare, transport, public administration, digital infrastructure, and manufacturing.

  • DORA: Financial institutions and ICT third-party providers, including banks, insurers, investment firms, and crypto-asset service providers.

Management liability and sanctions

  • NIS2: Executives (CEOs, board members) can be personally liable for gross negligence; possible temporary bans from management roles; public disclosure of violations; mandatory audits and compliance orders. [1]

  • DORA: Management must maintain operational resilience and enforce oversight of ICT vendors; authorities can publish penalty details, including the nature of the breach and responsible entities.

Enforcement focus

  • NIS2: Builds a culture of accountability from leadership downward, making cybersecurity a core business priority.

  • DORA: Promotes transparency and public trust by ensuring financial stability and visibility into regulatory actions.

Financial penalties

  • NIS2: Essential entities face fines of up to €10 million or 2% of global turnover (whichever is higher); important entities up to €7 million or 1.4% of global turnover. [1]

  • DORA: Up to 2% of global annual turnover, with daily penalties possible for ongoing non-compliance. [2]

From legalese to technical reality: the implicit mandate for application security

The NIS2 Directive and DORA define high-level outcomes rather than specific toolsets: protect critical services, manage risk, detect and report incidents, and maintain evidence of your resilience. Because these goals inherently rely on the availability and integrity of your applications, strong Layer 7 controls become a vital component of your defense strategy.

In practice, a web application firewall is one of the most effective ways to implement visible, logging-rich security at the application layer. In the HAProxy Enterprise load balancer, the HAProxy Enterprise WAF sits alongside built-in application-layer DDoS protection and the HAProxy Enterprise Bot Management Module, allowing you to address exploits, floods, and automated abuse as a critical layer in your overall security infrastructure.

How HAProxy helps meet technical expectations

Building a resilient infrastructure requires a "defense-in-depth" approach that protects services thoroughly. Since many modern attacks target the application layer, organizations must consider how to mitigate these specific threats.

  • WAF protection solution. Application-layer protection including deep request inspection, policy enforcement, and detailed logs, supporting incident detection, reporting, and virtual patching. This functionality is critical for covering the full spectrum of application-layer risks.

  • Application-layer DDoS protection solutions. Global rate limiting, surge handling, connection management, adaptive challenges, and circuit breakers help maintain availability during floods and abusive traffic patterns.

  • Bot management solutions. Fast and flexible identification and categorization of bots, including unwanted crawlers and scripted attackers, so teams can block automated abuse (including brute force attacks, web scrapers, and vulnerability scanners) while preserving traffic from humans and verified crawlers.

  What to consider for auditability

    While the directives do not prescribe a specific product, and requirements vary by industry and nation, demonstrating operational resilience often requires evidence of robust technical controls. Capabilities that help demonstrate this resilience include:

    • WAF for exploit prevention and virtual patching.

    • Layer 7 DDoS protections for rate control and graceful degradation.

    • Bot management for detecting, classifying, and labeling a broad spectrum of complex, high-impact threats.

    • Centralized logging and telemetry via HAProxy Fusion, which aggregates real-time metrics and security logs for formal reporting and audit trails.

    These capabilities provide practical coverage of application-layer risks, offering the data and visibility teams need to support their compliance assessments.

    HAProxy Enterprise combines these capabilities within a single layer in the traffic path, delivering high performance and low complexity.

    The compliance crossroads

    Organizations now face a choice between two paths.

    The traditional way: complexity through bolt-on solutions 

    Some teams opt to deploy a separate, standalone WAF appliance or cloud service. This can introduce new challenges, such as:

    • Added latency and network hops that slow down applications.

    • Integration conflicts between different vendors and architectures.

    • Higher total cost of ownership due to extra licenses, training, and monitoring tools.

    • Another potential point of failure in the data path.

    The HAProxy way: simplicity through unification 

    Instead of stacking tools, a smarter path is to build security into the platform you already trust. That’s where HAProxy Enterprise load balancer comes in.

    HAProxy Enterprise: a key component of your compliance strategy

    HAProxy Enterprise provides high-performance load balancing for TCP, UDP, QUIC, and HTTP-based applications, high availability, an API/AI gateway, Kubernetes application routing, SSL processing, DDoS protection, bot management, global rate limiting, and a next-generation WAF. 

    HAProxy Enterprise is built on the highly regarded open-source HAProxy, the most widely used software load balancer, ensuring exceptional performance, reliability, and flexibility. It enhances this core with ultra-low-latency security features and includes premier support.

    HAProxy Enterprise benefits from full-lifecycle management, monitoring, and automation (provided by HAProxy Fusion), and next-generation security layers powered by threat intelligence from HAProxy Edge and enhanced by machine learning.

    Resilience without compromise

    Security often comes with trade-offs, but not here. HAProxy Enterprise integrates advanced defenses directly into the HAProxy instance you already have in the traffic path, helping to support the technical side of your business continuity plans.

    • Zero additional latency and low resource use: The WAF operates natively inside the load balancer, scanning and blocking malicious traffic inline, without increasing latency or CPU use.

    • Seamless upgrade path: Upgrading from HAProxy Community to HAProxy Enterprise requires no new hardware or network redesigns; your existing HAProxy configurations and automation continue to work seamlessly.

    • Consistent cross-environment protection: Deploy on-premises, in containers, or across multiple clouds with the same security posture everywhere.

    The result is security at the speed of your business, providing robust technical controls that are invisible to users and simple for your teams to manage.

    The smartest move is an upgrade

    Both NIS2 and DORA demand that organizations prove their resilience. That means having the proper controls, visibility, encryption, risk management, and continuity built into the heart of your infrastructure.

    With HAProxy Enterprise, you don’t need extra tools or added complexity. High availability, built right into our name, ensures your infrastructure performs reliably under heavy loads or security events. Strengthen your security posture through the same platform you trust for traffic delivery, now enhanced with enterprise-grade protection and observability to support your compliance efforts.

    Ready to see it in action?

    Have questions about how HAProxy fits into your compliance strategy?

    Note on Compliance: HAProxy Enterprise provides robust security and observability to help organizations manage application-layer risks. However, achieving compliance with regulations such as NIS2 or DORA depends on a customer’s overall security infrastructure, operating environment, and the management of their broader risk program. The examples provided in this article are for illustrative purposes only and do not constitute a legal guarantee or a promise of compliance.

    [1] NIS2 Fines & Consequences

    [2] Final text of the Digital Operational Resilience Act (DORA)

    The post Streamlining your NIS2 and DORA compliance solution with HAProxy appeared first on HAProxy Technologies.
    Load balancing VMware Horizon's UDP and TCP traffic: a guide with HAProxy (Fri, 27 Feb 2026)
    https://www.haproxy.com/blog/load-balancing-vmware-horizons-udp-and-tcp

    If you’ve worked with VMware Horizon (now Omnissa Horizon), you know it’s a common way for enterprise users to connect to remote desktops. But for IT engineers and DevOps teams? It’s a whole different story. Horizon’s custom protocols and complex connection requirements make load balancing a bit tricky.

    With its recent sale to Omnissa, the technology hasn’t changed—but neither has the headache of managing it effectively. Let’s break down the problem and explain why Horizon can be such a beast to work with… and how HAProxy can help.

    What Is Omnissa Horizon?

    Horizon is a remote desktop solution that provides users with secure access to their desktops and applications from virtually anywhere. It is known for its performance, flexibility, and enterprise-level capabilities. Here’s how a typical Horizon session works:

    1. Client Authentication: The client initiates a TCP connection to the server for authentication.

    2. Server Response: The server responds with details about which backend server the client should connect to.

    3. Session Establishment: The client establishes one TCP connection and two UDP connections to the designated backend server.

    The problem? In order to maintain session integrity, all three connections must be routed to the same backend server. But Horizon’s protocol doesn’t make this easy. The custom protocol relies on a mix of TCP and UDP, which have fundamentally different characteristics, creating unique challenges for load balancing.

    Why Load Balancing Omnissa Horizon Is So Difficult

    The Multi-Connection Challenge

    Since these connections belong to the same client session, they must route to the same backend server. A single misrouted connection can disrupt the entire session. For a load balancer, this is easier said than done.

    The Problem with UDP

    UDP is stateless, which means it doesn’t maintain any session information between the client and server. This is in stark contrast to TCP, which ensures state through its connection-oriented protocol. Horizon’s use of UDP complicates things further because:

    • There’s no built-in mechanism to track sessions.

    • Load balancers can’t use traditional stateful methods to ensure all connections from a client go to the same server.

    • Maintaining session stickiness for UDP typically requires workarounds that add complexity (like an external data source).

    Traditional Load Balancing Falls Short

    Most load balancers rely on session stickiness (or affinity) to route traffic consistently. In TCP, this is often achieved with in-memory client-server mappings, such as with HAProxy's stick tables feature. However, since UDP is stateless and doesn't track sessions like TCP does, stick tables do not support UDP. Keeping everything coordinated without explicit session tracking feels like solving a puzzle without all the pieces—and that’s where the frustration starts. 
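    For contrast, here is roughly what the classic stateful TCP approach looks like (a sketch; backend name and addresses are placeholders). This is exactly the pattern that cannot be applied to Horizon's UDP flows:

```haproxy
backend be_horizon_tcp
    mode tcp
    balance roundrobin
    # stateful stickiness: remember which server each client IP was sent to
    stick-table type ip size 100k expire 10h
    stick on src
    server srv1 192.168.1.11:8443 check
    server srv2 192.168.1.12:8443 check
```

    The stick table lives in the load balancer's memory, which is precisely why it can't help with stateless UDP datagrams.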

    This is why Omnissa (VMware) suggests using their “Unified Access Gateway” (UAG) appliance to handle the connections. While this makes one problem easier, it adds another layer of cost and complexity to your network. You may still need the UAG for a more comprehensive Omnissa deployment, but it would be great if there were a simpler, cleaner, and more efficient solution.

    This leaves engineers with a critical question: How do you achieve session stickiness for a stateless protocol? This is where HAProxy offers an elegant solution.

    Enter HAProxy: A Stateless Approach to Stickiness

    HAProxy’s balance source algorithm is the key to solving the Horizon multi-protocol challenge. This approach uses consistent hashing to achieve session stickiness without relying on stateful mechanisms like stick tables. From the documentation:

    “The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. This ensures that the same client IP address will always reach the same server as long as no server goes down or up.” 

    Here’s how it works:

    1. Hashing Client IP: HAProxy computes a hash of the client’s source IP address.

    2. Mapping to Backend Servers: The hash is mapped to a specific backend server in the pool.

    3. Consistency Across Connections: The same client IP will always map to the same backend server.

    This deterministic, stateless approach ensures that all connections from a client—whether TCP or UDP—are routed to the same server, preserving session integrity.

    Why Stateless Stickiness Works

    The beauty of HAProxy’s solution lies in its simplicity and efficiency: it has low overhead, works for both protocols, and tolerates change. Changes to the server pool may cause connections to rebalance, but those clients will be redirected consistently, as noted in the documentation:

    “If the hash result changes due to the number of running servers changing, many clients will be directed to a different server.”

    It is super efficient because there is no need for in-memory storage or synchronization between load balancers. The same algorithm works seamlessly for both TCP and UDP. 

    This stateless method doesn’t just solve the problem; it does so elegantly, reducing complexity and improving reliability.

    Implementing HAProxy for Omnissa Horizon

    While the configuration is relatively straightforward, we will need the HAProxy Enterprise UDP Module to provide UDP load balancing. This module is included in HAProxy Enterprise, which adds additional enterprise functionality and ultra-low-latency security layers on top of our open-source core.

    Implementation Overview

    So, how easy is it to implement? Just a few lines of configuration will get you what you need. You start by defining your frontend and backend, and then add the “magic”:

    1. Define Your Frontend and Backend: The frontend section handles incoming connections, while the backend defines how traffic is distributed to servers.

    2. Enable Balance Source: The balance source directive ensures that HAProxy computes a hash of the client’s IP and maps it to a backend server.

    3. Optimize Health Checks: Include the check keyword for backend servers to enable health checks. This ensures that only healthy servers receive traffic.

    4. UDP Load Balancing: The UDP module in the enterprise edition is necessary for UDP load balancing, and uses the udp-lb keyword. 

    Here’s what a basic configuration might look like for the custom “Blast” protocol:
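    A minimal sketch (addresses are placeholders; Blast commonly uses port 8443 for both TCP and UDP, and the udp-lb section assumes the HAProxy Enterprise UDP Module, so treat its details as illustrative):

```haproxy
# TCP side of the Blast protocol
frontend fe_blast_tcp
    mode tcp
    bind :8443
    default_backend be_blast_tcp

backend be_blast_tcp
    mode tcp
    balance source          # hash the client IP to pick a server
    hash-type consistent    # minimize remapping when the pool changes
    server srv1 192.168.1.11:8443 check
    server srv2 192.168.1.12:8443 check

# UDP side (HAProxy Enterprise UDP Module)
udp-lb blast_udp
    dgram-bind :8443
    balance source          # same hash, so UDP lands on the same server
    server srv1 192.168.1.11:8443
    server srv2 192.168.1.12:8443
```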

    This setup ensures that all incoming connections—whether TCP or UDP—are mapped to the same backend server based on the client’s IP address. The hash-type consistent option minimizes disruption during server pool changes.

    This approach is elegant in its simplicity. We use minimal configuration, but we still get a solid approach to session stickiness. It is also incredibly performant, keeping memory usage and CPU demands low. Best of all, it is highly reliable, with consistent hashing ensuring stable session persistence, even when servers are added or removed.

    Refined health tracking & balancing UAG

    While the basic configuration above works well, there are a few refinements and adjustments that can be added for a more comprehensive solution. In production-grade Omnissa Horizon environments, HAProxy is typically deployed in front of Unified Access Gateways (UAGs) rather than directly in front of internal Connection Servers. 

    This architecture places HAProxy at the edge to manage incoming external traffic before it enters the DMZ, ensuring that UAGs (which act as hardened proxies for internal VDI operations) remain secure and performant. There are a few key refinements we can add for this production-ready setup:

    Synchronized health tracking

    While basic port checks verify network connectivity, they do not guarantee that the underlying Horizon application services are healthy. To solve this, use a dedicated health check backend like be_uag_https that specifically targets the /favicon.ico path. HAProxy can verify that all relevant UAG and Connection Server services are fully functional, not just that the port is open. 

    Long-lived session persistence

    Omnissa Horizon sessions are notably long-lived, with a default maximum duration of 10 hours. Standard load balancer timeouts are often too aggressive, potentially severing active virtual desktop connections during a typical workday. To ensure stability, HAProxy can be configured with extended timeout server and timeout client settings of 10 hours for all Blast and PCoIP backends. This aligns the load balancer’s persistence with the application’s session lifecycle, ensuring that even if a user is momentarily idle, their secondary protocols remain pinned to the correct UAG node.
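    Sketched as a named defaults section (the section name is arbitrary, and named defaults require HAProxy 2.4 or later):

```haproxy
defaults horizon-defaults
    mode tcp
    timeout connect 5s
    timeout client 10h    # match Horizon's 10-hour maximum session duration
    timeout server 10h
```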

    Edge security and SSL bridging

    For external-facing deployments, HAProxy should serve as the first line of defense using advanced security features like WAF (Web Application Firewall) and Brute Force Detection on the initial authentication endpoints. This protects the environment from credential-stuffing and application-layer attacks before they ever reach the UAG. 

    Furthermore, because UAGs require end-to-end encryption for security, HAProxy should be configured for SSL Bridging. It is important to use the same SSL certificate on both the HAProxy virtual service and the UAG nodes.

    This is crucial because the UAGs fingerprint the certificate presented on incoming requests, meaning the certificate presented by the HAProxy load balancer and the certificate on the UAG's outside interface must match to prevent certificate mismatch errors during the session handoff between the primary authentication and secondary display protocols.

    Sample configuration with UAG load balancing & advanced health tracking

    In this refined setup, the be_uag_https backend does the heavy lifting. All other backends simply "watch" its status. See the Omnissa documentation for a full list of port requirements for the different services within Unified Access Gateway.
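    A condensed sketch of that arrangement (addresses, ports, and backend names are illustrative, and only one of the secondary protocol backends is shown):

```haproxy
# Dedicated HTTPS health check backend: the single "source of truth"
backend be_uag_https
    mode http
    option httpchk GET /favicon.ico
    server uag1 192.168.1.21:443 check ssl verify none
    server uag2 192.168.1.22:443 check ssl verify none

# Blast TCP backend: sends no checks of its own, it tracks the backend above
backend be_uag_blast_tcp
    mode tcp
    balance source
    hash-type consistent
    timeout server 10h
    server uag1 192.168.1.21:8443 track be_uag_https/uag1
    server uag2 192.168.1.22:8443 track be_uag_https/uag2
```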

    Understanding the track Directive and Timing

    When you use the track keyword, the secondary servers inherit the state of the target. They don’t send their own health check packets, which keeps everything synchronized: if srv1 fails the favicon check, it is marked down for Blast TCP, Blast UDP, and PCoIP UDP at the exact same millisecond.

    This prevents the "zombie session" issue. Without tracking, a user might be connected via TCP while their UDP media stream is hitting a dead server.

    This centralized tracking approach transforms your health checks from a series of fragmented probes into a unified "source of truth" for your infrastructure. By anchoring every protocol to a single HTTP health check, you eliminate the risk of partial failures: a server can no longer appear healthy for UDP while its TCP services are failing, and the client's entire session remains synchronized.

    It's a configuration that's both more robust and significantly lighter on your backend resources, providing the stability required for high-performance virtual desktop environments.

    Advanced Options in HAProxy 3.0+

    HAProxy 3.0 introduced enhancements that make this approach even better. It offers more granular control over hashing, allowing you to specify the hash key (e.g., source IP or source+port). This is particularly useful for scenarios where IP addresses may overlap or when the list of servers is in a different order.

    We can also include hash-balance-factor, which will help keep any individual server from being overloaded. From the documentation:

    “Specifying a "hash-balance-factor" for a server with "hash-type consistent" enables an algorithm that prevents any one server from getting too many requests at once, even if some hash buckets receive many more requests than others. 

    [...]

    If the first-choice server is disqualified, the algorithm will choose another server based on the request hash, until a server with additional capacity is found.”

    Finally, we can adjust the hash function to be used for the hash-type consistent option. This defaults to sdbm, but there are 4 functions and an optional none if you want to manually hash it yourself. See the documentation for details on these functions.

    Sample configuration using advanced options:
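    For example (a sketch; the djb2 function and the factor value are arbitrary illustrative choices):

```haproxy
backend be_blast_tcp
    mode tcp
    balance source
    # consistent hashing with an explicitly chosen hash function
    hash-type consistent djb2
    # allow any single server at most 1.5x the average load
    hash-balance-factor 150
    server srv1 192.168.1.11:8443 check
    server srv2 192.168.1.12:8443 check
```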

    These features improve flexibility and reduce the risk of uneven traffic distribution across backend servers.

    Coordination Without Coordination

    The genius of HAProxy’s solution lies in its stateless design. By relying on consistent algorithms, it achieves an elegant solution that many would assume requires complex session tracking or external databases. This approach is not only efficient but also scalable.

    The result? A system that feels like it’s maintaining state without actually doing so. It’s like a magician revealing their trick—it’s simpler than it looks, but still impressive.

    Understanding Omnissa Horizon’s challenges is half the battle. Implementing a solution can be surprisingly straightforward with HAProxy. You can ensure reliable load balancing for even the most complex protocols by leveraging stateless stickiness through consistent hashing.

    This setup not only solves the Horizon problem but also demonstrates the power of HAProxy as a versatile tool for DevOps and IT engineers. Whether you’re managing legacy applications or cutting-edge deployments, HAProxy has the features to make your life easier.




    The post Load balancing VMware Horizon's UDP and TCP traffic: a guide with HAProxy appeared first on HAProxy Technologies.
    Securing 80,000 transactions per second at Infobip with HAProxy Enterprise WAF (Fri, 27 Feb 2026)
    https://www.haproxy.com/blog/securing-80000-transactions-per-second-at-infobip-with-haproxy-enterprise-waf

    The average cost of a security breach reached nearly $4.4 million in 2025, according to the 2025 Cost of a Data Breach Report. To proactively address this substantial financial and security risk, Infobip, a global cloud communications platform, used HAProxy Enterprise to implement a security and uptime framework that is both highly modular and highly performant.

    Infobip has 62 data centers spread across the globe — and operates each data center with everything it needs to run independently of others. There are no reliability dependencies between data centers, and if one or more go down, the others automatically pick up the slack. 

    The company processes enormous volumes of traffic, peaking at over 80,000 transactions per second during events such as Black Friday. These transactions go through HAProxy Enterprise with the integrated HAProxy Enterprise WAF.

    To protect its applications and meet strict customer compliance requirements, Infobip needed a Web Application Firewall (WAF). However, finding a solution that could meet their demanding technical and business needs was a significant challenge. 

    At HAProxyConf, engineers from Infobip shared the story of their search and how they ultimately found success with the next-gen HAProxy Enterprise WAF, powered by the Intelligent WAF Engine. Their journey highlights the critical need for a WAF that delivers security without compromising on performance, accuracy, or manageability. 

    The challenge: finding a scalable WAF for a global, high-performance infrastructure

    Infobip’s requirements for a WAF were stringent. Their globally distributed infrastructure, with scores of independent data centers, meant that any solution had to be scalable and easy to manage centrally. Furthermore, due to demanding client SLAs, Infobip had to keep any new latency to an almost invisible level.  

    Additional security — with no added latency? This strict requirement immediately excluded many traditional WAFs, which are often slow and inefficient.

    The team evaluated several options:

    • Cloud-based WAFs were not a good fit. Concerns included whether vendors had a presence in all of Infobip's regions and the need to classify the WAF provider as a data sub-processor, which they wanted to avoid. 

    • Hardware appliances were also ruled out. Scalability was lacking, management was a challenge, and costs were high. 

    • Virtual appliances didn’t meet Infobip’s operational approach, which runs everything possible in containers for consistency, security, and ease of management. 

    Since Infobip was already a happy user of HAProxy Enterprise for load balancing and SSL termination, they decided to put HAProxy Enterprise WAF to the test. 

    The evaluation: the Intelligent WAF Engine provides a breakthrough

    Infobip’s initial tests involved two distinct WAF engines: one based on ModSecurity and the HAProxy Advanced WAF (which has since been succeeded by the HAProxy Enterprise WAF). The results were mixed, highlighting the "WAF trade-off" with either option:

    • The Advanced WAF was extremely fast but proved too aggressive for their web portal, leading to false positives.

    • The ModSecurity WAF handled the portal well but introduced unacceptable latency on high-throughput APIs.

    Infobip needed one solution that could handle both use cases, without the trade-offs. Fortunately, during the evaluation period, HAProxy Technologies launched the next-generation HAProxy Enterprise WAF, powered by the Intelligent WAF Engine.

    This new WAF is designed to address the complexities and demands of modern application environments and the advanced threats they face — and is distinguished by its exceptional balanced accuracy, simple management, and ultra-low latency and resource usage. The Intelligent WAF Engine represents a technical breakthrough by moving beyond static lists and regex-based attack signatures to a non-signature-based detection system.

    By employing threat intelligence from HAProxy Edge’s 60+ billion daily requests, enhanced by machine learning, the Intelligent WAF Engine delivers:

    • Exceptional accuracy: A 98.5% balanced accuracy rate in an open source WAF benchmark, significantly outperforming the industry average of 90%.

    • Ultra-low latency: Under 1ms of added latency, even when handling complex traffic.

    • Simple management: Easy to set up and manage with out-of-the-box behavior suitable for most deployments.

    • 100% privacy: No external connection, and no third-party data processing.

    A notable feature of the HAProxy Enterprise WAF is the optional OWASP Core Rule Set (CRS) compatibility mode, for organizations that require OWASP CRS support for specific use cases or compliance. When enabled, this mode achieves on average 15X lower latency than the ModSecurity WAF using the OWASP CRS — even under mixed traffic conditions.

    This next-generation WAF solved Infobip's core problem, providing the ultra-low latency needed for API traffic and the exceptional accuracy required for their web portal, with an efficient and privacy-first operating model.

    The implementation: a phased, automated rollout

    Infobip had a solution to their challenging security and performance requirements in hand. Now they "just" needed to deploy it — and keep it updated — safely and securely.

    Infobip devised a careful, automated rollout plan across all 62 of their data centers globally:

    1. Deploy in learning mode: The team first deployed HAProxy Enterprise WAF in a non-blocking learning mode. This allowed them to learn traffic patterns and fine-tune rules without impacting production. To ensure rock-solid reliability, they configured a “circuit breaker” to automatically disable the WAF if CPU usage ever spiked, choosing availability over security during the initial learning phase. (NB: No spike occurred.) 

    2. Enable protection path-by-path: Due to Infobip's use of a microservices architecture, they had the ability to enable blocking mode on an application-by-application basis. The team would analyze the WAF traffic for a specific path (e.g., /sms), ensure there were no false positives, and then switch that path to protection mode. This gave them the opportunity to monitor again in production, then move to the next application. 

    3. Automate with dynamic updates: Infobip manages all configurations centrally and deploys updates globally within 15 minutes. When a new application comes online, they simply update a map file that is automatically downloaded by HAProxy Enterprise instances, avoiding a full reload or redeployment - and the latency hiccups that would cause. This highlights the simple yet powerful setup and management framework that HAProxy Enterprise provides. 
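    The map-file pattern described in step 3 can be sketched in a few lines of HAProxy configuration. This is an illustration of the general technique, not Infobip's actual setup: the file paths and variable names are hypothetical, and `txn.waf_blocked` is a stand-in for the WAF engine's real verdict, since the HAProxy Enterprise WAF directives themselves are not shown in this post.

```haproxy
frontend fe_api
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    mode http

    # Look up the WAF mode for this path prefix in a centrally managed map
    # file. Unknown paths fall back to "learning" (non-blocking), so a new
    # application is safe by default until it is explicitly switched over.
    http-request set-var(txn.waf_mode) path,map_beg(/etc/haproxy/maps/waf-mode.map,learning)

    # Enforce blocking only where the map says so. txn.waf_blocked is a
    # hypothetical placeholder for the WAF engine's per-request verdict.
    http-request deny deny_status 403 if { var(txn.waf_mode) -m str blocking } { var(txn.waf_blocked) -m bool }
```

A matching `waf-mode.map` would contain one entry per application path, e.g. `/sms blocking` and `/email learning`; flipping a path to protection mode is then a one-line change to a file that HAProxy can pick up without a reload.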

    During Infobip’s presentation, the audience asked, “After setting up an app, do you still need much fine-tuning of WAF rules?” to which Juraj Ban replied, “No. Not anymore.”

    The result: security + performance, without compromise

    By implementing HAProxy Enterprise WAF, Infobip achieved its goal of strengthening its security posture without sacrificing performance. After the initial fine-tuning, they have experienced virtually no false positives and have met or exceeded all customer compliance requirements.

    The project was so successful that Infobip’s Chief Information Security Officer, Andro Galinović, provided a powerful endorsement:

    Infobip's story is a testament to how a modern, intelligent WAF can solve the complex security challenges of a global, high-performance platform. By choosing HAProxy Enterprise, they gained a solution that is not only fast and accurate but also flexible enough to fit seamlessly into their highly automated, container-based environment.


    ]]> Securing 80,000 transactions per second at Infobip with HAProxy Enterprise WAF appeared first on HAProxy Technologies.]]>
    <![CDATA[Omnissa Horizon alternative: how HAProxy solves UDP load balancing]]> https://www.haproxy.com/blog/omnissa-horizon-alternative Wed, 25 Feb 2026 14:00:00 +0000 https://www.haproxy.com/blog/omnissa-horizon-alternative ]]> The grace period is over. Your Horizon environment needs a new home, and your legacy load balancer isn't coming with you. You need a better Omnissa Horizon alternative.

    Omnissa's separation from Broadcom has disrupted VDI routing for many organizations, and vSphere 7's October 2025 end-of-life has made the situation more urgent. If you're planning to replace Omnissa Horizon infrastructure right now, you're facing a choice: replicate the old expensive architecture or use this forced refresh to fix what wasn't working.

    Legacy ADCs were never built for this protocol

    Omnissa Horizon runs on Blast Extreme, a UDP-heavy protocol that creates a coordination nightmare for traditional load balancers. Every user session requires three simultaneous connections: one TCP channel for authentication, plus two UDP streams for display and audio. All three must hit the same backend server, or the session dies.

    Legacy ADCs (Application Delivery Controllers) solve this with brute force: massive in-memory "coordination tables" that track every connection's state. This approach was already inefficient, but in a forced migration scenario, it becomes a budget killer. You're looking at hardware refresh quotes that rival your new Omnissa licensing costs, just to bolt stateful tracking onto UDP, a protocol designed to be stateless in the first place.

    There's a better approach that eliminates this architectural bottleneck entirely.

    HAProxy stateless coordination

    HAProxy solves the Blast routing challenge with consistent hashing (balance source) for TCP and UDP load balancing, a stateless algorithm that maps client IPs to backend servers deterministically.

    Here's why this matters for your migration:

    | Traditional ADC                    | HAProxy Enterprise                              |
    | ---------------------------------- | ----------------------------------------------- |
    | Stores connection state in memory  | Uses pure math, no state to sync                |
    | Requires hardware overprovisioning | Scales horizontally on commodity infrastructure |
    | Cost scales with capacity          | Cost scales per HAProxy Enterprise instance     |

    With HAProxy, you get superior Blast performance, eliminate hardware refresh CAPEX, and free up budget to offset rising vSphere costs.

    Stateless stickiness in action

    When a Horizon client connects, HAProxy hashes the client's source IP. That hash deterministically maps to the same backend server, which means the TCP auth channel and both UDP streams all route to the same destination, without storing session tables.

    There is no state to replicate across HA pairs, no memory tuning for peak user counts, and no licensing tiers based on "connections per second."
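    In configuration terms, the stateless mapping is just a balance algorithm plus a hash type. Here is a minimal sketch of the TCP side (server names and addresses are illustrative; the UDP streams rely on the same `balance source` logic in the HAProxy Enterprise UDP module, whose section syntax differs and is not shown here):

```haproxy
backend be_blast_tcp
    mode tcp
    # Hash the client's source IP to pick a server. With hash-type consistent,
    # adding or removing a server remaps only a small fraction of clients
    # instead of reshuffling everyone.
    balance source
    hash-type consistent
    server horizon1 203.0.113.11:8443 check
    server horizon2 203.0.113.12:8443 check
    server horizon3 203.0.113.13:8443 check
```

Because the server choice is a pure function of the client IP, every HAProxy instance in an HA pair computes the same answer independently; there is nothing to synchronize.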

    Build strategically: get more than a VDI Gateway

    Migrating to HAProxy as your Omnissa Horizon alternative doesn't have to be purely defensive spending. There's a broader infrastructure problem you can solve at the same time.

    Most organizations today suffer from application delivery fragmentation. You're running legacy ADCs for VDI and web apps, separate API gateways for microservices, service mesh overlays for Kubernetes, and different tools for different clouds. 

    Each silo has its own management plane, monitoring stack, and security policy language. Troubleshooting a user complaint that spans "VDI → Kubernetes app → external API" requires logging into four different systems.

    By choosing HAProxy for your Omnissa migration, you're automatically placing the cornerstone of a Universal Mesh architecture into your infrastructure.

    What Universal Mesh means in practice

    The same HAProxy Enterprise instance handling your Blast traffic can:

    • Route north-south traffic (users → VDI pools)

    • Route east-west traffic (VDI → backend databases, internal APIs)

    • Serve as your Kubernetes Ingress Controller (containerized apps)

    • Act as your API Gateway (external partner integrations)

    All managed through HAProxy Fusion Control Plane: one UI, one config model, one observability platform.

    Migration path: tactical fix to strategic foundation

    Phase 1 (weeks 1-4): solve the immediate crisis

    • Deploy HAProxy Enterprise as your Omnissa Horizon gateway through HAProxy Fusion Control Plane

    • Configure balance source with consistent hashing for stateless UDP routing

    • Migrate user traffic off the legacy ADC

    Phase 2 (months 2-6): consolidate adjacent workloads

    • Route your web application traffic through the same HAProxy layer

    • Migrate API gateway functions to HAProxy Enterprise (you already own it)

    • Route Kubernetes traffic through HAProxy Enterprise

    Phase 3 (6-12 months): full Universal Mesh

    • Federate HAProxy Enterprise instances across clouds

    • Establish unified policy for mTLS, rate limiting, and WAF

    • Retire the last legacy ADC appliances

    By this point, you will have addressed the immediate Horizon crisis and consolidated your application delivery infrastructure. Instead of managing separate systems for VDI, API Gateway, and Kubernetes ingress, you're running a unified data plane. The operational benefit shows up in troubleshooting: when you can trace a user issue from VDI through containerized apps to external APIs in a single interface, you're solving problems in minutes instead of hours.

    This moment matters

    The Omnissa migration is forcing you to make decisions now, but the consequences of those decisions will compound for years.

    Choosing the path of least resistance (buying another expensive ADC because "it's what we know") might leave you having this same conversation in a few years when the next vendor shakeup occurs.

    The technical complexity of the Omnissa migration is real. But the path through it doesn't have to be complicated.

    Ready to escape the ADC vendor lock-in?

    Talk to our solutions team about architecting your Omnissa environment on HAProxy Enterprise and building the foundation for a Universal Mesh that grows with you.

    ]]> Omnissa Horizon alternative: how HAProxy solves UDP load balancing appeared first on HAProxy Technologies.]]>
    <![CDATA[Don't panic: a low-risk strategy for Ingress NGINX retirement]]> https://www.haproxy.com/blog/low-risk-strategy-for-ingress-nginx-retirement Thu, 19 Feb 2026 09:00:00 +0000 https://www.haproxy.com/blog/low-risk-strategy-for-ingress-nginx-retirement ]]> The Ingress NGINX project is winding down. For many organizations, this means planning a migration for critical infrastructure.

    While the HAProxy Kubernetes Ingress Controller is the natural successor for these workloads, a "rip and replace" strategy isn’t always viable. You might have complex configurations, customized annotations, or deployment freezes that make a sudden switch risky.

    There's a lower-risk path: Place HAProxy in front of your existing Ingress NGINX deployment. 

    By leveraging the HAProxy One platform approach, you can bridge your legacy Ingress NGINX setup and your future infrastructure without downtime. This buys you time while adding immediate security and observability benefits.

    Taking a "shield and shift" approach

    This strategy mirrors the architecture we've previously recommended for vulnerability protection (like CitrixBleed). Deploy HAProxy Enterprise as your edge layer, sitting in front of your current Ingress NGINX controller. You wrap your existing ingress with enterprise-grade security and visibility, without touching your working NGINX configurations.

    This approach leverages a unified data plane. HAProxy Enterprise at the edge creates a protective layer that's consistent with your future HAProxy Kubernetes Ingress Controller. The HAProxy One platform uses the same high-performance engine at the edge and within Kubernetes, unlike disparate solutions that force you to maintain different configurations and skill sets.

    The security policies, rate limits, and observability metrics you configure at the edge today translate directly to your Kubernetes clusters tomorrow. No relearning. No translation. 

    1. Immediate security hardening

    Legacy software becomes a security liability over time. An HAProxy edge layer acts as a security filter. You can apply rate limiting, bot management, and enterprise WAF rules to sanitize traffic before it reaches the deprecated controller.

    2. Better visibility into your traffic

    Migration anxiety comes from blindness. HAProxy Fusion unifies the management of your external edge gateways and internal Kubernetes controllers.

    HAProxy Fusion provides a single pane of glass for all traffic flows—even those heading to your legacy Ingress NGINX controller. It allows you to visualize service dependencies and automate the routing changes required for the migration, turning a manual, error-prone switchover into a managed workflow.

    3. Migrate one service at a time

    This is the operational advantage. Once HAProxy Enterprise handles your ingress traffic, you don't need to cut everything over at once.

    Configure HAProxy Enterprise to route most traffic to your existing Ingress NGINX setup. Then carve out specific paths, domains, or services to route to a new, parallel HAProxy Kubernetes Ingress Controller deployment.

    Migrate service by service, pod by pod, or region by region. Test a new configuration in production with real traffic. If it works, great. If not, revert the routing without redeploying your cluster.

    Configuration example

    The setup is straightforward. Configure your edge HAProxy Enterprise to listen on your public IP address and forward traffic to your Ingress NGINX service's internal IP address.

    Here's a simplified routing configuration:
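    The original post's configuration snippet did not survive syndication, so here is a minimal sketch of the idea (addresses, hostnames, and certificate paths are placeholders): most traffic defaults to the existing Ingress NGINX service, while a single hostname is carved out to a parallel HAProxy Kubernetes Ingress Controller.

```haproxy
frontend fe_edge
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    mode http

    # Carve out one hostname for the new controller; everything else
    # continues to flow to the legacy Ingress NGINX service untouched.
    acl to_new_ingress hdr(host) -i app.example.com
    use_backend be_haproxy_ingress if to_new_ingress
    default_backend be_nginx_ingress

backend be_nginx_ingress
    server nginx-ingress 10.0.0.20:443 ssl verify none check

backend be_haproxy_ingress
    server hap-ingress 10.0.0.30:443 ssl verify none check
```

Reverting a migrated service is a matter of deleting the ACL line, not redeploying anything inside the cluster.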

    Looking ahead: Gateway API support

    This architecture isn't just a stopgap. It's infrastructure that scales with you.

    As Kubernetes networking moves toward the Gateway API, a flexible edge routing layer lets you adopt new standards at your own pace. We're developing HAProxy Unified Gateway to support both Ingress and Gateway API standards—giving you a single platform that evolves with the ecosystem.

    Stabilize your environment now. Migrate on your timeline. The configuration knowledge you build today (the routing logic, security policies, and operational patterns) carries forward. You're not buying time to delay a painful migration. You're building the foundation for your next-generation infrastructure, one service at a time.

    Getting help

    You don't have to migrate alone:

    • Community Support: Join our Slack to discuss migration strategies with other users

    • Documentation: We're releasing migration tutorials and annotation mapping guides soon

    • Enterprise Support: If you need hands-on help for critical workloads, our support and sales teams can help you architect a safe transition with HAProxy Fusion and HAProxy Enterprise

    ]]> Don't panic: a low-risk strategy for Ingress NGINX retirement appeared first on HAProxy Technologies.]]>
    <![CDATA[February 2026 — CVE-2026-26080 and CVE-2026-26081: QUIC denial of service]]> https://www.haproxy.com/blog/cves-2026-quic-denial-of-service Thu, 12 Feb 2026 09:00:00 +0000 https://www.haproxy.com/blog/cves-2026-quic-denial-of-service ]]> The latest versions of HAProxy Community, HAProxy Enterprise, and HAProxy ALOHA fix two vulnerabilities in the QUIC library. These issues could allow a remote attacker to cause a denial of service. The vulnerabilities involve malformed packets that can crash the HAProxy process through an integer underflow or an infinite loop.

    If you use an affected product with the QUIC component enabled, you should update to a fixed version as soon as possible. Instructions are provided below on how to determine if your HAProxy installation is using QUIC. If you cannot yet update, you can temporarily work around this issue by disabling the QUIC component.

    Vulnerability details

    • CVE Identifiers: CVE-2026-26080 and CVE-2026-26081

    • CVSSv3.1 Score: 7.5 (High)

    • CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

    • Reported by: Asim Viladi Oglu Manizada

    Description

    Two separate issues were found in how HAProxy processes QUIC packets:

    • Token length underflow (CVE-2026-26081): This affects versions 3.0 (ALOHA 16.5) and later. A remote, unauthenticated attacker can cause a process crash. This happens by sending a malformed QUIC Initial packet that causes an integer underflow during token validation.

    • Truncated varint loop (CVE-2026-26080): This affects versions 3.2 (ALOHA 17.0) and later. An attacker can cause a denial of service. By sending a QUIC packet with a truncated varint, the frame parser enters an infinite loop until the system watchdog terminates the process.

    Repeated attacks can cause a lasting denial of service for your environment.

    Affected versions and remediation

    HAProxy Technologies released new versions of its products on Thursday, February 12, 2026, to patch these vulnerabilities.

    CVE-2026-26081 (Token length underflow)

    | Product                                  | Affected version(s) | Fixed version(s)                                                                         |
    | ---------------------------------------- | ------------------- | ---------------------------------------------------------------------------------------- |
    | HAProxy Community / Performance Packages | 3.0 and later       | 3.0.16, 3.1.14, 3.2.12, 3.3.3                                                            |
    | HAProxy Enterprise                       | 3.0 and later       | hapee-lb-3.0r1-1.0.0-351.929, hapee-lb-3.1r1-1.0.0-355.744, hapee-lb-3.2r1-1.0.0-365.548 |
    | HAProxy ALOHA                            | 16.5 and later      | 16.5.30, 17.0.18, 17.5.16                                                                |

    CVE-2026-26080 (Truncated varint loop)

    | Product                                  | Affected version(s) | Fixed version(s)             |
    | ---------------------------------------- | ------------------- | ---------------------------- |
    | HAProxy Community / Performance Packages | 3.2 and later       | 3.2.12, 3.3.3                |
    | HAProxy Enterprise                       | 3.2 and later       | hapee-lb-3.2r1-1.0.0-365.548 |
    | HAProxy ALOHA                            | 17.0 and later      | 17.0.18, 17.5.16             |

    Test if you’re affected

    Users of affected products can determine if the QUIC component is enabled on their HAProxy installation and whether they are affected:

    For a single installation (test a single config file):

    grep -iE "quic" /path/to/haproxy/config && echo "WARNING: QUIC may be enabled" || echo "QUIC not enabled"

    For multiple installations (test each config file in folder):

    grep -irE "quic" /path/to/haproxy/folder && echo "WARNING: QUIC may be enabled" || echo "QUIC not enabled"

    A response containing “QUIC may be enabled” indicates your HAProxy installation is potentially affected, and you need to manually review and disable any QUIC listeners. The fastest method is to use the global keyword tune.quic.listen off (for version 3.3) or no-quic (3.2 and below).
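    In context, the temporary mitigation is a single line in the global section; only one of the two keywords applies, depending on your version:

```haproxy
global
    # HAProxy 3.3 and later: stop listening for QUIC entirely
    tune.quic.listen off
    # HAProxy 3.2 and earlier: use this global keyword instead
    # no-quic
```

Remember this only disables QUIC; clients will fall back to HTTP/2 or HTTP/1.1 over TCP, so updating to a fixed version remains the proper remediation.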

    Update instructions

    Users of affected products should update immediately by pulling the latest image or package for their release track.

    • HAProxy Enterprise users can find update instructions in the customer portal.

    • HAProxy ALOHA users should follow the standard firmware update procedure in your documentation.

    • HAProxy Community users should compile from the latest source or update via their distribution's package manager or available images.

    Support

    If you are an HAProxy customer and have questions about this advisory or the update process, please contact our support team via the Customer Portal.

    ]]> February 2026 — CVE-2026-26080 and CVE-2026-26081: QUIC denial of service appeared first on HAProxy Technologies.]]>
    <![CDATA[Zero crashes, zero compromises: inside the HAProxy security audit]]> https://www.haproxy.com/blog/haproxy-security-audit-results Mon, 09 Feb 2026 15:00:00 +0000 https://www.haproxy.com/blog/haproxy-security-audit-results ]]> An in-depth look at the recent audit by Almond ITSEF, validating HAProxy’s architectural resilience and defining the shared responsibility of secure configuration.

    Trust is the currency of the modern web. When you are the engine behind the world’s most demanding applications, "trust" isn't a marketing slogan—it’s an engineering requirement.

    At HAProxy Technologies, we have always believed that high performance must never come at the cost of security or correctness. But believing in your own code isn’t enough. You need objective, adversarial validation. That's why we were glad when ANSSI, the French cybersecurity agency, commissioned a rigorous security audit of HAProxy (performed by Almond ITSEF) as part of its efforts to support the security assessment of open source software. The audit focused on source code analysis, fuzzing, and dynamic penetration testing.

    The results are in. After weeks of intense stress testing, code analysis, and fuzzing, the auditors reached a clear verdict: HAProxy 3.2.5 is a mature, secure product that is reliable for production.

    While we are incredibly proud of the results, we are equally grateful for the "operational findings" and the recommendations that highlight the importance of configuration in security. Here is a transparent look at what the auditors found and what it means for your infrastructure.

    Unshakeable stability: 25 days of fuzzing, zero crashes

    The most significant takeaway from the audit was the exceptional stability of the HAProxy core. The auditors didn't just review code; they hammered it.

    The team performed extensive "fuzzing" by feeding the system massive amounts of malformed, garbage, and malicious data. They primarily targeted the HAProxy network request handling and internal sockets. This testing went on for days, and in the case of internal sockets, up to 25 days.

    The result? Zero bugs. Zero crashes.

    For software that manages mission-critical traffic, handling millions of requests per second, this level of resilience is paramount. It confirms that the core logic of HAProxy is built to withstand not just standard traffic, but the chaotic and malicious noise of the open internet.

    Validating the architecture

    Beyond the stress tests, the audit validated several key architectural choices that differentiate HAProxy from other load balancers.

    Process isolation

    The report praised HAProxy’s "defense-in-depth" strategy. We isolate the privileged "master" process (which handles administrative tasks, spawns processes, and retains system capabilities) from the unprivileged "worker" process (which handles the actual untrusted network traffic). 

    By strictly separating these roles, HAProxy ensures that even if a worker were compromised by malicious traffic, the attacker would find themselves trapped in a container with zero system capabilities.

    Custom memory management

    Sometimes, we get asked why we use custom memory structures (pools) rather than standard system libraries (malloc). The answer has always been performance. Our custom allocators eliminate the locking overhead and fragmentation of general-purpose libraries, allowing for predictable, ultra-low latency.

    However, custom code often introduces risk. That is why this audit was so critical: static analysis confirmed that our custom implementation is not just faster, but robust and secure, identifying no memory corruption vulnerabilities.

    Clean code

    The auditors found zero vulnerabilities in the HAProxy source code itself. The only vulnerability identified was in a third-party dependency (mjson), which had already been patched in a subsequent update and shared with the upstream project.

    A case for shared responsibility

    No software is perfect, and no audit is complete without findings. The report highlighted risks that lie not in the software’s flaws, but in operational configuration.

    This brings us to a crucial concept: Shared Responsibility. We provide a bulletproof engine, but the user sits in the driver's seat. The audit highlighted a few areas where "default" behaviors prioritize compatibility over strict security, requiring administrators to be intentional with their config.

    We believe in transparency, so we are highlighting these operational recommendations to provide guidance, much of which experienced HAProxy users will recognize as standard configuration best practice.

    1. The ACL "bypass" myth

    The auditors noted that Access Control Lists (ACLs) based on URL paths could be bypassed using URL encoding (e.g., accessing /login by sending /log%69n). While this may appear to be a security gap, it’s actually a result of HAProxy’s commitment to transparency. As a proxy, HAProxy’s primary job is to deliver traffic exactly as it’s received. Since a backend server might technically treat /login and /log%69n as distinct resources, HAProxy doesn't normalize them by default to avoid breaking legitimate, unique application logic.

    If your backend decodes these characters and you need to enforce stricter controls, you have three main paths forward:

    1. Adopt a positive security model: Instead of trying to block "bad" paths (which are easy to alias), switch to an "Allow" list that only permits known-good URLs and blocks everything else.

    2. Manual normalization: For specific use cases, you can use the normalize-uri directive to choose which types of normalization to apply to percent-encoded characters before they hit your ACL logic (depending on the application's type and operating system).

    3. Enterprise WAF: If you prefer "turnkey" protection, the HAProxy Enterprise WAF automatically handles this normalization, sitting in front of the logic to decode payloads safely.

    The positive security model is a standard best practice and the only safe way to deal with URLs. The fact that the auditors unknowingly adopted an unsafe approach here made us think about how to emit new warnings when detecting such bad patterns, maybe by categorizing actions. This ongoing feedback loop within the community helps us continue to improve and refine a decades-old project.
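    Options 1 and 2 above can be combined in a few lines of configuration. A minimal sketch, with illustrative path prefixes (choose normalizers to match what your backend actually decodes):

```haproxy
frontend fe_web
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    mode http

    # Normalize before any ACL sees the path: percent-decode-unreserved
    # turns /log%69n back into /login ('i' is an unreserved character).
    http-request normalize-uri percent-decode-unreserved
    http-request normalize-uri path-merge-slashes

    # Positive security model: permit known-good prefixes, deny the rest.
    acl allowed_path path_beg /login /static /api
    http-request deny deny_status 403 if !allowed_path
```

Because the deny rule fires on anything outside the allow list, an attacker gains nothing by inventing new encodings of a blocked path; there is no "bad" list to sneak around.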

    2. Stats page access

    The report noted that the Stats page uses Basic Auth and, if not configured with TLS, sends credentials in cleartext. It also reveals the HAProxy version number by default.

    It’s important to remember that the Stats page is a legacy developer tool designed to be extremely lightweight. It isn't enabled by default, and its simplicity is a feature, not a bug. It’s meant to provide quick visibility without heavy dependencies. We appreciate the comment on the relevance of displaying the version by default. This is historical, and there's an option to hide it, but we're considering switching the default to hide it and provide an option to display it, as it can sometimes help tech teams quickly spot anomalies.

    The Stats page doesn’t reveal much truly sensitive data by default, so if you want to expose your stats like many technical sites do (including haproxy.org), you can easily enable it. However, if you configure it to expose information that you consider sensitive (e.g., IP addresses), then you should absolutely secure it.

    The page doesn't natively handle advanced encryption or modern auth, so if you need to access it, follow these best practices:

    • Use a strong password for access

    • Wrap the Stats page in a secured listener that enforces TLS and rate limiting.

    • Only access the page through a secure tunnel like a VPN or SSH.
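    Taken together, the first two recommendations amount to a dedicated, locked-down listener. A minimal sketch (the bind address, certificate path, and credentials are placeholders):

```haproxy
listen stats
    # Bind only on an internal address, with TLS enforced
    bind 10.0.0.5:8404 ssl crt /etc/haproxy/certs/stats.pem
    mode http
    stats enable
    stats uri /stats
    # Don't advertise the HAProxy version on the page
    stats hide-version
    # Basic Auth over TLS; use a long, unique password
    stats auth admin:use-a-long-random-password
```

Reaching 10.0.0.5 should itself require a VPN or SSH tunnel, per the third recommendation; the listener hardening is a second layer, not a substitute.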

    For larger environments, HAProxy Fusion offers a more modern approach. Instead of checking individual raw stats pages, HAProxy Fusion provides a centralized, RBAC-secured control plane. This gives you high-level observability across your entire fleet.

    3. Startup stability

    The auditors identified that specific malformed configuration values (like tune.maxpollevents) could cause a segmentation fault during startup.

    While these were startup issues that did not affect live runtime traffic, the issue was identified and fixed immediately, and the fix was released the week following the preliminary report. This is the power of open source and active maintenance—issues are found and squashed rapidly.

    Power, trust, and freedom

    This audit reinforces the core pillars of our approach:

    • Power: Power is not just speed, but also the ability to withstand pressure. The exhaustive fuzzing tests prove that HAProxy is an engine built not just to run fast, but to run without disruption.

    • Trust: The fact that the auditors found zero vulnerabilities in the source code is a massive validation, but it isn't a coincidence. It is a testament to our Open Source DNA. Trust is earned through transparency, peer review, the continuous scrutiny of a global community, and professional security researchers.

    • Freedom: The "findings" regarding configuration remind us that HAProxy offers infinite flexibility. You have the freedom to configure it exactly as your infrastructure needs, but that freedom requires understanding your configuration choices.

    Conclusion: deploy with confidence

    The audit concludes that HAProxy 3.2 is "very mature" and "reliable for production".

    We are committed to maintaining these high standards. We don't claim our code is flawless (no serious developer does). But we do claim that our focus on extreme performance never compromises our secure coding practices.

    Next steps for users:

    • Upgrade: We recommend all users upgrade to the latest HAProxy 3.2+ to benefit from the latest hardening and fixes.

    • Review: Audit your own configurations. Are you using "Deny" rules on paths? Consider switching to the standard positive security model.

    • Explore: If the complexity of manual hardening feels daunting, explore HAProxy One. It provides the same robust engine but adds the guardrails to simplify security at scale.

    ]]> Zero crashes, zero compromises: inside the HAProxy security audit appeared first on HAProxy Technologies.]]>