HAProxy and Varnish are two great open-source software projects that aim to improve the performance, resilience, and scalability of web applications. These two projects are not competitors. Instead, they can work together to make any web infrastructure more agile and robust at the same time.

In this article, I’m going to explain how to use both of them on a web application hosted on a single domain name.

Best of Each: Benefits of HAProxy & Varnish

Benefits of using HAProxy:

  • It's a real load balancer with smart persistence.

  • Request queueing: When all of the servers are processing their maximum number of requests, incoming requests queue up in HAProxy until a server slot becomes available.

  • Transparent proxying: HAProxy can be configured to spoof the client IP address when establishing the TCP connection to the server.
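As a sketch of these two features (the backend name and server addresses below are placeholders), maxconn caps each server's concurrency so that excess requests wait in HAProxy's queue, while source ... usesrc clientip spoofs the client IP towards the servers:

  backend bk_app
    mode http
    balance roundrobin
    # beyond 100 concurrent requests per server, HAProxy queues the excess
    server app1 203.0.113.10:80 check maxconn 100
    server app2 203.0.113.11:80 check maxconn 100
    # connect to the servers with the client's own IP (transparent proxying)
    source 0.0.0.0 usesrc clientip

Note that transparent proxying also requires TPROXY support in the kernel and routing arranged so the servers' return traffic flows back through HAProxy.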

Benefits of using Varnish:

  • Cache server with stale content delivery.

  • Content compression: When retrieving content from the backend, Varnish has the ability to compress it using gzip and distribute it in a compressed format.

  • Edge Side Includes (ESI): A programming language that enables the integration of segments of web pages into other web pages.
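For instance, a mostly static page can embed a per-user fragment through an ESI tag (the URLs below are illustrative); Varnish fetches the fragment and inlines it into the cached page:

  <!-- static page, cached for a long time by Varnish -->
  <html>
    <body>
      <h1>Product catalog</h1>
      <!-- Varnish replaces this tag with the content of the fragment -->
      <esi:include src="/fragments/cart.php"/>
    </body>
  </html>

In Varnish 3, ESI processing is enabled per response with set beresp.do_esi = true; in vcl_fetch.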


The following functionalities are common to both HAProxy and Varnish:

  • Content switching

  • URL rewriting

  • DDOS protection

So if we need any of them, we can use either HAProxy or Varnish.

Why a Single Domain?

In a web application, there are two types of content: static and dynamic.

Dynamic content is content that is generated in real-time and customized for a specific user based on their current browsing behavior within the application. Conversely, any content that does not fit this description can be classified as static. 


A page that is generated by PHP and whose content changes every few minutes or seconds (such as the CMS WordPress or Drupal) can be considered a 'pseudo-static' page.

The biggest strength of Varnish is that it can cache static objects, deliver them on behalf of the server, and offload most of the traffic from the server.

An object is identified by its Host header and URL. When you have a single domain name, you have a single Host header for all your requests: static, pseudo-static, or dynamic. This means you can't split traffic by hostname: every request must arrive at a single type of device, whether that's a load balancer or a cache.

A good practice to separate dynamic and static content is to use one domain name per type of object: www.domain.tld for dynamic and static.domain.tld for static content. By doing this, you can forward dynamic traffic to the load balancer and static traffic to the caches directly.

Now, I guess you understand that the web application host naming can have an impact on the platform you’re going to build.

In this article, I’ll only focus on applications using a single domain name. We’ll see how we can route traffic to the right product despite the limitation of using a single domain name.

Available Architectures

If we consider the "web application" as a single component referred to as "appserver," then there are two primary architectures to choose from:



Using HAProxy in Front of Varnish (Pros & Cons)

Pros:

  • HAProxy's smart load-balancing algorithms, such as uri and url_param, make the Varnish cache more efficient while also improving the hit rate.

  • HAProxy makes the Varnish layer scalable since it's load-balanced.

  • HAProxy protects Varnish during its startup ramp-up.

  • HAProxy can protect against DDOS and slow loris attacks.

  • Varnish can be used as a WAF.

Cons:

  • There isn't an easy way to do application layer persistence.

  • HAProxy's queueing system can hardly protect the application hidden by Varnish.

  • The client IP must be forwarded in the X-Forwarded-For header (or any header you choose).

Using Varnish in Front of HAProxy (Pros & Cons)

Pros:

  • HAProxy provides smart layer 7 persistence.

  • HAProxy's layer is scalable (with persistence preserved) when it's load-balanced by Varnish.

  • HAProxy protects the APPSERVER through request queuing.

  • Varnish can be used as a WAF.

  • HAProxy can utilize the client IP address (provided by Varnish in an HTTP header) for transparent proxying (connecting to the appserver with the client IP).

Cons:

  • HAProxy, sitting behind Varnish, is unable to protect against DDOS attacks; Varnish must handle them on its own.

  • The cache size must be large enough to store all objects.

  • Varnish's layer is not scalable.

So, Which Architecture Should You Choose?

Rather than selecting the lesser of two unfavorable architectures, it is better to develop a platform that does not have any drawbacks.

The Architecture

The diagram below shows the architecture we’re going to build and work on.

[Diagram: HAProxy and Varnish platform]


  • H: HAProxy load balancer (could be the Aloha Load Balancer or any other homemade one).

  • V: Varnish servers

  • S: Web application servers (Tomcat, JBoss, etc.)

  • C: Client or end-user

The primary functions of each layer:

HAProxy:

  • Layer 7 traffic routing

  • DDOS protection (syn flood, slow loris, etc.)

  • application request flow optimization

Varnish:

  • Caching

  • Compression

  • Could be used later as a WAF to protect the application

Appserver:

  • hosts the application and static content

Client:

  • browses and uses the web application

Traffic Flow

Basically, the client sends all requests to HAProxy; then HAProxy, based on the URL or file extension, makes a routing decision:

  • If the request looks like one for a (pseudo-)static object, HAProxy forwards it to Varnish. If Varnish misses the object, it uses HAProxy to get the content from the server.

  • All other requests will be sent to the appserver. If we’ve done our job properly, this should be only dynamic traffic.


I don’t want to use Varnish as the default option in the flow because dynamic content could be cached, which could lead to somebody’s personal information being sent to everybody.

Furthermore, in case of massive misses or requests purposely built to bypass the caches, I don't want the servers to be hammered by Varnish. This way, HAProxy protects them with tight traffic regulation between Varnish and the appservers.

Dynamic Traffic Flow

The platform should route requests that require dynamic content in accordance with the diagram presented below:

[Diagram: dynamic content traffic flow through HAProxy]


  1. The client sends its request to HAProxy.

  2. HAProxy chooses a server based on cookie persistence or a load-balancing algorithm if there is no cookie.

  3. The server processes the request and sends the response back to HAProxy, which forwards it to the client.

Static Traffic Flow

The platform should route requests that require static content in accordance with the diagram presented below:

[Diagram: static content traffic flow through HAProxy and Varnish]


  1. The client sends a request to HAProxy, which recognizes that it is seeking static content.

  2. HAProxy forwards the request to Varnish. If Varnish has the object in the cache (a "HIT"), it sends it directly to HAProxy.

  3. If Varnish doesn’t have the object in the cache or if the cache has expired, then Varnish forwards the request to HAProxy.

  4. HAProxy chooses a server using round-robin. The response goes back to the client through Varnish.


In the event of a "MISS," the process can become rather cumbersome. However, it has been designed in this manner to leverage HAProxy's traffic regulation features to prevent Varnish from overwhelming the servers. Additionally, as Varnish is only responsible for handling static content, its HIT rate is over 98%, resulting in minimal overhead and improved protection.

What are the Pros of Using This Architecture?

  • Utilizing smart load-balancing algorithms like uri and url_param can increase the efficiency of Varnish caching and improve the hit rate.

  • Making the Varnish layer scalable.

  • Startup protection for Varnish and APPSERVER, allowing server reboot or farm expansion even under heavy load.

  • HAProxy can protect against DDOS and slow loris attacks.

  • Smart layer 7 persistence with HAProxy.

  • APPSERVER protection through HAProxy request queueing.

  • HAProxy can use the client IP address to do transparent proxying (getting connected to the APPSERVER with the client IP).

  • Cache farm failure is detected and traffic is routed directly to the application servers, which is essential for worst-case management.

  • Load-balancing any type of TCP-based protocol hosted on APPSERVER is possible.

What are the Cons of Using This Architecture?

There are a few “non-blocking” issues:

  • Scaling the HAProxy layer is challenging and requires the use of two crossed Virtual IPs that are declared in the DNS.

  • Using Varnish as a web application firewall (WAF) is not feasible because it only sees static traffic passing through. However, this limitation can be addressed with an easy update.


HAProxy Configuration

On Aloha, the global section is already set up for you, and the HAProxy stats socket is available at /var/run/haproxy.stats.

global
  stats socket /var/run/haproxy.stats level admin
  log local3

# default options
defaults
  option http-server-close
  mode http
  log global
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# HAProxy's stats
listen stats
  stats enable
  stats hide-version
  stats uri     /
  stats realm   HAProxy Statistics
  stats auth    admin:admin

# main frontend dedicated to end users
frontend ft_web
  acl static_content path_end .jpg .gif .png .css .js .htm .html
  acl pseudo_static path_end .php ! path_beg /dynamic/
  acl image_php path_beg /images.php
  acl varnish_available nbsrv(bk_varnish_uri) ge 1
  # Caches health detection + routing decision
  use_backend bk_varnish_uri if varnish_available static_content
  use_backend bk_varnish_uri if varnish_available pseudo_static
  use_backend bk_varnish_url_param if varnish_available image_php
  # dynamic content or all caches are unavailable
  default_backend bk_appsrv

# appsrv backend for dynamic content
backend bk_appsrv
  balance roundrobin
  # app servers must say if everything is fine on their side
  # and they can process requests
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  # Transparent proxying using the client IP from the TCP connection
  source 0.0.0.0 usesrc clientip
  server s1 cookie s1 check maxconn 250
  server s2 cookie s2 check maxconn 250

# static backend with balance based on the uri, including the query string
# to avoid caching an object on several caches
backend bk_varnish_uri
  balance uri # in latest HAProxy version, one can add 'whole' keyword
  # Varnish must tell it's ready to accept traffic
  option httpchk HEAD /varnishcheck
  http-check expect status 200
  # client IP information
  option forwardfor
  # avoid request redistribution when the number of caches changes (crash or start up)
  hash-type consistent
  server varnish1 check maxconn 1000
  server varnish2 check maxconn 1000

# cache backend with balance based on the value of the URL parameter called "id"
# to avoid caching an object on several caches
backend bk_varnish_url_param
  balance url_param id
  # client IP information
  option forwardfor
  # avoid request redistribution when the number of caches changes (crash or start up)
  hash-type consistent
  server varnish1 maxconn 1000 track bk_varnish_uri/varnish1
  server varnish2 maxconn 1000 track bk_varnish_uri/varnish2

# frontend used by Varnish servers when updating their cache
frontend ft_web_static
  monitor-uri /haproxycheck
  # Tells Varnish to stop asking for static content when servers are dead
  # Varnish will then deliver stale content from its cache
  monitor fail if nbsrv(bk_appsrv_static) eq 0
  default_backend bk_appsrv_static

# appsrv backend used by Varnish to update their cache
backend bk_appsrv_static
  balance roundrobin
  # anything different than a status code 200 on the URL /staticcheck.txt
  # must be considered as an error
  option httpchk HEAD /staticcheck.txt
  http-check expect status 200
  # Transparent proxying using the client IP provided by the X-Forwarded-For header
  source 0.0.0.0 usesrc hdr_ip(X-Forwarded-For)
  server s1 check maxconn 50 slowstart 10s
  server s2 check maxconn 50 slowstart 10s

Varnish Configuration

The VCL below uses Varnish 3 syntax.

backend bk_appsrv_static {
        .host = "";
        .port = "80";
        .connect_timeout = 3s;
        .first_byte_timeout = 10s;
        .between_bytes_timeout = 5s;
        .probe = {
                .url = "/haproxycheck";
                .expected_response = 200;
                .timeout = 1s;
                .interval = 3s;
                .window = 2;
                .threshold = 2;
                .initial = 2;
        }
}

# hosts allowed to send PURGE requests
# (the entry below is a placeholder; adjust it to your own admin hosts)
acl purge {
        "localhost";
}

sub vcl_recv {
### Default options

        # Health Checking
        if (req.url == "/varnishcheck") {
                error 751 "health check OK!";
        }

        # Set default backend
        set req.backend = bk_appsrv_static;

        # grace period (stale content delivery while revalidating)
        set req.grace = 30s;

        # Purge request
        if (req.request == "PURGE") {
                if (!client.ip ~ purge) {
                        error 405 "Not allowed.";
                }
                return (lookup);
        }

        # Accept-Encoding header clean-up
        if (req.http.Accept-Encoding) {
                # use gzip when possible, otherwise use deflate
                if (req.http.Accept-Encoding ~ "gzip") {
                        set req.http.Accept-Encoding = "gzip";
                } elsif (req.http.Accept-Encoding ~ "deflate") {
                        set req.http.Accept-Encoding = "deflate";
                } else {
                        # unknown algorithm, remove accept-encoding header
                        unset req.http.Accept-Encoding;
                }

                # Microsoft Internet Explorer 6 is well known to be buggy with compression on css / js
                if (req.url ~ "\.(css|js)" && req.http.User-Agent ~ "MSIE 6") {
                        remove req.http.Accept-Encoding;
                }
        }

### Per host/application configuration
        # bk_appsrv_static
        # Stale content delivery
        if (req.backend.healthy) {
                set req.grace = 30s;
        } else {
                set req.grace = 1d;
        }

        # Cookie ignored in these static pages
        unset req.http.cookie;

### Common options
         # Static objects are first looked up in the cache
        if (req.url ~ "\.(png|gif|jpg|swf|css|js)(\?.*|)$") {
                return (lookup);
        }

        # if we arrive here, we look for the object in the cache
        return (lookup);
}

sub vcl_hash {
        hash_data(req.url);
        if (req.http.host) {
                hash_data(req.http.host);
        } else {
                hash_data(server.ip);
        }
        return (hash);
}

sub vcl_hit {
        # Purge
        if (req.request == "PURGE") {
                set obj.ttl = 0s;
                error 200 "Purged.";
        }

        return (deliver);
}

sub vcl_miss {
        # Purge
        if (req.request == "PURGE") {
                error 404 "Not in cache.";
        }

        return (fetch);
}

sub vcl_fetch {
        # Stale content delivery
        set beresp.grace = 1d;

        # Hide Server information
        unset beresp.http.Server;

        # Store compressed objects in memory
        # They would be uncompressed on the fly by Varnish if the client doesn't support compression
        if (beresp.http.content-type ~ "(text|application)") {
                set beresp.do_gzip = true;
        }

        # remove any cookie on static or pseudo-static objects
        unset beresp.http.set-cookie;

        return (deliver);
}

sub vcl_deliver {
        unset resp.http.via;
        unset resp.http.x-varnish;

        # could be useful to know if the object was in cache or not
        if (obj.hits > 0) {
                set resp.http.X-Cache = "HIT";
        } else {
                set resp.http.X-Cache = "MISS";
        }

        return (deliver);
}

sub vcl_error {
        # Health check
        if (obj.status == 751) {
                set obj.status = 200;
                return (deliver);
        }
}
