HTTP/2 is the first major successor to the Hypertext Transfer Protocol (HTTP). It enables reliable communication between clients and servers at the application layer (Layer 7) of the Open Systems Interconnection (OSI) model, focusing on improved resource efficiency and throughput compared with HTTP/1.1.
The HTTP Working Group, part of the Internet Engineering Task Force (IETF), introduced HTTP/2 in 2015, 18 years after the release of HTTP/1.1, which is still widely used today (though most major websites now support HTTP/2). HTTP/2's performance improvements have grown increasingly important for applications that need greater speed and scalability. The protocol was originally standardized as RFC 7540 and is currently defined by RFC 9113.
How does HTTP/2 work?
Following the widespread adoption of HTTP/1.1, the comparatively simple websites users were accustomed to grew steadily more complex. Expanding feature sets, surging traffic, and rising average load times meant that a faster protocol was necessary. Enter HTTP/2.
Binary framing
HTTP/2 requests work a little differently than HTTP/1.1 requests, mainly because of HTTP/2's binary framing layer. While HTTP/1.1 messages are sent as plain text, HTTP/2 messages are broken into small, easily transmittable binary units called "frames," which form a bidirectional communication stream over a single TCP connection. This connection remains open for the duration of the session and readily accepts repeated requests.
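As a rough sketch of what the binary framing layer looks like on the wire, the fixed 9-octet frame header defined by RFC 9113 (a 24-bit length, an 8-bit type, an 8-bit flags field, and a 31-bit stream identifier) can be decoded like this; the sample bytes are a made-up HEADERS frame, not captured traffic:

```python
import struct

def parse_frame_header(data: bytes):
    """Decode the fixed 9-octet HTTP/2 frame header (RFC 9113, Section 4.1)."""
    if len(data) < 9:
        raise ValueError("an HTTP/2 frame header is 9 octets")
    # Length is a 24-bit big-endian integer; pad with a zero byte to unpack as 32-bit.
    length = struct.unpack(">I", b"\x00" + data[:3])[0]
    frame_type, flags = data[3], data[4]
    # The stream identifier is 31 bits; the top bit is reserved and must be masked off.
    stream_id = struct.unpack(">I", data[5:9])[0] & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A hypothetical HEADERS frame (type 0x1) with the END_HEADERS flag (0x4)
# and a 12-octet payload, on stream 1:
header = b"\x00\x00\x0c" + b"\x01" + b"\x04" + b"\x00\x00\x00\x01"
print(parse_frame_header(header))  # (12, 1, 4, 1)
```

Because every frame carries its stream identifier, frames from different requests can be freely interleaved on the one connection and reassembled at the other end.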
This new design enabled multiplexing in HTTP/2, which allows clients to issue many concurrent requests at once (up to the concurrent-stream limit the server advertises) without waiting for each response to come back first. HTTP/2 server responses can also arrive in any order, not strictly the order in which the requests came in. This eliminates the application-layer head-of-line blocking of HTTP/1.1, though head-of-line blocking at the TCP layer can still occur.
Flow control
Previously, a server had to process client requests one at a time, in the order they were received, and return responses in the same fashion. This resulted in a lot of waiting and inefficiency, which in turn added latency. Applications and websites could quickly become less responsive, and the problem only worsened as traffic scaled upward.
Additionally, a flow control mechanism further reduces blocking by letting each receiver (a server, proxy, or load balancer) explicitly advertise how much data it's willing to buffer. Senders pause once that window is exhausted and resume when the receiver frees capacity, instead of the receiver having to absorb an entire request payload before doing anything else.
HTTP/2 enforces flow control both on individual data streams and on the connection as a whole, which can carry multiple streams simultaneously. This prevents streams from starving one another of buffer space. While HTTP/1.1's pipelining feature helped address some early performance issues, it's not as effective as HTTP/2 multiplexing because responses must still arrive whole and in order. By contrast, HTTP/2 responses can arrive as interleaved chunks that don't respect any specific order.
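The window accounting described above can be sketched in a few lines. This is an illustrative sender-side model, not a full RFC 9113 implementation; the default 65,535-octet initial window comes from the spec, while the class and method names are invented for the example:

```python
class StreamFlowControl:
    """Toy sender-side flow-control window for a single HTTP/2 stream."""

    def __init__(self, initial_window: int = 65535):  # RFC 9113 default
        self.window = initial_window

    def can_send(self, nbytes: int) -> bool:
        """A sender may only transmit DATA that fits in the peer's window."""
        return nbytes <= self.window

    def record_sent(self, nbytes: int) -> None:
        """Deduct transmitted DATA bytes from the remaining window."""
        if not self.can_send(nbytes):
            raise RuntimeError("would overrun the peer's advertised window")
        self.window -= nbytes

    def window_update(self, increment: int) -> None:
        """Peer sent a WINDOW_UPDATE frame after freeing buffer capacity."""
        self.window += increment

fc = StreamFlowControl()
fc.record_sent(60000)
print(fc.can_send(10000))  # False: only 5,535 bytes of window remain
fc.window_update(16384)
print(fc.can_send(10000))  # True once capacity is freed
```

A real implementation maintains one such window per stream plus one for the whole connection, and a frame must fit in both before it can be sent.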
Prioritization and more
A key challenge with HTTP/1.1 gradually emerged as webpages became more complex. The need to serve hundreds of individual resources (scripts, images, HTML, etc.), combined with synchronous processing, prevented websites from loading content in an optimized order.
For example, a large script or element — depending on its placement in the DOM — could take ages to load and block other portions of the page from loading. This increased first contentful paint (FCP) times and threatened to drive users away.
HTTP/2 allows for stream prioritization, enabling developers (and the browser) to indicate which resources should be delivered first. This doesn't negate the asynchronous delivery so critical to HTTP/2's performance, but it does boost flexibility by assigning each data stream a weight (an integer from 1 to 256 in the original scheme). It also gives developers more control over resource utilization from one stream to the next.
However, early adopters often struggled to implement prioritization reliably, and browser support has historically been spotty. When prioritization signals are available, a child stream that depends on a parent stream receives resources after that parent, with siblings sharing capacity in proportion to their weights; when they aren't, backend resources are simply allocated evenly between streams. The original dependency-tree scheme proved complex enough that RFC 9113 deprecated it in favor of the simpler Extensible Priorities mechanism (RFC 9218).
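To make the weighted sharing concrete, here is a minimal sketch of how capacity would be split among sibling streams in proportion to their weights; the stream names and the specific weights are hypothetical:

```python
def bandwidth_shares(weights: dict) -> dict:
    """Split available capacity among sibling streams in proportion to
    their weights (1-256 in the original RFC 7540 priority scheme)."""
    total = sum(weights.values())
    return {stream: weight / total for stream, weight in weights.items()}

# Hypothetical sibling streams: a render-blocking stylesheet weighted
# three times heavier than a decorative image.
shares = bandwidth_shares({"style.css": 192, "hero.jpg": 64})
print(shares)  # {'style.css': 0.75, 'hero.jpg': 0.25}
```

With equal weights the function degenerates to the even split the text describes for streams that carry no prioritization signal.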
Beyond that, HTTP/2 also introduced handy features such as HPACK header compression, which shrinks repetitive headers to reduce payloads, and server push, which lets servers send clients important resources (CSS, JavaScript, and other rendering components) before they explicitly request them.
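The intuition behind header compression is that browsers resend nearly identical headers on every request. The sketch below is a toy indexing table, not real HPACK (which adds a predefined static table and Huffman coding), showing how a repeated header field collapses to a small index reference:

```python
class ToyHeaderTable:
    """Illustrative header-indexing table (not real HPACK): once a header
    field has been sent, later requests can reference it by index instead
    of resending the full name and value."""

    def __init__(self):
        self.table = []  # list of (name, value) entries seen so far

    def encode(self, name: str, value: str):
        entry = (name, value)
        if entry in self.table:
            # Already sent once: emit a tiny index reference.
            return ("index", self.table.index(entry))
        # First occurrence: send the literal field and remember it.
        self.table.append(entry)
        return ("literal", name, value)

enc = ToyHeaderTable()
print(enc.encode("user-agent", "curl/8.5"))  # ('literal', 'user-agent', 'curl/8.5')
print(enc.encode("user-agent", "curl/8.5"))  # ('index', 0) on the next request
```

Since fields like user-agent, cookies, and accept headers rarely change between requests on the same connection, most of each request's header block reduces to a handful of index references.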
Does HAProxy support HTTP/2?
Yes! All HAProxy products support HTTP/2. We can load balance any HTTP/2 application using a variety of load-balancing algorithms, while supporting core performance features such as multiplexing, connection pooling, and compression. HAProxy Enterprise also supports applications using any version of the HTTP protocol.
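As an illustrative configuration fragment (the frontend/backend names, certificate path, and server address are placeholders), enabling HTTP/2 in HAProxy is typically a matter of advertising h2 via ALPN on a TLS bind line, optionally speaking HTTP/2 to the backend as well:

```
frontend fe_main
    # Offer HTTP/2 to clients via ALPN, falling back to HTTP/1.1
    bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
    default_backend be_app

backend be_app
    # Optionally use HTTP/2 on the server side too
    server app1 192.0.2.10:8080 proto h2
```

Because HAProxy terminates the protocol on each side independently, an HTTP/2 client can be load balanced to an HTTP/1.1 backend (or vice versa) without application changes.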
To learn more about HTTP/2 support in HAProxy, check out our Configuration Manual or our blog post, Your Comprehensive Guide to HAProxy Protocol Support.