Flow control is a networking mechanism that dictates data transmission rates between devices or network nodes — thus stopping the recipient from becoming overwhelmed during processing. Otherwise, a performance bottleneck can occur when a device accepts more data packets than it can successfully unpack and reassemble. This can cause the sender to resend packets later and use bandwidth that could have been better allocated.
A form of throttling, flow control boosts performance by preventing resource exhaustion. A recipient machine may store extra messages it receives in temporary memory (or a buffer) to prevent bottlenecking, and flow control can play a role in keeping this buffer from overflowing.
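The buffer-and-backpressure idea above can be sketched in a few lines. This is an illustrative in-process model, not a real network stack: a bounded queue stands in for the receiver's buffer, and a blocked `put()` stands in for the flow-control signal that pauses the sender.

```python
import queue
import threading

# A bounded queue models the receiver's buffer. When it fills up,
# put() blocks -- the sender is effectively paused until the receiver
# drains some messages, so the buffer can never overflow.
buf = queue.Queue(maxsize=8)      # small buffer, on purpose

def sender(messages):
    for msg in messages:
        buf.put(msg)              # blocks (pauses) when the buffer is full

def receiver(count, out):
    for _ in range(count):
        out.append(buf.get())     # draining frees space, resuming the sender

msgs = list(range(100))           # far more messages than the buffer holds
out = []
t = threading.Thread(target=sender, args=(msgs,))
r = threading.Thread(target=receiver, args=(len(msgs), out))
t.start(); r.start(); t.join(); r.join()
print(out == msgs)                # True: nothing was dropped
```

Despite the buffer holding only 8 items at a time, all 100 messages arrive intact and in order, because the sender is throttled rather than allowed to overrun the receiver.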
A given node (such as a sender) can often fulfill its data-processing functions faster than another (such as a recipient), creating the processing imbalance that flow control corrects. When this scales to environments with a seemingly innumerable quantity of connected devices (like a hospital or IoT network), flow control becomes that much more critical to preserving sustained network performance.
Flow control can have a receiver (such as a network switch with a lower-speed uplink) either pause all traffic over the channel for a specified time, or pause traffic only for specific quality-of-service (QoS) classes. The latter approach delays lower-priority traffic, such as large file downloads that won't notice a slight delay, giving the receiver a chance to catch up without disrupting latency-sensitive transmissions such as VoIP.
Latency and throughput aside, we can also measure overall performance by tracking data loss and retransmission rates, which this kind of mechanism aims to reduce. It also works across wireless networks and wired (Ethernet) networks alike. Flow control not only makes transmission more efficient, but helps level the playing field across networks with a variety of device types and specifications.
The concept of flow control emerged in the 1970s and evolved in lockstep with the early internet and TCP/IP stack. Baked into some early internet protocols, including those used ubiquitously today (such as TCP), it's an important feature that helps conserve precious network bandwidth.
How does flow control work?
There are two primary ways to implement flow control. The ideal choice will depend on your networking requirements, aspirational performance and QoS benchmarks, and the topology of your network.
Stop-and-wait flow control
This method of transmission sends data packets in individual frames, splitting the overall payload into digestible pieces that the receiver can process more efficiently. When the sender transmits a frame to the recipient, it waits for an ACK (acknowledgement) frame to be sent back confirming successful transmission.
This process repeats until the entire payload is transmitted successfully, while respecting preconfigured timeouts. It works synchronously because each frame is processed in the order in which it arrives. This built-in pause is great for reducing data loss and preserving data integrity, but sacrifices some processing speed as a result.
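The stop-and-wait loop can be sketched as follows. This is a simplified simulation (function names and the channel model are invented for the example): the sender transmits one frame, blocks until an ACK comes back, and retries on a simulated loss.

```python
import random

def send_stop_and_wait(frames, max_retries=3, loss_rate=0.0):
    """Deliver frames one at a time, waiting for an ACK after each."""
    delivered = []
    for seq, frame in enumerate(frames):
        for _attempt in range(max_retries + 1):
            ack = receive_frame(seq, frame, loss_rate)  # simulated channel
            if ack == seq:            # ACK carries the sequence number
                delivered.append(frame)
                break                 # move on to the next frame
        else:
            raise TimeoutError(f"frame {seq} never acknowledged")
    return delivered

def receive_frame(seq, frame, loss_rate):
    """Simulated receiver: occasionally 'loses' a frame (no ACK)."""
    if random.random() < loss_rate:
        return None                   # lost -> sender must retry
    return seq                        # ACK for this sequence number

payload = ["hello", "flow", "control"]
print(send_stop_and_wait(payload))    # -> ['hello', 'flow', 'control']
```

The strictly alternating send/ACK pattern is what makes the method simple and lossless, and also what caps its throughput: at any moment, exactly one frame is in flight.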
Sliding window flow control
Conversely, sliding window flow control lets the sender transmit multiple frames at a time, as opposed to just one. The sender does not stop and wait for an ACK after each frame; instead, the receiver can acknowledge any frame in the current window.
This process occurs across a given window, during which a maximum number of frames can be in transit. While this enables faster processing, it requires the sender and receiver to carefully track which frames arrive (and when) to confirm the integrity of the entire stream. Error handling is therefore a little more complex, but this method offers a good compromise between speed and reliability.
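A minimal sketch of the window mechanics, assuming a lossless channel where the oldest in-flight frame is always the next one acknowledged (names are invented for the example):

```python
from collections import deque

def sliding_window_send(frames, window_size=4):
    """Return the interleaved order of transmissions and ACKs."""
    in_flight = deque()   # sequence numbers sent but not yet ACKed
    log = []
    next_seq = 0
    while next_seq < len(frames) or in_flight:
        # Fill the window: keep sending while frames remain and there
        # is room for more unacknowledged frames in flight.
        while next_seq < len(frames) and len(in_flight) < window_size:
            log.append(("send", next_seq))
            in_flight.append(next_seq)
            next_seq += 1
        # Simulate the ACK for the oldest outstanding frame arriving,
        # which slides the window forward by one position.
        acked = in_flight.popleft()
        log.append(("ack", acked))
    return log

log = sliding_window_send(["a", "b", "c", "d", "e"], window_size=2)
# With window_size=2, two frames go out before the first ACK returns,
# and each subsequent ACK frees room to send one more frame.
```

Unlike stop-and-wait, transmissions and acknowledgements overlap, which is where the throughput gain comes from.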
Sliding window flow control also uses some unique methods to ensure data integrity. The first, called go-back-N, asks the recipient to process an entire window of sent frames (never exceeding the limit) before accepting the next group. If a transmission error occurs, the sender goes back and retransmits the erroneous frame along with every frame sent after it before moving on.
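The "rewind and resend from the error" behavior can be sketched like this (an assumed channel model where one chosen frame is lost exactly once, not a real protocol implementation):

```python
def go_back_n_send(frames, window_size=3, drop_once_at=None):
    """Log every transmission; drop_once_at simulates one lost frame."""
    base = 0            # sequence number of the oldest unACKed frame
    sends = []          # transmission log (sequence numbers)
    dropped = False
    while base < len(frames):
        # Transmit the current window of frames.
        window = range(base, min(base + window_size, len(frames)))
        lost = None
        for seq in window:
            sends.append(seq)
            if seq == drop_once_at and not dropped:
                dropped = True
                lost = seq
                break   # receiver discards everything after the loss
        if lost is None:
            base = window.stop   # whole window ACKed; slide forward
        else:
            base = lost          # go back: resend from the lost frame on
    return sends

# Frame 1 is lost once, so frames 1 and 2 are both transmitted twice.
print(go_back_n_send(["a", "b", "c"], window_size=3, drop_once_at=1))
# -> [0, 1, 1, 2]
```

The cost of go-back-N is visible in the log: one lost frame forces the retransmission of every frame that followed it, even ones the receiver may have seen correctly.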
The second, called selective repeat, resends only the impacted frames when errors occur. The first frame sent doesn't necessarily trigger an ACK from the recipient, but subsequent frames within a group can. By assessing these ACK replies in conjunction with any NACK (negative acknowledgement) replies, the sender can determine which individual frames are interrupting packet reassembly and quickly mitigate transmission problems.
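The contrast with go-back-N can be shown with a small counting sketch (a simplified model with invented names; the set of NACKed frames is given up front rather than discovered over a channel):

```python
def selective_repeat(frames, nacked_first_try):
    """Return how many times each frame is transmitted.

    nacked_first_try: sequence numbers whose first transmission
    fails and draws a NACK from the receiver.
    """
    # First pass: every frame in the window goes out once.
    transmit_counts = {seq: 1 for seq in range(len(frames))}
    # Recovery: only the NACKed frames are sent again. Frames that
    # arrived intact stay buffered at the receiver until the gaps fill.
    for seq in nacked_first_try:
        transmit_counts[seq] += 1
    return transmit_counts

# Frames 1 and 3 are lost once; frames 0, 2, and 4 go out exactly once.
print(selective_repeat(["a", "b", "c", "d", "e"], {1, 3}))
# -> {0: 1, 1: 2, 2: 1, 3: 2, 4: 1}
```

Compared with go-back-N, the retransmission count stays minimal, at the price of the receiver needing to buffer out-of-order frames and reassemble the stream once the gaps are filled.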
Does HAProxy support flow control?
Yes! HAProxy One delivers robust protocol support — including those with built-in flow-control mechanisms, such as HTTP, QUIC, and TCP. Working as a load balancer providing flexible application delivery, HAProxy One lets users configure the timeouts, request limits, and data transmission mechanisms that comprise flow control.
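As a minimal illustration, timeouts and connection limits like these shape how data flows through the proxy. The directive names are real HAProxy configuration keywords, but the values and section names here are arbitrary examples, not tuning recommendations:

```
defaults
    mode http
    timeout client  30s   # how long to wait for data from the client
    timeout server  30s   # how long to wait for data from the server

frontend web
    bind :80
    maxconn 2000          # cap on concurrent connections accepted
    default_backend app
```

Lowering these values sheds slow or stalled connections sooner, while raising them tolerates slower peers at the cost of holding resources longer.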
To learn more about flow control in HAProxy, check out our blog, Introduction to Traffic Shaping Using HAProxy.