Load Balancers at DigitalOcean
In this presentation, Neal Shrader describes how DigitalOcean leverages HAProxy to power several key components within its infrastructure. First, HAProxy is used as a component of DigitalOcean’s Load Balancer-as-a-Service product. Second, it’s used as a frontend to the Regional Network Service, which is responsible for orchestrating changes within their software-defined network. Last, HAProxy is used for load balancing traffic to their edge gateways and public websites. HAProxy provides the redundancy and performance they need to satisfy both their internal infrastructure needs and the needs of their customers.
Hi everyone. Thank you so much for allowing me to talk today; it’s a pleasure to get to speak to you. So, I’m going to be talking about load balancers at DigitalOcean: basically, how we utilize HAProxy not only in our internal services but also externally, through our product offerings.
We use HAProxy extensively internally, both for internal services and in our product offerings.
The regional network service’s primary purpose is to orchestrate this full mesh of tunnels between members.
So, I’ll start with the regional network service. The regional network service is essentially the engine of software-defined networking at DigitalOcean. Its primary responsibility is orchestrating the overlay network that ensures tenant isolation on our private network. Now, an overlay is simply the process of encapsulating traffic at the source, routing it over a very simple IP fabric, and decapsulating it on the other side to be presented to the VM.
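The talk doesn’t name the encapsulation protocol, but assuming a VXLAN-style overlay (a common choice with Open vSwitch), the encapsulate/decapsulate idea can be sketched like this. The tenant’s virtual network identifier (VNI) is what keeps traffic isolated on the shared IP fabric:

```python
import struct

# VXLAN header: 8 bytes -- a flags field with the "valid VNI" bit set,
# then a 24-bit VNI. The VNI gives each tenant an isolated segment.
VXLAN_FLAGS = 0x08000000

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend a VXLAN header so the frame can ride the plain IP fabric."""
    header = struct.pack("!II", VXLAN_FLAGS, vni << 8)
    return header + inner_frame

def decapsulate(packet: bytes) -> tuple:
    """Strip the header on the far side; recover the tenant VNI and frame."""
    flags, vni_field = struct.unpack("!II", packet[:8])
    assert flags & VXLAN_FLAGS, "not a VXLAN packet"
    return vni_field >> 8, packet[8:]

frame = b"\x00" * 14 + b"tenant traffic"  # a stand-in Ethernet frame
vni, recovered = decapsulate(encapsulate(42, frame))
```

In practice this framing is done in the kernel or by Open vSwitch, not in application code; the sketch just shows why a "very simple IP fabric" suffices: the fabric only ever sees outer IP packets.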
Every single state change that happens inside of a user’s network becomes a one-to-many action. So, if a user creates a droplet, live-migrates one from one hypervisor to another, or destroys one, that change needs to be propagated out everywhere. The regional network service’s primary purpose is to orchestrate this full mesh of tunnels between members. We call it a virtual chassis, of which there’s only one today, but soon there will be many available.
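The one-to-many fan-out described above can be sketched as follows. This is a minimal illustration, not DigitalOcean’s implementation; the member names and the `apply_fn` callback are hypothetical stand-ins for pushing a tunnel-mesh update to each hypervisor in the chassis:

```python
from concurrent.futures import ThreadPoolExecutor

def propagate(change: dict, members: list, apply_fn) -> dict:
    """Fan a single tenant state change out to every chassis member."""
    with ThreadPoolExecutor(max_workers=16) as pool:
        # Each member applies the change independently; collect per-member status.
        results = pool.map(lambda m: (m, apply_fn(m, change)), members)
        return dict(results)

# e.g. a droplet creation must update the tunnel mesh on every hypervisor:
members = [f"hv-{i}" for i in range(4)]       # hypothetical member names
status = propagate({"event": "droplet.create", "droplet_id": 101},
                   members, lambda m, c: True)  # stub apply function
```

The essential point is that a single user action turns into N independent updates, which is why an orchestrator sits in the middle rather than hypervisors talking to each other directly.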
This is the general shape of the architecture itself.
On each hypervisor, we utilize Open vSwitch to express our data path, our pipeline. From there, we translate that message into OpenFlow, persist it in the data path, and then we’re able to encapsulate and decapsulate accordingly. There are also some ancillary services that are responsible for projecting the state of the chassis towards our user-facing services. So, for instance, we don’t necessarily have to round-trip to Bangalore to be able to say, “Okay, what’s the state of this chassis?”
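To make the translation step concrete, here is a toy rendering of a droplet placement as Open vSwitch flow rules. This is not DigitalOcean’s actual pipeline; the strings use `ovs-ofctl` add-flow syntax for readability, and the port names (`vxlan0`, `vm-port`) and field values are illustrative:

```python
def to_openflow(droplet: dict) -> list:
    """Render a droplet's placement as two illustrative OVS flow rules."""
    vni, mac, remote = droplet["vni"], droplet["mac"], droplet["remote_ip"]
    return [
        # Outbound: frames destined for the remote droplet get tagged with
        # the tenant's tunnel ID and sent out the tunnel toward its host.
        f"table=0,dl_dst={mac},actions=set_field:{vni}->tun_id,"
        f"set_field:{remote}->tun_dst,output:vxlan0",
        # Inbound: decapsulated frames carrying this tenant's tunnel ID
        # are delivered to the local VM port.
        f"table=0,tun_id={vni},dl_dst={mac},actions=output:vm-port",
    ]

rules = to_openflow({"vni": 42, "mac": "fa:16:3e:00:00:01",
                     "remote_ip": "10.0.0.2"})
```

A real controller would push these over the OpenFlow protocol rather than as CLI strings, but the shape is the same: each state message becomes a small set of match/action rules in the datapath.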
This is what the general architecture looks like for an incoming request into our control plane.
So, the next evolution of Load Balancer.
Now, one of the complicating factors here is the integration into our existing software-defined network. In addition, we’re going to need to leverage Open vSwitch on Kubernetes as well, and land a daemon that we call connflow-d, which will watch the placement of these pods and ingest these ApplyVirtualMessages from our orchestration software. From there, we’ll be able to ensure connectivity into our backend droplets and orchestrate as we expect.
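The connflow-d watch loop described above might look something like the sketch below. Everything here is an assumption for illustration: the `PodEvent` shape, the `program_flow` callback, and the fake event stream standing in for the Kubernetes watch API:

```python
from dataclasses import dataclass, field
from typing import Iterable

@dataclass
class PodEvent:
    pod: str                      # load balancer pod that was placed
    node: str                     # Kubernetes node it landed on
    droplet_ips: list = field(default_factory=list)  # backends it must reach

def watch_placements(events: Iterable, program_flow) -> int:
    """Sketch of the connflow-d loop: for each observed pod placement,
    program overlay flows so the pod can reach its backend droplets."""
    programmed = 0
    for ev in events:
        for ip in ev.droplet_ips:
            program_flow(node=ev.node, pod=ev.pod, droplet_ip=ip)
            programmed += 1
    return programmed

# Driven by a fake event list instead of a live Kubernetes watch:
events = [PodEvent("lb-pod-1", "node-a", ["10.10.0.5", "10.10.0.6"])]
programmed = watch_placements(events, lambda **kw: None)  # stub programmer
```

The design point is that flow programming reacts to placement: wherever the scheduler lands a pod, the daemon stitches that node into the tenant’s overlay.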
That was a very quick tour of some of the ways we use HAProxy internally at DigitalOcean, and what I wanted to speak to you about today.