Introduction

At HAProxy Technologies, we develop and sell a hardware and virtual load balancer called ALOHA (which stands for Application Layer Optimisation and High-Availability). A few months ago, we managed to make it run on the most common hypervisors available:

  • VMware (ESX, vSphere)
  • Citrix XenServer
  • HyperV
  • Xen OpenSource
  • KVM

<ADVERTISEMENT>So whatever your hypervisor is, you can run an ALOHA on top of it 🙂</ADVERTISEMENT>

Since a load-balancer appliance is network I/O intensive, we thought it was a good opportunity to bench each hypervisor from a virtual network performance point of view.

Well, more and more companies use virtualization in their infrastructure, so we guessed that a lot of people would be interested in the results of this benchmark; that’s why we decided to publish them on our blog.

Things to bear in mind about virtualization

One of the interesting features of virtualization is the ability to consolidate several servers onto a single piece of hardware.
As a consequence, the resources (CPU, memory, disk and network I/O) are shared between several virtual machines.
Another point to take into account is that the hypervisor is a new “layer” between the hardware and the OS inside the VM, which means that it may have an impact on performance.

Purpose of benchmarking Hypervisors

First of all: WE ARE TOTALLY NEUTRAL AND HAVE NO INTEREST IN SAYING GOOD OR BAD THINGS ABOUT ANY HYPERVISOR.

Our main goal here is to check whether each hypervisor performs well enough to allow us to sell our virtual appliance on top of it.
From the tests we’ll run, we want to be able to measure the impact of the hypervisor on virtual machine performance.

Benchmark platform and procedure

To run these tests, we used the same server for all hypervisors, just swapping the hard drive so that each hypervisor ran independently.

The hypervisor hardware is summarized below:

  • CPU: quad-core Core i7 @ 3.4 GHz
  • Memory: 16 GB
  • Network card: 1 Gbps copper, Intel e1000e

NOTE: we benched some other network cards and got UGLY results (see the conclusion).
NOTE: there is a single VM running on the hypervisor: the Aloha.

The Aloha Virtual Appliance used is the Aloha VA 4.2.5, with 1 GB of memory and 2 vCPUs.
The client and WWW servers are physical machines plugged into the same LAN as the hypervisor.
The client tool is inject and the web server behind the Aloha VA is httpterm.
So basically, the only thing that will change during these tests is the Hypervisor.

The Aloha is configured in reverse-proxy mode (using HAProxy) between the client and the server, load-balancing and analyzing HTTP requests.
We focused mainly on virtual networking performance: the number of HTTP connections per second and associated bandwidth.
We ran the benchmark with different object sizes: 0, 1K, 2K, 4K, 8K, 16K, 32K, 48K, 64K.
NOTE: by “HTTP connection”, we mean a single HTTP request with its response over a single TCP connection, like in HTTP/1.0.
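
To make that definition concrete, here is a minimal sketch of what one such connection looks like from the client side. This is only an illustration, not the inject tool we actually used, and the host, port and path are placeholders:

```python
import socket
import time

# Placeholders: point these at the load balancer under test.
HOST, PORT, PATH = "192.0.2.10", 80, "/"

def one_http_connection():
    """One HTTP/1.0-style exchange: connect, send one request, read the reply, close."""
    with socket.create_connection((HOST, PORT), timeout=5) as s:
        s.sendall(f"GET {PATH} HTTP/1.0\r\nHost: {HOST}\r\n\r\n".encode())
        while s.recv(65536):  # drain the response until the server closes
            pass

def connections_per_second(duration=5.0):
    """Count how many such exchanges a single client thread completes per second."""
    done, deadline = 0, time.time() + duration
    while time.time() < deadline:
        one_http_connection()
        done += 1
    return done / duration

if __name__ == "__main__":
    print(f"{connections_per_second():.0f} connections/s (single thread)")
```

A real injection tool runs many of these exchanges in parallel, which is what it takes to saturate the appliance.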

Basically, the 0K object test is used to get the number of connections per second the VA can do, and the 64K object test is used to measure the maximum bandwidth.

NOTE: the maximum bandwidth will be 1 Gbps anyway, since we’re limited by the physical NIC.
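
As a sanity check on that limit, a rough back-of-the-envelope calculation (ignoring HTTP headers and TCP/IP overhead) gives the theoretical maximum number of responses per second a 1 Gbps link can carry for each object size:

```python
LINK_BPS = 1_000_000_000  # 1 Gbps NIC

# Object sizes used in the benchmark (0K is left out: it measures connection
# rate rather than bandwidth).
for kb in (1, 2, 4, 8, 16, 32, 48, 64):
    object_bytes = kb * 1024
    max_rps = LINK_BPS / 8 / object_bytes  # link capacity in bytes/s divided by object size
    print(f"{kb:>2}K objects: ~{max_rps:,.0f} responses/s before the NIC saturates")
```

With 64K objects this gives roughly 1,900 responses per second, so that test really measures bandwidth rather than connection rate.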

We are going to bench network I/O only, since this is the kind of load a load balancer generates intensively.
We won’t bench disk I/O…

Tested Hypervisors

We benched a native Aloha against the Aloha VA embedded in each of the hypervisors listed below:

  • HyperV
  • RHEV (KVM based)
  • vSphere 5.0
  • Xen 4.1 on Ubuntu 11.10
  • XenServer 6.0

Benchmark results

Raw server performance (native tests, without any hypervisor)

For the first test, we ran the Aloha on the server itself, without any hypervisor.
That way, we have some figures on the capacity of the server itself. We’ll use those numbers later in the article to measure the impact of each hypervisor on performance.

native_performance

Microsoft HyperV

We tested HyperV on a Windows Server 2008 R2 machine.
For this hypervisor, 2 types of network adapters are available:

  1. Legacy network adapter: emulates the network layer through the tulip driver.
    ==> With this driver, we got around 1.5K requests per second, which is really poor…
  2. Network adapter: requires the hv_netvsc driver, provided by Microsoft as open source in the Linux kernel since 2.6.32.
    ==> this is the driver we used for the tests (a quick way to check which driver a VM actually uses is shown below)
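
Incidentally, whichever hypervisor is used, it is easy to confirm from inside a Linux guest which driver the virtual NIC is bound to. A minimal sketch, assuming the interface is named eth0:

```python
import os

IFACE = "eth0"  # assumption: the guest's virtual NIC; adjust to your interface name

# On Linux, /sys/class/net/<iface>/device/driver is a symlink to the kernel
# driver bound to the device (e.g. hv_netvsc, e1000e, 8139too, ...).
driver = os.path.basename(os.readlink(f"/sys/class/net/{IFACE}/device/driver"))
print(f"{IFACE} is handled by the {driver} driver")
```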

hyperv_performance

RHEV 3.0 Beta (KVM based)

RHEV is Red Hat’s hypervisor, based on KVM.
For the virtualization of the network layer, RHEV uses the virtio drivers.
Note that RHEV was still in beta when we ran this test.

VMWare Vsphere 5

There are 3 types of network cards available for vSphere 5.0:

  1. Intel e1000: e1000 driver, emulates the network layer inside the VM.
  2. VMxNET 2: allows virtualization of the network layer.
  3. VMxNET 3: allows virtualization of the network layer.

The best results were obtained with the vmxnet2 driver.

Note: we have not tested vSphere 4 or ESX 3.5.

vsphere_performance

Xen OpenSource 4.1 on Ubuntu 11.10

Since CentOS 6.0 does not provide Xen OpenSource in its official repositories, we decided to use the latest (Oneiric Ocelot) Ubuntu server distribution, with Xen 4.1 on top of it.
Xen provides two network interfaces:

  1. an emulated one, based on the 8139too driver
  2. a virtualized network layer, xen-vnif

Of course, the results are much better with xen-vnif, so we’re going to use this driver for the test.

xen41_performance

Citrix Xenserver 6.0

The network driver used for XenServer is the same as for Xen OpenSource: xen-vnif.

xenserver60_performance

Hypervisors comparison

HTTP connections per second

The graph below summarizes the HTTP connections-per-second capacity for each hypervisor.
It shows the hypervisor overhead by comparing the light blue line, which represents the server capacity without any hypervisor, to each hypervisor’s line.

http_connections_comparison

Bandwidth usage

The graph below summarizes the bandwidth achieved with each hypervisor.
It shows the hypervisor overhead by comparing the light blue line, which represents the server capacity without any hypervisor, to each hypervisor’s line.

bandwith_comparison

Performance loss

Well, comparing hypervisors to each other is nice, but remember, we wanted to know how much performance is lost in the hypervisor layer.
The graph below shows, as a percentage, the loss generated by each hypervisor when compared to the native Aloha.
The higher the percentage, the worse for the hypervisor…
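
Such a loss percentage is simply the relative difference against the native run; a minimal sketch with made-up figures (not results from this benchmark):

```python
def performance_loss_pct(native: float, virtualized: float) -> float:
    """Percentage of performance lost compared to the native (bare-metal) run."""
    return (native - virtualized) / native * 100

# Hypothetical figures, for illustration only:
print(performance_loss_pct(native=100_000, virtualized=80_000))  # -> 20.0 (% lost)
```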

performance_loss_comparison

Conclusion

  • The hypervisor layer has a non-negligible impact on the networking performance of a virtualized load balancer running in reverse-proxy mode.
    But I guess it would be the same for any VM that is network I/O intensive.
  • The shorter the connections, the bigger the impact.
    For very long connections (TSE, IMAP, etc.), virtualization might make sense.
  • vSphere seems to be ahead of its competitors from a performance point of view.
  • HyperV and Citrix XenServer deliver interesting performance.
  • RHEV (KVM) and Xen OpenSource can still improve their performance, unless this is related to our test procedure.
  • Even if the VM no longer accesses the hardware layer directly, the hardware still has a huge impact on performance.
    For example, on vSphere, we could not go higher than 20K connections per second with a Realtek NIC in the server…
    With the Intel NIC (e1000e driver), we got up to 55K connections per second…
    So, even when you use virtualization, hardware counts!

