A virtual machine (VM) packages all the software that would run on a physical machine (including its own kernel) and runs in isolation under the supervision of software on a physical machine. VMs run applications using their own OS and virtualized resources. 

While a VM behaves like a physical machine and runs on a physical host, it functions as a sandbox that safeguards the host machine. This strict separation boosts security and shields the host from fluctuating resource use within the VM, since the host OS generally retains priority over system resources. It also prevents a "guest" VM from toppling the entire system when issues strike. 

VMs are also commonly used to flexibly deploy applications in different environments (such as for testing or in production). For example, you can run Windows or Linux in a VM on macOS, and vice versa. This isn't always possible, however, without some technical caveats. It's also possible to run a VM inside of a VM, but this is rarely recommended.

Overall, VMs help solve the "it works on my machine" problem that developers have encountered for ages. Virtual machines move us closer to a platform-agnostic computing landscape in which dueling OSes don't immediately kill development projects or modern deployments. 

IBM is credited with bringing the first VM to market in late 1972. VMware, founded in 1998, recognized the need for virtualization and host segmentation on the x86 architecture, culminating in the release of its first commercial (and modernized) VM product in 1999. This brought more power and flexibility to PC users. Nowadays, any OS or platform can accommodate multiple implementations of virtual machines without overly-complex workarounds.

How does a virtual machine (VM) work?

VMs exist independently of one another and have their own processing space. They operate within the host machine's user space, which is dedicated to running top-level applications and processes abstracted from the kernel. Most modern CPUs also provide hardware virtualization extensions (such as Intel VT-x and AMD-V) that avoid the overhead of inspecting each VM instruction in software. Virtualized programs are logically separated from this nerve center of the physical machine to keep crashes and other glitches from impacting the underlying system. 
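On Linux, you can quickly check whether the host CPU exposes these hardware virtualization extensions. This is a minimal sketch: "vmx" indicates Intel VT-x and "svm" indicates AMD-V, and the check only applies where `/proc/cpuinfo` exists.

```shell
# Check whether the host CPU advertises hardware virtualization extensions.
# "vmx" = Intel VT-x, "svm" = AMD-V. (Linux-only; reads /proc/cpuinfo.)
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "Hardware virtualization supported"
else
  echo "No virtualization extensions found; VMs will fall back to slower software emulation"
fi
```

If the check fails, hypervisors such as KVM cannot use hardware acceleration, and guests must run under full software emulation instead.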

There are two main types of virtual machines: 

  1. A system VM stands in completely for the physical machine while splitting its resources between each virtual instance. These rely on a hypervisor to share the underlying system's resources effectively. 

  2. A process VM is reserved for single processes or tasks that run as applications on the host. The physical host's details are hidden from the VM, which enables programming and native functionality for apps that wouldn't normally work on the host.

One physical host machine can host multiple unique VM instances (even when their emulated architectures differ) and lets users run numerous applications concurrently, as long as the host has enough CPU and memory to allocate. However, running too many VM instances can cause performance problems and instability, especially when multiple VMs attempt to use the same CPU core simultaneously. Virtual machines can also run on dedicated physical servers that support more intensive workloads to help avoid this. These servers have memory management units that confine VMs to their own memory space, dedicated network cards that offload shared access mediation from the hypervisor, and other benefits. 
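As a rough sketch of how this resource allocation looks in practice, a QEMU/KVM invocation like the one below caps a guest at two virtual CPUs and 2 GiB of memory. The disk image name `guest.qcow2` is a placeholder, and the command assumes QEMU is installed and KVM is available on the host.

```shell
# Launch a VM with an explicit CPU and memory budget using QEMU/KVM.
# guest.qcow2 is a hypothetical disk image; adjust paths for your setup.
qemu-system-x86_64 \
  -enable-kvm \
  -smp 2 \
  -m 2048 \
  -drive file=guest.qcow2,format=qcow2,if=virtio \
  -nographic
```

The `-smp` and `-m` flags are how an administrator bounds each guest so that one VM cannot starve its neighbors on the same host.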

This brings us to scalability. VMs can scale horizontally (by adding more VM instances, within given resource constraints) or vertically (by allocating more resources to each VM). Vertical scaling is easier on physical hosts with more headroom, such as rack-mounted servers. Meanwhile, horizontal scaling brings added flexibility by letting administrators quickly add or delete instances. Both are viable options, but scaling out versus up follows the microservices deployment model more closely and is generally preferred wherever possible. 

It's also important to note the differences between VMs and containers. VMs virtualize the physical host layer and provide all the components needed to run a complete guest OS. Containers package an application binary and its dependencies, share the host's kernel, and are thus lighter weight. A given VM will almost always boot more slowly than a container, yet it can run its own OS and multiple apps side by side. Each has its place in a modern deployment strategy.
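One quick way to see the kernel-sharing difference (assuming Docker is installed) is to compare the kernel version reported on the host with the one reported inside a container. A VM, by contrast, would report its own guest kernel version.

```shell
# Containers share the host kernel, so both commands print the same version.
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # same version, reported from inside a container
```

This is exactly why containers start faster and weigh less: there is no second kernel to boot or store.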

Notes on hypervisors

The CPU architecture of the host machine — such as Arm or x86 — determines which VMs it supports. Architecture also influences whether your chosen hypervisor (Hyper-V, VMware ESXi, Proxmox, etc.) can effectively bridge the communicative gap between host and VM. 

When an x86 host is used to test an Arm-compatible app, the instructions the VM sends to the host CPU cannot be properly deciphered. Each processor speaks a different language without the ability to translate for the other. This is where a special VM program that emulates the guest's architecture comes in handy, letting the hypervisor map instructions from the virtualized CPU to the physical host CPU and complete runtime tasks.

Hypervisors also come in two varieties: 

  1. Type 1 – These effectively replace the host OS and run directly on physical hardware, such as bare metal servers. They require separate tooling to manage VMs, support templating for rapid spin-up of new VMs, and encompass kernel-based virtual machines (KVM). 

  2. Type 2 – These run within a host OS as applications, and at a smaller scale, typically on individual machines such as laptops and desktops. Users create their virtual machine(s) manually, select a suitable OS, granularly control CPU threading and memory allocation, and can enable hardware acceleration for intensive tasks.
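As an illustration of the Type 2 workflow described above, VirtualBox's `VBoxManage` CLI can create and start a VM from the command line. The VM name, OS type, and resource sizes below are arbitrary examples.

```shell
# Create, configure, and start a VM with VirtualBox's CLI.
VBoxManage createvm --name demo-vm --ostype Ubuntu_64 --register
VBoxManage modifyvm demo-vm --cpus 2 --memory 2048
VBoxManage startvm demo-vm --type headless
```

This mirrors what the VirtualBox GUI does behind the scenes: the user, not the hypervisor, decides the guest OS and the CPU and memory allocation.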

What makes a virtual machine (VM) useful?

Aside from the flexibility and intelligent resource utilization that VMs deliver, they also come with the following benefits: 

  • They help developers deploy their applications faster across varied environments and systems, and are easier to implement than provisioning separate physical machines. 

  • They boost portability by bringing apps that aren't native to a particular system into that environment relatively seamlessly.

  • They are scalable both horizontally and vertically, and can be mutable (or not) depending on an organization's needs. 

  • They help boost security by containing risky processes or apps to their own environments, while protecting the host from malware or viruses. 

  • They're relatively cheap to set up and maintain, and can run on shared infrastructure to reduce costs. 

  • They can boost uptime, as moving VM instances between hypervisors is easy in the event of a failure. 

Like any technology, virtual machines aren't perfect. They add a layer of complexity to your computing environment and add potential points of failure. They're also more resource intensive than modernized alternatives such as containers, and typically take up more space, since each VM stores a full guest OS and its dependencies on the physical host.


Does HAProxy support virtual machines (VMs)?

Yes! HAProxy One delivers the world's most demanding applications, APIs, and AI services in any environment — including VMs. You can install HAProxy Enterprise as an OpenStack or VMware vSphere virtual machine. Users can also install HAProxy ALOHA on leading VM solutions, such as KVM, Microsoft Hyper-V, VMware vSphere, and VMware Workstation Player. 

To learn more about VM support in HAProxy, check out our HAProxy Enterprise VM installation instructions, or our blog titled Hypervisors Performance Comparison & Benchmarking.