The hypervisor is an old technology, but it remains essential to virtualization. The first hypervisors to provide full virtualization were developed by IBM in the 1960s: the test tool SIMMON and the CP-40 research system, which entered production use in 1967 and evolved into IBM's CP/CMS operating system.
But what exactly is virtualization? In simple terms, virtualization is the process of creating a software-based (or virtual) version of something, with a defined amount of storage, networking, and computational resources. It works by partitioning the underlying hardware and running each partition as a separate, isolated virtual machine with its own operating system.
Now, this is where hypervisors come in: they make virtualization feasible. In this overview article, we explain the different types of hypervisors and how they work. Let’s start with a basic question.
What Is A Hypervisor?
Definition: A hypervisor is computer hardware, software, or firmware that creates virtual machines and then efficiently manages and allocates resources to them. Each virtual machine can run its own operating system and applications.
The computer on which the hypervisor is installed is called a host machine, and all virtual machines are called guest machines. A hypervisor makes it easy to split the resources of the host machine and allocate them to individual guest machines. It also allows you to manage the execution of guest operating systems and applications on a single piece of computer hardware.
Let’s say you have a PC with 16 GB of RAM and 500 GB of storage running Linux, and you want to run applications that require macOS. In this case, you can create a virtual machine running macOS and then use a hypervisor to manage its resources; for instance, you can allocate it 4 GB of RAM and 100 GB of storage.
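The partitioning described above can be sketched as a toy model. This is illustrative only: real hypervisors also handle CPU scheduling, device emulation, and memory management, and all names here are hypothetical, not a real hypervisor API.

```python
# Toy model of how a hypervisor partitions host resources among guests.
class Hypervisor:
    def __init__(self, ram_gb, storage_gb):
        self.free_ram = ram_gb
        self.free_storage = storage_gb
        self.guests = {}

    def create_vm(self, name, ram_gb, storage_gb):
        # Refuse to hand out more than the host physically has.
        if ram_gb > self.free_ram or storage_gb > self.free_storage:
            raise ValueError(f"not enough free resources for {name}")
        self.free_ram -= ram_gb
        self.free_storage -= storage_gb
        self.guests[name] = {"ram_gb": ram_gb, "storage_gb": storage_gb}

# The host from the example: 16 GB RAM, 500 GB storage.
host = Hypervisor(ram_gb=16, storage_gb=500)
host.create_vm("macos-guest", ram_gb=4, storage_gb=100)
print(host.free_ram, host.free_storage)  # 12 400
```

Each guest sees only its own slice; the hypervisor's bookkeeping is what keeps the slices from overlapping.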
From a guest machine’s viewpoint, there is no difference between a physical and a virtualized environment. Virtual machines do not know that they were created by a hypervisor or that they share the available resources. They run concurrently on the hardware that powers them, and thus rely entirely on that hardware’s stable operation.
Hypervisors have been around for more than half a century, but due to the increase in demand for cloud computing in recent years, their importance has become more apparent.
Types Of Hypervisors
Since the mid-1970s, two different types of hypervisors have been used to implement virtualization:
Type 1 / Bare-metal / Native Hypervisors
Type-1 hypervisors run directly on the host’s hardware. Since they have direct access to the underlying hardware and do not need to go through the operating system layer, they are also called bare-metal hypervisors.
They perform better, run more efficiently, and are more secure than Type-2 hypervisors. That’s why large organizations and companies prefer bare-metal hypervisors for data-center computing jobs.
While most Type-1 hypervisors allow admins to manually allocate resources based on the application’s priority, some provide dynamic resource allocation and management options.
The early hypervisors, such as the test software SIMMON, were type-1 hypervisors.
Modern Examples: VMware ESXi, Nutanix AHV, Oracle VM Server for x86, Microsoft Hyper-V.
Type 2 / Hosted Hypervisors
Like ordinary computer programs, Type-2 hypervisors run on an operating system, so they rely on both the underlying hardware and the host software. The guest operating systems run on top of the host operating system.
While these hypervisors allow you to create multiple virtual machines, they cannot access the host hardware and its resources directly. The pre-installed operating system controls the network, memory, and storage allocation. This restricts the hypervisor’s ability to make critical decisions and adds a certain amount of latency.
However, they are easy to set up and manage. They do not require a dedicated admin and are compatible with a broad range of hardware. Most developers use them for testing purposes.
Examples: VMware Workstation, VirtualBox, QEMU, VMware Player, VMware Fusion, and Parallels Desktop for Mac.
Advantages Of Hypervisors

Portability: A hypervisor can run multiple guest (virtual) machines independently of the host machine, and each guest machine can have a different operating system.
An authorized user can shift workloads and allocate memory, storage, and computing resources across multiple guest machines as per requirements. When a specific application needs more power, users can grant additional resources (from the host machine) via the hypervisor.
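Granting and reclaiming resources like this can be sketched as a small pool model. This is a hypothetical illustration, not a real hypervisor API: the hypervisor tracks how much of the host is spare and lets a guest's share grow only when the pool can cover it.

```python
# Hypothetical sketch of shifting RAM between guests via the hypervisor.
class GuestPool:
    def __init__(self, host_ram_gb):
        self.free_ram = host_ram_gb  # spare RAM left on the host
        self.alloc = {}              # current RAM share per guest

    def set_ram(self, guest, ram_gb):
        # Return the guest's current share to the pool, then take the new one.
        current = self.alloc.get(guest, 0)
        if ram_gb - current > self.free_ram:
            raise ValueError("host has no spare RAM for this grant")
        self.free_ram -= ram_gb - current
        self.alloc[guest] = ram_gb

pool = GuestPool(host_ram_gb=16)
pool.set_ram("web", 8)
pool.set_ram("db", 6)    # only 2 GB spare on the host now
pool.set_ram("web", 4)   # shrink "web" ...
pool.set_ram("db", 10)   # ... so "db" can grow when it needs more power
```

The point of the sketch is the invariant: the sum of all guest shares plus the spare pool always equals the host's physical RAM.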
Cost-efficient: Without a hypervisor, you may need to purchase separate physical hardware for running or testing different applications. With a hypervisor, you can set up multiple instances of a variety of operating systems on a single powerful physical machine. This also significantly reduces the cost of computing resources and electricity consumption.
Flexibility: Since the hypervisor isolates the operating system from the underlying hardware, the associated applications no longer rely on particular hardware drivers. This makes the overall system more flexible and able to run a variety of software.
Secure: The isolation of each guest means an issue with one guest does not affect the others. For instance, if a malicious program corrupts all files on a virtual machine, the files and applications on the other machines are less likely to get affected.
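A toy model can make this containment concrete. In reality the isolation is enforced by the hypervisor and hardware, not by exception handling; this sketch only mimics the outcome, with all names invented for illustration.

```python
# Toy illustration of fault isolation: a crash inside one guest's workload
# is contained and does not stop the other guests.
def run_guests(workloads):
    results = {}
    for name, task in workloads.items():
        try:
            results[name] = task()
        except Exception as exc:
            # Only the faulty guest is marked failed; the rest keep running.
            results[name] = f"failed: {exc}"
    return results

workloads = {
    "guest-a": lambda: "ok",
    "guest-b": lambda: 1 / 0,   # simulated crash in one guest
    "guest-c": lambda: "ok",
}
results = run_guests(workloads)
print(results)
```

Here `guest-b` fails on its own, while `guest-a` and `guest-c` complete normally.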
System backup & recovery: Virtual machines are files, and like any conventional file, they can be copied and restored. Hypervisor-based replication is easier and more cost-effective than other virtual-machine replication techniques. It is also hardware-neutral, which means a duplicate file can be stored on any storage device with ease.
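Because a virtual machine is ultimately a file, backup and recovery reduce to file copies. The sketch below uses a mock disk image (ordinary bytes, not a real VM image) to show the round trip; paths and contents are made up for the demonstration.

```python
# Sketch of "a VM is just a file": back up a mock disk image with a plain
# file copy, then restore it after simulated corruption.
import pathlib
import shutil
import tempfile

workdir = pathlib.Path(tempfile.mkdtemp())
disk = workdir / "guest.img"
backup = workdir / "guest.img.bak"

disk.write_bytes(b"pristine guest filesystem")
shutil.copy2(disk, backup)        # backup: copy the VM file elsewhere

disk.write_bytes(b"corrupted!")   # simulate damage to the guest
shutil.copy2(backup, disk)        # recovery: copy the backup back

print(disk.read_bytes())
```

Real hypervisors add conveniences on top of this (snapshots, incremental replication), but the underlying idea is the same: the guest's entire state lives in copyable files.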
Disadvantages Of Hypervisors

Compromised performance: Because resources are shared in virtual environments (though guests remain isolated from each other), overall performance can degrade significantly.
Sometimes the underlying root cause stays hidden. For example, if the load on one program grows until its maximum hardware allocation is reached, the guest machine either stalls or starts to grab resources from other guests running on the same host. This creates a hardware shortage that hurts the responsiveness of the other active applications.
Risk: Virtualization comes with a risk because you are keeping all your eggs in one basket. If the host machine fails, all its guest machines will fail too. This kind of risk is called a ‘single point of failure.’
Increased complexity: Managing multiple virtual machines is more complex than managing a single physical machine. Some hypervisors have a steep learning curve, and as virtualization becomes more widespread, administrators need to keep acquiring new skills.
Making Virtualization Feasible on Home Computers
In the mid-2000s, microprocessor manufacturers started adding hardware virtualization assistance to their products, such as AMD-V and Intel VT-x. Later processor models integrated further hardware support that enabled significant speed gains. As of 2019, virtually all modern Intel and AMD processors support hardware virtualization.
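On Linux, these hardware features show up as CPU flags in `/proc/cpuinfo`: `vmx` for Intel VT-x and `svm` for AMD-V. The helper below parses that text; the sample input is canned so the demonstration is deterministic, and on a real Linux system you would feed it the actual file contents.

```python
# Detect hardware virtualization flags in /proc/cpuinfo-style text.
def virt_flags(cpuinfo_text):
    flags = set()
    for line in cpuinfo_text.splitlines():
        # Each CPU has a "flags : ..." line listing its feature flags.
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {"vmx", "svm"} & flags

sample = "processor : 0\nflags : fpu vme de pse vmx ssse3\n"
print(virt_flags(sample))  # {'vmx'} -> Intel VT-x present

# On a real Linux host:
# virt_flags(open("/proc/cpuinfo").read())
```

An empty result means the CPU (or the BIOS/UEFI setting) does not expose hardware virtualization, and a Type-1 or Type-2 hypervisor would have to fall back to slower software techniques.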
Hypervisors also have a place in modern embedded systems. These are mostly Type-1 hypervisors designed with specific requirements. Unlike computer hardware, embedded systems use a wide range of architectures and less standardized environments.
Virtualization in these systems facilitates greater efficiency, high-bandwidth communication channels, isolation, security, and real-time capabilities. For example, OKL4 supports various architectures, including x86 and ARM. It has been deployed on more than 2 billion cell phones, both as a baseband operating system and for hosting virtual operating systems.
Security is one of the most crucial factors in virtualization technology. If an attacker gains unauthorized access to the hypervisor, they can then reach every guest machine on the host by exploiting shared hardware caches or other vulnerabilities. This type of attack is called hyperjacking.
However, modern hypervisors are robust and well protected. Although there have been a few reports of minor attacks, no major hyperjacking incident has been reported so far beyond proof-of-concept demonstrations.