While Type 1 hypervisors dominate data centers and cloud infrastructure, Type 2 hypervisors bring virtualization to a different domain: the developer's workstation, the security researcher's lab, and the IT professional's testing environment. These hosted hypervisors run as applications within a conventional operating system, trading some performance and isolation for convenience and accessibility.
Type 2 hypervisors democratized virtualization, making it accessible to anyone with a laptop. Understanding their architecture is essential for appreciating when hosted virtualization is appropriate—and when the constraints of a host OS become limiting.
By the end of this page, you will understand how Type 2 hypervisors operate atop a host operating system, their architectural components, performance characteristics, and optimal use cases. You'll gain the knowledge to evaluate when hosted virtualization is appropriate and understand the fundamental trade-offs compared to bare-metal solutions.
A Type 2 hypervisor, also known as a hosted hypervisor, is virtualization software that runs as an application on top of an existing operating system. Unlike Type 1 hypervisors that directly interface with hardware, Type 2 hypervisors rely on the host OS for hardware access, device drivers, and resource management.
The fundamental definition:

A Type 2 hypervisor is a software layer that:

- Runs as an application (or set of processes) on top of a host operating system
- Relies on the host OS for device drivers, hardware access, and resource management
- Creates and manages virtual machines, each running its own guest operating system
- Typically loads a small kernel module into the host to drive the hardware virtualization extensions
Modern Type 2 hypervisors still use hardware virtualization extensions (Intel VT-x, AMD-V) for CPU virtualization—just like Type 1 hypervisors. The distinction isn't about hardware support, but about where the hypervisor sits in the software stack: atop an OS (Type 2) or directly on hardware (Type 1). This is why Type 2 hypervisor performance has improved dramatically with hardware-assisted virtualization.
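To make that hardware dependency concrete, here is a minimal sketch (C, GCC on x86-64; a standalone illustration, not code from any particular hypervisor) that reads the CPUID feature bits advertising VT-x and AMD-V:

```c
/* Sketch: probing for hardware virtualization support on x86-64 with GCC.
 * Bit positions are from the Intel and AMD architecture manuals. */
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* Intel VT-x: CPUID leaf 1, ECX bit 5 (VMX). */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        puts("Intel VT-x (VMX) available");

    /* AMD-V: CPUID leaf 0x80000001, ECX bit 2 (SVM). */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        puts("AMD-V (SVM) available");

    /* Note: firmware can still disable these features even when the CPU
     * reports them; hypervisors verify that from kernel mode (e.g. via the
     * IA32_FEATURE_CONTROL MSR on Intel). */
    return 0;
}
```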
Common Type 2 hypervisors:

The Type 2 hypervisor market includes widely-known products:

- VMware Workstation (Windows/Linux) and VMware Fusion (macOS)
- Oracle VirtualBox (open source, cross-platform)
- Parallels Desktop (macOS)
These products serve millions of developers, testers, and IT professionals who need to run multiple operating systems on a single workstation without dedicated server hardware.
The architecture of Type 2 hypervisors differs fundamentally from Type 1: an entire operating system sits between the hypervisor and the hardware. This creates a layered architecture with distinct performance and security implications.
The Type 2 software stack:
    ┌─────────────────────────────────────────────────────────────────┐
    │                      Guest Virtual Machines                      │
    │  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐             │
    │  │   Guest 1   │   │   Guest 2   │   │   Guest 3   │   ...       │
    │  │   (Linux)   │   │  (Windows)  │   │  (FreeBSD)  │             │
    │  │             │   │             │   │             │             │
    │  │  User Apps  │   │  User Apps  │   │  User Apps  │             │
    │  │  OS Kernel  │   │  OS Kernel  │   │  OS Kernel  │             │
    │  └─────────────┘   └─────────────┘   └─────────────┘             │
    ├─────────────────────────────────────────────────────────────────┤
    │                        TYPE 2 HYPERVISOR                         │
    │                    (Runs as Host Application)                    │
    │  ┌────────────────────────────────────────────────────────────┐ │
    │  │ Hypervisor Application (VMware Workstation, VirtualBox)    │ │
    │  │ ├── Virtual Machine Monitor                                 │ │
    │  │ ├── Guest Memory Manager (via host mmap/VirtualAlloc)       │ │
    │  │ └── Virtual Device Emulators                                │ │
    │  └────────────────────────────────────────────────────────────┘ │
    │  ┌────────────────────────────────────────────────────────────┐ │
    │  │ Kernel Module (optional but common)                         │ │
    │  │ ├── Hooks into host kernel for efficient VM execution       │ │
    │  │ └── Manages VT-x/AMD-V interactions                         │ │
    │  └────────────────────────────────────────────────────────────┘ │
    ├─────────────────────────────────────────────────────────────────┤
    │                      HOST OPERATING SYSTEM                       │
    │  ┌────────────────────────────────────────────────────────────┐ │
    │  │ Host Applications (Browser, IDE, etc.)                      │ │
    │  └────────────────────────────────────────────────────────────┘ │
    │  ┌────────────────────────────────────────────────────────────┐ │
    │  │ Host OS Kernel (Windows, Linux, macOS)                      │ │
    │  │ ├── Process Scheduler (schedules hypervisor + other apps)   │ │
    │  │ ├── Memory Manager (allocates memory to hypervisor)         │ │
    │  │ └── Device Drivers (hypervisor uses for I/O)                │ │
    │  └────────────────────────────────────────────────────────────┘ │
    ├─────────────────────────────────────────────────────────────────┤
    │                        PHYSICAL HARDWARE                         │
    │  ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐       │
    │  │   CPU    │   │  Memory  │   │ Storage  │   │ Network  │       │
    │  │  Cores   │   │  (RAM)   │   │  (Disk)  │   │   NICs   │       │
    │  └──────────┘   └──────────┘   └──────────┘   └──────────┘       │
    └─────────────────────────────────────────────────────────────────┘

Key architectural components:

- The hypervisor application: a user-space program on the host that contains the virtual machine monitor, the guest memory manager, and the virtual device emulators
- The kernel module (e.g., VMware's vmmon, VirtualBox's vboxdrv): hooks into the host kernel so the hypervisor can drive VT-x/AMD-V efficiently
- The host operating system: provides process scheduling, memory allocation, and the device drivers the hypervisor uses for all I/O
The kernel module is crucial for Type 2 hypervisor performance. It lets the hypervisor drive the hardware virtualization extensions (VT-x/AMD-V) directly, bypassing much of the host OS overhead for CPU virtualization. Without hardware assistance, older Type 2 hypervisors fell back to software techniques such as binary translation, which were dramatically slower; modern VirtualBox releases simply require VT-x/AMD-V.
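For a concrete feel of this dependency, the following sketch (Linux-only; the device paths are the conventional nodes for KVM, VirtualBox, and VMware Workstation, listed here as assumptions rather than guarantees) checks which hypervisor kernel-module device nodes exist on a host:

```c
/* Sketch: probe for the device nodes that hypervisor kernel modules
 * typically expose on a Linux host. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char *nodes[] = {
        "/dev/kvm",      /* Linux KVM module (used by QEMU and others) */
        "/dev/vboxdrv",  /* VirtualBox's vboxdrv module */
        "/dev/vmmon",    /* VMware Workstation's vmmon module */
    };
    for (size_t i = 0; i < sizeof nodes / sizeof nodes[0]; i++)
        printf("%-14s %s\n", nodes[i],
               access(nodes[i], F_OK) == 0 ? "present" : "not found");
    return 0;
}
```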
CPU virtualization in Type 2 hypervisors leverages the same hardware extensions (VT-x, AMD-V) as Type 1 hypervisors, but with an important difference: the hypervisor must coordinate with the host OS for CPU scheduling and must handle the complexities of running guests within a user-space process.
The execution model:
When a Type 2 hypervisor VM runs, CPU execution alternates between:

- Guest code running directly on the CPU in VMX non-root mode, at near-native speed
- The hypervisor's kernel module, which handles simple VM exits and re-enters the guest immediately
- The hypervisor's user-space process, which handles complex exits such as device emulation
- Other host applications, whenever the host OS scheduler preempts the hypervisor process
    ┌─────────────────────────────────────────────────────────────────┐
    │                       TYPE 2 EXECUTION FLOW                      │
    ├─────────────────────────────────────────────────────────────────┤
    │                                                                   │
    │   Host OS schedules hypervisor process                            │
    │        │                                                          │
    │        ▼                                                          │
    │   ┌──────────────────────────────────────┐                        │
    │   │ Hypervisor User-Space Process        │                        │
    │   │ ├── Decides which vCPU to run        │                        │
    │   │ └── Issues ioctl() to kernel module  │                        │
    │   └──────────────────────────────────────┘                        │
    │        │                                                          │
    │        ▼                                                          │
    │   ┌──────────────────────────────────────┐                        │
    │   │ Kernel Module (e.g., vmmon, vboxdrv) │                        │
    │   │ ├── Prepares VMCS/VMCB               │                        │
    │   │ ├── Executes VMLAUNCH/VMRESUME       │                        │
    │   │ └── Enters VMX non-root mode         │                        │
    │   └──────────────────────────────────────┘                        │
    │        │                                                          │
    │        ▼                                                          │
    │   ┌──────────────────────────────────────┐                        │
    │   │ Guest VM Execution (VMX non-root)    │◀──────────────┐        │
    │   │ • Guest code runs at near-native     │               │        │
    │   │ • Sensitive ops trigger VM exit      │               │        │
    │   └──────────────────────────────────────┘               │        │
    │        │                                                 │        │
    │        ▼                                                 │        │
    │   ┌──────────────────────────────────────┐               │        │
    │   │ VM Exit Occurs                       │               │        │
    │   │ ├── Simple exit? Handle in kernel    │───────────────┘        │
    │   │ │   (return to guest immediately)    │                        │
    │   │ └── Complex exit? Return to user     │                        │
    │   │     space hypervisor for handling    │                        │
    │   └──────────────────────────────────────┘                        │
    │        │                                                          │
    │        ▼                                                          │
    │   (Cycle repeats; host OS may preempt hypervisor)                 │
    │                                                                   │
    └─────────────────────────────────────────────────────────────────┘
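VMware's vmmon and VirtualBox's vboxdrv expose private interfaces, so the sketch below uses Linux KVM's public /dev/kvm API as a stand-in to illustrate the same user-space/kernel-module division of labor shown above. It is deliberately incomplete: real code must also register guest memory and initialize vCPU registers before KVM_RUN will succeed, and error handling is omitted.

```c
/* Sketch of the user-space/kernel-module split, using KVM's public API as a
 * stand-in for the private ioctl interfaces of hosted hypervisors. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void) {
    int kvm  = open("/dev/kvm", O_RDWR);        /* talk to the kernel module */
    int vm   = ioctl(kvm, KVM_CREATE_VM, 0);    /* one VM = one file descriptor */
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);   /* create vCPU 0 */

    /* The kernel module shares a per-vCPU control area with user space. */
    int mmap_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);

    for (;;) {
        ioctl(vcpu, KVM_RUN, 0);           /* kernel enters VMX non-root mode */
        switch (run->exit_reason) {        /* complex exits return to user space */
        case KVM_EXIT_IO:                  /* emulate a port I/O access */
        case KVM_EXIT_MMIO:                /* emulate a memory-mapped device */
            /* ... device emulation would go here ... */
            break;
        case KVM_EXIT_HLT:
            return 0;                      /* guest halted */
        }
    }
}
```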
Host preemption challenges:

Unlike Type 1 hypervisors, which have complete control over CPU scheduling, Type 2 hypervisors compete with other host applications for CPU time. This creates unique challenges:

- vCPU threads can be preempted at any point, even while the guest is mid-way through handling an interrupt or holding a kernel lock
- Guest timer interrupts may be delivered late, causing clock drift and jitter inside the VM
- Heavy host activity (builds, backups, antivirus scans) directly steals cycles from running guests
- The guest OS scheduler assumes it owns the CPU and has no idea its vCPUs are sometimes not running at all
These challenges make Type 2 hypervisors less suitable for latency-sensitive or real-time workloads.
In Type 2 virtualization, two schedulers compete: the host OS schedules the hypervisor process, and within that process, the hypervisor schedules vCPUs. Neither scheduler is aware of the other's decisions. If the host preempts the hypervisor while a guest holds a spinlock, other guest vCPUs may spin uselessly, wasting CPU time—a problem that's harder to solve without hypervisor control over scheduling.
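One common workaround on Linux hosts is to dedicate host cores to the hypervisor's vCPU threads so the host scheduler interferes less. The sketch below shows the underlying call; whether it is applied per thread inside a hypervisor or externally with a tool such as taskset is a deployment choice, and the core number is just an example.

```c
/* Sketch (Linux, glibc): pin the calling thread to one host core. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    /* pid 0 means "the calling thread" */
    return sched_setaffinity(0, sizeof(set), &set);
}

int main(void) {
    if (pin_to_core(2) != 0)      /* core index 2 is an arbitrary example */
        perror("sched_setaffinity");
    else
        puts("pinned to core 2");
    return 0;
}
```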
Memory management in Type 2 hypervisors involves coordinating with the host OS memory manager. Guest physical memory is backed by host virtual memory, adding complexity to the address translation hierarchy.
The four-level address hierarchy:
In a Type 2 environment, addresses pass through four levels: guest virtual (GVA), guest physical (GPA), host virtual (HVA), and host physical (HPA):
    ┌────────────────────────────────────────────────────────────────┐
    │               TYPE 2 MEMORY TRANSLATION HIERARCHY              │
    ├────────────────────────────────────────────────────────────────┤
    │                                                                 │
    │   Guest Application                                             │
    │        │                                                        │
    │        ▼                                                        │
    │   Guest Virtual Address (GVA)                                   │
    │        │                                                        │
    │        ▼  (Guest page tables)                                   │
    │   Guest Physical Address (GPA)                                  │
    │        │                                                        │
    │        │   ┌─────────────────────────────────────────────┐      │
    │        └─▶│ Hardware EPT/NPT (if available)             │      │
    │            │              OR                             │      │
    │            │ Software translation via hypervisor         │      │
    │            └─────────────────────────────────────────────┘      │
    │        │                                                        │
    │        ▼                                                        │
    │   Host Virtual Address (HVA)                                    │
    │   (Memory mapped into hypervisor process via mmap/VirtualAlloc) │
    │        │                                                        │
    │        ▼  (Host page tables)                                    │
    │   Host Physical Address (HPA)                                   │
    │        │                                                        │
    │        ▼                                                        │
    │   Physical Memory (RAM)                                         │
    │                                                                 │
    ├────────────────────────────────────────────────────────────────┤
    │   NOTE: With EPT/NPT, hardware accelerates GVA→GPA→HPA          │
    │         The HVA→HPA translation happens when setting up EPT     │
    │         Host OS may swap HVA to disk, adding another layer      │
    └────────────────────────────────────────────────────────────────┘
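The HVA layer in the middle of this hierarchy is nothing exotic: guest RAM is an ordinary memory mapping inside the hypervisor process. The sketch below illustrates the idea using KVM's public API; hosted hypervisors with private kernel modules perform the equivalent registration internally, and the 64 MiB size is arbitrary.

```c
/* Sketch: guest "physical" RAM is just host virtual memory inside the
 * hypervisor process, registered with the kernel module. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#define GUEST_RAM_SIZE (64UL * 1024 * 1024)   /* 64 MiB of guest RAM */

int main(void) {
    int kvm = open("/dev/kvm", O_RDWR);
    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

    /* Host virtual address (HVA): anonymous mmap in the hypervisor process. */
    void *hva = mmap(NULL, GUEST_RAM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* Tell the kernel module that guest physical address 0 maps to this HVA;
     * it builds EPT/NPT entries from the mapping on demand. */
    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0,
        .memory_size = GUEST_RAM_SIZE,
        .userspace_addr = (unsigned long)hva,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    /* The hypervisor can now load a guest image simply by writing to hva. */
    memset(hva, 0, 4096);
    return 0;
}
```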
Memory allocation strategies:

Type 2 hypervisors use several approaches to allocate and manage guest memory:

- On-demand allocation: guest RAM is backed lazily by host virtual memory, so pages are only allocated when the guest first touches them
- Preallocation: all guest memory is reserved (and optionally locked) up front for more predictable performance
- Ballooning: a driver in the guest tools hands unused guest pages back to the host when memory is tight
- Page sharing in some products (e.g., VirtualBox's Page Fusion), where identical pages across VMs are backed by a single host page
The host swapping problem:
A critical difference from Type 1 hypervisors: the host OS can swap guest memory to disk. From the guest's perspective, memory access that should take 100 nanoseconds suddenly takes 10 milliseconds—an unpredictable 100,000x slowdown. This makes Type 2 hypervisors challenging for workloads requiring consistent memory access latency.
Mitigation strategies include:

- Preallocating and locking guest memory into host RAM so the host cannot page it out (see the mlock() sketch below)
- Sizing VMs so that the combined guest memory plus the host's own workload fits comfortably in physical RAM
- Closing memory-hungry host applications while VMs are running
- Installing the guest balloon driver so memory can be reclaimed cooperatively instead of through host swapping
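The pinning item above can be illustrated at the system-call level: lock the guest-RAM backing buffer with mlock() so the host cannot page it out. The buffer size is arbitrary, and on most systems this requires elevated privileges or a raised RLIMIT_MEMLOCK limit.

```c
/* Sketch: pin a guest-RAM backing buffer into host physical memory. */
#include <stdio.h>
#include <sys/mman.h>

#define GUEST_RAM_SIZE (64UL * 1024 * 1024)

int main(void) {
    void *guest_ram = mmap(NULL, GUEST_RAM_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (guest_ram == MAP_FAILED) { perror("mmap"); return 1; }

    if (mlock(guest_ram, GUEST_RAM_SIZE) != 0) {
        perror("mlock");              /* commonly fails if RLIMIT_MEMLOCK is too low */
        return 1;
    }
    puts("guest RAM pinned; the host will not swap these pages");

    munlock(guest_ram, GUEST_RAM_SIZE);
    munmap(guest_ram, GUEST_RAM_SIZE);
    return 0;
}
```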
While Type 1 hypervisors carefully manage memory overcommitment with techniques like ballooning and compression, Type 2 hypervisors are subject to the host OS's memory management policies. Running multiple VMs that together exceed host RAM often leads to severe performance degradation as the host aggressively swaps.
I/O virtualization in Type 2 hypervisors fundamentally differs from Type 1: all I/O ultimately goes through the host operating system. This adds latency but simplifies driver development and provides access to the host's rich device driver ecosystem.
The I/O path:
When a guest VM performs I/O, the typical path is:
    ┌────────────────────────────────────────────────────────────────┐
    │                TYPE 2 I/O PATH (Network Example)               │
    ├────────────────────────────────────────────────────────────────┤
    │                                                                 │
    │   Guest VM                                                      │
    │   ├── Application sends network packet                          │
    │   ├── Guest OS network stack (TCP/IP)                           │
    │   └── Guest uses virtual NIC driver (e1000, virtio-net)         │
    │        │                                                        │
    │        ▼                                                        │
    │   ┌──────────────────────────────────────┐                      │
    │   │ Virtual Device Emulator              │                      │
    │   │ (Hypervisor User Space)              │                      │
    │   │ ├── Receives packet from guest       │                      │
    │   │ ├── Translates to host socket/tap    │                      │
    │   │ └── Issues host system call          │                      │
    │   └──────────────────────────────────────┘                      │
    │        │                                                        │
    │        ▼                                                        │
    │   ┌──────────────────────────────────────┐                      │
    │   │ Host OS Network Stack                │                      │
    │   │ ├── NAT / Bridged networking         │                      │
    │   │ ├── Host TCP/IP processing           │                      │
    │   │ └── Host NIC driver                  │                      │
    │   └──────────────────────────────────────┘                      │
    │        │                                                        │
    │        ▼                                                        │
    │   ┌──────────────────────────────────────┐                      │
    │   │ Physical Network Interface           │                      │
    │   │ (Packet transmitted on wire)         │                      │
    │   └──────────────────────────────────────┘                      │
    │                                                                 │
    │   CONTRAST WITH TYPE 1:                                         │
    │   Type 1 hypervisor bypasses host OS entirely—                  │
    │   virtual NIC → hypervisor I/O layer → physical NIC             │
    │                                                                 │
    └────────────────────────────────────────────────────────────────┘
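On Linux hosts, bridged and host-only networking are commonly built on the kernel's TUN/TAP driver: the hypervisor opens a tap interface and exchanges raw Ethernet frames with the host stack through an ordinary file descriptor. A minimal sketch (requires CAP_NET_ADMIN; the interface name is just an example):

```c
/* Sketch (Linux): open a tap interface, the usual backend for bridged
 * networking in hosted hypervisors. Frames from the guest's virtual NIC
 * are then simply write()n to (and read() from) the returned descriptor. */
#include <fcntl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int open_tap(const char *name) {
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) return -1;

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;          /* raw Ethernet frames */
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {         /* create/attach the interface */
        close(fd);
        return -1;
    }
    return fd;
}

int main(void) {
    int fd = open_tap("vmtap0");                  /* interface name is an example */
    if (fd < 0) { perror("tap"); return 1; }
    puts("tap interface ready (attach it to a bridge to reach the LAN)");
    close(fd);
    return 0;
}
```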
Networking modes:

Type 2 hypervisors offer multiple networking configurations:
| Mode | Description | Guest-to-Host | Guest-to-External | Use Case |
|---|---|---|---|---|
| NAT | Guest traffic routed through host's IP with address translation | Yes | Yes (outbound) | Simple internet access, no incoming |
| Bridged | Guest appears as separate host on physical network | Yes | Yes (full) | When guest needs real network presence |
| Host-Only | Private network between host and guests only | Yes | No | Isolated development/testing |
| Internal | Private network between guests only (no host) | No | No | Multi-VM isolated environments |
Storage virtualization:

Storage I/O follows a similar pattern:

- The guest writes to an emulated or paravirtualized disk controller (IDE, SATA/AHCI, SCSI, virtio)
- The hypervisor's device emulator translates those requests into reads and writes on a virtual disk file (formats such as VMDK, VDI, VHD, or qcow2)
- The host filesystem and block layer carry out the actual I/O through the host's storage drivers
With virtual disk files, data may be cached twice: once in the guest OS page cache, and again in the host OS page cache. This wastes memory and can create consistency issues. Type 2 hypervisors therefore offer options to disable host caching (the equivalent of O_DIRECT), reclaiming memory and avoiding consistency issues at the cost of losing the host cache's performance benefit.
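A sketch of the "disable host caching" option at the system-call level (Linux; the file name is a placeholder, and O_DIRECT imposes alignment rules that real hypervisors handle far more carefully):

```c
/* Sketch: open a virtual disk file with O_DIRECT so guest I/O bypasses the
 * host page cache and is only cached once, inside the guest. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    /* "disk.img" is a placeholder for a virtual disk file. */
    int fd = open("disk.img", O_RDWR | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires aligned buffers, offsets, and lengths. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) return 1;

    ssize_t n = pread(fd, buf, 4096, 0);   /* read the first 4 KiB of the image */
    printf("read %zd bytes without touching the host page cache\n", n);

    free(buf);
    close(fd);
    return 0;
}
```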
One of the key differentiators of Type 2 hypervisors is their focus on host-guest integration. Unlike Type 1 hypervisors where guests run in data-center isolation, Type 2 guests often need to interact seamlessly with the host desktop environment.
Common integration features:

- Shared folders between host and guest
- Shared clipboard and drag-and-drop of files
- Seamless/Unity window modes that place guest application windows on the host desktop
- Dynamic display resizing and multi-monitor support
- USB device passthrough to the guest
- Time synchronization between host and guest
Guest additions/tools:
These integration features require software installed inside the guest, known variously as:

- VMware Tools (VMware Workstation and Fusion)
- Guest Additions (VirtualBox)
- Parallels Tools (Parallels Desktop)
These tools consist of:
| Component | Purpose | Performance Impact |
|---|---|---|
| Video driver | Enables dynamic resolution, 3D acceleration, smooth rendering | Major improvement |
| Network driver | Paravirtualized networking for higher throughput, lower latency | Significant improvement |
| Storage driver | Optimized disk I/O, TRIM support for virtual disks | Significant improvement |
| Memory balloon | Reports memory pressure to hypervisor, enables dynamic memory | Enables overcommit |
| Agent service | Clipboard, drag-drop, seamless mode, time sync | User experience |
Guest tools dramatically improve Type 2 hypervisor performance and usability. Without them, guests use slow emulated devices, display resolution is fixed, and integration features are unavailable. Installing guest tools should be one of the first post-installation steps for any VM.
Understanding Type 2 hypervisor performance requires acknowledging the inherent overhead of the host OS layer. While modern Type 2 hypervisors with hardware virtualization achieve impressive performance, they cannot match Type 1 in demanding scenarios.
Performance overhead sources:

- Dual scheduling: the host OS can preempt the hypervisor process at any time
- All I/O traverses the host OS stack (system calls, host filesystem, host drivers)
- Virtual disk files add a filesystem layer and can be cached twice
- Host memory pressure can push guest memory out to swap
- Device emulation in user space adds VM-exit and context-switch overhead

Typical performance relative to native execution:
| Workload Type | Performance vs Native | Notes |
|---|---|---|
| CPU-bound compute | 85-95% | With VT-x/AMD-V; most guest code runs at native speed |
| Memory-intensive | 80-90% | EPT overhead; potential host memory pressure |
| Disk I/O (sequential) | 60-80% | Virtual disk files add filesystem overhead |
| Disk I/O (random) | 40-70% | Higher overhead for small random I/O |
| Network throughput | 60-80% | NAT adds overhead; bridged is better |
| Graphics (2D) | 70-90% | With guest tools; varies by hypervisor |
| Graphics (3D) | 30-70% | Highly variable; GPU passthrough helps |
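Figures like these are easy to sanity-check yourself: run the same naive measurement natively and then inside a VM and compare. The sketch below times sequential writes only and ignores caching effects, so treat its output as a rough indicator rather than a benchmark result.

```c
/* Naive sketch: time a burst of sequential 1 MiB writes. Run natively and
 * inside a VM to compare. Not a rigorous benchmark (single run, no cache
 * control, fixed sizes). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK  (1024 * 1024)
#define BLOCKS 256                        /* 256 MiB total */

int main(void) {
    char *buf = malloc(BLOCK);
    memset(buf, 0xAB, BLOCK);

    int fd = open("bench.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < BLOCKS; i++)
        if (write(fd, buf, BLOCK) != BLOCK) { perror("write"); return 1; }
    fsync(fd);                            /* make sure data actually hit the disk */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MiB/s sequential write\n", BLOCKS / secs);

    close(fd);
    unlink("bench.tmp");
    free(buf);
    return 0;
}
```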
Optimization strategies:
To maximize Type 2 hypervisor performance:

- Install the guest tools and use paravirtualized devices (e.g., virtio or vendor-specific drivers) instead of fully emulated ones
- Ensure VT-x/AMD-V and EPT/NPT are enabled in firmware
- Preallocate guest memory and avoid overcommitting host RAM
- Place virtual disk files on fast local storage (ideally SSD/NVMe) and consider disabling host caching for I/O-heavy guests
- Use bridged networking when throughput matters more than convenience
- Keep heavy host applications closed while VMs run, or pin vCPU threads to dedicated host cores
When running multiple VMs on a Type 2 hypervisor, they compete not just for CPU but for the hypervisor's internal resources. I/O operations from one VM can stall another's. This contention is harder to diagnose and manage than in Type 1 environments with sophisticated resource management.
Type 2 hypervisors excel in specific scenarios while being inappropriate for others. Understanding this distinction is crucial for making informed virtualization decisions.
Ideal use cases:

- Developer workstations that need to build and test against multiple operating systems
- Security research and malware analysis in disposable, snapshot-able sandboxes
- IT labs for evaluating software, patches, and OS installations before production rollout
- Training, demos, and certification study environments
- Running legacy applications that require an older OS alongside a modern host
The development workflow advantage:
Type 2 hypervisors shine in development workflows. A developer can:

- Spin up a VM matching the deployment target's OS without leaving the workstation
- Snapshot a known-good state before risky changes and roll back in seconds
- Clone VMs to test against multiple OS versions or configurations in parallel
- Share a complete, reproducible environment with teammates as a single VM image
- Keep the host desktop, IDE, and browser available alongside the running guests
This flexibility explains why Type 2 hypervisors remain popular despite Type 1's performance advantages.
For many development scenarios, containers (Docker) have displaced Type 2 VMs. Containers offer faster startup, lower overhead, and better host integration for Linux-based workloads. However, when you need to run a different kernel (Windows on Linux, or vice versa), full-system hardware testing, or complete OS isolation, Type 2 VMs remain indispensable.
We've explored the architecture and operation of Type 2 hosted hypervisors—the accessible virtualization solution that runs on commodity operating systems. Let's consolidate the key concepts:

- A Type 2 hypervisor runs as a host application, usually paired with a kernel module that drives VT-x/AMD-V
- Two schedulers operate independently: the host schedules the hypervisor process, and the hypervisor schedules vCPUs
- Guest memory is backed by host virtual memory, adding a fourth address level and exposing guests to host swapping
- All I/O passes through the host OS, which adds latency but provides broad device support
- Guest tools supply paravirtualized drivers and host-guest integration and should be installed in every VM
- Type 2 excels on workstations for development, testing, and research, while Type 1 remains the choice for production server workloads
Looking ahead:
The next page directly compares Type 1 and Type 2 hypervisors, providing a comprehensive analysis of their differences across architecture, performance, security, and management. This comparison will solidify your understanding of when to choose each approach.
You now understand Type 2 hosted hypervisors—their architecture, how they interact with the host OS, their integration features, and their appropriate use cases. This knowledge is essential for the comprehensive comparison with Type 1 hypervisors that follows.