Address space is one of the most precious resources in computer architecture. Every byte of RAM, every device register, every firmware region, and every memory-mapped buffer must have a unique address for the CPU to access it. The challenge of orchestrating this vast address space—allocating regions to competing demands without conflicts or inefficiency—is a fundamental system design problem.
As systems evolved from 16-bit microprocessors with 64 KB of addressable memory to 64-bit behemoths capable of addressing 16 exabytes, I/O addressing strategies have transformed dramatically. Yet the core challenge remains: how do we organize the address space to accommodate both memory and devices, ensure compatibility, maintain performance, and plan for future expansion?
This page takes a deep dive into address space architecture, exploring how physical and virtual address spaces are partitioned, how devices claim regions during enumeration, how memory holes and remapping work, and how modern systems manage increasingly complex I/O topologies.
By the end of this page, you will understand: (1) The structure of physical address space on modern x86-64 and ARM systems, (2) How devices claim address space through BAR configuration, (3) The memory hole concept and RAM remapping mechanisms, (4) Virtual address space layout and kernel/user divisions, (5) Address space exhaustion scenarios and mitigation strategies, and (6) How to inspect and debug address space allocations using system tools.
The physical address space defines what the CPU can directly reference through its address bus. Understanding its structure is essential for systems programming, device driver development, and low-level debugging.
Physical Address Width
The width of the physical address bus determines the maximum addressable physical memory:
| Address Bits | Addressable Space | Era |
|---|---|---|
| 16 bits | 64 KB | 8080/Z80 |
| 20 bits | 1 MB | 8086 Real Mode |
| 24 bits | 16 MB | 80286 |
| 32 bits | 4 GB | i386, Pentium |
| 36 bits | 64 GB | PAE Extension |
| 40-46 bits | 1-64 TB | Early x86-64 |
| 52 bits | 4 PB | Current x86-64 max |
Modern x86-64 processors typically implement between 39 and 52 physical address bits, though few systems have anywhere near the theoretical maximum installed. ARM64 parts likewise implement between 40 and 52 physical address bits, depending on the core and SoC.
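As a quick sanity check of the table above, this small self-contained C program computes the addressable space for each width (2^bits bytes) and prints it in a human-readable unit:

```c
#include <stdio.h>
#include <stdint.h>

/* Addressable bytes for a given address-bus width: 2^bits */
static void show(int bits)
{
    const char *units[] = { "B", "KiB", "MiB", "GiB", "TiB", "PiB" };
    double space = (double)((uint64_t)1 << bits);
    int u = 0;

    while (space >= 1024.0 && u < 5) {
        space /= 1024.0;
        u++;
    }
    printf("%2d address bits -> %4.0f %s\n", bits, space, units[u]);
}

int main(void)
{
    int widths[] = { 16, 20, 24, 32, 36, 40, 46, 52 };

    for (size_t i = 0; i < sizeof(widths) / sizeof(widths[0]); i++)
        show(widths[i]);
    return 0;
}
```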
Physical Address Space Layout
A typical x86-64 system's physical address space is organized into distinct regions:
Key Physical Address Regions
1. Legacy Low Memory (0 - 1 MB)
The first megabyte of physical address space retains the legacy x86 layout for compatibility: the real-mode interrupt vector table and BIOS data area at the bottom, conventional memory up to 640 KB, the VGA framebuffer window, option ROM space, and the shadowed BIOS ROM just below 1 MB.
2. Extended Memory (1 MB - Top of RAM)
Above 1 MB, standard RAM extends upward. The extent depends on installed DRAM and where the MMIO hole begins.
3. MMIO Hole (Varies - 4 GB)
This region is reserved for 32-bit MMIO (PCI BARs), APIC, I/O APIC, and firmware tables. Its size depends on installed devices and chipset configuration—typically 256 MB to 1 GB.
4. High Memory (Above 4 GB)
On systems with more than 4 GB of RAM, the memory "stolen" by the MMIO hole is remapped here. Additional RAM and 64-bit device BARs also reside in this region.
The BIOS reports the physical address map to the operating system via the E820 BIOS call (or EFI Memory Map on UEFI systems). This map describes which regions are usable RAM, reserved, ACPI data, or unusable. The OS uses this map to avoid allocating RAM over MMIO regions.
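As an illustration of how an OS consumes this map, here is a small self-contained C sketch that walks a hypothetical E820-style table (the entries are invented for illustration and mirror the dmesg example shown later on this page) and totals the usable RAM:

```c
#include <stdint.h>
#include <stdio.h>

/* E820 range types as reported by the legacy BIOS interface */
#define E820_USABLE     1
#define E820_RESERVED   2
#define E820_ACPI_DATA  3
#define E820_ACPI_NVS   4

struct e820_entry {
    uint64_t base;     /* start physical address */
    uint64_t length;   /* region length in bytes */
    uint32_t type;     /* one of the E820_* types */
};

/* Hypothetical map echoing the dmesg example later on this page */
static const struct e820_entry map[] = {
    { 0x0000000000000000ULL, 0x000000000009fc00ULL, E820_USABLE   },
    { 0x00000000000e0000ULL, 0x0000000000020000ULL, E820_RESERVED },
    { 0x0000000000100000ULL, 0x00000000bfee0000ULL, E820_USABLE   },
    { 0x00000000c0000000ULL, 0x000000003f000000ULL, E820_RESERVED },  /* MMIO hole */
    { 0x0000000100000000ULL, 0x0000000120000000ULL, E820_USABLE   },  /* remapped RAM */
};

int main(void)
{
    uint64_t usable = 0;

    /* Sum only the regions the firmware marked usable; everything else
     * (MMIO hole, firmware, ACPI tables) must never be handed out as RAM. */
    for (size_t i = 0; i < sizeof(map) / sizeof(map[0]); i++) {
        printf("%#018llx-%#018llx %s\n",
               (unsigned long long)map[i].base,
               (unsigned long long)(map[i].base + map[i].length - 1),
               map[i].type == E820_USABLE ? "usable" : "reserved");
        if (map[i].type == E820_USABLE)
            usable += map[i].length;
    }

    printf("Total usable RAM: %llu MiB\n", (unsigned long long)(usable >> 20));
    return 0;
}
```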
One of the most important address space concepts is the MMIO hole—a region of physical addresses reserved for device access that creates a gap in usable RAM addressing.
Why the Hole Exists
For 32-bit PCI/PCIe devices (those with 32-bit BARs), MMIO addresses must reside below 4 GB. This requirement stems from backward compatibility: older devices cannot decode addresses above 0xFFFFFFFF. Additionally, several fixed system components occupy addresses just below 4 GB.
Between these fixed allocations and the need for 32-bit BAR space, a substantial region below 4 GB is consumed by MMIO, as the following breakdown shows:
| Component | Address Range | Size | Notes |
|---|---|---|---|
| 32-bit PCI BARs | 0xC000_0000 - 0xDFFF_FFFF | 512 MB | Graphics, NICs, storage |
| PCI Config Space (ECAM) | 0xE000_0000 - 0xEFFF_FFFF | 256 MB | Enhanced Configuration Access |
| Reserved | 0xF000_0000 - 0xFEBF_FFFF | 236 MB | Chipset specific |
| I/O APIC | 0xFEC0_0000 - 0xFECF_FFFF | 1 MB | Interrupt routing |
| HPET | 0xFED0_0000 - 0xFED0_FFFF | 64 KB | High-precision timer |
| Local APIC | 0xFEE0_0000 - 0xFEE0_FFFF | 64 KB | Per-CPU interrupt controller |
| Firmware | 0xFF00_0000 - 0xFFFF_FFFF | 16 MB | BIOS/UEFI flash |
RAM Remapping (Memory Reclaim)
Consider a system with 4 GB of RAM and a 1 GB MMIO hole starting at 3 GB.
Without remapping, the 1 GB of installed RAM that falls behind the hole would simply be inaccessible. Modern chipsets therefore implement memory remapping (sometimes called "North Bridge Remapping" or "Memory Reclaim"): the DRAM displaced by the hole is presented again at physical addresses above 4 GB.
The operating system then sees the full 4 GB as usable, split across non-contiguous physical ranges. This is reflected in the E820 memory map, which shows two usable RAM regions.
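The reclaim arithmetic itself is simple. The following self-contained C sketch uses hypothetical values (a 3 GB top-of-low-usable-DRAM on a 4 GB system) to show where the reclaimed RAM lands; real chipsets expose this through registers along the lines of TOLUD and a remap base/limit pair, whose exact names and encodings vary by chipset:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative remap arithmetic only -- not a real chipset programming model. */
int main(void)
{
    uint64_t installed_ram = 4ULL << 30;   /* 4 GB of DRAM installed        */
    uint64_t tolud         = 3ULL << 30;   /* top of low usable DRAM: 3 GB  */
    uint64_t four_gb       = 4ULL << 30;

    /* RAM that would collide with the MMIO hole (TOLUD .. 4 GB) */
    uint64_t hidden = installed_ram - tolud;

    /* The chipset redirects that DRAM to addresses starting at 4 GB */
    uint64_t remap_base  = four_gb;
    uint64_t remap_limit = remap_base + hidden - 1;

    printf("Usable low RAM : 0x00000000 - 0x%08llx\n",
           (unsigned long long)(tolud - 1));
    printf("MMIO hole      : 0x%08llx - 0xffffffff\n",
           (unsigned long long)tolud);
    printf("Remapped RAM   : 0x%09llx - 0x%09llx\n",
           (unsigned long long)remap_base,
           (unsigned long long)remap_limit);
    return 0;
}
```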
```
# Linux kernel boot messages showing memory map (example system with 8 GB RAM)
# Use: dmesg | grep -E "(e820|BIOS-e820)"

[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdffff] usable
[    0.000000] BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000c0000000-0x00000000feffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable

# Interpretation:
# 0x00000000-0x0009FBFF (639 KB):    Conventional memory
# 0x00100000-0xBFFDFFFF (3071 MB):   Main RAM below MMIO hole
# 0xC0000000-0xFEFFFFFF (1008 MB):   MMIO hole (reserved)
# 0xFF000000-0xFFFFFFFF (16 MB):     Firmware ROM area
# 0x100000000-0x21FFFFFFF (4608 MB): Remapped/High RAM

# Total usable: 639 KB + 3071 MB + 4608 MB ≈ 7.5 GB (rest reserved for MMIO)

# View formatted memory regions:
$ cat /proc/iomem | head -40
00000000-00000fff : Reserved
00001000-0009fbff : System RAM
0009fc00-0009ffff : Reserved
000a0000-000bffff : PCI Bus 0000:00
000c0000-000c7fff : Video ROM
000e0000-000fffff : Reserved
00100000-bffdffff : System RAM
  01000000-01ffffff : Kernel code
  02000000-023fffff : Kernel rodata
  02600000-02afffff : Kernel data
  02c00000-02ffffff : Kernel bss
c0000000-cfffffff : PCI Bus 0000:00
  c0000000-cfffffff : 0000:00:02.0 (VGA compatible controller)
d0000000-dfffffff : PCI MMIO
e0000000-efffffff : PCI ECAM (Configuration Space)
f0000000-f7ffffff : PCI Bus 0000:00
fec00000-fec00fff : IOAPIC 0
fed00000-fed003ff : HPET 0
fee00000-fee00fff : Local APIC
ff000000-ffffffff : Reserved
100000000-21fffffff : System RAM
```

A 32-bit operating system without PAE can only generate 4 GB of physical addresses. Even with memory remapping, it cannot access remapped RAM above 4 GB. This is why 32-bit Windows reported only ~3.2 GB usable on systems with 4 GB installed—the remaining RAM was hidden behind the MMIO hole.
Base Address Registers (BARs) are the mechanism by which PCI/PCIe devices claim address space. Understanding the enumeration and allocation process is crucial for system bring-up, debugging device conflicts, and writing firmware or operating system code.
Device Discovery and Enumeration
The PCI/PCIe bus hierarchy is enumerated at boot time: firmware (and later the OS) walks each bus, reads the vendor and device IDs of every function from configuration space, sizes each BAR, and programs non-conflicting addresses along with bridge windows. The result can be inspected with lspci:
```
# Viewing PCI BAR allocations on Linux using lspci
# The -v flag shows BARs, -vv shows more detail

$ lspci -vvs 00:02.0
00:02.0 VGA compatible controller: Intel Corporation Device 9a49 (rev 03)
	DeviceName: Onboard - Video
	Subsystem: Dell Device 09cc
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 143
	IOMMU group: 0
	Region 0: Memory at 6000000000 (64-bit, non-prefetchable) [size=16M]
	Region 2: Memory at 4000000000 (64-bit, prefetchable) [size=256M]
	Region 4: I/O ports at 4000 [size=64]
	Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]

# Reading this output:
# - Region 0: 64-bit non-prefetchable BAR at 0x6000000000 (MMIO registers)
#   Size: 16 MB, mapped in high physical address space
# - Region 2: 64-bit prefetchable BAR at 0x4000000000 (GPU memory aperture)
#   Size: 256 MB, suitable for write-combining access
# - Region 4: I/O port BAR at 0x4000 (legacy I/O)
#   Size: 64 ports for legacy VGA compatibility

# View all memory-mapped regions for the device:
$ cat /sys/bus/pci/devices/0000:00:02.0/resource
0x0000006000000000 0x0000006000ffffff 0x0000000000040200
0x0000000000000000 0x0000000000000000 0x0000000000000000
0x0000004000000000 0x000000400fffffff 0x000000000014220c
0x0000000000000000 0x0000000000000000 0x0000000000000000
0x0000000000004000 0x000000000000403f 0x0000000000040101
0x0000000000000000 0x0000000000000000 0x0000000000000000
0x00000000000c0000 0x00000000000dffff 0x0000000000046200

# Each line: start_addr end_addr flags
# Non-zero lines indicate assigned BARs
```

BAR Allocation Strategy
Firmware must allocate BARs following specific rules (a BAR-sizing sketch follows the list):
Natural Alignment: A 1 MB BAR must be aligned to a 1 MB boundary. A 256 MB BAR must be aligned to 256 MB.
Non-Overlapping: No two BARs may share any physical addresses.
32-bit Constraint: 32-bit BARs must receive addresses below 4 GB. Only 64-bit BARs can use addresses above 4 GB.
Prefetchable Segregation: Prefetchable and non-prefetchable regions are usually allocated from separate pools for memory controller efficiency.
Bridge Windows: PCI-to-PCI bridges have memory window registers that must encompass all BAR allocations of downstream devices.
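To make the sizing and natural-alignment rules concrete, here is a small self-contained C sketch of the standard 32-bit memory BAR sizing calculation (write all 1s, read back, derive the size from the bits the device forces to 0); the readback value used here is hypothetical:

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Classic 32-bit memory BAR sizing: firmware writes all 1s to the BAR and
 * reads it back; the device returns 0 in the address bits it cannot decode.
 * The size (and the required natural alignment) follows from that mask.
 */
static uint32_t bar32_size_from_readback(uint32_t readback)
{
    uint32_t addr_bits = readback & ~0xFu;  /* bits 3:0 encode BAR type, not address */
    return ~addr_bits + 1u;                 /* two's complement of the writable bits */
}

int main(void)
{
    uint32_t readback = 0xFFF00000u;        /* hypothetical value read back after writing all 1s */
    uint32_t size = bar32_size_from_readback(readback);

    printf("BAR size  : %u KiB\n", size >> 10);
    printf("Alignment : base address must be a multiple of %u KiB\n", size >> 10);
    return 0;
}
```

For this example the BAR decodes 1 MB and must therefore be placed on a 1 MB boundary, exactly as the natural-alignment rule requires.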
Address Space Pressure Points
The 32-bit MMIO region (below 4 GB) is limited and increasingly strained: modern GPUs request large apertures, systems host ever more NVMe and network controllers, and SR-IOV virtual functions each need BARs of their own.
This pressure led to Above 4G Decoding BIOS options and Resizable BAR (ReBAR) technology, which move large BARs above 4 GB where address space is plentiful.
Resizable BAR allows the GPU's full VRAM to be mapped into CPU address space via 64-bit BARs above 4 GB. Without ReBAR, only a 256 MB window is accessible, requiring sliding window access. With ReBAR, the entire VRAM is directly addressable, improving CPU-GPU data transfer performance for games and compute workloads.
While physical address space is finite and shared among all hardware, each process has its own virtual address space provided by the memory management unit (MMU). Understanding how operating systems partition virtual address space is essential for understanding MMIO access from software.
x86-64 Virtual Address Space
Current x86-64 processors implement 48-bit virtual addresses (expandable to 57 bits with 5-level paging). The 48-bit space is split into two canonical halves: the lower half (0x0000000000000000 - 0x00007FFFFFFFFFFF) belongs to user space, the upper half (0xFFFF800000000000 - 0xFFFFFFFFFFFFFFFF) belongs to the kernel, and the vast non-canonical gap between them is unusable.
The kernel maps MMIO regions into its portion of virtual address space, making them accessible to kernel code and device drivers.
| Virtual Address Range | Size | Contents |
|---|---|---|
| 0xFFFF880000000000 - 0xFFFFC7FFFFFFFFFF | 64 TB | Direct mapping of all physical memory (physmap) |
| 0xFFFFC90000000000 - 0xFFFFE8FFFFFFFFFF | 32 TB | vmalloc/ioremap space (dynamic allocations) |
| 0xFFFFEA0000000000 - 0xFFFFEAFFFFFFFFFF | 1 TB | Virtual memory map (struct page array) |
| 0xFFFFFFFF80000000 - 0xFFFFFFFF9FFFFFFF | 512 MB | Kernel text and data |
| 0xFFFFFFFFA0000000 - 0xFFFFFFFFFF5FFFFF | ~1.5 GB | Module mapping space |
MMIO Mapping in Virtual Space
When a driver calls ioremap(), the kernel creates page table entries mapping the MMIO physical addresses into the vmalloc/ioremap region of virtual address space:
Physical: 0x00000000E0000000 (PCI device BAR)
↓ (ioremap)
Virtual: 0xFFFFC90001234000 (kernel ioremap space)
The driver uses the virtual address for all subsequent MMIO access. The page table entries are marked uncacheable (or write-combining when a variant such as ioremap_wc() is used), ensuring that reads and writes reach the device rather than being satisfied or reordered by the cache.
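As a sketch of what this looks like from a driver's perspective, here is a minimal, hypothetical PCI module that maps BAR 0 with pci_iomap() (a wrapper around ioremap() for memory BARs) and reads one register; the vendor/device IDs and the 0x10 register offset are invented for illustration:

```c
// demo_mmio.c - minimal sketch of mapping a PCI BAR into kernel virtual space.
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/io.h>

#define DEMO_VENDOR 0x1234   /* hypothetical */
#define DEMO_DEVICE 0xabcd   /* hypothetical */

static void __iomem *regs;

static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int err = pci_enable_device(pdev);
	if (err)
		return err;

	/* Reserve BAR 0 so no other driver can claim the same physical range */
	err = pci_request_region(pdev, 0, "demo-mmio");
	if (err)
		return err;

	/* Map the BAR's physical addresses into kernel virtual space (uncached) */
	regs = pci_iomap(pdev, 0, pci_resource_len(pdev, 0));
	if (!regs)
		return -ENOMEM;

	/* MMIO goes through accessors, never plain pointer dereferences */
	pr_info("demo: register 0x10 reads 0x%08x\n", readl(regs + 0x10));
	return 0;
}

static void demo_remove(struct pci_dev *pdev)
{
	pci_iounmap(pdev, regs);
	pci_release_region(pdev, 0);
	pci_disable_device(pdev);
}

static const struct pci_device_id demo_ids[] = {
	{ PCI_DEVICE(DEMO_VENDOR, DEMO_DEVICE) },
	{ }
};
MODULE_DEVICE_TABLE(pci, demo_ids);

static struct pci_driver demo_driver = {
	.name     = "demo-mmio",
	.id_table = demo_ids,
	.probe    = demo_probe,
	.remove   = demo_remove,
};
module_pci_driver(demo_driver);
MODULE_LICENSE("GPL");
```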
User-Space MMIO Access
Normally, user-space programs cannot access MMIO directly. However, specialized mechanisms exist:
/dev/mem: Character device providing access to physical memory (including MMIO). Restricted to root or processes holding the CAP_SYS_RAWIO capability.
UIO (Userspace I/O): Framework allowing user-space drivers. The kernel maps specific MMIO regions into user-space process virtual address space.
VFIO: Virtualization-focused framework for secure, isolated user-space device access. Used by QEMU/KVM for device passthrough.
```c
/*
 * User-Space MMIO Access Examples
 *
 * WARNING: Direct hardware access from user space is dangerous.
 * These techniques require appropriate permissions and should
 * only be used for debugging or specialized applications.
 */

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

/*
 * Method 1: /dev/mem Access
 *
 * Requires root or CAP_SYS_RAWIO capability.
 * WARNING: Writing to wrong addresses can crash the system!
 */
volatile uint32_t* mmio_map_dev_mem(off_t phys_addr, size_t size)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("Cannot open /dev/mem");
        return NULL;
    }

    /* Map physical address range into our virtual address space */
    void *mapped = mmap(
        NULL,                    /* Let kernel choose virtual address */
        size,                    /* Size of mapping */
        PROT_READ | PROT_WRITE,  /* Read/write access */
        MAP_SHARED,              /* Changes visible to device */
        fd,                      /* /dev/mem file descriptor */
        phys_addr                /* Physical address offset */
    );

    close(fd);  /* Can close fd after mapping */

    if (mapped == MAP_FAILED) {
        perror("mmap failed");
        return NULL;
    }

    return (volatile uint32_t*)mapped;
}

/*
 * Example: Read a device register via /dev/mem
 */
void read_device_register_devmem(void)
{
    /* Example: Read HPET general capabilities register
     * HPET is typically at physical address 0xFED00000
     */
    off_t hpet_phys = 0xFED00000;
    size_t map_size = 0x1000;  /* 4 KB for HPET registers */

    volatile uint32_t *hpet = mmio_map_dev_mem(hpet_phys, map_size);
    if (!hpet)
        return;

    /* Read 32-bit general capabilities register at offset 0x00 */
    uint32_t caps_lo = hpet[0];
    uint32_t caps_hi = hpet[1];

    printf("HPET Capabilities: 0x%08X%08X\n", caps_hi, caps_lo);

    /* Clean up */
    munmap((void*)hpet, map_size);
}

/*
 * Method 2: UIO (Userspace I/O) Mapping
 *
 * Safer than /dev/mem - only maps device-specific regions.
 * Device must have UIO driver bound.
 */
volatile void* uio_map_device(const char *uio_device, int map_index, size_t *size)
{
    char path[256];
    int fd;
    size_t map_size;
    FILE *f;

    /* Read mapping size from sysfs */
    snprintf(path, sizeof(path),
             "/sys/class/uio/%s/maps/map%d/size", uio_device, map_index);
    f = fopen(path, "r");
    if (!f) {
        perror("Cannot read UIO map size");
        return NULL;
    }
    fscanf(f, "0x%zx", &map_size);
    fclose(f);

    /* Open UIO device */
    snprintf(path, sizeof(path), "/dev/%s", uio_device);
    fd = open(path, O_RDWR);
    if (fd < 0) {
        perror("Cannot open UIO device");
        return NULL;
    }

    /* Map the specified region */
    void *mapped = mmap(
        NULL, map_size,
        PROT_READ | PROT_WRITE,
        MAP_SHARED,
        fd,
        map_index * getpagesize()  /* Each map at page-aligned offset */
    );

    close(fd);

    if (mapped == MAP_FAILED) {
        perror("UIO mmap failed");
        return NULL;
    }

    *size = map_size;
    return mapped;
}

/*
 * Method 3: VFIO Mapping (for VM passthrough scenarios)
 *
 * Most secure method - provides IOMMU isolation.
 * Typically used for assigning entire devices to VMs.
 */
/* VFIO mapping is more complex - see kernel documentation */
```

System administrators and developers frequently need to inspect address space allocations for debugging, optimization, or understanding system behavior. Operating systems provide various tools for this purpose.
Linux Address Space Tools
```bash
#!/bin/bash
# Comprehensive Linux Address Space Debugging

# ============================================================
# 1. Physical Memory Map (I/O Memory Regions)
# ============================================================
echo "=== /proc/iomem - Physical Address Allocations ==="
sudo cat /proc/iomem

# Sample output shows physical address regions:
# 00000000-00000fff : Reserved
# 00001000-0009fbff : System RAM
# 000a0000-000bffff : PCI Bus 0000:00
# 00100000-bffdffff : System RAM
#   01000000-01ffffff : Kernel code
#   02000000-023fffff : Kernel rodata
# c0000000-febfffff : PCI Bus 0000:00
#   c0000000-cfffffff : 0000:00:02.0 (Intel GPU)
# fec00000-fec00fff : IOAPIC 0
# fee00000-fee00fff : Local APIC
# 100000000-21fffffff : System RAM

# ============================================================
# 2. I/O Port Allocations
# ============================================================
echo "=== /proc/ioports - Port Allocations ==="
sudo cat /proc/ioports

# Sample output shows port assignments:
# 0000-0cf7 : PCI Bus 0000:00
#   0000-001f : dma1
#   0020-0021 : pic1
#   0040-0043 : timer0
#   0060-0060 : keyboard
#   0064-0064 : keyboard
#   0070-0077 : rtc0
#   03f8-03ff : serial

# ============================================================
# 3. PCI Device BAR Details
# ============================================================
echo "=== lspci BAR Information ==="
lspci -vvv | grep -E -A 20 "Memory at|I/O ports at"

# ============================================================
# 4. Individual Device Resources
# ============================================================
echo "=== PCI Device Resource Files ==="
for dev in /sys/bus/pci/devices/*/; do
    echo "Device: $(basename $dev)"
    echo "  Vendor: $(cat $dev/vendor 2>/dev/null)  Device: $(cat $dev/device 2>/dev/null)"
    if [ -f "$dev/resource" ]; then
        echo "  Resources:"
        cat "$dev/resource" | head -7 | awk 'NR==1{print "    BAR0: "$0} NR==2{print "    BAR1: "$0} NR==3{print "    BAR2: "$0}'
    fi
    echo ""
done

# ============================================================
# 5. Memory Map of a Running Process
# ============================================================
echo "=== Process Memory Map ==="
PID=$$  # Current shell
cat /proc/$PID/maps | head -20

# Shows virtual address mappings:
# 55a000400000-55a000402000 r--p 00000000 08:01 1234   /bin/bash
# 55a000402000-55a0004f0000 r-xp 00002000 08:01 1234   /bin/bash
# 7f1234567000-7f1234569000 r--p 00000000 08:01 5678   /lib/libc.so.6

# ============================================================
# 6. Kernel Memory Statistics
# ============================================================
echo "=== /proc/meminfo ==="
cat /proc/meminfo | grep -E "MemTotal|MemFree|MemAvailable|Buffers|Cached"

# ============================================================
# 7. ACPI Tables (firmware memory regions)
# ============================================================
echo "=== ACPI Tables ==="
ls -la /sys/firmware/acpi/tables/

# ============================================================
# 8. dmesg Memory-Related Messages
# ============================================================
echo "=== dmesg Memory Messages ==="
dmesg | grep -i -E "memory|e820|RAM|iomem|mmio" | head -30
```

Windows Address Space Tools
Windows provides similar capabilities through different interfaces: Device Manager's "Resources by connection" view shows memory, I/O, and IRQ assignments per device; msinfo32's Hardware Resources section lists memory and I/O ranges; and the kernel debugger can inspect the Plug and Play arbiters that track these assignments.
Common Debugging Scenarios
Device Conflict Detection: Two drivers claiming overlapping MMIO regions cause one to fail. Check /proc/iomem for overlaps.
Missing Memory Investigation: If less RAM is reported than installed, check for MMIO hole size or firmware reserved regions.
Driver MMIO Mapping Failure: If request_mem_region() or ioremap() fails, the physical range may already be claimed by another driver.
Performance Issues: Incorrect memory types (MMIO marked cacheable) cause erratic behavior. Check MTRRs and ioremap variants used.
The output of /proc/iomem is indented to show hierarchical relationships. Parent regions (PCI Bus) contain child regions (device BARs). This hierarchy matches the physical bus topology and helps identify which controller owns each memory range.
Address space architecture continues to evolve as hardware capabilities expand and new use cases emerge. Understanding these trends is essential for forward-looking system design.
Historical Evolution
Current Trends
CXL: The Next Frontier
Compute Express Link (CXL) is particularly significant for address space architecture. CXL Type 3 devices (memory expanders) add pools of memory to the system that appear as normal physical addresses. This enables capacity expansion beyond what DIMM slots allow, memory pooling shared among hosts, and tiered memory with differing latency and bandwidth characteristics.
Operating systems must evolve to manage these heterogeneous memory regions, potentially allocating hot data to fast local DRAM and cool data to CXL-attached memory.
IOMMU and Device Isolation
Modern systems increasingly use IOMMUs to provide virtual addressing for devices, not just CPUs. Each device operates in its own I/O virtual address space: the IOMMU translates device-issued DMA addresses to physical addresses and blocks access to anything the OS has not explicitly mapped for that device.
This evolution means "address space" now has multiple levels: CPU physical, CPU virtual, device virtual (IOMMU-translated), and system-wide coherent (CXL).
Just as the 4 GB barrier once seemed insurmountable, current 48-bit or even 57-bit address spaces may eventually feel constrained. The industry has consistently found ways to extend addressing—and will continue to do so as memory technologies evolve.
This page has provided a comprehensive exploration of how I/O and memory share address space—one of the most fundamental aspects of computer system architecture. To consolidate the key concepts: physical address space is partitioned among RAM, the MMIO hole, and firmware regions; the firmware memory map (E820 or UEFI) tells the OS which ranges are usable; devices claim space through BAR allocation under alignment and 32/64-bit constraints; chipset remapping reclaims the RAM displaced by the MMIO hole; the kernel maps MMIO into virtual address space with ioremap(); and tools such as /proc/iomem, /proc/ioports, and lspci expose the resulting layout.
Looking Ahead
With a thorough understanding of both Port-Mapped and Memory-Mapped I/O, plus how they consume address space, we're ready to compare these paradigms directly—examining their trade-offs and guiding principles for choosing between them in system and device driver design.
You now possess detailed knowledge of address space architecture—from physical layout through virtual mapping to debugging and future trends. This foundational understanding enables you to work confidently with low-level system internals, device drivers, and embedded systems.