When a process terminates and releases its memory, what happens to that freed space? In fixed partitioning, the answer is simple: the partition remains, waiting for another process of suitable size. But in variable partitioning, the situation is far more nuanced. Freed memory becomes a "hole" — an island of availability in an ocean of allocated space — and managing these holes effectively becomes a central challenge of memory management.
Hole management is the art and science of tracking, organizing, and utilizing these free memory regions. Done poorly, it leads to fragmented, unusable memory and degraded system performance. Done well, it enables high memory utilization and responsive allocation. This page explores the techniques that make effective hole management possible.
By the end of this page, you will understand: how holes are created and tracked, the data structures used for hole management, strategies for maintaining efficient search and allocation, the critical process of hole coalescing (merging), and the relationship between hole management and system performance.
Understanding hole management begins with how holes come into existence, transform, and eventually disappear. Each hole has a lifecycle that mirrors the dynamic nature of process execution.
Hole Creation:

Holes are created through several mechanisms:

- Process termination: a departing process releases its entire partition as a new hole.
- Partial deallocation: a process shrinks its allocation, freeing the tail of its region.
- Initial state: before any process is loaded, all free memory forms one large hole.

Hole Transformation:

Once created, holes can transform:

- Splitting: an allocation carves a request out of a hole, leaving a smaller remainder hole.
- Growing: coalescing merges a hole with a newly freed adjacent region.

Hole Elimination:

Holes disappear when:

- An allocation consumes a hole exactly, leaving no remainder.
- Coalescing absorbs a hole into a larger neighboring hole.
- Compaction consolidates all free space into a single region.
Hole State Transitions in Practice:
Consider a system with 100 KB of free memory. As processes arrive and depart, the hole structure evolves:
| Time | Event | Hole Structure | Total Free |
|---|---|---|---|
| T0 | Initial state | [100 KB] | 100 KB |
| T1 | Process A (40 KB) arrives | [60 KB] | 60 KB |
| T2 | Process B (30 KB) arrives | [30 KB] | 30 KB |
| T3 | Process A terminates | [40 KB], [30 KB] | 70 KB |
| T4 | Process C (20 KB) arrives | [20 KB], [30 KB] | 50 KB |
| T5 | Process B terminates | [20 KB], [30 KB], [30 KB]* | 80 KB |
| T6 | Coalescing | [20 KB], [60 KB] | 80 KB |
*Before coalescing, the two 30 KB holes (B's freed partition and the adjacent original hole) exist separately; T6 merges them into a single 60 KB hole.
Notice how the same total free memory can support different maximum allocation sizes depending on fragmentation. At T3, we have 70 KB free but the largest single hole is only 40 KB.
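The T1 and T4 transitions are hole splits: the allocator carves the request out of an existing hole and keeps the remainder as a smaller hole. A minimal sketch of the split step, using a hypothetical `Hole` record with just a start and size (not the full listing that appears later on this page):

```c
#include <stdint.h>

typedef struct Hole { uint32_t start, size; } Hole;

/* Carve `request` bytes from the front of `hole` (caller has already
 * verified request <= hole->size). Returns the allocated region's start.
 * A remainder of 0 means the hole was consumed exactly and should be
 * removed from the free list. */
uint32_t split_hole(Hole* hole, uint32_t request) {
    uint32_t alloc_start = hole->start;
    hole->start += request;   // Remainder begins just past the allocation
    hole->size  -= request;   // Remainder shrinks; 0 => hole eliminated
    return alloc_start;
}
```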
Efficient hole management requires data structures that support several key operations:

- Insert: record a newly created hole.
- Search: find a hole satisfying an allocation request (first-fit, best-fit, and so on).
- Delete: remove a hole once it is fully allocated.
- Coalesce: locate and merge adjacent holes.
- Query: report statistics such as total free memory and largest hole.

Each data structure offers different trade-offs among these operations; the complexity table below the next listing summarizes them.
The Free List Approach:
The most straightforward hole tracking mechanism is the free list — a linked list where each node represents a hole with its starting address and size.
Ordering Strategies:

- Address-ordered: holes sorted by start address; a hole's list neighbors are also its memory neighbors, making coalescing cheap.
- Size-ordered: holes sorted by size; speeds up best-fit searches.
- LIFO: newly freed holes pushed at the head; cheapest insertion.

Implementation Details:

The listing below shows an address-ordered, doubly-linked free list with cached statistics.
```c
/* Address-ordered free list for hole management */

typedef struct Hole {
    uint32_t start;          // Starting address
    uint32_t size;           // Size in bytes
    struct Hole* next;       // Next hole (in address order)
    struct Hole* prev;       // Previous hole (for O(1) removal)
} Hole;

typedef struct FreeList {
    Hole* head;
    Hole* tail;
    uint32_t hole_count;
    uint32_t total_free;
    uint32_t largest_hole;   // Cached for quick query
} FreeList;

// Insert hole maintaining address order
void insert_hole_ordered(FreeList* list, uint32_t start, uint32_t size) {
    Hole* new_hole = (Hole*)malloc(sizeof(Hole));
    new_hole->start = start;
    new_hole->size = size;

    // Find insertion point (address order)
    Hole* current = list->head;
    Hole* prev = NULL;
    while (current != NULL && current->start < start) {
        prev = current;
        current = current->next;
    }

    // Link the new hole
    new_hole->next = current;
    new_hole->prev = prev;
    if (prev == NULL) {
        list->head = new_hole;
    } else {
        prev->next = new_hole;
    }
    if (current != NULL) {
        current->prev = new_hole;
    } else {
        list->tail = new_hole;
    }

    // Update statistics
    list->hole_count++;
    list->total_free += size;
    if (size > list->largest_hole) {
        list->largest_hole = size;
    }

    // Attempt to coalesce with neighbors
    coalesce(list, new_hole);
}
```

| Operation | Address-Ordered | Size-Ordered | LIFO |
|---|---|---|---|
| Insert | O(n) — find position | O(n) — find position | O(1) — insert at head |
| First-Fit Search | O(n) | O(n) | O(n) |
| Best-Fit Search | O(n) | O(1)* or O(log n)** | O(n) |
| Delete (given pointer) | O(1) with doubly-linked | O(1) with doubly-linked | O(1) |
| Coalesce | O(1) — neighbors in list | O(n) — find neighbors | O(n) — find neighbors |

*O(1) if the allocator keeps segregated lists for exact size classes. **O(log n) with a balanced tree keyed on hole size.
Coalescing (or merging) is the process of combining adjacent free holes into a single, larger hole. This operation is critical for combating fragmentation and maintaining the largest possible contiguous free regions.
Why Coalescing Matters:
Without coalescing, memory would eventually degrade into a landscape of tiny, unusable holes. Even if total free memory is substantial, no single hole would be large enough to satisfy larger allocation requests. Coalescing recovers this fragmented capacity.
The Coalescing Operation:
When a hole is created (through deallocation), the memory manager checks whether the new hole is adjacent to existing holes. Two holes are adjacent if:
Hole1.start + Hole1.size == Hole2.start (Hole1 immediately precedes Hole2)
If adjacency is found, the holes are merged:
Merged.start = min(Hole1.start, Hole2.start)
Merged.size = Hole1.size + Hole2.size
```c
/* Bidirectional coalescing in address-ordered free list */

typedef struct Hole {
    uint32_t start;
    uint32_t size;
    struct Hole* prev;   // Previous hole in address order
    struct Hole* next;   // Next hole in address order
} Hole;

/**
 * Coalesce a hole with its neighbors if adjacent.
 * Modifies the free list in place and frees merged hole structures.
 * Returns the final merged hole (may be the input or a neighbor).
 */
Hole* coalesce(FreeList* list, Hole* hole) {
    Hole* result = hole;

    // Check and merge with previous hole
    if (hole->prev != NULL) {
        Hole* prev_hole = hole->prev;
        // Check if previous hole is adjacent
        if (prev_hole->start + prev_hole->size == hole->start) {
            // Merge: extend previous to include current
            prev_hole->size += hole->size;
            // Unlink current hole
            prev_hole->next = hole->next;
            if (hole->next != NULL) {
                hole->next->prev = prev_hole;
            } else {
                list->tail = prev_hole;
            }
            // Free the absorbed hole structure
            free(hole);
            hole = prev_hole;
            result = prev_hole;
            list->hole_count--;
        }
    }

    // Check and merge with next hole
    if (hole->next != NULL) {
        Hole* next_hole = hole->next;
        // Check if next hole is adjacent
        if (hole->start + hole->size == next_hole->start) {
            // Merge: extend current to include next
            hole->size += next_hole->size;
            // Unlink next hole
            hole->next = next_hole->next;
            if (next_hole->next != NULL) {
                next_hole->next->prev = hole;
            } else {
                list->tail = hole;
            }
            // Free the absorbed hole structure
            free(next_hole);
            list->hole_count--;
        }
    }

    // Update largest hole cache if necessary
    if (result->size > list->largest_hole) {
        list->largest_hole = result->size;
    }

    return result;
}

/*
 * Coalescing Example:
 *
 * Before:  [P1]---[HOLE: 20K]---[P2]---[HOLE: 30K]---[P3]
 * Address:  0K     20K           40K    60K           90K
 *
 * Event: P2 terminates (releases 40K-60K, a 20K block)
 *        New hole created at 40K, size 20K
 *
 * After Step 1 (check prev):
 *   Prev hole ends at 40K, new hole starts at 40K -> ADJACENT
 *   Merge: [HOLE: 20K + 20K = 40K] from 20K to 60K
 *
 * After Step 2 (check next):
 *   Merged hole ends at 60K, next hole starts at 60K -> ADJACENT
 *   Merge: [HOLE: 40K + 30K = 70K] from 20K to 90K
 *
 * Result:  [P1]---[HOLE: 70K]---[P3]
 * Address:  0K     20K           90K
 */
```

Without address-ordered data structures, finding a hole's neighbors requires O(n) search. Boundary tags solve this by including size information at both the start and end of each block. When freeing a block, the system reads the tag of the preceding block to check if it's free — enabling O(1) backward coalescing without traversing any list.
Boundary tags are an ingenious technique that embeds metadata within the memory blocks themselves, enabling O(1) coalescing without explicit neighbor pointers or sorted data structures.
The Problem with Pointer-Based Coalescing:
In a simple free list, when we free a block, we know the block's address and size. We can easily check the next block (at address + size), but checking the previous block requires either:

- an O(n) scan of the free list for a hole that ends exactly at our start address, or
- an address-ordered structure with predecessor links, which costs O(n) on every insertion.
The Boundary Tag Solution:
Every block (allocated or free) includes a footer tag in addition to a header tag. The footer contains the block's size, allowing us to calculate the previous block's start address:
Previous block's footer address = Current block start - sizeof(footer)
Previous block's size = *footer
Previous block's start = Current block start - Previous block's size
```c
/* Boundary Tag Memory Block Layout */

/*
 * Block Layout:
 *   +------------------+
 *   |      Header      |  ← size (31 bits) + allocated flag (1 bit)
 *   +------------------+
 *   |                  |
 *   |     Payload      |  ← User data (when allocated)
 *   |       or         |    Prev/Next pointers (when free)
 *   |    Free Links    |
 *   |                  |
 *   +------------------+
 *   |      Footer      |  ← size (31 bits) + allocated flag (1 bit)
 *   +------------------+
 */

typedef uint32_t word_t;

#define HEADER_SIZE sizeof(word_t)
#define FOOTER_SIZE sizeof(word_t)
#define OVERHEAD    (HEADER_SIZE + FOOTER_SIZE)

// Pack size and allocation bit
#define PACK(size, alloc) ((size) | (alloc))

// Extract size (mask off allocation bit)
#define GET_SIZE(header) ((header) & ~0x7)

// Check allocation status
#define IS_ALLOCATED(header) ((header) & 0x1)

// Get header from block pointer
#define GET_HEADER(bp) ((word_t*)((char*)(bp) - HEADER_SIZE))

// Get footer from block pointer (using header's size)
#define GET_FOOTER(bp) ((word_t*)((char*)(bp) + GET_SIZE(*GET_HEADER(bp)) - OVERHEAD))

// Navigate to adjacent blocks
#define NEXT_BLOCK(bp) ((char*)(bp) + GET_SIZE(*GET_HEADER(bp)))
#define PREV_BLOCK(bp) ((char*)(bp) - GET_SIZE(*((word_t*)((char*)(bp) - OVERHEAD))))

/**
 * Free a block with immediate coalescing using boundary tags.
 * Returns pointer to the (possibly merged) free block.
 */
void* free_with_coalesce(void* bp) {
    word_t size = GET_SIZE(*GET_HEADER(bp));

    // Check adjacent blocks
    int prev_alloc = IS_ALLOCATED(*GET_HEADER(PREV_BLOCK(bp)));
    int next_alloc = IS_ALLOCATED(*GET_HEADER(NEXT_BLOCK(bp)));

    if (prev_alloc && next_alloc) {
        // Case 1: Both neighbors allocated - just mark as free
        *GET_HEADER(bp) = PACK(size, 0);
        *GET_FOOTER(bp) = PACK(size, 0);
    } else if (prev_alloc && !next_alloc) {
        // Case 2: Merge with next block
        size += GET_SIZE(*GET_HEADER(NEXT_BLOCK(bp)));
        remove_from_free_list(NEXT_BLOCK(bp));
        *GET_HEADER(bp) = PACK(size, 0);
        *GET_FOOTER(bp) = PACK(size, 0);
    } else if (!prev_alloc && next_alloc) {
        // Case 3: Merge with previous block
        size += GET_SIZE(*GET_HEADER(PREV_BLOCK(bp)));
        remove_from_free_list(PREV_BLOCK(bp));
        bp = PREV_BLOCK(bp);
        *GET_HEADER(bp) = PACK(size, 0);
        *GET_FOOTER(bp) = PACK(size, 0);
    } else {
        // Case 4: Merge with both neighbors
        size += GET_SIZE(*GET_HEADER(PREV_BLOCK(bp)));
        size += GET_SIZE(*GET_HEADER(NEXT_BLOCK(bp)));
        remove_from_free_list(PREV_BLOCK(bp));
        remove_from_free_list(NEXT_BLOCK(bp));
        bp = PREV_BLOCK(bp);
        *GET_HEADER(bp) = PACK(size, 0);
        *GET_FOOTER(bp) = PACK(size, 0);
    }

    // Add merged block to free list
    add_to_free_list(bp);
    return bp;
}
```

Memory Overhead Analysis:
Boundary tags add 2 words (typically 8 bytes on 32-bit, 16 bytes on 64-bit) per block:
| Block Size | Payload | Overhead | Overhead % |
|---|---|---|---|
| 16 bytes | 8 bytes | 8 bytes | 50% |
| 64 bytes | 56 bytes | 8 bytes | 12.5% |
| 256 bytes | 248 bytes | 8 bytes | 3.1% |
| 4096 bytes | 4088 bytes | 8 bytes | 0.2% |
For small allocations, boundary tag overhead is significant, so many allocators optimize it away:
Production allocators like glibc's ptmalloc2 eliminate footer overhead for allocated blocks. The key insight: we only need to know if the previous block is free (for coalescing). This single bit can be stored in the current block's header (since size values are always aligned, some low-order bits are always zero and can hold flags).
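A sketch of the idea, with hypothetical flag names loosely modeled on that design (glibc's version of the low bit is called PREV_INUSE): with 8-byte alignment the three low bits of any size value are zero, so they can carry flags instead.

```c
#include <stdint.h>

/* Hypothetical flag layout: low bits of the header word carry state. */
#define THIS_ALLOCATED 0x1u   // This block is allocated
#define PREV_FREE      0x2u   // The block just before this one is free

#define HDR_SIZE(h)      ((h) & ~0x7u)          // Mask off the flag bits
#define HDR_PREV_FREE(h) (((h) & PREV_FREE) != 0)

/* Only free blocks keep a footer. When PREV_FREE is set, the word
 * immediately below this header is the previous (free) block's footer,
 * so backward coalescing still works; allocated blocks pay no footer. */
static inline uint32_t prev_free_block_size(const uint32_t* header) {
    return HDR_PREV_FREE(*header) ? HDR_SIZE(*(header - 1)) : 0;
}
```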
The organization of the free list directly impacts allocation speed, fragmentation patterns, and memory utilization. Several strategies exist, each optimized for different workload characteristics.
Organization Strategy Options:
| Strategy | Insertion Order | Search Pattern | Coalesce Efficiency | Best Use Case |
|---|---|---|---|---|
| Address-Ordered | O(n) - find position | O(n) - scan until fit | O(1) - neighbors adjacent | Workloads needing frequent coalescing |
| LIFO (Stack) | O(1) - insert at head | O(n) - scan until fit | O(n) - neighbors scattered | Short-lived allocations, hot cache |
| FIFO (Queue) | O(1) - insert at tail | O(n) - scan from head | O(n) - neighbors scattered | Fairness across hole ages |
| Size-Ordered | O(n) - find position | O(1)/O(log n) - best-fit | O(n) - neighbors not adjacent | Workloads dominated by best-fit needs |
LIFO (Last-In, First-Out) Organization:
Newly freed blocks are inserted at the front of the free list, as the listing below shows.

Advantages:

- O(1) insertion with no position search.
- Temporal locality: recently freed blocks are reused first, while their memory is likely still cache-resident.

Disadvantages:

- A hole's memory neighbors are scattered through the list, so coalescing requires an O(n) search (or boundary tags).
- Address order is lost, which tends to worsen fragmentation over time.
```c
/* LIFO (Stack) organization for free list */

typedef struct Block {
    uint32_t size;
    struct Block* next;
    struct Block* prev;
} Block;

static Block* free_list_head = NULL;

// O(1) insertion at head
void free_lifo(void* ptr) {
    Block* block = (Block*)((char*)ptr - sizeof(Block));

    // Insert at head
    block->next = free_list_head;
    block->prev = NULL;
    if (free_list_head != NULL) {
        free_list_head->prev = block;
    }
    free_list_head = block;

    // Note: Coalescing requires O(n) search to find neighbors
    // Alternative: Use boundary tags for O(1) coalescing
}

// First-fit allocation from LIFO list
void* alloc_lifo(uint32_t size) {
    Block* current = free_list_head;
    while (current != NULL) {
        if (current->size >= size) {
            // Remove from list
            if (current->prev != NULL) {
                current->prev->next = current->next;
            } else {
                free_list_head = current->next;
            }
            if (current->next != NULL) {
                current->next->prev = current->prev;
            }
            return (char*)current + sizeof(Block);
        }
        current = current->next;
    }
    return NULL;  // No fit found
}
```

Address-Ordered Organization:
Blocks are maintained in sorted order by memory address.

Advantages:

- A hole's list neighbors are also its memory neighbors, so coalescing needs no searching.
- First-fit over an address-ordered list tends to pack allocations toward low addresses, preserving larger holes elsewhere.

Disadvantages:

- Insertion requires an O(n) scan to find the correct position.
- Best-fit searches still examine the entire list.
Hybrid Approaches:
Production allocators often combine strategies:

- Segregated quick lists (LIFO) for small, common sizes, backed by an address-ordered general list for everything else.
- Deferred coalescing: blocks accumulate uncoalesced in quick lists and are merged in batches when a large request fails.
Some allocators use a circular free list with a roving pointer. Instead of always starting search from the head (which biases allocation toward certain memory regions), the search begins where the last allocation ended. This distributes allocations more evenly across memory, reducing fragmentation clustering.
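A sketch of next-fit search over such a circular list (the `Hole` type and the unlinking step are simplified assumptions; the rover simply resumes where the previous search stopped):

```c
#include <stdint.h>
#include <stddef.h>

typedef struct Hole {
    uint32_t start, size;
    struct Hole* next;   // Circular: the last hole links back to the first
} Hole;

static Hole* rover = NULL;   // Where the last search left off

/* Next-fit: scan at most one full circle starting from the rover. */
Hole* next_fit(uint32_t request) {
    if (rover == NULL) return NULL;
    Hole* origin = rover;
    do {
        if (rover->size >= request) {
            Hole* found = rover;
            rover = rover->next;   // Next search starts past this hole
            return found;          // Caller splits/unlinks the hole
        }
        rover = rover->next;
    } while (rover != origin);     // Came full circle: no hole fits
    return NULL;
}
```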
Effective hole management requires visibility into the state of free memory. Operating systems maintain various statistics that inform allocation decisions, trigger maintenance operations, and enable performance analysis.
Key Hole Statistics:
```c
/* Hole statistics tracking and analysis */

typedef struct HoleStats {
    uint32_t total_free;      // Sum of all hole sizes
    uint32_t largest_hole;    // Maximum hole size
    uint32_t smallest_hole;   // Minimum hole size
    uint32_t hole_count;      // Number of holes

    // Distribution histogram (power-of-2 size classes)
    uint32_t size_histogram[16];

    // Derived metrics (computed on demand or periodically)
    float fragmentation_index;
    float average_hole_size;
} HoleStats;

// Update statistics after hole list modification
void update_hole_statistics(FreeList* list, HoleStats* stats) {
    stats->total_free = 0;
    stats->largest_hole = 0;
    stats->smallest_hole = UINT32_MAX;
    stats->hole_count = 0;
    memset(stats->size_histogram, 0, sizeof(stats->size_histogram));

    Hole* current = list->head;
    while (current != NULL) {
        stats->total_free += current->size;
        stats->hole_count++;

        if (current->size > stats->largest_hole) {
            stats->largest_hole = current->size;
        }
        if (current->size < stats->smallest_hole) {
            stats->smallest_hole = current->size;
        }

        // Update histogram
        int size_class = compute_size_class(current->size);
        stats->size_histogram[size_class]++;

        current = current->next;
    }

    // Compute derived metrics
    if (stats->total_free > 0) {
        stats->fragmentation_index =
            1.0f - ((float)stats->largest_hole / (float)stats->total_free);
        stats->average_hole_size =
            (float)stats->total_free / (float)stats->hole_count;
    } else {
        stats->fragmentation_index = 0.0f;
        stats->average_hole_size = 0.0f;
    }
}

// Decision support: should we compact?
bool should_compact(HoleStats* stats, uint32_t requested_size) {
    // Compact if:
    //   1. Request can't be satisfied by largest hole, AND
    //   2. Total free is sufficient, AND
    //   3. Fragmentation index exceeds threshold
    return (stats->largest_hole < requested_size) &&
           (stats->total_free >= requested_size) &&
           (stats->fragmentation_index > 0.5f);
}

// Print diagnostic report
void print_hole_report(HoleStats* stats) {
    printf("=== Hole Statistics Report ===\n");
    printf("Total Free Memory:   %u bytes\n", stats->total_free);
    printf("Largest Hole:        %u bytes\n", stats->largest_hole);
    printf("Smallest Hole:       %u bytes\n", stats->smallest_hole);
    printf("Number of Holes:     %u\n", stats->hole_count);
    printf("Average Hole Size:   %.2f bytes\n", stats->average_hole_size);
    printf("Fragmentation Index: %.2f%%\n", stats->fragmentation_index * 100);

    printf("\nSize Distribution:\n");
    for (int i = 0; i < 16; i++) {
        if (stats->size_histogram[i] > 0) {
            printf("  [%u - %u]: %u holes\n",
                   1 << (i + 4), (1 << (i + 5)) - 1,
                   stats->size_histogram[i]);
        }
    }
}
```

Monitoring and Alerting:
Production systems often implement threshold-based alerts:
| Metric | Warning Threshold | Critical Threshold | Action |
|---|---|---|---|
| Total Free | < 20% of total | < 10% of total | Consider swapping |
| Fragmentation Index | > 0.5 | > 0.75 | Schedule compaction |
| Largest Hole | < 50% of average request | < 25% of average request | Aggressive coalescing |
| Hole Count | > 1000 | > 5000 | Consider memory pool reset |
These thresholds help operators anticipate memory problems before they become critical, enabling proactive management rather than reactive crisis response.
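As a sketch, the thresholds above map directly onto a periodic check over the `HoleStats` struct from the earlier listing (the `alert` sink and the parameters are illustrative assumptions):

```c
#include <stdint.h>

void alert(const char* message);   // Hypothetical alert sink

void check_memory_alerts(const HoleStats* s, uint32_t total_memory,
                         uint32_t average_request) {
    if (s->total_free < total_memory / 10)
        alert("CRITICAL: free memory below 10% - consider swapping");
    else if (s->total_free < total_memory / 5)
        alert("WARNING: free memory below 20%");

    if (s->fragmentation_index > 0.75f)
        alert("CRITICAL: fragmentation above 0.75 - schedule compaction");
    else if (s->fragmentation_index > 0.5f)
        alert("WARNING: fragmentation above 0.5");

    if (s->largest_hole < average_request / 4)
        alert("CRITICAL: largest hole below 25% of average request");

    if (s->hole_count > 5000)
        alert("CRITICAL: hole count above 5000 - consider pool reset");
}
```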
Hole management operations occur frequently in active systems — potentially thousands of times per second. Performance optimization is essential to prevent memory management from becoming a system bottleneck.
Key Performance Factors:

- Search length: how many holes must be examined per allocation.
- Cache locality: whether list traversal touches scattered cache lines.
- Synchronization: lock contention on shared free lists in multi-threaded systems.
- Coalescing cost: immediate merging on every free versus deferred batch merging.

Optimization Techniques:
1. Caching the Largest Hole:
Maintain a cached pointer to the largest hole. Update on insertions that create larger holes, but invalidate (set to NULL) when the largest hole is allocated from. Rebuild cache lazily on next query.
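A sketch of that lazy scheme, reusing the `FreeList` and `Hole` types from the first listing (the separate `valid` flag is an assumption, not part of the earlier struct):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    FreeList* list;     // The list whose largest hole we cache
    uint32_t  largest;  // Cached maximum hole size
    bool      valid;    // Cleared when the largest hole is allocated from
} LargestHoleCache;

/* O(1) while the cache is valid; one O(n) scan rebuilds it otherwise. */
uint32_t query_largest_hole(LargestHoleCache* c) {
    if (!c->valid) {
        c->largest = 0;
        for (Hole* h = c->list->head; h != NULL; h = h->next) {
            if (h->size > c->largest) c->largest = h->size;
        }
        c->valid = true;
    }
    return c->largest;
}
```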
2. Quick Lists for Common Sizes:
Maintain separate LIFO lists for commonly requested sizes. Allocations check the quick list first (O(1)), falling back to general search only if empty.
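A sketch of the lookup path, assuming eight exact-size classes of 16 to 128 bytes (the class mapping, `payload_of`, and `general_alloc` are illustrative assumptions):

```c
#include <stdint.h>
#include <stddef.h>

#define QUICK_CLASSES 8   // Exact sizes 16, 32, ..., 128 bytes

typedef struct QBlock { struct QBlock* next; } QBlock;
static QBlock* quick_heads[QUICK_CLASSES];

void* general_alloc(uint32_t size);   // Fallback: general free-list search
void* payload_of(QBlock* block);      // Block header -> user pointer

void* quick_alloc(uint32_t size) {
    // Map size to a class: 16 -> 0, 32 -> 1, ..., 128 -> 7 (multiples only)
    if (size >= 16 && size <= 128 && (size & 15) == 0) {
        int c = (int)(size / 16) - 1;
        if (quick_heads[c] != NULL) {
            QBlock* b = quick_heads[c];   // O(1) pop, no search
            quick_heads[c] = b->next;
            return payload_of(b);
        }
    }
    return general_alloc(size);           // Quick list empty or odd size
}
```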
3. Lock-Free Data Structures:
In multi-threaded systems, lock-free algorithms using atomic operations (CAS — Compare-And-Swap) can dramatically reduce contention:
```c
// Lock-free insertion at head
do {
    new_block->next = free_list_head;
} while (!CAS(&free_list_head, new_block->next, new_block));
```
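For reference, a compilable rendering of the same push using C11 atomics (a sketch; a production pop from such a stack must also deal with the ABA problem, typically via tagged pointers or hazard pointers):

```c
#include <stdatomic.h>

typedef struct Block { struct Block* next; } Block;

static _Atomic(Block*) free_list_head = NULL;

/* Lock-free push of a freed block onto the list head. */
void push_free(Block* block) {
    Block* old_head = atomic_load(&free_list_head);
    do {
        block->next = old_head;   // Tentative link to the observed head
        // On CAS failure, old_head is reloaded with the current head value
    } while (!atomic_compare_exchange_weak(&free_list_head, &old_head, block));
}
```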
4. Thread-Local Caches:
Each thread maintains a private cache of free blocks. Allocations first check the local cache (no synchronization). Deallocation returns to local cache. Periodic rebalancing moves blocks between caches when imbalanced.
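A sketch using C11 thread-local storage (the flush helper and the watermark are illustrative assumptions):

```c
#include <stddef.h>

#define LOCAL_CACHE_LIMIT 64

typedef struct TBlock { struct TBlock* next; } TBlock;

static _Thread_local TBlock* local_cache = NULL;
static _Thread_local int     local_count = 0;

void flush_to_shared_list(TBlock* batch);   // Hypothetical rebalancing step

void cached_free(TBlock* block) {
    block->next = local_cache;              // No synchronization needed
    local_cache = block;
    if (++local_count >= LOCAL_CACHE_LIMIT) {
        flush_to_shared_list(local_cache);  // Hand the whole batch back
        local_cache = NULL;
        local_count = 0;
    }
}

void* cached_alloc(void) {
    if (local_cache == NULL) return NULL;   // Caller falls back to shared path
    TBlock* b = local_cache;                // O(1), thread-private
    local_cache = b->next;
    local_count--;
    return b;
}
```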
On many-core systems (64+ CPUs), centralized free lists become severe bottlenecks. Even with lock-free structures, cache-line bouncing (multiple cores fighting over the same cache line containing the list head) devastates performance. Production allocators like jemalloc and tcmalloc use per-CPU caches and arena-based designs to achieve true scalability.
Hole management is the crucial subsystem that determines how effectively a variable partitioning system utilizes physical memory. The techniques explored in this page form the foundation of all dynamic memory allocators, from simple educational implementations to production-grade systems like glibc's malloc.
What's Next:
With hole management understood, we turn to the fundamental challenge these techniques struggle against: external fragmentation. The next page explores how fragmentation emerges, its theoretical limits (including the famous fifty-percent rule), measurement techniques, and why it ultimately motivated the move beyond contiguous allocation to paging.
You now possess a comprehensive understanding of hole management in variable partitioning systems. This knowledge is directly applicable to understanding modern memory allocators, garbage collectors, and the OS subsystems that manage physical memory frames for virtual memory implementations.