Deallocation — the return of memory from a process to the free pool — is the indispensable complement to allocation. Without effective deallocation, memory would be consumed irreversibly, and systems would require constant reboots. Yet deallocation is often treated as an afterthought, despite being equally critical to system health.
Effective deallocation must accomplish several goals: identify the memory region being freed, validate the operation, update tracking data structures, and coalesce with adjacent free regions to combat fragmentation. Each step presents challenges that, if handled poorly, lead to memory leaks, fragmentation, corruption, or crashes.
This page explores deallocation in depth, from basic mechanics to advanced techniques used in production systems.
By the end of this page, you will understand: the complete deallocation process from request to completion, how coalescing is performed to merge adjacent holes, data structure updates required during deallocation, validation and error handling for robust operation, common bugs and pitfalls in deallocation code, and performance optimization techniques.
When a process terminates or explicitly releases memory, the operating system must reclaim the corresponding partition. This involves multiple coordinated steps, each crucial for maintaining system integrity.
Step-by-Step Deallocation:
Step 1: Request Reception
Deallocation begins when a process terminates (the OS reclaims all of its memory) or when a process explicitly releases memory (e.g., via free() or sbrk(-n)).
Step 2: Address Validation
The memory manager must verify that the provided address is valid: within the user memory range, properly aligned, and actually allocated to the requesting process.
Step 3: Partition Lookup
Locate the partition metadata corresponding to the address, typically by searching the allocated list.
Step 4: Ownership Release
Remove the partition from the allocated list and update any per-process memory tracking.
Step 5: Hole Creation
Convert the partition into a hole: a free-list entry covering the same address range.
Step 6: Coalescing
Check for adjacent free holes and merge them if found.
Step 7: Statistics Update
Update system memory statistics: total free memory, allocation and hole counts, and any cached largest-hole value.
```c
/* Complete deallocation implementation */

typedef struct Partition {
    uint32_t start;
    uint32_t size;
    pid_t owner;
    struct Partition* next;
    struct Partition* prev;
} Partition;

typedef struct Hole {
    uint32_t start;
    uint32_t size;
    struct Hole* next;
    struct Hole* prev;
} Hole;

typedef struct MemoryManager {
    Partition* allocated_list;   // Sorted by address
    Hole* free_list;             // Sorted by address
    uint32_t user_base;          // Lowest valid user address
    uint32_t user_limit;         // One past the highest valid user address
    uint32_t total_memory;
    uint32_t total_free;
    uint32_t allocated_count;
    uint32_t hole_count;
} MemoryManager;

/**
 * Deallocate a partition and return it to the free pool.
 *
 * @param mgr     Memory manager state
 * @param address Starting address of partition to free
 * @param pid     Process ID requesting the deallocation
 * @return 0 on success, error code on failure
 */
int deallocate(MemoryManager* mgr, uint32_t address, pid_t pid) {
    // Step 2: Validate address
    if (address < mgr->user_base || address >= mgr->user_limit) {
        return ERR_INVALID_ADDRESS;
    }

    // Step 3: Find the partition
    Partition* part = find_partition_by_address(mgr->allocated_list, address);
    if (part == NULL) {
        return ERR_NOT_ALLOCATED;   // Address not in allocated list
    }

    // Step 2b: Verify ownership
    if (part->owner != pid && pid != KERNEL_PID) {
        return ERR_PERMISSION_DENIED;
    }

    uint32_t freed_size = part->size;

    // Step 4: Remove from allocated list
    remove_from_allocated_list(mgr, part);

    // Step 5: Create hole entry
    Hole* new_hole = create_hole(address, freed_size);

    // Step 6: Insert and coalesce
    insert_hole_with_coalesce(mgr, new_hole);

    // Step 7: Update statistics
    mgr->total_free += freed_size;
    mgr->allocated_count--;

    return SUCCESS;
}

/**
 * Find partition by starting address.
 * O(n) linear search; because the list is address-sorted, the
 * search can exit early once it passes the target address.
 */
Partition* find_partition_by_address(Partition* list, uint32_t address) {
    Partition* current = list;
    while (current != NULL) {
        if (current->start == address) {
            return current;
        }
        if (current->start > address) {
            // Passed the expected position (address-sorted list)
            return NULL;
        }
        current = current->next;
    }
    return NULL;
}

/**
 * Remove partition from allocated list.
 * O(1) with a doubly-linked list.
 */
void remove_from_allocated_list(MemoryManager* mgr, Partition* part) {
    if (part->prev != NULL) {
        part->prev->next = part->next;
    } else {
        mgr->allocated_list = part->next;
    }
    if (part->next != NULL) {
        part->next->prev = part->prev;
    }
    free(part);
}
```

When a process terminates, the OS deallocates all its partitions automatically. When a process explicitly frees memory (e.g., shrinking the heap), only the specified region is freed. Both follow the same core logic but differ in scope and triggering mechanism.
Coalescing (merging adjacent holes) is the most critical step in deallocation. Without it, memory would fragment into countless tiny, useless holes. Effective coalescing is what enables variable partitioning to work at all.
The Coalescing Cases:
When a partition P is freed, creating hole H, there are four possible coalescing scenarios:
Case 1: No Adjacent Holes
[Allocated A] [H: Freed] [Allocated B]
H remains a standalone hole. No merging possible.
Case 2: Hole Before Only
[Hole X] [H: Freed] [Allocated B]
Merge X and H into single hole: size = X.size + H.size, start = X.start
Case 3: Hole After Only
[Allocated A] [H: Freed] [Hole Y]
Merge H and Y into single hole: size = H.size + Y.size, start = H.start
Case 4: Holes on Both Sides
[Hole X] [H: Freed] [Hole Y]
Merge all three: size = X.size + H.size + Y.size, start = X.start
```c
/* Four-case coalescing implementation */

/**
 * Insert a new hole into the free list and coalesce with neighbors.
 * Assumes the free list is sorted by address.
 *
 * @param mgr  Memory manager
 * @param hole Newly freed hole to insert
 */
void insert_hole_with_coalesce(MemoryManager* mgr, Hole* hole) {
    uint32_t hole_start = hole->start;
    uint32_t hole_end = hole->start + hole->size;

    // Find insertion point (address-ordered)
    Hole* prev = NULL;
    Hole* next = mgr->free_list;
    while (next != NULL && next->start < hole_start) {
        prev = next;
        next = next->next;
    }

    // Determine adjacency
    bool prev_adjacent = (prev != NULL) && (prev->start + prev->size == hole_start);
    bool next_adjacent = (next != NULL) && (hole_end == next->start);

    if (prev_adjacent && next_adjacent) {
        // Case 4: Both adjacent - merge prev, hole, and next
        prev->size += hole->size + next->size;
        prev->next = next->next;
        if (next->next != NULL) {
            next->next->prev = prev;
        }
        free(next);
        free(hole);
        mgr->hole_count -= 1;   // Two existing holes became one (net -1)
    } else if (prev_adjacent) {
        // Case 2: Only prev adjacent - extend prev
        prev->size += hole->size;
        free(hole);
        // hole_count unchanged
    } else if (next_adjacent) {
        // Case 3: Only next adjacent - extend next backward
        next->start = hole->start;
        next->size += hole->size;
        free(hole);
        // hole_count unchanged
    } else {
        // Case 1: No adjacent holes - insert as new hole
        hole->prev = prev;
        hole->next = next;
        if (prev != NULL) {
            prev->next = hole;
        } else {
            mgr->free_list = hole;
        }
        if (next != NULL) {
            next->prev = hole;
        }
        mgr->hole_count++;
    }

    // Update largest hole if necessary
    update_largest_hole_cache(mgr);
}

/*
 * Visual example of Case 4:
 *
 * Before deallocation:
 *   |--Hole X--|--Part P--|--Hole Y--|
 *      100 KB     50 KB      80 KB
 *
 * After freeing partition P:
 *   |--Hole X--|--H: freed--|--Hole Y--|
 *      100 KB      50 KB       80 KB
 *
 * After coalescing (Case 4):
 *   |---------- Merged Hole ----------|
 *                 230 KB
 *
 * Result: 3 separate blocks become 1 large hole
 */
```

Coalescing Complexity Analysis:
| Data Structure | Find Neighbors | Merge Operation | Total |
|---|---|---|---|
| Address-sorted list | O(n) insert point | O(1) | O(n) |
| Address-sorted list with boundary tags | O(1) | O(1) | O(1) |
| Two trees (address + size) | O(log n) | O(log n) | O(log n) |
| Bitmap | O(1) | O(1) | O(1) |
Boundary Tags for O(1) Coalescing:
With boundary tags embedded in the blocks themselves, we can achieve O(1) coalescing: each block carries its size and status in both a header and a footer, so the allocator can inspect its physical neighbors directly (see the sketch below).
No list traversal required — adjacency is determined purely by memory layout.
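To make the layout concrete, here is a minimal sketch of boundary-tag neighbor lookup; the one-word `word_t` tag and the helper names are assumptions for illustration, matching the fuller `bt_free` example later on this page.

```c
#include <stdint.h>

typedef uint32_t word_t;   // One boundary tag = one 32-bit word (assumed)

/* Block layout, where size counts header + payload + footer:
 *
 *   ... | prev footer | header | payload ... | footer | next header | ...
 *
 * The low bit of each tag is the allocated flag; the rest is the size.
 */

static inline word_t* own_header(void* payload) {
    return (word_t*)payload - 1;     // Tag sits just before the payload
}

static inline word_t* prev_footer(void* payload) {
    return (word_t*)payload - 2;     // Previous block's footer abuts our header
}

static inline word_t* next_header(void* payload, uint32_t size) {
    return (word_t*)((char*)own_header(payload) + size);   // Just past our footer
}
```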
Immediate coalescing (as shown) merges holes at every deallocation. Deferred coalescing batches the work: holes are added to a list without merging, and a periodic sweep consolidates them. Deferred coalescing can be more efficient when allocations and deallocations of similar sizes cluster temporally — the same region may be reallocated before coalescing matters.
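A minimal sketch of a deferred-coalescing wrapper, reusing the `insert_hole_no_coalesce`, `make_hole`, and `coalesce_all_adjacent` helpers that appear in the bulk-deallocation example later on this page; the threshold value is an assumption.

```c
#define DEFER_THRESHOLD 64   // Sweep after this many unmerged frees (tunable)

static int pending_frees = 0;

// Free a region without merging; batch the coalescing work for later.
void deferred_free(MemoryManager* mgr, uint32_t start, uint32_t size) {
    insert_hole_no_coalesce(mgr, make_hole(start, size));
    mgr->total_free += size;

    // One O(n) sweep amortizes the merge cost across many frees.
    if (++pending_frees >= DEFER_THRESHOLD) {
        coalesce_all_adjacent(mgr);
        pending_frees = 0;
    }
}
```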
Deallocation involves updating multiple data structures, and these updates must remain consistent. Failure to properly maintain data structures leads to memory corruption, leaks, or crashes.
Data Structures Affected: the allocated partition list, the free (hole) list, per-process allocation tables, any auxiliary lookup bitmap, and cached statistics such as total free memory and the largest hole.
Consistency Requirements:
| Invariant | Description | Violated When |
|---|---|---|
| No overlap | No two blocks overlap in address range | Merge / remove logic has bugs |
| Complete coverage | Every address is in exactly one block | Block removed without adding hole |
| Sorted order | Lists remain correctly sorted | Insert at wrong position |
| Size consistency | Block size matches end - start | Size not updated after merge |
| Cross-reference integrity | Multiple views agree | One structure updated, other not |
```c
/* Deallocation with comprehensive data structure updates */

typedef struct ProcessMemory {
    pid_t pid;
    uint32_t base;    // Base of partition
    uint32_t limit;   // Size of partition
    struct ProcessMemory* next;
} ProcessMemory;

typedef struct ProcessTable {
    ProcessMemory* allocations[MAX_PROCESSES];
    uint32_t allocation_count[MAX_PROCESSES];
    uint32_t total_allocated[MAX_PROCESSES];
} ProcessTable;

typedef struct MemoryManager {
    // Core allocation structures
    Partition* allocated_list;
    Hole* free_list;

    // Per-process tracking
    ProcessTable* process_table;

    // Statistics
    uint32_t total_memory;
    uint32_t total_free;
    uint32_t largest_hole;
    uint32_t hole_count;
    uint32_t allocated_count;

    // Optional: bitmap for fast lookup
    uint8_t* bitmap;
    uint32_t bitmap_unit_size;
} MemoryManager;

/**
 * Complete deallocation with all data structure updates.
 */
int full_deallocate(MemoryManager* mgr, uint32_t address, pid_t pid) {
    // 1. Validate and find partition
    Partition* part = find_partition(mgr, address);
    if (!part || part->owner != pid) {
        return ERR_INVALID;
    }

    uint32_t start = part->start;
    uint32_t size = part->size;

    // 2. Update process table FIRST (before removing partition)
    remove_from_process_table(mgr->process_table, pid, start);

    // 3. Remove from allocated list
    unlink_partition(mgr, part);

    // 4. Create and insert hole with coalescing
    Hole* hole = make_hole(start, size);
    insert_with_coalesce(mgr, hole);

    // 5. Update bitmap (if present)
    if (mgr->bitmap != NULL) {
        uint32_t start_unit = start / mgr->bitmap_unit_size;
        uint32_t num_units = (size + mgr->bitmap_unit_size - 1) / mgr->bitmap_unit_size;
        clear_bitmap_range(mgr->bitmap, start_unit, num_units);
    }

    // 6. Update statistics
    mgr->total_free += size;
    mgr->allocated_count--;

    // 7. Update largest hole cache
    Hole* largest = find_largest_hole(mgr->free_list);
    mgr->largest_hole = (largest != NULL) ? largest->size : 0;

    // 8. Free partition metadata
    free(part);

    return SUCCESS;
}

/**
 * Remove allocation record from per-process table.
 */
void remove_from_process_table(ProcessTable* pt, pid_t pid, uint32_t address) {
    ProcessMemory** prev = &pt->allocations[pid];
    ProcessMemory* current = pt->allocations[pid];
    while (current != NULL) {
        if (current->base == address) {
            *prev = current->next;
            pt->allocation_count[pid]--;
            pt->total_allocated[pid] -= current->limit;
            free(current);
            return;
        }
        prev = &current->next;
        current = current->next;
    }
}

/**
 * Debug: Verify all data structures are consistent.
 * Call after deallocation to catch bugs.
 */
bool verify_consistency(MemoryManager* mgr) {
    // Check 1: allocated_list + free_list sizes sum to total_memory
    uint32_t alloc_sum = sum_partition_sizes(mgr->allocated_list);
    uint32_t free_sum = sum_hole_sizes(mgr->free_list);
    if (alloc_sum + free_sum != mgr->total_memory) {
        return false;   // Memory leak or double-count
    }

    // Check 2: No overlapping blocks
    if (has_overlaps(mgr)) {
        return false;
    }

    // Check 3: Statistics match actual structures
    if (count_holes(mgr->free_list) != mgr->hole_count) {
        return false;
    }

    // Check 4: No adjacent holes (coalescing worked)
    if (has_adjacent_holes(mgr->free_list)) {
        return false;
    }

    return true;
}
```

Data structure updates should follow a specific order to maintain consistency during failures. Typically: (1) Remove from per-process table, (2) Remove from allocated list, (3) Add to free list. If the system crashes mid-operation, this ordering minimizes the damage — a partition may leak but won't be double-allocated.
Deallocation is a security-critical operation. Improper validation enables dangerous exploits and system instability. A robust deallocation implementation must handle all error cases gracefully.
Validation Checks Required:
```c
/* Comprehensive deallocation validation */

#define ALIGNMENT 8   // Minimum allocation alignment (assumed)

typedef enum {
    DEALLOC_SUCCESS = 0,
    DEALLOC_ERR_NULL_POINTER,
    DEALLOC_ERR_INVALID_ADDRESS,
    DEALLOC_ERR_MISALIGNED,
    DEALLOC_ERR_NOT_ALLOCATED,
    DEALLOC_ERR_WRONG_OWNER,
    DEALLOC_ERR_DOUBLE_FREE,
    DEALLOC_ERR_CORRUPTED
} DeallocResult;

/* Track recently freed addresses for double-free detection */
#define RECENT_FREES 256
static uint32_t recent_free_addrs[RECENT_FREES];
static int recent_free_idx = 0;

/**
 * Validate a deallocation request before processing.
 * Returns a detailed error code on failure.
 */
DeallocResult validate_deallocation(MemoryManager* mgr, void* ptr, pid_t pid) {
    if (ptr == NULL) {
        // Freeing NULL is typically allowed as a no-op
        return DEALLOC_SUCCESS;
    }

    uint32_t address = (uint32_t)ptr;

    // Check 1: Address within valid memory range
    if (address < mgr->user_base || address >= mgr->user_limit) {
        log_security_event("Invalid deallocation address", pid, address);
        return DEALLOC_ERR_INVALID_ADDRESS;
    }

    // Check 2: Alignment (assuming 8-byte alignment minimum)
    if (address % ALIGNMENT != 0) {
        log_security_event("Misaligned deallocation", pid, address);
        return DEALLOC_ERR_MISALIGNED;
    }

    // Check 3: Double-free detection
    for (int i = 0; i < RECENT_FREES; i++) {
        if (recent_free_addrs[i] == address) {
            log_security_event("Double-free detected", pid, address);
            return DEALLOC_ERR_DOUBLE_FREE;
        }
    }

    // Check 4: Find allocation record
    Partition* part = find_partition_by_address(mgr->allocated_list, address);
    if (part == NULL) {
        // Address not in allocated list
        log_security_event("Freeing non-allocated memory", pid, address);
        return DEALLOC_ERR_NOT_ALLOCATED;
    }

    // Check 5: Ownership
    if (part->owner != pid && pid != KERNEL_PID) {
        log_security_event("Freeing another process's memory", pid, address);
        return DEALLOC_ERR_WRONG_OWNER;
    }

    // Check 6: Sanity check on metadata
    if (part->size == 0 || part->size > mgr->total_memory) {
        log_security_event("Corrupted allocation metadata", pid, address);
        return DEALLOC_ERR_CORRUPTED;
    }

    return DEALLOC_SUCCESS;
}

/**
 * Record address in recent-free list for double-free detection.
 */
void record_recent_free(uint32_t address) {
    recent_free_addrs[recent_free_idx] = address;
    recent_free_idx = (recent_free_idx + 1) % RECENT_FREES;
}

/**
 * Clear address from recent-free list (on new allocation at same spot).
 */
void clear_from_recent_frees(uint32_t address) {
    for (int i = 0; i < RECENT_FREES; i++) {
        if (recent_free_addrs[i] == address) {
            recent_free_addrs[i] = 0;
        }
    }
}
```

Security Implications:
Deallocation vulnerabilities are among the most exploited in systems programming:
| Vulnerability | Description | Potential Exploit |
|---|---|---|
| Double-Free | Same address freed twice | Allocator corruption, arbitrary write |
| Use-After-Free | Memory accessed after free | Data leak, code execution |
| Invalid Free | Freeing non-heap address | Arbitrary memory corruption |
| Wrong Owner Free | Freeing other process's memory | Privilege escalation |
Production systems employ techniques like safe unlinking checks, free-list pointer mangling, heap layout randomization, and quarantines that delay the reuse of freed blocks (see the sketch below).
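As one example, here is a minimal sketch of a safe-unlinking check in the style of glibc's, applied to this page's doubly-linked `Hole` list (`panic` is the same hypothetical helper used elsewhere on this page):

```c
// Refuse to unlink a free-list node whose neighbors don't point back at it.
// A corrupted or attacker-controlled link fails this check instead of
// turning the unlink into an arbitrary memory write.
void safe_unlink(MemoryManager* mgr, Hole* h) {
    if ((h->next != NULL && h->next->prev != h) ||
        (h->prev != NULL && h->prev->next != h)) {
        panic("Corrupted free list links at %p", (void*)h);
    }
    if (h->prev != NULL) h->prev->next = h->next;
    else                 mgr->free_list = h->next;
    if (h->next != NULL) h->next->prev = h->prev;
}
```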
Double-free is one of the most common and dangerous memory bugs. When the same memory is freed twice, it may appear twice in the free list. Subsequent allocations return the same address to two different requests, leading to aliased pointers and eventual corruption. Robust allocators must detect and prevent this.
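glibc's tcache, for instance, stamps a per-allocator key into every freed chunk and flags a repeat free when the stamp is already present. Below is a minimal sketch of the same idea over a simple LIFO free list; all names here are illustrative:

```c
#include <stdint.h>

typedef struct GuardedBlock {
    struct GuardedBlock* next;
    uintptr_t key;               // Stamped on free, cleared on allocation
} GuardedBlock;

static GuardedBlock* guarded_head;
static uintptr_t alloc_key;      // Initialize once with a random secret

void guarded_free(void* ptr) {
    GuardedBlock* blk = (GuardedBlock*)ptr;
    if (blk->key == alloc_key) {
        // Block still carries the freed-state stamp: probable double-free.
        panic("Double-free detected at %p", ptr);
    }
    blk->key = alloc_key;        // Stamp as freed
    blk->next = guarded_head;    // O(1) LIFO push
    guarded_head = blk;
}
```

Because live user data can collide with the key by chance, a production version (like glibc's) scans the free list to confirm before aborting.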
When a process terminates, the operating system must free all memory associated with that process. This bulk deallocation requires special handling different from freeing individual blocks.
Challenges of Bulk Deallocation: a terminating process may own many scattered partitions, so freeing them one at a time repeats the free-list traversal and coalescing work for every single block.
Optimization: Deferred Coalescing:
Instead of coalescing after each block is freed: first convert every partition owned by the process into a hole without merging, then run a single coalescing sweep over the whole free list.
This reduces the coalescing work from one pass per freed block to a single O(n) sweep (insertion cost still dominates unless a tree-based free list is used; see the analysis in the code below).
```c
/* Efficient bulk deallocation on process termination */

/**
 * Free all memory allocated to a terminated process.
 * Optimized with deferred coalescing.
 *
 * @param mgr Memory manager
 * @param pid Process ID that terminated
 * @return Total bytes freed
 */
uint32_t free_process_memory(MemoryManager* mgr, pid_t pid) {
    uint32_t total_freed = 0;

    // Phase 1: Collect all partitions owned by this process
    Partition* to_free[MAX_ALLOCATIONS_PER_PROCESS];
    int count = 0;

    Partition* current = mgr->allocated_list;
    while (current != NULL) {
        if (current->owner == pid) {
            to_free[count++] = current;
        }
        current = current->next;
    }

    if (count == 0) {
        return 0;   // Process had no allocations
    }

    // Phase 2: Convert partitions to holes WITHOUT coalescing
    for (int i = 0; i < count; i++) {
        Partition* part = to_free[i];
        total_freed += part->size;

        // Remove from allocated list
        unlink_partition(mgr, part);

        // Add hole (no coalesce)
        Hole* hole = make_hole(part->start, part->size);
        insert_hole_no_coalesce(mgr, hole);

        free(part);
    }

    // Phase 3: Single-pass coalescing sweep
    coalesce_all_adjacent(mgr);

    // Phase 4: Update statistics once
    mgr->total_free += total_freed;
    mgr->allocated_count -= count;
    recalculate_hole_stats(mgr);

    // Phase 5: Clear process table entry
    clear_process_allocations(mgr->process_table, pid);

    return total_freed;
}

/**
 * Insert hole without coalescing (for batch operations).
 */
void insert_hole_no_coalesce(MemoryManager* mgr, Hole* hole) {
    // Simple insertion at the correct address-order position
    Hole* prev = NULL;
    Hole* curr = mgr->free_list;
    while (curr != NULL && curr->start < hole->start) {
        prev = curr;
        curr = curr->next;
    }
    hole->prev = prev;
    hole->next = curr;
    if (prev) prev->next = hole;
    else      mgr->free_list = hole;
    if (curr) curr->prev = hole;
    mgr->hole_count++;
}

/**
 * Single-pass coalescing of all adjacent holes.
 * O(n) where n is the number of holes.
 */
void coalesce_all_adjacent(MemoryManager* mgr) {
    Hole* current = mgr->free_list;
    while (current != NULL && current->next != NULL) {
        Hole* next = current->next;

        // Check if current and next are adjacent
        if (current->start + current->size == next->start) {
            // Merge next into current
            current->size += next->size;
            current->next = next->next;
            if (next->next) {
                next->next->prev = current;
            }
            free(next);
            mgr->hole_count--;
            // Don't advance current - the newly merged block may
            // also be adjacent to the next hole
        } else {
            current = current->next;
        }
    }
}

/*
 * Performance comparison:
 *
 * Individual coalescing for N blocks:
 *   Each dealloc: O(n) insert + O(1) coalesce check = O(n)
 *   Total: O(N * n) = O(N²) for N deallocations into an n-sized free list
 *
 * Deferred coalescing:
 *   N inserts: O(N * n) = O(N²)  [or O(N log n) with a tree]
 *   1 coalesce sweep: O(N)
 *   Total: O(N²) but with smaller constants;
 *          O(N log N) with tree-based insertion
 *
 * In practice, deferred coalescing is significantly faster due to
 * better cache behavior and reduced pointer chasing.
 */
```

In systems with virtual memory and paging, process termination is much simpler: the page tables are discarded, and all frames are returned to the free frame pool. There's no need to track individual allocations or perform complex coalescing — a major advantage of non-contiguous allocation.
Deallocation performance is critical in allocation-heavy workloads. Slow deallocation can become a bottleneck, especially in multi-threaded applications with high allocation churn.
Performance Bottlenecks: locating the insertion point in the free list, per-free coalescing work, recomputing cached statistics such as the largest hole, and lock contention in multi-threaded allocators.
Optimization Techniques: boundary tags for O(1) neighbor lookup, deferred coalescing, quick lists for frequently used sizes, and thread-local free lists (compared in the table below).
Boundary Tags for O(1) Operations:
With boundary tags, deallocation becomes O(1):
Deallocation with boundary tags:
1. Read own header to get size
2. Read previous footer to check if free
3. Read next header to check if free
4. Merge as needed, update headers/footers
5. Add to free list head (O(1))
```c
/* O(1) deallocation with boundary tags */

#define GET_SIZE(hdr) ((hdr) & ~0x7)
#define IS_FREE(hdr)  (!((hdr) & 0x1))
#define PACK(size, alloc) ((size) | (alloc))

void* bt_free(void* ptr) {
    word_t* header = (word_t*)ptr - 1;
    uint32_t size = GET_SIZE(*header);   // Size counts header + payload + footer

    // Check previous block (its footer sits just before our header)
    word_t* prev_footer = header - 1;
    bool prev_free = IS_FREE(*prev_footer);

    // Check next block (its header sits just past our footer)
    word_t* next_header = (word_t*)((char*)header + size);
    bool next_free = IS_FREE(*next_header);

    if (prev_free && next_free) {
        // Merge all three
        uint32_t prev_size = GET_SIZE(*prev_footer);
        uint32_t next_size = GET_SIZE(*next_header);
        uint32_t new_size = prev_size + size + next_size;

        word_t* new_header = (word_t*)((char*)header - prev_size);
        word_t* new_footer = (word_t*)((char*)next_header + next_size) - 1;
        *new_header = PACK(new_size, 0);
        *new_footer = PACK(new_size, 0);
        ptr = (void*)(new_header + 1);
    }
    // ... handle other cases similarly ...

    // Add to LIFO free list head: O(1)
    ((FreeBlock*)ptr)->next = free_list_head;
    free_list_head = (FreeBlock*)ptr;

    return ptr;
}
```

Latency vs. Throughput Trade-off:
Techniques optimizing deallocation often trade one dimension for another:
| Technique | Latency | Throughput | Memory Overhead |
|---|---|---|---|
| Immediate coalesce | Higher | Lower | Minimal |
| Deferred coalesce | Lower average | Higher | Temporary fragmentation |
| Quick lists | O(1) constant | Highest | Memory held in per-size caches |
| Thread-local | Low (no contention) | Scales linearly | Per-thread structures |
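To illustrate the quick-list row in the table, here is a minimal sketch of per-size free caches; the class sizes, limits, and names are assumptions:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define QUICK_CLASSES 8
#define QUICK_MAX     256   // Largest size served from quick lists (assumed)

typedef struct QuickNode { struct QuickNode* next; } QuickNode;
static QuickNode* quick_lists[QUICK_CLASSES];   // One LIFO list per size class

static inline int size_class(uint32_t size) {
    return (size - 1) / (QUICK_MAX / QUICK_CLASSES);   // 32-byte-wide classes
}

// O(1) free: push onto the size-class list; coalescing is skipped entirely.
bool quick_free(void* ptr, uint32_t size) {
    if (size == 0 || size > QUICK_MAX) return false;   // Fall back to general free
    QuickNode* node = (QuickNode*)ptr;
    node->next = quick_lists[size_class(size)];
    quick_lists[size_class(size)] = node;
    return true;
}

// O(1) allocation for cached sizes: pop from the matching list.
void* quick_alloc(uint32_t size) {
    if (size == 0 || size > QUICK_MAX) return NULL;
    QuickNode* node = quick_lists[size_class(size)];
    if (node != NULL) quick_lists[size_class(size)] = node->next;
    return node;
}
```

The same structure made thread-local (one array of lists per thread) removes lock contention as well, which is why the two techniques are often combined.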
Memory deallocation is notoriously error-prone. Understanding common bugs helps avoid them in implementation and identify them during debugging.
Common Deallocation Bugs:
1. Forgotten Hole Insertion (Memory Leak)
The partition is unlinked but never returned to the free pool:
```c
unlink_partition(part);
// MISSING: insert_hole(hole);
free(part);   // Metadata freed, memory lost
```
2. Double-Free
The same address is released twice, corrupting the free list:
```c
free(ptr);
// ... later ...
free(ptr);   // CRASH or corruption
```
3. Use-After-Free
Freed memory is accessed through a stale pointer:
```c
free(ptr);
ptr->value = 42;   // UNDEFINED BEHAVIOR
```
4. Missing Coalescing
The hole is inserted but never merged with its neighbors, so fragmentation accumulates:
```c
insert_hole(mgr, new_hole);
// MISSING: coalesce check
```
5. Partial Free
Only part of an allocation is released (shown with a hypothetical sized-free API):
```c
void* p = alloc(100);
free(p, 50);   // Only frees 50 bytes, 50 bytes leak
```
6. Statistics Drift
Counters are updated inconsistently with the actual structures:
```c
mgr->hole_count++;   // But we merged, so count is wrong
```
Debugging Techniques:
1. Poison Memory on Free
Fill freed memory with a distinctive pattern (e.g., 0xDEADBEEF). If the pattern is read, it indicates use-after-free.
2. Canary Values
Place magic values before/after allocations. Check canaries on free — corruption indicates buffer overflow.
3. Free List Integrity Checks
Periodically verify: that the free list stays address-sorted, that no holes overlap, and that no two holes remain adjacent (which would mean a coalesce was missed). A single sweep, sketched below, checks all three.
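A minimal sketch of such an integrity sweep over this page's `Hole` list:

```c
// One O(n) pass over the free list validates the key invariants.
bool check_free_list(MemoryManager* mgr) {
    uint32_t holes = 0;
    for (Hole* h = mgr->free_list; h != NULL; h = h->next) {
        holes++;
        if (h->next != NULL) {
            uint32_t end = h->start + h->size;
            if (end > h->next->start)
                return false;   // Out of order, or overlapping holes
            if (end == h->next->start)
                return false;   // Adjacent holes: coalescing was missed
            if (h->next->prev != h)
                return false;   // Broken back-link in the doubly-linked list
        }
    }
    return holes == mgr->hole_count;   // Cached count must match reality
}
```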
4. Address Sanitizers
Tools like ASan (Address Sanitizer) detect use-after-free, double-free, and buffer overflows at runtime with low overhead.
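As a quick demonstration, a deliberate double-free compiled with GCC or Clang's `-fsanitize=address` flag aborts at the second free with a detailed report:

```c
// asan_demo.c - build with: gcc -g -fsanitize=address asan_demo.c
#include <stdlib.h>

int main(void) {
    int* p = malloc(4 * sizeof(int));
    free(p);
    free(p);   // ASan reports a double-free here and aborts
    return 0;
}
```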
The sketch below combines canaries, poisoning, and a freed marker in one debug wrapper; the specific magic values are illustrative.

```c
/* Debugging aids for deallocation bugs */

#define POISON_VALUE 0xDEADBEEF
#define CANARY_HEAD  0xCAFEBABE
#define CANARY_TAIL  0xFACEFEED
#define FREED_MARK   0xFEEEFEEE   // Replaces the head canary to signal "freed"

typedef struct DebugBlock {
    uint32_t canary_head;
    uint32_t size;
    uint32_t alloc_id;     // Unique ID for this allocation
    uint32_t alloc_time;   // When allocated
    // ... user payload follows ...
    // ... canary_tail at end of payload ...
} DebugBlock;

void debug_free(MemoryManager* mgr, void* ptr) {
    DebugBlock* block = (DebugBlock*)ptr - 1;

    // Check for a previous free of this same block
    if (block->canary_head == FREED_MARK) {
        panic("Double-free detected at %p", ptr);
    }

    // Check canaries
    if (block->canary_head != CANARY_HEAD) {
        panic("Heap corruption: head canary damaged at %p", ptr);
    }
    uint32_t* tail = (uint32_t*)((char*)ptr + block->size);
    if (*tail != CANARY_TAIL) {
        panic("Heap corruption: tail canary damaged at %p "
              "(buffer overflow?)", ptr);
    }

    // Poison the freed memory word-by-word so later reads see POISON_VALUE
    uint32_t* words = (uint32_t*)ptr;
    for (uint32_t i = 0; i < block->size / sizeof(uint32_t); i++) {
        words[i] = POISON_VALUE;
    }

    // Mark as freed for double-free detection
    block->canary_head = FREED_MARK;

    // Proceed with normal deallocation
    standard_free(mgr, block);
}

/* Verify freed memory is still poisoned */
void check_use_after_free(void* ptr, size_t size) {
    uint32_t* words = (uint32_t*)ptr;
    for (size_t i = 0; i < size / sizeof(uint32_t); i++) {
        if (words[i] != POISON_VALUE) {
            panic("Use-after-free: freed memory at %p was modified",
                  (void*)&words[i]);
        }
    }
}
```

Production allocators layer multiple defenses: canaries catch overflows, poisoning catches use-after-free, randomization defeats exploits, and guard pages catch out-of-bounds access. No single technique is sufficient — defense in depth is essential.
Deallocation is the essential counterpart to allocation, responsible for reclaiming memory and maintaining system health. Effective deallocation requires careful attention to coalescing, data structure consistency, validation, and performance.
Module Complete:
With this page, you have completed the Variable Partitioning module. You now understand dynamic allocation, fragmentation, and the full deallocation lifecycle, from validation through coalescing.
This knowledge forms the foundation for understanding why operating systems evolved toward paging and virtual memory — techniques that fundamentally solve the fragmentation problems inherent in contiguous allocation.
You have mastered Variable Partitioning — a foundational memory management technique. The deep understanding of dynamic allocation, fragmentation, and deallocation you've developed applies broadly: to heap allocators, kernel memory management, and anywhere variable-size contiguous allocation occurs. Next, explore Allocation Strategies for deeper algorithmic understanding, or proceed to Fragmentation and Compaction for mitigation techniques.