In the evolution of operating systems, few transitions have been as consequential as the move from fixed to variable partitioning. Fixed partitioning, while simple to implement, imposed rigid constraints on memory utilization—processes either fit neatly into predefined slots or suffered from severe internal fragmentation. Variable partitioning emerged as the elegant solution to this fundamental inefficiency, introducing a dynamic approach where memory partitions are sculpted at runtime to precisely match process requirements.
This paradigm shift fundamentally altered how operating systems think about memory allocation. Rather than asking "which existing partition fits this process?", variable partitioning asks "how can we carve out exactly the memory this process needs?". This seemingly simple change in perspective unlocked dramatic improvements in memory utilization and laid the conceptual groundwork for modern memory management techniques.
By the end of this page, you will understand the complete mechanics of dynamic partition creation: how the operating system maintains memory state, the algorithms for carving partitions from free memory, the data structures that enable efficient allocation, and the critical relationship between partition creation and overall system performance. You'll gain the foundation needed to understand both historical implementations and modern memory management systems.
Variable partitioning, also known as dynamic partitioning or variable-size partitioning, is a memory management scheme where partition sizes are determined at runtime based on the actual memory requirements of processes. Unlike fixed partitioning, where the number and sizes of partitions are determined at system generation time, variable partitioning creates partitions dynamically as processes request memory.
The Fundamental Principle:
At its core, variable partitioning operates on a simple principle: allocate exactly as much memory as a process needs, no more and no less. When a process requests memory, the operating system finds a suitable region of free memory and creates a partition of exactly the requested size. When the process terminates, that partition is returned to the pool of free memory.
| Characteristic | Fixed Partitioning | Variable Partitioning |
|---|---|---|
| Partition size determination | System generation time (static) | Runtime (dynamic) |
| Partition boundaries | Immutable during system operation | Created and destroyed dynamically |
| Memory request handling | Find existing partition that fits | Create new partition of exact size |
| Internal fragmentation | Potentially severe | Essentially eliminated (small residue possible under splitting thresholds) |
| External fragmentation | None | Can become significant |
| Memory utilization | Often poor (wasted space in partitions) | Initially excellent, may degrade |
| Implementation complexity | Simple | Moderate to complex |
| Metadata overhead | Minimal (fixed partition table) | Variable (depends on fragmentation) |
Historical Context:
Variable partitioning emerged in the 1960s as a response to the inefficiencies of fixed partitioning systems. Early mainframe operating systems like IBM's OS/360 MVT (Multiprogramming with a Variable number of Tasks) implemented variable partitioning to improve memory utilization in expensive mainframe memory. The technique represented a significant advancement in operating system design, trading implementation simplicity for improved resource efficiency.
The Memory Continuum:
In variable partitioning, physical memory is viewed as a continuum of addressable bytes. Initially, after loading the operating system, the entire remaining memory is available as one large free region (or "hole"). As processes are loaded, the operating system carves partitions from this free space. Over time, as processes arrive and depart, memory becomes a patchwork of allocated partitions and free holes.
In variable partitioning literature, you'll encounter several terms used interchangeably: partition (the allocated memory block), hole or cavity (a contiguous region of free memory), and block (generic term for any contiguous memory region). The term fragmentation refers to the condition where free memory is divided into non-contiguous pieces, reducing usable capacity.
Understanding dynamic partition creation requires first understanding the initial state of memory when the system boots. The journey from power-on to a fully operational multiprogrammed system involves several critical steps that establish the foundation for variable partitioning.
Boot Sequence and Memory Layout:
When a system boots, the memory is effectively a blank canvas. The boot process loads the operating system kernel into memory, typically starting at the lowest physical addresses. The kernel claims a contiguous region for its own code, static data, and essential data structures such as process and memory-management tables.
After the kernel establishes itself, the remaining physical memory becomes the user memory pool — the region from which process partitions will be allocated.
```c
/* Conceptual memory layout after system initialization */

#include <stddef.h>
#include <stdint.h>

#define MEMORY_SIZE (64 * 1024 * 1024)  // 64 MB total memory
#define KERNEL_BASE 0x00000000
#define KERNEL_SIZE (4 * 1024 * 1024)   // 4 MB for kernel

// Memory regions after boot
typedef struct {
    uint32_t start;
    uint32_t size;
    enum { KERNEL, USER_AVAILABLE } type;
} MemoryRegion;

MemoryRegion initial_layout[] = {
    { 0x00000000, KERNEL_SIZE, KERNEL },
    { KERNEL_SIZE, MEMORY_SIZE - KERNEL_SIZE, USER_AVAILABLE }
};

// Initial free list: single large hole
typedef struct FreeBlock {
    uint32_t start_address;
    uint32_t size;
    struct FreeBlock* next;
} FreeBlock;

// After boot: one contiguous hole of 60 MB
FreeBlock initial_free = {
    .start_address = KERNEL_SIZE,       // 0x00400000
    .size = MEMORY_SIZE - KERNEL_SIZE,  // 60 MB
    .next = NULL
};
```

The Free Memory Pool:
At system initialization, the user memory forms a single, large contiguous block of free memory. This initial state is the most favorable for variable partitioning — any process whose memory requirement doesn't exceed the total free memory can be accommodated. The operating system maintains metadata to track this free region, at minimum its starting address and size.
Memory State Transitions:
As the system operates, memory transitions through states. A byte of memory is either reserved by the kernel, allocated to a process partition, or free (part of a hole).
The operating system's memory manager is responsible for tracking these states and ensuring correct transitions as processes are created and terminated.
The initial single-hole state is theoretically optimal — it can satisfy any request up to the total free memory. As the system operates, this single hole becomes fragmented into multiple smaller holes, progressively reducing the maximum satisfiable request size. This observation motivates the study of fragmentation and compaction, covered in later pages.
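The distinction between total free memory and the largest satisfiable request can be made concrete with a short sketch over the singly linked `FreeBlock` list shown earlier (the helper names `largest_hole` and `total_free` are illustrative, not part of any standard API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct FreeBlock {
    uint32_t start_address;
    uint32_t size;
    struct FreeBlock* next;
} FreeBlock;

/* Largest single hole: bounds the biggest contiguous allocation
 * the system can satisfy right now. */
uint32_t largest_hole(const FreeBlock* list) {
    uint32_t max = 0;
    for (; list != NULL; list = list->next) {
        if (list->size > max) max = list->size;
    }
    return max;
}

/* Total free memory across all holes: may greatly exceed the
 * largest hole once fragmentation sets in. */
uint32_t total_free(const FreeBlock* list) {
    uint32_t sum = 0;
    for (; list != NULL; list = list->next) {
        sum += list->size;
    }
    return sum;
}
```

Once fragmentation sets in, `total_free` can greatly exceed `largest_hole`, and only the latter bounds what a single contiguous request can obtain.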
When a new process is created or an existing process requests additional memory, the operating system must allocate a partition. This partition creation process involves several carefully orchestrated steps, each critical to maintaining system integrity and memory consistency.
Step-by-Step Partition Creation:
Step 1: Request Reception and Validation
The process begins when a process (or the kernel on behalf of a new process) issues a memory allocation request. The request specifies the size of memory needed. The memory manager first validates the request: the size must be nonzero and must not exceed the total memory available to user processes.
Step 2: Free Block Search
The memory manager searches the free block list to find a hole that can accommodate the request. Various algorithms exist for this search (first-fit, best-fit, worst-fit), each with different performance characteristics. The search must find a hole whose size is at least equal to the requested size.
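As an illustration of one alternative to the first-fit search used in the listing below, this sketch implements a best-fit scan over a free list shaped like this page's `Hole` structure (the function name `find_best_fit` is illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct Hole {
    uint32_t start_address;
    uint32_t size;
    struct Hole* next;
} Hole;

/* Best-fit: scan the entire free list and return the smallest
 * hole that is still large enough for the request. */
Hole* find_best_fit(Hole* free_list, uint32_t requested_size) {
    Hole* best = NULL;
    for (Hole* cur = free_list; cur != NULL; cur = cur->next) {
        if (cur->size >= requested_size &&
            (best == NULL || cur->size < best->size)) {
            best = cur;
        }
    }
    return best;  // NULL if no hole can satisfy the request
}
```

Unlike first-fit, best-fit must always traverse the whole list, trading search time for tighter fits.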
Step 3: Partition Carving
Once a suitable hole is found, the memory manager "carves" a partition from it. If the hole is exactly the requested size, the entire hole becomes the partition. If the hole is larger, it is split: the requested portion becomes the new partition, and the remainder stays on the free list as a smaller hole.
Step 4: Metadata Update
The memory manager updates its data structures to reflect the allocation: the chosen hole is shrunk or removed from the free list, and a record describing the new partition (start address, size, owner) is added to the allocated list.
```c
/* Dynamic partition creation implementation */

#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>

typedef struct Partition {
    uint32_t start_address;
    uint32_t size;
    pid_t owner_pid;
    struct Partition* next;
} Partition;

typedef struct Hole {
    uint32_t start_address;
    uint32_t size;
    struct Hole* next;
} Hole;

// Global memory management state
static Hole* free_list = NULL;            // List of free holes
static Partition* allocated_list = NULL;  // List of allocated partitions

/**
 * Allocate a partition of the specified size for the given process.
 * Returns the starting address of the partition, or -1 on failure.
 */
int32_t allocate_partition(uint32_t requested_size, pid_t pid) {
    // Step 1: Validate request
    if (requested_size == 0) {
        return -1;  // Invalid request
    }

    // Step 2: Search for a suitable hole (first-fit strategy)
    Hole* prev = NULL;
    Hole* current = free_list;
    while (current != NULL) {
        if (current->size >= requested_size) {
            // Found a suitable hole
            uint32_t partition_start = current->start_address;

            // Step 3: Carve the partition
            if (current->size == requested_size) {
                // Exact fit: remove hole from free list
                if (prev == NULL) {
                    free_list = current->next;
                } else {
                    prev->next = current->next;
                }
                free(current);
            } else {
                // Hole is larger: shrink it
                current->start_address += requested_size;
                current->size -= requested_size;
            }

            // Step 4: Create and record the allocated partition
            Partition* new_partition = malloc(sizeof(Partition));
            new_partition->start_address = partition_start;
            new_partition->size = requested_size;
            new_partition->owner_pid = pid;
            new_partition->next = allocated_list;
            allocated_list = new_partition;

            return partition_start;
        }
        prev = current;
        current = current->next;
    }

    return -1;  // No suitable hole found
}
```

Step 5: Memory Initialization
Depending on system security policies, the newly allocated partition may be initialized; for example, it may be zero-filled to prevent data from a previous owner leaking to the new process.
Step 6: Address Space Setup
The final step integrates the partition into the process's address space. The memory manager records the partition's base address and size in the process control block and loads the base and limit registers used for address translation and protection.
In multiprocessor systems or with concurrent allocation requests, the partition creation process must be atomic or properly synchronized. If two requests simultaneously find the same hole suitable and both attempt to allocate from it, data corruption and system crashes can result. Production implementations use spinlocks or other synchronization primitives to protect the free list.
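A minimal sketch of such synchronization, assuming POSIX threads and reducing the free list to a single hole for brevity (the names `allocate_locked`, `next_free`, `limit`, and `mem_lock` are hypothetical):

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Hypothetical global state: a single hole [next_free, limit). */
static uint32_t next_free = 0x00400000;
static uint32_t limit     = 0x04000000;
static pthread_mutex_t mem_lock = PTHREAD_MUTEX_INITIALIZER;

/* All inspection and mutation of the free-memory state happens while
 * holding mem_lock, so two concurrent requests can never both claim
 * the same region. Returns the allocated address, or -1 on failure. */
int64_t allocate_locked(uint32_t size) {
    pthread_mutex_lock(&mem_lock);
    int64_t result = -1;
    if (size > 0 && limit - next_free >= size) {
        result = next_free;
        next_free += size;
    }
    pthread_mutex_unlock(&mem_lock);
    return result;
}
```

A real implementation would protect the entire free list the same way; finer-grained schemes (per-region locks, lock-free lists) trade simplicity for scalability.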
When a suitable hole is larger than the requested partition size, the hole must be split. This splitting operation is central to variable partitioning and has subtle implications for system performance and fragmentation.
The Splitting Decision:
Given a hole of size H and a request of size R (where H > R), the system can either split the hole, allocating R bytes and leaving a new hole of size H - R, or allocate the entire hole, accepting H - R bytes of internal fragmentation.
Most systems employ a minimum hole threshold: if (H - R) is below this threshold, the entire hole is allocated to avoid creating uselessly small fragments.
Splitting Strategy Considerations:
| Strategy | Description | Advantage | Disadvantage |
|---|---|---|---|
| Always split | Split regardless of remainder size | Minimal internal fragmentation | Can create tiny, unusable holes |
| Threshold-based | Split only if remainder exceeds threshold | Balanced approach | Introduces some internal fragmentation |
| Alignment-based | Split at aligned boundaries | Improves cache/page performance | Complex, may waste more memory |
| Never split small holes | Allocate entire small hole | Fewer fragments | Higher internal fragmentation |
```c
/* Partition splitting with minimum hole threshold */

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define MIN_HOLE_SIZE 64  // Minimum viable hole size (bytes)

/* Uses the Hole and Partition types from the allocation listing above.
 * The caller guarantees hole->size >= partition_size. */

/**
 * Splits a hole to create a partition, respecting minimum hole size.
 * Returns true if split occurred, false if entire hole was allocated.
 */
bool split_hole(Hole* hole, uint32_t partition_size,
                Partition** out_partition) {
    uint32_t remainder = hole->size - partition_size;

    // Create the new partition
    *out_partition = malloc(sizeof(Partition));
    (*out_partition)->start_address = hole->start_address;

    if (remainder >= MIN_HOLE_SIZE) {
        // Remainder is large enough: split the hole
        (*out_partition)->size = partition_size;

        // Adjust the hole to represent only the remainder
        hole->start_address += partition_size;
        hole->size = remainder;

        return true;  // Hole was split
    } else {
        // Remainder too small: allocate entire hole
        (*out_partition)->size = hole->size;  // Includes small extra
        // Hole will be removed from free list by caller
        return false;  // Entire hole consumed
    }
}

/* Memory visualization:
 *
 * Before split (hole of 1000 bytes, request for 800):
 *   |<---------- HOLE: 1000 bytes ---------->|
 *
 * After split (if MIN_HOLE_SIZE = 64):
 *   |<- PARTITION: 800 ->|<- HOLE: 200 ->|
 *
 * If request was 980 bytes (remainder = 20 < MIN_HOLE_SIZE):
 *   |<---- PARTITION: 1000 bytes (all 1000 allocated) ---->|
 *   The extra 20 bytes become internal fragmentation.
 */
```

Address Arithmetic in Splitting:
When splitting a hole, the arithmetic is straightforward but must be precise:
New Partition Start = Hole Start
New Partition Size = Requested Size
New Hole Start = Hole Start + Requested Size
New Hole Size = Original Hole Size - Requested Size
This arithmetic assumes the partition is carved from the beginning of the hole. Some systems carve from the end instead, which can have implications for locality and fragmentation patterns.
Alignment Considerations:
Modern systems often require partition addresses to be aligned to specific boundaries (4-byte, page-size, cache-line, etc.). When alignment is required, the splitting logic becomes more complex: the partition's start address must be rounded up to the next aligned boundary, and the leading padding must either be absorbed into the allocation or left behind as a small hole.
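One way the alignment arithmetic might look, as a sketch under the assumption that alignments are powers of two (the helper names `align_up` and `aligned_carve` are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define ALLOC_FAIL UINT32_MAX

/* Round addr up to the next multiple of align (a power of two). */
static uint32_t align_up(uint32_t addr, uint32_t align) {
    return (addr + align - 1) & ~(align - 1);
}

/* Try to place an aligned partition of req bytes inside the hole
 * [hole_start, hole_start + hole_size). Returns the aligned start
 * address, or ALLOC_FAIL if the hole cannot hold the request once
 * leading padding is accounted for. */
uint32_t aligned_carve(uint32_t hole_start, uint32_t hole_size,
                       uint32_t req, uint32_t align) {
    uint32_t aligned = align_up(hole_start, align);
    uint32_t padding = aligned - hole_start;
    if (padding + req > hole_size) {
        return ALLOC_FAIL;  // Padding consumed too much of the hole
    }
    return aligned;
}
```

The padding region (`hole_start` up to the aligned address) would then either be left on the free list as a small hole or folded into the partition, per the splitting strategies above.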
Knuth's analysis shows that after many allocations and deallocations, the number of holes tends toward roughly half the number of allocated blocks. This "50-percent rule" (covered in detail in the fragmentation page) directly results from the repeated splitting of holes during partition creation. Understanding split mechanics is essential to understanding this emergent behavior.
Efficient dynamic partition creation requires sophisticated data structures to track both allocated partitions and free holes. The choice of data structure significantly impacts allocation speed, memory overhead, and system scalability.
Requirements for Partition Tracking Data Structures:
An effective tracking structure must support fast search for a suitable hole, efficient insertion and removal as partitions are created and destroyed, merging (coalescing) of adjacent free blocks, and low metadata overhead.
Common Data Structure Approaches:
Linked List Representation:
The simplest and most common approach uses a singly or doubly linked list of free blocks:
Advantages: simple to implement, minimal per-block metadata, and constant-time insertion at the head of the list.
Disadvantages: searches run in O(n) time in the worst case, and traversing scattered nodes exhibits poor cache locality.
Implementation:
```c
/* Doubly-linked free list for partition tracking */

#include <stddef.h>
#include <stdint.h>

typedef struct FreeBlock {
    uint32_t start;
    uint32_t size;
    struct FreeBlock* prev;
    struct FreeBlock* next;
} FreeBlock;

typedef struct {
    FreeBlock* head;
    FreeBlock* tail;
    uint32_t total_free;
    uint32_t block_count;
} FreeList;

// O(n) search for first-fit allocation
FreeBlock* find_first_fit(FreeList* list, uint32_t size) {
    FreeBlock* current = list->head;
    while (current != NULL) {
        if (current->size >= size) {
            return current;
        }
        current = current->next;
    }
    return NULL;  // No suitable block found
}

// O(1) removal from doubly-linked list
void remove_from_list(FreeList* list, FreeBlock* block) {
    if (block->prev) block->prev->next = block->next;
    else             list->head = block->next;

    if (block->next) block->next->prev = block->prev;
    else             list->tail = block->prev;

    list->block_count--;
    list->total_free -= block->size;
}
```

Hybrid Approaches:
Production memory allocators often combine multiple data structures to optimize for different access patterns: for example, segregated free lists binned by size for fast allocation, an address-ordered structure for coalescing neighbors, or bitmaps for fixed-granularity regions.
Understanding these trade-offs is essential for system designers who must balance allocation speed against memory overhead and implementation complexity.
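As an illustration of one hybrid ingredient, a segregated-list allocator first maps a request size to a size-class bin and then searches only that bin (and larger ones). A sketch of such a mapping follows; the bin geometry here, a 128-byte first class with doubling upper bounds, is an arbitrary assumption:

```c
#include <assert.h>
#include <stdint.h>

#define NUM_BINS 8

/* Map a request size to a size-class bin. Bin 0 holds blocks smaller
 * than 128 bytes; each subsequent bin doubles the upper bound; the
 * last bin catches everything larger. */
int bin_index(uint32_t size) {
    uint32_t upper = 128;  // Upper bound (exclusive) of bin 0
    for (int i = 0; i < NUM_BINS - 1; i++) {
        if (size < upper) return i;
        upper <<= 1;
    }
    return NUM_BINS - 1;
}
```

An allocator built on this mapping keeps a free list per bin, turning the O(n) scan of a single global list into a near-constant-time lookup for common sizes.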
The creation of a partition is most commonly triggered when a new process is loaded into memory. Understanding this process illuminates the practical context in which variable partitioning operates.
The Process Creation Sequence:
Determining Memory Requirements:
Before allocating a partition, the operating system must determine how much memory the process needs. This information comes from several sources:
| Component | Size Determination | Typical Range | Characteristics |
|---|---|---|---|
| Code segment | Size recorded in executable header | KBs to MBs | Fixed at compile time, typically read-only |
| Data segment | Initialized data in executable + BSS | KBs to MBs | Fixed initial size, may grow with heap |
| Stack | System default or process specification | 4KB to 8MB | Grows downward, may need guard pages |
| Heap | Initially zero, grows dynamically | 0 to GBs | Grows via sbrk/mmap system calls |
| Libraries | Sum of required shared library sizes | MBs | May be shared between processes |
Partition Size Calculation:
In a simple variable partitioning system (before demand paging), the entire process address space must be loaded. The partition size is calculated as:
Partition Size = Code (Text) Size + Data Size + BSS Size + Initial Stack + Initial Heap
The operating system typically adds some padding for stack growth and dynamic allocation within the partition.
The Loading Process:
Once a partition is allocated, the OS loader performs the actual loading:
```c
/* Simplified process loading into a partition */

#include <stdint.h>
#include <string.h>
#include <sys/types.h>

typedef struct Executable {
    uint32_t text_offset;  // Code section offset in file
    uint32_t text_size;    // Code section size
    uint32_t data_offset;  // Initialized data offset
    uint32_t data_size;    // Initialized data size
    uint32_t bss_size;     // Uninitialized data (zeroed)
    uint32_t entry_point;  // Starting address (relative)
    uint32_t stack_size;   // Requested stack size
} Executable;

typedef struct ProcessControlBlock {
    pid_t pid;
    uint32_t partition_base;
    uint32_t partition_limit;
    uint32_t entry_point;
    // ... other PCB fields
} PCB;

/* read_executable_header(), load_segment(), and allocate_partition()
 * are assumed to be provided elsewhere. */
int load_process(const char* path, pid_t pid, PCB* pcb) {
    // Step 1: Read executable header
    Executable exe;
    read_executable_header(path, &exe);

    // Step 2: Calculate total memory requirement
    uint32_t total_size = exe.text_size + exe.data_size +
                          exe.bss_size + exe.stack_size +
                          4096;  // Heap initial allocation

    // Step 3: Allocate partition
    int32_t partition_start = allocate_partition(total_size, pid);
    if (partition_start < 0) {
        return -1;  // Memory allocation failed
    }

    // Step 4: Load segments into partition
    uint32_t current_addr = partition_start;

    // Load code segment
    load_segment(path, exe.text_offset, current_addr, exe.text_size);
    current_addr += exe.text_size;

    // Load initialized data
    load_segment(path, exe.data_offset, current_addr, exe.data_size);
    current_addr += exe.data_size;

    // Zero BSS (uninitialized data)
    memset((void*)(uintptr_t)current_addr, 0, exe.bss_size);
    current_addr += exe.bss_size;

    // Stack is at the end of the partition, grows downward
    uint32_t stack_top = partition_start + total_size;
    (void)stack_top;  // Used when initializing the process's stack pointer

    // Step 5: Configure PCB
    pcb->pid = pid;
    pcb->partition_base = partition_start;
    pcb->partition_limit = total_size;
    pcb->entry_point = partition_start + exe.entry_point;

    return 0;  // Success
}
```

In variable partitioning, processes are compiled as relocatable code — addresses are relative to the start of the program, not absolute. At load time, the base register is set to the partition's starting address. All memory accesses by the process add this base value, transparently mapping relative addresses to physical addresses.
Not every allocation request can be satisfied. When the memory manager cannot find a suitable hole, the system must decide how to proceed. The handling of allocation failures is a critical aspect of memory management that directly impacts system behavior and user experience.
Reasons for Allocation Failure:
An allocation can fail because the request exceeds total physical memory, because it exceeds the memory currently free, or because enough free memory exists in aggregate but not as a single contiguous hole (external fragmentation).
Response Strategies:
```c
/* Comprehensive allocation with failure handling */

#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>

typedef enum {
    ALLOC_SUCCESS,
    ALLOC_WAIT,
    ALLOC_COMPACTED,
    ALLOC_SWAPPED,
    ALLOC_FAILED
} AllocationResult;

typedef struct AllocationRequest {
    pid_t pid;
    uint32_t size;
    int priority;
    struct AllocationRequest* next;
} AllocationRequest;

static AllocationRequest* pending_queue = NULL;

/* Helpers such as allocate_partition(), calculate_total_free_memory(),
 * perform_compaction(), select_swap_victim(), and block_process() are
 * assumed to be provided elsewhere by the memory manager. */
AllocationResult allocate_with_recovery(uint32_t size, pid_t pid,
                                        uint32_t* out_address) {
    // Attempt 1: Direct allocation
    int32_t addr = allocate_partition(size, pid);
    if (addr >= 0) {
        *out_address = addr;
        return ALLOC_SUCCESS;
    }

    // Attempt 2: Check if compaction would help
    uint32_t total_free = calculate_total_free_memory();
    if (total_free >= size) {
        // Memory exists but is fragmented
        perform_compaction();
        addr = allocate_partition(size, pid);
        if (addr >= 0) {
            *out_address = addr;
            return ALLOC_COMPACTED;
        }
    }

    // Attempt 3: Try swapping out a suitable candidate
    pid_t victim = select_swap_victim(size);
    if (victim != INVALID_PID) {
        swap_out_process(victim);
        addr = allocate_partition(size, pid);
        if (addr >= 0) {
            *out_address = addr;
            return ALLOC_SWAPPED;
        }
    }

    // Attempt 4: Queue for later retry (if process can wait)
    if (can_process_wait(pid)) {
        AllocationRequest* req = malloc(sizeof(AllocationRequest));
        req->pid = pid;
        req->size = size;
        req->priority = get_process_priority(pid);
        req->next = pending_queue;
        pending_queue = req;
        block_process(pid);  // Suspend until memory available
        return ALLOC_WAIT;
    }

    // Final: Allocation failed
    return ALLOC_FAILED;
}

// Called when a process frees memory
void on_memory_freed(uint32_t freed_size) {
    // Check pending requests
    AllocationRequest** prev = &pending_queue;
    AllocationRequest* current = pending_queue;
    while (current != NULL) {
        if (current->size <= get_largest_hole_size()) {
            // Try to satisfy this request
            if (allocate_partition(current->size, current->pid) >= 0) {
                *prev = current->next;
                unblock_process(current->pid);
                free(current);
                current = *prev;
                continue;
            }
        }
        prev = &current->next;
        current = current->next;
    }
}
```

Without careful policy design, large requests may starve indefinitely if smaller requests continuously consume freed memory. Production systems implement aging mechanisms — increasing the priority of waiting requests over time — to ensure eventual service.
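One hedged sketch of such an aging mechanism, using a simplified variant of an allocation-request record with an added `wait_ticks` field (the `AGE_INTERVAL` constant and all field names here are assumptions for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define AGE_INTERVAL 10  // Ticks between priority boosts

typedef struct AllocationRequest {
    uint32_t size;
    int priority;
    int wait_ticks;
    struct AllocationRequest* next;
} AllocationRequest;

/* Periodic aging pass: every AGE_INTERVAL ticks a request spends
 * waiting, bump its priority by one, so large requests eventually
 * outrank a continuous stream of small newcomers. */
void age_pending_requests(AllocationRequest* queue) {
    for (AllocationRequest* r = queue; r != NULL; r = r->next) {
        r->wait_ticks++;
        if (r->wait_ticks % AGE_INTERVAL == 0) {
            r->priority++;
        }
    }
}
```

A scheduler would call this pass on a timer tick and serve the pending queue in priority order, guaranteeing that every waiting request is eventually served.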
Dynamic partition creation is the foundational operation of variable partitioning memory management. Understanding this process deeply is essential for comprehending both historical memory management techniques and the design decisions underlying modern systems.
What's Next:
With the foundation of dynamic partition creation established, we turn to the complementary challenge: managing the free memory holes that result from allocation and deallocation. The next page explores Hole Management — the strategies and data structures used to track, organize, and efficiently utilize free memory regions in a variable partitioning system.
You now understand the complete mechanics of dynamic partition creation in variable partitioning systems. This knowledge forms the foundation for understanding hole management, fragmentation, allocation strategies, and the eventual migration to more sophisticated techniques like paging and virtual memory.