The single partition model's inefficiencies created an imperative for change. When a computer that costs thousands of dollars per hour to operate sits idle during I/O operations, economic pressure demands a solution. That solution is multiprogramming: keeping multiple programs in memory simultaneously so the CPU can switch to another program when one is blocked.
But multiprogramming requires a fundamental change in memory management. If multiple programs must coexist in memory, we need to partition memory into regions, assign each program to a partition, and protect programs from each other. This page explores multiple partition allocation—the natural evolution of single-partition systems.
Multiple partition allocation comes in two primary forms: fixed partitioning, where memory is divided into a predetermined set of partitions at system startup, and variable (dynamic) partitioning, where partitions are created on demand and sized to match each process. Each approach has distinct characteristics that make it suitable for different scenarios.
By the end of this page, you will understand the architecture of multiple partition systems, the mechanics of both fixed and variable partitioning, how processes are assigned to partitions, and the fragmentation problems that each approach creates. You'll be able to analyze which partitioning scheme suits different workload patterns.
Before examining multiple partitions technically, let's understand why multiprogramming is so valuable. The mathematics are compelling.
CPU Utilization Analysis
Consider a program that spends fraction p of its time waiting for I/O. In a single-partition system, the CPU utilization is simply (1 - p)—the fraction of time the program is actually computing.
With n programs in memory (multiprogramming), the probability that all programs are waiting for I/O simultaneously is p^n. Therefore:
CPU Utilization = 1 - p^n
Let's see the dramatic impact:
| I/O Wait (p) | n=1 | n=2 | n=3 | n=4 | n=5 |
|---|---|---|---|---|---|
| 50% | 50% | 75% | 87.5% | 93.8% | 96.9% |
| 80% | 20% | 36% | 49% | 59% | 67% |
| 90% | 10% | 19% | 27% | 34% | 41% |
| 95% | 5% | 10% | 14% | 19% | 23% |
For I/O-bound workloads (90% I/O wait), adding just 4 more programs increases CPU utilization from 10% to 41%—a 4x improvement. This is the economic justification for multiprogramming.
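To make the formula concrete, here is a minimal C sketch (illustrative only; the chosen I/O-wait values simply mirror the table) that evaluates 1 - p^n for several degrees of multiprogramming:

```c
#include <math.h>
#include <stdio.h>

/* CPU utilization with n resident programs, each waiting on I/O
 * a fraction p of the time: utilization = 1 - p^n. */
static double cpu_utilization(double p, int n) {
    return 1.0 - pow(p, n);
}

int main(void) {
    const double io_wait[] = {0.50, 0.80, 0.90, 0.95};

    for (int i = 0; i < 4; i++) {
        double p = io_wait[i];
        printf("p = %.0f%%:", p * 100.0);
        for (int n = 1; n <= 5; n++)
            printf("  n=%d: %5.1f%%", n, cpu_utilization(p, n) * 100.0);
        printf("\n");
    }
    return 0;
}
```

Compiled with `-lm` for `pow`, it prints the same percentages as the table above (unrounded).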
The Memory Requirement
But multiprogramming has a prerequisite: all n programs must be in memory simultaneously. If each program needs 100KB and we have only 400KB of user memory, we can support at most 4 programs. This creates the fundamental tradeoff: a higher degree of multiprogramming improves CPU utilization, but every additional resident program consumes memory, so available memory caps how many programs can run concurrently.
This tradeoff motivates efficient memory allocation—we want to fit as many programs as possible into available memory.
The degree of multiprogramming is the number of programs simultaneously resident in memory. Higher degrees improve CPU utilization but require more memory and more sophisticated memory management. Modern systems may have degrees in the hundreds or thousands, enabled by virtual memory techniques we'll study later.
Fixed partitioning divides memory into a permanent set of partitions at system initialization time. Each partition has a fixed size that never changes during system operation. This was the first approach to multiprogramming, used in systems like IBM OS/360 MFT (Multiprogramming with a Fixed number of Tasks).
Equal-Size Partitions
The simplest fixed partitioning uses equally-sized partitions:
╔═══════════════════════════════╗ High Memory
║ Partition 4 (256KB) ║
╠═══════════════════════════════╣
║ Partition 3 (256KB) ║
╠═══════════════════════════════╣
║ Partition 2 (256KB) ║
╠═══════════════════════════════╣
║ Partition 1 (256KB) ║
╠═══════════════════════════════╣
║ Operating System ║
╚═══════════════════════════════╝ Address 0
This approach is simple to implement but has severe limitations: any program smaller than a partition wastes the remainder of that partition (internal fragmentation), and any program larger than a partition cannot run at all, no matter how much total memory is free.
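As a quick illustration of the first limitation, the following C sketch (with hypothetical job sizes) totals the memory wasted when small jobs occupy equal 256KB partitions:

```c
#include <stdio.h>

#define PARTITION_SIZE (256 * 1024)   /* 256KB equal partitions */

int main(void) {
    /* Hypothetical resident jobs, in bytes */
    const int job_size[] = {50 * 1024, 100 * 1024, 180 * 1024, 240 * 1024};
    const int n = sizeof(job_size) / sizeof(job_size[0]);
    int wasted = 0;

    for (int i = 0; i < n; i++) {
        /* Internal fragmentation: the unused tail of each partition */
        wasted += PARTITION_SIZE - job_size[i];
    }

    printf("Total internal fragmentation: %dKB of %dKB (%.0f%%)\n",
           wasted / 1024, n * PARTITION_SIZE / 1024,
           100.0 * wasted / (n * PARTITION_SIZE));
    return 0;
}
```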
Unequal-Size Partitions
A more practical approach uses partitions of varying sizes:
╔═══════════════════════════════╗ High Memory
║ Partition 5 (512KB) ║ Large programs
╠═══════════════════════════════╣
║ Partition 4 (256KB) ║
╠═══════════════════════════════╣
║ Partition 3 (128KB) ║ Medium programs
╠═══════════════════════════════╣
║ Partition 2 (64KB) ║
╠═══════════════════════════════╣
║ Partition 1 (32KB) ║ Small programs
╠═══════════════════════════════╣
║ Operating System ║
╚═══════════════════════════════╝ Address 0
This better matches the variety of program sizes in a typical workload. With unequal partitions, however, the system must also decide how incoming jobs queue for partitions: one queue per partition, or a single global queue served as partitions free up. The per-partition approach can leave large partitions idle while small jobs wait:
Partition 1 (32KB): [P_a, P_b, P_c] → Busy queue, jobs waiting
Partition 2 (64KB): [P_d]
Partition 3 (128KB): [] → Empty, but 32KB jobs can't use it
Partition 4 (256KB): [P_e]
Partition 5 (512KB): [] → Idle, wasted
A single global queue avoids idle partitions, but raises a placement question whenever a partition frees:
Global Queue: [P_a(30KB), P_b(100KB), P_c(50KB), P_d(200KB)]
Partition 3 (128KB) frees:
- P_a (30KB) fits → Assigned, wastes 98KB
- Alternative: P_b (100KB) fits better → 28KB waste
Policy choice: First-fit or Best-fit?
```
STRUCTURE FixedPartitionTable:
    partitions: ARRAY[1..MAX_PARTITIONS] OF PartitionEntry
    num_partitions: INTEGER

STRUCTURE PartitionEntry:
    base: ADDRESS          // Starting address
    size: SIZE             // Partition size in bytes
    allocated: BOOLEAN     // Is partition in use?
    process_id: PID        // Process using this partition (if allocated)

// Strategy 1: Best-Fit Assignment (minimize waste)
FUNCTION AssignBestFit(process: Process, table: FixedPartitionTable) -> INTEGER:
    best_partition := -1
    min_waste := INFINITY

    FOR i := 1 TO table.num_partitions:
        IF NOT table.partitions[i].allocated THEN
            IF table.partitions[i].size >= process.memory_size THEN
                waste := table.partitions[i].size - process.memory_size
                IF waste < min_waste THEN
                    best_partition := i
                    min_waste := waste
                END IF
            END IF
        END IF
    END FOR

    IF best_partition = -1 THEN
        RETURN ERROR_NO_PARTITION   // No suitable partition
    END IF

    // Assign process to partition
    table.partitions[best_partition].allocated := TRUE
    table.partitions[best_partition].process_id := process.pid

    // Load process into partition
    LoadProcess(process, table.partitions[best_partition].base)

    RETURN best_partition
END FUNCTION

// Strategy 2: First-Fit Assignment (faster search)
FUNCTION AssignFirstFit(process: Process, table: FixedPartitionTable) -> INTEGER:
    FOR i := 1 TO table.num_partitions:
        IF NOT table.partitions[i].allocated THEN
            IF table.partitions[i].size >= process.memory_size THEN
                table.partitions[i].allocated := TRUE
                table.partitions[i].process_id := process.pid
                LoadProcess(process, table.partitions[i].base)
                RETURN i
            END IF
        END IF
    END FOR
    RETURN ERROR_NO_PARTITION
END FUNCTION

// Release partition when process terminates
PROCEDURE ReleasePartition(partition_index: INTEGER, table: FixedPartitionTable):
    IF table.partitions[partition_index].allocated THEN
        // Clean up process
        UnloadProcess(table.partitions[partition_index].process_id)

        // Mark partition as free
        table.partitions[partition_index].allocated := FALSE
        table.partitions[partition_index].process_id := NULL_PID

        // Partition is now available for next process
    END IF
END PROCEDURE
```

IBM's OS/360 MFT (Multiprogramming with a Fixed number of Tasks) was a landmark implementation of fixed partitioning. System administrators configured partition sizes at system generation time based on expected workload characteristics. Once set, these partitions remained fixed until the next system generation, a process that could take hours.
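For readers who prefer running code, here is a compact C sketch of the same best-fit assignment over a static partition table. The partition layout, sizes, and struct names are hypothetical, and process loading is reduced to bookkeeping:

```c
#include <stdio.h>

typedef struct {
    unsigned base;      /* starting address */
    unsigned size;      /* partition size in bytes */
    int      pid;       /* -1 means free */
} Partition;

/* Hypothetical layout: OS below 0x10000, then five unequal partitions */
static Partition table[] = {
    {0x10000,  32 * 1024, -1},
    {0x18000,  64 * 1024, -1},
    {0x28000, 128 * 1024, -1},
    {0x48000, 256 * 1024, -1},
    {0x88000, 512 * 1024, -1},
};
#define NPART ((int)(sizeof(table) / sizeof(table[0])))

/* Best fit: pick the free partition with the least leftover space. */
static int assign_best_fit(int pid, unsigned need) {
    int best = -1;
    unsigned min_waste = ~0u;
    for (int i = 0; i < NPART; i++) {
        if (table[i].pid == -1 && table[i].size >= need &&
            table[i].size - need < min_waste) {
            best = i;
            min_waste = table[i].size - need;
        }
    }
    if (best >= 0)
        table[best].pid = pid;
    return best;    /* -1 if no partition fits */
}

int main(void) {
    int idx = assign_best_fit(42, 100 * 1024);   /* 100KB job */
    if (idx >= 0)
        printf("pid 42 -> partition %d at 0x%x (waste %uKB)\n",
               idx, table[idx].base, (table[idx].size - 100 * 1024) / 1024);
    return 0;
}
```

With these sizes, the 100KB job lands in the 128KB partition, wasting 28KB, exactly the best-fit outcome described in the queueing example earlier.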
Variable partitioning (also called dynamic partitioning) eliminates the fixed partition constraint. Partitions are created dynamically when processes are loaded, sized exactly to each process's requirements.
IBM OS/360 MVT (Multiprogramming with a Variable number of Tasks) pioneered this approach.
How Variable Partitioning Works
Initially, all user memory is one large free block (a "hole"). As processes are loaded and terminated, the picture changes: loading a process carves an exactly-sized partition out of a sufficiently large hole, leaving the remainder as a smaller hole; terminating a process returns its memory to the free pool as a new hole; and adjacent holes are merged (coalesced) so free memory does not splinter needlessly.
Hole Management Data Structures
The OS must track all holes to find suitable locations for new processes. Common data structures include:
| Structure | Search Time | Insertion/Deletion | Memory Overhead |
|---|---|---|---|
| Linked List | O(n) | O(1) after finding | Low (pointers only) |
| Ordered List (by size) | O(n) | O(n) to maintain order | Low |
| Ordered List (by address) | O(n) | O(n) for ordered insert | Low |
| Bitmap | O(n) bits scanned | O(1) | High for fine granularity |
| Buddy System | O(log n) | O(log n) | Moderate |
The linked list remains most common due to simplicity and the relatively small number of holes in typical systems.
```
STRUCTURE HoleNode:
    base: ADDRESS                  // Starting address of hole
    size: SIZE                     // Size of hole in bytes
    next: POINTER TO HoleNode
    prev: POINTER TO HoleNode

STRUCTURE VariablePartitionManager:
    hole_list_head: POINTER TO HoleNode
    process_table: MAP<PID, ProcessMemoryDescriptor>

STRUCTURE ProcessMemoryDescriptor:
    base: ADDRESS
    size: SIZE

// Allocate memory for a new process
FUNCTION AllocateMemory(manager: VariablePartitionManager, process: Process) -> ADDRESS:
    // Find suitable hole using allocation strategy
    // (First-Fit, Best-Fit, Worst-Fit, Next-Fit)
    hole := FindSuitableHole(manager.hole_list_head, process.memory_size)

    IF hole = NULL THEN
        // Try compaction
        Compact(manager)
        hole := FindSuitableHole(manager.hole_list_head, process.memory_size)
        IF hole = NULL THEN
            RETURN ERROR_OUT_OF_MEMORY
        END IF
    END IF

    allocated_base := hole.base    // Allocate from beginning of hole

    IF hole.size = process.memory_size THEN
        // Exact fit: remove entire hole from list
        RemoveFromList(manager.hole_list_head, hole)
        FreeHoleNode(hole)
    ELSE
        // Partial fit: shrink the hole
        hole.base := hole.base + process.memory_size
        hole.size := hole.size - process.memory_size
    END IF

    // Record allocation
    manager.process_table[process.pid] := {
        base: allocated_base,
        size: process.memory_size
    }

    RETURN allocated_base
END FUNCTION

// Release memory when process terminates
PROCEDURE FreeMemory(manager: VariablePartitionManager, pid: PID):
    IF pid NOT IN manager.process_table THEN
        RETURN    // Process already freed or never allocated
    END IF

    descriptor := manager.process_table[pid]

    // Create new hole for freed memory
    new_hole := AllocateHoleNode()
    new_hole.base := descriptor.base
    new_hole.size := descriptor.size

    // Insert hole into list and merge with adjacent holes
    InsertAndMergeHole(manager.hole_list_head, new_hole)

    // Remove from process table
    DELETE manager.process_table[pid]
END PROCEDURE

// Merge adjacent holes to combat fragmentation
PROCEDURE InsertAndMergeHole(list_head: POINTER TO HoleNode, new_hole: POINTER TO HoleNode):
    // Insert in address-sorted order
    InsertByAddress(list_head, new_hole)

    // Check for merge with previous hole
    IF new_hole.prev != NULL THEN
        IF new_hole.prev.base + new_hole.prev.size = new_hole.base THEN
            // Previous hole is adjacent: merge
            new_hole.prev.size := new_hole.prev.size + new_hole.size
            RemoveFromList(list_head, new_hole)
            FreeHoleNode(new_hole)
            new_hole := new_hole.prev   // Continue checking with merged hole
        END IF
    END IF

    // Check for merge with next hole
    IF new_hole.next != NULL THEN
        IF new_hole.base + new_hole.size = new_hole.next.base THEN
            // Next hole is adjacent: merge
            new_hole.size := new_hole.size + new_hole.next.size
            old_next := new_hole.next
            RemoveFromList(list_head, old_next)
            FreeHoleNode(old_next)
        END IF
    END IF
END PROCEDURE
```

Variable partitioning eliminates internal fragmentation—each process gets exactly the memory it needs. But it introduces external fragmentation: free memory becomes scattered into many small holes. Even if total free memory exceeds a process's needs, the process cannot load if no single hole is large enough. We address this problem with compaction and allocation strategies in later pages.
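The external-fragmentation problem is easy to demonstrate numerically. This C sketch (the hole sizes are made up for illustration) shows free memory that is collectively sufficient but individually useless for a large request:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical holes left after many loads and terminations (KB) */
    const unsigned hole_kb[] = {40, 12, 60, 25, 33, 50};
    const int n = sizeof(hole_kb) / sizeof(hole_kb[0]);
    const unsigned request_kb = 100;   /* incoming process needs 100KB */

    unsigned total = 0, largest = 0;
    for (int i = 0; i < n; i++) {
        total += hole_kb[i];
        if (hole_kb[i] > largest)
            largest = hole_kb[i];
    }

    printf("Total free: %uKB, largest hole: %uKB, request: %uKB\n",
           total, largest, request_kb);
    if (largest < request_kb)
        printf("Allocation fails despite %uKB free -> external fragmentation\n",
               total);
    return 0;
}
```

Here 220KB is free in total, yet the 100KB request cannot be satisfied because no single hole exceeds 60KB.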
Each partitioning scheme has distinct advantages and disadvantages. The choice depends on workload characteristics and system requirements.
Fixed Partitioning Analysis
Fixed partitioning excels when the workload is predictable: job sizes are known in advance, the degree of multiprogramming can be fixed, and simplicity or deterministic behavior matters more than memory utilization.
Fixed partitioning struggles when program sizes vary widely or are unknown: small jobs strand most of a large partition (internal fragmentation), jobs bigger than the largest partition cannot run, and partition sizes chosen at configuration time rarely match the actual job mix.
Variable Partitioning Analysis
Variable partitioning excels when program sizes vary widely and are not known in advance: each process receives exactly the memory it requests, so there is no internal fragmentation and utilization is generally better.
Variable partitioning struggles under long-running, high-turnover workloads: repeated allocation and deallocation scatters free memory into small holes (external fragmentation), allocation requires searching and maintaining the hole list, and periodic compaction may be needed to recover usable space.
| Characteristic | Fixed Partitioning | Variable Partitioning |
|---|---|---|
| Internal Fragmentation | Severe (unused space in partitions) | None (exact-fit allocation) |
| External Fragmentation | None (partition boundaries fixed) | Severe (holes between processes) |
| Implementation Complexity | Simple (static table) | Moderate (dynamic hole tracking) |
| Allocation Speed | O(n) partitions, fast | O(n) holes, depends on strategy |
| Deallocation Speed | O(1) - just mark free | O(n) - may need hole merging |
| Maximum Process Size | Largest partition size | Largest contiguous hole |
| Degree of Multiprogramming | Fixed (number of partitions) | Variable (depends on process sizes) |
| Memory Utilization | Poor to moderate | Moderate to good |
| Configuration | Requires tuning for workload | Adapts automatically |
| Compaction Needed | Never | Periodically (costly) |
Neither pure fixed nor pure variable partitioning is used in modern general-purpose systems. They evolved into paging (which combines benefits of both) and segmentation. However, embedded systems and specialized allocators still use these principles—understanding them is essential for kernel development and memory allocator design.
In variable partitioning, when a process needs memory, multiple holes may be suitable. Which one should we choose? This decision profoundly affects system performance and fragmentation patterns.
The Placement Problem
Given a process requesting S bytes and a list of free holes of various sizes scattered through memory, which hole should we allocate from? Four classic strategies exist: First Fit (take the first hole large enough), Best Fit (take the smallest hole that is large enough), Worst Fit (take the largest hole), and Next Fit (like First Fit, but resume the search from where the last allocation ended).
Each strategy has distinct performance characteristics that we'll analyze in detail in a later page.
| Strategy | Search Time | Fragmentation Tendency | Best For |
|---|---|---|---|
| First Fit | O(n) average, fast in practice | Moderate (front of memory fragments) | General purpose |
| Best Fit | O(n) always (scan all) | Creates many tiny holes | When minimizing waste matters |
| Worst Fit | O(n) always (scan all) | Uniform hole distribution | When future allocations are similar |
| Next Fit | O(n) worst, fast average | Even distribution of fragments | When allocation patterns are uniform |
```
// First Fit: Find the first hole large enough
FUNCTION FirstFit(hole_list: POINTER TO HoleNode, size: SIZE) -> POINTER TO HoleNode:
    current := hole_list
    WHILE current != NULL:
        IF current.size >= size THEN
            RETURN current    // Found suitable hole
        END IF
        current := current.next
    END WHILE
    RETURN NULL               // No suitable hole found
END FUNCTION

// Best Fit: Find the smallest hole that's large enough
FUNCTION BestFit(hole_list: POINTER TO HoleNode, size: SIZE) -> POINTER TO HoleNode:
    best_hole := NULL
    current := hole_list
    WHILE current != NULL:
        IF current.size >= size THEN
            IF best_hole = NULL OR current.size < best_hole.size THEN
                best_hole := current
            END IF
        END IF
        current := current.next
    END WHILE
    RETURN best_hole
END FUNCTION

// Worst Fit: Find the largest hole
FUNCTION WorstFit(hole_list: POINTER TO HoleNode, size: SIZE) -> POINTER TO HoleNode:
    worst_hole := NULL
    current := hole_list
    WHILE current != NULL:
        IF current.size >= size THEN
            IF worst_hole = NULL OR current.size > worst_hole.size THEN
                worst_hole := current
            END IF
        END IF
        current := current.next
    END WHILE
    RETURN worst_hole
END FUNCTION

// Next Fit: Start searching from last allocation position
GLOBAL last_allocated: POINTER TO HoleNode := NULL

FUNCTION NextFit(hole_list: POINTER TO HoleNode, size: SIZE) -> POINTER TO HoleNode:
    // Start from last position, or beginning if no prior allocation
    start := last_allocated
    IF start = NULL THEN
        start := hole_list
    END IF

    // Search from last position to end
    current := start
    WHILE current != NULL:
        IF current.size >= size THEN
            last_allocated := current
            RETURN current
        END IF
        current := current.next
    END WHILE

    // Wrap around: search from beginning to last position
    current := hole_list
    WHILE current != start:
        IF current.size >= size THEN
            last_allocated := current
            RETURN current
        END IF
        current := current.next
    END WHILE

    RETURN NULL    // Complete circle, no suitable hole
END FUNCTION
```

Extensive simulation studies (Knuth, Shore, Bays) found that First Fit and Best Fit generally outperform Worst Fit in memory utilization. First Fit is slightly faster (stops at first suitable hole), while Best Fit produces slightly better utilization but creates many tiny, unusable holes. Next Fit distributes fragmentation more evenly but may perform worse than First Fit overall.
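A small C driver (array-based rather than linked-list, with arbitrary hole sizes) makes the behavioral difference visible: First Fit stops at the first adequate hole, while Best Fit scans everything for the tightest one:

```c
#include <stdio.h>

static int first_fit(const unsigned holes[], int n, unsigned need) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= need)
            return i;              /* first hole large enough */
    return -1;
}

static int best_fit(const unsigned holes[], int n, unsigned need) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= need &&
            (best == -1 || holes[i] < holes[best]))
            best = i;              /* smallest hole large enough */
    return best;
}

int main(void) {
    /* Arbitrary hole sizes in KB; the demo data guarantees a fit exists */
    unsigned holes[] = {300, 600, 350, 200, 750, 125};
    const int n = sizeof(holes) / sizeof(holes[0]);
    const unsigned need = 200;

    int ff = first_fit(holes, n, need);
    int bf = best_fit(holes, n, need);
    printf("Request %uKB: first fit -> hole %d (%uKB), best fit -> hole %d (%uKB)\n",
           need, ff, holes[ff], bf, holes[bf]);
    return 0;
}
```

For a 200KB request against these holes, First Fit picks the 300KB hole (100KB left over) while Best Fit finds the exact 200KB match.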
With multiple programs in memory simultaneously, protection becomes critical. A bug in one program must not corrupt other programs or the operating system. The hardware mechanisms introduced for single-partition protection now must protect multiple boundaries.
Protection Requirements
The system must enforce three rules: a user process may access only its own partition, it may not read or write another process's partition, and it may not touch operating system memory. The OS itself, running in supervisor mode, may access everything.
Implementation with Base and Limit Registers
The same base/limit mechanism works for multiple partitions, with one crucial addition: the registers must be context-switched when the CPU switches between processes.
```
STRUCTURE ProcessControlBlock:
    pid: PID
    state: ProcessState
    registers: RegisterSet
    base_register: ADDRESS    // Process's memory base
    limit_register: SIZE      // Process's memory size
    // ... other fields

// Context switch includes protection register update
PROCEDURE ContextSwitch(old_process: PCB, new_process: PCB):
    IF old_process != NULL THEN
        // Save current process state including protection registers
        old_process.registers := CPU.Registers
        old_process.base_register := CPU.BaseRegister
        old_process.limit_register := CPU.LimitRegister
    END IF

    // Load new process state
    CPU.Registers := new_process.registers

    // CRITICAL: Update protection registers
    // This defines which memory new_process can access
    CPU.BaseRegister := new_process.base_register
    CPU.LimitRegister := new_process.limit_register

    // Now when new_process runs, hardware enforces its boundaries
END PROCEDURE

// Hardware memory access check (performed on EVERY memory access)
HARDWARE_FUNCTION MemoryAccessCheck(logical_address: ADDRESS) -> ADDRESS:
    mode := CPU.Mode

    IF mode = SUPERVISOR THEN
        // OS mode: no translation, no restrictions
        RETURN logical_address
    END IF

    // User mode: check against base/limit
    IF logical_address >= CPU.LimitRegister THEN
        RaiseException(SEGMENTATION_FAULT)
        HALT    // Never reaches here
    END IF

    // Translate: add base to get physical address
    physical_address := CPU.BaseRegister + logical_address
    RETURN physical_address
END HARDWARE_FUNCTION
```

Address Translation Bonus
Notice that base/limit protection also provides address translation. The user program sees logical addresses starting from 0, but the hardware adds the base to get the actual physical address:
| User Address | Base Register | Physical Address |
|---|---|---|
| 0x0000 | 0x40000 | 0x40000 |
| 0x1000 | 0x40000 | 0x41000 |
| 0xFFFF | 0x40000 | 0x4FFFF |
This relocation means programs can be compiled for address 0 and loaded anywhere—the hardware transparently adjusts all addresses. This is a primitive form of what evolved into MMU-based virtual memory.
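The same check and translation can be mimicked in a few lines of C. This is a software simulation of what the hardware does on every access, using the base and limit values from the table above:

```c
#include <stdio.h>

#define BASE  0x40000u   /* where the partition starts in physical memory */
#define LIMIT 0x10000u   /* partition size: logical addresses 0..0xFFFF */

/* Return the physical address, or -1 to signal a protection fault. */
static long translate(unsigned logical) {
    if (logical >= LIMIT)
        return -1;                 /* hardware would raise a fault/trap here */
    return (long)(BASE + logical); /* relocation: base + logical */
}

int main(void) {
    unsigned probes[] = {0x0000, 0x1000, 0xFFFF, 0x12345};
    for (int i = 0; i < 4; i++) {
        long phys = translate(probes[i]);
        if (phys < 0)
            printf("0x%05X -> FAULT (beyond limit)\n", probes[i]);
        else
            printf("0x%05X -> 0x%05lX\n", probes[i], (unsigned long)phys);
    }
    return 0;
}
```

The first three probes reproduce the table's rows; the fourth exceeds the limit and would trap to the operating system.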
Only the operating system (running in supervisor mode) can modify the base and limit registers. User programs cannot change their own boundaries, as the register-modification instructions are privileged. This hardware enforcement is what makes protection reliable—software alone cannot be trusted.
Multiple partition allocation was the dominant memory management strategy from the mid-1960s through the early 1970s. Several historically significant systems illustrated its practical application.
IBM System/360: MFT and MVT
IBM's System/360 offered two operating systems demonstrating both partitioning approaches:
| System | Full Name | Partitioning | Target Use |
|---|---|---|---|
| MFT | Multiprogramming with Fixed Tasks | Fixed | Smaller installations, simpler workloads |
| MVT | Multiprogramming with Variable Tasks | Variable | Larger installations, varied workloads |
MFT required operators to define partition sizes at system generation time. Common configurations included 3-5 partitions for different job classes (small, medium, large). This approach was simpler but inflexible.
MVT created partitions dynamically, tracking memory as a list of allocated regions and holes. It supported more sophisticated scheduling and better memory utilization but required more complex memory management code.
CDC 6600: Peripheral Processors
The CDC 6600 supercomputer used an innovative hybrid approach. Its 10 Peripheral Processors (PPs) each had 4096 12-bit words of memory—essentially single-partition per PP. The central processor had 128K words accessed via base/limit registers.
This design separated I/O handling (PPs) from computation (central processor), reducing the impact of I/O waits on computational workloads.
Modern Echoes
While general-purpose systems moved to paging, partition-like concepts persist:
| Modern Context | Partition Analogy |
|---|---|
| Container memory limits | Fixed memory "partition" per container |
| NUMA memory regions | Partitioned physical memory |
| GPU memory management | Fixed chunks for different kernels |
| Real-time OS task pools | Fixed partitions for predictable allocation |
| Hypervisor memory assignment | Fixed memory per VM |
Modern memory allocators like jemalloc, tcmalloc, and mimalloc use slab allocation, which organizes memory into fixed-size chunks based on allocation size classes. This is conceptually similar to fixed partitioning within a heap, trading some internal fragmentation for faster allocation and reduced external fragmentation.
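To give a flavor of that idea, here is a heavily simplified C sketch of size-class allocation. It is not the actual jemalloc/tcmalloc/mimalloc design; the classes are hypothetical. Each request is rounded up to a fixed class, trading a little internal fragmentation for constant-time decisions:

```c
#include <stdio.h>

/* Hypothetical size classes in bytes, loosely in the spirit of slab allocators */
static const unsigned size_class[] = {16, 32, 64, 128, 256, 512, 1024};
#define NCLASS (sizeof(size_class) / sizeof(size_class[0]))

/* Round a request up to its size class; return the class index, or -1 if the
 * request is too large and would go to a separate large-object path. */
static int pick_class(unsigned request) {
    for (unsigned i = 0; i < NCLASS; i++)
        if (request <= size_class[i])
            return (int)i;
    return -1;
}

int main(void) {
    unsigned requests[] = {10, 33, 100, 500, 2000};
    for (int i = 0; i < 5; i++) {
        int c = pick_class(requests[i]);
        if (c < 0)
            printf("%4u bytes -> large-object allocator\n", requests[i]);
        else
            printf("%4u bytes -> class %d (%u bytes, %u wasted)\n",
                   requests[i], c, size_class[c],
                   size_class[c] - requests[i]);
    }
    return 0;
}
```

The "wasted" column is exactly internal fragmentation, the same cost fixed partitioning pays, accepted here because it keeps allocation fast and bounds external fragmentation.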
Multiple partition allocation represents the first practical implementation of multiprogramming. Let's consolidate the essential concepts:
- Multiprogramming keeps several programs resident so the CPU can run one while others wait on I/O; utilization follows 1 - p^n.
- Fixed partitioning divides memory into static regions; it is simple but suffers internal fragmentation and caps process size at the largest partition.
- Variable partitioning sizes each partition to its process; it eliminates internal fragmentation but creates external fragmentation and requires hole tracking, merging, and occasional compaction.
- Placement strategies (First Fit, Best Fit, Worst Fit, Next Fit) decide which hole to allocate from and shape fragmentation patterns.
- Base and limit registers, reloaded on every context switch, provide both protection and relocation for resident processes.
What's Next:
With multiple partition allocation understood, we're ready to explore the hardware mechanisms that make it work—specifically base/limit registers for protection and relocation. The next page examines how hardware enforces memory boundaries and translates addresses.
You now understand both fixed and variable partitioning—their architectures, tradeoffs, and the fragmentation problems they create. This foundation is essential for understanding why paging was developed and how modern memory allocators work. Next, we examine protection with base/limit registers.