The equal-size partitioning scheme we examined previously embodies a powerful simplicity, but that simplicity comes at a cost. When system administrators observed real workloads, they noticed a persistent problem:
Real processes don't come in uniform sizes.
A typical computing environment runs a mix of small utilities, modest applications, and a handful of very large programs, all competing for memory at once.
With equal-size partitions, system administrators faced an impossible choice. Size partitions for large programs, and small programs waste enormous memory. Size partitions for small programs, and large programs cannot run at all.
Unequal-size partitioning emerged as the natural evolution: instead of dividing memory into identical slices, why not create partitions of different sizes to match the diversity of process requirements?
This seemingly simple idea introduces layers of complexity that illuminate fundamental computer science trade-offs. Let's explore them systematically.
By the end of this page, you will:
• Understand the design rationale for unequal-size partitions
• Analyze the complexity increase compared to equal-size schemes
• Master multiple allocation strategies for partition assignment
• Evaluate trade-offs in partition size selection
• Implement partition tables with heterogeneous sizes
• Recognize historical systems that employed this approach
Unequal-size fixed partitioning divides physical memory into a fixed number of partitions at system initialization, but unlike the equal-size variant, each partition may have a different, administrator-defined size.
Let's formalize the properties that distinguish this scheme:
1. Static Division, Variable Sizes
The memory is divided once at system startup into n partitions. However, the partition sizes S₁, S₂, ..., Sₙ can differ from one another. Once established, these sizes remain constant during system operation.
2. Size Constraint
The sum of all partition sizes must equal the available user memory:
S₁ + S₂ + S₃ + ... + Sₙ = M (total user memory)
3. Exclusive Allocation (Preserved)
Each partition still holds at most one process. A partition is either free or occupied by exactly one process.
4. Fixed Boundaries (Preserved)
Partition boundaries remain immutable. Partitions cannot be split, merged, or resized during operation.
| Partition # | Size | Base Address | Target Workload |
|---|---|---|---|
| 0 | 2 MB | 0x00800000 | Small utilities |
| 1 | 2 MB | 0x00A00000 | Small utilities |
| 2 | 4 MB | 0x00C00000 | Small applications |
| 3 | 4 MB | 0x01000000 | Small applications |
| 4 | 8 MB | 0x01400000 | Medium applications |
| 5 | 8 MB | 0x01C00000 | Medium applications |
| 6 | 16 MB | 0x02400000 | Large programs |
| 7 | 20 MB | 0x03400000 | Very large programs |
| Total | 64 MB | — | — |
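This example satisfies the size constraint above: 2 + 2 + 4 + 4 + 8 + 8 + 16 + 20 = 64 MB of user memory, divided into eight partitions spanning five distinct sizes.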
Unequal-size partitioning represents a fundamental shift in design philosophy:
Equal-Size Philosophy: "Treat all workloads identically."
Unequal-Size Philosophy: "Match infrastructure to workload characteristics."
This shift requires the system administrator to possess knowledge about expected workload distributions—a form of capacity planning that would become increasingly important as systems grew more complex.
Effective unequal-size partitioning requires analyzing historical workloads:
• What sizes do processes typically require?
• What's the distribution (many small, few large)?
• What are the peak concurrent process counts per size category?
• How does the mix vary by time of day or processing cycle?
This analysis drives partition configuration. Poor analysis leads to poor partition fit; excellent analysis leads to efficient memory utilization.
Unequal-size partitioning requires more sophisticated data structures than its equal-size counterpart, since we can no longer compute partition addresses from a uniform size.
```c
#include <stdio.h>
#include <stdint.h>

// Partition configuration (set at boot via configuration file)
#define MAX_PARTITIONS 16
#define USER_MEM_START 0x00800000   // Start of user memory (matches the example table)

// log_info(): printf-style logging helper assumed to be provided elsewhere

// Partition states
typedef enum {
    PARTITION_FREE,
    PARTITION_OCCUPIED
} PartitionState;

// Partition descriptor with variable size
typedef struct {
    PartitionState state;
    int process_id;            // PID (-1 if free)
    uint32_t base_address;     // Starting physical address
    uint32_t size;             // Size in bytes (varies per partition)
    uint32_t wasted_space;     // Tracking internal fragmentation
    uint64_t allocation_time;
} PartitionDescriptor;

// System configuration
typedef struct {
    int num_partitions;
    uint32_t total_memory;
    PartitionDescriptor partitions[MAX_PARTITIONS];
} PartitionConfig;

PartitionConfig system_config;

// Initialize from configuration file
int init_partition_config(const char* config_file) {
    FILE* f = fopen(config_file, "r");
    if (!f) return -1;

    uint32_t current_address = USER_MEM_START;
    system_config.num_partitions = 0;
    system_config.total_memory = 0;

    // Read partition sizes from config
    // Format: one size (in bytes) per line; '#' lines and trailing comments are skipped
    char line[128];
    while (fgets(line, sizeof(line), f) &&
           system_config.num_partitions < MAX_PARTITIONS) {
        uint32_t size;
        if (sscanf(line, "%u", &size) != 1) continue;   // blank or comment line

        int i = system_config.num_partitions;
        system_config.partitions[i].state = PARTITION_FREE;
        system_config.partitions[i].process_id = -1;
        system_config.partitions[i].base_address = current_address;
        system_config.partitions[i].size = size;
        system_config.partitions[i].wasted_space = 0;
        system_config.partitions[i].allocation_time = 0;

        current_address += size;
        system_config.total_memory += size;
        system_config.num_partitions++;
    }

    fclose(f);
    log_info("Configured %d partitions, total %u MB",
             system_config.num_partitions,
             system_config.total_memory / (1024 * 1024));
    return 0;
}
```

Notice the critical differences in this implementation:
1. Variable Size Field
Each partition now stores its own size. We can no longer compute size from a global constant.
2. Pre-computed Base Addresses
Base addresses must be stored explicitly. With equal sizes, we computed: base = START + (i × SIZE). With variable sizes, we must accumulate: base[i] = base[i-1] + size[i-1].
3. Wasted Space Tracking
We track internal fragmentation per partition: wasted = partition.size - process.actual_size. This enables monitoring and optimization.
4. Configuration Complexity
The initialization requires external configuration rather than simple arithmetic. This adds a failure mode (misconfiguration) but enables flexibility.
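Because misconfiguration is now a real failure mode, a boot-time sanity check is worth sketching. The code below is illustrative, not part of the original listing; it assumes a hypothetical USER_MEM_SIZE constant (the total amount of user-addressable memory) alongside the structures defined above.

```c
// Illustrative sanity check, run once after init_partition_config().
// USER_MEM_SIZE is an assumed constant: the total bytes available to
// user processes. log_info() is the same assumed logging helper as above.
int validate_partition_config(void) {
    if (system_config.num_partitions == 0) {
        log_info("Config error: no partitions defined");
        return -1;
    }
    if (system_config.total_memory > USER_MEM_SIZE) {
        log_info("Config error: partitions total %u bytes, but only %u bytes of user memory exist",
                 system_config.total_memory, (uint32_t)USER_MEM_SIZE);
        return -1;
    }
    return 0;  // Configuration is usable
}
```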
A typical partition configuration file might contain:
# partition_config.txt
# Sizes in bytes
2097152 # 2 MB - small utilities
2097152 # 2 MB - small utilities
4194304 # 4 MB - small apps
4194304 # 4 MB - small apps
8388608 # 8 MB - medium apps
8388608 # 8 MB - medium apps
16777216 # 16 MB - large programs
20971520 # 20 MB - very large programs
Changing this configuration requires a system reboot—the partitions are truly fixed at runtime.
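Since the sizes cannot change at runtime, monitoring becomes the administrator's main feedback loop. Here is a minimal reporting sketch built on the wasted_space field; report_internal_fragmentation is a hypothetical helper, assuming the system_config structure shown earlier.

```c
// Hypothetical reporting helper: total internal fragmentation across all
// occupied partitions, using the wasted_space field set at allocation time.
void report_internal_fragmentation(void) {
    uint32_t total_waste = 0;
    uint32_t total_allocated = 0;

    for (int i = 0; i < system_config.num_partitions; i++) {
        PartitionDescriptor* p = &system_config.partitions[i];
        if (p->state == PARTITION_OCCUPIED) {
            total_waste += p->wasted_space;
            total_allocated += p->size;
        }
    }

    if (total_allocated > 0) {
        log_info("Internal fragmentation: %u of %u allocated bytes (%.1f%%)",
                 total_waste, total_allocated,
                 100.0 * total_waste / total_allocated);
    }
}
```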
With unequal-size partitions, a new question emerges that didn't exist before:
When multiple free partitions can accommodate a process, which partition should we choose?
This question defines the field of partition allocation strategies. Each strategy optimizes for different goals and produces different trade-offs.
First Fit: Allocate the first partition (in address order) that is free and large enough.
```c
// First Fit: Allocate first partition large enough
int allocate_first_fit(int process_id, uint32_t required_size) {
    for (int i = 0; i < system_config.num_partitions; i++) {
        PartitionDescriptor* p = &system_config.partitions[i];

        if (p->state == PARTITION_FREE && p->size >= required_size) {
            // Found suitable partition
            p->state = PARTITION_OCCUPIED;
            p->process_id = process_id;
            p->wasted_space = p->size - required_size;
            p->allocation_time = get_current_time();

            log_info("First-fit: Process %d (%u bytes) -> Partition %d (%u bytes), waste: %u",
                     process_id, required_size, i, p->size, p->wasted_space);
            return i;
        }
    }
    return -1;  // No suitable partition found
}
```

Best Fit: Allocate the smallest partition that is free and large enough. This minimizes wasted space per allocation.
```c
// Best Fit: Allocate smallest partition large enough
int allocate_best_fit(int process_id, uint32_t required_size) {
    int best_index = -1;
    uint32_t best_waste = UINT32_MAX;

    for (int i = 0; i < system_config.num_partitions; i++) {
        PartitionDescriptor* p = &system_config.partitions[i];

        if (p->state == PARTITION_FREE && p->size >= required_size) {
            uint32_t waste = p->size - required_size;
            if (waste < best_waste) {
                best_waste = waste;
                best_index = i;
                // Optimization: perfect fit found
                if (waste == 0) break;
            }
        }
    }

    if (best_index >= 0) {
        PartitionDescriptor* p = &system_config.partitions[best_index];
        p->state = PARTITION_OCCUPIED;
        p->process_id = process_id;
        p->wasted_space = best_waste;
        p->allocation_time = get_current_time();

        log_info("Best-fit: Process %d (%u bytes) -> Partition %d (%u bytes), waste: %u",
                 process_id, required_size, best_index, p->size, best_waste);
        return best_index;
    }
    return -1;  // No suitable partition found
}
```

Worst Fit: Allocate the largest free partition. The rationale: the leftover space (internal fragmentation) might be large enough to be "useful" in some conceptual sense.
```c
// Worst Fit: Allocate largest partition available
int allocate_worst_fit(int process_id, uint32_t required_size) {
    int worst_index = -1;
    uint32_t max_size = 0;

    for (int i = 0; i < system_config.num_partitions; i++) {
        PartitionDescriptor* p = &system_config.partitions[i];

        if (p->state == PARTITION_FREE &&
            p->size >= required_size &&
            p->size > max_size) {
            max_size = p->size;
            worst_index = i;
        }
    }

    if (worst_index >= 0) {
        PartitionDescriptor* p = &system_config.partitions[worst_index];
        p->state = PARTITION_OCCUPIED;
        p->process_id = process_id;
        p->wasted_space = p->size - required_size;
        p->allocation_time = get_current_time();

        log_info("Worst-fit: Process %d (%u bytes) -> Partition %d (%u bytes), waste: %u",
                 process_id, required_size, worst_index, p->size, p->wasted_space);
        return worst_index;
    }
    return -1;  // No suitable partition found
}
```

Worst fit was designed for variable-size partitioning (where leftover space creates new holes), not fixed partitioning. In fixed partitioning, the "leftover space" is internal fragmentation that cannot be used by other processes. Worst fit in fixed partitioning typically performs poorly: it wastes large partitions on small processes that could have used small partitions.
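To make the contrast concrete: suppose the free partitions are 8 MB, 4 MB, and 16 MB in address order, and a 3 MB process arrives. First fit takes the 8 MB partition (5 MB wasted), best fit takes the 4 MB partition (1 MB wasted), and worst fit takes the 16 MB partition (13 MB wasted), even though the 4 MB partition would have been sufficient.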
When all suitable partitions are occupied, arriving processes must wait. The organization of waiting processes creates another design dimension.
All waiting processes share one global queue. When a partition becomes free, we scan the queue for a process that can use it.
```c
#include <stdlib.h>   // malloc / free for queue nodes

// Single global queue for waiting processes
typedef struct QueueNode {
    int process_id;
    uint32_t required_size;
    uint64_t arrival_time;
    struct QueueNode* next;
} QueueNode;

QueueNode* wait_queue_head = NULL;
QueueNode* wait_queue_tail = NULL;

// Add process to waiting queue
void enqueue_waiting(int process_id, uint32_t required_size) {
    QueueNode* node = malloc(sizeof(QueueNode));
    node->process_id = process_id;
    node->required_size = required_size;
    node->arrival_time = get_current_time();
    node->next = NULL;

    if (!wait_queue_tail) {
        wait_queue_head = wait_queue_tail = node;
    } else {
        wait_queue_tail->next = node;
        wait_queue_tail = node;
    }
}

// When partition i becomes free, try to allocate from queue
void on_partition_freed(int partition_index) {
    uint32_t partition_size = system_config.partitions[partition_index].size;
    QueueNode* prev = NULL;
    QueueNode* curr = wait_queue_head;

    while (curr) {
        if (curr->required_size <= partition_size) {
            // Found a process that fits
            allocate_specific_partition(partition_index, curr->process_id,
                                        curr->required_size);

            // Remove from queue
            if (prev) prev->next = curr->next;
            else wait_queue_head = curr->next;
            if (!curr->next) wait_queue_tail = prev;

            free(curr);
            return;
        }
        prev = curr;
        curr = curr->next;
    }
    // No suitable process found; partition remains free
}
```

Each partition (or partition size class) maintains its own dedicated queue. Processes are assigned to the queue for the smallest partition that can accommodate them.
```c
// Multiple queues: one per partition
QueueNode* partition_queues[MAX_PARTITIONS];

// Associate process with specific partition queue
void enqueue_for_partition(int partition_index, int process_id,
                           uint32_t required_size) {
    QueueNode* node = malloc(sizeof(QueueNode));
    node->process_id = process_id;
    node->required_size = required_size;
    node->arrival_time = get_current_time();
    node->next = NULL;

    // Append to partition's queue
    if (!partition_queues[partition_index]) {
        partition_queues[partition_index] = node;
    } else {
        QueueNode* tail = partition_queues[partition_index];
        while (tail->next) tail = tail->next;
        tail->next = node;
    }
}

// Find best partition queue for a process
int find_best_queue(uint32_t required_size) {
    int best = -1;
    uint32_t best_size = UINT32_MAX;

    for (int i = 0; i < system_config.num_partitions; i++) {
        uint32_t psize = system_config.partitions[i].size;
        if (psize >= required_size && psize < best_size) {
            best_size = psize;
            best = i;
        }
    }
    return best;  // -1 if no partition can fit
}

// When partition i frees: serve its queue
void on_partition_freed_multi_queue(int partition_index) {
    if (partition_queues[partition_index]) {
        QueueNode* node = partition_queues[partition_index];
        partition_queues[partition_index] = node->next;

        allocate_specific_partition(partition_index, node->process_id,
                                    node->required_size);
        free(node);
    }
    // If queue empty, partition remains free
}
```

Advantage: Processes always get their assigned partition size, which makes allocation predictable and waste minimal.
Disadvantage: A small process waits in the small-partition queue even while large partitions sit empty. Utilization can suffer when workload doesn't match partition distribution.
This mirrors a classic systems trade-off: optimization versus fairness. Multiple queues optimize per-size-class but may be "unfair" from a global utilization perspective.
Let's systematically compare the two fixed partitioning schemes across multiple dimensions.
| Dimension | Equal-Size | Unequal-Size |
|---|---|---|
| Implementation complexity | Very simple | Moderate |
| Configuration effort | Choose one size | Design size distribution |
| Memory utilization (homogeneous workload) | High if well-matched | Can match closely |
| Memory utilization (heterogeneous workload) | Poor—one size forces waste | Better—multiple sizes available |
| Maximum process size | Fixed at S | Varies up to largest partition |
| Allocation time | O(n) simple scan | O(n) with strategy logic |
| Deallocation time | O(1) | O(1) |
| Internal fragmentation | Uniform waste pattern | Variable, often lower average |
| External fragmentation | None | None |
| Partition count flexibility | Easy to adjust | Requires careful planning |
Consider a workload of 8 processes with the following memory requirements:
Equal-Size Approach (8 × 8 MB = 64 MB)
| Process | Size | Partition | Waste |
|---|---|---|---|
| P1 | 1 MB | 8 MB | 7 MB |
| P2 | 1 MB | 8 MB | 7 MB |
| P3 | 2 MB | 8 MB | 6 MB |
| P4 | 2 MB | 8 MB | 6 MB |
| P5 | 4 MB | 8 MB | 4 MB |
| P6 | 4 MB | 8 MB | 4 MB |
| P7 | 8 MB | 8 MB | 0 MB |
| P8 | 16 MB | Cannot fit! | — |
Total Used: 22 MB (P8 rejected)
Total Allocated: 56 MB (7 partitions × 8 MB)
Total Waste: 34 MB (about 61% of allocated memory)
Rejected: 1 process
Unequal-Size Approach (2×1, 2×2, 2×4, 1×8, 1×16 = 38 MB)
| Process | Size | Partition | Waste |
|---|---|---|---|
| P1 | 1 MB | 1 MB | 0 MB |
| P2 | 1 MB | 1 MB | 0 MB |
| P3 | 2 MB | 2 MB | 0 MB |
| P4 | 2 MB | 2 MB | 0 MB |
| P5 | 4 MB | 4 MB | 0 MB |
| P6 | 4 MB | 4 MB | 0 MB |
| P7 | 8 MB | 8 MB | 0 MB |
| P8 | 16 MB | 16 MB | 0 MB |
Total Used: 38 MB
Total Allocated: 38 MB
Total Waste: 0 MB (0%)
Rejected: 0 processes
When partition sizes match process sizes perfectly, unequal-size partitioning achieves optimal utilization. The challenge is that real workloads vary, and perfect matching is rarely possible. The administrator must make educated guesses about future workloads, and these guesses are fixed until reboot.
The most historically significant implementation of unequal-size fixed partitioning was IBM's OS/MFT (Multiprogramming with a Fixed number of Tasks), part of the OS/360 family introduced in 1966.
OS/MFT divided memory into a fixed set of regions (IBM's term for partitions). Key characteristics:
1. Operator-Configured Regions
System operators defined region sizes at system generation (SYSGEN) or via configuration statements, choosing a handful of region sizes to match the installation's expected job mix.
2. Job-to-Region Assignment
Jobs specified their region requirements in Job Control Language (JCL). The system matched jobs to appropriately sized free regions.
//MYJOB JOB (...)
//STEP1 EXEC PGM=PROCESS,REGION=256K
3. Multiple Queues
OS/MFT typically used multiple input queues, each associated with specific region sizes. Jobs waited in their designated queue until a matching region became available.
OS/MFT's limitations led IBM to develop OS/MVT (Multiprogramming with a Variable number of Tasks), which used variable-size partitioning. The evolution from MFT to MVT mirrors our curriculum progression from fixed to variable partitioning. MVT's added flexibility came with increased complexity—a trade-off that defines memory management history.
Studying OS/MFT reveals enduring principles:
1. Configuration is a Form of Programming
Partition configuration required understanding workload characteristics. Poor configuration led to poor system performance, regardless of OS code quality.
2. Trade-offs Are Fundamental
More regions meant more concurrency but a smaller maximum job size. Fewer regions meant larger jobs could run, but with less concurrency.
3. Static Decisions Have Long-Term Consequences
Once configured, the system operated with those partitions until reconfiguration. This incentivized careful upfront analysis.
4. Real-World Pressure Drives Innovation
OS/MFT's limitations in production environments drove demand for more flexible schemes, leading to MVT and eventually paging.
Designing an optimal partition configuration is a constrained optimization problem. Let's analyze the key trade-offs.
Experienced system administrators developed rules of thumb:
1. Analyze Historical Logs
Study process sizes from past workloads. Build a histogram of memory requirements.
2. Match Distribution
Configure partitions to match the frequency distribution. If 60% of jobs need ≤2 MB, allocate roughly 60% of memory to small partitions (see the sketch following the table below).
3. Ensure Large-Job Capacity
Always include at least one partition large enough for the largest expected job. The ability to run important large jobs is often critical.
4. Leave Margin
Don't size partitions exactly to current maximums. Leave 10-20% headroom for growth.
5. Monitor and Adjust
Track utilization over time. If some partitions sit idle while queues for other size classes back up, reconfigure.
| Process Size Range | % of Workload | Suggested Partition Allocation |
|---|---|---|
| 0-1 MB | 40% | 4 partitions × 1 MB |
| 1-4 MB | 35% | 3 partitions × 4 MB |
| 4-8 MB | 15% | 2 partitions × 8 MB |
| 8-16 MB | 8% | 1 partition × 16 MB |
| >16 MB | 2% | 1 partition × 32 MB |
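As a rough sketch of how such a plan might be derived programmatically, the following turns a workload histogram into a first-cut partition count per size class. The size classes, shares, target multiprogramming level, and rounding rule are all illustrative assumptions; a real plan would still have to be reconciled against the actual amount of user memory.

```c
#include <stdio.h>

// Illustrative sketch: turn a workload histogram into a first-cut partition
// plan. The classes, shares, and target concurrency are assumed values for
// demonstration only.
int main(void) {
    const struct { unsigned size_mb; double share; } classes[] = {
        { 1, 0.40}, { 4, 0.35}, { 8, 0.15}, {16, 0.08}, {32, 0.02}
    };
    const int n = sizeof(classes) / sizeof(classes[0]);
    const double target_concurrency = 10.0;  // Desired jobs resident at once
    unsigned total_mb = 0;

    for (int i = 0; i < n; i++) {
        // Expected concurrent jobs in this class, rounded; keep at least one
        // partition per class so every job size remains runnable.
        int count = (int)(classes[i].share * target_concurrency + 0.5);
        if (count == 0) count = 1;
        total_mb += (unsigned)count * classes[i].size_mb;
        printf("%2u MB partitions: %d (for %.0f%% of jobs)\n",
               classes[i].size_mb, count, classes[i].share * 100);
    }
    printf("Total memory required: %u MB\n", total_mb);
    return 0;
}
```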
No static configuration optimally serves a dynamic workload. Workloads change—by time of day, season, project phase, or business evolution. Fixed partitioning forces administrators to choose a single compromise configuration that serves the average case but may perform poorly during workload peaks or shifts. This fundamental limitation drove the development of dynamic partitioning schemes.
We have thoroughly examined unequal-size fixed partitioning, understanding how relaxing the uniformity constraint enables better workload matching at the cost of increased complexity.
Both equal-size and unequal-size fixed partitioning suffer from a common problem: internal fragmentation—wasted space inside allocated partitions. The next page examines this phenomenon in detail, providing quantitative analysis and understanding its impact on system efficiency.
You now understand both variants of fixed partitioning and can analyze their trade-offs. This knowledge prepares you to appreciate why internal fragmentation is such a significant concern, and why operating systems eventually moved to more dynamic memory management schemes.