When multiple processes compete for a finite pool of physical memory frames, the operating system faces a fundamental question: How should these precious resources be divided? This question lies at the heart of memory management, and the answer profoundly impacts system performance, fairness, and stability.
The simplest answer—and our starting point in understanding allocation strategies—is equal allocation: give every process the same number of frames, regardless of their size, importance, or actual needs. While this approach may seem naive at first glance, understanding its mechanics, tradeoffs, and limitations provides the essential foundation for comprehending more sophisticated allocation schemes.
Equal allocation embodies a democratic principle: every process is equal in the eyes of the memory manager. But as we'll discover, this egalitarian approach, while elegant in its simplicity, often fails to match the complex reality of diverse workloads with vastly different memory requirements.
By the end of this page, you will deeply understand equal allocation: its mathematical formulation, implementation mechanisms, performance characteristics, and critical limitations. You will be able to analyze when equal allocation is appropriate, calculate frame distributions for any system configuration, and articulate precisely why more sophisticated strategies are often necessary.
Equal allocation is conceptually straightforward: if the system has m frames available for user processes and there are n active processes, each process receives ⌊m/n⌋ frames (the floor of m divided by n). Any remaining frames—typically fewer than n—are either kept in a free pool or distributed among processes using a secondary criterion.
This approach treats memory allocation like dividing a pie among diners: everyone gets an equal slice, regardless of appetite. The mathematical elegance of this formulation masks significant practical complexities that emerge when we consider real-world workloads.
For a system with m available frames and n processes:
Frames per process = ⌊m/n⌋
Leftover frames = m mod n
Example: With 93 frames and 5 processes, each process gets ⌊93/5⌋ = 18 frames, with 93 mod 5 = 3 frames remaining in the free pool.
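As a quick sanity check, the arithmetic can be expressed in a few lines of C. The figures mirror the example above (93 frames, 5 processes); the helper name `equal_split` is purely illustrative:

```c
#include <stdio.h>

// Illustrative helper: compute the equal-allocation split.
static void equal_split(int m, int n, int *per_process, int *leftover) {
    *per_process = m / n;   // floor(m / n) for non-negative integers
    *leftover    = m % n;   // m mod n
}

int main(void) {
    int per, left;
    equal_split(93, 5, &per, &left);
    printf("93 frames, 5 processes: %d each, %d left over\n", per, left);  // 18 each, 3 left
    return 0;
}
```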
The Implicit Assumptions
Equal allocation rests on several assumptions that are worth making explicit, as violations of these assumptions reveal the strategy's limitations:
Assumption 1: Homogeneous Memory Requirements — All processes need approximately the same amount of memory. In practice, this is almost never true. A small utility process might need 50KB while a database server needs 8GB.
Assumption 2: Equal Importance — All processes deserve equal treatment. In hierarchical systems with users, services, and critical infrastructure, this assumption often conflicts with organizational priorities.
Assumption 3: Static Process Count — The number of processes remains relatively stable. Frequent process creation and termination can cause allocation fluctuations that impact running processes.
Assumption 4: Adequate Total Memory — The total available memory is sufficient to give each process a viable allocation. When m/n falls below a process's minimum requirement, equal allocation fails entirely.
The table below illustrates how the per-process quota shrinks as the process count grows, assuming m = 100 available frames:

| Processes (n) | Frames per Process (⌊100/n⌋) | Leftover Frames | Notes |
|---|---|---|---|
| 2 | 50 | 0 | Generous allocation, likely wasteful |
| 5 | 20 | 0 | Reasonable for moderate-sized processes |
| 10 | 10 | 0 | Tight but workable for small processes |
| 20 | 5 | 0 | Marginal; may trigger thrashing |
| 33 | 3 | 1 | Below minimum for many processes |
| 50 | 2 | 0 | Critical: most processes will thrash |
| 100 | 1 | 0 | Failure: no process can function |
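The table's figures can be reproduced with a short loop; the 3-frame architectural minimum used to flag failing rows is an assumption for illustration:

```c
#include <stdio.h>

int main(void) {
    const int m = 100;          // available frames (matches the table above)
    const int min_frames = 3;   // assumed architectural minimum
    const int counts[] = {2, 5, 10, 20, 33, 50, 100};

    for (int i = 0; i < (int)(sizeof counts / sizeof counts[0]); i++) {
        int n = counts[i];
        int per = m / n, left = m % n;
        printf("n=%3d: %2d frames each, %d leftover%s\n",
               n, per, left, per < min_frames ? "  <-- below minimum" : "");
    }
    return 0;
}
```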
A critical constraint on all allocation strategies—but one that equal allocation often violates—is the minimum frame requirement. Every process needs a certain minimum number of frames to execute even a single instruction. This minimum is architecturally determined and varies based on the instruction set architecture (ISA), addressing modes, and memory reference patterns.
Why Minimums Exist
Consider what happens when a single instruction executes. On many architectures, even a simple instruction like MOV [dest], [source] might require:

- A frame for the page containing the instruction itself (two if the instruction spans a page boundary)
- A frame for the page containing the source operand
- A frame for the page containing the destination operand
If indirect addressing is involved, additional frames may be needed for pointer resolution. Some architectures with complex addressing modes (like the PDP-11's multiple levels of indirection) can require many frames for a single instruction.
```asm
; Example: Instruction with multiple memory references
; Consider: MOV EAX, [EBX + ECX*4 + displacement]
;           MOV [EDI + ESI*2 + displacement], EAX
;
; Worst case memory references:
; 1. Instruction fetch (1-2 pages if instruction spans page boundary)
; 2. Source operand calculation and fetch (1-2 pages)
; 3. Destination operand calculation and store (1-2 pages)
;
; Minimum frames needed: 3-6 frames depending on alignment
;
; For instructions with indirect addressing:
; MOV EAX, [[ptr]]    ; Double indirection
; 1. Instruction page
; 2. First pointer page
; 3. Second pointer page
; 4. Final data page
; Minimum frames: 4
;
; x86-64 with RIP-relative addressing reduces some requirements,
; but complex instructions still need multiple frames
```

Different architectures have different minimum frame requirements:
- IBM 370: 6 frames (MVC instruction can span 2 pages each for source, destination, and instruction)
- PDP-11: up to 7 frames (due to multi-level indirect addressing)
- x86: 3-6 frames (depending on instruction complexity)
- RISC architectures: typically 2-3 frames (simpler addressing modes)
Equal allocation must never allocate fewer frames than the architectural minimum, or processes will fail immediately.
The Violation Scenario
Equal allocation can violate minimum frame requirements when:
Scenario 1: Too Many Processes — If n processes compete for m frames and m/n < minimum, equal allocation fails. With 50 frames, a minimum of 3, and 20 processes, equal allocation gives 2 frames per process—below the minimum.
Scenario 2: Dynamic Process Creation — A system running 10 processes with 100 frames (10 each) might seem healthy, but if 30 more processes start, allocation drops to 2.5 → 2 frames per process, potentially below the minimum.
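Both scenarios reduce to the same constraint: n must not exceed ⌊m / minimum⌋. A minimal sketch of that bound, using the frame counts from the two scenarios above and an assumed minimum of 3 frames:

```c
#include <stdio.h>

// Maximum process count equal allocation can support without
// dropping any process below the architectural minimum.
static int max_supportable_processes(int available_frames, int min_frames) {
    return available_frames / min_frames;
}

int main(void) {
    // Scenario 1: 50 frames, minimum of 3 -> at most 16 processes (20 is too many).
    printf("50 frames, min 3: at most %d processes\n", max_supportable_processes(50, 3));
    // Scenario 2: 100 frames, minimum of 3 -> 40 processes exceeds the bound of 33.
    printf("100 frames, min 3: at most %d processes\n", max_supportable_processes(100, 3));
    return 0;
}
```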
Consequences of Violation:

- The process cannot complete even a single instruction: every attempt faults on a page for which no frame is available.
- Pages are evicted and immediately re-faulted, so the process makes no forward progress and generates a continuous stream of page faults.
- The only practical remedies are suspending (swapping out) some processes or refusing to admit new ones.

Representative architectural minimums:
| Architecture | Minimum Frames | Determining Factor |
|---|---|---|
| IBM System/370 | 6 | MVC instruction spans multiple pages |
| PDP-11 | 6-7 | Multi-level indirect addressing |
| VAX | 6 | Complex memory-to-memory operations |
| Intel x86 | 3-6 | Segment + page + data references |
| ARM | 2-3 | Load/store architecture, simpler addressing |
| MIPS | 2 | RISC design, fixed instruction format |
| RISC-V | 2 | Clean RISC design |
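In a portable kernel, these minimums typically become compile-time or boot-time constants. A hypothetical lookup table in C (the names and the conservative fallback are assumptions; the per-architecture values mirror the table above, taking the upper end of each range):

```c
#include <string.h>

// Hypothetical per-architecture minimum-frame table.
typedef struct {
    const char *arch;
    int min_frames;   // worst-case frames needed to complete one instruction
} ArchMinFrames;

static const ArchMinFrames arch_minimums[] = {
    { "IBM System/370", 6 },
    { "PDP-11",         7 },  // upper end of the 6-7 range
    { "VAX",            6 },
    { "x86",            6 },  // upper end of the 3-6 range
    { "ARM",            3 },
    { "MIPS",           2 },
    { "RISC-V",         2 },
};

// Return the minimum for a named architecture, or a conservative default.
static int min_frames_for(const char *arch) {
    for (size_t i = 0; i < sizeof arch_minimums / sizeof arch_minimums[0]; i++) {
        if (strcmp(arch_minimums[i].arch, arch) == 0)
            return arch_minimums[i].min_frames;
    }
    return 8;  // assumed fallback for unknown architectures
}
```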
Implementing equal allocation requires careful bookkeeping and efficient frame management. While the concept is simple, robust implementation must handle edge cases, maintain consistency, and integrate with the broader memory management subsystem.
Core Data Structures
Equal allocation typically relies on these data structures:
```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define MAX_PROCESSES 256
#define TOTAL_FRAMES 1024
#define MIN_FRAMES_PER_PROCESS 3   // Architecture-dependent

// Frame pool management
typedef struct {
    int total_frames;      // Total physical frames available
    int free_frames;       // Currently unallocated frames
    int reserved_frames;   // Frames reserved for kernel/buffers
    int* frame_bitmap;     // 1 = allocated, 0 = free
} FramePool;

// Per-process allocation tracking
typedef struct {
    int pid;
    int allocated_frames;   // Current frame count
    int target_allocation;  // Equal allocation target
    bool active;
} ProcessAllocation;

// Global allocation state
typedef struct {
    FramePool frame_pool;
    ProcessAllocation processes[MAX_PROCESSES];
    int active_process_count;
    int frames_per_process;  // Current equal allocation quota
    int leftover_frames;     // Frames not evenly divisible
} AllocationState;

// Frame-transfer helpers; the page-out/page-in mechanics live elsewhere
// in the memory manager, so only the declarations appear here.
void reclaim_frames(ProcessAllocation* proc, int count, FramePool* pool);
int  grant_frames(ProcessAllocation* proc, int count, FramePool* pool);
void distribute_leftovers(AllocationState* state);
void rebalance_allocations(AllocationState* state);

// Calculate equal allocation for all processes
void calculate_equal_allocation(AllocationState* state) {
    if (state->active_process_count == 0) {
        state->frames_per_process = 0;
        state->leftover_frames = 0;
        return;
    }

    int available = state->frame_pool.total_frames - state->frame_pool.reserved_frames;
    state->frames_per_process = available / state->active_process_count;
    state->leftover_frames = available % state->active_process_count;

    // Enforce minimum frame constraint
    if (state->frames_per_process < MIN_FRAMES_PER_PROCESS) {
        fprintf(stderr, "WARNING: Equal allocation (%d frames) below minimum (%d)\n",
                state->frames_per_process, MIN_FRAMES_PER_PROCESS);

        // Calculate maximum processes supportable
        int max_processes = available / MIN_FRAMES_PER_PROCESS;
        fprintf(stderr, "System can support at most %d processes with current memory\n",
                max_processes);
    }
}

// Allocate frames to a new process
int allocate_to_new_process(AllocationState* state, int pid) {
    // Find empty slot
    int slot = -1;
    for (int i = 0; i < MAX_PROCESSES; i++) {
        if (!state->processes[i].active) {
            slot = i;
            break;
        }
    }

    if (slot == -1) {
        return -1;  // No process slots available
    }

    // Register process
    state->processes[slot].pid = pid;
    state->processes[slot].active = true;
    state->processes[slot].allocated_frames = 0;
    state->active_process_count++;

    // Recalculate equal allocation for ALL processes
    calculate_equal_allocation(state);

    // Rebalance: may need to reclaim frames from other processes
    rebalance_allocations(state);

    return state->frames_per_process;
}

// Rebalance frames after process creation/termination
void rebalance_allocations(AllocationState* state) {
    int target = state->frames_per_process;

    // First pass: reclaim excess frames from over-allocated processes
    for (int i = 0; i < MAX_PROCESSES; i++) {
        if (!state->processes[i].active) continue;

        int excess = state->processes[i].allocated_frames - target;
        if (excess > 0) {
            // Reclaim excess frames (triggers page-out if needed)
            reclaim_frames(&state->processes[i], excess, &state->frame_pool);
        }
    }

    // Second pass: allocate to under-allocated processes
    for (int i = 0; i < MAX_PROCESSES; i++) {
        if (!state->processes[i].active) continue;

        int deficit = target - state->processes[i].allocated_frames;
        if (deficit > 0) {
            // Allocate additional frames
            grant_frames(&state->processes[i], deficit, &state->frame_pool);
        }
    }

    // Distribute leftover frames (optional: could use round-robin)
    distribute_leftovers(state);
}
```

Handling Leftover Frames
When m frames don't divide evenly among n processes, several strategies can distribute the remainder:
Strategy 1: Free Pool Reserve — Keep leftover frames in the free pool, available for demand paging spikes. Simple but wastes memory.
Strategy 2: Round-Robin Distribution — Distribute one extra frame to the first (m mod n) processes. Fair, but requires tracking which processes received extras; a short sketch of this strategy appears after the note below.
Strategy 3: Priority-Based Distribution — Give extras to higher-priority processes. Introduces priority considerations into an otherwise egalitarian scheme.
Strategy 4: First-Fault Allocation — Hold leftovers until page faults occur, then allocate to faulting processes. Adaptive but adds complexity.
Most production systems using equal allocation keep leftover frames in a free pool rather than distributing them. This provides a buffer for page fault handling and simplifies the implementation. The marginal efficiency loss (at most n-1 frames) is usually acceptable.
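For completeness, here is a minimal sketch of Strategy 2, round-robin distribution: the first m mod n processes each receive one extra frame on top of the ⌊m/n⌋ quota. The flat array used here is a simplification of the per-process structures shown earlier:

```c
#include <stdio.h>

#define NPROC 5

int main(void) {
    const int m = 93;                 // available frames (same example as above)
    int allocation[NPROC];

    int base   = m / NPROC;           // floor(m / n) = 18
    int extras = m % NPROC;           // m mod n = 3 leftover frames

    // The first `extras` processes get one additional frame; the rest get the base quota.
    for (int i = 0; i < NPROC; i++)
        allocation[i] = base + (i < extras ? 1 : 0);

    for (int i = 0; i < NPROC; i++)
        printf("P%d: %d frames\n", i + 1, allocation[i]);   // 19, 19, 19, 18, 18
    return 0;
}
```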
In a real operating system, the process count changes continuously. Processes are created, terminate, get suspended, and resume. Each change potentially affects the equal allocation for all processes. Managing these transitions gracefully is crucial for system stability.
Process Creation Scenario
When a new process starts, equal allocation must:

- Register the new process and increment the active process count n.
- Recalculate the per-process quota ⌊m/n⌋, which shrinks for every existing process.
- Reclaim frames from processes that now exceed the new quota, forcing page-outs.
- Grant the reclaimed frames to the new process as it faults in its initial working set.
Process Termination Scenario
When a process terminates, equal allocation can distribute the freed frames:

- Return the terminated process's frames to the free pool.
- Recalculate the quota with the smaller n; every surviving process's target rises.
- Grant the additional frames to the remaining processes, ideally gradually as they fault rather than all at once.
The Thrashing Risk During Transitions
The period immediately after a new process starts is particularly dangerous. Existing processes have just lost frames and will immediately begin page faulting as they try to access pages that were evicted. The new process also starts with page faults as it loads its working set. This creates a page fault storm that can temporarily spike system load significantly.
Mitigation strategies include:

- Gradual rebalancing: reclaim only a few frames from each process per interval instead of all at once.
- Admission control: refuse to start a new process if the resulting quota would fall below the architectural minimum or force excessive reclamation.
- Free-pool buffering: keep the leftover frames unallocated so they can absorb the initial burst of faults from a new process.

The sketch below illustrates the first two of these.
```c
#define REBALANCE_RATE 5            // Max frames to reclaim per interval
#define REBALANCE_INTERVAL_MS 100   // Assumed pacing interval (not specified elsewhere)

// Assumed: ProcessAllocation has been extended with a `ProcessBalance balance` member,
// and schedule_rebalance_timer() is provided elsewhere in the memory manager.
void schedule_rebalance_timer(int ms);

typedef struct {
    int pid;
    int current_allocation;
    int target_allocation;
    bool pending_adjustment;
} ProcessBalance;

// Gradual rebalancing to prevent page fault storms
void gradual_rebalance(AllocationState* state) {
    bool any_pending = false;

    for (int i = 0; i < MAX_PROCESSES; i++) {
        if (!state->processes[i].active) continue;

        ProcessBalance* pb = &state->processes[i].balance;
        int diff = pb->current_allocation - pb->target_allocation;

        if (diff > 0) {
            // Over-allocated: reclaim gradually
            int to_reclaim = (diff > REBALANCE_RATE) ? REBALANCE_RATE : diff;
            reclaim_frames(&state->processes[i], to_reclaim, &state->frame_pool);
            pb->current_allocation -= to_reclaim;
            pb->pending_adjustment = (diff > REBALANCE_RATE);
            any_pending |= pb->pending_adjustment;
        } else if (diff < 0) {
            // Under-allocated: grant if frames available
            int deficit = -diff;
            int to_grant = (deficit > REBALANCE_RATE) ? REBALANCE_RATE : deficit;
            int granted = grant_frames(&state->processes[i], to_grant, &state->frame_pool);
            pb->current_allocation += granted;
            pb->pending_adjustment = (deficit > REBALANCE_RATE);
            any_pending |= pb->pending_adjustment;
        }
    }

    if (any_pending) {
        // Schedule another rebalance pass
        schedule_rebalance_timer(REBALANCE_INTERVAL_MS);
    }
}

// Admission control: check if new process can be accommodated
bool can_admit_process(AllocationState* state) {
    int available = state->frame_pool.total_frames - state->frame_pool.reserved_frames;
    int new_count = state->active_process_count + 1;
    int new_quota = available / new_count;

    // Check 1: Would violate minimum frame constraint?
    if (new_quota < MIN_FRAMES_PER_PROCESS) {
        return false;
    }

    // Check 2: Would cause excessive frame reclamation?
    int frames_to_reclaim = 0;
    for (int i = 0; i < MAX_PROCESSES; i++) {
        if (!state->processes[i].active) continue;
        int excess = state->processes[i].allocated_frames - new_quota;
        if (excess > 0) frames_to_reclaim += excess;
    }

    // Threshold: don't admit if reclaiming more than 20% of frames
    if (frames_to_reclaim > available * 0.2) {
        return false;  // System under memory pressure
    }

    return true;
}
```

Despite its limitations, equal allocation has genuine advantages that make it appropriate for certain scenarios. Understanding when to use equal allocation—and when to avoid it—is essential for system designers.
Key Advantages

- Simplicity: one division and one modulo determine every allocation; the policy is trivial to implement and reason about.
- Fairness: no process can monopolize memory, and no process is starved relative to its peers.
- Minimal overhead: no per-process size, priority, or working-set tracking is required.
- Predictability: the allocation is fully determined by the process count, which simplifies capacity planning and real-time reasoning.
Appropriate Use Cases
Equal allocation works well in specific scenarios:
| Scenario | Why Equal Allocation Works | Example |
|---|---|---|
| Homogeneous Workloads | All processes have similar memory needs | Batch processing of similar jobs |
| Educational/Teaching Systems | Simplicity aids understanding; fairness prevents grade disputes | University computer labs |
| Dedicated Single-Application Servers | All instances of same application have same requirements | Web server farm with identical worker processes |
| Resource-Constrained Embedded Systems | Predictable allocation simplifies real-time guarantees | Industrial controllers |
| Prototype/Development Systems | Quick implementation; behavior easy to analyze | Early-stage OS development |
Early time-sharing systems like CTSS (Compatible Time-Sharing System) and Multics used variants of equal allocation. With limited memory and a focus on fair multi-user access, the simplicity and fairness of equal allocation outweighed its inefficiencies. As systems grew more sophisticated and workloads more diverse, proportional and dynamic allocation schemes became necessary.
Equal allocation's simplicity comes at a significant cost. In heterogeneous environments—which describe nearly all modern systems—equal allocation exhibits serious pathologies that can cripple system performance.
Quantifying the Waste: A Detailed Example
Consider a system with 1000 frames and these five processes:
| Process | Virtual Size | Working Set | Equal Alloc (200 frames) |
|---|---|---|---|
| P1 (Editor) | 50 pages | 30 pages | 200 frames (170 wasted) |
| P2 (Compiler) | 800 pages | 400 pages | 200 frames (THRASHING) |
| P3 (DB Server) | 2000 pages | 600 pages | 200 frames (SEVERE THRASHING) |
| P4 (Background) | 20 pages | 10 pages | 200 frames (190 wasted) |
| P5 (Web Server) | 300 pages | 150 pages | 200 frames (THRASHING) |
With equal allocation:

- P1 and P4 together waste 360 frames (170 + 190) that their working sets will never touch.
- P2, P3, and P5 receive far less than their working sets (400, 600, and 150 pages respectively) and thrash.
- Roughly a third of physical memory sits idle while three of the five processes are starved.
With proportional allocation based on working set:

- Frames flow toward P2, P3, and P5, the processes that actually need them, while P1 and P4 receive only enough to cover their small working sets.
- The 360 frames wasted under equal allocation become available to the memory-hungry processes, sharply reducing page-fault pressure (though with 1,190 total working-set pages and 1,000 frames, some pressure remains).
Equal allocation creates a perverse incentive: running more processes makes every process slower, regardless of total memory demand. A user might have plenty of aggregate memory for their workload, but equal allocation prevents efficient distribution. This is particularly damaging in modern multi-process architectures (e.g., microservices, browser tabs, GUI applications with helper processes).
To fully appreciate equal allocation's tradeoffs, we must compare it against alternative strategies. This comparison also previews the more sophisticated approaches we'll explore in subsequent pages.
| Criterion | Equal Allocation | Proportional | Priority-Based | Dynamic (Working Set) |
|---|---|---|---|---|
| Complexity | Very Low | Low | Medium | High |
| Fairness | Process-level equal | Size-proportional | Priority-weighted | Need-based |
| Adaptability | None | Static sizing only | Static priority only | Continuous adaptation |
| Overhead | Minimal | Low (process size tracking) | Low (priority tracking) | High (working set tracking) |
| Heterogeneous Workloads | Poor | Good | Good (if priorities set correctly) | Excellent |
| Thrashing Prevention | None | Limited | Limited | Strong |
| Implementation Effort | Trivial | Easy | Easy | Significant |
When to Choose Equal Allocation
Choose equal allocation when:

- Workloads are homogeneous, so equal shares closely approximate actual need.
- Simplicity and predictability matter more than utilization (prototypes, teaching systems, dedicated single-application servers, some embedded controllers).
- The per-process quota ⌊m/n⌋ comfortably exceeds the architectural minimum under all expected process counts.
When to Avoid Equal Allocation
Avoid equal allocation when:

- Processes differ widely in size or working set, which is the normal case on general-purpose systems.
- The process count is large or highly dynamic, pushing the quota toward or below the architectural minimum.
- Some processes are more important than others and should not be slowed to preserve strict equality.
No major production operating system uses pure equal allocation today. Linux uses a complex combination of proportional allocation, working set tracking, and priority-based adjustments. Windows uses working set management with trimming. Even embedded RTOSes typically use fixed per-process allocations (predetermined, not equal). Equal allocation remains valuable as a conceptual baseline and for understanding why more sophisticated approaches are necessary.
We have conducted an exhaustive examination of equal allocation, the foundational frame allocation strategy. Let's consolidate the critical insights:

- Equal allocation gives each of n processes ⌊m/n⌋ frames; the m mod n leftover frames typically stay in the free pool.
- It is trivially simple, fair at the process level, and nearly free of bookkeeping overhead.
- It must never drop a process below the architectural minimum frame count, yet it does exactly that whenever n grows beyond ⌊m/minimum⌋.
- In heterogeneous workloads it simultaneously wastes frames on small processes and starves large ones into thrashing.
- Its main value today is as a conceptual baseline against which proportional, priority-based, and working-set strategies are judged.
Looking Ahead
Equal allocation's fundamental insight—that frames must be explicitly distributed among processes—remains valid. Its limitation—treating all processes identically—motivates the strategies we'll explore next.
In the following page, we examine proportional allocation, which addresses equal allocation's size blindness by distributing frames in proportion to process sizes. This seemingly simple adjustment dramatically improves utilization in heterogeneous workloads while maintaining implementation simplicity.
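As a brief preview, and only as a sketch: under proportional allocation a process of size s_i receives roughly a_i = (s_i / S) * m frames, where S is the total size of all processes. The snippet below applies that formula to the virtual sizes from the five-process example earlier on this page; rounding, minimum-frame enforcement, and leftover handling are deferred to the next page.

```c
#include <stdio.h>

int main(void) {
    // Virtual sizes (in pages) of the five processes from the earlier example.
    const long sizes[] = {50, 800, 2000, 20, 300};
    const int  nproc   = 5;
    const long m       = 1000;   // total frames to distribute

    long S = 0;
    for (int i = 0; i < nproc; i++) S += sizes[i];   // S = 3170

    // a_i = floor((s_i * m) / S): larger processes receive proportionally more frames.
    for (int i = 0; i < nproc; i++) {
        long a = (sizes[i] * m) / S;
        printf("P%d: size %4ld pages -> %3ld frames\n", i + 1, sizes[i], a);
    }
    return 0;
}
```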
You now possess a complete understanding of equal allocation: its mathematical formulation, implementation requirements, advantages, limitations, and appropriate use cases. This foundation enables you to appreciate why more sophisticated allocation strategies are necessary and how they improve upon equal allocation's baseline approach.