For decades, operating system designers wrestled with a seemingly intractable problem: external fragmentation. As processes came and went, memory became riddled with scattered free regions—holes too small to be useful. Theoretical analysis showed that up to one-third of memory could be lost to this phenomenon. Compaction was expensive, and the problem always returned.
Then came paging, and external fragmentation simply... vanished. Not reduced. Not mitigated. Eliminated entirely. This wasn't a clever optimization—it was a fundamental shift in how memory allocation works. Understanding why paging eliminates external fragmentation provides deep insight into the elegance of this design.
By the end of this page, you will understand the precise definition of external fragmentation, prove rigorously why paging eliminates it, contrast this with the fragmentation behavior of contiguous allocation, and appreciate the system-level implications of this fundamental property.
To understand why external fragmentation disappears with paging, we must first define it precisely.
External Fragmentation Defined:
External fragmentation occurs when the total amount of free memory is sufficient to satisfy a memory request, but no single contiguous block of free memory is large enough.
Key Components:
- Sufficient total free memory: summed across all holes, enough memory exists to satisfy the request.
- No adequate contiguous block: the free memory is scattered into holes, none of which is individually large enough.
- The request fails even though, by the raw numbers, it "should" succeed.
Quantifying External Fragmentation:
External Fragmentation = (Total free memory - Largest contiguous free block) / Total free memory
If you have 100 KB free but the largest block is 20 KB, external fragmentation is (100 - 20) / 100 = 80%: four fifths of the free memory cannot satisfy any request larger than 20 KB.
| Scenario | Total Free | Largest Block | Fragmentation % | Status |
|---|---|---|---|---|
| One big hole | 64 KB | 64 KB | 0% | No fragmentation |
| Two equal holes | 64 KB | 32 KB | 50% | Moderate fragmentation |
| Many small holes | 64 KB | 8 KB | 87.5% | Severe fragmentation |
| Many tiny holes | 64 KB | 1 KB | 98.4% | Critical fragmentation |
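To make the formula concrete, here is a minimal Python sketch (the function name external_fragmentation is illustrative, not from any library) that applies it to the scenarios in the table above:

```python
def external_fragmentation(holes_kb):
    """Fraction of free memory stranded outside the largest contiguous hole."""
    total_free = sum(holes_kb)
    largest = max(holes_kb)
    return (total_free - largest) / total_free

# Scenarios from the table above (all with 64 KB free in total).
scenarios = {
    "one big hole":     [64],
    "two equal holes":  [32, 32],
    "many small holes": [8] * 8,
    "many tiny holes":  [1] * 64,
}

for name, holes in scenarios.items():
    print(f"{name:16s} -> {external_fragmentation(holes):.1%}")
# one big hole     -> 0.0%
# two equal holes  -> 50.0%
# many small holes -> 87.5%
# many tiny holes  -> 98.4%
```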
The most frustrating aspect of external fragmentation is that the memory exists—you can see it in free memory statistics—but you can't use it. It's like having $100 in your wallet as 10,000 pennies: nominally sufficient for a $50 purchase, but practically useless.
Under contiguous allocation, external fragmentation follows predictable patterns. The 50% rule (also known as the Knuth 50% rule) provides a theoretical framework for understanding these patterns.
The 50% Rule Statement:
In a steady-state system using contiguous allocation with first-fit strategy, the number of holes tends toward half the number of allocated blocks.
Derivation Intuition:
Consider what happens as blocks are allocated and freed. Freeing a block either merges its space with a neighboring hole or creates a new hole; each allocation either consumes a hole exactly or carves it down, leaving a smaller one behind. Balancing hole creation against hole consumption, Knuth showed that in equilibrium the number of holes settles at roughly half the number of allocated blocks: with N allocated blocks, expect about N/2 holes.
Memory Waste Calculation:
If there are N blocks and N/2 holes, with holes on average about the same size as allocated blocks:
Total memory = N blocks + N/2 holes
Allocated = N blocks
Free = N/2 holes
Fraction wasted = (N/2 holes) / (N + N/2) = (N/2) / (3N/2) = 1/3
Approximately 33% of memory is in holes!
Real-World Implications:
| Physical Memory | Theoretical Waste (33%) | Effective Usable |
|---|---|---|
| 4 GB | 1.3 GB | 2.7 GB |
| 16 GB | 5.3 GB | 10.7 GB |
| 64 GB | 21 GB | 43 GB |
| 256 GB | 85 GB | 171 GB |
Why This Matters:
Before paging, this was the everyday reality of contiguous allocation. Scaled to modern sizes, a 64 GB server could fail to allocate a 40 GB process despite reporting well over 40 GB free.
The 50% rule has been validated through simulation and real-world observation. While exact fragmentation levels vary with workload characteristics (allocation sizes, lifetimes, arrival rates), the general pattern of significant memory loss to external fragmentation is consistent across contiguous allocation systems.
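As a rough illustration of how such validation works, the following Python sketch simulates a first-fit contiguous allocator under a random allocate/free workload (the 4 MiB memory size, request sizes, and lifetimes are arbitrary assumptions, not from the text). The exact figures vary by workload, but it shows holes accumulating alongside allocated blocks and a substantial slice of memory stranded in them:

```python
import random

random.seed(0)
MEM = 4 * 1024 * 1024                 # 4 MiB of simulated memory

# Each segment is [start, size, owner]; owner None marks a hole.
segments = [[0, MEM, None]]

def first_fit(size, owner):
    """Place `size` bytes in the first hole that fits; return True on success."""
    for i, (start, seg_size, seg_owner) in enumerate(segments):
        if seg_owner is None and seg_size >= size:
            segments[i] = [start, size, owner]
            if seg_size > size:       # remainder becomes a new, smaller hole
                segments.insert(i + 1, [start + size, seg_size - size, None])
            return True
    return False

def release(owner):
    """Free the block owned by `owner` and coalesce adjacent holes."""
    for seg in segments:
        if seg[2] == owner:
            seg[2] = None
    i = 0
    while i < len(segments) - 1:
        if segments[i][2] is None and segments[i + 1][2] is None:
            segments[i][1] += segments[i + 1][1]
            del segments[i + 1]
        else:
            i += 1

live = []
for t in range(100_000):
    if live and random.random() < 0.5:            # free a random live block
        release(live.pop(random.randrange(len(live))))
    else:                                         # request a block of 4-64 KB
        if first_fit(random.randint(4, 64) * 1024, t):
            live.append(t)

blocks = sum(1 for s in segments if s[2] is not None)
holes = len(segments) - blocks
free_frac = sum(s[1] for s in segments if s[2] is None) / MEM
print(f"blocks={blocks} holes={holes} "
      f"holes/blocks={holes / max(blocks, 1):.2f} fraction in holes={free_frac:.0%}")
```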
Paging eliminates external fragmentation through a single, powerful property: uniform allocation unit size. Let's prove this rigorously.
The Fundamental Property:
In paging:
- Physical memory is divided into frames of a single fixed size S.
- Every process's logical memory is divided into pages of that same size S.
- Allocation always happens in whole frames, and any page can be placed in any free frame.
Proof of No External Fragmentation:
Theorem: Paging produces zero external fragmentation.
Proof:
1. Let S be the page/frame size
2. Let F be the number of free frames
3. Total free memory = F × S
For any allocation request:
4. Request for N pages requires exactly N frames
5. Each frame is the same size S
6. Any N free frames can satisfy the request
Condition for external fragmentation:
7. External fragmentation occurs when free memory exists but
is in pieces too small to use.
8. In paging, the smallest 'piece' of free memory is one frame.
9. One frame (size S) can hold exactly one page (size S).
10. Therefore, every free frame is usable.
11. If F ≥ N, the request can always be satisfied.
12. External fragmentation = 0. QED.
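The proof's condition can also be checked mechanically. This small Python property test (frame counts and names are illustrative) scatters free frames at random and confirms that a request for N pages can always be placed, with a valid page table, whenever N ≤ F, no matter where the free frames happen to be:

```python
import random

random.seed(1)
TOTAL_FRAMES = 256
FRAME_KB = 4

for trial in range(1_000):
    # An arbitrary, scattered set of F free frames.
    num_free = random.randint(0, TOTAL_FRAMES)
    free_frames = set(random.sample(range(TOTAL_FRAMES), num_free))

    # A request for N pages.
    n_pages = random.randint(1, TOTAL_FRAMES)

    if num_free >= n_pages:
        # Steps 4-6: take *any* N free frames and build a page table.
        chosen = random.sample(sorted(free_frames), n_pages)
        page_table = {page: frame for page, frame in enumerate(chosen)}
        assert len(set(page_table.values())) == n_pages            # distinct frames
        assert all(f in free_frames for f in page_table.values())  # all were free
    else:
        # Fewer free frames than pages: genuinely out of memory,
        # not a fragmentation failure, since F*S < N*S.
        assert num_free * FRAME_KB < n_pages * FRAME_KB

print("Every request with F >= N was placed successfully.")
```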
The Key Insight:
External fragmentation is fundamentally a shape-matching problem. In contiguous allocation, you need to find a hole that matches the shape (size) of the allocation. Different allocations have different sizes, creating the matching problem.
In paging, all allocations are decomposed into identical-sized pages, and all free space is decomposed into identical-sized frames. Every frame has the exact shape needed for every page. The matching problem disappears because there's only one 'shape'—and everything matches it.
Contiguous Allocation:                     Paging:

Request: [=====]  (5 units)                Request: [=][=][=][=][=]  (5 pages)

Free:                                      Free frames (scattered):
[==]  [=]  [===]  [=]                      [F1] [F3] [F4] [F6] [F8]

Problem: 7 units free, but no single       Solution: any 5 free frames work!
hole is 5 units long.                      F1, F3, F4, F6, F8 → pages placed, done.
The uniformity of page and frame sizes is not merely convenient—it's mathematically sufficient to eliminate external fragmentation entirely. This is not an approximation or typical case; it's a proven guarantee that applies to every allocation under all circumstances.
Let's visualize how paging handles the exact scenario that causes external fragmentation in contiguous systems.
Contiguous Allocation Scenario:
Initial State: 128 KB memory
┌────────────────────────────────────────────────────────────────┐
│ All Free (128 KB) │
└────────────────────────────────────────────────────────────────┘
After loading processes A, B, C, D, E:
┌──────────┬──────────┬──────────┬──────────┬──────────┬─────────┐
│ A (20KB)│ B (16KB)│ C (28KB)│ D (24KB)│ E (16KB)│ Free │
│ │ │ │ │ │ (24KB) │
└──────────┴──────────┴──────────┴──────────┴──────────┴─────────┘
After B and D terminate:
┌──────────┬──────────┬──────────┬──────────┬──────────┬─────────┐
│ A (20KB)│ HOLE(16KB│ C (28KB)│HOLE(24KB)│ E (16KB)│ Free │
│ │ FREE) │ │ (FREE) │ │ (24KB) │
└──────────┴──────────┴──────────┴──────────┴──────────┴─────────┘
New process F needs 40 KB:
- Total free: 16 + 24 + 24 = 64 KB ✓
- Largest contiguous block: 24 KB ✗
- F CANNOT BE LOADED despite ample free memory!
Same Scenario with Paging (4KB pages):
After loading processes A, B, C, D, E:
Frame: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16...31
Used: [A][A][A][A][A][B][B][B][B][C][C ][C ][C ][C ][C ][C ][D][...]
└─ A: 5 pages ─┘└ B: 4 ─┘ └── C: 7 pages ──┘ └ D: 6pages
After B and D terminate:
Frame: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19...
[A][A][A][A][A][F][F][F][F][C][C ][C ][C ][C ][C ][C ][F ][F][F][F]...
└─ FREE ─┘ └── FREE ──┘
New process F needs 40 KB = 10 pages:
- Free frames: 5, 6, 7, 8, 16, 17, 18, 19, 20, 21, 26, 27, 28, 29, 30, 31 (16 frames = 64 KB)
- Need 10 frames
- Grab frames 5, 6, 7, 8, 16, 17, 18, 19, 20, 21
- F SUCCESSFULLY LOADED!
Process F's page table:
┌──────┬────────┐
│ Page │ Frame │
├──────┼────────┤
│ 0 │ 5 │
│ 1 │ 6 │
│ 2 │ 7 │
│ 3 │ 8 │
│ 4 │ 16 │
│ 5 │ 17 │
│ 6 │ 18 │
│ 7 │ 19 │
│ 8 │ 20 │
│ 9 │ 21 │
└──────┴────────┘
The 'scattered' free frames are just as useful as contiguous ones. Process F sees contiguous addresses 0-40KB while physically occupying two separate regions.
Notice that the arrangement of free frames doesn't matter at all. Whether they're all together, alternating with used frames, or in any other pattern, the count is all that matters. If 10 frames are free anywhere, any 10-page process can be loaded.
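The same scenario translates almost line for line into code. The sketch below (Python, with an illustrative allocate_pages helper; frame positions for A, C, and E follow the layout above) starts from the state after B and D terminate and builds process F's page table out of whatever frames are free:

```python
FRAME_KB = 4
NUM_FRAMES = 32                                   # 128 KB of physical memory

# Physical memory after B and D terminate (frame -> owning process, None = free).
owner = {f: None for f in range(NUM_FRAMES)}
for f in range(0, 5):   owner[f] = "A"            # A: 5 pages (20 KB)
for f in range(9, 16):  owner[f] = "C"            # C: 7 pages (28 KB)
for f in range(22, 26): owner[f] = "E"            # E: 4 pages (16 KB)

def allocate_pages(process, size_kb):
    """Place a process of `size_kb` into any free frames; return its page table."""
    n_pages = -(-size_kb // FRAME_KB)             # ceiling division
    free = [f for f, p in owner.items() if p is None]
    if len(free) < n_pages:
        raise MemoryError("genuinely out of memory -- not a fragmentation failure")
    chosen = free[:n_pages]                       # ANY n free frames will do
    for f in chosen:
        owner[f] = process
    return {page: frame for page, frame in enumerate(chosen)}

page_table_F = allocate_pages("F", 40)            # F needs 40 KB = 10 pages
print(page_table_F)
# {0: 5, 1: 6, 2: 7, 3: 8, 4: 16, 5: 17, 6: 18, 7: 19, 8: 20, 9: 21}
```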
Let's compare the memory efficiency of contiguous allocation versus paging mathematically.
Contiguous Allocation Model:
With N allocated blocks in steady state:
Number of holes ≈ N/2 (50% rule)
Total memory = Allocated + Holes
= A + H
= A + (A/2) (since holes ≈ half of blocks)
= 1.5A
Memory efficiency = A / 1.5A = 67%
Waste = 33%
Paging Model:
With P total pages allocated:
External fragmentation = 0
All free frames are usable
Memory overhead = Page tables + Internal fragmentation
≈ 0.1% (page tables) + about half a page of internal fragmentation per process memory region
≈ 0.1% + ~0.01% (negligible for large processes)
Memory efficiency ≈ 99%+
| Metric | Contiguous Allocation | Paging |
|---|---|---|
| External Fragmentation | ~33% of memory | 0% |
| Compaction Required | Yes, periodic | Never |
| Allocation Failure Rate | Increases over time | Constant (based only on available frames) |
| Usable Memory (128 GB system) | ~85 GB | ~127 GB |
| Lost to Fragmentation (128 GB) | ~43 GB | 0 GB |
| Page Table Overhead | 0 | ~1-2 GB (negligible) |
| Net Usable Memory | ~85 GB | ~125 GB (+47%!) |
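Treated as formulas, the two models reduce to a few lines of arithmetic. The sketch below reproduces the 128 GB comparison using the approximations from above (the 1/3 waste figure and a roughly 2 GB page-table allowance are the assumed inputs, not measurements):

```python
def contiguous_usable(total_gb, frag_fraction=1/3):
    """Contiguous allocation: roughly one third of memory stranded in holes."""
    return total_gb * (1 - frag_fraction)

def paging_usable(total_gb, page_table_overhead_gb=2.0):
    """Paging: no external fragmentation; only page-table overhead is subtracted."""
    return total_gb - page_table_overhead_gb

total = 128
print(f"contiguous: ~{contiguous_usable(total):.0f} GB usable")   # ~85 GB
print(f"paging:     ~{paging_usable(total):.0f} GB usable")       # ~126 GB
gain = paging_usable(total) / contiguous_usable(total) - 1
print(f"gain:       ~{gain:.0%}")                                 # roughly +48%
```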
The Allocation Success Guarantee:
In a paging system, we can state precisely when allocation succeeds:
Allocation of N pages succeeds if and only if:
Free frames ≥ N
This is:
- Necessary: Can't place N pages without N frames
- Sufficient: Any N frames can hold any N pages
In contiguous allocation, the guarantee is weaker because the success condition is harder to satisfy:
Allocation of size S may fail even when:
Total free memory ≥ S
Because the condition is actually:
Largest contiguous hole ≥ S (harder to satisfy)
With paging, memory allocation becomes completely predictable again. If you have X frames free, you can allocate up to X pages. Period. This predictability is invaluable for system reliability and capacity planning.
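Written as predicates (the function names are illustrative), the contrast is stark: paging checks a count, while contiguous allocation must check the shape of free memory:

```python
def paging_can_allocate(num_free_frames: int, pages_needed: int) -> bool:
    # Success depends only on a count: F >= N.
    return num_free_frames >= pages_needed

def contiguous_can_allocate(hole_sizes_kb: list[int], size_kb: int) -> bool:
    # Success depends on the shape of free memory: some single hole must fit.
    return any(hole >= size_kb for hole in hole_sizes_kb)

holes = [16, 24, 24]                             # the 64 KB of free memory from the example
print(contiguous_can_allocate(holes, 40))        # False -- no single 40 KB hole
print(paging_can_allocate(sum(holes) // 4, 10))  # True  -- 16 free frames >= 10 pages
```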
Eliminating external fragmentation has profound implications for operating system design and system behavior.
1. Compaction Becomes Unnecessary:
With zero external fragmentation there is nothing to compact. The OS never has to freeze processes, copy memory, and rewrite addresses just to manufacture a large contiguous region.
2. Long-Running System Stability:
Contiguous allocation systems degrade over time:
Week 1: Utilization 80%, Fragmentation 10%
Week 2: Utilization 75%, Fragmentation 20%
Week 4: Utilization 65%, Fragmentation 30%
Week 8: Utilization 50%, Fragmentation 45% → Requires compaction/reboot
Paging systems maintain constant behavior:
Week 1: Utilization 98%, External Frag 0%
Week 2: Utilization 98%, External Frag 0%
Week 4: Utilization 98%, External Frag 0%
Week 52: Utilization 98%, External Frag 0% → Still running perfectly
3. Capacity Planning Simplification:
Because allocation depends only on the free-frame count, sizing a system reduces to counting pages: if the workload needs X pages, provision at least X frames (plus small page-table and internal-fragmentation overheads). No extra margin for fragmentation is required.
4. Quality of Service:
Allocation latency and success rates do not degrade as the system ages, so memory-related behavior stays predictable for every workload, regardless of what ran before it.
Cloud computing and containerization magnify these benefits. A cloud server running 100+ containers must allocate memory thousands of times per hour. With contiguous allocation, this would fragment instantly. With paging, each allocation is instant and equally successful regardless of history.
While paging eliminates fragmentation of physical memory, there are related scenarios worth understanding.
1. Virtual Address Space Fragmentation:
Within a process's virtual address space, the heap can become fragmented:
Heap fragmentation (within a process):
┌──────────────────────────────────────────────────────┐
│Used│Free│Used│Free│Used│Free│Free│Used│Free│Used│Used│
└──────────────────────────────────────────────────────┘
This fragmentation lives inside the process's own heap and is managed by the user-space allocator (malloc and friends), not by the OS's physical memory manager. Paging doesn't solve it because it operates at a different abstraction layer: virtual addresses within one process rather than physical frames.
2. Huge Page Fragmentation:
With huge pages (2MB or 1GB), similar fragmentation can occur:
Physical memory (2MB huge page perspective):
┌─────────────────────────────────────────────────────────────┐
│ Huge │Small│Small│Small│ Huge │Small│Small│ Huge │
│ Page │Pages│Pages│Pages│ Page │Pages│Pages│ Page │
└─────────────────────────────────────────────────────────────┘
To allocate a huge page, you need 512 contiguous 4KB frames.
If normal pages are interleaved, huge page allocation can fail.
This is why Linux performs background memory compaction and runs khugepaged to assemble transparent huge pages from scattered 4 KB pages, and why explicit huge page reservations at boot are common on servers.
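A tiny sketch shows why this reintroduces a contiguity problem. It scans a free-frame bitmap for an aligned run of 512 free 4 KB frames, the requirement for one 2 MB huge page; the bitmap and the find_huge_page_slot helper are illustrative, not kernel code:

```python
FRAMES_PER_HUGE_PAGE = 512            # 2 MB / 4 KB

def find_huge_page_slot(free_bitmap):
    """Return the first frame of an aligned run of 512 free frames, or None."""
    for start in range(0, len(free_bitmap), FRAMES_PER_HUGE_PAGE):
        window = free_bitmap[start:start + FRAMES_PER_HUGE_PAGE]
        if len(window) == FRAMES_PER_HUGE_PAGE and all(window):
            return start
    return None

# 4096 frames (16 MB), mostly free, but one busy 4 KB frame in each 2 MB region.
bitmap = [True] * 4096
for pinned in range(100, 4096, 512):
    bitmap[pinned] = False

print(find_huge_page_slot(bitmap))    # None -- nearly 16 MB free, yet no 2 MB slot
```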
3. Kernel Memory:
The kernel sometimes needs physically contiguous memory for:
- DMA buffers for devices that cannot do scatter-gather I/O
- Large hardware-facing buffers where the device addresses physical memory directly

Modern solutions:
- IOMMUs and scatter-gather DMA, which let devices work with scattered frames
- Contiguous pools reserved early at boot (e.g., Linux's Contiguous Memory Allocator, CMA)
- vmalloc-style mappings when only virtual contiguity is required
4. NUMA and Memory Location:
On NUMA (Non-Uniform Memory Access) systems, while any frame works functionally, performance varies based on frame location:
CPU 0 CPU 1
│ │
┌─────┴─────┐ ┌─────┴─────┐
│ RAM 0-31 │←──│ RAM 32-63│
│ (local) │ │ (local) │
└───────────┘ └───────────┘
CPU 0 accessing frame 10: Fast (local memory)
CPU 0 accessing frame 40: Slower (remote memory)
Smart frame allocation prefers local but can use any frame—a benefit of paging.
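Conceptually, a NUMA-aware allocator just adds a preference on top of the "any frame works" guarantee. The sketch below (illustrative data structures, not Linux's actual allocator) prefers the requesting CPU's node and falls back to any other node:

```python
def pick_frame(free_by_node, preferred_node):
    """Prefer a free frame on the requesting CPU's node, but accept any node."""
    # Local first: fastest access for the requesting CPU.
    if free_by_node.get(preferred_node):
        return preferred_node, free_by_node[preferred_node].pop()
    # Remote fallback: correctness is unaffected; any frame works under paging.
    for node, frames in free_by_node.items():
        if frames:
            return node, frames.pop()
    raise MemoryError("no free frames on any node")

free_by_node = {0: [], 1: [40, 41, 42]}              # node 0 exhausted, node 1 has frames
print(pick_frame(free_by_node, preferred_node=0))    # (1, 42) -- remote but usable
```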
Be precise about which layer you're discussing. Paging eliminates physical memory fragmentation. But heap fragmentation (malloc), huge page fragmentation, and address space fragmentation are different phenomena at different layers. Each requires its own solutions.
The elimination of external fragmentation was one of paging's primary motivations. Understanding the historical context illuminates why this property was so revolutionary.
Life Before Paging:
Operators in the 1960s dealt with fragmentation constantly:
Operator log, typical day (1965):
08:00 - System boot, 128K memory available
08:30 - Batch job A loaded (32K)
09:15 - Batch job B loaded (48K)
10:00 - Job A completes, 32K freed
10:30 - Job C (56K) fails to load - not enough contiguous space!
(48K + 32K = 80K free, but the largest contiguous hole is only 48K)
11:00 - Initiating memory compaction... system halted
11:15 - Compaction complete, resuming
11:20 - Job C loaded
12:00 - Job B completes
[cycle repeats...]
Total time lost to compaction this shift: 1.5 hours
System administrators accepted this as a fact of life. Memory management required constant attention.
| Aspect | Contiguous (1960s) | Paging (Post-1970) |
|---|---|---|
| Operator intervention | Frequent (compaction, restarts) | Rare |
| System uptime | Hours to days (reboot to defrag) | Weeks to months |
| Memory utilization | 50-70% due to fragmentation | 95%+ achievable |
| Batch job failures | Common (fragmentation) | Rare (only if truly out of memory) |
| Capacity planning | Difficult (fragmentation unpredictable) | Straightforward |
| Programmer awareness | Must consider memory layout | Abstracted away |
The Paradigm Shift:
When Atlas, Multics, and IBM System/370 introduced paging, the operational difference was dramatic. Systems that previously required daily attention could run for weeks. Memory utilization stabilized at high levels. The 'memory puzzle' that operators had to solve constantly simply disappeared.
This freed computing from a significant operational burden, enabling the multi-user, time-sharing systems that became the foundation for modern computing. Without paging's solution to external fragmentation, supporting dozens or hundreds of concurrent users would have been operationally infeasible.
Today, no programmer or operator thinks about external fragmentation because paging solved it so completely. This is the mark of a truly successful solution—it makes the problem invisible. We take for granted that memory 'just works,' forgetting how much engineering went into achieving that simplicity.
We've rigorously analyzed why and how paging eliminates external fragmentation. The key insights:
- External fragmentation means free memory exists but no single contiguous block is large enough; it is quantified as (total free - largest block) / total free.
- Under contiguous allocation, the 50% rule predicts roughly N/2 holes for N allocated blocks, wasting about one third of memory.
- Because every page and every frame share one fixed size, any N free frames satisfy any N-page request: allocation succeeds if and only if free frames ≥ N, so external fragmentation is exactly zero.
- Related fragmentation still exists at other layers (process heaps, huge pages, kernel contiguous allocations, NUMA locality), each with its own remedies.
What's Next:
We've seen that paging eliminates external fragmentation. But paging does have its own form of fragmentation: internal fragmentation. The next page explores why paging still has internal fragmentation (though typically minimal), its causes, impacts, and how to analyze it in practice.
External fragmentation, once a major operational headache, is now a non-issue in modern systems. By understanding why paging eliminated it, you gain deep insight into the elegant engineering that makes today's computing infrastructure reliable and efficient.