Imagine you're organizing books in a library. Traditional contiguous allocation is like requiring all volumes of an encyclopedia to sit on adjacent shelves—Volume 1 next to Volume 2 next to Volume 3. When the library fills up, you might have empty space scattered throughout, but inserting a new 20-volume encyclopedia becomes impossible because there's no single shelf section with 20 adjacent empty spaces.
Non-contiguous allocation is the revolutionary insight that volumes don't need to be adjacent. Volume 1 can be on shelf A4, Volume 2 on shelf C12, Volume 3 on shelf B7—as long as you have a good catalog (the page table), readers can find any volume instantly. The total number of empty shelf spaces is all that matters, not their arrangement.
This simple shift—from requiring contiguous placement to allowing scattered placement—fundamentally transforms memory management.
By the end of this page, you will understand how paging enables non-contiguous memory allocation, why this capability is so transformative, how the operating system manages scattered pages efficiently, and the profound implications for memory utilization and system flexibility.
Before paging, memory allocation suffered from a fundamental constraint: a process's entire memory had to occupy a single, contiguous region of physical addresses. This requirement created cascading problems:
The Fitting Problem:
As processes come and go, memory becomes fragmented:
Time T1: Three processes loaded
┌────────────┬────────────┬────────────┬────────────┐
│ Process A  │ Process B  │ Process C  │    Free    │
│  (32 KB)   │  (24 KB)   │  (40 KB)   │  (32 KB)   │
└────────────┴────────────┴────────────┴────────────┘
Time T2: Process B terminates
┌────────────┬────────────┬────────────┬────────────┐
│ Process A  │    Hole    │ Process C  │    Free    │
│  (32 KB)   │  (24 KB)   │  (40 KB)   │  (32 KB)   │
└────────────┴────────────┴────────────┴────────────┘
Time T3: New process D needs 48 KB
- Total free space: 24 KB + 32 KB = 56 KB ✓
- Largest contiguous block: 32 KB ✗
- Process D CANNOT be loaded!
Despite having 56 KB free—more than enough for Process D—the system cannot accommodate it because no single contiguous region is large enough.
Why Did Early Systems Require Contiguity?
Contiguity was required because of how address translation worked:
Base-and-Bound Translation: The hardware used a 'base register' plus a bound (limit) register. Every logical address was translated as physical = base + logical, after a bound check for protection. This only works if all of the process's addresses are contiguous starting from the base (see the sketch after this list).
Simple Hardware: Early computers had limited transistor budgets. Complex address translation was impractical.
Single Process Systems: When only one program ran at a time, contiguity was natural—the program owned all memory.
No Abstraction Layer: Programs often contained absolute physical addresses. Moving a program meant patching every address reference.
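To make the contrast concrete, here is a minimal sketch of base-and-bound translation, assuming a simple relocation-register design; the struct and names below are illustrative, not any real machine's hardware:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical relocation registers for the currently running process. */
typedef struct {
    uint32_t base;    /* physical address where the process begins       */
    uint32_t bound;   /* size of the process's single contiguous region  */
} relocation_regs;

/* Translate a logical address; returns 0 on a protection (bound) fault. */
static int translate(const relocation_regs *r, uint32_t logical, uint32_t *physical) {
    if (logical >= r->bound)            /* outside the process: trap        */
        return 0;
    *physical = r->base + logical;      /* contiguity is baked into the '+' */
    return 1;
}

int main(void) {
    relocation_regs r = { .base = 0x40000, .bound = 96 * 1024 };   /* 96 KB process */
    uint32_t phys;
    if (translate(&r, 0x1234, &phys))
        printf("logical 0x1234 -> physical 0x%x\n", (unsigned)phys);
    return 0;
}
```

One addition and one comparison: that simplicity is exactly why early hardware demanded a contiguous region.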
Paging breaks free from all these limitations with a simple but powerful abstraction.
External fragmentation in contiguous systems wasn't merely a theoretical problem. It caused real system failures: programs crashed because they couldn't be loaded, systems slowed to a crawl during compaction, and administrators had to constantly reboot to 'defragment' memory. Paging solved a genuine operational nightmare.
Paging solves the contiguity problem through a fundamental insight: if all allocation units are the same size, any free unit can hold any allocation. There's never a problem of 'fitting' because everything fits everywhere.
The Key Mechanism:
Because all pages are identical in size, and all frames are identical in size, the problem transforms from 'finding a large enough hole' to simply 'having enough frames total.'
Revisit the earlier scenario with 4 KB pages. Memory has a 24 KB hole plus a 32 KB free block, 56 KB in total:
- The 24 KB hole = 6 free frames (frames 8-13)
- The 32 KB free block = 8 free frames (frames 24-31)
- Total free frames = 14 frames = 56 KB

Process D needs 48 KB = 12 pages (4 KB each).

Allocation: Process D is loaded successfully, with 6 pages placed in the hole (frames 8-13) and 6 pages in the free block (frames 24-29).
Process D sees contiguous addresses 0x0000-0xBFFF. Physically, it's split across two non-adjacent memory regions. The page table creates the illusion of contiguity!
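A minimal sketch of that translation in C, assuming a single-level, array-based page table with 4 KB pages; the frame numbers are taken from the worked example above, and the function names are illustrative:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u     /* 4 KB pages, as in the example */
#define NUM_PAGES 12u

/* Process D's page table: 12 pages scattered across frames 8-13
 * (the old hole) and frames 24-29 (part of the free block).     */
static const uint32_t page_table[NUM_PAGES] = {
     8,  9, 10, 11, 12, 13,      /* pages 0-5  */
    24, 25, 26, 27, 28, 29       /* pages 6-11 */
};

/* Split a logical address into (page, offset), then map page -> frame. */
static int translate(uint32_t logical, uint32_t *physical) {
    uint32_t page   = logical / PAGE_SIZE;
    uint32_t offset = logical % PAGE_SIZE;
    if (page >= NUM_PAGES)
        return -1;                               /* outside D's address space */
    *physical = page_table[page] * PAGE_SIZE + offset;
    return 0;
}

int main(void) {
    uint32_t phys;
    /* 0x7ABC lies in page 7, which physically lives in frame 25. */
    if (translate(0x7ABC, &phys) == 0)
        printf("logical 0x7ABC -> physical 0x%X\n", (unsigned)phys);
    return 0;
}
```

The division picks the page, the remainder is the offset, and the table supplies the frame; nothing in the calculation requires the chosen frames to be adjacent.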
What Changed Fundamentally:
| Aspect | Contiguous Allocation | Paging |
|---|---|---|
| Allocation unit | Variable (whole process) | Fixed (single page) |
| Placement constraint | Must find large enough hole | Any frame works |
| Fragmentation concern | External (unusable holes) | Internal only (in last page) |
| Load blocking | Can't load if no big enough hole | Can load if total frames sufficient |
| Compaction needed | Yes, frequently | Never |
| Memory utilization | Degrades over time | Stays high permanently |
The magic of paging comes from uniformity. Because every page is the same size and every frame is the same size, they're completely interchangeable. This eliminates the 'geometric' problem of fitting irregular shapes—there's only one shape, and it fits everywhere.
Non-contiguous allocation is only possible because of the page table acting as an indirection layer between logical and physical addresses. This indirection is what allows the process to see contiguous memory while physical memory is scattered.
The Power of Indirection:
Process P's View (Logical)      Page Table          Physical Memory
┌─────────────────────┐       ┌───────────┐     ┌─────────────────────┐
│ Page 0              │──────▶│  0 → 47   │     │ Frame 0   (kernel)  │
│ (0x0000-0x0FFF)     │       │  1 → 12   │     │ Frame 1   (kernel)  │
├─────────────────────┤       │  2 → 91   │     │ ...                 │
│ Page 1              │──────▶│  3 → 5    │     │ Frame 5   (P's pg 3)│
│ (0x1000-0x1FFF)     │       │  4 → 203  │     │ ...                 │
├─────────────────────┤       └───────────┘     │ Frame 12  (P's pg 1)│
│ Page 2              │──────▶                  │ ...                 │
│ (0x2000-0x2FFF)     │                         │ Frame 47  (P's pg 0)│
├─────────────────────┤                         │ ...                 │
│ Page 3              │──────▶                  │ Frame 91  (P's pg 2)│
│ (0x3000-0x3FFF)     │                         │ ...                 │
├─────────────────────┤                         │ Frame 203 (P's pg 4)│
│ Page 4              │──────▶                  │ ...                 │
│ (0x4000-0x4FFF)     │                         └─────────────────────┘
└─────────────────────┘
The process executes code referencing addresses 0x0000 through 0x4FFF as if they were contiguous. The page table seamlessly redirects each access to the correct, scattered physical frame.
Properties Enabled by Indirection:
Location Independence: A page's logical address is fixed, but its physical location can change. The OS can move pages between frames by updating the page table.
Dynamic Rearrangement: The OS can relocate pages at any time (e.g., during memory optimization) without the process knowing or caring.
Lazy Allocation: Pages can be added to the address space without immediately consuming physical frames (demand paging).
Sparse Address Spaces: A process can have a large logical address space with only some pages actually allocated.
Memory Sharing: Multiple processes can have page table entries pointing to the same frame, sharing data or code efficiently.
The Cost of Indirection:
Indirection isn't free. Every memory access now requires a page table lookup to translate the logical address, followed by the actual access to the resulting physical address, potentially doubling memory traffic.
This is why TLBs exist—to cache recent translations and avoid the page table lookup overhead.
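As a rough illustration of that caching idea, here is a toy, direct-mapped, software-style TLB in C. Real TLBs are hardware structures, and `page_table_walk` below is just a stand-in for the slow full lookup:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                /* 4 KB pages */
#define TLB_ENTRIES 16

typedef struct {
    bool     valid;
    uint32_t page;                   /* logical page number */
    uint32_t frame;                  /* cached translation  */
} tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];

/* Stand-in for the full page table lookup (the slow, extra memory access). */
static uint32_t page_table_walk(uint32_t page) {
    return page + 100;               /* pretend mapping, purely illustrative */
}

static uint32_t translate(uint32_t logical) {
    uint32_t page   = logical >> PAGE_SHIFT;
    uint32_t offset = logical & ((1u << PAGE_SHIFT) - 1);
    tlb_entry *e = &tlb[page % TLB_ENTRIES];     /* direct-mapped slot */

    if (!e->valid || e->page != page) {          /* TLB miss: pay the lookup cost */
        e->frame = page_table_walk(page);
        e->page  = page;
        e->valid = true;
    }
    return (e->frame << PAGE_SHIFT) | offset;    /* hit path: no table walk */
}

int main(void) {
    printf("first access:  0x%x\n", (unsigned)translate(0x1234));  /* miss */
    printf("second access: 0x%x\n", (unsigned)translate(0x1234));  /* hit  */
    return 0;
}
```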
The principle 'all problems in computer science can be solved by another level of indirection' (attributed to David Wheeler) is perfectly illustrated by paging. The page table adds a layer of indirection that solves external fragmentation, enables sharing, supports virtual memory, and more—all from one elegant abstraction.
Understanding exactly how pages are allocated reveals why non-contiguous allocation works so smoothly.
When a New Process Starts:
Process creation: 5 logical pages required
Step 1: Determine page requirements
- Code segment: 10 KB → 3 pages (0, 1, 2)
- Data segment: 4 KB → 1 page (3)
- Stack segment: 4 KB → 1 page (4)
- Total: 5 pages needed
Step 2: Consult free frame list
- Free frames available: [7, 12, 18, 21, 37, 42, 56, 89, 103, 127, ...]
- Grab 5 frames: 7, 12, 18, 21, 37
Step 3: Build page table
┌──────┬────────┐
│ Page │ Frame │
├──────┼────────┤
│ 0 │ 7 │
│ 1 │ 12 │
│ 2 │ 18 │
│ 3 │ 21 │
│ 4 │ 37 │
└──────┴────────┘
Step 4: Load process content
- Load code into frames 7, 12, 18
- Initialize data in frame 21
- Set up stack in frame 37
Step 5: Update frame table
- Mark frames 7, 12, 18, 21, 37 as allocated to this process
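These steps can be sketched in a few lines of C. This is a toy allocator, not any real kernel's code: the frame numbers are the ones from the walkthrough, and the structures are simplified to a free-frame array and a flat page table.

```c
#include <stdio.h>

#define MAX_PAGES 64

/* Free frame list from Step 2, consumed front-to-back (order is irrelevant). */
static int free_frames[] = { 7, 12, 18, 21, 37, 42, 56, 89, 103, 127 };
static int n_free = 10, next = 0;

static int alloc_frame(void) {               /* O(1): hand out any free frame */
    if (n_free == 0) return -1;
    n_free--;
    return free_frames[next++];
}

/* Steps 2-3: grab frames and record them in the new process's page table. */
static int create_process(int pages_needed, int page_table[MAX_PAGES]) {
    if (pages_needed > n_free)               /* only the TOTAL free count matters */
        return -1;
    for (int p = 0; p < pages_needed; p++)
        page_table[p] = alloc_frame();       /* which frames? whichever are free */
    return 0;
}

int main(void) {
    int pt[MAX_PAGES];
    if (create_process(5, pt) == 0)          /* the 5-page process from Step 1 */
        for (int p = 0; p < 5; p++)
            printf("page %d -> frame %d\n", p, pt[p]);
    return 0;
}
```

Running this reproduces the page table from Step 3: pages 0-4 land in frames 7, 12, 18, 21, and 37.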
The Free Frame List:
The operating system maintains a free frame list (or pool) containing all unallocated frames. This can be implemented as:
Linked List: Each free frame contains a pointer to the next free frame. O(1) allocation, O(1) deallocation.
Bitmap: One bit per frame (0=free, 1=allocated). O(n) to find free frame, but compact representation.
Stack: Free frame numbers pushed/popped. O(1) operations, commonly used.
Key Insight: Order Doesn't Matter
When allocating frames, the OS can grab any available frames—it doesn't matter which ones or in what order. This is fundamentally different from contiguous allocation where you need frames that are physically adjacent.
Contiguous: Need frames 10, 11, 12, 13, 14 specifically (adjacent)
Paging: Any 5 frames work: 7, 42, 18, 103, 21 are fine!
This freedom means allocation almost never fails due to fragmentation. If you have N free frames anywhere in memory, you can allocate N pages.
While any frames technically work, clever operating systems might prefer frames that improve cache behavior. For example, allocating frames from the same physical memory region might improve DRAM row buffer hits. But this is optimization, not requirement—functionally, any frame works.
Non-contiguous allocation dramatically improves memory utilization. Let's quantify this improvement.
Utilization Under Contiguous Allocation:
With external fragmentation, memory utilization degrades over time:
Simulation: 100 process arrivals/departures
| Time | Allocated | Free | Usable Free | Utilization |
|---|---|---|---|---|
| T=0 | 0 KB | 128 KB | 128 KB | 0% |
| T=10 | 96 KB | 32 KB | 32 KB | 75% |
| T=50 | 80 KB | 48 KB | 20 KB (holes) | 62% |
| T=100 | 72 KB | 56 KB | 8 KB (holes!) | 56% |
Despite 56 KB being 'free', only 8 KB is usable as the rest is in scattered holes too small for any waiting process.
Utilization With Paging:
With non-contiguous allocation, utilization stays high:
Simulation: Same 100 process arrivals/departures
| Time | Allocated | Free | Usable Free | Utilization |
|---|---|---|---|---|
| T=0 | 0 KB | 128 KB | 128 KB | 0% |
| T=10 | 96 KB | 32 KB | 32 KB | 75% |
| T=50 | 80 KB | 48 KB | 48 KB | 62% |
| T=100 | 72 KB | 56 KB | 56 KB | 56% |
Every free frame is usable! The 56 KB free at T=100 can accommodate any process up to 56 KB.
| Metric | Contiguous Allocation | Paging |
|---|---|---|
| External Fragmentation | Grows over time (can reach 50%+) | Zero |
| Compaction Required | Yes, periodically | Never |
| Allocation Success Rate | Degrades with fragmentation | Constant (limited only by total free) |
| Memory Overhead | Zero | Page tables (1-4% of RAM) |
| Effective Utilization | 50-70% sustainable | 95%+ sustainable |
| System Stability | Degrades, needs restarts | Stable indefinitely |
The 50% Rule Validation:
Under contiguous allocation, the '50% rule' observes that for every N allocated blocks, roughly N/2 holes develop. If those holes are about the same size as the allocated blocks, the N/2 holes out of 3N/2 total blocks mean approximately one-third of memory is lost to external fragmentation.
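In symbols (a back-of-the-envelope estimate that assumes holes are comparable in size to allocated blocks, not a rigorous bound):

$$\frac{\text{wasted memory}}{\text{total memory}} \approx \frac{N/2}{N + N/2} = \frac{1}{3} \approx 33\%$$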
With paging, this waste disappears entirely because there are no holes—every frame is either fully used or fully free.
Real-World Impact:
Consider a server with 128 GB RAM:
- Under contiguous allocation at the 50-70% effective utilization typical of long-running systems, roughly 38-64 GB would be stranded in unusable holes.
- Under paging at 95%+ utilization, the loss shrinks to a few GB of internal fragmentation and page table overhead.
That's potentially 35+ GB of recovered memory—equivalent to significant hardware savings.
Non-contiguous allocation isn't just 'better'—it's measurably, substantially better. Systems running for months or years maintain near-perfect memory utilization, where contiguous systems would have degraded to 50% or less effective capacity.
From the process's viewpoint, nothing changes. The process has no idea—and no need to know—that its memory is scattered across physical RAM.
The Contiguous Illusion:
Process P's mental model:
0x0000 ┌─────────────────────┐
       │        Code         │  } 12 KB code segment
0x3000 ├─────────────────────┤
       │        Data         │  } 8 KB data segment
0x5000 ├─────────────────────┤
       │        Heap         │  } grows upward
       │          ↓          │
       │          .          │
       │          .          │
       │          ↑          │
       │        Stack        │  } grows downward
0xFFFF └─────────────────────┘
Process P sees a simple, contiguous 64 KB address space.
It can access any address from 0x0000 to 0xFFFF.
Pointers work normally. Arrays are contiguous. Stack grows down.
The process codes, compiles, and executes exactly as if memory were contiguous. This abstraction gives us the best of both worlds: the programmer keeps a simple, contiguous model of memory, while the operating system keeps the freedom to place each page in whatever physical frame happens to be free.
What the Process Can't See:
The process is unaware of which physical frames back its pages, how scattered those frames are across RAM, whether the OS has relocated them, or what other processes occupy the neighboring frames.
What the Process Does See: a clean logical address space starting at address 0, in which its code, data, heap, and stack occupy exactly the contiguous ranges it expects.
Protection Through Invisibility:
This isolation isn't just convenient—it's essential for security and stability. Process P cannot read, write, or even name any frame that does not appear in its own page table, so other processes' memory and the kernel's memory are unreachable by construction.
The page table defines what exists for each process. Anything not in the page table simply doesn't exist from that process's perspective.
This abstraction is why you can write code with pointers, arrays, and data structures without worrying about physical memory layout. The compiler and programmer assume contiguous memory; the OS and hardware deliver the illusion. Neither side needs to know about the other's complexities.
Non-contiguous allocation enables capabilities far beyond just avoiding fragmentation. The flexibility it provides is foundational for modern operating system features.
1. Page-Level Sharing:
Multiple processes can share physical frames by having page table entries point to the same frame:
Process A's Page Table:                Process B's Page Table:
┌──────┬─────────┐                     ┌──────┬─────────┐
│ Pg 0 │ Frm 47  │                     │ Pg 0 │ Frm 12  │ ← Different
│ Pg 1 │ Frm 200 │◀───────────────────▶│ Pg 1 │ Frm 200 │ ← Same! Shared code
│ Pg 2 │ Frm 201 │◀───────────────────▶│ Pg 2 │ Frm 201 │ ← Same! Shared code
│ Pg 3 │ Frm 53  │                     │ Pg 3 │ Frm 88  │ ← Different (private data)
└──────┴─────────┘                     └──────┴─────────┘
Frames 200 and 201 contain shared library code.
Both processes read from the same physical memory!
This is how shared libraries (libc, DLLs) work without duplicating code in every process.
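A toy model of that sharing in C: physical memory is simulated as an array of frames, and two per-process page tables have entries for pages 1 and 2 that point at the same frames. The frame numbers match the diagram; everything else is illustrative.

```c
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  64            /* tiny pages, just for the demo */
#define NUM_FRAMES 256

/* Simulated physical memory: an array of frames. */
static char physical_memory[NUM_FRAMES][PAGE_SIZE];

/* Per-process page tables (page number -> frame number), as in the diagram. */
static const int pt_A[4] = { 47, 200, 201, 53 };
static const int pt_B[4] = { 12, 200, 201, 88 };  /* pages 1-2: SAME frames as A */

static char *page_ptr(const int *pt, int page) {
    return physical_memory[pt[page]];             /* indirection through the table */
}

int main(void) {
    /* "Process A" writes library code into its page 1 ... */
    strcpy(page_ptr(pt_A, 1), "shared library code");

    /* ... and "Process B" reads the very same bytes through its own page 1. */
    printf("B sees: %s\n", page_ptr(pt_B, 1));
    return 0;
}
```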
2. Sparse Address Spaces:
Non-contiguous allocation permits sparse address spaces—large logical address spaces with most addresses unmapped:
64-bit process address space (theoretical 16 EB):
0x0000000000000000 ┌─────────────────┐
                   │ Code (4 MB)     │ ← mapped, 1024 pages
                   ├─────────────────┤
                   │    unmapped     │
                   │   (huge gap)    │
0x0000700000000000 ├─────────────────┤
                   │ Heap (64 MB)    │ ← mapped, 16384 pages
                   ├─────────────────┤
                   │    unmapped     │
                   │   (huge gap)    │
0x00007FFFE0000000 ├─────────────────┤
                   │ Stack (8 MB)    │ ← mapped, 2048 pages
0x00007FFFFFFFFFFF └─────────────────┘
Actual physical memory used: 76 MB (19456 pages)
Address space spanned: ~140 TB
Ratio: > 99.999% of address space is unmapped but costs nothing!
The OS only needs page table entries (and frames) for actually-used regions. The gaps cost nothing.
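To see why the gaps cost nothing, here is a small sketch that records only the mapped regions from the diagram and tallies what they actually consume; the region list and names are illustrative bookkeeping, not a real kernel structure:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ull

/* Only the mapped regions from the diagram exist in the OS's bookkeeping. */
typedef struct { uint64_t start, length; } region;

static const region mapped[3] = {
    { 0x0000000000000000ull,  4ull << 20 },   /* code:  4 MB  ->  1024 pages */
    { 0x0000700000000000ull, 64ull << 20 },   /* heap:  64 MB -> 16384 pages */
    { 0x00007FFFE0000000ull,  8ull << 20 },   /* stack: 8 MB  ->  2048 pages */
};

int main(void) {
    uint64_t pages = 0;
    for (int i = 0; i < 3; i++)
        pages += mapped[i].length / PAGE_SIZE;

    uint64_t span = 0x00007FFFFFFFFFFFull + 1;     /* bytes spanned (2^47) */

    printf("mapped:  %llu pages (%llu MB)\n",
           (unsigned long long)pages,
           (unsigned long long)(pages * PAGE_SIZE >> 20));
    printf("spanned: %llu TiB of address space, almost all of it unmapped\n",
           (unsigned long long)(span >> 40));
    return 0;
}
```

The tally comes out to 19456 pages (76 MB) backing a span of 128 TiB (~140 TB), matching the figures above.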
Everything we'll study in virtual memory—demand paging, page replacement, working sets, thrashing—builds on non-contiguous allocation. The ability to place pages anywhere, and to not place pages at all until needed, is what makes virtual memory possible.
While non-contiguous allocation eliminates external fragmentation, it introduces its own implementation complexities that the OS must handle.
Frame Tracking:
The OS must efficiently track which frames are free and which are allocated:
Common Data Structures:
1. Stack-based free list:
┌─────┐
│ 127 │ ← Top: next frame to allocate
│  89 │
│  56 │
│  42 │
│  37 │
│ ... │
└─────┘
- O(1) allocation: pop
- O(1) deallocation: push
2. Bitmap:
Frame:  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
Bit:    1  1  0  0  1  1  1  0  0  0  1  1  0  0  0  0
        ▲     ▲                       ▲     ▲
      Used   Free                   Used   Free
- Compact: 1 bit per frame
- O(n) to find free frame (can optimize with hierarchical bitmap)
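A sketch of the bitmap approach in C, with one bit per frame and a linear first-free scan; a hierarchical bitmap would speed up the scan, but that optimization is omitted here:

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_FRAMES 32768                        /* 128 MB of 4 KB frames */

static uint8_t frame_bitmap[NUM_FRAMES / 8];    /* 0 = free, 1 = allocated */

/* O(n) linear scan for the first free frame; returns -1 if memory is full. */
static long find_free_frame(void) {
    for (long f = 0; f < NUM_FRAMES; f++)
        if (!(frame_bitmap[f / 8] & (1u << (f % 8))))
            return f;
    return -1;
}

static void mark_frame(long f, int allocated) {
    if (allocated) frame_bitmap[f / 8] |=  (uint8_t)(1u << (f % 8));
    else           frame_bitmap[f / 8] &= (uint8_t)~(1u << (f % 8));
}

int main(void) {
    long f = find_free_frame();      /* first free frame (frame 0 initially) */
    if (f >= 0) {
        mark_frame(f, 1);            /* allocate it */
        printf("allocated frame %ld\n", f);
    }
    return 0;
}
```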
Page Table Management:
Each process needs its own page table, and these tables themselves consume memory:
Page Table Allocation: Page tables live in kernel memory and occupy physical frames like any other data (and on many systems, page tables can themselves be paged).
Context Switch Updates: When switching processes, the Page Table Base Register must be updated to point to the new process's page table.
TLB Flushing: Context switches typically require TLB flushes since cached translations belong to the previous process. (Some TLBs support ASIDs to avoid this.)
Hierarchical Tables: For 64-bit systems, multi-level page tables are required to avoid impossible memory overhead.
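As a rough sketch of why hierarchy helps, here is a generic two-level table in C using a 10+10+12-bit address split (similar in spirit to classic 32-bit designs, but not any specific OS's code): inner tables are allocated only for directory slots that are actually used.

```c
#include <stdint.h>
#include <stdlib.h>

#define ENTRIES 1024                  /* 2^10 entries per level (10+10+12 split) */

/* Inner-table entry: page -> frame, plus a valid bit. */
typedef struct { uint32_t frame; int valid; } pte;

/* Outer table (directory): mostly NULL; inner tables exist only where mapped. */
static pte *page_directory[ENTRIES];

/* Two-level walk: directory index, then table index, then page offset. */
static int translate(uint32_t logical, uint32_t *physical) {
    uint32_t dir    = (logical >> 22) & 0x3FF;
    uint32_t page   = (logical >> 12) & 0x3FF;
    uint32_t offset =  logical        & 0xFFF;

    pte *table = page_directory[dir];
    if (table == NULL || !table[page].valid)
        return -1;                               /* unmapped: page fault */
    *physical = (table[page].frame << 12) | offset;
    return 0;
}

int main(void) {
    /* Map a single page, logical 0x00400000 -> frame 91. Only ONE inner table
     * is allocated; the other 1023 directory slots stay NULL and cost nothing. */
    page_directory[1] = calloc(ENTRIES, sizeof(pte));
    if (page_directory[1] == NULL)
        return 1;
    page_directory[1][0] = (pte){ .frame = 91, .valid = 1 };

    uint32_t phys;
    return translate(0x00400123, &phys);         /* phys = 91*4096 + 0x123 */
}
```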
Memory Overhead Analysis:
For a 32-bit system with 4 KB pages and 4-byte PTEs, a flat page table has 2^20 entries = 4 MB per process; a hundred processes would spend 400 MB of RAM on page tables alone.
For a 64-bit system with hierarchical tables, a flat table would be astronomically large, so inner tables are allocated only for mapped regions; overhead then scales with the memory actually in use and stays at a few percent of RAM at most.
Page tables are stored in physical memory—frames used for page tables aren't available for application data. A system with high page table overhead (many processes, non-hierarchical tables, small pages) can waste significant memory on metadata. Modern systems mitigate this with hierarchical page tables and huge pages.
Non-contiguous allocation is perhaps paging's most transformative feature. Let's consolidate what we've learned:
- Fixed-size pages and frames are interchangeable, so a process can be loaded whenever enough total frames exist, regardless of where they sit.
- The page table's indirection preserves the illusion of a contiguous address space while physical placement is scattered.
- External fragmentation and compaction disappear, and memory utilization stays high for the lifetime of the system.
- The same indirection enables sharing, sparse address spaces, and lazy allocation, at the cost of page table memory and translation overhead.
What's Next:
We've seen that paging eliminates external fragmentation through non-contiguous allocation. But what about the fragmentation it does have? The next page explores why paging has no external fragmentation at all, providing a rigorous understanding of this critical property and its implications.
You now understand the revolutionary nature of non-contiguous allocation. This single capability—placing pages anywhere—transformed memory management from a brittle, degrading system into a robust, efficient one. Every modern operating system relies on this foundation.