When a video player streams a 4K movie or a database performs a full table scan, performance depends critically on how fast data can be read sequentially from disk. Contiguous allocation excels here because physical proximity on disk translates directly to I/O performance.
This isn't a marginal improvement—sequential access to contiguous data can be roughly 10× faster than random access on SSDs, and hundreds of times faster on HDDs. Understanding why requires diving into disk mechanics, operating system I/O scheduling, and hardware optimization.
By the end of this page, you'll understand the mechanics of disk access, why sequential reads are dramatically faster, how contiguous allocation enables prefetching and DMA optimization, and the quantitative performance differences between sequential and random access.
To understand why contiguous allocation is fast, we must understand the physical mechanics of disk access. Both HDDs and SSDs benefit from sequential access, though for different reasons.
Hard Disk Drive (HDD) Access Components:
Total Access Time = Seek + Rotational Latency + Transfer
For a single random 4 KB block: ~8-20 ms (mostly seek + rotation)
For sequential 4 KB blocks: ~0.03 ms each (no additional seek/rotation)
| Component | Random Access | Sequential Access | Difference |
|---|---|---|---|
| Seek Time | 8 ms (average) | 0 ms (same track) | Eliminated |
| Rotational Latency | 4.2 ms (average) | 0 ms (already positioned) | Eliminated |
| Transfer (4 KB) | ~0.03 ms | ~0.03 ms | Same |
| Total per block | ~12 ms | ~0.03 ms | ~400× faster |
| Throughput | ~0.3 MB/s | ~150 MB/s | ~500× faster |
Seek time dominates random access performance. Moving the disk head just once can take longer than transferring thousands of sequential blocks. Contiguous allocation eliminates seeks within a file entirely.
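To make these figures concrete, here is a minimal back-of-envelope model in C. The parameters (8 ms average seek, 4.2 ms rotational latency, 150 MB/s sustained transfer, 4 KB blocks) are the assumptions behind the table above; this is a sketch, not a measurement:

```c
#include <stdio.h>

#define SEEK_MS       8.0            /* average seek time */
#define ROTATION_MS   4.2            /* average rotational latency (7200 RPM) */
#define XFER_MB_PER_S 150.0          /* sustained sequential transfer rate */
#define BLOCK_MB      (4.0 / 1024.0) /* 4 KB block, expressed in MB */

int main(void) {
    double xfer_ms   = BLOCK_MB / XFER_MB_PER_S * 1000.0;  /* ~0.03 ms */
    double random_ms = SEEK_MS + ROTATION_MS + xfer_ms;    /* head moves every block */
    double seq_ms    = xfer_ms;                            /* head already in place */

    printf("random:     %.2f ms/block  (%.2f MB/s)\n",
           random_ms, BLOCK_MB / random_ms * 1000.0);      /* ~12 ms, ~0.3 MB/s */
    printf("sequential: %.3f ms/block (%.0f MB/s)\n",
           seq_ms, BLOCK_MB / seq_ms * 1000.0);            /* ~0.03 ms, 150 MB/s */
    printf("speedup:    ~%.0fx per block\n", random_ms / seq_ms);
    return 0;
}
```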
Solid-state drives have no moving parts, so there's no seek time or rotational latency. Does contiguous allocation still matter? Yes, significantly.
Why SSDs Still Favor Sequential Access:
| Metric | Sequential Read | Random Read (4 KB) | Ratio |
|---|---|---|---|
| Throughput | 3,500 MB/s | 400 MB/s | 8.75× |
| IOPS | 875,000 | 100,000 | 8.75× |
| Latency | ~10 μs | ~80 μs | 8× |
Even on modern NVMe SSDs, sequential access is 8-10× faster than random access. The gap is smaller than on HDDs but still substantial enough to make contiguous allocation meaningful for performance-critical applications.
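The IOPS and throughput rows above are two views of the same quantity, since throughput equals IOPS times the transfer size. A quick sanity check, assuming 4 KB (4096 bytes) per operation:

```c
#include <stdio.h>

int main(void) {
    double block_bytes = 4096.0;    /* 4 KB per operation */
    double seq_iops    = 875000.0;  /* figures from the table above */
    double rand_iops   = 100000.0;

    /* throughput (MB/s) = IOPS x bytes per operation / 1e6 */
    printf("sequential: %.0f MB/s\n", seq_iops  * block_bytes / 1e6);  /* ~3584 */
    printf("random:     %.0f MB/s\n", rand_iops * block_bytes / 1e6);  /* ~410  */
    return 0;
}
```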
Operating systems detect sequential access patterns and proactively read ahead, loading data into cache before it's requested. Contiguous allocation makes this trivially predictable.
How Prefetching Works:
```c
/*
 * Simplified read-ahead logic for contiguous files.
 *
 * The OS detects sequential access patterns and prefetches
 * upcoming blocks into the page cache before they are requested.
 */
#include <stdint.h>

#define INITIAL_READAHEAD 32    /* 128 KB initial window (32 x 4 KB blocks) */
#define MAX_READAHEAD     256   /* 1 MB maximum window */
#define BLOCK_SIZE        4096  /* bytes per block */

#define MIN(a, b) ((a) < (b) ? (a) : (b))

typedef struct {
    uint64_t start_block;   /* first block of the (contiguous) file */
    uint64_t block_count;   /* file length in blocks */
} file_t;

typedef struct {
    uint64_t prev_block;        /* last block accessed */
    uint32_t sequential_count;  /* consecutive sequential reads */
    uint32_t readahead_size;    /* current prefetch window, in blocks */
} readahead_state_t;

/* Issue an asynchronous read of `count` blocks starting at `start`
 * (declaration standing in for the real block-layer call). */
void async_read_blocks(uint64_t start, uint64_t count);

void update_readahead(readahead_state_t *ra, uint64_t current_block) {
    /* Detect sequential pattern */
    if (current_block == ra->prev_block + 1) {
        ra->sequential_count++;
        /* Grow readahead window exponentially */
        if (ra->sequential_count > 2) {
            ra->readahead_size = MIN(ra->readahead_size * 2, MAX_READAHEAD);
        }
    } else {
        /* Non-sequential: reset */
        ra->sequential_count = 0;
        ra->readahead_size = INITIAL_READAHEAD;
    }
    ra->prev_block = current_block;
}

/* For contiguous files the next blocks are 100% predictable:
 * they are simply the next blocks on disk. */
void prefetch_contiguous(file_t *file, readahead_state_t *ra,
                         uint64_t current_pos) {
    uint64_t start = file->start_block + (current_pos / BLOCK_SIZE);
    uint64_t end   = MIN(start + ra->readahead_size,
                         file->start_block + file->block_count);
    /* Issue asynchronous read for upcoming blocks */
    async_read_blocks(start, end - start);
}
```
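To watch the window policy in action, here is a small test driver (hypothetical, meant to be appended to the sketch above) that simulates a sequential scan:

```c
#include <stdio.h>

int main(void) {
    readahead_state_t ra = { .prev_block = 0, .sequential_count = 0,
                             .readahead_size = INITIAL_READAHEAD };

    /* Simulate reading blocks 1, 2, 3, ... in order */
    for (uint64_t block = 1; block <= 8; block++) {
        update_readahead(&ra, block);
        printf("block %llu -> window = %u blocks\n",
               (unsigned long long)block, ra.readahead_size);
    }
    /* Prints a window of 32, 32, 64, 128, 256, 256, 256, 256:
     * exponential growth, capped at MAX_READAHEAD */
    return 0;
}
```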
With contiguous allocation, the OS knows exactly which blocks come next—they're simply the next blocks on disk. With linked allocation, the OS can't prefetch effectively because each block's successor is stored in the current block, so the next address is unknown until the current read completes. With indexed allocation, extra index-block reads are needed before prefetching can proceed.

Direct Memory Access (DMA) allows disk controllers to transfer data directly to memory without CPU involvement. Contiguous allocation maximizes DMA efficiency.
Single DMA Transfer for Contiguous Data:
When reading 1 MB from a contiguous file, the CPU programs the DMA controller once with a starting block and a length (256 × 4 KB blocks), the controller streams the full megabyte directly into memory, and a single interrupt signals completion. No per-block CPU work is required.
The DMA Advantage:
| Allocation Method | DMA Transfers | CPU Interrupts | Overhead |
|---|---|---|---|
| Contiguous | 1 | 1 | Minimal |
| Linked | 256 | 256 | Severe |
| Indexed (single) | 2 | 2 | Low |
| Indexed (multi-level) | 2-4 | 2-4 | Moderate |
Each DMA setup and interrupt has overhead. With contiguous allocation, a single large transfer replaces hundreds of small transfers, dramatically reducing CPU overhead and maximizing effective throughput.
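The difference shows up even in a counting sketch. Note that the `dma_desc_t` layout and `issue_dma()` below are invented for illustration; real controllers use hardware-specific descriptor formats:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical DMA descriptor: where on disk, how many blocks */
typedef struct {
    uint64_t disk_block;   /* starting block (LBA) */
    uint32_t block_count;  /* blocks in this transfer */
} dma_desc_t;

static int setups = 0, interrupts = 0;

/* Stand-in for programming the controller; counts overhead events */
static void issue_dma(dma_desc_t d) {
    (void)d;
    setups++;      /* CPU programs the controller */
    interrupts++;  /* controller raises an IRQ on completion */
}

int main(void) {
    /* Contiguous 1 MB file: one descriptor covers all 256 x 4 KB blocks */
    issue_dma((dma_desc_t){ .disk_block = 1000, .block_count = 256 });
    printf("contiguous: %d setup(s), %d interrupt(s)\n", setups, interrupts);

    /* Linked 1 MB file: each block's successor address lives inside the
     * block just read, so every one of the 256 blocks needs its own
     * setup and completion interrupt */
    setups = interrupts = 0;
    for (uint64_t blk = 2000, i = 0; i < 256; i++, blk += 7 /* pretend pointer */)
        issue_dma((dma_desc_t){ .disk_block = blk, .block_count = 1 });
    printf("linked:     %d setup(s), %d interrupt(s)\n", setups, interrupts);
    return 0;
}
```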
Operating systems use I/O schedulers to order disk requests for efficiency. Contiguous allocation simplifies this process significantly.
Scheduling Advantages:
```c
/*
 * I/O request merging for contiguous files.
 *
 * The block layer can merge adjacent requests into
 * single large operations, reducing per-request overhead.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct io_request {
    uint64_t start_block;      /* first block of the request */
    uint32_t block_count;      /* number of blocks requested */
    void *buffer;              /* destination in memory */
    struct io_request *next;   /* next pending request in the queue */
} io_request_t;

/* Try to merge a new request with the pending requests.
 * (Simplified: a real block layer would also verify that the
 * memory buffers can be chained, e.g. via scatter-gather lists.) */
bool try_merge_request(io_request_t *queue, io_request_t *new_req) {
    io_request_t *curr = queue;
    while (curr != NULL) {
        /* Back merge: new request extends the end of current */
        if (new_req->start_block == curr->start_block + curr->block_count) {
            curr->block_count += new_req->block_count;
            return true;  /* merged */
        }
        /* Front merge: new request precedes the start of current */
        if (curr->start_block == new_req->start_block + new_req->block_count) {
            curr->start_block = new_req->start_block;
            curr->block_count += new_req->block_count;
            return true;  /* merged */
        }
        curr = curr->next;
    }
    return false;  /* cannot merge */
}
```
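For example, in a hypothetical driver around the sketch above, a read of the next blocks of a contiguous file back-merges with the request already queued, producing one larger operation:

```c
#include <stdio.h>

int main(void) {
    /* Pending request covering blocks 100-107 (e.g., readahead in flight) */
    io_request_t pending = { .start_block = 100, .block_count = 8,
                             .buffer = NULL, .next = NULL };

    /* New request for blocks 108-115: adjacent on disk, so it merges */
    io_request_t incoming = { .start_block = 108, .block_count = 8,
                              .buffer = NULL, .next = NULL };

    if (try_merge_request(&pending, &incoming))
        printf("merged into blocks %llu..%llu, one disk operation\n",
               (unsigned long long)pending.start_block,
               (unsigned long long)(pending.start_block +
                                    pending.block_count - 1));
    return 0;
}
```

Requests against a badly fragmented file rarely merge: successive block numbers aren't adjacent, so each request is dispatched and completed separately.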
Let's put concrete numbers on the performance advantage of contiguous sequential access versus fragmented access.

Benchmark Scenario: Reading a 100 MB file
| Storage | Contiguous | Fragmented (1000 pieces) | Improvement |
|---|---|---|---|
| HDD (7200 RPM) | 0.7 seconds | 45 seconds | 64× |
| SATA SSD | 0.3 seconds | 1.2 seconds | 4× |
| NVMe SSD | 0.03 seconds | 0.15 seconds | 5× |
These numbers explain why video playback stutters on fragmented HDDs, why databases prefer contiguous tablespaces, and why OS swap files are allocated contiguously. The performance difference is not theoretical—it's directly perceptible.
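As a cross-check, a back-of-envelope model built from the HDD parameters used earlier on this page reproduces the contiguous figure. The fragmented estimate is a lower bound, since it charges only one average seek plus rotation per fragment; a real fragmented read can also suffer longer seeks and missed revolutions, pushing the time toward the figure in the table:

```c
#include <stdio.h>

int main(void) {
    double file_mb     = 100.0;
    double seek_ms     = 8.0;    /* average seek, per repositioning */
    double rotation_ms = 4.2;    /* average rotational latency */
    double xfer_mb_s   = 150.0;  /* sustained transfer rate */

    double transfer_s   = file_mb / xfer_mb_s;  /* ~0.67 s of pure transfer */
    double contiguous_s = (seek_ms + rotation_ms) / 1000.0 + transfer_s;
    double fragmented_s = 1000.0 * (seek_ms + rotation_ms) / 1000.0 + transfer_s;

    printf("contiguous: %.2f s\n", contiguous_s);               /* ~0.68 s */
    printf("fragmented: %.1f s (lower bound)\n", fragmented_s); /* ~12.9 s */
    return 0;
}
```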
You now understand why contiguous allocation delivers exceptional sequential performance. Next, we'll examine the dark side: external fragmentation and the challenges it creates for dynamic file systems.