Understanding FIFO's theoretical properties—its simplicity, susceptibility to Belady's anomaly, and age-based victim selection—provides necessary background. But in practice, the critical question is: How well does FIFO actually perform?
Performance in page replacement means minimizing page faults, which directly translates to minimizing disk I/O and maximizing throughput. A page fault on a rotational hard drive costs approximately 10 milliseconds—an eternity compared to the nanoseconds required for a memory access. Even with modern SSDs, a page fault costs hundreds of microseconds.
In this page, we systematically analyze FIFO's performance characteristics: how it compares to optimal and other practical algorithms, what workloads expose its weaknesses, and how to predict its behavior in real systems.
By the end of this page, you will be able to quantitatively compare FIFO against OPT and LRU, identify workload patterns that create performance problems, understand the performance gap between FIFO and better algorithms, and make informed decisions about when FIFO is acceptable versus when it's not.
The Optimal (OPT) algorithm—also known as Bélády's algorithm—serves as the theoretical lower bound for page faults. OPT evicts the page that won't be used for the longest time in the future, guaranteeing the minimum possible page faults.
OPT is unimplementable in practice (we can't see the future), but it provides an essential benchmark. Comparing FIFO to OPT reveals FIFO's fundamental limitations.
Comparative Example:
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2 (15 references) Frames: 3
| Step | Page | Frame 0 | Frame 1 | Frame 2 | Result | Reason for Eviction |
|---|---|---|---|---|---|---|
| 1 | 7 | 7 | | | MISS | |
| 2 | 0 | 7 | 0 | | MISS | |
| 3 | 1 | 7 | 0 | 1 | MISS | |
| 4 | 2 | 2 | 0 | 1 | MISS | 7 never used again—evict 7 |
| 5 | 0 | 2 | 0 | 1 | HIT | |
| 6 | 3 | 2 | 0 | 3 | MISS | 1 used farthest in future—evict 1 |
| 7 | 0 | 2 | 0 | 3 | HIT | |
| 8 | 4 | 2 | 4 | 3 | MISS | 0 vs 2 vs 3: 0 at step 11, 2 at step 9, 3 at step 10—evict 0 |
| 9 | 2 | 2 | 4 | 3 | HIT | |
| 10 | 3 | 2 | 4 | 3 | HIT | |
| 11 | 0 | 2 | 0 | 3 | MISS | 4 never used again—evict 4 |
| 12 | 3 | 2 | 0 | 3 | HIT | |
| 13 | 2 | 2 | 0 | 3 | HIT | |
| 14 | 1 | 2 | 0 | 1 | MISS | Neither 0 nor 3 is used again—evict 3 |
| 15 | 2 | 2 | 0 | 1 | HIT | |
Results Comparison:
| Algorithm | Page Faults | Page Hits | Fault Rate |
|---|---|---|---|
| OPT | 8 | 7 | 53.3% |
| FIFO | 12 | 3 | 80.0% |
| Difference | +4 faults | -4 hits | +26.7% |
| FIFO overhead | 50% more faults than necessary | | |
Analysis:
FIFO produces 50% more page faults than optimal on this reference string (12 versus 8). This overhead translates directly to additional disk I/O, longer effective access times, and reduced throughput.
FIFO typically produces 20-50% more page faults than OPT across diverse workloads. While OPT is unimplementable, LRU often comes within 5-15% of OPT. The gap between FIFO and OPT is thus significantly larger than the gap between LRU and OPT.
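Fault counts like these are easy to check mechanically by simulating both policies. A minimal sketch (the function names `fifo_faults` and `opt_faults` are illustrative, not from any library):

```python
def fifo_faults(refs, num_frames):
    """Count FIFO page faults: evict the page loaded earliest."""
    frames, faults = [], 0          # list order = load order; hits never reorder
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)       # evict oldest-loaded page
            frames.append(page)
    return faults

def opt_faults(refs, num_frames):
    """Count OPT (Belady) faults: evict the page used farthest in the future."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            rest = refs[i + 1:]
            # Pages never referenced again rank as "infinitely far" and go first
            victim = max(frames,
                         key=lambda p: rest.index(p) if p in rest else float('inf'))
            frames.discard(victim)
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2]
print(fifo_faults(refs, 3), opt_faults(refs, 3))
```

Running a simulator like this over candidate reference strings is also the standard way to sanity-check hand-built traces before drawing conclusions from them.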
Least Recently Used (LRU) is FIFO's most direct competitor—a practical algorithm that can be implemented (albeit with overhead). Comparing FIFO to LRU reveals what we lose by using the simpler algorithm.
The Fundamental Difference:
| Aspect | FIFO | LRU |
|---|---|---|
| Eviction criterion | Oldest by load time | Longest since last access |
| Information used | When page was loaded | When page was last used |
| On page hit | No action | Update timestamp |
| Hot page protection | None | High (recently used = protected) |
Key Insight:
LRU considers how pages are used after loading, while FIFO only considers when they arrived. This additional information allows LRU to protect frequently-accessed pages regardless of when they were loaded.
Comparative Performance Study:
Using the same reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2
| Metric | FIFO | LRU | Difference |
|---|---|---|---|
| Page faults (3 frames) | 12 | 10 | -2 (LRU better) |
| Fault rate | 80% | 66.7% | -13.3% |
| Matches OPT | No | Sometimes | LRU often achieves OPT |
When FIFO Equals LRU:
FIFO and LRU produce identical results when:
- no page is ever re-referenced while resident (pure sequential scans), or
- pages are re-referenced in the same order they were loaded, so recency order and load order coincide (for example, a loop that fits entirely in the frames).
In practice, these conditions rarely hold. LRU's adaptive behavior typically outperforms FIFO.
LRU requires updating state on every memory access—a significant overhead. This is why most real systems use LRU approximations (like Clock/Second Chance) that capture most of LRU's benefits with FIFO-like overhead. Pure FIFO's only advantage is zero hit overhead.
FIFO's performance varies dramatically with workload characteristics. Understanding these patterns allows prediction and informed algorithm selection.
Pattern 1: Sequential Scan (FIFO Optimal)
Access pattern: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 (no repetition)
With 3 frames: all 10 references fault, but these are compulsory misses that no algorithm can avoid.
FIFO and LRU are identical for pure sequential scans. Both are also optimal. FIFO wins on overhead.
Pattern 2: Repeated Sequential (FIFO Near-Optimal)
Access pattern: 1, 2, 3, 1, 2, 3, 1, 2, 3
With 3 frames: 3 warmup faults, then every subsequent reference hits (3 faults, 6 hits).
When the repeating pattern size equals frame count, FIFO achieves optimal.
Pattern 3: Cyclic Larger Than Frames (FIFO Catastrophic)
Access pattern: 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4 (cycle of 4)
With 3 frames: every reference faults, 12 faults in 12 references (a 100% fault rate), for FIFO and LRU alike.
This is the worst case for any algorithm. The cycle length (4) exceeds frame count (3), guaranteeing constant eviction of about-to-be-used pages.
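The contrast between patterns 2 and 3 can be demonstrated with a small FIFO simulator (`fifo_fault_rate` is an illustrative helper, not a library function):

```python
from collections import deque

def fifo_fault_rate(refs, num_frames):
    """Fraction of references that fault under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.discard(queue.popleft())  # evict oldest-loaded page
            frames.add(page)
            queue.append(page)
    return faults / len(refs)

small = [1, 2, 3] * 4       # cycle of 3 fits in 3 frames
large = [1, 2, 3, 4] * 3    # cycle of 4 exceeds 3 frames

print(fifo_fault_rate(small, 3))   # only the 3 warmup faults: 0.25
print(fifo_fault_rate(large, 3))   # every reference faults: 1.0
```

Growing the cycle by a single page beyond the frame count flips FIFO from near-optimal to a 100% fault rate, which is why the cycle-vs-frames relationship dominates everything else in this pattern family.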
Pattern 4: Hot-Cold Mixed (FIFO Poor)
Access pattern: 1, 2, 3, 1, 1, 1, 4, 1, 1, 1, 5, 1, 1, 1 (page 1 is hot)
With 3 frames: FIFO evicts hot page 1 as soon as page 4 arrives and must immediately fault it back in, giving 6 faults versus 5 for LRU, which never evicts page 1.
Hot-cold patterns expose FIFO's blindness to usage frequency.
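The hot-cold effect is visible directly in the eviction traces. A sketch comparing which pages each policy evicts (the helper names are illustrative):

```python
def fifo_evictions(refs, num_frames):
    """Return the pages FIFO evicts, in order (eviction by load order)."""
    frames, evicted = [], []
    for page in refs:
        if page not in frames:
            if len(frames) == num_frames:
                evicted.append(frames.pop(0))
            frames.append(page)
    return evicted

def lru_evictions(refs, num_frames):
    """Return the pages LRU evicts, in order (eviction by recency order)."""
    frames, evicted = [], []
    for page in refs:
        if page in frames:
            frames.remove(page)              # refresh recency on a hit
        elif len(frames) == num_frames:
            evicted.append(frames.pop(0))    # least recently used at the front
        frames.append(page)                  # most recently used at the back
    return evicted

refs = [1, 2, 3, 1, 1, 1, 4, 1, 1, 1, 5, 1, 1, 1]
print(fifo_evictions(refs, 3))   # [1, 2, 3]: the hot page goes first
print(lru_evictions(refs, 3))    # [2, 3]: page 1 is never evicted
```

The trace makes FIFO's blindness concrete: page 1 is the most-accessed page in the string, yet it is the very first victim because it was loaded earliest.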
| Pattern Type | Characteristics | FIFO Performance | LRU Comparison |
|---|---|---|---|
| Sequential Scan | Each page accessed once | Optimal | Equal |
| Small Loop | Cycle size ≤ frames | Optimal after warmup | Equal |
| Large Loop | Cycle size > frames | Catastrophic (100% faults) | Equally bad |
| Hot-Cold | Few hot pages, many cold | Poor (evicts hot pages) | Much better |
| Working Set Phases | Distinct phases, no overlap | Good (old pages obsolete) | Similar |
| Random | Uniform random access | Probabilistically okay | Similar |
| Zipfian/Power Law | Skewed popularity | Poor | Better (protects popular) |
Real applications combine multiple patterns: hot startup pages (permanent), transient computation data (sequential), and user-triggered code paths (random). FIFO handles sequential data fine but fails on hot pages—precisely the pages that matter most for overall performance.
Let's perform rigorous quantitative analysis to understand FIFO's performance characteristics mathematically.
Hit Ratio Analysis:
The hit ratio h is defined as:
h = (Total References - Page Faults) / Total References
For FIFO, the hit ratio depends heavily on the relationship between working set size W and frame count F:
| Condition | Hit Ratio Range | Behavior |
|---|---|---|
| F ≫ W | High (0.8-1.0) | Working set always resident |
| F ≈ W | Variable (0.3-0.8) | Border case, pattern-dependent |
| F < W | Low (0.0-0.3) | Constant eviction required |
Effective Access Time (EAT) Analysis:
Effective access time combines hit and miss costs:
EAT = h × T_memory + (1-h) × T_fault
Where:
- h = hit ratio
- T_memory = memory access time, ≈ 100 ns (0.1 μs)
- T_fault = page fault service time, ≈ 100 μs (SSD) to 10 ms (rotational disk)
Example Calculation (using T_memory = 0.1 μs and T_fault = 100 μs for an SSD):
| Scenario | FIFO Hit Ratio | LRU Hit Ratio | FIFO EAT (SSD) | LRU EAT (SSD) |
|---|---|---|---|---|
| Good (F ≫ W) | 0.95 | 0.98 | 5.1 μs | 2.1 μs |
| Moderate (F ≈ W) | 0.70 | 0.85 | 30.0 μs | 15.0 μs |
| Poor (F < W) | 0.40 | 0.55 | 60.0 μs | 45.0 μs |
Even small differences in hit ratio translate to significant EAT differences due to the high cost of page faults.
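The table's figures follow directly from the formula; a quick check, assuming T_memory = 0.1 μs and an SSD fault cost of 100 μs:

```python
def eat_us(hit_ratio, t_memory_us=0.1, t_fault_us=100.0):
    """Effective access time in microseconds: h*T_memory + (1-h)*T_fault."""
    return hit_ratio * t_memory_us + (1 - hit_ratio) * t_fault_us

for h_fifo, h_lru in [(0.95, 0.98), (0.70, 0.85), (0.40, 0.55)]:
    print(f"FIFO EAT: {eat_us(h_fifo):6.2f} us   LRU EAT: {eat_us(h_lru):6.2f} us")
```

The table rounds to one decimal place; the exact values differ only past that. Note how a mere 3-point hit-ratio difference (0.95 vs 0.98) more than doubles the effective access time, because the fault term dwarfs the memory term.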
FIFO's performance gap with LRU is largest in the 'moderate' regime where frames approximately equal working set size. This is precisely where most memory-constrained systems operate, making FIFO's weakness most visible in practice.
Memory pressure occurs when the system's memory demands exceed available physical RAM. This is the regime where page replacement algorithms matter most—and where FIFO's weaknesses become critical.
Thrashing Risk:
Thrashing occurs when the system spends more time handling page faults than executing useful work. With FIFO, the thrashing threshold arrives earlier than under LRU: hot pages are evicted purely because they are old, faulted straight back in, and each needless round trip adds to the fault-handling load.
FIFO's Thrashing Behavior:
Under memory pressure, FIFO exhibits:
| Pressure Level | FIFO Behavior | LRU Behavior | Impact Differential |
|---|---|---|---|
| Low (F > 2×W) | Excellent | Excellent | Negligible |
| Moderate (F ≈ 1.2×W) | Good | Very Good | 10-15% more faults |
| High (F ≈ W) | Degraded | Acceptable | 20-30% more faults |
| Severe (F ≈ 0.8×W) | Poor | Degraded | 30-50% more faults |
| Critical (F ≈ 0.5×W) | Near-thrashing | Poor | Variable, both struggling |
Real-World Implications:
Modern systems face memory pressure during:
| Scenario | Typical Pressure | FIFO Risk |
|---|---|---|
| Virtual machine overcommit | High | Severe—VMs share physical RAM |
| Container density | Moderate-High | Significant—containers compete |
| Memory-intensive compilation | Moderate | Noticeable—large working sets |
| Database operations | Variable | Pattern-dependent |
| Browser with many tabs | High | Severe—many working sets |
| IDE with large projects | Moderate | Noticeable—index structures |
When FIFO thrashes, it creates a vicious cycle: evicting page A causes a fault, bringing in A evicts B, referencing B evicts C, and so on. Unlike LRU which stabilizes around truly hot pages, FIFO offers no stability mechanism. The system oscillates indefinitely.
To rigorously evaluate FIFO, we need systematic benchmarking methodology.
Benchmarking Framework:
```python
import random
from collections import deque
from typing import List


class PageReplacementBenchmark:
    """Framework for comparing page replacement algorithms."""

    def __init__(self, num_frames: int):
        self.num_frames = num_frames

    def generate_workloads(self) -> dict:
        """Generate standardized test workloads."""
        workloads = {}

        # Workload 1: Sequential scan
        workloads['sequential'] = list(range(1, 101))

        # Workload 2: Repeated small loop
        workloads['small_loop'] = [i % 5 + 1 for i in range(100)]

        # Workload 3: Large loop (cycle > frames)
        workloads['large_loop'] = [i % (self.num_frames + 2) + 1 for i in range(100)]

        # Workload 4: Hot-cold (80% hot, 20% cold)
        hot_pages = [1, 2, 3]
        cold_pages = list(range(4, 21))
        workloads['hot_cold'] = [
            random.choice(hot_pages if random.random() < 0.8 else cold_pages)
            for _ in range(100)
        ]

        # Workload 5: Zipfian distribution
        workloads['zipfian'] = self._generate_zipfian(100, 20)

        # Workload 6: Locality bursts
        workloads['locality_bursts'] = self._generate_bursts(100)

        return workloads

    def _generate_zipfian(self, length: int, pages: int) -> List[int]:
        """Generate Zipfian-distributed page references: P(page k) ∝ 1/k."""
        harmonic = sum(1 / k for k in range(1, pages + 1))
        result = []
        for _ in range(length):
            r = random.random()
            cumulative = 0.0
            for k in range(1, pages + 1):
                cumulative += (1 / k) / harmonic
                if r <= cumulative:
                    result.append(k)
                    break
            else:
                result.append(pages)  # guard against floating-point round-off
        return result

    def _generate_bursts(self, length: int) -> List[int]:
        """Generate locality burst pattern."""
        result = []
        current_locality = list(range(1, 6))  # start with pages 1-5
        for i in range(length):
            if i % 20 == 0:  # shift locality every 20 references
                base = random.randint(1, 10) * 5
                current_locality = list(range(base, base + 5))
            result.append(random.choice(current_locality))
        return result

    # Minimal reference simulators (each returns the total fault count).

    def simulate_fifo(self, refs: List[int]) -> int:
        """Fault count under FIFO: evict in load order."""
        frames, queue, faults = set(), deque(), 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == self.num_frames:
                    frames.discard(queue.popleft())
                frames.add(page)
                queue.append(page)
        return faults

    def simulate_lru(self, refs: List[int]) -> int:
        """Fault count under LRU: evict in recency order."""
        frames, faults = [], 0
        for page in refs:
            if page in frames:
                frames.remove(page)  # refresh recency on a hit
            else:
                faults += 1
                if len(frames) == self.num_frames:
                    frames.pop(0)    # least recently used is at the front
            frames.append(page)      # most recently used at the back
        return faults

    def simulate_opt(self, refs: List[int]) -> int:
        """Fault count under OPT: evict the page used farthest in the future."""
        frames, faults = set(), 0
        for i, page in enumerate(refs):
            if page in frames:
                continue
            faults += 1
            if len(frames) == self.num_frames:
                rest = refs[i + 1:]
                victim = max(frames,
                             key=lambda p: rest.index(p) if p in rest
                             else float('inf'))
                frames.discard(victim)
            frames.add(page)
        return faults

    def run_comparison(self) -> dict:
        """Run all algorithms on all workloads."""
        results = {}
        workloads = self.generate_workloads()

        for name, refs in workloads.items():
            fifo_faults = self.simulate_fifo(refs)
            lru_faults = self.simulate_lru(refs)
            opt_faults = self.simulate_opt(refs)
            results[name] = {
                'fifo': fifo_faults,
                'lru': lru_faults,
                'opt': opt_faults,
                'fifo_vs_opt': (fifo_faults - opt_faults) / opt_faults * 100,
                'fifo_vs_lru': ((fifo_faults - lru_faults) / lru_faults * 100
                                if lru_faults > 0 else 0),
            }
        return results
```

Typical Benchmark Results (10 Frames):
| Workload | FIFO Faults | LRU Faults | OPT Faults | FIFO vs OPT | FIFO vs LRU |
|---|---|---|---|---|---|
| Sequential | 100 | 100 | 100 | +0% | +0% |
| Small Loop | 5 | 5 | 5 | +0% | +0% |
| Large Loop | 100 | 100 | 17 | +488% | +0% |
| Hot-Cold | 45 | 28 | 22 | +105% | +61% |
| Zipfian | 38 | 25 | 20 | +90% | +52% |
| Locality Bursts | 42 | 31 | 18 | +133% | +35% |
The large loop workload shows both FIFO and LRU struggling equally—neither can help when the cycle exceeds frames. Hot-cold and Zipfian workloads show FIFO's specific weakness: ignoring access frequency. These patterns are common in real applications.
If FIFO must be used despite its performance limitations, several strategies can mitigate its weaknesses:
Strategy 1: Prefetching
Anticipate page needs and load them before faults occur:
On fault for page N:
Load page N
Also load pages N+1, N+2 (spatial prefetch)
This helps sequential patterns where FIFO performs well anyway, but doesn't address hot-page eviction.
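The effect on a sequential scan can be sketched as follows (the degree-3 prefetch and the 3-frame configuration are illustrative assumptions, and `fifo_faults_prefetch` is a hypothetical helper):

```python
from collections import deque

def fifo_faults_prefetch(refs, num_frames, degree=1):
    """FIFO fault count with spatial prefetch: a fault on N also loads
    pages N+1 .. N+degree-1 into the FIFO queue."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            for p in range(page, page + degree):  # demanded page plus neighbors
                if p not in frames:
                    if len(frames) == num_frames:
                        frames.discard(queue.popleft())
                    frames.add(p)
                    queue.append(p)
    return faults

scan = list(range(1, 11))                       # sequential scan: pages 1..10
print(fifo_faults_prefetch(scan, 3, degree=1))  # no prefetch: 10 faults
print(fifo_faults_prefetch(scan, 3, degree=3))  # prefetch 2 ahead: 4 faults
```

The fault count drops, but only because each fault now does the I/O for several pages at once; the hot-page eviction problem is untouched, as the text notes.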
Strategy 2: Working Set Sizing
Ensure frame count significantly exceeds working set: as a rule of thumb, provision F ≥ 2×W so that even poor eviction choices rarely touch working-set pages (the low-pressure regime in the table above).
Strategy 3: Hybrid Approaches
Combine FIFO with basic protection:
Modified FIFO:
If page_to_evict has been accessed recently:
Move to back of queue (give second chance)
Else:
Evict normally
This becomes the Clock algorithm—essentially FIFO with one bit of LRU information.
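A minimal sketch of that idea, FIFO's queue plus one reference bit per resident page (the class name is illustrative):

```python
from collections import deque

class ClockReplacer:
    """FIFO queue plus one reference bit per page (second chance)."""

    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.queue = deque()   # pages in load order
        self.ref_bit = {}      # page -> referenced since last sweep?
        self.faults = 0

    def access(self, page):
        if page in self.ref_bit:
            self.ref_bit[page] = True            # cheap hit path: set one bit
            return
        self.faults += 1
        if len(self.queue) == self.num_frames:
            while True:
                victim = self.queue.popleft()
                if self.ref_bit[victim]:
                    self.ref_bit[victim] = False  # second chance: clear bit,
                    self.queue.append(victim)     # move to back of queue
                else:
                    del self.ref_bit[victim]      # old AND unreferenced: evict
                    break
        self.queue.append(page)
        self.ref_bit[page] = False

# Hot-cold string from Pattern 4: page 1 is hot
clock = ClockReplacer(3)
for p in [1, 2, 3, 1, 1, 1, 4, 1, 1, 1, 5, 1, 1, 1]:
    clock.access(p)
print(clock.faults)   # 5 faults: hot page 1 survives, unlike pure FIFO's 6
```

On a hit the only work is setting a bit, so the hit path stays FIFO-cheap; yet the reference bit is enough to keep the hot page resident on this string.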
The most effective mitigation is to not use pure FIFO. The Clock algorithm provides FIFO-like simplicity and overhead while capturing most of LRU's benefits. If FIFO is mandated, over-provisioning memory is the only reliable mitigation—ensure frames significantly exceed working set size.
We've comprehensively analyzed FIFO's performance characteristics. Let's consolidate the key insights:
- FIFO typically produces 20-50% more page faults than OPT, while LRU usually comes within 5-15% of OPT.
- FIFO matches LRU (and OPT) on sequential scans and loops that fit in memory, but loses badly on hot-cold, Zipfian, and locality-burst workloads.
- The FIFO-LRU gap is widest under moderate memory pressure (F ≈ W), precisely where replacement policy matters most.
- If FIFO-like overhead is required, the Clock algorithm recovers most of LRU's benefit for one reference bit per frame.
What's Next:
In the final page of this module, we examine the practical aspects of queue management in FIFO implementations—the data structure operations, memory layout considerations, and integration with operating system memory management infrastructure.
You now have a quantitative understanding of FIFO's performance characteristics, its relationship to other algorithms, and the workload patterns that reveal its strengths and weaknesses. This knowledge enables informed algorithm selection based on expected workload profiles.