The FIFO (First-In, First-Out) algorithm offers simplicity: evict the page that has been in memory the longest. But as we learned, FIFO ignores whether a page is actively being used. A page loaded long ago but still frequently accessed will be evicted, causing unnecessary page faults.
The Second-Chance Algorithm (also known as the Clock Algorithm) elegantly addresses this flaw with a single modification to FIFO: before evicting a page, check its reference bit. If the page has been referenced since it was last considered for eviction, give it a "second chance" by clearing the bit and moving on to the next candidate.
This simple idea transforms FIFO from a purely temporal policy into an LRU approximation that respects actual page usage, while maintaining FIFO's implementation simplicity.
By the end of this page, you will deeply understand the Second-Chance algorithm: its conceptual foundation, why it approximates LRU, the circular queue (Clock) implementation that makes it efficient, detailed pseudocode and implementation, complexity analysis, and comparison with pure FIFO. You'll also understand why this algorithm remains a cornerstone of memory management in production systems.
To appreciate Second-Chance, we must first understand exactly what's wrong with FIFO and how Second-Chance addresses it.
FIFO's fundamental blindness:
FIFO maintains a queue of pages ordered by arrival time. When replacement is needed, it evicts the page at the front of the queue (oldest arrival). This completely ignores access patterns: a page's position in the queue depends only on when it arrived, never on how often or how recently it has been used.
Example scenario:
Consider a 3-frame system with the reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
With pure FIFO, pages 1 and 2 (loaded early) get evicted despite being heavily reused. Each subsequent access to 1 or 2 causes another page fault.
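To make this concrete, here is a minimal standalone sketch of pure FIFO replacement in C (a toy counter, not OS code), using the 3-frame setup and reference string above. On this string it counts 9 faults out of 12 references, several of them re-faults on the heavily reused pages 1 and 2:

```c
#include <assert.h>
#include <stdbool.h>

#define FRAMES 3

/* Simulate pure FIFO replacement; returns the number of page faults. */
static int fifo_faults(const int *refs, int n) {
    int frames[FRAMES];
    int next = 0;    /* slot holding the oldest arrival (next eviction) */
    int loaded = 0;  /* frames filled so far */
    int faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < loaded; j++) {
            if (frames[j] == refs[i]) { hit = true; break; }
        }
        if (hit) continue;  /* resident: FIFO does not even notice */

        faults++;
        if (loaded < FRAMES) {
            frames[loaded++] = refs[i];  /* free frame available */
        } else {
            frames[next] = refs[i];      /* evict oldest arrival */
            next = (next + 1) % FRAMES;
        }
    }
    return faults;
}
```

Note that a hit changes nothing in FIFO's state — exactly the blindness Second-Chance fixes by recording hits in a reference bit.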
| Scenario | FIFO Decision | Second-Chance Decision |
|---|---|---|
| Old page, actively used | Evict (oldest = victim) | Keep (R=1, give second chance) |
| Old page, not used | Evict (oldest = victim) | Evict (R=0, fair victim) |
| New page, actively used | Keep (not oldest) | Keep (not at front) |
| New page, not used | Keep (not oldest) | Keep (not at front yet) |
The key insight:
If a page has been referenced since it arrived (or since its last second chance), it's probably still useful. Evicting it would likely cause a future page fault. By checking the reference bit before eviction, we can avoid replacing pages that are actively in use.
This single check transforms FIFO from a naive temporal policy into an approximation of LRU that costs almost nothing extra.
In the absence of any reference bit information, Second-Chance degrades exactly to FIFO: if all pages have R=0 (none referenced recently), the algorithm evicts the page at the front of the queue. Whenever the reference bits do carry information, Second-Chance uses it to protect active pages, so in practice it performs at least as well as FIFO on realistic workloads.
The Second-Chance algorithm modifies FIFO page replacement with a simple rule: inspect the reference bit before evicting a page. If the bit is set, the page gets another chance.
Algorithm description (linear queue version):

1. Maintain all resident pages in a FIFO queue, ordered by arrival.
2. On a page fault with no free frame, examine the page at the front of the queue.
3. If its reference bit is R=0, evict it.
4. If R=1, clear the bit, move the page to the back of the queue, and repeat from step 2.
5. Load the new page into the freed frame and append it to the back of the queue.
Why "Second Chance"?
When a page has R=1, we don't evict it immediately. Instead, we:

1. Clear its reference bit (set R=0).
2. Move the page to the back of the queue.
3. Continue examining the new front of the queue.
This gives the page a "second chance" at survival. If it gets referenced again before cycling back to the front (before the algorithm needs another victim), it will have R=1 again and receive yet another chance.
Conceptually: Moving a page to the back of the queue is equivalent to treating it as newly arrived. Combined with clearing R, this "resets" the page's status. The page effectively starts fresh, earning its keep only if it's actually used before the queue cycles around.
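The queue mechanics can be sketched in a few lines of standalone C (a toy using a small fixed array as the queue, not kernel data structures — the shifting loop is exactly the data movement the Clock formulation later eliminates):

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    int page;
    bool referenced;  /* the R bit */
} entry_t;

/* Linear-queue Second-Chance victim selection. queue[0] is the front.
 * An entry with R=1 is re-enqueued at the back with R cleared; the first
 * entry found with R=0 is the victim. Returns the victim page; on return
 * the surviving len-1 entries occupy queue[0..len-2]. */
static int second_chance_victim(entry_t *queue, int len) {
    for (;;) {
        entry_t front = queue[0];
        /* shift everyone forward — O(len) data movement per step */
        for (int i = 1; i < len; i++) queue[i - 1] = queue[i];
        if (!front.referenced) {
            return front.page;        /* R=0: fair victim */
        }
        front.referenced = false;     /* R=1: second chance */
        queue[len - 1] = front;       /* treat as newly arrived */
    }
}
```

Termination holds because each pass over an R=1 entry clears its bit, so after at most `len` re-enqueues the front must have R=0.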
Termination guarantee:
The algorithm must terminate. In the worst case, every page has R=1. As we cycle through, each page we pass has its reference bit cleared.
After examining all N pages, we've cleared all reference bits, so the next page we see must have R=0 and can be evicted. A single victim search therefore examines at most N+1 pages; implementations often use 2N as a conservative cap (each page examined at most twice).
Second-Chance approximates LRU because recently-used pages (with R=1) are moved to the back of the queue, simulating the effect of being "most recently used." The approximation isn't perfect—we can't distinguish among pages that were all used recently—but it captures the essential distinction between used and unused pages.
The linear queue implementation of Second-Chance has a significant inefficiency: moving pages to the back of the queue requires list manipulation. With hundreds of thousands of pages, this overhead adds up.
The Clock Algorithm provides an equivalent implementation using a circular buffer instead of a linear queue. Instead of moving pages, we move a single pointer—the "clock hand."
The circular buffer insight:
If pages are arranged in a circle, moving a page "to the back of the queue" is equivalent to simply advancing the hand past it. No actual data movement occurs; only the hand's position changes.
This is functionally identical to Second-Chance but eliminates all list manipulation overhead.
Step-by-step Clock behavior:
Suppose the hand points to Page A and we need a victim:

- If Page A has R=1: clear the bit, advance the hand to the next page, and repeat the check there.
- If Page A has R=0: evict Page A, load the new page into that frame, and advance the hand one position past it.

Now suppose another page fault occurs: the hand simply resumes from wherever it stopped. Pages the previous sweep cleared are prime eviction candidates — unless they have been referenced again in the meantime.
The clock metaphor:
Imagine the pages arranged like hours on a clock face. The hand sweeps around, checking each page: a page with R=1 gets a reprieve (its bit is cleared as the hand passes), while the first page found with R=0 is replaced and the hand stops just past it.
The name "Clock Algorithm" comes from this visual metaphor of a sweeping clock hand.
While Second-Chance and Clock are algorithmically equivalent, Clock is universally preferred in practice because:

- Advancing the hand is a single pointer increment; no queue nodes are relinked.
- A fixed circular array maps naturally onto the frame table the OS already maintains.
- Cache behavior is better: the sweep touches contiguous array entries rather than chasing list pointers.
- There is less mutable shared state, which simplifies locking under concurrency.
Let's examine a complete implementation of the Clock algorithm, including data structures, victim selection, and page loading.
```c
/*
 * Clock (Second-Chance) Page Replacement Algorithm
 * Production-style implementation with full commentary
 */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define NUM_FRAMES         256
#define PAGE_SIZE          4096
#define REFERENCE_BIT_MASK 0x20  // Bit 5 ("Accessed") in x86 PTE
#define DIRTY_BIT_MASK     0x40  // Bit 6 in x86 PTE
#define PRESENT_BIT_MASK   0x01  // Bit 0 in x86 PTE

/* Kernel services assumed to be provided elsewhere */
void panic(const char *msg);
void invalidate_tlb_page(uint32_t process_id, uint32_t virtual_page);
void write_page_to_disk(uint32_t process_id, uint32_t virtual_page,
                        uint32_t frame_index);
void read_page_from_disk(uint32_t process_id, uint32_t virtual_page,
                         uint32_t frame_index);
uint64_t *get_pte_pointer(uint32_t process_id, uint32_t virtual_page);
uint64_t create_pte(uint32_t frame_index, uint64_t flags);
void *frame_to_physical_addr(uint32_t frame_index);

/*
 * Frame table entry: tracks what's in each physical frame
 */
typedef struct {
    uint32_t process_id;    // Owning process (0 if free)
    uint32_t virtual_page;  // Virtual page number in frame
    uint64_t *pte_ptr;      // Pointer to page table entry
    bool occupied;          // Is this frame in use?
} frame_entry_t;

/*
 * The clock structure
 * (named page_clock_t to avoid clashing with the standard clock_t)
 */
typedef struct {
    frame_entry_t frames[NUM_FRAMES];
    uint32_t clock_hand;    // Current position (0 to NUM_FRAMES-1)
    uint32_t num_occupied;  // Number of frames in use
} page_clock_t;

// Global clock instance
page_clock_t page_clock;

/*
 * Initialize the clock algorithm
 */
void clock_init(void) {
    memset(&page_clock, 0, sizeof(page_clock_t));
    page_clock.clock_hand = 0;
    page_clock.num_occupied = 0;
}

/*
 * Check if reference bit is set for a frame
 */
static inline bool is_referenced(frame_entry_t *frame) {
    return (*frame->pte_ptr & REFERENCE_BIT_MASK) != 0;
}

/*
 * Clear reference bit for a frame
 * Must also handle TLB invalidation
 */
static void clear_reference_bit(frame_entry_t *frame) {
    // Atomically clear the reference bit in the PTE
    // (cast the mask to uint64_t so ~ doesn't clobber the upper PTE bits)
    __atomic_and_fetch(frame->pte_ptr, ~(uint64_t)REFERENCE_BIT_MASK,
                       __ATOMIC_SEQ_CST);

    // Invalidate TLB entry for this page
    // On x86, this is done with the INVLPG instruction
    invalidate_tlb_page(frame->process_id, frame->virtual_page);
}

/*
 * Check if page is dirty (modified)
 */
static inline bool is_dirty(frame_entry_t *frame) {
    return (*frame->pte_ptr & DIRTY_BIT_MASK) != 0;
}

/*
 * Find a free frame or select a victim using the clock algorithm
 * Returns the frame index
 */
uint32_t clock_find_frame(void) {
    // First, check for free frames
    for (uint32_t i = 0; i < NUM_FRAMES; i++) {
        if (!page_clock.frames[i].occupied) {
            return i;  // Found a free frame
        }
    }

    // All frames occupied: need to select a victim
    // Clock algorithm: sweep until finding R=0
    uint32_t iterations = 0;
    const uint32_t max_iterations = 2 * NUM_FRAMES;  // Termination guarantee

    while (iterations < max_iterations) {
        frame_entry_t *frame = &page_clock.frames[page_clock.clock_hand];

        if (!is_referenced(frame)) {
            // Found victim: R=0, not recently used
            uint32_t victim_index = page_clock.clock_hand;
            // Advance hand past victim
            page_clock.clock_hand = (page_clock.clock_hand + 1) % NUM_FRAMES;
            return victim_index;
        }

        // Give second chance: clear R, advance hand
        clear_reference_bit(frame);
        page_clock.clock_hand = (page_clock.clock_hand + 1) % NUM_FRAMES;
        iterations++;
    }

    // Should never reach here due to termination guarantee
    panic("Clock algorithm failed to find victim");
    return 0;
}

/*
 * Evict the page currently held in a frame (write back if dirty)
 */
static void evict_frame(frame_entry_t *frame, uint32_t frame_index) {
    // Check if victim is dirty (needs write-back)
    if (is_dirty(frame)) {
        // Write page contents to swap/disk
        write_page_to_disk(frame->process_id, frame->virtual_page, frame_index);
    }

    // Clear present bit in victim's PTE
    *frame->pte_ptr &= ~(uint64_t)PRESENT_BIT_MASK;
    invalidate_tlb_page(frame->process_id, frame->virtual_page);
}

/*
 * Handle a page fault using the clock algorithm
 */
void clock_handle_page_fault(uint32_t process_id, uint32_t virtual_page) {
    // Step 1: Find or select a frame
    uint32_t frame_index = clock_find_frame();
    frame_entry_t *frame = &page_clock.frames[frame_index];

    // Step 2: If frame was occupied, evict its current page;
    // otherwise a previously free frame is being consumed
    if (frame->occupied) {
        evict_frame(frame, frame_index);
    } else {
        page_clock.num_occupied++;
    }

    // Step 3: Load new page from disk
    read_page_from_disk(process_id, virtual_page, frame_index);

    // Step 4: Update frame table entry
    frame->process_id = process_id;
    frame->virtual_page = virtual_page;
    frame->pte_ptr = get_pte_pointer(process_id, virtual_page);
    frame->occupied = true;

    // Step 5: Update page table entry for new page
    // Set present bit, clear reference and dirty bits
    *frame->pte_ptr = create_pte(frame_index, PRESENT_BIT_MASK);
    // No need to set reference bit - hardware will set it on access
}

/*
 * Allocate a new page (e.g., for anonymous memory)
 */
void clock_allocate_page(uint32_t process_id, uint32_t virtual_page) {
    uint32_t frame_index = clock_find_frame();
    frame_entry_t *frame = &page_clock.frames[frame_index];

    if (frame->occupied) {
        // Handle eviction (same as page fault case)
        evict_frame(frame, frame_index);
    } else {
        page_clock.num_occupied++;
    }

    // Zero-fill the new page for security
    memset(frame_to_physical_addr(frame_index), 0, PAGE_SIZE);

    // Set up the frame and page table
    frame->process_id = process_id;
    frame->virtual_page = virtual_page;
    frame->pte_ptr = get_pte_pointer(process_id, virtual_page);
    frame->occupied = true;
    *frame->pte_ptr = create_pte(frame_index, PRESENT_BIT_MASK);
}
```

Implementation notes:
Atomic operations: The reference bit update uses atomic operations because multiple CPUs might access the PTE simultaneously
TLB invalidation: Every reference bit clear must be followed by TLB invalidation to ensure consistency
Dirty page handling: When evicting a dirty page, we must write it back to disk before freeing the frame
Termination guarantee: The loop has a maximum iteration count of 2N, though in practice it usually completes much faster
Free frame optimization: We first scan for free frames before invoking the clock algorithm, avoiding unnecessary victim selection
Let's trace through the Clock algorithm with a concrete example to solidify your understanding.
Setup: 4 frames (F0-F3), reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, clock hand starting at Frame 0.
Initial state after loading pages 1, 2, 3, 4:
Frame 0: Page 1, R=1
Frame 1: Page 2, R=1
Frame 2: Page 3, R=1
Frame 3: Page 4, R=1
Hand → Frame 0
| Reference | Action | Frame 0 | Frame 1 | Frame 2 | Frame 3 | Hand | Fault? |
|---|---|---|---|---|---|---|---|
| 1 | Load into F0 | 1,R=1 | – | – | – | →F1 | Yes |
| 2 | Load into F1 | 1,R=1 | 2,R=1 | – | – | →F2 | Yes |
| 3 | Load into F2 | 1,R=1 | 2,R=1 | 3,R=1 | – | →F3 | Yes |
| 4 | Load into F3 | 1,R=1 | 2,R=1 | 3,R=1 | 4,R=1 | →F0 | Yes |
| 1 | Hit, set R=1 | 1,R=1 | 2,R=1 | 3,R=1 | 4,R=1 | →F0 | No |
| 2 | Hit, set R=1 | 1,R=1 | 2,R=1 | 3,R=1 | 4,R=1 | →F0 | No |
| 5 | Sweep F0-F3, evict F0 | 5,R=1 | 2,R=0 | 3,R=0 | 4,R=0 | →F1 | Yes |
| 1 | Evict F1, load 1 | 5,R=1 | 1,R=1 | 3,R=0 | 4,R=0 | →F2 | Yes |
| 2 | Evict F2, load 2 | 5,R=1 | 1,R=1 | 2,R=1 | 4,R=0 | →F3 | Yes |
| 3 | Evict F3, load 3 | 5,R=1 | 1,R=1 | 2,R=1 | 3,R=1 | →F0 | Yes |
| 4 | Sweep F0-F3, evict F0 | 4,R=1 | 1,R=0 | 2,R=0 | 3,R=0 | →F1 | Yes |
| 5 | Evict F1, load 5 | 4,R=1 | 5,R=1 | 2,R=0 | 3,R=0 | →F2 | Yes |
Detailed trace for reference 5 (first occurrence):
At this point, frames contain pages 1-4, all with R=1. Hand at F0.

1. Hand at F0 (page 1, R=1): clear R, advance to F1.
2. Hand at F1 (page 2, R=1): clear R, advance to F2.
3. Hand at F2 (page 3, R=1): clear R, advance to F3.
4. Hand at F3 (page 4, R=1): clear R, advance back to F0.
5. Hand at F0 (page 1, now R=0): evict page 1, load page 5, set R=1, advance to F1.
Final state: F0 has Page 5 with R=1. Hand points to F1.
Comparison with FIFO:
| Algorithm | Total Page Faults |
|---|---|
| FIFO | 10 |
| Clock | 10 (same for this example) |
In this particular example, Clock ties FIFO because the reference pattern is adversarial for it: accessing pages 1 and 2 does set their R bits, but the subsequent burst of faults sweeps those bits clear and evicts the pages before the hand can reward them. In practice, with more frames and realistic workloads exhibiting working-set locality, Clock substantially outperforms FIFO.
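The trace can be verified with a small standalone simulation (plain C; arrays of pages and R bits stand in for real frame tables and hardware-set reference bits — none of the kernel structures from the earlier listing are assumed):

```c
#include <assert.h>
#include <stdbool.h>

#define NFRAMES 4

/* Count page faults for the Clock algorithm on a reference string. */
static int clock_faults(const int *refs, int n) {
    int page[NFRAMES];
    bool r[NFRAMES] = { false };
    bool used[NFRAMES] = { false };
    int hand = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < NFRAMES; j++) {
            if (used[j] && page[j] == refs[i]) {
                r[j] = true;  /* hardware would set R on access */
                hit = true;
                break;
            }
        }
        if (hit) continue;

        faults++;
        int slot = -1;
        for (int j = 0; j < NFRAMES; j++) {  /* prefer a free frame */
            if (!used[j]) { slot = j; break; }
        }
        if (slot < 0) {
            /* sweep: clear R=1 entries until one with R=0 is found */
            while (r[hand]) {
                r[hand] = false;
                hand = (hand + 1) % NFRAMES;
            }
            slot = hand;
            hand = (hand + 1) % NFRAMES;  /* advance past the victim */
        }
        page[slot] = refs[i];
        r[slot] = true;
        used[slot] = true;
    }
    return faults;
}
```

On the reference string from the table it reports 10 faults, matching both the hand-worked trace and the FIFO count.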
Understanding the Clock algorithm's complexity helps us appreciate its efficiency and understand when it performs well or poorly.
Time Complexity:
| Operation | Best Case | Worst Case | Amortized |
|---|---|---|---|
| Find victim | O(1) | O(n) | O(1) |
| Page fault handling | O(1) + I/O | O(n) + I/O | O(1) + I/O |
| Page access (no fault) | O(1) | O(1) | O(1) |
Analysis:
Best case (O(1)): The page under the clock hand has R=0. We immediately select it as victim. This is the common case when memory pressure is low.
Worst case (O(n)): All pages have R=1. We must sweep the entire clock once to clear all bits, then find the first page again. This requires n reference bit clears and n+1 checks.
Amortized O(1): Here's the key insight. Each time we pass a page and clear its reference bit, we "charge" that work to the previous access that set the bit. Since each page access can only set one reference bit, and clearing each bit is O(1), the total work amortizes across accesses.
More formally: every reference bit clear is paid for by an earlier access that set the bit (plus at most n bits set when pages are first loaded). Over a trace of A accesses, the total sweep work across all faults is therefore O(A + n), i.e. amortized O(1) per access. The cost of an individual fault can spike to O(n), but it cannot stay there without a matching stream of accesses continually setting bits.
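The charging argument can be checked empirically with an instrumented simulation (standalone C; `clock_run` is a toy model introduced here, not kernel code). Total clears can never exceed total accesses, because every clear consumes a bit set by an earlier access:

```c
#include <assert.h>
#include <stdbool.h>

#define NF 4

/* Clock simulation that also counts reference-bit clears, so the
 * amortized bound (clears <= accesses) can be observed directly. */
static void clock_run(const int *refs, int n, int *out_faults, int *out_clears) {
    int page[NF];
    bool r[NF] = { false };
    bool used[NF] = { false };
    int hand = 0, faults = 0, clears = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < NF; j++) {
            if (used[j] && page[j] == refs[i]) { r[j] = true; hit = true; break; }
        }
        if (hit) continue;

        faults++;
        int slot = -1;
        for (int j = 0; j < NF; j++) {
            if (!used[j]) { slot = j; break; }
        }
        if (slot < 0) {
            while (r[hand]) {            /* sweep */
                r[hand] = false;
                clears++;                /* one clear consumes one earlier set */
                hand = (hand + 1) % NF;
            }
            slot = hand;
            hand = (hand + 1) % NF;
        }
        page[slot] = refs[i];
        r[slot] = true;
        used[slot] = true;
    }
    *out_faults = faults;
    *out_clears = clears;
}
```

Even on a cyclic 6-page workload over 4 frames — the pathological case where every access faults — the clear count stays below the access count.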
Comparison with other algorithms:
| Algorithm | Victim Selection | Memory Overhead | Implementation Complexity |
|---|---|---|---|
| FIFO | O(1) | O(n) queue | Simple |
| Clock | O(1) amortized | O(1) extra | Simple |
| LRU (timestamp) | O(n) | O(n) timestamps | Moderate |
| LRU (stack) | O(n) updates | O(n) pointers | Complex |
| Optimal | O(n) future scan | Oracle required | Impossible |
Clock achieves LRU-like behavior with FIFO-like simplicity and overhead—an excellent engineering tradeoff.
The O(n) worst case occurs when the working set size exceeds available memory. All pages are constantly referenced, so the clock must sweep fully each time. In this state, the system is likely thrashing anyway—high clock sweep cost is a symptom, not the cause, of the problem.
The Clock algorithm and its variants are used extensively in production operating systems and other memory management systems.
Where Clock is used:

- Classic UNIX: SVR4 and Solaris page daemons used a two-handed clock sweeping physical memory.
- BSD-family kernels: the pageout machinery has historically been organized around clock-style reference-bit scanning.
- Linux: page reclaim approximates LRU using hardware reference bits layered with active/inactive lists — a Clock-like mechanism at its core.
- Database buffer pools: PostgreSQL's shared-buffer eviction is an explicit clock sweep with a per-buffer usage counter (GCLOCK-style).
Common variants and enhancements:
Two-handed Clock: Uses two clock hands separated by a fixed distance. The leading hand clears reference bits; the trailing hand evicts pages with R=0. This spreads the clearing work and improves responsiveness.
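A sketch of the two-handed idea, under simplifying assumptions (a single sweep function with a fixed `HAND_GAP`; real implementations such as classic SVR4/Solaris page daemons run the hands continuously over physical memory with a tunable handspread):

```c
#include <assert.h>
#include <stdbool.h>

#define HAND_GAP 2  /* fixed distance between the two hands */

/* Two-handed clock sweep: the leading hand clears reference bits; the
 * trailing hand, HAND_GAP frames behind, evicts the first frame whose
 * bit is still clear — i.e. a page not re-referenced in the window
 * between the two hands' visits. Returns the victim frame index and
 * leaves the leading hand just past its last position. */
static int two_hand_victim(bool *r, int nframes, int *front) {
    for (;;) {
        r[*front] = false;  /* leading hand clears */
        int back = (*front + nframes - HAND_GAP) % nframes;
        *front = (*front + 1) % nframes;
        if (!r[back]) return back;  /* trailing hand evicts */
    }
}
```

The gap controls how long a page has to prove itself: a small gap evicts aggressively, a large gap behaves more like basic Clock.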
Clock-Pro: Uses three "hands" to track hot and cold pages separately, achieving better performance on workloads with repeated access patterns.
GCLOCK (Generalized Clock): Maintains a counter instead of a single bit, allowing finer-grained tracking of access frequency.
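A minimal GCLOCK victim-selection sketch (standalone C; the counter cap and the policy for bumping counters on access vary by system and are assumed here — only the sweep itself is shown):

```c
#include <assert.h>

/* GCLOCK: each frame keeps a small counter instead of a single bit.
 * On access the counter is raised (up to some cap); the sweep decrements
 * nonzero counters and evicts the first frame whose counter is zero, so
 * frequently used pages survive several sweeps instead of just one. */
static int gclock_victim(int *counter, int nframes, int *hand) {
    for (;;) {
        if (counter[*hand] == 0) {
            int victim = *hand;
            *hand = (*hand + 1) % nframes;  /* advance past victim */
            return victim;
        }
        counter[*hand]--;  /* spend one "chance" */
        *hand = (*hand + 1) % nframes;
    }
}
```

With all counters capped at k, a page can receive up to k second chances per residency — basic Clock is the k=1 special case.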
WSClock (Working Set Clock): Combines Clock with working set concepts, pairing each page's reference bit with its time of last use in virtual time to judge whether the page is still in the process's working set.
CAR (Clock with Adaptive Replacement): Self-tuning algorithm that adapts between recency and frequency based on workload characteristics.
| Variant | Key Feature | Benefit | Complexity |
|---|---|---|---|
| Basic Clock | Single hand, R bit | Simple, effective | Low |
| Two-Handed | Separate clear/evict | Smoother operation | Low-Medium |
| Enhanced Clock | Use R + D bits | Better write handling | Low-Medium |
| Clock-Pro | Hot/cold tracking | Handles loops well | Medium |
| CAR | Adaptive tuning | Workload adaptation | Medium-High |
Production OS memory managers are rarely pure Clock implementations. They combine Clock-like mechanics with additional heuristics for: page type (code vs. data), dirty page batching (writing back multiple pages together for efficiency), memory pressure detection, and process fairness. Understanding basic Clock is essential for understanding these more sophisticated systems.
Let's quantify exactly when and how much Clock improves over FIFO.
Theoretical relationship: with no reference information (all R=0), Clock degenerates exactly to FIFO, so the two share worst-case behavior; as reference bits discriminate between hot and cold pages, Clock's decisions approach LRU's. Its fault count therefore typically falls between FIFO's and LRU's on a given workload.
Performance factors:
| Workload Type | FIFO Behavior | Clock Behavior | Clock Advantage |
|---|---|---|---|
| Random access (m distinct pages) | fault rate ≈ 1 − n/m | fault rate ≈ 1 − n/m | Minimal |
| Strong locality (hot set) | Evicts hot pages | Keeps hot pages | Significant |
| Scan pattern | Evicts oldest | Evicts oldest (same) | None |
| Working set < frames | Low faults | Same low faults | None |
| Working set > frames | High faults | Fewer faults | Moderate |
| Bursty access | May evict active | Tracks activity | Significant |
Empirical results (typical workloads):
Studies on real workloads show Clock typically reduces page faults by 10-30% compared to FIFO:
| Workload | FIFO Page Faults | Clock Page Faults | Reduction |
|---|---|---|---|
| Compile job | 15,423 | 11,890 | 23% |
| Database query | 8,721 | 7,104 | 19% |
| Web server | 4,256 | 3,410 | 20% |
| Scientific simulation | 45,789 | 38,120 | 17% |
(Illustrative numbers based on typical research benchmarks)
When Clock doesn't help:
True sequential scans: Reading a large file sequentially means no page is accessed twice. Both FIFO and Clock perform identically (and poorly—memory fills with useless pages).
Perfect random access: Every access is to a random page with no repeat access patterns. No algorithm can exploit locality that doesn't exist.
Thrashing scenarios: When the working set vastly exceeds memory, all algorithms perform poorly. Clock wastes time clearing bits that immediately get set again.
A caution: despite approximating LRU, Clock — like FIFO — is not a stack algorithm, so in principle it can exhibit Belady's Anomaly on contrived reference strings where adding frames increases faults. True LRU is a stack algorithm and immune; Clock only approximates it. In practice, because Clock respects page usage, such anomalies are rare, and adding frames almost always reduces faults.
The Second-Chance/Clock algorithm represents a perfect example of practical systems engineering: a small modification to a simple algorithm (FIFO) that yields significant benefits with minimal additional complexity.
What's next:
The basic Clock algorithm considers only the reference bit. But we also have the dirty bit—a page that has been modified requires a disk write before eviction, which is expensive. The next page explores the Enhanced Second-Chance Algorithm, which uses both reference and dirty bits to make smarter victim selection decisions that minimize disk I/O.
You now understand the Second-Chance and Clock algorithms in depth—from conceptual foundation to implementation details to complexity analysis. This algorithm family represents the foundation of practical page replacement in real operating systems.