We've explored algorithms that use circular scanning to find victims. But what if we took a different approach? Instead of scanning in a specific order, what if we simply classified pages into quality tiers and then randomly selected a victim from the worst tier?
This is the essence of the Not Recently Used (NRU) algorithm. Like Enhanced Second-Chance, NRU uses both the reference bit (R) and modify bit (M) to categorize pages into four classes. But instead of scanning through memory in order, NRU selects a random page from the lowest-numbered (best victim) non-empty class.
NRU represents a different design philosophy: simplicity and probabilistic fairness over deterministic ordering. It's remarkably easy to implement, requires minimal bookkeeping, and performs surprisingly well in practice.
By the end of this page, you will understand: the NRU classification scheme (identical to Enhanced Second-Chance), the algorithm's random selection strategy, implementation approaches, the role of periodic reference bit clearing, NRU's strengths and weaknesses, and when to prefer NRU over other LRU approximations.
NRU uses the same four-class categorization as Enhanced Second-Chance, based on the combination of reference (R) and modify (M) bits:
The key difference from Enhanced Second-Chance: both algorithms use the same classification, but they differ in victim selection strategy:
| Aspect | Enhanced Second-Chance | NRU |
|---|---|---|
| Selection method | Circular scanning | Random selection |
| Ordering | Deterministic (clock position) | Probabilistic |
| Fairness | FIFO within class | Random within class |
| Implementation | Maintains clock hand | Maintains class lists |
Why these classes?
Class 0 (0,0) — "Cold and clean" The page hasn't been accessed since the last reference bit clearing, and it hasn't been modified. Evicting it requires no disk I/O (we can simply discard it) and is unlikely to cause an immediate re-fault.
Class 1 (0,1) — "Cold but dirty" The page hasn't been accessed recently, but was modified at some point. Eviction requires writing the page back to disk, but since it's not being actively used, the write-back is justified.
Class 2 (1,0) — "Hot but clean" The page was accessed recently, suggesting it may be needed again soon. However, if we must evict it, at least no write is needed.
Class 3 (1,1) — "Hot and dirty" The worst possible victim: recently accessed (likely to fault again soon) AND dirty (requires expensive write-back).
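The class numbering above is simply a two-bit value with R as the high bit and M as the low bit. A minimal sketch (`classify` is an illustrative helper, not from any particular kernel):

```c
#include <assert.h>

/* Map the (R, M) bit pair to an NRU class number:
 * (0,0) -> 0, (0,1) -> 1, (1,0) -> 2, (1,1) -> 3 */
static int classify(int r, int m) {
    return (r << 1) | m;
}
```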
How can a page be modified (M=1) but not referenced (R=0)? A write sets both R and M, but periodic clearing resets only R; the M bit stays set until the page is written back to disk. So any page that was written before the last reference-bit clearing, and not touched since, shows M=1 with R=0.
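This write-then-clear sequence can be traced through the bit pair directly. A sketch with illustrative helpers (`after_write`, `after_clearing`, and `class_of` are not real kernel functions):

```c
#include <assert.h>

/* Trace one page's (R, M) state: a write sets both bits (class 3);
 * the next clearing pass resets R only, leaving (0,1) -- class 1. */
struct bits { int r, m; };

static struct bits after_write(struct bits b)    { b.r = 1; b.m = 1; return b; }
static struct bits after_clearing(struct bits b) { b.r = 0;          return b; }
static int class_of(struct bits b)               { return (b.r << 1) | b.m; }
```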
The NRU algorithm is elegantly simple:
Algorithm:
1. Classify every resident page into one of the four classes from its current R and M bits.
2. Find the lowest-numbered non-empty class.
3. Select a page from that class uniformly at random.
4. If the victim is dirty (M=1), write it back to disk, then reuse its frame.
Pseudocode:
```
function NRU_find_victim():
    // Classify all pages into 4 classes
    class_lists = [[], [], [], []]   // Classes 0, 1, 2, 3

    for each page in memory:
        r = page.reference_bit
        m = page.modify_bit
        class_num = r * 2 + m        // 0, 1, 2, or 3
        class_lists[class_num].append(page)

    // Find lowest non-empty class
    for class_num in [0, 1, 2, 3]:
        if class_lists[class_num] is not empty:
            // Randomly select victim from this class
            index = random(0, len(class_lists[class_num]) - 1)
            victim = class_lists[class_num][index]
            return victim

    // Should never reach here if memory has pages
    error("No pages in memory to evict")

function handle_page_fault(new_page):
    if free_frames_available():
        frame = get_free_frame()
    else:
        victim = NRU_find_victim()
        if victim.modify_bit == 1:
            write_page_to_disk(victim)
        frame = victim.frame
        unmap_page(victim)
    load_page(new_page, frame)
    map_page(new_page, frame)
```

Key characteristics:
No scanning order: Unlike Clock, there's no "hand" position. All pages in a class are equally likely to be selected.
Random selection: Within a class, selection is uniformly random. This prevents any single page from being unfairly victimized repeatedly (or unfairly protected).
Priority by class: Classes are strictly ordered. We never evict a class 1 page if any class 0 pages exist.
Simple state: The only state is the pages' R and M bits—no additional per-page counters or positions.
Random selection within a class provides fairness without bookkeeping. If we always picked the first page in class 0, some pages might be repeatedly victimized while others escape. Randomness distributes eviction load evenly across pages with similar access patterns.
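One caveat on the uniformity claim: the `random() % count` idiom used in this page's snippets is very slightly biased whenever the generator's range is not a multiple of `count`. Rejection sampling removes the bias; a sketch (`uniform_index` is an illustrative helper using the standard C `rand`):

```c
#include <assert.h>
#include <stdlib.h>

/* Draw a uniformly distributed index in [0, count) by rejecting
 * draws that fall in the incomplete final bucket of rand()'s range. */
static int uniform_index(int count) {
    int limit = RAND_MAX - (RAND_MAX % count);  /* a multiple of count */
    int v;
    do {
        v = rand();
    } while (v >= limit);
    return v % count;
}
```

For page replacement the bias is negligible in practice, which is why the simple modulo form is common in kernel code.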
For NRU to work effectively, the operating system must periodically clear all reference bits. Without this, all pages would eventually have R=1 (once accessed), and NRU would degrade to selecting among only class 2 and 3 pages.
Why periodic clearing is essential:
Typical clearing strategy:
```c
/*
 * Periodic Reference Bit Clearing for NRU
 * Called from timer interrupt handler
 */

#define CLEAR_INTERVAL_TICKS 100   // Clear every 100 timer ticks
#define TIMER_HZ             100   // 100 ticks per second = 1 tick per 10ms

static uint32_t tick_counter = 0;

void timer_interrupt_handler(void) {
    tick_counter++;

    // Every CLEAR_INTERVAL_TICKS (1 second in this case)
    if (tick_counter >= CLEAR_INTERVAL_TICKS) {
        tick_counter = 0;
        clear_all_reference_bits();
    }

    // Other timer handling...
    schedule();
}

void clear_all_reference_bits(void) {
    for (int frame = 0; frame < NUM_FRAMES; frame++) {
        if (!frame_table[frame].occupied)
            continue;

        page_table_entry_t *pte = frame_table[frame].pte_ptr;

        // Clear the reference bit (atomic operation)
        __atomic_and_fetch(pte, ~REFERENCE_BIT, __ATOMIC_SEQ_CST);
    }

    // CRITICAL: Invalidate all TLB entries
    // Otherwise TLB may return stale R bit values
    flush_tlb_all();
}
```

Clearing interval tradeoffs:
| Interval | Advantages | Disadvantages |
|---|---|---|
| Too short (< 100ms) | Finer-grained recency info | High TLB flush overhead |
| Too long (> 10s) | Low overhead | Poor recency resolution |
| Typical (1-5 s) | Good balance of overhead and recency accuracy | Few; this is the standard choice |
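These intervals translate into the tick constants used by the timer handler above. A sketch of the conversion (assuming the interval is an integral number of ticks; `interval_ms_to_ticks` is an illustrative helper):

```c
#include <assert.h>

/* Convert a clearing interval in milliseconds into timer ticks,
 * for a timer firing `hz` times per second. */
static unsigned interval_ms_to_ticks(unsigned ms, unsigned hz) {
    return (ms * hz) / 1000;
}
```

At `TIMER_HZ = 100`, a 1-second interval gives the 100-tick constant used earlier.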
The effect on class distribution:
Immediately after clearing:
Class 0 (absorbs old classes 0 and 2): all previously-cold-clean pages plus all previously-hot-clean pages
Class 1 (absorbs old classes 1 and 3): all previously-cold-dirty pages plus all previously-hot-dirty pages
Class 2: Empty (no page has R=1 yet)
Class 3: Empty (no page has R=1 yet)
As time passes and pages are accessed:
Class 0: Shrinks as pages get referenced
Class 1: Shrinks as pages get referenced (→ class 3)
Class 2: Grows as clean pages are accessed
Class 3: Grows as dirty pages are accessed
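The clearing transitions follow directly from the class encoding: resetting R zeroes the high bit of the class number while M, the low bit, survives. A sketch (`class_after_clearing` is an illustrative helper matching the class numbering used throughout):

```c
#include <assert.h>

/* After a reference-bit clearing pass, a page's class keeps only its
 * M bit: class 2 (1,0) -> 0, class 3 (1,1) -> 1; 0 and 1 unchanged. */
static int class_after_clearing(int old_class) {
    return old_class & 1;  /* drop the R bit, keep the M bit */
}
```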
Clearing reference bits requires a full TLB flush, which is expensive on multi-core systems (requires inter-processor interrupts). This is why clearing intervals are typically measured in seconds, not milliseconds. Some systems use lazy TLB invalidation or per-CPU clearing to reduce overhead.
There are several ways to implement NRU, each with different performance characteristics:
Approach 1: On-Demand Classification
Classify pages at eviction time by scanning all pages:
```c
/*
 * NRU with on-demand classification
 * Simple but O(n) per eviction
 */

uint32_t nru_find_victim_on_demand(void) {
    // Arrays to hold pages in each class
    uint32_t class_pages[4][NUM_FRAMES];
    uint32_t class_counts[4] = {0, 0, 0, 0};

    // Scan all frames and classify
    for (uint32_t f = 0; f < NUM_FRAMES; f++) {
        if (!frame_table[f].occupied)
            continue;

        uint64_t pte = *frame_table[f].pte_ptr;
        int r = (pte & REFERENCE_BIT) ? 1 : 0;
        int m = (pte & DIRTY_BIT) ? 1 : 0;
        int cls = (r << 1) | m;

        class_pages[cls][class_counts[cls]++] = f;
    }

    // Select from lowest non-empty class
    for (int cls = 0; cls < 4; cls++) {
        if (class_counts[cls] > 0) {
            uint32_t idx = random() % class_counts[cls];
            return class_pages[cls][idx];
        }
    }

    panic("No pages to evict");
    return 0;
}
```

Approach 2: Maintained Class Lists
Maintain lists of pages in each class, updated as pages' bits change:
```c
/*
 * NRU with maintained class lists
 * O(1) victim selection, but requires list maintenance
 */

typedef struct {
    list_head_t class_lists[4];
    uint32_t    class_counts[4];
} nru_state_t;

nru_state_t nru;

// Called when a page is loaded into memory
void nru_page_loaded(uint32_t frame) {
    // New pages start in class 0 (R=0, M=0)
    list_add(&nru.class_lists[0], &frame_table[frame].list_node);
    nru.class_counts[0]++;
}

// Called when reference bit clearing occurs
void nru_on_clear_reference_bits(void) {
    // Move all class 2 pages to class 0
    // Move all class 3 pages to class 1
    list_concat(&nru.class_lists[0], &nru.class_lists[2]);
    nru.class_counts[0] += nru.class_counts[2];
    nru.class_counts[2] = 0;

    list_concat(&nru.class_lists[1], &nru.class_lists[3]);
    nru.class_counts[1] += nru.class_counts[3];
    nru.class_counts[3] = 0;
}

// Called on page access (via page fault or software tracking)
// Note: With hardware-set R bit, this is hard to intercept efficiently
void nru_page_accessed(uint32_t frame, bool is_write) {
    frame_entry_t *fe = &frame_table[frame];
    int old_class = get_class(fe);

    // Access sets R bit, write also sets M bit
    int new_r = 1;
    int new_m = fe->modify_bit | (is_write ? 1 : 0);
    int new_class = (new_r << 1) | new_m;

    if (new_class != old_class) {
        list_remove(&nru.class_lists[old_class], &fe->list_node);
        nru.class_counts[old_class]--;
        list_add(&nru.class_lists[new_class], &fe->list_node);
        nru.class_counts[new_class]++;
    }
}

uint32_t nru_find_victim_fast(void) {
    for (int cls = 0; cls < 4; cls++) {
        if (nru.class_counts[cls] > 0) {
            // Select random page from this class
            uint32_t idx = random() % nru.class_counts[cls];
            list_node_t *node = list_get_at(&nru.class_lists[cls], idx);
            frame_entry_t *fe = container_of(node, frame_entry_t, list_node);
            return fe->frame_number;
        }
    }

    panic("No pages to evict");
    return 0;
}
```

| Aspect | On-Demand | Maintained Lists |
|---|---|---|
| Victim selection | O(n) | O(class size) |
| Memory overhead | O(n) temporary arrays | O(n) list nodes |
| Page access overhead | None | List updates needed |
| Complexity | Simple | More complex |
| Accuracy | Always current | May lag bit changes |
In practice, the on-demand approach is often preferred despite its O(n) cost, because: (1) page faults are relatively rare, (2) maintaining lists requires intercepting every page access, which is expensive, and (3) the constant factors for scanning are low. The maintained-lists approach is more useful when combined with software-managed page tables.
Let's trace through NRU with a concrete scenario.
Setup: six frames holding pages A-F; the reference bits have just been cleared:
| Frame | Page | R | M | Class |
|---|---|---|---|---|
| 0 | A | 0 | 0 | 0 |
| 1 | B | 0 | 1 | 1 |
| 2 | C | 0 | 0 | 0 |
| 3 | D | 0 | 1 | 1 |
| 4 | E | 0 | 0 | 0 |
| 5 | F | 0 | 0 | 0 |
Class distribution: Class 0 = {A, C, E, F}, Class 1 = {B, D}, Classes 2 and 3 are empty.
Scenario 1: Page fault, need to evict
Class 0 is the lowest non-empty class, so NRU picks uniformly at random among A, C, E, and F; suppose the draw selects Page E in frame 4. E is clean (M=0), so it is simply discarded with no disk write.
Result: Frame 4 now contains Page G (R=0, M=0)
Now some time passes with memory accesses...
Pages A, B, and G are read and Page C is written; D and F are untouched. With the hardware setting R on each access (and M on the write), the state becomes:
| Frame | Page | R | M | Class |
|---|---|---|---|---|
| 0 | A | 1 | 0 | 2 |
| 1 | B | 1 | 1 | 3 |
| 2 | C | 1 | 1 | 3 |
| 3 | D | 0 | 1 | 1 |
| 4 | G | 1 | 0 | 2 |
| 5 | F | 0 | 0 | 0 |
Class distribution: Class 0 = {F}, Class 1 = {D}, Class 2 = {A, G}, Class 3 = {B, C}.
Scenario 2: Another page fault
Class 0 now holds only Page F, so F (frame 5) is the victim; it is clean, so no write-back is needed before Page H is loaded.
Scenario 3: Another page fault (after Page H loaded)
Now Class 0 is empty, so the next victim comes from Class 1: Page D, which is dirty (M=1) and must be written back before its frame is reused.
Key observation: NRU correctly avoided evicting hot pages (A, B, C, G which had R=1) as long as colder pages were available.
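The trace can be spot-checked with a small self-contained sketch. Given the R/M snapshot from the second table (pages A, B, C, D, G, F in frames 0-5), the lowest non-empty class should be 0, and removing Page F should shift the search to class 1. `lowest_nonempty_class` is an illustrative helper, not from the example code above:

```c
#include <assert.h>
#include <stddef.h>

/* Return the lowest non-empty NRU class for n frames with the given
 * R and M bit arrays, or -1 if there are no frames at all. */
static int lowest_nonempty_class(const int *r, const int *m, size_t n) {
    int counts[4] = {0, 0, 0, 0};
    for (size_t i = 0; i < n; i++)
        counts[(r[i] << 1) | m[i]]++;
    for (int cls = 0; cls < 4; cls++)
        if (counts[cls] > 0)
            return cls;
    return -1;
}
```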
Let's analyze NRU's complexity in detail:
Time Complexity:
| Operation | Complexity | Notes |
|---|---|---|
| Victim selection | O(n) | Must scan all pages to classify |
| Random selection | O(n) worst | O(k) where k = class size |
| Page fault total | O(n) + I/O | Classification dominates |
| R-bit clearing | O(n) | All pages + TLB flush |
Comparison with Clock:
| Aspect | Clock | NRU |
|---|---|---|
| Best case victim selection | O(1) | O(n) |
| Worst case victim selection | O(n) | O(n) |
| Amortized victim selection | O(1) | O(n) |
| Periodic clearing | Integrated | Separate O(n) |
NRU's O(n) classification on every page fault appears worse than Clock's O(1) amortized. However, page faults are comparatively rare, the scan's constant factors are small, and the classification cost is dwarfed by the disk I/O the fault already incurs, so the asymptotic gap rarely matters in practice.
Space Complexity: NRU keeps no persistent per-page state beyond the R and M bits the hardware already maintains. The on-demand implementation uses O(n) temporary arrays only for the duration of a fault; the maintained-lists variant adds O(n) permanent list nodes.
NRU's primary advantage isn't raw performance—it's simplicity. With no persistent state beyond the R/M bits that already exist, NRU is nearly impossible to implement incorrectly. This matters in kernel code where bugs can crash systems or corrupt data.
How does NRU compare with other LRU approximation algorithms we've studied?
Comparison matrix:
| Aspect | Basic Clock | Enhanced Clock | NRU |
|---|---|---|---|
| Bits used | R only | R and M | R and M |
| Page classes | 2 | 4 | 4 |
| Selection method | Circular scan | Circular scan (4 passes) | Random from class |
| Per-eviction CPU | O(1) amortized | O(n) worst | O(n) |
| State maintained | Clock hand | Clock hand | None |
| Fairness within class | FIFO order | FIFO order | Random (uniform) |
| I/O awareness | No | Yes (prefers clean) | Yes (prefers clean) |
| Implementation complexity | Low | Medium | Very low |
When to choose NRU: when implementation simplicity and minimal state matter more than eviction speed, when an occasional O(n) scan at fault time is acceptable, and when random fairness within a class is desirable.
When to choose Clock instead: when eviction cost must stay O(1) amortized, or when a deterministic, FIFO-like ordering within a class is preferred.
NRU has an interesting history and has inspired several variants.
Historical origins:
NRU was described in early operating systems textbooks as a practical approximation to LRU. It appears in Tanenbaum's "Modern Operating Systems" and has been taught for decades as an accessible introduction to page replacement.
Relationship to other algorithms: NRU shares Enhanced Second-Chance's four-class R/M scheme but replaces the circular scan with random selection; like Clock, it is a coarse, low-overhead approximation of LRU.
Variants and extensions:
Modern relevance:
While production operating systems use more sophisticated algorithms (Linux's multi-queue approach, Windows' working set manager), NRU concepts remain relevant:
In virtualization:
NRU's simplicity makes it attractive for hypervisor memory management:
NRU's key insight—that two bits of information (recently accessed? modified?) can effectively categorize page eviction priority—has proven remarkably durable. Every major OS uses some form of this classification, even if the selection mechanism differs from NRU's random approach.
NRU demonstrates that effective page replacement doesn't require complex algorithms—sometimes a simple classification scheme with random selection is enough.
What's next:
We've now covered NRU and clock algorithms separately. The final page of this module revisits the Clock Algorithm to provide a complete synthesis—examining advanced variants, production implementations, and how the concepts from all LRU approximation algorithms combine in modern memory managers.
You now understand the Not Recently Used (NRU) algorithm—its four-class classification, random victim selection, dependence on periodic bit clearing, and appropriate use cases. NRU's elegant simplicity makes it a valuable addition to your page replacement algorithm toolkit.