In the previous pages, we established that no single data structure can satisfy all LRU cache requirements. Hash maps provide O(1) key lookup but have no concept of ordering. Linked lists provide O(1) insertion and removal but require O(n) search.
The breakthrough comes from recognizing that these weaknesses are complementary. What one structure lacks, the other provides. By combining them synergistically, we create something greater than the sum of its parts — an O(1) LRU cache.
This page presents the complete design: how the hash map and doubly linked list work together, how data flows between them, and why this combination achieves O(1) for every operation. Understanding this design deeply is essential — it's one of the most elegant examples of composite data structure design in computer science.
By the end of this page, you will understand the complete architecture of an O(1) LRU cache, how the hash map and doubly linked list complement each other, the role of sentinel nodes in simplifying edge cases, and the invariants that keep everything consistent.
The key insight is deceptively simple: use the hash map to store pointers to linked list nodes, not the values themselves.
Traditionally, when we think of hash maps, we think:
HashMap<Key, Value>
For LRU cache, we instead use:
HashMap<Key, Node<Key, Value>>
Where Node is a doubly linked list node that contains the key, the value, and pointers to its adjacent nodes.
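As a minimal sketch (the class and field names here are illustrative, not prescribed by the design), the node type and the map-of-references idea look like this:

```typescript
// A doubly linked list node holding both the key and the value.
class Node<K, V> {
  prev: Node<K, V> | null = null;
  next: Node<K, V> | null = null;
  constructor(public key: K, public value: V) {}
}

// The hash map stores a reference to the node, not a copy of the value.
const map = new Map<string, Node<string, number>>();
const nodeA = new Node("A", 100);
map.set("A", nodeA);

// Updating the node is immediately visible through the map —
// both structures share the same object.
nodeA.value = 101;
```

Because the map holds a reference, there is exactly one copy of each entry, which is what makes the shared-object consistency described later possible.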
The synergy:
The hash map compensates for the linked list's O(n) search by providing O(1) access to any node by key.
The linked list compensates for the hash map's lack of ordering by maintaining a total order from most to least recently used.
Neither alone is sufficient. Together, they form a complete solution.
You might wonder: if we look up by key, why store the key in the node too? The answer: during eviction. When we remove the tail node, we need to also remove its entry from the hash map. To do that, we need to know what key to remove — which requires storing the key in the node itself.
Let's visualize the complete LRU cache architecture. Understanding the physical layout is essential for implementing it correctly.
Visual representation:
```
LRU Cache Architecture
═══════════════════════════════════════════════════════════════════

                    HASH MAP
              ┌─────────────────┐
              │   key → node    │
              ├─────────────────┤
              │ "A" ────────────────────────────────────┐
              │ "B" ─────────────────────────┐          │
              │ "C" ──────────────┐          │          │
              └─────────────────┘ │          │          │
                                  ▼          ▼          ▼
                 DOUBLY LINKED LIST (recency order)

   Most Recent                                       Least Recent
  ┌──────────┐   ┌─────────┐   ┌─────────┐   ┌─────────┐   ┌──────────┐
  │   HEAD   │◀─▶│ Node C  │◀─▶│ Node B  │◀─▶│ Node A  │◀─▶│   TAIL   │
  │(sentinel)│   │ key="C" │   │ key="B" │   │ key="A" │   │(sentinel)│
  └──────────┘   │ val=300 │   │ val=200 │   │ val=100 │   └──────────┘
                 └─────────┘   └─────────┘   └─────────┘
                          Bidirectional links

═══════════════════════════════════════════════════════════════════
Key Properties:
• HEAD.next = Most Recently Used item
• TAIL.prev = Least Recently Used item (eviction candidate)
• Every node is reachable via hash map in O(1)
• Every node can be removed/moved in O(1) via its links
```

Component breakdown:
1. The Hash Map (Map<Key, Node>): gives O(1) access to any entry's node by key. It stores references to the shared node objects, never copies of values.
2. The Doubly Linked List: maintains the recency order, from most recently used at the head to least recently used at the tail.
3. Sentinel Nodes (Head and Tail): dummy boundary nodes that hold no real data. They guarantee every real node has non-null neighbors, eliminating edge-case branches.
The hash map and linked list don't store separate copies of data — they store references to the same node objects. When you update a node's value via the hash map, the linked list 'sees' the update because it's the same object. This shared reference is crucial for consistency and space efficiency.
Sentinel nodes (also called dummy nodes or boundary nodes) are a crucial implementation technique that dramatically simplifies linked list code. Let's understand why they matter.
The problem without sentinels:
Without sentinel nodes, every operation must handle special cases for empty lists, single-element lists, and operations at the head or tail:
```typescript
// WITHOUT SENTINELS - Complex edge case handling
class LRUWithoutSentinels {
  private head: Node | null = null;
  private tail: Node | null = null;

  removeNode(node: Node): void {
    // Case 1: Node is the only element
    if (node.prev === null && node.next === null) {
      this.head = null;
      this.tail = null;
      return;
    }
    // Case 2: Node is at the head
    if (node.prev === null) {
      this.head = node.next;
      this.head!.prev = null;
      return;
    }
    // Case 3: Node is at the tail
    if (node.next === null) {
      this.tail = node.prev;
      this.tail!.next = null;
      return;
    }
    // Case 4: Node is in the middle
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }

  addToHead(node: Node): void {
    // Case 1: List is empty
    if (this.head === null) {
      this.head = node;
      this.tail = node;
      node.prev = null;
      node.next = null;
      return;
    }
    // Case 2: List has elements
    node.next = this.head;
    node.prev = null;
    this.head.prev = node;
    this.head = node;
  }

  // Every method needs 3-4 cases!
}
```

The elegance with sentinels:
With sentinel nodes, the list is never truly empty — it always contains at least the head and tail sentinels. This eliminates null checks and special cases:
```typescript
// WITH SENTINELS - Clean, uniform code
class LRUWithSentinels<K, V> {
  private head: Node<K, V>; // Sentinel - never null
  private tail: Node<K, V>; // Sentinel - never null

  constructor() {
    // Create sentinel nodes with dummy values
    this.head = new Node(null as any, null as any);
    this.tail = new Node(null as any, null as any);
    // Link them together - this is the "empty" state
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  removeNode(node: Node<K, V>): void {
    // ONE case handles ALL scenarios!
    // Works whether node is first, last, middle, or only element
    node.prev!.next = node.next;
    node.next!.prev = node.prev;
    // That's it. No null checks. No special cases.
  }

  addToHead(node: Node<K, V>): void {
    // ONE case handles ALL scenarios!
    // Works whether list is empty or has elements
    node.next = this.head.next;
    node.prev = this.head;
    this.head.next!.prev = node;
    this.head.next = node;
    // That's it. Always works.
  }

  removeTail(): Node<K, V> | null {
    // Get the actual tail (not the sentinel)
    const lru = this.tail.prev!;
    // If lru is the head sentinel, list is empty
    if (lru === this.head) return null;
    this.removeNode(lru);
    return lru;
  }
}
```

With sentinels:
• Every real node always has a valid prev (never null)
• Every real node always has a valid next (never null)
• head.next is always valid (might be tail if empty)
• tail.prev is always valid (might be head if empty)
This invariant eliminates entire categories of null pointer bugs.
Let's trace through exactly what happens during a get(key) operation, step by step.
Scenario: Cache contains keys A, B, C. We call get("B").
Before state:
```
BEFORE get("B"):
═══════════════════════════════════════════════════════════════════
HashMap: { "A" → NodeA, "B" → NodeB, "C" → NodeC }

Linked List (MRU → LRU):
HEAD ←→ [C:300] ←→ [B:200] ←→ [A:100] ←→ TAIL
         ↑ most recent         ↑ least recent

═══════════════════════════════════════════════════════════════════
STEP 1: Hash map lookup — O(1)
═══════════════════════════════════════════════════════════════════
node = map.get("B")                 // Returns pointer to NodeB
if (node === undefined) return -1;  // Cache miss - but B exists

node is now: [B:200] with prev→[C:300], next→[A:100]

═══════════════════════════════════════════════════════════════════
STEP 2: Remove node from current position — O(1)
═══════════════════════════════════════════════════════════════════
// Update C's next pointer
node.prev.next = node.next          // C.next = A
// Update A's prev pointer
node.next.prev = node.prev          // A.prev = C

Intermediate state:
HEAD ←→ [C:300] ←→ [A:100] ←→ TAIL
(B is disconnected but still in hash map)

═══════════════════════════════════════════════════════════════════
STEP 3: Insert node at head (most recent) — O(1)
═══════════════════════════════════════════════════════════════════
node.next = head.next               // B.next = C
node.prev = head                    // B.prev = HEAD
head.next.prev = node               // C.prev = B
head.next = node                    // HEAD.next = B

═══════════════════════════════════════════════════════════════════
AFTER get("B"):
═══════════════════════════════════════════════════════════════════
HashMap: { "A" → NodeA, "B" → NodeB, "C" → NodeC }   // Unchanged!

Linked List (MRU → LRU):
HEAD ←→ [B:200] ←→ [C:300] ←→ [A:100] ←→ TAIL
         ↑ B is now most recent

STEP 4: Return value — O(1)
return node.value                   // Returns 200
```

Key observations:
• The hash map is never modified — only list pointers change.
• No traversal occurs; every step follows a direct pointer.
• Exactly six pointer updates, regardless of cache size.
Cache miss case:
If get("X") is called and X isn't in the hash map, we simply return -1/null without touching the linked list. The recency order remains unchanged.
The combination of 'remove from current position' and 'insert at head' is so common it's often extracted as a single helper method: moveToHead(node). This encapsulates the recency update logic and is used by both get() and put().
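A sketch of that helper, shown on a hand-built sentinel list (the free-function shape and names like removeNode/addToHead match the convention used in this page's examples but are otherwise assumptions):

```typescript
class Node<K, V> {
  prev: Node<K, V> | null = null;
  next: Node<K, V> | null = null;
  constructor(public key: K, public value: V) {}
}

function removeNode<K, V>(node: Node<K, V>): void {
  node.prev!.next = node.next;
  node.next!.prev = node.prev;
}

function addToHead<K, V>(head: Node<K, V>, node: Node<K, V>): void {
  node.next = head.next;
  node.prev = head;
  head.next!.prev = node;
  head.next = node;
}

function moveToHead<K, V>(head: Node<K, V>, node: Node<K, V>): void {
  removeNode(node);      // disconnect from current position
  addToHead(head, node); // re-insert as most recently used
}

// Build HEAD ←→ C ←→ B ←→ A ←→ TAIL, then touch B.
const head = new Node<string, number>("", 0); // sentinel
const tail = new Node<string, number>("", 0); // sentinel
head.next = tail;
tail.prev = head;
for (const [k, v] of [["A", 100], ["B", 200], ["C", 300]] as [string, number][]) {
  addToHead(head, new Node(k, v));
}
moveToHead(head, head.next!.next!); // touch B: list becomes HEAD ←→ B ←→ C ←→ A ←→ TAIL
```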
The put(key, value) operation is more complex because it has two main branches: updating an existing key vs. inserting a new key (which may trigger eviction).
Case 1: Key already exists
```
put("B", 999) when B already exists:
═══════════════════════════════════════════════════════════════════
BEFORE:
HashMap: { "A" → NodeA, "B" → NodeB, "C" → NodeC }
List:    HEAD ←→ [C:300] ←→ [B:200] ←→ [A:100] ←→ TAIL

═══════════════════════════════════════════════════════════════════
STEP 1: Hash map lookup — O(1)
node = map.get("B")       // Returns NodeB
// Key exists! This is an update, not an insert.

STEP 2: Update value — O(1)
node.value = 999          // NodeB now contains 999

STEP 3: Move to head — O(1)
removeNode(node)          // Disconnect from current position
addToHead(node)           // Insert after head sentinel

═══════════════════════════════════════════════════════════════════
AFTER:
HashMap: { "A" → NodeA, "B" → NodeB, "C" → NodeC }   // Same references
List:    HEAD ←→ [B:999] ←→ [C:300] ←→ [A:100] ←→ TAIL
                  ↑ B updated and moved to front
```

Case 2: Key doesn't exist, no eviction needed
```
put("D", 400) when D doesn't exist, capacity = 4:
═══════════════════════════════════════════════════════════════════
BEFORE (size = 3, capacity = 4):
HashMap: { "A" → NodeA, "B" → NodeB, "C" → NodeC }
List:    HEAD ←→ [C:300] ←→ [B:200] ←→ [A:100] ←→ TAIL

═══════════════════════════════════════════════════════════════════
STEP 1: Hash map lookup — O(1)
node = map.get("D")       // Returns undefined
// Key doesn't exist! This is an insert.

STEP 2: Check capacity — O(1)
if (size >= capacity) { evict() }   // size=3 < capacity=4, no eviction

STEP 3: Create new node — O(1)
newNode = new Node("D", 400)

STEP 4: Add to hash map — O(1)
map.set("D", newNode)

STEP 5: Add to head of list — O(1)
addToHead(newNode)

═══════════════════════════════════════════════════════════════════
AFTER (size = 4):
HashMap: { "A" → NodeA, "B" → NodeB, "C" → NodeC, "D" → NodeD }
List:    HEAD ←→ [D:400] ←→ [C:300] ←→ [B:200] ←→ [A:100] ←→ TAIL
                  ↑ D is most recent
```

Case 3: Key doesn't exist, eviction required
```
put("E", 500) when E doesn't exist, capacity = 4, size = 4:
═══════════════════════════════════════════════════════════════════
BEFORE (size = 4, capacity = 4 — FULL):
HashMap: { "A" → NodeA, "B" → NodeB, "C" → NodeC, "D" → NodeD }
List:    HEAD ←→ [D:400] ←→ [C:300] ←→ [B:200] ←→ [A:100] ←→ TAIL
                                                   ↑ A is LRU (eviction target)

═══════════════════════════════════════════════════════════════════
STEP 1: Hash map lookup — O(1)
node = map.get("E")       // Returns undefined - new key

STEP 2: Check capacity — triggers eviction!
if (size >= capacity) { evict() }   // size=4 >= capacity=4, must evict

EVICTION SUB-STEPS:
─────────────────────────────────────────────────────────────
a) Find LRU node — O(1)
   lruNode = tail.prev        // NodeA (the one before tail sentinel)

b) Remove from linked list — O(1)
   removeNode(lruNode)

c) Remove from hash map — O(1)
   // This is why we store the key in the node!
   map.delete(lruNode.key)    // map.delete("A")
─────────────────────────────────────────────────────────────

Intermediate state:
HashMap: { "B" → NodeB, "C" → NodeC, "D" → NodeD }   // A removed
List:    HEAD ←→ [D:400] ←→ [C:300] ←→ [B:200] ←→ TAIL

STEP 3: Create new node — O(1)
newNode = new Node("E", 500)

STEP 4: Add to hash map — O(1)
map.set("E", newNode)

STEP 5: Add to head of list — O(1)
addToHead(newNode)

═══════════════════════════════════════════════════════════════════
AFTER (size = 4):
HashMap: { "B" → NodeB, "C" → NodeC, "D" → NodeD, "E" → NodeE }
List:    HEAD ←→ [E:500] ←→ [D:400] ←→ [C:300] ←→ [B:200] ←→ TAIL
                  ↑ E is most recent     B is now LRU (was second-oldest)
```

A common bug is forgetting to remove the evicted key from the hash map. If you only remove from the linked list, the hash map still points to a 'zombie' node. Future get() calls for that key would return stale data or crash. Always remove from BOTH structures.
For the LRU cache to function correctly, certain conditions must always be true. These are called invariants. Violating any invariant leads to bugs.
The LRU Cache Invariants:
1. map.size equals the number of real nodes in the linked list (excluding sentinels)
2. map.size <= capacity — the cache never exceeds its capacity
3. For every real node N: N.prev.next === N and N.next.prev === N (bidirectional links are consistent)
4. head.prev === null and tail.next === null — sentinels mark boundaries

Common invariant violations and their symptoms:
| Violation | Symptom | Likely Cause |
|---|---|---|
| Map size ≠ list size | Memory leak or missing items | Forgot to add/remove from one structure |
| Size > capacity | Eviction not triggered | Capacity check uses > instead of >= |
| Stale map entry | get() returns wrong/deleted value | Removed from list but not map during eviction |
| Broken prev/next link | Traversal skips nodes or crashes | Pointer update in wrong order |
| Lost reference | Node unreachable | Overwrote pointer before reading old value |
During development, add an assertInvariants() method that walks the list and verifies all invariants. Call it after every operation. This catches bugs immediately at their source rather than later when symptoms appear. Remove or disable in production for performance.
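One way such a check might look; the function signature and error messages are assumptions, and it assumes the Node shape used throughout this page:

```typescript
class Node<K, V> {
  prev: Node<K, V> | null = null;
  next: Node<K, V> | null = null;
  constructor(public key: K, public value: V) {}
}

// Walks the list once and verifies every invariant listed above.
function assertInvariants<K, V>(
  map: Map<K, Node<K, V>>,
  head: Node<K, V>,
  tail: Node<K, V>,
  capacity: number
): void {
  if (head.prev !== null || tail.next !== null) {
    throw new Error("sentinels must mark the list boundaries");
  }
  let count = 0;
  for (let cur = head.next!; cur !== tail; cur = cur.next!) {
    if (cur.prev!.next !== cur || cur.next!.prev !== cur) {
      throw new Error("bidirectional links are inconsistent");
    }
    if (map.get(cur.key) !== cur) {
      throw new Error("map entry missing or stale for a list node");
    }
    count++;
  }
  if (count !== map.size) throw new Error("map size does not match list size");
  if (count > capacity) throw new Error("capacity exceeded");
}

// Example: a consistent one-node list passes silently.
const head = new Node<string, number>(null as any, null as any);
const tail = new Node<string, number>(null as any, null as any);
const a = new Node("A", 100);
head.next = a; a.prev = head;
a.next = tail; tail.prev = a;
const map = new Map([["A", a]]);
assertInvariants(map, head, tail, 4); // no throw: all invariants hold
```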
Let's formally verify that our hash map + doubly linked list design achieves the O(1) requirements.
Time complexity analysis:
| Operation | Sub-operations | Each Sub-op | Total |
|---|---|---|---|
| get(key) | map.get + removeNode + addToHead | O(1) + O(1) + O(1) | O(1) |
| put(key, value) - update | map.get + update value + removeNode + addToHead | O(1) each | O(1) |
| put(key, value) - insert | map.get + new Node + map.set + addToHead | O(1) each | O(1) |
| put(key, value) - evict | Above + tail.prev + removeNode + map.delete | O(1) each | O(1) |
Space complexity analysis: O(capacity). The hash map holds at most capacity entries, the list holds at most capacity real nodes plus two sentinels, and both structures reference the same node objects rather than storing copies.
Why this is (essentially) optimal: any LRU cache must store up to capacity key-value entries and track their recency order, so Θ(capacity) space is the floor for any correct design.
The only additional overhead is the constant factor of pointer storage and the two sentinel nodes — both unavoidable given our requirements.
No known alternative achieves O(1) get and O(1) put with less overhead. This design is the canonical solution taught in advanced data structures courses and used in production systems worldwide.
The hash map + doubly linked list combination isn't just for LRU caches. It's a general pattern for any problem requiring both O(1) key lookup AND O(1) order manipulation. Ordered dictionaries, LFU caches, and various scheduling algorithms use the same core idea.
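As an aside, JavaScript's built-in Map preserves insertion order, so the same idea can be sketched very compactly by re-inserting keys on access. This hides the ordered structure inside the Map implementation rather than managing it explicitly; the class name MapLRU is made up for illustration:

```typescript
class MapLRU<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    // Delete and re-insert so the key moves to the back (most recent).
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  put(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key); // re-inserted below to refresh recency
    } else if (this.map.size >= this.capacity) {
      // Map iterates in insertion order, so the first key is the LRU.
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
    this.map.set(key, value);
  }
}
```

This variant trades explicit control (and the ability to store extra per-node data) for brevity; conceptually it is the same hash-map-plus-ordered-list pattern.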
We've covered the complete design of an O(1) LRU cache. Let's consolidate the key insights:
• Store node references in the hash map (HashMap<Key, Node>), not raw values.
• The doubly linked list encodes recency order; the hash map provides O(1) entry into it.
• Sentinel nodes eliminate null checks and edge-case branches.
• Each node stores its own key so eviction can clean up the hash map.
• Every operation — get, update, insert, evict — composes a constant number of O(1) sub-steps.
What's next:
With the design fully understood, the next page provides a complete, production-quality implementation of the LRU cache. We'll translate these concepts into working code, handle all edge cases, and discuss implementation details that affect real-world performance.
You now understand the complete architecture of an O(1) LRU cache: how hash maps and doubly linked lists work together, the role of sentinel nodes, the data flow through get() and put() operations, and the invariants that keep everything consistent. Next, we'll implement it!