Ask an experienced software engineer: "What's the most important difference between arrays and linked lists?"
Most will immediately respond: "Indexed access."
In an array, accessing element 0 and element 999,999 takes exactly the same time—O(1). In a linked list, accessing element 999,999 requires walking through 999,999 nodes—O(n). This single difference fundamentally shapes when to use each data structure.
This page explores indexed access in linked lists with rigor: why it costs O(n), how this compares to arrays at the hardware level, what this means for algorithm design, and when this limitation matters (or doesn't).
By the end of this page, you'll understand exactly why indexed access is O(n) in linked lists, how this contrasts with O(1) array access at both logical and hardware levels, when O(n) access is acceptable and when it's fatal, and how to design algorithms that minimize the penalty of sequential access.
Let's be precise about terminology. Indexed access means retrieving an element at a specific position in a sequence.
In most programming languages, this is expressed as collection[index] or collection.get(index).
Arrays: O(1) access by design
Arrays provide O(1) indexed access because they use contiguous memory with fixed-size elements. The address of any element can be computed directly:
address(element[i]) = base_address + (i × element_size)
This is pure arithmetic—no memory reads required to compute the location. One multiplication, one addition, then a single memory fetch. The index value itself doesn't affect the cost.
```typescript
// Array: O(1) access for any index
const arr = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100];

console.log(arr[0]); // O(1) - compute address, read
console.log(arr[4]); // O(1) - compute address, read
console.log(arr[9]); // O(1) - compute address, read

// All three accesses cost exactly the same
// The time doesn't depend on the index value
```
Linked Lists: No such formula exists
Linked list nodes are scattered throughout memory. There's no mathematical formula to compute the address of node 5—you must physically visit nodes 0, 1, 2, 3, and 4 to find the pointer to node 5.
The only information each node carries is a pointer to its successor.
To access index k, we must traverse k nodes. The cost grows linearly with the index.
There is no clever trick to make linked list indexed access O(1). It's not a matter of a better algorithm—it's a fundamental property of the data structure. If you need O(1) indexed access, you need a different data structure (like an array).
Let's implement indexed access and analyze it carefully:
The naive (and correct) implementation:
```typescript
class ListNode<T> {
  data: T;
  next: ListNode<T> | null;

  constructor(data: T) {
    this.data = data;
    this.next = null;
  }
}

function getAtIndex<T>(head: ListNode<T> | null, index: number): T | null {
  // Edge case: negative index or empty list
  if (index < 0 || head === null) {
    return null;
  }

  let current = head;
  let position = 0;

  // Traverse until we reach the desired index
  while (current !== null && position < index) {
    current = current.next;
    position++;
  }

  // Either we found it or ran out of nodes
  if (current === null) {
    return null; // Index out of bounds
  }

  return current.data;
}
```
Counting the operations:
For a list of length n and target index k:
Worst case: O(n)
Accessing the last element requires traversing the entire list.
Best case: O(1)
Accessing the first element (index 0) just returns head.data.
Average case: O(n/2) = O(n)
If we access uniformly random indices, we traverse half the list on average.
| Index Accessed | Nodes Traversed | Time Complexity |
|---|---|---|
| 0 (first) | 0 | O(1) |
| 1 | 1 | O(1) |
| n/2 (middle) | n/2 | O(n) |
| n-2 | n-2 | O(n) |
| n-1 (last) | n-1 | O(n) |
Unlike arrays where every position costs the same, linked list access cost is proportional to the index. This has algorithm design implications—accessing elements near the front is cheap, accessing elements near the end is expensive.
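The table above can be made concrete by instrumenting indexed access with a step counter. This is a self-contained sketch (it redefines a minimal `ListNode` locally) that reports how many pointer hops each lookup performs:

```typescript
// Sketch: count the pointer hops getAtIndex performs for each index.
// ListNode here is a minimal local stand-in for the list node above.
class ListNode<T> {
  constructor(public data: T, public next: ListNode<T> | null = null) {}
}

function getAtIndexCounted<T>(
  head: ListNode<T> | null,
  index: number
): { value: T | null; steps: number } {
  let current = head;
  let steps = 0;
  while (current !== null && steps < index) {
    current = current.next; // one pointer hop per step
    steps++;
  }
  return { value: current?.data ?? null, steps };
}

// Build the list 0 -> 1 -> ... -> 99
let head: ListNode<number> | null = null;
for (let i = 99; i >= 0; i--) head = new ListNode(i, head);

console.log(getAtIndexCounted(head, 0).steps);  // 0 hops
console.log(getAtIndexCounted(head, 50).steps); // 50 hops
console.log(getAtIndexCounted(head, 99).steps); // 99 hops
```

The hop count equals the index: the cost of each access is exactly proportional to how far the element sits from the head.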
To truly understand why linked lists can't match array performance, we need to examine what makes array access O(1) at the hardware level.
The array memory model:
When you create an array int arr[5] with 4-byte integers starting at address 1000:
Memory Address | Content
----------------|--------
1000 | arr[0]
1004 | arr[1]
1008 | arr[2]
1012 | arr[3]
1016 | arr[4]
Every element is exactly 4 bytes apart. To access arr[i]:
address(arr[i]) = 1000 + (i × 4) — one multiply, one add.

This is a single memory transaction. The CPU can compute the address in one cycle and request the memory in another. Total: ~3-5 cycles for a cache hit, ~100+ cycles for a cache miss—but the index value doesn't change this.
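The address arithmetic can be sketched as a tiny function. The base address (1000) and element size (4 bytes) are the hypothetical values from the table above, not real pointers:

```typescript
// Sketch of the array address formula: pure arithmetic, no memory reads.
// base and elementSize mirror the hypothetical layout above (1000, 4-byte ints).
function elementAddress(base: number, elementSize: number, i: number): number {
  return base + i * elementSize; // one multiply, one add — independent of i
}

console.log(elementAddress(1000, 4, 0)); // 1000
console.log(elementAddress(1000, 4, 3)); // 1012
console.log(elementAddress(1000, 4, 4)); // 1016
```

Whatever the index, the work is the same two arithmetic operations; that constancy is the whole reason array access is O(1).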
The linked list memory model:
Consider the same 5 elements in a linked list:
Node at 7040 Node at 2832 Node at 9156 Node at 1024 Node at 5512
| | | | | | | | | |
| data: 10 | | data: 20 | | data: 30 | | data: 40 | | data: 50 |
| next: 2832|--->| next: 9156|--->| next: 1024|--->| next: 5512|--->| next: null|
| | | | | | | | | |
To access element at index 3 (value 40): read the head node at address 7040, follow its next pointer to 2832, follow that node's pointer to 9156, then follow the next pointer to 1024—where the data 40 finally lives.
Four memory reads instead of one. And each read likely incurs a cache miss because addresses are random.
| Operation | Array | Linked List |
|---|---|---|
| Address calculations | 1 | 0 |
| Memory reads required | 1 | k+1 |
| Cache misses (likely) | 0-1 | k+1 (all likely misses) |
| CPU cycles (typical) | ~5-100 | ~(k+1) × 100+ |
| Scales with index? | No | Yes, linearly |
Arrays trade flexibility for speed: they require contiguous allocation but enable O(1) access. Linked lists trade speed for flexibility: they allow scattered allocation but pay with O(n) access. Neither is universally better—the right choice depends on your operation mix.
Let's quantify when O(n) access matters. Consider a list of 10,000 elements:
Scenario 1: Sequential access (traversal)
Accessing every element from start to end: the array reads n consecutive slots, while the linked list follows n next pointers.
Result: Both are O(n). For sequential access, linked lists are fine.
Scenario 2: Random access pattern
Accessing 1,000 random elements by index: the array performs 1,000 O(1) lookups, while the linked list traverses n/2 = 5,000 nodes per lookup on average—about 5,000,000 node visits in total.
Result: Linked list is 5,000x slower!
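One rough way to see this gap is to total up the node visits each structure would perform. The sketch below counts operations rather than measuring wall-clock time, so the 5,000x figure shows up as a ratio of work done:

```typescript
// Sketch: total node visits for 1,000 random indexed accesses on a
// 10,000-element list, versus one operation per array access.
const n = 10_000;
const accesses = 1_000;

let listHops = 0;
for (let a = 0; a < accesses; a++) {
  const index = Math.floor(Math.random() * n);
  listHops += index; // getAtIndex walks `index` nodes from the head
}

const arrayOps = accesses; // one O(1) address computation per access

console.log(arrayOps);                        // 1000
console.log(listHops);                        // ~5,000,000 on average (accesses × n/2)
console.log(Math.round(listHops / arrayOps)); // roughly 5,000x more work
```

Real slowdowns are often even worse than the operation count suggests, because each pointer hop is also a likely cache miss.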
Scenario 3: Binary search
Binary search requires accessing the middle element, then a quarter, etc.:
Result: Binary search becomes O(n), losing its entire benefit.
```typescript
// Binary search in array: O(log n)
function binarySearchArray(arr: number[], target: number): number {
  let low = 0;
  let high = arr.length - 1;

  while (low <= high) {
    const mid = Math.floor((low + high) / 2);

    // This access is O(1)!
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) low = mid + 1;
    else high = mid - 1;
  }

  return -1;
  // ~13 iterations for n=10,000
}

// "Binary search" in linked list: O(n) — defeats the purpose
function binarySearchLinkedList(
  head: ListNode<number> | null,
  length: number,
  target: number
): number {
  let low = 0;
  let high = length - 1;

  while (low <= high) {
    const mid = Math.floor((low + high) / 2);

    // This access is O(mid) — not O(1)!
    const midValue = getAtIndex(head, mid);
    if (midValue === null) return -1;

    if (midValue === target) return mid;
    if (midValue < target) low = mid + 1;
    else high = mid - 1;
  }

  return -1;
  // Each iteration costs O(n/2) on average
  // Total: O(n) — no better than linear search!
}
```
This is a common interview mistake and a real production bug. If you need binary search, you need O(1) indexed access. Use an array, or consider a balanced binary search tree which offers O(log n) search without indexed access.
One of the most common performance bugs when working with linked lists is treating them like arrays. Consider this innocent-looking code:
```typescript
// ❌ ANTI-PATTERN: Index-based loop on linked list
function sumBad(head: ListNode<number> | null, length: number): number {
  let sum = 0;
  for (let i = 0; i < length; i++) {
    const value = getAtIndex(head, i); // O(i) for each access!
    if (value !== null) {
      sum += value;
    }
  }
  return sum;
}

// Time complexity: O(0) + O(1) + O(2) + ... + O(n-1)
// = O(0 + 1 + 2 + ... + (n-1))
// = O(n(n-1)/2)
// = O(n²) ← QUADRATIC!
```
Why this is O(n²):
Each call to getAtIndex(head, i) starts from the head and traverses i nodes. The total traversals:
- getAtIndex(head, 0): traverse 0 nodes
- getAtIndex(head, 1): traverse 1 node
- getAtIndex(head, 2): traverse 2 nodes
- ...
- getAtIndex(head, n-1): traverse n-1 nodes

Total: 0 + 1 + 2 + ... + (n-1) = n(n-1)/2 = O(n²)
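The closed form is easy to verify numerically by summing the per-call traversal costs directly:

```typescript
// Check the traversal total 0 + 1 + ... + (n-1) against n(n-1)/2.
function totalTraversals(n: number): number {
  let total = 0;
  for (let i = 0; i < n; i++) total += i; // cost of getAtIndex(head, i)
  return total;
}

console.log(totalTraversals(10));     // 45
console.log(totalTraversals(10_000)); // 49995000 — nearly 50 million hops
console.log((10_000 * 9_999) / 2);    // 49995000 — the closed form agrees
```

For a 10,000-element list, the "innocent" index loop does almost 50 million node traversals instead of 10,000.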
The correct approach:
```typescript
// ✅ CORRECT: Pointer-based traversal
function sumGood(head: ListNode<number> | null): number {
  let sum = 0;
  let current = head;

  while (current !== null) {
    sum += current.data; // O(1) per iteration
    current = current.next;
  }

  return sum;
}

// Time complexity: O(n) — each node visited exactly once
```
Avoid:

- `for (i = 0; i < length; i++)` with `getAtIndex(i)`
- `list.get(index)` in a loop

Prefer:

- `while (current !== null)` with pointer advancement
- `current = current.next` to progress

Java's LinkedList.get(index) method tempts developers to write index loops. This is a performance trap—each .get(i) call is O(i). If you're using Java LinkedList and need to iterate, use an Iterator or enhanced for-loop, never index-based access.
Despite its cost, O(n) indexed access doesn't always matter. Here are scenarios where linked lists remain appropriate:
1. You rarely (or never) access by index
If your access pattern touches only the front (stack operations), the front and back (queue operations), or walks the list sequentially, then you never pay the index penalty. Stacks and queues are perfect examples—they never need indexed access.
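As a minimal sketch, here is a stack built on a singly linked list: every operation works at the head, so the O(n) indexed-access cost is never incurred.

```typescript
// Sketch: a stack over a singly linked list.
// push, pop, and peek all touch only the head — no indexing ever happens.
class StackNode<T> {
  constructor(public data: T, public next: StackNode<T> | null = null) {}
}

class LinkedStack<T> {
  private top: StackNode<T> | null = null;

  push(value: T): void {
    this.top = new StackNode(value, this.top); // O(1)
  }

  pop(): T | null {
    if (this.top === null) return null;
    const value = this.top.data;
    this.top = this.top.next; // O(1)
    return value;
  }

  peek(): T | null {
    return this.top?.data ?? null; // O(1)
  }
}

const s = new LinkedStack<number>();
s.push(1);
s.push(2);
s.push(3);
console.log(s.pop());  // 3
console.log(s.peek()); // 2
```

Every operation is O(1) regardless of how many elements the stack holds, which is exactly the workload where linked lists excel.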
2. The list is small
For n < 100 elements, O(n) access might complete in microseconds. If you're looking up elements in a 50-item list a few times per second, optimizing isn't worth the effort.
3. Insertions/deletions dominate
If you frequently insert/delete in the middle of the collection, linked lists offer O(1) insertion (once you have a reference), while arrays require O(n) shifting. The cost model might favor linked lists overall.
| Scenario | Indexed Access Impact | Recommendation |
|---|---|---|
| Pure stack/queue operations | Never needed | Linked list is excellent |
| Small list (n < 100) | Negligible | Either works; prefer array for simplicity |
| Large list, sequential access only | Not triggered | Linked list is fine |
| Large list, random access needed | Critical performance issue | Use array instead |
| Large list, frequent mid-insertions | Trade-off needed | Profile your workload |
4. You maintain auxiliary pointers
If you keep references to commonly accessed nodes (e.g., a tail pointer for the last element, or pointers stored in external data structures), you can access those positions in O(1). This is a common optimization:
```typescript
class LinkedListWithTail<T> {
  head: ListNode<T> | null = null;
  tail: ListNode<T> | null = null; // O(1) access to last element
  length: number = 0;

  getLast(): T | null {
    return this.tail?.data ?? null; // O(1), not O(n)!
  }
}
```
By caching key positions, you convert specific O(n) accesses into O(1)—at the cost of maintaining those pointers correctly during mutations.
Before choosing a data structure, ask: How will I access elements? Sequentially? By index? From front/back? At random positions? The answer determines whether linked list O(n) access matters for your use case.
When you need both dynamic size (like linked lists) and fast indexed access (like arrays), consider these hybrid strategies:
1. Skip Lists
A skip list maintains multiple layers of "express lanes" over the base linked list. By probabilistically adding shortcuts, it achieves O(log n) search and access—much better than O(n).
Level 3: head ─────────────────────────────→ 50 ─────────────→ null
Level 2: head ────────→ 20 ────────────────→ 50 ─────→ 70 ─→ null
Level 1: head ─→ 10 ─→ 20 ─→ 30 ─→ 40 ─→ 50 ─→ 60 ─→ 70 ─→ null
Skip lists power Redis sorted sets and are a practical alternative when O(n) access is too slow.
2. Unrolled Linked Lists
An unrolled linked list stores multiple elements per node (e.g., an array of 64 elements per node). This provides better cache locality (elements within a node are contiguous), lower per-element pointer overhead, and faster indexed access, since traversal skips a whole node's worth of elements at a time.
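A minimal sketch of the idea (a node capacity of 4 is chosen here purely for illustration):

```typescript
// Sketch: an unrolled linked list node holds a small array of elements.
// Indexed access skips whole nodes instead of hopping element by element.
class UnrolledNode<T> {
  elements: T[] = [];
  next: UnrolledNode<T> | null = null;
  constructor(public capacity: number) {}
}

function unrolledGet<T>(head: UnrolledNode<T> | null, index: number): T | null {
  let current = head;
  let remaining = index;
  while (current !== null) {
    if (remaining < current.elements.length) {
      return current.elements[remaining]; // O(1) array access within the node
    }
    remaining -= current.elements.length; // skip a whole node at once
    current = current.next;
  }
  return null; // index out of bounds
}

// Build two nodes of capacity 4 holding values 0..7
const a = new UnrolledNode<number>(4);
a.elements = [0, 1, 2, 3];
const b = new UnrolledNode<number>(4);
b.elements = [4, 5, 6, 7];
a.next = b;

console.log(unrolledGet(a, 2)); // 2 — found inside the first node
console.log(unrolledGet(a, 6)); // 6 — one node skip, then array access
```

With node capacity c, indexed access costs O(n/c) pointer hops instead of O(n), and each node's elements sit contiguously in memory.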
3. Rope Data Structure
Used in text editors, a rope is a binary tree of strings. It allows O(log n) indexed access, insertion, deletion, and concatenation even on very large texts.
4. Cache the Index
If you frequently access the same index, cache it:
```typescript
class CachedLinkedList<T> {
  head: ListNode<T> | null = null;

  // Cache the most recently accessed position
  private cachedNode: ListNode<T> | null = null;
  private cachedIndex: number = -1;

  getAtIndex(index: number): T | null {
    if (index < 0 || this.head === null) return null;

    // If we have a cache hit for an earlier position, start there
    let current: ListNode<T> | null;
    let position: number;

    if (
      this.cachedNode !== null &&
      this.cachedIndex <= index &&
      this.cachedIndex >= 0
    ) {
      // Start from cache - saves (cachedIndex) traversals
      current = this.cachedNode;
      position = this.cachedIndex;
    } else {
      // Start from head
      current = this.head;
      position = 0;
    }

    while (current !== null && position < index) {
      current = current.next;
      position++;
    }

    // Update cache
    if (current !== null) {
      this.cachedNode = current;
      this.cachedIndex = index;
    }

    return current?.data ?? null;
  }
}
```
Every optimization adds complexity and has costs. Skip lists need more memory and complex insertion logic. Caching can become stale after mutations. Choose optimizations based on measured performance needs, not speculation.
We've thoroughly examined why indexed access is O(n) in linked lists and what that means for practical software engineering. Let's consolidate the key insights:
- getAtIndex(i) starts from head every time—calling it in a loop turns an O(n) traversal into an O(n²) disaster.

What's next:
We've covered the 'read' operations: traversal (O(n)) and indexed access (O(n)). Now we turn to 'write' operations. Next page: Insertion at Head: O(1) — where linked lists show their true strength.
You now deeply understand why indexed access costs O(n) in linked lists and how this shapes algorithm design. This knowledge helps you avoid performance traps and make informed data structure choices. Next, we'll see where linked lists shine: O(1) insertion.