We've established that the key insight of Dijkstra's algorithm is to process vertices in order of their distance from the source. But how do we efficiently find "the unprocessed vertex with minimum distance" among potentially millions of candidates? The answer lies in one of the most versatile data structures in computer science: the priority queue.
A priority queue maintains a collection of elements, each associated with a priority value, and efficiently supports two critical operations: insert a new element, and extract the element with minimum (or maximum) priority. This is exactly what Dijkstra's algorithm needs—repeatedly extracting the vertex with the smallest distance estimate.
By the end of this page, you will understand: (1) Why a priority queue is the optimal choice for Dijkstra's algorithm, (2) How min-heaps implement priority queues efficiently, (3) The complete algorithm structure with priority queue operations, and (4) Implementation variations and their trade-offs.
Before committing to a data structure, let's analyze the operations Dijkstra's algorithm requires and evaluate alternatives.
Operations needed in the main loop:
- Extract-Min: remove the unprocessed vertex with the smallest distance estimate.
- Insert/Update: when an edge relaxation improves a neighbor's distance, record the new, smaller distance in the queue.
These operations repeat until all vertices are processed. For a graph with V vertices and E edges:
| Data Structure | Extract-Min | Insert/Update | Total Complexity | Notes |
|---|---|---|---|---|
| Unsorted Array | O(V) | O(1) | O(V² + E) | Simple but slow for sparse graphs |
| Sorted Array | O(1) | O(V) | O(V + VE) | Even worse: each insert must keep the array sorted |
| Balanced BST | O(log V) | O(log V) | O((V+E) log V) | Good but has overhead |
| Binary Min-Heap | O(log V) | O(log V) | O((V+E) log V) | Optimal for most cases |
| Fibonacci Heap | O(log V) amortized | O(1) amortized | O(V log V + E) | Theoretically faster, complex |
Analysis of the trade-offs:
Unsorted Array Approach (O(V²)): Simple: maintain a distance array, scan it linearly to find minimum. Extract-Min takes O(V), but there's no overhead for updates. Total: O(V²) for V extractions. This is acceptable for dense graphs (where E ≈ V²) but wasteful for sparse graphs.
Binary Heap Approach (O((V+E) log V)): Use a min-heap keyed by distance. Extract-Min takes O(log V), and updates take O(log V). Total: O((V + E) log V). For sparse graphs (E << V²), this is a massive improvement.
The winner for most practical cases: The binary min-heap strikes the best balance between efficiency and implementation complexity. It's O((V+E) log V), which beats O(V²) for sparse graphs—and most real-world graphs are sparse.
For dense graphs where E ≈ V², the O(V²) unsorted array approach is actually competitive with or better than the O((V+E) log V) heap approach. The log factor adds up. In practice, use a heap for sparse graphs and consider simpler approaches for very dense graphs or small inputs.
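To make the crossover concrete, here's a quick back-of-the-envelope comparison of rough operation counts (a sketch: constant factors are ignored, and `arrayOps`/`heapOps` are hypothetical names for the two cost formulas, not real APIs):

```typescript
// Rough operation counts for the two approaches (constants ignored).
function arrayOps(V: number): number {
  return V * V; // V extractions, each an O(V) linear scan
}

function heapOps(V: number, E: number): number {
  return (V + E) * Math.log2(V); // each queue operation costs O(log V)
}

// Sparse graph: E ≈ 4V (e.g., a road network)
console.log(arrayOps(1000));          // 1000000
console.log(heapOps(1000, 4000));     // about 50,000: roughly 20x fewer

// Dense graph: E ≈ V² (nearly every pair connected)
console.log(arrayOps(1000));          // 1000000
console.log(heapOps(1000, 1000000));  // about 10 million: the array wins here
```

The numbers match the rule of thumb above: the log factor is a small price on sparse graphs and a real cost on dense ones.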
A binary min-heap is a complete binary tree where each node's value is less than or equal to its children's values. This heap property ensures the minimum element is always at the root, enabling O(1) access to the minimum and O(log n) extraction.
Key properties:
- Complete shape: every level is full except possibly the last, which fills from left to right.
- Heap order: every node's priority is at most its children's priorities.
- Consequence: the minimum is always at the root, and the tree's height is ⌊log₂ n⌋.
Array representation:
Min-heaps are typically stored in an array, exploiting the complete tree structure:
```text
Visual representation:            Array: [2, 3, 5, 8, 10, 7, 9]
           [2]                    Index:  0  1  2  3   4  5  6
          /   \
       [3]     [5]
      /   \   /   \
    [8] [10] [7]  [9]

Relationships:
- Parent of index 4 (value 10): (4-1)/2 = 1 (value 3)
- Children of index 1 (value 3): 2*1+1 = 3, 2*1+2 = 4 (values 8, 10)

The minimum (2) is always at index 0 — accessible in O(1)!
```

Core operations for Dijkstra:
```typescript
class MinHeap<T> {
  private items: { value: T; priority: number }[] = [];

  // Return the minimum without removing it — O(1)
  peek(): T | undefined {
    return this.items[0]?.value;
  }

  // Remove and return the minimum — O(log n)
  extractMin(): T | undefined {
    if (this.items.length === 0) return undefined;
    const min = this.items[0].value;
    const last = this.items.pop()!;
    if (this.items.length > 0) {
      this.items[0] = last;
      this.siftDown(0); // Restore heap property
    }
    return min;
  }

  // Insert a new element — O(log n)
  insert(value: T, priority: number): void {
    this.items.push({ value, priority });
    this.siftUp(this.items.length - 1); // Restore heap property
  }

  // Move element up until heap property restored
  private siftUp(index: number): void {
    while (index > 0) {
      const parent = Math.floor((index - 1) / 2);
      if (this.items[parent].priority <= this.items[index].priority) break;
      [this.items[parent], this.items[index]] = [this.items[index], this.items[parent]];
      index = parent;
    }
  }

  // Move element down until heap property restored
  private siftDown(index: number): void {
    while (true) {
      let smallest = index;
      const left = 2 * index + 1;
      const right = 2 * index + 2;
      if (left < this.items.length && this.items[left].priority < this.items[smallest].priority) {
        smallest = left;
      }
      if (right < this.items.length && this.items[right].priority < this.items[smallest].priority) {
        smallest = right;
      }
      if (smallest === index) break;
      [this.items[smallest], this.items[index]] = [this.items[index], this.items[smallest]];
      index = smallest;
    }
  }
}
```

A complete binary tree with n nodes has height log₂(n). Both insert (sift-up) and extract-min (sift-down) traverse at most one root-to-leaf path, moving through at most log₂(n) nodes. This logarithmic bound is what makes heaps efficient for priority queue operations.
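Sift-up and sift-down are easy to get subtly wrong, so when testing a heap implementation it helps to check the invariant directly. Here's a small sketch (`isMinHeap` is a hypothetical helper operating on the priority array, not part of the class above):

```typescript
// Verify the heap property on an array-backed min-heap:
// every parent's priority must be <= both of its children's.
function isMinHeap(priorities: number[]): boolean {
  for (let i = 1; i < priorities.length; i++) {
    const parent = Math.floor((i - 1) / 2);
    if (priorities[parent] > priorities[i]) return false;
  }
  return true;
}

console.log(isMinHeap([2, 3, 5, 8, 10, 7, 9])); // true, the array from the diagram
console.log(isMinHeap([5, 3, 2]));              // false, 5 exceeds its child 3
```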
With the priority queue foundation established, let's examine the complete Dijkstra's algorithm. The algorithm maintains three key data structures: a dist[] array of best known distances, a processed[] array marking vertices whose distances are final, and the priority queue holding the frontier keyed by distance.
```typescript
interface Edge {
  to: number;
  weight: number;
}

type Graph = Edge[][]; // Adjacency list: graph[u] contains edges from u

function dijkstra(graph: Graph, source: number): number[] {
  const n = graph.length;
  const dist: number[] = new Array(n).fill(Infinity);
  const processed: boolean[] = new Array(n).fill(false);

  // Priority queue: stores [distance, vertex] pairs
  // Using a simple implementation; in practice, use a proper heap
  const pq = new MinPriorityQueue<number>();

  // Initialize source
  dist[source] = 0;
  pq.insert(source, 0);

  while (!pq.isEmpty()) {
    // Extract vertex with minimum distance
    const u = pq.extractMin()!;

    // Skip if already processed (handles duplicate entries)
    if (processed[u]) continue;
    processed[u] = true;

    // Relax all edges from u
    for (const edge of graph[u]) {
      const v = edge.to;
      const newDist = dist[u] + edge.weight;

      // If we found a shorter path to v, update it
      if (newDist < dist[v]) {
        dist[v] = newDist;
        pq.insert(v, newDist); // Add new entry with updated distance
      }
    }
  }

  return dist;
}
```

Step-by-step explanation:
Initialization:
- Set dist[source] = 0 and all other distances to Infinity.
- Insert the source into the priority queue with priority 0.

Main loop (repeat until queue is empty):
- Extract the vertex u with the minimum distance; if u is already processed, skip this stale entry.
- Mark u as processed; its distance is now final.
- Relax every edge u → v: if dist[u] + weight < dist[v], set dist[v] to the new value and insert v with that priority.
Termination: The algorithm terminates when the priority queue is empty, meaning all reachable vertices have been processed. The dist[] array now contains the shortest path distances.
When we update a distance, we insert a new entry rather than updating the existing one. This means the same vertex might appear multiple times in the queue with different distances. The 'processed' check handles this: when we extract a vertex that's already processed, we skip it. The first extraction always has the minimum distance (the correct one).
Let's trace Dijkstra's algorithm on a concrete example to see how the pieces fit together. Consider this weighted directed graph:
```text
A -----(4)-----> B -----(3)-----> D
|                |                ^
(2)             (1)               |
|                |                |
v                v                |
C -----(5)-----> E -----(2)-------+

Vertices: A, B, C, D, E
Edges:
  A → B : 4
  A → C : 2
  B → D : 3
  B → E : 1
  C → E : 5
  E → D : 2

Source: A
Goal: Find shortest paths to all vertices
```

| Step | Extract | dist[A] | dist[B] | dist[C] | dist[D] | dist[E] | Priority Queue |
|---|---|---|---|---|---|---|---|
| Init | - | 0 | ∞ | ∞ | ∞ | ∞ | [(A,0)] |
| 1 | A | 0 | 4 | 2 | ∞ | ∞ | [(C,2), (B,4)] |
| 2 | C | 0 | 4 | 2 | ∞ | 7 | [(B,4), (E,7)] |
| 3 | B | 0 | 4 | 2 | 7 | 5 | [(E,5), (E,7), (D,7)] |
| 4 | E | 0 | 4 | 2 | 7 | 5 | [(D,7), (E,7)] |
| 5 | D | 0 | 4 | 2 | 7 | 5 | [(E,7)] |
| 6 | E* | 0 | 4 | 2 | 7 | 5 | [] |
Detailed trace:
Step 1 — Process A: Extract A (distance 0). Relax A → B: dist[B] = 0 + 4 = 4. Relax A → C: dist[C] = 0 + 2 = 2. Queue: [(C,2), (B,4)].

Step 2 — Process C: Extract C (distance 2). Relax C → E: dist[E] = 2 + 5 = 7. Queue: [(B,4), (E,7)].

Step 3 — Process B: Extract B (distance 4). Relax B → D: dist[D] = 4 + 3 = 7. Relax B → E: 4 + 1 = 5 < 7, so dist[E] improves to 5 and (E,5) is inserted; the stale (E,7) entry remains. Queue: [(E,5), (E,7), (D,7)].

Step 4 — Process E: Extract E (distance 5). Relax E → D: 5 + 2 = 7, which does not beat dist[D] = 7, so nothing changes. Queue: [(D,7), (E,7)].

Step 5 — Process D: Extract D (distance 7). D has no outgoing edges. Queue: [(E,7)].

Step 6 — Skip duplicate E: Extract (E,7), but E is already processed, so this stale entry is discarded. The queue is now empty and the algorithm terminates.
Final distances: A:0, B:4, C:2, D:7, E:5
Notice how dist[E] was initially set to 7 (via C), then improved to 5 (via B). The algorithm correctly finds this shorter path because B was processed before E. The priority queue ensures we always process closer vertices first, giving them a chance to improve paths to farther vertices.
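As a sanity check, the trace above can be reproduced in code. The sketch below maps A–E to indices 0–4 and uses a deliberately simple sorted-array queue instead of a heap (hence the name `dijkstraSimple`), just to stay self-contained:

```typescript
type Edge = { to: number; weight: number };

function dijkstraSimple(graph: Edge[][], source: number): number[] {
  const dist = new Array(graph.length).fill(Infinity);
  const processed = new Array(graph.length).fill(false);
  const pq: [number, number][] = []; // [distance, vertex] pairs

  dist[source] = 0;
  pq.push([0, source]);

  while (pq.length > 0) {
    pq.sort((a, b) => b[0] - a[0]); // minimum ends up at the back
    const [, u] = pq.pop()!;
    if (processed[u]) continue;     // skip stale duplicate entries
    processed[u] = true;
    for (const { to, weight } of graph[u]) {
      if (dist[u] + weight < dist[to]) {
        dist[to] = dist[u] + weight;
        pq.push([dist[to], to]);
      }
    }
  }
  return dist;
}

// The example graph: A=0, B=1, C=2, D=3, E=4
const graph: Edge[][] = [
  [{ to: 1, weight: 4 }, { to: 2, weight: 2 }], // A → B, A → C
  [{ to: 3, weight: 3 }, { to: 4, weight: 1 }], // B → D, B → E
  [{ to: 4, weight: 5 }],                       // C → E
  [],                                           // D has no outgoing edges
  [{ to: 3, weight: 2 }],                       // E → D
];

console.log(dijkstraSimple(graph, 0)); // [0, 4, 2, 7, 5], matching the trace
```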
The algorithm we've presented uses the lazy deletion approach—adding new entries to the priority queue instead of updating existing ones. But there are several valid implementation strategies, each with trade-offs.
Approach 1: Lazy Deletion (Insert Duplicates)
```typescript
// When distance improves, insert a new entry
if (newDist < dist[v]) {
  dist[v] = newDist;
  pq.insert(v, newDist); // May create duplicate entries
}

// In extraction, skip already-processed vertices
const u = pq.extractMin();
if (processed[u]) continue; // Skip stale entry
```

Pros: Simple implementation; doesn't require a decrease-key operation
Cons: Priority queue may contain duplicates; extra memory and extraction overhead
Approach 2: Decrease-Key Operation
```typescript
// When distance improves, update existing entry
if (newDist < dist[v]) {
  dist[v] = newDist;
  if (pq.contains(v)) {
    pq.decreaseKey(v, newDist); // Update priority in-place
  } else {
    pq.insert(v, newDist);
  }
}
```

Pros: No duplicates; smaller queue size; true O((V+E) log V) guarantee
Cons: Requires an indexed heap or map; more complex implementation
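The decrease-key operation requires locating a vertex's heap entry in O(1), typically via a position map kept in sync on every swap. Here's a minimal sketch of such an indexed heap (an illustration of the idea, not a production implementation):

```typescript
// Indexed min-heap over integer vertex ids, supporting decreaseKey.
class IndexedMinHeap {
  private heap: number[] = [];                 // heap of vertex ids
  private pos = new Map<number, number>();     // vertex -> index in heap
  private key = new Map<number, number>();     // vertex -> priority

  contains(v: number): boolean { return this.pos.has(v); }
  isEmpty(): boolean { return this.heap.length === 0; }

  insert(v: number, k: number): void {
    this.key.set(v, k);
    this.pos.set(v, this.heap.length);
    this.heap.push(v);
    this.siftUp(this.heap.length - 1);
  }

  decreaseKey(v: number, k: number): void {
    this.key.set(v, k);
    this.siftUp(this.pos.get(v)!); // key only shrinks, so sifting up suffices
  }

  extractMin(): number | undefined {
    if (this.heap.length === 0) return undefined;
    const min = this.heap[0];
    const last = this.heap.pop()!;
    this.pos.delete(min);
    if (this.heap.length > 0) {
      this.heap[0] = last;
      this.pos.set(last, 0);
      this.siftDown(0);
    }
    return min;
  }

  // Swap two heap slots and keep the position map consistent
  private swap(i: number, j: number): void {
    [this.heap[i], this.heap[j]] = [this.heap[j], this.heap[i]];
    this.pos.set(this.heap[i], i);
    this.pos.set(this.heap[j], j);
  }

  private siftUp(i: number): void {
    while (i > 0) {
      const p = Math.floor((i - 1) / 2);
      if (this.key.get(this.heap[p])! <= this.key.get(this.heap[i])!) break;
      this.swap(p, i);
      i = p;
    }
  }

  private siftDown(i: number): void {
    for (;;) {
      let s = i;
      const l = 2 * i + 1, r = 2 * i + 2;
      if (l < this.heap.length && this.key.get(this.heap[l])! < this.key.get(this.heap[s])!) s = l;
      if (r < this.heap.length && this.key.get(this.heap[r])! < this.key.get(this.heap[s])!) s = r;
      if (s === i) break;
      this.swap(i, s);
      i = s;
    }
  }
}

// Usage sketch: the vertex's single entry is updated in place.
const pq = new IndexedMinHeap();
pq.insert(1, 9);
pq.decreaseKey(1, 4);
console.log(pq.extractMin()); // 1
```

Because every vertex appears at most once, the queue never holds more than V entries.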
Approach 3: Array-Based (For Dense Graphs)
```typescript
function dijkstraArray(graph: Graph, source: number): number[] {
  const n = graph.length;
  const dist = new Array(n).fill(Infinity);
  const processed = new Array(n).fill(false);

  dist[source] = 0;

  for (let i = 0; i < n; i++) {
    // Find unprocessed vertex with minimum distance - O(V)
    let u = -1;
    for (let v = 0; v < n; v++) {
      if (!processed[v] && (u === -1 || dist[v] < dist[u])) {
        u = v;
      }
    }

    if (u === -1 || dist[u] === Infinity) break; // All remaining unreachable

    processed[u] = true;

    // Relax edges
    for (const edge of graph[u]) {
      const newDist = dist[u] + edge.weight;
      if (newDist < dist[edge.to]) {
        dist[edge.to] = newDist;
      }
    }
  }

  return dist;
}
```

| Approach | Time Complexity | Space Overhead | Best For |
|---|---|---|---|
| Lazy Deletion | O((V+E) log V) | O(E) duplicates | General use, simple code |
| Decrease-Key | O((V+E) log V) | O(V) for index map | Strict performance needs |
| Array-Based | O(V²) | O(1) | Dense graphs, small V |
For most practical purposes, use the lazy deletion approach with a binary heap. It's simple to implement (just use the language's built-in priority queue), and the overhead from duplicates is usually manageable. Only optimize to decrease-key or array-based approaches when profiling shows queue operations are a bottleneck.
So far, we've focused on computing shortest path distances. But often we need the actual path—the sequence of vertices from source to destination. This requires tracking how we reached each vertex.
The predecessor array:
We add a parent[] (or predecessor[]) array that records, for each vertex, which vertex we came from to achieve the current shortest path. When we update a distance, we also update the parent.
```typescript
interface DijkstraResult {
  dist: number[];
  parent: (number | null)[];
}

function dijkstraWithPath(graph: Graph, source: number): DijkstraResult {
  const n = graph.length;
  const dist: number[] = new Array(n).fill(Infinity);
  const parent: (number | null)[] = new Array(n).fill(null);
  const processed: boolean[] = new Array(n).fill(false);
  const pq = new MinPriorityQueue<number>();

  dist[source] = 0;
  pq.insert(source, 0);

  while (!pq.isEmpty()) {
    const u = pq.extractMin()!;
    if (processed[u]) continue;
    processed[u] = true;

    for (const edge of graph[u]) {
      const v = edge.to;
      const newDist = dist[u] + edge.weight;
      if (newDist < dist[v]) {
        dist[v] = newDist;
        parent[v] = u; // Record: we reached v from u
        pq.insert(v, newDist);
      }
    }
  }

  return { dist, parent };
}

// Reconstruct path from source to target
function getPath(parent: (number | null)[], target: number): number[] {
  const path: number[] = [];
  let current: number | null = target;
  while (current !== null) {
    path.push(current);
    current = parent[current];
  }
  return path.reverse(); // Reverse to get source → target order
}

// Usage example:
const { dist, parent } = dijkstraWithPath(graph, 0); // Source = vertex 0
const pathTo3 = getPath(parent, 3); // Get path from 0 to 3
console.log(`Shortest path to 3: ${pathTo3.join(' → ')}`);
console.log(`Distance: ${dist[3]}`);
```

How path reconstruction works:
1. Start at the target vertex.
2. Repeatedly follow parent[] pointers, collecting vertices, until reaching null (the source has no parent).
3. Reverse the collected list to get the path in source → target order.
Example trace:
For our previous graph (source A, target D): parent[B] = A, parent[C] = A, parent[E] = B (updated from C when B was processed), and parent[D] = B. Reconstructing from D therefore walks D → B → A and reverses it, yielding the path A → B → D with distance 7.
Note that dist[D] = 7 can be achieved in two ways: A → B → D (4 + 3 = 7) and A → B → E → D (4 + 1 + 2 = 7).
Both give the same distance! With the strict < comparison used above, the parent array keeps the first shortest path discovered (here A → B → D, since the later E → D relaxation does not strictly improve dist[D]); with <= it would keep the last. For ties, both are valid shortest paths.
When multiple paths have the same shortest distance, the algorithm finds one of them (depending on graph representation and tie-breaking). If you need all shortest paths, you must use a modified algorithm that tracks multiple parents per vertex.
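One common modification keeps a list of parents per vertex: a strict improvement resets the list, while a tie appends another parent. Below is a sketch using the page's example graph with A–E mapped to 0–4 (`dijkstraAllParents` is a hypothetical name, and a simple sorted-array queue keeps the sketch self-contained):

```typescript
type Edge = { to: number; weight: number };

// Dijkstra variant recording every parent that lies on some shortest path.
function dijkstraAllParents(graph: Edge[][], source: number) {
  const n = graph.length;
  const dist = new Array(n).fill(Infinity);
  const parents: number[][] = Array.from({ length: n }, () => []);
  const processed = new Array(n).fill(false);
  const pq: [number, number][] = [[0, source]];
  dist[source] = 0;

  while (pq.length > 0) {
    pq.sort((a, b) => b[0] - a[0]); // simple queue: minimum at the back
    const [, u] = pq.pop()!;
    if (processed[u]) continue;
    processed[u] = true;
    for (const { to: v, weight } of graph[u]) {
      const newDist = dist[u] + weight;
      if (newDist < dist[v]) {
        dist[v] = newDist;
        parents[v] = [u];   // strictly better: discard old parents
        pq.push([newDist, v]);
      } else if (newDist === dist[v]) {
        parents[v].push(u); // tie: another shortest path reaches v via u
      }
    }
  }
  return { dist, parents };
}

// On the example graph (A=0 … E=4), D has two shortest-path parents: B and E.
const graph: Edge[][] = [
  [{ to: 1, weight: 4 }, { to: 2, weight: 2 }],
  [{ to: 3, weight: 3 }, { to: 4, weight: 1 }],
  [{ to: 4, weight: 5 }],
  [],
  [{ to: 3, weight: 2 }],
];
const { dist, parents } = dijkstraAllParents(graph, 0);
console.log(dist[3], parents[3]); // 7 [ 1, 4 ]: paths via B and via E
```

Following every combination of parents backward from a vertex enumerates all of its shortest paths.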
In practice, you rarely need to implement a priority queue from scratch. Most programming languages provide efficient implementations. Here's how to use them for Dijkstra's algorithm:
JavaScript/TypeScript:
```javascript
// JavaScript doesn't have a built-in priority queue,
// so we often use a sorted array or a small MinHeap class.
// For competitive programming, a simple approach:

function dijkstra(graph, source) {
  const n = graph.length;
  const dist = Array(n).fill(Infinity);
  const processed = Array(n).fill(false);

  // Simple heap using array of [distance, vertex] pairs
  // Uses sorting on each extract - not optimal but clear
  const pq = [];

  dist[source] = 0;
  pq.push([0, source]);

  while (pq.length > 0) {
    // Sort to get minimum at end, pop for O(1) removal
    pq.sort((a, b) => b[0] - a[0]);
    const [d, u] = pq.pop();

    if (processed[u]) continue;
    processed[u] = true;

    for (const [v, weight] of graph[u]) {
      const newDist = dist[u] + weight;
      if (newDist < dist[v]) {
        dist[v] = newDist;
        pq.push([newDist, v]);
      }
    }
  }

  return dist;
}
```

Python:
```python
import heapq

def dijkstra(graph, source):
    """
    graph: dict of {vertex: [(neighbor, weight), ...]}
    Returns: dict of {vertex: shortest_distance}
    """
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    processed = set()

    # heapq is a min-heap; push (distance, vertex) tuples
    pq = [(0, source)]

    while pq:
        d, u = heapq.heappop(pq)  # Extract minimum

        if u in processed:
            continue
        processed.add(u)

        for v, weight in graph[u]:
            new_dist = dist[u] + weight
            if new_dist < dist[v]:
                dist[v] = new_dist
                heapq.heappush(pq, (new_dist, v))

    return dist
```

C++ (STL priority_queue):
```cpp
#include <vector>
#include <queue>
#include <limits>

using namespace std;

vector<int> dijkstra(const vector<vector<pair<int,int>>>& graph, int source) {
    int n = graph.size();
    vector<int> dist(n, numeric_limits<int>::max());
    vector<bool> processed(n, false);

    // Min-heap: pair<distance, vertex>
    // Note: C++ priority_queue is max-heap by default, so negate or use greater<>
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<>> pq;

    dist[source] = 0;
    pq.push({0, source});

    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();

        if (processed[u]) continue;
        processed[u] = true;

        for (auto [v, weight] : graph[u]) {
            int newDist = dist[u] + weight;
            if (newDist < dist[v]) {
                dist[v] = newDist;
                pq.push({newDist, v});
            }
        }
    }

    return dist;
}
```

C++'s std::priority_queue is a max-heap by default! For Dijkstra, use greater<> as the comparator to make it a min-heap, or negate distances when pushing/popping.
We've now mastered the mechanics of Dijkstra's algorithm through the priority queue lens. Let's consolidate the key points:
- Dijkstra's algorithm repeatedly extracts the unprocessed vertex with minimum distance, which is exactly the operation a priority queue provides.
- A binary min-heap yields O((V+E) log V) total time, the best practical choice for sparse graphs; a simple O(V²) array scan can win on dense graphs.
- Lazy deletion (inserting duplicates and skipping already-processed vertices) is the simplest correct implementation strategy; decrease-key avoids duplicates at the cost of complexity.
- A parent[] array recorded during relaxation lets us reconstruct the actual shortest paths, not just their lengths.
What's next:
We've seen the algorithm's structure, but we haven't examined the crucial operation that drives progress: relaxation. The next page explores the relaxation process in detail—what it means to "relax" an edge, why relaxation converges to correct answers, and how the relaxation order affects algorithm behavior.
You now understand how priority queues enable efficient implementation of Dijkstra's algorithm, the trade-offs between different implementation approaches, and how to reconstruct actual shortest paths. Next, we'll examine the relaxation process that makes the algorithm converge to correct shortest path distances.