Space complexity tells only half the story. The true test of a graph representation lies in how efficiently it supports the operations your algorithm actually performs. Different algorithms have vastly different operational profiles—some query edges constantly, others iterate through neighbors, and still others modify the graph structure dynamically.
The critical insight: There is no universally 'faster' representation. The optimal choice depends entirely on which operations dominate your algorithm's runtime.
This page dissects the time complexity of every fundamental graph operation, arming you with the precise knowledge to match representation to algorithmic need.
By the end of this page, you will understand the exact time complexity of edge queries, neighbor iteration, edge insertion, edge deletion, vertex operations, and graph traversal for both representations. You'll learn to analyze your algorithm's operation profile and select the representation that minimizes total runtime.
Before diving into complexity analysis, let's categorize the operations that graph algorithms perform. Every graph algorithm, no matter how complex, is built from these five fundamental operations:
1. Edge Existence Query (hasEdge)
2. Neighbor Iteration (getNeighbors)
3. Edge Modification (addEdge, removeEdge)
4. Vertex Operations (addVertex, removeVertex)
5. Degree Query
| Algorithm | Edge Query | Neighbor Scan | Edge Modify | Dominant Operation |
|---|---|---|---|---|
| BFS / DFS | Rare | Very High | None | Neighbor Iteration |
| Dijkstra's | Rare | Very High | None | Neighbor Iteration |
| Floyd-Warshall | Very High | None | None | Edge Query |
| Prim's MST (dense) | High | Moderate | None | Edge Query |
| Kruskal's MST | Moderate | None | High | Edge Modification |
| Graph Coloring | High | High | None | Both |
| Dynamic Connectivity | Moderate | Moderate | High | Edge Modification |
Before selecting a representation, mentally trace through your algorithm and categorize which operations it performs most frequently. The representation that optimizes your hot path is the right choice—even if it's slower for operations you rarely use.
The edge existence query asks: "Is there an edge from vertex u to vertex v?" This is perhaps the most revealing operation for differentiating the two representations.
Adjacency Matrix: O(1)
The matrix provides instant random access. Checking M[u][v] is a single array lookup—one memory access regardless of graph size.
hasEdge(u, v) → return matrix[u][v] ≠ 0
This is the matrix's killer feature. For algorithms that repeatedly query specific edges (like Floyd-Warshall checking all pairs, or checking if a specific connection exists), the matrix is unbeatable.
```typescript
// ADJACENCY MATRIX: O(1) edge query
class MatrixGraph {
  private matrix: number[][];

  hasEdge(u: number, v: number): boolean {
    // Single array access - O(1)
    return this.matrix[u][v] !== 0;
  }

  getEdgeWeight(u: number, v: number): number | null {
    const weight = this.matrix[u][v];
    return weight !== 0 ? weight : null;
  }
}

// ADJACENCY LIST: O(degree(u)) edge query
class ListGraph {
  private adjacency: Map<number, Set<number>>; // Using Set for O(1) lookup

  // With Set: O(1) average, O(degree(u)) worst case
  hasEdge(u: number, v: number): boolean {
    return this.adjacency.get(u)?.has(v) ?? false;
  }
}

// ADJACENCY LIST with Array: O(degree(u)) always
class ArrayListGraph {
  private adjacency: number[][]; // Using Array

  hasEdge(u: number, v: number): boolean {
    // Must scan neighbor array - O(degree(u))
    const neighbors = this.adjacency[u];
    for (const neighbor of neighbors) {
      if (neighbor === v) return true;
    }
    return false;
  }
}
```

Adjacency List: O(degree(u)) — or O(1) with hash-based neighbor storage
With a naive array-based neighbor list, we must scan through all neighbors of u to find v. This takes O(degree(u)) time—which can be O(|V|) in the worst case for a highly connected vertex.
Optimization: Using a Set or HashMap for each vertex's neighbors reduces average edge query to O(1), though with higher constant factors than the matrix.
When this matters:
Consider an algorithm that performs |V|² edge queries (like checking all pairs). With a matrix, each query is O(1), so the total is O(|V|²). With array-based lists, each query costs O(degree(u)), and summing over all pairs gives roughly O(|V| · |E|)—far worse on anything but the sparsest graphs.

For a dense graph where |E| ≈ |V|², a hash-set-based list brings queries back to O(1) average and the totals become comparable. But even then, the matrix's constant factor is much smaller (single array access vs. hash lookup).
Using a hash set for neighbors gives O(1) edge queries but increases memory overhead and slows neighbor iteration (hash set iteration is slower than array iteration). If you primarily query edges but rarely iterate neighbors, the hash set optimization might be worthwhile. Profile carefully!
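Floyd-Warshall is the canonical edge-query-heavy algorithm: its triple loop performs |V|³ distance lookups, each a single O(1) matrix access, which is why it is conventionally written directly against a matrix. A minimal sketch (function and variable names are illustrative):

```typescript
const INF = Number.POSITIVE_INFINITY;

// Floyd-Warshall all-pairs shortest paths over a distance matrix.
// Every relaxation step is three O(1) matrix reads and at most one write.
function floydWarshall(dist: number[][]): number[][] {
  const n = dist.length;
  const d = dist.map(row => [...row]); // copy so the input stays intact
  for (let k = 0; k < n; k++) {
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        if (d[i][k] + d[k][j] < d[i][j]) {
          d[i][j] = d[i][k] + d[k][j]; // relax path i -> k -> j
        }
      }
    }
  }
  return d;
}

// 3 vertices: 0→1 (1), 1→2 (1), 0→2 (5); shortest 0→2 improves to 2.
const d = floydWarshall([
  [0, 1, 5],
  [INF, 0, 1],
  [INF, INF, 0],
]);
console.log(d[0][2]); // 2
```

Running the same triple loop against an array-based adjacency list would replace each O(1) lookup with an O(degree) scan—exactly the mismatch this section warns about.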
Neighbor iteration answers: "What are all the vertices connected to v?" This operation is the heart of graph traversal algorithms—BFS, DFS, Dijkstra's, and countless others spend most of their time iterating through neighbors.
Adjacency List: O(degree(v))
This is the list's defining strength. The neighbors of v are stored contiguously (or at least in a dedicated collection), and iteration touches exactly degree(v) elements—no more, no less.
```typescript
// ADJACENCY LIST: O(degree(v)) - optimal
class ListGraph {
  private adjacency: number[][];
  // Weighted variant stores { to, weight } records per neighbor
  private weighted: Array<Array<{ to: number; weight: number }>>;

  *neighbors(v: number): Generator<number> {
    // Iterate through exactly degree(v) neighbors
    for (const neighbor of this.adjacency[v]) {
      yield neighbor;
    }
  }

  // For weighted graphs
  *weightedNeighbors(v: number): Generator<[number, number]> {
    for (const { to, weight } of this.weighted[v]) {
      yield [to, weight];
    }
  }
}

// ADJACENCY MATRIX: O(|V|) - must scan entire row
class MatrixGraph {
  private matrix: number[][];
  private vertexCount: number;

  *neighbors(v: number): Generator<number> {
    // Must check all |V| possible neighbors
    for (let u = 0; u < this.vertexCount; u++) {
      if (this.matrix[v][u] !== 0) {
        yield u;
      }
    }
  }

  // For weighted graphs
  *weightedNeighbors(v: number): Generator<[number, number]> {
    for (let u = 0; u < this.vertexCount; u++) {
      if (this.matrix[v][u] !== 0) {
        yield [u, this.matrix[v][u]];
      }
    }
  }
}
```

Adjacency Matrix: O(|V|)
To find neighbors in a matrix, we must scan the entire row for vertex v, checking each of the |V| entries to see if an edge exists. This happens regardless of how many actual neighbors exist.
The implications are profound:
Consider BFS or DFS, which visits every vertex and iterates its neighbors once. With an adjacency list, the neighbor scans across all vertices sum to Σ degree(v) = O(|E|), giving O(|V| + |E|) overall. With a matrix, every vertex costs a full O(|V|) row scan, for O(|V|²) total.
For a sparse graph with |E| = O(|V|), the list is |V| times faster for traversal!
For a dense graph with |E| = O(|V|²), both approaches are O(|V|²), but the matrix scans zeros while the list visits only actual edges.
| Graph Type | Edges | List: BFS Time | Matrix: BFS Time | List Advantage |
|---|---|---|---|---|
| Sparse (tree-like) | O(|V|) | O(|V|) | O(|V|²) | |V|× faster |
| Sparse (road network) | O(|V|) | O(|V|) | O(|V|²) | |V|× faster |
| Moderate | O(|V| log |V|) | O(|V| log |V|) | O(|V|²) | |V| / log |V|× faster |
| Semi-dense | O(|V|^1.5) | O(|V|^1.5) | O(|V|²) | √|V|× faster |
| Dense | O(|V|²) | O(|V|²) | O(|V|²) | Comparable (list still wins on constants) |
BFS, DFS, Dijkstra's, Bellman-Ford, Prim's, Kruskal's, topological sort, strongly connected components—all of these spend most of their time iterating neighbors. For nearly all graph algorithms, efficient neighbor iteration is paramount. This is why adjacency lists are the default choice.
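To make the traversal cost concrete, here is a minimal BFS sketch over an array-based adjacency list (the name `bfsOrder` is illustrative, not from a library). Each vertex is dequeued once and its neighbor list scanned once, so total work is O(|V| + |E|):

```typescript
// BFS over an array-based adjacency list (undirected graph).
// Total work: O(|V|) dequeues + O(|E|) neighbor visits.
function bfsOrder(adjacency: number[][], start: number): number[] {
  const visited: boolean[] = new Array(adjacency.length).fill(false);
  const order: number[] = [];
  const queue: number[] = [start];
  visited[start] = true;
  while (queue.length > 0) {
    const v = queue.shift()!; // note: shift() is O(n); a ring buffer keeps this O(1)
    order.push(v);
    for (const u of adjacency[v]) { // O(degree(v)) - the list's strength
      if (!visited[u]) {
        visited[u] = true;
        queue.push(u);
      }
    }
  }
  return order;
}

// Path graph 0–1–2–3: BFS from 0 visits vertices in index order.
const path = [[1], [0, 2], [1, 3], [2]];
const order = bfsOrder(path, 0);
console.log(order); // [0, 1, 2, 3]
```

Swapping in the matrix's `neighbors` generator changes nothing in the algorithm's structure, yet silently turns every neighbor scan into an O(|V|) row sweep.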
Dynamic graphs require edge insertion and deletion. The complexity of these operations varies significantly between representations and depends on implementation choices.
Adjacency Matrix: O(1) for both add and remove
Modifying an edge in a matrix is trivial—set or clear a single cell:
addEdge(u, v, weight) → matrix[u][v] = weight
removeEdge(u, v) → matrix[u][v] = 0
For undirected graphs, update both M[u][v] and M[v][u]. Still O(1).
```typescript
// ADJACENCY MATRIX: O(1) edge modification
class MatrixGraph {
  private matrix: number[][];

  addEdge(u: number, v: number, weight: number = 1): void {
    this.matrix[u][v] = weight;
    // For undirected: this.matrix[v][u] = weight;
  }

  removeEdge(u: number, v: number): void {
    this.matrix[u][v] = 0;
    // For undirected: this.matrix[v][u] = 0;
  }
}

// ADJACENCY LIST: Complexity depends on implementation
class ArrayListGraph {
  private adjacency: number[][];

  // O(1) amortized - append to array
  addEdge(u: number, v: number): void {
    this.adjacency[u].push(v);
    // For undirected: this.adjacency[v].push(u);
  }

  // O(degree(u)) - must find and remove element
  removeEdge(u: number, v: number): void {
    const neighbors = this.adjacency[u];
    const index = neighbors.indexOf(v); // O(degree(u))
    if (index !== -1) {
      neighbors.splice(index, 1); // O(degree(u)) shift
    }
  }
}

class SetListGraph {
  private adjacency: Map<number, Set<number>>;

  // O(1) average - Set.add()
  addEdge(u: number, v: number): void {
    if (!this.adjacency.has(u)) {
      this.adjacency.set(u, new Set());
    }
    this.adjacency.get(u)!.add(v);
  }

  // O(1) average - Set.delete()
  removeEdge(u: number, v: number): void {
    this.adjacency.get(u)?.delete(v);
  }
}
```

Adjacency List: Varies by Implementation
| Implementation | Add Edge | Remove Edge | Notes |
|---|---|---|---|
| Array (unsorted) | O(1) amortized | O(degree) | Simple, fast add, slow remove |
| Linked List | O(1) | O(degree) | Same as array for this purpose |
| Hash Set | O(1) average | O(1) average | Fast modification, slower iteration |
| Sorted Array | O(degree) | O(degree) | Binary search for queries, but costly modification |
Preventing Duplicate Edges:
With arrays, adding an edge without checking for duplicates is O(1), but if you need to prevent duplicates, you must first search—making it O(degree). Hash sets naturally prevent duplicates in O(1).
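A small sketch of that trade-off (the helper name `addEdgeUnique` is hypothetical): duplicate prevention on an array-based list forces a membership scan before every insert, which a Set-based list avoids entirely.

```typescript
// Duplicate-safe insertion on an array-based adjacency list.
// The includes() scan makes the checked insert O(degree(u));
// a Set-based list gets the same guarantee in O(1) average.
function addEdgeUnique(adjacency: number[][], u: number, v: number): boolean {
  if (adjacency[u].includes(v)) return false; // O(degree(u)) membership scan
  adjacency[u].push(v); // O(1) amortized append
  return true;
}

const adj: number[][] = [[1], [0], []];
console.log(addEdgeUnique(adj, 0, 1)); // false - edge already present
console.log(addEdgeUnique(adj, 0, 2)); // true - appended
console.log(adj[0]); // [1, 2]
```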
If you build the graph once and never modify it, array-based lists are ideal (fast iteration, no overhead). If you frequently add/remove edges, hash-based neighbor sets provide O(1) modifications at the cost of slower iteration. Matrices offer O(1) modification but may be space-prohibitive for large sparse graphs.
Vertex operations are less common than edge operations but can be critical for certain algorithms (contraction hierarchies, vertex elimination, etc.).
Adjacency Matrix:
Adding a vertex: Requires allocating a new (|V|+1) × (|V|+1) matrix and copying the old one. O(|V|²) time and temporary O(|V|²) space.
Removing a vertex: Similarly requires matrix reconstruction or, at minimum, clearing a row and column. O(|V|) if we just mark as deleted, O(|V|²) for actual compaction.
The matrix is fundamentally hostile to dynamic vertex sets.
```typescript
// ADJACENCY MATRIX: Expensive vertex operations
class MatrixGraph {
  private matrix: number[][];

  // O(|V|²) - Must resize entire matrix
  addVertex(): number {
    const newSize = this.matrix.length + 1;
    const newMatrix = Array(newSize).fill(null)
      .map((_, i) => {
        if (i < this.matrix.length) {
          return [...this.matrix[i], 0]; // Copy old row + new column
        }
        return Array(newSize).fill(0); // New row
      });
    this.matrix = newMatrix;
    return newSize - 1;
  }

  // O(|V|) - Clear row and column (or O(|V|²) for compaction)
  removeVertex(v: number): void {
    // Mark as removed by zeroing row and column
    for (let i = 0; i < this.matrix.length; i++) {
      this.matrix[v][i] = 0; // Clear row
      this.matrix[i][v] = 0; // Clear column
    }
    // Note: This leaves a "gap" - actual compaction is O(|V|²)
  }
}

// ADJACENCY LIST: More flexible vertex operations
class ListGraph {
  private adjacency: Map<number, number[]>;
  private nextId: number = 0;

  // O(1) - Just add empty neighbor list
  addVertex(): number {
    const id = this.nextId++;
    this.adjacency.set(id, []);
    return id;
  }

  // O(|V| + |E|) worst case - Must remove from all neighbor lists
  removeVertex(v: number): void {
    // Remove v's own adjacency list: O(1)
    this.adjacency.delete(v);
    // Remove v from all other vertices' neighbor lists: O(|E|) worst case
    // In practice, O(degree(v) × average_degree) if we track back-edges
    for (const [u, neighbors] of this.adjacency) {
      const index = neighbors.indexOf(v);
      if (index !== -1) {
        neighbors.splice(index, 1);
      }
    }
  }
}

// OPTIMIZED: Track incoming edges for faster vertex removal
class BidirectionalListGraph {
  private outgoing: Map<number, Set<number>>;
  private incoming: Map<number, Set<number>>;

  // O(degree(v)) - Only update affected neighbors
  removeVertex(v: number): void {
    // Remove edges FROM v: O(out_degree(v))
    for (const neighbor of this.outgoing.get(v) ?? []) {
      this.incoming.get(neighbor)?.delete(v);
    }
    // Remove edges TO v: O(in_degree(v))
    for (const predecessor of this.incoming.get(v) ?? []) {
      this.outgoing.get(predecessor)?.delete(v);
    }
    this.outgoing.delete(v);
    this.incoming.delete(v);
  }
}
```

| Operation | Matrix | List (basic) | List (bidirectional) |
|---|---|---|---|
| Add Vertex | O(|V|²) | O(1) | O(1) |
| Remove Vertex | O(|V|) to O(|V|²) | O(|V| + |E|) | O(degree(v)) |
| Memory Overhead | None | None | 2× edge storage |
Maintaining both outgoing and incoming edge lists doubles storage but enables O(degree) vertex removal. This is worthwhile for algorithms that frequently modify vertex sets. Many graph databases use this approach internally.
Let's examine two more operations that frequently appear in graph algorithms:
Degree Query: degree(v)
Adjacency List: O(1) — The degree is simply the length of the neighbor list
Adjacency Matrix: O(|V|) — Must count non-zero entries in the row (unless we maintain a separate degree array)
Total Edge Count: edgeCount()
Adjacency List: O(|V|) — Sum the lengths of all neighbor lists, divide by 2 for undirected
Adjacency Matrix: O(|V|²) — Must scan entire matrix (unless we maintain a counter)
```typescript
// ADJACENCY LIST
class ListGraph {
  private adjacency: number[][];

  // O(1) - Array length
  degree(v: number): number {
    return this.adjacency[v].length;
  }

  // O(|V|) - Sum all degrees
  edgeCount(): number {
    let total = 0;
    for (const neighbors of this.adjacency) {
      total += neighbors.length;
    }
    return total / 2; // Each edge counted twice for undirected
  }

  // O(|E|) - Iterate all edges without duplicates
  *allEdges(): Generator<[number, number]> {
    for (let u = 0; u < this.adjacency.length; u++) {
      for (const v of this.adjacency[u]) {
        if (u < v) { // Only yield each edge once
          yield [u, v];
        }
      }
    }
  }
}

// ADJACENCY MATRIX
class MatrixGraph {
  private matrix: number[][];
  private _edgeCount: number = 0; // Maintain counter for O(1) edge count

  // O(|V|) without caching
  degree(v: number): number {
    let count = 0;
    for (let u = 0; u < this.matrix.length; u++) {
      if (this.matrix[v][u] !== 0) count++;
    }
    return count;
  }

  // O(|V|²) to compute, O(1) if cached
  edgeCount(): number {
    return this._edgeCount; // Maintained during add/remove
  }

  // O(|V|²) - Must scan entire matrix
  *allEdges(): Generator<[number, number]> {
    for (let u = 0; u < this.matrix.length; u++) {
      for (let v = u + 1; v < this.matrix.length; v++) {
        if (this.matrix[u][v] !== 0) {
          yield [u, v];
        }
      }
    }
  }
}
```

Iterating All Edges:
Adjacency List: O(|E|) — Touch each edge exactly once (with care to avoid duplicates)
Adjacency Matrix: O(|V|²) — Must check every cell even if most are empty
This matters for algorithms that process all edges:
For Kruskal's algorithm on a sparse graph, collecting the edge list from an adjacency list takes O(|E|) = O(|V|), after which the O(|E| log |E|) sort dominates. A matrix forces an O(|V|²) scan just to find the edges before sorting even begins.
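As a sketch of the list-side cost (the type and function names here are illustrative): gathering the edge list Kruskal's algorithm needs touches each edge exactly once, so the sort is the bottleneck.

```typescript
// Collect each undirected edge once (u < v) from a weighted adjacency
// list, then sort ascending by weight - the input Kruskal's algorithm needs.
type WeightedAdj = Array<Array<[number, number]>>; // per vertex: [neighbor, weight]

function sortedEdges(adj: WeightedAdj): Array<[number, number, number]> {
  const edges: Array<[number, number, number]> = []; // [u, v, weight]
  for (let u = 0; u < adj.length; u++) {
    for (const [v, w] of adj[u]) {
      if (u < v) edges.push([u, v, w]); // O(|E|) collection pass
    }
  }
  return edges.sort((a, b) => a[2] - b[2]); // O(|E| log |E|) sort
}

// Triangle: 0-1 (weight 3), 0-2 (weight 2), 1-2 (weight 1)
const triangle: WeightedAdj = [
  [[1, 3], [2, 2]],
  [[0, 3], [2, 1]],
  [[0, 2], [1, 1]],
];
console.log(sortedEdges(triangle));
// [[1, 2, 1], [0, 2, 2], [0, 1, 3]]
```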
Let's consolidate everything into a comprehensive reference table:
| Operation | Adjacency Matrix | Adjacency List (Array) | Adjacency List (Hash Set) |
|---|---|---|---|
| hasEdge(u, v) | O(1) | O(degree(u)) | O(1) average |
| getNeighbors(v) | O(|V|) | O(degree(v)) | O(degree(v)) |
| addEdge(u, v) | O(1) | O(1) amortized | O(1) average |
| removeEdge(u, v) | O(1) | O(degree(u)) | O(1) average |
| addVertex() | O(|V|²) | O(1) | O(1) |
| removeVertex(v) | O(|V|) to O(|V|²) | O(|V| + |E|) | O(degree(v)) with tracking |
| degree(v) | O(|V|) | O(1) | O(1) |
| edgeCount() | O(|V|²) | O(|V|) | O(|V|) |
| allEdges() | O(|V|²) | O(|E|) | O(|E|) |
| BFS / DFS | O(|V|²) | O(|V| + |E|) | O(|V| + |E|) |
We've conducted a thorough analysis of time complexity across all fundamental graph operations. Here are the essential takeaways:

1. Adjacency lists dominate neighbor iteration and traversal: O(degree(v)) per vertex vs. the matrix's O(|V|) row scan.
2. Adjacency matrices win on edge queries and edge modification: O(1) with tiny constant factors.
3. Hash-set neighbor storage is a middle ground: O(1) average queries and modifications, at the cost of slower iteration and extra memory.
4. Vertex operations strongly favor lists; matrices require O(|V|²) resizing to grow.
5. Match the representation to your algorithm's dominant operation, not to any single benchmark.
What's Next:
We've established the space and time trade-offs. The next page explores the concepts of sparse vs dense graphs in depth—understanding graph density is the key to making the right representation choice without detailed profiling for every problem.
You now have a comprehensive understanding of time complexity for every fundamental graph operation. You can analyze any graph algorithm's operational profile and select the representation that minimizes total runtime. Next, we'll explore sparse vs dense graphs to develop quick intuition for representation choice.