In the real world, not all connections are created equal. The flight from New York to Los Angeles takes 5 hours, but LA to San Francisco takes only 1 hour. The highway has a higher speed limit but tolls; the back road is free but slower. A strong friendship differs from a casual acquaintance. These quantitative differences between edges are the defining characteristic of weighted graphs.
Weighted graphs extend our basic graph model by associating a numerical value—called a weight—with each edge. This simple addition dramatically expands the modeling power of graphs and fundamentally changes which algorithms we can and should use. The shortest path is no longer about fewest edges; it's about minimum total weight.
By the end of this page, you will understand the formal definition and various representations of weighted graphs, recognize their diverse real-world applications, grasp how weights fundamentally change algorithmic approaches, and appreciate the crucial distinction between positive, zero, and negative weights.
A weighted graph is a graph where each edge has an associated numerical value called its weight, cost, length, or capacity, depending on the application context.
Formal Definition:
A weighted graph G is typically defined as a triple G = (V, E, w), where:

- V is the set of vertices,
- E is the set of edges, and
- w: E → ℝ is a weight function assigning a real number to each edge.
For an edge e = (u, v) or e = {u, v}, we write w(e) or w(u, v) to denote its weight.
Alternative Notation:
Some authors define weighted graphs as G = (V, E) where each edge is a tuple containing its weight: e = (u, v, w). Both notations capture the same concept—edges with associated values.
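As a small illustration of the tuple convention (the variable names here are our own, not from any particular library), a graph can be stored as a plain list of (u, v, w) tuples and converted to an adjacency mapping when an algorithm needs neighbor lookups:

```python
from collections import defaultdict

# Edges stored directly as (u, v, weight) tuples
edges = [('A', 'B', 3), ('A', 'C', 1), ('C', 'B', 1)]

# Convert to an adjacency mapping: u -> list of (v, weight)
adjacency = defaultdict(list)
for u, v, w in edges:
    adjacency[u].append((v, w))

print(dict(adjacency))
# {'A': [('B', 3), ('C', 1)], 'C': [('B', 1)]}
```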
Weights can represent distance (roads), time (flights), cost (pricing), capacity (network flow), probability (Markov chains), similarity (clustering), or any other quantitative relationship. The same graph structure with different weight interpretations yields different problem domains.
Weighted Path and Distance:
In a weighted graph, the weight of a path is the sum of weights of edges along the path:
w(P) = Σw(eᵢ) for all edges eᵢ in path P
The weighted shortest path from u to v is the path minimizing this sum. This differs fundamentally from unweighted graphs, where the shortest path simply minimizes edge count.
Example:
Consider weighted edges: A→B(3), A→C(1), C→B(1)
The direct path A→B uses one edge but has weight 3, while A→C→B uses two edges with total weight 1 + 1 = 2. The weighted shortest path is the two-edge path.
This is why BFS (which finds shortest edge-count paths) doesn't work for weighted graphs—we need algorithms like Dijkstra's that consider edge weights.
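A quick sketch of the comparison, using the edges from the example above (the helper function is ours):

```python
def path_weight(path, weights):
    """Sum edge weights along a path, given a {(u, v): w} mapping."""
    return sum(weights[(u, v)] for u, v in zip(path, path[1:]))

weights = {('A', 'B'): 3, ('A', 'C'): 1, ('C', 'B'): 1}

direct = path_weight(['A', 'B'], weights)      # 1 edge, weight 3
via_c = path_weight(['A', 'C', 'B'], weights)  # 2 edges, weight 2

# BFS would return the 1-edge path; a weight-aware algorithm returns the lighter one
print(direct, via_c)  # 3 2
```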
| Term | Definition | Example |
|---|---|---|
| Weight Function | Mapping from edges to real numbers: w: E → ℝ | w(A,B) = 5 means edge A-B has weight 5 |
| Path Weight | Sum of weights of edges in a path | Path A→B→C with w(A,B)=2, w(B,C)=3 has weight 5 |
| Weighted Shortest Path | Path minimizing total weight (not edge count) | May use more edges if their total weight is less |
| Positive Weights | All edge weights > 0 | Enables efficient algorithms like Dijkstra's |
| Non-negative Weights | All edge weights ≥ 0 | Zero-weight edges are valid; Dijkstra's still works |
| Negative Weights | Some edge weights < 0 | Requires Bellman-Ford; may have negative cycles |
| Negative Cycle | Cycle whose total weight is negative | Infinitely decreasing paths; no shortest path exists |
Weighted graphs require modified data structures to store both connectivity and edge weights. The two primary representations—adjacency matrix and adjacency list—adapt naturally to weighted graphs.
Weighted Adjacency Matrix:
For a graph with n vertices, the adjacency matrix M is an n×n matrix where:

- M[i][j] = w(i, j) if an edge from vertex i to vertex j exists,
- M[i][j] = ∞ (or another sentinel) if no edge exists, and
- M[i][i] = 0 by convention.
Pros: O(1) edge weight lookup, simple implementation, good for dense graphs
Cons: O(V²) space regardless of edge count, wasteful for sparse graphs
Weighted Adjacency List:
For each vertex, store a list of (neighbor, weight) pairs:
Pros: O(V + E) space, efficient for sparse graphs, natural for traversals
Cons: O(degree) edge lookup, slightly more complex than unweighted
```python
from typing import Dict, List, Tuple, Optional
import math


class WeightedGraphMatrix:
    """
    Weighted graph using adjacency matrix representation.

    Best for: Dense graphs, frequent edge weight lookups,
    algorithms that iterate over all edges.
    """

    def __init__(self, vertices: List[str]):
        self.vertices = vertices
        self.vertex_to_idx = {v: i for i, v in enumerate(vertices)}
        n = len(vertices)
        # Initialize with infinity (no edge)
        self.matrix = [[math.inf] * n for _ in range(n)]
        # Distance to self is 0
        for i in range(n):
            self.matrix[i][i] = 0

    def add_edge(self, u: str, v: str, weight: float, directed: bool = True):
        i, j = self.vertex_to_idx[u], self.vertex_to_idx[v]
        self.matrix[i][j] = weight
        if not directed:
            self.matrix[j][i] = weight

    def get_weight(self, u: str, v: str) -> float:
        i, j = self.vertex_to_idx[u], self.vertex_to_idx[v]
        return self.matrix[i][j]

    def has_edge(self, u: str, v: str) -> bool:
        return self.get_weight(u, v) != math.inf


class WeightedGraphList:
    """
    Weighted graph using adjacency list representation.

    Best for: Sparse graphs, traversal algorithms,
    memory-constrained environments.
    """

    def __init__(self):
        # graph[u] = [(v, weight), ...]
        self.graph: Dict[str, List[Tuple[str, float]]] = {}

    def add_vertex(self, v: str):
        if v not in self.graph:
            self.graph[v] = []

    def add_edge(self, u: str, v: str, weight: float, directed: bool = True):
        self.add_vertex(u)
        self.add_vertex(v)
        self.graph[u].append((v, weight))
        if not directed:
            self.graph[v].append((u, weight))

    def get_neighbors(self, u: str) -> List[Tuple[str, float]]:
        return self.graph.get(u, [])

    def get_weight(self, u: str, v: str) -> Optional[float]:
        for neighbor, weight in self.graph.get(u, []):
            if neighbor == v:
                return weight
        return None

    def get_all_edges(self) -> List[Tuple[str, str, float]]:
        edges = []
        for u, neighbors in self.graph.items():
            for v, w in neighbors:
                edges.append((u, v, w))
        return edges


# Example usage
print("=== Adjacency List (Sparse Graph) ===")
graph = WeightedGraphList()
graph.add_edge('A', 'B', 4)
graph.add_edge('A', 'C', 2)
graph.add_edge('B', 'C', 1)
graph.add_edge('B', 'D', 5)
graph.add_edge('C', 'D', 8)
graph.add_edge('C', 'E', 10)
graph.add_edge('D', 'E', 2)

print(f"Neighbors of A: {graph.get_neighbors('A')}")
# Output: [('B', 4), ('C', 2)]

print(f"Weight A->B: {graph.get_weight('A', 'B')}")
# Output: 4

print(f"All edges: {graph.get_all_edges()}")

print("\n=== Adjacency Matrix (Dense Graph) ===")
matrix_graph = WeightedGraphMatrix(['A', 'B', 'C', 'D'])
matrix_graph.add_edge('A', 'B', 1)
matrix_graph.add_edge('A', 'C', 4)
matrix_graph.add_edge('B', 'C', 2)
matrix_graph.add_edge('C', 'D', 3)

print(f"Weight A->C: {matrix_graph.get_weight('A', 'C')}")  # 4
print(f"Weight A->D: {matrix_graph.get_weight('A', 'D')}")  # inf (no edge)
```

Weighted graphs are everywhere in practical applications. The quantitative nature of edge weights makes them indispensable for optimization problems and realistic modeling. Let's explore major application domains.
| Domain | Weight Represents | Optimization Goal | Algorithm |
|---|---|---|---|
| Navigation | Travel time / Distance | Shortest path | Dijkstra / A* |
| Network Flow | Capacity | Maximum throughput | Ford-Fulkerson |
| Airlines | Ticket price | Cheapest route | Dijkstra / Bellman-Ford |
| Infrastructure | Construction cost | Minimum spanning tree | Kruskal / Prim |
| Similarity Graphs | Distance / Dissimilarity | Clustering | MST-based clustering |
| Influence Networks | Interaction strength | Influence maximization | Weighted centrality |
When weights represent costs/distances, we minimize path weight. When weights represent similarity/affinity, we might maximize path weight or use weights inversely. Always clarify: 'Is a larger weight good or bad?' This determines whether you minimize or maximize, and whether algorithms apply directly or need adaptation.
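For instance, when weights are probabilities of success that multiply along a path (the numbers below are illustrative), taking −log of each weight converts "maximize the product" into the standard "minimize the sum" shortest-path form:

```python
import math

# Link reliabilities (probabilities of success) -- illustrative values
reliability = {('A', 'B'): 0.9, ('B', 'C'): 0.8, ('A', 'C'): 0.7}

# Maximizing a product of probabilities == minimizing the sum of -log(p),
# which turns the problem into a standard shortest-path instance.
cost = {edge: -math.log(p) for edge, p in reliability.items()}

path_ab_c = cost[('A', 'B')] + cost[('B', 'C')]  # -log(0.9 * 0.8)
path_ac = cost[('A', 'C')]                       # -log(0.7)

print(path_ab_c < path_ac)  # True: reliability 0.72 beats 0.7
```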
The sign of edge weights profoundly impacts which algorithms work and how we interpret results. This is one of the most important practical considerations when working with weighted graphs.
Positive Weights (w(e) > 0 for all e):
All edge weights are strictly positive. This is the most common and well-behaved case.
Non-Negative Weights (w(e) ≥ 0 for all e):
Zero-weight edges are allowed. Dijkstra's still works, but zero-weight edges mean multiple distinct paths may have the same total weight.
Negative Weights (some w(e) < 0):
Some edges have negative weight. This creates serious complications:
Dijkstra's algorithm assumes that once we've found the shortest path to a vertex, no future discovery will improve it. Negative edges violate this: we might find a 'longer' path initially, then later discover a negative edge that makes an alternative path shorter. Dijkstra's greedy choice becomes incorrect.
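To see the failure concretely, here is a minimal sketch on a toy graph of our own construction: a textbook Dijkstra with a visited set finalizes B at distance 1 before the negative edge C→B is ever examined, so it never sees the cheaper path A→C→B with weight 0.

```python
import heapq
import math

def dijkstra_sketch(graph, start):
    """Textbook Dijkstra with a visited set -- finalizes each vertex once."""
    dist = {v: math.inf for v in graph}
    dist[start] = 0
    heap, visited = [(0, start)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)  # u's distance is now final -- only safe if weights >= 0
        for v, w in graph[u]:
            if v not in visited and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# B is finalized at distance 1 before the negative edge C->B is relaxed
graph = {'A': [('B', 1), ('C', 2)], 'B': [], 'C': [('B', -2)]}

print(dijkstra_sketch(graph, 'A')['B'])  # 1 -- but A->C->B has true weight 0
```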
Handling Negative Weights:
Bellman-Ford Algorithm: Works correctly with negative weights. Runs in O(V × E) time—slower than Dijkstra but handles negative edges. Also detects negative cycles.
Johnson's Algorithm: For all-pairs shortest paths with negative weights. Reweights edges to eliminate negatives, then runs Dijkstra from each vertex.
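A sketch of Johnson's reweighting step, on a small graph of our own (the full algorithm would then run Dijkstra from every vertex on the reweighted graph). The potentials h(v) are Bellman-Ford distances from a virtual source connected to every vertex by a zero-weight edge:

```python
def bellman_ford_potentials(vertices, edges):
    """Johnson potentials h(v): Bellman-Ford distances from a virtual
    source linked to every vertex by a zero-weight edge."""
    h = {v: 0 for v in vertices}  # virtual source makes every start distance 0
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    return h

vertices = ['A', 'B', 'C', 'D']
edges = [('A', 'B', 4), ('A', 'C', 2), ('C', 'B', -3), ('B', 'D', 3), ('C', 'D', 5)]

h = bellman_ford_potentials(vertices, edges)

# Reweight: w'(u, v) = w(u, v) + h(u) - h(v). Path weights telescope, so
# shortest paths are preserved, and every new weight is >= 0.
reweighted = [(u, v, w + h[u] - h[v]) for u, v, w in edges]
print(all(w >= 0 for _, _, w in reweighted))  # True
```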
Negative Cycles:
A negative cycle is a cycle whose total weight is negative. If a negative cycle exists:

- Shortest paths are undefined for any vertex reachable through the cycle—each additional trip around the cycle reduces the total weight further.
- Dijkstra's algorithm produces meaningless results; Bellman-Ford instead reports that a negative cycle is present.
```python
import heapq
from typing import Dict, List, Tuple
import math


def dijkstra(graph: Dict[str, List[Tuple[str, float]]], start: str) -> Dict[str, float]:
    """
    Dijkstra's algorithm for shortest paths from start vertex.

    REQUIREMENT: All edge weights must be non-negative!

    Time Complexity: O((V + E) log V) with binary heap
    Space Complexity: O(V)
    """
    distances = {v: math.inf for v in graph}
    distances[start] = 0
    priority_queue = [(0, start)]  # (distance, vertex)
    visited = set()

    while priority_queue:
        current_dist, current = heapq.heappop(priority_queue)
        if current in visited:
            continue
        visited.add(current)

        for neighbor, weight in graph.get(current, []):
            if neighbor not in visited:
                new_dist = current_dist + weight
                if new_dist < distances[neighbor]:
                    distances[neighbor] = new_dist
                    heapq.heappush(priority_queue, (new_dist, neighbor))

    return distances


def bellman_ford(
    graph: Dict[str, List[Tuple[str, float]]],
    start: str
) -> Tuple[Dict[str, float], bool]:
    """
    Bellman-Ford algorithm for shortest paths.

    Handles negative edge weights. Detects negative cycles.

    Time Complexity: O(V × E)
    Space Complexity: O(V)

    Returns: (distances, has_negative_cycle)
    """
    # Get all vertices
    vertices = set(graph.keys())
    for neighbors in graph.values():
        for v, _ in neighbors:
            vertices.add(v)

    distances = {v: math.inf for v in vertices}
    distances[start] = 0

    # Relax all edges |V| - 1 times
    for _ in range(len(vertices) - 1):
        for u in graph:
            for v, weight in graph[u]:
                if distances[u] != math.inf and distances[u] + weight < distances[v]:
                    distances[v] = distances[u] + weight

    # Check for negative cycle: if we can still relax, a cycle exists
    has_negative_cycle = False
    for u in graph:
        for v, weight in graph[u]:
            if distances[u] != math.inf and distances[u] + weight < distances[v]:
                has_negative_cycle = True
                break

    return distances, has_negative_cycle


# Example: Graph with positive weights (Dijkstra works)
positive_graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('C', 1), ('D', 5)],
    'C': [('D', 8), ('E', 10)],
    'D': [('E', 2)],
    'E': []
}

print("=== Dijkstra (Positive Weights) ===")
distances = dijkstra(positive_graph, 'A')
print(f"Shortest distances from A: {distances}")
# Output: {'A': 0, 'B': 4, 'C': 2, 'D': 9, 'E': 11}

# Example: Graph with negative weights (need Bellman-Ford)
negative_graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('D', 3)],
    'C': [('B', -3), ('D', 5)],  # Negative edge C->B
    'D': []
}

print("\n=== Bellman-Ford (Negative Weights) ===")
distances, has_cycle = bellman_ford(negative_graph, 'A')
print(f"Shortest distances from A: {distances}")
print(f"Has negative cycle: {has_cycle}")
# C->B is negative but no negative cycle exists
# A->C->B has total weight 2 + (-3) = -1, better than A->B (4)

# Example: Graph with a negative cycle
negative_cycle_graph = {
    'A': [('B', 1)],
    'B': [('C', 2)],
    'C': [('A', -4)]  # Cycle A->B->C->A has weight 1 + 2 - 4 = -1
}

print("\n=== Bellman-Ford (Negative Cycle) ===")
distances, has_cycle = bellman_ford(negative_cycle_graph, 'A')
print(f"Has negative cycle: {has_cycle}")  # True
```

One of the most elegant problems on weighted graphs is finding a
Minimum Spanning Tree (MST)—a subset of edges that connects all vertices with minimum total weight, without forming cycles.
Definition:
Given a connected, undirected, weighted graph G = (V, E, w), a spanning tree is a subgraph that:

- includes all |V| vertices,
- is connected, and
- contains no cycles (and therefore has exactly |V| − 1 edges).
A minimum spanning tree is a spanning tree with minimum total edge weight.
Key Properties:
The cut property is key to MST algorithms: For any cut (partition of vertices into two sets), the minimum-weight edge crossing the cut is in some MST. Both Prim's and Kruskal's algorithms exploit this property, just in different ways.
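The cut property can be checked by brute force on a tiny graph of our own construction: enumerate every candidate edge subset, find the true MST, and confirm the lightest edge crossing a chosen cut belongs to it.

```python
from itertools import combinations

def is_spanning_tree(vertices, tree_edges):
    """|V| - 1 acyclic edges form a spanning tree; union-find rejects cycles."""
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v, _ in tree_edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # edge closes a cycle
        parent[ru] = rv
    return len(tree_edges) == len(vertices) - 1

vertices = ['A', 'B', 'C', 'D']
edges = [('A', 'B', 1), ('B', 'C', 2), ('C', 'D', 3), ('A', 'D', 4), ('B', 'D', 5)]

# Brute-force the MST: try every subset of |V| - 1 edges
trees = [t for t in combinations(edges, len(vertices) - 1)
         if is_spanning_tree(vertices, t)]
mst = min(trees, key=lambda t: sum(w for _, _, w in t))

# Cut {A} vs {B, C, D}: the lightest crossing edge must be in the MST
crossing = [e for e in edges if 'A' in e[:2]]
lightest = min(crossing, key=lambda e: e[2])
print(lightest in mst)  # True
```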
Two Classic Algorithms:
Kruskal's Algorithm:

- Sort all edges by weight in ascending order.
- Add edges one at a time, skipping any edge that would create a cycle (checked with a union-find structure).
- Stop once |V| − 1 edges have been added.

Time Complexity: O(E log E) — dominated by sorting
Prim's Algorithm:

- Start from an arbitrary vertex and grow the tree one vertex at a time.
- At each step, add the minimum-weight edge connecting a tree vertex to a non-tree vertex (tracked with a priority queue).

Time Complexity: O(E log V) with binary heap
```python
from typing import Dict, List, Tuple
import heapq


class UnionFind:
    """Disjoint Set Union for cycle detection in Kruskal's."""

    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}
        self.rank = {v: 0 for v in vertices}

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y) -> bool:
        px, py = self.find(x), self.find(y)
        if px == py:
            return False  # Already connected (would create cycle)
        if self.rank[px] < self.rank[py]:
            px, py = py, px
        self.parent[py] = px
        if self.rank[px] == self.rank[py]:
            self.rank[px] += 1
        return True


def kruskal_mst(vertices: List[str],
                edges: List[Tuple[str, str, float]]
                ) -> Tuple[List[Tuple[str, str, float]], float]:
    """
    Kruskal's algorithm for Minimum Spanning Tree.

    Args:
        vertices: List of all vertices
        edges: List of (u, v, weight) tuples

    Returns: (MST edges, total MST weight)

    Time Complexity: O(E log E)
    """
    # Sort edges by weight
    sorted_edges = sorted(edges, key=lambda e: e[2])

    uf = UnionFind(vertices)
    mst_edges = []
    total_weight = 0

    for u, v, weight in sorted_edges:
        if uf.union(u, v):  # No cycle created
            mst_edges.append((u, v, weight))
            total_weight += weight
            if len(mst_edges) == len(vertices) - 1:
                break  # MST complete

    return mst_edges, total_weight


def prim_mst(graph: Dict[str, List[Tuple[str, float]]],
             start: str) -> Tuple[List[Tuple[str, str, float]], float]:
    """
    Prim's algorithm for Minimum Spanning Tree.

    Args:
        graph: Adjacency list with (neighbor, weight) pairs
        start: Starting vertex

    Returns: (MST edges, total MST weight)

    Time Complexity: O(E log V)
    """
    mst_edges = []
    total_weight = 0
    visited = {start}

    # Priority queue: (weight, from_vertex, to_vertex)
    edges = [(w, start, v) for v, w in graph.get(start, [])]
    heapq.heapify(edges)

    while edges and len(visited) < len(graph):
        weight, u, v = heapq.heappop(edges)
        if v in visited:
            continue
        visited.add(v)
        mst_edges.append((u, v, weight))
        total_weight += weight
        # Add edges from newly added vertex
        for neighbor, w in graph.get(v, []):
            if neighbor not in visited:
                heapq.heappush(edges, (w, v, neighbor))

    return mst_edges, total_weight


# Example: Undirected weighted graph
vertices = ['A', 'B', 'C', 'D', 'E']
edges = [
    ('A', 'B', 4), ('A', 'C', 2), ('B', 'C', 1),
    ('B', 'D', 5), ('C', 'D', 8), ('C', 'E', 10), ('D', 'E', 2)
]

# Kruskal's MST
mst_edges, total = kruskal_mst(vertices, edges)
print("=== Kruskal's MST ===")
print(f"MST Edges: {mst_edges}")
print(f"Total Weight: {total}")

# Build adjacency list for Prim's
graph = {v: [] for v in vertices}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))  # Undirected

# Prim's MST
mst_edges, total = prim_mst(graph, 'A')
print("\n=== Prim's MST ===")
print(f"MST Edges: {mst_edges}")
print(f"Total Weight: {total}")
```

When working with weighted graphs in practice, several considerations beyond the core algorithms become important:
Weight Representation:

- Use float('inf') or sys.maxsize for 'no edge' / unreachable distances; in Python, float('inf') is the safer sentinel because it behaves correctly under addition and comparison.

Multi-Edge and Self-Loop Handling:

- For shortest-path problems on multigraphs, keep only the minimum-weight edge between each vertex pair; self-loops with non-negative weight can never shorten a path and are usually dropped.
Real-world edges often have multiple attributes (distance, time, cost). Solutions: (1) Combine into a single weight (e.g., weighted sum), (2) Use multi-objective optimization (Pareto-optimal paths), (3) Apply constraints (minimize time subject to cost ≤ budget). The choice depends on your problem requirements.
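A minimal sketch of option (1), collapsing two attributes into one weight; the attribute names and the trade-off constant are hypothetical, application-specific choices:

```python
# Hypothetical per-edge attributes: minutes of travel time and toll in dollars
attrs = {('A', 'B'): {'time': 30, 'toll': 5.0},
         ('A', 'C'): {'time': 45, 'toll': 0.0}}

# Collapse to a single weight with a chosen trade-off: here, $0.25 per minute
VALUE_OF_TIME = 0.25  # dollars per minute -- an application-specific choice

weights = {e: a['toll'] + VALUE_OF_TIME * a['time'] for e, a in attrs.items()}
print(weights)  # {('A', 'B'): 12.5, ('A', 'C'): 11.25}
```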
Dynamic Weights:
In many applications, weights change over time:

- Traffic conditions alter road travel times throughout the day.
- Airline and ride-share prices fluctuate with demand.
- Network link costs change with congestion and failures.
Static algorithms (Dijkstra, Bellman-Ford) must recompute from scratch when weights change. For frequently changing weights, consider:

- Incremental (dynamic) shortest-path algorithms that repair only the affected distances.
- Periodic batch recomputation when some staleness is acceptable.
- Time-dependent edge weights w(u, v, t) when changes follow a known schedule.
Scale Considerations:
For very large graphs (millions of vertices):

- Prefer adjacency lists; an O(V²) matrix is infeasible.
- Use goal-directed techniques (A*, bidirectional search) to avoid exploring the whole graph.
- Consider preprocessing-based methods (e.g., contraction hierarchies) for repeated route queries.
- Partition or stream the graph if it does not fit in memory.
| Scenario | Recommended Algorithm | Notes |
|---|---|---|
| Single-source, positive weights | Dijkstra | O((V+E) log V), most common case |
| Single-source, negative weights | Bellman-Ford | O(V × E), detects negative cycles |
| All-pairs, dense graph | Floyd-Warshall | O(V³), simple implementation |
| All-pairs, sparse + negative | Johnson's | Runs Dijkstra from each vertex after reweighting |
| Minimum spanning tree | Kruskal or Prim | Choose based on graph density |
| Very large graphs | A* with heuristic | If goal-directed search applies |
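The table mentions Floyd-Warshall, which hasn't appeared in code yet; here is a minimal sketch of it on a small directed graph of our own:

```python
import math

def floyd_warshall(vertices, edges):
    """All-pairs shortest paths in O(V^3); tolerates negative edges
    (but not negative cycles)."""
    dist = {u: {v: (0 if u == v else math.inf) for v in vertices}
            for u in vertices}
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)  # min() tolerates parallel edges
    # Allow each vertex k in turn as an intermediate waypoint
    for k in vertices:
        for i in vertices:
            for j in vertices:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

vertices = ['A', 'B', 'C', 'D']
edges = [('A', 'B', 4), ('A', 'C', 2), ('C', 'B', 1), ('B', 'D', 5), ('C', 'D', 8)]

dist = floyd_warshall(vertices, edges)
print(dist['A']['D'])  # 8: A->C->B->D = 2 + 1 + 5
```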
We've explored weighted graphs comprehensively—from their formal definition to practical algorithms and real-world applications. Let's consolidate the key concepts:

- A weighted graph G = (V, E, w) attaches a real-valued weight to every edge; a path's weight is the sum of its edge weights.
- Adjacency matrices suit dense graphs and O(1) lookups; adjacency lists suit sparse graphs with O(V + E) space.
- The sign of weights matters: Dijkstra's requires non-negative weights, while Bellman-Ford handles negative weights and detects negative cycles.
- Minimum spanning trees (Kruskal's, Prim's) connect all vertices at minimum total weight, both relying on the cut property.
What's Next:
We've covered directed graphs, undirected graphs, and weighted graphs. The final page explores mixed and special graph types—multigraphs, hypergraphs, bipartite graphs, and other specialized structures that extend the basic graph model for specific application domains.
You now have thorough understanding of weighted graphs: their formal definition, representation strategies, real-world applications, the critical importance of weight signs, and fundamental algorithms for shortest paths and minimum spanning trees. You can identify when problems require weighted modeling and choose appropriate algorithms.