The best way to develop greedy intuition is through examples—lots of them. Each example teaches a pattern, reveals a pitfall, or demonstrates a proof technique. This page walks through diverse greedy scenarios, from everyday situations to classic algorithmic problems.
We'll see greedy succeed brilliantly, fail spectacularly, and sometimes work "well enough" even when not optimal. By the end, you'll have a mental library of greedy patterns and the intuition to recognize new ones.
By the end of this page, you will understand greedy thinking through multiple concrete examples, develop pattern recognition for greedy-solvable problems, see the contrast between successful and failed greedy attempts, and build intuition transferable to new problems.
Greedy algorithms mirror how humans often make decisions in daily life. Recognizing these parallels helps build intuition for when greedy approaches make sense.
Example 1: Making Change at a Cash Register
When a cashier gives you change, they typically use a greedy approach: repeatedly hand over the largest denomination that doesn't exceed the amount still owed.
For US currency (25¢, 10¢, 5¢, 1¢), this produces the minimum number of coins. The cashier doesn't think ahead or reconsider—they just grab the largest coin that fits.
Why it works: US denominations form a "canonical" coin system where greedy is provably optimal.
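In code, the cashier's rule fits in a few lines. A minimal sketch (the function name and test amount are ours):

```python
def make_change(amount_cents: int, denominations=(25, 10, 5, 1)) -> list[int]:
    """Greedy change-making: repeatedly take the largest coin that fits."""
    coins = []
    for coin in denominations:  # assumed sorted largest-first
        while amount_cents >= coin:
            coins.append(coin)
            amount_cents -= coin
    return coins

print(make_change(67))  # [25, 25, 10, 5, 1, 1] -- 6 coins, minimal for US denominations
```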
Example 2: Filling a Suitcase for Travel
Greedy approach: Pack the most valuable items first until the suitcase is full.
Does it work? For the fractional version (you can take half a book), yes—it's optimal. For the 0/1 version (whole items only), no—you might fill space with one big item when two smaller items would be more valuable.
| Situation | Greedy Approach | Does It Work? | Why? |
|---|---|---|---|
| Restaurant ordering (fixed budget) | Order most satisfying item that fits | Usually no | Combinations might be better |
| Driving shortest route | Take road that seems most direct | Often no | Traffic, construction, shortcuts exist |
| Exercise priority | Do most calorie-burning exercise first | Depends | Muscle fatigue creates dependencies |
| Email triage | Answer most urgent email first | Often yes | Urgency usually correlates with importance |
| Tree pruning | Cut largest branch first | Yes | Branches don't interact |
Humans use greedy strategies constantly because they're cognitively simple. We can't evaluate all possibilities, so we satisfice. Understanding when human greedy thinking succeeds or fails reveals problem structures. Ask: 'When I use this heuristic, do I ever regret it?'
The Activity Selection Problem is the canonical example for introducing greedy algorithms. It's simple enough to understand immediately yet rich enough to illustrate key concepts.
Problem Statement: Given n activities, each with a start time and end time, select the maximum number of non-overlapping activities.
Greedy Strategy: Always select the activity that ends earliest among remaining feasible activities.
```python
from dataclasses import dataclass
from typing import List

@dataclass
class Activity:
    name: str
    start: int
    end: int

    def __repr__(self):
        return f"{self.name}[{self.start}-{self.end}]"

def activity_selection(activities: List[Activity]) -> List[Activity]:
    """
    Greedy activity selection: maximize count of non-overlapping activities.

    Strategy: Always pick the activity that ends earliest.

    Time: O(n log n) for sorting + O(n) for selection = O(n log n)
    Space: O(n) for storing sorted activities

    PROVABLY OPTIMAL
    """
    # Sort by end time (earliest first)
    sorted_activities = sorted(activities, key=lambda a: a.end)

    selected = []
    current_end = float('-inf')

    for activity in sorted_activities:
        if activity.start >= current_end:  # Non-overlapping
            selected.append(activity)
            current_end = activity.end
            print(f"  Selected {activity} (next available after {current_end})")
        else:
            print(f"  Skipped {activity} (overlaps with current end {current_end})")

    return selected

# Example: Conference room scheduling
activities = [
    Activity("Meeting A", 1, 4),
    Activity("Meeting B", 3, 5),
    Activity("Meeting C", 0, 6),
    Activity("Meeting D", 5, 7),
    Activity("Meeting E", 3, 9),
    Activity("Meeting F", 5, 9),
    Activity("Meeting G", 6, 10),
    Activity("Meeting H", 8, 11),
    Activity("Meeting I", 8, 12),
    Activity("Meeting J", 2, 14),
    Activity("Meeting K", 12, 16),
]

print("Conference Room Scheduling")
print("=" * 50)
print(f"Available meetings: {len(activities)}")
print()

selected = activity_selection(activities)

print()
print(f"Selected {len(selected)} meetings: {[a.name for a in selected]}")
print()
print("Visual Timeline:")
print("=" * 50)
for a in sorted(selected, key=lambda x: x.start):
    bar = " " * a.start + "█" * (a.end - a.start)
    print(f"{a.name:12} |{bar}")
```

Why Earliest-End-Time Works (Intuition):
By taking the activity that ends earliest, we free the room as soon as possible, leaving the maximum remaining time for future activities. And we never do worse on count: any conflicting alternative would occupy the room at least as long.
Formal Proof Sketch:
Let S be the greedy solution and O* be any optimal solution. Let a₁ be the first activity in S (earliest-ending).
Claim: There exists an optimal solution containing a₁.
Proof: If O* contains a₁, we're done. If not, O* contains some first activity b₁. Since a₁ ends earliest, a₁.end ≤ b₁.end. We can replace b₁ with a₁ in O*—the new solution is still valid (no new overlaps) and has the same count. Thus, there exists an optimal solution containing a₁.
By induction, greedy constructs an optimal solution.
This proof uses the 'exchange argument': any optimal solution can be transformed to include the greedy choice without losing optimality. This is the most common technique for proving greedy correctness. Look for opportunities to 'swap' non-greedy choices for greedy ones.
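One way to build confidence in the exchange argument is to brute-force small random instances and confirm that greedy's count always matches the true optimum. A sketch (helper names and instance sizes are ours):

```python
from itertools import combinations
import random

def is_compatible(subset):
    """Check that no two (start, end) activities in the subset overlap."""
    ordered = sorted(subset, key=lambda a: a[0])
    return all(ordered[i][1] <= ordered[i + 1][0] for i in range(len(ordered) - 1))

def brute_force_max(activities):
    """Try every subset size from largest down; return the max compatible size."""
    for k in range(len(activities), 0, -1):
        if any(is_compatible(c) for c in combinations(activities, k)):
            return k
    return 0

def greedy_count(activities):
    """Earliest-end-time greedy, counting selections."""
    count, current_end = 0, float('-inf')
    for start, end in sorted(activities, key=lambda a: a[1]):
        if start >= current_end:
            count, current_end = count + 1, end
    return count

random.seed(42)
for _ in range(200):
    acts = [(s := random.randint(0, 20), s + random.randint(1, 6)) for _ in range(8)]
    assert greedy_count(acts) == brute_force_max(acts)
print("Greedy matched brute force on 200 random instances")
```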
The Fractional Knapsack problem demonstrates greedy's power when problem structure aligns perfectly with local optimization.
Problem Statement: Given n items with weights and values, and a knapsack of capacity W, maximize total value. You may take fractions of items.
Greedy Strategy: Sort items by value-to-weight ratio (descending). Take items in order until the knapsack is full, taking fractions as needed.
```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Item:
    name: str
    weight: float
    value: float

    @property
    def ratio(self) -> float:
        return self.value / self.weight if self.weight > 0 else float('inf')

def fractional_knapsack(items: List[Item], capacity: float) -> Tuple[float, List[Tuple[str, float]]]:
    """
    Fractional Knapsack: Take fractions of items to maximize value.

    Greedy by value/weight ratio is PROVABLY OPTIMAL.

    Intuition: Each unit of capacity should earn maximum value.
    Taking by ratio ensures every "gram" of capacity earns the most.
    """
    # Sort by ratio (highest first)
    sorted_items = sorted(items, key=lambda x: x.ratio, reverse=True)

    total_value = 0.0
    taken = []
    remaining = capacity

    print("Packing knapsack (capacity = {:.1f})".format(capacity))
    print("-" * 60)

    for item in sorted_items:
        if remaining <= 0:
            break

        take_weight = min(item.weight, remaining)
        take_fraction = take_weight / item.weight
        take_value = take_fraction * item.value

        taken.append((item.name, take_fraction))
        total_value += take_value
        remaining -= take_weight

        if take_fraction == 1.0:
            print(f"  {item.name}: Take ALL ({item.weight:.1f}kg, ${item.value:.0f}, ratio={item.ratio:.2f})")
        else:
            print(f"  {item.name}: Take {take_fraction:.1%} ({take_weight:.1f}kg, ${take_value:.0f}, ratio={item.ratio:.2f})")

    print("-" * 60)
    print(f"Total value: ${total_value:.2f}")

    return total_value, taken

# Example: Thief's dilemma
items = [
    Item("Gold dust", 10, 600),      # ratio = 60
    Item("Silver bars", 20, 800),    # ratio = 40
    Item("Diamond ring", 5, 250),    # ratio = 50
    Item("Platinum watch", 8, 320),  # ratio = 40
    Item("Rare coin", 2, 70),        # ratio = 35
]

capacity = 30

print("\n" + "=" * 60)
print("FRACTIONAL KNAPSACK EXAMPLE")
print("=" * 60)
print("\nItems available:")
for item in items:
    print(f"  {item.name}: {item.weight}kg, ${item.value}, ratio={item.ratio:.1f}")
print()

value, taken = fractional_knapsack(items, capacity)

print("\nWhy greedy works:")
print("  • Value per kg is what matters")
print("  • Highest ratio items should fill every available gram")
print("  • Fractional taking ensures NO capacity is wasted on low-ratio items")
print("  • There's no 'blocking' effect—taking X doesn't prevent taking Y")
```

Why Greedy Works for Fractional Knapsack:
The key insight is that every unit of capacity is independent. Each gram of the knapsack can be evaluated separately: fill it with the highest value-per-unit item still available.
Since we can take fractions, there's no "lumpiness" that forces suboptimal combinations. The optimal solution literally fills every unit of capacity with the highest-value-per-unit item available.
Contrast with 0/1 Knapsack:
In 0/1 knapsack, items are indivisible. This creates interference: taking one large item can block a combination of smaller items whose total value is higher.
This interference destroys the greedy choice property. Greedy by ratio is NOT optimal for 0/1 knapsack.
The ability to take fractions is what makes greedy work. It's a subtle structural difference with massive algorithmic implications. Fractional: O(n log n) greedy. 0/1: O(nW) DP or exponential. Always clarify whether items are divisible!
Minimum Spanning Tree (MST) is one of the most important greedy problems, with applications in network design, clustering, and approximation algorithms.
Problem Statement: Given a connected, weighted graph, find a tree that connects all vertices with minimum total edge weight.
Two Greedy Approaches: Kruskal's repeatedly adds the globally cheapest edge that doesn't create a cycle; Prim's grows a single tree by repeatedly adding the cheapest edge leaving it. Both are implemented below.
"""Minimum Spanning Tree: Two Greedy Approaches Both Kruskal's and Prim's are greedy and optimal.They make different greedy choices but both satisfythe greedy choice property for MST.""" from typing import List, Tuple Edge = Tuple[int, int, float] #(u, v, weight) class UnionFind: """For Kruskal's algorithm: efficient cycle detection.""" def __init__(self, n: int): self.parent = list(range(n)) self.rank = [0] * n def find(self, x: int) -> int: if self.parent[x] != x: self.parent[x] = self.find(self.parent[x]) return self.parent[x] def union(self, x: int, y: int) -> bool: px, py = self.find(x), self.find(y) if px == py: return False # Already connected if self.rank[px] < self.rank[py]: px, py = py, px self.parent[py] = px if self.rank[px] == self.rank[py]: self.rank[px] += 1 return True def kruskal_mst(n: int, edges: List[Edge]) -> Tuple[List[Edge], float]: """ Kruskal's MST: Greedy by global minimum edge weight. Strategy: Sort all edges.Add each edge if it connects two previously unconnected components. Time: O(E log E) for sorting + O(E α(V)) for union - find ≈ O(E log E) """ sorted_edges = sorted(edges, key = lambda e: e[2]) uf = UnionFind(n) mst = [] total_weight = 0.0 print("Kruskal's Algorithm (global greedy)") print("-" * 50) for u, v, w in sorted_edges: if uf.union(u, v): mst.append((u, v, w)) total_weight += w print(f" ADD edge ({u},{v}) weight {w:.1f} ✓") if len(mst) == n - 1: break else: print(f" SKIP edge ({u},{v}) weight {w:.1f} (would create cycle)") return mst, total_weight def prim_mst(n: int, adj: List[List[Tuple[int, float]]]) -> Tuple[List[Edge], float]: """ Prim's MST: Greedy by minimum edge from current tree. Strategy: Start from vertex 0. Repeatedly add the minimum edge connecting tree to non - tree vertex. Time: O(E log V) with priority queue """ import heapq in_tree =[False] * n mst = [] total_weight = 0.0 #(weight, from, to) pq = [(0, -1, 0)] # Start from vertex 0 print("\nPrim's Algorithm (local greedy)") print("-" * 50) while pq and len(mst) < n: w, u, v = heapq.heappop(pq) if in_tree[v]: continue in_tree[v] = True if u != -1: mst.append((u, v, w)) total_weight += w print(f" ADD edge ({u},{v}) weight {w:.1f} ✓") else: print(f" Start from vertex {v}") for neighbor, weight in adj[v]: if not in_tree[neighbor]: heapq.heappush(pq, (weight, v, neighbor)) return mst, total_weight # Example graph n = 5 edges = [ (0, 1, 2), (0, 3, 6), (1, 2, 3), (1, 3, 8), (1, 4, 5), (2, 4, 7), (3, 4, 9), ] # Build adjacency list for Prim'sadj = [[] for _ in range(n)] for u, v, w in edges: adj[u].append((v, w)) adj[v].append((u, w)) print("=" * 50) print("MST EXAMPLE: 5 vertices") print("=" * 50) print(f"Edges: {edges}") print() kruskal_result, kruskal_weight = kruskal_mst(n, edges) prim_result, prim_weight = prim_mst(n, adj) print() print(f"Kruskal's MST: {kruskal_result}, weight = {kruskal_weight}") print(f"Prim's MST: {prim_result}, weight = {prim_weight}") print() print("Both produce optimal MST (may differ in edge order)!")Why Both Approaches Work (Intuition):
Both Kruskal's and Prim's exploit the Cut Property of MSTs:
For any cut (partition of vertices into two sets), the minimum-weight edge crossing the cut is in some MST.
This property guarantees that choosing the minimum edge that "helps" (connects components or extends the tree) is always safe.
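The example graph above has distinct edge weights, so its MST is unique, and the cut property can be verified directly by brute force. A self-contained sketch (helper names are ours; it repeats the example's edge list):

```python
from itertools import combinations

n = 5
edges = [(0, 1, 2), (0, 3, 6), (1, 2, 3), (1, 3, 8), (1, 4, 5), (2, 4, 7), (3, 4, 9)]

def spans(tree_edges):
    """Check that these n-1 edges connect all n vertices (form a spanning tree)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    merges = 0
    for u, v, _ in tree_edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            merges += 1
    return merges == n - 1

# Distinct weights => unique MST; find it by exhaustive search
mst = min((t for t in combinations(edges, n - 1) if spans(t)),
          key=lambda t: sum(w for _, _, w in t))

# Cut property: for every cut (S, V-S), the lightest crossing edge is in the MST
for mask in range(2 ** (n - 1)):              # subsets S containing vertex 0
    S = {0} | {v for v in range(1, n) if mask & (1 << (v - 1))}
    crossing = [e for e in edges if (e[0] in S) != (e[1] in S)]
    if crossing:                              # skip S = V (no crossing edges)
        assert min(crossing, key=lambda e: e[2]) in mst
print("Cut property verified on all", 2 ** (n - 1) - 1, "cuts")
```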
Kruskal's Intuition: the globally cheapest edge joining two separate components is the minimum edge across the cut between them, so adding it is always safe.
Prim's Intuition: the cheapest edge leaving the current tree is the minimum edge across the cut between tree and non-tree vertices, so extending the tree along it is always safe.
MST shows that different greedy criteria can both be optimal. Kruskal sorts globally; Prim expands locally. Both exploit the same underlying structure (cut property) but through different lenses. Not all greedy strategies are equivalent, but sometimes multiple work.
Job Sequencing with Deadlines adds complexity to the selection process, demonstrating more sophisticated greedy reasoning.
Problem Statement: Given n jobs, each with a profit and a deadline, schedule jobs to maximize total profit. Each job takes 1 unit of time. A job must be completed by its deadline.
Greedy Strategy: Sort jobs by profit (descending). For each job, schedule it in the latest available slot before its deadline.
```python
from dataclasses import dataclass
from typing import List

@dataclass
class Job:
    id: str
    deadline: int
    profit: int

def job_sequencing(jobs: List[Job]) -> tuple[int, List[str]]:
    """
    Job Sequencing with Deadlines: Maximize profit.

    Strategy:
    1. Sort jobs by profit (highest first)
    2. For each job, find the latest available slot ≤ deadline
    3. If slot found, schedule the job; else, skip it

    Time: O(n²) naive, O(n log n) with Union-Find
    """
    # Find maximum deadline to know schedule size
    max_deadline = max(job.deadline for job in jobs)

    # Schedule slots: slot[i] = job scheduled at time i (or None)
    schedule = [None] * (max_deadline + 1)  # 1-indexed

    # Sort by profit (highest first)
    sorted_jobs = sorted(jobs, key=lambda j: j.profit, reverse=True)

    total_profit = 0
    scheduled_jobs = []

    print("Job Sequencing with Deadlines")
    print("=" * 60)
    print(f"Max deadline: {max_deadline}")
    print()

    for job in sorted_jobs:
        # Find latest available slot ≤ deadline
        scheduled = False
        for slot in range(min(job.deadline, max_deadline), 0, -1):
            if schedule[slot] is None:
                schedule[slot] = job.id
                scheduled_jobs.append(job.id)
                total_profit += job.profit
                print(f"  {job.id} (profit={job.profit}, deadline={job.deadline}) → slot {slot} ✓")
                scheduled = True
                break
        if not scheduled:
            print(f"  {job.id} (profit={job.profit}, deadline={job.deadline}) → NO SLOT ✗")

    print()
    print(f"Schedule: {schedule[1:]}")  # Exclude slot 0
    print(f"Total profit: {total_profit}")

    return total_profit, scheduled_jobs

# Example
jobs = [
    Job("J1", deadline=2, profit=100),
    Job("J2", deadline=1, profit=19),
    Job("J3", deadline=2, profit=27),
    Job("J4", deadline=1, profit=25),
    Job("J5", deadline=3, profit=15),
]

print("\nJobs:")
for j in jobs:
    print(f"  {j.id}: deadline={j.deadline}, profit={j.profit}")
print()

profit, scheduled = job_sequencing(jobs)

print()
print("Analysis:")
print("  • J1 (profit 100) must run by slot 2 → takes slot 2")
print("  • J3 (profit 27) must run by slot 2 → slot 2 taken, try slot 1 → takes slot 1")
print("  • J4 (profit 25) must run by slot 1 → slot 1 taken → SKIPPED")
print("  • J2 (profit 19) must run by slot 1 → slot 1 taken → SKIPPED")
print("  • J5 (profit 15) must run by slot 3 → takes slot 3")
```

Why This Works (Intuition):
The key insight is that scheduling high-profit jobs as late as possible (close to their deadlines) preserves flexibility for earlier slots: a job placed in its latest feasible slot could also have run earlier, so deferring it keeps earlier slots open for jobs with tighter deadlines.
Exchange Argument:
Suppose an optimal solution O* excludes some job J that greedy schedules. When greedy placed J, a slot before its deadline was free. In O*, that capacity is either unused (adding J would only increase profit) or spent on some job J' that greedy considered later. Since greedy processes jobs in decreasing profit order, profit(J) ≥ profit(J'), so exchanging J' for J leaves O* no worse. Repeating this exchange transforms O* into the greedy schedule without losing profit.
This problem shows two-stage greedy selection: (1) which job to consider next (by profit), (2) where to schedule it (latest available slot). Both choices matter. Greedy by profit but earliest slot would be suboptimal (less flexibility). The combination of both greedy choices is what makes the algorithm work.
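The docstring above mentions an O(n log n) variant. The idea is a disjoint-set structure where find(t) returns the latest free slot ≤ t, so occupied slots are skipped in near-constant time. A sketch (assuming the Job class above; helper names are ours):

```python
def job_sequencing_fast(jobs):
    """Job sequencing in O(n log n): union-find jumps over occupied slots.

    latest_free[t] points toward the latest unoccupied slot <= t
    (slot 0 is a sentinel meaning 'no slot available').
    """
    max_deadline = max(j.deadline for j in jobs)
    latest_free = list(range(max_deadline + 1))

    def find(t):
        if latest_free[t] != t:
            latest_free[t] = find(latest_free[t])  # path compression
        return latest_free[t]

    total = 0
    for job in sorted(jobs, key=lambda j: j.profit, reverse=True):
        slot = find(min(job.deadline, max_deadline))
        if slot > 0:                      # 0 means no free slot before deadline
            total += job.profit
            latest_free[slot] = slot - 1  # future finds skip the filled slot
        # else: job is skipped
    return total
```

On the example jobs this returns 142 (J1 + J3 + J5), matching the naive version.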
Understanding failure is as important as understanding success. Here are problems where greedy approaches—even seemingly reasonable ones—fail.
Failure 1: 0/1 Knapsack
Greedy by value/weight ratio fails because items are indivisible.
Example: Items [(value=60, weight=10), (value=100, weight=20), (value=120, weight=30)], capacity=50
Greedy by ratio: Takes item 1 (ratio 6), then item 2 (ratio 5), total value 160, weight 30
Can try item 3: weight 30 > remaining 20. Skip.
Greedy result: 160
Optimal: Items 2+3 = value 220, weight 50
Greedy is 27% worse!
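A quick check with the textbook O(nW) dynamic program confirms the gap. A minimal sketch (item tuples are the example's, in (value, weight) form):

```python
def knapsack_01(items, capacity):
    """Classic 0/1 knapsack DP: dp[c] = best value achievable with capacity c."""
    dp = [0] * (capacity + 1)
    for value, weight in items:
        for c in range(capacity, weight - 1, -1):  # descending: each item used once
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight)
print(knapsack_01(items, 50))  # 220 -- beats greedy-by-ratio's 160
```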
Failure 2: Minimum Coin Change (Non-Canonical Systems)
Greedy by largest coin fails for coin systems without the "canonical" property.
Example: Coins [1, 3, 4], target = 6
Greedy: 4 + 1 + 1 = 3 coins. Optimal: 3 + 3 = 2 coins. Greedy uses 50% more coins!
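A few lines comparing both strategies make the failure concrete (a sketch; function names are ours):

```python
def greedy_coins(coins, target):
    """Largest-coin-first; not optimal for non-canonical systems."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += target // coin
        target %= coin
    return count

def dp_coins(coins, target):
    """DP: minimum coins for every amount up to target."""
    INF = float('inf')
    dp = [0] + [INF] * target
    for amount in range(1, target + 1):
        dp[amount] = min((dp[amount - c] + 1 for c in coins if c <= amount), default=INF)
    return dp[target]

print(greedy_coins([1, 3, 4], 6))  # 3 coins: 4 + 1 + 1
print(dp_coins([1, 3, 4], 6))      # 2 coins: 3 + 3
```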
"""Demonstration of greedy failures with counterexamples.""" # Failure 3: Traveling Salesman Problem# Nearest neighbor can be arbitrarily bad def nearest_neighbor_tsp(distances: list[list[float]], start: int = 0) -> tuple[float, list[int]]: n = len(distances) visited = [False] * n tour = [start] visited[start] = True total = 0.0 for _ in range(n - 1): current = tour[-1] nearest = min( (j for j in range(n) if not visited[j]), key = lambda j: distances[current][j] ) total += distances[current][nearest] tour.append(nearest) visited[nearest] = True total += distances[tour[-1]][start] # Return to start tour.append(start) return total, tour # Adversarial TSP example# Cities arranged so greedy goes the wrong way distances = [ [0, 1, 5, 5, 10], # City 0 [1, 0, 1, 5, 5], # City 1 [5, 1, 0, 1, 5], # City 2 [5, 5, 1, 0, 1], # City 3 [10, 5, 5, 1, 0], # City 4 ] greedy_dist, greedy_tour = nearest_neighbor_tsp(distances) optimal_tour = [0, 1, 2, 3, 4, 0] # Known optimal optimal_dist = sum(distances[optimal_tour[i]][optimal_tour[i + 1]] for i in range(5)) print("TSP Failure Example") print(f"Greedy tour: {greedy_tour}, distance = {greedy_dist}") print(f"Optimal tour: {optimal_tour}, distance = {optimal_dist}") print(f"Greedy excess: {(greedy_dist/optimal_dist - 1)*100:.0f}%") print() # Failure 4: Longest Path in DAG# Greedy by heaviest edge fails def greedy_longest_path(adj: dict, start: str) -> tuple[int, list[str]]: """Greedy: always take heaviest outgoing edge.""" path = [start] total = 0 current = start while current in adj and adj[current]: edges = adj[current] next_node, weight = max(edges, key = lambda x: x[1]) path.append(next_node) total += weight current = next_node return total, path # Graph where greedy fails# A--5-- > B--1-- > D# A--3-- > C--10 -> D adj = { 'A': [('B', 5), ('C', 3)], 'B': [('D', 1)], 'C': [('D', 10)], } greedy_length, greedy_path = greedy_longest_path(adj, 'A') optimal_length = 3 + 10 # A -> C -> D optimal_path = ['A', 'C', 'D'] print("Longest Path Failure Example") print(f"Greedy path: {greedy_path}, length = {greedy_length}") # A -> B -> D = 6 print(f"Optimal path: {optimal_path}, length = {optimal_length}") # A -> C -> D = 13 print(f"Greedy achieves only {100*greedy_length/optimal_length:.0f}% of optimal") print() # Failure 5: Set Cover# Greedy achieves only O(log n) approximation def greedy_set_cover(universe: set, sets: list[set]) -> list[int]: """Greedy: always pick set covering most uncovered elements.""" covered = set() chosen = [] while covered != universe: best_idx = max(range(len(sets)), key = lambda i: len(sets[i] - covered)) chosen.append(best_idx) covered |= sets[best_idx] return chosen universe = { 1, 2, 3, 4} sets = [ { 1, 2}, # Set 0 { 3, 4}, # Set 1 { 1, 3}, # Set 2: covers 2 new if first { 2, 4 }, # Set 3] greedy_cover = greedy_set_cover(universe, sets) print("Set Cover Example") print(f"Universe: {universe}") print(f"Sets: {sets}") print(f"Greedy solution: {greedy_cover} ({len(greedy_cover)} sets)") print(f"Optimal: [0, 1] (2 sets)") print("Greedy happens to find optimal here, but can be O(log n) worse in general")Common failure patterns: (1) Indivisibility creates blocking (0/1 knapsack), (2) Short-term gains prevent long-term optimization (TSP, longest path), (3) Complex constraint interactions (graph coloring). When you see these structures, be suspicious of greedy!
Even when greedy isn't optimal, it's often a valuable approximation algorithm. Many NP-hard problems have greedy algorithms with provable approximation guarantees.
| Problem | Greedy Strategy | Approximation Factor | Notes |
|---|---|---|---|
| Set Cover | Largest uncovered set | O(log n) | Best possible unless P=NP |
| Vertex Cover | Pick edge, include both endpoints | 2 | Optimal approximation for this approach |
| Metric TSP | Nearest neighbor | O(log n) | Christofides gets 1.5 with MST |
| Load Balancing | Assign to least loaded machine | 2 | Simple and practical |
| Bin Packing | First Fit Decreasing | 1.22 | Very close to optimal |
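To make the Load Balancing row concrete, here is a sketch of the least-loaded-machine greedy (function name and the job list are ours); the returned makespan is within a factor of 2 of optimal:

```python
import heapq

def greedy_load_balance(job_times, num_machines):
    """Assign each job to the currently least-loaded machine (2-approximation)."""
    loads = [(0, m) for m in range(num_machines)]  # (load, machine id) min-heap
    heapq.heapify(loads)
    assignment = []
    for t in job_times:
        load, m = heapq.heappop(loads)   # least-loaded machine
        assignment.append((t, m))
        heapq.heappush(loads, (load + t, m))
    return max(load for load, _ in loads), assignment

makespan, _ = greedy_load_balance([3, 5, 2, 7, 4, 6], num_machines=2)
print(makespan)  # 15; optimal here is 14 ({7,6} vs {3,5,2,4}), within the factor-2 bound
```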
Why Use Greedy Approximations? Because exact algorithms for NP-hard problems don't scale: a greedy approximation runs in polynomial time (often O(n log n)), is simple to implement, and comes with a provable worst-case guarantee.
Example: Greedy Set Cover
For Set Cover, greedy (pick the set covering most uncovered elements) achieves:
Greedy ≤ H(n) × OPT
where H(n) = 1 + 1/2 + 1/3 + ... + 1/n ≈ ln(n).
This is provably the best a polynomial-time algorithm can do (unless P=NP). Greedy isn't just "good enough"—it's optimal among efficient algorithms!
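The H(n) bound is essentially tight. On the classic layered instance sketched below (a construction of ours, with the same greedy as in the failure examples), greedy picks about log₂ n sets while two always suffice:

```python
def greedy_set_cover(universe: set, sets: list[set]) -> list[int]:
    """Same greedy as above: pick the set covering the most uncovered elements."""
    covered, chosen = set(), []
    while covered != universe:
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen

def tight_instance(k: int):
    """Blocks of size 2, 4, ..., 2^k, each split between a top and bottom half.

    Top and bottom together cover everything (optimal = 2 sets), but greedy
    always prefers the largest remaining block, ending up with k sets.
    """
    top, bottom, blocks = set(), set(), []
    next_elem = 0
    for i in range(1, k + 1):
        block = list(range(next_elem, next_elem + 2 ** i))
        next_elem += 2 ** i
        half = len(block) // 2
        top |= set(block[:half])
        bottom |= set(block[half:])
        blocks.append(set(block))
    return top | bottom, blocks + [top, bottom]

universe, sets = tight_instance(k=4)  # |U| = 2 + 4 + 8 + 16 = 30
chosen = greedy_set_cover(universe, sets)
print(f"|U| = {len(universe)}: greedy picked {len(chosen)} sets, optimal is 2")
# Output: |U| = 30: greedy picked 4 sets, optimal is 2 -- the gap grows as log n
```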
When greedy is suboptimal, ask: 'How suboptimal?' A 2-approximation is excellent for NP-hard problems. An O(n)-approximation is often too loose to be useful. Knowing the guarantee helps you decide if greedy is acceptable for your application.
We've built greedy intuition through diverse examples. Let's consolidate the patterns:
The Greedy Intuition Summary:
Look for problems where immediate choices don't constrain future flexibility.
When each greedy choice "opens doors" rather than "closes them," greedy likely works. When a choice now might block better choices later, greedy likely fails.
What's Next:
You've completed the foundational module on greedy algorithms. You understand what greedy algorithms are, which classic problems they solve optimally (activity selection, fractional knapsack, MST, job sequencing), where they fail (0/1 knapsack, non-canonical coin change, TSP), and how approximation guarantees quantify "good enough" when greedy isn't optimal.
Subsequent modules will explore specific greedy problems in depth: proving correctness, optimizing implementations, and mastering classic greedy patterns.
Congratulations! You've completed Module 1: What Is a Greedy Algorithm? You now have a solid foundation in greedy thinking—from formal definitions to intuitive examples. The next module deepens this understanding by exploring the greedy choice property: what makes a choice 'safe' and how to prove it.