When facing an optimization problem, you have three powerful paradigms at your disposal: plain recursion (the most straightforward approach), Dynamic Programming (recursion with memory), and Greedy algorithms (local optimization without looking back).
These three approaches form a spectrum of design choices. Understanding their relationships, trade-offs, and when each applies is essential to making informed algorithmic decisions.
In this final page of our introductory module, we'll place Dynamic Programming in context—comparing it rigorously with recursion and greedy strategies. By the end, you'll have a clear mental framework for choosing the right tool for the job.
By the end of this page, you will understand the precise relationship between DP and plain recursion, how DP differs from greedy algorithms, when each approach is appropriate, and how to recognize which paradigm a problem requires.
Dynamic Programming IS recursion—plus memory.
This is the most important insight about the DP-recursion relationship. DP doesn't replace recursive thinking; it enhances it. Every DP solution is built on a recursive foundation: the recurrence relation.
The evolution from recursion to DP:
Plain Recursion — Express the solution to a problem in terms of solutions to smaller subproblems. Elegant and directly mirrors the mathematical definition. BUT: may recompute the same subproblems exponentially many times.
Memoized Recursion (Top-Down DP) — Add a cache to store computed subproblems. Same recursive logic, but each subproblem is computed only once.
Tabulated Iteration (Bottom-Up DP) — Eliminate recursion entirely by computing subproblems in order. Same results, no recursive overhead.
All three produce the same final answer. The difference is purely in efficiency and implementation style.
"""Evolution from Plain Recursion to Dynamic ProgrammingExample: Compute nth Fibonacci number""" # Stage 1: Plain Recursiondef fib_recursive(n): """ Pure recursive approach - O(2^n) time, O(n) stack space Directly implements the mathematical definition. Problem: Exponentially many redundant computations. """ if n <= 1: return n return fib_recursive(n - 1) + fib_recursive(n - 2) # Stage 2: Memoized Recursion (Top-Down DP)def fib_memoized(n, memo={}): """ Recursive with caching - O(n) time, O(n) space Same logic as above, but we store computed values. Each subproblem computed exactly once. """ if n in memo: return memo[n] if n <= 1: return n memo[n] = fib_memoized(n - 1, memo) + fib_memoized(n - 2, memo) return memo[n] # Stage 3: Tabulated Iteration (Bottom-Up DP)def fib_tabulated(n): """ Iterative with table - O(n) time, O(n) space No recursion. We fill the table from smallest to largest. """ if n <= 1: return n dp = [0] * (n + 1) dp[1] = 1 for i in range(2, n + 1): dp[i] = dp[i - 1] + dp[i - 2] return dp[n] # Stage 4: Space-Optimized DPdef fib_optimized(n): """ Iterative with O(1) space - O(n) time, O(1) space We only need the last two values, so no table needed. """ if n <= 1: return n prev2, prev1 = 0, 1 for i in range(2, n + 1): current = prev1 + prev2 prev2/prev1 = prev1, current return prev1 # All four produce the same answer, but:# fib_recursive(40) - takes several seconds# fib_memoized(40) - instant# fib_tabulated(40) - instant# fib_optimized(40) - instant (and uses least memory)Always start by formulating the recursive solution. The recurrence relation is the heart of DP. Once you have a correct recursive formula, adding memoization or converting to iteration is mechanical. Without the correct recursive structure, no amount of optimization will help.
| Aspect | Plain Recursion | DP (Memoization/Tabulation) |
|---|---|---|
| Core Logic | Recurrence relation | Same recurrence relation |
| Subproblem Handling | Compute every time needed | Compute once, store, reuse |
| Time Complexity | Often exponential | Polynomial (typically) |
| Space for Storage | Call stack only | Cache/table + call stack (memo) or table only (tab) |
| When to Use | Problems without overlapping subproblems | Problems with overlapping subproblems |
| Relationship | Foundation for DP thinking | Recursion + storage optimization |
Not every recursive problem needs DP. When subproblems don't overlap, plain recursion is already optimal—adding memoization would waste space without saving time.
The test for overlap:
Ask: 'Will the same subproblem be computed more than once?'
Classic examples without overlap: Divide and Conquer problems such as Merge Sort, Quicksort, and Binary Search.
In these problems, the recursion tree is a proper tree (not a DAG with reconvergence), so there's no redundant computation to eliminate.
In Merge Sort (left), the recursion tree is a proper tree—each node is unique. In Fibonacci (right), the 'tree' is actually a DAG with nodes appearing multiple times (highlighted in red/yellow). When you see repeated subproblems in the recursion structure, that's the signal that DP will help.
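One way to see this concretely is to instrument the naive recursion and count how often each subproblem is solved. Here's a minimal sketch (the `fib_calls` helper is hypothetical, written just for this measurement):

```python
from collections import Counter

def fib_calls(n, counts):
    """Naive Fibonacci, instrumented to count how often each subproblem is solved."""
    counts[n] += 1
    if n <= 1:
        return n
    return fib_calls(n - 1, counts) + fib_calls(n - 2, counts)

counts = Counter()
fib_calls(10, counts)
print(counts[2])  # 34 -- fib(2) is recomputed 34 times within fib(10)
print(counts[8])  # 2  -- even large subproblems are solved more than once
```

In a divide-and-conquer recursion like Merge Sort, the analogous counter would read 1 for every subproblem: each node of the tree is unique, so there is nothing for a cache to save.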
Both DP and Greedy are optimization paradigms. Both require optimal substructure. But they differ fundamentally in how they make decisions.
Greedy Algorithms:

Make one locally optimal choice at each step, commit to it immediately, and never revisit it. No subproblem table, no backtracking.

Dynamic Programming:

Consider every available choice at each step, evaluate each using stored solutions to subproblems, and keep the best. Nothing is assumed about which choice is safe.
The key insight:
Greedy algorithms take a shortcut—they assume the locally best choice is globally safe. This works beautifully when the problem has the greedy choice property (local optima compose to global optima). But for problems lacking this property, greedy fails silently, producing suboptimal solutions.
DP is more thorough—it makes no assumptions. By considering all choices and comparing them using optimal subproblem solutions, DP guarantees the optimal answer at the cost of more computation.
amount = 12, coins = [1, 5, 6]
Minimum coins = 2 (using 6 + 6)

Greedy Approach (choose largest coin first):

For amount = 12: take 6 (6 remaining), then 6 → 2 coins. Greedy matches the optimum here.

For amount = 11: take 6 (5 remaining), then 5 → 2 coins. Optimal again.

For amount = 9: take 6 (3 remaining), then 1 + 1 + 1 → 4 coins. Still optimal, but only because this coin system is forgiving at these amounts: at amount = 10, greedy gives 6 + 1 + 1 + 1 + 1 = 5 coins, while 5 + 5 = 2 is optimal.

For amount = 6 with coins [1, 3, 4]:

Greedy takes 4 (2 remaining), then 1 + 1 → 3 coins. Optimal is 3 + 3 → 2 coins.

The greedy choice of 4 (the largest fitting coin) locks us into a suboptimal path. DP would consider both options (4 vs 3) and find the better one.
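For contrast, here's a minimal bottom-up DP sketch for coin change. Unlike greedy, it tries every coin at every amount and keeps the best:

```python
def coin_change_dp(amount, coins):
    """Bottom-up DP: dp[a] = fewest coins summing to a (infinity if impossible)."""
    INF = float('inf')
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in coins:  # consider every coin, not just the largest
            if coin <= a and dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1
    return dp[amount] if dp[amount] != INF else -1

print(coin_change_dp(6, [1, 3, 4]))  # 2 (3 + 3), where greedy returned 3
```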
| Property | Dynamic Programming | Greedy |
|---|---|---|
| Decision Strategy | Evaluate all choices, pick best | Make locally optimal choice |
| Backtracking | Implicitly considers all paths | Never backtracks |
| Optimality Guarantee | Always finds optimal | Only with greedy choice property |
| Required Properties | Optimal substructure + overlapping subproblems | Optimal substructure + greedy choice property |
| Time Complexity | Higher (polynomial, but more work) | Lower (often O(n) or O(n log n)) |
| Space Complexity | Higher (table/cache storage) | Lower (often O(1) or O(n)) |
| Implementation Style | Recursion with memo OR iterative with table | Usually simple iteration or sorting |
| Proof Requirement | Recurrence correctness | Proof that greedy choice is safe |
Greedy solutions are faster to write and execute. This makes them appealing. But greedy only works when provably correct. Applying greedy without verification often leads to wrong answers that look plausible. When in doubt, use DP—it's slower but safe.
Greedy algorithms work when the problem has the Greedy Choice Property: making the locally optimal choice at each step leads to a globally optimal solution.
Mathematical Statement:
A problem has the greedy choice property if there exists some optimal solution that makes the greedy choice. In other words, we can always safely make the locally best choice without foreclosing the possibility of reaching a global optimum.
Classic problems where greedy works: Activity Selection (shown below), Fractional Knapsack, Huffman Coding, Minimum Spanning Trees (Kruskal's and Prim's), and Dijkstra's shortest paths with non-negative weights.
"""Activity Selection Problem - Classic Greedy Problem: Given n activities with start and finish times,select the maximum number of non-overlapping activities. Greedy Strategy: Always pick the activity with the earliest finish timeamong activities compatible with already selected ones. Why it works: Earlier finish time means more room for future activities.This is provably optimal via exchange argument.""" def activity_selection(activities): """ activities: list of (start, finish) tuples Returns: list of selected activities (indices) Time: O(n log n) for sorting, O(n) for selection """ # Sort by finish time (greedy criterion) sorted_activities = sorted(enumerate(activities), key=lambda x: x[1][1]) selected = [] last_finish = 0 for idx, (start, finish) in sorted_activities: if start >= last_finish: # Compatible with previous selection selected.append(idx) last_finish = finish return selected # Exampleactivities = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]# Greedy selects activities finishing at 4, 7, 11 → 3 activities# This is provably optimal result = activity_selection(activities)print(f"Selected activities: {result}") # [0, 3, 7] or similar """Contrast: Coin Change - Greedy FAILS for arbitrary coin systems""" def coin_change_greedy(amount, coins): """ Greedy coin change: always use the largest coin that fits. WARNING: Only optimal for certain coin systems (like US coins). """ coins_sorted = sorted(coins, reverse=True) count = 0 remaining = amount for coin in coins_sorted: while remaining >= coin: remaining -= coin count += 1 return count if remaining == 0 else -1 # Works for US coins:# coin_change_greedy(87, [25, 10, 5, 1]) = 6 (3*25 + 1*10 + 0*5 + 2*1) ✓ # Fails for arbitrary coins:# coin_change_greedy(6, [4, 3, 1]) = 3 (4+1+1) ✗# Optimal: 2 (3+3)To prove greedy works, use the 'exchange argument': assume an optimal solution that doesn't make the greedy choice, and show you can swap in the greedy choice without worsening the solution. This proves the greedy choice is always safe.
DP is necessary when:
The problem has optimal substructure — The optimal solution can be constructed from optimal solutions to subproblems.
The problem has overlapping subproblems — The same subproblems are needed multiple times.
Greedy doesn't work — There's no safe local choice that guarantees global optimality.
Classic problems requiring DP:
| Problem | Why Greedy Fails | DP Approach |
|---|---|---|
| 0/1 Knapsack | Can't take fractions; best ratio item may not fit optimally | Consider take/not-take for each item |
| Longest Common Subsequence | No local criterion for matching vs skipping characters | Try all match/skip combinations via recurrence |
| Edit Distance | No greedy rule for insert/delete/replace sequence | Consider all operations at each position |
| Matrix Chain Multiplication | No obvious way to choose split points | Try all split points, pick minimum |
| Coin Change (general) | Largest coin may lead to suboptimal path | Try all coins at each step, pick minimum |
| Subset Sum | Can't greedily include/exclude items | Consider include/exclude for each item |
| Palindrome Partitioning | No local criterion for cut positions | Try all cut positions, pick minimum |
The common thread:
In each of these problems, you must consider multiple choices at each step, and the correct choice depends on the global context—not just local information. DP handles this by exhaustively exploring all choices and using stored subproblem solutions to efficiently evaluate each.
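To make the "consider multiple choices" pattern concrete, here's a minimal memoized sketch of 0/1 Knapsack (the first row of the table above); the item values and weights are made up for illustration:

```python
from functools import lru_cache

def knapsack(values, weights, capacity):
    """0/1 knapsack via memoized recursion: for each item, try both
    'skip it' and 'take it' (if it fits), and keep the better result."""
    @lru_cache(maxsize=None)
    def best(i, cap):
        if i == len(values):
            return 0
        skip = best(i + 1, cap)
        take = 0
        if weights[i] <= cap:
            take = values[i] + best(i + 1, cap - weights[i])
        return max(skip, take)
    return best(0, capacity)

# Greedy by value/weight ratio would grab the 10-value item first (ratio 2.0),
# leaving no room for the pair of 7-value items that is actually optimal.
print(knapsack((10, 7, 7), (5, 4, 4), 8))  # 14: take both weight-4 items
```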
Warning signs that greedy won't work:

You can construct a small counterexample where the locally best choice leads to a worse final answer.

The value of a choice depends on decisions you haven't made yet.

Choices interact through a shared constraint (like knapsack capacity), so grabbing the "best" item now can block a better combination later.
When facing an optimization problem, try to find a greedy solution first—it's faster. But if you can't prove greedy correctness (or can find a counterexample), switch to DP. DP is the safe fallback that always finds the optimal solution when it's applicable.
Here's a practical framework for choosing among recursion, DP, and greedy when facing an optimization problem:
Step 1: Identify the problem type

Are you asked to maximize, minimize, or count something? If it isn't an optimization or counting problem, none of these paradigms may be the right frame.
Step 2: Check for optimal substructure

If YES → Continue to Step 3
If NO → Problem may require a different approach (e.g., exhaustive search, mathematical formula)
Step 3: Check for greedy choice property
If YES (provable) → Use Greedy
If NO or UNSURE → Continue to Step 4
Step 4: Check for overlapping subproblems
If YES (polynomial overlap) → Use DP
If NO → Use Plain Recursion (Divide and Conquer)
Decision criteria summary:
| Condition | Strategy |
|---|---|
| Optimal substructure + Greedy choice property | Greedy (fastest when correct) |
| Optimal substructure + Overlapping subproblems + No greedy property | DP (guaranteed optimal) |
| Optimal substructure + No overlap | Divide & Conquer (plain recursion) |
| No optimal substructure | Problem-specific approach |
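As a worked pass through this table: Longest Common Subsequence has optimal substructure, no known safe greedy choice, and heavily overlapping (i, j) subproblems, so it lands in the DP row. A minimal memoized sketch:

```python
from functools import lru_cache

def lcs(a, b):
    """Longest common subsequence via the standard match/skip recurrence,
    memoized because the same (i, j) subproblems recur many times."""
    @lru_cache(maxsize=None)
    def solve(i, j):
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:  # match: consume a character from both strings
            return 1 + solve(i + 1, j + 1)
        # no match: skip a character on one side or the other, keep the better
        return max(solve(i + 1, j), solve(i, j + 1))
    return solve(0, 0)

print(lcs("ABCBDAB", "BDCABA"))  # 4 (e.g., "BCBA")
```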
The pragmatic approach:

1. Sketch a greedy solution first; it's the fastest to write and run.
2. Try to prove the greedy choice safe (exchange argument) or find a counterexample.
3. If a counterexample exists, or you can't prove correctness, formulate the recurrence and use DP.
4. If the subproblems turn out not to overlap, plain recursion (divide and conquer) is already enough.
Let's see all three approaches applied to related problems to reinforce the distinctions:
Example 1: Maximum Sum Subarray
```python
def max_subarray_kadane(nums):
    """
    Kadane's Algorithm - Greedy approach.

    The greedy choice: at each position, decide whether to
    extend the current subarray or start fresh.

    This works because the locally optimal choice (extend if
    positive sum, restart otherwise) leads to global optimum.
    """
    max_sum = current_sum = nums[0]

    for num in nums[1:]:
        # Greedy choice: extend or restart
        current_sum = max(num, current_sum + num)
        max_sum = max(max_sum, current_sum)

    return max_sum


# Why greedy works here:
# If current_sum < 0, extending would only hurt us.
# So we greedily restart. This is provably optimal.
```

Example 2: Coin Change — Where Greedy Fails
```python
def coin_change_greedy(amount, coins):
    """
    Greedy coin change - INCORRECT for general coin systems!
    """
    coins = sorted(coins, reverse=True)
    count = 0

    for coin in coins:
        while amount >= coin:
            amount -= coin
            count += 1

    return count if amount == 0 else -1


# Test: amount=6, coins=[4, 3, 1]
# Greedy: 4+1+1 = 3 coins ✗ WRONG
# Optimal: 3+3 = 2 coins ✓
```

Sometimes the "best" DP solution simplifies to greedy. Maximum subarray (Kadane's) is an example—the DP recurrence has such simple structure that it's effectively greedy. This happens when there's only one "good" choice at each step, making exhaustive comparison unnecessary.
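To see that equivalence, here's the same logic written as explicit DP, where dp[i] holds the best sum of a subarray ending exactly at index i. Kadane's algorithm is this recurrence with the table collapsed to a single variable:

```python
def max_subarray_dp(nums):
    """Explicit DP view of maximum subarray:
    dp[i] = best sum of a subarray that ends exactly at index i."""
    dp = [0] * len(nums)
    dp[0] = nums[0]
    for i in range(1, len(nums)):
        # Two choices at each step: extend the previous subarray or start anew.
        dp[i] = max(nums[i], dp[i - 1] + nums[i])
    return max(dp)  # the best subarray ends at *some* index

print(max_subarray_dp([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6 ([4, -1, 2, 1])
```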
We've completed our exploration of how Dynamic Programming relates to plain recursion and greedy algorithms. Let's consolidate the key insights:
| Paradigm | When to Use | Guarantee | Typical Complexity |
|---|---|---|---|
| Plain Recursion | No overlapping subproblems (D&C) | Correct if recurrence is correct | Depends on problem |
| Dynamic Programming | Optimal substructure + overlapping subproblems | Optimal solution guaranteed | Polynomial |
| Greedy | Optimal substructure + greedy choice property | Optimal only if property holds | Often O(n) or O(n log n) |
Congratulations! You've completed the foundational module on Dynamic Programming. You now understand what DP is (an optimization technique), how it works (breaking into subproblems with storage), and how it relates to recursion and greedy algorithms. In the next module, we'll dive deeper into overlapping subproblems—the key property that makes DP efficient.
Coming up next:
Module 2 explores Overlapping Subproblems in depth—what makes subproblems 'overlapping', how to identify overlap, and why this property is the key to DP's efficiency gains. We'll analyze the structure of recursion trees and see exactly where the exponential savings come from.