In the previous page, we learned that tabulation fills a table iteratively, computing subproblem solutions from base cases to the final answer. But we glossed over a critical question: In what order do we fill the cells?
This question might seem trivial for simple problems like Fibonacci, where the answer is obvious: compute dp[0], then dp[1], then dp[2], and so on. But for more complex problems—especially those with 2D or multi-dimensional state spaces—the filling order becomes a crucial design decision.
Get it wrong, and your algorithm attempts to read values that haven't been computed yet, producing garbage. Get it right, and every cell is computed exactly when all its dependencies are ready.
The dependency order is the heartbeat of tabulation. Understanding it transforms tabulation from a mechanical technique into a principled methodology.
By the end of this page, you will understand how to analyze the dependency structure of any DP problem, visualize dependencies as a directed graph, and derive the correct loop structure—including loop directions and nesting—that guarantees all dependencies are satisfied before each cell is computed.
A dependency in dynamic programming exists when the solution to one subproblem relies on the solutions to other subproblems. If dp[i] needs the value of dp[j] to compute its answer, we say dp[i] depends on dp[j], or equivalently, dp[j] is a prerequisite for dp[i].
This dependency relationship can be visualized as a directed edge: an arrow from dp[j] to dp[i] indicating that j must be computed before i.
The Dependency Graph:
If we draw all such edges for all subproblems, we get the dependency graph or subproblem DAG (Directed Acyclic Graph). This graph captures the entire computational structure of the DP problem: each node is a subproblem, and each edge is a "must be computed first" constraint.
The absence of cycles is guaranteed by the problem structure: subproblems are defined in terms of "smaller" or "simpler" subproblems, never in terms of themselves or larger problems.
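To make the dependency graph concrete, here is a small sketch (the function name is illustrative) that enumerates the edges for the Fibonacci recurrence dp[i] = dp[i-1] + dp[i-2] and confirms that every edge points from a smaller index to a larger one, which is exactly why the graph has no cycles:

```python
def fib_dependency_edges(n: int) -> list[tuple[int, int]]:
    # Edge (j, i) means dp[j] must be computed before dp[i].
    edges = []
    for i in range(2, n + 1):
        edges.append((i - 1, i))  # dp[i] reads dp[i-1]
        edges.append((i - 2, i))  # dp[i] reads dp[i-2]
    return edges

# Every edge goes from a smaller index to a larger one, so the graph
# is acyclic and increasing-index order is a valid fill order.
assert all(j < i for j, i in fib_dependency_edges(10))
```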
Analyzing dependencies from the transition formula:
The dependency structure is encoded directly in the transition formula (recurrence relation). For any DP problem, examine the right-hand side of the recurrence to identify dependencies:
| Recurrence | Dependencies of dp[i] or dp[i][j] |
|---|---|
| dp[i] = dp[i-1] + dp[i-2] | dp[i-1], dp[i-2] (both have smaller indices) |
| dp[i][j] = dp[i-1][j] + dp[i][j-1] | dp[i-1][j], dp[i][j-1] (row above, column left) |
| dp[i][w] = max(dp[i-1][w], dp[i-1][w-wt[i]] + v[i]) | dp[i-1][w], dp[i-1][w-wt[i]] (both in previous row) |
| dp[i] = max(dp[j]) + 1 for all j < i where arr[j] < arr[i] | All dp[j] with j < i (all smaller indices) |
In each case, the dependencies point to subproblems with smaller indices (earlier in some dimension). This is what makes bottom-up computation possible: we can process indices in increasing order and always have dependencies ready.
A well-posed DP problem always has an acyclic dependency graph. If you find yourself with circular dependencies (A depends on B, B depends on A), you've either made an error in your formulation or discovered that the problem isn't suitable for standard DP.
Given a dependency DAG, a topological order is any linear ordering of the nodes such that for every directed edge (A → B), node A appears before node B in the ordering. In other words, every node appears after all its dependencies.
For DP, we need to fill the table in some topological order of the dependency graph. The key insight is:
A simple nested loop structure, with the right directions, implicitly implements a topological ordering.
We don't need to explicitly construct the DAG or run a topological sort algorithm. Instead, we analyze the dependencies and choose loop directions that naturally satisfy the ordering constraints.
The Loop Direction Principle:
- If dp[i] depends only on cells with smaller i values, loop i from low to high (a forward loop).
- If dp[i] depends only on cells with larger i values, loop i from high to low (a backward loop).
- If dp[i] depends on cells in both directions... you have a problem (the formulation may need restructuring).

For multi-dimensional tables, apply this principle independently to each dimension:
For dp[i][j]:
- If dependencies are on smaller i: outer loop i from low to high
- If dependencies are on larger i: outer loop i from high to low
- If dependencies are on smaller j: inner loop j from low to high
- If dependencies are on larger j: inner loop j from high to low
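As a minimal illustration of the loop direction principle, the two sketches below (function names are illustrative) fill the same kind of 1D table in opposite directions, each matching where its dependencies point:

```python
def prefix_fill(a: list[int]) -> list[int]:
    # dp[i] = dp[i-1] + a[i]: depends on a SMALLER index -> forward loop
    n = len(a)
    dp = [0] * n
    dp[0] = a[0]
    for i in range(1, n):            # low to high
        dp[i] = dp[i - 1] + a[i]
    return dp

def suffix_fill(a: list[int]) -> list[int]:
    # dp[i] = max(a[i], dp[i+1]): depends on a LARGER index -> backward loop
    n = len(a)
    dp = [0] * n
    dp[n - 1] = a[n - 1]
    for i in range(n - 2, -1, -1):   # high to low
        dp[i] = max(a[i], dp[i + 1])
    return dp
```

Same nested-loop machinery, opposite directions; the recurrence alone dictates which one is correct.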
| Dependency Pattern | Example Problems | Loop Structure |
|---|---|---|
| Smaller indices only (dp[i-1], dp[i-2]) | Fibonacci, Climbing Stairs | for i from 2 to n |
| Row above only (dp[i-1][...]) | 0/1 Knapsack, LCS | for i from 1 to n (outer) |
| Column left only (dp[...][j-1]) | Grid paths (specific) | for j from 1 to m (inner) |
| Above and left (dp[i-1][j], dp[i][j-1]) | Unique Paths, Grid Min-Path | for i 0→n, for j 0→m |
| Interval expansion (dp[i][j] from dp[i+1][j-1]) | Palindrome DP | for length 1→n, for start 0→n-length |
Sketch a small version of your table (3×3 or 4×4). For each cell, draw arrows TO that cell FROM the cells it depends on. Then ask: in what order can I visit cells such that all incoming arrows come from already-visited cells? That order determines your loops.
For one-dimensional DP problems, dependency analysis is straightforward. The key question is: does dp[i] depend on cells with smaller indices, larger indices, or both?
Pattern 1: Dependencies on Smaller Indices (Forward Fill)
This is the most common pattern. Each cell depends on previously computed cells with smaller indices.
```python
# Example: House Robber
# dp[i] = max money robbing houses 0..i
# dp[i] = max(dp[i-1], dp[i-2] + nums[i])
# Dependencies: dp[i-1], dp[i-2] — both smaller

def house_robber(nums: list[int]) -> int:
    if not nums:
        return 0
    n = len(nums)
    if n == 1:
        return nums[0]

    dp = [0] * n
    dp[0] = nums[0]                  # Base case 1
    dp[1] = max(nums[0], nums[1])    # Base case 2

    # Forward fill: i from 2 to n-1
    for i in range(2, n):
        dp[i] = max(dp[i - 1], dp[i - 2] + nums[i])

    return dp[n - 1]

# Dependency visualization:
# dp[0] → dp[1] → dp[2] → dp[3] → dp[4]
#   ↘___________↗  ↘___________↗
# Each cell depends on the one or two cells to its left.
```

Pattern 2: Dependencies on Larger Indices (Backward Fill)
Less common, but occurs when we define the subproblem from the perspective of "suffix" rather than "prefix."
```python
# Example: Can reach end from position i?
# dp[i] = True if we can reach index n-1 starting from i
# dp[i] = any(dp[i+1], dp[i+2], ..., dp[i+jump[i]])
# Dependencies: dp[i+1], dp[i+2], etc. — all larger

def can_reach_end(jump: list[int]) -> bool:
    n = len(jump)
    if n == 0:
        return False

    dp = [False] * n
    dp[n - 1] = True   # Base case: at the end, we've reached it

    # Backward fill: i from n-2 down to 0
    for i in range(n - 2, -1, -1):
        for j in range(1, jump[i] + 1):
            if i + j < n and dp[i + j]:
                dp[i] = True
                break

    return dp[0]

# Dependency visualization:
# dp[4] ← dp[3] ← dp[2] ← dp[1] ← dp[0]
# Each cell depends on cells to its right.
```

Pattern 3: Dependencies on All Smaller Indices
Some problems require looking at all previously computed values, not just a fixed number of predecessors.
```python
# Example: Longest Increasing Subsequence (O(n²) DP)
# dp[i] = length of LIS ending at index i
# dp[i] = 1 + max(dp[j] for all j < i where arr[j] < arr[i])
# Dependencies: ALL dp[j] with j < i (where condition holds)

def longest_increasing_subsequence(arr: list[int]) -> int:
    n = len(arr)
    if n == 0:
        return 0

    dp = [1] * n   # Base case: each element is an LIS of length 1

    # Forward fill: i from 1 to n-1
    for i in range(1, n):
        # Look at ALL smaller indices
        for j in range(i):
            if arr[j] < arr[i]:
                dp[i] = max(dp[i], dp[j] + 1)

    return max(dp)

# Dependency visualization for dp[3]:
# dp[0] ↘
#         ↘
# dp[1] ───→ dp[3]
#         ↗
# dp[2] ↗
# Cell dp[3] may depend on any/all of dp[0], dp[1], dp[2].
```

Look at your recurrence. If the indices on the right side are always i-1, i-2, etc. (fixed offsets), you have a local dependency pattern. If they involve a variable bound (like 'all j < i'), you have a global dependency pattern. Both are handled with forward loops; the difference is in the inner loop structure.
Two-dimensional DP problems are where dependency analysis becomes genuinely interesting. The 2D table has rows and columns, and dependencies can point in various directions. Visualizing these dependencies is key to designing correct loops.
Pattern 1: Depends on Row Above Only
When dp[i][j] depends only on cells in row i-1, each row can be filled independently once the previous row is complete.
```python
# Example: 0/1 Knapsack
# dp[i][w] = max value using first i items with capacity w
# dp[i][w] = max(dp[i-1][w], dp[i-1][w-wt[i]] + val[i])
# Dependencies: dp[i-1][w] and dp[i-1][w-wt[i]] — both in row i-1

def knapsack_01(weights: list[int], values: list[int], W: int) -> int:
    n = len(weights)
    # dp[i][w] = max value using items 0..i-1 with capacity w
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    # Base case: row 0 is all zeros (0 items = 0 value)

    # Fill row by row, left to right within each row
    for i in range(1, n + 1):      # Each item
        for w in range(W + 1):     # Each capacity
            # Don't take item i-1
            dp[i][w] = dp[i - 1][w]
            # Take item i-1 (if it fits)
            if weights[i - 1] <= w:
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])

    return dp[n][W]

# Dependency for dp[i][w]:
# dp[i-1][w-wt]    dp[i-1][w]
#         ↘            ↓
#              dp[i][w]
# Both dependencies are in the row ABOVE.
```

Pattern 2: Depends on Row Above AND Column Left
This is the "upper-left corner" dependency pattern, common in grid navigation and sequence alignment problems.
```python
# Example: Edit Distance (Levenshtein Distance)
# dp[i][j] = min edits to transform s1[0..i-1] into s2[0..j-1]
# dp[i][j] = min of:
#   - dp[i-1][j] + 1      (delete from s1)
#   - dp[i][j-1] + 1      (insert into s1)
#   - dp[i-1][j-1] + cost (replace/match)
# Dependencies: dp[i-1][j], dp[i][j-1], dp[i-1][j-1]

def edit_distance(s1: str, s2: str) -> int:
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]

    # Base cases: first row and first column
    for i in range(m + 1):
        dp[i][0] = i   # Delete all chars from s1
    for j in range(n + 1):
        dp[0][j] = j   # Insert all chars into s1

    # Fill top-to-bottom, left-to-right
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]   # No edit needed
            else:
                dp[i][j] = 1 + min(
                    dp[i - 1][j],      # Delete
                    dp[i][j - 1],      # Insert
                    dp[i - 1][j - 1],  # Replace
                )

    return dp[m][n]

# Dependency visualization:
# dp[i-1][j-1] → dp[i-1][j]
#      ↓              ↓
# dp[i][j-1]  ───→ dp[i][j]
# The cell depends on three neighbors: above, left, and diagonal.
```

Pattern 3: Interval Dependencies (Diagonal Fill)
Some problems define dp[i][j] in terms of subintervals, creating diagonal dependencies that require a more careful iteration strategy.
```python
# Example: Longest Palindromic Subsequence
# dp[i][j] = length of LPS in s[i..j]
# dp[i][j] = dp[i+1][j-1] + 2             (if s[i] == s[j])
#          = max(dp[i+1][j], dp[i][j-1])  (otherwise)
# Dependencies: dp[i+1][j], dp[i][j-1], dp[i+1][j-1]
# (all shorter intervals)

def longest_palindrome_subseq(s: str) -> int:
    n = len(s)
    if n == 0:
        return 0

    # dp[i][j] = LPS length for s[i..j]
    dp = [[0] * n for _ in range(n)]

    # Base case: single characters
    for i in range(n):
        dp[i][i] = 1

    # Fill by increasing interval length (diagonal-by-diagonal)
    for length in range(2, n + 1):        # Interval length 2, 3, ..., n
        for i in range(n - length + 1):   # Starting position
            j = i + length - 1            # Ending position
            if s[i] == s[j]:
                dp[i][j] = dp[i + 1][j - 1] + 2
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j - 1])

    return dp[0][n - 1]

# Dependency visualization (for a 5-char string):
# The table is filled diagonal-by-diagonal:
#  i\j  0    1    2    3    4
#  0   [1]  [2]  [3]  [4]  [5]   Diagonal numbers indicate fill order
#  1        [1]  [2]  [3]  [4]
#  2             [1]  [2]  [3]
#  3                  [1]  [2]
#  4                       [1]
#
# Cell dp[0][4] depends on dp[1][3] (inner diagonal), etc.
```

When dependencies span from index i to j but depend on dp[i+1][...] or dp[...][j-1], iterate by interval length. Start with length 1 (base case), then length 2, then 3, and so on. Within each length, iterate through all valid starting positions. This guarantees shorter intervals are computed before longer ones.
Visual representations of dependencies are invaluable for understanding and debugging tabulation. Let's examine how to draw and interpret these diagrams.
Arrow Diagrams for 2D Tables:
For a 2D DP table, draw a small grid (say, 4×4) and add arrows to show dependencies. There are several common patterns:
Pattern A: Upper-Left Dependencies (Unique Paths, LCS)
[↓][↓][↓][↓]
→[·][·][·][·]
→[·][·][·][·]
→[·][·][·][·]
Arrow convention:
↓ = dependency from cell above
→ = dependency from cell to the left
↘ = dependency from diagonal (upper-left)
For each interior cell:
↓
→ [X]
Fill order: top-to-bottom, left-to-right
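Pattern A corresponds to code like the following minimal Unique Paths sketch, filling top-to-bottom and left-to-right so that the "above" and "left" cells are always ready:

```python
def unique_paths(m: int, n: int) -> int:
    # dp[i][j] = number of paths to cell (i, j) moving only right or down
    # dp[i][j] = dp[i-1][j] + dp[i][j-1]  (from above + from the left)
    dp = [[1] * n for _ in range(m)]   # Base cases: one path along row 0 / column 0
    for i in range(1, m):              # top to bottom
        for j in range(1, n):          # left to right
            dp[i][j] = dp[i - 1][j] + dp[i][j - 1]
    return dp[m - 1][n - 1]
```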
Pattern B: Previous Row Only (0/1 Knapsack)
[B][B][B][B] ← Base row (i=0)
↓ ↘ ← All arrows come from row above
[·][·][·][·] ← Row i=1
↓ ↘
[·][·][·][·] ← Row i=2
For each cell in row i:
↓ ↘
[X] (from i-1,w and i-1,w-wt)
Fill order: row by row (outer), any order within row (but typically left-to-right)
Pattern C: Diagonal/Interval Fill (Palindrome DP, Matrix Chain)
Fill order by diagonal:
[1][2][3][4][5] Numbers indicate which diagonal "wave"
[1][2][3][4]
[1][2][3]
[1][2]
[1]
Cell dp[i][j] depends on:
- dp[i+1][j] (one row below, same column)
- dp[i][j-1] (same row, one column left)
- dp[i+1][j-1] (one row below, one column left)
All dependencies are in shorter intervals (closer to diagonal).
For any new DP problem, before writing code: (1) Draw a small table. (2) Write the recurrence. (3) For one interior cell, draw arrows from all its dependencies. (4) Identify the pattern. (5) Deduce the loop structure. This 5-step process prevents the majority of tabulation bugs.
Verification technique: The wavefront test
Imagine a "wavefront" sweeping through the table in your proposed fill order. At each moment, the wavefront represents the cells currently being computed. All arrows into these cells must come from behind the wavefront (already computed cells). If any arrow comes from ahead of the wavefront (not yet computed), your fill order is wrong.
For standard row-by-row, left-to-right filling, the cells behind the wavefront are every cell in earlier rows plus the cells to the left in the current row. Only dependencies pointing up or to the left are guaranteed to be satisfied.
This is why upper-left dependency patterns work with this fill order, but interval patterns require diagonal filling.
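The wavefront test can also be automated. The sketch below (function and variable names are illustrative) replays a proposed fill order and checks that every dependency of each cell lies behind the wavefront:

```python
def check_fill_order(order, deps):
    """order: cells in the proposed fill order.
    deps(cell): the cells that `cell` reads from.
    Returns True iff every dependency is already computed when needed."""
    visited = set()
    for cell in order:
        for d in deps(cell):
            if d not in visited:
                return False      # arrow from ahead of the wavefront
        visited.add(cell)
    return True

# Row-major order works for the "above and left" pattern...
n = 3
row_major = [(i, j) for i in range(n) for j in range(n)]
above_left = lambda c: [d for d in [(c[0] - 1, c[1]), (c[0], c[1] - 1)]
                        if d[0] >= 0 and d[1] >= 0]
assert check_fill_order(row_major, above_left)

# ...but fails for interval dependencies like dp[i+1][j]:
interval = lambda c: [(c[0] + 1, c[1])] if c[0] + 1 <= c[1] else []
upper_triangle = [(i, j) for i in range(n) for j in range(i, n)]
assert not check_fill_order(upper_triangle, interval)
```

Running a checker like this on a 4×4 toy table is a quick way to debug a suspected fill-order bug before touching the real recurrence.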
Some DP problems have more complex dependency structures that don't fit neatly into the patterns above. Here are strategies for handling them.
Strategy 1: Redefining the Subproblem
If your natural formulation has awkward dependencies, consider redefining what dp[i][j] represents. Often, a different subproblem definition yields cleaner dependencies.
Example: Consider the "jump game" problem where you can jump up to arr[i] steps from position i. A forward-looking formulation ("can I reach position i from the start?") has complex dependencies. A backward-looking formulation ("can I reach the end from position i?") has simple dependencies (only on larger indices), enabling a clean backward loop.
Strategy 2: Multiple Passes
Some problems benefit from filling the table in multiple passes, each with a simple dependency structure.
Example: In bidirectional DP (like certain stock trading problems), you might do a forward pass computing the best result for each prefix, then a backward pass computing the best result for each suffix, and finally combine the two arrays.
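As one sketch of the multiple-pass strategy (the function name and the two-transaction stock setup are illustrative, not the only instance of this pattern), each pass has a simple one-directional dependency structure:

```python
def max_profit_two_transactions(prices: list[int]) -> int:
    # Pass 1 (forward):  left[i]  = best single-sale profit within prices[0..i]
    # Pass 2 (backward): right[i] = best single-sale profit within prices[i..n-1]
    # Combine: the two transactions occupy disjoint index ranges.
    n = len(prices)
    if n < 2:
        return 0

    left = [0] * n
    lowest = prices[0]
    for i in range(1, n):              # forward: depends on smaller i
        lowest = min(lowest, prices[i])
        left[i] = max(left[i - 1], prices[i] - lowest)

    right = [0] * (n + 1)              # right[n] = 0 sentinel (empty suffix)
    highest = prices[-1]
    for i in range(n - 2, -1, -1):     # backward: depends on larger i
        highest = max(highest, prices[i])
        right[i] = max(right[i + 1], highest - prices[i])

    # Best split: first transaction in [0..i], second in [i+1..n-1]
    return max(left[i] + right[i + 1] for i in range(n))
```

Neither pass alone has a complicated dependency structure; the complexity lives entirely in the combination step.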
Strategy 3: Explicit Topological Sort
For truly arbitrary dependency graphs (rare in practice), you can explicitly construct the dependency graph and perform a topological sort to determine the fill order. This is unusual for standard DP problems but may arise in advanced scenarios.
```
# Pseudocode for explicit topological order
graph = build_dependency_graph(problem)
order = topological_sort(graph)
for state in order:
    dp[state] = compute_from_dependencies(dp, state)
```
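If an explicit ordering is ever needed, Kahn's algorithm provides one. The sketch below (names are illustrative) computes a topological fill order from a map of each state to its prerequisites:

```python
from collections import deque

def topological_fill_order(deps: dict) -> list:
    """deps[state] = list of states that `state` depends on.
    Returns the states ordered so each appears after all of its
    dependencies (Kahn's algorithm); assumes an acyclic graph."""
    dependents = {s: [] for s in deps}
    indegree = {s: len(prereqs) for s, prereqs in deps.items()}
    for s, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(s)

    ready = deque(s for s, d in indegree.items() if d == 0)
    order = []
    while ready:
        s = ready.popleft()
        order.append(s)
        for t in dependents[s]:
            indegree[t] -= 1
            if indegree[t] == 0:
                ready.append(t)
    return order

# Fibonacci-style dependencies: dp[i] needs dp[i-1] and dp[i-2]
deps = {0: [], 1: [], 2: [0, 1], 3: [1, 2], 4: [2, 3]}
order = topological_fill_order(deps)
assert all(order.index(p) < order.index(s) for s, ps in deps.items() for p in ps)
```

For this example the order is just 0, 1, 2, 3, 4, confirming that a simple forward loop would have sufficed; the explicit sort earns its keep only when the state space has no such obvious linear sweep.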
If you find yourself struggling to determine a fill order, step back and ask: Is my subproblem definition correct? Most well-posed DP problems have clean dependency structures. Convoluted dependencies often indicate a suboptimal formulation.
Let's walk through the complete process of deriving loop structure from scratch for two problems.
Example 1: Coin Change (Minimum Coins)
Problem: Given coin denominations and a target amount, find the minimum number of coins needed.
1. State definition: dp[a] = minimum coins to make amount a.
2. Recurrence: dp[a] = 1 + min(dp[a - c] for each coin c with a >= c).
3. Dependency analysis: dp[a] depends on dp[a - c] for various c. Since c > 0, we have a - c < a, so all dependencies are on smaller indices.
4. Base case: dp[0] = 0 (zero coins for amount zero).
```python
def coin_change(coins: list[int], amount: int) -> int:
    """
    Derived loop structure:
    - Dependencies: dp[a] depends on dp[a - coin] (smaller)
    - Loop direction: forward (0 to amount)
    """
    # Use infinity to represent impossible amounts
    INF = float('inf')
    dp = [INF] * (amount + 1)
    dp[0] = 0   # Base case

    # Forward fill
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a and dp[a - coin] != INF:
                dp[a] = min(dp[a], dp[a - coin] + 1)

    return dp[amount] if dp[amount] != INF else -1
```

Example 2: Matrix Chain Multiplication (Minimum Cost)
Problem: Given matrix dimensions, find minimum scalar multiplications to compute the product.
1. State definition: dp[i][j] = min cost to multiply matrices i through j.
2. Recurrence: dp[i][j] = min(dp[i][k] + dp[k+1][j] + d[i-1]*d[k]*d[j]) over all split points k.
3. Dependency analysis: dp[i][j] depends on dp[i][k] and dp[k+1][j]. Both are shorter intervals (chains of fewer matrices).
4. Base case: dp[i][i] = 0 (single matrix, no multiplication needed).
```python
def matrix_chain_order(dims: list[int]) -> int:
    """
    dims[i-1] x dims[i] gives the dimensions of matrix i.
    For n matrices, dims has n+1 elements.

    Derived loop structure:
    - Dependencies: dp[i][j] depends on shorter intervals
    - Loop direction: by increasing interval length
    """
    n = len(dims) - 1   # Number of matrices
    if n <= 1:
        return 0

    # dp[i][j] = min cost to multiply matrices i..j (1-indexed)
    INF = float('inf')
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    # Base case: single matrices cost 0 (dp[i][i] = 0, already initialized)

    # Fill by interval length
    for length in range(2, n + 1):          # 2 matrices, 3 matrices, etc.
        for i in range(1, n - length + 2):  # Starting matrix
            j = i + length - 1              # Ending matrix
            dp[i][j] = INF
            for k in range(i, j):           # Split point
                # Cost = left chain + right chain + multiplication cost
                cost = dp[i][k] + dp[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                dp[i][j] = min(dp[i][j], cost)

    return dp[1][n]
```

For every DP problem: (1) Write the state definition in plain English. (2) Write the recurrence relation. (3) List all dependencies: which cells does dp[i] or dp[i][j] read from? (4) Determine whether dependencies are smaller or larger in each dimension. (5) Set loop directions accordingly. (6) Identify and initialize all base cases.
Dependency order is the invisible architecture of tabulation. Understanding it transforms DP from mysterious to mechanical. Let's consolidate the key principles:

- Every recurrence encodes a dependency graph, and for well-posed DP problems that graph is a DAG.
- Filling the table in any topological order of the DAG guarantees correctness; the right loop directions implement such an order implicitly.
- Dependencies on smaller indices call for forward loops; dependencies on larger indices call for backward loops; interval dependencies call for filling by increasing length.
- The wavefront test verifies a proposed fill order: every arrow into a cell must come from already-computed cells.
What's next:
We've mastered the structure of tabulation—tables and dependency order. But tabulation has a hidden cost: recursion overhead (or rather, the absence of it). In the next page, we'll examine why eliminating recursion matters, exploring stack usage, function call costs, and the practical performance implications of choosing tabulation over memoization.
You now understand how to analyze dependencies, visualize them as a DAG, and derive the correct loop structure for any DP problem. This skill transforms tabulation from trial-and-error into principled engineering. Next, we'll explore the performance benefits of eliminating recursion entirely.