Once you've identified that a problem is suitable for Dynamic Programming, the next challenge is the most critical and difficult step: defining the state and the transition function. This is where DP transforms from recognition into implementation.
A well-defined state captures exactly the information needed to solve subproblems. A correct transition function describes how to build solutions from previously solved subproblems. Together, they form a complete mathematical specification that translates directly into code.
This page teaches you a systematic methodology for state design—turning vague problem intuition into precise, implementable DP solutions.
By completing this page, you will be able to:
• Design DP states that capture exactly the necessary information
• Write precise transition equations from subproblem relationships
• Handle base cases and boundary conditions systematically
• Avoid common state design pitfalls that lead to incorrect solutions
• Validate your state definition before implementation
In Dynamic Programming, the state is a compact representation of a subproblem: it contains all the information necessary to identify the subproblem uniquely and to compute its answer.
Think of the state as the 'address' of a subproblem in your memoization table. Two subproblems with the same state are identical—they have the same solution.
```
// FIBONACCI
// State: fib(n) = "the n-th Fibonacci number"
// Parameters: n (a single integer)
// Value: the n-th Fibonacci number (integer)
// Domain: n ≥ 0

// 0/1 KNAPSACK
// State: dp(i, w) = "maximum value achievable using items 0..i with capacity w"
// Parameters: i (item index), w (remaining capacity)
// Value: maximum total value (integer)
// Domain: 0 ≤ i < n, 0 ≤ w ≤ W

// LONGEST COMMON SUBSEQUENCE
// State: lcs(i, j) = "length of LCS of first i chars of X and first j chars of Y"
// Parameters: i, j (positions in two strings)
// Value: length of LCS (integer)
// Domain: 0 ≤ i ≤ len(X), 0 ≤ j ≤ len(Y)

// COIN CHANGE (Minimum Coins)
// State: dp(amount) = "minimum coins needed to make 'amount'"
// Parameters: amount (target value)
// Value: minimum number of coins (integer), or ∞ if impossible
// Domain: 0 ≤ amount ≤ target
```

Always write your state definition in words before coding. Use this template:
"dp(params) = [what we're computing] for [what the parameters represent]"
Example: "dp(i, j) = minimum edit distance to transform the first i characters of word1 into the first j characters of word2"
This verbal definition prevents confusion and guides your transition equations.
The most critical requirement for a DP state is that it contains sufficient information to solve the subproblem. This means: given only the state parameters, you must be able to compute the answer without knowing HOW you arrived at that state.
The Markov Property of DP:
Just like Markov chains, DP states must be 'memoryless'—the future depends only on the current state, not the path taken to reach it. If your current decision depends on choices you made earlier (beyond what's captured in the state), your state is insufficient.
```
// PROBLEM: Best Time to Buy and Sell Stock with Cooldown
// After selling, you must wait one day before buying again.

// ❌ INSUFFICIENT STATE: dp(day) = "max profit on day i"
// Problem: To decide if we can buy today, we need to know
// if we sold yesterday (cooldown) or earlier. The state doesn't capture this!

// ✓ SUFFICIENT STATE: dp(day, state) where state ∈ {HOLDING, NOT_HOLDING, COOLDOWN}
// Now we know:
// - HOLDING: We own stock, can sell today
// - NOT_HOLDING: We don't own stock, can buy today
// - COOLDOWN: We sold yesterday, must wait

// Alternatively: dp(day, holding) where holding ∈ {true, false}
// And handle cooldown in transitions by looking at dp(day-2, _) for buy

// Key insight: The "path to get here" matters for decisions,
// so we must encode relevant path information in the state.
```

Testing for Sufficiency:
Ask yourself: "If I'm at state S, can I determine my optimal action without knowing how I got to S?"
If the answer is no, you need to add more information to your state. Common additions include:
• Current 'mode' or 'phase' (holding/not holding, ascending/descending)
• Last action taken (bought, sold, skipped)
• Constraints consumed so far (transactions used, items selected)
• Previous element processed (for problems with adjacent constraints)
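As an illustration of encoding a 'mode' in the state, here is a minimal TypeScript sketch of the stock-with-cooldown problem using the dp(day, holding) formulation, where buying consults day i-2 to respect the cooldown. The function name `maxProfitWithCooldown` is ours, chosen for illustration:

```typescript
function maxProfitWithCooldown(prices: number[]): number {
  const n = prices.length;
  if (n === 0) return 0;
  // hold[i]: max profit at end of day i while holding stock
  // free[i]: max profit at end of day i while not holding stock
  const hold: number[] = new Array(n).fill(-Infinity);
  const free: number[] = new Array(n).fill(0);
  hold[0] = -prices[0]; // base case: bought on day 0

  for (let i = 1; i < n; i++) {
    // Buying today requires not having sold yesterday: look at free[i-2]
    const afterCooldown = i >= 2 ? free[i - 2] : 0;
    hold[i] = Math.max(hold[i - 1], afterCooldown - prices[i]); // keep holding, or buy
    free[i] = Math.max(free[i - 1], hold[i - 1] + prices[i]);   // stay out, or sell
  }
  return free[n - 1]; // end without holding
}
```

Note how the path dependence ("did I sell yesterday?") is captured by indexing two days back rather than by adding a third explicit state.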
While the state must be sufficient, adding unnecessary information explodes the state space. A state with parameters (i, j, k, l, m) when only (i, j) are needed wastes memory and time.
The art is finding the MINIMAL state that is SUFFICIENT. This often requires deep understanding of the problem structure.
Designing a DP state isn't guesswork—it follows a systematic methodology. Here's a step-by-step process that works for most problems:
```
// PROBLEM: Given items with weights and values, maximize value
// within capacity W. Each item can be taken at most once.

// STEP 1: What remains after a decision?
// After deciding on item i (take or skip), we have:
// - Remaining items: items i+1 to n-1
// - Remaining capacity: W minus weight if taken, W if skipped

// STEP 2: What affects future decisions?
// - Which items remain (determined by current index)
// - How much capacity remains (determines what we can still take)
// Note: We don't need to know WHICH items were taken, just the remaining capacity

// STEP 3: Verbal definition
// dp(i, w) = "maximum value achievable using items i to n-1 with capacity w"

// STEP 4: Verify sufficiency
// Given i and w, can we decide optimally? Yes!
// - Option 1: Skip item i → value = dp(i+1, w)
// - Option 2: Take item i (if weight[i] ≤ w) → value = value[i] + dp(i+1, w - weight[i])
// We take max of options. No other information needed. ✓

// STEP 5: Verify minimality
// - Remove i? Can't - we need to know which items remain
// - Remove w? Can't - we need to know if items fit
// Both parameters are necessary. ✓

// STEP 6: Count state space
// - i ranges from 0 to n → O(n) values
// - w ranges from 0 to W → O(W) values
// Total: O(n × W) states - polynomial, tractable ✓
```

The same problem often has multiple valid state definitions. For knapsack, we could also define:
• dp(i, w) = 'max value using items 0 to i-1 with capacity w' (prefix instead of suffix)
• dp(i, v) = 'minimum weight to achieve value v using items 0 to i-1'
Different definitions lead to different implementations. Choose based on clarity and efficiency.
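As a sketch, the suffix-based definition derived above (dp(i, w) = max value using items i..n-1 with capacity w) translates to the following TypeScript; the function name `knapsack` is illustrative:

```typescript
function knapsack(weights: number[], values: number[], W: number): number {
  const n = weights.length;
  // dp[i][w] = max value achievable using items i..n-1 with capacity w
  const dp: number[][] = Array.from({ length: n + 1 }, () =>
    new Array(W + 1).fill(0)
  );
  // Base case: dp[n][w] = 0 (no items left) — already covered by the 0-fill

  for (let i = n - 1; i >= 0; i--) {
    for (let w = 0; w <= W; w++) {
      dp[i][w] = dp[i + 1][w]; // Option 1: skip item i
      if (weights[i] <= w) {
        // Option 2: take item i (only if it fits)
        dp[i][w] = Math.max(dp[i][w], values[i] + dp[i + 1][w - weights[i]]);
      }
    }
  }
  return dp[0][W]; // all items available, full capacity
}
```

The prefix-based definition would simply flip the loop direction and index arithmetic; the answer and complexity are identical.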
The transition function (also called the recurrence relation) describes how to compute the value of a state from the values of its dependent states. It's the mathematical equation that captures the optimal substructure.
A transition function has three components: the dependent states it draws values from, the rule for combining those values, and the base cases that terminate the recursion:
```
// FIBONACCI
// Transition: fib(n) = fib(n-1) + fib(n-2)
// Combination: addition (sum of two previous)
// Base cases: fib(0) = 0, fib(1) = 1

// 0/1 KNAPSACK
// Transition: dp(i, w) = max(
//   dp(i+1, w),                        // Skip item i
//   value[i] + dp(i+1, w - weight[i])  // Take item i (if weight[i] ≤ w)
// )
// Combination: maximum of two choices
// Base cases: dp(n, w) = 0 for all w (no items left)

// LONGEST COMMON SUBSEQUENCE
// Transition:
//   If X[i-1] == Y[j-1]: lcs(i, j) = 1 + lcs(i-1, j-1)
//   Else: lcs(i, j) = max(lcs(i-1, j), lcs(i, j-1))
// Combination: +1 for match, max of alternatives otherwise
// Base cases: lcs(0, j) = 0, lcs(i, 0) = 0 (empty prefix)

// MINIMUM COINS
// Transition: dp(amount) = min over all coins c: 1 + dp(amount - c)
// Combination: minimum of all coin choices, plus 1 for using that coin
// Base cases: dp(0) = 0 (zero amount needs zero coins)
//             dp(negative) = ∞ (impossible)
```

Often it's easier to think: 'What was the LAST decision that led to this state?' rather than 'What decisions CAN I make from here?'
For dp(i, j), ask: 'How could I have arrived at state (i, j)?' The answers reveal your transition sources.
Deriving the transition function requires systematic thinking about subproblem relationships. Here's a methodology that works reliably:
```
// PROBLEM: Given two strings word1 and word2, return the minimum
// number of operations (insert, delete, replace) to convert word1 to word2.

// STATE: dp(i, j) = "min operations to convert word1[0..i-1] to word2[0..j-1]"

// DERIVATION PROCESS:

// Step 1: List all actions at state (i, j)
// - We're looking at word1[i-1] and word2[j-1] (current characters)
// - Actions: Insert, Delete, Replace, or Match (if chars equal)

// Step 2: Resulting states for each action
// - Insert char from word2 into word1: advances j → state (i, j-1)
// - Delete char from word1: advances i → state (i-1, j)
// - Replace word1[i-1] with word2[j-1]: advances both → state (i-1, j-1)
// - Match (if word1[i-1] == word2[j-1]): advances both, no cost → state (i-1, j-1)

// Step 3: Express value in terms of subproblems
// - Insert: 1 + dp(i, j-1)
// - Delete: 1 + dp(i-1, j)
// - Replace: 1 + dp(i-1, j-1)
// - Match: 0 + dp(i-1, j-1)

// Step 4: Combine with min (minimization problem)
// Step 5: Handle constraints (no invalid actions here)

// FINAL TRANSITION:
// If word1[i-1] == word2[j-1]:
//   dp(i, j) = dp(i-1, j-1)  // Match, no operation needed
// Else:
//   dp(i, j) = 1 + min(
//     dp(i, j-1),    // Insert
//     dp(i-1, j),    // Delete
//     dp(i-1, j-1)   // Replace
//   )

// BASE CASES:
// dp(0, j) = j  // Converting empty string to word2[0..j-1] needs j insertions
// dp(i, 0) = i  // Converting word1[0..i-1] to empty needs i deletions
```

• Off-by-one errors: Be precise about whether dp(i) includes or excludes element i
• Missing cases: Ensure all possible actions are covered
• Wrong direction: Transitions must go from larger to smaller problems (or track dependencies correctly for bottom-up)
• Constraint violations: Don't transition to invalid states (negative indices, over-capacity)
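The derived transition translates almost line-for-line into code. A TypeScript sketch (the name `editDistance` is ours), using the 1-indexed-lengths convention from the state definition:

```typescript
function editDistance(word1: string, word2: string): number {
  const m = word1.length;
  const n = word2.length;
  // dp[i][j] = min operations to convert word1[0..i-1] to word2[0..j-1]
  const dp: number[][] = Array.from({ length: m + 1 }, () =>
    new Array(n + 1).fill(0)
  );
  for (let i = 0; i <= m; i++) dp[i][0] = i; // delete all i characters
  for (let j = 0; j <= n; j++) dp[0][j] = j; // insert all j characters

  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (word1[i - 1] === word2[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1]; // match: no operation needed
      } else {
        dp[i][j] = 1 + Math.min(
          dp[i][j - 1],    // insert
          dp[i - 1][j],    // delete
          dp[i - 1][j - 1] // replace
        );
      }
    }
  }
  return dp[m][n];
}
```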
Base cases are the foundation of DP recursion—they're the states whose values are known without further decomposition. Getting base cases wrong is one of the most common sources of bugs in DP solutions.
Characteristics of Base Cases:
```
// PATTERN 1: Empty input
// Fibonacci: fib(0) = 0, fib(1) = 1
// LCS: lcs(0, _) = 0, lcs(_, 0) = 0 (empty string has no subsequence)
// Edit Distance: dp(0, j) = j, dp(i, 0) = i

// PATTERN 2: Single element
// House Robber: dp(0) = nums[0] (only one house: rob it)
// Climbing Stairs: dp(1) = 1 (one way to reach step 1)

// PATTERN 3: Goal reached
// Grid paths: dp(m-1, n-1) = 1 (one way to reach destination when already there)
// Coin change: dp(0) = 0 (zero coins needed for amount 0)

// PATTERN 4: Impossible states
// Coin change: dp(negative) = ∞ (can't make negative amount)
// Knapsack: Implicit - skip items that don't fit

// PATTERN 5: Boundary initialization
// Many problems initialize dp array with ∞ (for min) or -∞ (for max)
// Or initialize with 0 when we're counting/summing
```

Systematic Base Case Identification:
Verification:
After defining base cases, trace through a small example by hand. Ensure every state is either a base case itself or computable from states that eventually reach a base case.
One of the trickiest aspects of base cases is index conventions. Decide early:
• Does dp(i) mean 'first i elements' (0..i-1) or 'up to and including index i' (0..i)?
• For strings, does dp(i, j) use 1-indexed lengths or 0-indexed positions?
Document your convention and be consistent. Many bugs come from mixing conventions.
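As an example of documenting a convention explicitly, here is an LCS sketch in TypeScript where dp indices mean 'first i characters': dp index i maps to string index i-1 throughout. The function name `lcsLength` is illustrative:

```typescript
function lcsLength(X: string, Y: string): number {
  const m = X.length;
  const n = Y.length;
  // CONVENTION: dp[i][j] = LCS length of the FIRST i chars of X
  // and the FIRST j chars of Y (1-indexed lengths, not 0-indexed positions).
  const dp: number[][] = Array.from({ length: m + 1 }, () =>
    new Array(n + 1).fill(0)
  );
  // Base cases dp[0][j] = dp[i][0] = 0 (empty prefix) are covered by the 0-fill.

  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      // dp indices are lengths, so the current characters are X[i-1], Y[j-1]
      if (X[i - 1] === Y[j - 1]) {
        dp[i][j] = 1 + dp[i - 1][j - 1];
      } else {
        dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
      }
    }
  }
  return dp[m][n];
}
```

Stating the convention in one comment and applying the i-1/j-1 offset consistently eliminates the most common family of LCS bugs.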
While every problem is unique, certain state design patterns appear repeatedly. Recognizing these patterns accelerates your solution design.
| Pattern | State Structure | Example Problems |
|---|---|---|
| Single Index | dp(i) = answer for prefix/suffix up to i | House Robber, Climbing Stairs, LIS |
| Two Indices (Sequences) | dp(i, j) = answer for prefixes of both inputs | LCS, Edit Distance, Interleaving String |
| Two Indices (Grid) | dp(r, c) = answer for position (r, c) | Unique Paths, Minimum Path Sum, Dungeon Game |
| Index + Resource | dp(i, w) = answer at index i with resource w remaining | 0/1 Knapsack, Coin Change, Subset Sum |
| Index + State Machine | dp(i, state) = answer at i in state | Stock Trading, State-Based Parsing |
| Interval | dp(l, r) = answer for subarray/substring [l, r] | Matrix Chain, Burst Balloons, Palindrome Partition |
| Bitmask | dp(mask) = answer for subset represented by mask | TSP, Assignment Problem, Subsets with Constraints |
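The bitmask pattern from the table deserves a concrete sketch, since its state is less intuitive than an index. Below is a hedged TypeScript implementation of TSP (the tour starts and ends at city 0; `tsp` is an illustrative name, and we assume a complete distance matrix):

```typescript
function tsp(dist: number[][]): number {
  const n = dist.length;
  const FULL = (1 << n) - 1;
  // dp[mask][last] = min cost to visit exactly the cities in mask, ending at city `last`
  const dp: number[][] = Array.from({ length: 1 << n }, () =>
    new Array(n).fill(Infinity)
  );
  dp[1][0] = 0; // base case: at city 0, only city 0 visited

  for (let mask = 1; mask <= FULL; mask++) {
    for (let last = 0; last < n; last++) {
      if (dp[mask][last] === Infinity) continue;       // unreachable state
      if (!(mask & (1 << last))) continue;             // `last` must be in mask
      for (let next = 0; next < n; next++) {
        if (mask & (1 << next)) continue;              // already visited
        const nextMask = mask | (1 << next);
        dp[nextMask][next] = Math.min(
          dp[nextMask][next],
          dp[mask][last] + dist[last][next]
        );
      }
    }
  }
  // Close the tour back to city 0
  let best = Infinity;
  for (let last = 1; last < n; last++) {
    best = Math.min(best, dp[FULL][last] + dist[last][0]);
  }
  return best;
}
```

The state space is O(2^n × n), which is why bitmask DP is only viable for small n (roughly n ≤ 20).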
```
// PROBLEM: Best Time to Buy and Sell Stock with Transaction Fee
// You can complete as many transactions as you like, but you pay a fee for each.

// STATE MACHINE STATES:
// - HOLDING: We currently own stock
// - NOT_HOLDING: We don't own stock

// STATE DESIGN:
// dp(i, HOLDING) = max profit at day i while holding stock
// dp(i, NOT_HOLDING) = max profit at day i while not holding stock

// TRANSITIONS:
// dp(i, HOLDING) = max(
//   dp(i-1, HOLDING),                 // Stay holding
//   dp(i-1, NOT_HOLDING) - prices[i]  // Buy today
// )
//
// dp(i, NOT_HOLDING) = max(
//   dp(i-1, NOT_HOLDING),                // Stay not holding
//   dp(i-1, HOLDING) + prices[i] - fee   // Sell today, pay fee
// )

// BASE CASES:
// dp(0, HOLDING) = -prices[0]  // Bought on day 0
// dp(0, NOT_HOLDING) = 0       // Did nothing on day 0

// ANSWER: dp(n-1, NOT_HOLDING)  // End without holding

// This pattern applies to all stock trading variants!
```
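Since day i depends only on day i-1, the state-machine design above collapses to two rolling variables. A TypeScript sketch (the function name `maxProfitWithFee` is ours):

```typescript
function maxProfitWithFee(prices: number[], fee: number): number {
  const n = prices.length;
  if (n === 0) return 0;
  let holding = -prices[0]; // dp(0, HOLDING): bought on day 0
  let notHolding = 0;       // dp(0, NOT_HOLDING): did nothing on day 0

  for (let i = 1; i < n; i++) {
    const prevHolding = holding; // snapshot before updating
    holding = Math.max(holding, notHolding - prices[i]);            // stay holding, or buy
    notHolding = Math.max(notHolding, prevHolding + prices[i] - fee); // stay out, or sell (pay fee)
  }
  return notHolding; // end without holding
}
```

Snapshotting `holding` before the update preserves the "values from day i-1" semantics of the transitions.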
```
// PROBLEM: Matrix Chain Multiplication
// Given dimensions of matrices, find minimum multiplications to compute product.

// STATE DESIGN:
// dp(i, j) = "minimum multiplications to compute product M_i × M_{i+1} × ... × M_j"
// Parameters: i, j = interval of matrices [i, j]

// KEY INSIGHT: To compute M_i...M_j, we must split somewhere:
// (M_i...M_k) × (M_{k+1}...M_j) for some k in [i, j-1]

// TRANSITIONS:
// dp(i, j) = min over all k in [i, j-1] of {
//   dp(i, k) + dp(k+1, j) + cost(combining the two products)
// }
// where cost = dims[i-1] × dims[k] × dims[j]

// BASE CASES:
// dp(i, i) = 0  // Single matrix needs no multiplication

// ITERATION ORDER: Process by increasing interval length
// - First: all length-1 intervals
// - Then: all length-2 intervals
// - And so on until the full interval

// This pattern applies to: Burst Balloons, Optimal BST, Circle Game, etc.
```

When you encounter a new problem, first identify which pattern it resembles. You'll often find that the problem is a variation of a pattern you already know, with minor tweaks to the state or transitions.
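The interval pattern translates to code by iterating over increasing interval lengths, exactly as outlined for Matrix Chain Multiplication. A TypeScript sketch (`matrixChainOrder` is an illustrative name; `dims` holds the shared dimensions, so matrix i is dims[i-1] × dims[i] with matrices 1-indexed):

```typescript
function matrixChainOrder(dims: number[]): number {
  const n = dims.length - 1; // number of matrices
  // dp[i][j] = min multiplications to compute M_i × ... × M_j (1-indexed)
  const dp: number[][] = Array.from({ length: n + 1 }, () =>
    new Array(n + 1).fill(0)
  );
  // Base case dp[i][i] = 0 (single matrix) is covered by the 0-fill.

  for (let len = 2; len <= n; len++) {          // interval length
    for (let i = 1; i + len - 1 <= n; i++) {    // interval start
      const j = i + len - 1;                    // interval end
      dp[i][j] = Infinity;
      for (let k = i; k < j; k++) {             // split point
        const cost =
          dp[i][k] + dp[k + 1][j] + dims[i - 1] * dims[k] * dims[j];
        dp[i][j] = Math.min(dp[i][j], cost);
      }
    }
  }
  return dp[1][n];
}
```

Processing by increasing length guarantees that dp[i][k] and dp[k+1][j] (both shorter intervals) are already computed when dp[i][j] needs them.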
Before implementing, validate your state design—this catches errors before they become debugging nightmares. The most effective check is tracing a small example by hand:
```
// PROBLEM: Coin Change - minimum coins for amount
// coins = [1, 3, 4], amount = 6

// STATE: dp(a) = minimum coins for amount a
// TRANSITION: dp(a) = 1 + min(dp(a-1), dp(a-3), dp(a-4)) for valid coin choices
// BASE: dp(0) = 0

// MANUAL TRACE:
// dp(0) = 0 (base case)
// dp(1) = 1 + dp(0) = 1 (only coin 1 fits)
// dp(2) = 1 + dp(1) = 2 (coin 1 → dp(1))
// dp(3) = 1 + min(dp(2), dp(0)) = 1 + 0 = 1 (coin 3)
// dp(4) = 1 + min(dp(3), dp(1), dp(0)) = 1 + 0 = 1 (coin 4)
// dp(5) = 1 + min(dp(4), dp(2), dp(1)) = 1 + 1 = 2 (coin 4 → dp(1) or coin 1 → dp(4))
// dp(6) = 1 + min(dp(5), dp(3), dp(2)) = 1 + min(2, 1, 2) = 1 + 1 = 2 (coin 3 → dp(3))

// VERIFY: dp(6) = 2 coins. Indeed: 3 + 3 = 6 ✓

// This trace confirms:
// ✓ Transitions compute correct values
// ✓ Base case is correct
// ✓ No missing cases
// ✓ Final answer is extractable at dp(amount)
```

Always trace through 2-3 small examples by hand BEFORE coding. This catches:
• Transition errors
• Base case mistakes
• Off-by-one bugs
• Missing cases
Five minutes of manual tracing saves hours of debugging.
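Once the trace checks out, the design above becomes code directly. A TypeScript sketch of the traced coin-change design (`minCoins` is our name; it returns -1 for impossible amounts in place of ∞):

```typescript
function minCoins(coins: number[], amount: number): number {
  // dp[a] = minimum coins to make amount a; Infinity marks "impossible so far"
  const dp: number[] = new Array(amount + 1).fill(Infinity);
  dp[0] = 0; // base case: zero amount needs zero coins

  for (let a = 1; a <= amount; a++) {
    for (const c of coins) {
      // Only transition from valid (non-negative, reachable) subproblems
      if (c <= a && dp[a - c] !== Infinity) {
        dp[a] = Math.min(dp[a], 1 + dp[a - c]);
      }
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}
```

Running this on coins = [1, 3, 4], amount = 6 reproduces the hand trace above, which is exactly the confirmation the validation step is designed to give.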
A well-designed state translates directly to code. The mapping is remarkably mechanical once your state and transitions are correct:
| Design Element | Top-Down (Memoization) | Bottom-Up (Tabulation) |
|---|---|---|
| State parameters | Function parameters | Array indices / loop variables |
| State value | Function return value | Array cell value |
| Transition | Recursive calls + combination | Array lookups + combination |
| Base cases | if conditions at function start | Array initialization before loops |
| Memoization | Check/store in memo object | Inherent in array filling order |
```typescript
// PROBLEM: Longest Increasing Subsequence
// STATE: dp(i) = length of LIS ending at index i
// TRANSITION: dp(i) = 1 + max(dp(j)) for all j < i where nums[j] < nums[i]
// BASE: dp(i) = 1 for all i (each element is a LIS of length 1 by itself)
// ANSWER: max(dp(i)) for all i

// TOP-DOWN (MEMOIZATION)
function lisMemo(nums: number[]): number {
  const n = nums.length;
  const memo: Map<number, number> = new Map();

  function dp(i: number): number {
    if (memo.has(i)) return memo.get(i)!;
    let maxLen = 1; // Base case: just the element itself
    for (let j = 0; j < i; j++) {
      if (nums[j] < nums[i]) {
        maxLen = Math.max(maxLen, 1 + dp(j));
      }
    }
    memo.set(i, maxLen);
    return maxLen;
  }

  let answer = 0;
  for (let i = 0; i < n; i++) {
    answer = Math.max(answer, dp(i));
  }
  return answer;
}

// BOTTOM-UP (TABULATION)
function lisTab(nums: number[]): number {
  const n = nums.length;
  const dp: number[] = new Array(n).fill(1); // Base case: all 1s

  for (let i = 1; i < n; i++) {
    for (let j = 0; j < i; j++) {
      if (nums[j] < nums[i]) {
        dp[i] = Math.max(dp[i], 1 + dp[j]);
      }
    }
  }

  return Math.max(...dp); // Answer: max over all ending positions
}
```

Notice how the code almost writes itself once state and transitions are defined. The hard work is in the design; implementation is largely translation.
This is why spending time on state design is so valuable—correct design leads to correct code almost automatically.
State and transition design is the intellectual core of Dynamic Programming. Let's consolidate what we've learned:
What's Next:
With state and transitions mastered, the next question is: top-down or bottom-up? The next page explores when to use memoization versus tabulation, their tradeoffs, and how to choose the right approach for each problem.
You now possess a systematic methodology for designing DP states and transitions. This skill transforms DP from 'pattern matching to known solutions' into 'deriving solutions from first principles.' Next, we'll master the choice between top-down and bottom-up approaches.