On the previous page, we learned to reduce 2D DP tables to 1D arrays by carefully managing iteration order and saving diagonal values. While effective, this approach can become error-prone when dependencies grow more complex.
The Rolling Array Technique offers a more systematic and intuitive approach. Instead of using a single array with a tricky iteration order, we use two or more arrays that 'roll' over each other — spending a small amount of extra space in exchange for significant clarity and correctness guarantees.
This technique gets its name from the visual metaphor: imagine a cylinder rolling forward, where the back surface becomes the front surface as it rotates. Similarly, our 'previous' array becomes the 'current' array in each iteration.
By the end of this page, you will master the rolling array technique — understanding when to use it, how to implement it correctly, and why it sometimes provides a better balance between space efficiency and code clarity than single-array optimization.
The fundamental insight of the rolling array technique is simple:
If row i depends only on row i-1, we only need two arrays: one for the current row and one for the previous row.
After computing the current row, the previous row becomes obsolete, so we swap their roles.
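In Python, the swap is a single tuple assignment — the same statement used in the full pattern below:

```python
prev, curr = curr, prev  # the just-computed row becomes 'prev';
                         # the old buffer will be overwritten on the next pass
```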
This creates a 'rolling' effect where two arrays alternate as previous and current.
```python
# Abstract pattern of rolling array technique
def rolling_array_pattern(n, m):
    """
    Generic pattern for 2D DP reduced to two 1D arrays.
    Space: O(m) instead of O(n × m)
    """
    # Two arrays of size m (the 'inner' dimension)
    prev = [0] * m
    curr = [0] * m

    # Initialize prev with base case values
    # prev = base_case_row_0(...)

    for i in range(1, n):       # Outer loop: 'rows'
        for j in range(m):      # Inner loop: 'columns'
            # Compute curr[j] using values from prev
            # This is the key: curr[j] can freely access any prev[k]
            # without worrying about overwriting issues!
            curr[j] = some_function(prev[j], prev[j-1], ...)

        # ROLL: Swap the arrays for next iteration
        # What was 'curr' becomes 'prev' for the next iteration
        prev, curr = curr, prev

    # Note: After the last iteration, 'prev' contains the final result
    # (because we swapped after the last update)
    return prev

# The key advantage: No iteration order constraints!
# We can iterate left-to-right, right-to-left, or any order.
# The separation of prev and curr guarantees correctness.
```

With two arrays, we completely separate reading (from prev) and writing (to curr). This eliminates the subtle iteration order bugs that plague single-array optimization. We can iterate in any direction because we never overwrite values we still need!
Instead of explicitly swapping arrays, we can use a clever indexing trick with the modulo operator. This approach treats our two arrays as if they were rows 0 and 1 of a virtual 2D table, using i % 2 to determine which is 'current'.
This technique is especially useful in languages where array swapping has overhead, or when you prefer to keep the logic closer to the original 2D formulation.
```python
# Rolling array using modulo indexing
def rolling_with_modulo(n, m):
    """
    Use dp[2][m] as the rolling array.
    Access current row with dp[i % 2][j].
    Access previous row with dp[(i-1) % 2][j] = dp[(i+1) % 2][j].
    """
    # 2 rows (rolling), m columns
    dp = [[0] * m for _ in range(2)]

    # Initialize row 0 (base case)
    # dp[0] = base_case_values

    for i in range(1, n):
        curr = i % 2        # Current row index: alternates 1, 0, 1, 0, ...
        prev = (i - 1) % 2  # Previous row index: alternates 0, 1, 0, 1, ...

        for j in range(m):
            # Access previous row: dp[prev][j], dp[prev][j-1], etc.
            # Write to current row: dp[curr][j]
            dp[curr][j] = compute_from(dp[prev][j], dp[prev][j-1], ...)

    # Final result is in dp[(n-1) % 2]
    return dp[(n - 1) % 2]

# Equivalent: Using (i + 1) % 2 for previous (same as (i - 1) % 2)
# When i = 1: curr = 1, prev = 0
# When i = 2: curr = 0, prev = 1
# When i = 3: curr = 1, prev = 0
# ...and so on, alternating
```

You can use i & 1 instead of i % 2 for a slight performance improvement. And curr ^ 1 gives the opposite index (if curr is 0, curr ^ 1 is 1, and vice versa). This is a common competitive programming optimization.
Let's implement the 0/1 Knapsack problem using the rolling array technique. Compare this with the single-array approach from the previous page — notice how the rolling version removes iteration order concerns.
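For quick reference, here is a sketch of the single-array formulation being compared against — the standard version with reverse capacity iteration (the previous page's exact code may differ in details):

```python
def knapsack_single_array(weights: list, values: list, W: int) -> int:
    """Single-array 0/1 knapsack sketch: O(W) space, but the inner loop
    must run right-to-left so dp[w - wt] still holds the previous row's value."""
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        for w in range(W, wt - 1, -1):  # reverse iteration is mandatory here
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]
```

The rolling version below removes exactly this ordering constraint.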
```python
def knapsack_rolling(weights: list, values: list, W: int) -> int:
    """
    0/1 Knapsack using rolling array technique.
    Space: O(W) — specifically, 2 × (W + 1) = O(W)
    """
    n = len(weights)

    # Two arrays for rolling
    # prev[w] = max value with capacity w, using items 0..i-1
    # curr[w] = max value with capacity w, using items 0..i
    prev = [0] * (W + 1)
    curr = [0] * (W + 1)

    for i in range(n):
        wt, val = weights[i], values[i]
        for w in range(W + 1):
            if w < wt:
                # Cannot take item i
                curr[w] = prev[w]
            else:
                # Max of: (1) not taking item, (2) taking item
                curr[w] = max(prev[w], prev[w - wt] + val)

        # ROLL: swap prev and curr for next iteration
        prev, curr = curr, prev

    # After n iterations, result is in prev (due to final swap)
    return prev[W]


def knapsack_rolling_modulo(weights: list, values: list, W: int) -> int:
    """
    Alternative: using modulo indexing instead of swapping.
    """
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(2)]

    for i in range(n):
        curr, prev = i % 2, (i + 1) % 2
        wt, val = weights[i], values[i]
        for w in range(W + 1):
            if w < wt:
                dp[curr][w] = dp[prev][w]
            else:
                dp[curr][w] = max(dp[prev][w], dp[prev][w - wt] + val)

    return dp[(n - 1) % 2][W]

# Key advantage: We can iterate LEFT-TO-RIGHT here!
# No need for reverse iteration because prev and curr are separate.
# This makes the code more intuitive and matches the 2D version directly.
```

The rolling array technique generalizes elegantly to cases where dependencies span more than one previous row. Some DP problems have recurrences like:
dp[i][j] = f(dp[i-1][...], dp[i-2][...])
For such problems, we use k rolling arrays, where k is the maximum lookback distance; the slot holding the oldest row (row i-k) is recycled to store the new row i.
Example: the Tribonacci sequence, where T[n] = T[n-1] + T[n-2] + T[n-3], looks back 3 rows, so we'd use 3 arrays (or, since Tribonacci is 1D, just 3 rolling variables).
```python
# Example: DP with 3-row lookback using rolling arrays
def three_row_rolling_dp(n, m):
    """
    Pattern for recurrence: dp[i] = f(dp[i-1], dp[i-2], dp[i-3])
    Uses 3 rolling arrays.
    """
    # Create 3 arrays (rows)
    rows = [[0] * m for _ in range(3)]

    # Initialize base cases
    # rows[0], rows[1], rows[2] = base cases for i=0,1,2

    for i in range(3, n):
        # Compute new row using the 3 previous rows
        # Use modulo 3 to identify which array is current, which are previous
        curr = i % 3
        prev1 = (i - 1) % 3  # row i-1
        prev2 = (i - 2) % 3  # row i-2
        prev3 = (i - 3) % 3  # row i-3 (will be overwritten with new values)

        # Note: prev3 == curr because (i-3) % 3 == i % 3
        # So we're reusing the oldest row for the newest computation
        for j in range(m):
            rows[curr][j] = compute_from(
                rows[prev1][j],
                rows[prev2][j],
                rows[prev3][j]  # Actually using old value before overwrite
            )

    return rows[(n - 1) % 3]

# General pattern: For k-row lookback, use k rolling arrays
# Space: O(k × m) instead of O(n × m)
# When n >> k, this is a massive savings

# Example application: Certain grid DP problems where the
# current row depends on multiple previous rows
```

If dp[i] depends on dp[i-1], dp[i-2], ..., dp[i-k], you need exactly k rolling arrays. The oldest row (row i-k) gets recycled to hold the newest row (row i). Access via i % k indexing.
Grid-based DP problems are excellent candidates for rolling array optimization. Consider the Unique Paths problem:
Given an m × n grid, count the number of unique paths from top-left to bottom-right, moving only right or down.
Recurrence: dp[i][j] = dp[i-1][j] + dp[i][j-1]
Dependencies: Cell above (previous row) and cell to the left (same row).
```python
def unique_paths_2d(m: int, n: int) -> int:
    """Full 2D solution — O(m × n) space"""
    dp = [[1] * n for _ in range(m)]

    for i in range(1, m):
        for j in range(1, n):
            dp[i][j] = dp[i-1][j] + dp[i][j-1]

    return dp[m-1][n-1]


def unique_paths_rolling(m: int, n: int) -> int:
    """
    Rolling array solution — O(n) space

    Key insight: dp[i][j] = dp[i-1][j] + dp[i][j-1]
    - dp[i-1][j] is from previous row (prev array)
    - dp[i][j-1] is from current row (curr array, already computed)
    """
    prev = [1] * n  # First row: all 1s
    curr = [1] * n  # Current row being computed

    for i in range(1, m):
        curr[0] = 1  # First column is always 1
        for j in range(1, n):
            curr[j] = prev[j] + curr[j-1]
        prev, curr = curr, prev  # Roll

    return prev[n-1]


def unique_paths_single_array(m: int, n: int) -> int:
    """
    Single array optimization — O(n) space, tightest possible

    Insight: When iterating left-to-right:
    - dp[j] before update = dp[i-1][j] (from previous row)
    - dp[j-1] after its update = dp[i][j-1] (from current row)
    We can do this in-place!
    """
    dp = [1] * n

    for i in range(1, m):
        for j in range(1, n):
            # dp[j] on right side = OLD value (from row i-1)
            # dp[j-1] = NEW value (from row i, just updated)
            dp[j] = dp[j] + dp[j-1]

    return dp[n-1]

# Space comparison:
# unique_paths_2d:           O(m × n)
# unique_paths_rolling:      O(2n) = O(n)
# unique_paths_single_array: O(n)
#
# All produce the same correct result!
```

Notice the single-array version works here! Because we depend on dp[i][j-1] (same row, to the left), we WANT to use the updated value. Left-to-right iteration naturally provides this. And dp[i-1][j] is the current dp[j] before we update it — also naturally available!
This is a case where dependencies align perfectly with left-to-right iteration, making single-array optimization trivial.
For Unique Paths, both rolling and single-array work cleanly. Use rolling arrays when dependencies are complex or when you want clearer code. Use single-array when you're confident in the iteration order and want minimal space.
For 1D DP problems with fixed lookback, rolling arrays reduce to rolling variables. This achieves constant O(1) space — the ultimate optimization.
The Fibonacci family includes many problems with this pattern:
- F[n] = F[n-1] + F[n-2]
- ways[n] = ways[n-1] + ways[n-2]
- best[n] = max(best[n-1], best[n-2] + value[n])
- T[n] = T[n-1] + T[n-2] + T[n-3]
```python
# Fibonacci: O(n) time, O(1) space using rolling variables
def fibonacci(n: int) -> int:
    if n <= 1:
        return n

    # Rolling variables instead of rolling arrays
    prev2, prev1 = 0, 1  # F[n-2], F[n-1]

    for _ in range(2, n + 1):
        curr = prev1 + prev2
        prev2, prev1 = prev1, curr  # Roll forward

    return prev1


# Climbing Stairs: O(n) time, O(1) space
def climb_stairs(n: int) -> int:
    if n <= 2:
        return n

    prev2, prev1 = 1, 2  # ways[1], ways[2]

    for _ in range(3, n + 1):
        curr = prev1 + prev2
        prev2, prev1 = prev1, curr

    return prev1


# House Robber: O(n) time, O(1) space
def house_robber(nums: list) -> int:
    if not nums:
        return 0
    if len(nums) == 1:
        return nums[0]

    # prev2 = best robbing up to house i-2
    # prev1 = best robbing up to house i-1
    prev2, prev1 = 0, nums[0]

    for i in range(1, len(nums)):
        # Either skip current house (prev1) or rob it (prev2 + nums[i])
        curr = max(prev1, prev2 + nums[i])
        prev2, prev1 = prev1, curr

    return prev1


# Tribonacci: O(n) time, O(1) space
def tribonacci(n: int) -> int:
    if n == 0:
        return 0
    if n <= 2:
        return 1

    prev3, prev2, prev1 = 0, 1, 1  # T[0], T[1], T[2]

    for _ in range(3, n + 1):
        curr = prev1 + prev2 + prev3
        prev3, prev2, prev1 = prev2, prev1, curr  # Roll forward

    return prev1
```

For 1D DP with k-step lookback, use k+1 rolling variables (or just k variables if you compute curr separately). This reduces O(n) space to O(k) = O(1) for constant k. The same principle applies: after computing the current value, 'roll' the variables forward.
While rolling arrays are more robust than single-array optimization, there are still pitfalls to avoid:
- Returning the wrong array: after the final swap, the answer is in prev (not curr) because we just swapped. Many bugs come from returning curr[n-1] instead of prev[n-1].
- Off-by-one in the final modulo index: with i % 2, when i = n-1, the result is in dp[(n-1) % 2]. Off-by-one errors here are common.
- Stale values: sometimes curr needs to be reset at the start of each outer iteration. Forgetting this carries stale values forward (see the sketch after this list).
- Same-row dependencies: if the recurrence also reads dp[i][j-1] (same row), you can't use two separate arrays naively — you need the already-updated value from curr[j-1].
- Unclear naming: prev and curr (or dp0 and dp1) are clearer than a and b. With modulo indexing, comment which index means what.
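To make the stale-values pitfall concrete, here is a minimal sketch (the recurrence and the `can_update` condition are hypothetical, purely for illustration) showing where the reset belongs:

```python
def rolling_with_reset(n: int, m: int, can_update) -> list:
    """Hypothetical DP where not every curr[j] is written each iteration.
    After the swap, curr still holds row i-2's values, so it must be reset."""
    prev = [0] * m
    curr = [0] * m
    for i in range(1, n):
        # Reset BEFORE the inner loop; entries not written below would
        # otherwise silently keep values from row i-2.
        for j in range(m):
            curr[j] = 0
        for j in range(m):
            if can_update(i, j):           # hypothetical condition
                curr[j] = prev[j] + 1      # hypothetical recurrence
        prev, curr = curr, prev
    return prev
```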
Let's consolidate what we've learned about rolling arrays: keep separate arrays for the previous and current rows, read only from the previous one(s), write only to the current one, and roll forward either by swapping (prev, curr = curr, prev) or by using i % k indexing.

| Approach | Space | Iteration Order | Complexity | Best For |
|---|---|---|---|---|
| Full 2D Table | O(n×m) | Any | Simple | Solution reconstruction |
| Rolling Arrays | O(2m) = O(m) | Any | Medium | Most 2D DP problems |
| Single Array | O(m) | Constrained | Tricky | Maximum space savings |
| Rolling Variables | O(k) | N/A | Simple | 1D with constant lookback |
What's next:
In the next page, we'll explore when space optimization applies — understanding which problems are amenable to these techniques and learning to quickly recognize the patterns that enable dramatic space reductions.
You've mastered the rolling array technique — a versatile, systematic approach to space optimization that balances efficiency with code clarity. This technique will serve you well across countless DP problems.