If there is one problem that serves as the universal introduction to dynamic programming, it is computing the Fibonacci sequence. This isn't because Fibonacci is particularly practical — in real-world applications, you'll rarely need the 50th Fibonacci number. Its value lies in its pedagogical clarity: the Fibonacci sequence perfectly illustrates every core DP concept with minimal complexity.
The Fibonacci sequence is defined mathematically as:
F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2) for n ≥ 2
This yields the sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
At first glance, this seems trivially simple — just add the previous two numbers. But when you try to compute F(n) directly from this recursive definition, you encounter a phenomenon that exposes the fundamental inefficiency that dynamic programming exists to solve: exponential redundant computation.
By the end of this page, you will understand: (1) Why naive recursion fails catastrophically for Fibonacci, (2) How overlapping subproblems manifest in the recursion tree, (3) How memoization eliminates redundancy with O(n) time and O(n) space, (4) How tabulation achieves the same with iterative bottom-up construction, (5) How space optimization reduces to O(1) memory, and (6) Why Fibonacci is the canonical DP teaching example.
The Fibonacci sequence, named after the Italian mathematician Leonardo of Pisa (c. 1170–1250), also known as Fibonacci, was first introduced to Western mathematics in his 1202 book Liber Abaci. The problem he posed was deceptively simple:
Suppose a pair of rabbits produce one new pair every month, and each new pair becomes fertile after one month. Starting with one pair, how many pairs will there be after n months?
This biological puzzle encodes a recurrence relation that appears throughout nature—in spiral patterns of shells, the arrangement of leaves, branching in trees, and even financial markets.
Why does this matter for DP?
The Fibonacci recurrence relation has a critical property: each subproblem depends on exactly two smaller subproblems. This creates a clean, tree-structured dependency graph with substantial overlap. As we'll see, naive recursion computes the same values exponentially many times, while DP ensures each value is computed exactly once.
Binet's closed-form formula allows computing F(n) in O(1) floating-point operations, and matrix exponentiation achieves O(log n) with exact arithmetic. Floating point, however, fails for large n: even for moderate values like n = 70, rounding errors accumulate. DP provides exact integer arithmetic, making it essential for many practical computations.
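The precision failure is easy to demonstrate. The following sketch (helper names `fibBinet` and `fibExact` are illustrative) compares Binet's formula, F(n) ≈ round(φⁿ/√5) computed with doubles, against an exact BigInt iteration; once F(n) exceeds Number.MAX_SAFE_INTEGER (around n = 79), the floating-point result can no longer be exact:

```typescript
// Illustrative comparison: Binet's closed form vs. exact BigInt iteration
const PHI = (1 + Math.sqrt(5)) / 2; // golden ratio φ

// Binet's formula, F(n) = round(φ^n / √5), computed with doubles
function fibBinet(n: number): number {
  return Math.round(Math.pow(PHI, n) / Math.sqrt(5));
}

// Exact reference implementation using BigInt
function fibExact(n: number): bigint {
  let a = 0n, b = 1n; // F(0), F(1)
  for (let i = 0; i < n; i++) [a, b] = [b, a + b];
  return a;
}

console.log(fibBinet(20) === Number(fibExact(20))); // true: still exact at this size
console.log(BigInt(fibBinet(90)) === fibExact(90)); // false: double precision is gone
```

F(90) ≈ 2.88 × 10¹⁸ is far beyond the 2⁵³ range where doubles represent every integer, so the closed form silently returns a nearby wrong value while the BigInt loop stays exact.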
The most straightforward way to compute Fibonacci numbers is to directly translate the mathematical definition into code. This approach, while elegant in its simplicity, harbors a devastating inefficiency.
Let's examine the naive recursive implementation:
```typescript
/**
 * Naive Recursive Fibonacci
 *
 * Time Complexity: O(2^n) — exponential
 * Space Complexity: O(n) — recursive call stack depth
 *
 * WARNING: This is deliberately inefficient for educational purposes.
 * Do NOT use this in production code.
 */
function fibonacciNaive(n: number): number {
  // Base cases: F(0) = 0, F(1) = 1
  if (n <= 1) {
    return n;
  }

  // Recursive case: F(n) = F(n-1) + F(n-2)
  // This innocent-looking line causes exponential explosion
  return fibonacciNaive(n - 1) + fibonacciNaive(n - 2);
}

// Test the function
console.log(fibonacciNaive(10)); // Output: 55
console.log(fibonacciNaive(20)); // Output: 6765
console.log(fibonacciNaive(40)); // WARNING: Takes several seconds!
console.log(fibonacciNaive(50)); // DANGER: May hang your system!
```

Why does this simple code become catastrophically slow?
The problem isn't the recursion itself — it's the redundant computation. Each call to fibonacciNaive(n) spawns two more calls, and those calls spawn four more, and so on. The result is an exponentially growing tree of function calls.
To understand the magnitude of this inefficiency, let's count exactly how many times each subproblem is computed:
| Subproblem | Times Computed for F(10) | Times Computed for F(20) | Times Computed for F(40) |
|---|---|---|---|
| F(0) | 34 times | 4,181 times | ~63 million times |
| F(1) | 55 times | 6,765 times | ~102 million times |
| F(5) | 8 times | 987 times | ~14.9 million times |
| F(10) | 1 time | 89 times | ~1.35 million times |
| Total calls | 177 | 21,891 | ~331 million |
For F(40), the naive algorithm makes approximately 331 million function calls. For F(50), it's over 40 billion. For F(100), the roughly 10²¹ calls would take tens of thousands of years even at a billion calls per second. This is the cost of not recognizing overlapping subproblems.
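These totals are easy to verify empirically. Here is a minimal instrumentation sketch (the wrapper `countNaiveCalls` is an illustrative helper, not part of any library) that counts every invocation of the naive recursion:

```typescript
// Instrumentation sketch: count every invocation of the naive recursion.
// countNaiveCalls is an illustrative helper, not part of any library.
function countNaiveCalls(n: number): { value: number; calls: number } {
  let calls = 0;
  function fib(k: number): number {
    calls++; // one increment per function call (one node in the recursion tree)
    return k <= 1 ? k : fib(k - 1) + fib(k - 2);
  }
  return { value: fib(n), calls };
}

console.log(countNaiveCalls(10)); // { value: 55, calls: 177 }
console.log(countNaiveCalls(20)); // { value: 6765, calls: 21891 }
```

The call counts match the totals in the table above: 177 calls for F(10) and 21,891 for F(20), growing in lockstep with the Fibonacci numbers themselves.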
To truly understand why naive recursion fails, we must visualize the recursion tree. This tree shows every function call as a node, with child nodes representing the recursive calls made by each invocation.
Let's trace through computing F(6) with naive recursion:
```
                            F(6)
                         /        \
                    F(5)            F(4)
                   /    \          /    \
               F(4)      F(3)    F(3)    F(2)
              /    \     /  \    /  \    /  \
           F(3)   F(2) F(2) F(1) F(2) F(1) F(1) F(0)
           /  \    ...  ...       ...
        F(2) F(1)
        /  \
     F(1) F(0)

Total nodes: 25 for computing just F(6)
Each node represents one function call with its own stack frame.

REDUNDANT COMPUTATIONS:
- F(4) is computed 2 times
- F(3) is computed 3 times
- F(2) is computed 5 times
- F(1) is computed 8 times (appears at leaves)
- F(0) is computed 5 times (appears at leaves)
```

The critical insight:
Notice how F(4) appears twice as a completely separate subtree. The algorithm doesn't remember that it already computed F(4) = 3 on the left side — it recomputes the entire subtree from scratch on the right side. This duplication cascades: F(3) appears three times, F(2) appears five times, and so on.
This is the defining characteristic of overlapping subproblems: the same computational work is performed multiple times because the recursive structure doesn't preserve results.
What if we could remember results? If after computing F(4) once, we stored the result and simply looked it up every subsequent time, we'd eliminate the redundant subtrees entirely. This is exactly what memoization provides — and it transforms O(2ⁿ) into O(n).
Memoization is the technique of storing computed results so they can be reused instead of recalculated. In the context of Fibonacci, we add a cache (or "memo") that records each F(i) value the first time it's computed. On subsequent calls with the same argument, we return the cached value immediately.
This transformation is simple but profound — it converts exponential time complexity to linear.
```typescript
/**
 * Memoized Fibonacci (Top-Down DP)
 *
 * Time Complexity: O(n) — each subproblem solved exactly once
 * Space Complexity: O(n) — memo array + O(n) call stack
 *
 * This approach keeps the intuitive recursive structure while
 * eliminating all redundant computation through caching.
 */
function fibonacciMemoized(n: number, memo: Map<number, number> = new Map()): number {
  // Base cases
  if (n <= 1) {
    return n;
  }

  // Check if already computed
  if (memo.has(n)) {
    return memo.get(n)!; // Cache hit — return immediately
  }

  // Compute and store before returning
  const result = fibonacciMemoized(n - 1, memo) + fibonacciMemoized(n - 2, memo);
  memo.set(n, result);
  return result;
}

// Alternative using an array (often faster for dense integer keys)
function fibonacciMemoArray(n: number, memo: number[] = []): number {
  if (n <= 1) return n;
  if (memo[n] !== undefined) {
    return memo[n];
  }
  memo[n] = fibonacciMemoArray(n - 1, memo) + fibonacciMemoArray(n - 2, memo);
  return memo[n];
}

// Now F(50) computes instantly instead of taking years
console.log(fibonacciMemoized(50));  // Output: 12586269025
console.log(fibonacciMemoized(100)); // F(100) = 354224848179261915075 exceeds
// Number.MAX_SAFE_INTEGER, so this number-based version is imprecise here;
// use BigInt (see the production implementation below) for exact results
```

How memoization transforms the recursion tree:
With memoization, the recursion tree collapses dramatically. Each unique subproblem F(i) is computed exactly once, then every subsequent reference becomes an O(1) lookup.
```
Call: F(6)                                     memo: {}
├─ Call: F(5)                                  memo: {}
│  ├─ Call: F(4)                               memo: {}
│  │  ├─ Call: F(3)                            memo: {}
│  │  │  ├─ Call: F(2)                         memo: {}
│  │  │  │  ├─ Call: F(1) → return 1 (base case)
│  │  │  │  └─ Call: F(0) → return 0 (base case)
│  │  │  └─ Store: memo[2] = 1
│  │  │  ├─ Call: F(1) → return 1 (base case)
│  │  │  └─ Store: memo[3] = 2
│  │  ├─ CACHE HIT: F(2) → return 1            memo: {2:1, 3:2}
│  │  └─ Store: memo[4] = 3
│  ├─ CACHE HIT: F(3) → return 2               memo: {2:1, 3:2, 4:3}
│  └─ Store: memo[5] = 5
├─ CACHE HIT: F(4) → return 3                  memo: {2:1, 3:2, 4:3, 5:5}
└─ Store: memo[6] = 8

Result: F(6) = 8

Total unique computations: 5 (F(2), F(3), F(4), F(5), F(6))
Total cache hits: 3 (second calls to F(2), F(3), F(4))
Total function calls: 11 (vs 25 in the naive approach)
```

Tabulation takes the opposite approach from memoization. Instead of starting from the top and working down with caching, we start from the base cases and build upward iteratively. We fill a table (array) with solutions to subproblems in order of increasing size, ensuring that when we need to compute F(i), we've already computed F(i-1) and F(i-2).
This eliminates recursion entirely, avoiding stack overflow risks and often improving cache efficiency.
```typescript
/**
 * Tabulated Fibonacci (Bottom-Up DP)
 *
 * Time Complexity: O(n) — single pass through the array
 * Space Complexity: O(n) — the dp table
 *
 * This approach eliminates recursion entirely, building solutions
 * from the smallest subproblems upward.
 */
function fibonacciTabulated(n: number): number {
  // Handle base cases
  if (n <= 1) return n;

  // Create DP table with space for F(0) through F(n)
  const dp: number[] = new Array(n + 1);

  // Initialize base cases
  dp[0] = 0;
  dp[1] = 1;

  // Fill the table in order of increasing index
  // This guarantees dependencies are satisfied: dp[i-1] and dp[i-2] exist
  for (let i = 2; i <= n; i++) {
    dp[i] = dp[i - 1] + dp[i - 2];
  }

  // The answer is at the final position
  return dp[n];
}

// Visualization of the table being filled:
// i:  0  1  2  3  4  5  6  7   8   9   10
// dp: 0  1  1  2  3  5  8  13  21  34  55

console.log(fibonacciTabulated(10)); // Output: 55
console.log(fibonacciTabulated(50)); // Output: 12586269025
```

Why tabulation often outperforms memoization:
- No recursion overhead: Each iteration is a simple addition and array access — no function call setup/teardown.
- Better cache locality: We access array elements in sequential order, which modern CPUs love (prefetching, cache lines).
- No stack overflow risk: For n = 1,000,000, memoization might exceed stack limits. Tabulation handles any n that fits in memory.
- Predictable memory usage: The array size is known upfront (n+1 elements).
However, tabulation requires us to know the computation order in advance — specifically, which subproblems are needed and in what sequence. For Fibonacci, this is trivial (just count upward), but for more complex problems, determining the correct order can be challenging.
Use tabulation when: (1) All subproblems will be computed anyway, (2) The dependency order is clear and simple, (3) You need to avoid stack overflow. Use memoization when: (1) Many subproblems might not be needed, (2) Dependencies are complex or hard to order, (3) You want to start with the natural recursive formulation.
A key insight elevates our Fibonacci solution from O(n) space to O(1): we don't need the entire table, only the last two values.
At each step, computing F(i) requires only F(i-1) and F(i-2). Once we've used F(i-2) to compute F(i), we'll never need F(i-2) again. This means we can discard old values as we progress, maintaining only a constant amount of state.
```typescript
/**
 * Space-Optimized Fibonacci
 *
 * Time Complexity: O(n)
 * Space Complexity: O(1) — only three variables regardless of n
 *
 * The key insight: F(i) only depends on F(i-1) and F(i-2).
 * We don't need to remember earlier values.
 */
function fibonacciOptimized(n: number): number {
  if (n <= 1) return n;

  // Only track the last two Fibonacci numbers
  let prev2 = 0; // F(i-2), starts as F(0)
  let prev1 = 1; // F(i-1), starts as F(1)

  for (let i = 2; i <= n; i++) {
    // Compute current Fibonacci number
    const current = prev1 + prev2;

    // Slide the window: what was prev1 becomes prev2
    // and current becomes the new prev1
    prev2 = prev1;
    prev1 = current;
  }

  // After the loop, prev1 contains F(n)
  return prev1;
}

// Alternative with destructuring (more elegant, same logic)
function fibonacciOptimizedAlt(n: number): number {
  if (n <= 1) return n;
  let [a, b] = [0, 1];
  for (let i = 2; i <= n; i++) {
    [a, b] = [b, a + b];
  }
  return b;
}

// Works efficiently for any n without memory concerns
// (exact only up to n = 78 with plain numbers; use BigInt beyond that)
console.log(fibonacciOptimized(100)); // Works instantly
```

Evolution of our Fibonacci solutions:
| Approach | Time Complexity | Space Complexity | Max Practical n | Key Advantage |
|---|---|---|---|---|
| Naive Recursion | O(2ⁿ) | O(n) stack | ~45 | Simplest code |
| Memoization | O(n) | O(n) cache + O(n) stack | ~10,000 (stack limit) | Natural structure preserved |
| Tabulation | O(n) | O(n) array | ~10⁸ (memory limit) | No stack overflow |
| Space-Optimized | O(n) | O(1) | Unlimited (integer limits only) | Minimal memory footprint |
This O(n) → O(1) space reduction is a recurring pattern in DP: when the recurrence only looks back a fixed number of steps, maintain a 'sliding window' of just those values. We'll see this pattern repeatedly in climbing stairs, house robber, and many other 1D DP problems.
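To see how the sliding-window pattern generalizes, here is a sketch for a recurrence that looks back three steps instead of two, the Tribonacci numbers (an illustrative example not covered on this page: T(0) = 0, T(1) = 1, T(2) = 1, T(n) = T(n-1) + T(n-2) + T(n-3)). The window simply widens from two variables to three:

```typescript
// Sliding-window sketch for a 3-term recurrence (Tribonacci):
// T(0) = 0, T(1) = 1, T(2) = 1, T(n) = T(n-1) + T(n-2) + T(n-3).
// Only the last three values are kept, so space stays O(1).
function tribonacci(n: number): number {
  if (n === 0) return 0;
  if (n <= 2) return 1;
  let [a, b, c] = [0, 1, 1]; // T(i-3), T(i-2), T(i-1)
  for (let i = 3; i <= n; i++) {
    [a, b, c] = [b, c, a + b + c]; // slide the 3-wide window
  }
  return c;
}

console.log(tribonacci(10)); // 149
```

The rule of thumb: a recurrence that looks back k steps needs a window of k values, so O(n) table space collapses to O(k), a constant.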
A production-quality Fibonacci implementation must handle edge cases, potential integer overflow, and provide clear documentation. Here's a comprehensive implementation:
```typescript
/**
 * Production-Quality Fibonacci Calculator
 *
 * Handles edge cases, provides multiple interfaces, and manages
 * integer overflow using BigInt when necessary.
 */
class FibonacciCalculator {
  /**
   * Compute the nth Fibonacci number.
   *
   * @param n - The index (0-indexed) of the Fibonacci number to compute.
   *            n must be a non-negative integer.
   * @returns The nth Fibonacci number as a BigInt (safe for large n).
   * @throws Error if n is negative or not an integer.
   *
   * Time Complexity: O(n)
   * Space Complexity: O(1)
   *
   * @example
   * FibonacciCalculator.compute(10)  // Returns 55n
   * FibonacciCalculator.compute(100) // Returns 354224848179261915075n
   */
  static compute(n: number): bigint {
    // Input validation
    if (!Number.isInteger(n) || n < 0) {
      throw new Error(`Invalid input: n must be a non-negative integer, got ${n}`);
    }

    // Base cases
    if (n === 0) return 0n;
    if (n === 1) return 1n;

    // Space-optimized computation using BigInt for arbitrary precision
    let prev2 = 0n;
    let prev1 = 1n;

    for (let i = 2; i <= n; i++) {
      const current = prev1 + prev2;
      prev2 = prev1;
      prev1 = current;
    }

    return prev1;
  }

  /**
   * Compute a range of Fibonacci numbers.
   * Efficient when multiple values are needed.
   */
  static computeRange(start: number, end: number): bigint[] {
    if (start < 0 || end < start) {
      throw new Error(`Invalid range: [${start}, ${end}]`);
    }

    const result: bigint[] = [];
    let prev2 = 0n; // F(i-2)
    let prev1 = 1n; // F(i-1)

    for (let i = 0; i <= end; i++) {
      let current: bigint;
      if (i <= 1) {
        current = BigInt(i); // F(0) = 0, F(1) = 1
      } else {
        current = prev1 + prev2;
        prev2 = prev1;
        prev1 = current;
      }
      if (i >= start) {
        result.push(current);
      }
    }

    return result;
  }

  /**
   * Check if a number is a Fibonacci number.
   * A number is Fibonacci iff 5n² + 4 or 5n² − 4 is a perfect square.
   */
  static isFibonacci(num: bigint): boolean {
    if (num < 0n) return false;
    const isPerfectSquare = (n: bigint): boolean => {
      if (n < 0n) return false;
      const sqrt = this.bigIntSqrt(n);
      return sqrt * sqrt === n;
    };
    const test1 = 5n * num * num + 4n;
    const test2 = 5n * num * num - 4n;
    return isPerfectSquare(test1) || isPerfectSquare(test2);
  }

  /** Integer square root via Newton's method. */
  private static bigIntSqrt(n: bigint): bigint {
    if (n < 0n) throw new Error("Square root of negative number");
    if (n === 0n) return 0n;
    let x = n;
    let y = (x + 1n) / 2n;
    while (y < x) {
      x = y;
      y = (x + n / x) / 2n;
    }
    return x;
  }
}

// Usage examples
console.log(FibonacciCalculator.compute(50));         // 12586269025n
console.log(FibonacciCalculator.computeRange(0, 10)); // [0n, 1n, 1n, 2n, ...]
console.log(FibonacciCalculator.isFibonacci(55n));    // true
console.log(FibonacciCalculator.isFibonacci(56n));    // false
```

The Fibonacci sequence isn't chosen for its practical value — it's chosen because it's the simplest possible problem that exhibits all core DP characteristics: overlapping subproblems, a recurrence over smaller subproblems, and simple base cases.
The mental model Fibonacci teaches:
After fully understanding Fibonacci as DP, you have internalized the fundamental questions for any DP problem: What are the subproblems? How does each relate to smaller ones (the recurrence)? What are the base cases? In what order should they be solved?
Every DP problem you'll ever encounter is just a more complex version of Fibonacci. The core structure remains the same.
Once you've truly mastered Fibonacci, you've mastered the DP paradigm. Climbing stairs is Fibonacci in disguise. House robber is Fibonacci with a choice. Coin change is multi-branch Fibonacci. Every 1D DP problem is a variation on this theme.
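The "Fibonacci in disguise" claim can be made concrete. Assuming the standard climbing-stairs formulation (count the ways to climb n steps taking 1 or 2 at a time; that exact setup is an assumption here, since the problem is only named above), the recurrence is ways(n) = ways(n-1) + ways(n-2), the Fibonacci recurrence with shifted base cases:

```typescript
// Climbing stairs, assuming the standard setup: n steps, 1 or 2 at a time.
// ways(1) = 1, ways(2) = 2, ways(n) = ways(n-1) + ways(n-2):
// the Fibonacci recurrence with shifted base cases.
function climbStairs(n: number): number {
  if (n <= 2) return n; // ways(1) = 1, ways(2) = 2
  let [prev2, prev1] = [1, 2];
  for (let i = 3; i <= n; i++) {
    [prev2, prev1] = [prev1, prev2 + prev1]; // same O(1)-space window as Fibonacci
  }
  return prev1;
}

console.log(climbStairs(5)); // 8
```

Only the base cases changed; the window, the loop, and the recurrence shape are identical to the space-optimized Fibonacci above.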
We've explored the Fibonacci sequence as the gateway problem to dynamic programming, tracing its evolution from naive recursion to an optimal solution: from O(2ⁿ) naive recursion, through O(n) memoization and O(n) tabulation, to the space-optimized O(n)-time, O(1)-space version.
You now have a deep understanding of Fibonacci as the canonical introduction to dynamic programming. The next page extends these concepts to the Climbing Stairs problem — a disguised Fibonacci problem that introduces the idea of 'decisions' in DP.