Analyzing the time complexity of iterative algorithms is relatively straightforward: count the loops, multiply by the work done per iteration, and you have your answer. But recursive algorithms present a unique challenge. When a function calls itself—potentially multiple times, with different input sizes—how do we determine the total work performed?
This question isn't merely academic. The difference between an O(n) and O(2ⁿ) recursive algorithm could mean the difference between a solution that runs in milliseconds and one that takes longer than the age of the universe. Understanding how to analyze recursive complexity is essential for any engineer who wants to predict algorithm behavior before deploying to production.
By the end of this page, you will understand what recurrence relations are, how to write them for recursive algorithms, and the fundamental techniques for solving them to determine time complexity. You'll be able to look at any recursive function and systematically derive its Big-O performance.
Let's start with why our standard complexity analysis tools don't work directly for recursive functions.
Iterative analysis relies on a clear loop structure:
```python
def iterative_sum(arr):
    """Sum all elements in an array - O(n) time."""
    total = 0
    for x in arr:      # Loop runs n times
        total += x     # O(1) work per iteration
    return total

# Analysis: n iterations × O(1) work = O(n) total
```
The analysis above is direct: count the iterations, multiply by the work per iteration. But now consider the recursive equivalent:
```python
def recursive_sum(arr, i=0):
    """Sum all elements recursively."""
    if i == len(arr):   # Base case
        return 0
    return arr[i] + recursive_sum(arr, i + 1)   # Recursive case

# Analysis: ???
# How many times does this function execute?
# What's the total work across all executions?
```
The recursive version doesn't have an obvious loop to count. Instead, the function repeatedly calls itself, each call triggering another call, until we reach the base case. The "loop" is implicit in the call structure.
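One way to build intuition is to instrument the function and count its calls directly. The sketch below (the global counter and the counted variant are illustrative additions, not part of the original example) confirms that for an array of length n the function runs n + 1 times: n recursive-case calls plus one base-case call.

```python
call_count = 0  # Illustrative counter, not part of the original example

def recursive_sum_counted(arr, i=0):
    """Same logic as recursive_sum, but tracks how many times it executes."""
    global call_count
    call_count += 1
    if i == len(arr):                # Base case
        return 0
    return arr[i] + recursive_sum_counted(arr, i + 1)

recursive_sum_counted([3, 1, 4, 1, 5])
print(call_count)  # 6 calls for n = 5: n recursive-case calls + 1 base-case call
```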
The fundamental problem:
With recursion, we need to reason about:

- How many recursive calls each invocation makes
- How much the problem size shrinks with each call
- How much work each call does outside the recursion
- How deep the chain of calls goes before reaching the base case

This is where recurrence relations become essential—they give us a mathematical framework to express and solve these recursive relationships.
Every recursive algorithm can be understood as an implicit loop where the 'iteration count' is determined by the structure of recursive calls. Recurrence relations formalize this implicit structure into an equation we can solve.
A recurrence relation is an equation that defines a function in terms of its value on smaller inputs. In the context of algorithm analysis, a recurrence relation expresses the time complexity T(n) of a recursive algorithm in terms of T on smaller problem sizes.
The key insight: A recurrence relation captures the recursive structure of the algorithm mathematically, allowing us to reason about the total work without tracing through every call.
General form of a recurrence:
```
T(n) = [number of recursive calls] × T([size of subproblem]) + [work done outside recursive calls]

Example: T(n) = 2·T(n/2) + O(n)

This means:
- The algorithm makes 2 recursive calls
- Each call is on a problem of size n/2
- Outside the recursive calls, O(n) work is done (at this level)
```
Components of a recurrence relation:
- The function T(n) represents the total time to solve a problem of size n.
- Subproblem terms like T(n/2) or T(n-1) represent the time for recursive subproblems. These capture the 'dividing' aspect of recursive algorithms.
- The coefficient on a subproblem term (e.g., the 2 in 2·T(n/2)) indicates how many recursive calls are made at each level.
- A non-recursive f(n) term (often O(1), O(n), etc.) represents work done at the current level excluding recursive calls.
- A base case, such as T(1) = O(1), represents the time for the smallest subproblem that doesn't recurse further.

Think of a recurrence as telling a story: 'To solve a problem of size n, I first do some work [f(n)], then I solve [a] problems of size [n/b], and my answer combines all of that.' The recurrence is this story written in mathematical notation.
The skill of writing recurrence relations from code is fundamental to recursive analysis. Let's develop this skill systematically through increasingly complex examples.
Example 1: Linear Recursion (Single Recursive Call)
```python
def factorial(n):
    if n <= 1:                    # Base case: O(1)
        return 1
    return n * factorial(n - 1)   # One recursive call on (n-1)
                                  # Plus O(1) multiplication
```
Deriving the recurrence:
- Base case: when n ≤ 1, we do constant work → T(1) = O(1)
- Recursive case: one call to factorial(n-1) → that's T(n-1)
- Work outside the recursive call: O(1)

Recurrence: T(n) = T(n-1) + O(1), with T(1) = O(1)
Example 2: Binary Recursion (Two Recursive Calls)
```python
def fibonacci(n):
    if n <= 1:                               # Base case: O(1)
        return n
    return fibonacci(n-1) + fibonacci(n-2)   # TWO recursive calls
                                             # Plus O(1) addition
```
Deriving the recurrence:
- Base cases: T(0) = T(1) = O(1)
- Recursive case: two calls, fibonacci(n-1) + fibonacci(n-2) → T(n-1) + T(n-2)
- Work outside the recursive calls: O(1)

Recurrence: T(n) = T(n-1) + T(n-2) + O(1)
Important: This recurrence is more complex because the subproblem sizes differ. We'll see this leads to exponential complexity.
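You can preview that exponential blow-up empirically. The counting wrapper below is an illustrative addition (not part of the fibonacci code above): it returns both the result and the number of calls made, and the call count grows by roughly a factor of φ¹⁰ ≈ 123 every time n increases by 10.

```python
def fib_calls(n):
    """Return (fibonacci(n), number of calls made) - illustrative counting wrapper."""
    if n <= 1:
        return n, 1
    a, calls_a = fib_calls(n - 1)
    b, calls_b = fib_calls(n - 2)
    return a + b, calls_a + calls_b + 1

for n in (10, 20, 30):
    _, calls = fib_calls(n)
    print(n, calls)   # 177, 21891, 2692537 calls: roughly ×123 ≈ φ¹⁰ per +10 in n
```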
Example 3: Divide-and-Conquer (Halving the Problem)
```python
def binary_search(arr, target, lo, hi):
    if lo > hi:                      # Base case: O(1)
        return -1
    mid = (lo + hi) // 2             # O(1) computation
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:
        return binary_search(arr, target, mid + 1, hi)   # ONE call on n/2
    else:
        return binary_search(arr, target, lo, mid - 1)   # OR ONE call on n/2
```
Deriving the recurrence:
- Base case: T(1) = O(1)
- Recursive case: exactly one call, on a subproblem of size n/2
- Work outside the recursive call: O(1)

Recurrence: T(n) = T(n/2) + O(1)
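A quick empirical check is easy here too. The variant below threads a call counter through the recursion (an illustrative addition, not part of the original code): on a million-element array, a failed search makes about 20 calls, matching log₂(1,000,000).

```python
def binary_search_counted(arr, target, lo, hi, calls=0):
    """Same logic as binary_search, also returning how many calls were made."""
    calls += 1
    if lo > hi:
        return -1, calls
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid, calls
    elif arr[mid] < target:
        return binary_search_counted(arr, target, mid + 1, hi, calls)
    else:
        return binary_search_counted(arr, target, lo, mid - 1, calls)

arr = list(range(1_000_000))
_, calls = binary_search_counted(arr, -1, 0, len(arr) - 1)   # Worst case: target absent
print(calls)   # ~20 calls ≈ log₂(1,000,000): the range halves on every call
```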
Example 4: Divide-and-Conquer with Linear Work
```python
def merge_sort(arr):
    if len(arr) <= 1:                # Base case: O(1)
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])     # Recursive call on n/2 elements
    right = merge_sort(arr[mid:])    # Recursive call on n/2 elements
    return merge(left, right)        # Merge step: O(n) work
```
Deriving the recurrence:
- Base case: T(1) = O(1)
- Recursive case: two calls, each on n/2 elements → 2·T(n/2)
- Work outside the recursive calls (the merge step): O(n)

Recurrence: T(n) = 2·T(n/2) + O(n)
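The merge_sort snippet relies on a merge helper that isn't shown above. A minimal sketch of the standard two-pointer merge, which is where the O(n) term in the recurrence comes from, might look like this:

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list in O(n) time."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):   # Each element is examined once
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])                   # Append whatever remains
    merged.extend(right[j:])
    return merged
```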
A common mistake is counting both branches in binary search as 2·T(n/2). Remember: only ONE branch executes per call (it's an if-else). Merge sort, in contrast, executes BOTH branches (left and right sorting), hence 2·T(n/2).
The most intuitive technique for solving recurrences is expansion (also called the iteration method or unfolding). We repeatedly substitute the recurrence into itself until we see a pattern.
Solving T(n) = T(n-1) + O(1):
```
T(n) = T(n-1) + c          where c = O(1)

Expand T(n-1):
T(n) = [T(n-2) + c] + c = T(n-2) + 2c

Expand T(n-2):
T(n) = [T(n-3) + c] + 2c = T(n-3) + 3c

Pattern emerges:
T(n) = T(n-k) + k·c

We reach the base case when n-k = 1, so k = n-1:
T(n) = T(1) + (n-1)·c = c + (n-1)·c = n·c = O(n)

✓ Factorial has O(n) time complexity
```
Solving T(n) = T(n/2) + O(1):
```
T(n) = T(n/2) + c          where c = O(1)

Expand T(n/2):
T(n) = [T(n/4) + c] + c = T(n/4) + 2c

Expand T(n/4):
T(n) = [T(n/8) + c] + 2c = T(n/8) + 3c

Pattern emerges:
T(n) = T(n/2^k) + k·c

We reach the base case when n/2^k = 1, so 2^k = n, meaning k = log₂(n):
T(n) = T(1) + log₂(n)·c = c + c·log₂(n) = O(log n)

✓ Binary search has O(log n) time complexity
```
Solving T(n) = 2·T(n/2) + O(n):
This is the merge sort recurrence, which is slightly more complex:
```
T(n) = 2·T(n/2) + cn       where cn represents O(n) work

Expand:
T(n) = 2·[2·T(n/4) + c(n/2)] + cn
     = 4·T(n/4) + cn + cn
     = 4·T(n/4) + 2cn

Expand again:
T(n) = 4·[2·T(n/8) + c(n/4)] + 2cn
     = 8·T(n/8) + cn + 2cn
     = 8·T(n/8) + 3cn

Pattern:
T(n) = 2^k · T(n/2^k) + k·cn

At the base case, n/2^k = 1 → k = log₂(n):
T(n) = 2^(log n) · T(1) + log₂(n) · cn
     = n·c + cn·log(n)
     = O(n) + O(n log n)
     = O(n log n)

✓ Merge sort has O(n log n) time complexity
```
The key to the expansion method is recognizing when the pattern stabilizes. After 2-3 expansions, look for the general term and express it as T(n) = [something involving k]. Then find what value of k brings you to the base case.
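If you want to sanity-check an expansion, you can also evaluate the recurrence numerically and compare it against the closed form. A minimal sketch for the merge sort recurrence (the constant c = 1 and the test sizes are arbitrary choices for illustration):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n, c=1):
    """Numerically evaluate T(n) = 2·T(n/2) + c·n with T(1) = c."""
    if n <= 1:
        return c
    return 2 * T(n // 2, c) + c * n

for n in (2**10, 2**15, 2**20):
    ratio = T(n) / (n * math.log2(n))
    print(n, round(ratio, 3))   # Ratio approaches a constant → T(n) = Θ(n log n)
```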
The recursion tree method provides a visual approach to solving recurrences. We draw the tree of recursive calls and sum up the work at each level.
Visualizing T(n) = 2·T(n/2) + cn (Merge Sort):
```
Level 0:              [ n ]                        Cost: cn
                     /     \
Level 1:         [n/2]     [n/2]                   Cost: 2 × c(n/2) = cn
                 /   \     /   \
Level 2:     [n/4] [n/4] [n/4] [n/4]               Cost: 4 × c(n/4) = cn
               ...               ...

Level k:      [n/2^k] × 2^k nodes                  Cost: 2^k × c(n/2^k) = cn

Level log(n): [1] × n nodes                        Cost: cn (base cases)

Key observations:
• Tree has log₂(n) + 1 levels (from n down to 1)
• Each level does exactly cn work
• Total = cn × (log n + 1) = O(n log n)
```
Visualizing T(n) = T(n-1) + O(1) (Factorial):
```
Level 0:     [n]        Cost: c
              |
Level 1:    [n-1]       Cost: c
              |
Level 2:    [n-2]       Cost: c
              |
             ...
              |
Level n-1:   [1]        Cost: c

Observations:
• Tree is a single chain (no branching)
• n levels, each doing c work
• Total = n × c = O(n)
```
Visualizing T(n) = 2·T(n-1) + O(1) (Exponential):
```
Level 0:              [n]                      Cost: c  = c × 2^0
                     /   \
Level 1:         [n-1]   [n-1]                 Cost: 2c = c × 2^1
                 /  \     /  \
Level 2:     [n-2][n-2][n-2][n-2]              Cost: 4c = c × 2^2
                     ...

Level k:        2^k nodes                      Cost: c × 2^k

Level n-1:      2^(n-1) nodes                  Cost: c × 2^(n-1)

Total = c × (2^0 + 2^1 + 2^2 + ... + 2^(n-1))
      = c × (2^n - 1)
      = O(2^n)

⚠️ EXPONENTIAL! Every level doubles the work!
```
The recursion tree reveals complexity through its shape: A chain (factorial) → O(depth). A balanced tree where every level does the same total work (merge sort) → O(depth × work per level). A tree that doubles each level (naive Fibonacci) → O(2^depth). The shape tells the story.
With practice, you'll recognize common recurrence patterns and their solutions instantly. Here are the essential patterns every engineer should know:
| Recurrence | Solution | Example Algorithm | Intuition |
|---|---|---|---|
| T(n) = T(n-1) + O(1) | O(n) | Factorial, linear search | Chain of n calls, O(1) each |
| T(n) = T(n-1) + O(n) | O(n²) | Selection sort, insertion sort | Chain of n calls, O(n) each on average |
| T(n) = T(n/2) + O(1) | O(log n) | Binary search | Halving problem each call, constant work |
| T(n) = T(n/2) + O(n) | O(n) | Finding median (select) | Linear work shrinks geometrically |
| T(n) = 2·T(n/2) + O(1) | O(n) | Tree traversal | Work at leaves dominates |
| T(n) = 2·T(n/2) + O(n) | O(n log n) | Merge sort, quicksort (average) | Balanced tree, linear work per level |
| T(n) = 2·T(n-1) + O(1) | O(2ⁿ) | Tower of Hanoi, subset enumeration | Doubling tree, exponential nodes |
| T(n) = T(n-1) + T(n-2) + O(1) | O(φⁿ) ≈ O(1.618ⁿ) | Naive Fibonacci | Fibonacci-growth tree |
Why these patterns matter:
Recognizing these patterns allows you to analyze recursive algorithms at a glance:
- See T(n) = 2·T(n/2) + O(n)? That's O(n log n). Merge sort, good quicksort.
- See T(n) = T(n-1) + O(n)? That's O(n²). Bubble sort, quicksort with a bad pivot.

This pattern recognition accelerates your analysis from minutes of expansion to seconds of recognition.
When you see a recursive function, ask: (1) How many recursive calls? (2) By how much does the problem shrink? If calls = 1 and shrink by half → O(log n). If calls = 2 and shrink by half → O(n log n) or O(n). If calls = 2 and shrink by 1 → exponential. These rules cover 90% of cases.
Let's apply our knowledge to analyze real recursive code, walking through the complete process from code to recurrence to solution.
Example: Power Function
```python
def power_naive(x, n):
    """Compute x^n using naive recursion."""
    if n == 0:
        return 1
    return x * power_naive(x, n - 1)


def power_fast(x, n):
    """Compute x^n using fast exponentiation."""
    if n == 0:
        return 1
    if n % 2 == 0:
        half = power_fast(x, n // 2)
        return half * half
    else:
        return x * power_fast(x, n - 1)
```
Analysis of power_naive:
Recurrence: T(n) = T(n-1) + O(1)
Solution: O(n)
Analysis of power_fast:
The even case halves n, the odd case decreases n by 1 (making it even). At most, every other call is odd, so effectively we halve n approximately every two calls.
Recurrence: T(n) = T(n/2) + O(1) (approximately)
Solution: O(log n)
The fast version is exponentially better—computing 2^1000 takes about 15 multiplications instead of 1000!
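You can verify the gap by counting calls. The counted variants below are illustrative wrappers around the two functions above (the counter threading and the recursion-limit bump are additions for the demo, not part of the original code):

```python
import sys
sys.setrecursionlimit(10_000)   # power_naive recurses ~n levels deep

def power_naive_counted(x, n, calls=0):
    """power_naive plus an illustrative call counter."""
    calls += 1
    if n == 0:
        return 1, calls
    result, calls = power_naive_counted(x, n - 1, calls)
    return x * result, calls

def power_fast_counted(x, n, calls=0):
    """power_fast plus an illustrative call counter."""
    calls += 1
    if n == 0:
        return 1, calls
    if n % 2 == 0:
        half, calls = power_fast_counted(x, n // 2, calls)
        return half * half, calls
    result, calls = power_fast_counted(x, n - 1, calls)
    return x * result, calls

_, naive_calls = power_naive_counted(2, 1000)
_, fast_calls = power_fast_counted(2, 1000)
print(naive_calls, fast_calls)   # 1001 vs 16 calls (≈ 2·log₂ n worst case): O(n) vs O(log n)
```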
Example: Maximum Element in Array
```python
def find_max_linear(arr, i=0):
    """Find maximum using linear recursion."""
    if i == len(arr) - 1:
        return arr[i]
    return max(arr[i], find_max_linear(arr, i + 1))


def find_max_divide(arr, lo, hi):
    """Find maximum using divide and conquer."""
    if lo == hi:
        return arr[lo]
    mid = (lo + hi) // 2
    left_max = find_max_divide(arr, lo, mid)
    right_max = find_max_divide(arr, mid + 1, hi)
    return max(left_max, right_max)
```
Both have O(n) time complexity, but different recurrences:
Linear version: T(n) = T(n-1) + O(1) → O(n)
Divide version: T(n) = 2·T(n/2) + O(1) → O(n)
Interesting! The divide-and-conquer approach doesn't improve complexity here because we still must examine every element. However, divide-and-conquer offers parallelization potential—the two halves can be processed simultaneously on different cores.
Two algorithms with the same Big-O can have very different practical characteristics. Divide-and-conquer enables parallelization, has different cache behavior, and may have different constant factors. Complexity is essential but not the whole story.
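As a rough illustration of that parallelization potential, here is a minimal sketch that splits the array into chunks and finds each chunk's maximum on a separate process. The chunking scheme, worker count, and function name are arbitrary choices for the example, and for a problem this cheap the process startup overhead would normally outweigh any speedup:

```python
from concurrent.futures import ProcessPoolExecutor

def find_max_parallel(arr, workers=4):
    """Find the maximum by processing independent chunks concurrently - still O(n) total work."""
    if not arr:
        raise ValueError("empty array")
    chunk = max(1, len(arr) // workers)
    chunks = [arr[i:i + chunk] for i in range(0, len(arr), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partial_maxima = list(pool.map(max, chunks))   # Chunks are scanned in parallel
    return max(partial_maxima)                         # Combine the partial results

if __name__ == "__main__":
    print(find_max_parallel(list(range(1_000_000))))   # 999999
```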
We've covered the fundamental approach to analyzing recursive time complexity. Let's consolidate the key concepts:

- A recurrence relation expresses T(n) in terms of T on smaller inputs, plus the work done outside the recursive calls.
- To write a recurrence from code, identify the base case, the number of recursive calls, the size of each subproblem, and the non-recursive work per call.
- The expansion method solves a recurrence by substituting it into itself until a pattern emerges, then finding the depth at which the base case is reached.
- The recursion tree method sums the work level by level; the tree's shape (chain, balanced, doubling) reveals the complexity.
- A handful of common patterns—T(n-1) + O(1) → O(n), T(n/2) + O(1) → O(log n), 2·T(n/2) + O(n) → O(n log n), 2·T(n-1) + O(1) → O(2ⁿ)—cover most recursive algorithms you'll meet.
What's next:
Understanding time complexity is only half the picture. Recursive algorithms also consume memory through the call stack. In the next page, we'll explore space complexity for recursive algorithms—understanding how stack depth affects memory usage and how to reason about the hidden space cost of recursion.
You now understand how to analyze the time complexity of recursive algorithms using recurrence relations. You can write recurrences from code, solve them through expansion and recursion trees, and recognize common patterns. Next, we'll tackle the space complexity dimension of recursive analysis.