Here's a secret that experienced algorithm designers know: you rarely need to solve recurrences from first principles. The same patterns appear over and over across thousands of different algorithms. By memorizing a small set of common recurrence forms and their solutions, you can instantly recognize the complexity of most divide-and-conquer algorithms.
This page builds your mental library of recurrence patterns. By the end, you will recognize the most common recurrence forms in algorithm analysis, understand intuitively why each one produces its complexity, and match new recurrences to their solved forms on sight, through pattern matching rather than lengthy derivation.
The Pattern:
T(n) = T(n/b) + O(1) → T(n) = O(log n)
Canonical Example: Binary Search
This is the simplest and most elegant D&C pattern. You make one recursive call on a problem of size n/b, doing only constant work at each level.
Why It's O(log n):
Visualize the recursion tree: it is a single chain of nodes, one per level, with the problem shrinking by a factor of b and only O(1) work done at each step.
How many levels? We need n/bᵏ = 1, which means k = log_b(n).
Total work = O(1) × log_b(n) = O(log n)
Note: log_b(n) = log(n) / log(b), so any base gives Θ(log n).
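To make the shape concrete, here is a minimal binary search sketch in Python (illustrative, not tied to any particular library):

```python
def binary_search(arr, target):
    """Return an index of target in sorted arr, or -1 if absent.

    Each iteration discards half the remaining range and does O(1)
    work, matching T(n) = T(n/2) + O(1) = O(log n).
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1      # target cannot be in the left half
        else:
            hi = mid - 1      # target cannot be in the right half
    return -1
```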
| Algorithm | Recurrence | Division Factor (b) | Why Single Subproblem |
|---|---|---|---|
| Binary Search | T(n/2) + O(1) | 2 | Search left OR right, never both |
| Ternary Search (unimodal) | T(2n/3) + O(1) | 3/2 | Two probes eliminate 1/3 of the search space each time |
| Exponentiation by Squaring | T(n/2) + O(1) | 2 | Compute x^n via x^(n/2) |
| Peak Finding (1D) | T(n/2) + O(1) | 2 | Follow the uphill neighbor, discard the other half |
The logarithmic pattern emerges when you eliminate a constant fraction of possibilities without examining them. The 'trick' is that you never need to look at the discarded portion—you've proven it can't contain the answer.
The Pattern:
T(n) = T(n/b) + O(n) → T(n) = O(n)
Canonical Example: Quickselect with a balanced pivot (median-of-medians makes the balance deterministic)
This pattern might seem surprising at first. You do O(n) work at the top level, then recurse on a smaller problem. The total is still just O(n)!
Why It's O(n), Not O(n log n):
Let's trace the work at each level for T(n) = T(n/2) + n:
Total = n + n/2 + n/4 + n/8 + ... = n × (1 + 1/2 + 1/4 + ...) = n × 2 = O(n)
The key is that the work at each level forms a geometric series that converges. The top level dominates—everything else combined is just another O(n).
Mathematical Insight:
For T(n) = T(n/b) + cn, expand the recurrence:
T(n) = cn + T(n/b)
= cn + c(n/b) + T(n/b²)
= cn + c(n/b) + c(n/b²) + T(n/b³)
= cn × (1 + 1/b + 1/b² + ...)
= cn × (b / (b-1)) [geometric series sum]
= O(n)
Since b > 1, the geometric series converges to a constant factor.
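As a concrete instance of this pattern, here is a short quickselect sketch. It uses a randomized pivot (simpler than median-of-medians), so the balanced split T(n/2) + O(n) holds in expectation rather than in the worst case; function names are illustrative:

```python
import random

def quickselect(arr, k):
    """Return the k-th smallest element (0-indexed) of arr.

    Assumes 0 <= k < len(arr). Partitioning costs O(n), then we
    recurse into only ONE side. With balanced pivots this follows
    T(n) = T(n/2) + O(n) = O(n).
    """
    pivot = random.choice(arr)
    below = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    above = [x for x in arr if x > pivot]
    if k < len(below):
        return quickselect(below, k)      # answer is in the low part
    elif k < len(below) + len(equal):
        return pivot                      # the pivot is the answer
    else:
        return quickselect(above, k - len(below) - len(equal))
```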
| Algorithm | Recurrence | Why O(n) Work at Each Level |
|---|---|---|
| Quickselect (balanced pivot) | T(n/2) + O(n) | Partition scans all elements, then recurse on one half |
| Median of Medians | T(n/5) + T(7n/10) + O(n) | Two subproblems but sizes sum to < n |
| Prune-and-search (e.g., Megiddo's 2D LP) | T(3n/4) + O(n) | Linear pass discards a constant fraction of constraints |
Whenever a recurrence does O(n) work but recursively handles only a constant fraction of the input, the total is O(n). The work decreases geometrically at each level, and the sum of a convergent geometric series is dominated by its first term.
The Pattern:
T(n) = 2T(n/2) + O(n) → T(n) = O(n log n)
Canonical Example: Merge Sort
This is perhaps the most famous recurrence pattern in computer science. Two recursive calls, each on half the problem, plus linear work to combine.
Why It's O(n log n):
The recursion tree is perfectly balanced:
                    n                      Level 0: n work
                  /   \
              n/2       n/2                Level 1: n work total
             /   \     /   \
          n/4   n/4  n/4   n/4             Level 2: n work total
           ...   ...   ...
        [1] [1]  ...  [1] [1]              Level log n: n leaves
Key observation: Every level does exactly O(n) work total!
Number of levels: log₂(n)
Total = n × log n = O(n log n)
Generalizing the Pattern:
The pattern T(n) = aT(n/a) + O(n) always gives O(n log n):
Why? When a = b (number of subproblems = division factor), level k has exactly aᵏ nodes, each doing O(n/aᵏ) work, so the total per level is always O(n).
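Merge sort is the canonical a = 2 instance; a compact Python sketch:

```python
def merge_sort(arr):
    """Sort arr with the classic 2T(n/2) + O(n) recurrence."""
    if len(arr) <= 1:
        return arr                      # base case: already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])        # first recursive call
    right = merge_sort(arr[mid:])       # second recursive call
    # Merge step: touches each element exactly once -> O(n) combine.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```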
| Algorithm | Recurrence | Why Linear Combine Cost |
|---|---|---|
| Merge Sort | 2T(n/2) + cn | Merge operation touches each element once |
| Quicksort (average case) | 2T(n/2) + cn | Partition scans all elements |
| Counting Inversions | 2T(n/2) + O(n) | Count during merge phase |
| Closest Pair (2D) | 2T(n/2) + O(n) | Linear check in the strip |
| FFT (Fast Fourier Transform) | 2T(n/2) + O(n) | Combine partial transforms |
The O(n log n) pattern occurs when the tree is balanced (a = b) and each level does O(n) total work. This balance, doubling the nodes while halving the work per node, keeps the total work at each level constant.
The Pattern:
T(n) = 2T(n/2) + O(n²) → T(n) = O(n²)
This might seem counterintuitive—we're making recursive calls, but the complexity is just O(n²), the same as the non-recursive work!
Why It's O(n²), Not Worse:
Let's trace the work at each level:
Total = n² + n²/2 + n²/4 + n²/8 + ... = n² × (1 + 1/2 + 1/4 + ...) = n² × 2 = O(n²)
The work decreases geometrically, so the top level dominates.
The Domination Principle:
When the non-recursive work grows faster than the subproblems can multiply, the top level dominates. For T(n) = 2T(n/2) + O(n²), level k has 2ᵏ subproblems of size n/2ᵏ, each costing (n/2ᵏ)² = n²/4ᵏ, so the per-level total is n² × (2ᵏ/4ᵏ) = n²/2ᵏ.
Since 2ᵏ doubles but 4ᵏ quadruples, the work per level shrinks by half at each step.
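A quick numeric check of this (a hypothetical demo function, not a library routine): summing the per-level totals n²/2ᵏ stays below the 2n² bound:

```python
import math

def total_work(n):
    """Sum per-level work for T(n) = 2T(n/2) + n^2.

    Level k contributes 2^k * (n/2^k)^2 = n^2 / 2^k; the geometric
    series is bounded by 2 * n^2, so the root level dominates.
    """
    levels = int(math.log2(n))          # depth of the recursion tree
    return sum(n**2 / 2**k for k in range(levels + 1))

print(total_work(1024))   # 2096128.0 -- just under the bound below
print(2 * 1024**2)        # 2097152, the 2*n^2 upper bound
```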
If your divide-and-conquer algorithm has a very expensive combine step (like O(n²)), the recursion doesn't hurt you—but it doesn't help either. You still get O(n²) total. To leverage D&C, you need the combine step to be efficient.
| Scenario | What's Happening | Why D&C Doesn't Help |
|---|---|---|
| Naive matrix multiplication D&C | 8 subproblems, O(n²) combine | Would give O(n³); Strassen improves this |
| Brute-force closest pair D&C | Check all pairs in combine | O(n²) combine dominates |
| Poorly designed D&C | Quadratic merge | Should refactor the combine step |
The Pattern:
T(n) = 2T(n/2) + O(1) → T(n) = O(n)
Canonical Example: Finding max/min via D&C
This pattern arises when you split into two subproblems but do only constant work at each node.
Why It's O(n):
The recursion tree has 1 + 2 + 4 + 8 + ... + n = 2n − 1 nodes in total (for n a power of 2), each doing O(1) work, so T(n) = O(n).
Alternatively: the leaves of the tree are the base cases, and there are exactly n of them. All internal nodes combined number n − 1. So the total is O(n).
Intuitive Understanding:
This pattern is doing exactly what a linear scan does, just organized as a tree. For finding the maximum, a loop keeps a running best value, while the D&C version takes the max of each half and combines them with a single comparison.
Both approaches examine each element once, doing O(1) work per element. The structure differs, but the total work is the same.
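A minimal D&C maximum sketch; note that the combine step is one comparison:

```python
def dc_max(arr, lo=0, hi=None):
    """Maximum of arr[lo:hi] via 2T(n/2) + O(1) divide and conquer.

    The tree has n leaves and n - 1 internal nodes, each doing O(1)
    work, so the total is O(n) -- the same as a simple loop.
    """
    if hi is None:
        hi = len(arr)                   # assumes arr is non-empty
    if hi - lo == 1:
        return arr[lo]                  # base case: one element
    mid = (lo + hi) // 2
    left_max = dc_max(arr, lo, mid)     # first half
    right_max = dc_max(arr, mid, hi)    # second half
    return max(left_max, right_max)     # O(1) combine step
```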
| Algorithm | Recurrence | What the O(1) Work Is |
|---|---|---|
| D&C Maximum | 2T(n/2) + O(1) | max(left_max, right_max) |
| D&C Sum | 2T(n/2) + O(1) | left_sum + right_sum |
| Tree Construction (leaves) | 2T(n/2) + O(1) | Create internal node |
| Parallel Reduction | 2T(n/2) + O(1) | Combine partial results |
This pattern shows that D&C doesn't magically make algorithms faster. Finding the max is O(n) whether you use D&C or a simple loop. D&C is valuable when it enables parallelism or leads to a more clever combination step—not just by restructuring a linear process.
The Pattern:
T(n) = 7T(n/2) + O(n²) → T(n) = O(n^{log₂ 7}) = O(n^{2.807...})
Canonical Example: Strassen's Matrix Multiplication
This pattern represents one of the most celebrated results in algorithm design: reducing the exponent of matrix multiplication below 3.
Understanding the Mathematics:
Naive matrix multiplication of two n×n matrices takes O(n³) time; the straightforward D&C version splits each matrix into four n/2 × n/2 blocks and needs 8 block multiplications plus O(n²) additions, giving T(n) = 8T(n/2) + O(n²) = O(n³).
Strassen's insight: through clever algebra on the blocks, the same product can be assembled from only 7 block multiplications (at the cost of extra O(n²) additions and subtractions), giving T(n) = 7T(n/2) + O(n²).
To solve this recurrence, we need to understand whether the root or the leaves of the recursion tree dominate.
The General Principle:
For T(n) = aT(n/b) + O(n^d), the solution depends on comparing a to b^d:
- If a < b^d, the root dominates: T(n) = O(n^d).
- If a = b^d, every level contributes equally: T(n) = O(n^d log n).
- If a > b^d, the leaves dominate: T(n) = O(n^{log_b a}).
For Strassen: a = 7, b = 2, d = 2, and 7 > 2² = 4.
The leaves dominate because we create more subproblems (7) than the root work can "pay for" (b^d = 4), so T(n) = O(n^{log₂ 7}) ≈ O(n^{2.807}).
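This three-way comparison is mechanical enough to script; a small helper (hypothetical, for illustration only) that classifies T(n) = aT(n/b) + O(n^d):

```python
import math

def classify(a, b, d):
    """Classify T(n) = a*T(n/b) + O(n^d) by comparing a with b^d."""
    if a < b**d:
        return f"root dominates: O(n^{d})"
    elif a == b**d:
        return f"balanced: O(n^{d} log n)"
    else:
        return f"leaves dominate: O(n^{math.log(a, b):.3f})"

print(classify(7, 2, 2))   # Strassen: leaves dominate, O(n^2.807)
print(classify(8, 2, 2))   # naive block multiply: O(n^3.000)
print(classify(2, 2, 1))   # merge sort: balanced, O(n^1 log n)
```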
| Algorithm | Subproblems (a) | Recurrence | Complexity |
|---|---|---|---|
| Naive D&C | 8 | 8T(n/2) + O(n²) | O(n³) |
| Strassen | 7 | 7T(n/2) + O(n²) | O(n^{2.807}) |
| Coppersmith-Winograd | — | Complex | O(n^{2.376}) |
| Current best (2024) | — | Complex | O(n^{2.371552}) |
Strassen's genius was finding a way to compute the same result with 7 recursive calls instead of 8. Going from 8→7 subproblems dropped complexity from O(n³) to O(n^{2.807}). This illustrates a profound principle: reducing the number of subproblems can have dramatic effects on complexity.
The Pattern:
T(n) = 3T(n/2) + O(n) → T(n) = O(n^{log₂ 3}) = O(n^{1.585...})
Canonical Example: Karatsuba Multiplication
This pattern shows how D&C can beat the "obvious" quadratic bound for integer multiplication.
The Karatsuba Insight:
To multiply two n-digit numbers X and Y, split each in half: X = a·10^{n/2} + b and Y = c·10^{n/2} + d, so XY = ac·10^n + (ad + bc)·10^{n/2} + bd. Naively this needs 4 half-size multiplications: ac, ad, bc, bd.
Karatsuba's trick: compute the single extra product (a+b)(c+d) = ac + ad + bc + bd.
This reduces 4 multiplications to 3: compute only ac, bd, and (a+b)(c+d).
Then recover the middle term by subtraction: ad + bc = (a+b)(c+d) − ac − bd.
Complexity Analysis:
Naive multiplication: T(n) = 4T(n/2) + O(n) → O(n²)
Karatsuba: T(n) = 3T(n/2) + O(n) → O(n^{log₂ 3}) ≈ O(n^{1.585})
For a = 3, b = 2, d = 1: since 3 > 2¹ = 2, the leaves dominate and T(n) = O(n^{log₂ 3}).
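A minimal Karatsuba sketch on Python integers, splitting at a power of two instead of a power of ten (illustrative, not production code):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with 3 half-size products.

    T(n) = 3T(n/2) + O(n) -> O(n^{log2 3}), roughly O(n^1.585).
    """
    if x < 16 or y < 16:                 # small base case
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    a, b = x >> m, x & ((1 << m) - 1)    # x = a*2^m + b
    c, d = y >> m, y & ((1 << m) - 1)    # y = c*2^m + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a+b)(c+d) = ac + ad + bc + bd, so one extra product plus
    # two subtractions recovers the middle term ad + bc:
    mid = karatsuba(a + b, c + d) - ac - bd
    return (ac << (2 * m)) + (mid << m) + bd

assert karatsuba(1234567, 7654321) == 1234567 * 7654321
```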
| Algorithm | Recurrence | Complexity | Improvement |
|---|---|---|---|
| Grade school | 4T(n/2) + O(n) | O(n²) | Baseline |
| Karatsuba | 3T(n/2) + O(n) | O(n^{1.585}) | Exponent drops from 2 to ≈1.585 |
| Toom-Cook-3 | 5T(n/3) + O(n) | O(n^{1.465}) | Further improvement |
| Schönhage-Strassen | Complex | O(n log n log log n) | Near-linear |
Karatsuba multiplication is the prototypical example of reducing subproblems through algebraic cleverness. The pattern 'compute extra products that reveal useful combinations' appears throughout algorithm design, from FFT to polynomial multiplication.
You now have a comprehensive library of recurrence patterns. Here's a unified reference table for quick matching:
| Pattern Type | Recurrence | Solution | Key Insight |
|---|---|---|---|
| Logarithmic | T(n/b) + O(1) | O(log n) | Single subproblem, constant work |
| Linear (decreasing) | T(n/b) + O(n) | O(n) | Geometric series converges |
| Linear (leaf-dominated) | 2T(n/2) + O(1) | O(n) | Leaves count: n nodes do O(1) each |
| Linearithmic | 2T(n/2) + O(n) | O(n log n) | Balanced: O(n) work × O(log n) levels |
| Quadratic (root-dominated) | 2T(n/2) + O(n²) | O(n²) | Root work dominates |
| Subquadratic (Karatsuba) | 3T(n/2) + O(n) | O(n^{1.585}) | 3 > 2¹, leaves dominate |
| Subcubic (Strassen) | 7T(n/2) + O(n²) | O(n^{2.807}) | 7 > 2², leaves dominate |
What's Next:
You can now recognize whether a recurrence is logarithmic, linear, linearithmic, or polynomial from its structure alone. The next page formalizes this intuition as the Master Theorem: a single formula that solves T(n) = aT(n/b) + f(n) for any a, b, and f(n). Every pattern on this page is a special case of that one powerful result.