When you write a divide-and-conquer algorithm, something remarkable happens: the algorithm's structure directly mirrors its complexity analysis. The recursive calls that divide problems into subproblems create a mathematical pattern—a recurrence relation—that precisely captures how runtime scales with input size.
Understanding recurrence relations transforms you from someone who memorizes that merge sort is O(n log n) into someone who can derive that result from first principles. More importantly, you gain the ability to analyze novel divide-and-conquer algorithms you've never seen before.
By the end of this page, you will understand what recurrence relations are, how they arise naturally from divide-and-conquer algorithms, and how to express the runtime of any D&C algorithm as a recurrence. You'll see how the three phases—divide, conquer, and combine—each contribute to the recurrence equation.
A recurrence relation is an equation that defines a function in terms of its values at smaller inputs. In the context of algorithm analysis, it expresses the runtime of an algorithm on an input of size n in terms of its runtime on smaller inputs.
The defining characteristic is self-reference: to know T(n), you must first know T for some smaller value(s).
Just as a recursive function calls itself with smaller arguments, a recurrence relation defines T(n) using T of smaller arguments. The parallel is not coincidental—recurrence relations are the mathematical reflection of recursive algorithms.
Formal Definition:
A recurrence relation for runtime analysis consists of two components:

- A recursive case, which expresses T(n) in terms of T at smaller inputs plus any non-recursive work
- A base case, which gives T directly for small inputs and stops the self-reference
For example, consider the following recurrence:
T(n) = 2T(n/2) + n for n > 1
T(1) = 1 (base case)
This says: "The time to solve a problem of size n equals twice the time to solve two problems of size n/2, plus linear work at the current level."
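To make the self-reference concrete, here is a small numeric sketch (not from the original text; it assumes n is a power of two so that n/2 stays an integer) that evaluates this recurrence directly:

```python
def T(n):
    # T(n) = 2T(n/2) + n with T(1) = 1, for n a power of two
    if n == 1:
        return 1              # base case
    return 2 * T(n // 2) + n  # two subproblems plus linear work

for n in (2, 4, 8, 16):
    print(n, T(n))  # prints 4, 12, 32, 80
```

The outputs match n log₂ n + n exactly (for example, 16 × 4 + 16 = 80), previewing the O(n log n) solution we'll derive later.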
Why Recurrences Matter:
Recurrence relations provide a declarative specification of complexity. Instead of tracing through every operation, you describe the recursive structure and let mathematical techniques derive the closed-form solution.
This is incredibly powerful because one recurrence can describe many different algorithms, and a small toolbox of solution techniques (recursion trees, pattern recognition, and the Master Theorem, all covered in upcoming pages) solves them all without re-tracing a single line of code.
Every divide-and-conquer algorithm follows a three-phase pattern, and each phase contributes a specific component to the recurrence relation:
Let's examine how each phase shapes the recurrence.
| Phase | What It Does | Contribution to T(n) | Example (Merge Sort) |
|---|---|---|---|
| Divide | Split the input into a subproblems of size n/b | Splitting cost, folded into f(n) | O(1) midpoint (slicing counted with the merge) |
| Conquer | Solve each subproblem recursively | a × T(n/b) | 2 × T(n/2) |
| Combine | Merge subproblem solutions (non-recursive work) | f(n) | O(n) for merging |
| Total | Recursive calls plus non-recursive work | T(n) = a × T(n/b) + f(n) | T(n) = 2T(n/2) + n |
The General Form:
Most divide-and-conquer recurrences follow this pattern:
T(n) = a × T(n/b) + f(n)
Where:

- a is the number of subproblems (recursive calls)
- b is the factor by which the input shrinks in each subproblem
- f(n) is the non-recursive work per call (dividing plus combining)
This single template covers an enormous range of algorithms!
To write a recurrence, ask three questions:

1. How many recursive calls does the algorithm make? (This is a.)
2. How much smaller is each subproblem? (This gives b.)
3. How much non-recursive work happens in each call? (This is f(n).)
With these three answers, you can write any D&C recurrence.
The most practical skill is reading algorithm code and translating it directly into a recurrence. Let's walk through this process systematically with several examples.
Example 1: Binary Search
Consider the classic binary search algorithm:
```python
def binary_search(arr, target, low, high):
    # Base case: element not found
    if low > high:
        return -1                                          # O(1)

    # Divide: compute midpoint
    mid = (low + high) // 2                                # O(1)

    # Check if found
    if arr[mid] == target:
        return mid                                         # O(1)

    # Conquer: search ONE half (not both!)
    elif arr[mid] > target:
        return binary_search(arr, target, low, mid - 1)    # T(n/2)
    else:
        return binary_search(arr, target, mid + 1, high)   # T(n/2)

    # Combine: nothing needed—just return the result
```

Analysis: one recursive call (a = 1), on half the range (b = 2), with O(1) non-recursive work (f(n) = O(1)).
Recurrence:
T(n) = 1 × T(n/2) + O(1)
= T(n/2) + c
This recurrence solves to O(log n), which matches our intuition: we halve the search space each time.
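You can unroll this recurrence numerically as well (a sketch under the same powers-of-two assumption as before):

```python
def T(n):
    # T(n) = T(n/2) + 1 with T(1) = 1, for n a power of two
    return 1 if n == 1 else T(n // 2) + 1

print([T(n) for n in (1, 2, 4, 8, 16, 32)])  # [1, 2, 3, 4, 5, 6]
```

Each doubling of n adds exactly one unit of work: the signature of log₂ n growth.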
Example 2: Merge Sort
Now let's examine merge sort, which has a fundamentally different structure:
```python
def merge_sort(arr):
    # Base case
    if len(arr) <= 1:
        return arr                               # O(1)

    # Divide: split into two halves
    mid = len(arr) // 2                          # O(1)
    left = arr[:mid]                             # O(n/2) - but we'll count this in merge
    right = arr[mid:]                            # O(n/2)

    # Conquer: sort BOTH halves (two recursive calls!)
    sorted_left = merge_sort(left)               # T(n/2)
    sorted_right = merge_sort(right)             # T(n/2)

    # Combine: merge the sorted halves
    return merge(sorted_left, sorted_right)      # O(n)

def merge(left, right):
    """Merge two sorted arrays into one sorted array."""
    result = []
    i = j = 0

    # Linear scan through both arrays
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1

    # Append remaining elements
    result.extend(left[i:])
    result.extend(right[j:])
    return result  # Total: O(n) where n = len(left) + len(right)
```

Analysis: two recursive calls (a = 2), each on half the array (b = 2), with an O(n) merge as the non-recursive work (f(n) = O(n)).
Recurrence:
T(n) = 2 × T(n/2) + O(n)
= 2T(n/2) + n
This recurrence solves to O(n log n). The key difference from binary search? We make two recursive calls instead of one, and the combine step is O(n) instead of O(1).
A frequent error is confusing the number of subproblems (a) with the division factor (b). In merge sort, a=2 and b=2, but these are independent parameters. In a theoretical algorithm that splits into 3 subproblems of size n/4 each, we'd have a=3 and b=4.
Not all divide-and-conquer algorithms fit the simple T(n) = aT(n/b) + f(n) template exactly. Let's explore important variations you'll encounter in practice.
Variation 1: Unequal Subproblem Sizes
Some algorithms divide the input into subproblems of different sizes. Quick sort is the classic example:
T(n) = T(k) + T(n - k - 1) + O(n)
Where k is the position of the pivot. In the worst case (k=0 or k=n-1), this becomes:
T(n) = T(0) + T(n-1) + O(n) = T(n-1) + O(n)
Which solves to O(n²).
In the average case (k ≈ n/2):
T(n) = 2T(n/2) + O(n)
Which solves to O(n log n).
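To see where the T(k) and T(n − k − 1) terms come from, here is a minimal quick sort sketch. The text doesn't fix a partitioning scheme, so this sketch assumes the common Lomuto scheme with the last element as pivot:

```python
def quick_sort(arr, low, high):
    # Base case: zero or one elements
    if low >= high:
        return
    # Partition: O(n) work that places the pivot at index p,
    # leaving k = p - low elements on its left
    p = partition(arr, low, high)
    quick_sort(arr, low, p - 1)    # T(k)
    quick_sort(arr, p + 1, high)   # T(n - k - 1)

def partition(arr, low, high):
    # Lomuto scheme: use the last element as the pivot
    pivot = arr[high]
    i = low
    for j in range(low, high):
        if arr[j] <= pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[high] = arr[high], arr[i]
    return i
```

The single O(n) partition pass is the f(n) term; the two recursive calls cover k and n − k − 1 elements, and k depends entirely on where the pivot lands.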
Variation 2: Non-Standard Division Factors
The division factor doesn't have to be 2. Consider an algorithm that divides the array into thirds:
T(n) = 3T(n/3) + O(n)
Or an algorithm that reduces the problem to 1/3 of the original size:
T(n) = T(n/3) + O(1)
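As an illustration of the second form, here is a sketch of ternary search on a sorted array (a companion to binary search, not code from the original): two O(1) probes let each call discard two thirds of the range, giving T(n) = T(n/3) + O(1).

```python
def ternary_search(arr, target, low, high):
    # Base case: empty range
    if low > high:
        return -1
    # Two probe points split the range into thirds: O(1) work
    m1 = low + (high - low) // 3
    m2 = high - (high - low) // 3
    if arr[m1] == target:
        return m1
    if arr[m2] == target:
        return m2
    # Conquer: recurse into exactly ONE third, i.e. T(n/3)
    if target < arr[m1]:
        return ternary_search(arr, target, low, m1 - 1)
    elif target > arr[m2]:
        return ternary_search(arr, target, m2 + 1, high)
    else:
        return ternary_search(arr, target, m1 + 1, m2 - 1)
```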
Variation 3: Linear Rather Than Geometric Reduction
Some "divide-and-conquer-like" algorithms reduce input size by a constant amount rather than a constant factor:
T(n) = T(n-1) + O(1) → O(n) [like factorial]
T(n) = T(n-1) + O(n) → O(n²) [like selection sort]
T(n) = T(n-2) + O(1) → O(n) [like some graph algorithms]
These are technically decrease-and-conquer rather than true divide-and-conquer, and they don't fit the Master Theorem (which we'll cover later). They typically result in polynomial rather than logarithmic factors.
The power of divide-and-conquer comes from geometric reduction (n → n/b). When you divide by a constant factor, problem size decreases exponentially, leading to logarithmic recursion depth. Linear reduction (n → n-1) leads to linear recursion depth—much slower asymptotically.
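To make the contrast concrete, here is a recursive formulation of selection sort (normally written with loops; this version is purely illustrative) whose shape is exactly T(n) = T(n − 1) + O(n):

```python
def selection_sort_rec(arr, start=0):
    # Base case: at most one unsorted element remains
    if start >= len(arr) - 1:
        return arr
    # O(n - start) scan for the minimum of the unsorted suffix
    min_idx = min(range(start, len(arr)), key=arr.__getitem__)
    arr[start], arr[min_idx] = arr[min_idx], arr[start]
    # Decrease by one: T(n) = T(n - 1) + O(n), which solves to O(n²)
    return selection_sort_rec(arr, start + 1)
```

The recursion is n levels deep and each level scans the remaining suffix, so summing n + (n − 1) + ... + 1 gives O(n²).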
| Reduction Type | Example | Recursion Depth | Typical Complexity |
|---|---|---|---|
| Geometric: n → n/2 | Binary Search, Merge Sort | O(log n) | Often O(log n) or O(n log n) |
| Geometric: n → n/3 | Ternary Search | O(log₃ n) = O(log n) | Similar to n/2 reduction |
| Linear: n → n-1 | Factorial, Selection Sort | O(n) | Often O(n) or O(n²) |
| Linear: n → n-k | Some graph algorithms | O(n/k) = O(n) | Polynomial |
One of the most powerful tools for understanding recurrences is the recursion tree. This visualization shows how the total work is distributed across recursive calls, making it easier to see patterns and derive closed-form solutions.
Constructing a Recursion Tree:
For the recurrence T(n) = 2T(n/2) + n: the root represents the n work done at the top level and spawns two children of size n/2; each child does n/2 work and spawns two children of size n/4; the branching continues until subproblems reach the base-case size of 1.
The tree looks like this:
```
                   n                     Level 0: n work
                /     \
             n/2       n/2               Level 1: n/2 + n/2 = n work
            /   \     /   \
          n/4   n/4 n/4   n/4            Level 2: 4 × (n/4) = n work
          ...   ...  ...   ...
        [1] [1]  ...  [1] [1]            Level log n: n nodes, O(1) each = O(n)
```
Analyzing the Tree:
For T(n) = 2T(n/2) + n: every level does exactly n total work (the node count doubles while the per-node work halves), and there are log₂ n + 1 levels, so the total is n × (log₂ n + 1) = O(n log n).
This visual approach makes it obvious why merge sort is O(n log n)!
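As a quick numeric check (a sketch not from the original; it assumes n is a power of two), we can sum the tree's levels directly:

```python
import math

def tree_total(n):
    # Level d of the tree for T(n) = 2T(n/2) + n has 2^d nodes,
    # each doing n / 2^d work, so every internal level does n work.
    levels = int(math.log2(n))                                 # log n internal levels
    internal = sum((2 ** d) * (n // 2 ** d) for d in range(levels))
    leaves = n                                                 # n leaves, O(1) each
    return internal + leaves

print(tree_total(1024), 1024 * 10 + 1024)  # both print 11264
```

The total is n log₂ n from the internal levels plus n from the leaves: O(n log n).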
Contrast: Binary Search Tree
For T(n) = T(n/2) + 1:
```
 n       Level 0: 1 work
 |
n/2      Level 1: 1 work
 |
n/4      Level 2: 1 work
...
 1       Level log n: 1 work
```
The tree is a single path, not a branching tree: about log₂ n levels of O(1) work each give a total of O(log n).
When using a recursion tree, always ask:

1. How many levels does the tree have?
2. How much total work happens at each level?
3. Does the per-level work stay constant, grow, or shrink as you descend?
The total is (work per level) × (number of levels), adjusted for whether levels have equal or varying work.
Every recurrence needs a base case—a small enough input size where we know the runtime directly without recursion. Base cases are essential for two reasons: they terminate the recursion (without one, the algorithm would recurse forever), and they anchor the mathematics (without one, the recurrence has no unique solution).
Common Base Case Patterns:
| Pattern | Meaning |
|---|---|
| T(1) = O(1) | Most common: constant time for a single element |
| T(0) = O(1) | Empty-input handling |
| T(n) = O(1) for n ≤ k | Constant for any size up to a threshold k |
| T(1) = T(0) = c | Explicit constant (for exact analysis) |
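The threshold pattern is what hybrid sorts use in practice. Below is a minimal, hypothetical sketch (the cutoff value is an illustrative assumption, and Python's built-in sorted stands in for insertion sort) that reuses the merge helper from the merge sort example:

```python
INSERTION_CUTOFF = 16  # hypothetical threshold k; real libraries tune this

def hybrid_merge_sort(arr):
    # Threshold base case: T(n) = O(1) for n <= k, since sorting at
    # most k items costs bounded time when k is a fixed constant
    if len(arr) <= INSERTION_CUTOFF:
        return sorted(arr)  # stand-in for insertion sort in this sketch
    mid = len(arr) // 2
    return merge(hybrid_merge_sort(arr[:mid]),
                 hybrid_merge_sort(arr[mid:]))
```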
For asymptotic analysis (Big-O), the specific constant in the base case usually doesn't affect the final result. What matters is that it's bounded by a constant.
Why Base Cases Don't Affect Big-O (Usually):
Consider T(n) = 2T(n/2) + n with two different base cases: Case A with T(1) = 1 and Case B with T(1) = 1000.
In Case B, the base-case cost is 1000× larger, but that cost is paid once at each of the n leaves of the recursion tree, so it contributes 1000n instead of n to the total. Only the linear term changes, and it remains dominated by the n log n work of the internal levels.
The base case constant only affects the constant factor, not the asymptotic class.
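A small numeric sketch (not from the original; powers of two assumed) confirms this: raising T(1) from 1 to 1000 shifts the total by exactly 999n, a purely linear term.

```python
def T(n, base):
    # T(n) = 2T(n/2) + n, with the base-case cost as a parameter
    return base if n == 1 else 2 * T(n // 2, base) + n

for n in (2 ** 10, 2 ** 12):
    diff = T(n, 1000) - T(n, 1)
    print(n, diff, diff == 999 * n)  # the gap is exactly 999n: True
```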
Base cases become important when you need exact operation counts rather than asymptotic bounds, when comparing constant factors between algorithms in the same Big-O class, or when tuning the threshold at which a hybrid implementation switches to a simpler method.
Complete Recurrence Example:
A fully specified merge sort recurrence:
T(n) = 2T(n/2) + cn for n > 1
T(1) = d (constant time for single element)
Where c and d are specific constants. This precision is needed for exact, non-asymptotic analysis: counting comparisons, comparing implementations that share the same Big-O, or choosing a hybrid cutoff.
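In fact, for n a power of two you can verify by substitution that this recurrence has the exact solution T(n) = c·n·log₂ n + d·n: the c term produces the O(n log n) class, while the base-case constant d contributes only the linear term.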
Let's consolidate what we've learned into a systematic procedure for converting any divide-and-conquer algorithm into its recurrence relation:

1. Identify the recursive structure of the algorithm.
2. Count the recursive calls (this is a).
3. Determine how much smaller each subproblem is (this gives b).
4. Add up the non-recursive work per call (this is f(n)).
5. Identify the base case and its cost.
Worked Example: Maximum Element (Divide and Conquer)
Let's apply this method to a D&C algorithm for finding the maximum element:
```python
def find_max_dc(arr, low, high):
    # Base case: single element
    if low == high:
        return arr[low]                          # Step 5: T(1) = O(1)

    # Divide: split into two halves
    mid = (low + high) // 2                      # O(1)

    # Conquer: find max in each half (Step 2: TWO recursive calls)
    left_max = find_max_dc(arr, low, mid)        # T(n/2)
    right_max = find_max_dc(arr, mid + 1, high)  # T(n/2)

    # Combine: return the larger (Step 4: O(1) non-recursive work)
    return max(left_max, right_max)              # O(1)
```

Applying the Method: two recursive calls (a = 2), each on half the array (b = 2), O(1) non-recursive work (f(n) = O(1)), and a constant-time base case.
T(n) = 2T(n/2) + O(1) for n > 1
T(1) = O(1)
This solves to O(n)—which makes sense! We visit every element once, just organized differently than a linear scan.
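As a final sanity check (a small sketch, powers of two assumed), unrolling the recurrence numerically gives exactly 2n − 1, which is plainly O(n):

```python
def T(n):
    # T(n) = 2T(n/2) + 1 with T(1) = 1, for n a power of two
    return 1 if n == 1 else 2 * T(n // 2) + 1

print([T(n) for n in (1, 2, 4, 8, 16)])  # [1, 3, 7, 15, 31] = 2n - 1
```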
Always verify your recurrence makes intuitive sense. Finding the maximum requires looking at every element—that's Ω(n) work. Our recurrence gives O(n), which matches. If we'd gotten O(log n), we'd know something was wrong.
You've now mastered the skill of translating divide-and-conquer algorithms into recurrence relations. This is the essential first step in analyzing D&C complexity—you can't solve a recurrence until you've correctly written it.
What's Next:
Now that you can express any D&C algorithm as a recurrence, the next question is: How do you solve these recurrences? The next page explores common recurrence patterns and their closed-form solutions, giving you a mental library of pre-solved forms that you'll recognize instantly.
You now understand how divide-and-conquer algorithms give rise to recurrence relations. You can dissect any D&C algorithm into its components—number of subproblems, size reduction, and non-recursive work—and express its runtime as a recurrence. Next, we'll explore the patterns that emerge when solving these recurrences.