In the divide-and-conquer paradigm, the CONQUER phase is where subproblems are actually solved. For traditional D&C algorithms like merge sort, this means recursively solving multiple subproblems. Binary search is different—and this difference is profound.
Binary search conquers by solving exactly one subproblem. The other subproblem is not merely postponed or set aside—it's provably irrelevant and completely eliminated from consideration. This is the essence of decrease-and-conquer and the key to binary search's logarithmic efficiency.
This page explores the CONQUER phase of binary search in rigorous detail. You'll understand why only one subproblem needs solving, how the sorted property enables elimination, the distinction between recursive and iterative conquest, and the implications for time and space complexity.
Traditional divide-and-conquer algorithms solve ALL subproblems created by the divide phase. Merge sort, for example, recursively sorts both halves before merging them. Binary search fundamentally differs: it creates two subproblems but solves only one, discarding the other entirely. This asymmetry is crucial. Let's understand why it's possible.
| Algorithm | Subproblems Created | Subproblems Solved | Recurrence |
|---|---|---|---|
| Merge Sort | 2 (left and right halves) | 2 (both solved) | T(n) = 2T(n/2) + O(n) |
| Quick Sort | 2 (less than and greater than pivot) | 2 (both solved) | T(n) = 2T(n/2) + O(n) average |
| Binary Search | 2 (left and right halves) | 1 (other eliminated) | T(n) = T(n/2) + O(1) |
| Finding Max/Min | 2 (left and right halves) | 2 (both solved) | T(n) = 2T(n/2) + O(1) |
The Key Question:
Why can binary search safely ignore one subproblem while sorting algorithms like merge sort must solve both?
The answer lies in the nature of the problem and the structure of the input:
Sorting: Every element must be placed correctly. We can't ignore any element—even the smallest element in the left half must be ordered relative to the right half.
Searching: We're looking for a specific target. If we can prove the target isn't in one half, that half is completely irrelevant to our answer.
The sorted property provides exactly this proof. After comparing with the middle element, we know definitively which half cannot contain the target.
Algorithms that solve only one subproblem per division are often called 'decrease-and-conquer' rather than 'divide-and-conquer.' This includes binary search, randomized selection (quickselect), and Euclid's GCD algorithm. The distinction matters for complexity analysis: solving one subproblem of size n/2 gives O(log n), while solving two gives O(n) or worse.
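As a quick illustration of decrease-and-conquer beyond binary search, here is a minimal sketch of Euclid's GCD algorithm mentioned above: each step reduces the problem to exactly one strictly smaller subproblem, and no work remains afterward.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: decrease-and-conquer with ONE subproblem per step."""
    if b == 0:            # Base case: gcd(a, 0) = a
        return a
    return gcd(b, a % b)  # Conquer a single, strictly smaller subproblem


print(gcd(48, 18))  # 6
```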
When binary search eliminates one half, it's not making a guess or a heuristic decision—it's applying a logical proof. Let's formalize this proof.
Given:
- arr sorted in ascending order: arr[i] ≤ arr[j] for all i < j
- A search space [left, right] containing the target (if it exists)
- A midpoint mid with value arr[mid]

Case 1: arr[mid] < target
We want to prove: target cannot exist in arr[left..mid].
Proof:
1. By sortedness: arr[left] ≤ arr[left+1] ≤ ... ≤ arr[mid]
2. Therefore arr[i] ≤ arr[mid] for all i ∈ [left, mid]
3. By assumption: arr[mid] < target
4. Combining: arr[i] < target for all i ∈ [left, mid]
5. Therefore no element of arr[left..mid] equals target ∎

Case 2: arr[mid] > target
We want to prove: target cannot exist in arr[mid..right].
Proof:
1. By sortedness: arr[mid] ≤ arr[mid+1] ≤ ... ≤ arr[right]
2. Therefore arr[mid] ≤ arr[i] for all i ∈ [mid, right]
3. By assumption: arr[mid] > target
4. Combining: arr[i] > target for all i ∈ [mid, right]
5. Therefore no element of arr[mid..right] equals target ∎

The implementation below makes this proof explicit at each step:
```python
def binary_search_with_proof(arr: list[int], target: int) -> int:
    """
    Binary search implementation that explicitly demonstrates
    the logical proof behind elimination at each step.
    """
    left, right = 0, len(arr) - 1

    print(f"Searching for {target} in sorted array: {arr}")
    print("=" * 70)

    step = 1
    while left <= right:
        mid = left + (right - left) // 2

        print(f"\nStep {step}:")
        print(f"  Search space: [{left}, {right}]")
        print(f"  Midpoint: mid = {mid}, arr[mid] = {arr[mid]}")

        if arr[mid] == target:
            print(f"  ✓ FOUND: arr[{mid}] = {target}")
            return mid
        elif arr[mid] < target:
            eliminated = list(range(left, mid + 1))
            remaining = list(range(mid + 1, right + 1))
            print(f"  Compare: arr[{mid}] = {arr[mid]} < target = {target}")
            print(f"  ┌─ PROOF of elimination:")
            print(f"  │  Since arr is sorted: arr[{left}] ≤ ... ≤ arr[{mid}] = {arr[mid]}")
            print(f"  │  Since {arr[mid]} < {target}")
            print(f"  │  Therefore: arr[i] < {target} for all i ∈ [{left}, {mid}]")
            print(f"  └─ Conclusion: Eliminate indices {eliminated}")
            print(f"  → Continue with indices {remaining}")
            left = mid + 1
        else:  # arr[mid] > target
            eliminated = list(range(mid, right + 1))
            remaining = list(range(left, mid))
            print(f"  Compare: arr[{mid}] = {arr[mid]} > target = {target}")
            print(f"  ┌─ PROOF of elimination:")
            print(f"  │  Since arr is sorted: arr[{mid}] = {arr[mid]} ≤ ... ≤ arr[{right}]")
            print(f"  │  Since {arr[mid]} > {target}")
            print(f"  │  Therefore: arr[i] > {target} for all i ∈ [{mid}, {right}]")
            print(f"  └─ Conclusion: Eliminate indices {eliminated}")
            print(f"  → Continue with indices {remaining}")
            right = mid - 1

        step += 1

    print(f"\nSearch space exhausted (left={left} > right={right})")
    print("✗ Target not found in array")
    return -1


# Example
arr = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
binary_search_with_proof(arr, 23)
```

Without the sorted property, the proofs above fail completely. In an unsorted array, knowing arr[mid] < target tells us nothing about other elements—any of them could still equal target. This is why binary search REQUIRES sorted data. The sorted property is not a convenience; it's the foundation of correctness.
The CONQUER phase can be implemented either recursively or iteratively. Both are correct, but they have different properties and tradeoffs. Understanding both deepens your grasp of the algorithm.
Recursive Implementation:
The recursive approach most clearly expresses the D&C structure. Each recursive call represents 'conquering' a subproblem.
```python
def binary_search_recursive(arr: list[int], target: int,
                            left: int, right: int, depth: int = 0) -> int:
    """
    Recursive binary search emphasizing the CONQUER phase.

    Each recursive call is the 'conquer' step—we're solving
    a smaller subproblem of the same type.
    """
    indent = "  " * depth

    # BASE CASE 2: Empty search space
    if left > right:
        print(f"{indent}BASE CASE: Empty space ← return -1")
        return -1

    # DIVIDE: Compute midpoint
    mid = left + (right - left) // 2
    print(f"{indent}DIVIDE: mid={mid}, arr[mid]={arr[mid]}")

    # BASE CASE 1: Found target
    if arr[mid] == target:
        print(f"{indent}BASE CASE: Found! ← return {mid}")
        return mid

    # CONQUER: Solve exactly ONE subproblem
    if arr[mid] < target:
        print(f"{indent}CONQUER: Recurse on right subproblem [{mid+1}, {right}]")
        return binary_search_recursive(arr, target, mid + 1, right, depth + 1)
    else:
        print(f"{indent}CONQUER: Recurse on left subproblem [{left}, {mid-1}]")
        return binary_search_recursive(arr, target, left, mid - 1, depth + 1)

    # Note: No COMBINE step needed! We just return the recursive result.


def binary_search(arr: list[int], target: int) -> int:
    """Wrapper function."""
    print(f"Recursive search for {target} in {arr}:")
    print("-" * 50)
    result = binary_search_recursive(arr, target, 0, len(arr) - 1)
    print("-" * 50)
    return result


# Trace
arr = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
result = binary_search(arr, 56)
print(f"Result: {result}")
```

Iterative Implementation:
The iterative version simulates the recursion with a loop, updating the search space boundaries instead of making recursive calls.
```python
def binary_search_iterative(arr: list[int], target: int) -> int:
    """
    Iterative binary search—same logic, different structure.

    Instead of recursive calls, we update boundaries in a loop.
    Each loop iteration corresponds to one 'conquer' step.
    """
    left, right = 0, len(arr) - 1
    iteration = 0

    print(f"Iterative search for {target} in {arr}:")
    print("-" * 50)

    while left <= right:
        iteration += 1
        mid = left + (right - left) // 2
        print(f"Iteration {iteration}: [{left}, {right}], mid={mid}, arr[mid]={arr[mid]}")

        if arr[mid] == target:
            print(f"  Found! Returning {mid}")
            return mid
        elif arr[mid] < target:
            print(f"  {arr[mid]} < {target} → move left boundary")
            left = mid + 1
        else:
            print(f"  {arr[mid]} > {target} → move right boundary")
            right = mid - 1

    print(f"Loop ended: left={left} > right={right}")
    return -1


result = binary_search_iterative([2, 5, 8, 12, 16, 23, 38, 56, 72, 91], 56)
print(f"Result: {result}")
```

| Aspect | Recursive | Iterative |
|---|---|---|
| Time Complexity | O(log n) | O(log n) |
| Space Complexity | O(log n) call stack | O(1) constant space |
| Clarity of D&C structure | Explicit in code structure | Implicit in loop logic |
| Function call overhead | Present (may matter for huge n) | Absent |
| Stack overflow risk | Negligible (depth is only ⌈log₂ n⌉, about 60 even for 10¹⁸ elements) | None |
| Tail call optimization | May allow O(1) space if supported | N/A |
| Ease of modification | Easier to add parameters | Easier to add state variables |
In production code, the iterative version is almost always preferred due to O(1) space and no recursion overhead. However, the recursive version is valuable for understanding the D&C pattern and for teaching. Some languages (like certain Scheme or Scala implementations) can optimize tail recursion to achieve O(1) space recursively.
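For production Python specifically, the standard library's bisect module already provides an iterative O(log n) search; here is a minimal sketch of using it for membership lookup (the index_of wrapper is our own naming):

```python
import bisect


def index_of(arr: list[int], target: int) -> int:
    """Return the index of target using the stdlib's binary search, or -1."""
    i = bisect.bisect_left(arr, target)  # leftmost insertion point for target
    if i < len(arr) and arr[i] == target:
        return i
    return -1


print(index_of([2, 5, 8, 12, 16, 23, 38, 56, 72, 91], 56))  # 7
```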
In recursive binary search, the 'depth' of conquest—how many recursive calls we make—directly determines time complexity. Let's analyze this precisely.
Problem Size Reduction:
At each step, we reduce the problem size by approximately half:
| Depth | Max Elements Remaining |
|---|---|
| 0 | n |
| 1 | n/2 |
| 2 | n/4 |
| k | n/2^k |
Maximum Depth:
The search terminates when the remaining space has at most 1 element:
n/2^k ≤ 1 ⟺ n ≤ 2^k ⟺ k ≥ log₂(n)
Therefore, maximum depth = ⌈log₂(n)⌉.
Time Complexity Derivation:
With at most O(log n) recursive calls, each doing O(1) work (midpoint calculation and comparison), total time is O(log n).
```python
import math


def analyze_conquest_depth():
    """
    Analyze the relationship between array size and
    maximum recursion/iteration depth in binary search.
    """
    print("Binary Search Depth Analysis")
    print("=" * 60)
    print(f"{'Array Size':<20} {'Max Depth (log₂ n)':<20} {'10x Increase':<15}")
    print("-" * 60)

    previous_depth = 0
    for n in [10, 100, 1_000, 10_000, 100_000, 1_000_000,
              10_000_000, 100_000_000, 1_000_000_000]:
        depth = math.ceil(math.log2(n)) if n > 1 else 1
        increase = depth - previous_depth if previous_depth > 0 else "-"
        print(f"{n:<20,} {depth:<20} {increase}")
        previous_depth = depth

    print()
    print("Key insight: Every 10x increase in array size adds only ~3.3 depth!")
    print("(Because log₂(10) ≈ 3.32)")
    print()

    # Show the practical implications
    print("Practical implications:")
    print("-" * 60)

    examples = [
        ("Your music library", 10_000),
        ("Small database table", 1_000_000),
        ("Large user database", 100_000_000),
        ("All URLs on the internet (estimated)", 5_000_000_000),
        ("All atoms in observable universe", 10**80),
    ]

    for name, size in examples:
        depth = math.ceil(math.log2(size)) if size > 1 else 1
        print(f"  {name}: {size:,} items → {depth} comparisons max")


analyze_conquest_depth()


def trace_recursion_depth(n: int, target_position: int) -> int:
    """
    Trace exactly how many recursive calls are made for
    a target at a specific position in an array of size n.
    """
    # Simulate search in array [0, 1, 2, ..., n-1]
    left, right = 0, n - 1
    depth = 0

    while left <= right:
        depth += 1
        mid = left + (right - left) // 2
        if mid == target_position:
            return depth
        elif mid < target_position:
            left = mid + 1
        else:
            right = mid - 1

    return depth  # Not found


# Show that different positions have different depths
print("\nDepth by target position in array of 16 elements:")
print("-" * 40)
for pos in range(16):
    depth = trace_recursion_depth(16, pos)
    visual = "─" * depth + "◆"
    print(f"Position {pos:2}: depth {depth} {visual}")
```

The depth analysis reveals why O(log n) is so powerful. Even for an array of ALL atoms in the observable universe (~10^80 elements), binary search would need only about 266 comparisons. The logarithm grows so slowly that it's practically constant for any realistic input size.
Every recursive algorithm needs base cases—conditions that stop recursion and return a direct answer. Binary search has two base cases:
Base Case 1: Target Found
When arr[mid] == target, we've found what we're looking for. No further recursion is needed—we return mid immediately.
Base Case 2: Empty Search Space
When left > right, the search space is empty. If the target existed, it would have been found (since we never eliminate the containing subspace). Therefore, target doesn't exist, and we return -1.
These base cases are exhaustive: every binary search terminates at one of them.
```python
def binary_search_base_case_analysis(arr: list[int], target: int) -> int:
    """
    Binary search with explicit base case handling and analysis.
    """
    def search(left: int, right: int, depth: int = 0) -> int:
        indent = "  " * depth

        # BASE CASE 2: Empty search space
        if left > right:
            print(f"{indent}BASE CASE 2: Empty space [{left}..{right}]")
            print(f"{indent}  Reason: All candidates eliminated")
            print(f"{indent}  Conclusion: Target {target} does not exist")
            return -1

        mid = left + (right - left) // 2

        # BASE CASE 1: Target found
        if arr[mid] == target:
            print(f"{indent}BASE CASE 1: Found at index {mid}")
            print(f"{indent}  Verification: arr[{mid}] = {arr[mid]} = target")
            return mid

        # RECURSIVE CASE: Continue searching
        if arr[mid] < target:
            print(f"{indent}RECURSE: {arr[mid]} < {target}, search right")
            return search(mid + 1, right, depth + 1)
        else:
            print(f"{indent}RECURSE: {arr[mid]} > {target}, search left")
            return search(left, mid - 1, depth + 1)

    print(f"Searching for {target} in {arr}")
    print("-" * 50)
    return search(0, len(arr) - 1)


# Test base case 1: Target exists
print("TEST 1: Target exists")
print("=" * 50)
result = binary_search_base_case_analysis([1, 3, 5, 7, 9], 5)
print(f"Result: {result}\n")

# Test base case 2: Target doesn't exist (between elements)
print("TEST 2: Target doesn't exist (between elements)")
print("=" * 50)
result = binary_search_base_case_analysis([1, 3, 5, 7, 9], 4)
print(f"Result: {result}\n")

# Test base case 2: Target doesn't exist (too small)
print("TEST 3: Target too small")
print("=" * 50)
result = binary_search_base_case_analysis([1, 3, 5, 7, 9], 0)
print(f"Result: {result}\n")

# Test base case 2: Target doesn't exist (too large)
print("TEST 4: Target too large")
print("=" * 50)
result = binary_search_base_case_analysis([1, 3, 5, 7, 9], 10)
print(f"Result: {result}\n")
```

Base Case Correctness:
For binary search to be correct, we must prove:
1. At least one base case is always reached. The search space strictly shrinks at each step (mid is always excluded), so we must eventually reach left > right or find the target.
2. Base cases give correct answers:
   - When arr[mid] == target, returning mid is clearly correct.
   - When left > right, the invariant guarantees the target would be in arr[left..right] if it existed; this range is empty, so the target doesn't exist.

Common Mistakes:
| Mistake | Consequence |
|---|---|
| Using left < right instead of left <= right | Misses single-element case |
| Not handling empty array | Crash or incorrect result |
| Returning wrong value on not-found | Semantic error |
The crucial property for termination is that EVERY recursive call or loop iteration STRICTLY reduces the search space. Using left = mid + 1 and right = mid - 1 (not left = mid or right = mid) guarantees this. The +1/-1 exclusion of mid is essential—mid has been examined and can be safely removed.
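To make the failure mode concrete, here is a deliberately buggy sketch (our own illustrative code, with an iteration cap so the demo terminates) showing how left = mid lets the search space stop shrinking:

```python
def buggy_binary_search(arr: list[int], target: int, max_iters: int = 20) -> int:
    """BUGGY: uses left = mid, so the search space can stop shrinking."""
    left, right = 0, len(arr) - 1
    for _ in range(max_iters):  # cap iterations so the demo terminates
        mid = left + (right - left) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid  # BUG: mid stays in the search space
        else:
            right = mid - 1
    print(f"Gave up: stuck at [{left}, {right}]")  # would loop forever without the cap
    return -1


buggy_binary_search([1, 3, 5, 7], 7)  # gets stuck at [2, 3], never examines index 3
```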
Binary search's conquest of a single subproblem leads to an elegant recurrence relation and a beautifully simple complexity analysis.
The Recurrence:
T(n) = T(n/2) + O(1)
Where:
- T(n) = time to search an array of size n
- T(n/2) = time to search one half (the only subproblem we solve)
- O(1) = time to compute mid and compare (constant)

Solving the Recurrence:
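Unrolling the recurrence (assuming n is a power of 2, with c the constant per-step cost):

T(n) = T(n/2) + c = T(n/4) + 2c = T(n/8) + 3c = ... = T(1) + c·log₂(n), which is O(log n).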
Contrast with Two-Subproblem D&C:
If we had to search BOTH halves (like merge sort does), the recurrence would be:
T(n) = 2T(n/2) + O(1)
Using the Master Theorem: a=2, b=2, log_b(a) = 1, so T(n) = Θ(n).
This would be no better than linear search! The power of binary search comes precisely from eliminating one subproblem entirely.
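A short sketch (our own illustrative code) that counts comparisons makes this concrete: conquering both halves degenerates into examining essentially every element.

```python
def search_both_halves(arr: list[int], target: int,
                       left: int, right: int) -> tuple[int, int]:
    """Illustrative D&C search that conquers BOTH halves. Returns (index, comparisons)."""
    if left > right:
        return -1, 0
    mid = left + (right - left) // 2
    if arr[mid] == target:
        return mid, 1
    left_idx, left_cmp = search_both_halves(arr, target, left, mid - 1)    # solve left half
    right_idx, right_cmp = search_both_halves(arr, target, mid + 1, right)  # and right half too
    return (left_idx if left_idx != -1 else right_idx), 1 + left_cmp + right_cmp


arr = list(range(1024))
idx, comparisons = search_both_halves(arr, 1023, 0, len(arr) - 1)
print(f"Found at {idx} after {comparisons} comparisons")  # ≈ n, versus log₂(1024) = 10
```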
General Pattern:
| Subproblems Solved | Recurrence | Complexity |
|---|---|---|
| 1 of size n/2 | T(n) = T(n/2) + O(1) | O(log n) |
| 2 of size n/2 | T(n) = 2T(n/2) + O(1) | O(n) |
| 2 of size n/2, O(n) combine | T(n) = 2T(n/2) + O(n) | O(n log n) |
| 1 of size n-1 | T(n) = T(n-1) + O(1) | O(n) |
What matters isn't just solving one subproblem—it's that the subproblem is a CONSTANT FRACTION of the original. Solving T(n-1) instead of T(n/2) gives linear, not logarithmic, complexity. The fraction doesn't have to be exactly 1/2, but it must be some constant fraction less than 1.
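The same arithmetic shows why any constant fraction suffices: if each step keeps at most c·n elements for a constant c < 1, then after k steps at most c^k·n remain, and c^k·n ≤ 1 once k ≥ log₂(n)/log₂(1/c), which is still O(log n). Only the constant factor changes.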
Different binary search variants modify the CONQUER phase in subtle but important ways. Understanding these modifications helps you implement variants correctly.
Standard Binary Search: stop as soon as arr[mid] == target and return the index.

Find First Occurrence: when arr[mid] == target, don't stop; record the index, then keep conquering the LEFT subspace in case an earlier occurrence exists. The implementations below contrast this and other variants.
```python
def find_first_occurrence(arr: list[int], target: int) -> int:
    """
    Find the FIRST occurrence of target.

    Key insight: When we find target, we don't stop!
    We record the find and keep conquering the LEFT subspace
    to see if there's an earlier occurrence.
    """
    left, right = 0, len(arr) - 1
    result = -1  # Track best result so far

    print(f"Finding FIRST occurrence of {target} in {arr}")

    while left <= right:
        mid = left + (right - left) // 2

        if arr[mid] == target:
            print(f"  Found at {mid}, but keep searching LEFT for earlier occurrence")
            result = mid     # Record this occurrence
            right = mid - 1  # But keep searching left!
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1

    print(f"  Final result: {result}")
    return result


def find_last_occurrence(arr: list[int], target: int) -> int:
    """
    Find the LAST occurrence of target.

    Mirror of find_first: when we find target,
    we keep conquering the RIGHT subspace.
    """
    left, right = 0, len(arr) - 1
    result = -1

    print(f"Finding LAST occurrence of {target} in {arr}")

    while left <= right:
        mid = left + (right - left) // 2

        if arr[mid] == target:
            print(f"  Found at {mid}, but keep searching RIGHT for later occurrence")
            result = mid
            left = mid + 1  # Keep searching right!
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1

    print(f"  Final result: {result}")
    return result


def search_in_rotated_array(arr: list[int], target: int) -> int:
    """
    Search in a rotated sorted array.

    Key insight: One half is ALWAYS sorted. We identify which
    half is sorted, then decide whether target could be in it.
    """
    left, right = 0, len(arr) - 1

    print(f"Searching for {target} in rotated array {arr}")

    while left <= right:
        mid = left + (right - left) // 2
        print(f"  Checking [{left}..{right}], mid={mid}, arr[mid]={arr[mid]}")

        if arr[mid] == target:
            return mid

        # Determine which half is sorted
        if arr[left] <= arr[mid]:  # Left half is sorted
            print(f"    Left half [{left}..{mid}] is sorted")
            if arr[left] <= target < arr[mid]:
                print(f"    Target {target} is in left sorted half")
                right = mid - 1
            else:
                print(f"    Target {target} is NOT in left half, go right")
                left = mid + 1
        else:  # Right half is sorted
            print(f"    Right half [{mid}..{right}] is sorted")
            if arr[mid] < target <= arr[right]:
                print(f"    Target {target} is in right sorted half")
                left = mid + 1
            else:
                print(f"    Target {target} is NOT in right half, go left")
                right = mid - 1

    return -1


# Examples
print("=" * 60)
find_first_occurrence([1, 2, 3, 3, 3, 3, 4, 5], 3)
print()
find_last_occurrence([1, 2, 3, 3, 3, 3, 4, 5], 3)
print()
search_in_rotated_array([4, 5, 6, 7, 0, 1, 2], 0)
```

Variant Patterns in Conquer Phase:
| Variant | Standard Conquer | Modified Conquer |
|---|---|---|
| First occurrence | Stop on find | Force left search on find |
| Last occurrence | Stop on find | Force right search on find |
| Rotated array | Use arr[mid] vs target | Also consider which half is sorted |
| Answer space | Compare arr[mid] vs target | Evaluate predicate on mid |
| Peak finding | Compare arr[mid] vs target | Compare arr[mid] vs arr[mid+1] |
The underlying structure remains the same: conquer ONE subproblem. What changes is how we decide WHICH subproblem to conquer.
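To illustrate the 'answer space' row above, here is a minimal sketch (our own example, not from this page) where the conquer decision evaluates a predicate on mid instead of comparing arr[mid] to a target: finding the smallest x with x² ≥ n.

```python
def smallest_x_with_square_at_least(n: int) -> int:
    """Binary search over the answer space [0, n]: smallest x with x*x >= n."""
    left, right = 0, n
    while left < right:
        mid = left + (right - left) // 2
        if mid * mid >= n:
            right = mid     # mid satisfies the predicate and may itself be the answer
        else:
            left = mid + 1  # mid fails the predicate; eliminate [left, mid] entirely
    return left


print(smallest_x_with_square_at_least(17))  # 5, since 4*4 = 16 < 17 <= 25 = 5*5
```

Note the different invariant here: right = mid is safe because the loop condition is left < right, under which mid < right, so each iteration still strictly shrinks the search space.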
We've explored the CONQUER phase of binary search in comprehensive detail. Let's consolidate the key insights:

- Binary search conquers exactly ONE subproblem; the other half is provably eliminated, not merely postponed.
- The sorted property supplies the logical proof behind every elimination.
- Recursive and iterative implementations share O(log n) time; the iterative version achieves O(1) space.
- The two base cases (target found, empty search space) are exhaustive, and strict shrinkage of the search space guarantees one is reached.
- The recurrence T(n) = T(n/2) + O(1) solves to O(log n); conquering both halves would degrade to O(n).
- Variants like first/last occurrence and rotated-array search change only HOW we decide which subproblem to conquer.
What's Next:
We've examined DIVIDE and CONQUER in detail. The final page will analyze the COMBINE phase—which is notably trivial in binary search. We'll explore why no combination is needed, how this contrasts with other D&C algorithms, and what this tells us about the structure of the search problem.
You now deeply understand the CONQUER phase of binary search—why only one subproblem is solved, how elimination works, the role of base cases, and how different variants modify the conquest strategy. The logarithmic efficiency of binary search stems directly from this single-subproblem conquest pattern.