In the previous page, we discovered that binary search can find optimal answers, not just array elements. But why does this work? What mathematical principle guarantees correctness?
The answer lies in monotonic functions and the elegant transformation of optimization problems into decision problems. This page provides the rigorous foundation that separates confident implementation from hopeful guessing.
Understanding these concepts isn't just academic—it equips you to verify, before writing any code, that binary search on the answer space will actually return the correct result.
By the end of this page, you will understand: (1) the formal definition of monotonic functions and their relevance to binary search, (2) how optimization problems 'reduce' to decision problems, (3) why the feasibility predicate must exhibit monotonicity, and (4) techniques to verify monotonicity in novel problems.
A function is monotonic if it preserves order—it either never decreases or never increases as its input increases.
Formal Definitions:
Monotonically Non-Decreasing (Weakly Increasing): A function f: ℝ → ℝ is monotonically non-decreasing if:
For all x₁ < x₂: f(x₁) ≤ f(x₂)
As input grows, output either grows or stays the same—never shrinks.
Monotonically Non-Increasing (Weakly Decreasing): A function f: ℝ → ℝ is monotonically non-increasing if:
For all x₁ < x₂: f(x₁) ≥ f(x₂)
As input grows, output either shrinks or stays the same—never grows.
Strictly Monotonic versions require strict inequality (output strictly increases or decreases).
| Function | Monotonicity | Explanation |
|---|---|---|
| f(x) = x² | Non-monotonic | Decreases for x < 0, increases for x > 0 |
| f(x) = x³ | Strictly increasing | Always increases as x increases |
| f(x) = 2x + 5 | Strictly increasing | Linear with positive slope |
| f(x) = -3x + 7 | Strictly decreasing | Linear with negative slope |
| f(x) = ⌊x⌋ (floor) | Non-decreasing | Increases in steps, stays flat between integers |
| sin(x) | Non-monotonic | Oscillates up and down |
| f(x) = 1/x (for x > 0) | Strictly decreasing | Shrinks as x grows (positive domain) |
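A quick sampled check confirms the table's first two rows. This is only a sketch—sampling a finite range can refute monotonicity but cannot prove it over all reals:

```python
def is_non_decreasing(values) -> bool:
    """True if the sampled sequence never decreases."""
    return all(a <= b for a, b in zip(values, values[1:]))

xs = range(-5, 6)
print(is_non_decreasing([x**3 for x in xs]))  # True  — x³ is monotonic
print(is_non_decreasing([x**2 for x in xs]))  # False — x² dips, then rises
```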
Why does monotonicity matter for binary search?
Consider searching for the smallest x where f(x) ≥ threshold. If f is monotonically non-decreasing and f(mid) ≥ threshold, the answer lies at mid or to its left, so the entire right half can be discarded; if f(mid) < threshold, then mid and everything to its left fall short, so the left half can be discarded.
This divide-and-conquer property is what binary search exploits. Without monotonicity, we can't safely discard any half—we'd have to check every value.
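A concrete sketch of that halving argument, using f(x) = x²—which the table above shows is monotonic for x ≥ 0 (the function and bounds here are illustrative):

```python
def first_at_least(threshold: int, lo: int = 0, hi: int = 10**9) -> int:
    """Smallest x in [lo, hi] with f(x) >= threshold, for monotonic f."""
    def f(x: int) -> int:
        return x * x  # monotonically non-decreasing for x >= 0

    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid) >= threshold:
            hi = mid      # mid satisfies the condition: keep it, discard the right half
        else:
            lo = mid + 1  # mid falls short: discard mid and everything to its left
    return lo

print(first_at_least(17))  # smallest x with x*x >= 17, i.e. 5
```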
Binary search requires monotonicity (or a monotonic property). For sorted arrays, the property is 'arr[i] ≤ arr[j] for i < j'. For answer-space search, it's 'feasibility is monotonic with respect to the answer value.'
In binary search on answer space, our feasibility predicate returns boolean values (True/False), not numbers. Boolean monotonicity has a special structure:
Definition: Boolean Monotonic Function
A boolean function f: ℤ → {True, False} is monotonic if its values form a contiguous block:
Pattern 1 (non-decreasing boolean → for minimization problems):

```
[False, False, ..., False, True, True, ..., True]
      infeasible            feasible
                            └── first True is the answer
```

Pattern 2 (non-increasing boolean → for maximization problems):

```
[True, True, ..., True, False, False, ..., False]
     feasible            infeasible
  └── last True is the answer
```
The key insight: there's exactly one transition point from one boolean value to the other. Binary search finds this transition.
```python
def visualize_boolean_monotonicity(lo: int, hi: int, is_feasible) -> None:
    """
    Visualize the boolean partition created by a monotonic feasibility function.

    For minimization: should see [False, False, ..., False, True, True, ..., True]
    For maximization: should see [True, True, ..., True, False, False, ..., False]
    """
    print(f"Feasibility across [{lo}, {hi}]:")
    print()
    results = []
    first_true = None
    last_true = None
    for x in range(lo, hi + 1):
        result = is_feasible(x)
        results.append(("T" if result else "F", x))
        if result and first_true is None:
            first_true = x
        if result:
            last_true = x
    # Print first 20 and last 5 for large ranges
    to_print = results[:20] + ["..."] + results[-5:] if len(results) > 30 else results
    print(" ".join(str(r[0]) + f"({r[1]})" if isinstance(r, tuple) else r for r in to_print))
    print()
    print(f"First True (min answer): {first_true}")
    print(f"Last True (max answer): {last_true}")


# Example: Ship packages problem
def can_ship(capacity: int) -> bool:
    weights = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    D = 5
    days_needed = 1
    current_load = 0
    for w in weights:
        if current_load + w <= capacity:
            current_load += w
        else:
            days_needed += 1
            current_load = w
    return days_needed <= D


# Visualization
visualize_boolean_monotonicity(10, 25, can_ship)
# Output:
# Feasibility across [10, 25]:
# F(10) F(11) F(12) F(13) F(14) T(15) T(16) T(17) T(18) T(19) T(20) T(21) T(22) T(23) T(24) T(25)
# First True (min answer): 15
# Last True (max answer): 25
```

The partition property guarantees binary search correctness:
When we check feasibility at mid:
- `is_feasible(mid) == True` (for minimization): the answer is at mid or to its left. All values right of mid are also feasible but suboptimal.
- `is_feasible(mid) == False`: the answer is to the right. All values left of mid are also infeasible.

This is precisely the logic that allows eliminating half the search space each iteration.
If your feasibility function produces [True, False, True, False, ...], binary search WILL fail. There's no single 'boundary' to find. Before implementing, always verify that feasibility transitions exactly once across your search range.
Binary search on answer space fundamentally works by reducing optimization problems to decision problems. This is a classic technique in computer science with deep theoretical implications.
What are optimization and decision problems?
Optimization Problem: 'What is the minimum/maximum value X such that [constraints are satisfied]?'
Decision Problem: 'Given a value X, is [constraint satisfied]?'
The reduction insight:
If you have an efficient solver for the decision version, you can solve the optimization version by binary searching over possible answers and querying the decision oracle.
```
Optimization(constraints)        →   Decision(constraints, X) for various X
          ↓                                        ↓
Binary search finds optimal X              'Is X feasible?'
```
Why is the decision problem often easier?
Verifying an answer is frequently simpler than computing it. Consider the shipping problem: finding the minimum capacity outright requires reasoning about all capacities at once, but checking whether one given capacity works is a single greedy pass over the packages.
The optimization problem asks 'What value?'; the decision problem only asks 'Does this value work?'
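If the decision oracle is monotonic, the reduction itself is only a few lines. Here is a generic sketch—the toy oracle passed in at the bottom is purely illustrative:

```python
def minimize_via_decision(lo: int, hi: int, is_feasible) -> int:
    """Solve 'find the minimum feasible X' by binary searching candidate
    answers and querying the decision oracle is_feasible at each step.
    Assumes is_feasible is monotonically non-decreasing on [lo, hi]
    and that at least one value in the range is feasible."""
    while lo < hi:
        mid = (lo + hi) // 2
        if is_feasible(mid):
            hi = mid      # mid works; the optimum is mid or smaller
        else:
            lo = mid + 1  # mid fails; the optimum is strictly larger
    return lo  # first feasible value

# Toy oracle: 'is x at least 8?' — stands in for any monotonic decision check
print(minimize_via_decision(1, 100, lambda x: x >= 8))  # 8
```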
```
weights = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], D = 5 days
Minimum capacity = 15

Optimization: 'Find minimum capacity'
  → Reduction to decision: 'For capacity C, can we ship in 5 days?'
  → Binary search calls decision oracle for C = 32, 21, 16, 13, 15, 14
  → Finds C = 15 is first feasible value
  → Returns 15 as the minimum
```
Complexity of the reduction:
Let R = hi − lo + 1 be the size of the answer range and T_decision be the cost of one feasibility check. Then solving the optimization takes:
T_optimization = O(log R) × T_decision
This is efficient when the decision check is fast (say, linear in the input size), because log R stays small even for enormous ranges.
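Plugging in numbers makes the payoff concrete. The range size and per-check cost below are illustrative, not from a specific problem:

```python
import math

# How many oracle queries does the reduction need for a huge answer range?
R = 10**18                        # size of the answer range
queries = math.ceil(math.log2(R))
print(queries)                    # 60 — even an astronomically large range is cheap

# Total cost if one decision check scans n = 10^5 items:
n = 10**5
print(queries * n)                # 6,000,000 elementary operations — trivial
```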
This optimization-to-decision reduction is fundamental in complexity theory. It relates to concepts like NP-completeness, where decision versions of problems (e.g., 'Is there a Hamiltonian path?') are used to prove hardness of optimization versions ('What's the shortest Hamiltonian path?').
Before applying binary search on answer, you must establish that feasibility is monotonic. Here's a systematic approach to proving this:
The Monotonicity Proof Template:
Example: Shipping Problem Monotonicity Proof
Claim: If we can ship all packages in D days with capacity C, we can also ship with capacity C+1.
Proof:
"""Monotonicity Proof: Shipping Problem Feasibility: can_ship(weights, D, capacity) -> bool Claim: If can_ship(weights, D, C) == True, then can_ship(weights, D, C+1) == True Intuitive argument (simulation-based):- With capacity C, suppose we ship packages using a greedy assignment- Each day, we load packages until adding one more would exceed C- This produces some schedule S with days_needed(S) <= D - Now consider capacity C+1:- We use the SAME schedule S- Every package fits (since each day's load <= C < C+1)- Therefore days_needed(S) <= D still holds - Hence if C works, C+1 works Mathematical formalization:Let schedule(C) = optimal # of days for capacity CThen: schedule(C+1) <= schedule(C) for all C This is a non-increasing function, which gives:- For minimization (find smallest working C): can_ship(C) = True → can_ship(C+1) = True (non-decreasing boolean) The sequence looks like: [False, False, ..., False, True, True, ..., True]""" def verify_monotonicity(weights, D, lo, hi): """ Empirically verify monotonicity of the shipping feasibility function. Returns True if monotonicity holds across the range. """ def can_ship(capacity): days = 1 load = 0 for w in weights: if load + w <= capacity: load += w else: days += 1 load = w return days <= D prev_result = can_ship(lo) transition_count = 0 for c in range(lo + 1, hi + 1): curr_result = can_ship(c) if prev_result != curr_result: transition_count += 1 if prev_result == True: # True -> False is wrong direction print(f"ERROR: Transition True->False at capacity {c}") return False prev_result = curr_result print(f"Monotonicity verified! Transitions: {transition_count}") return transition_count <= 1 # Should transition at most once # Verify for shipping problemweights = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]D = 5verify_monotonicity(weights, D, 10, 55)# Output: Monotonicity verified! Transitions: 1Common monotonicity patterns:
| Problem Type | Monotonicity Direction | More X means... |
|---|---|---|
| Min capacity | Non-decreasing feasibility | Easier to fit items |
| Min time | Non-decreasing feasibility | More time = easier completion |
| Max items with budget | Non-increasing feasibility | More items = harder to afford |
| Min workers for tasks | Non-decreasing feasibility | More workers = easier completion |
| Max distance between points | Non-increasing feasibility | Larger gap = harder to place points |
You don't always need a rigorous mathematical proof. A convincing argument like 'if I have more capacity, I can certainly fit the same packages in fewer trips' is usually sufficient. The key is being explicit about why more of X makes the problem easier (or harder).
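The table's maximization rows (non-increasing feasibility, where binary search hunts the last True) can be sketched like this. The example problem and numbers are illustrative:

```python
def max_items_within_budget(prices: list[int], budget: int) -> int:
    """Largest k such that the k cheapest items fit in the budget.
    Feasibility is non-increasing in k: [True, ..., True, False, ..., False],
    so binary search looks for the LAST True instead of the first."""
    cheapest = sorted(prices)

    def is_feasible(k: int) -> bool:
        return sum(cheapest[:k]) <= budget

    lo, hi = 0, len(prices)
    while lo < hi:
        mid = (lo + hi + 1) // 2  # round up so the loop always makes progress
        if is_feasible(mid):
            lo = mid       # mid items fit; try to take more
        else:
            hi = mid - 1   # mid items don't fit; need fewer
    return lo

print(max_items_within_budget([4, 10, 2, 7, 5], 12))  # 3 (items 2 + 4 + 5 = 11)
```

Note the upward-rounding midpoint: when searching for the last True, `mid = (lo + hi + 1) // 2` prevents the infinite loop that `(lo + hi) // 2` would cause when `hi == lo + 1`.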
Not all problems have monotonic feasibility. Recognizing when binary search on answer won't work is as important as knowing when it will.
Red flags indicating non-monotonicity:
Find X such that X² = 16. The solutions are X = 4 or X = −4. This is NOT a monotonic feasibility problem:
The pattern [F, F, F, T, F, F, F, ...] is NOT monotonic. Binary search would likely miss X = 4 entirely.
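You can see the broken pattern directly by printing the predicate over a small range—a quick diagnostic sketch worth running before committing to binary search:

```python
def is_feasible(x: int) -> bool:
    return x * x == 16  # exact-equality predicate — not monotonic

# Sample the predicate across [-6, 6] and render it as a T/F string
pattern = "".join("T" if is_feasible(x) else "F" for x in range(-6, 7))
print(pattern)  # FFTFFFFFFFTFF — two isolated True islands, no single boundary
```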
Another counter-example: Scheduling with exact timing
Problem: Find the start time T such that all tasks complete exactly at time 100.
This is non-monotonic because feasibility doesn't form a monotonic pattern—it's True only at one exact point.
What to do when monotonicity fails:
If your function increases then decreases (or vice versa)—like finding the maximum of a parabola—that's unimodal, not monotonic. Use ternary search instead of binary search. Ternary search handles unimodal functions by eliminating 1/3 of the range per iteration.
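A minimal ternary search sketch for the unimodal case mentioned in the tip (the parabola below is a stand-in for any single-peaked function):

```python
def ternary_search_max(lo: float, hi: float, f, iterations: int = 200) -> float:
    """Locate the maximizer of a unimodal function f on [lo, hi].
    Each iteration compares two interior probes and discards one third
    of the range — the third that provably cannot contain the peak."""
    for _ in range(iterations):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1   # the peak cannot lie left of m1
        else:
            hi = m2   # the peak cannot lie right of m2
    return (lo + hi) / 2

# Parabola peaking at x = 2:
peak = ternary_search_max(-10.0, 10.0, lambda x: -(x - 2) ** 2)
print(round(peak, 6))  # ≈ 2.0
```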
Binary search on answer works for both discrete (integer) and continuous (real-valued) answer spaces, but the implementation differs slightly.
Discrete (Integer) Search Space:
- Loop condition: `while lo <= hi`
- Updates: `lo = mid + 1` or `hi = mid - 1`
- Termination: when `lo > hi`, the loop ends

Continuous (Real) Search Space:
- Can't loop on `lo <= hi` (that would require infinite precision)
- Loop condition: `while hi - lo > epsilon` (e.g., epsilon = 1e-9)
- Updates: `lo = mid` or `hi = mid` (no +1/−1)
- Return `(lo + hi) / 2` at the end
```python
import math


def binary_search_continuous(lo: float, hi: float, is_feasible,
                             epsilon: float = 1e-9) -> float:
    """
    Binary search on continuous (real-valued) answer space.
    Finds the minimum x in [lo, hi] such that is_feasible(x) == True.
    Assumes monotonic feasibility: [False, False, ..., True, True, ...]

    Args:
        lo: Lower bound (must be infeasible or boundary)
        hi: Upper bound (must be feasible)
        is_feasible: Monotonic predicate function
        epsilon: Precision threshold

    Returns:
        Minimum feasible value within epsilon precision
    """
    # Method 1: Epsilon-based termination
    while hi - lo > epsilon:
        mid = (lo + hi) / 2
        if is_feasible(mid):
            hi = mid  # Answer is at mid or to the left
        else:
            lo = mid  # Answer is to the right
    return (lo + hi) / 2


def binary_search_continuous_iterations(lo: float, hi: float, is_feasible,
                                        iterations: int = 100) -> float:
    """
    Alternative: Fixed number of iterations (more predictable runtime).

    100 iterations gives precision of (hi - lo) / 2^100 ≈ 10^-30 * original range
    More than enough for double precision floats.
    """
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if is_feasible(mid):
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2


# Example: Find minimum radius to cover all points with circles of equal radius
def min_radius_to_cover_points(points: list[tuple[float, float]],
                               centers: list[tuple[float, float]]) -> float:
    """
    Find minimum radius r such that every point is within distance r
    of at least one center.
    """
    def is_feasible(radius: float) -> bool:
        # Check if all points are covered by some circle
        for px, py in points:
            covered = False
            for cx, cy in centers:
                dist = math.sqrt((px - cx)**2 + (py - cy)**2)
                if dist <= radius + 1e-12:  # Small epsilon for float comparison
                    covered = True
                    break
            if not covered:
                return False
        return True

    # Bounds: lo = 0, hi = max possible distance
    lo = 0.0
    hi = 0.0
    for px, py in points:
        for cx, cy in centers:
            dist = math.sqrt((px - cx)**2 + (py - cy)**2)
            hi = max(hi, dist)

    return binary_search_continuous(lo, hi, is_feasible)


# Example usage
points = [(0, 0), (3, 4), (6, 0)]
centers = [(1, 1), (5, 1)]
print(f"Minimum radius: {min_radius_to_cover_points(points, centers):.6f}")
```

Key differences summarized:
| Aspect | Discrete | Continuous |
|---|---|---|
| Values | Integers | Real numbers |
| Loop condition | lo <= hi | hi - lo > epsilon or fixed iterations |
| Updates | mid ± 1 | mid (no offset) |
| Precision | Exact | Limited by epsilon |
| Iterations | O(log(hi - lo)) | O(log((hi - lo)/epsilon)) |
| Termination | Automatic | Controlled by epsilon/iterations |
For continuous binary search, using 100 fixed iterations is often cleaner than epsilon comparisons. It avoids floating-point precision issues and gives consistent runtime. 100 iterations provides ~30 decimal digits of precision—far more than doubles can represent.
Let's formalize everything we've discussed into precise mathematical statements. This rigor helps in proving correctness and extending the technique to novel problems.
Definition 1: Answer Space
The answer space A = [lo, hi] ⊂ ℤ (or ℝ) is the set of all candidate answers to an optimization problem.
Definition 2: Feasibility Predicate
The feasibility predicate f: A → {True, False} determines whether a candidate answer satisfies all problem constraints.
Definition 3: Monotonic Feasibility (for minimization)
f is monotonically non-decreasing if:
∀ x, y ∈ A where x < y: f(x) ⟹ f(y)
(If x is feasible, then all values greater than x are also feasible.)
Theorem: Binary Search Correctness
Given answer space A = [lo, hi] and monotonically non-decreasing predicate f where f(hi) = True (at least one feasible value exists):
Binary search finds x* = min{x ∈ A : f(x) = True} in O(log|A|) queries to f.
Proof Sketch:
Invariant: Throughout execution, the answer x* lies in [lo, hi].
Base case: Initially, [lo, hi] = A, which contains x*.
Inductive step: At each iteration:
Termination: Range shrinks by ~half each iteration, so terminates in O(log|A|) steps.
Correctness: At termination, lo = hi = x* (the unique minimum feasible value).
Loop Invariant for Implementation:
At every iteration:
```python
def binary_search_minimization_with_invariants(lo: int, hi: int, is_feasible) -> int:
    """
    Binary search with explicit loop invariant annotations.
    Finds minimum x such that is_feasible(x) == True.

    Precondition: is_feasible is monotonically non-decreasing on [lo, hi]
    Precondition: is_feasible(hi) == True (at least one feasible value exists)
    Postcondition: Returns x* = min{x : is_feasible(x) == True}
    """
    # INVARIANT: answer ∈ [lo, hi]
    # INVARIANT: ∀ x < lo: is_feasible(x) == False
    # INVARIANT: ∀ x > hi: is_feasible(x) == True (but suboptimal)
    original_lo, original_hi = lo, hi
    answer = hi  # Default to upper bound (known feasible)

    while lo <= hi:
        # ASSERTION: answer ∈ [lo, original_hi]
        # ASSERTION: is_feasible(answer) == True
        mid = lo + (hi - lo) // 2  # Avoid overflow
        if is_feasible(mid):
            # mid is feasible; it might be the answer, or something smaller is
            # ASSERTION: answer ∈ [lo, mid]
            answer = mid       # Update best known feasible
            hi = mid - 1       # Search for smaller feasible
            # MAINTAINS: ∀ x > hi (now = mid-1): is_feasible(x) == True
        else:
            # mid is infeasible; answer must be larger
            # ASSERTION: answer ∈ [mid+1, hi]
            lo = mid + 1       # Search for feasible in right half
            # MAINTAINS: ∀ x < lo (now = mid+1): is_feasible(x) == False

    # POSTCONDITION: lo > hi, so range is empty
    # POSTCONDITION: answer == min{x : is_feasible(x) == True}
    return answer
```

These formal statements aren't just academic exercises. They help you: (1) verify preconditions before applying the technique, (2) debug when things go wrong by checking invariants, and (3) extend the approach to new problems with confidence.
In practice, how do you verify that a problem has monotonic feasibility before committing to a binary search approach? Here are practical techniques:
```python
def check_monotonicity(lo: int, hi: int, is_feasible,
                       expected_direction: str = "non-decreasing") -> bool:
    """
    Empirically verify monotonicity of a feasibility function.

    Args:
        lo, hi: Range to check
        is_feasible: The predicate function
        expected_direction: "non-decreasing" (for minimization) or
                            "non-increasing" (for maximization)

    Returns:
        True if monotonicity holds, False otherwise
        Also prints diagnostic information if monotonicity fails.
    """
    results = [(x, is_feasible(x)) for x in range(lo, hi + 1)]

    # Find transitions
    transitions = []
    for i in range(1, len(results)):
        prev_x, prev_f = results[i - 1]
        curr_x, curr_f = results[i]
        if prev_f != curr_f:
            transitions.append((prev_x, prev_f, curr_x, curr_f))

    # Check expected pattern
    if expected_direction == "non-decreasing":
        # Should only have False->True transitions
        for prev_x, prev_f, curr_x, curr_f in transitions:
            if prev_f == True and curr_f == False:
                print(f"❌ VIOLATION: True->False at {prev_x}->{curr_x}")
                print(f"   Expected: non-decreasing (F, F, ..., T, T, ...)")
                return False
    else:  # non-increasing
        # Should only have True->False transitions
        for prev_x, prev_f, curr_x, curr_f in transitions:
            if prev_f == False and curr_f == True:
                print(f"❌ VIOLATION: False->True at {prev_x}->{curr_x}")
                print(f"   Expected: non-increasing (T, T, ..., F, F, ...)")
                return False

    if len(transitions) > 1:
        print(f"⚠ WARNING: {len(transitions)} transitions found (expected at most 1)")
        print(f"   Transitions: {transitions}")

    print(f"✓ Monotonicity ({expected_direction}) verified "
          f"with {len(transitions)} transition(s)")
    return True


# Example usage
def can_ship(weights, D, capacity):
    days = 1
    load = 0
    for w in weights:
        if load + w <= capacity:
            load += w
        else:
            days += 1
            load = w
    return days <= D


weights = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
D = 5
is_feasible = lambda c: can_ship(weights, D, c)

check_monotonicity(10, 30, is_feasible, "non-decreasing")
```

Best practice: Start with intuition, back it up with a quick informal proof, then verify empirically on test cases. This three-layer approach catches errors that slip through any single method.
We've established the rigorous mathematical foundation for binary search on answer space. Let's consolidate: monotonicity guarantees a single feasibility boundary, the optimization-to-decision reduction lets a cheap oracle answer an expensive question, and the loop-invariant argument proves the search finds that boundary in O(log|A|) queries—for both discrete and continuous answer spaces.
What's next:
With the mathematical foundation in place, the next page explores a powerful class of problems: minimizing the maximum and maximizing the minimum. These patterns appear constantly in competitive programming and systems design, and they're perfectly suited for binary search on answer space.
You now understand the mathematical principles that make binary search on answer space correct: monotonic functions, optimization-to-decision reduction, and the boolean partition of the search space. This foundation prepares you to tackle the sophisticated 'minimax' and 'maximin' patterns coming next.