Everyday optimization challenges, such as balancing workload across workers or spacing out shared resources, belong to a family called minimax (minimize the maximum) and maximin (maximize the minimum) problems. They optimize for fairness or worst-case guarantees rather than totals or averages.
Binary search on answer space solves these problems with elegant efficiency. This page teaches you to recognize, formulate, and solve minimax/maximin problems.
By the end of this page, you will: (1) understand the minimax and maximin optimization patterns, (2) recognize their presence in problem statements, (3) formulate them as binary search on answer problems, and (4) implement solutions for classic examples from each category.
Minimax means finding a configuration that minimizes the largest element among a set of values. It optimizes for the worst case.
Mathematical formulation:
Minimize: max(f₁(x), f₂(x), ..., fₙ(x))
Subject to: x satisfies constraints
You're not minimizing the sum or average—you're specifically targeting the largest value and pushing it down as low as possible.
Why minimax matters:
| Objective | Formula | Optimizes For | Example |
|---|---|---|---|
| Minimize sum | min Σxᵢ | Total cost | Shortest total distance |
| Minimize average | min (Σxᵢ)/n | Typical case | Average wait time |
| Minimax | min max(xᵢ) | Worst case | Maximum worker load |
| Minimize variance | min Var(x) | Consistency | Predictable latency |
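A tiny illustration of why these objectives differ (the splits below are assumed example data, using the same array as the split-array problem later on this page): two ways to split one array have identical totals, so minimize-sum and minimize-average cannot tell them apart, but the minimax objective can.

```python
# Two ways to split [7, 2, 5, 10, 8] into two contiguous groups (example data).
split_a = [[7, 2, 5], [10, 8]]
split_b = [[7, 2], [5, 10, 8]]

for name, split in [("A", split_a), ("B", split_b)]:
    sums = [sum(group) for group in split]
    print(name, "group sums:", sums, "total:", sum(sums), "max:", max(sums))

# Both totals are 32, so the sum and average objectives see the splits as
# equivalent; the minimax objective prefers split A (max 18 vs 23).
```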
The transformation to binary search:
Minimax optimization transforms beautifully into binary search:
If we can check whether maximum ≤ M is achievable for any given M, we binary search to find the smallest such M.
Why is this monotonic?
If we can achieve maximum ≤ M, we can certainly achieve maximum ≤ M+1 (any valid configuration for M also works for M+1). This gives the non-decreasing feasibility pattern binary search requires.
'Find the minimum possible maximum X' becomes 'For a given limit M, can we ensure all values are ≤ M?' + binary search on M. The feasibility check typically involves a greedy or simulation approach.
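As a sanity check, here is a minimal sketch of that monotone feasibility pattern, using the array [7, 2, 5, 10, 8] with k = 2 (the split-array example worked through below). Scanning the limit M across the answer space shows a clean run of False followed by a run of True; binary search finds the boundary.

```python
def feasible(nums: list[int], k: int, limit: int) -> bool:
    """Greedy check: can nums be split into <= k contiguous groups, each sum <= limit?"""
    groups, current = 1, 0
    for x in nums:
        if x > limit:
            return False          # a single element already exceeds the limit
        if current + x <= limit:
            current += x          # extend the current group
        else:
            groups, current = groups + 1, x  # start a new group
    return groups <= k

nums, k = [7, 2, 5, 10, 8], 2
pattern = "".join("T" if feasible(nums, k, m) else "F" for m in range(10, 33))
print(pattern)  # a run of F's followed by a run of T's; the first T is at m = 18
```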
Problem Statement (LeetCode 410):
Given an integer array `nums` and an integer `k`, split `nums` into `k` non-empty contiguous subarrays. Minimize the largest sum among these `k` subarrays.
Example:
nums = [7, 2, 5, 10, 8], k = 2. The answer is 18: split into [7, 2, 5] (sum = 14) and [10, 8] (sum = 18), so the maximum is 18. Splitting into [7, 2, 5, 10] and [8] gives max(24, 8) = 24, which is worse. This is a minimax problem: minimize the maximum subarray sum.
Binary search approach: the minimum possible answer is max(nums) (some subarray must fit the largest element), and the maximum possible is sum(nums) (everything in one group).
```python
def splitArray(nums: list[int], k: int) -> int:
    """
    LeetCode 410: Split Array Largest Sum
    Find the minimum possible value of the largest sum among k subarrays.

    Approach: Binary search on answer space
    - Answer space: [max(nums), sum(nums)]
    - Feasibility: can we split into <= k groups with each sum <= limit?

    Time Complexity: O(n * log(sum - max))
    Space Complexity: O(1)
    """
    def can_split_with_max_sum(max_sum: int) -> bool:
        """
        Feasibility predicate: Can we split nums into at most k subarrays
        where each subarray has sum <= max_sum?

        Strategy: Greedily extend the current subarray until adding more
        would exceed max_sum.

        Monotonicity: If we can split with limit M, we can split with
        limit M+1 (the same split works; all sums are still <= M+1).
        """
        groups_needed = 1
        current_sum = 0
        for num in nums:
            # Try to add num to the current group
            if current_sum + num <= max_sum:
                current_sum += num
            else:
                # Start a new group
                groups_needed += 1
                current_sum = num
                # Early termination
                if groups_needed > k:
                    return False
        return groups_needed <= k

    # Define search space
    lo = max(nums)  # Each group must fit its largest element
    hi = sum(nums)  # Worst case: everything in one group

    # Binary search for the minimum feasible max_sum
    answer = hi
    while lo <= hi:
        mid = lo + (hi - lo) // 2
        if can_split_with_max_sum(mid):
            answer = mid  # mid works; try smaller
            hi = mid - 1
        else:
            lo = mid + 1  # mid too small; need larger
    return answer


# Test cases
print(splitArray([7, 2, 5, 10, 8], 2))  # Output: 18
print(splitArray([1, 2, 3, 4, 5], 2))   # Output: 9 (split: [1,2,3], [4,5])
print(splitArray([1, 4, 4], 3))         # Output: 4 (split: [1], [4], [4])

# Trace for [7, 2, 5, 10, 8], k = 2:
# lo = 10, hi = 32
# mid = 21: can_split? [7,2,5]=14, [10,8]=18 -> 2 groups ✓ -> hi = 20
# mid = 15: can_split? [7,2,5]=14, [10]=10, [8] -> 3 groups ✗ -> lo = 16
# mid = 18: can_split? [7,2,5]=14, [10,8]=18 -> 2 groups ✓ -> hi = 17
# mid = 16: can_split? [7,2,5]=14, [10]=10, [8] -> 3 groups ✗ -> lo = 17
# mid = 17: can_split? [7,2,5]=14, [10]=10, [8] -> 3 groups ✗ -> lo = 18
# lo > hi, return answer = 18
```

The feasibility check is greedy: extend the current subarray as long as possible without exceeding the limit, then start a new one. This greedy approach is optimal for minimizing the number of subarrays, which is exactly what we need to verify feasibility.
Maximin is the dual of minimax: find a configuration that maximizes the smallest element. It optimizes for the worst-off case.
Mathematical formulation:
Maximize: min(f₁(x), f₂(x), ..., fₙ(x))
Subject to: x satisfies constraints
Why maximin matters: it optimizes for the worst-off participant, guaranteeing that the weakest link in the configuration is as strong as possible.
The transformation to binary search:
Maximin transforms to binary search similarly:
If we can check whether minimum ≥ M is achievable, we binary search to find the largest such M.
Why is this monotonic?
If we can achieve minimum ≥ M+1, we can certainly achieve minimum ≥ M (any configuration achieving M+1 also achieves M). This gives feasibility that's non-increasing in M—True for small M, False for large M. We want the largest M that's still True.
'Find the maximum possible minimum X' becomes 'For a given threshold M, can we ensure all values are ≥ M?' + binary search on M. Note the direction reversal: we search for the largest feasible M, not the smallest.
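That reversed monotone pattern can be sketched the same way, using the aggressive-cows example worked through below: stalls = [1, 2, 4, 8, 9] with c = 3. Scanning the required gap D now shows a run of True followed by a run of False, and the answer is the last True.

```python
def feasible(stalls: list[int], c: int, gap: int) -> bool:
    """Greedy check: can c cows be placed so every pairwise gap is >= gap?"""
    placed, last = 1, stalls[0]   # first cow at the leftmost stall
    for s in stalls[1:]:
        if s - last >= gap:
            placed, last = placed + 1, s
    return placed >= c

stalls = sorted([1, 2, 4, 8, 9])
pattern = "".join("T" if feasible(stalls, 3, d) else "F" for d in range(1, 9))
print(pattern)  # a run of T's followed by a run of F's; the last T is at d = 3
```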
Problem Statement (SPOJ AGGRCOW / LeetCode variants):
You have `n` stalls at positions along a line. You need to place `c` cows in these stalls such that the minimum distance between any two cows is as large as possible. Find this maximum minimum distance.
Example:
stalls = [1, 2, 4, 8, 9], c = 3. The answer is 3: place cows at positions 1, 4, and 8, giving gaps of 3 and 4. This is a maximin problem: maximize the minimum distance between cows.
Binary search approach: the minimum possible distance is 1, and the maximum possible is (max_stall - min_stall) // (c - 1), the gap achieved if the cows were spaced evenly.
```python
def maxMinDistance(stalls: list[int], c: int) -> int:
    """
    Aggressive Cows: Maximize the minimum distance between c cows in stalls.

    Approach: Binary search on answer space
    - Answer space: [1, (max_stall - min_stall) // (c - 1)]
    - Feasibility: can we place c cows with minimum gap >= distance?

    Time Complexity: O(n log n) for sorting + O(n log(max_gap)) for binary search
    Space Complexity: O(1) extra (or O(n) for sorting)
    """
    # Sort stalls by position
    stalls.sort()
    n = len(stalls)

    def can_place_cows(min_distance: int) -> bool:
        """
        Feasibility predicate: Can we place c cows such that every pair
        of cows is at least min_distance apart?

        Strategy: Greedily place cows left-to-right. Place the first cow
        at the leftmost stall; place each subsequent cow at the first
        stall that is >= min_distance from the last placed cow.

        Monotonicity: If we can place with gap >= D, we can place with
        gap >= D-1 (the same placement works; gaps are still >= D-1).
        So feasibility is non-increasing in min_distance.
        """
        cows_placed = 1
        last_position = stalls[0]  # Place first cow at leftmost stall
        for i in range(1, n):
            if stalls[i] - last_position >= min_distance:
                cows_placed += 1
                last_position = stalls[i]
                if cows_placed >= c:
                    return True  # Early termination
        return cows_placed >= c

    # Define search space
    lo = 1                                    # Minimum possible distance
    hi = (stalls[-1] - stalls[0]) // (c - 1)  # Maximum possible if evenly spaced

    # Binary search for the maximum feasible min_distance
    # Note: for maximin, we want the LARGEST feasible value
    answer = 1
    while lo <= hi:
        mid = lo + (hi - lo) // 2
        if can_place_cows(mid):
            answer = mid   # mid works; try larger
            lo = mid + 1   # Search right for a larger feasible distance
        else:
            hi = mid - 1   # mid too large; need smaller
    return answer


# Test cases
print(maxMinDistance([1, 2, 4, 8, 9], 3))  # Output: 3
print(maxMinDistance([1, 2, 8, 4, 9], 3))  # Output: 3 (same after sorting)
print(maxMinDistance([1, 2, 3, 4, 7], 3))  # Output: 3 (place at 1, 4, 7)

# Trace for [1, 2, 4, 8, 9], c = 3:
# After sorting: [1, 2, 4, 8, 9]
# lo = 1, hi = (9 - 1) // (3 - 1) = 4
# mid = 2: can_place? Cow1@1, Cow2@4 (4-1 >= 2 ✓), Cow3@8 (8-4 >= 2 ✓) -> 3 cows ✓ -> lo = 3
# mid = 3: can_place? Cow1@1, Cow2@4 (4-1 >= 3 ✓), Cow3@8 (8-4 >= 3 ✓) -> 3 cows ✓ -> lo = 4
# mid = 4: can_place? Cow1@1, Cow2@8 (8-1 >= 4 ✓), Cow3? (9-8 = 1 < 4 ✗) -> 2 cows ✗ -> hi = 3
# lo > hi, return answer = 3
```

Notice the direction change! In maximin, when feasible we search RIGHT (lo = mid + 1) to find larger values. In minimax, when feasible we search LEFT (hi = mid - 1) to find smaller values. Getting this wrong is a common bug.
Pattern recognition is crucial. Here's how to identify these problems from their statements:
| Keywords | Problem Type | Binary Search Direction |
|---|---|---|
| 'minimize the maximum' | Minimax | Search left when feasible (smaller) |
| 'minimize the largest' | Minimax | Search left when feasible |
| 'minimize worst-case' | Minimax | Search left when feasible |
| 'balance workload' | Minimax | Search left when feasible |
| 'maximize the minimum' | Maximin | Search right when feasible (larger) |
| 'maximize the smallest' | Maximin | Search right when feasible |
| 'maximize worst-case guarantee' | Maximin | Search right when feasible |
| 'maximize the gap' | Maximin | Search right when feasible |
The key insight for recognition:
Ask yourself: what am I optimizing? If you are pushing the largest value down, it's minimax; if you are pulling the smallest value up, it's maximin.
Common disguises:
These problems often appear without using 'min' or 'max' explicitly.
If you can rephrase the problem as 'Can all values be kept below/above threshold T?' where you want the smallest/largest such T, you have a minimax/maximin problem solvable by binary search.
Here's a systematic framework for transforming any minimax/maximin problem into binary search:
Step-by-Step Framework:
1. Define the answer-space bounds (lo, hi). 2. Design a greedy feasibility predicate. 3. Binary search in the correct direction (left for minimax, right for maximin).

```python
# TEMPLATE: Minimax (Minimize the Maximum)
def solve_minimax(input_data):
    """
    Template for minimax problems.
    Goal: Minimize the maximum value in some configuration.
    """
    def compute_bounds():
        # Lower bound: smallest possible maximum
        # Upper bound: largest possible maximum
        lo = minimum_possible_answer(input_data)
        hi = maximum_possible_answer(input_data)
        return lo, hi

    def is_feasible(max_limit):
        """
        Can we create a configuration where the maximum is <= max_limit?
        Usually implemented as a greedy simulation.
        """
        # Greedy: try to satisfy constraints with this limit
        # Return True if possible, False otherwise
        pass

    lo, hi = compute_bounds()
    answer = hi
    while lo <= hi:
        mid = lo + (hi - lo) // 2
        if is_feasible(mid):
            answer = mid   # This limit works
            hi = mid - 1   # Try a smaller limit (SEARCH LEFT)
        else:
            lo = mid + 1   # Need a larger limit
    return answer


# TEMPLATE: Maximin (Maximize the Minimum)
def solve_maximin(input_data):
    """
    Template for maximin problems.
    Goal: Maximize the minimum value in some configuration.
    """
    def compute_bounds():
        # Lower bound: smallest possible minimum
        # Upper bound: largest possible minimum
        lo = minimum_possible_answer(input_data)
        hi = maximum_possible_answer(input_data)
        return lo, hi

    def is_feasible(min_threshold):
        """
        Can we create a configuration where the minimum is >= min_threshold?
        Usually implemented as a greedy simulation.
        """
        # Greedy: try to maintain this threshold
        # Return True if possible, False otherwise
        pass

    lo, hi = compute_bounds()
    answer = lo
    while lo <= hi:
        mid = lo + (hi - lo) // 2
        if is_feasible(mid):
            answer = mid   # This threshold is achievable
            lo = mid + 1   # Try a larger threshold (SEARCH RIGHT)
        else:
            hi = mid - 1   # Need a smaller threshold
    return answer


# Key difference:
# - Minimax: when feasible, search LEFT  (hi = mid - 1) for a smaller max
# - Maximin: when feasible, search RIGHT (lo = mid + 1) for a larger min
```

The feasibility predicate almost always uses a greedy approach. For minimax (limiting the maximum), you try to fit as much as possible within the limit. For maximin (enforcing a minimum), you try to achieve the threshold at each step. This greedy nature is why these problems work with binary search.
Let's see more problems to solidify the pattern recognition:
Example 1: Divide Chocolate (Maximin)
LeetCode 1231: You have a chocolate bar with n chunks. Each chunk has a sweetness value. You want to split the bar into k+1 pieces (giving k pieces to friends, keeping one). Maximize the minimum sweetness you get (you take the least sweet piece).
```python
def maximizeSweetness(sweetness: list[int], k: int) -> int:
    """
    LeetCode 1231: Divide Chocolate
    Maximize the minimum sweetness among k+1 pieces.

    This is a maximin problem.
    Strategy: Binary search on the minimum sweetness threshold.
    Feasibility: Can we split into k+1 pieces, each with sweetness >= threshold?
    """
    def can_split(min_sweetness: int) -> bool:
        """Can we make k+1 pieces, each with total sweetness >= min_sweetness?"""
        pieces = 0
        current_sweetness = 0
        for s in sweetness:
            current_sweetness += s
            if current_sweetness >= min_sweetness:
                pieces += 1
                current_sweetness = 0
        return pieces >= k + 1  # k friends + 1 for yourself

    # Bounds
    lo = min(sweetness)             # At least the smallest single chunk
    hi = sum(sweetness) // (k + 1)  # At most an equal split of the total

    # Maximin: search right when feasible
    answer = lo
    while lo <= hi:
        mid = lo + (hi - lo) // 2
        if can_split(mid):
            answer = mid
            lo = mid + 1  # Try larger (maximin -> search right)
        else:
            hi = mid - 1
    return answer


print(maximizeSweetness([1, 2, 3, 4, 5, 6, 7, 8, 9], 5))  # Output: 6
print(maximizeSweetness([5, 6, 7, 8, 9, 1, 2, 3, 4], 8))  # Output: 1
```

Example 2: Minimize Maximum Distance (Minimax)
Placing gas stations: Given existing gas stations at certain positions and adding k new stations, minimize the maximum distance between any two adjacent stations.
```python
import math

def minimizeMaxDistance(stations: list[int], k: int) -> float:
    """
    Minimize the maximum distance between adjacent gas stations
    after adding k new stations.

    This is a continuous minimax problem (the answer is a real number).
    Strategy: Binary search on the maximum allowed distance.
    Feasibility: Can we add k or fewer stations so that all gaps are <= max_dist?
    """
    stations.sort()

    def can_achieve_max_dist(max_dist: float) -> bool:
        """Can we make all gaps <= max_dist using at most k additional stations?"""
        stations_needed = 0
        for i in range(1, len(stations)):
            gap = stations[i] - stations[i - 1]
            # Stations needed to reduce this gap to <= max_dist:
            # if gap = 10 and max_dist = 3, we need ceil(10/3) - 1 = 3 stations
            stations_needed += math.ceil(gap / max_dist) - 1
            if stations_needed > k:
                return False
        return stations_needed <= k

    # Bounds (continuous)
    lo = 0.0
    hi = max(stations[i] - stations[i - 1] for i in range(1, len(stations)))

    # Minimax: search left when feasible (a smaller max_dist is better)
    # Use a fixed number of iterations for floating point
    for _ in range(100):  # 100 halvings exceed double-precision resolution
        mid = (lo + hi) / 2
        if can_achieve_max_dist(mid):
            hi = mid  # Achievable with this max_dist; try smaller
        else:
            lo = mid  # Not achievable; need a larger max_dist
    return (lo + hi) / 2


print(f"{minimizeMaxDistance([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 9):.6f}")  # Output: 0.500000
```

Notice how every example follows the same template: (1) define bounds, (2) design a greedy feasibility check, (3) binary search in the correct direction. Once you internalize this pattern, new problems become straightforward applications.
Even experienced programmers make these mistakes. Be vigilant:
- Wrong search direction: searching left when feasible in a maximin problem, or right in a minimax problem.
- Inequality mismatches: using < instead of ≤ (or vice versa) in the greedy checker; whether you use strict or non-strict inequality must match the problem semantics.
- Off-by-one counts: forgetting the extra piece or group when counting (e.g., k + 1 pieces for k friends plus yourself).
- Floating-point pitfalls: using == comparisons on floats instead of a fixed iteration count or an epsilon.
```python
# ❌ Wrong: Confusing minimax and maximin search directions
def wrong_direction(lo, hi, is_feasible):
    answer = hi
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_feasible(mid):
            answer = mid
            lo = mid + 1  # BUG: This is the maximin direction, but we're doing minimax!
        else:
            hi = mid - 1
    return answer


# ❌ Wrong: Off-by-one in the number of pieces
def wrong_pieces(sweetness, k):
    def can_split(threshold):
        pieces = 0
        curr = 0
        for s in sweetness:
            curr += s
            if curr >= threshold:
                pieces += 1
                curr = 0
        return pieces >= k  # BUG: Should be k + 1 (k friends + yourself)
    # ...


# ✓ Correct: Careful direction choice
def correct_minimax(lo, hi, is_feasible):
    answer = hi
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_feasible(mid):
            answer = mid
            hi = mid - 1  # Minimax: search LEFT for smaller
        else:
            lo = mid + 1
    return answer


def correct_maximin(lo, hi, is_feasible):
    answer = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_feasible(mid):
            answer = mid
            lo = mid + 1  # Maximin: search RIGHT for larger
        else:
            hi = mid - 1
    return answer
```

When your solution fails, print the feasibility result for every mid value in a small test case. You should see a clean partition: all False values on one side, all True on the other. If the pattern is mixed, your feasibility checker is buggy.
Minimax and maximin are powerful optimization patterns that appear constantly in competitive programming and real-world systems. Let's consolidate:
What's next:
We've covered the theoretical foundation and the minimax/maximin patterns. The final page applies everything to classic problems: the capacity to ship packages (which we've seen) and the split array problem, with complete walkthroughs and variations you'll encounter in practice.
You can now recognize, formulate, and solve minimax and maximin problems using binary search on answer space. These patterns appear in everything from load balancing to network design to competitive programming. Next, we'll cement your skills with detailed walkthroughs of classic problems.