If early termination is knowing when to stop, pruning is knowing what never to start. Pruning eliminates entire branches of the search space before they're explored, recognizing that certain paths cannot possibly lead to valid or optimal solutions.
Consider a chess grandmaster who doesn't calculate every possible move sequence. Instead, they instantly recognize that most moves are poor and focus calculation on a handful of promising candidates. This is pruning in action—eliminating vast swaths of the possibility space without explicit examination.
In algorithms, pruning transforms exponential searches into tractable computations. Backtracking without pruning is exhaustive enumeration; backtracking with effective pruning has surgical precision. This page will teach you to identify pruning opportunities, implement them correctly, and understand when pruning's overhead exceeds its benefits.
By the end of this page, you will be able to:
• Distinguish pruning from early termination and understand when each applies
• Implement constraint-based pruning that eliminates infeasible branches
• Apply bound-based pruning to optimization problems
• Recognize and exploit dominance relationships between states
• Use symmetry breaking to avoid redundant exploration
• Design effective pruning conditions for backtracking algorithms
While related, pruning and early termination are distinct techniques applied at different points in computation:
Early Termination: You're in the middle of processing and determine the remaining work is unnecessary. You stop what you're doing and return.
Pruning: Before starting to explore a branch, you determine it cannot lead to a valid or better solution. You skip the branch entirely.
The key distinction: termination stops ongoing work; pruning prevents work from starting.
| Aspect | Early Termination | Pruning |
|---|---|---|
| When applied | During computation | Before branching into a subtree |
| What it affects | Stops current processing | Skips entire branches |
| Typical use | Found answer, can't improve, invariant violated | Branch is infeasible, dominated, or provably suboptimal |
| Example | Stop search when target found | Don't explore permutations starting with invalid element |
| Overhead | Check once when condition becomes true | Check at every potential branch point |
```python
def example_with_both(n, constraint):
    """
    Demonstrates both pruning and early termination in one function.
    (Sketch: is_better, get_candidates, is_feasible, can_beat_best,
    and is_optimal are problem-specific helpers.)
    """
    best_solution = None

    def explore(partial_solution):
        nonlocal best_solution

        # EARLY TERMINATION: We've built a complete solution
        if len(partial_solution) == n:
            if is_better(partial_solution, best_solution):
                best_solution = partial_solution[:]
            return

        for candidate in get_candidates():
            # PRUNING: Don't even explore this branch
            if not is_feasible(partial_solution, candidate):
                continue  # Skip this branch entirely
            if not can_beat_best(partial_solution, candidate, best_solution):
                continue  # Prune: this path cannot improve on best

            partial_solution.append(candidate)
            explore(partial_solution)
            partial_solution.pop()

            # EARLY TERMINATION: Found optimal, no need to continue
            if best_solution and is_optimal(best_solution):
                return

    explore([])
    return best_solution

# Pruning: "Don't go down this path"
# Early termination: "Stop going down ANY more paths"
```

The most effective algorithms use both: prune aggressively to avoid exploring bad branches, and terminate early when the goal is achieved or when remaining exploration is provably useless. They work together, with pruning reducing the search space and termination cutting off exploration once sufficient information is gathered.
The most straightforward pruning eliminates branches that violate constraints. If a partial solution already violates a constraint, extending it cannot produce a valid solution.
The Principle:
At each step, check if the current partial solution can possibly lead to a complete valid solution. If not, prune immediately.
N-Queens Example:
The N-Queens problem places N queens on an N×N chessboard so no two queens threaten each other. Without pruning, we'd explore N^N board configurations. With constraint-based pruning, we only place queens in non-threatening positions:
```python
def solve_n_queens(n):
    """
    Solve N-Queens with aggressive constraint-based pruning.

    Without pruning: Try all n^n possible placements
    With pruning:    Only extend partial solutions that don't violate constraints

    For n=8:
    - Without pruning: 16,777,216 configurations
    - With pruning:    ~15,000 nodes explored
    - Reduction:       >1000x
    """
    solutions = []

    # Track which columns and diagonals are under attack
    cols = set()      # Columns with a queen
    pos_diag = set()  # r + c diagonals with a queen
    neg_diag = set()  # r - c diagonals with a queen

    def backtrack(row, queens):
        if row == n:
            solutions.append(queens[:])
            return

        for col in range(n):
            # CONSTRAINT-BASED PRUNING:
            # Skip positions that are already under attack
            if col in cols:
                continue  # Column conflict
            if row + col in pos_diag:
                continue  # Positive diagonal conflict
            if row - col in neg_diag:
                continue  # Negative diagonal conflict

            # Place queen and update attack sets
            queens.append(col)
            cols.add(col)
            pos_diag.add(row + col)
            neg_diag.add(row - col)

            backtrack(row + 1, queens)

            # Backtrack: remove queen
            queens.pop()
            cols.remove(col)
            pos_diag.remove(row + col)
            neg_diag.remove(row - col)

    backtrack(0, [])
    return solutions

# The pruning is ESSENTIAL:
# Row 0: 8 choices
# Row 1: ~5 valid choices (some pruned)
# Row 2: ~3 valid choices
# ...
# The branching factor rapidly decreases due to pruning
```

Sudoku with Constraint Propagation:
Sudoku solvers demonstrate how pruning extends beyond simple checking to active constraint propagation:
```python
def solve_sudoku(board):
    """
    Solve Sudoku with constraint propagation and pruning.

    Key insight: Track possible values for each cell.
    When a value is placed, propagate constraints to eliminate
    impossible values from related cells.

    If any cell has 0 possible values, prune this branch.
    """
    def get_possible(board, row, col):
        """Get possible values for a cell based on current constraints."""
        if board[row][col] != 0:
            return set()

        possible = set(range(1, 10))

        # Remove values in same row
        for c in range(9):
            if board[row][c] != 0:
                possible.discard(board[row][c])

        # Remove values in same column
        for r in range(9):
            if board[r][col] != 0:
                possible.discard(board[r][col])

        # Remove values in same 3x3 box
        box_row, box_col = 3 * (row // 3), 3 * (col // 3)
        for r in range(box_row, box_row + 3):
            for c in range(box_col, box_col + 3):
                if board[r][c] != 0:
                    possible.discard(board[r][c])

        return possible

    def find_best_cell(board):
        """
        MRV heuristic: Most constrained variable first.
        Choose cell with fewest possible values.
        Maximizes pruning effectiveness.
        """
        min_options = 10
        best_cell = None
        best_possible = None

        for r in range(9):
            for c in range(9):
                if board[r][c] == 0:
                    possible = get_possible(board, r, c)
                    if len(possible) == 0:
                        return None, None, False  # PRUNE: No valid options
                    if len(possible) < min_options:
                        min_options = len(possible)
                        best_cell = (r, c)
                        best_possible = possible

        return best_cell, best_possible, True

    def solve(board):
        cell, possible, valid = find_best_cell(board)

        if not valid:
            return False  # Constraint violation detected
        if cell is None:
            return True   # All cells filled - solution found!

        row, col = cell
        for num in possible:  # Only try feasible values (pruned!)
            board[row][col] = num
            if solve(board):
                return True
            board[row][col] = 0  # Backtrack

        return False  # No valid value for this cell

    solve(board)
    return board
```

Simple constraint checking just verifies the current state. Full constraint propagation determines all implications of a choice, potentially triggering cascading deductions. Advanced Sudoku solvers use techniques like naked singles, hidden singles, and more sophisticated propagation that can solve many puzzles without any backtracking at all.
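To make that cascade concrete, here is a minimal sketch of naked-singles propagation. It reuses `get_possible` from the solver above; the helper name `propagate_naked_singles` is ours, not part of any standard API. A solver would call it after each tentative placement (on a copy of the board, so backtracking stays simple) and prune the branch whenever it returns False:

```python
def propagate_naked_singles(board, get_possible):
    """Repeatedly fill cells that have exactly one candidate (naked singles).

    Each forced placement can create new forced placements, so loop to a
    fixed point. Returns False if any blank cell ends up with zero
    candidates -- the signal to prune the current branch.
    """
    changed = True
    while changed:
        changed = False
        for r in range(9):
            for c in range(9):
                if board[r][c] != 0:
                    continue
                possible = get_possible(board, r, c)
                if not possible:
                    return False  # Contradiction: prune this branch
                if len(possible) == 1:
                    board[r][c] = possible.pop()  # Forced move, no branching
                    changed = True
    return True
```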
Bound-based pruning is essential for optimization problems. The idea: compute an optimistic bound on what the current branch can achieve. If that bound is worse than an already-found solution, prune the branch.
Types of Bounds:
For maximization problems: compute an optimistic upper bound on the best value reachable from the current branch. If that upper bound is no better than the best solution already found, prune.
For minimization problems: compute an optimistic lower bound on the lowest cost reachable from the current branch. If that lower bound is no lower than the best solution already found, prune.
The Quality-Cost Tradeoff:
Tighter bounds prune more aggressively but cost more to compute. The art is finding bounds that are: valid (they never cut off a branch containing the true optimum), cheap enough to evaluate at every branch point, and tight enough to eliminate large portions of the search space.
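To see the tradeoff concretely, here is a sketch of a cheaper but looser alternative to the MST bound used in the TSP example below (it follows that example's conventions: `distances` is a matrix, city 0 is the start). Every departure in the remaining tour must cost at least the cheapest edge available to it:

```python
def cheap_lower_bound(distances, current, cost_so_far, remaining):
    """O(n^2) bound, looser than an MST bound but much cheaper.

    The rest of the tour leaves the current city once (into some remaining
    city) and leaves each remaining city exactly once (into another
    remaining city, or back to city 0 at the end).
    """
    bound = cost_so_far + min(distances[current][r] for r in remaining)
    for city in remaining:
        targets = [r for r in remaining if r != city] + [0]
        bound += min(distances[city][t] for t in targets)
    return bound
```

A looser bound like this prunes less per node but costs far less per node; which one wins depends on the instance, which is why profiling matters.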
```python
def traveling_salesman_bnb(distances):
    """
    TSP with branch-and-bound using MST lower bound.

    Lower bound: Cost so far + MST of remaining cities
    If lower_bound >= best_tour, prune this branch.

    The MST provides a valid lower bound because any tour visiting
    remaining cities must at least connect them, and MST is the
    cheapest connection.
    """
    n = len(distances)
    best_tour = float('inf')
    best_path = None

    def mst_cost(remaining_cities):
        """
        Compute MST cost using Prim's algorithm.
        This is O(V²) but provides a tight lower bound.
        """
        if len(remaining_cities) <= 1:
            return 0

        cities = list(remaining_cities)
        in_mst = {cities[0]}
        total = 0

        while len(in_mst) < len(cities):
            min_edge = float('inf')
            next_city = None
            for u in in_mst:
                for v in cities:
                    if v not in in_mst:
                        if distances[u][v] < min_edge:
                            min_edge = distances[u][v]
                            next_city = v
            in_mst.add(next_city)
            total += min_edge

        return total

    def min_edge_to_remaining(city, remaining):
        """Minimum cost to connect to any remaining city."""
        if not remaining:
            return distances[city][0]  # Return to start
        return min(distances[city][other] for other in remaining)

    def branch_and_bound(path, cost, remaining):
        nonlocal best_tour, best_path
        current = path[-1]

        # Complete tour
        if not remaining:
            total = cost + distances[current][0]  # Return to start
            if total < best_tour:
                best_tour = total
                best_path = path[:]
            return

        # BOUND-BASED PRUNING:
        # Lower bound = cost so far + MST of remaining + min edges
        lower_bound = cost + mst_cost(remaining)
        lower_bound += min_edge_to_remaining(current, remaining)

        if lower_bound >= best_tour:
            return  # PRUNE: Cannot beat best

        # Explore remaining cities
        for next_city in remaining:
            new_remaining = remaining - {next_city}
            branch_and_bound(
                path + [next_city],
                cost + distances[current][next_city],
                new_remaining
            )

    # Initialize
    remaining = set(range(1, n))  # All cities except start
    branch_and_bound([0], 0, remaining)
    return best_tour, best_path

# Without pruning: (n-1)! permutations explored
# With effective bounds: Often explores only a tiny fraction
# For n=15: 87 billion permutations vs potentially ~thousands with good bounds
```

Alpha-Beta Pruning in Game Trees:
The classic application of bound-based pruning in adversarial search:
```python
def minimax(node, depth, is_maximizing, game):
    """Minimax without pruning: explores entire game tree."""
    if depth == 0 or game.is_terminal(node):
        return game.evaluate(node)

    if is_maximizing:
        max_eval = float('-inf')
        for child in game.get_children(node):
            score = minimax(child, depth - 1, False, game)
            max_eval = max(max_eval, score)
        return max_eval
    else:
        min_eval = float('inf')
        for child in game.get_children(node):
            score = minimax(child, depth - 1, True, game)
            min_eval = min(min_eval, score)
        return min_eval


def alpha_beta(node, depth, alpha, beta, is_maximizing, game):
    """
    Minimax with alpha-beta pruning.

    Alpha: Best score the maximizer can guarantee so far
    Beta:  Best score the minimizer can guarantee so far

    If at any point beta <= alpha, we can prune:
    - The minimizer won't choose this path (maximizer can do better elsewhere)
    - The maximizer won't choose this path (minimizer can do better elsewhere)

    Reduces game tree by roughly sqrt(b^d) instead of b^d
    where b is branching factor and d is depth.
    """
    if depth == 0 or game.is_terminal(node):
        return game.evaluate(node)

    if is_maximizing:
        max_eval = float('-inf')
        for child in game.get_children(node):
            score = alpha_beta(child, depth - 1, alpha, beta, False, game)
            max_eval = max(max_eval, score)
            alpha = max(alpha, score)
            if beta <= alpha:
                break  # BETA CUTOFF: Minimizer won't allow this path
        return max_eval
    else:
        min_eval = float('inf')
        for child in game.get_children(node):
            score = alpha_beta(child, depth - 1, alpha, beta, True, game)
            min_eval = min(min_eval, score)
            beta = min(beta, score)
            if beta <= alpha:
                break  # ALPHA CUTOFF: Maximizer won't choose this path
        return min_eval

# Usage:
# best_move_value = alpha_beta(root, depth, float('-inf'), float('inf'), True, game)

# Efficiency gain:
# - Perfect ordering: reduces O(b^d) to O(b^(d/2))
# - Random ordering: typically O(b^(3d/4))
# - For chess: billions of nodes reduced to millions
```

Alpha-beta's effectiveness depends heavily on exploration order. If the best moves are examined first, pruning is maximized. Practical game engines use iterative deepening, hash tables of previous evaluations, and heuristic move ordering to examine likely-good moves first, dramatically improving pruning effectiveness.
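As a sketch of how move ordering plugs in (assuming `game.evaluate` doubles as a cheap static estimate, which real engines replace with dedicated ordering heuristics):

```python
def alpha_beta_ordered(node, depth, alpha, beta, is_maximizing, game):
    """Alpha-beta with heuristic move ordering: examining likely-best
    children first tightens alpha/beta sooner and triggers more cutoffs."""
    if depth == 0 or game.is_terminal(node):
        return game.evaluate(node)

    # Order children by static evaluation: best-for-the-mover first
    children = sorted(game.get_children(node),
                      key=game.evaluate, reverse=is_maximizing)

    if is_maximizing:
        value = float('-inf')
        for child in children:
            value = max(value, alpha_beta_ordered(
                child, depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if beta <= alpha:
                break
        return value
    else:
        value = float('inf')
        for child in children:
            value = min(value, alpha_beta_ordered(
                child, depth - 1, alpha, beta, True, game))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value
```

Sorting costs O(b log b) per node but pays for itself whenever it moves a cutoff earlier in the child list.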
Dominance pruning eliminates states that are provably inferior to other states. If state A dominates state B (A is at least as good in all respects and strictly better in some), we never need to explore extensions of B.
The Dominance Principle:
If from state A, we can reach any solution reachable from state B, and A is no worse positioned to reach those solutions, then B can be safely pruned.
Example: Weighted Job Scheduling
Consider scheduling jobs where state includes (time, profit). State A = (10, 100) dominates B = (15, 80) because A has more time remaining AND more profit accumulated. Any solution extending B could be matched or beaten by extending A.
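As a minimal sketch, here is the pairwise test behind this definition (the full example below uses a sort-and-scan instead, which avoids comparing every pair of states):

```python
def dominates(a, b):
    """True if state a = (time, profit) dominates state b:
    no later a time, no smaller a profit, and not identical."""
    return a[0] <= b[0] and a[1] >= b[1] and a != b

assert dominates((10, 100), (15, 80))     # the example above
assert not dominates((5, 80), (10, 100))  # incomparable pair:
assert not dominates((10, 100), (5, 80))  # neither dominates the other
```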
```python
def weighted_job_scheduling_with_dominance(jobs):
    """
    Schedule jobs to maximize profit using dominance pruning.

    Jobs: list of (start, end, profit)
    State: (current_time, accumulated_profit)

    Dominance: State A dominates State B if:
    - A.time <= B.time (more time available)
    - A.profit >= B.profit (more profit accumulated)
    - At least one inequality is strict

    We only keep non-dominated states at each step.
    """
    jobs = sorted(jobs, key=lambda x: x[1])  # Sort by end time

    # Pareto frontier of non-dominated states
    # Each state: (end_time, profit)
    states = [(0, 0)]  # Start: time 0, profit 0

    for start, end, profit in jobs:
        new_states = []
        for time, accumulated in states:
            # Option 1: Skip this job (state unchanged)
            new_states.append((time, accumulated))

            # Option 2: Take this job (if feasible)
            if start >= time:
                new_states.append((end, accumulated + profit))

        # DOMINANCE PRUNING: Keep only Pareto-optimal states
        # A state (t1, p1) dominates (t2, p2) if t1 <= t2 and p1 >= p2
        states = remove_dominated_states(new_states)

    # Best profit among all final states
    return max(profit for _, profit in states)


def remove_dominated_states(states):
    """
    Remove states dominated by others.

    For (time, profit) states:
    - Sort by time
    - Scan, keeping only states where profit increases

    O(n log n) for this specific dominance relation.
    """
    # Sort by time ascending, profit descending
    states = sorted(set(states), key=lambda x: (x[0], -x[1]))

    pareto_frontier = []
    max_profit = float('-inf')
    for time, profit in states:
        if profit > max_profit:
            pareto_frontier.append((time, profit))
            max_profit = profit
        # Else: This state is dominated (same or more time, less profit)

    return pareto_frontier

# Without dominance pruning: Potentially exponential states
# With dominance pruning: At most O(n²) non-dominated states
# Often much fewer in practice
```

Dominance in Knapsack Problems:
```python
def knapsack_with_dominance(items, capacity):
    """
    0/1 Knapsack with dominance-based state reduction.

    State: (weight_used, value_accumulated)
    Dominance: (w1, v1) dominates (w2, v2) if w1 <= w2 and v1 >= v2

    Traditional DP: O(n * capacity) states
    With dominance: O(n * |pareto_states|), potentially much smaller
    """
    # pareto_states: list of (weight, value) non-dominated states
    pareto_states = [(0, 0)]

    for weight, value in items:
        new_states = []
        for w, v in pareto_states:
            # Don't take item
            new_states.append((w, v))
            # Take item if it fits
            if w + weight <= capacity:
                new_states.append((w + weight, v + value))

        # Apply dominance pruning
        pareto_states = get_pareto_front(new_states)

    return max(v for w, v in pareto_states)


def get_pareto_front(states):
    """Return Pareto-optimal states for (minimize weight, maximize value)."""
    states = sorted(set(states), key=lambda x: (x[0], -x[1]))

    front = []
    max_value = float('-inf')
    for weight, value in states:
        if value > max_value:
            front.append((weight, value))
            max_value = value

    return front

# Example where dominance pruning helps dramatically:
# Items with many similar weight-value ratios create redundant states
# Standard DP explores them all; dominance pruning eliminates duplicates
```

Not all problems have useful dominance relations. For dominance pruning to help, you need: (1) A way to compare states partially (not just by total value), (2) Enough dominated states to offset the cost of dominance checking, and (3) The ability to identify dominated states efficiently. When these conditions hold, dominance pruning can be transformative.
Symmetry breaking is a special form of pruning that eliminates solutions which are equivalent under problem symmetries. If two solutions are symmetric (one can be transformed into the other via rotation, reflection, permutation, etc.), we need to find only one.
Why Symmetry Creates Redundancy:
Consider placing 8 queens on a chessboard. If we find solution S, then the rotations and reflections of S are also solutions. There are 8 such symmetries (4 rotations, each with or without a mirror reflection). Without symmetry breaking, we might find all 8 variants when we only need one.
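A short sketch makes the eight symmetries concrete. Representing a solution as `queens[row] = col`, we can generate every rotated and mirrored variant; solutions with internal symmetry produce fewer than eight distinct variants, which is why 92 total solutions reduce to 12 unique ones rather than exactly 92/8:

```python
def symmetry_group(queens):
    """All board symmetries of an N-Queens solution (queens[row] = col)."""
    n = len(queens)

    def rotate90(sol):
        # A queen at (r, c) moves to (c, n-1-r) under a 90-degree rotation
        rotated = [0] * n
        for r, c in enumerate(sol):
            rotated[c] = n - 1 - r
        return rotated

    def mirror(sol):
        # Horizontal reflection: column c becomes column n-1-c
        return [n - 1 - c for c in sol]

    variants = set()
    current = list(queens)
    for _ in range(4):                        # the four rotations...
        variants.add(tuple(current))
        variants.add(tuple(mirror(current)))  # ...each with its mirror image
        current = rotate90(current)
    return variants
```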
Breaking Symmetry by Forcing Decisions:
The simplest symmetry breaking technique constrains early decisions to eliminate symmetric equivalents:
```python
def n_queens_with_symmetry_breaking(n):
    """
    N-Queens that avoids generating symmetric solutions.

    Symmetry breaking: Only place first queen in the first half of row 0.
    This eliminates horizontal reflection symmetry immediately.

    For 8-queens:
    - Standard: 92 solutions (with many symmetric pairs)
    - Unique (non-symmetric): 12 solutions
    """
    solutions = []
    cols = set()
    pos_diag = set()
    neg_diag = set()

    def backtrack(row, queens):
        if row == n:
            solutions.append(queens[:])
            return

        # SYMMETRY BREAKING: First queen only in columns 0 to n//2 - 1
        # (assumes even n; for odd n, the middle column is fixed by the
        # mirror and needs separate handling)
        if row == 0:
            col_range = range(n // 2)
        else:
            col_range = range(n)

        for col in col_range:
            if col in cols or row + col in pos_diag or row - col in neg_diag:
                continue

            queens.append(col)
            cols.add(col)
            pos_diag.add(row + col)
            neg_diag.add(row - col)

            backtrack(row + 1, queens)

            queens.pop()
            cols.remove(col)
            pos_diag.remove(row + col)
            neg_diag.remove(row - col)

    backtrack(0, [])
    # Returns canonical solutions; multiply by 2 for total (horizontal reflections)
    return solutions


def subset_sum_with_symmetry(arr, target):
    """
    Find subsets that sum to target, avoiding equivalent subsets.

    Symmetry: If elements repeat, {a, b, c} and {a, c, b} are same subset.
    Breaking: Process elements in order, never go back.
    """
    arr = sorted(arr)  # Enables duplicate skipping
    results = []

    def backtrack(start, path, remaining):
        if remaining == 0:
            results.append(path[:])
            return
        if remaining < 0:
            return

        for i in range(start, len(arr)):
            # SYMMETRY BREAKING: Skip duplicates
            if i > start and arr[i] == arr[i - 1]:
                continue
            if arr[i] > remaining:
                break  # Sorted, so no point continuing

            path.append(arr[i])
            backtrack(i + 1, path, remaining - arr[i])
            path.pop()

    backtrack(0, [], target)
    return results
```

Symmetry Breaking in Graph Coloring:
```python
def graph_coloring_with_symmetry_breaking(graph, num_colors):
    """
    Color graph vertices using k colors such that no adjacent
    vertices share a color.

    Symmetry: Color labels are interchangeable. Swapping all 'red'
    and 'blue' gives an equivalent coloring.

    Breaking: Assign colors in order of first use. First vertex gets
    color 0. Next uncolored vertex gets lowest unused color OR any
    already-used color. This creates canonical colorings.
    """
    n = len(graph)
    coloring = [-1] * n
    solutions = []

    def is_valid(vertex, color):
        for neighbor in graph[vertex]:
            if coloring[neighbor] == color:
                return False
        return True

    def backtrack(vertex, colors_used):
        if vertex == n:
            solutions.append(coloring[:])
            return

        # SYMMETRY BREAKING: Only use colors 0 to colors_used (inclusive)
        # This avoids equivalent solutions where color labels are permuted
        max_color = min(colors_used + 1, num_colors)

        for color in range(max_color):
            if is_valid(vertex, color):
                coloring[vertex] = color
                # Update colors_used if we introduced a new color
                new_colors_used = max(colors_used, color + 1)
                backtrack(vertex + 1, new_colors_used)
                coloring[vertex] = -1

    backtrack(0, 0)
    return solutions

# Without symmetry breaking: k! equivalent colorings for each unique coloring
# With symmetry breaking: Only canonical representatives explored
# For k=5 colors: 120x reduction in search space
```

To apply symmetry breaking, first identify the symmetries: Are elements interchangeable (permutation symmetry)? Is there rotational or reflectional symmetry in the problem structure? Can equivalent objects be reordered freely? Once symmetries are identified, break them by forcing a canonical ordering or constraining early decisions.
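One generic way to force a canonical ordering is a canonical-form filter: accept a solution only if it is, say, the lexicographically smallest member of its symmetry class. A sketch, reusing the `symmetry_group` helper from the N-Queens sketch above:

```python
def is_canonical(queens):
    """Keep only the lexicographically smallest member of each symmetry class."""
    return tuple(queens) == min(symmetry_group(queens))

# Filter full solutions down to one representative per class:
# unique = [s for s in solve_n_queens(8) if is_canonical(s)]
```

Filtering complete solutions is simple but wastes the work of generating the duplicates; constraining early decisions, as in the examples above, prunes them before they are ever explored.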
Feasibility pruning asks a simple question at each decision point: "Is it still possible to complete a valid solution from here?" If the answer is no, prune immediately.
Common Feasibility Checks:
• Parity and divisibility: given the total available, can the target be reached at all?
• Single-element limits: does any one element already exceed the target or remaining capacity?
• Remaining-sum reachability: can the elements still available cover what is still needed?
The examples below demonstrate these checks.
```python
def partition_equal_subset_with_feasibility(nums):
    """
    Check if array can be partitioned into two subsets with equal sum.

    Multiple feasibility pruning techniques:
    1. Total sum must be even
    2. No single element can exceed target
    3. Remaining elements must be able to reach remaining target
    """
    total = sum(nums)

    # FEASIBILITY: Early exit if impossible
    if total % 2 != 0:
        return False

    target = total // 2
    nums = sorted(nums, reverse=True)  # Larger elements first

    # FEASIBILITY: Largest element can't exceed target
    if nums[0] > target:
        return False

    def backtrack(index, current_sum, remaining_sum):
        if current_sum == target:
            return True
        if index == len(nums):
            return False

        # FEASIBILITY: Already exceeded target
        if current_sum > target:
            return False

        # FEASIBILITY: Even taking all remaining elements won't reach target
        if current_sum + remaining_sum < target:
            return False

        # FEASIBILITY: Next element itself exceeds what we need
        remaining_needed = target - current_sum
        if nums[index] > remaining_needed:
            # Can't take this element; check if skipping it is feasible
            return backtrack(index + 1, current_sum, remaining_sum - nums[index])

        # Try including current element
        if backtrack(index + 1, current_sum + nums[index], remaining_sum - nums[index]):
            return True

        # Skip duplicate elements at this level (excluding one excludes them
        # all), keeping remaining_sum consistent with every element skipped
        new_remaining = remaining_sum - nums[index]
        next_index = index + 1
        while next_index < len(nums) and nums[next_index] == nums[index]:
            new_remaining -= nums[next_index]
            next_index += 1

        # Try excluding current element (and its duplicates)
        return backtrack(next_index, current_sum, new_remaining)

    return backtrack(0, 0, total)


def combination_sum_with_feasibility(candidates, target):
    """
    Find combinations summing to target with comprehensive feasibility pruning.
    """
    candidates = sorted(candidates)
    results = []

    def backtrack(start, path, remaining):
        if remaining == 0:
            results.append(path[:])
            return

        for i in range(start, len(candidates)):
            candidate = candidates[i]

            # FEASIBILITY: Sorted order means all further candidates are larger
            # If this one exceeds remaining, all subsequent ones will too
            if candidate > remaining:
                break  # Not 'continue', but 'break' - prune all remaining

            path.append(candidate)
            # Same index i: can reuse same number
            backtrack(i, path, remaining - candidate)
            path.pop()

    backtrack(0, [], target)
    return results
```

Every pruning check has a cost. If the check is expensive and rarely succeeds in pruning, it may slow down the algorithm. Profile to ensure your pruning conditions provide net benefit. The best pruning checks are cheap to evaluate and frequently eliminate large subtrees.
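A lightweight way to profile is to count how often each rule fires; `check_a`, `check_b`, and `children` below are hypothetical placeholders for your actual pruning rules and branch generator:

```python
from collections import Counter

prune_hits = Counter()
nodes_explored = 0

def backtrack_instrumented(state):
    """Backtracking with a per-rule counter on every pruning condition."""
    global nodes_explored
    nodes_explored += 1
    if check_a(state):               # cheap rule, tested first
        prune_hits['check_a'] += 1
        return
    if check_b(state):               # more expensive rule, tested second
        prune_hits['check_b'] += 1
        return
    for child in children(state):
        backtrack_instrumented(child)

# After a run, weigh hit counts against each check's cost: a rule that
# rarely fires but is evaluated at every node may cost more than it saves.
# print(nodes_explored, prune_hits.most_common())
```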
Different pruning strategies suit different problem types. Understanding when to apply each technique is crucial for effective algorithm design.
| Strategy | Best For | Effectiveness | Overhead |
|---|---|---|---|
| Constraint-Based | CSPs, feasibility problems | Eliminates invalid branches | Low (simple checks) |
| Bound-Based | Optimization problems | Eliminates suboptimal branches | Medium (bound calculation) |
| Dominance | DP with multi-dimensional state | Eliminates redundant states | Medium (dominance testing) |
| Symmetry Breaking | Problems with symmetric solutions | Divides by symmetry group size | Low (ordering constraints) |
| Feasibility | Resource/capacity problems | Early dead-end detection | Low-Medium (depends on check) |
Combining Multiple Pruning Strategies:
The most effective algorithms layer multiple pruning techniques:
```python
def solve_with_layered_pruning(problem):
    """
    Demonstrates layered pruning: cheap checks first, expensive checks later.

    Order pruning checks by (cost, effectiveness):
    1. Fastest checks that catch obvious violations
    2. Medium-cost checks that prune significant portions
    3. Expensive checks only when simpler ones fail to prune

    (Sketch: `problem` and `state` stand for a problem-specific interface.)
    """
    best_solution = None

    def explore(state):
        nonlocal best_solution

        # Layer 1: CONSTRAINT PRUNING (cheapest)
        # Simple feasibility checks
        if state.violates_basic_constraints():
            return  # O(1) check, catches many invalid branches

        # Layer 2: SYMMETRY PRUNING
        # Check if this state is a symmetric duplicate
        if not state.is_canonical():
            return  # Avoid exploring symmetric equivalents

        # Layer 3: FEASIBILITY PRUNING
        # Can we possibly complete from here?
        if not state.can_complete():
            return  # Check remaining resources, etc.

        # Layer 4: BOUND-BASED PRUNING (most expensive)
        # Is this branch worth exploring?
        if best_solution is not None:
            upper_bound = state.compute_optimistic_bound()
            if upper_bound <= best_solution.value:
                return  # Can't beat current best

        # If we reach here, explore this branch
        if state.is_complete():
            if is_better(state, best_solution):
                best_solution = state.copy()
            return

        for next_state in state.generate_children():
            explore(next_state)

    explore(problem.initial_state())
    return best_solution

# Key insight: Each layer catches some branches.
# Cheap wide nets → medium filters → expensive precision
# This way, expensive checks are only run on promising branches.
```

Often, 80% of the pruning benefit comes from 20% of the pruning rules. Start with the simplest, most obvious pruning conditions. Add more sophisticated techniques only when profiling shows the simpler ones are insufficient. Over-engineering pruning can hurt maintainability without proportional performance gains.
Pruning is essential for making exponential search tractable. By eliminating branches before exploration, we transform infeasible brute-force approaches into practical algorithms.
Key Pruning Techniques Mastered:
• Constraint-based pruning: eliminate branches that already violate constraints (N-Queens, Sudoku)
• Bound-based pruning: eliminate branches whose optimistic bound cannot beat the best solution found (branch-and-bound, alpha-beta)
• Dominance pruning: eliminate states that are provably no better than others (Pareto frontiers in scheduling and knapsack)
• Symmetry breaking: explore one canonical representative from each class of equivalent solutions
• Feasibility pruning: detect dead ends early with cheap reachability and capacity checks
The Pruning Mindset:
At every branch point, ask: "Is this branch worth exploring?" Look for reasons to skip, not reasons to continue. The most elegant pruning eliminates vast portions of the search space with minimal computation.
What's Next:
The final page of this module explores space-time trade-offs—the fundamental balancing act between memory usage and computation time. You'll learn when to trade memory for speed (caching, precomputation) and when memory constraints demand slower but leaner algorithms.
You now possess a comprehensive toolkit for pruning search spaces. From simple constraint checks to sophisticated dominance relations and symmetry breaking, you can identify and implement pruning strategies that transform exponential searches into tractable algorithms. The next page will explore the fundamental trade-off between space and time that underlies all optimization decisions.