Imagine you're trying to open a combination lock and don't know the correct digits. The most straightforward strategy? Try every possible combination. Start at 0-0-0, then 0-0-1, 0-0-2, and so on, until the lock opens. This exhaustive approach—methodically generating every possibility and testing each one—is the essence of brute force.
Brute force is the algorithmic equivalent of brute strength in problem-solving: it doesn't rely on cleverness, insight, or shortcuts. It simply enumerates all possible solutions, tests each one against the problem's constraints, and keeps those that pass. In the world of algorithms, this strategy serves as both a powerful baseline and a conceptual foundation upon which more sophisticated techniques are built.
By the end of this page, you will deeply understand the brute force paradigm: how it generates complete solution spaces, why it guarantees correctness when properly implemented, what computational costs it incurs, and when this straightforward approach is actually the right choice. This understanding is essential before we can appreciate backtracking's key innovations.
Brute force is an algorithmic paradigm characterized by systematic enumeration of all possible candidates for a solution, followed by verification of each candidate against the problem's requirements. The approach follows a conceptually simple two-phase process:
Generation Phase: Produce every element in the solution space—every permutation, combination, configuration, or assignment that could theoretically satisfy the problem.
Filtering Phase: Test each generated candidate to determine whether it actually satisfies all constraints, keeping valid solutions and discarding invalid ones.
The hallmark of brute force is its separation of concerns: generation logic is entirely independent of validation logic. The generator knows nothing about what makes a valid solution; it simply produces all possibilities. The filter knows nothing about how candidates were produced; it simply checks each one.
When the solution space is correctly enumerated and the validity test is correctly implemented, brute force guarantees finding all valid solutions. This completeness guarantee makes brute force valuable as a reference implementation and for problems where correctness cannot be compromised.
The Two-Phase Mental Model:
Think of brute force as operating two completely separate machines:
Machine 1 (Generator): A factory that manufactures every possible configuration, like a machine printing every possible lottery ticket number.
Machine 2 (Filter): A quality control inspector examining each manufactured item, discarding those that fail to meet specifications.
These machines work in isolation. Machine 1 doesn't care what Machine 2 will accept—it just makes everything. Machine 2 doesn't care how expensive Machine 1's production was—it just inspects what arrives.
This separation is both brute force's strength (simplicity) and its weakness (inefficiency). We're paying to manufacture items we know will be rejected, simply because the manufacturing process doesn't know how to avoid them.
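The two-machine model is easy to sketch in a few lines of Python, returning to the combination lock from the opening. Note that the three-digit lock and the secret `'042'` are hypothetical stand-ins chosen for illustration:

```python
from itertools import product

def generate_all_combinations() -> list[str]:
    """Machine 1: manufacture every 3-digit combination, '000' through '999'."""
    return [''.join(digits) for digits in product('0123456789', repeat=3)]

def lock_opens(combination: str, secret: str = '042') -> bool:
    """Machine 2: the inspector only knows how to test one finished item."""
    return combination == secret

candidates = generate_all_combinations()            # 1000 candidates, no insight used
solutions = [c for c in candidates if lock_opens(c)]
print(len(candidates), solutions)                   # 1000 ['042']
```

Notice that `generate_all_combinations` contains no knowledge of the secret, and `lock_opens` has no idea how candidates were produced — exactly the separation of concerns described above.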
To understand brute force's computational cost, we must understand the size of solution spaces. Combinatorial mathematics provides the framework for computing these sizes, and the results often reveal why brute force becomes impractical for larger inputs.
Common Solution Space Sizes:
| Problem Type | Solution Space Size | n=10 | n=20 | n=30 |
|---|---|---|---|---|
| Binary strings of length n | 2^n | 1,024 | ~1 million | ~1 billion |
| Permutations of n elements | n! | 3.6 million | ~2.4 × 10^18 | ~2.7 × 10^32 |
| Subsets of n elements | 2^n | 1,024 | ~1 million | ~1 billion |
| k-combinations from n (k=n/2) | C(n,k) | 252 | 184,756 | ~155 million |
| Arrangements of n items in k positions (values shown for k=n) | n^k | 10 billion | ~1 × 10^26 | ~2 × 10^44 |
Interpreting These Numbers:
Consider permutations: with just 10 elements, there are 3.6 million arrangements to examine. At 20 elements, the count exceeds the number of seconds since the Big Bang. At 30 elements, no computer that could ever exist would complete the enumeration before the heat death of the universe.
This combinatorial explosion is the central challenge of brute force. Solution spaces don't grow linearly or even polynomially—they grow exponentially or factorially. Every additional element can multiply the search space by a factor equal to or greater than the current problem size.
A computer performing a billion operations per second would need:
• ~1 second to enumerate 2^30 candidates
• ~18 minutes for 2^40 candidates
• ~13 days for 2^50 candidates
• ~36 years for 2^60 candidates
• ~36,000 years for 2^70 candidates
Every addition of 10 elements to a subset problem multiplies runtime by approximately 1,000.
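These figures are easy to reproduce with a back-of-the-envelope calculation, assuming the same 10^9 operations per second:

```python
def enumeration_time_seconds(exponent: int, ops_per_second: float = 1e9) -> float:
    """Seconds needed to touch each of the 2^exponent candidates exactly once."""
    return 2.0 ** exponent / ops_per_second

# Each step of +10 in the exponent multiplies the time by 2^10 = 1024 (~1,000)
for e in (30, 40, 50, 60, 70):
    print(f"2^{e}: {enumeration_time_seconds(e):.3g} seconds")
```

Running this confirms the jump from roughly one second at 2^30 to over a trillion seconds at 2^70.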
Why This Matters:
These numbers aren't merely academic: they define hard limits on what brute force can achieve, regardless of hardware. Understanding these limits helps us predict in advance whether complete enumeration is feasible for a given input size, and tells us when we must turn to pruning or a different algorithmic strategy entirely.
Every brute force algorithm, regardless of the specific problem, follows the same structural pattern. Understanding this anatomy allows you to recognize brute force implementations and, more importantly, identify opportunities for optimization.
The Universal Brute Force Structure:
```
GenerateAll(current_candidate, parameters):
    if current_candidate is complete:
        candidates.add(current_candidate)
        return
    for each possible_choice in choice_space:
        extend current_candidate with possible_choice
        GenerateAll(current_candidate, updated_parameters)
        remove possible_choice from current_candidate   // backtrack

FilterValid(candidates, constraints):
    valid_solutions = []
    for each candidate in candidates:
        if satisfies_all_constraints(candidate, constraints):
            valid_solutions.add(candidate)
    return valid_solutions

BruteForce(problem):
    candidates = GenerateAll(empty_candidate, initial_state)
    return FilterValid(candidates, problem.constraints)
```
Dissecting the Components:
1. Choice Space Definition
At each decision point, brute force considers all available options without discrimination. If we're generating binary strings, the choice space at each position is {0, 1}. If we're generating permutations, the choice space at position i includes all elements not yet used.
2. Candidate Construction
Candidates are built incrementally by making choices. Each recursive call adds one more piece to the current candidate until it's complete. The construction process implicitly defines a search tree where each internal node is a partial candidate, each edge is a choice, and each leaf is a complete candidate.
3. State Management
As we build candidates, we must track the current state (what choices have been made) and restore state when exploring alternatives. This "make choice, recurse, unmake choice" pattern is technically backtracking at the generation level, but the key distinction from "backtracking" as an optimization technique is that we never skip generating any candidate.
4. Validation
After generation completes (or as each candidate completes), validation tests whether the candidate satisfies the problem's constraints. In pure brute force, validation is completely separate from generation—we don't use constraint information to guide or prune the search.
```python
def generate_all_binary_strings(n: int) -> list[str]:
    """
    Generate ALL binary strings of length n.
    Pure brute force: no consideration of any constraints.
    Solution space size: 2^n
    """
    result = []

    def generate(current: list[str], position: int):
        # Base case: complete candidate
        if position == n:
            result.append(''.join(current))
            return

        # Enumerate all choices at this position
        for bit in ['0', '1']:
            current.append(bit)               # Make choice
            generate(current, position + 1)   # Recurse
            current.pop()                     # Unmake choice

    generate([], 0)
    return result


def filter_valid(candidates: list[str], constraint) -> list[str]:
    """
    Filter phase: test each candidate against constraint.
    Completely separate from generation logic.
    """
    return [c for c in candidates if constraint(c)]


# Example usage: Find binary strings with exactly k ones
n = 4
k = 2

# Step 1: Generate ALL 2^4 = 16 binary strings
all_strings = generate_all_binary_strings(n)
print(f"Generated {len(all_strings)} candidates")  # 16

# Step 2: Filter to keep only those with exactly k ones
valid_strings = filter_valid(
    all_strings,
    lambda s: s.count('1') == k
)
print(f"Valid solutions: {valid_strings}")
# ['0011', '0101', '0110', '1001', '1010', '1100']
```

Let's trace through brute force generation of all permutations—a fundamental operation that underlies many combinatorial problems. This example illuminates both the elegance and the overhead of complete enumeration.
Problem: Generate all permutations of [1, 2, 3]
Solution Space Size: 3! = 6 permutations
Visualizing the Search Tree:
```
                        []
         ┌──────────────┼──────────────┐
        [1]            [2]            [3]
     ┌───┴───┐      ┌───┴───┐      ┌───┴───┐
   [1,2]   [1,3]  [2,1]   [2,3]  [3,1]   [3,2]
     │       │      │       │      │       │
  [1,2,3] [1,3,2] [2,1,3] [2,3,1] [3,1,2] [3,2,1]
```
Each path from root to leaf represents a complete permutation. Brute force visits every leaf node, regardless of any constraints we might have. If we only wanted permutations where 1 comes before 2, brute force would still generate all 6, then filter to keep the 3 valid ones.
```python
def generate_all_permutations(elements: list) -> list[list]:
    """
    Generate ALL permutations of the given elements.

    This is brute force because we generate every permutation
    regardless of any constraints that might be imposed later.

    Time:  O(n! × n) for generation
    Space: O(n! × n) to store all permutations
    """
    result = []
    n = len(elements)

    def generate(current: list, remaining: set):
        # Base case: permutation complete
        if len(current) == n:
            result.append(current.copy())
            return

        # Try each remaining element at current position
        for element in list(remaining):  # list() for safe iteration
            current.append(element)
            remaining.remove(element)

            generate(current, remaining)

            # Backtrack: restore state for next iteration
            current.pop()
            remaining.add(element)

    generate([], set(elements))
    return result


# Trace the execution with instrumentation
def generate_permutations_traced(elements: list) -> list[list]:
    """Same algorithm with detailed tracing."""
    result = []
    n = len(elements)
    call_count = [0]  # Use list for mutable counter in closure

    def generate(current: list, remaining: set, depth: int):
        call_count[0] += 1
        indent = "  " * depth
        print(f"{indent}generate(current={current}, remaining={remaining})")

        if len(current) == n:
            print(f"{indent}  → FOUND: {current}")
            result.append(current.copy())
            return

        for element in sorted(remaining):
            print(f"{indent}  Choosing {element}...")
            current.append(element)
            remaining.remove(element)

            generate(current, remaining, depth + 1)

            print(f"{indent}  Unchoosing {element}...")
            current.pop()
            remaining.add(element)

    generate([], set(elements), 0)
    print(f"\nTotal recursive calls: {call_count[0]}")
    return result


# Execute trace for small example
print("=== Generating all permutations of [1, 2, 3] ===\n")
perms = generate_permutations_traced([1, 2, 3])
print(f"\nAll permutations: {perms}")
print(f"Count: {len(perms)}")
```

Key Observations from the Trace:
Systematic Exploration: The algorithm tries element 1 first, exhausting all permutations starting with 1 before moving to 2, then 3. This systematic nature guarantees completeness.
State Management: Notice the "Choosing/Unchoosing" pattern. Before recursing, we modify state (add element). After recursing, we restore state (remove element). This is technically backtracking at the implementation level, but we're not using it to prune—just to manage state.
Exponential Work: Even for just 3 elements, we make 16 recursive calls. For n elements, call count grows proportionally to n!.
No Early Termination: Every branch is followed to completion. If we wanted only permutations where the sum of first two elements is even, brute force still generates all 6 permutations before filtering.
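The 16 calls for n = 3 can be checked against a closed form: the search tree has n!/(n−d)! nodes at depth d (one call per node), so the total call count is the sum of n!/(n−d)! over d = 0..n, which grows roughly like e × n!. A small sketch:

```python
from math import factorial

def permutation_call_count(n: int) -> int:
    """Total recursive calls when enumerating all permutations of n elements:
    one call per search-tree node, with n!/(n-d)! nodes at depth d."""
    return sum(factorial(n) // factorial(n - d) for d in range(n + 1))

# Grows like e * n! — dominated by the factorial number of leaves
for n in range(1, 8):
    print(n, permutation_call_count(n))
```

For n = 3 this reproduces the 16 calls seen in the trace (1 root + 3 + 6 + 6).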
Brute force's computational cost has three components: generation cost, storage cost, and validation cost. Understanding each helps us identify optimization opportunities.
1. Generation Cost
Generating the full solution space requires visiting every node in the search tree. For a tree with N leaves and average depth d, that means roughly O(N × d) total work:
| Problem | Solution Space N | Per-Candidate Cost | Total Time |
|---|---|---|---|
| All subsets | 2^n | O(n) to construct | O(n × 2^n) |
| All permutations | n! | O(n) to construct | O(n × n!) |
| N-Queens (brute force) | n^n placements | O(n) to validate | O(n^(n+1)) |
| Graph coloring (k colors) | k^n | O(n²) to validate | O(n² × k^n) |
| Traveling salesman | (n-1)!/2 routes | O(n) to compute length | O(n × n!) |
2. Storage Cost
If we generate all candidates before filtering (true brute force), we must store the entire solution space: for example, all 2^n subsets or all n! permutations, each of size up to n, for O(n × 2^n) or O(n × n!) space respectively.
This storage cost can itself become prohibitive, forcing modified approaches that validate during generation rather than after.
3. Validation Cost
Each candidate must be fully validated. If validation takes time V(n), then filtering N candidates costs N × V(n) in total.
For complex constraints, V(n) can be substantial. Validating that a graph coloring is legal requires checking all O(n²) edges. Validating a Sudoku solution requires checking 27 groups.
Brute force costs multiply: (generation cost) × (validation cost per candidate). Even if validation is cheap O(1), the exponential generation cost dominates. But for problems with expensive validation (O(n²) or worse), the total cost becomes truly astronomical. An n=15 graph coloring check requires generating 3^15 ≈ 14 million candidates, each needing up to 105 edge checks—nearly 1.5 billion operations total.
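The arithmetic behind that estimate can be spelled out directly (assuming, as a worst case, a complete graph on 15 vertices):

```python
from math import comb

colors, n = 3, 15
candidates = colors ** n      # 3^15 complete colorings to enumerate
edge_checks = comb(n, 2)      # up to C(15, 2) = 105 edges to verify per candidate
total_ops = candidates * edge_checks

print(candidates, edge_checks, total_ops)
# 14348907 105 1506635235  (~1.5 billion operations)
```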
The fundamental inefficiency of brute force lies in wasted work—effort spent generating candidates that will inevitably be rejected. This waste often represents the vast majority of computational effort.
Example: Constrained Subset Sum
Consider finding all subsets of {1, 2, 3, ..., 20} that sum to exactly 100.
Brute force generates over 1 million subsets (2^20 = 1,048,576) to find the small minority that actually sum to 100. Well over 98% of its work produces rejected candidates.
Visualizing Wasted Work:
Consider constructing subsets incrementally. At each step, we decide whether to include the current element:
```
Target sum = 100, elements = {50, 60, 70, ...}

Include 50? YES → current_sum = 50
Include 60? YES → current_sum = 110
Include 70? YES → current_sum = 180
            ... all subsequent subsets will sum to at least 180
            ... none can sum to exactly 100
            ... but brute force generates all of them anyway!
```
The moment our partial sum exceeds 100, we know that no extension can produce a valid subset (assuming positive numbers). Yet brute force continues building and testing every possible extension.
Backtracking addresses wasted work by checking constraints during generation rather than after. When a partial solution violates constraints, we immediately abandon that path without exploring any of its exponentially many extensions. This is the key insight that transforms brute force into backtracking.
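As a preview, here is a minimal sketch contrasting pure enumeration with the pruned version for the positive-numbers case described above. The instance (subsets of {1, ..., 15} with target 40) is an arbitrary small example; the point is the call counts:

```python
def count_subsets_with_sum(elements: list[int], target: int, prune: bool):
    """Count subsets summing to target, returning (solutions, recursive_calls).
    With prune=True, abandon any path whose partial sum already exceeds the
    target — valid reasoning only when all elements are positive."""
    calls = 0

    def explore(i: int, partial_sum: int) -> int:
        nonlocal calls
        calls += 1
        if prune and partial_sum > target:
            return 0  # no extension can recover: skip this entire subtree
        if i == len(elements):
            return 1 if partial_sum == target else 0
        # Branch: exclude element i, then include it
        return explore(i + 1, partial_sum) + explore(i + 1, partial_sum + elements[i])

    found = explore(0, 0)
    return found, calls

elements = list(range(1, 16))
brute = count_subsets_with_sum(elements, 40, prune=False)
pruned = count_subsets_with_sum(elements, 40, prune=True)
print(brute, pruned)  # identical solution counts, far fewer calls when pruning
```

Both versions find exactly the same solutions; pruning only skips subtrees that provably contain none, which is the essence of the backtracking technique explored next.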
Despite its inefficiencies, brute force remains the right choice in several scenarios. Understanding these cases prevents over-engineering and helps recognize when optimization effort isn't worthwhile.
Legitimate Use Cases for Brute Force:
• Small solution spaces, where complete enumeration finishes in acceptable time
• Reference implementations used to verify the correctness of optimized algorithms
• Problems whose constraints can only be checked on complete candidates, so partial solutions offer no pruning signal
• One-time computations where implementation effort matters more than runtime
The Decision Framework:
```
Should I use brute force?
│
├─ Is solution space size ≤ reasonable threshold (~10^6 to 10^7)?
│    ├─ YES → Brute force is likely fine
│    └─ NO  → Consider optimization
│
├─ Can constraints be checked on partial solutions?
│    ├─ NO  → Brute force may be necessary
│    └─ YES → Backtracking could help
│
├─ What's the rejection rate?
│    ├─ LOW (most candidates valid) → Brute force is efficient
│    └─ HIGH (most rejected)        → Pruning provides large gains
│
└─ Is this a one-time or repeated computation?
     ├─ ONE-TIME + small input  → Brute force acceptable
     └─ REPEATED or large input → Optimize
```
Sometimes optimizing brute force takes longer to implement than brute force takes to run. For a one-time problem with n=12, spending an hour implementing backtracking when brute force completes in 30 seconds is poor time management. Knowing when not to optimize is as valuable as knowing how.
We've thoroughly examined the brute force approach to solving combinatorial problems. Let's consolidate the key insights:
• Brute force separates generation from filtering: enumerate every candidate, then test each against the constraints.
• When both phases are implemented correctly, it is guaranteed to find all valid solutions.
• Solution spaces grow exponentially or factorially, placing hard limits on feasible input sizes.
• Most of its effort is wasted on candidates that were doomed long before they were completed.
• It remains the right choice for small inputs, reference implementations, and one-off computations.
What's Next:
Having established brute force as our baseline, we're now prepared to understand backtracking's key innovation: pruning during generation. The next page examines how incorporating constraint checking within the search process dramatically reduces the number of candidates that need to be explored, transforming exponential algorithms into practical tools for larger problem sizes.
You now have a comprehensive understanding of the brute force paradigm: its mechanics, costs, sources of inefficiency, and legitimate use cases. This foundation is essential for appreciating backtracking's elegance—it directly addresses everything we've identified as problematic about brute force.