So far, we've focused on what constraints enable—which algorithms are fast enough to pass. But equally important is what constraints eliminate—which approaches to immediately discard. Master eliminators solve problems faster not by knowing more algorithms, but by wasting no time on doomed approaches.
When facing a problem with n ≤ 10⁵, an experienced solver doesn't consider O(n²) solutions. They don't even let O(n²) ideas form fully in their mind. That entire branch of the solution tree is pruned before exploration begins. This mental discipline—ruthless elimination based on constraints—dramatically accelerates problem-solving.
By the end of this page, you'll develop the discipline to immediately eliminate infeasible approaches based on constraints. You'll learn the complete elimination checklist, understand common elimination scenarios, and gain the confidence to narrow your focus to viable solutions without second-guessing.
Elimination is not about being pessimistic—it's about being efficient. Every approach you eliminate saves time that would otherwise be wasted sketching, implementing, and debugging a solution that was never going to pass.
The Cost of Not Eliminating:
Consider a developer who doesn't practice elimination. Facing a problem with n ≤ 10⁵, they might:
1. Sketch a brute-force O(n²) solution and start coding
2. Submit, hit Time Limit Exceeded, and micro-optimize constants
3. Add caching and early exits, still failing by orders of magnitude
4. Finally step back and design an O(n log n) solution
Time wasted: potentially hours on steps 1-3.
An elimination-first solver sees n ≤ 10⁵ and immediately knows that O(n²) means ~10¹⁰ operations, far over any realistic budget. They skip directly to step 4.
Make elimination your first step, not an afterthought. Before brainstorming solutions, look at constraints and explicitly list what's eliminated. This focuses your creative energy on viable approaches only.
Use this systematic checklist when analyzing any problem. Work through each elimination category to narrow your solution space.
Step 1: Time Complexity Elimination
Based on primary constraint n:
| n Range | Eliminate These Complexities |
|---|---|
| n > 25 | O(2ⁿ), O(n!), O(n × 2ⁿ) — all exponential |
| n > 500 | O(n³), O(n⁴) — cubic and higher |
| n > 10,000 | O(n²), O(n² log n) — quadratic |
| n > 10⁶ | O(n √n), O(n log² n) — anything heavier than O(n log n) |
| n > 10⁷ | O(n log n) — even the log factor becomes too costly; aim for O(n) |
| n > 10⁸ | Anything that reads all input — need formula |
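If it helps to internalize the table, here is a small self-check sketch. The thresholds assume a rough budget of ~10⁸ simple operations per second; the function names and cutoffs are illustrative, not standard:

```typescript
// Heuristic complexity elimination, mirroring the table above.
function viableComplexities(n: number): string[] {
  const classes: Array<[string, (n: number) => number]> = [
    ["O(n!)", (n) => factorialCapped(n)],
    ["O(2^n)", (n) => Math.pow(2, n)],
    ["O(n^3)", (n) => n ** 3],
    ["O(n^2)", (n) => n ** 2],
    ["O(n sqrt(n))", (n) => n * Math.sqrt(n)],
    ["O(n log n)", (n) => n * Math.log2(n)],
    ["O(n)", (n) => n],
  ];
  const budget = 1e8; // ~10^8 simple operations per second (heuristic)
  return classes
    .filter(([, ops]) => ops(n) <= budget) // keep only what fits the budget
    .map(([name]) => name);
}

// Factorial with a cap so huge n doesn't overflow into noise.
function factorialCapped(n: number): number {
  let f = 1;
  for (let i = 2; i <= n; i++) {
    f *= i;
    if (f > 1e18) return 1e18; // already hopeless; stop early
  }
  return f;
}

// viableComplexities(1e5) → ["O(n sqrt(n))", "O(n log n)", "O(n)"]
```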
Step 2: Space Complexity Elimination
Memory limits constrain data structures:
64 MB limit:
- Cannot comfortably exceed ~1.5 × 10⁷ ints (10⁷ ints already take 40 MB, leaving little headroom)
- Cannot use adjacency matrix for graph with V > 4000
- 2D DP table limited to roughly 4000 × 4000
256 MB limit:
- Can store ~6 × 10⁷ integers
- Adjacency matrix up to ~8000 × 8000
- 2D DP up to ~8000 × 8000
1 GB limit:
- Can store ~2.5 × 10⁸ integers
- Most practical limits removed
Step 3: Structural Elimination
Problem structure eliminates entire algorithm categories regardless of complexity. The family-elimination tables later on this page cover the most common cases: shortest paths, sorting, and data structures.
Let's work through realistic constraint sets and practice elimination thinking.
Scenario 1: Large Array with Queries
Constraints:
n ≤ 10⁵ (array size)
q ≤ 10⁵ (number of queries)
Time limit: 2 seconds
Elimination Analysis:
✗ O(n × q) = 10¹⁰ — ELIMINATED (100× over budget)
✗ O(n² + q) = 10¹⁰ — ELIMINATED
✗ Per-query O(n) with no preprocessing — ELIMINATED
✓ O(n + q log n) — Segment tree with build + queries
✓ O((n + q) √n) — Mo's algorithm for offline queries
✓ O(n log n + q) — Preprocessing + O(1) queries (prefix sums, sparse table)
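For instance, if the queries are range sums on a static array, the preprocessing row reduces to plain prefix sums. A minimal sketch:

```typescript
// Range-sum queries with prefix sums: O(n) build, O(1) per query.
// Comfortably within a 2-second budget for n, q ≤ 10^5.
function buildPrefix(a: number[]): number[] {
  const prefix = new Array(a.length + 1).fill(0);
  for (let i = 0; i < a.length; i++) prefix[i + 1] = prefix[i] + a[i];
  return prefix;
}

// Sum of a[l..r] inclusive, 0-indexed.
function rangeSum(prefix: number[], l: number, r: number): number {
  return prefix[r + 1] - prefix[l];
}

// const p = buildPrefix([3, 1, 4, 1, 5]);
// rangeSum(p, 1, 3) === 6  // 1 + 4 + 1
```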
Scenario 2: Small n, Complex Problem
Constraints:
n ≤ 15
m ≤ 100
Time limit: 1 second
Elimination Analysis:
✗ Polynomial algorithms that don't use n's smallness — likely not the intended solution
✓ O(2ⁿ × m) = 32768 × 100 ≈ 3.3 × 10⁶ — acceptable
✗ O(n!) = 15! ≈ 1.3 × 10¹² — too slow without aggressive pruning
✓ Bitmask DP with an m factor — the natural fit
Insight: n = 15 is screaming for an exponential-in-n approach. Look for subset structure, as sketched below.
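A minimal sketch of the subset-enumeration pattern this constraint invites. Here `bestSubset` and the `cost` callback are hypothetical stand-ins for whatever the real problem scores:

```typescript
// Enumerate all 2^n subsets and keep the best one: O(2^n × n) overall.
// Feasible for n ≤ ~20; `cost` is a hypothetical problem-specific function.
function bestSubset(items: number[], cost: (subset: number[]) => number): number {
  const n = items.length;
  let best = -Infinity;
  for (let mask = 0; mask < (1 << n); mask++) {
    const subset: number[] = [];
    for (let i = 0; i < n; i++) {
      if (mask & (1 << i)) subset.push(items[i]); // bit i set → item i included
    }
    best = Math.max(best, cost(subset));
  }
  return best;
}
```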
Scenario 3: Graph Problem
Constraints:
V ≤ 10⁵ (vertices)
E ≤ 2 × 10⁵ (edges)
Time limit: 2 seconds
Elimination Analysis:
✗ Floyd-Warshall O(V³) = 10¹⁵ — ELIMINATED
✗ Bellman-Ford O(V × E) = 2 × 10¹⁰ — ELIMINATED
✗ Adjacency matrix (V² space) — ELIMINATED (4 × 10¹⁰ bytes)
✓ Dijkstra O((V + E) log V) ≈ 5 × 10⁶ — works
✓ BFS O(V + E) ≈ 3 × 10⁵ — works
✓ DFS O(V + E) — works
✓ Adjacency list representation — required
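The required adjacency-list representation plus an O(V + E) BFS might look like this (a sketch, not tied to any specific problem):

```typescript
// Adjacency list: O(V + E) space, versus O(V^2) for a matrix.
function buildAdjList(V: number, edges: Array<[number, number]>): number[][] {
  const adj: number[][] = Array.from({ length: V }, () => []);
  for (const [u, v] of edges) {
    adj[u].push(v);
    adj[v].push(u); // undirected; drop this line for directed graphs
  }
  return adj;
}

// BFS from `src`: O(V + E) time, unweighted shortest distances (-1 = unreachable).
function bfs(adj: number[][], src: number): number[] {
  const dist = new Array(adj.length).fill(-1);
  dist[src] = 0;
  const queue = [src];
  for (let head = 0; head < queue.length; head++) { // index-based queue avoids O(n) shift()
    const u = queue[head];
    for (const v of adj[u]) {
      if (dist[v] === -1) {
        dist[v] = dist[u] + 1;
        queue.push(v);
      }
    }
  }
  return dist;
}
```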
Scenario 4: String Problem with a Small Alphabet
Constraints:
n ≤ 10⁵ (string length)
Characters: lowercase English letters (26)
Time limit: 1 second
Elimination Analysis:
✗ O(n²) substring enumeration — ELIMINATED
✗ O(n × 26ⁿ) anything — obviously eliminated
✗ Suffix tree/array with O(n²) construction — need efficient construction
✓ O(n × 26) = 2.6 × 10⁶ — alphabet-sized inner loop per character
✓ O(n log n) — sorting-based approaches
✓ O(n) with O(26) state — linear with bounded auxiliary
Insight: The 26-letter alphabet means O(n × 26) is essentially O(n). Look for character-frequency or per-character-type solutions.
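A minimal sketch of the pattern: one O(n) pass builds per-character counts, after which any alphabet-wide question costs only O(26). Here `firstOddChar` is a hypothetical example use:

```typescript
// O(n) pass over the string; the frequency table has fixed size 26.
function charFrequencies(s: string): Int32Array {
  const freq = new Int32Array(26);
  for (let i = 0; i < s.length; i++) {
    freq[s.charCodeAt(i) - 97]++; // 'a' has char code 97
  }
  return freq;
}

// Example: first character (alphabetically) appearing an odd number of times.
function firstOddChar(s: string): string | null {
  const freq = charFrequencies(s);
  for (let c = 0; c < 26; c++) {
    if (freq[c] % 2 === 1) return String.fromCharCode(97 + c);
  }
  return null;
}
```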
During problem-solving, mentally or physically note: "Eliminated X because of Y." This prevents returning to dead ends and clarifies your thinking. In interviews, stating eliminations aloud demonstrates mature problem-solving.
Beyond complexity elimination, certain constraint patterns eliminate entire algorithm families regardless of their complexity.
Shortest Path Algorithm Elimination:
| Constraint/Scenario | Eliminate | Use Instead |
|---|---|---|
| V > 500 | Floyd-Warshall (O(V³)) | Dijkstra or Bellman-Ford per source |
| Negative edge weights | Dijkstra (doesn't handle negatives) | Bellman-Ford |
| V × E > 10⁸, negative edges | Bellman-Ford (too slow) | SPFA or problem-specific approach |
| Unweighted graph | Dijkstra (overkill) | BFS (simpler, same complexity) |
| Single source needed | Floyd-Warshall (overkill) | Dijkstra from one source |
Sorting Algorithm Elimination:
| Constraint | Eliminate | Use Instead |
|---|---|---|
| Already sorted input | Any sorting (unnecessary overhead) | Direct processing |
| Values in [0, k] with k ≤ n | Comparison sorts O(n log n) | Counting sort O(n + k) |
| n > 10⁷ | Any comparison sort (log factor hurts) | Counting/Radix if applicable |
| Need stable sort | Quick sort (unstable) | Merge sort |
| Memory constrained | Merge sort (O(n) extra space) | Heap sort (in-place) |
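A sketch of counting sort for the bounded-value row, assuming non-negative integers in [0, k]:

```typescript
// Counting sort: O(n + k) time, O(k) extra space.
// Beats comparison sorts when values are bounded by k ≈ n or smaller.
function countingSort(a: number[], k: number): number[] {
  const count = new Array(k + 1).fill(0);
  for (const x of a) count[x]++; // tally each value
  const out: number[] = [];
  for (let v = 0; v <= k; v++) {
    for (let c = 0; c < count[v]; c++) out.push(v);
  }
  return out;
}

// countingSort([3, 1, 3, 0, 2], 3) → [0, 1, 2, 3, 3]
```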
Data Structure Elimination:
| Constraint | Eliminate | Use Instead |
|---|---|---|
| n > 10⁷ with many operations | Hash map with high overhead | Direct array if values bounded |
| Need ordered iteration | Hash map / Hash set | BST / TreeSet / sorted vector |
| Need range queries + updates | Simple array (O(n) per query) | Segment tree, Fenwick tree |
| Static data, range queries | Segment tree (overkill for static) | Sparse table (simpler, O(1) query) |
| Values > 10⁹ | Array indexed by value | Hash map or coordinate compression |
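To illustrate the static-data row, a sparse-table sketch for range-minimum queries: O(n log n) build, O(1) query, no updates allowed (that is the trade-off):

```typescript
// table[j][i] holds the minimum of a[i .. i + 2^j - 1].
function buildSparseTable(a: number[]): number[][] {
  const n = a.length;
  const LOG = Math.floor(Math.log2(n)) + 1;
  const table: number[][] = [a.slice()];
  for (let j = 1; j < LOG; j++) {
    const prev = table[j - 1];
    const row: number[] = [];
    for (let i = 0; i + (1 << j) <= n; i++) {
      row.push(Math.min(prev[i], prev[i + (1 << (j - 1))])); // merge two half-windows
    }
    table.push(row);
  }
  return table;
}

// Minimum of a[l..r] inclusive: cover [l, r] with two overlapping power-of-two windows.
function queryMin(table: number[][], l: number, r: number): number {
  const j = Math.floor(Math.log2(r - l + 1));
  return Math.min(table[j][l], table[j][r - (1 << j) + 1]);
}
```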
Sometimes an algorithm at the complexity boundary might work with good constants and optimization. When in doubt, implement the clearly-viable approach first. Only consider boundary algorithms if no better option exists.
The range of values (not sizes) creates its own elimination criteria. This is often overlooked but equally important.
Large Value Ranges (a[i] ≤ 10⁹):
Values up to 10⁹ eliminate anything indexed directly by value: a 10⁹-entry array needs ~4 GB of memory, so value-indexed arrays, counting sort over the full range, and DP tables with a value dimension are all ruled out.
What to Use Instead:
Coordinate Compression — Map n distinct values to [0, n-1], then use value-indexed approaches on compressed values.
Hash Maps — O(1) amortized access for arbitrary keys. Higher constant factor but handles any value range.
Sorting + Binary Search — Sort values, then binary search for positions. O(log n) per lookup.
Value-Independent Algorithms — Many algorithms only compare values, never index by them. Sorting, segment trees with comparison, etc.
Small Value Ranges (a[i] ≤ 100):
Conversely, small value ranges enable approaches that large ranges forbid:
✓ Direct array indexing by value — 100-sized arrays are trivial
✓ Value-based DP — dp[n][100] = 100n states, manageable
✓ Frequency counting — exact counts for each value
✓ Nested loops over the value range — 100 × 100 = 10⁴ is nothing
```typescript
// Problem: Count inversions where a[i] ≤ 10^9
// Cannot use a value-indexed Fenwick tree directly
// Solution: Compress values to [0, n-1], then use a Fenwick tree

function countInversions(arr: number[]): number {
  // Step 1: Coordinate compression
  const sorted = [...arr].sort((a, b) => a - b);
  const valueToRank = new Map<number, number>();
  let rank = 0;
  for (const val of sorted) {
    if (!valueToRank.has(val)) {
      valueToRank.set(val, rank++);
    }
  }

  // Step 2: Map original values to ranks
  const compressed = arr.map(v => valueToRank.get(v)!);

  // Step 3: Use a Fenwick tree indexed by compressed values.
  // Values are now in [0, n-1], so a tree of size n is feasible;
  // original values up to 10^9 would require a 10^9-sized tree!
  return countWithFenwick(compressed);
}

// Fenwick tree (filled in here so the example runs end-to-end):
// for each element, count how many previously-seen values are greater.
function countWithFenwick(ranks: number[]): number {
  const size = ranks.length;
  const tree = new Array(size + 1).fill(0);
  const add = (i: number) => {
    for (i++; i <= size; i += i & -i) tree[i]++;
  };
  const countUpTo = (i: number) => { // how many seen ranks are ≤ i
    let s = 0;
    for (i++; i > 0; i -= i & -i) s += tree[i];
    return s;
  };
  let inversions = 0;
  for (let i = 0; i < ranks.length; i++) {
    inversions += i - countUpTo(ranks[i]); // seen so far minus those ≤ current
    add(ranks[i]);
  }
  return inversions;
}

// Key insight: we only care about relative order, not absolute values.
// Compression preserves relative order while enabling value-indexed structures.
```

Memory constraints eliminate algorithms independently of time complexity. An algorithm can be fast enough but still fail due to space requirements.
Memory Calculation Quick Reference:
1 int = 4 bytes
1 long long = 8 bytes
1 pointer = 8 bytes (64-bit)
1 object in Java/C# = ~16+ bytes overhead + fields
1 MB = 10^6 bytes ≈ 250,000 ints
64 MB ≈ 16 million ints
256 MB ≈ 64 million ints
1 GB ≈ 250 million ints
| Memory Limit | Maximum Viable | Eliminate |
|---|---|---|
| 64 MB | ~1.5 × 10⁷ ints, 2D: 4000 × 4000 | Large graphs as matrix, full 2D DP for n > 4000 |
| 128 MB | ~3 × 10⁷ ints, 2D: 5500 × 5500 | n × n matrices for n > 5500 |
| 256 MB | ~6 × 10⁷ ints, 2D: 8000 × 8000 | Very large 2D structures |
| 512 MB | ~1.2 × 10⁸ ints | Only the most extreme cases |
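The arithmetic behind these limits is simple enough to sanity-check in code. A trivial sketch, using 4 bytes per 32-bit int and 1 MB taken as 10⁶ bytes to match the reference above:

```typescript
// Rough check: do `count` 32-bit ints fit within `limitMB`?
function fitsInMemory(count: number, limitMB: number): boolean {
  return count * 4 <= limitMB * 1e6; // 4 bytes per int, 1 MB ≈ 10^6 bytes
}

// fitsInMemory(8000 * 8000, 256) → true  (exactly at the 256 MB boundary)
// fitsInMemory(8000 * 8000, 64)  → false (eliminated, as the table shows)
```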
Common Memory Elimination Patterns:
1. Graph Representation: an adjacency matrix costs O(V²) memory; beyond a few thousand vertices, use adjacency lists at O(V + E).
2. 2D DP Optimization: when the recurrence reads only the previous row, roll the table down to two rows (see the tip and sketch below).
3. Recursion Stack: recursion depth near 10⁵-10⁶ can overflow the stack even when heap memory is fine; convert to iteration or raise the stack size.
4. Object Overhead: per-object headers in managed languages multiply memory use; prefer primitive arrays for large n.
When a DP recurrence only uses the previous 1-2 rows, replace dp[n][m] with dp[2][m] and alternate rows. This reduces O(n×m) space to O(m), often the difference between passing and failing memory limits.
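A sketch of the rolling-row trick applied to edit distance, whose recurrence reads only the previous row. Two swapped arrays are equivalent to the dp[2][m] formulation in the tip:

```typescript
// Edit distance with two rolling rows instead of a full (n+1) × (m+1) table.
// Space drops from O(n × m) to O(m); time stays O(n × m).
function editDistance(a: string, b: string): number {
  const m = b.length;
  let prev = Array.from({ length: m + 1 }, (_, j) => j); // row for empty prefix of a
  let cur = new Array(m + 1).fill(0);
  for (let i = 1; i <= a.length; i++) {
    cur[0] = i; // deleting the first i characters of a
    for (let j = 1; j <= m; j++) {
      const sub = prev[j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1);
      cur[j] = Math.min(sub, prev[j] + 1, cur[j - 1] + 1);
    }
    [prev, cur] = [cur, prev]; // alternate the two rows
  }
  return prev[m];
}

// editDistance("kitten", "sitting") === 3
```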
In technical interviews, explicitly stating what you're eliminating and why demonstrates structured thinking and earns credit even before you solve the problem.
The Elimination Dialogue:
Interviewer: "Find all pairs in an array that sum to a target. The array can have up to 10 million elements."
Strong Candidate Response:
"Ten million elements means n = 10⁷. Let me think about complexity constraints first.
A brute-force approach checking all pairs would be O(n²), which is 10¹⁴ operations—way too slow. That's eliminated immediately.
I need O(n log n) or better. Sorting plus two pointers would give O(n log n)—that's about 2.3 × 10⁸ operations, borderline but likely acceptable.
Even better, using a hash set for O(1) lookups gives me O(n) overall—just 10⁷ operations, very safe.
I'll go with the hash set approach. Does that sound reasonable before I implement?"
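A minimal sketch of the hash-set approach the candidate lands on (it reports each matching pair as encountered; deduplication is left out for brevity):

```typescript
// Find pairs summing to `target` in O(n) expected time using a hash set.
// For n = 10^7 this is ~10^7 operations — well within budget.
function pairsWithSum(a: number[], target: number): Array<[number, number]> {
  const seen = new Set<number>();
  const pairs: Array<[number, number]> = [];
  for (const x of a) {
    if (seen.has(target - x)) pairs.push([target - x, x]); // complement seen earlier
    seen.add(x);
  }
  return pairs;
}

// pairsWithSum([1, 4, 3, 2, 5], 6) → [[4, 2], [1, 5]]
```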
Phrases That Signal Strong Elimination Skills:
- "That's O(n²), about 10¹⁴ operations at this size, so I'm ruling it out immediately."
- "With n this small, the constraint is practically asking for an exponential approach."
- "Before committing, let me confirm this fits both the time and the memory budget."
Handling Follow-Up Eliminators:
Interviewers often follow up with: "What if the array is sorted?" or "What if memory is limited?"
These are elimination questions in disguise: a sorted array eliminates the need for hashing (two pointers with O(1) extra space become the better answer), and a tight memory limit eliminates the hash set (sort in place instead).
When you correctly eliminate approaches, interviewers trust your final solution more. They know you've considered alternatives. Even if you struggle with implementation, demonstrated elimination skills show senior-level thinking.
Integrate elimination into your standard problem-solving workflow for maximum efficiency.
The Workflow:
```typescript
// STEP 1: Read constraints BEFORE reading the problem details
// Note: n = ?, m = ?, values = ?, memory = ?, time = ?

// STEP 2: Immediate complexity elimination
// Ask: What's the maximum viable complexity?
// n <= 10^5 → need O(n log n) or better
// -> ELIMINATE: all O(n²) approaches

// STEP 3: Read the problem
// Understand what's asked

// STEP 4: Structural elimination
// Ask: What structural constraints exist?
// Is it a tree? Graph? Sorted? Static?
// -> ELIMINATE: algorithms requiring structures we don't have

// STEP 5: Value-based elimination
// Ask: What are the value ranges?
// a[i] <= 10^9 → ELIMINATE: value-indexed arrays
// a[i] <= 100 → ENABLE: value-based DP

// STEP 6: Memory elimination
// Ask: What space can I afford?
// 256 MB, n = 10^5 → 2D array up to ~8000 x 8000
// -> ELIMINATE: n x n matrix if n > 8000

// STEP 7: Generate solutions within surviving space
// Only consider algorithms that passed all elimination criteria

// STEP 8: Verify chosen solution
// Calculate exact complexity, confirm it passes

// STEP 9: Implement
```

The Decision Matrix:
After elimination, you often narrow down to 1-3 viable approaches. Create a mental decision matrix:
| Approach | Time | Space | Implementation Difficulty | Verdict |
|---|---|---|---|---|
| Segment Tree | O(n log n) ✓ | O(n) ✓ | High | Viable |
| Mo's Algorithm | O(n √n) ✓ | O(n) ✓ | Medium | Viable |
| Brute Force | O(n²) ✗ | O(1) | Low | Eliminated |
If multiple approaches are viable, choose based on implementation difficulty, your familiarity with each technique, and constant factors. The simplest approach that comfortably fits the budget usually wins.
Elimination-first solvers are faster not because they know more algorithms, but because they waste no mental energy on dead ends. In a 45-minute interview, eliminating bad approaches in the first 2 minutes leaves 43 minutes for viable solutions. That's a competitive advantage.
You've now completed the full toolkit for constraint-based algorithm selection. Let's consolidate the key principles from this module:
The Master Problem-Solving Loop:
1. Read constraints
2. Eliminate infeasible approaches
3. Identify viable complexity classes
4. Match algorithms to problem requirements
5. Choose the simplest viable solution
6. Verify complexity against budget
7. Implement with confidence
What's Next:
With constraint-based algorithm selection mastered, the next module explores Edge Cases & Testing Strategies—learning to identify the inputs that break solutions and systematically verify correctness.
You now possess one of the most valuable skills in algorithmic problem-solving: the ability to look at constraints and immediately know which algorithms to consider and which to reject. This skill compounds with experience; every problem you solve reinforces your constraint intuition.
Congratulations on completing Module 4: Constraint-Based Algorithm Selection. You've learned to read constraints as hidden solution hints, estimate required complexities, map constraints to algorithms, and eliminate infeasible approaches. These skills will serve you in every coding interview, competitive programming contest, and real-world engineering challenge you face.