You've learned to read constraints. Now let's use them systematically.
Most beginners approach algorithm selection backwards. They think: "What algorithms do I know?" then try to fit one to the problem. Engineers think: "What do the constraints allow?" and let that filter their options.
This difference is profound. The constraint-first approach narrows the search space before you write a line of code, eliminates dead ends early, and points you toward the solution class the problem setter intended.
This page teaches you to use constraints as a systematic filter for algorithm selection.
By the end of this page, you will have a systematic framework for translating constraints into algorithm requirements, understand how to use constraints to eliminate infeasible approaches, and know how to match problem types with appropriate algorithm classes.
Here's a systematic approach to algorithm selection that professionals use. Follow these steps in order:
Step 1: Extract All Constraints
Read the problem statement carefully. Identify the input size bounds, the time limit, the memory limit, and any special properties of the input (sorted, distinct values, tree structure, and so on).
Step 2: Calculate the Complexity Ceiling
Using the time limit and input size, determine the maximum complexity class. A useful rule of thumb is roughly 10⁸ simple operations per second.
Step 3: Eliminate Infeasible Approaches
Cross off any algorithm that exceeds your complexity ceiling. Be ruthless—if O(n²) won't fit, don't consider O(n²) algorithms no matter how elegant.
Step 4: Match Problem Pattern to Algorithm Class
Among the remaining feasible approaches, match the problem type to known algorithm patterns.
Step 5: Verify Against Memory Constraints
Confirm your chosen approach fits memory. A brilliant O(n log n) algorithm is useless if it needs O(n²) space.
Before writing any code, spend 30 seconds on constraint analysis. Calculate n², n log n, and compare to 10⁸. This tiny investment prevents the most common competitive programming mistake: implementing an algorithm that can never pass.
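That 30-second calculation can even be sketched as code. The helper below (`feasible` is a hypothetical name for illustration) uses the rule of thumb of roughly 10⁸ simple operations per second:

```python
import math

# Rough feasibility check, assuming ~10^8 simple operations per second.
OPS_PER_SECOND = 10**8

def feasible(n, complexity, time_limit_sec=1.0):
    """Estimate whether a complexity class fits the time limit for input size n."""
    ops = {
        "n": n,
        "n log n": n * max(1, math.log2(n)),
        "n^2": n**2,
        "n^3": n**3,
        "2^n": 2**n if n <= 60 else float("inf"),
    }[complexity]
    return ops <= OPS_PER_SECOND * time_limit_sec

print(feasible(10**5, "n log n"))  # True: ~1.7e6 operations
print(feasible(10**5, "n^2"))      # False: 1e10 operations
```

The exact constant varies by language and judge, so treat the output as a first filter, not a guarantee.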
Experienced engineers recognize common constraint patterns instantly. Each pattern implies specific algorithm classes:
Pattern 1: Very Small n (n ≤ 20)
Implication: Exponential algorithms are viable.
Signals: The problem wants you to explore all possibilities.
Typical approaches: complete brute force, bitmask DP, recursion over all subsets or permutations.
Example constraint: "n ≤ 15, find minimum cost to visit all cities"
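For that example constraint, a bitmask DP (Held–Karp) is the standard fit: with n ≤ 15, the O(2ⁿ · n²) state space is only a few million entries. A minimal sketch, assuming a complete distance matrix and a tour that starts and ends at city 0:

```python
# Held-Karp bitmask DP for TSP on a small distance matrix (n <= ~15).
# dp[mask][i] = min cost to visit the set of cities in `mask`, ending at city i.
def tsp_min_cost(dist):
    n = len(dist)
    INF = float("inf")
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0  # start at city 0
    for mask in range(1 << n):
        for last in range(n):
            cur = dp[mask][last]
            if cur == INF:
                continue
            for nxt in range(n):
                if mask & (1 << nxt):
                    continue  # already visited
                new_mask = mask | (1 << nxt)
                cost = cur + dist[last][nxt]
                if cost < dp[new_mask][nxt]:
                    dp[new_mask][nxt] = cost
    full = (1 << n) - 1
    # Return to the start to close the tour.
    return min(dp[full][i] + dist[i][0] for i in range(n))

# 4 cities arranged in a square: the optimal tour has length 4.
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
print(tsp_min_cost(dist))  # 4
```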
Pattern 2: Medium n (n ≤ 1,000 to 5,000)
Implication: O(n²) is acceptable, maybe O(n³) for smaller values.
Signals: Classic DP, nested loops, or graph algorithms on dense graphs.
Typical approaches: 2D DP tables, nested loops over all pairs, Floyd-Warshall on dense graphs when n is small enough for O(n³).
Example constraint: "n ≤ 2,000, find longest common subsequence"
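For that example constraint, the classic 2D DP fits comfortably: 2,000 × 2,000 = 4 × 10⁶ table entries. A minimal sketch:

```python
# Classic O(n*m) longest common subsequence via a 2D DP table.
# For n, m <= 2000 this is ~4e6 entries -- well within an O(n^2) budget.
def lcs_length(a, b):
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend the common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```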
Pattern 3: Large n (n ≤ 10⁵ to 10⁶)
Implication: O(n log n) or O(n) required.
Signals: Sorting, binary search, divide and conquer, or specialized data structures.
Typical approaches: sorting, binary search, divide and conquer, segment trees or binary indexed trees.
Example constraint: "n ≤ 10⁵, count inversions in array"
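For that example constraint, the naive all-pairs count is O(n²) = 10¹⁰ and fails; the standard O(n log n) approach piggybacks on merge sort. A minimal sketch:

```python
# Counting inversions with merge sort: O(n log n), fits n <= 1e5 easily,
# while the naive all-pairs count is O(n^2) and does not.
def count_inversions(a):
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, inv_l = count_inversions(a[:mid])
    right, inv_r = count_inversions(a[mid:])
    merged, inv = [], inv_l + inv_r
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            # Everything remaining in `left` is greater than right[j]:
            # each such element forms an inversion with it.
            inv += len(left) - i
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, inv

_, inv = count_inversions([2, 4, 1, 3, 5])
print(inv)  # 3: pairs (2,1), (4,1), (4,3)
```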
Pattern 4: Very Large n (n ≤ 10⁷ to 10⁸)
Implication: Must be O(n) or close to it.
Signals: Linear scans, counting, prefix sums, simple arithmetic.
Typical approaches: linear scans, counting arrays, prefix sums, two pointers.
Example constraint: "n ≤ 10⁷, find max subarray sum"
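For that example constraint, Kadane's algorithm is the textbook answer: a single O(n) pass, which is exactly what an n of 10⁷ demands. A minimal sketch:

```python
# Kadane's algorithm: single O(n) pass, the right tool when n can be 1e7.
def max_subarray_sum(a):
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)   # extend the current run, or start fresh at x
        best = max(best, cur)
    return best

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6 ([4, -1, 2, 1])
```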
Pattern 5: Astronomical n (n ≤ 10⁹ or more)
Implication: O(log n) or O(1) only. Cannot even read input linearly!
Signals: Mathematical formulas, binary search on answer, matrix exponentiation.
Typical approaches: closed-form mathematical formulas, binary search on the answer, matrix exponentiation.
Example constraint: "n ≤ 10¹⁸, find nth Fibonacci number mod 10⁹+7"
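For that example constraint, matrix exponentiation computes F(n) in O(log n) multiplications, so n up to 10¹⁸ is no obstacle (Python's arbitrary-precision integers keep the intermediate math safe). A minimal sketch:

```python
MOD = 10**9 + 7

# Multiply two 2x2 matrices modulo MOD.
def mat_mul(A, B):
    return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % MOD,
             (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % MOD],
            [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % MOD,
             (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % MOD]]

# F(n) mod MOD in O(log n) via fast exponentiation of the transition matrix.
def fib(n):
    result = [[1, 0], [0, 1]]   # identity
    base = [[1, 1], [1, 0]]     # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]

print(fib(10))  # 55
```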
| Input Range | Max Complexity | Typical Techniques | Problem Signal |
|---|---|---|---|
| n ≤ 20 | O(2ⁿ), O(n!) | Brute force, bitmask DP | "All possibilities" |
| n ≤ 500 | O(n³) | 3D DP, Floyd-Warshall | "Dense graph, all pairs" |
| n ≤ 5,000 | O(n²) | 2D DP, nested loops | "Classic DP" |
| n ≤ 10⁶ | O(n log n) | Sorting, trees, binary search | "Efficient processing" |
| n ≤ 10⁸ | O(n) | Linear scan, counting | "Single pass" |
| n > 10⁸ | O(log n), O(1) | Math, binary search | "Formula or search" |
Real problems often have multiple constraints that interact. Analyzing them together reveals more than analyzing each alone.
The Query Pattern: Large n, Large Q
n ≤ 10⁵ elements, Q ≤ 10⁵ queries
Analysis: answering each query with an O(n) scan costs O(nQ) = 10¹⁰ operations—far beyond any reasonable time limit.
Implication: You need preprocessing or logarithmic queries. This pattern signals segment trees, binary indexed trees, or prefix sums.
The Reverse Pattern: Large n, Tiny Q
n ≤ 10⁶ elements, Q ≤ 10 queries
Analysis: even a full O(n) scan per query costs only 10 × 10⁶ = 10⁷ operations in total.
Implication: Simple per-query algorithms are fine. Don't waste time on complex preprocessing.
The Grid Pattern: n × m Constraints
Grid of n × m cells, n, m ≤ 1,000
Analysis: up to 10⁶ cells in total, so an O(n × m) pass over the grid is fine, but comparing all pairs of cells is O(n²m²) = 10¹² operations.
Implication: Standard grid DP is fine. Four-loop approaches (two coordinates each) are too slow.
The Sum Constraint Pattern
Sum of n across all test cases ≤ 2 × 10⁵
This is a game-changer! It means the total work is bounded across all test cases: even if one case has n = 2 × 10⁵, the remaining cases must be tiny, so an O(n) or O(n log n) solution per case stays within budget overall.
Don't be intimidated by "n ≤ 10⁵" when there's a sum constraint—the amortized budget is what matters.
The Product Constraint Pattern
n × m ≤ 10⁶
This allows asymmetric dimensions: the grid could be 1,000 × 1,000, but also 1 × 10⁶ or 10⁶ × 1.
Implication: Algorithm must handle all shapes. O(n × m) total work is fine.
The Edge vs Node Pattern (Graphs)
n ≤ 10⁵ nodes, m ≤ 2 × 10⁵ edges
Analysis: the graph is sparse (m ≪ n²). An adjacency matrix would need n² = 10¹⁰ entries—far beyond memory—while an adjacency list needs only O(n + m).
Implication: Use adjacency list, not matrix. Dijkstra with heap O((n + m) log n) works.
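A minimal Dijkstra sketch over an adjacency list with a binary heap, as implied by this constraint pattern (the "stale entry" skip is the standard lazy-deletion trick):

```python
import heapq

# Dijkstra with a binary heap over an adjacency list: O((n + m) log n).
# An adjacency matrix would need n^2 = 1e10 entries for n = 1e5 -- impossible.
def dijkstra(adj, src):
    n = len(adj)
    dist = [float("inf")] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale entry: a shorter path to u was already found
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Small example: edges 0->1 (4), 0->2 (1), 2->1 (2)
adj = [[(1, 4), (2, 1)], [], [(1, 2)]]
print(dijkstra(adj, 0))  # [0, 3, 1]
```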
Never analyze constraints in isolation. The combination 'n ≤ 10⁵ AND Q ≤ 10⁵' tells you something different than 'n ≤ 10⁵ AND Q ≤ 10'. Write down all constraints, then analyze their product and interactions.
Once you know your complexity ceiling, you can match problem types to algorithm classes. Here are the most common mappings:
For O(n²) budget:
Sorting problems: simple quadratic sorts are acceptable; nothing fancier is needed.
Subsequence problems: 2D DP such as longest common subsequence or edit distance.
Comparison problems: check every pair with nested loops.
Graph problems: dense-graph algorithms; Floyd-Warshall if n is small enough for O(n³).
For O(n log n) budget:
Sorting-based problems: sort once, then sweep or use two pointers.
Divide and conquer: merge-sort-style recursion (e.g., counting inversions).
Tree/set operations: segment trees, binary indexed trees, balanced search trees.
Search problems: binary search on the array or on the answer.
For O(n) budget:
Linear scans: running statistics such as Kadane's maximum subarray.
Counting problems: frequency arrays and hash maps.
Stack/queue patterns: monotonic stacks, sliding-window maxima.
Two pointers (on sorted or structured data): pair finding and window shrinking.
For O(log n) or O(1) budget:
Mathematical: closed-form formulas, modular exponentiation.
Binary search variants: binary search on a sorted space or on the answer.
Precomputed lookups: answer queries from tables built ahead of time.
| Budget | Problem Pattern | Algorithm Class |
|---|---|---|
| O(n²) | Compare all pairs | Nested loops |
| O(n²) | 2D sequence comparison | DP with 2D table |
| O(n log n) | Find pairs with property | Sort + two pointers |
| O(n log n) | Range queries (multiple) | Segment tree / BIT |
| O(n) | Single-pass statistics | Kadane's, prefix sums |
| O(n) | Find element/pair | Hash table |
| O(log n) | Search in sorted space | Binary search |
| O(1) | Direct calculation | Math formula |
Let's walk through complete examples of constraint-driven algorithm selection.
Example 1: Two Sum
Problem: Given array of n integers, find if any two sum to target. Constraints: n ≤ 10⁵, time limit 1 second.
Step 1: Identify constraints. n ≤ 10⁵, time limit 1 second, so roughly 10⁸ operations.
Step 2: Calculate complexity ceiling. n² = 10¹⁰ is far too slow; n log n ≈ 1.7 × 10⁶ and O(n) fit comfortably.
Step 3: Eliminate infeasible approaches. The brute-force check of all pairs is O(n²)—cross it off.
Step 4: Select approach. A hash set gives O(n) expected time: for each element, check whether target minus that element has already been seen.
Step 5: Verify memory. A hash set of up to 10⁵ integers is a few megabytes at most—fine.
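The selected hash-set approach can be sketched in a few lines:

```python
# Hash-set two sum: O(n) expected time, well under the O(n log n) ceiling.
def has_pair_with_sum(a, target):
    seen = set()
    for x in a:
        if target - x in seen:
            return True
        seen.add(x)
    return False

print(has_pair_with_sum([3, 8, 2, 5], 10))  # True (8 + 2)
print(has_pair_with_sum([3, 8, 2, 5], 4))   # False
```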
Example 2: Range Sum Queries
Problem: Given array of n integers, answer Q queries for sum of elements in range [l, r]. Constraints: n ≤ 10⁵, Q ≤ 10⁵, time limit 2 seconds.
Step 1: Identify constraints. n ≤ 10⁵, Q ≤ 10⁵, time limit 2 seconds.
Step 2: Calculate complexity ceiling. Roughly 2 × 10⁸ operations total. An O(n) scan per query costs O(nQ) = 10¹⁰, so per-query work must be O(log n) or O(1).
Step 3: Eliminate infeasible approaches. Re-scanning the range for every query is out.
Step 4: Select approach. Prefix sums: O(n) preprocessing, then each query is answered in O(1) by subtracting two prefix values. (A segment tree also works, and becomes necessary if point updates are added.)
Step 5: Verify memory. One extra array of n + 1 sums—trivial.
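The prefix-sum approach in a minimal sketch (0-based, inclusive ranges):

```python
# Prefix sums: O(n) preprocessing, then each [l, r] sum query is O(1).
def build_prefix(a):
    pre = [0]
    for x in a:
        pre.append(pre[-1] + x)  # pre[i] = sum of a[0..i-1]
    return pre

def range_sum(pre, l, r):        # inclusive 0-based range [l, r]
    return pre[r + 1] - pre[l]

pre = build_prefix([2, 7, 1, 8, 3])
print(range_sum(pre, 1, 3))  # 16 (7 + 1 + 8)
```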
Example 3: All-Pairs Shortest Paths
Problem: Given weighted graph, find shortest path between every pair of nodes. Constraints: n ≤ 400 nodes, m ≤ n², time limit 2 seconds.
Step 1: Identify constraints. n ≤ 400 nodes, up to n² edges (the graph may be dense), time limit 2 seconds.
Step 2: Calculate complexity ceiling. n³ = 6.4 × 10⁷ operations—O(n³) fits.
Step 3: Eliminate infeasible approaches. Anything slower than O(n³) is out; there is no need to look further, since O(n³) suffices.
Step 4: Select approach. Floyd-Warshall: the classic O(n³) all-pairs algorithm, simple to implement and well suited to dense graphs.
Step 5: Verify memory. An n × n distance matrix is 160,000 entries—fine.
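A minimal Floyd-Warshall sketch over a distance matrix (`INF` marks missing edges):

```python
# Floyd-Warshall: O(n^3). For n = 400 that's 6.4e7 operations -- it fits.
def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):           # allow k as an intermediate node
        for i in range(n):
            dik = dist[i][k]
            for j in range(n):
                if dik + dist[k][j] < dist[i][j]:
                    dist[i][j] = dik + dist[k][j]
    return dist

INF = float("inf")
d = [[0, 5, INF],
     [INF, 0, 2],
     [1, INF, 0]]
print(floyd_warshall(d)[0][2])  # 7 (path 0 -> 1 -> 2)
```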
Example 4: Subset Sum (Small n)
Problem: Given n integers, check if any subset sums to target. Constraints: n ≤ 20, integers up to 10⁹, time limit 2 seconds.
Step 1: Identify constraints. n ≤ 20, values up to 10⁹, time limit 2 seconds.
Step 2: Calculate complexity ceiling. 2²⁰ ≈ 10⁶ subsets—exponential enumeration is viable.
Step 3: Analyze approaches. The classic O(n × target) DP table is out: values (and hence the target) can reach 10⁹, so the table would be far too large. The tiny n is the real signal.
Step 4: Select approach. Enumerate all 2ⁿ subsets directly; with larger n (up to ~40), meet in the middle would split the set and combine sorted half-sums.
Step 5: Verify memory. Direct enumeration needs only O(1) extra memory.
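A minimal brute-force enumeration sketch, which is exactly what n ≤ 20 invites:

```python
# Brute-force subset enumeration: 2^20 ~ 1e6 subsets -- viable because n <= 20.
# Values up to 1e9 rule out the classic O(n * target) DP table.
def subset_sums_to(a, target):
    n = len(a)
    for mask in range(1 << n):       # each bitmask encodes one subset
        s = 0
        for i in range(n):
            if mask & (1 << i):
                s += a[i]
        if s == target:
            return True
    return False

print(subset_sums_to([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
print(subset_sums_to([3, 34, 4, 12, 5, 2], 30))  # False
```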
In competitive programming, constraints are carefully chosen. 'n ≤ 20' almost always means exponential is expected. 'n ≤ 10⁵' almost always means O(n log n) or O(n). Problem setters pick constraints to allow intended solutions and reject naive ones.
Some constraints carry special meaning beyond raw numbers. Learn to recognize these signals.
"Values are small" (values ≤ 10⁶ with large n)
Signal: Counting sort or value-indexed DP might work.
Example: "n ≤ 10⁷ elements, each element ≤ 10⁶"
"Array is sorted" or "Sorted in non-decreasing order"
Signal: Binary search is available. Two pointers work.
Example: "Given sorted array, find pair summing to target"
"All elements are distinct" or "Unique integers"
Signal: No duplicates to handle. Simpler logic.
Example: "Array of n distinct integers"
"Elements are 0 or 1" or "Binary values"
Signal: Bit manipulation, special counting techniques.
Example: "Given binary array, find longest subarray with equal 0s and 1s"
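For that example, the standard trick is to map 0 to -1, turning "equal 0s and 1s" into "subarray with sum 0", then track the first index at which each prefix sum appears. A minimal sketch:

```python
# Transform trick: map 0 -> -1, so the longest balanced subarray becomes the
# longest subarray with sum 0. Record the earliest index of each prefix sum.
def longest_equal_01(a):
    first = {0: -1}  # prefix sum -> earliest index where it occurred
    best = s = 0
    for i, x in enumerate(a):
        s += 1 if x == 1 else -1
        if s in first:
            best = max(best, i - first[s])  # same prefix sum => zero-sum span
        else:
            first[s] = i
    return best

print(longest_equal_01([0, 1, 0, 0, 1, 1, 0]))  # 6
```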
"Coordinates up to 10⁹" (but n is small)
Signal: Coordinate compression needed if using arrays.
Example: "n ≤ 10⁵ points with coordinates up to 10⁹"
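Coordinate compression itself is short: sort the distinct values and replace each coordinate by its rank, so up to 10⁵ huge coordinates become indices 0..n-1 usable with ordinary arrays. A minimal sketch:

```python
# Coordinate compression: map coordinates up to 1e9 down to ranks 0..n-1.
def compress(coords):
    sorted_unique = sorted(set(coords))
    rank = {v: i for i, v in enumerate(sorted_unique)}
    return [rank[v] for v in coords], sorted_unique

ranks, values = compress([1000000000, 5, 700, 5])
print(ranks)  # [2, 0, 1, 0]
```

The second return value lets you map ranks back to original coordinates when reporting answers.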
"Tree with n nodes" or "Connected graph with n-1 edges"
Signal: Tree-specific algorithms apply. No cycles.
Example: "Given tree with n ≤ 10⁵ nodes"
"Queries are online" or "Answer each query before seeing next"
Signal: Can't preprocess queries. Need persistent or online structures.
Contrast: "Offline queries" means you can reorder them for efficiency (e.g., Mo's algorithm).
"Modulo 10⁹+7" or "Answer mod M"
Signal: Answer is huge. Need modular arithmetic.
Example: "Find number of subsequences, answer mod 10⁹+7"
"Exactly k" vs "At most k"
Signal: "Exactly k" often uses inclusion-exclusion or sliding window. "At most k" might use binary search on k.
| Constraint Phrase | What It Signals | Typical Technique |
|---|---|---|
| "Array is sorted" | Pre-sorted input | Binary search, two pointers |
| "Elements are distinct" | No duplicates | Simpler algorithms, direct hash |
| "Values ≤ 10⁶" | Small value range | Counting sort, value-indexed DP |
| "Binary values (0/1)" | Special structure | Bit manipulation, transform tricks |
| "Coordinates up to 10⁹" | Sparse space | Coordinate compression |
| "Tree structure" | No cycles | Tree DP, DFS, LCA |
| "Answer mod 10⁹+7" | Large answer | Modular arithmetic throughout |
Here's a decision flowchart you can apply to any problem:
START: Read problem constraints
↓
[1] Is n very small (≤ 20)?
→ YES: Consider exponential algorithms (2ⁿ, n!)
→ NO: Continue
↓
[2] Is n small-medium (≤ 5,000)?
→ YES: O(n²) algorithms are viable
→ NO: Continue
↓
[3] Is n large (≤ 10⁶)?
→ YES: Need O(n log n) or O(n)
→ NO: Continue (very large n)
↓
[4] Is n huge (> 10⁶)?
→ YES: Need O(n) or better. Maybe O(log n) or O(1).
After determining complexity budget, ask:
[5] Are there Q queries?
→ Q large: Need fast per-query (O(log n) or O(1))
→ Q small: Per-query can be O(n)
↓
[6] What's the memory budget?
→ Calculate if 2D arrays fit
→ Consider in-place algorithms if tight
↓
[7] Are there special constraints?
→ Sorted input: Two pointers, binary search
→ Small values: Counting-based approaches
→ Tree structure: Tree-specific algorithms
Seriously consider printing this flowchart (or keeping it in your notes) until the process becomes automatic. Every competitive programmer has internalized this logic through practice.
Constraints are the first thing to analyze—before you even think about algorithm specifics. They narrow your options, eliminate dead ends, and point toward the expected solution class.
What's next:
We've covered how constraints guide algorithm selection. But what about when multiple viable algorithms exist with different trade-offs? The next page explores time vs. space trade-offs—how to choose when you can trade memory for speed or vice versa.
You now have a systematic framework for constraint-driven algorithm selection. Practice applying this framework consciously on every problem until it becomes second nature.