This page is your definitive reference for constraint-based algorithm selection. When you see a constraint like n ≤ 10⁵, you should immediately know which complexity classes are viable and which algorithms typically achieve those complexities.
These mappings aren't arbitrary—they emerge from the fundamental relationship between input sizes, operation counts, and execution time. Problem setters worldwide use these same thresholds when designing problems, making this knowledge universally applicable across competitive programming platforms, technical interviews, and real-world engineering.
Bookmark this page. When facing a problem, identify the primary constraint, look up the corresponding row in the reference table, and you'll have an immediate list of viable algorithmic approaches. Each constraint range is paired with typical algorithms that achieve the required complexity.
The following table is the core reference for constraint-based algorithm selection. Each row represents a constraint range and the algorithms that typically work within that range.
Reading the Table:
| n Range | Max Complexity | Typical Algorithms | Key Insight |
|---|---|---|---|
| n ≤ 10-12 | O(n!), O(n² × 2ⁿ) | Brute force permutations, TSP exact, complete search | Any algorithm works; brute force is expected |
| n ≤ 15-18 | O(2ⁿ × poly) | Bitmask DP, subset sum, Hamiltonian path DP | Exponential in n is required; look for subset structure |
| n ≤ 20-25 | O(2ⁿ), O(3ⁿ/poly) | Meet-in-the-middle, complete subset enumeration | Split into halves, exponential but clever |
| n ≤ 40-50 | O(2^(n/2)) | Meet-in-the-middle, split search space | Direct 2ⁿ too slow; split required |
| n ≤ 100 | O(n³), O(n⁴) | Floyd-Warshall, cubic DP, O(n⁴) rarely | Higher polynomial OK; watch for n⁴ |
| n ≤ 400-500 | O(n³) | Matrix operations, all-pairs shortest path, 3-nested loops | n³ borderline; optimize constants |
| n ≤ 2000-3000 | O(n²) comfortable | Quadratic DP, LCS, Edit Distance, simple nested loops | Quadratic is expected and safe |
| n ≤ 5000-10000 | O(n²) tight | Optimized quadratic, low-constant n² algorithms | n² works but watch constants |
| n ≤ 10⁵ | O(n √n), O(n log²n) | Mo's algorithm, sqrt decomposition, log² queries | Need sub-quadratic; log factors OK |
| n ≤ 2-5 × 10⁵ | O(n log n) | Sorting, segment/Fenwick trees, binary search, mergesort-based | Standard efficient algorithms |
| n ≤ 10⁶ | O(n log n) comfortable | All O(n log n) algorithms with good constants | Log factor is cheap at this scale |
| n ≤ 10⁷ | O(n) or O(n log n) tight | Linear algorithms, prefix sums, single-pass streaming | Approaching linear-only territory |
| n ≤ 10⁸ | O(n) required | Linear scan, counting sort, streaming algorithms | Must be linear; no log factor room |
| n > 10⁸ | O(log n), O(√n), O(1) | Binary search, math formulas, number theory | Reading input is the bottleneck; compute must be instant |
When n is very small (typically ≤ 25), the problem is explicitly designed for exponential-time algorithms. This is not a suggestion—it's a requirement. The problem cannot be solved polynomially, or the polynomial solution is not the intended approach.
Why These Specific Numbers?

The thresholds all trace back to the same budget: roughly 10⁸ simple operations per second of allowed runtime. An exponential algorithm fits that budget only for very small n, as the classic bitmask DP below shows.
```typescript
// Classic Bitmask DP: Traveling Salesman Problem
// n cities, find minimum cost tour visiting all exactly once

function tsp(n: number, dist: number[][]): number {
  const INF = Number.MAX_SAFE_INTEGER;
  // dp[mask][i] = min cost to visit cities in mask, ending at city i
  const dp: number[][] = Array(1 << n).fill(null)
    .map(() => Array(n).fill(INF));
  dp[1][0] = 0; // Start at city 0

  for (let mask = 1; mask < (1 << n); mask++) {
    for (let last = 0; last < n; last++) {
      if (!(mask & (1 << last))) continue; // last not in mask
      if (dp[mask][last] === INF) continue;
      for (let next = 0; next < n; next++) {
        if (mask & (1 << next)) continue; // already visited
        const newMask = mask | (1 << next);
        dp[newMask][next] = Math.min(
          dp[newMask][next],
          dp[mask][last] + dist[last][next]
        );
      }
    }
  }

  // Find minimum cost to visit all and return to start
  const fullMask = (1 << n) - 1;
  let result = INF;
  for (let last = 0; last < n; last++) {
    if (dp[fullMask][last] === INF) continue; // unreachable; also avoids overflow past MAX_SAFE_INTEGER
    result = Math.min(result, dp[fullMask][last] + dist[last][0]);
  }
  return result;
}

// Complexity: O(n² × 2ⁿ)
// For n = 20: 20² × 2²⁰ ≈ 4 × 10⁸ operations
// Works with good constants
```

When you see n ≤ 18 or n ≤ 20, immediately think: "This is a bitmask DP problem." The constraint is a direct hint that the problem involves subsets, and the intended solution processes all 2ⁿ of them.
When n is in the hundreds to low thousands, you're in polynomial territory. The question is which degree of polynomial the problem requires.
Cubic (n ≤ 500):
Cubic algorithms involve three nested loops or equivalent work: Floyd-Warshall all-pairs shortest paths, matrix multiplication, and DPs with three indices.
At n = 500, n³ = 125 million—right at the 10⁸ threshold.
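As a concrete cubic example, here is a minimal Floyd-Warshall sketch. It assumes `dist` is an n×n matrix with `Infinity` for absent edges and 0 on the diagonal; the function name and setup are illustrative.

```typescript
// Floyd-Warshall: all-pairs shortest paths in O(n³)
// Assumes dist[i][j] = edge weight, Infinity if no edge, 0 on the diagonal
function floydWarshall(dist: number[][]): number[][] {
  const n = dist.length;
  const d = dist.map(row => row.slice()); // work on a copy
  for (let k = 0; k < n; k++) {       // allow k as an intermediate vertex
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        if (d[i][k] + d[k][j] < d[i][j]) {
          d[i][j] = d[i][k] + d[k][j];
        }
      }
    }
  }
  return d;
}
// For n = 500: 500³ = 1.25 × 10⁸ iterations of a very cheap inner loop
```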
Quadratic (n ≤ 5000-10000):
Quadratic algorithms are the workhorses of dynamic programming:
DP[i][j] computed for all pairs (i, j) where i, j ∈ [0, n)
Total states: n²
Transition: typically O(1) per state
Overall: O(n²)
Classic examples: LCS, edit distance, and the many DPs whose state is a pair of indices.
The n = 3000 Sweet Spot:
Many problems use n ≤ 3000 as the constraint. This is comfortable for O(n²):
```typescript
// Longest Common Subsequence - Classic O(n²) DP
function lcs(s1: string, s2: string): number {
  const n = s1.length, m = s2.length;
  // dp[i][j] = LCS of s1[0..i-1] and s2[0..j-1]
  const dp: number[][] = Array(n + 1).fill(null)
    .map(() => Array(m + 1).fill(0));

  for (let i = 1; i <= n; i++) {
    for (let j = 1; j <= m; j++) {
      if (s1[i-1] === s2[j-1]) {
        dp[i][j] = dp[i-1][j-1] + 1;
      } else {
        dp[i][j] = Math.max(dp[i-1][j], dp[i][j-1]);
      }
    }
  }
  return dp[n][m];
}

// Complexity: O(n × m) = O(n²) when n ≈ m
// For n = m = 3000: 9 × 10⁶ operations ✓
```

When n reaches 10⁵ or 10⁶, you need subquadratic algorithms. This is the domain of efficient data structures and divide-and-conquer techniques.
The O(n log n) Family:
These algorithms add a logarithmic factor to linear work:
| Category | Algorithms | When to Use |
|---|---|---|
| Sorting-Based | Merge sort, Quick sort, Heap sort, Counting inversions | When order matters or sorting simplifies the problem |
| Binary Search | Binary search + linear preprocessing, Longest Increasing Subsequence | When you need to find thresholds or optimal values |
| Divide & Conquer | Merge sort variants, Closest pair of points | When problem splits into independent subproblems |
| Tree Structures | Segment trees, Fenwick trees, Balanced BST (set/map) | Range queries, point updates, ordered operations |
| Heap Operations | Priority queue with n operations, Heap sort | Dynamic ordering, k-th element queries |
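As one member of the binary-search row, here is a sketch of O(n log n) Longest Increasing Subsequence. The `tails` helper array is the standard patience-sorting technique, named here for illustration.

```typescript
// Longest Increasing Subsequence in O(n log n)
// tails[k] = smallest possible tail of an increasing subsequence of length k+1
function lis(arr: number[]): number {
  const tails: number[] = [];
  for (const x of arr) {
    // Binary search for the first tail >= x (strictly increasing LIS)
    let lo = 0, hi = tails.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (tails[mid] < x) lo = mid + 1;
      else hi = mid;
    }
    tails[lo] = x; // extend the LIS (lo === tails.length) or improve a tail
  }
  return tails.length;
}
// n binary searches over an array of length ≤ n: O(n log n)
```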
The O(n √n) Alternative:
Some problems fall between O(n log n) and O(n²). When no O(n log n) structure presents itself but O(n²) is too slow, consider O(n √n):
For n = 10⁵: n√n = 10⁵ × 316 ≈ 3 × 10⁷. Viable with good constants.
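A minimal sqrt decomposition sketch for range sums with point updates; the class and method names are illustrative, and the block size is the usual √n choice.

```typescript
// Sqrt decomposition: O(√n) per range-sum query, O(1) per point update
class SqrtDecomposition {
  private arr: number[];
  private blockSum: number[];
  private blockSize: number;

  constructor(arr: number[]) {
    this.arr = arr.slice();
    this.blockSize = Math.max(1, Math.floor(Math.sqrt(arr.length)));
    this.blockSum = new Array(Math.ceil(arr.length / this.blockSize)).fill(0);
    for (let i = 0; i < arr.length; i++) {
      this.blockSum[Math.floor(i / this.blockSize)] += arr[i];
    }
  }

  // Set arr[i] = value, maintaining its block's sum
  update(i: number, value: number): void {
    this.blockSum[Math.floor(i / this.blockSize)] += value - this.arr[i];
    this.arr[i] = value;
  }

  // Sum of arr[l..r] inclusive: whole blocks in O(1) each, edges element by element
  query(l: number, r: number): number {
    let sum = 0;
    let i = l;
    while (i <= r) {
      if (i % this.blockSize === 0 && i + this.blockSize - 1 <= r) {
        sum += this.blockSum[i / this.blockSize]; // whole block
        i += this.blockSize;
      } else {
        sum += this.arr[i]; // partial block
        i++;
      }
    }
    return sum;
  }
}
```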
```typescript
// Segment Tree for Range Sum Queries - O(log n) per operation
class SegmentTree {
  private tree: number[];
  private n: number;

  constructor(arr: number[]) {
    this.n = arr.length;
    this.tree = new Array(4 * this.n).fill(0);
    this.build(arr, 0, 0, this.n - 1);
  }

  private build(arr: number[], node: number, start: number, end: number): void {
    if (start === end) {
      this.tree[node] = arr[start];
      return;
    }
    const mid = Math.floor((start + end) / 2);
    this.build(arr, 2*node+1, start, mid);
    this.build(arr, 2*node+2, mid+1, end);
    this.tree[node] = this.tree[2*node+1] + this.tree[2*node+2];
  }

  // Range sum query in O(log n)
  query(l: number, r: number): number {
    return this.queryHelper(0, 0, this.n-1, l, r);
  }

  private queryHelper(node: number, start: number, end: number,
                      l: number, r: number): number {
    if (r < start || end < l) return 0;
    if (l <= start && end <= r) return this.tree[node];
    const mid = Math.floor((start + end) / 2);
    return this.queryHelper(2*node+1, start, mid, l, r) +
           this.queryHelper(2*node+2, mid+1, end, l, r);
  }
}

// Build: O(n), Query: O(log n), Update: O(log n)
// For n = 10⁵ and q = 10⁵ queries:
// Total: O(n + q log n) = 10⁵ + 10⁵ × 17 ≈ 2 × 10⁶ ✓
```

O(n log n) comes in many forms: sorting + linear scan, n binary searches, n heap operations, n tree queries. All have the same asymptotic complexity but different constant factors. Segment trees typically have higher constants than sorting-based approaches.
When n approaches 10⁷ or 10⁸, you need strictly linear algorithms. There's no room for log factors—even O(n log n) at 10⁸ gives 2.6 × 10⁹ operations, potentially too slow.
Linear-Only Techniques:

- Single-pass scans and streaming accumulation
- Two pointers and sliding windows
- Prefix sums for O(1) range lookups
- Counting sort and bucket counting

The Challenge of Linear:

Linear algorithms require cleverness. You can't afford to:

- Sort the input: O(n log n) already breaks the budget
- Issue a tree or heap operation per element: n × O(log n) is again O(n log n)
- Nest loops over the input, even short inner ones

You can:

- Make one or two sequential passes with O(1) rolling state
- Move two pointers monotonically across the array
- Precompute prefix sums, then answer range questions in O(1)
- Bucket values into counting arrays when the value range is small
```typescript
// Two Pointers: Find pair with target sum in sorted array
function twoSum(arr: number[], target: number): [number, number] | null {
  let left = 0, right = arr.length - 1;
  while (left < right) {
    const sum = arr[left] + arr[right];
    if (sum === target) return [left, right];
    if (sum < target) left++;
    else right--;
  }
  return null;
}
// O(n) - single pass with two pointers

// Sliding Window: Maximum sum subarray of size k
function maxSumSubarray(arr: number[], k: number): number {
  let windowSum = 0;
  for (let i = 0; i < k; i++) windowSum += arr[i];
  let maxSum = windowSum;
  for (let i = k; i < arr.length; i++) {
    windowSum = windowSum - arr[i - k] + arr[i];
    maxSum = Math.max(maxSum, windowSum);
  }
  return maxSum;
}
// O(n) - each element added and removed exactly once

// Kadane's Algorithm: Maximum subarray sum (any length)
function maxSubarraySum(arr: number[]): number {
  let maxEndingHere = arr[0];
  let maxSoFar = arr[0];
  for (let i = 1; i < arr.length; i++) {
    maxEndingHere = Math.max(arr[i], maxEndingHere + arr[i]);
    maxSoFar = Math.max(maxSoFar, maxEndingHere);
  }
  return maxSoFar;
}
// O(n) - classic linear DP
```

At n = 10⁸, reading the input itself takes significant time. In competitive programming, you MUST use fast I/O. In Python, use sys.stdin. In C++, use scanf or ios_base::sync_with_stdio(false). Slow I/O can cause TLE even with an optimal algorithm.
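In Node.js/TypeScript, a common equivalent (shown here as a sketch, not the only option) is to read all of stdin in one call and tokenize it, rather than processing line by line:

```typescript
// Fast input in Node.js: read all of stdin at once, then tokenize
// (file descriptor 0 is stdin; this avoids per-line readline overhead)
import { readFileSync } from "fs";

const tokens = readFileSync(0, "utf8").trim().split(/\s+/);
let pos = 0;
const next = (): number => Number(tokens[pos++]);

// Illustrative usage: read n, then sum n numbers in one linear pass
const n = next();
let sum = 0;
for (let i = 0; i < n; i++) sum += next();
console.log(sum);
```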
When constraints include values like n ≤ 10¹², n ≤ 10¹⁸, or even larger, you cannot iterate through all elements—there isn't enough time to even read them. The answer must come from mathematical insight.
The Constraint Hierarchy:
| Complexity | Max n | Typical Use Cases |
|---|---|---|
| O(1) | Any | Direct formulas, arithmetic/geometric series, Gauss sum |
| O(log n) | 10¹⁸ | Binary search, binary exponentiation, matrix power |
| O(√n) | 10¹² | Counting divisors, prime checking, factorization |
| O(log² n) | 10¹⁵ | Binary search with log work per step |
| O(√n × log n) | 10¹⁰ | Optimized factorization, some number theory |
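As an O(√n) example from the table, here is a divisor-counting sketch; plain `number` arithmetic is safe here because n ≤ 10¹² stays well below 2⁵³.

```typescript
// Count divisors of n in O(√n) by pairing each divisor d with n/d
function countDivisors(n: number): number {
  let count = 0;
  for (let d = 1; d * d <= n; d++) {
    if (n % d === 0) {
      count += 2;               // d and n/d
      if (d * d === n) count--; // perfect square: d === n/d, counted once
    }
  }
  return count;
}
// For n = 10¹²: about 10⁶ loop iterations
```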
Common Problem Types:
1. Mathematical Formulas
- Sum of 1 to n: n(n+1)/2 → O(1)
- Count of multiples of k up to n: floor(n/k) → O(1)
- Fibonacci(n): matrix exponentiation → O(log n)
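A sketch of the first two formulas, using BigInt so the results stay exact for n up to 10¹⁸ and beyond (function names are illustrative):

```typescript
// O(1) closed-form answers via BigInt (exact far past Number.MAX_SAFE_INTEGER)
function sumTo(n: bigint): bigint {
  return (n * (n + 1n)) / 2n; // Gauss sum: 1 + 2 + ... + n; n(n+1) is always even
}

function countMultiples(n: bigint, k: bigint): bigint {
  return n / k; // BigInt division truncates, i.e. floor(n/k)
}

console.log(sumTo(10n ** 18n));        // instant, no iteration
console.log(countMultiples(100n, 7n)); // 14n
```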
2. Binary Search on Answer
When you can verify if answer ≥ x in O(log n) or O(1), binary search finds the optimal answer in O(log(range) × verification).
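A minimal sketch using integer square root as the answer being searched; the predicate (x² ≤ n) is O(1), so the total cost is O(log n) even for n near 10¹⁸.

```typescript
// Binary search on answer: largest x with x² ≤ n
// Invariant: lo² ≤ n < hi², so lo is the answer when the interval closes
function isqrt(n: bigint): bigint {
  let lo = 0n, hi = n + 1n;
  while (hi - lo > 1n) {
    const mid = (lo + hi) / 2n;
    if (mid * mid <= n) lo = mid;
    else hi = mid;
  }
  return lo;
}

console.log(isqrt(10n ** 18n)); // 1000000000n, found in about 60 iterations
```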
3. Matrix Exponentiation
Linear recurrences can be computed in O(log n) using matrix powers:
```typescript
// Compute n-th Fibonacci number in O(log n)
// Using matrix exponentiation: [F(n+1), F(n)] = [[1,1],[1,0]]^n × [1,0]

function matrixMult(A: bigint[][], B: bigint[][]): bigint[][] {
  const C: bigint[][] = [[0n, 0n], [0n, 0n]];
  for (let i = 0; i < 2; i++) {
    for (let j = 0; j < 2; j++) {
      for (let k = 0; k < 2; k++) {
        C[i][j] += A[i][k] * B[k][j];
      }
    }
  }
  return C;
}

function matrixPower(M: bigint[][], n: bigint): bigint[][] {
  let result: bigint[][] = [[1n, 0n], [0n, 1n]]; // Identity
  while (n > 0n) {
    if (n % 2n === 1n) {
      result = matrixMult(result, M);
    }
    M = matrixMult(M, M);
    n = n / 2n;
  }
  return result;
}

function fibonacci(n: bigint): bigint {
  if (n <= 1n) return n;
  const M: bigint[][] = [[1n, 1n], [1n, 0n]];
  const result = matrixPower(M, n - 1n);
  return result[0][0];
}

// Complexity: O(log n) matrix multiplications
// Each multiplication: O(1) (2×2 matrices)
// Total: O(log n)
// Works for n up to 10^18 and beyond!
```

When n ≥ 10⁹, your first instinct should be: "Is there a closed-form formula?" Many counting problems have direct formulas. Combinatorics (nCr), number theory (divisor counts, GCD patterns), and sequence sums often have O(1) solutions.
Real problems often have multiple constraints that interact. Mastering multi-constraint analysis is essential for accurate algorithm selection.
The Interaction Principle:
When you have variables n, m, k, etc., the viable algorithms depend on ALL their relationships:
| Constraints | Implied Complexity | Typical Algorithms |
|---|---|---|
| n ≤ 10⁵, m ≤ 10⁵ (independent) | O(n + m) or O(n log n + m log m) | Process each dimension separately |
| n ≤ 10⁵, m ≤ 10⁵, need n×m work | IMPOSSIBLE | Find a smarter approach! |
| n ≤ 10⁵, k ≤ 20 (small k) | O(n × 2^k) or O(n × k²) | Exponential/polynomial in k OK |
| n ≤ 10⁵, q ≤ 10⁵ (queries) | O((n + q) log n) or O((n + q)√n) | Segment tree, Mo's algorithm |
| n,m ≤ 3000 (grid) | O(n × m) or O(n × m × log) | 2D DP, BFS on grid |
| V ≤ 10⁵, E ≤ 10⁵ (graph) | O(V + E) or O((V + E) log V) | BFS, DFS, Dijkstra |
| Sum of n over cases ≤ 10⁵ | O(Σn) or O(Σn log n) total | Budget amortizes across cases; one case may still use the full 10⁵ |
The Small Variable Trick:
When one variable is small while others are large, exploit the small one:
n ≤ 10⁵ (large)
k ≤ 10 (small)
Approaches:
- O(n × k): Linear per k value - definitely works
- O(n × k²): 10⁵ × 100 = 10⁷ - works
- O(n × 2^k): 10⁵ × 1024 ≈ 10⁸ - borderline but OK
DO NOT: O(n² × k) = 10¹⁰ × 10 = 10¹¹ - too slow
The small variable often suggests the algorithm type: k ≤ 20 points toward bitmask enumeration over subsets of k, while k in the tens to hundreds points toward an extra k-sized DP dimension.
For problems with 3+ constraint variables, check each pair: Is O(n × m) viable? O(n × k)? O(m × k)? Sometimes the answer depends on which pair you optimize and which you iterate simply.
The table at the top of this page is the complete reference for constraint-to-algorithm mapping. Memorize the major thresholds; reference the details as needed.
Algorithm Selection Flowchart:
Step 1: Identify the largest constraint variable
↓
Step 2: Look up maximum complexity from table
↓
Step 3: List algorithms achieving that complexity
↓
Step 4: Match algorithm capabilities to problem requirements
↓
Step 5: If no match, look for problem-specific insights
to achieve required complexity
What's Next:
The final page of this module teaches you to use constraints to eliminate approaches—the negative space of algorithm selection. You'll learn which algorithms to immediately discard based on constraints, saving time and mental energy for viable approaches.
You now have a comprehensive reference mapping constraints to complexity classes and algorithms. This table will serve as your algorithm selection guide for every problem you encounter. Bookmark this page and internalize the major thresholds—they form the foundation of rapid problem analysis.