Every algorithmic problem comes with constraints—limits on input size, value ranges, time limits, and memory restrictions. Most developers view these as mere guardrails, restrictions that define what inputs are valid. But elite problem-solvers recognize a deeper truth: constraints are encrypted hints about the expected solution.
Problem setters don't choose constraints arbitrarily. When they specify 1 ≤ n ≤ 10^5, they're signaling that an O(n) or O(n log n) solution exists and is expected. When they say 1 ≤ n ≤ 20, they're practically shouting "use exponential enumeration." Learning to decode this language transforms constraint reading from passive observation into active problem-solving.
By the end of this page, you will be able to look at any problem's constraints and immediately narrow down the viable algorithmic approaches. You'll understand the precise mappings between input sizes and required complexities, recognize unusual constraints as signals for specific techniques, and use constraint analysis to eliminate incorrect approaches before wasting time on them.
Modern computers can execute roughly 10^8 to 10^9 simple operations per second. This fundamental limit, combined with typical time limits of 1-2 seconds, creates a direct mapping between input size and required algorithmic complexity.
The core insight is simple: if your algorithm performs more than approximately 10^8 operations, it will likely time out. This creates hard boundaries on what complexity classes are feasible for different input sizes.
| Input Size (n) | Maximum Operations | Required Complexity | Viable Algorithms |
|---|---|---|---|
| n ≤ 10 | ~10! = 3.6M | O(n!) or O(n² × 2^n) | All permutations, brute force on subsets with extra work |
| n ≤ 15 | ~2^15 × 15 = 500K | O(2^n × n) | Bitmask DP with linear factor, TSP-style problems |
| n ≤ 20 | ~2^20 = 1M | O(2^n) | Bitmask enumeration, meet-in-the-middle, subset DP |
| n ≤ 25 | ~2^25 = 33M | O(2^(n/2) × n) | Meet-in-the-middle with sorting/binary search |
| n ≤ 100 | ~10^6 | O(n³) | Floyd-Warshall, 3D DP, O(n²) with O(n) inner work |
| n ≤ 400 | ~6.4 × 10^7 | O(n³) (tight) | Cubic algorithms with good constants |
| n ≤ 1,000 | ~10^6 | O(n²) | Quadratic DP, all-pairs comparisons, nested loops |
| n ≤ 5,000 | ~2.5 × 10^7 | O(n²) (comfortable) | LCS, edit distance, many classic DP problems |
| n ≤ 10,000 | ~10^8 | O(n²) (tight) | Quadratic with careful implementation |
| n ≤ 100,000 | ~2 × 10^6 | O(n log n) | Sorting, binary search, segment trees, balanced BSTs |
| n ≤ 1,000,000 | ~10^7 | O(n) or O(n log n) | Single-pass, hash tables, two pointers, BFS/DFS |
| n ≤ 10,000,000 | ~10^7 | O(n) | Linear scan, counting, bucket sort |
| n ≤ 10^12 | ~10^6-10^7 | O(√n) or O(log n) | Math formulas, binary search, sqrt decomposition |
| n ≤ 10^18 | ~60-100 | O(log n) | Binary search, exponentiation, digit DP |
Memorize this: your algorithm should perform roughly 10^8 operations or fewer. Given n, calculate n², n log n, n³, 2^n, and so on; the largest complexity that stays under 10^8 is your target. This single heuristic identifies the expected complexity for the vast majority of problems.
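To make the heuristic concrete, here is a minimal sketch, with an assumed 10^8 operations budget and illustrative names, that reports which standard complexity classes fit for a given n:

```python
import math

OPS_BUDGET = 10**8  # rough operations budget assumed throughout this page

def feasible_complexities(n: int) -> list[str]:
    """Return which standard complexity classes stay within the budget for this n."""
    candidates = {
        "O(log n)": max(1.0, math.log2(n)),
        "O(n)": n,
        "O(n log n)": n * max(1.0, math.log2(n)),
        "O(n^2)": n ** 2,
        "O(n^3)": n ** 3,
        "O(2^n)": 2 ** n if n <= 60 else float("inf"),
        "O(n!)": math.factorial(n) if n <= 20 else float("inf"),
    }
    return [name for name, ops in candidates.items() if ops <= OPS_BUDGET]

print(feasible_complexities(20))       # exponential enumeration still fits
print(feasible_complexities(100_000))  # only O(n log n) and below survive
```

Running this against a problem's n before writing any code takes seconds and rules out entire families of approaches.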
Let's examine how specific constraint values translate into algorithmic requirements with concrete examples.
When you see very small n values, the problem setter is telling you that exponential solutions are expected. This is a clear signal for full subset or permutation enumeration, bitmask DP, backtracking, or meet-in-the-middle.
Example Recognition:
Problem: Given n items with weights and values (n ≤ 20), find the maximum value subset that doesn't exceed weight W.
With n ≤ 20 and a need to consider subsets, the natural fits are bitmask DP or meet-in-the-middle. If W is also small, standard 0/1 knapsack DP works. If W is huge but n is small, enumerating subsets of items with a bitmask is the right choice, as sketched below.
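As a hedged illustration of "bitmask over items", the sketch below (the parameter names weights, values, and W are assumptions for this example) enumerates every subset directly, which is affordable because 2^20 ≈ 10^6:

```python
def best_subset_value(weights, values, W):
    """Max total value of any subset whose total weight stays within W."""
    n = len(weights)
    best = 0
    for mask in range(1 << n):          # every subset encoded as a bitmask
        total_w = total_v = 0
        for i in range(n):
            if (mask >> i) & 1:
                total_w += weights[i]
                total_v += values[i]
        if total_w <= W:
            best = max(best, total_v)
    return best

print(best_subset_value([3, 5, 8], [4, 6, 7], 10))  # -> 10 (items 0 and 1)
```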
The n ≤ 1,000 to n ≤ 10,000 range screams O(n²) solutions: quadratic DP such as LCS and edit distance, all-pairs comparisons, and nested-loop scans.
The n ≤ 10^5 to n ≤ 10^6 range is the sweet spot requiring O(n log n) or O(n) solutions: sorting, binary search, segment trees, two pointers, hashing, and single-pass scans.
Beyond size constraints, value ranges provide equally important hints. The range of values in your input often determines what techniques are viable.
| Value Constraint | What It Enables | Example Application |
|---|---|---|
| Values ≤ 100 | Counting sort, frequency arrays | "Given array with values 1-100, find most frequent" |
| Values ≤ 10^6 | Direct indexing, value as DP state | "Count pairs with difference k" |
| Values ≤ 10^9 | Coordinate compression needed | "Discretize values for segment tree" |
| Values can be negative | Hash maps over arrays, offset indexing | "Subarray sum equals k" |
| Values ≤ 26 | Alphabet-sized arrays, bitmasks | "Anagram detection, character frequency" |
Key Insight: Value as DP State
When constraints like sum ≤ 10^4 or target ≤ 10^5 appear alongside array size constraints, it often signals that the value becomes a DP dimension:
- `dp[i][w]` where `w` ranges over possible weights
- `dp[sum]` tracking reachable sums
- `dp[amount]` tracking minimum coins

If n is large but the target sum is small, this guides you toward value-indexed DP rather than subset enumeration.
When you see: "Array has n ≤ 10^5 elements, but sum of all elements ≤ 10^6" — this is a strong signal. The algorithm likely depends on the sum, not n². Consider using the sum as a bound for DP states or iteration limits.
When values are huge (up to 10^9) but n is manageable (≤ 10^5), you can't index by value directly. Instead, use coordinate compression: sort the distinct values and replace each value with its rank in the sorted order.
This turns a 10^9 range into at most n distinct values, making array-based approaches viable.
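A hedged sketch of the compression step (the helper name compress is illustrative): sort the distinct values once, then map each value to its rank.

```python
def compress(arr):
    """Map arbitrary values to dense indices in [0, number of distinct values)."""
    sorted_vals = sorted(set(arr))                      # distinct values, in order
    rank = {v: i for i, v in enumerate(sorted_vals)}    # value -> compressed index
    return [rank[v] for v in arr], sorted_vals

compressed, mapping = compress([1_000_000_000, 7, 42, 7])
print(compressed)  # -> [2, 0, 1, 0]
print(mapping)     # -> [7, 42, 1000000000]  (recovers original values if needed)
```

The compressed indices can then drive frequency arrays, BITs, or segment trees directly.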
Beyond standard size and value constraints, unusual or very specific constraints often point directly to the intended technique. Learn to recognize these patterns:
| Constraint Pattern | What It Signals | Approach |
|---|---|---|
| V ≤ 10^5, E ≤ 10^5 | Sparse graph | Adjacency list, O(V + E) algorithms |
| V ≤ 400 | Floyd-Warshall viable | All-pairs shortest path in O(V³) |
| E = V − 1 and connected | It's a tree! | Tree algorithms, no cycle detection needed |
| Graph is a DAG | Topological sort | DP on DAGs, critical path |
| All edges have weight 1 | BFS for shortest path | No need for Dijkstra's overhead |
| Negative weights present | Bellman-Ford required | Dijkstra won't work correctly |
| Graph has ≤ 2 edges per node | Special structure | Possibly a linked list or functional graph |
| Constraint Pattern | What It Signals | Approach |
|---|---|---|
| String length ≤ 26 | Possibly related to alphabet | Consider character-level DP or sliding window |
| Alphabet size = 2 (binary string) | Bit manipulation possible | XOR tricks, counting transitions |
| Pattern length << text length | Pattern matching | KMP, Rabin-Karp, Z-algorithm |
| Sum of all string lengths ≤ 10^6 | Total work bounded | Individual strings can be processed naively |
| Strings are anagrams/permutations | Frequency-based comparison | Character count arrays, sorting |
When you see "sum of all array lengths ≤ X" or "total number of characters ≤ Y", it means overall work is bounded even if individual inputs vary. This allows O(len) processing per input without worrying about worst-case combinations.
Real problems often have multiple constraints that interact. Understanding these interactions is crucial for selecting the right approach.
Consider: Given an n × m grid where n ≤ 1000, m ≤ 1000
The total number of cells is n × m ≤ 10^6. This means O(n × m) work over the grid (BFS/DFS, 2D DP) is comfortable, while anything super-linear in the cell count, such as an O((n × m)²) comparison of all cell pairs, is out of reach.
The interaction: n and m together determine feasibility, not individually.
Many problems have: n ≤ 10^5, k ≤ 20
This almost always signals O(n × 2^k) or O(n × k) solutions: the small parameter becomes an extra DP dimension or a bitmask.
Common examples include DP over subsets of the k special elements, covering problems where dp[mask] records which of the k categories are handled, and shortest-path variants that must visit k required stops; a sketch of the bitmask pattern follows.
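The sketch below illustrates the O(n × 2^k) shape on a hypothetical covering problem (the scenario and names are assumptions, not a specific task): n items each cover some of k ≤ 20 categories, and we want the minimum cost to cover all of them.

```python
def min_cost_full_cover(item_masks, costs, k):
    """Minimum total cost to cover all k categories; O(n * 2^k) time, O(2^k) space."""
    INF = float("inf")
    full = (1 << k) - 1
    dp = [INF] * (1 << k)     # dp[mask] = min cost seen so far to cover the set `mask`
    dp[0] = 0
    for m, c in zip(item_masks, costs):       # n items
        for mask in range(full + 1):          # 2^k states
            if dp[mask] + c < dp[mask | m]:
                dp[mask | m] = dp[mask] + c
    return dp[full]

# Items cover categories {0}, {1}, {0,1,2} with costs 1, 1, 5; k = 3 categories.
print(min_cost_full_cover([0b001, 0b010, 0b111], [1, 1, 5], 3))  # -> 5
```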
| Constraints | Interpretation | Likely Approach |
|---|---|---|
| n ≤ 10^5, q ≤ 10^5 queries | O(n log n) preprocessing + O(log n) per query | Segment tree, sparse table, binary lifting |
| n ≤ 10^5, k ≤ 10 | O(n × k) or O(n × 2^k) total | DP with k dimension, bitmask over k |
| n ≤ 500, m ≤ 500 | O(n × m × min(n, m)) may work | 2D DP with extra dimension |
| n ≤ 10^5, values ≤ 10^5 | Value-indexed possible | Counting, frequency arrays |
| n ≤ 10^5, k ≤ n | Must handle varying k efficiently | Two pointers, sliding window |
When constraints multiply (n × m, n × q), always compute the product. Individually "safe" values like n = 10^5 and m = 10^5 create n × m = 10^10 — far too large for O(n × m) approaches. Look for algorithms that avoid full multiplication.
Beyond input constraints, the time limit and memory limit provide additional signals.
| Time Limit | Interpretation | Implication |
|---|---|---|
| 1 second | Standard competitive programming | ~10^8 operations maximum |
| 2-3 seconds | Slightly relaxed | ~2-3 × 10^8 operations; may allow larger constants |
| 5+ seconds | Complex algorithm expected | Multiple phases, heavy preprocessing, or IO-bound |
| < 1 second (e.g., 0.5s) | Very tight | Highly optimized O(n) or O(n log n) needed |
Key insight: When time limits are generous (3+ seconds) but constraints seem to require tight optimization, the problem may involve many test cases per input file, an algorithm with heavy constant factors (matrix operations, FFT, big-integer or modular arithmetic), or deliberate headroom for slower languages.
| Memory Limit | Interpretation | Implication |
|---|---|---|
| 256 MB | Standard | ~64 million integers, ~32 million longs |
| 64 MB | Constrained | Space-optimized DP needed; avoid large auxiliary structures |
| 512+ MB | Memory-rich | Can afford large lookup tables, multiple arrays |
| Very tight (16-32 MB) | Streaming/in-place required | Cannot store all input; process incrementally |
Quick rule: n integers = 4n bytes. For n = 10^6 integers, that's 4 MB. A 2D array of 10^4 × 10^4 integers = 400 MB — likely too large. Always sanity-check your data structure sizes against memory limits.
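A tiny back-of-envelope helper for that sanity check (the 4-bytes-per-item figure assumes a tightly packed 32-bit array; real language runtimes add per-object overhead):

```python
def approx_mb(count, bytes_per_item=4):
    """Rough memory estimate in MB for a packed array of `count` items."""
    return count * bytes_per_item / 10**6

print(approx_mb(10**6))          # 1e6 ints  -> 4.0 MB, trivial
print(approx_mb(10**4 * 10**4))  # 1e8 ints  -> 400.0 MB, over a 256 MB limit
```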
When n is large but memory is tight, look for rolling-array DP that keeps only the previous row, single-pass streaming over the input, in-place processing, and bit-packed or implicit data structures.
One of the most powerful uses of constraints is elimination — ruling out approaches that cannot possibly work. This is often faster than trying to find the right approach directly.
Problem: Given n = 10^5 elements, find all pairs (i, j) where i < j and arr[i] + arr[j] = target.
Elimination analysis: brute force over all pairs is O(n²) = 10^10 operations, eliminated immediately; sorting plus two pointers is O(n log n), viable; a hash map of previously seen values is O(n) expected time, also viable.
Within seconds, we've narrowed from "any approach" to "two viable approaches." Implementation differences then guide the final choice.
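For concreteness, a hedged sketch of the hash-map route, shown as a counting variant (collecting the actual index pairs works the same way):

```python
from collections import defaultdict

def count_pairs_with_sum(arr, target):
    """Count index pairs (i, j), i < j, with arr[i] + arr[j] == target, in O(n) expected time."""
    seen = defaultdict(int)         # value -> how many times it has appeared so far
    pairs = 0
    for x in arr:
        pairs += seen[target - x]   # every earlier occurrence of target - x forms a pair
        seen[x] += 1
    return pairs

print(count_pairs_with_sum([1, 5, 3, 3, 7], 6))  # -> 2  (1+5 and 3+3)
```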
Knowing what WON'T work is as valuable as knowing what will. Each eliminated approach saves you from potentially 10-20 minutes of wasted implementation. Master elimination, and you'll solve problems faster by trying fewer wrong things.
Let's practice constraint decoding with real examples:
Given an array of n integers and q queries, each query asks for the sum of elements in range [l, r].
Constraints: n ≤ 10^5, q ≤ 10^5, values can be up to 10^9
Analysis: answering each query with a naive scan is O(n × q) = 10^10 operations, far too slow; each query must be answered in O(1) or O(log n) after preprocessing; with n up to 10^5 values up to 10^9, range sums reach about 10^14, so use 64-bit integers.
Conclusion: Prefix sums for O(1) queries, or segment tree if updates expected.
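A minimal prefix-sum sketch for the static case (no updates), using 0-indexed, inclusive query ranges; the function names are illustrative:

```python
def build_prefix(arr):
    """prefix[i] = sum of arr[:i]; built once in O(n)."""
    prefix = [0] * (len(arr) + 1)
    for i, x in enumerate(arr):
        prefix[i + 1] = prefix[i] + x
    return prefix

def range_sum(prefix, l, r):
    """Sum of arr[l..r] inclusive, answered in O(1)."""
    return prefix[r + 1] - prefix[l]

p = build_prefix([2, 7, 1, 8, 2])
print(range_sum(p, 1, 3))  # -> 16 (7 + 1 + 8)
```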
Given n items with weights w_i and values v_i, find the maximum value subset with total weight ≤ W.
Constraints: n ≤ 40, W ≤ 10^15, w_i ≤ 10^15, v_i ≤ 10^9
Analysis: W and the individual weights reach 10^15, so weight-indexed DP is impossible; 2^40 ≈ 10^12 subsets rule out full enumeration; but 2^20 ≈ 10^6 subsets per half is cheap, which points directly at meet-in-the-middle; totals fit in 64-bit integers.
Conclusion: Split items into two halves. Enumerate all 2^20 subsets for each half. Sort one half by weight. For each subset in first half, binary search for best complement in second half.
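Following that conclusion, here is a hedged meet-in-the-middle sketch (helper names such as enumerate_half are assumptions); it runs in roughly O(2^(n/2) × n):

```python
from bisect import bisect_right

def enumerate_half(items):
    """All (weight, value) totals over subsets of `items` (at most 2^20 of them)."""
    subsets = []
    m = len(items)
    for mask in range(1 << m):
        w = v = 0
        for i in range(m):
            if (mask >> i) & 1:
                w += items[i][0]
                v += items[i][1]
        subsets.append((w, v))
    return subsets

def knapsack_mitm(items, W):
    half = len(items) // 2
    left = enumerate_half(items[:half])
    right = sorted(enumerate_half(items[half:]))      # sort right-half subsets by weight
    weights = [w for w, _ in right]
    best = []                                          # best[i] = max value among right[0..i]
    running = 0
    for _, v in right:
        running = max(running, v)
        best.append(running)
    ans = 0
    for w, v in left:
        if w > W:
            continue
        idx = bisect_right(weights, W - w) - 1         # heaviest affordable complement
        if idx >= 0:
            ans = max(ans, v + best[idx])
    return ans

items = [(4, 10), (7, 13), (5, 8), (9, 20)]   # (weight, value) pairs
print(knapsack_mitm(items, 13))               # -> 30 (items 0 and 3)
```

The prefix-maximum pass over the sorted right half is what makes each binary search return the best complement, not just any affordable one.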
Find the number of paths from top-left to bottom-right in an n × m grid, moving only right or down.
Constraints: n, m ≤ 1000
Analysis: the grid has at most n × m = 10^6 cells, so an O(n × m) DP over cells is comfortable; the path count itself, C(n + m − 2, n − 1), is astronomically large for a 1000 × 1000 grid, so it cannot fit in native integer types.
Conclusion: Straightforward 2D DP. Path count grows exponentially → need modular arithmetic or big integers depending on output requirements.
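A minimal sketch of the 2D DP, assuming the answer is requested modulo 10^9 + 7 (the actual modulus, if any, would come from the problem statement):

```python
MOD = 10**9 + 7  # assumed modulus for illustration

def count_paths(n, m):
    """Number of right/down paths from (0,0) to (n-1,m-1), modulo MOD; O(n*m)."""
    dp = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                dp[i][j] = 1                               # start cell
            else:
                from_up = dp[i - 1][j] if i > 0 else 0
                from_left = dp[i][j - 1] if j > 0 else 0
                dp[i][j] = (from_up + from_left) % MOD
    return dp[n - 1][m - 1]

print(count_paths(3, 3))  # -> 6
```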
Constraints are the problem setter's way of guiding you toward the intended solution. By learning to decode this language, you transform constraint reading into active algorithm selection.
What's Next:
With constraint analysis mastered, we move to input/output structure analysis: the art of examining the shape of inputs and outputs to reveal algorithmic patterns. You'll learn how the data's structure (arrays vs. trees vs. graphs), what's given vs. what's requested, and the shape of the answer all guide algorithm selection.
You now understand constraints as the encrypted hints they truly are. Every constraint value, every limit, every unusual restriction carries information about the expected solution. Decode this language, and algorithm selection becomes dramatically faster and more accurate.