Every coding problem—whether in a competitive programming contest, a technical interview, or a real production system—comes with constraints. And here's a secret that separates novices from engineers:
Constraints aren't obstacles. They're clues.
A beginner looks at "n ≤ 10⁶" and feels limited. An engineer looks at the same constraint and immediately knows: "I need O(n log n) or better; O(n²) won't work here." The constraint tells you what algorithms are viable before you write a single line of code.
This module teaches you to read constraints like an engineer—extracting maximum information to guide your algorithmic decisions. By the end, you'll never dive into implementation without first asking: "What do the constraints tell me?"
By the end of this page, you will understand the three fundamental types of constraints (input size, time limits, memory limits), how to translate constraints into algorithmic requirements, and how to use a simple mental math framework to assess algorithm viability before implementation.
Almost every computational problem operates under three fundamental constraints. Understanding each, and how they interact, is essential for algorithm selection.
1. Input Size Constraints
These specify the scale of data your algorithm must handle: the array length (e.g., 1 ≤ n ≤ 10⁵), the number of edges (m ≤ 2 × 10⁵), the magnitude of individual values (up to 10⁹), and so on.
Input size is the most important constraint because it directly determines which complexity classes are feasible. The relationship is mathematical: if n = 10⁶ and your algorithm is O(n²), that's 10¹² operations—typically far too many.
2. Time Limit Constraints
These specify how quickly your solution must complete: typically 1-5 seconds in competitive programming, and anywhere from 100 ms to 30 s in other settings.
Time limits translate to operation counts through a rule of thumb:
Modern computers execute roughly 10⁸ to 10⁹ simple operations per second.
This means a 1-second limit buys you roughly 10⁸ simple operations, a 2-second limit roughly 2 × 10⁸, and a 100 ms limit only about 10⁷.
These are approximations—actual speed depends on hardware, operation complexity, and memory access patterns—but they're accurate enough for algorithm selection.
3. Memory Limit Constraints
These specify how much RAM your solution may use: typically 128 MB to 1 GB in contests, and whatever your machine or container allows in production.
Memory translates to element counts: 256 MB holds roughly 64 million 4-byte ints or 32 million 8-byte longs (see the table below).
Memory constraints often eliminate algorithms that require large auxiliary data structures.
| Constraint Type | What It Specifies | How It Guides Algorithms | Typical Values |
|---|---|---|---|
| Input Size (n) | Scale of data | Determines viable complexity classes | 10³ to 10⁹ |
| Time Limit | Execution time budget | Sets operation count ceiling | 1-5 seconds, 100ms-30s |
| Memory Limit | RAM budget | Limits auxiliary data structures | 128 MB - 1 GB |
This is the most important mental model in this entire module. Memorize it. Given an input size n and a time limit (usually ~1 second), there's a maximum complexity that's feasible:
The Constraint-Complexity Table:
| Input Size (n) | Max Feasible Complexity | Operations (approx) | Algorithm Examples |
|---|---|---|---|
| n ≤ 10 | O(n!), O(2ⁿ) | ~3.6M, ~1K | Brute force permutations, subset enumeration |
| n ≤ 20 | O(2ⁿ), O(n² × 2ⁿ) | ~1M, ~400M | Bitmask DP, meet-in-the-middle |
| n ≤ 100 | O(n³), O(n² log n) | ~1M | Floyd-Warshall, some DP problems |
| n ≤ 1,000 | O(n²) | ~1M | Bubble sort, 2D DP, adjacency matrix |
| n ≤ 10,000 | O(n²) barely, O(n√n) comfortably | ~100M, ~1M | Optimized O(n²), Mo's algorithm |
| n ≤ 100,000 | O(n log n), O(n√n) | ~2M, ~30M | Merge sort, segment trees |
| n ≤ 1,000,000 | O(n log n), O(n) | ~20M, ~1M | Linear algorithms, efficient sorting |
| n ≤ 10,000,000 | O(n) | ~10M | Linear scans, counting |
| n ≤ 10⁸ | O(n) barely, O(log n) per query | ~100M | Simple linear passes, precomputation + O(1) queries |
| n ≤ 10⁹ or more | O(log n), O(1) | ~30, ~1 | Binary search, math formulas |
When you see a constraint, immediately ask: 'What's n²?' If n = 10⁵, then n² = 10¹⁰—way over 10⁸. So O(n²) is too slow. You need O(n log n) or O(n). This 5-second calculation saves hours of debugging timeout errors.
How to use this table: find the row matching your maximum n, read off the highest feasible complexity, and discard any algorithm above that ceiling before you start designing.
Example reasoning:
Problem: "Given an array of n integers (n ≤ 10⁵), find if any two elements sum to a target."
Here n = 10⁵, so the naive O(n²) double loop costs ~10¹⁰ operations, about 100× over a one-second budget of 10⁸. You need O(n log n) (sort, then two pointers) or O(n) (a hash set). The constraint eliminated the naive approach before you wasted time implementing it.
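A minimal sketch of the O(n) hash-set approach (the function name and test values are illustrative):

```python
def has_pair_with_sum(nums: list[int], target: int) -> bool:
    """O(n) time, O(n) space: one pass, remembering values seen so far."""
    seen: set[int] = set()
    for x in nums:
        if target - x in seen:  # the complement appeared earlier in the array
            return True
        seen.add(x)
    return False

print(has_pair_with_sum([3, 8, 1, 14], 9))   # True (8 + 1)
print(has_pair_with_sum([3, 8, 1, 14], 6))   # False
```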
The 10⁸ operations per second rule lets you translate time limits into operation budgets:
The calculation:
Operation Budget = Time Limit (seconds) × 10⁸
Examples: a 1-second limit allows ~10⁸ operations; 2 seconds allows ~2 × 10⁸; a 100 ms service budget allows ~10⁷.
Then check if your algorithm fits:
Given n = 10⁶ and time limit = 2 seconds: the budget is 2 × 10⁸ operations. An O(n log n) algorithm needs ~2 × 10⁷ operations and fits with 10× headroom; an O(n²) algorithm needs 10¹², thousands of times over budget.
Safety margins:
The 10⁸ rule is an approximation. Account for: interpreted languages (Python or Ruby can run 10-100× slower than C++), constant factors hidden by Big-O (hashing, modular arithmetic), cache-unfriendly memory access, and input/output volume.
Conservative approach: plan for 10⁷ to 5 × 10⁷ operations per second for safety, especially in competitive programming where time limits are tight.
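The whole check fits in a few lines. A sketch, using the conservative 5 × 10⁷ ops/second figure from above as the default assumption:

```python
import math

def fits_budget(operations: float, time_limit_s: float,
                ops_per_second: float = 5e7) -> bool:
    """Feasibility check: does the operation count fit the time budget?"""
    return operations <= time_limit_s * ops_per_second

n = 10**6
print(fits_budget(n * math.log2(n), 2.0))  # True:  ~2*10^7 ops vs a 10^8 budget
print(fits_budget(n**2, 2.0))              # False: 10^12 ops, hopeless
```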
Some operations are inherently slower: function calls, memory allocation, random memory access, I/O operations. An 'O(n)' algorithm that chases pointers across memory (like traversing a linked list) can be 10x slower than an 'O(n)' algorithm with sequential array access. Cache locality matters at scale.
Memory limits are often overlooked by beginners, but they're just as important as time limits. Understanding memory helps you choose between representations (adjacency matrix vs. adjacency list), recognize when a DP or memoization table won't fit, and avoid out-of-memory failures that no amount of speed can fix.
Memory calculation basics:
| Data Type | Size (bytes) | Items per 256 MB | Items per 1 GB |
|---|---|---|---|
| boolean (typical) | 1 | 256 million | 1 billion |
| char (ASCII) | 1 | 256 million | 1 billion |
| short (16-bit int) | 2 | 128 million | 512 million |
| int (32-bit) | 4 | 64 million | 256 million |
| long (64-bit) | 8 | 32 million | 128 million |
| double (64-bit) | 8 | 32 million | 128 million |
| pointer/reference | 8 | 32 million | 128 million |
| Java Integer (boxed) | 16+ | 16 million | 64 million |
| Python int (small) | 28+ | ~9 million | ~35 million |
Common memory scenarios:
Scenario 1: Linear array. An array of 10⁶ 32-bit ints takes 4 MB; it fits easily under any common limit.
Scenario 2: 2D matrix. A 10,000 × 10,000 int matrix takes 4 × 10⁸ bytes = 400 MB; it blows a 256 MB limit.
Scenario 3: Adjacency matrix for a graph. For n = 10⁵ nodes, the matrix needs 10¹⁰ entries, which is impossible; use an adjacency list with O(n + m) entries instead.
Scenario 4: Memoization table. Dimensions multiply: a table over two parameters up to 10⁴ each, storing 8-byte values, takes 800 MB; you may need a space-optimized DP.
For a 256 MB limit with 4-byte integers: n × n matrix fits only if n ≤ ~8,000. For 512 MB: n ≤ ~11,000. This constraint often forces you to use different representations (adjacency list vs matrix) or space-optimized DP techniques.
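A back-of-the-envelope helper makes these checks mechanical (a sketch; the 4-byte element size assumes packed C-style ints, not boxed Python objects):

```python
def fits_memory(num_elements: float, bytes_per_element: int,
                limit_mb: int = 256) -> bool:
    """Back-of-the-envelope check against a memory limit in megabytes."""
    return num_elements * bytes_per_element <= limit_mb * 1024 * 1024

print(fits_memory(10**6, 4))        # True:  a 4 MB linear int array
print(fits_memory(10_000**2, 4))    # False: a 400 MB adjacency matrix
print(fits_memory(8_000**2, 4))     # True:  ~256 MB, just fits
```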
Hidden memory costs:
Languages have overhead beyond raw data: Java objects carry 12-16 byte headers (a boxed Integer is 16+ bytes), Python ints start at 28 bytes, and containers like ArrayList<Integer> or Python lists store pointers to boxed values rather than the values themselves. Hash maps add further overhead for buckets and load factor.
In competitive programming, prefer primitive arrays. In production, balance memory with code clarity and maintainability.
Real problems often have multiple interacting constraints. Reading them carefully reveals exactly what's feasible.
Example 1: Array with queries
Given an array of n ≤ 10⁵ elements and Q ≤ 10⁵ queries, answer each query in time.
Analysis: answering each query with a naive O(n) scan costs n × Q = 10¹⁰ total operations, far over any realistic budget.
The constraint pattern (large n AND large Q) signals: "You need O(log n) or O(1) per query."
Example 2: Graph problem
Given a graph with n ≤ 1000 nodes and m ≤ 100,000 edges...
An adjacency matrix needs n² = 10⁶ entries (~4 MB of ints); an adjacency list needs O(n + m) ≈ 10⁵ entries. Both representations are feasible here because n is moderate.
Example 3: Heavy constraint disparity
Given n ≤ 100,000 integers, perform up to m ≤ 10 operations.
Key insight: m is tiny! Even O(n) per operation is fine: 10 operations × 10⁵ steps each is only 10⁶ total operations.
Don't over-engineer. When one dimension is small, simpler algorithms suffice.
| Constraint Pattern | What It Signals | Typical Approach |
|---|---|---|
| Large n, Large Q | Need fast queries | Preprocessing, segment trees, binary search |
| Large n, Small Q | Per-query can be slow | Simple O(n) or O(n log n) per query |
| Small n, Large anything | Exponential might work | Brute force, backtracking viable |
| n×m product bounded | Total work capped | Often O(n × m) is acceptable |
| Sum of all n ≤ X | Amortized budget | Total work across test cases is X |
Some problems say: 'Multiple test cases. Sum of n across all test cases ≤ 10⁶.' This means your O(n) algorithm can handle each case individually, even if some cases have large n—because total work is bounded. It's a hint that O(n) or O(n log n) is expected.
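As an illustration, here is a minimal multi-test-case skeleton (assuming a toy task of printing each array's sum; the input format, a count t followed by n and n integers per case, is hypothetical):

```python
import sys

def main() -> None:
    data = sys.stdin.buffer.read().split()
    t = int(data[0])
    pos = 1
    for _ in range(t):
        n = int(data[pos])
        nums = [int(x) for x in data[pos + 1:pos + 1 + n]]
        pos += 1 + n
        # O(n) work per test case; since the sum of n is bounded by 10^6,
        # total work across all test cases stays within budget.
        print(sum(nums))

if __name__ == "__main__":
    main()
```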
Constraints look different depending on where you encounter them. Learn to recognize them in each context.
Competitive Programming:
Explicit and precise:
Constraints:
1 ≤ n ≤ 10⁵
1 ≤ m ≤ 2 × 10⁵
Time limit: 1 second
Memory limit: 256 MB
These constraints are intentionally set to allow specific algorithm classes. Problem setters calibrate limits so that O(n log n) passes and O(n²) fails. Read them as hints!
Technical Interviews:
Usually conversational: "The input could be pretty large, say up to a million records," or "This runs on every request, so it should be fast."
Always ask clarifying questions: How large can n get? What's the range of the values? Is the input sorted? How often will this be called?
These questions show engineering maturity. Interviewers expect and appreciate them.
Real-World Production:
Constraints are discovered, not given: you profile traffic, measure data growth, and read service-level agreements.
Production constraints often include: latency targets (e.g., p99 under 200 ms), memory per container, throughput requirements, and data volumes that grow over time.
The analytical skills transfer directly—you're still mapping resources to algorithm viability.
In production, err on the side of simplicity until proven otherwise. A maintainable O(n log n) algorithm that every engineer understands beats a clever O(n) algorithm that only one person can modify. Optimize when constraints demand it, not before.
Let's practice translating constraints into algorithm requirements. For each scenario, determine which algorithms are viable.
Exercise 1:
n ≤ 10⁵, time limit 1 second. Find pair summing to target. (With n² = 10¹⁰, the brute-force pair check is out; use the O(n) hash-set or O(n log n) two-pointer approach from earlier.)
Exercise 2:
n ≤ 3,000, time limit 2 seconds. Find longest increasing subsequence.
n is small enough that O(n²) is fine here: n² = 9 × 10⁶ operations, well within a 2-second budget!
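A sketch of the classic O(n²) DP approach (function name and test values are illustrative):

```python
def longest_increasing_subsequence(nums: list[int]) -> int:
    """O(n^2) DP: dp[i] = length of the LIS ending at index i."""
    if not nums:
        return 0
    dp = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:  # nums[i] can extend a subsequence ending at j
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

print(longest_increasing_subsequence([10, 9, 2, 5, 3, 7, 101, 18]))  # 4
```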
Exercise 3:
n ≤ 20, find subset with maximum sum ≤ target.
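With n ≤ 20 there are at most 2²⁰ ≈ 10⁶ subsets, so this lands in the "small n, exponential works" row of the table. A brute-force sketch (names and test values are illustrative):

```python
def best_subset_sum(nums: list[int], target: int) -> int:
    """Enumerate all 2^n subsets (O(n * 2^n)); keep the best sum <= target."""
    n = len(nums)
    best = 0
    for mask in range(1 << n):  # bit i of mask decides whether nums[i] is taken
        s = sum(nums[i] for i in range(n) if mask >> i & 1)
        if s <= target:
            best = max(best, s)
    return best

print(best_subset_sum([4, 6, 3], 11))  # 10 (4 + 6)
```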
Exercise 4:
n × m grid where n, m ≤ 10,000. Find shortest path. (Up to 10⁸ cells: BFS runs in O(n × m) time, which is borderline, and the visited/distance arrays alone can approach the memory limit. Check both budgets.)
Exercise 5:
Q ≤ 10⁵ queries on array of n ≤ 10⁵ elements. Each query: range sum.
This constraint pattern (large n, large Q) signals: precompute!
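A sketch of the standard answer: O(n) prefix-sum precomputation, then O(1) per query, for ~2 × 10⁵ total operations instead of 10¹⁰:

```python
from itertools import accumulate

def build_prefix(nums: list[int]) -> list[int]:
    """prefix[i] = sum of nums[0:i]; built once in O(n)."""
    return [0, *accumulate(nums)]

def range_sum(prefix: list[int], lo: int, hi: int) -> int:
    """Sum of nums[lo..hi] (inclusive) in O(1) per query."""
    return prefix[hi + 1] - prefix[lo]

prefix = build_prefix([2, 7, 1, 8, 2, 8])
print(range_sum(prefix, 1, 3))  # 16 (7 + 1 + 8)
print(range_sum(prefix, 0, 5))  # 28 (whole array)
```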
Make constraint analysis your first step on EVERY problem. Before sketching an algorithm, calculate: 'If n = [max value], does O(whatever) fit?' This 10-second check prevents hours of debugging timeout errors.
We've established the foundation of constraint-based thinking. Constraints aren't limitations—they're the problem statement telling you which algorithms can work.
What's next:
Now that you can read constraints, we'll explore how they guide algorithm selection. The next page examines how to use constraints to narrow down algorithm choices systematically—eliminating infeasible options and identifying viable approaches before writing any code.
You now understand problem constraints as decision tools. You can calculate whether an algorithm will pass within time and memory limits. This skill is foundational—every algorithm choice you make from now on should start with constraint analysis.