You've read the constraints: n ≤ 10⁵, m ≤ 10⁵, time limit 2 seconds. Now what? The bridge between raw numbers and algorithm selection is time complexity estimation—the skill of calculating exactly which Big-O classes will pass the time limit and which will fail.
This isn't guesswork. It's arithmetic. And once you master it, you'll never again waste 30 minutes implementing an O(n²) solution only to discover it times out, nor will you over-engineer an O(n log n) solution when O(n²) would have been perfectly fine.
By the end of this page, you will be able to calculate the maximum input size each complexity class can handle, determine the required complexity for any given constraints, and handle complex multi-variable constraints with confidence. You'll develop the intuition to estimate complexity requirements in seconds.
Time complexity estimation rests on one fundamental relationship:
Operations per second × Time limit = Maximum operations allowed
For competitive programming and most interview contexts, a modern judge executes roughly 10⁸ simple operations per second, so a 1-second limit allows about 10⁸ operations and a 2-second limit about 2 × 10⁸.
This gives us our working rule: stay under 10⁸ operations for safety, with some flexibility up to 2×10⁸ for well-optimized code.
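As a concrete instance of the relationship above, here is a minimal sketch; the 10⁸ ops/sec figure is simply the working value used on this page:

// Operation budget = operations per second × time limit.
const opsPerSecond = 1e8;                                // rough throughput of a modern judge
const timeLimitSeconds = 2;
const maxOperations = opsPerSecond * timeLimitSeconds;   // 2 × 10⁸ operations allowed

console.log(maxOperations);                              // 200000000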
An 'operation' in this context means a simple CPU instruction: an addition, comparison, array access, or pointer increment. Complex operations like hash lookups, set insertions, or string comparisons count as multiple operations. When in doubt, assume your algorithm's true operation count is 2-10× higher than the bare loop count suggests.
The Calculation Process:
Given constraints and a proposed algorithm:
1. Substitute the maximum constraint values into the complexity expression.
2. Compare the result against the operation budget (roughly 10⁸ per second of time limit).
3. Well under the budget means the algorithm passes; over the budget means expect TLE.
Example 1:
Constraints: n ≤ 10⁵, proposed algorithm: O(n log n)
Estimate: 10⁵ × log₂(10⁵) ≈ 10⁵ × 17 ≈ 1.7 × 10⁶ operations. Far below 10⁸, so it passes comfortably.
Example 2:
Constraints: n ≤ 10⁵, proposed algorithm: O(n²)
Estimate: (10⁵)² = 10¹⁰ operations. About 100× over the budget, so it will TLE.
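Here is the same check as runnable arithmetic; the variable names are illustrative:

// Feasibility check for the two examples above.
const maxConstraint = 1e5;
const opsBudget = 1e8;                                        // ~1 second on a typical judge

const nLogNOps = maxConstraint * Math.log2(maxConstraint);    // ≈ 1.7 × 10⁶
const nSquaredOps = maxConstraint * maxConstraint;            // = 10¹⁰

console.log(nLogNOps < opsBudget);       // true  -> O(n log n) passes comfortably
console.log(nSquaredOps < opsBudget);    // false -> O(n²) will TLE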
Rather than calculating each time, you can memorize the maximum n for each common complexity class. These numbers assume a 10⁸ operation budget (1 second on a modern judge).
Deriving Maximum Sizes:
For each complexity f(n), solve: f(n) = 10⁸
| Complexity | Formula for Max n | Approximate Max n | Typical Algorithms |
|---|---|---|---|
| O(1) | N/A | Any | Mathematical formulas, direct computation |
| O(log n) | 2^(10⁸) | Effectively unlimited | Binary search on answer, exponentiation |
| O(√n) | (10⁸)² | 10¹⁶ | Prime factorization, sqrt decomposition iteration |
| O(n) | 10⁸ | 10⁸ | Linear scan, prefix sums, two pointers |
| O(n log n) | 10⁸/(log n) | ~5 × 10⁶ | Sorting, balanced trees, segment trees |
| O(n √n) | 10⁸ / √n | ~2.5 × 10⁵ | Mo's algorithm, sqrt decomposition |
| O(n²) | √(10⁸) | 10⁴ | Quadratic DP, Floyd-Warshall (small n) |
| O(n² log n) | √(10⁸/log n) | ~3 × 10³ | Quadratic with binary search |
| O(n³) | ∛(10⁸) | ~465 | Matrix multiplication, cubic DP |
| O(2^n) | log₂(10⁸) | ~26 | Subset enumeration, bitmask DP |
| O(n × 2^n) | solve numerically | ~20-22 | Traveling salesman, Hamiltonian path |
| O(n!) | solve numerically | ~11 | Permutation enumeration |
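If you would rather derive these thresholds than memorize them, a small sketch can solve f(n) = 10⁸ numerically; the maxN helper below is illustrative, not a standard function:

// Derive the "max n" column by finding the largest n with f(n) ≤ 10⁸.
type Cost = (n: number) => number;

function maxN(f: Cost, budget = 1e8): number {
  let lo = 1;
  let hi = 1;
  while (f(hi) < budget) hi *= 2;                   // exponential search for an upper bound
  while (hi - lo > 1) {                             // bisect to the largest feasible n
    const mid = Math.floor((lo + hi) / 2);
    if (f(mid) <= budget) lo = mid;
    else hi = mid;
  }
  return lo;
}

console.log(maxN(n => n));                          // 10⁸        for O(n)
console.log(maxN(n => n * Math.log2(n)));           // ~4-5 × 10⁶ for O(n log n)
console.log(maxN(n => n * Math.sqrt(n)));           // ~2 × 10⁵   for O(n √n)
console.log(maxN(n => n * n));                      // 10⁴        for O(n²)
console.log(maxN(n => n ** 3));                     // ~464       for O(n³)
console.log(maxN(n => 2 ** n));                     // ~26        for O(2ⁿ)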
Memorize these approximate thresholds: O(n²) → n ≤ 10⁴, O(n log n) → n ≤ 10⁶, O(2^n) → n ≤ 25, O(n!) → n ≤ 11. These cover 90% of constraint interpretation situations.
The previous section answers "Can this algorithm run in time?" But the more practical question is the reverse: "Given these constraints, what complexity do I need?"
The Reverse Process:
1. Read the maximum constraint values.
2. Substitute them into each candidate complexity class, from slower to faster.
3. The slowest class that fits the budget is your target; anything slower will TLE.
Worked Example:
Constraints: n ≤ 3000
Try O(n³): 3000³ = 2.7 × 10¹⁰. Hundreds of times over budget.
Try O(n²): 3000² = 9 × 10⁶. Comfortably under 10⁸.
Conclusion: You need an O(n²) algorithm or better. O(n³) will TLE.
// Mental lookup table for required complexity
// Given constraint n, find minimum required complexity:

function getRequiredComplexity(n: number): string {
  if (n <= 11) return "O(n!) acceptable";
  if (n <= 22) return "O(n × 2^n) acceptable";
  if (n <= 25) return "O(2^n) acceptable";
  if (n <= 100) return "O(n³) acceptable";
  if (n <= 500) return "O(n³) borderline, prefer O(n²)";
  if (n <= 5000) return "O(n²) acceptable";
  if (n <= 10000) return "O(n²) borderline, prefer O(n log n)";
  if (n <= 100000) return "O(n√n) or O(n log n) acceptable";
  if (n <= 1000000) return "O(n log n) acceptable";
  if (n <= 10000000) return "O(n log n) borderline, prefer O(n)";
  if (n <= 100000000) return "O(n) acceptable";
  return "O(log n) or O(1) required";
}

// Examples:
console.log(getRequiredComplexity(3000));    // O(n²) acceptable
console.log(getRequiredComplexity(100000));  // O(n√n) or O(n log n) acceptable
console.log(getRequiredComplexity(18));      // O(n × 2^n) acceptable

The Boundary Cases:
When n falls exactly at a boundary (n = 10,000 for O(n²), or n = 500 for O(n³)), err toward the faster algorithm:
If you're right at the boundary, you're gambling. Choose the safer, faster algorithm unless you have a strong reason not to.
Many problems have multiple constraint variables—arrays with n elements and m queries, graphs with V vertices and E edges, strings of length n with alphabet size σ. Analyzing these requires considering all variables together.
General Approach:
Substitute the maximum value of every variable into the complexity expression, multiply everything out, and compare the total to the budget. Assume all variables hit their maximums simultaneously, because worst-case tests will.
Example: Graph Problem
Constraints: V ≤ 10⁵ vertices, E ≤ 2 × 10⁵ edges
Bellman-Ford, O(V × E): 10⁵ × 2 × 10⁵ = 2 × 10¹⁰ operations. Far over budget.
Dijkstra with a binary heap, O((V + E) log V): 3 × 10⁵ × 17 ≈ 5 × 10⁶ operations. Easily fits.
Only Dijkstra (or BFS for unweighted graphs) is viable.
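A quick sketch of the same comparison in code; the variable names and the 2-second budget are illustrative assumptions, not part of any problem statement:

// Multi-variable estimate: plug every variable's maximum into each candidate's expression.
const V = 1e5;
const E = 2e5;
const budget = 2e8;                               // assuming a 2-second limit

const bellmanFordOps = V * E;                     // O(V × E)         = 2 × 10¹⁰
const dijkstraOps = (V + E) * Math.log2(V);       // O((V + E) log V) ≈ 5 × 10⁶

console.log(bellmanFordOps <= budget);            // false -> TLE
console.log(dijkstraOps <= budget);               // true  -> viable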
| Scenario | Complexity Expression | Typical Constraints | Verdict |
|---|---|---|---|
| Array + Queries | O(n + q log n) | n, q ≤ 10⁵ | ✓ ~2 × 10⁶ ops |
| Array + Queries | O(n × q) | n, q ≤ 10⁵ | ✗ 10¹⁰ ops |
| Array + Queries | O((n + q) √n) | n, q ≤ 10⁵ | ✓ Mo's algorithm viable |
| Graph traversal | O(V + E) | V, E ≤ 10⁵ | ✓ ~2 × 10⁵ ops |
| Graph shortest path | O(V × E) | V, E ≤ 10⁵ | ✗ 10¹⁰ ops |
| 2D Grid | O(rows × cols) | rows, cols ≤ 1000 | ✓ 10⁶ ops |
| 2D Grid DP | O(rows × cols × k) | rows, cols ≤ 1000, k ≤ 100 | ✓ 10⁸ ops (borderline) |
| String + pattern | O(n + m) | n, m ≤ 10⁶ | ✓ KMP, Z-algorithm |
| String + pattern | O(n × m) | n, m ≤ 10⁶ | ✗ 10¹² ops |
Some complexities have hidden variables. O(n log(max_value)) includes the value range. O(n × alphabet_size) depends on character set size. O(V + E) in dense graphs means O(V²). Always identify ALL variables affecting complexity.
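For example, binary searching on an answer bounded by 10⁹ adds a hidden factor of about 30; a short sketch with illustrative numbers:

// Hidden variable: O(n log(max_value)) for binary search on the answer.
const items = 1e5;
const maxValue = 1e9;
const halvings = Math.ceil(Math.log2(maxValue));    // ≈ 30 iterations of the search
const totalOps = items * halvings;                  // ≈ 3 × 10⁶: the value range, not just n, sets the cost

console.log(halvings, totalOps);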
Not all problems have 1-2 second time limits. You must adjust your calculations accordingly.
Scaling the Budget:
| Time Limit | Operation Budget | Adjustment Factor |
|---|---|---|
| 0.5 sec | 5 × 10⁷ | 0.5× standard |
| 1 sec | 10⁸ | 1× (baseline) |
| 2 sec | 2 × 10⁸ | 2× |
| 3 sec | 3 × 10⁸ | 3× |
| 5 sec | 5 × 10⁸ | 5× |
| 10 sec | 10⁹ | 10× |
Practical Impact:
With a 5-second time limit instead of 1 second, the budget grows from 10⁸ to 5 × 10⁸ operations: O(n²) stretches from n ≈ 10⁴ to roughly 2 × 10⁴, and a borderline O(n log n) or O(n √n) solution gains comfortable headroom.
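A minimal sketch of how the O(n²) threshold scales with the time limit, using the 10⁸ ops/sec baseline from earlier:

// Largest n an O(n²) algorithm can handle under a given time limit.
function maxNQuadratic(timeLimitSeconds: number, opsPerSecond = 1e8): number {
  return Math.floor(Math.sqrt(timeLimitSeconds * opsPerSecond));
}

console.log(maxNQuadratic(1));   // 10000
console.log(maxNQuadratic(2));   // ~14142
console.log(maxNQuadratic(5));   // ~22360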
Tight Time Limits (≤0.5 sec):
When time limits are very tight, constant factors matter more: prefer flat arrays over pointer-heavy containers, keep allocations and I/O overhead out of hot loops, and be cautious with slower languages even when the complexity class is right.
Generous Time Limits (≥5 sec):
Generous limits often indicate that the intended solution carries a heavy constant factor (large data structures, big-number arithmetic) or that the setters want slower languages to pass.
Don't over-optimize when time limits are generous—it's a hint that the expected solution isn't blazingly fast.
Aim for your estimated operations to be at least 10× under the budget. This accounts for: (1) constant factors in your code, (2) worst-case inputs maximizing operations, (3) judge server speed variations, and (4) hidden work in language runtime. If your estimate is 10⁷ and budget is 10⁸, you're in good shape. If estimate is 8×10⁷, you're gambling.
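The margin check itself is one comparison; here is a small sketch using the 10× rule from the tip above (the labels are just illustrative):

// Classify an operation estimate against the budget using the 10× safety margin.
function safetyMargin(estimatedOps: number, budget = 1e8): string {
  if (estimatedOps * 10 <= budget) return "safe";    // at least 10× headroom
  if (estimatedOps <= budget) return "gamble";       // fits on paper; constants may kill it
  return "TLE";                                      // over budget before constants even apply
}

console.log(safetyMargin(1e7));    // "safe"
console.log(safetyMargin(8e7));    // "gamble"
console.log(safetyMargin(1e10));   // "TLE"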
Different programming languages have vastly different execution speeds. Your complexity estimation must account for this.
Language Speed Hierarchy:
Roughly: optimized C++ sets the baseline, Java runs about 2-3× slower, PyPy about 3-5× slower, and CPython often 10-100× slower on tight loops.
Some judges adjust time limits per language. Others don't—meaning Python solutions need either algorithms with much smaller constant factors or lower complexity classes entirely.
C++ optimizations:
- ios_base::sync_with_stdio(false); cin.tie(NULL); for fast I/O
- v.reserve(n) to avoid reallocations
- inline for small, frequently called functions
- '\n' over endl: endl flushes the buffer, '\n' doesn't
- Compile with -O2 or -O3 for optimizations
Python optimizations:
- sys.stdin.readline() instead of input()
- Built-ins like sum(), min(), max() are optimized
| Language | Effective Budget | Max n for O(n²) | Max n for O(n log n) |
|---|---|---|---|
| C++ (optimized) | ~10⁸ | ~10,000 | ~5,000,000 |
| Java | ~3-5 × 10⁷ | ~6,000 | ~2,000,000 |
| Python (CPython) | ~10⁶ - 10⁷ | ~1,000 - 3,000 | ~100,000 - 500,000 |
| Python (PyPy) | ~3 × 10⁷ | ~5,000 | ~1,500,000 |
In interviews, language speed rarely matters for correctness—interviewers care about algorithmic complexity, not constant factors. However, being aware of these differences helps in discussions about production performance and when solving online assessments with strict time limits.
Big-O notation hides constant factors, but constants matter when you're near the time limit boundary. Understanding how to estimate and manage constants separates marginal passes from confident solves.
Common Sources of High Constants:
Hash map and set operations, dynamic memory allocation, pointer-chasing data structures with poor cache locality, division and modulo, and heavy function-call overhead all inflate the real cost per "operation".
// Example: Two O(n log n) algorithms with different constants

// Algorithm A: Sort + Binary Search (constant factor: ~3)
// - Sorting: ~1.5 n log n comparisons with good cache behavior
// - Binary search: very fast, good cache behavior
// True cost: ~3 × n log n operations

// Algorithm B: Balanced BST insertions (constant factor: ~50)
// - Each insertion: log n tree traversal with pointer chasing
// - Poor cache behavior (nodes scattered in memory)
// - Allocation overhead for each node
// True cost: ~50 × n log n operations

// For n = 10⁵:
// Algorithm A: 3 × 10⁵ × 17 ≈ 5 × 10⁶ operations
// Algorithm B: 50 × 10⁵ × 17 ≈ 8.5 × 10⁷ operations

// Both are O(n log n), but A is ~17× faster in practice!

// Rule-of-thumb constant factor estimates (relative to a single array access):
const CONSTANT_FACTORS: Record<string, string> = {
  "Array access": "1",
  "Vector push_back": "1.5",       // amortized, occasional reallocation
  "Hash map access": "10-30",
  "std::set/map access": "20-50",
  "Function call": "2-5",
  "Division": "20-50",
  "Memory allocation": "50-200",
  "Cache miss": "100-300",
};

The Practical Rule:
When estimating true operation count, multiply the bare loop count by a factor that reflects the work inside the loop: roughly 2-3× for plain arithmetic and array access, 10-30× for hash map operations, 20-50× for balanced-tree operations, and more if you allocate memory per iteration.
Example Estimation:
Algorithm: process n elements, say n ≤ 10⁵, with a segment tree (each operation is a log n tree traversal).
Bare count: 10⁵ × log₂(10⁵) ≈ 1.7 × 10⁶ node visits.
With a ~10× constant for pointer traversal, comparisons, and recursion: ≈ 1.7 × 10⁷ true operations. Still under 10⁸, but with an order of magnitude less headroom than the bare count suggests.
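The same estimate as a few lines of arithmetic; the 10× constant is the assumption stated above:

// True-cost estimate for n segment-tree operations (n and the constant are the assumptions above).
const elements = 1e5;
const constantFactor = 10;                          // pointer chasing, comparisons, recursion
const bareOps = elements * Math.log2(elements);     // ≈ 1.7 × 10⁶
const trueOps = bareOps * constantFactor;           // ≈ 1.7 × 10⁷

console.log(trueOps < 1e8);                         // true, but with far less headroom than bareOps suggests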
The most common cause of unexpected TLE is underestimating constant factors when you're close to the time limit. If your estimated operations are 5 × 10⁷ and budget is 10⁸, you're technically fine but in danger zone. Either optimize constants or find a better algorithm.
Before committing to an implementation, verify that your complexity estimates are correct. This saves time wasted on doomed approaches.
Technique 1: Operation Counting
Add a counter to your algorithm and run on maximum input:
long long ops = 0;
for (int i = 0; i < n; i++) {
for (int j = 0; j < m; j++) {
ops++;
// actual work
}
}
cout << "Total ops: " << ops << endl;
// If ops > 10^8, you'll likely TLE
Technique 2: Quick Stress Test
Run your solution on maximum-size random input locally:
// Generate worst-case input
auto start = chrono::high_resolution_clock::now();
// Run algorithm
auto end = chrono::high_resolution_clock::now();
auto duration = chrono::duration_cast<chrono::milliseconds>(end - start);
cout << "Time: " << duration.count() << "ms" << endl;
If it takes >800ms locally for a 1-second limit, you're at risk (judges are usually slower than local machines).
Technique 3: Scaling Test
Run on multiple input sizes and verify complexity:
| n | Time (ms) | Expected for O(n²) | Expected for O(n log n) |
|---|---|---|---|
| 1000 | 50 | 50 | 50 |
| 2000 | 200 | 200 (4×) | 110 (2.2×) |
| 4000 | 800 | 800 (4×) | 240 (2.2×) |
If time quadruples when n doubles, you have O(n²). If it slightly more than doubles, you have O(n log n).
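You can turn the doubling test into a quick classifier; a sketch with illustrative thresholds:

// Infer the rough complexity class from how runtime grows when n doubles.
function classifyGrowth(timeAtN: number, timeAt2N: number): string {
  const ratio = timeAt2N / timeAtN;
  if (ratio < 1.7) return "sub-linear (O(log n), O(√n))";   // ~1× to ~1.4×
  if (ratio < 2.6) return "≈ O(n) or O(n log n)";           // 2× to ~2.2×
  if (ratio < 5.5) return "≈ O(n²)";                        // ~4×
  return "≈ O(n³) or worse";                                // ~8× and up
}

console.log(classifyGrowth(50, 200));   // "≈ O(n²)"
console.log(classifyGrowth(50, 110));   // "≈ O(n) or O(n log n)"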
If your solution runs in under 300ms locally on maximum input, it's almost certainly safe. 300-700ms is likely OK but watch for slow judges. Over 700ms is risky. Over 1 second locally is almost guaranteed TLE on the judge.
With practice, complexity estimation becomes instant mental math. Here's the essential knowledge to internalize:
Quick Mental Decision Tree:
See n = X → What complexity class is required?
X ≤ 11 → O(n!) OK → Brute force permutations
X ≤ 20-25 → O(2ⁿ) OK → Bitmask DP, subset enumeration
X ≤ 100 → O(n³) OK → Floyd-Warshall, simple cubic
X ≤ 5000 → O(n²) OK → Quadratic DP, nested loops
X ≤ 10⁵ → O(n√n) or better → Mo's, sqrt decomposition
X ≤ 10⁶ → O(n log n) OK → Sorting, segment trees, binary search
X ≤ 10⁸ → O(n) required → Linear passes, streaming
X > 10⁸ → O(log n) required → Mathematical formula, pure binary search
What's Next:
Now that you can estimate required complexity, the next page provides the detailed mapping from constraint ranges to specific complexity classes and algorithms—a comprehensive reference table for algorithm selection.
You now have the mathematical tools to estimate time complexity requirements from any constraint set. This skill transforms constraint analysis from intuition into calculation, ensuring you never waste time on algorithms that can't possibly pass the time limit.