Quick Sort presents an interesting paradox in algorithm analysis: it's the preferred sorting algorithm for most real-world applications despite having a worse theoretical worst case than its competitors. Merge Sort and Heap Sort both guarantee O(n log n), yet Quick Sort — with its O(n²) worst case — consistently outperforms them in practice.
Understanding this apparent contradiction requires rigorous complexity analysis. We need to understand not just the Big-O notation, but the underlying mathematics that produces these bounds. Why is the average case O(n log n)? Why does the worst case reach O(n²)? How rare is the worst case? And what do these bounds actually mean in practice?
This page develops a complete understanding of Quick Sort's time complexity — mathematically rigorous but presented with intuition. By the end, you'll not only know the complexity bounds but understand deeply why they are what they are.
By the end of this page, you will (1) understand the best, average, and worst-case complexity of Quick Sort, (2) be able to derive these bounds using recurrence relations, (3) develop intuition for when each case occurs, and (4) understand the probability of worst-case behavior with random pivot selection.
The best case for Quick Sort occurs when every pivot perfectly bisects the array — splitting it into two equal (or nearly equal) halves. This is the ideal scenario that produces the minimum possible work.
Setting Up the Analysis:
Let T(n) represent the time to sort an array of n elements in the best case.
At each step:
- The pivot splits the array into two halves of roughly n/2 elements each
- Partitioning touches all n elements, costing cn time
- We then recurse on both halves
The Recurrence Relation:
T(n) = 2T(n/2) + cn
T(1) = O(1)
Where cn is the time for partitioning (linear in the number of elements).
Solving the Recurrence:
This is the classic divide-and-conquer recurrence. We can solve it by expansion:
T(n) = 2T(n/2) + cn
= 2[2T(n/4) + c(n/2)] + cn = 4T(n/4) + 2cn
= 4[2T(n/8) + c(n/4)] + 2cn = 8T(n/8) + 3cn
...
= 2^k T(n/2^k) + k·cn
When k = log₂n, we have n/2^k = 1:
T(n) = n·T(1) + (log₂n)·cn
= n·O(1) + cn·log₂n
= O(n log n)
Tracing the best case for n = 16, level by level:

| Level | Subarrays | Size Each | Work at Level | Cumulative |
|---|---|---|---|---|
| 0 | 1 | 16 | 16 | 16 |
| 1 | 2 | 8 | 2 × 8 = 16 | 32 |
| 2 | 4 | 4 | 4 × 4 = 16 | 48 |
| 3 | 8 | 2 | 8 × 2 = 16 | 64 |
| 4 | 16 | 1 | 16 × 1 = 16 | 80 |
Key Observations:
O(n) work per level: Each level processes all n elements exactly once (distributed across all subarrays at that level).
O(log n) levels: The tree has log₂n levels because we halve the problem size at each level.
Total work: O(n) × O(log n) = O(n log n)
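The level-by-level accounting can be checked directly. This is a small sketch (the `levelWork` helper is my own, not from the text) that tabulates the work at each level of the best-case recursion tree:

```typescript
// Tabulate the work done at each level of the best-case recursion tree.
// At level k there are 2^k subarrays of size n / 2^k, so every level
// does exactly n units of partitioning work.
function levelWork(n: number): { level: number; subarrays: number; work: number }[] {
  const rows: { level: number; subarrays: number; work: number }[] = [];
  let size = n;
  let level = 0;
  while (size >= 1) {
    const subarrays = n / size;
    rows.push({ level, subarrays, work: subarrays * size });
    if (size === 1) break;
    size /= 2;
    level++;
  }
  return rows;
}

const rows = levelWork(16);
console.log(rows.map(r => r.work));                // [16, 16, 16, 16, 16]
console.log(rows.reduce((s, r) => s + r.work, 0)); // 80
```

For n = 16 this reproduces the table: five levels (log₂16 + 1) doing 16 units each, 80 in total.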
Visual Intuition:
Think of the recursion as a balanced binary tree: the root partitions all n elements, its two children partition n/2 each, the four grandchildren n/4 each, and so on for log₂n levels.
This is identical to Merge Sort's complexity, which also has perfect n/2 splits and O(n log n) performance.
The recurrence T(n) = 2T(n/2) + Θ(n) falls under Case 2 of the Master Theorem: a = 2, b = 2, f(n) = Θ(n). Since f(n) = Θ(n^(log_b a)) = Θ(n^1) = Θ(n), we have T(n) = Θ(n log n). The Master Theorem provides a quick way to verify our analysis.
The worst case occurs when every pivot is the extreme element (minimum or maximum), creating maximally unbalanced partitions: one of size n-1 and one of size 0.
When Does This Happen?

With a fixed pivot choice (always the first or last element), the worst case occurs on input that is already sorted or reverse-sorted: the chosen pivot is always the minimum or maximum. Arrays of all-equal elements can trigger it too, depending on the partition scheme.
The Recurrence Relation:
T(n) = T(n-1) + T(0) + cn
= T(n-1) + cn (since T(0) = O(1))
This is no longer a divide-and-conquer recurrence — it's a linear recurrence!
Solving the Recurrence:
T(n) = T(n-1) + cn
= T(n-2) + c(n-1) + cn
= T(n-3) + c(n-2) + c(n-1) + cn
...
= T(1) + c(2 + 3 + ... + n)
= O(1) + c · (n(n+1)/2 - 1)
= O(n²)
Tracing the worst case on the sorted array [1,2,3,4,5] with a last-element pivot:

| Call | Subarray | Pivot | Work | Left Size | Right Size |
|---|---|---|---|---|---|
| 1 | [1,2,3,4,5] | 5 | 5 | 4 | 0 |
| 2 | [1,2,3,4] | 4 | 4 | 3 | 0 |
| 3 | [1,2,3] | 3 | 3 | 2 | 0 |
| 4 | [1,2] | 2 | 2 | 1 | 0 |
| 5 | [1] | — | 1 | base | — |
Total work: 5 + 4 + 3 + 2 + 1 = 15 = n(n+1)/2 = O(n²)
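The triangular sum can be reproduced by instrumenting a Lomuto-style partition with a fixed last-element pivot (a sketch; the counter and helper names are mine):

```typescript
// Count comparisons made by Quick Sort with a fixed last-element pivot.
// On already-sorted input every pivot is the maximum, so each call
// compares all remaining elements: (n-1) + (n-2) + ... + 1 = n(n-1)/2.
function countFixedPivotComparisons(arr: number[]): number {
  let comparisons = 0;
  function partition(low: number, high: number): number {
    const pivot = arr[high]; // fixed choice: last element
    let i = low - 1;
    for (let j = low; j < high; j++) {
      comparisons++;
      if (arr[j] < pivot) {
        i++;
        [arr[i], arr[j]] = [arr[j], arr[i]];
      }
    }
    [arr[i + 1], arr[high]] = [arr[high], arr[i + 1]];
    return i + 1;
  }
  function sort(low: number, high: number): void {
    if (low < high) {
      const p = partition(low, high);
      sort(low, p - 1);
      sort(p + 1, high);
    }
  }
  sort(0, arr.length - 1);
  return comparisons;
}

console.log(countFixedPivotComparisons([1, 2, 3, 4, 5])); // 10 = 4 + 3 + 2 + 1
console.log(countFixedPivotComparisons(Array.from({ length: 100 }, (_, i) => i))); // 4950 = 100·99/2
```

Note the table counts n elements touched per call (15 total), while this counts comparisons (n−1 per call, 10 total); both sums are Θ(n²).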
Why This Is So Bad:
Compare the best and worst cases:
| Case | Recursion Depth | Work per Level | Total Work |
|---|---|---|---|
| Best | log n | n | n log n |
| Worst | n | varies (n, n-1, n-2...) | n² |
The worst case is qualitatively different:
- The recursion tree degenerates into a chain of depth n instead of a balanced tree of depth log n
- Each level still does work linear in the remaining elements, so the total is n + (n-1) + ... + 1 = Θ(n²)
The Stack Depth Problem:
In the worst case, the recursion depth is O(n). For very large arrays, this can cause stack overflow! The best case has O(log n) recursion depth.
Production implementations use tail recursion optimization: always recurse on the smaller partition first, convert the larger partition to iteration. This limits stack depth to O(log n) even in the worst case.
Sorted or nearly-sorted inputs are common in practice: log files, database results, sequential IDs, etc. Using Quick Sort with fixed first/last pivot on such inputs produces worst-case O(n²) behavior. This is why production implementations use randomization or median-of-three pivot selection.
The average case analysis is mathematically more involved, but the result is encouraging: Quick Sort achieves O(n log n) on average, with a constant factor only about 39% worse than the best case.
The Key Insight:
With random data (or random pivot selection), each element is equally likely to be chosen as pivot. When we select a pivot randomly, the expected partition split is not perfect, but it's "good enough" most of the time.
The Probability Distribution:
If we pick a random element as pivot, it has:
- a 10% chance of landing in the smallest decile and a 10% chance of landing in the largest (the "bad" extremes)
- an 80% chance of landing somewhere in the middle 80%
A "reasonable" pivot (one in the middle 80%) creates partitions that are at most 10%/90% — still O(log n) depth!
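To see why even a lopsided 10%/90% split keeps the depth logarithmic, note that the larger side shrinks by a factor of 10/9 per level (a quick calculation of my own, not from the text):

```typescript
// Depth if EVERY split were the worst "reasonable" one (10% / 90%):
// the larger partition holds 0.9n elements, so after k levels it holds
// (0.9)^k * n, reaching size 1 at k = log_{10/9}(n).
function depthWithTenNinetySplits(n: number): number {
  return Math.ceil(Math.log(n) / Math.log(10 / 9));
}

console.log(depthWithTenNinetySplits(1_000_000)); // 132 levels
console.log(Math.ceil(Math.log2(1_000_000)));     // 20 levels with perfect splits
```

About 6.6× more levels than perfect bisection, but still O(log n).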
Setting Up the Average Case Recurrence:
Let T(n) be the expected running time over all possible pivot choices.
If the pivot has rank i (i.e., it's the i-th smallest element):
- the left partition contains the i-1 smaller elements
- the right partition contains the n-i larger elements
- the cost is T(i-1) + T(n-i) for the recursion, plus cn for partitioning
Each rank is equally likely (probability 1/n), so:
T(n) = cn + (1/n) × Σᵢ₌₁ⁿ [T(i-1) + T(n-i)]
Simplifying (by symmetry: each subproblem size k from 0 to n-1 appears exactly twice in the sum, once as T(i-1) and once as T(n-i)):
T(n) = cn + (2/n) × Σₖ₌₀ⁿ⁻¹ T(k)
```typescript
/**
 * Average Case Analysis Sketch
 *
 * We can formally solve the recurrence, but here's the intuitive approach:
 *
 * Key insight: Most pivots are "good enough"
 * - A pivot in positions n/4 to 3n/4 gives at most 75%/25% split
 * - Probability of being in this range: 50%
 * - Expected "good" pivots before a "bad" one: 2
 *
 * This means: for every "bad" partition, we expect ~2 "good" partitions
 * The good partitions dominate, keeping expected depth at O(log n)
 */

// Simplified analysis using "good" and "bad" pivots:
function analyzeAverageCase(n: number): void {
  const GOOD_THRESHOLD = 0.25; // Pivot in middle 50% is "good"

  // With probability 1/2, pivot is "good" (25%-75% split)
  // "Good" split: larger partition is at most 3n/4
  // Depth to reach size 1: log_{4/3}(n) ≈ 2.4 log₂(n)

  // With "bad" pivots mixed in, depth increases but stays O(log n)
  // Formal analysis: expected depth ≈ 2 ln(n) ≈ 1.39 log₂(n)

  console.log(`Expected recursion depth: ~${(1.39 * Math.log2(n)).toFixed(1)}`);
  console.log(`Best case depth: ${Math.log2(n).toFixed(1)}`);
  console.log(`Ratio: ~1.39 (39% more levels than best case)`);
}

/**
 * The formal solution to the recurrence:
 * T(n) = cn + (2/n) × Σₖ₌₀ⁿ⁻¹ T(k)
 *
 * yields:
 * T(n) ≈ 1.39 × n × log₂(n)
 *
 * This is O(n log n) with a constant factor about 39% worse than best case.
 */
```

Solving the Recurrence (Sketch):
The recurrence T(n) = cn + (2/n) × Σₖ₌₀ⁿ⁻¹ T(k) can be solved by:
1. Multiplying both sides by n: nT(n) = cn² + 2Σₖ₌₀ⁿ⁻¹ T(k)
2. Subtracting the same equation written for n-1, which telescopes the sum: nT(n) - (n-1)T(n-1) = c(2n-1) + 2T(n-1)
3. Rearranging to nT(n) = (n+1)T(n-1) + c(2n-1), dividing by n(n+1), and summing, which produces a harmonic series Hₙ ≈ ln n
The result is:
T(n) ≈ 1.39 × n × log₂(n) = O(n log n)
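The 1.39 constant is easy to observe empirically. This sketch (my own instrumentation, not from the text; the small Lehmer-style RNG is seeded for repeatability) counts comparisons made by Quick Sort with random pivots:

```typescript
// Deterministic Lehmer-style RNG so the experiment is repeatable.
let seed = 42;
function rand(): number {
  seed = (seed * 48271) % 2147483647;
  return seed / 2147483647;
}

// Quick Sort with a uniformly random pivot, counting comparisons.
function randomPivotComparisons(a: number[]): number {
  let comparisons = 0;
  function sort(low: number, high: number): void {
    if (low >= high) return;
    const r = low + Math.floor(rand() * (high - low + 1));
    [a[r], a[high]] = [a[high], a[r]]; // move random pivot to the end
    const pivot = a[high];
    let i = low - 1;
    for (let j = low; j < high; j++) {
      comparisons++;
      if (a[j] < pivot) { i++; [a[i], a[j]] = [a[j], a[i]]; }
    }
    [a[i + 1], a[high]] = [a[high], a[i + 1]]; // pivot lands at i + 1
    sort(low, i);      // left of the pivot
    sort(i + 2, high); // right of the pivot
  }
  sort(0, a.length - 1);
  return comparisons;
}

const n = 10_000;
const input = Array.from({ length: n }, (_, i) => i); // sorted input is harmless with random pivots
const measured = randomPivotComparisons(input);
console.log(measured);                            // one sample run
console.log(Math.round(1.39 * n * Math.log2(n))); // predicted average, about 184,700
```

A single run typically lands within a few percent of the prediction, even on already-sorted input.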
Alternative Intuition with Indicator Variables:
Another elegant analysis counts expected comparisons:
- Elements of rank i and j (with i < j) are compared at most once, and only if one of them is the first pivot chosen among the elements of rank i through j
- That event has probability 2/(j - i + 1)
- Summing over all pairs: E[comparisons] = Σᵢ₌₁ⁿ⁻¹ Σⱼ₌ᵢ₊₁ⁿ 2/(j-i+1) ≈ 2n ln n ≈ 1.39 n log₂n
This approach, using indicator random variables, is both elegant and rigorous.
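Under the indicator-variable view, the expectation can be computed exactly by summing 2/(j-i+1) over all pairs (a sketch; `expectedComparisons` is my own name for it):

```typescript
// Exact expected comparison count: elements with rank gap d = j - i are
// compared with probability 2 / (d + 1), and there are (n - d) such pairs.
function expectedComparisons(n: number): number {
  let total = 0;
  for (let d = 1; d < n; d++) {
    total += (n - d) * (2 / (d + 1));
  }
  return total;
}

const n = 1000;
console.log(expectedComparisons(n).toFixed(0)); // "10986", the exact expectation for n = 1000
console.log((2 * n * Math.log(n)).toFixed(0));  // "13816", the asymptotic 2n ln n
```

The exact value equals 2(n+1)Hₙ − 4n, which approaches 2n ln n ≈ 1.39 n log₂n from below as n grows.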
The average case is only about 39% slower than the best case. This is remarkably close! In contrast, the worst case is slower by a factor of Θ(n / log n). The average case being so close to optimal is a key reason Quick Sort is practical: typical performance is near-optimal, and the worst case is rare.
With random pivot selection, what's the probability of hitting the worst case? Let's analyze this rigorously.
Defining "Bad" Pivots:
Let's say a pivot is "bad" if it's in the extreme 10% — either the smallest 5% or largest 5% of elements. With a bad pivot, one partition has ≤ 5% of elements, the other has ≥ 95%.
Probability Calculations:
For a single partition:
- P(bad pivot) = 10% = 1/10 (pivot in the extreme 10%)
- P(reasonable pivot) = 90% = 9/10
For the worst case to occur, we need every pivot to be bad. With n elements, we need approximately n bad pivots in a row:

P(n bad pivots in a row) ≈ (1/10)ⁿ = 10⁻ⁿ
For n = 100: P = 10⁻¹⁰⁰ (essentially zero)
For n = 1000: P = 10⁻¹⁰⁰⁰ (inconceivably small)
For comparison, there are approximately 10⁸⁰ atoms in the observable universe. The probability of worst-case Quick Sort on 100 elements is 10²⁰ times smaller than randomly selecting a specific atom in the universe!
| Array Size (n) | P(worst case) | Context |
|---|---|---|
| 10 | 10⁻¹⁰ | About 1 in 10 billion |
| 20 | 10⁻²⁰ | Winning lottery 3 times |
| 50 | 10⁻⁵⁰ | Less than 1/(atoms in Earth) |
| 100 | 10⁻¹⁰⁰ | Inconceivable |
| 1000 | 10⁻¹⁰⁰⁰ | Truly impossible |
What About "Pretty Bad" Performance?
Even if we don't hit the absolute worst case, could we get O(n^1.5) or other bad behavior?
Analysis shows that with random pivots:
- The running time is O(n log n) with high probability, not merely in expectation
- The probability of exceeding the average by any constant factor shrinks exponentially as that factor grows
This is a consequence of Chernoff bounds applied to the recursion structure. The probability of significant deviation from average decreases exponentially with n.
Practical Implication:
With random pivot selection, you can be confident that Quick Sort will behave as O(n log n) on any input. The "worst case" becomes a theoretical concern, not a practical one. This is why:
- Production libraries ship Quick Sort variants as default sorts despite the O(n²) bound
- Randomized (or median-of-three) pivot selection is treated as a sufficient defense, rather than requiring a guaranteed-O(n log n) algorithm
Quick Sort with random pivots is a 'Las Vegas' algorithm: it always produces the correct output, but its running time is probabilistic. The guarantee isn't that every run is fast, but that the expected time is O(n log n) and large deviations are exponentially unlikely. For practical purposes, this is as good as a deterministic guarantee.
Let's put Quick Sort's complexity in context by comparing it with other major sorting algorithms.
| Algorithm | Best Case | Average Case | Worst Case | Space |
|---|---|---|---|---|
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) |
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) |
Quick Sort vs Merge Sort:
| Aspect | Quick Sort | Merge Sort |
|---|---|---|
| Worst case | O(n²) | O(n log n) |
| Space | O(log n) | O(n) |
| Cache behavior | Excellent (in-place) | Good (sequential) |
| Practical speed | Typically fastest | Slightly slower |
| Stability | Unstable | Stable |
Why Quick Sort Wins Despite Worse Worst Case:
- Randomized pivots make the O(n²) case vanishingly rare in practice
- In-place partitioning gives excellent cache locality and only O(log n) space
- Lower constant factors: no per-level copying into auxiliary arrays as in Merge Sort
Quick Sort vs Heap Sort:
| Aspect | Quick Sort | Heap Sort |
|---|---|---|
| Worst case | O(n²) | O(n log n) |
| Average constant | Lower | Higher |
| Cache behavior | Excellent | Poor (many distant accesses) |
| Practical speed | Faster | Slower |
Heap Sort has guaranteed O(n log n), but its cache behavior is poor — heap operations access distant array indices. Quick Sort's locality dominates in practice.
The Hidden Constants:
Big-O notation hides constant factors. Let's reveal them:
Quick Sort (average): ≈ 1.39 × n × log₂n comparisons
Merge Sort: ≈ 1.00 × n × log₂n comparisons + n copies per level
Heap Sort: ≈ 2.00 × n × log₂n comparisons
Quick Sort makes about 40% more comparisons than Merge Sort, but avoids the copy overhead. Heap Sort makes about 45% more comparisons than Quick Sort.
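Plugging in n = 1,000,000 makes those constant factors concrete (arithmetic of my own, using the approximations quoted above):

```typescript
// Approximate comparison counts for sorting one million elements.
const n = 1_000_000;
const log2n = Math.log2(n); // about 19.93
console.log(`Quick Sort: ~${(1.39 * n * log2n / 1e6).toFixed(1)} million comparisons`);
console.log(`Merge Sort: ~${(1.00 * n * log2n / 1e6).toFixed(1)} million comparisons, plus element copies`);
console.log(`Heap Sort:  ~${(2.00 * n * log2n / 1e6).toFixed(1)} million comparisons`);
```

Roughly 27.7M vs 19.9M vs 39.9M: Quick Sort sits in the middle on comparisons and wins decisively on data movement.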
For swap/move counts:
- Quick Sort moves only elements that sit on the wrong side of the pivot, so it performs relatively few swaps
- Merge Sort copies every element into and out of the auxiliary array at every level: about n log₂n moves
- Heap Sort performs a swap at nearly every step of each sift-down
The combination of low comparison count and minimal data movement makes Quick Sort the overall winner for in-memory sorting.
Use Quick Sort when speed matters most and O(n²) is acceptable (with randomization). Use Merge Sort when stability is required or guaranteed O(n log n) is essential. Use Heap Sort when guaranteed O(n log n) is needed without extra space. The 'best' algorithm depends on your constraints.
While we've focused on time complexity, Quick Sort's space efficiency is equally important to understand.
Stack Space for Recursion:
Quick Sort's space usage is dominated by the recursion stack. Each recursive call uses O(1) space for local variables, so total space equals O(depth of recursion).
Space in Each Case:
Best case: O(log n)
Average case: O(log n)
Worst case: O(n)
```typescript
/**
 * Quick Sort with Tail Recursion Optimization
 *
 * Strategy: Always recurse on the SMALLER partition first,
 * then iterate (not recurse) on the larger partition.
 *
 * This guarantees O(log n) stack depth even in the worst case!
 *
 * Assumes a standard partition(arr, low, high) helper that returns
 * the pivot's final index.
 */
function quickSortOptimizedStack(
  arr: number[],
  low: number,
  high: number
): void {
  while (low < high) {
    const p = partition(arr, low, high);

    // Determine which partition is smaller
    const leftSize = p - low;
    const rightSize = high - p;

    if (leftSize < rightSize) {
      // Left partition is smaller - recurse on it
      quickSortOptimizedStack(arr, low, p - 1);
      // Continue with right partition (tail "recursion" as iteration)
      low = p + 1; // high stays the same
    } else {
      // Right partition is smaller - recurse on it
      quickSortOptimizedStack(arr, p + 1, high);
      // Continue with left partition
      high = p - 1; // low stays the same
    }
  }
}

/**
 * Why this works:
 *
 * The smaller partition has at most n/2 elements.
 * So the recursion depth for recursive calls is at most log₂n.
 *
 * The larger partition is handled by iteration (no stack growth).
 *
 * Even if every partition is n-1 / 1:
 * - We recurse on the size-1 partition (depth 1)
 * - We iterate on the size n-1 partition
 * - Total recursive depth: O(log n) because we always pick smaller
 */
```

Comparison with Other Algorithms:
| Algorithm | Auxiliary Space | Stack Space | Total |
|---|---|---|---|
| Quick Sort | O(1) | O(log n) typical | O(log n) |
| Quick Sort (worst) | O(1) | O(n) | O(n) |
| Quick Sort (optimized) | O(1) | O(log n) guaranteed | O(log n) |
| Merge Sort | O(n) | O(log n) | O(n) |
| Heap Sort | O(1) | O(1)* | O(1) |
| Insertion Sort | O(1) | O(1)* | O(1) |
*Iterative implementations
The Space Advantage:
Quick Sort's O(log n) space is a significant advantage over Merge Sort's O(n):
- Sorting a large array needs only a small stack, not a second buffer the size of the input
- No per-call allocation means no allocator overhead and no memory fragmentation
In memory-constrained environments, this difference matters enormously. Quick Sort can sort in-place; Merge Sort cannot (for arrays).
For linked lists, Merge Sort achieves O(1) auxiliary space (plus O(log n) stack). This is because merging linked lists doesn't require copying to auxiliary arrays — we just manipulate pointers. For linked lists specifically, Merge Sort's space disadvantage disappears.
Quick Sort achieves O(n log n) average-case complexity. But is this optimal? Could there be a faster comparison-based sorting algorithm?
The Answer: No. There's a fundamental lower bound.
The Decision Tree Argument:
Any comparison-based sorting algorithm can be modeled as a decision tree:
- Each internal node represents one comparison between two elements
- Each leaf represents a final output ordering (a permutation of the input)
- One execution of the algorithm traces one root-to-leaf path
Key Observations:
- There are n! possible orderings of the input, and the tree must reach a distinct outcome for each, so it needs at least n! leaves
- A binary tree with n! leaves has height at least log₂(n!)
- The height is the worst-case number of comparisons, so every comparison sort needs at least log₂(n!) comparisons on some input
Proof that log₂(n!) = Ω(n log n):
Using Stirling's approximation: n! ≈ √(2πn) × (n/e)ⁿ
log₂(n!) ≈ log₂(√(2πn)) + n × log₂(n/e)
≈ (1/2)log₂(2πn) + n log₂n - n log₂e
= n log₂n - O(n)
= Θ(n log n)
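The Stirling estimate can be sanity-checked numerically (my own check, not from the text):

```typescript
// Compare exact log₂(n!) = Σ log₂(i) with the Stirling-based
// n·log₂(n) - n·log₂(e); the gap is the low-order (1/2)·log₂(2πn) term.
function log2Factorial(n: number): number {
  let total = 0;
  for (let i = 2; i <= n; i++) total += Math.log2(i);
  return total;
}

const n = 1000;
const exact = log2Factorial(n);
const stirling = n * Math.log2(n) - n * Math.log2(Math.E);
console.log(exact.toFixed(1));    // "8529.4": any comparison sort of 1000 items needs at least 8530 comparisons
console.log(stirling.toFixed(1)); // "8523.1": Stirling, off only by the low-order term
```

For n = 1000 the two values differ by about 6.3, exactly the (1/2)log₂(2πn) term the derivation drops.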
The Implication:
No comparison-based sorting algorithm can do better than Ω(n log n) comparisons in the worst case. Quick Sort (average case), Merge Sort, and Heap Sort all achieve this bound — they are asymptotically optimal.
Non-comparison sorts like Counting Sort, Radix Sort, and Bucket Sort can achieve O(n) under certain conditions. They 'cheat' by not being comparison-based — they use the structure of the data (integer keys, limited range) rather than just comparing elements. When applicable, they can be faster than Quick Sort.
Quick Sort's Position:
Merge Sort and Heap Sort achieve O(n log n) in all cases. So why prefer Quick Sort?
Because constant factors matter. Quick Sort's lower constant factors make it faster in practice, even though its worst case is theoretically worse. The n in O(n log n) might be 1.39n for Quick Sort versus 2n for Heap Sort — a 30% difference in speed for the same big-O class.
For Interview Discussions:
When asked about Quick Sort's complexity, a complete answer covers:
1. Best case O(n log n): balanced partitions, T(n) = 2T(n/2) + cn
2. Average case O(n log n): about 1.39 n log₂n comparisons with random pivots
3. Worst case O(n²): extreme pivots (e.g., sorted input with a fixed first/last pivot), and why randomization makes this astronomically unlikely
4. Space: O(log n) stack in the typical case, O(n) in the worst case unless the smaller-partition-first optimization is used
We've conducted a rigorous analysis of Quick Sort's complexity. Here are the essential takeaways:
- Best case: every pivot bisects the array; T(n) = 2T(n/2) + cn solves to O(n log n)
- Worst case: every pivot is an extreme; T(n) = T(n-1) + cn solves to O(n²)
- Average case: O(n log n) with a constant only about 39% above the best case (≈ 1.39 n log₂n comparisons)
- With random pivots, the probability of worst-case behavior is about 10⁻ⁿ, negligible for any realistic n
- No comparison sort can beat Ω(n log n), so Quick Sort's average case is asymptotically optimal
- Space is O(log n) expected, and can be guaranteed O(log n) by always recursing on the smaller partition first
What's Next:
We've analyzed the theoretical complexity. But why is Quick Sort so fast in practice? The final page explores the practical factors — cache behavior, branch prediction, memory access patterns — that make Quick Sort the algorithm of choice for real-world sorting.
You now understand Quick Sort's complexity deeply — not just the Big-O bounds but why they are what they are, how rare the worst case is, and how Quick Sort compares to other sorting algorithms. This prepares you for informed algorithm selection and interview discussions about sorting.