Quick Sort presents an interesting paradox in algorithm analysis: it's the preferred sorting algorithm for most real-world applications despite having a worse theoretical worst case than its competitors. Merge Sort and Heap Sort both guarantee O(n log n), yet Quick Sort — with its O(n²) worst case — consistently outperforms them in practice.

Understanding this apparent contradiction requires rigorous complexity analysis. We need to understand not just the Big-O notation, but the underlying mathematics that produces these bounds. Why is the average case O(n log n)? Why does the worst case reach O(n²)? How rare is the worst case? And what do these bounds actually mean in practice?

This page develops a complete understanding of Quick Sort's time complexity — mathematically rigorous but presented with intuition. By the end, you'll not only know the complexity bounds but understand deeply why they are what they are.
By the end of this page, you will (1) understand the best, average, and worst-case complexity of Quick Sort, (2) be able to derive these bounds using recurrence relations, (3) develop intuition for when each case occurs, and (4) understand the probability of worst-case behavior with random pivot selection.
The best case for Quick Sort occurs when every pivot perfectly bisects the array — splitting it into two equal (or nearly equal) halves. This is the ideal scenario that produces the minimum possible work.

Setting Up the Analysis:

Let T(n) represent the time to sort an array of n elements in the best case.

At each step:
1. Partition: O(n) time to partition n elements
2. Recurse: Two recursive calls on n/2 elements each

The Recurrence Relation:

```
T(n) = 2T(n/2) + cn
T(1) = O(1)
```

Where cn is the time for partitioning (linear in the number of elements).

Solving the Recurrence:

This is the classic divide-and-conquer recurrence. We can solve it by expansion:

```
T(n) = 2T(n/2) + cn
     = 2[2T(n/4) + c(n/2)] + cn = 4T(n/4) + 2cn
     = 4[2T(n/8) + c(n/4)] + 2cn = 8T(n/8) + 3cn
     ...
     = 2^k T(n/2^k) + k·cn
```

When k = log₂n, we have n/2^k = 1:

```
T(n) = n·T(1) + (log₂n)·cn
     = n·O(1) + cn·log₂n
     = O(n log n)
```

The table below traces this level by level for an example array of n = 16 elements:
| Level | Subarrays | Size Each | Work at Level | Cumulative |
|---|---|---|---|---|
| 0 | 1 | 16 | 16 | 16 |
| 1 | 2 | 8 | 2 × 8 = 16 | 32 |
| 2 | 4 | 4 | 4 × 4 = 16 | 48 |
| 3 | 8 | 2 | 8 × 2 = 16 | 64 |
| 4 | 16 | 1 | 16 × 1 = 16 | 80 |
Key Observations:

1. O(n) work per level: Each level processes all n elements exactly once (distributed across all subarrays at that level).

2. O(log n) levels: The tree has log₂n levels because we halve the problem size at each level.

3. Total work: O(n) × O(log n) = O(n log n)

Visual Intuition:

Think of the recursion as building a balanced binary tree:
- Each node represents a partition operation
- The tree has height log₂n
- At each level, the total work sums to n
- Total work = height × work per level = O(n log n)

This is identical to Merge Sort's complexity, which also makes perfect n/2 splits and achieves O(n log n) performance.
The recurrence T(n) = 2T(n/2) + Θ(n) falls under Case 2 of the Master Theorem: a = 2, b = 2, f(n) = Θ(n). Since f(n) = Θ(n^(log_b a)) = Θ(n^1) = Θ(n), we have T(n) = Θ(n log n). The Master Theorem provides a quick way to verify our analysis.
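To make the closed form concrete, here is a small sketch (my own illustration, not part of the lesson's code) that evaluates the best-case recurrence directly, assuming c = 1, and compares it with n·log₂n. The function name `bestCaseT` is hypothetical; the ratio approaches 1 as n grows, confirming the Θ(n log n) bound.

```typescript
// Evaluate T(n) = 2T(n/2) + n with T(1) = 1 (c assumed to be 1) and compare
// with n·log₂n. For powers of two the exact value is n·log₂n + n.
function bestCaseT(n: number): number {
  if (n <= 1) return 1;                        // T(1) = O(1), modeled as 1
  return 2 * bestCaseT(Math.floor(n / 2)) + n; // T(n) = 2T(n/2) + cn
}

for (const n of [16, 1024, 1 << 20]) {
  const t = bestCaseT(n);
  const nLogN = n * Math.log2(n);
  console.log(`n=${n}: T(n)=${t}, n·log₂n=${nLogN.toFixed(0)}, ratio=${(t / nLogN).toFixed(2)}`);
}
```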
The worst case occurs when every pivot is the extreme element (minimum or maximum), creating maximally unbalanced partitions: one of size n-1 and one of size 0.

When Does This Happen?

1. Already sorted array with last-element pivot: [1, 2, 3, 4, 5] — the pivot is always the maximum of its subarray
2. Reverse-sorted array with first-element pivot: [5, 4, 3, 2, 1] — the pivot is always an extreme element of its subarray
3. Adversarial input: An attacker who knows the pivot selection strategy can construct worst-case inputs

The Recurrence Relation:

```
T(n) = T(n-1) + T(0) + cn
     = T(n-1) + cn        (since T(0) = O(1))
```

This is no longer a divide-and-conquer recurrence — it's a linear recurrence!

Solving the Recurrence:

```
T(n) = T(n-1) + cn
     = T(n-2) + c(n-1) + cn
     = T(n-3) + c(n-2) + c(n-1) + cn
     ...
     = T(1) + c(2 + 3 + ... + n)
     = O(1) + c · (n(n+1)/2 - 1)
     = O(n²)
```

Let's trace the calls for the sorted array [1, 2, 3, 4, 5] with a last-element pivot:
| Call | Subarray | Pivot | Work | Left Size | Right Size |
|---|---|---|---|---|---|
| 1 | [1,2,3,4,5] | 5 | 5 | 4 | 0 |
| 2 | [1,2,3,4] | 4 | 4 | 3 | 0 |
| 3 | [1,2,3] | 3 | 3 | 2 | 0 |
| 4 | [1,2] | 2 | 2 | 1 | 0 |
| 5 | [1] | — | 1 | base | — |
Total work: 5 + 4 + 3 + 2 + 1 = 15 = n(n+1)/2 = O(n²)

Why This Is So Bad:

Compare the best and worst cases:

| Case | Recursion Depth | Work per Level | Total Work |
|------|-----------------|----------------|------------|
| Best | log n | n | n log n |
| Worst | n | varies (n, n-1, n-2, ...) | n² |

The worst case is qualitatively different:
- Instead of log n levels of O(n) work, we have n levels of work n, n-1, n-2, ...
- The recursion tree is completely unbalanced — a linear chain, not a tree
- Each element becomes a pivot one at a time, in sequence

The Stack Depth Problem:

In the worst case, the recursion depth is O(n). For very large arrays, this can cause stack overflow! The best case has only O(log n) recursion depth.

Production implementations use tail recursion elimination: always recurse on the smaller partition first, and convert the larger partition to iteration. This limits stack depth to O(log n) even in the worst case (see the code later on this page).
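To watch the quadratic blow-up directly, here is a hedged instrumentation sketch (my own illustration, with a hypothetical `quickSortCounting` helper, not the course's implementation): a Lomuto-style Quick Sort with a fixed last-element pivot that counts comparisons. On sorted input the count tracks n²/2; on shuffled input it stays near 1.39·n·log₂n.

```typescript
// Lomuto partitioning with a fixed last-element pivot, counting comparisons.
function quickSortCounting(arr: number[], low = 0, high = arr.length - 1): number {
  if (low >= high) return 0;
  const pivot = arr[high];  // fixed last-element pivot: worst case for sorted input
  let i = low;
  let comparisons = 0;
  for (let j = low; j < high; j++) {
    comparisons++;
    if (arr[j] < pivot) {
      [arr[i], arr[j]] = [arr[j], arr[i]];
      i++;
    }
  }
  [arr[i], arr[high]] = [arr[high], arr[i]];
  return comparisons
    + quickSortCounting(arr, low, i - 1)
    + quickSortCounting(arr, i + 1, high);
}

const n = 2000;
const sortedInput = Array.from({ length: n }, (_, k) => k);

// Fisher-Yates shuffle to get a random permutation of the same values.
const shuffledInput = [...sortedInput];
for (let k = shuffledInput.length - 1; k > 0; k--) {
  const r = Math.floor(Math.random() * (k + 1));
  [shuffledInput[k], shuffledInput[r]] = [shuffledInput[r], shuffledInput[k]];
}

console.log('sorted input:  ', quickSortCounting([...sortedInput]),
  'comparisons (~n²/2 =', (n * n) / 2, ')');
console.log('shuffled input:', quickSortCounting([...shuffledInput]),
  'comparisons (~1.39·n·log₂n ≈', Math.round(1.39 * n * Math.log2(n)), ')');
```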
Sorted or nearly-sorted inputs are common in practice: log files, database results, sequential IDs, etc. Using Quick Sort with fixed first/last pivot on such inputs produces worst-case O(n²) behavior. This is why production implementations use randomization or median-of-three pivot selection.
The average case analysis is mathematically more involved, but the result is encouraging: Quick Sort achieves O(n log n) on average, with a constant factor only about 39% worse than the best case.

The Key Insight:

With random data (or random pivot selection), each element is equally likely to be chosen as pivot. When we select a pivot randomly, the expected partition split is not perfect, but it's "good enough" most of the time.

The Probability Distribution:

If we pick a random element as pivot, it has:
- 10% chance of being in the bottom 10% (very bad pivot)
- 80% chance of being in the middle 80% (reasonable pivot)
- 10% chance of being in the top 10% (very bad pivot)

A "reasonable" pivot (one in the middle 80%) creates partitions no worse than a 10%/90% split — still O(log n) depth!

Setting Up the Average Case Recurrence:

Let T(n) be the expected running time over all possible pivot choices.

If the pivot has rank i (i.e., it's the i-th smallest element):
- Left partition has i-1 elements
- Right partition has n-i elements
- Partition takes cn time

Each rank is equally likely (probability 1/n), so:

```
T(n) = cn + (1/n) × Σᵢ₌₁ⁿ [T(i-1) + T(n-i)]
```

Simplifying (by symmetry, each of T(0), T(1), ..., T(n-1) appears exactly twice in the sum):

```
T(n) = cn + (2/n) × Σₖ₌₀ⁿ⁻¹ T(k)
```
```typescript
/**
 * Average Case Analysis Sketch
 *
 * We can formally solve the recurrence, but here's the intuitive approach:
 *
 * Key insight: Most pivots are "good enough"
 * - A pivot in positions n/4 to 3n/4 gives at most a 75%/25% split
 * - Probability of landing in this range: 50%
 * - Expected "good" pivots before a "bad" one: 2
 *
 * This means: for every "bad" partition, we expect ~2 "good" partitions.
 * The good partitions dominate, keeping expected depth at O(log n).
 */

// Simplified analysis using "good" and "bad" pivots:
function analyzeAverageCase(n: number): void {
  // With probability 1/2, the pivot lands in the middle 50% (between the 25th
  // and 75th percentiles) and is "good": at worst a 25%/75% split.
  // "Good" split: the larger partition has at most 3n/4 elements.
  // Depth to reach size 1 with only good splits: log_{4/3}(n) ≈ 2.4 log₂(n)
  // With "bad" pivots mixed in, depth increases but stays O(log n).
  // Formal analysis: expected depth ≈ 2 ln(n) ≈ 1.39 log₂(n)

  console.log(`Expected recursion depth: ~${(1.39 * Math.log2(n)).toFixed(1)}`);
  console.log(`Best case depth: ${Math.log2(n).toFixed(1)}`);
  console.log(`Ratio: ~1.39 (39% more levels than best case)`);
}

/**
 * The formal solution to the recurrence:
 *   T(n) = cn + (2/n) × Σₖ₌₀ⁿ⁻¹ T(k)
 *
 * yields:
 *   T(n) ≈ 1.39 × n × log₂(n)
 *
 * This is O(n log n) with a constant factor about 39% worse than the best case.
 */
```

Solving the Recurrence (Sketch):

The recurrence T(n) = cn + (2/n) × Σₖ₌₀ⁿ⁻¹ T(k) can be solved by:

1. Multiply both sides by n: nT(n) = cn² + 2Σₖ₌₀ⁿ⁻¹ T(k)
2. Write the same equation for n-1 and subtract
3. This eliminates the sum, giving: nT(n) - (n-1)T(n-1) = c(2n-1) + 2T(n-1)
4. Rearrange, divide by n(n+1), and telescope

The result is:

T(n) ≈ 1.39 × n × log₂(n) = O(n log n)

Alternative Intuition with Indicator Variables:

Another elegant analysis counts expected comparisons:
- The i-th and j-th smallest elements are compared only if one of them is the first pivot chosen from the values between them (inclusive)
- Probability that they are ever compared = 2/(j-i+1)
- Summing over all pairs gives expected comparisons ≈ 2n·Hₙ ≈ 2n ln n = O(n log n)

This approach, using indicator random variables, is both elegant and rigorous.
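The indicator-variable sum can also be evaluated numerically. The sketch below (my own, with a hypothetical `expectedComparisons` helper, not from the lesson) computes Σ over pairs of 2/(j-i+1) exactly and compares it with 2n·ln n and 1.39·n·log₂n. The exact value is 2(n+1)Hₙ − 4n, so it sits a bit below the leading 2n·ln n term, but the ratio tends to 1 as n grows.

```typescript
// Evaluate the expected comparison count Σ_{i<j} 2/(j-i+1) exactly.
function expectedComparisons(n: number): number {
  // Group pairs by gap d = j - i + 1: there are (n - d + 1) pairs with that gap.
  let total = 0;
  for (let d = 2; d <= n; d++) {
    total += (2 / d) * (n - d + 1);
  }
  return total;
}

for (const n of [100, 10_000, 1_000_000]) {
  const exact = expectedComparisons(n);
  console.log(
    `n=${n}: exact=${exact.toFixed(0)}, ` +
    `2n·ln n=${(2 * n * Math.log(n)).toFixed(0)}, ` +
    `1.39·n·log₂n=${(1.39 * n * Math.log2(n)).toFixed(0)}`
  );
}
```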
The average case is only about 39% slower than the best case. This is remarkably close! In contrast, the worst case is slower by a factor of roughly n/log n (n² versus n log n). The average case being so close to optimal is a key reason Quick Sort is practical: the typical performance is near-optimal, and the worst case is rare.
With random pivot selection, what's the probability of hitting the worst case? Let's analyze this rigorously.

Defining "Bad" Pivots:

Let's say a pivot is "bad" if it's in the extreme 10% — either the smallest 5% or largest 5% of elements. With a bad pivot, one partition has ≤ 5% of elements and the other has ≥ 95%.

Probability Calculations:

For a single partition:
- P(bad pivot) = 10% = 0.1
- P(good pivot) = 90% = 0.9

For anything close to the worst case, essentially every pivot must be bad. With n elements, that means roughly n bad pivots in a row (the true worst case requires each pivot to be the exact minimum or maximum, which is even less likely):

- P(all n pivots bad) = (0.1)ⁿ

For n = 100: P = 10⁻¹⁰⁰ (essentially zero)
For n = 1000: P = 10⁻¹⁰⁰⁰ (inconceivably small)

For comparison, there are approximately 10⁸⁰ atoms in the observable universe. The probability of worst-case Quick Sort on 100 elements is 10²⁰ times smaller than the chance of randomly selecting one specific atom from the entire universe!
| Array Size (n) | P(worst case) | Context |
|---|---|---|
| 10 | 10⁻¹⁰ | About one in ten billion |
| 20 | 10⁻²⁰ | Winning lottery 3 times |
| 50 | 10⁻⁵⁰ | Less than 1/(atoms in Earth) |
| 100 | 10⁻¹⁰⁰ | Inconceivable |
| 1000 | 10⁻¹⁰⁰⁰ | Truly impossible |
What About "Pretty Bad" Performance?\n\nEven if we don't hit the absolute worst case, could we get O(n^1.5) or other bad behavior?\n\nAnalysis shows that with random pivots:\n- P(time > 10 × expected) < 2⁻ⁿ\n- The runtime is highly concentrated around the expected O(n log n)\n\nThis is a consequence of Chernoff bounds applied to the recursion structure. The probability of significant deviation from average decreases exponentially with n.\n\nPractical Implication:\n\nWith random pivot selection, you can be confident that Quick Sort will behave as O(n log n) on any input. The "worst case" becomes a theoretical concern, not a practical one. This is why:\n\n1. Random pivot selection is preferred over deterministic selection\n2. Quick Sort is trusted in production systems\n3. The O(n²) worst case doesn't deter practitioners
Quick Sort with random pivots is a 'Las Vegas' algorithm: it always produces the correct output, but its running time is probabilistic. (A 'Monte Carlo' algorithm, by contrast, bounds the running time but may return a wrong answer.) The guarantee isn't that every run is fast, but that the expected time is O(n log n) and large deviations are exponentially unlikely. For practical purposes, this is as good as a deterministic guarantee.
Let's put Quick Sort's complexity in context by comparing it with other major sorting algorithms.
| Algorithm | Best Case | Average Case | Worst Case | Space |
|---|---|---|---|---|
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) |
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) |
Quick Sort vs Merge Sort:

| Aspect | Quick Sort | Merge Sort |
|--------|-----------|------------|
| Worst case | O(n²) | O(n log n) |
| Space | O(log n) | O(n) |
| Cache behavior | Excellent (in-place) | Good (sequential) |
| Practical speed | Typically fastest | Slightly slower |
| Stability | Unstable | Stable |

Why Quick Sort Wins Despite Worse Worst Case:

1. Lower constant factors: Quick Sort's inner loop is extremely tight
2. Better cache behavior: In-place partitioning maximizes cache utilization
3. Less memory overhead: No auxiliary arrays to allocate/deallocate
4. Rare worst case: With randomization, O(n²) essentially never occurs

Quick Sort vs Heap Sort:

| Aspect | Quick Sort | Heap Sort |
|--------|-----------|-----------|
| Worst case | O(n²) | O(n log n) |
| Average constant | Lower | Higher |
| Cache behavior | Excellent | Poor (many distant accesses) |
| Practical speed | Faster | Slower |

Heap Sort has guaranteed O(n log n), but its cache behavior is poor — heap operations jump between distant array indices. Quick Sort's locality dominates in practice.
The Hidden Constants:

Big-O notation hides constant factors. Let's reveal them:

```
Quick Sort (average): ≈ 1.39 × n × log₂n comparisons
Merge Sort:           ≈ 1.00 × n × log₂n comparisons + n copies per level
Heap Sort:            ≈ 2.00 × n × log₂n comparisons
```

Quick Sort makes about 40% more comparisons than Merge Sort, but avoids the copy overhead. Heap Sort makes about 45% more comparisons than Quick Sort.

For swap/move counts:
- Quick Sort: fewer swaps (Hoare ≈ n/6 per partition pass on average, Lomuto ≈ n/2)
- Merge Sort: n copies per level (roughly n log n copies total)
- Heap Sort: many swaps (in heapify operations)

The combination of low comparison count and minimal data movement makes Quick Sort the overall winner for in-memory sorting.
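These constants can be checked empirically. The sketch below (my own, not the course's implementations) counts comparisons for a randomized Quick Sort and a top-down Merge Sort on the same random input and divides by n·log₂n. The estimates approach 1.39 and 1.0 as n grows; at moderate n, lower-order terms pull both somewhat below those limits.

```typescript
// Shared comparison counter so both sorts are measured the same way.
let comparisons = 0;
const less = (a: number, b: number): boolean => { comparisons++; return a < b; };

function quickSort(arr: number[], low = 0, high = arr.length - 1): void {
  if (low >= high) return;
  const r = low + Math.floor(Math.random() * (high - low + 1)); // random pivot
  [arr[r], arr[high]] = [arr[high], arr[r]];
  const pivot = arr[high];
  let i = low;
  for (let j = low; j < high; j++) {
    if (less(arr[j], pivot)) { [arr[i], arr[j]] = [arr[j], arr[i]]; i++; }
  }
  [arr[i], arr[high]] = [arr[high], arr[i]];
  quickSort(arr, low, i - 1);
  quickSort(arr, i + 1, high);
}

function mergeSort(arr: number[]): number[] {
  if (arr.length <= 1) return arr;
  const mid = arr.length >> 1;
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  const out: number[] = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    out.push(less(right[j], left[i]) ? right[j++] : left[i++]); // stable merge
  }
  return out.concat(left.slice(i), right.slice(j));
}

const n = 100_000;
const data = Array.from({ length: n }, () => Math.random());

comparisons = 0;
quickSort([...data]);
console.log(`Quick Sort constant ≈ ${(comparisons / (n * Math.log2(n))).toFixed(2)}`);

comparisons = 0;
mergeSort([...data]);
console.log(`Merge Sort constant ≈ ${(comparisons / (n * Math.log2(n))).toFixed(2)}`);
```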
Use Quick Sort when speed matters most and O(n²) is acceptable (with randomization). Use Merge Sort when stability is required or guaranteed O(n log n) is essential. Use Heap Sort when guaranteed O(n log n) is needed without extra space. The 'best' algorithm depends on your constraints.
While we've focused on time complexity, Quick Sort's space efficiency is equally important to understand.

Stack Space for Recursion:

Quick Sort's space usage is dominated by the recursion stack. Each recursive call uses O(1) space for local variables, so total space equals O(depth of recursion).

Space in Each Case:

Best case: O(log n)
- Recursion depth = log₂n (perfectly balanced tree)
- Stack space = O(log n)

Average case: O(log n)
- Expected recursion depth ≈ 2 ln n = O(log n)
- Stack space = O(log n)

Worst case: O(n)
- Recursion depth = n (completely unbalanced)
- Stack space = O(n)
- This can cause stack overflow for large n!
```typescript
/**
 * Quick Sort with Tail Recursion Optimization
 *
 * Strategy: Always recurse on the SMALLER partition first,
 * then iterate (not recurse) on the larger partition.
 *
 * This guarantees O(log n) stack depth even in the worst case!
 *
 * Assumes a partition(arr, low, high) helper (e.g., Lomuto or Hoare)
 * that returns the pivot's final index.
 */
function quickSortOptimizedStack(
  arr: number[],
  low: number,
  high: number
): void {
  while (low < high) {
    const p = partition(arr, low, high);

    // Determine which partition is smaller
    const leftSize = p - low;
    const rightSize = high - p;

    if (leftSize < rightSize) {
      // Left partition is smaller - recurse on it
      quickSortOptimizedStack(arr, low, p - 1);
      // Continue with the right partition (tail "recursion" as iteration)
      low = p + 1; // high stays the same
    } else {
      // Right partition is smaller - recurse on it
      quickSortOptimizedStack(arr, p + 1, high);
      // Continue with the left partition
      high = p - 1; // low stays the same
    }
  }
}

/**
 * Why this works:
 *
 * The smaller partition has at most n/2 elements,
 * so each recursive call at least halves the problem size
 * and the recursion depth is at most log₂n.
 *
 * The larger partition is handled by iteration (no stack growth).
 *
 * Even if every partition splits n-1 / 1:
 * - We recurse on the size-1 partition (constant extra depth)
 * - We iterate on the size n-1 partition
 * - Total recursive depth stays O(log n) because we always recurse on the smaller side
 */
```

Comparison with Other Algorithms:

| Algorithm | Auxiliary Space | Stack Space | Total |
|-----------|-----------------|-------------|-------|
| Quick Sort | O(1) | O(log n) typical | O(log n) |
| Quick Sort (worst) | O(1) | O(n) | O(n) |
| Quick Sort (optimized) | O(1) | O(log n) guaranteed | O(log n) |
| Merge Sort | O(n) | O(log n) | O(n) |
| Heap Sort | O(1) | O(1)* | O(1) |
| Insertion Sort | O(1) | O(1)* | O(1) |

*Iterative implementations

The Space Advantage:

Quick Sort's O(log n) space is a significant advantage over Merge Sort's O(n):

- For n = 1 million: Quick Sort uses ~20 stack frames, while Merge Sort needs a ~4 MB auxiliary array (for 32-bit elements)
- For n = 1 billion: Quick Sort uses ~30 stack frames, while Merge Sort needs a ~4 GB auxiliary array

In memory-constrained environments, this difference matters enormously. Quick Sort sorts in place; Merge Sort (on arrays) cannot.
For linked lists, Merge Sort achieves O(1) auxiliary space (plus O(log n) stack). This is because merging linked lists doesn't require copying to auxiliary arrays — we just manipulate pointers. For linked lists specifically, Merge Sort's space disadvantage disappears.
Quick Sort achieves O(n log n) average-case complexity. But is this optimal? Could there be a faster comparison-based sorting algorithm?

The Answer: No. There's a fundamental lower bound.

The Decision Tree Argument:

Any comparison-based sorting algorithm can be modeled as a decision tree:
- Each internal node is a comparison between two elements
- Each leaf is a permutation of the input that would produce sorted output
- The path from root to leaf represents the comparisons made for that input

Key Observations:

1. There are n! possible input permutations
2. Each must lead to a different leaf (different outputs needed)
3. A binary tree with L leaves has height ≥ log₂L
4. Therefore: height ≥ log₂(n!) = Ω(n log n)

Proof that log₂(n!) = Ω(n log n):

Using Stirling's approximation: n! ≈ √(2πn) × (n/e)ⁿ

```
log₂(n!) ≈ log₂(√(2πn)) + n × log₂(n/e)
         ≈ (1/2)log₂(2πn) + n log₂n - n log₂e
         = n log₂n - O(n)
         = Θ(n log n)
```

The Implication:

No comparison-based sorting algorithm can do better than Ω(n log n) comparisons in the worst case. Quick Sort (average case), Merge Sort, and Heap Sort all achieve this bound — they are asymptotically optimal.
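As a quick numerical sanity check (my own sketch, with a hypothetical `log2Factorial` helper, not lesson code), we can compute log₂(n!) exactly as a sum of logarithms and compare it with the Stirling-based estimate n·log₂n − n·log₂e; both track n·log₂n closely, which is the content of the lower bound.

```typescript
// log₂(n!) computed exactly as Σ log₂(k) for k = 2..n.
function log2Factorial(n: number): number {
  let sum = 0;
  for (let k = 2; k <= n; k++) sum += Math.log2(k);
  return sum;
}

for (const n of [10, 100, 10_000]) {
  const exact = log2Factorial(n);
  const stirling = n * Math.log2(n) - n * Math.LOG2E; // n·log₂n − n·log₂e
  console.log(
    `n=${n}: log₂(n!)=${exact.toFixed(1)}, ` +
    `Stirling estimate=${stirling.toFixed(1)}, ` +
    `n·log₂n=${(n * Math.log2(n)).toFixed(1)}`
  );
}
```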
Non-comparison sorts like Counting Sort, Radix Sort, and Bucket Sort can achieve O(n) under certain conditions. They 'cheat' by not being comparison-based — they use the structure of the data (integer keys, limited range) rather than just comparing elements. When applicable, they can be faster than Quick Sort.
Quick Sort's Position:

- Quick Sort average case: O(n log n) — optimal
- Quick Sort worst case: O(n²) — not optimal, but rare

Merge Sort and Heap Sort achieve O(n log n) in all cases. So why prefer Quick Sort?

Because constant factors matter. Quick Sort's lower constant factors make it faster in practice, even though its worst case is theoretically worse. The hidden constant in O(n log n) is roughly 1.39 for Quick Sort's comparisons versus roughly 2 for Heap Sort's: about 30% fewer comparisons within the same big-O class.

For Interview Discussions:

When asked about Quick Sort's complexity, a complete answer covers:
1. Best case: O(n log n) when pivots bisect the array
2. Average case: O(n log n) with randomization
3. Worst case: O(n²) with pathological pivots
4. Why it's used despite the worst case: it's rare, and the constants are low
5. The Ω(n log n) lower bound as context
We've conducted a rigorous analysis of Quick Sort's complexity. Here are the essential takeaways:
What's Next:

We've analyzed the theoretical complexity. But why is Quick Sort so fast in practice? The final page explores the practical factors — cache behavior, branch prediction, memory access patterns — that make Quick Sort the algorithm of choice for real-world sorting.
You now understand Quick Sort's complexity deeply — not just the Big-O bounds but why they are what they are, how rare the worst case is, and how Quick Sort compares to other sorting algorithms. This prepares you for informed algorithm selection and interview discussions about sorting.