When someone says "bubble sort is O(n²)," what exactly does that mean? How do we arrive at that characterization? And why does it matter so much in practice?
This page provides a rigorous, mathematical analysis of bubble sort's time complexity. We'll derive the exact number of operations, understand the difference between best, average, and worst cases, and see how O(n²) translates to real-world performance degradation.
By the end, you won't just know that bubble sort is O(n²)—you'll understand why it is and what that implies for algorithm selection.
By the end of this page, you will derive bubble sort's time complexity from first principles, understand the mathematical meaning of O(n²), calculate exact operation counts for any input size, and internalize why quadratic complexity is problematic at scale.
The primary operation in sorting is the comparison—determining which of two elements should come first. Let's count exactly how many comparisons bubble sort performs.
Standard Bubble Sort (without early termination):
Given an array of size n, pass 1 performs (n−1) comparisons, pass 2 performs (n−2) comparisons, and so on, down to the final pass, which performs just 1.
Total Comparisons:
$$\text{Total} = (n-1) + (n-2) + (n-3) + \cdots + 2 + 1$$
This is the sum of the first (n-1) positive integers. Using the well-known formula:
$$\sum_{k=1}^{n-1} k = \frac{(n-1) \cdot n}{2} = \frac{n^2 - n}{2}$$
For n = 5: $$\frac{5^2 - 5}{2} = \frac{25 - 5}{2} = \frac{20}{2} = 10 \text{ comparisons}$$
For n = 100: $$\frac{100^2 - 100}{2} = \frac{10000 - 100}{2} = \frac{9900}{2} = 4,950 \text{ comparisons}$$
For n = 1,000,000: $$\frac{1000000^2 - 1000000}{2} \approx 500,000,000,000 = 5 \times 10^{11} \text{ comparisons}$$
At one million elements, bubble sort performs approximately 500 billion comparisons. If each comparison takes just 1 nanosecond, that's 500 seconds—over 8 minutes—just for comparing! And this doesn't include the swaps. This is why O(n²) algorithms fail at scale.
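As a quick sanity check on that arithmetic, here is a minimal TypeScript sketch that turns the comparison formula into a wall-clock estimate, using the same illustrative 1 nanosecond per comparison assumed above:

```typescript
// Back-of-envelope time estimate from the comparison formula.
// The 1 ns/comparison rate is an illustrative assumption, not a measurement.
function estimateSortSeconds(n: number, nsPerComparison = 1): number {
  const comparisons = (n * n - n) / 2; // exact count derived above
  return (comparisons * nsPerComparison) / 1e9; // nanoseconds -> seconds
}

for (const n of [1_000, 100_000, 1_000_000]) {
  console.log(`n=${n.toLocaleString()}: ~${estimateSortSeconds(n).toFixed(1)} s of comparisons`);
}
// n=1,000: ~0.0 s
// n=100,000: ~5.0 s
// n=1,000,000: ~500.0 s (over 8 minutes)
```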
The code below verifies the formula against an instrumented implementation:

```typescript
function countBubbleSortComparisons(n: number): number {
  // Formula: n(n-1)/2
  return (n * (n - 1)) / 2;
}

// Verify against actual implementation
function bubbleSortWithComparisonsCount(arr: number[]): number {
  const n = arr.length;
  let comparisons = 0;
  for (let pass = 0; pass < n - 1; pass++) {
    for (let i = 0; i < n - pass - 1; i++) {
      comparisons++;
      if (arr[i] > arr[i + 1]) {
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
      }
    }
  }
  return comparisons;
}

// Test for various sizes
for (const n of [5, 10, 20, 100, 1000]) {
  const predicted = countBubbleSortComparisons(n);
  const arr = Array.from({ length: n }, (_, i) => n - i); // Worst case
  const actual = bubbleSortWithComparisonsCount(arr);
  console.log(`n=${n}: Predicted=${predicted}, Actual=${actual}, Match=${predicted === actual}`);
}
// Output:
// n=5: Predicted=10, Actual=10, Match=true
// n=10: Predicted=45, Actual=45, Match=true
// n=20: Predicted=190, Actual=190, Match=true
// n=100: Predicted=4950, Actual=4950, Match=true
// n=1000: Predicted=499500, Actual=499500, Match=true
```

We derived that bubble sort performs exactly (n²−n)/2 comparisons. But we say it's O(n²), not O((n²−n)/2). Why? And what does O(n²) actually mean?
Big-O Definition (Simplified):
f(n) is O(g(n)) if there exist positive constants c and n₀ such that:
$$f(n) \leq c \cdot g(n) \quad \text{for all } n \geq n_0$$
In plain English: f(n) grows no faster than g(n), ignoring constant factors and small inputs.
Applying This to Bubble Sort:
Our function is f(n) = (n² - n)/2. We claim f(n) is O(n²).
Proof: $$f(n) = \frac{n^2 - n}{2} = \frac{n^2}{2} - \frac{n}{2}$$
For n ≥ 1: $$f(n) = \frac{n^2}{2} - \frac{n}{2} < \frac{n^2}{2} < n^2$$
So with c = 1 and n₀ = 1, we have f(n) < 1·n² for all n ≥ 1. ✓
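The inequality is easy to spot-check numerically. A minimal sketch (a verification aid, not a proof):

```typescript
// Spot-check the Big-O witness c = 1, n0 = 1: f(n) <= 1 * n² for all n >= 1.
const f = (n: number): number => (n * n - n) / 2;

for (const n of [1, 2, 10, 100, 10_000]) {
  console.log(`n=${n}: f(n)=${f(n)}, n²=${n * n}, f(n) <= n²: ${f(n) <= n * n}`);
}
// Prints true on every line, consistent with the proof above.
```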
Why We Drop the Lower-Order Terms:
As n grows large, the −n/2 term becomes negligible next to the n²/2 term, so the subtraction barely affects the total.
Big-O captures asymptotic behavior—how the function grows as n approaches infinity. For large n, (n²-n)/2 behaves like n²/2, which behaves like n².
| n | Exact: (n²-n)/2 | Approximate: n²/2 | Simple: n² | Ratio to n² |
|---|---|---|---|---|
| 10 | 45 | 50 | 100 | 0.45 |
| 100 | 4,950 | 5,000 | 10,000 | 0.495 |
| 1,000 | 499,500 | 500,000 | 1,000,000 | 0.4995 |
| 10,000 | 49,995,000 | 50,000,000 | 100,000,000 | 0.49995 |
| 100,000 | 4,999,950,000 | 5,000,000,000 | 10,000,000,000 | 0.499995 |
Notice the ratio converges to 0.5 (or 1/2) as n grows. The exact formula (n²-n)/2 approaches n²/2 asymptotically. In Big-O, we ignore this constant factor of 1/2 because it doesn't affect the growth rate—doubling n still quadruples the work.
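The convergence is easy to reproduce; this short sketch regenerates the ratio column:

```typescript
// Ratio of the exact comparison count to n²; it should approach 1/2 as n grows.
for (const n of [10, 100, 1_000, 10_000, 100_000]) {
  const exact = (n * n - n) / 2;
  console.log(`n=${n}: ${(exact / (n * n)).toFixed(6)}`);
}
// 0.450000, 0.495000, 0.499500, 0.499950, 0.499995 (converging to 0.5)
```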
The best case for bubble sort occurs when the array is already sorted. What happens in this scenario?
Standard Bubble Sort (without optimization):
Even for a sorted array, standard bubble sort performs all (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons. It doesn't know the array is sorted until it does all the work.
Best Case Comparisons: O(n²) ← Still quadratic!
Best Case Swaps: 0 ← Zero swaps needed
Optimized Bubble Sort (with early termination):
The optimized version tracks whether any swap occurred during a pass. If a complete pass makes no swaps, the array is sorted and we can stop.
```typescript
// Standard bubble sort - always O(n²) comparisons
function bubbleSortStandard(arr: number[]): { comparisons: number } {
  const n = arr.length;
  let comparisons = 0;
  for (let pass = 0; pass < n - 1; pass++) {
    for (let i = 0; i < n - pass - 1; i++) {
      comparisons++;
      if (arr[i] > arr[i + 1]) {
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
      }
    }
  }
  return { comparisons };
}

// Optimized bubble sort - O(n) comparisons for sorted input
function bubbleSortOptimized(arr: number[]): { comparisons: number; passes: number } {
  const n = arr.length;
  let comparisons = 0;
  let passes = 0;
  for (let pass = 0; pass < n - 1; pass++) {
    let swapped = false;
    passes++;
    for (let i = 0; i < n - pass - 1; i++) {
      comparisons++;
      if (arr[i] > arr[i + 1]) {
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
        swapped = true;
      }
    }
    if (!swapped) break; // Early termination
  }
  return { comparisons, passes };
}

// Test on already-sorted arrays
const sizes = [10, 100, 1000];

for (const n of sizes) {
  const sorted = Array.from({ length: n }, (_, i) => i + 1);
  const standard = bubbleSortStandard([...sorted]);
  const optimized = bubbleSortOptimized([...sorted]);
  console.log(`n=${n}:`);
  console.log(`  Standard:  ${standard.comparisons} comparisons`);
  console.log(`  Optimized: ${optimized.comparisons} comparisons, ${optimized.passes} passes`);
  console.log(`  Savings:   ${((1 - optimized.comparisons / standard.comparisons) * 100).toFixed(1)}%`);
}

// Output:
// n=10:
//   Standard:  45 comparisons
//   Optimized: 9 comparisons, 1 passes
//   Savings:   80.0%
// n=100:
//   Standard:  4950 comparisons
//   Optimized: 99 comparisons, 1 passes
//   Savings:   98.0%
// n=1000:
//   Standard:  499500 comparisons
//   Optimized: 999 comparisons, 1 passes
//   Savings:   99.8%
```

Best Case Analysis (Optimized Version):
For a sorted array, the optimized version performs a single pass of n−1 comparisons, detects that no swaps occurred, and terminates immediately:

Best Case Comparisons: O(n) ← Linear!
Best Case Swaps: 0
Best Case Passes: 1
This makes bubble sort an adaptive algorithm—it performs better when input is partially sorted.
For nearly-sorted arrays (only a few elements out of place), the optimized version terminates after just a few passes. Each swap removes exactly one inversion, so an array with k inversions performs exactly k swaps; the number of passes is one more than the farthest any element must move toward the front. When every element is near its final position, both quantities are small and the sort finishes quickly, as the sketch below demonstrates.
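A small sketch illustrating this, reusing the bubbleSortOptimized function defined above (swapAt is a hypothetical helper introduced here just to build nearly-sorted inputs):

```typescript
// swapAt: hypothetical helper that returns a copy of arr with positions i and j swapped.
function swapAt(arr: number[], i: number, j: number): number[] {
  const copy = [...arr];
  [copy[i], copy[j]] = [copy[j], copy[i]];
  return copy;
}

const base = Array.from({ length: 1000 }, (_, i) => i); // sorted 0..999

// One adjacent swap: the displaced element moves 1 position, so 2 passes
// (one to fix it, one to confirm no swaps remain).
console.log(bubbleSortOptimized(swapAt(base, 500, 501)).passes); // 2

// Swap a far-apart pair: the large element sinks right in a single pass, but
// the small element can only move one position left per pass.
console.log(bubbleSortOptimized(swapAt(base, 0, 900)).passes); // 901
```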
The worst case for bubble sort occurs when the array is in reverse sorted order—every element is in the wrong position.
Example: Array [5, 4, 3, 2, 1]
Every pair (i, j) where i < j is an inversion: 5>4, 5>3, 5>2, 5>1, 4>3, 4>2, 4>1, 3>2, 3>1, 2>1. That's n(n-1)/2 = 10 inversions.
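A brute-force inversion counter makes this concrete (a minimal sketch; it is itself O(n²), which is fine for small examples):

```typescript
// Count pairs (i, j) with i < j and arr[i] > arr[j], i.e. the inversions.
function countInversions(arr: number[]): number {
  let inversions = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] > arr[j]) inversions++;
    }
  }
  return inversions;
}

console.log(countInversions([5, 4, 3, 2, 1])); // 10, every pair is inverted
console.log(countInversions([1, 2, 3, 4, 5])); // 0, sorted input has no inversions
```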
Worst Case Performance:

- Comparisons: n(n−1)/2 = O(n²)
- Swaps: n(n−1)/2 = O(n²)
- Passes: n−1

The worst case is truly O(n²) for both comparisons AND swaps. This is as bad as it gets.
```typescript
function bubbleSortWorstCase(arr: number[]): {
  comparisons: number;
  swaps: number;
  passes: number;
} {
  const n = arr.length;
  let comparisons = 0;
  let swaps = 0;
  let passes = 0;
  for (let pass = 0; pass < n - 1; pass++) {
    passes++;
    let swappedThisPass = false;
    for (let i = 0; i < n - pass - 1; i++) {
      comparisons++;
      if (arr[i] > arr[i + 1]) {
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
        swaps++;
        swappedThisPass = true;
      }
    }
    if (!swappedThisPass) break;
  }
  return { comparisons, swaps, passes };
}

// Test worst case
const worstCase = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1];
const result = bubbleSortWorstCase([...worstCase]);

console.log("Worst case (reverse sorted) n=10:");
console.log(`  Comparisons: ${result.comparisons}`);
console.log(`  Swaps: ${result.swaps}`);
console.log(`  Passes: ${result.passes}`);
console.log(`  Expected: ${10 * 9 / 2} = 45 each`);

// Output:
// Worst case (reverse sorted) n=10:
//   Comparisons: 45
//   Swaps: 45
//   Passes: 9
//   Expected: 45 = 45 each
```

| Array Size (n) | Comparisons | Swaps | Total Operations |
|---|---|---|---|
| 10 | 45 | 45 | 90 |
| 100 | 4,950 | 4,950 | 9,900 |
| 1,000 | 499,500 | 499,500 | 999,000 |
| 10,000 | 49,995,000 | 49,995,000 | 99,990,000 (~100M) |
| 100,000 | 4,999,950,000 | 4,999,950,000 | ~10 billion |
In production systems, you must plan for worst case. An adversarial user might submit reverse-sorted data. A bug might corrupt ordering. Murphy's Law: if worst case can happen, it will. With bubble sort, worst case is extremely bad and very achievable.
What about "typical" random input? The average case requires probabilistic analysis.
Assumptions for Average Case: the input is a uniformly random permutation of n distinct elements, meaning every ordering of the array is equally likely.
Average Number of Inversions:
For a random permutation of n elements, the expected number of inversions is:
$$E[\text{inversions}] = \frac{n(n-1)}{4}$$
Why n(n-1)/4?
Consider any pair of positions (i, j) where i < j. In a random permutation, the element at position i is larger than the element at position j with probability 1/2. There are C(n,2) = n(n-1)/2 such pairs. So:
$$E[\text{inversions}] = \frac{n(n-1)}{2} \times \frac{1}{2} = \frac{n(n-1)}{4}$$
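For small n, this expectation can be checked exhaustively rather than by sampling. Here is a sketch that averages the inversion count over all n! permutations, reusing the countInversions helper from above:

```typescript
// Enumerate all permutations of arr (recursive generator; fine for small n).
function* permutations(arr: number[]): Generator<number[]> {
  if (arr.length <= 1) {
    yield arr;
    return;
  }
  for (let i = 0; i < arr.length; i++) {
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    for (const p of permutations(rest)) yield [arr[i], ...p];
  }
}

for (const n of [3, 4, 5, 6]) {
  let total = 0;
  let count = 0;
  for (const p of permutations(Array.from({ length: n }, (_, i) => i))) {
    total += countInversions(p);
    count++;
  }
  console.log(`n=${n}: average=${total / count}, n(n-1)/4=${(n * (n - 1)) / 4}`);
}
// n=3: average=1.5, n(n-1)/4=1.5
// n=4: average=3,   n(n-1)/4=3
// n=5: average=5,   n(n-1)/4=5
// n=6: average=7.5, n(n-1)/4=7.5
```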
Average Case Performance:

- Expected Swaps: n(n−1)/4 = O(n²)
- Expected Comparisons: somewhat below n(n−1)/2 thanks to early termination, but still O(n²)

Even with the optimization, comparisons still dominate. Expected early termination saves some work, but the complexity remains O(n²).
The Bottom Line:
For random input, bubble sort is unavoidably quadratic.
```typescript
function measureAverageCase(n: number, trials: number): {
  avgComparisons: number;
  avgSwaps: number;
  expectedSwaps: number;
} {
  let totalComparisons = 0;
  let totalSwaps = 0;

  for (let t = 0; t < trials; t++) {
    // Generate random permutation (Fisher-Yates shuffle)
    const arr = Array.from({ length: n }, (_, i) => i);
    for (let i = n - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1));
      [arr[i], arr[j]] = [arr[j], arr[i]];
    }

    // Sort and count
    let comparisons = 0;
    let swaps = 0;
    for (let pass = 0; pass < n - 1; pass++) {
      let swapped = false;
      for (let i = 0; i < n - pass - 1; i++) {
        comparisons++;
        if (arr[i] > arr[i + 1]) {
          [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
          swaps++;
          swapped = true;
        }
      }
      if (!swapped) break;
    }
    totalComparisons += comparisons;
    totalSwaps += swaps;
  }

  return {
    avgComparisons: totalComparisons / trials,
    avgSwaps: totalSwaps / trials,
    expectedSwaps: (n * (n - 1)) / 4
  };
}

// Test with multiple trials
const result = measureAverageCase(100, 1000);
console.log("Average case over 1000 random arrays of size 100:");
console.log(`  Avg Comparisons: ${result.avgComparisons.toFixed(1)}`);
console.log(`  Avg Swaps: ${result.avgSwaps.toFixed(1)}`);
console.log(`  Expected Swaps (n(n-1)/4): ${result.expectedSwaps}`);
console.log(`  Difference: ${Math.abs(result.avgSwaps - result.expectedSwaps).toFixed(1)}`);

// Output (approximately):
// Average case over 1000 random arrays of size 100:
//   Avg Comparisons: 4672.3 (less than 4950 due to early termination)
//   Avg Swaps: 2478.5
//   Expected Swaps (n(n-1)/4): 2475
//   Difference: 3.5
```

The empirical results closely match the theoretical prediction of n(n−1)/4 swaps. This is a common pattern in algorithm analysis: probabilistic theory predicts average behavior, and experiments validate it. Understanding both approaches makes you a stronger engineer.
While time complexity is bubble sort's weakness, space complexity is its strength.
In-Place Sorting:
Bubble sort is an in-place algorithm—it sorts the array using only a constant amount of extra memory, regardless of input size.
Extra Space Used:

- Loop counters (pass, i)
- A temporary value during each swap
- An optional swapped flag in the optimized version

Space Complexity: O(1)
This means bubble sort uses the same amount of extra memory whether sorting 10 elements or 10 million elements. The array is sorted "in place" without requiring a copy.
Comparison with Other Algorithms:
Not all sorting algorithms share this property:
| Algorithm | Time (Average) | Space | In-Place? |
|---|---|---|---|
| Bubble Sort | O(n²) | O(1) | Yes ✓ |
| Insertion Sort | O(n²) | O(1) | Yes ✓ |
| Selection Sort | O(n²) | O(1) | Yes ✓ |
| Merge Sort | O(n log n) | O(n) | No ✗ |
| Quick Sort | O(n log n) | O(log n) | Yes* (stack space) |
| Heap Sort | O(n log n) | O(1) | Yes ✓ |
| Counting Sort | O(n + k) | O(k) | No ✗ |
When O(1) Space Matters: embedded systems, memory-constrained devices, and datasets that barely fit in available RAM all benefit from in-place sorting.
However, in most modern applications, O(n) space is acceptable, and faster O(n log n) algorithms are preferred despite the memory cost.
Bubble sort exemplifies a common trade-off: it's extremely space-efficient but time-inefficient. Merge sort is the opposite—fast but memory-hungry. The right choice depends on your constraints. Modern systems typically have ample memory but demand speed.
Big-O tells us how algorithms scale, but what are the actual running times? Let's estimate real-world performance.
Assumptions: a modern CPU performing roughly 300 million comparisons per second, the rate used in the table's time column. Real throughput varies with branching and memory access, so treat these as order-of-magnitude estimates.
| n | Comparisons | Time @ 300M/s | Assessment |
|---|---|---|---|
| 100 | 4,950 | ~0.02 ms | Instant ✓ |
| 1,000 | 499,500 | ~1.7 ms | Instant ✓ |
| 10,000 | ~50 million | ~166 ms | Noticeable |
| 100,000 | ~5 billion | ~16 seconds | Unacceptable ✗ |
| 1,000,000 | ~500 billion | ~28 minutes | Catastrophic ✗ |
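The time column follows directly from the formula. A short sketch that regenerates it, with the ~300 million comparisons/second rate carried over as an assumption from above:

```typescript
// Estimated bubble sort time from the exact comparison count.
// RATE is the illustrative 300M comparisons/second figure, not a measurement.
const RATE = 300e6;

for (const n of [100, 1_000, 10_000, 100_000, 1_000_000]) {
  const comparisons = (n * n - n) / 2;
  const seconds = comparisons / RATE;
  console.log(
    `n=${n.toLocaleString()}: ${comparisons.toLocaleString()} comparisons, ~${(seconds * 1000).toFixed(2)} ms`
  );
}
// The last line works out to ~1.67e6 ms (about 28 minutes), matching the table.
```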
The Quadratic Wall:
Notice how quickly things deteriorate: each 10× increase in n brings a roughly 100× increase in running time, from ~1.7 ms at n = 1,000 to ~166 ms at n = 10,000 to ~16 seconds at n = 100,000.
This is the defining characteristic of O(n²): doubling input quadruples time. At some point, you hit a wall where no amount of hardware can help.
```typescript
function benchmarkBubbleSort(sizes: number[]): void {
  for (const n of sizes) {
    // Create worst-case array
    const arr = Array.from({ length: n }, (_, i) => n - i);

    const start = performance.now();

    // Bubble sort
    for (let pass = 0; pass < n - 1; pass++) {
      for (let i = 0; i < n - pass - 1; i++) {
        if (arr[i] > arr[i + 1]) {
          [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
        }
      }
    }

    const elapsed = performance.now() - start;
    const operations = (n * (n - 1)) / 2;
    const opsPerMs = operations / elapsed;
    console.log(`n=${n.toLocaleString()}: ${elapsed.toFixed(2)}ms (${(opsPerMs / 1000).toFixed(1)}M ops/s)`);
  }
}

// Run benchmark (be careful with large n!)
benchmarkBubbleSort([100, 500, 1000, 2000, 5000, 10000]);

// Typical output on a modern machine:
// n=100: 0.03ms (165.0M ops/s)
// n=500: 0.45ms (277.2M ops/s)
// n=1,000: 1.80ms (277.5M ops/s)
// n=2,000: 7.20ms (277.6M ops/s)
// n=5,000: 45.00ms (277.7M ops/s)
// n=10,000: 180.00ms (277.8M ops/s)
```

In production, you rarely have just the sort operation. There's data fetching, processing, response formatting. If sorting alone takes 180ms, your entire operation might exceed timeouts. This is why O(n²) algorithms are avoided for non-trivial data sizes.
To truly appreciate bubble sort's limitations, we must compare it with efficient sorting algorithms. Merge sort, quick sort, and heap sort all achieve O(n log n) average-case complexity.
How Much Faster is O(n log n)?
Let's compare operation counts:
| n | n²/2 (Bubble) | n log₂ n (Merge) | Ratio (Bubble/Merge) |
|---|---|---|---|
| 10 | 50 | 33 | 1.5x |
| 100 | 5,000 | 664 | 7.5x |
| 1,000 | 500,000 | 9,966 | 50x |
| 10,000 | 50,000,000 | 132,877 | 376x |
| 100,000 | 5,000,000,000 | 1,660,964 | 3,010x |
| 1,000,000 | 500,000,000,000 | 19,931,569 | 25,086x |
The Gap Keeps Widening:

At 1 million elements, bubble sort performs roughly 5 × 10¹¹ operations while merge sort performs about 2 × 10⁷, a ratio of roughly 25,000×. If merge sort takes 1 second, bubble sort takes nearly 7 hours.
Why the Dramatic Difference?
n² grows much faster than n log n: when n doubles, n² quadruples, while n log n only slightly more than doubles.
For large n, this difference is devastating.
```typescript
function compareGrowthRates(): void {
  console.log("n\t\tn²/2\t\tn·log₂(n)\tRatio");
  console.log("=".repeat(60));

  for (const n of [10, 100, 1000, 10000, 100000]) {
    const quadratic = (n * n) / 2;
    const nlogn = n * Math.log2(n);
    const ratio = (quadratic / nlogn).toFixed(1);
    console.log(`${n}\t\t${quadratic.toLocaleString()}\t\t${nlogn.toFixed(0)}\t\t${ratio}x`);
  }
}

compareGrowthRates();
// Output:
// n          n²/2            n·log₂(n)   Ratio
// ============================================================
// 10         50              33          1.5x
// 100        5,000           664         7.5x
// 1000       500,000         9966        50.2x
// 10000      50,000,000      132877      376.3x
// 100000     5,000,000,000   1660964     3010.3x
```

For any significant data size (n > 1000), always prefer O(n log n) algorithms. The difference isn't just "faster"—it's the difference between a usable system and a broken one. Bubble sort is educational, not practical.
We've rigorously analyzed bubble sort's time complexity. Let's consolidate the key insights:

- Exact comparison count: (n² − n)/2, which is O(n²)
- Worst case (reverse sorted): O(n²) comparisons and O(n²) swaps
- Best case (sorted, with early termination): O(n) comparisons, 0 swaps
- Average case (random input): expected n(n−1)/4 swaps, still O(n²) overall
- Space complexity: O(1), fully in-place
- Practical limit: beyond roughly 10,000 elements, running time becomes unacceptable
What's next:
We've established that bubble sort is O(n²)—slow by modern standards. But does this mean it's never useful? The final page examines the surprising situations where bubble sort is acceptable, and more importantly, when it's absolutely not.
You now have a rigorous understanding of bubble sort's time complexity. You can derive the exact operation count, explain why it's O(n²), compare cases, and understand the practical implications. Next, we'll explore when bubble sort is and isn't appropriate.