Every sorting algorithm has a fundamental operation—the atomic building block from which all sorting behavior emerges. For bubble sort, that operation is the adjacent swap: comparing two neighboring elements and exchanging their positions if they're out of order.
This might seem trivially simple. After all, swapping two values is one of the first things every programmer learns. But understanding swaps deeply reveals surprising insights about algorithm design, memory access patterns, and why certain sorting algorithms perform better than others.
In this page, we'll dissect the swap operation with the rigor of a Principal Engineer analyzing a production system.
By the end of this page, you will understand multiple swap implementations, their memory implications, why bubble sort uses adjacent swaps (not arbitrary swaps), the mathematical properties that make swapping work, and how swap patterns reveal the algorithm's behavior.
A swap is conceptually simple: exchange the values at two positions in an array. But let's examine this carefully.
The Problem:
Given array arr and two indices i and j, we want:
- arr[i] to contain the value previously at arr[j]
- arr[j] to contain the value previously at arr[i]

The Challenge:
If we write arr[i] = arr[j] first, we've lost the original value of arr[i]. When we then write arr[j] = arr[i], we're copying the new value of arr[i] (which is the same as arr[j]). Both positions now hold the same value!
This is the lost update problem, and it's why swapping requires careful implementation.
```typescript
// DON'T DO THIS - Classic beginner mistake
function wrongSwap(arr: number[], i: number, j: number): void {
  arr[i] = arr[j]; // Original arr[i] is now LOST!
  arr[j] = arr[i]; // This copies arr[j]'s value right back into arr[j]
}

// Example:
const arr = [5, 3];
wrongSwap(arr, 0, 1);
console.log(arr); // [3, 3] - WRONG! Should be [3, 5]

// The value 5 was destroyed when we overwrote arr[0]
```

The Solution:
We must preserve one value before overwriting. There are several correct approaches, each with different characteristics.
```typescript
function swapWithTemp(arr: number[], i: number, j: number): void {
  const temp = arr[i]; // Save arr[i] in temporary storage
  arr[i] = arr[j];     // Overwrite arr[i] with arr[j]
  arr[j] = temp;       // Write saved value to arr[j]
}

// Example:
const arr = [5, 3];
swapWithTemp(arr, 0, 1);
console.log(arr); // [3, 5] ✓ Correct!

// Memory operations:
// 1. Read arr[i] → temp   (array read + temp write)
// 2. Read arr[j] → arr[i] (array read + array write)
// 3. Read temp → arr[j]   (temp read + array write)
// Total: 3 reads, 3 writes, 1 temporary variable
```

The temporary variable approach is universally recommended because: (1) it's clear and readable, (2) it works for any data type, (3) modern compilers optimize it effectively, and (4) it has no risk of overflow or type-specific issues. Unless you have a specific reason, use this method.
Beyond the temporary variable method, several alternative swapping techniques exist. Each has specific characteristics worth understanding.
```typescript
function swapWithDestructuring(arr: number[], i: number, j: number): void {
  [arr[i], arr[j]] = [arr[j], arr[i]];
}

// Example:
const arr = [5, 3];
swapWithDestructuring(arr, 0, 1);
console.log(arr); // [3, 5] ✓

// How it works:
// 1. Right side creates a temporary array [arr[j], arr[i]] = [3, 5]
// 2. Left side destructures: arr[i] = 3, arr[j] = 5
//
// Under the hood, this creates a temporary array,
// so it's essentially similar to the temp variable approach,
// but potentially with more memory allocation.
```

For numeric values only, some mathematical tricks can swap without explicit temporaries:
```typescript
function swapWithArithmetic(arr: number[], i: number, j: number): void {
  // Only works for numbers! Will fail for objects, strings, etc.
  arr[i] = arr[i] + arr[j]; // arr[i] now holds sum
  arr[j] = arr[i] - arr[j]; // arr[j] = sum - arr[j] = original arr[i]
  arr[i] = arr[i] - arr[j]; // arr[i] = sum - new arr[j] = original arr[j]
}

// Example with small numbers:
let arr = [5, 3];
swapWithArithmetic(arr, 0, 1);
console.log(arr); // [3, 5] ✓

// ⚠️ WARNING: This can overflow!
// If arr[i] + arr[j] exceeds Number.MAX_SAFE_INTEGER,
// you'll get unexpected results.

// Example of overflow danger:
arr = [Number.MAX_SAFE_INTEGER, 1];
swapWithArithmetic(arr, 0, 1);
// Result is INCORRECT due to precision loss!
```
```typescript
function swapWithXOR(arr: number[], i: number, j: number): void {
  // Only works for 32-bit integers! Fails for large numbers, floats, objects
  if (i !== j) { // Must check: XOR swap fails when swapping with itself
    arr[i] = arr[i] ^ arr[j];
    arr[j] = arr[i] ^ arr[j]; // Now arr[j] = original arr[i]
    arr[i] = arr[i] ^ arr[j]; // Now arr[i] = original arr[j]
  }
}

// Example:
let arr = [5, 3]; // Binary: 5 = 101, 3 = 011
swapWithXOR(arr, 0, 1);
console.log(arr); // [3, 5] ✓

// How XOR swap works (using x = 5, y = 3):
// Step 1: x = x ^ y = 101 ^ 011 = 110 (6)
// Step 2: y = x ^ y = 110 ^ 011 = 101 (5) — y now has original x
// Step 3: x = x ^ y = 110 ^ 101 = 011 (3) — x now has original y

// XOR properties used:
// - a ^ a = 0 (XOR with self gives 0)
// - a ^ 0 = a (XOR with 0 gives original)
// - a ^ b ^ b = a (XOR twice cancels out)
```

While clever, these techniques have serious drawbacks: (1) they only work for specific numeric types, (2) they're confusing to read, (3) they can overflow or lose precision, (4) the XOR swap fails when i === j, and (5) modern compilers optimize temp variable swaps just as well. Use the temporary variable or destructuring methods instead.
| Method | Works With | Pros | Cons |
|---|---|---|---|
| Temp Variable | All types | Clear, safe, universal | Requires extra memory (minimal) |
| Destructuring | All types | Elegant, one-liner | Creates temp array, JS-specific |
| Arithmetic | Numbers only | No explicit temp | Overflow risk, type-limited |
| XOR | 32-bit integers | No explicit temp | Obscure, fails when i=j, type-limited |
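Since the table recommends the temp-variable method for all types, it's worth noting that it generalizes cleanly. As a sketch, one generic helper (the name `swap` is ours) covers numbers, strings, and object references alike:

```typescript
// Generic temp-variable swap: the recommended pattern, for any element type.
function swap<T>(arr: T[], i: number, j: number): void {
  const temp = arr[i];
  arr[i] = arr[j];
  arr[j] = temp;
}

const nums = [5, 3];
swap(nums, 0, 1);
console.log(nums); // [3, 5]

const words = ["banana", "apple"];
swap(words, 0, 1);
console.log(words); // ["apple", "banana"]
```

Because the function is generic over `T`, the type checker guarantees both positions hold the same element type, which the arithmetic and XOR tricks cannot offer.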
Bubble sort specifically uses adjacent swaps—swaps between elements at positions i and i+1. This is not just an implementation detail; it's a fundamental characteristic that defines the algorithm's behavior.
The Adjacency Constraint:
Bubble sort answers the question: "If we can only compare and swap neighbors, how do we sort an array?"
This constraint might seem arbitrary, but it models real situations in which only neighboring items can be exchanged.
What adjacent-only swapping means algorithmically:
Consider the smallest element when it starts at position n-1 (as in a max-to-min array): it needs to travel all the way to position 0. In bubble sort, each left-to-right pass moves it left by at most one position, so it takes n-1 full passes to arrive.
This is the "turtle" problem—small elements at the right side of the array move left very slowly, one position per pass. Large elements at the left ("rabbits") move right quickly because they keep winning comparisons.
```typescript
// A 'turtle' (small value at the end) moves LEFT slowly
// A 'rabbit' (large value at the start) moves RIGHT quickly

// Array: [5, 1, 2, 3, 4] — 5 is a rabbit, 1 is NOT a turtle (it's already close to correct)
// Let's try: [2, 3, 4, 5, 1] — 1 is a turtle!

// Pass 1: [2, 3, 4, 5, 1]
//   Compare 2,3: OK   → [2, 3, 4, 5, 1]
//   Compare 3,4: OK   → [2, 3, 4, 5, 1]
//   Compare 4,5: OK   → [2, 3, 4, 5, 1]
//   Compare 5,1: SWAP → [2, 3, 4, 1, 5]
// After pass 1: 1 moved ONE position left. 5 is now sorted.

// Pass 2: [2, 3, 4, 1, 5]
//   Compare 2,3: OK   → [2, 3, 4, 1, 5]
//   Compare 3,4: OK   → [2, 3, 4, 1, 5]
//   Compare 4,1: SWAP → [2, 3, 1, 4, 5]
// After pass 2: 1 moved ONE more position left. 4,5 are sorted.

// Pass 3: [2, 3, 1, 4, 5]
//   Compare 2,3: OK   → [2, 3, 1, 4, 5]
//   Compare 3,1: SWAP → [2, 1, 3, 4, 5]
// After pass 3: 1 moved ONE more position left. 3,4,5 are sorted.

// Pass 4: [2, 1, 3, 4, 5]
//   Compare 2,1: SWAP → [1, 2, 3, 4, 5]
// After pass 4: 1 reached position 0!

// The turtle (1) took 4 passes to move 4 positions left.
// Each pass, it moved exactly one position.
```

A variant called Cocktail Shaker Sort (or Bidirectional Bubble Sort) alternates between left-to-right and right-to-left passes. This helps turtles and rabbits move equally fast, improving performance on certain inputs. However, worst-case complexity remains O(n²).
Understanding swaps mathematically gives us powerful tools for analyzing sorting algorithms.
Swaps and Permutations:
An array can be viewed as a permutation of the sorted order. For example:
- Sorted order: [1, 2, 3, 4, 5]
- A permutation: [3, 1, 4, 5, 2] — this permutation maps 1→3, 2→1, 3→4, 4→5, 5→2

A swap is a transposition in permutation terminology—it exchanges exactly two elements. Any permutation can be expressed as a product of transpositions.
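The "product of transpositions" claim can be made concrete. A minimal sketch (the helper name `transpose` is ours) that reaches the permutation above by composing four transpositions:

```typescript
// A transposition exchanges exactly two elements (not necessarily adjacent).
function transpose(arr: number[], i: number, j: number): void {
  [arr[i], arr[j]] = [arr[j], arr[i]];
}

// Reach [3, 1, 4, 5, 2] from sorted order via four transpositions:
const a = [1, 2, 3, 4, 5];
transpose(a, 0, 2); // [3, 2, 1, 4, 5]
transpose(a, 1, 2); // [3, 1, 2, 4, 5]
transpose(a, 2, 3); // [3, 1, 4, 2, 5]
transpose(a, 3, 4); // [3, 1, 4, 5, 2]
console.log(a); // [3, 1, 4, 5, 2]
```

Running the same transpositions in reverse order returns the array to sorted order, which is exactly what a sorting algorithm does: it finds some sequence of transpositions that undoes the permutation.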
Inversions: Measuring Disorder
An inversion is a pair of elements that are out of order. For array A, an inversion is a pair (i, j) where i < j but A[i] > A[j].
Examples:
- [1, 2, 3, 4, 5] has 0 inversions (sorted)
- [5, 4, 3, 2, 1] has 10 inversions (reverse sorted)
- [3, 1, 4, 5, 2] has inversions (3,1), (3,2), (4,2), (5,2) = 4 inversions

The Key Insight:
Each adjacent swap removes exactly one inversion.
When we swap adjacent elements A[i] and A[i+1] (where A[i] > A[i+1]), we fix the inversion between these two elements without affecting any other inversions.
```typescript
function countInversions(arr: number[]): number {
  let inversions = 0;
  const n = arr.length;

  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) {
      if (arr[i] > arr[j]) {
        inversions++;
      }
    }
  }

  return inversions;
}

// Examples:
console.log(countInversions([1, 2, 3, 4, 5])); // 0 — sorted
console.log(countInversions([5, 4, 3, 2, 1])); // 10 — max inversions
console.log(countInversions([3, 1, 4, 5, 2])); // 4

// Maximum inversions for array of size n:
// n(n-1)/2 — every pair is inverted (reverse sorted)

// For n=5: 5*4/2 = 10 inversions maximum
```

Why This Matters:
Since bubble sort only performs adjacent swaps, and each swap removes exactly one inversion:
Number of swaps in bubble sort = Number of inversions in the original array
This gives us a precise measure of bubble sort's work:
The number of inversions in an array exactly equals the number of swaps bubble sort (and insertion sort) will perform. For nearly-sorted data, this number is low, making these algorithms efficient. For random or reverse-sorted data, it's high, making them slow.
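This identity is easy to check empirically. A sketch that computes both quantities independently and compares them (both function names are ours):

```typescript
// Count out-of-order pairs directly: O(n²) pairwise check.
function countInversions(arr: number[]): number {
  let inv = 0;
  for (let i = 0; i < arr.length; i++)
    for (let j = i + 1; j < arr.length; j++)
      if (arr[i] > arr[j]) inv++;
  return inv;
}

// Run bubble sort on a copy and count how many swaps it performs.
function bubbleSortSwapCount(arr: number[]): number {
  const a = [...arr]; // sort a copy so the input stays untouched
  let swaps = 0;
  for (let pass = 0; pass < a.length - 1; pass++)
    for (let i = 0; i < a.length - pass - 1; i++)
      if (a[i] > a[i + 1]) {
        [a[i], a[i + 1]] = [a[i + 1], a[i]];
        swaps++;
      }
  return swaps;
}

for (const input of [[1, 2, 3, 4, 5], [5, 4, 3, 2, 1], [3, 1, 4, 5, 2]]) {
  console.log(countInversions(input) === bubbleSortSwapCount(input)); // true
}
```

The two counts agree on every input, because each adjacent swap removes exactly one inversion and bubble sort stops only when zero inversions remain.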
Let's analyze exactly how many swaps bubble sort performs in different scenarios.
| Input Type | Array Example (n=5) | Inversions | Swaps |
|---|---|---|---|
| Already Sorted | [1, 2, 3, 4, 5] | 0 | 0 |
| Reverse Sorted | [5, 4, 3, 2, 1] | 10 | 10 |
| One Swap Away | [1, 3, 2, 4, 5] | 1 | 1 |
| Random (typical) | [3, 5, 1, 4, 2] | 6 | 6 |
| All Same Values | [3, 3, 3, 3, 3] | 0 | 0 |
```typescript
function bubbleSortWithStats(arr: number[]): { sorted: number[], swaps: number, comparisons: number } {
  const n = arr.length;
  let swaps = 0;
  let comparisons = 0;

  for (let pass = 0; pass < n - 1; pass++) {
    for (let i = 0; i < n - pass - 1; i++) {
      comparisons++;
      if (arr[i] > arr[i + 1]) {
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
        swaps++;
      }
    }
  }

  return { sorted: arr, swaps, comparisons };
}

// Test different scenarios:
console.log(bubbleSortWithStats([1, 2, 3, 4, 5]));
// { sorted: [1,2,3,4,5], swaps: 0, comparisons: 10 }

console.log(bubbleSortWithStats([5, 4, 3, 2, 1]));
// { sorted: [1,2,3,4,5], swaps: 10, comparisons: 10 }

console.log(bubbleSortWithStats([2, 1, 3, 5, 4]));
// { sorted: [1,2,3,4,5], swaps: 2, comparisons: 10 }

// Notice: comparisons are always n(n-1)/2 = 10 for n=5
// But swaps vary based on disorder
```

Key Observations:
Comparisons are fixed (without early termination): Standard bubble sort always makes exactly n(n-1)/2 comparisons, regardless of input order.
Swaps are variable: The number of swaps equals the number of inversions, ranging from 0 (sorted) to n(n-1)/2 (reverse sorted).
Swaps are expensive: Each swap typically involves 3 memory operations (read-read-write or read-write-write depending on implementation). Minimizing swaps is often more important than minimizing comparisons.
With early termination: Comparisons can be reduced for nearly-sorted inputs if we detect "no swaps in a pass".
For primitive types like numbers, comparisons and swaps have similar costs. For complex objects, comparison might involve expensive operations (string comparison, deep object comparison), while swapping is cheap (just moving references). This affects which sorting algorithm is optimal.
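The comparison/swap asymmetry is easy to observe. A sketch that counts comparator calls while sorting objects by a string key (the `Person` type, `byName` comparator, and sample data are all illustrative):

```typescript
interface Person { name: string; age: number }

let comparisons = 0;

// Comparing strings can be relatively costly; swapping just moves references.
function byName(a: Person, b: Person): number {
  comparisons++;
  return a.name.localeCompare(b.name);
}

function bubbleSort(people: Person[]): Person[] {
  for (let pass = 0; pass < people.length - 1; pass++)
    for (let i = 0; i < people.length - pass - 1; i++)
      if (byName(people[i], people[i + 1]) > 0)
        // Reference swap: cheap regardless of how large each object is
        [people[i], people[i + 1]] = [people[i + 1], people[i]];
  return people;
}

const people: Person[] = [
  { name: "carol", age: 30 },
  { name: "alice", age: 25 },
  { name: "bob", age: 35 },
];
bubbleSort(people);
console.log(people.map(p => p.name)); // ["alice", "bob", "carol"]
console.log(comparisons);             // n(n-1)/2 = 3 for n = 3
```

Here every comparison walks string data while every swap moves two references, so for large objects with expensive keys, an algorithm that minimizes comparisons can win even if it swaps more.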
While algorithm analysis typically focuses on operation counts, real-world performance depends heavily on memory access patterns. Understanding how bubble sort interacts with memory reveals why it's often faster than expected for small arrays but slower for large ones.
Cache-Friendly Sequential Access:
Bubble sort accesses array elements sequentially: arr[0], arr[1], arr[2], and so on. This pattern is highly cache-friendly because:
- Spatial locality: When we access arr[i], neighboring elements like arr[i+1] are likely already in the CPU cache (they were fetched together in a cache line)
- Prefetching: Modern CPUs detect sequential access patterns and prefetch upcoming memory before it's needed
- Predictable access: No pointer chasing or random jumps—the CPU knows exactly what's coming next
```typescript
// Bubble sort memory access pattern (for one pass):
// Array in memory: [A₀][A₁][A₂][A₃][A₄][A₅][A₆][A₇]
//                  └────────────────────────────────┘
//                  Cache line (e.g., 64 bytes = 8 integers)

// Access sequence:
// Step 1: Read A₀, A₁ → Compare → Maybe swap
// Step 2: Read A₁, A₂ → Compare → Maybe swap (A₁ already in cache!)
// Step 3: Read A₂, A₃ → Compare → Maybe swap (A₂ already in cache!)
// ...

// Each comparison needs 2 array reads.
// But one element is always already cached from the previous comparison!
// Effective memory reads = n (not 2n)

// Compare this to a random-access algorithm:
// Random access pattern: A₇, A₂, A₅, A₀, A₄, A₆, A₁, A₃
// Cache misses are frequent, performance suffers
```

Why Small Arrays Favor Bubble Sort:
For arrays that fit entirely in L1/L2 cache (typically a few KB to a few hundred KB), bubble sort's simplicity and sequential access often let it outperform more complex algorithms, thanks to minimal overhead per element and near-perfect cache hit rates.
This is why some library implementations use bubble sort or insertion sort for small subarrays, even within a faster overall algorithm.
Production sorting implementations like Timsort (Python, Java) and Introsort (C++) use insertion sort (similar access pattern to bubble sort) for small subarrays. The threshold is typically 16-64 elements. Below this size, the O(n²) algorithm with low overhead beats the O(n log n) algorithm with higher overhead.
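The hybrid idea can be sketched as follows. This is our own illustrative sketch, not the actual Timsort or Introsort code: a simplified quicksort that hands any subarray at or below a cutoff (we pick 16, a typical but unmeasured value) to insertion sort:

```typescript
const SMALL_THRESHOLD = 16;

// Insertion sort: sequential access, low overhead, excellent on tiny slices.
function insertionSort(arr: number[], lo: number, hi: number): void {
  for (let i = lo + 1; i <= hi; i++) {
    const key = arr[i];
    let j = i - 1;
    while (j >= lo && arr[j] > key) {
      arr[j + 1] = arr[j]; // shift larger elements right
      j--;
    }
    arr[j + 1] = key;
  }
}

// Simplified quicksort (Hoare partition) with a small-subarray fallback.
function hybridSort(arr: number[], lo = 0, hi = arr.length - 1): void {
  if (hi - lo + 1 <= SMALL_THRESHOLD) {
    insertionSort(arr, lo, hi); // O(n²) but cheap below the threshold
    return;
  }
  const pivot = arr[Math.floor((lo + hi) / 2)];
  let i = lo, j = hi;
  while (i <= j) {
    while (arr[i] < pivot) i++;
    while (arr[j] > pivot) j--;
    if (i <= j) {
      [arr[i], arr[j]] = [arr[j], arr[i]];
      i++; j--;
    }
  }
  hybridSort(arr, lo, j);
  hybridSort(arr, i, hi);
}

const data = Array.from({ length: 100 }, () => Math.floor(Math.random() * 1000));
hybridSort(data); // data is now sorted ascending
```

The design choice: recursion dominates cost on large inputs, so O(n log n) partitioning wins there, while below the threshold the O(n²) algorithm's tiny constant factor wins instead.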
A sorting algorithm is stable if it preserves the relative order of equal elements. Bubble sort is inherently stable, and this stability emerges directly from its use of adjacent swaps.
Why Adjacent Swaps Preserve Stability:
Consider two equal elements at positions i and j where i < j. For their relative order to change, they would at some point have to become adjacent and be swapped with each other.
But bubble sort only swaps when arr[i] > arr[i+1]. For equal elements, the condition is false, so they're never swapped. The earlier element stays earlier.
For an element to "pass" another through intermediate swaps, it would need to be strictly greater (to swap past intermediates) and eventually equal (at the destination). But if it's strictly greater, it won't stop—it'll keep bubbling right. Equal elements maintain their order.
```typescript
interface Item {
  value: number;
  originalIndex: number; // Track original position
}

function stableBubbleSort(items: Item[]): Item[] {
  const n = items.length;

  for (let pass = 0; pass < n - 1; pass++) {
    for (let i = 0; i < n - pass - 1; i++) {
      // Strict greater-than: equal elements are NOT swapped
      if (items[i].value > items[i + 1].value) {
        [items[i], items[i + 1]] = [items[i + 1], items[i]];
      }
    }
  }

  return items;
}

// Test with equal values
const items: Item[] = [
  { value: 3, originalIndex: 0 }, // First 3
  { value: 1, originalIndex: 1 },
  { value: 3, originalIndex: 2 }, // Second 3
  { value: 2, originalIndex: 3 },
];

const sorted = stableBubbleSort(items);
console.log(sorted);
// [
//   { value: 1, originalIndex: 1 },
//   { value: 2, originalIndex: 3 },
//   { value: 3, originalIndex: 0 }, // First 3 comes first!
//   { value: 3, originalIndex: 2 }, // Second 3 comes second!
// ]

// The two 3s maintained their relative order ✓
```

If you change the comparison from > to >=, bubble sort becomes unstable. Equal elements would be swapped, reversing their original order. Always use strict inequality for stable sorting.
Why Stability Matters:
Stability is crucial for multi-key sorting. Suppose you have a list of employees and want to sort by department, then by name within each department:
After step 2, employees within the same department are still sorted by name because the stable sort preserved their relative order from step 1. With an unstable sort, the name ordering within departments would be lost.
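A sketch of that two-step workflow, using a stable bubble sort with a pluggable comparator (the `Employee` type, `stableSortBy` helper, and sample data are all illustrative):

```typescript
interface Employee { name: string; department: string }

// Stable bubble sort: strict "> 0" means equal keys are never swapped.
function stableSortBy<T>(arr: T[], cmp: (a: T, b: T) => number): T[] {
  for (let pass = 0; pass < arr.length - 1; pass++)
    for (let i = 0; i < arr.length - pass - 1; i++)
      if (cmp(arr[i], arr[i + 1]) > 0)
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
  return arr;
}

const staff: Employee[] = [
  { name: "Dana", department: "Sales" },
  { name: "Alex", department: "Eng" },
  { name: "Cleo", department: "Sales" },
  { name: "Bo",   department: "Eng" },
];

// Step 1: sort by name.
stableSortBy(staff, (a, b) => a.name.localeCompare(b.name));
// Step 2: sort by department; stability keeps names ordered within each department.
stableSortBy(staff, (a, b) => a.department.localeCompare(b.department));

console.log(staff.map(e => `${e.department}:${e.name}`));
// ["Eng:Alex", "Eng:Bo", "Sales:Cleo", "Sales:Dana"]
```

Within each department the names remain alphabetical after step 2, precisely because the second (stable) sort never reorders employees whose departments compare equal.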
Understanding when an algorithm doesn't act is as important as understanding when it does. In bubble sort, the condition arr[i] > arr[i+1] determines whether a swap occurs. The cases where we don't swap are equally important.
No-Swap Cases:
- arr[i] < arr[i+1]: Elements are already in correct relative order. No action needed.
- arr[i] === arr[i+1]: Elements are equal. For stability, we don't swap (strict >, not >=).
Why No-Swap Matters:
The no-swap cases directly affect performance:
```typescript
function bubbleSortEarlyTermination(arr: number[]): { result: number[], passesExecuted: number } {
  const n = arr.length;
  let passesExecuted = 0;

  for (let pass = 0; pass < n - 1; pass++) {
    let swapped = false;
    passesExecuted++;

    for (let i = 0; i < n - pass - 1; i++) {
      if (arr[i] > arr[i + 1]) {
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
        swapped = true;
      }
    }

    // If entire pass had no swaps, array is sorted
    if (!swapped) {
      break;
    }
  }

  return { result: arr, passesExecuted };
}

// Test with sorted array
console.log(bubbleSortEarlyTermination([1, 2, 3, 4, 5]));
// { result: [1,2,3,4,5], passesExecuted: 1 }
// Only 1 pass needed! (instead of 4)

// Test with nearly sorted
console.log(bubbleSortEarlyTermination([1, 2, 4, 3, 5]));
// { result: [1,2,3,4,5], passesExecuted: 2 }
// Pass 1: swaps 4 and 3
// Pass 2: no swaps → done
```

This makes bubble sort 'adaptive'—its performance adapts to the input's existing order. An adaptive sorting algorithm performs better when given partially sorted input. This property makes bubble sort reasonable for maintaining already-sorted data with occasional insertions.
We've examined the humble swap operation in remarkable depth. Let's consolidate:
Because bubble sort swaps only on a strict > comparison, the relative order of equal elements is maintained—a valuable property for practical sorting.

What's next:
Now that we understand the swap operation deeply, we're ready to analyze bubble sort's complexity rigorously. The next page derives the O(n²) time complexity, explains exactly what that means, and explores best, average, and worst cases with mathematical precision.
You now understand the adjacent swap—the atomic operation of bubble sort. You've seen multiple implementations, understood why adjacency matters, learned the connection to inversions, and appreciated cache effects and stability. Next, we formally analyze time complexity.