Imagine you're organizing a bookshelf by height. How would you do it? Most people instinctively use a strategy like this: scan all the books, find the shortest one, and place it first. Then scan the remaining books, find the next shortest, and place it second. Repeat until done.
This intuitive approach—repeatedly selecting the "most suitable" element and placing it in its correct position—is precisely the essence of Selection Sort. It's one of the most natural sorting algorithms because it mirrors how humans often organize things in the physical world.
By the end of this page, you will thoroughly understand the Selection Sort algorithm: its core principle, step-by-step mechanics, visual intuition, and pseudocode. You'll see how this simple algorithm embodies a powerful sorting strategy—repeated selection—and understand why, despite its simplicity, it has specific properties that make it valuable to study.
Selection Sort is built on a deceptively simple principle:
Repeatedly find the minimum element from the unsorted portion and swap it with the first unsorted element.
This principle contains the two essential operations that define the algorithm:

1. Selection: Scan the unsorted portion to find its minimum element.
2. Placement: Swap that minimum with the first element of the unsorted portion, extending the sorted portion by one.
The beauty of this approach lies in its invariant: after each pass through the array, exactly one more element is guaranteed to be in its final sorted position. The sorted portion of the array grows by one element with each iteration, while the unsorted portion shrinks correspondingly.
After i iterations, the first i elements of the array are in their final sorted positions. This invariant is the key to understanding why Selection Sort works correctly. Each iteration extends the sorted portion by exactly one element, and that element will never need to move again.
Why "Selection" Sort?
The algorithm's name derives from its central operation: selecting the appropriate element at each step. Unlike Bubble Sort, which works by repeatedly comparing adjacent elements, Selection Sort works by selecting the globally optimal element (minimum or maximum) from the remaining unsorted portion.
This distinction is fundamental to understanding the algorithm's behavior:

Bubble Sort: Makes many local decisions, comparing and swapping adjacent pairs, so elements migrate toward their final positions gradually over many swaps.

Selection Sort: Makes one global decision per pass, scanning the entire unsorted portion before performing at most a single swap.
This difference in approach leads to different performance characteristics that we'll explore throughout this module.
Let's trace through Selection Sort with a concrete example. We'll sort the array [64, 25, 12, 22, 11] in ascending order.
Initial State:
[64, 25, 12, 22, 11]

| Pass | Unsorted Range | Minimum Found | Swap | Array After Pass |
|---|---|---|---|---|
| 1 | Indices 0-4 | 11 at index 4 | Swap 11 ↔ 64 | [11, 25, 12, 22, 64] |
| 2 | Indices 1-4 | 12 at index 2 | Swap 12 ↔ 25 | [11, 12, 25, 22, 64] |
| 3 | Indices 2-4 | 22 at index 3 | Swap 22 ↔ 25 | [11, 12, 22, 25, 64] |
| 4 | Indices 3-4 | 25 at index 3 | No swap needed | [11, 12, 22, 25, 64] |
Detailed Walkthrough:
Pass 1: We scan positions 0 through 4 to find the minimum. The minimum value is 11 at index 4. We swap it with the element at index 0 (64). Now index 0 contains the smallest element, and it's in its final position.
Pass 2: The sorted portion is [11]. We scan positions 1 through 4 for the next minimum. We find 12 at index 2 and swap it with position 1 (25). The sorted portion grows to [11, 12].
Pass 3: We scan positions 2 through 4. The minimum is 22 at index 3. Swap with position 2 (25). Sorted portion: [11, 12, 22].
Pass 4: We scan positions 3 through 4. The minimum is 25 at index 3—it's already in the correct position! No swap is needed (swapping an element with itself has no effect). Sorted portion: [11, 12, 22, 25].
After 4 passes, the entire array is sorted. Note that we needed n - 1 = 4 passes for an array of 5 elements. The last element (64) is automatically in its correct position once all others are placed.
We only need n-1 passes because when n-1 elements are in their correct positions, the nth element must also be correct—there's nowhere else for it to go. This is a subtle but important observation that applies to many sorting algorithms.
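If you want to verify the trace for yourself, here is a minimal sketch (a stripped-down version of the full implementation given later on this page) that prints the array after each pass; its output reproduces the table above:

```python
def selection_sort_trace(arr: list) -> None:
    """Run Selection Sort on arr, printing the array after each pass."""
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the minimum in the unsorted portion arr[i:]
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Place it at position i, extending the sorted prefix by one
        arr[i], arr[min_index] = arr[min_index], arr[i]
        print(f"Pass {i + 1}: {arr}")

selection_sort_trace([64, 25, 12, 22, 11])
# Pass 1: [11, 25, 12, 22, 64]
# Pass 2: [11, 12, 25, 22, 64]
# Pass 3: [11, 12, 22, 25, 64]
# Pass 4: [11, 12, 22, 25, 64]
```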
The most powerful mental model for Selection Sort is visualizing the array as two distinct regions:
Sorted Region (Left): Elements that have been placed in their final positions. This region grows by one element with each pass.
Unsorted Region (Right): Elements that still need to be processed. This region shrinks by one element with each pass.
The boundary between these regions moves from left to right as the algorithm progresses:
```
Initial: [ 64 | 25 | 12 | 22 | 11 ]
           ^^^^^^^^^^^^^^^^^^^^^^^^
                 All unsorted

Pass 1:  [ 11 | 25 | 12 | 22 | 64 ]
           ^^^^|^^^^^^^^^^^^^^^^^^^
         Sorted      Unsorted

Pass 2:  [ 11 | 12 | 25 | 22 | 64 ]
           ^^^^^^^^^|^^^^^^^^^^^^^^
           Sorted       Unsorted

Pass 3:  [ 11 | 12 | 22 | 25 | 64 ]
           ^^^^^^^^^^^^^^|^^^^^^^^^
             Sorted        Unsorted

Pass 4:  [ 11 | 12 | 22 | 25 | 64 ]
           ^^^^^^^^^^^^^^^^^^^|^^^^
              Sorted      Last element

Final:   [ 11 | 12 | 22 | 25 | 64 ]
           ^^^^^^^^^^^^^^^^^^^^^^^^
                 All sorted
```

The Card Game Analogy:
Imagine you're holding a hand of cards and want to sort them from lowest to highest. With Selection Sort:

1. Scan your entire hand to find the lowest card.
2. Move that card to the leftmost position of your hand.
3. Scan the remaining (unsorted) cards for the next lowest, and place it just to the right of the cards already sorted.
4. Repeat until every card has been placed.
Each card you place is in its final position—you'll never need to move it again. This is a key characteristic of Selection Sort that distinguishes it from other simple sorting algorithms.
Think of Selection Sort as "Select the minimum, Set it in place." Each pass selects one element (the minimum of the unsorted portion) and sets it in its final position. Once set, an element never moves again. This makes the algorithm's behavior very predictable.
Now let's formalize our understanding with precise pseudocode. The algorithm consists of two nested loops:
```
SELECTION-SORT(A, n)
    // A: array to sort
    // n: number of elements
    for i = 0 to n - 2:                 // Outer loop: each position to fill
        min_index = i                   // Assume current position has minimum
        for j = i + 1 to n - 1:         // Inner loop: search unsorted portion
            if A[j] < A[min_index]:     // Found a smaller element
                min_index = j           // Update minimum index
        // Swap the minimum element with position i
        if min_index ≠ i:               // Only swap if needed
            swap(A[i], A[min_index])
    return A                            // Array is now sorted
```

Key observations from the pseudocode:
Outer loop runs n-1 times: We don't need to process the last element; it's automatically correct after n-1 placements.
Inner loop starts at i+1: We only search the unsorted portion (elements at indices > i).
We track the index, not the value: Storing min_index rather than the minimum value itself allows us to perform the swap efficiently.
Conditional swap: The if min_index ≠ i check is optional but avoids unnecessary self-swaps when the minimum is already in position i.
```python
def selection_sort(arr: list) -> list:
    """
    Sort an array using the Selection Sort algorithm.

    Args:
        arr: List of comparable elements

    Returns:
        The same list, sorted in ascending order

    Time Complexity: O(n²) for all cases
    Space Complexity: O(1) - sorts in place
    """
    n = len(arr)

    # Iterate through each position to fill
    for i in range(n - 1):
        # Find the index of the minimum element in the unsorted portion
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j

        # Swap the minimum element with the current position
        # Only swap if the minimum is not already at position i
        if min_index != i:
            arr[i], arr[min_index] = arr[min_index], arr[i]

    return arr


# Example usage
if __name__ == "__main__":
    test_array = [64, 25, 12, 22, 11]
    print(f"Original: {test_array}")
    sorted_array = selection_sort(test_array)
    print(f"Sorted:   {sorted_array}")  # Output: [11, 12, 22, 25, 64]
```

Selection Sort has several important properties that define its behavior and applicability. Understanding these properties deeply is essential for knowing when Selection Sort might be appropriate and when to choose alternatives.
Consider the array [5a, 3, 5b, 2] where 5a and 5b are equal values. In pass 1, we find 2 as minimum and swap it with 5a: [2, 3, 5b, 5a]. Now 5a appears AFTER 5b, violating stability. The long-distance swaps in Selection Sort disrupt the relative order of equal elements.
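You can observe this with a short experiment. The sketch below (our own illustration, not part of the standard algorithm) tags equal values with labels and compares by value only; after sorting, 5a appears after 5b, exactly as described above:

```python
def selection_sort_by_value(pairs: list) -> list:
    """Selection Sort on (value, label) pairs, comparing values only."""
    n = len(pairs)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):
            if pairs[j][0] < pairs[min_index][0]:
                min_index = j
        # Long-distance swap: this is what breaks stability
        pairs[i], pairs[min_index] = pairs[min_index], pairs[i]
    return pairs

print(selection_sort_by_value([(5, "a"), (3, ""), (5, "b"), (2, "")]))
# [(2, ''), (3, ''), (5, 'b'), (5, 'a')]  <- 5a now follows 5b: not stable
```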
A loop invariant is a condition that is true before and after each iteration of a loop. Understanding the loop invariant is essential for proving an algorithm's correctness and deeply understanding its mechanics.
For Selection Sort, the loop invariant is:
At the start of iteration i, the subarray A[0..i-1] contains the i smallest elements of the original array, in sorted order.
Let's prove this invariant holds throughout the algorithm:

Initialization: Before the first iteration (i = 0), the subarray A[0..-1] is empty. An empty subarray trivially contains the 0 smallest elements in sorted order.

Maintenance: Suppose the invariant holds at the start of iteration i: A[0..i-1] contains the i smallest elements, in sorted order. The iteration finds the minimum of A[i..n-1], which is at least as large as every element of A[0..i-1], and swaps it into position i. Now A[0..i] contains the i+1 smallest elements, in sorted order, so the invariant holds at the start of iteration i+1.

Termination: The loop ends after iteration i = n-2. The invariant tells us A[0..n-2] contains the n-1 smallest elements in sorted order, so the remaining element A[n-1] must be the largest, and the entire array is sorted.
By proving the loop invariant—showing it holds at initialization, is maintained through each iteration, and leads to the correct result at termination—we have formally proven that Selection Sort correctly sorts any input array. This style of reasoning is fundamental to algorithm analysis.
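Alongside the formal proof, we can check the invariant empirically. The sketch below (our own instrumentation, not part of the standard algorithm) asserts after every pass that the sorted prefix holds the smallest elements in order:

```python
def selection_sort_checked(arr: list) -> list:
    """Selection Sort with an assertion verifying the loop invariant."""
    n = len(arr)
    expected = sorted(arr)  # reference answer, used only for checking
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        arr[i], arr[min_index] = arr[min_index], arr[i]
        # Invariant: after iteration i, arr[0..i] holds the i+1 smallest
        # elements of the original array, in sorted order.
        assert arr[:i + 1] == expected[:i + 1]
    return arr

print(selection_sort_checked([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```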
While the standard Selection Sort algorithm is straightforward, there are several interesting variations and extensions worth understanding:
1. Bidirectional (Cocktail) Selection Sort
Instead of only finding the minimum element in each pass, we can find both the minimum AND maximum. The minimum is placed at the beginning of the unsorted portion, and the maximum is placed at the end. This reduces the number of passes from n-1 to roughly n/2, though the constant factor improvement doesn't change the O(n²) complexity.
```python
def bidirectional_selection_sort(arr: list) -> list:
    """
    Optimized Selection Sort that finds both min and max in each pass.
    Reduces passes by half, though still O(n²) overall.
    """
    n = len(arr)
    left = 0
    right = n - 1

    while left < right:
        min_idx = left
        max_idx = left

        # Find both minimum and maximum in one scan
        for i in range(left, right + 1):
            if arr[i] < arr[min_idx]:
                min_idx = i
            if arr[i] > arr[max_idx]:
                max_idx = i

        # Place minimum at left
        arr[left], arr[min_idx] = arr[min_idx], arr[left]

        # Handle case where maximum was at left position
        if max_idx == left:
            max_idx = min_idx

        # Place maximum at right
        arr[right], arr[max_idx] = arr[max_idx], arr[right]

        left += 1
        right -= 1

    return arr
```

2. Maximum Selection Sort (Descending Order)
To sort in descending order, simply find the maximum instead of the minimum in each pass. Alternatively, run standard Selection Sort and then reverse the array (though this adds O(n) operations).
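As a sketch of the first option (our own code, mirroring the implementation above with only the comparison flipped):

```python
def selection_sort_descending(arr: list) -> list:
    """Selection Sort variant that selects the maximum each pass,
    producing descending order."""
    n = len(arr)
    for i in range(n - 1):
        max_index = i
        for j in range(i + 1, n):
            if arr[j] > arr[max_index]:  # flipped comparison: seek maximum
                max_index = j
        if max_index != i:
            arr[i], arr[max_index] = arr[max_index], arr[i]
    return arr

print(selection_sort_descending([64, 25, 12, 22, 11]))  # [64, 25, 22, 12, 11]
```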
3. Selection Sort with Sentinel
By placing a sentinel value (a value guaranteed to be smaller than any element) at the end of the array, we can eliminate the bounds check in the inner loop. This is a micro-optimization that reduces comparisons but requires knowing the value range.
4. Heap Selection (Heap Sort Precursor)
The fundamental insight of Selection Sort—repeatedly selecting the minimum—naturally leads to Heap Sort. If we organize the unsorted portion as a min-heap, we can find the minimum in O(log n) instead of O(n), reducing the overall complexity to O(n log n). This is precisely what Heap Sort does.
Understanding Selection Sort deeply prepares you for Heap Sort. The conceptual leap is realizing that the bottleneck in Selection Sort is finding the minimum (O(n) per pass). A heap data structure solves this by maintaining the minimum at the root, making extraction O(log n). This is a beautiful example of how data structures and algorithms work together.
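To make the connection concrete, here is a minimal sketch using Python's standard heapq module: heapify arranges the list as a min-heap in O(n), and each heappop extracts the current minimum in O(log n), for O(n log n) overall. (This illustrates the idea only; full Heap Sort typically works in place with a max-heap, whereas this sketch uses O(n) extra space.)

```python
import heapq

def heap_selection_sort(arr: list) -> list:
    """'Selection' where the minimum comes from a min-heap in O(log n)
    per extraction, instead of an O(n) scan per pass."""
    heap = list(arr)
    heapq.heapify(heap)  # O(n): organize the unsorted portion as a min-heap
    # n extractions, each O(log n): repeatedly "select" the minimum
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heap_selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```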
We've now thoroughly explored the Selection Sort algorithm—its intuition, mechanics, properties, and variations. Let's consolidate the key takeaways:

Core principle: Repeatedly select the minimum of the unsorted portion and swap it to the boundary between the sorted and unsorted regions.

Invariant: After i passes, the first i elements are the i smallest elements in sorted order, and they never move again.

Pass count: Exactly n - 1 passes suffice; the last element ends up in its correct position automatically.

Properties: The algorithm sorts in place with O(1) extra space, runs in O(n²) time in all cases, and is not stable.

Looking ahead: The O(n) minimum-finding scan is the bottleneck; replacing it with a heap is precisely what gives Heap Sort its O(n log n) running time.
What's Next:
Now that we understand the Selection Sort algorithm conceptually, the next page dives into the core operation: finding the minimum and moving it to position. We'll examine this operation in fine detail, understand its time complexity contribution, and see how it determines the algorithm's overall behavior.
You now have a comprehensive understanding of the Selection Sort algorithm. You can trace through its execution, understand its invariant, and appreciate its properties. Next, we'll examine the minimum-finding operation that lies at the heart of Selection Sort.