The sorted output property we've established isn't merely a theoretical curiosity—it's a practical powerhouse. Understanding that inorder traversal produces sorted output transforms how we approach a class of problems that appear constantly in real-world applications.
Need to find the median salary in a company's BST-organized payroll? The 10th percentile response time? All products priced between $50 and $100? These are all instances of problems that become elegant when viewed through the sorted output lens.
By the end of this page, you will master two powerful applications of the sorted output property: finding the k-th smallest element and performing range queries. You'll see multiple implementation approaches, understand their trade-offs, and recognize these patterns in interview problems and real-world systems.
Problem Statement:
Given a Binary Search Tree and an integer k, find the k-th smallest element in the BST.
This problem appears constantly in coding interviews and has practical applications wherever order statistics matter: medians, percentiles, and other rank-based lookups over ordered data.
Since inorder traversal visits nodes in sorted order, the k-th node visited is exactly the k-th smallest element. We don't need to traverse the entire tree—we can stop as soon as we've visited k nodes.
Why the sorted output property makes this easy:
Imagine you have a sorted array [10, 20, 30, 40, 50, 60, 70, 80]. Finding the 3rd smallest element is trivial: it's array[2] (0-indexed), which is 30.
A BST with inorder output [10, 20, 30, 40, 50, 60, 70, 80] is essentially the same sorted sequence, just stored in tree form. The 3rd element in inorder traversal is the 3rd smallest element.
The only question is: how do we access the k-th element efficiently?
The simplest approach directly leverages the sorted output property: perform an inorder traversal, count each node as it is visited, and stop the moment the count reaches k. This approach is called early termination because we stop as soon as we find the answer, rather than traversing the entire tree.
```python
class Solution:
    def kth_smallest(self, root, k: int) -> int:
        """
        Find the k-th smallest element in a BST.

        Approach: Inorder traversal with early termination.
        The k-th node visited in inorder is the k-th smallest.

        Time Complexity: O(H + k) where H is tree height
        - O(H) to reach the leftmost node
        - O(k) to traverse k nodes
        - In balanced tree: O(log n + k)
        - In skewed tree: O(n) worst case

        Space Complexity: O(H) for recursion stack
        """
        self.count = 0
        self.result = None

        def inorder(node):
            if node is None or self.result is not None:
                return

            # Traverse left subtree
            inorder(node.left)

            # Process current node (early termination check)
            if self.result is not None:
                return
            self.count += 1
            if self.count == k:
                self.result = node.val
                return

            # Traverse right subtree
            inorder(node.right)

        inorder(root)
        return self.result


class SolutionIterative:
    def kth_smallest(self, root, k: int) -> int:
        """
        Iterative approach using explicit stack.
        Often preferred for avoiding recursion stack overhead.
        """
        stack = []
        current = root
        count = 0

        while stack or current:
            # Go left as far as possible
            while current:
                stack.append(current)
                current = current.left

            # Process node
            current = stack.pop()
            count += 1
            if count == k:
                return current.val

            # Move to right subtree
            current = current.right

        # k is larger than number of nodes
        return None
```

Trace through example:
For BST with inorder [10, 20, 30, 40, 50] and k = 3:
| Node Visited | Count | Action |
|---|---|---|
| 10 | 1 | count < k, continue |
| 20 | 2 | count < k, continue |
| 30 | 3 | count == k, return 30! |
We never visit nodes 40 and 50—early termination saves work.
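To see this end to end, here is a runnable sketch of the iterative early-termination approach. The `TreeNode` class and the particular tree shape are illustrative assumptions; any BST whose inorder is [10, 20, 30, 40, 50] behaves the same.

```python
class TreeNode:
    """Assumed minimal BST node (not defined in the text above)."""
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

def kth_smallest(root, k):
    """Early-termination inorder: the k-th node visited is the answer."""
    stack, current, count = [], root, 0
    while stack or current:
        while current:                # go as far left as possible
            stack.append(current)
            current = current.left
        current = stack.pop()         # next node in sorted order
        count += 1
        if count == k:
            return current.val
        current = current.right
    return None                       # fewer than k nodes

# A BST whose inorder is [10, 20, 30, 40, 50]
root = TreeNode(30)
root.left = TreeNode(20)
root.left.left = TreeNode(10)
root.right = TreeNode(40)
root.right.right = TreeNode(50)

print(kth_smallest(root, 3))  # → 30
```

With k = 3, the traversal pops 10, then 20, then 30 and returns; nodes 40 and 50 are never popped.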
The basic approach is O(H + k). For k close to n in a balanced tree, this approaches O(n). Can we do better?
Yes—if we're willing to augment the BST.
An augmented BST stores additional information at each node. For k-th smallest queries, we store the size of each node's left subtree. This extra information enables O(H) queries regardless of k.
Augmentation trades space and insertion/deletion complexity for query efficiency. Each node stores one extra integer, and insertions/deletions must update counts along the path. This is worth it when k-th smallest queries are frequent.
How it works:

For a node N with left subtree size L:

- If k == L + 1, N itself is the k-th smallest (exactly L elements are smaller).
- If k ≤ L, the k-th smallest lies in N's left subtree.
- If k > L + 1, it lies in N's right subtree; continue there with k reduced by L + 1 (the left subtree plus N itself).
```python
class AugmentedNode:
    """
    BST node augmented with left subtree size.
    This enables O(H) k-th smallest queries.
    """
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None
        self.left_size = 0  # Size of left subtree


class AugmentedBST:
    def __init__(self):
        self.root = None

    def insert(self, val):
        """
        Insert with left_size maintenance.
        Time: O(H)
        """
        if self.root is None:
            self.root = AugmentedNode(val)
            return

        current = self.root
        while True:
            if val < current.val:
                current.left_size += 1  # One more node in left subtree
                if current.left is None:
                    current.left = AugmentedNode(val)
                    return
                current = current.left
            else:
                if current.right is None:
                    current.right = AugmentedNode(val)
                    return
                current = current.right

    def kth_smallest(self, k: int):
        """
        Find k-th smallest element in O(H) time.

        Key insight: left_size tells us how many elements
        are smaller than the current node.

        Time Complexity: O(H) - just one root-to-node path
        Space Complexity: O(1) for iterative version
        """
        current = self.root
        while current:
            left_size = current.left_size

            if k == left_size + 1:
                # Current node is the k-th smallest:
                # left_size nodes are smaller, so this is position left_size + 1
                return current.val
            elif k <= left_size:
                # k-th smallest is in the left subtree
                current = current.left
            else:
                # k-th smallest is in the right subtree
                # Subtract nodes we've "passed": left subtree + current
                k -= (left_size + 1)
                current = current.right

        return None  # k is out of range


# Example usage:
# bst = AugmentedBST()
# for val in [50, 30, 70, 20, 40, 60, 80]:
#     bst.insert(val)
#
# print(bst.kth_smallest(1))  # → 20 (minimum)
# print(bst.kth_smallest(3))  # → 40
# print(bst.kth_smallest(7))  # → 80 (maximum)
```

| Operation | Basic BST | Augmented BST |
|---|---|---|
| k-th smallest query | O(H + k) | O(H) |
| Insert | O(H) | O(H) + size updates |
| Delete | O(H) | O(H) + size updates |
| Space per node | value + 2 pointers | value + 2 pointers + 1 integer |
| Best for | Infrequent k-th queries | Frequent k-th queries |
Problem Statement:
Given a BST and a range [low, high], find all elements that fall within this range (inclusive).
Real-world applications include product price filters (e.g., all products priced between $50 and $100) and similar bounded lookups over ordered keys.
The sorted output property makes this problem elegant: we need to find a contiguous segment of the sorted sequence.
Unlike full inorder traversal, range queries can prune entire subtrees. If the current node is less than 'low', we skip the left subtree (all values even smaller). If greater than 'high', we skip the right subtree. This pruning gives us O(H + k) where k is the number of results.
```python
def range_search(root, low: int, high: int) -> list:
    """
    Find all values in the BST within the range [low, high].

    Key pruning insights:
    - If node.val < low: skip left subtree (all values < low)
    - If node.val > high: skip right subtree (all values > high)
    - This is more efficient than full traversal

    Time Complexity: O(H + k) where k is number of elements in range
    - O(H) to find the range start
    - O(k) to enumerate elements in range

    Space Complexity: O(H) for recursion stack
    """
    result = []

    def inorder_range(node):
        if node is None:
            return

        # Only go left if there might be values >= low
        if node.val > low:
            inorder_range(node.left)

        # Include current node if in range
        if low <= node.val <= high:
            result.append(node.val)

        # Only go right if there might be values <= high
        if node.val < high:
            inorder_range(node.right)

    inorder_range(root)
    return result


def range_sum(root, low: int, high: int) -> int:
    """
    Calculate the sum of all values in range [low, high].
    Same pruning logic, but accumulates sum instead of list.
    Time: O(H + k), Space: O(H)
    """
    if root is None:
        return 0

    total = 0

    # Include current value if in range
    if low <= root.val <= high:
        total += root.val

    # Check left subtree if there might be values in range
    if root.val > low:
        total += range_sum(root.left, low, high)

    # Check right subtree if there might be values in range
    if root.val < high:
        total += range_sum(root.right, low, high)

    return total


def count_in_range(root, low: int, high: int) -> int:
    """
    Count elements in range [low, high].
    Same pruning logic, but counts instead of collecting.
    Time: O(H + k), Space: O(H)
    """
    if root is None:
        return 0

    count = 0
    if low <= root.val <= high:
        count = 1
    if root.val > low:
        count += count_in_range(root.left, low, high)
    if root.val < high:
        count += count_in_range(root.right, low, high)
    return count
```

Let's trace range query [35, 65] on this BST:
```
        50
       /  \
     30    70
    /  \   / \
  20   40 60  80
  /
 10
```
Sorted sequence: [10, 20, 30, 40, 50, 60, 70, 80]
Target range: [35, 65]
Expected result: [40, 50, 60]
| Step | Node | Condition Check | Action | Result So Far |
|---|---|---|---|---|
| 1 | 50 | 50 > 35 → explore left first | Descend to 30 | [] |
| 2 | 30 | 30 > 35? No → skip left; 35 ≤ 30? No → don't include | Skip left subtree | [] |
| 3 | 30 | 30 < 65 → explore right | Descend to 40 | [] |
| 4 | 40 | left is null; 35 ≤ 40 ≤ 65 → include | Include 40 | [40] |
| 5 | 40 | 40 < 65 → explore right, but right is null | Return to 50 | [40] |
| 6 | 50 | 35 ≤ 50 ≤ 65 → include | Include 50 | [40, 50] |
| 7 | 50 | 50 < 65 → explore right | Descend to 70 | [40, 50] |
| 8 | 70 | 70 > 35 → explore left first | Descend to 60 | [40, 50] |
| 9 | 60 | 35 ≤ 60 ≤ 65 → include | Include 60 | [40, 50, 60] |
| 10 | 70 | 70 > 65 → don't include; 70 < 65? No → skip right | Skip right subtree | [40, 50, 60] |
What we pruned: the left subtree of 30 (nodes 20 and 10, both below 35) and the right subtree of 70 (node 80, above 65).
Result: [40, 50, 60] — exactly the elements in range, in sorted order!
Notice the output is already sorted because we're performing a modified inorder traversal. The sorted output property extends naturally to range queries.
Full inorder traversal would visit all 8 nodes. With pruning, we visited only 5 (50, 30, 40, 70, 60). For trees with millions of nodes and small ranges, the savings are enormous.
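The trace above can be confirmed by running the pruned range query on the example tree. This is a self-contained sketch; the `Node` class is a minimal assumed node type, and the tree is built to match the diagram.

```python
class Node:
    """Assumed minimal BST node for the example tree."""
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

def range_search(root, low, high):
    """Collect in-range values via inorder traversal with pruning."""
    result = []
    def inorder_range(node):
        if node is None:
            return
        if node.val > low:            # values >= low may exist on the left
            inorder_range(node.left)
        if low <= node.val <= high:   # in-range node lands in sorted position
            result.append(node.val)
        if node.val < high:           # values <= high may exist on the right
            inorder_range(node.right)
    inorder_range(root)
    return result

# Build the example tree from the trace
root = Node(50)
root.left = Node(30)
root.right = Node(70)
root.left.left = Node(20)
root.left.right = Node(40)
root.right.left = Node(60)
root.right.right = Node(80)
root.left.left.left = Node(10)

print(range_search(root, 35, 65))  # → [40, 50, 60]
```

Note the result comes back already sorted, without any explicit sort step.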
Two related operations that leverage the sorted output property:
Floor(x): The largest element in the BST that is ≤ x
Ceiling(x): The smallest element in the BST that is ≥ x
These operations are crucial whenever an exact match may not exist and we need the nearest stored value on either side of a query.
```python
def floor(root, x):
    """
    Find the largest value in BST that is <= x.
    Returns None if all values are > x.

    Strategy: Track the best candidate seen so far.
    - If node.val == x: exact match, return it
    - If node.val < x: node is a candidate, look for better in right
    - If node.val > x: look in left subtree

    Time: O(H), Space: O(1) iterative
    """
    result = None
    current = root

    while current:
        if current.val == x:
            # Exact match is the best possible floor
            return current.val
        elif current.val < x:
            # Current node is valid floor candidate
            result = current.val
            # Look for larger valid candidate in right subtree
            current = current.right
        else:  # current.val > x
            # Current is too large, floor must be in left subtree
            current = current.left

    return result


def ceiling(root, x):
    """
    Find the smallest value in BST that is >= x.
    Returns None if all values are < x.

    Strategy: Track the best candidate seen so far.
    - If node.val == x: exact match, return it
    - If node.val > x: node is a candidate, look for better in left
    - If node.val < x: look in right subtree

    Time: O(H), Space: O(1) iterative
    """
    result = None
    current = root

    while current:
        if current.val == x:
            # Exact match is the best possible ceiling
            return current.val
        elif current.val > x:
            # Current node is valid ceiling candidate
            result = current.val
            # Look for smaller valid candidate in left subtree
            current = current.left
        else:  # current.val < x
            # Current is too small, ceiling must be in right subtree
            current = current.right

    return result


# Example usage:
# BST with values [10, 20, 30, 40, 50, 60, 70, 80]
# floor(root, 45)   → 40   (largest value ≤ 45)
# floor(root, 50)   → 50   (exact match)
# floor(root, 5)    → None (no value ≤ 5)
# ceiling(root, 45) → 50   (smallest value ≥ 45)
# ceiling(root, 50) → 50   (exact match)
# ceiling(root, 85) → None (no value ≥ 85)
```

Connection to sorted sequences:
In a sorted array, floor and ceiling can be found using binary search:
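As a concrete comparison, here is a sketch of array floor/ceiling using Python's standard `bisect` module; the function names `array_floor` and `array_ceiling` are illustrative, not standard.

```python
from bisect import bisect_left, bisect_right

def array_floor(arr, x):
    """Largest value <= x in sorted arr, or None."""
    i = bisect_right(arr, x)        # first index with arr[i] > x
    return arr[i - 1] if i > 0 else None

def array_ceiling(arr, x):
    """Smallest value >= x in sorted arr, or None."""
    i = bisect_left(arr, x)         # first index with arr[i] >= x
    return arr[i] if i < len(arr) else None

arr = [10, 20, 30, 40, 50, 60, 70, 80]
print(array_floor(arr, 45))    # → 40
print(array_ceiling(arr, 45))  # → 50
print(array_floor(arr, 5))     # → None
print(array_ceiling(arr, 85))  # → None
```

The BST versions answer the same questions but on a structure that also supports O(H) inserts and deletes.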
BST floor/ceiling operations are the tree analog of binary search. They follow the same O(log n) efficiency (for balanced trees) but work on a dynamic structure that supports efficient insertions and deletions.
These operations appear frequently in coding interviews. Let's look at common problem patterns:
When you see a BST problem asking about ordering, ranks, ranges, or sorted properties, immediately think: 'This is really about the sorted inorder sequence.' Frame the problem in terms of that sequence, and the solution often becomes clearer.
Example: Two Sum with BST
Given a BST and a target sum, find if there exist two elements that sum to the target.
Approach: run an inorder traversal to produce the sorted sequence, then apply the classic two-pointer technique to that sequence, moving the pointers inward depending on whether the current pair sums below or above the target.
Time: O(n) for traversal + O(n) for two-pointer = O(n)
Space: O(n) for storing the sequence (can be optimized with iterators)
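A sketch of this approach, with a minimal assumed `Node` class and an illustrative `find_target` function name:

```python
class Node:
    """Assumed minimal BST node."""
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

def find_target(root, target):
    """Return True if two distinct BST nodes sum to target."""
    # Step 1: inorder traversal yields the sorted sequence
    seq = []
    def inorder(node):
        if node:
            inorder(node.left)
            seq.append(node.val)
            inorder(node.right)
    inorder(root)

    # Step 2: classic two-pointer scan on the sorted sequence
    lo, hi = 0, len(seq) - 1
    while lo < hi:
        s = seq[lo] + seq[hi]
        if s == target:
            return True
        elif s < target:
            lo += 1   # need a larger sum: advance the low pointer
        else:
            hi -= 1   # need a smaller sum: retreat the high pointer
    return False

root = Node(50)
root.left = Node(30)
root.right = Node(70)
print(find_target(root, 100))  # → True (30 + 70)
print(find_target(root, 55))   # → False
```

The two-pointer scan only works because the inorder sequence is sorted, which is exactly the sorted output property at work.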
We've explored how the sorted output property enables powerful operations. Let's consolidate the key insights:
What's next:
We've seen how to query the sorted sequence. The final page explores how to iterate through a BST in sorted order without materializing the entire sequence—enabling memory-efficient sorted traversal and lazy evaluation patterns.
You now understand how to leverage the sorted output property for practical operations: k-th smallest queries, range searches, floor/ceiling operations, and common interview patterns. These techniques transform BSTs from simple search structures into powerful ordered data stores.