In the previous page, we established what level-order traversal is and when to use it. Now we confront the crucial how: What mechanism enables us to visit nodes level by level?
The answer lies in a data structure we've already studied: the queue. The queue's FIFO (First-In-First-Out) behavior is not merely convenient for level-order traversal—it's precisely what makes the algorithm work. Understanding this connection deeply transforms the algorithm from a recipe to be memorized into an inevitable consequence of how queues operate.
By the end of this page, you will understand exactly why queues enable level-order traversal, be able to trace the algorithm step by step, implement the algorithm in code, and recognize the variations and patterns that build on this foundational technique.
To understand why queues are essential, let's first consider what goes wrong if we use other data structures.
The Problem: Maintaining Level Order
We want to visit nodes such that all nodes at level k are visited before any node at level k+1 (and, within each level, from left to right).

The challenge is that when we're at a node, we discover its children, but we shouldn't visit them yet! We should first visit all remaining nodes at the current level. How do we "save" children for later while ensuring they're processed in the right order?
```
Tree:
        1
       / \
      2   3
     / \
    4   5

Using a STACK (LIFO - Last-In-First-Out):

Step 1: Push 1                                   Stack: [1]
Step 2: Pop 1, visit it, push children 2, 3      Stack: [2, 3]  (3 on top)
Step 3: Pop 3 (top), visit it, push children (none)
                                                 Stack: [2]
        Output so far: 1, 3   ← Already wrong!
Step 4: Pop 2, visit it, push children 4, 5      Stack: [4, 5]
Step 5: Continue...

Final output: 1, 3, 2, 5, 4   ← This is preorder, NOT level-order!

PROBLEM: The stack's LIFO property causes us to visit the most recently
added node (3) before completing the current level. We visited 3 before 2,
violating level order.
```

```
Same Tree:
        1
       / \
      2   3
     / \
    4   5

Using a QUEUE (FIFO - First-In-First-Out):

Step 1: Enqueue 1                                Queue: [1]  (front → back)
Step 2: Dequeue 1 (front), visit it, enqueue children 2, 3
                                                 Queue: [2, 3]  (2 at front, 3 at back)
Step 3: Dequeue 2 (front), visit it, enqueue children 4, 5
                                                 Queue: [3, 4, 5]
        Output so far: 1, 2       ✓ Correct!
Step 4: Dequeue 3 (front), visit it, enqueue children (none)
                                                 Queue: [4, 5]
        Output so far: 1, 2, 3    ✓ Level 1 complete!
Step 5: Dequeue 4, visit it                      Queue: [5]
Step 6: Dequeue 5, visit it                      Queue: []

Final output: 1, 2, 3, 4, 5   ✓ Perfect level-order!

SUCCESS: The queue's FIFO property ensures we visit 2 (added first) before 3,
completing level 1 before moving to level 2.
```

The FIFO Guarantee:
The queue's FIFO property provides exactly what we need:
Parents before children: When we enqueue a node's children, they go to the back of the queue. Any nodes already in the queue (which are at the same level or earlier) are processed first.
Left before right: By enqueuing the left child before the right child, we ensure left-to-right ordering within each level.
Level completion guarantee: All nodes at level k are enqueued before we start processing level k+1. Since we dequeue from the front, all level k nodes are processed before any level k+1 node.
This isn't coincidence—it's structural necessity. The queue's ordering property directly creates the level-order behavior.
Think of the queue as a "waiting room" organized by arrival time. Parents enter the room first and get their ticket. Their children arrive later and get tickets behind all current waiters. Since we serve in ticket order, all parents are served before any children. This is precisely level-order traversal.
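To make the contrast concrete, here is a minimal TypeScript sketch (assuming the same five-node example tree as in the traces above) in which the only difference between the two runs is which end of the container we remove from. The `traverse` and `node` helpers are illustrative names written for this sketch, not part of the implementations shown later on this page.

```typescript
// Minimal node shape matching the TreeNode class used later on this page.
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

// Small helper to build the five-node example tree from the diagram above.
const node = (val: number, left: TreeNode | null = null, right: TreeNode | null = null): TreeNode =>
  ({ val, left, right });

// Same traversal skeleton, parameterized only by which end we remove from.
// (shift() is fine for this tiny illustration; its O(n) cost is discussed later.)
function traverse(root: TreeNode | null, useQueue: boolean): number[] {
  const order: number[] = [];
  const pending: TreeNode[] = root ? [root] : [];
  while (pending.length > 0) {
    // Queue: remove from the front (FIFO). Stack: remove from the back (LIFO).
    const current = useQueue ? pending.shift()! : pending.pop()!;
    order.push(current.val);
    if (current.left) pending.push(current.left);
    if (current.right) pending.push(current.right);
  }
  return order;
}

const root = node(1, node(2, node(4), node(5)), node(3));
console.log(traverse(root, true));  // [1, 2, 3, 4, 5]  -- level order (queue)
console.log(traverse(root, false)); // [1, 3, 2, 5, 4]  -- not level order (stack)
```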
Let's formalize the level-order traversal algorithm. The core process is elegant in its simplicity—a direct translation of how queues work.
Algorithm: Level-Order Traversal
```
LEVEL-ORDER-TRAVERSAL(root):
    1. If root is null, return empty result
    2. Create an empty queue Q
    3. Enqueue root into Q
    4. While Q is not empty:
        a. Dequeue the front node, call it 'current'
        b. Process/visit 'current' (add to result, print, etc.)
        c. If current has a left child, enqueue left child
        d. If current has a right child, enqueue right child
    5. Return result

Time Complexity:  O(n) - each node is enqueued and dequeued exactly once
Space Complexity: O(w) where w = maximum width of the tree
                  In the worst case (a complete binary tree's last level): O(n/2) = O(n)
```

Key Observations:
Initialization: We start by putting only the root in the queue. This bootstraps the process.
Loop invariant: At the start of each iteration, the queue contains all unexplored nodes, ordered by their level (lowest level at front) and then left-to-right within each level.
Termination: Each node is enqueued exactly once (when its parent is processed). Since there are n nodes, the loop runs n times.
Null handling: We only enqueue non-null children. This prevents null pointer issues and unnecessary loop iterations.
Processing flexibility: "Process/visit" can mean anything—print, add to list, transform, check a condition, etc.
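As a sketch of observation 5 (processing flexibility), the traversal below takes the "visit" action as a callback, so the caller decides what processing means and may return true to stop early. The name `forEachLevelOrder` is hypothetical, not an API used elsewhere on this page.

```typescript
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

// Generic level-order walk: the caller decides what "visit" means.
// `visit` may return true to stop early (useful for searches).
function forEachLevelOrder(
  root: TreeNode | null,
  visit: (node: TreeNode) => boolean | void
): void {
  if (root === null) return;         // observation 1: handle the empty tree
  const queue: TreeNode[] = [root];  // bootstrap the queue with the root
  while (queue.length > 0) {
    const current = queue.shift()!;  // dequeue the front node
    if (visit(current) === true) return;
    if (current.left) queue.push(current.left);   // observation 4: only non-null children
    if (current.right) queue.push(current.right);
  }
}

// "Process" can mean anything: collect values, print, count, check a condition...
const values: number[] = [];
const root: TreeNode = {
  val: 1,
  left: { val: 2, left: null, right: null },
  right: { val: 3, left: null, right: null },
};
forEachLevelOrder(root, (n) => { values.push(n.val); });
console.log(values); // [1, 2, 3]
```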
```
Tree:
             15
            /  \
          10    20
         /  \     \
        5    12    25
            /  \
          11    14

Trace (queue state after each step):

Initial:     Q = [15]              Result = []
Dequeue 15:  Q = [10, 20]          Result = [15]
Dequeue 10:  Q = [20, 5, 12]       Result = [15, 10]
Dequeue 20:  Q = [5, 12, 25]       Result = [15, 10, 20]
Dequeue 5:   Q = [12, 25]          Result = [15, 10, 20, 5]
Dequeue 12:  Q = [25, 11, 14]      Result = [15, 10, 20, 5, 12]
Dequeue 25:  Q = [11, 14]          Result = [15, 10, 20, 5, 12, 25]
Dequeue 11:  Q = [14]              Result = [15, 10, 20, 5, 12, 25, 11]
Dequeue 14:  Q = []                Result = [15, 10, 20, 5, 12, 25, 11, 14]

Final Result: [15, 10, 20, 5, 12, 25, 11, 14]

Verification by level:
Level 0: 15          ✓
Level 1: 10, 20      ✓
Level 2: 5, 12, 25   ✓
Level 3: 11, 14      ✓
```

Watch for these pitfalls: (1) forgetting to check whether root is null before starting; (2) enqueueing null children (causes null-dereference errors or wasted iterations); (3) checking whether the queue is empty after dequeuing instead of before; (4) using the wrong data structure (an array without proper dequeue behavior, or a stack by accident).
Let's translate the algorithm into working code. We'll examine implementations in multiple languages to see how the core pattern remains consistent across different syntax.
```typescript
// Definition for a binary tree node
class TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;

  constructor(val: number = 0, left: TreeNode | null = null, right: TreeNode | null = null) {
    this.val = val;
    this.left = left;
    this.right = right;
  }
}

/**
 * Performs level-order traversal of a binary tree.
 * Returns an array of node values in level order.
 *
 * @param root - The root of the binary tree
 * @returns Array of values in level order
 *
 * Time Complexity: O(n) where n is the number of nodes
 * Space Complexity: O(w) where w is the maximum width of the tree
 */
function levelOrderTraversal(root: TreeNode | null): number[] {
  // Handle empty tree
  if (root === null) {
    return [];
  }

  const result: number[] = [];
  const queue: TreeNode[] = [];

  // Initialize queue with root
  queue.push(root);

  // Process nodes until queue is empty
  while (queue.length > 0) {
    // Dequeue front node
    const current = queue.shift()!;

    // Process the current node
    result.push(current.val);

    // Enqueue children (left before right for left-to-right order)
    if (current.left !== null) {
      queue.push(current.left);
    }
    if (current.right !== null) {
      queue.push(current.right);
    }
  }

  return result;
}

// Example usage:
// Tree:      1
//           / \
//          2   3
//         / \
//        4   5
//
// const root = new TreeNode(1,
//   new TreeNode(2, new TreeNode(4), new TreeNode(5)),
//   new TreeNode(3)
// );
// console.log(levelOrderTraversal(root)); // Output: [1, 2, 3, 4, 5]
```
```python
from collections import deque
from typing import Optional, List


class TreeNode:
    """Definition for a binary tree node."""
    def __init__(self, val: int = 0,
                 left: Optional['TreeNode'] = None,
                 right: Optional['TreeNode'] = None):
        self.val = val
        self.left = left
        self.right = right


def level_order_traversal(root: Optional[TreeNode]) -> List[int]:
    """
    Performs level-order traversal of a binary tree.

    Args:
        root: The root of the binary tree

    Returns:
        List of node values in level order

    Time Complexity: O(n) where n is the number of nodes
    Space Complexity: O(w) where w is the maximum width
    """
    if root is None:
        return []

    result = []
    # Using deque for O(1) popleft operation
    queue = deque([root])

    while queue:
        # Dequeue from front (left side)
        current = queue.popleft()

        # Process the current node
        result.append(current.val)

        # Enqueue children
        if current.left:
            queue.append(current.left)
        if current.right:
            queue.append(current.right)

    return result


# Example usage:
# Tree:      1
#           / \
#          2   3
#         / \
#        4   5
#
# root = TreeNode(1,
#     TreeNode(2, TreeNode(4), TreeNode(5)),
#     TreeNode(3)
# )
# print(level_order_traversal(root))  # Output: [1, 2, 3, 4, 5]
```

In JavaScript/TypeScript, array.shift() is O(n) because it re-indexes all remaining elements. For performance-critical applications with large trees, consider an index-based approach (track a front index instead of shifting) or a proper Queue class. Python's collections.deque provides O(1) popleft(), making it the preferred choice.
```java
import java.util.*;

class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;

    TreeNode(int val) {
        this.val = val;
    }

    TreeNode(int val, TreeNode left, TreeNode right) {
        this.val = val;
        this.left = left;
        this.right = right;
    }
}

public class LevelOrderTraversal {
    /**
     * Performs level-order traversal of a binary tree.
     *
     * @param root The root of the binary tree
     * @return List of node values in level order
     *
     * Time Complexity: O(n)
     * Space Complexity: O(w) where w is maximum width
     */
    public List<Integer> levelOrder(TreeNode root) {
        List<Integer> result = new ArrayList<>();
        if (root == null) {
            return result;
        }

        // LinkedList implements the Queue interface with O(1) operations
        Queue<TreeNode> queue = new LinkedList<>();
        queue.offer(root);

        while (!queue.isEmpty()) {
            // Dequeue front node
            TreeNode current = queue.poll();

            // Process current node
            result.add(current.val);

            // Enqueue children
            if (current.left != null) {
                queue.offer(current.left);
            }
            if (current.right != null) {
                queue.offer(current.right);
            }
        }

        return result;
    }
}
```

Implementation Patterns Across Languages:
Notice how the core structure is identical regardless of language: guard against a null root, seed the queue with the root, then loop while the queue is non-empty, dequeuing the front node, processing it, and enqueuing its non-null left and right children (left first).
This pattern is so fundamental that recognizing it becomes automatic with practice. Any deviation from this pattern likely indicates a bug or a variant algorithm.
Let's examine more precisely what the queue contains at various points and why this guarantees level-order behavior.
The Queue Invariant:
At any point during the algorithm, the queue contains nodes from at most two adjacent levels: the not-yet-processed remainder of the current level at the front, followed by the already-discovered children on the next level at the back, with each group in left-to-right order.
This invariant is maintained by our enqueue order: we always add left child before right child, and we process parents before children.
```
Tree:
               1
             /   \
            2     3
           / \   / \
          4   5 6   7
         / \
        8   9

At different points, the queue holds:

After processing level 0:
    Queue: [2, 3]
           ↑ Contains exactly level 1 (all of it, left to right)

After processing level 1:
    During: [3, 4, 5] → [4, 5, 6, 7]
    After:  Queue: [4, 5, 6, 7]
           ↑ Contains exactly level 2 (all of it, left to right)

After processing level 2:
    During: [5, 6, 7, 8, 9] → [6, 7, 8, 9] → [7, 8, 9] → [8, 9]
    After:  Queue: [8, 9]
           ↑ Contains exactly level 3 (all of it, left to right)

INSIGHT: At the moment we FINISH processing level k, the queue contains
EXACTLY all nodes at level k+1. This is not coincidence—it's structural
necessity:
- Nodes at level k+1 are children of level k nodes
- We enqueue children as we process parents
- By the time we finish level k, all level k+1 nodes have been enqueued
```

Why Level-Order is Guaranteed:
The proof follows from the queue invariant:
Base case: Initially, only level 0 (root) is in the queue.
Inductive step: Assume when we start processing level k, the queue contains exactly all level k nodes in left-to-right order.
As we dequeue each level k node, we enqueue its level k+1 children. After all level k nodes are processed, we've enqueued all level k+1 nodes in left-to-right order.

Conclusion: The queue transitions from containing level k to containing level k+1, maintaining the ordering invariant.
The Separation Property:
A crucial insight is that level boundaries are implicitly maintained. When we dequeue the last node of level k, the queue contains only level k+1 nodes. When we dequeue the first node of level k+1, all level k nodes are already processed. The queue naturally separates levels without explicit markers.
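The invariant can also be checked mechanically. The sketch below is an illustrative TypeScript helper (not part of the main implementation): it carries each node's level alongside it in the queue and verifies, before every dequeue, that the levels currently in the queue are non-decreasing and span at most two consecutive values.

```typescript
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

// Traverse while tracking each node's level, checking the queue invariant:
// at every step the levels in the queue are non-decreasing and span at most
// two consecutive values (the current level and the next one).
function checkQueueInvariant(root: TreeNode | null): boolean {
  if (root === null) return true;
  const queue: Array<{ node: TreeNode; level: number }> = [{ node: root, level: 0 }];
  while (queue.length > 0) {
    const levels = queue.map((e) => e.level);
    const sorted = levels.every((l, i) => i === 0 || l >= levels[i - 1]);
    const span = levels[levels.length - 1] - levels[0];
    if (!sorted || span > 1) return false; // invariant violated (never happens)
    const { node, level } = queue.shift()!;
    if (node.left) queue.push({ node: node.left, level: level + 1 });
    if (node.right) queue.push({ node: node.right, level: level + 1 });
  }
  return true;
}

// Example: the invariant holds on the 9-node tree from the diagram above.
const leaf = (val: number): TreeNode => ({ val, left: null, right: null });
const tree: TreeNode = {
  val: 1,
  left: { val: 2, left: { val: 4, left: leaf(8), right: leaf(9) }, right: leaf(5) },
  right: { val: 3, left: leaf(6), right: leaf(7) },
};
console.log(checkQueueInvariant(tree)); // true
```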
Imagine the algorithm as a wave moving down the tree. The queue is the wavefront—it contains exactly those nodes the wave is about to reach. As the wave moves past a node (dequeue), it pushes forward to that node's children (enqueue). The wave always moves one complete level at a time.
Understanding the space complexity of level-order traversal helps us anticipate performance characteristics and make informed decisions.
Space Complexity: O(w) where w = maximum width
The queue's size varies during execution. The key insight is that the queue's size is proportional to the width of the current and adjacent levels being processed.
```
Complete Binary Tree of Height 4:

              1                    Level 0: 1 node
            /   \
           2     3                 Level 1: 2 nodes
          / \   / \
         4   5 6   7               Level 2: 4 nodes
        /\   /\   /\   /\
       8  9  ...  14  15           Level 3: 8 nodes

Queue size during traversal:

Start:          [1]                                Size: 1
After level 0:  [2, 3]                             Size: 2
After level 1:  [4, 5, 6, 7]                       Size: 4
After level 2:  [8, 9, 10, 11, 12, 13, 14, 15]     Size: 8  (MAXIMUM)
After level 3:  []                                 Size: 0

Maximum queue size = 8 = width of the widest level

In a complete binary tree with n nodes:
- The last level has approximately n/2 nodes
- Maximum queue size ≈ n/2 = O(n)

But this is the worst case. In practice:
- If the tree is skewed (like a linked list), the maximum width is 1
- Space complexity depends on tree shape
```

| Tree Shape | BFS Space (Queue) | DFS Space (Stack) | Winner |
|---|---|---|---|
| Complete/Balanced | O(n/2) = O(n) | O(log n) | DFS |
| Skewed (like linked list) | O(1) | O(n) | BFS |
| Wide but shallow | O(width) | O(height) | DFS |
| Narrow but deep | O(1) | O(depth) | BFS |
Key Memory Insights:
BFS uses width-proportional space: The queue holds at most one complete level plus partial adjacent levels. Maximum size ≤ nodes at widest level.
DFS uses height-proportional space: The stack (implicit or explicit) holds at most one path from root to current node. Maximum size = tree height.
Trade-off depends on tree shape: as the table above shows, BFS is the lighter option for narrow, deep trees, while DFS is lighter for balanced or wide, shallow trees.
In typical scenarios: Most practical trees aren't extremely skewed, so DFS often has a space advantage. But both are acceptable for most use cases.
For very large trees (millions of nodes), the O(n) worst-case memory for BFS on balanced trees can be significant. A balanced tree with 1 million nodes has ~500,000 nodes at the last level, requiring storage for 500,000 node references. Consider this when choosing between BFS and DFS for memory-constrained environments.
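To see the width-versus-height trade-off on a concrete tree, here is a small measurement sketch; `peakQueueSize`, `height`, and `completeTree` are illustrative helpers written for this example, not functions used elsewhere on this page.

```typescript
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

// Measure the BFS queue's peak size (≈ maximum width) to compare against
// the tree's height, which bounds the space a DFS stack would need.
function peakQueueSize(root: TreeNode | null): number {
  if (root === null) return 0;
  const queue: TreeNode[] = [root];
  let peak = 1;
  while (queue.length > 0) {
    const current = queue.shift()!;
    if (current.left) queue.push(current.left);
    if (current.right) queue.push(current.right);
    peak = Math.max(peak, queue.length);
  }
  return peak;
}

// Height measured in levels (number of nodes on the longest root-to-leaf path).
function height(root: TreeNode | null): number {
  return root === null ? 0 : 1 + Math.max(height(root.left), height(root.right));
}

// Build a complete binary tree with `levels` levels (1, 2, 4, ... nodes per level).
function completeTree(levels: number, val = 1): TreeNode | null {
  if (levels === 0) return null;
  return { val, left: completeTree(levels - 1, 2 * val), right: completeTree(levels - 1, 2 * val + 1) };
}

const t = completeTree(10);               // 2^10 - 1 = 1023 nodes
console.log(peakQueueSize(t), height(t)); // 512 10 -- the queue peaks at the widest level
```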
The time complexity of level-order traversal is straightforward, but understanding why helps us analyze variations and extensions.
Time Complexity: O(n)
Every node is processed exactly once. Let's break down the work:
```
For each node, we perform:

1. ENQUEUE operation: O(1)
   - Each node is enqueued exactly once
   - Total: n × O(1) = O(n)

2. DEQUEUE operation: O(1)*
   - Each node is dequeued exactly once
   - Total: n × O(1) = O(n)
   *Note: In JavaScript, shift() is O(n), not O(1)! Use a proper queue
    implementation for O(1).

3. PROCESS operation: O(1) (for simple operations like adding to a list)
   - Each node's value is processed once
   - Total: n × O(1) = O(n)

4. NULL CHECKS for children: O(1)
   - Each node's children are checked twice (left, right)
   - Total: n × 2 × O(1) = O(n)

Total: O(n) + O(n) + O(n) + O(n) = O(n)

The algorithm visits each node exactly once and does O(1) work per node,
giving O(n) overall.
```

Comparing to DFS Time Complexity:
Both BFS and DFS have O(n) time complexity for complete traversal—every node must be visited exactly once regardless of the order. The difference lies in:
Constant factors: BFS uses queue operations; DFS uses stack operations (or recursion). In practice, these are similarly fast.
Early termination: For search problems (not full traversal), BFS can find the closest node satisfying a condition faster because it examines nodes by distance from root.
Cache locality: DFS often has better cache locality because it processes nodes in the same subtree consecutively. BFS jumps across subtrees, which can hurt cache performance.
When Time Matters:
For full traversals, the difference is negligible. For searches where you want the "nearest" match, BFS can be faster in practice because it doesn't waste time going deep into distant branches.
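A sketch of such an early-terminating search is shown below: it returns the shallowest (and leftmost) node whose value satisfies a predicate, stopping as soon as a match is dequeued. `findNearest` is a hypothetical helper name, not an API defined elsewhere on this page.

```typescript
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

// BFS search with early termination: returns the first (shallowest, leftmost)
// node whose value satisfies `matches`, without exploring deeper levels.
function findNearest(
  root: TreeNode | null,
  matches: (val: number) => boolean
): TreeNode | null {
  if (root === null) return null;
  const queue: TreeNode[] = [root];
  while (queue.length > 0) {
    const current = queue.shift()!;
    if (matches(current.val)) return current; // stop as soon as a match is found
    if (current.left) queue.push(current.left);
    if (current.right) queue.push(current.right);
  }
  return null; // no node matches
}

// Example: the shallowest even value in this tree is 2 (level 1), even though
// a DFS down the left subtree would reach 4 (level 2) first.
const leaf = (val: number): TreeNode => ({ val, left: null, right: null });
const root: TreeNode = {
  val: 1,
  left: { val: 3, left: leaf(4), right: leaf(5) },
  right: leaf(2),
};
console.log(findNearest(root, (v) => v % 2 === 0)?.val); // 2
```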
Our O(n) analysis assumes O(1) processing per node. If processing each node takes f(node) time, the total becomes O(Σf(node)) = O(n × average(f)). For example, if processing involves comparing node values to a long string, overall complexity increases accordingly. The traversal itself is always O(n); what you do at each node determines the total.
The efficiency of level-order traversal depends on using a proper queue implementation. Let's examine the options across different programming environments.
The Key Requirement:
We need O(1) enqueue (add to back) and O(1) dequeue (remove from front). Without both being O(1), our O(n) algorithm degrades.
| Language | Good Option | Enqueue | Dequeue | Notes |
|---|---|---|---|---|
| Python | collections.deque | O(1) | O(1) | Optimized double-ended queue |
| Java | LinkedList (as Queue) | O(1) | O(1) | Use offer() and poll() |
| Java | ArrayDeque | O(1)* | O(1)* | *Amortized; faster than LinkedList |
| JavaScript/TS | Array with index tracking | O(1) | O(1) | Don't use shift()! |
| C++ | std::queue<T> | O(1) | O(1) | Adapter over deque by default |
| Go | Custom slice with index | O(1) | O(1) | No built-in queue |
```typescript
/**
 * Efficient Queue implementation using an array with index tracking.
 * Avoids O(n) shift() by tracking the front index.
 */
class Queue<T> {
  private items: T[] = [];
  private front: number = 0;

  enqueue(item: T): void {
    this.items.push(item); // O(1) amortized
  }

  dequeue(): T | undefined {
    if (this.isEmpty()) {
      return undefined;
    }
    const item = this.items[this.front];
    this.front++;

    // Periodically compact to prevent memory bloat
    if (this.front > 1000 && this.front > this.size()) {
      this.items = this.items.slice(this.front);
      this.front = 0;
    }

    return item; // O(1)
  }

  peek(): T | undefined {
    return this.items[this.front];
  }

  isEmpty(): boolean {
    return this.front >= this.items.length;
  }

  size(): number {
    return this.items.length - this.front;
  }
}

// Usage in level-order traversal:
function levelOrderEfficient(root: TreeNode | null): number[] {
  if (root === null) return [];

  const result: number[] = [];
  const queue = new Queue<TreeNode>();
  queue.enqueue(root);

  while (!queue.isEmpty()) {
    const current = queue.dequeue()!;
    result.push(current.val);

    if (current.left) queue.enqueue(current.left);
    if (current.right) queue.enqueue(current.right);
  }

  return result;
}
```

Why This Matters:
For small trees (hundreds of nodes), the difference between O(1) and O(n) dequeue is negligible. But consider a tree with 1 million nodes: each shift() re-indexes every element still in the queue, and with up to roughly 500,000 nodes in the queue at once, the total dequeue cost grows quadratically. The naive approach turns a sub-second operation into minutes of computation.
For trees with fewer than ~10,000 nodes, Array.shift() is usually fast enough that the difference is imperceptible. For interview problems with small test cases, simplicity might outweigh performance. But for production code on large data, always use proper O(1) queue operations.
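If you want to see the effect yourself, the following micro-benchmark sketch compares the two dequeue strategies on plain numbers (no tree involved), isolating the cost of shift() versus index tracking. Actual timings vary by JavaScript engine and input size, so treat the output as a rough comparison rather than a precise measurement.

```typescript
// Micro-benchmark sketch: drain n numbers with Array.shift() versus with a
// tracked front index. This isolates the cost of the dequeue strategy itself.
function drainWithShift(n: number): number {
  const queue: number[] = Array.from({ length: n }, (_, i) => i);
  let sum = 0;
  while (queue.length > 0) sum += queue.shift()!; // O(n) per shift: re-indexes the array
  return sum;
}

function drainWithIndex(n: number): number {
  const queue: number[] = Array.from({ length: n }, (_, i) => i);
  let front = 0;
  let sum = 0;
  while (front < queue.length) sum += queue[front++]; // O(1) per dequeue
  return sum;
}

for (const n of [10_000, 100_000]) {
  let start = Date.now();
  drainWithShift(n);
  const shiftMs = Date.now() - start;

  start = Date.now();
  drainWithIndex(n);
  const indexMs = Date.now() - start;

  console.log(`n=${n}: shift ${shiftMs} ms, index ${indexMs} ms`);
}
```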
Let's consolidate the key insights from our deep dive into using queues for level-order traversal: the queue's FIFO discipline is what creates level order, the traversal runs in O(n) time and O(w) space, and those bounds hold only when both enqueue and dequeue are O(1).
What's Next:
We now know how to visit all nodes in level order using a queue. But many practical problems require more than just visiting—they need to know which level each node belongs to or process entire levels as units.
In the next page, we'll explore how to enhance our basic algorithm to process nodes level by level, maintaining explicit level boundaries for problems that require level-aware logic.
You now understand the deep connection between queues and level-order traversal. The FIFO property isn't just convenient—it's what makes the algorithm possible. You can implement efficient level-order traversal and understand why each component of the algorithm is necessary.