Once a choice is made, what happens next? The algorithm must explore the consequences of that choice — descending deeper into the solution space to see where this path leads. This is the Explore phase: the recursive call that carries your decision forward.
Exploration is where backtracking reveals its true nature as a depth-first traversal of an implicit tree. Each recursive call represents a node in this tree, and the sequence of calls from root to leaf represents one complete path through the space of all possible solutions.
In this page, we'll dissect the Explore phase thoroughly. You'll understand how recursive calls propagate state, how to design correct base cases, and how the call stack serves as the algorithm's memory — enabling the "backtracking" that gives the technique its name.
By the end of this page, you will understand:
• How recursive calls propagate decisions through the solution tree
• The role of the call stack as implicit state storage
• How to design correct and complete base cases
• The connection between recursion depth and decision progress
• How exploration interacts with Choose and Unchoose
This forms the second pillar of the universal backtracking template.
When you make a recursive call in backtracking, you're essentially asking: "Given the choices I've made so far, what happens if I continue from here?"
The recursive call carries with it all the information needed to continue the search:
```python
def backtracking_template(state):
    # BASE CASE: Check if exploration is complete
    if is_complete(state):
        process_solution(state)
        return

    for choice in get_candidates(state):
        # === CHOOSE ===
        make_choice(state, choice)

        # === EXPLORE ===
        # The recursive call: continue exploration with the choice made
        backtracking_template(state)
        # When this returns, we know all paths from this choice
        # have been examined

        # === UNCHOOSE ===
        undo_choice(state, choice)
        # Now we're ready to try the next candidate
```

Each recursive call initiates a complete traversal of a sub-tree in the solution space. When you make the recursive call after a choice, you're saying: "Explore every possible completion of the current partial solution." The call returns only after all such completions have been examined.
This is a depth-first traversal:
Backtracking is depth-first search on an implicit tree — a tree that exists conceptually but isn't constructed in memory. The nodes are states, the edges are choices, and the tree structure emerges from the recursion itself. We never build the whole tree; we generate nodes on-the-fly as we explore.
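To see that the tree really is implicit, notice that we can "traverse" it without ever allocating a node. The sketch below (the count_tree_nodes helper and the include/exclude subset tree are illustrative assumptions, not examples from this lesson) counts the nodes of the implicit tree purely through recursion:

```python
def count_tree_nodes(n):
    """Count the nodes of the implicit subset tree for n elements.

    No tree is ever built: each recursive call IS a node, generated
    on the fly, and the edges are the include/exclude choices.
    """
    def explore(index):
        if index == n:  # Leaf: all n include/exclude decisions made
            return 1
        # Two choices at every internal node: include or exclude element `index`
        return 1 + explore(index + 1) + explore(index + 1)

    return explore(0)

print(count_tree_nodes(3))  # Full binary tree of depth 3: 2**4 - 1 = 15 nodes
```

Each call corresponds to exactly one node, so the count equals the size of the tree the recursion traces out: 2**(n + 1) - 1 for the full binary include/exclude tree.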
One of the most elegant aspects of recursive backtracking is how the call stack serves as automatic memory. Each stack frame preserves:

• The frame's local variables (the current path, loop index, and tracking structures)
• The position in the candidate loop, so iteration resumes at the next candidate
• The exact point in the code where execution continues after the recursive call returns
This means you don't need explicit data structures to remember where you are in the search — the programming language's runtime handles it for you.
```python
def permutations_with_trace(nums):
    """Generate permutations with call stack visualization."""
    result = []
    used = [False] * len(nums)

    def backtrack(path, depth):
        indent = "  " * depth  # Visualize depth
        print(f"{indent}Enter: path={path}")

        if len(path) == len(nums):
            print(f"{indent}>>> Found solution: {path}")
            result.append(path[:])
            return

        for i in range(len(nums)):
            if used[i]:
                continue

            # Choose
            path.append(nums[i])
            used[i] = True

            # Explore — this creates a new stack frame
            print(f"{indent}Recurse with choice: {nums[i]}")
            backtrack(path, depth + 1)
            # When we return here, the deeper stack frames are gone,
            # but 'path', 'i', 'used' are preserved in THIS frame

            # Unchoose
            path.pop()
            used[i] = False
            print(f"{indent}Backtracked from: {nums[i]}")

        print(f"{indent}Exit: path={path}")

    backtrack([], 0)
    return result

# Example output for nums = [1, 2]:
# Enter: path=[]
# Recurse with choice: 1
#   Enter: path=[1]
#   Recurse with choice: 2
#     Enter: path=[1, 2]
#     >>> Found solution: [1, 2]
#   Backtracked from: 2
#   Exit: path=[1]
# Backtracked from: 1
# Recurse with choice: 2
#   Enter: path=[2]
#   Recurse with choice: 1
#     Enter: path=[2, 1]
#     >>> Found solution: [2, 1]
#   Backtracked from: 1
#   Exit: path=[2]
# Backtracked from: 2
# Exit: path=[]
```

When backtrack(path, depth + 1) is called:

• A new stack frame is pushed for the deeper call
• The surrounding variables (path, i, used) in the current frame remain intact
• When the deeper call returns, its frame is popped and execution resumes immediately after the call

This automatic preservation is why we can "undo" our choice after the recursive call — the pre-choice state is still there in the stack frame.
The call stack has a finite depth (typically a few thousand frames in most languages). For very deep recursions, you may hit stack overflow errors. Solutions include:
• Tail call optimization (not available in all languages)
• Iterative conversion with an explicit stack
• Increasing stack size (language/OS dependent)
• Restructuring to reduce recursion depth
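As a sketch of the second option, iterative conversion with an explicit stack, here is a hypothetical permute_iterative (the name and the copy-and-pass frame layout are assumptions, not from the lesson) that replaces the call stack with a list of pending states:

```python
def permute_iterative(nums):
    """Iterative permutations: an explicit stack replaces the call stack.

    Each stack entry is one "frame": a partial path plus the elements
    still available. This sidesteps recursion-depth limits entirely.
    """
    result = []
    stack = [([], nums)]  # (partial path, remaining candidates)

    while stack:
        path, remaining = stack.pop()
        if not remaining:
            result.append(path)  # Complete permutation
            continue
        # Push one successor per candidate; pushing in reverse keeps
        # the output in the same order a recursive version would produce
        for i in reversed(range(len(remaining))):
            stack.append((path + [remaining[i]],
                          remaining[:i] + remaining[i + 1:]))

    return result
```

Because every frame is an immutable snapshot, there is no Unchoose step here: nothing shared is ever mutated.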
Every recursive algorithm needs base cases — conditions under which recursion stops. In backtracking, base cases determine when we've reached a complete solution or a dead end.
Success Base Case: We've made all required decisions and have a complete solution.
Failure Base Case: We've hit a constraint violation or dead end.
```python
# === PERMUTATIONS ===
def permute(nums):
    result = []

    def backtrack(path, used):
        # SUCCESS BASE CASE: All positions filled
        if len(path) == len(nums):
            result.append(path[:])
            return  # Complete solution found

        for i in range(len(nums)):
            if used[i]:
                continue
            ...  # Choose, Explore, Unchoose

    backtrack([], [False] * len(nums))
    return result


# === COMBINATION SUM ===
def combination_sum(candidates, target):
    result = []

    def backtrack(start, path, remaining):
        # SUCCESS BASE CASE: Exact target reached
        if remaining == 0:
            result.append(path[:])
            return

        # FAILURE BASE CASE: Overshot the target
        if remaining < 0:
            return  # Dead end — backtrack

        for i in range(start, len(candidates)):
            ...  # Choose, Explore, Unchoose

    backtrack(0, [], target)
    return result


# === N-QUEENS ===
def solve_n_queens(n):
    result = []

    def backtrack(row, queens, cols, diag1, diag2):
        # SUCCESS BASE CASE: All queens placed
        if row == n:
            result.append(queens[:])
            return

        for col in range(n):
            # FAILURE CHECK (inline, not base case)
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # This candidate invalid, skip
            ...  # Choose, Explore, Unchoose

    backtrack(0, [], set(), set(), set())
    return result
```

Not all "failures" are base cases. Sometimes you discover mid-exploration that continuing is pointless. This is pruning, and it can happen at several points: before the candidate loop runs, while filtering candidates, or immediately after making a choice.
Strategically placed pruning prevents unnecessary exploration of doomed sub-trees.
The earlier you detect that a path is doomed, the more computation you save. If you can prove that no valid extension exists from the current state, return immediately rather than exploring empty sub-trees. This is the essence of backtracking's efficiency advantage over brute force.
```python
def combination_sum_pruned(candidates, target):
    """
    Optimized combination sum with strategic pruning.
    Assumes candidates are sorted in ascending order.
    """
    result = []
    candidates.sort()  # Enable early termination

    def backtrack(start, path, remaining):
        if remaining == 0:
            result.append(path[:])
            return

        for i in range(start, len(candidates)):
            candidate = candidates[i]

            # PRUNING: If this candidate exceeds remaining,
            # all larger candidates will too (since sorted)
            if candidate > remaining:
                break  # Not 'continue' — 'break' to skip all remaining

            path.append(candidate)
            backtrack(i, path, remaining - candidate)  # Can reuse same element
            path.pop()

    backtrack(0, [], target)
    return result
```

The Explore phase involves passing state to the recursive call. There are several ways to manage this, each with tradeoffs:
Mutate-and-Restore: Modify a shared data structure before recursing, then restore it after.
Pros: Memory efficient; no copying
Cons: Requires careful restore logic; harder to parallelize
Copy-and-Pass: Create a new copy of the state with the choice applied, then pass it to the recursive call.
Pros: Simpler logic; no restore needed; naturally parallelizable
Cons: Higher memory usage; copying overhead
Hybrid: Use mutation for some state (e.g., tracking sets) and copying for others (e.g., the path).
```python
def permute_mutate(nums):
    result = []
    path = []                       # Shared, mutated
    used = [False] * len(nums)

    def backtrack():
        if len(path) == len(nums):
            result.append(path[:])  # Copy on capture
            return

        for i in range(len(nums)):
            if used[i]:
                continue
            path.append(nums[i])    # Mutate
            used[i] = True          # Mutate

            backtrack()

            path.pop()              # Restore
            used[i] = False         # Restore

    backtrack()
    return result
```
```python
def permute_copy(nums):
    result = []

    def backtrack(path, remaining):
        if not remaining:
            result.append(path)  # No copy needed
            return

        for i, num in enumerate(remaining):
            # Create NEW copies with choice applied
            new_path = path + [num]
            new_remaining = remaining[:i] + remaining[i + 1:]

            backtrack(new_path, new_remaining)
            # No restore needed — originals unchanged

    backtrack([], nums)
    return result
```

Mutate-and-Restore is preferred when:
• State is large and copying is expensive
• You're optimizing for performance
• Sequential execution is fine
Copy-and-Pass is preferred when:
• State is small
• Code clarity is prioritized
• Parallel exploration is needed
• You're prone to bugs in restore logic
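The hybrid strategy can be sketched as follows. This hypothetical permute_hybrid (the name is illustrative, not from the lesson) mutates and restores the cheap used array while copy-and-passing the path:

```python
def permute_hybrid(nums):
    """Hybrid state passing: mutate the cheap tracking structure,
    copy the path that each solution ultimately captures.
    """
    result = []
    used = [False] * len(nums)   # Shared: mutated and restored

    def backtrack(path):         # Copied: a fresh list at each call
        if len(path) == len(nums):
            result.append(path)  # Already a fresh copy — safe to keep
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            used[i] = True                  # Mutate
            backtrack(path + [nums[i]])     # Copy-and-pass the path
            used[i] = False                 # Restore

    backtrack([])
    return result
```

The tracking array is restored in place because copying it at every level would buy nothing, while the copied path needs no restore logic at all.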
To truly understand exploration, visualize backtracking as traversing a solution tree (also called a decision tree or search tree).
Exploration is a depth-first traversal of this tree. We descend along one path, hit a leaf, then backtrack to try the next branch.
```
                         []
                        / | \
                      /   |   \
                    /     |     \
                 [1]     [2]     [3]
                 / \     / \     / \
          [1,2][1,3] [2,1][2,3] [3,1][3,2]
            |    |     |    |     |    |
      [1,2,3][1,3,2][2,1,3][2,3,1][3,1,2][3,2,1]
         ✓      ✓      ✓      ✓      ✓      ✓

Legend:
  []      = Root (empty permutation)
  [1]     = After choosing '1' for position 0
  [1,2]   = After choosing '1' then '2'
  [1,2,3] = Complete permutation (leaf)
  ✓       = Complete solution
```

The tree has n! leaves for permutations of n elements — one for each valid permutation.
The height is n — we make n decisions (one for each position).
Branching factor decreases — at depth d, there are (n-d) choices remaining.
Every leaf is a solution — in permutations, all complete paths are valid.
For problems with constraints (like N-Queens), some branches are pruned early, and not all leaves are solutions.
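To see how much of the tree pruning removes, one can count the nodes a constrained search actually visits. The sketch below (n_queens_nodes is an illustrative helper, not part of the lesson) counts the nodes touched by a pruned N-Queens search, which can be compared against the n**n leaves of the unpruned placement tree:

```python
def n_queens_nodes(n):
    """Count nodes visited by a pruned N-Queens search.

    Each call is one node of the implicit tree; branches that fail
    the column/diagonal check are never entered at all.
    """
    visited = 0

    def backtrack(row, cols, diag1, diag2):
        nonlocal visited
        visited += 1
        if row == n:
            return  # A complete placement (leaf)
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # Pruned: this entire sub-tree is skipped
            backtrack(row + 1,
                      cols | {col},
                      diag1 | {row - col},
                      diag2 | {row + col})

    backtrack(0, frozenset(), frozenset(), frozenset())
    return visited
```

For n = 4 the unpruned tree has 4**4 = 256 leaves alone, while the pruned search touches far fewer nodes in total.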
Before implementing a backtracking solution, sketch the first few levels of its solution tree. Ask yourself:
• What does each level of the tree represent?
• What are the candidates at each node?
• Which branches can be pruned?
• What makes a valid leaf vs. a dead end?
This visual model often clarifies the Choose-Explore-Unchoose structure.
In well-structured backtracking, recursion depth directly corresponds to decision progress: each level of recursion represents one more decision made.
This correspondence is essential for reasoning about your algorithm:
| Problem | Max Depth | What Each Level Decides |
|---|---|---|
| Permutations of n elements | n | Which element fills position d |
| Subsets of n elements | n | Include or exclude element d |
| N-Queens | N | Which column for queen in row d |
| Sudoku | 81 (or fewer) | Which digit for the next empty cell |
| Combination Sum | Target / min(candidates) | Which candidate to include next |
| Word Break | Length of string | Where to end the current word |
The relationship between depth and progress is often captured by a progress parameter — an argument to the recursive function that advances toward the base case:
• backtrack(index) where index advances from 0 to n
• backtrack(remaining) where remaining decreases toward 0
• backtrack(path) where len(path) grows toward the target size

A well-designed progress parameter guarantees termination: each recursive call is strictly closer to the base case.
```python
# Progress by INDEX (advancing start position)
def combinations(n, k):
    result = []

    def backtrack(start, path):  # 'start' is the progress parameter
        if len(path) == k:
            result.append(path[:])
            return
        for i in range(start, n + 1):  # 'start' only increases
            path.append(i)
            backtrack(i + 1, path)     # Progress: start → i + 1
            path.pop()

    backtrack(1, [])
    return result


# Progress by REMAINING VALUE (decreasing target)
def combination_sum(candidates, target):
    result = []

    def backtrack(start, remaining, path):  # 'remaining' is progress
        if remaining == 0:
            result.append(path[:])
            return
        if remaining < 0:
            return
        for i in range(start, len(candidates)):
            path.append(candidates[i])
            backtrack(i, remaining - candidates[i], path)  # Progress: remaining decreases
            path.pop()

    backtrack(0, target, [])
    return result


# Progress by SOLUTION SIZE (path grows toward target)
def permute(nums):
    result = []

    def backtrack(path, used):      # len(path) is implicit progress
        if len(path) == len(nums):  # Progress complete
            result.append(path[:])
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            path.append(nums[i])    # Progress: len(path) + 1
            used[i] = True
            backtrack(path, used)
            path.pop()
            used[i] = False

    backtrack([], [False] * len(nums))
    return result
```

If your recursive call doesn't make progress toward the base case, you'll have infinite recursion. Always verify:
• The progress parameter changes in every recursive call
• The change moves toward the base case condition
• There are no paths where progress stalls
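As a concrete check of these rules, here is a subsets sketch (illustrative, not one of the lesson's examples) whose progress parameter is the start index:

```python
def subsets(nums):
    """Subsets via index progress.

    The call backtrack(i + 1) is what guarantees termination: every
    recursive call strictly advances 'start' toward len(nums).
    Passing 'i' instead would let the same element be re-chosen
    forever — progress would stall and the recursion would never
    bottom out.
    """
    result = []

    def backtrack(start, path):
        result.append(path[:])     # Every node is a valid subset
        for i in range(start, len(nums)):
            path.append(nums[i])   # Choose
            backtrack(i + 1, path) # Progress: start strictly increases
            path.pop()             # Unchoose

    backtrack(0, [])
    return result
```

Auditing the single recursive call against the three bullet points above takes seconds and catches the most common cause of hangs in backtracking code.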
Exploration serves two purposes depending on the problem:
For problems requiring all solutions (all permutations, all valid Sudoku arrangements), the Explore phase visits every valid leaf and collects each solution.
For problems where any valid solution suffices (solving a Sudoku puzzle), exploration can terminate early once a solution is found.
The difference is subtle but important in how you handle return values:
```python
# FIND ALL SOLUTIONS
def find_all_solutions():
    result = []  # Collect all solutions

    def backtrack(state):
        if is_complete(state):
            result.append(capture_solution(state))
            return  # Continue exploring other branches

        for choice in get_candidates(state):
            make_choice(state, choice)
            backtrack(state)  # Return value ignored — always continue
            undo_choice(state, choice)

    backtrack(initial_state)
    return result  # All solutions collected


# FIND ANY SOLUTION (Early Termination)
def find_any_solution():
    def backtrack(state):
        if is_complete(state):
            return capture_solution(state)  # Return the solution

        for choice in get_candidates(state):
            make_choice(state, choice)
            result = backtrack(state)  # Check return value
            if result is not None:
                return result  # Propagate success up the stack
            undo_choice(state, choice)

        return None  # No solution found in this branch

    return backtrack(initial_state)  # Returns solution or None


# FIND ANY SOLUTION (Boolean variant)
def solve_sudoku(board):
    def backtrack():
        cell = find_empty_cell(board)
        if cell is None:
            return True  # All cells filled — success!

        row, col = cell
        for digit in get_candidates(board, row, col):
            board[row][col] = digit
            if backtrack():      # If recursion succeeds...
                return True      # ...propagate success upward
            board[row][col] = 0  # Undo and try next

        return False  # No valid digit for this cell

    backtrack()
    return board
```

In "find any" problems, the return value propagates success up the call stack. When a leaf returns success (True or a solution), every ancestor must also return success immediately, unwinding the stack without exploring remaining siblings. This short-circuiting is what enables early termination.
The Explore phase is sandwiched between Choose and Unchoose. Understanding this relationship is key to writing correct backtracking code.
The invariant: after Unchoose, the state must be exactly as it was before Choose. It must hold for every choice at every level; if any step corrupts the state, subsequent exploration will be incorrect.
```
┌────────────────────────────────────────────────────────────────────────┐
│ for choice in candidates:                                              │
│                                                                        │
│   ┌──────────────────────────────────────────────────────────────────┐ │
│   │ CHOOSE                                                           │ │
│   │  ├── Add 'choice' to partial solution                            │ │
│   │  ├── Update tracking structures                                  │ │
│   │  └── State now reflects 'choice' being made                      │ │
│   └──────────────────────────────────────────────────────────────────┘ │
│                                   │                                    │
│                                   ▼                                    │
│   ┌──────────────────────────────────────────────────────────────────┐ │
│   │ EXPLORE (Recursive Call)                                         │ │
│   │  ├── All deeper calls see 'choice' in the state                  │ │
│   │  ├── Entire sub-tree from this choice is explored                │ │
│   │  └── Returns after all possibilities from 'choice' are exhausted │ │
│   └──────────────────────────────────────────────────────────────────┘ │
│                                   │                                    │
│                                   ▼                                    │
│   ┌──────────────────────────────────────────────────────────────────┐ │
│   │ UNCHOOSE                                                         │ │
│   │  ├── Remove 'choice' from partial solution                       │ │
│   │  ├── Restore tracking structures                                 │ │
│   │  └── State is now EXACTLY as before CHOOSE                       │ │
│   └──────────────────────────────────────────────────────────────────┘ │
│                                                                        │
│   (Loop continues with next candidate)                                 │
└────────────────────────────────────────────────────────────────────────┘
```

If your backtracking code produces missing or duplicate solutions, check the Choose-Explore-Unchoose invariant: verify that the state immediately after Unchoose is identical to the state immediately before Choose, at every level.
Any discrepancy reveals the bug.
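One practical way to run this check is a debugging sketch like the following (the snapshot assertion is an illustrative technique, not part of the lesson's template): snapshot the state before Choose and assert it is restored after Unchoose.

```python
import copy

def permute_checked(nums):
    """Permutations with an invariant check: after Unchoose, the state
    must be exactly what it was before Choose.
    """
    result = []
    path = []
    used = [False] * len(nums)

    def backtrack():
        if len(path) == len(nums):
            result.append(path[:])
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            # Snapshot the state immediately before CHOOSE
            snapshot = (copy.deepcopy(path), copy.deepcopy(used))

            path.append(nums[i])  # Choose
            used[i] = True
            backtrack()           # Explore
            path.pop()            # Unchoose
            used[i] = False

            # INVARIANT: state restored exactly after UNCHOOSE
            assert (path, used) == snapshot, "Unchoose did not restore state"

    backtrack()
    return result
```

Leave such assertions in during development and strip them once the invariant provably holds; a single failing assertion pinpoints the level where the state was corrupted.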
The Explore phase is where the search happens — where we descend through the solution space to find all paths or the first valid solution. It's the heartbeat of backtracking, driving the depth-first traversal that makes the algorithm work.
What's Next:
With Choose and Explore understood, we turn to the final piece: Unchoose — Undo the Decision. This phase ensures we can backtrack cleanly, restoring state so that other candidates can be explored without interference from previous choices.
You now understand the Explore phase of the backtracking template in depth. You know how recursive calls traverse the solution tree, how the call stack preserves state, and how to design correct base cases and pruning strategies. This prepares you for the Unchoose phase that completes the template.