Imagine you've written what you believe is a correct algorithm. You're confident in the logic, the pseudocode looks clean, and you're ready to implement. But how do you know it's correct?
You could implement it, run test cases, and debug failures. But this approach is slow, frustrating, and often reveals problems after you've invested significant coding time. There's a better way.
Dry running is the technique of manually executing your algorithm step by step, tracking variable changes and control flow exactly as a computer would—but using your brain, a pen, and paper instead of a CPU and memory. It's like being a human interpreter for your own code.
This technique is not optional. It's the most reliable way to verify algorithm correctness before you write a single line of implementation code.
By the end of this page, you will master the art of dry running. You'll learn how to set up trace tables, track variable states through iterations, verify loop invariants, catch off-by-one errors, and develop the intuition that comes from deeply understanding how algorithms execute.
Dry running (also called hand tracing, desk checking, or walkthrough) is the process of manually simulating algorithm execution on paper or a whiteboard. You:

- Execute each statement in order, exactly as written
- Record the value of every variable after each step
- Follow every branch and loop the way the computer would
- Note the final output or return value
The term "dry run" comes from the idea of running through something without the actual execution—like a rehearsal before a performance, or a fire drill without a real fire.
Expert programmers don't skip dry running—they've just internalized it. When a senior engineer glances at code and says 'there's an off-by-one error in the loop termination,' they're doing a rapid mental dry run. The skill becomes faster with practice, but never obsolete.
The most reliable dry running technique is the trace table (also called a state table or variable trace). A trace table systematically records the value of every variable after each step of execution.
How to construct a trace table:

1. Create one column for each variable, plus columns for key conditions and actions
2. Add a row for initialization, then one row per executed step or iteration
3. Fill in each cell with the variable's value after that step
4. End with a row showing the final state and the return value
Example: Tracing a simple maximum-finding algorithm
Let's trace the FindMaximum algorithm with input [3, 7, 2, 9, 4]:
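The FindMaximum pseudocode isn't reproduced here, so below is a minimal Python sketch of the algorithm the table traces; the function name and exact structure are assumptions reconstructed from the trace rows:

```python
def find_maximum(array):
    # Init row of the trace: max ← array[0]
    maximum = array[0]
    # i ← 1 TO length(array) - 1: compare each remaining element
    for i in range(1, len(array)):
        if array[i] > maximum:    # the "array[i] > max?" column
            maximum = array[i]    # the "max ← array[i]" action
    return maximum

print(find_maximum([3, 7, 2, 9, 4]))  # 9, matching the trace
```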
| Step | i | array[i] | max | array[i] > max? | Action |
|---|---|---|---|---|---|
| Init | – | – | 3 | – | max ← array[0] = 3 |
| 1 | 1 | 7 | 7 | Yes (7 > 3) | max ← 7 |
| 2 | 2 | 2 | 7 | No (2 ≤ 7) | No change |
| 3 | 3 | 9 | 9 | Yes (9 > 7) | max ← 9 |
| 4 | 4 | 4 | 9 | No (4 ≤ 9) | No change |
| End | – | – | 9 | – | RETURN 9 ✓ |
Reading the trace table:

- Init: max ← array[0] = 3 (first element)
- Steps 1-4: each row shows one iteration, the comparison made, and whether max was updated
- End: max = 9 is returned

The trace table proves the algorithm works for this input. To build confidence, we'd trace additional inputs, especially edge cases.
You don't need columns for every expression—only for variables that change and comparisons that affect control flow. Include enough to follow the logic, but not so much that the table becomes unwieldy.
Loops are where most algorithmic bugs hide. Off-by-one errors, incorrect termination conditions, and infinite loops all occur in loop constructs. Careful loop tracing catches these errors.
Key elements to verify when tracing loops:

- Initialization: the loop variable starts at the right value
- Termination: `<` vs `<=` and `length` vs `length - 1`
- Update: each iteration moves the loop variable toward termination
- Body: the work done each iteration is correct for that iteration's values

Example: Tracing a loop to count occurrences
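For reference, here is a minimal Python sketch of the counting algorithm the table below traces; the input [1, 3, 5, 3, 3, 2] and target 3 are inferred from the trace rows:

```python
def count_occurrences(array, target):
    count = 0                   # Init row: count ← 0
    # i ← 0 TO length(array) - 1
    for i in range(len(array)):
        if array[i] == target:  # the "array[i] = 3?" column
            count += 1
    return count

print(count_occurrences([1, 3, 5, 3, 3, 2], 3))  # 3, matching the trace
```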
| Step | i | array[i] | array[i] = 3? | count |
|---|---|---|---|---|
| Init | – | – | – | 0 |
| 1 | 0 | 1 | No | 0 |
| 2 | 1 | 3 | Yes | 1 |
| 3 | 2 | 5 | No | 1 |
| 4 | 3 | 3 | Yes | 2 |
| 5 | 4 | 3 | Yes | 3 |
| 6 | 5 | 2 | No | 3 |
| End | – | – | – | RETURN 3 ✓ |
Verifying loop boundaries:

- `i ← 0 TO length(array) - 1` = `i ← 0 TO 5` (6 iterations, covering indices 0 through 5) ✓

Off-by-one errors are so common they have a name: 'fencepost errors' (a fence with n sections needs n+1 posts). When tracing, always verify: Does the loop start at the right index? Does it end at the right index? Does the range include both endpoints or exclude one?
Nested loops, conditionals within loops, and function calls within loops create additional complexity. The trace table approach scales to handle these cases.
Example: Tracing a nested loop (finding pairs)
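As a reference for the table below, here is a Python sketch of a pair-finding nested loop consistent with the trace; the input [1, 4, 3, 2] and target sum 5 are inferred from the rows:

```python
def find_pairs(array, target):
    pairs = []
    # Outer: i ← 0 TO length - 2; inner: j ← i + 1 TO length - 1
    for i in range(len(array) - 1):
        for j in range(i + 1, len(array)):
            if array[i] + array[j] == target:  # the "sum = 5?" column
                pairs.append([i, j])           # record the index pair
    return pairs

print(find_pairs([1, 4, 3, 2], 5))  # [[0, 1], [2, 3]], matching the trace
```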
| i | j | array[i] | array[j] | sum | = 5? | pairs |
|---|---|---|---|---|---|---|
| 0 | 1 | 1 | 4 | 5 | Yes | [[0,1]] |
| 0 | 2 | 1 | 3 | 4 | No | [[0,1]] |
| 0 | 3 | 1 | 2 | 3 | No | [[0,1]] |
| 1 | 2 | 4 | 3 | 7 | No | [[0,1]] |
| 1 | 3 | 4 | 2 | 6 | No | [[0,1]] |
| 2 | 3 | 3 | 2 | 5 | Yes | [[0,1], [2,3]] |
Observations from the trace:

- The inner loop starts at j = i + 1, so each pair is examined exactly once and no element is paired with itself
- The pairs list changes only on the two rows where the sum equals 5
- Two pairs are found: indices [0,1] (values 1 + 4) and [2,3] (values 3 + 2)
Verifying loop boundaries:
- Outer loop: `i ← 0 TO length - 2` = `i ← 0 TO 2` (3 iterations for i = 0, 1, 2)
- Inner loop for i = 0: `j ← 1 TO 3` (j = 1, 2, 3)
- Inner loop for i = 1: `j ← 2 TO 3` (j = 2, 3)
- Inner loop for i = 2: `j ← 3 TO 3` (j = 3 only)

Total comparisons: 3 + 2 + 1 = 6, which matches our 6 rows ✓
For nested loops, you can either show each inner iteration as a row (comprehensive) or summarize the inner loop's effect in one row per outer iteration (compact). Choose based on complexity—if bugs hide in the inner loop, trace it fully.
Recursive algorithms require a different tracing approach. Each recursive call creates a new context with its own parameter values. The key is to trace the call stack—the sequence of active function calls.
Techniques for tracing recursion:

- Draw the call stack: write each call with its parameter values, indented by depth
- Give each call its own mini trace for its local variables
- Record each return value next to its call, then trace how returns propagate back up the stack
Example: Tracing binary search (recursive)
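The trace is easiest to follow alongside code. Below is a standard recursive binary search in Python (an assumed textbook version, since the page's exact pseudocode isn't shown), with the call stack for a sample input written out as comments:

```python
def binary_search(array, target, low, high):
    if low > high:                 # base case: empty range, target absent
        return -1
    mid = (low + high) // 2        # integer division picks the midpoint
    if array[mid] == target:
        return mid                 # found during the "winding" phase
    elif array[mid] < target:
        return binary_search(array, target, mid + 1, high)  # right half
    else:
        return binary_search(array, target, low, mid - 1)   # left half

# Call-stack trace for binary_search([1, 3, 5, 7, 9], 7, 0, 4):
#   call 1: low=0, high=4, mid=2, array[2]=5 < 7 → recurse right
#     call 2: low=3, high=4, mid=3, array[3]=7 = 7 → return 3
print(binary_search([1, 3, 5, 7, 9], 7, 0, 4))  # 3
```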
Recursion has two phases: the 'winding' phase where calls stack up, and the 'unwinding' phase where returns propagate back. For algorithms that compute on the way down (like binary search), the result is found during winding. For algorithms that accumulate (like factorial), results combine during unwinding.
When algorithms use stacks, queues, or other data structures, your trace must show the state of these structures at each step.
Example: Tracing a stack-based parenthesis validator
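Here is a minimal Python sketch of such a validator (an assumption, since the page's exact pseudocode isn't shown); the input "(()(()))" is inferred from the trace rows below:

```python
def is_balanced(s):
    stack = []                 # Init row: create empty stack
    for ch in s:
        if ch == '(':
            stack.append(ch)   # Push
        elif ch == ')':
            if not stack:      # underflow: popping from empty means unbalanced
                return False
            stack.pop()        # Pop
    return len(stack) == 0     # End row: isEmpty(stack)?

print(is_balanced("(()(()))"))  # True, matching the trace
```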
| Step | char | Action | stack (bottom→top) |
|---|---|---|---|
| Init | – | Create empty stack | [] |
| 1 | ( | Push | ['('] |
| 2 | ( | Push | ['(', '('] |
| 3 | ) | Pop | ['('] |
| 4 | ( | Push | ['(', '('] |
| 5 | ( | Push | ['(', '(', '('] |
| 6 | ) | Pop | ['(', '('] |
| 7 | ) | Pop | ['('] |
| 8 | ) | Pop | [] |
| End | – | isEmpty(stack)? Yes → RETURN true ✓ | [] |
Visualizing the stack:
It often helps to draw the stack graphically:
```
Step 5 (maximum depth):

┌───┐
│ ( │ ← top
├───┤
│ ( │
├───┤
│ ( │ ← bottom
└───┘
```
Drawing the data structure helps you see whether operations are correct and catch issues like underflow (popping from empty) or overflow (if size is bounded).
When tracing algorithms that use complex data structures (trees, graphs, priority queues), sketch the structure's state at critical points—especially before and after the operations that modify it. This visual trace often reveals bugs invisible in a pure variable trace.
Dry running one case proves the algorithm works for that input. But algorithms can fail on specific edge cases while working on typical inputs. Strategic test case selection maximizes bug-finding efficiency.
Prioritizing test cases for time-constrained dry runs:
You often don't have time to trace every case. Prioritize:

1. One typical case that exercises the main logic
2. The smallest legal input (empty or single-element)
3. One boundary case (extreme values, or the answer at the first or last position)
If all three pass, you have reasonable confidence. If time permits, add more edge cases.
| Test Case | Input | Expected | Why Include |
|---|---|---|---|
| Typical | [3, 7, 2, 9, 4] | 9 | Normal case with max in middle |
| Single element | [5] | 5 | Tests if loop handles n=1 |
| Max at start | [9, 3, 2, 1] | 9 | Tests if initialization is correct |
| Max at end | [1, 2, 3, 9] | 9 | Tests if loop reaches last element |
| All equal | [5, 5, 5, 5] | 5 | Tests handling of no updates |
| Negatives | [-3, -1, -7] | -1 | Tests if comparison works for negatives |
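To make this concrete, a small Python driver (a sketch reusing the find_maximum function from earlier) can replay every row of the table as an assertion:

```python
def find_maximum(array):
    maximum = array[0]
    for i in range(1, len(array)):
        if array[i] > maximum:
            maximum = array[i]
    return maximum

# One assertion per row of the test-case table above
test_cases = [
    ([3, 7, 2, 9, 4], 9),  # typical
    ([5], 5),              # single element
    ([9, 3, 2, 1], 9),     # max at start
    ([1, 2, 3, 9], 9),     # max at end
    ([5, 5, 5, 5], 5),     # all equal
    ([-3, -1, -7], -1),    # negatives
]
for array, expected in test_cases:
    assert find_maximum(array) == expected
print("all edge cases pass")
```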
In interviews and production, most bugs occur in edge cases, not happy paths. An algorithm that works for [3, 7, 2, 9, 4] but crashes on [] or returns the wrong answer for [5] is broken. Always trace at least one edge case.
Experienced programmers know the usual suspects. When dry running, watch specifically for these common error patterns:
- Off-by-one boundaries: `< length` vs `<= length`, starting at 0 vs 1
- Wrong comparison operators: `<` when `<=` was intended, or `=` vs `≠`
- Integer division surprises: `(5/2)` is 2, not 2.5; this affects midpoint calculations

Consider, for example, a loop written as `i ← 0 TO length(array)` over a 3-element array. This bug would crash at runtime with an index error. But our trace caught it before we wrote any code. We traced to i=3, asked 'what is array[3]?', and realized the array only has indices 0, 1, 2. Five minutes of tracing saved debugging time.
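To see the fencepost bug concretely, here is a small Python sketch contrasting the buggy bound with the correct one (the 3-element array is illustrative):

```python
array = [10, 20, 30]              # valid indices: 0, 1, 2

# Buggy bound: "i ← 0 TO length(array)" includes i = 3
try:
    for i in range(len(array) + 1):
        value = array[i]          # raises IndexError when i = 3
except IndexError:
    print("caught the off-by-one: there is no array[3]")

# Correct bound: "i ← 0 TO length(array) - 1"
for i in range(len(array)):
    value = array[i]              # i stops at 2; every access is valid
```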
Dry running is not busywork—it's the most efficient way to verify algorithm correctness. The time invested in tracing pays dividends in reduced debugging and increased confidence.
What's next:
Now that you can trace algorithm execution, the next skill is tracing complexity—understanding how time and space requirements evolve as you step through the algorithm. This combines dry running techniques with the complexity analysis we've learned earlier in this chapter.
You now have the tools to systematically verify any algorithm's correctness through dry running. This skill sets you apart from programmers who rely solely on trial-and-error debugging. Next, we'll extend this technique to trace time and space complexity during dry runs.