You've learned to recognize edge cases—empty inputs, extreme values, duplicates, and special characters. But recognizing potential issues is only half the battle. The other half is systematically generating test cases that exercise your code against all these edge cases before they catch you in production or during a critical interview.
The ability to generate effective test cases separates competent engineers from exceptional ones.
This isn't about writing more tests—it's about writing strategic tests that maximize bug-finding power with minimal effort. A well-designed test suite of 10 cases can be more effective than 1000 random inputs.
By the end of this page, you will have a complete toolkit for test case generation: boundary value analysis, equivalence partitioning, stress testing, property-based testing, and specialized techniques for different algorithm types. You'll know how to generate test cases that find bugs before they find you.
Effective test case generation requires a specific mindset: think like an adversary. Your goal is to break your own code—to find inputs that expose bugs, edge cases that weren't handled, and assumptions that don't hold.
The Adversarial Mindset:

- What input would make this loop run zero times?
- What value would push this index past either end of the array?
- What input would make this arithmetic overflow, underflow, or divide by zero?
- Which assumption about the input am I never actually checking?
In interviews, always trace through at least three test cases before submitting: one trivial (empty/single), one typical (medium-sized normal), and one boundary (extreme value or size). This demonstrates systematic thinking and catches 80% of bugs in under a minute.
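For example, a quick three-case trace might look like the sketch below; the `remove_duplicates` function and its expected outputs are hypothetical, used only to illustrate the trivial/typical/boundary pattern:

```python
def remove_duplicates(arr: list[int]) -> list[int]:
    """Hypothetical function under test: drop repeated values, keep first occurrences."""
    seen = set()
    result = []
    for x in arr:
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result

assert remove_duplicates([]) == []                   # Trivial: empty input
assert remove_duplicates([3, 1, 3, 2]) == [3, 1, 2]  # Typical: mixed duplicates
assert remove_duplicates([7] * 1000) == [7]          # Boundary: extreme repetition
```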
Boundary Value Analysis (BVA) is a systematic testing technique based on the observation that bugs tend to cluster at the edges of valid input ranges. The technique tests values at, just inside, and just outside boundaries.
The Core Principle:
For any input with a valid range [min, max], test these values:
| Input Type | Boundaries to Test | Example Values |
|---|---|---|
| Array index [0, n-1] | 0, 1, n-2, n-1, -1, n | 0, 1, 98, 99, -1, 100 (for n=100) |
| Array size [0, max] | 0, 1, 2, max-1, max | 0, 1, 2, 999, 1000 |
| Integer range [a, b] | a-1, a, a+1, b-1, b, b+1 | For [1,100]: 0, 1, 2, 99, 100, 101 |
| 32-bit signed | -2^31, -2^31+1, -1, 0, 1, 2^31-2, 2^31-1 | INT_MIN, INT_MIN+1, ..., INT_MAX |
| Percentage [0, 100] | -1, 0, 1, 99, 100, 101 | -1, 0, 1, 99, 100, 101 |
| String length [0, n] | 0, 1, n-1, n | Empty, single char, max-1, max length |
| Boolean | true, false | Both values (no boundaries as such) |
```python
from typing import Callable, Any, TypeVar
import sys

T = TypeVar('T')


def generate_boundary_values(min_val: int, max_val: int) -> list[int]:
    """
    Generate boundary test values for a numeric range.
    Includes values at, just inside, and just outside boundaries.
    """
    values = []

    # Below valid range
    if min_val > -sys.maxsize:
        values.append(min_val - 1)

    # At and just inside lower boundary
    values.append(min_val)
    if min_val < max_val:
        values.append(min_val + 1)

    # Middle of range (if range is large enough)
    if max_val - min_val > 4:
        values.append((min_val + max_val) // 2)

    # At and just inside upper boundary
    if max_val - min_val > 1:
        values.append(max_val - 1)
    values.append(max_val)

    # Above valid range
    if max_val < sys.maxsize:
        values.append(max_val + 1)

    return values


def generate_array_size_tests(max_size: int) -> list[int]:
    """
    Generate array sizes for boundary testing.
    """
    sizes = [
        0,  # Empty
        1,  # Single element
        2,  # Two elements (minimum for pairs)
    ]
    if max_size > 10:
        sizes.append(10)            # Small
    if max_size > 100:
        sizes.append(100)           # Medium
    if max_size > 2:
        sizes.append(max_size - 1)  # Just under max
    sizes.append(max_size)          # Maximum allowed
    return sizes


def generate_index_tests(array_length: int) -> list[int]:
    """
    Generate index values for boundary testing.
    Includes valid and invalid indices.
    """
    if array_length == 0:
        return [-1, 0]  # Only invalid indices

    indices = []

    # Invalid indices
    indices.append(-1)
    indices.append(-array_length - 1)  # Negative beyond valid

    # Valid boundary indices
    indices.append(0)  # First
    if array_length > 1:
        indices.append(1)                 # Second
        indices.append(array_length - 2)  # Second to last
        indices.append(array_length - 1)  # Last

    # Middle index
    if array_length > 4:
        indices.append(array_length // 2)

    # Invalid high indices
    indices.append(array_length)      # Just past end
    indices.append(array_length + 1)  # Further past

    return indices


def generate_32bit_boundary_values() -> list[int]:
    """
    Generate 32-bit signed integer boundary values.
    """
    INT32_MIN = -2**31
    INT32_MAX = 2**31 - 1
    return [
        INT32_MIN,      # Minimum
        INT32_MIN + 1,  # Just above minimum
        -1,             # Negative near zero
        0,              # Zero
        1,              # Positive near zero
        INT32_MAX - 1,  # Just below maximum
        INT32_MAX,      # Maximum
    ]


class BoundaryTestGenerator:
    """
    Comprehensive boundary test generator for function testing.
    """

    def __init__(self, func: Callable, valid_range: tuple[int, int] | None = None):
        self.func = func
        self.valid_range = valid_range

    def generate_test_suite(self) -> list[dict]:
        """
        Generate comprehensive boundary test cases.
        """
        tests = []
        if self.valid_range:
            min_val, max_val = self.valid_range
            # Valid and invalid boundary values
            for value in generate_boundary_values(min_val, max_val):
                is_valid = min_val <= value <= max_val
                tests.append({
                    'input': value,
                    'expected_valid': is_valid,
                    'category': 'boundary'
                })
        return tests

    def run_tests(self, tests: list[dict]) -> list[dict]:
        """
        Execute test cases and report results.
        """
        results = []
        for test in tests:
            try:
                output = self.func(test['input'])
                results.append({
                    **test,
                    'output': output,
                    'error': None,
                    'passed': True  # No exception
                })
            except Exception as e:
                results.append({
                    **test,
                    'output': None,
                    'error': str(e),
                    # An exception is correct behavior for invalid inputs
                    'passed': not test.get('expected_valid', True)
                })
        return results
```
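For instance, wiring the generator to a percentage validator might look like the sketch below; `validate_percentage` is a hypothetical function used only for illustration:

```python
def validate_percentage(value: int) -> int:
    """Hypothetical function under test: rejects values outside [0, 100]."""
    if not 0 <= value <= 100:
        raise ValueError(f"percentage out of range: {value}")
    return value

gen = BoundaryTestGenerator(validate_percentage, valid_range=(0, 100))
suite = gen.generate_test_suite()  # Tests -1, 0, 1, 50, 99, 100, 101
for result in gen.run_tests(suite):
    status = "ok" if result['passed'] else "UNEXPECTED"
    print(f"{result['input']:>4}: {status}")
```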
Equivalence Partitioning divides the input space into classes where all values in a class should produce equivalent behavior. You only need to test one value from each partition—if the code handles one correctly, it should handle all of them.

The Core Principle:
If two inputs exercise the same code path, testing both adds cost without adding coverage. Pick one representative per partition and spend the saved effort on partitions you would otherwise miss.
Common Partitions:
| Problem Type | Input Partitions | Representative Test Per Partition |
|---|---|---|
| Array Search | Empty, Single, Multiple; Target present/absent; Target at start/middle/end | []: not found; [5]: find 5; [1,3,5,7,9]: find 5 (middle) |
| Sorting | Empty, Single, Two; Already sorted; Reverse sorted; Random; All same | []: []; [1,2,3]: stable; [3,2,1]: needs full sort |
| String Processing | Empty; Whitespace only; Alphanumeric; Special chars; Unicode | ""; " "; "hello"; "@#$"; "café" |
| Numeric Calculation | Negative; Zero; Small positive; Large positive; At limits | -10, 0, 5, 1000000, INT_MAX |
| Graph Algorithms | Empty; Single node; Disconnected; Connected; Cycle present | No nodes; Single node; Two components; Complete graph; Has cycle |
```python
from typing import Callable, TypeVar, Any
from dataclasses import dataclass
from enum import Enum, auto
import random

T = TypeVar('T')


class ArrayPartition(Enum):
    """Equivalence partitions for array inputs."""
    EMPTY = auto()
    SINGLE = auto()
    TWO = auto()
    SMALL = auto()
    MEDIUM = auto()
    LARGE = auto()


class ValuePartition(Enum):
    """Equivalence partitions for numeric values."""
    NEGATIVE_LARGE = auto()
    NEGATIVE_SMALL = auto()
    ZERO = auto()
    POSITIVE_SMALL = auto()
    POSITIVE_LARGE = auto()


class OrderPartition(Enum):
    """Equivalence partitions for array ordering."""
    SORTED_ASC = auto()
    SORTED_DESC = auto()
    RANDOM = auto()
    ALL_SAME = auto()


@dataclass
class TestCase:
    """Represents a single test case with metadata."""
    input: Any
    partitions: list[Enum]
    description: str


def generate_array_partition_tests(
    value_range: tuple[int, int] = (1, 100)
) -> list[TestCase]:
    """
    Generate test arrays for each equivalence partition.
    """
    min_val, max_val = value_range
    tests = []

    # Size partitions
    tests.append(TestCase(
        input=[],
        partitions=[ArrayPartition.EMPTY],
        description="Empty array"
    ))
    tests.append(TestCase(
        input=[random.randint(min_val, max_val)],
        partitions=[ArrayPartition.SINGLE],
        description="Single element"
    ))
    tests.append(TestCase(
        input=sorted(random.sample(range(min_val, max_val), 2)),
        partitions=[ArrayPartition.TWO],
        description="Two elements"
    ))

    # Order partitions
    small_arr = sorted(random.sample(range(min_val, max_val), 10))
    tests.append(TestCase(
        input=small_arr,
        partitions=[ArrayPartition.SMALL, OrderPartition.SORTED_ASC],
        description="Small sorted ascending"
    ))
    tests.append(TestCase(
        input=small_arr[::-1],
        partitions=[ArrayPartition.SMALL, OrderPartition.SORTED_DESC],
        description="Small sorted descending"
    ))
    random_arr = random.sample(range(min_val, max_val), 10)
    tests.append(TestCase(
        input=random_arr,
        partitions=[ArrayPartition.SMALL, OrderPartition.RANDOM],
        description="Small random order"
    ))
    same_val = random.randint(min_val, max_val)
    tests.append(TestCase(
        input=[same_val] * 10,
        partitions=[ArrayPartition.SMALL, OrderPartition.ALL_SAME],
        description="All same values"
    ))

    return tests


def generate_numeric_partition_tests() -> list[TestCase]:
    """
    Generate test values for each numeric equivalence partition.
    """
    INT32_MIN = -2**31
    INT32_MAX = 2**31 - 1
    tests = [
        TestCase(
            input=INT32_MIN,
            partitions=[ValuePartition.NEGATIVE_LARGE],
            description="Minimum 32-bit integer"
        ),
        TestCase(
            input=-10,
            partitions=[ValuePartition.NEGATIVE_SMALL],
            description="Small negative"
        ),
        TestCase(
            input=0,
            partitions=[ValuePartition.ZERO],
            description="Zero"
        ),
        TestCase(
            input=10,
            partitions=[ValuePartition.POSITIVE_SMALL],
            description="Small positive"
        ),
        TestCase(
            input=INT32_MAX,
            partitions=[ValuePartition.POSITIVE_LARGE],
            description="Maximum 32-bit integer"
        ),
    ]
    return tests


def partition_based_test_selector(
    all_tests: list[TestCase],
    covered: set | None = None
) -> list[TestCase]:
    """
    Select minimum tests to cover all partitions.
    Greedy set cover approximation.
    """
    if covered is None:
        covered = set()

    selected = []
    remaining = list(all_tests)

    while remaining:
        # Find test that covers most new partitions
        best_test = None
        best_new_coverage = 0
        for test in remaining:
            test_partitions = set(test.partitions)
            new_coverage = len(test_partitions - covered)
            if new_coverage > best_new_coverage:
                best_new_coverage = new_coverage
                best_test = test

        if best_test is None or best_new_coverage == 0:
            break

        selected.append(best_test)
        covered.update(best_test.partitions)
        remaining.remove(best_test)

    return selected
```
Stress testing verifies that your algorithm performs correctly and efficiently under maximum load. These tests expose problems that small examples never trigger: hidden quadratic behavior, recursion depth limits, integer overflow, and timeouts at the maximum input size.
```python
import random
import time
from typing import Callable, TypeVar, Any

T = TypeVar('T')


def stress_test(
    func_to_test: Callable,
    func_reference: Callable,
    input_generator: Callable[[], Any],
    num_tests: int = 1000,
    verbose: bool = False
) -> dict:
    """
    Stress test by comparing function output to reference implementation.

    Args:
        func_to_test: Function being tested
        func_reference: Known-correct reference implementation
        input_generator: Function that generates random test inputs
        num_tests: Number of random tests to run
        verbose: Print each test case

    Returns:
        Dictionary with test results and any failing cases
    """
    passed = 0
    failed = 0
    failing_cases = []

    for i in range(num_tests):
        test_input = input_generator()
        try:
            result = func_to_test(test_input)
            expected = func_reference(test_input)
            if result == expected:
                passed += 1
            else:
                failed += 1
                failing_cases.append({
                    'input': test_input,
                    'expected': expected,
                    'actual': result
                })
                if verbose:
                    print(f"FAIL #{i}: input={test_input}")
                    print(f"  Expected: {expected}")
                    print(f"  Actual:   {result}")
        except Exception as e:
            failed += 1
            failing_cases.append({
                'input': test_input,
                'error': str(e)
            })
            if verbose:
                print(f"ERROR #{i}: input={test_input}, error={e}")

    return {
        'passed': passed,
        'failed': failed,
        'total': num_tests,
        'failing_cases': failing_cases[:10]  # Limit stored cases
    }


def performance_test(
    func: Callable,
    input_generator: Callable[[int], Any],
    sizes: list[int],
    time_limit: float = 1.0
) -> list[dict]:
    """
    Measure function performance at different input sizes.
    Verifies O(n), O(n log n), O(n²), etc. behavior.
    """
    results = []
    for size in sizes:
        test_input = input_generator(size)
        start_time = time.perf_counter()
        try:
            func(test_input)
            elapsed = time.perf_counter() - start_time
            results.append({
                'size': size,
                'time': elapsed,
                'passed': elapsed <= time_limit,
                'error': None
            })
        except Exception as e:
            results.append({
                'size': size,
                'time': None,
                'passed': False,
                'error': str(e)
            })
    return results


def estimate_complexity(results: list[dict]) -> str:
    """
    Estimate time complexity from performance measurements.
    Compares growth rates to classify complexity.
    """
    valid_results = [r for r in results if r['time'] is not None]
    if len(valid_results) < 3:
        return "Insufficient data"

    sizes = [r['size'] for r in valid_results]
    times = [r['time'] for r in valid_results]

    # Calculate ratios for complexity estimation
    ratios = []
    for i in range(1, len(sizes)):
        size_ratio = sizes[i] / sizes[i - 1]
        time_ratio = times[i] / times[i - 1] if times[i - 1] > 0 else float('inf')
        ratios.append((size_ratio, time_ratio))

    # Classify based on average ratio behavior
    avg_size_ratio = sum(r[0] for r in ratios) / len(ratios)
    avg_time_ratio = sum(r[1] for r in ratios) / len(ratios)

    if avg_time_ratio < 1.2:
        return "O(1) - Constant"
    elif avg_time_ratio < avg_size_ratio * 0.6:
        return "O(log n) - Logarithmic"
    elif avg_time_ratio < avg_size_ratio * 1.5:
        return "O(n) - Linear"
    elif avg_time_ratio < avg_size_ratio * 2:
        return "O(n log n) - Linearithmic"
    elif avg_time_ratio < avg_size_ratio ** 2 * 1.5:
        return "O(n²) - Quadratic"
    else:
        return "O(n²+) - Possibly worse than quadratic"


# ============================================
# RANDOM INPUT GENERATORS
# ============================================

def random_array_generator(
    size: int,
    min_val: int = -10**9,
    max_val: int = 10**9
) -> list[int]:
    """Generate random integer array of given size."""
    return [random.randint(min_val, max_val) for _ in range(size)]


def random_sorted_array_generator(
    size: int,
    min_val: int = -10**6,
    max_val: int = 10**6
) -> list[int]:
    """Generate sorted random array."""
    arr = random_array_generator(size, min_val, max_val)
    return sorted(arr)


def random_string_generator(
    length: int,
    alphabet: str = 'abcdefghijklmnopqrstuvwxyz'
) -> str:
    """Generate random string of given length."""
    return ''.join(random.choice(alphabet) for _ in range(length))


def random_tree_generator(n: int) -> list[tuple[int, int]]:
    """
    Generate random tree edges for n nodes.
    Returns list of (parent, child) edges.
    """
    if n <= 1:
        return []
    edges = []
    for i in range(1, n):
        parent = random.randint(0, i - 1)
        edges.append((parent, i))
    return edges


def random_graph_generator(
    n: int,
    edge_probability: float = 0.3
) -> list[tuple[int, int]]:
    """
    Generate random undirected graph.
    """
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < edge_probability:
                edges.append((i, j))
    return edges
```

The classic stress test pattern: (1) Write a slow but obviously correct 'naive' solution, (2) Write your optimized solution, (3) Generate random inputs, (4) Compare outputs. When they differ, you've found a bug—and you have the exact failing input to debug.
Property-based testing is a powerful technique where instead of specifying exact expected outputs, you specify properties that should always hold true. The testing framework generates random inputs and verifies the properties.
Why Properties Instead of Examples?

Hand-picked examples only cover the inputs you thought of. A property checked against hundreds of random inputs explores the input space automatically, and every failure comes with a concrete counterexample to debug.
| Property Type | Description | Example |
|---|---|---|
| Idempotent | Applying twice = applying once | sort(sort(arr)) == sort(arr) |
| Involutory | Applying twice = identity | reverse(reverse(arr)) == arr |
| Inverse | Operation has reverse | decrypt(encrypt(x)) == x |
| Invariant | Quantity preserved | sum(arr) == sum(sorted(arr)) |
| Commutative | Order doesn't matter | merge(a, b) == merge(b, a) |
| Associative | Grouping doesn't matter | merge(merge(a, b), c) == merge(a, merge(b, c)) |
| Monotonic | Preserves ordering | x < y implies f(x) <= f(y) |
| Bounded | Output within range | 0 <= result <= max_possible |
```python
import random
from typing import Callable, TypeVar, Any

T = TypeVar('T')


def property_test(
    property_fn: Callable[..., bool],
    input_generators: list[Callable[[], Any]],
    num_tests: int = 100
) -> dict:
    """
    Run property-based tests.

    Args:
        property_fn: Function that returns True if property holds
        input_generators: List of functions that generate random inputs
        num_tests: Number of random tests

    Returns:
        Test results with any counterexamples
    """
    passed = 0
    failed = 0
    counterexamples = []

    for _ in range(num_tests):
        # Generate random inputs
        inputs = [gen() for gen in input_generators]
        try:
            if property_fn(*inputs):
                passed += 1
            else:
                failed += 1
                counterexamples.append(inputs)
        except Exception as e:
            failed += 1
            counterexamples.append({'inputs': inputs, 'error': str(e)})

    return {
        'passed': passed,
        'failed': failed,
        'counterexamples': counterexamples[:5]
    }


# ============================================
# SORTING PROPERTIES
# ============================================

def sort_function(arr: list[int]) -> list[int]:
    """The function being tested (your implementation)."""
    return sorted(arr)  # Replace with your implementation


def prop_sort_preserves_length(arr: list[int]) -> bool:
    """Sorting preserves array length."""
    return len(sort_function(arr)) == len(arr)


def prop_sort_preserves_elements(arr: list[int]) -> bool:
    """Sorting preserves all elements (multiset equality)."""
    from collections import Counter
    return Counter(sort_function(arr)) == Counter(arr)


def prop_sort_is_ordered(arr: list[int]) -> bool:
    """Sorted array is in non-decreasing order."""
    result = sort_function(arr)
    return all(result[i] <= result[i + 1] for i in range(len(result) - 1))


def prop_sort_is_idempotent(arr: list[int]) -> bool:
    """Sorting twice equals sorting once."""
    once = sort_function(arr)
    twice = sort_function(once)
    return once == twice


# ============================================
# REVERSE PROPERTIES
# ============================================

def reverse_function(arr: list[int]) -> list[int]:
    """The reverse function being tested."""
    return arr[::-1]


def prop_reverse_is_involution(arr: list[int]) -> bool:
    """Reversing twice returns original."""
    return reverse_function(reverse_function(arr)) == arr


def prop_reverse_preserves_length(arr: list[int]) -> bool:
    """Reversing preserves length."""
    return len(reverse_function(arr)) == len(arr)


def prop_reverse_first_last_swap(arr: list[int]) -> bool:
    """First element becomes last after reverse."""
    if not arr:
        return True
    result = reverse_function(arr)
    return arr[0] == result[-1] and arr[-1] == result[0]


# ============================================
# BINARY SEARCH PROPERTIES
# ============================================

def binary_search(arr: list[int], target: int) -> int:
    """Binary search returning index or -1."""
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = left + (right - left) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1


def prop_search_correct_index(arr: list[int], target: int) -> bool:
    """If found, index contains the target value."""
    arr = sorted(arr)  # Binary search needs sorted array
    idx = binary_search(arr, target)
    if idx == -1:
        return target not in arr
    return arr[idx] == target


def prop_search_finds_existing(arr: list[int]) -> bool:
    """Search always finds an element that exists."""
    arr = sorted(arr)
    if not arr:
        return True
    target = random.choice(arr)
    return binary_search(arr, target) != -1


# ============================================
# RUN PROPERTY TESTS
# ============================================

def test_sort_properties():
    """Test all sorting properties."""
    gen = lambda: [random.randint(-100, 100)
                   for _ in range(random.randint(0, 50))]

    properties = [
        ("preserves_length", prop_sort_preserves_length),
        ("preserves_elements", prop_sort_preserves_elements),
        ("is_ordered", prop_sort_is_ordered),
        ("is_idempotent", prop_sort_is_idempotent),
    ]

    results = {}
    for name, prop in properties:
        result = property_test(prop, [gen], num_tests=1000)
        results[name] = result
        status = "PASS" if result['failed'] == 0 else "FAIL"
        print(f"{name}: {status} ({result['passed']}/{result['passed'] + result['failed']})")

    return results
```
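The same harness extends to the reverse and binary search properties defined above; a minimal sketch, with generator ranges chosen purely for illustration:

```python
arr_gen = lambda: [random.randint(-100, 100)
                   for _ in range(random.randint(0, 50))]
target_gen = lambda: random.randint(-100, 100)

# One-input property: reversing twice restores the original array.
print(property_test(prop_reverse_is_involution, [arr_gen], num_tests=1000))

# Two-input property: each generator supplies one positional argument.
print(property_test(prop_search_correct_index, [arr_gen, target_gen],
                    num_tests=1000))
```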
Different algorithm categories have specific test patterns that target their unique failure modes.

| Algorithm Category | Critical Test Cases | Properties to Verify |
|---|---|---|
| Sorting | Empty, single, two elements; all same; sorted/reverse-sorted; random | Ordered; length preserved; elements preserved; stable (if required) |
| Binary Search | Empty; single; target at first/last/middle; target absent; duplicates | Returns correct index; not-found returns -1; O(log n) time |
| DP (Memoization) | Base cases; large inputs testing memo; overlapping subproblems | Matches naive solution; fills table correctly; no stack overflow |
| Graph Traversal | Empty; single node; disconnected; cycles; tree structure | Visits all reachable nodes; no infinite loops; correct order (BFS/DFS) |
| Tree Operations | Empty; single node; left-only; right-only; balanced; maximum depth | Correct traversal order; handles null; no stack overflow |
| String Matching | Empty strings; pattern longer than text; no match; multiple matches; overlapping | Correct positions; all matches found; O(n+m) time (for KMP/Z) |
```python
import random
from typing import Generator

# ============================================
# BINARY SEARCH TEST GENERATOR
# ============================================

def generate_binary_search_tests(max_size: int = 1000) -> list[dict]:
    """Generate comprehensive binary search test cases."""
    tests = []

    # Empty array
    tests.append({
        'arr': [], 'target': 5, 'expected': -1,
        'description': 'Empty array'
    })

    # Single element - found
    tests.append({
        'arr': [5], 'target': 5, 'expected': 0,
        'description': 'Single element, found'
    })

    # Single element - not found
    tests.append({
        'arr': [5], 'target': 3, 'expected': -1,
        'description': 'Single element, not found'
    })

    # Two elements
    tests.append({
        'arr': [2, 5], 'target': 2, 'expected': 0,
        'description': 'Two elements, find first'
    })
    tests.append({
        'arr': [2, 5], 'target': 5, 'expected': 1,
        'description': 'Two elements, find second'
    })

    # Target at boundaries
    arr = list(range(1, 11))  # [1, 2, ..., 10]
    tests.append({
        'arr': arr, 'target': 1, 'expected': 0,
        'description': 'Target at first position'
    })
    tests.append({
        'arr': arr, 'target': 10, 'expected': 9,
        'description': 'Target at last position'
    })
    tests.append({
        'arr': arr, 'target': 5, 'expected': 4,
        'description': 'Target in middle'
    })

    # Target not in array
    tests.append({
        'arr': arr, 'target': 0, 'expected': -1,
        'description': 'Target less than all'
    })
    tests.append({
        'arr': arr, 'target': 11, 'expected': -1,
        'description': 'Target greater than all'
    })

    # Large array
    large_arr = list(range(max_size))
    tests.append({
        'arr': large_arr, 'target': max_size // 2, 'expected': max_size // 2,
        'description': f'Large array ({max_size} elements)'
    })

    return tests


# ============================================
# GRAPH TEST GENERATOR
# ============================================

def generate_graph_tests() -> list[dict]:
    """Generate comprehensive graph algorithm test cases."""
    tests = []

    # Empty graph
    tests.append({'graph': {}, 'description': 'Empty graph'})

    # Single node
    tests.append({'graph': {0: []}, 'description': 'Single isolated node'})

    # Single node with self-loop
    tests.append({'graph': {0: [0]}, 'description': 'Single node with self-loop'})

    # Two nodes, connected
    tests.append({'graph': {0: [1], 1: [0]}, 'description': 'Two connected nodes'})

    # Two nodes, disconnected
    tests.append({'graph': {0: [], 1: []}, 'description': 'Two disconnected nodes'})

    # Linear chain (n = 5)
    tests.append({
        'graph': {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]},
        'description': 'Linear chain of 5 nodes'
    })

    # Complete graph (n = 4)
    tests.append({
        'graph': {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]},
        'description': 'Complete graph K4'
    })

    # Tree
    tests.append({
        'graph': {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]},
        'description': 'Binary tree'
    })

    # Graph with cycle
    tests.append({
        'graph': {0: [1], 1: [2], 2: [0]},  # Creates cycle
        'description': 'Simple cycle'
    })

    # Disconnected components
    tests.append({
        'graph': {
            0: [1], 1: [0],  # Component 1
            2: [3], 3: [2],  # Component 2
            4: []            # Isolated node
        },
        'description': 'Three disconnected components'
    })

    return tests


# ============================================
# DP TEST GENERATOR
# ============================================

def generate_fibonacci_tests() -> list[dict]:
    """Generate test cases for Fibonacci (classic DP problem)."""
    return [
        {'n': 0, 'expected': 0, 'description': 'Base case n=0'},
        {'n': 1, 'expected': 1, 'description': 'Base case n=1'},
        {'n': 2, 'expected': 1, 'description': 'First addition n=2'},
        {'n': 10, 'expected': 55, 'description': 'Medium n=10'},
        {'n': 20, 'expected': 6765, 'description': 'Larger n=20'},
        {'n': 45, 'expected': 1134903170, 'description': 'Stress test n=45'},
    ]


def generate_coin_change_tests() -> list[dict]:
    """Generate test cases for coin change problem."""
    return [
        {'coins': [1], 'amount': 0, 'expected': 0,
         'description': 'Zero amount'},
        {'coins': [1], 'amount': 5, 'expected': 5,
         'description': 'Only 1-cent coins'},
        {'coins': [2], 'amount': 3, 'expected': -1,
         'description': 'Impossible with even coins'},
        {'coins': [1, 2, 5], 'amount': 11, 'expected': 3,
         'description': 'Classic example: 5+5+1'},
        {'coins': [1, 5, 10, 25], 'amount': 100, 'expected': 4,
         'description': 'US coins making $1'},
        {'coins': [186, 419, 83, 408], 'amount': 6249, 'expected': 20,
         'description': 'Complex case'},
    ]
```

Here's a systematic workflow for testing any algorithm implementation, combining all the techniques we've covered:
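1. Trace the trivial cases by hand: empty input, single element, two elements.
2. Test boundary values: minimum and maximum sizes, plus values at and just beyond every range limit.
3. Cover the equivalence partitions: one representative input from each partition that applies to your problem.
4. Add algorithm-specific cases from the table above: sorted and reverse-sorted arrays, disconnected graphs, DP base cases, and so on.
5. Check key properties: length preserved, output ordered, inverse operations round-trip.
6. Stress test against a naive reference implementation on random inputs.
7. Performance test at increasing sizes to confirm the expected time complexity.

A minimal sketch of how these steps might be wired together, assuming the helpers defined earlier on this page live in one module; `solve` and `solve_naive` are placeholders for your own implementations:

```python
def solve(arr: list[int]) -> list[int]:
    """Placeholder for the implementation under test."""
    return sorted(arr)

def solve_naive(arr: list[int]) -> list[int]:
    """Placeholder for a slow, obviously correct reference."""
    return sorted(arr)

# Steps 1-4: trivial, boundary, partition, and algorithm-specific cases.
for case in generate_array_partition_tests():
    assert solve(case.input) == solve_naive(case.input), case.description

# Step 5: property checks on random inputs.
assert property_test(prop_sort_is_ordered,
                     [lambda: random_array_generator(20)])['failed'] == 0

# Step 6: stress test against the reference on many random inputs.
assert stress_test(solve, solve_naive,
                   lambda: random_array_generator(50))['failed'] == 0

# Step 7: confirm the expected growth rate at large sizes.
print(estimate_complexity(
    performance_test(solve, random_array_generator, [10**3, 10**4, 10**5])))
```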
In interviews, you typically have time for steps 1-5 (about 5 minutes total). Always verbalize your test cases: 'Let me test with empty input... now with a single element... now a typical case.' This demonstrates systematic thinking even if you don't catch all bugs.
Congratulations! You've completed the Edge Cases & Testing Strategies module. You now have a comprehensive toolkit for identifying edge cases, handling them robustly, and generating tests that verify your implementations. These skills will serve you in interviews, production development, and throughout your engineering career.