"Should I use an array or a hash map for storing child nodes?"
This question comes up in virtually every trie implementation discussion, interview, and code review. It's the single most impactful design decision you'll make when building a trie, yet the answer you usually hear is oversimplified: "Arrays for small alphabets, hash maps for large ones."
The reality is more nuanced. In this page, we'll provide the complete picture—examining both approaches in depth, analyzing their behavior under different conditions, and giving you a definitive framework for making the right choice in any scenario.
By completing this page, you will: (1) Deeply understand how arrays and hash maps work for trie children storage, (2) Know the precise performance characteristics of each approach, (3) Have a decision framework for choosing between them, (4) Be able to implement both approaches and hybrid strategies, and (5) Confidently answer this question in interviews and code reviews.
The array approach uses a fixed-size array where the index corresponds to the character. This creates a direct mapping from character to child node without any search or hashing.
How It Works:
```
Alphabet: a b c d e f g h i j k l m n o p q r s t u v w x y z
Index:    0 1 2 3 4 5 6 7 8 9 ... 25

To find child for 'c': children[2]
To find child for 'z': children[25]
```
Implementation Details:
```typescript
/**
 * Complete Array-Based Trie Node with Performance Analysis
 */
class ArrayTrieNode {
  /**
   * Fixed-size array for children
   * Each index corresponds to a character position in the alphabet
   * null means no child exists for that character
   */
  children: (ArrayTrieNode | null)[];

  isEndOfWord: boolean = false;

  // Configurable alphabet
  private alphabetStart: number; // e.g., 'a'.charCodeAt(0) = 97

  constructor(alphabetSize: number = 26, alphabetStart: string = 'a') {
    // Allocate the full array immediately
    this.children = new Array(alphabetSize).fill(null);
    this.alphabetStart = alphabetStart.charCodeAt(0);
  }

  /**
   * Convert character to array index
   * This is the critical operation - must be O(1) and fast
   */
  private charToIndex(char: string): number {
    return char.charCodeAt(0) - this.alphabetStart;
  }

  /**
   * Get child for character
   * Time: O(1) - single array access
   * Memory access: Single cache line (usually)
   */
  getChild(char: string): ArrayTrieNode | null {
    const index = this.charToIndex(char);
    // Bounds check (can be omitted if input is guaranteed valid)
    if (index < 0 || index >= this.children.length) {
      return null;
    }
    return this.children[index];
  }

  /**
   * Set child for character
   * Time: O(1) - single array assignment
   */
  setChild(char: string, node: ArrayTrieNode): void {
    const index = this.charToIndex(char);
    this.children[index] = node;
  }

  /**
   * Check if child exists
   * Time: O(1)
   */
  hasChild(char: string): boolean {
    return this.getChild(char) !== null;
  }

  /**
   * Count actual children - requires full scan
   * Time: O(|Σ|) - must check every slot
   */
  countChildren(): number {
    let count = 0;
    for (const child of this.children) {
      if (child !== null) count++;
    }
    return count;
  }

  /**
   * Iterate over actual children with their characters
   * Time: O(|Σ|) - must scan all slots
   */
  *iterateChildren(): Generator<[string, ArrayTrieNode]> {
    for (let i = 0; i < this.children.length; i++) {
      if (this.children[i] !== null) {
        const char = String.fromCharCode(this.alphabetStart + i);
        yield [char, this.children[i]!];
      }
    }
  }

  /**
   * Check if this is a leaf node
   * Time: O(|Σ|) - must scan to confirm no children
   */
  isLeaf(): boolean {
    return this.children.every(child => child === null);
  }

  /**
   * Memory analysis for this node
   */
  static analyzeMemory(alphabetSize: number): {
    arrayBytes: number;
    overheadBytes: number;
    totalBytes: number;
    usefulBytesIfFull: number;
    wastedIfEmpty: number;
  } {
    const pointerSize = 8;     // 64-bit pointers
    const objectOverhead = 24; // Typical JS object overhead
    const booleanSize = 4;     // Boolean with padding
    const intSize = 4;         // alphabetStart

    const arrayBytes = alphabetSize * pointerSize;
    const overheadBytes = objectOverhead + booleanSize + intSize;
    const totalBytes = arrayBytes + overheadBytes;

    return {
      arrayBytes,
      overheadBytes,
      totalBytes,
      usefulBytesIfFull: arrayBytes, // All pointers point to real nodes
      wastedIfEmpty: arrayBytes,     // All null pointers
    };
  }
}

// Memory analysis for common alphabet sizes
console.log("26-char (lowercase):", ArrayTrieNode.analyzeMemory(26));
// { arrayBytes: 208, totalBytes: 240, wastedIfEmpty: 208 }

console.log("52-char (case-sensitive):", ArrayTrieNode.analyzeMemory(52));
// { arrayBytes: 416, totalBytes: 448, wastedIfEmpty: 416 }

console.log("128-char (ASCII):", ArrayTrieNode.analyzeMemory(128));
// { arrayBytes: 1024, totalBytes: 1056, wastedIfEmpty: 1024 }
```

The hash map approach stores only the children that actually exist, using a hash table to map characters to child nodes. This provides flexibility and space efficiency at the cost of some access time.
How It Works:
```
Hash Map State:
  'c' → TrieNode1
  'a' → TrieNode2
  't' → TrieNode3

To find child for 'c': hash('c') → bucket → search for 'c' → TrieNode1
To find child for 'z': hash('z') → bucket → search for 'z' → not found
```
```typescript
/**
 * Complete Hash Map-Based Trie Node with Performance Analysis
 */
class HashMapTrieNode {
  /**
   * Hash map for children
   * Keys are single characters, values are child nodes
   * Only stores children that exist
   */
  children: Map<string, HashMapTrieNode>;

  isEndOfWord: boolean = false;

  constructor() {
    this.children = new Map();
  }

  /**
   * Get child for character
   * Time: O(1) average, O(k) worst case where k is bucket size
   * In practice: ~10-50ns due to hash computation and lookup
   */
  getChild(char: string): HashMapTrieNode | undefined {
    return this.children.get(char);
  }

  /**
   * Set child for character
   * Time: O(1) average, may trigger rehash (amortized O(1))
   */
  setChild(char: string, node: HashMapTrieNode): void {
    this.children.set(char, node);
  }

  /**
   * Check if child exists
   * Time: O(1) average
   */
  hasChild(char: string): boolean {
    return this.children.has(char);
  }

  /**
   * Delete a child
   * Time: O(1) average
   */
  deleteChild(char: string): boolean {
    return this.children.delete(char);
  }

  /**
   * Count actual children - direct property access
   * Time: O(1) - Map maintains size
   */
  countChildren(): number {
    return this.children.size;
  }

  /**
   * Iterate over actual children
   * Time: O(k) where k is actual children count
   * Much faster than array when sparse!
   */
  *iterateChildren(): Generator<[string, HashMapTrieNode]> {
    for (const entry of this.children) {
      yield entry;
    }
  }

  /**
   * Check if this is a leaf node
   * Time: O(1) - just check size
   */
  isLeaf(): boolean {
    return this.children.size === 0;
  }

  /**
   * Get all child characters (in insertion order for JS Map)
   */
  getChildChars(): string[] {
    return Array.from(this.children.keys());
  }

  /**
   * Memory analysis based on child count
   * Note: Actual memory varies significantly by JS engine
   */
  static analyzeMemory(childCount: number): {
    baseOverhead: number;
    perEntryBytes: number;
    totalBytes: number;
    vsArrayBytes: (alphabetSize: number) => number;
  } {
    // JavaScript Map overhead estimates (varies by engine)
    const objectOverhead = 24;
    const mapBaseOverhead = 48; // Empty Map object
    const booleanSize = 4;

    // Per-entry cost: key + value + internal pointers + hash metadata
    // Rough estimate: 40-60 bytes per entry
    const perEntryBytes = 48;

    const totalBytes =
      objectOverhead + mapBaseOverhead + booleanSize + (childCount * perEntryBytes);

    return {
      baseOverhead: objectOverhead + mapBaseOverhead + booleanSize,
      perEntryBytes,
      totalBytes,
      vsArrayBytes: (alphabetSize: number) => {
        const arrayTotal = objectOverhead + booleanSize + (alphabetSize * 8);
        return arrayTotal - totalBytes; // Positive = Map saves space
      }
    };
  }
}

// Compare memory at different child counts for 26-char alphabet
const scenarios = [
  { children: 1, desc: "Sparse (1 child)" },
  { children: 5, desc: "Low (5 children)" },
  { children: 13, desc: "Medium (13 children)" },
  { children: 20, desc: "High (20 children)" },
  { children: 26, desc: "Full (26 children)" },
];

console.log("Memory comparison: Hash Map vs Array (26-char alphabet)\n");
for (const { children, desc } of scenarios) {
  const mapMem = HashMapTrieNode.analyzeMemory(children);
  const savings = mapMem.vsArrayBytes(26);
  console.log(`${desc}:`);
  console.log(`  Map:   ~${mapMem.totalBytes} bytes`);
  console.log(`  Array: ~240 bytes (fixed)`);
  console.log(`  ${savings > 0 ? 'Map saves' : 'Array saves'}: ${Math.abs(savings)} bytes`);
}
```

Let's compare the two approaches across key performance dimensions with concrete numbers.
Operation Time Comparison:
| Operation | Array Time | Hash Map Time | Difference | Winner |
|---|---|---|---|---|
| Single child lookup | 1-2 ns | 10-30 ns | 5-15x | Array |
| Child lookup (warm cache) | 1 ns | 8-15 ns | 8-15x | Array |
| Child lookup (cold cache) | 50-100 ns | 50-100 ns | Similar | Tie |
| Insert new child | 1-2 ns | 15-40 ns | 7-20x | Array |
| Delete child | 1 ns | 10-25 ns | 10-25x | Array |
| Count children | O(|Σ|) scan | O(1) property | 26x+ for |Σ|=26 | Hash Map |
| Iterate k children | O(|Σ|) scan | O(k) direct | |Σ|/k ratio | Hash Map (if sparse) |
| Check if leaf | O(|Σ|) scan | O(1) | |Σ|x | Hash Map |
Space Comparison:
| Scenario | Children | Array (|Σ|=26) | Hash Map | Winner |
|---|---|---|---|---|
| Empty node | 0 | ~240 bytes | ~80 bytes | Hash Map (3x) |
| Single child | 1 | ~240 bytes | ~128 bytes | Hash Map (1.9x) |
| Sparse | 3 | ~240 bytes | ~176 bytes | Hash Map (1.4x) |
| Moderate | 8 | ~240 bytes | ~320 bytes | Array (1.3x) |
| Dense | 15 | ~240 bytes | ~560 bytes | Array (2.3x) |
| Full | 26 | ~240 bytes | ~800 bytes | Array (3.3x) |
For a 26-character alphabet, hash maps become more space-efficient when nodes have fewer than ~6-8 children on average. This crossover point shifts based on alphabet size and language-specific hash map overhead. For larger alphabets (ASCII, Unicode), the crossover favors hash maps even more.
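To see where that crossover comes from, solve base + perEntry·k < arrayTotal for k. The sketch below plugs in the rough byte estimates used on this page; they are estimates, not measurements, so treat the output as a range rather than a hard threshold.

```typescript
// Break-even child count: the largest k for which a hash-map node is smaller
// than a fixed 26-slot array node. Byte figures are the rough per-node
// estimates from this page, not exact measurements.
function breakEvenChildren(
  arrayNodeBytes: number,
  mapBaseBytes: number,
  perEntryBytes: number
): number {
  return Math.floor((arrayNodeBytes - mapBaseBytes) / perEntryBytes);
}

// With a ~240-byte array node and ~80 bytes of Map base overhead, the
// crossover lands anywhere from ~3 to ~8 children depending on the
// per-entry cost you assume for your engine or language.
for (const perEntry of [20, 32, 48]) {
  console.log(
    `per-entry ${perEntry}B -> map wins up to ${breakEvenChildren(240, 80, perEntry)} children`
  );
}
// per-entry 20B -> map wins up to 8 children
// per-entry 32B -> map wins up to 5 children
// per-entry 48B -> map wins up to 3 children
```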
```typescript
/**
 * Side-by-side performance comparison
 * Run this to see actual numbers on your hardware
 */
function runComparison() {
  const ITERATIONS = 1_000_000;
  const ALPHABET = 'abcdefghijklmnopqrstuvwxyz';

  // Prepare test nodes
  const arrayNode = new ArrayTrieNode();
  const hashNode = new HashMapTrieNode();

  // Add some children
  for (const char of 'acdfhjmptx') { // 10 children
    arrayNode.setChild(char, new ArrayTrieNode());
    hashNode.setChild(char, new HashMapTrieNode());
  }

  // Benchmark: Random child lookups
  const lookupChars = ALPHABET.split('');

  // Array lookups
  let arrayStart = performance.now();
  for (let i = 0; i < ITERATIONS; i++) {
    const char = lookupChars[i % 26];
    arrayNode.getChild(char);
  }
  let arrayTime = performance.now() - arrayStart;

  // Hash map lookups
  let hashStart = performance.now();
  for (let i = 0; i < ITERATIONS; i++) {
    const char = lookupChars[i % 26];
    hashNode.getChild(char);
  }
  let hashTime = performance.now() - hashStart;

  console.log(`Child Lookup (${ITERATIONS.toLocaleString()} iterations):`);
  console.log(`  Array:    ${arrayTime.toFixed(2)}ms (${(arrayTime / ITERATIONS * 1000000).toFixed(0)}ns/op)`);
  console.log(`  Hash Map: ${hashTime.toFixed(2)}ms (${(hashTime / ITERATIONS * 1000000).toFixed(0)}ns/op)`);
  console.log(`  Ratio: Hash Map is ${(hashTime / arrayTime).toFixed(1)}x slower\n`);

  // Benchmark: Iterate all children
  const ITER_COUNT = 100_000;

  arrayStart = performance.now();
  for (let i = 0; i < ITER_COUNT; i++) {
    for (const _ of arrayNode.iterateChildren()) { /* consume */ }
  }
  arrayTime = performance.now() - arrayStart;

  hashStart = performance.now();
  for (let i = 0; i < ITER_COUNT; i++) {
    for (const _ of hashNode.iterateChildren()) { /* consume */ }
  }
  hashTime = performance.now() - hashStart;

  console.log(`Child Iteration (${ITER_COUNT.toLocaleString()} iterations):`);
  console.log(`  Array:    ${arrayTime.toFixed(2)}ms (scans all 26 slots)`);
  console.log(`  Hash Map: ${hashTime.toFixed(2)}ms (visits only 10 children)`);
  console.log(`  Ratio: ${(arrayTime / hashTime).toFixed(1)}x difference\n`);

  // Benchmark: Check if leaf
  const LEAF_COUNT = 1_000_000;

  arrayStart = performance.now();
  for (let i = 0; i < LEAF_COUNT; i++) {
    arrayNode.isLeaf();
  }
  arrayTime = performance.now() - arrayStart;

  hashStart = performance.now();
  for (let i = 0; i < LEAF_COUNT; i++) {
    hashNode.isLeaf();
  }
  hashTime = performance.now() - hashStart;

  console.log(`Leaf Check (${LEAF_COUNT.toLocaleString()} iterations):`);
  console.log(`  Array:    ${arrayTime.toFixed(2)}ms (O(|Σ|) scan)`);
  console.log(`  Hash Map: ${hashTime.toFixed(2)}ms (O(1) size check)`);
  console.log(`  Ratio: Array is ${(arrayTime / hashTime).toFixed(1)}x slower`);
}

// Run: runComparison();
```

Based on our analysis, here's a systematic framework for choosing between array and hash map implementations.
Primary Decision Factors:
| Factor | Favor Array When... | Favor Hash Map When... |
|---|---|---|
| Alphabet Size | |Σ| ≤ 52 (letters) | |Σ| > 52 or variable |
| Node Density | Average >6 children per node | Average <5 children per node |
| Operation Mix | Lookup-heavy (>90% lookups) | Iteration-heavy (prefix search) |
| Memory Budget | Abundant (server context) | Constrained (browser, mobile) |
| Latency Requirements | Tight P99 (microseconds) | Relaxed (milliseconds OK) |
| Character Set | Fixed, known alphabet | Dynamic, user-defined |
| Codebase Style | Performance-critical systems code | Flexibility-first application code |
Quick Decision Flowchart:
1. Is |Σ| > 128 (beyond ASCII)?
→ YES: Use Hash Map (array is impractical)
→ NO: Continue to 2
2. Is |Σ| ≤ 4 (tiny alphabet like DNA)?
→ YES: Use Array (overhead negligible)
→ NO: Continue to 3
3. Do you need to support arbitrary/unknown characters?
→ YES: Use Hash Map (arrays can't handle unknowns)
→ NO: Continue to 4
4. Is memory a strict constraint?
→ YES: Use Hash Map (space-adaptive)
→ NO: Continue to 5
5. Is sub-microsecond latency required?
→ YES: Use Array (fastest access)
→ NO: Continue to 6
6. Are prefix operations (autocomplete) dominant?
→ YES: Use Hash Map (O(k) iteration)
→ NO: Continue to 7
7. For |Σ| = 26 (lowercase English):
→ Default to Array (simpler, fast enough)
→ Switch to Hash Map if profiling shows sparsity
If you're unsure, start with a hash map. It's more flexible, adapts to sparsity, handles any character, and the performance difference rarely matters except in the most demanding scenarios. You can always optimize later based on profiling.
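For reference, the flowchart can be collapsed into a small helper function. This is a sketch only; the function and parameter names are made up for illustration, and the thresholds simply mirror the steps above.

```typescript
// Hypothetical helper that mirrors the decision flowchart above.
interface StorageDecisionInput {
  alphabetSize: number;              // |Σ|
  supportsArbitraryChars: boolean;   // unknown / user-defined characters
  memoryConstrained: boolean;        // strict memory budget
  needsSubMicrosecondLatency: boolean;
  prefixOpsDominant: boolean;        // autocomplete / iteration heavy
}

function chooseChildStorage(input: StorageDecisionInput): 'array' | 'hash-map' {
  if (input.alphabetSize > 128) return 'hash-map';      // 1. beyond ASCII
  if (input.alphabetSize <= 4) return 'array';           // 2. tiny alphabet (e.g., DNA)
  if (input.supportsArbitraryChars) return 'hash-map';   // 3. unknown characters
  if (input.memoryConstrained) return 'hash-map';        // 4. strict memory budget
  if (input.needsSubMicrosecondLatency) return 'array';  // 5. latency-critical
  if (input.prefixOpsDominant) return 'hash-map';        // 6. iteration heavy
  return 'array';                                        // 7. default for |Σ| = 26
}

// Example: lowercase-only autocomplete index on a memory-constrained client
console.log(chooseChildStorage({
  alphabetSize: 26,
  supportsArbitraryChars: false,
  memoryConstrained: true,
  needsSubMicrosecondLatency: false,
  prefixOpsDominant: true,
})); // "hash-map"
```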
In production systems, hybrid approaches often outperform pure array or hash map implementations. Here are proven strategies:
Hybrid Strategy 1: Depth-Based Selection
Use arrays near the root (where branching is high) and hash maps deeper in the trie (where nodes are typically sparse).
```typescript
/**
 * Hybrid Strategy 1: Depth-Based Selection
 * Use arrays near root, hash maps deeper
 */
interface DepthAwareNode {
  depth: number;
  // Either array or map, never both
  arrayChildren?: (DepthAwareNode | null)[];
  mapChildren?: Map<string, DepthAwareNode>;
  isEndOfWord: boolean;
}

class DepthHybridTrie {
  private root: DepthAwareNode;
  private readonly arrayDepthThreshold: number;
  private readonly alphabetSize: number;

  constructor(alphabetSize: number = 26, arrayDepthThreshold: number = 3) {
    this.alphabetSize = alphabetSize;
    this.arrayDepthThreshold = arrayDepthThreshold;
    this.root = this.createNode(0);
  }

  private createNode(depth: number): DepthAwareNode {
    const useArray = depth < this.arrayDepthThreshold;
    return {
      depth,
      arrayChildren: useArray ? new Array(this.alphabetSize).fill(null) : undefined,
      mapChildren: useArray ? undefined : new Map(),
      isEndOfWord: false
    };
  }

  private charToIndex(char: string): number {
    return char.charCodeAt(0) - 97;
  }

  getChild(node: DepthAwareNode, char: string): DepthAwareNode | null {
    if (node.arrayChildren) {
      return node.arrayChildren[this.charToIndex(char)];
    }
    return node.mapChildren?.get(char) ?? null;
  }

  setChild(node: DepthAwareNode, char: string, child: DepthAwareNode): void {
    if (node.arrayChildren) {
      node.arrayChildren[this.charToIndex(char)] = child;
    } else {
      node.mapChildren!.set(char, child);
    }
  }

  insert(word: string): void {
    let current = this.root;
    for (let i = 0; i < word.length; i++) {
      const char = word[i];
      let child = this.getChild(current, char);
      if (!child) {
        child = this.createNode(current.depth + 1);
        this.setChild(current, char, child);
      }
      current = child;
    }
    current.isEndOfWord = true;
  }
}

/**
 * Hybrid Strategy 2: Adaptive Conversion
 * Start with hash map, convert to array when dense enough
 */
class AdaptiveNode {
  private static readonly CONVERSION_THRESHOLD = 0.4; // 40% density

  private mode: 'map' | 'array' = 'map';
  private mapChildren: Map<string, AdaptiveNode> = new Map();
  private arrayChildren: (AdaptiveNode | null)[] | null = null;
  isEndOfWord: boolean = false;

  getChild(char: string): AdaptiveNode | null {
    if (this.mode === 'array') {
      return this.arrayChildren![char.charCodeAt(0) - 97];
    }
    return this.mapChildren.get(char) ?? null;
  }

  setChild(char: string, node: AdaptiveNode): void {
    if (this.mode === 'array') {
      this.arrayChildren![char.charCodeAt(0) - 97] = node;
    } else {
      this.mapChildren.set(char, node);
      this.checkConversion();
    }
  }

  private checkConversion(): void {
    if (this.mode !== 'map') return;
    const density = this.mapChildren.size / 26;
    if (density >= AdaptiveNode.CONVERSION_THRESHOLD) {
      this.convertToArray();
    }
  }

  private convertToArray(): void {
    this.arrayChildren = new Array(26).fill(null);
    for (const [char, node] of this.mapChildren) {
      this.arrayChildren[char.charCodeAt(0) - 97] = node;
    }
    this.mapChildren.clear();
    this.mode = 'array';
  }

  getMode(): 'map' | 'array' {
    return this.mode;
  }
}

/**
 * Hybrid Strategy 3: Size-Tiered Storage
 * Different structures for different expected sizes
 */
class TieredNode {
  private tier: 'single' | 'few' | 'many';

  // Tier 1: Single child (common for word endings)
  private singleChar?: string;
  private singleChild?: TieredNode;

  // Tier 2: Few children (2-5) - small array of pairs
  private fewChildren?: { char: string; node: TieredNode }[];

  // Tier 3: Many children - full hash map
  private manyChildren?: Map<string, TieredNode>;

  isEndOfWord: boolean = false;

  constructor() {
    this.tier = 'single';
  }

  getChild(char: string): TieredNode | null {
    switch (this.tier) {
      case 'single':
        return this.singleChar === char ? this.singleChild! : null;
      case 'few': {
        const found = this.fewChildren!.find(c => c.char === char);
        return found?.node ?? null;
      }
      case 'many':
        return this.manyChildren!.get(char) ?? null;
    }
  }

  setChild(char: string, node: TieredNode): void {
    switch (this.tier) {
      case 'single':
        if (!this.singleChar) {
          this.singleChar = char;
          this.singleChild = node;
        } else {
          // Upgrade to 'few' tier
          this.fewChildren = [
            { char: this.singleChar, node: this.singleChild! },
            { char, node }
          ];
          this.singleChar = undefined;
          this.singleChild = undefined;
          this.tier = 'few';
        }
        break;
      case 'few': {
        const existing = this.fewChildren!.findIndex(c => c.char === char);
        if (existing >= 0) {
          this.fewChildren![existing].node = node;
        } else if (this.fewChildren!.length < 5) {
          this.fewChildren!.push({ char, node });
        } else {
          // Upgrade to 'many' tier
          this.manyChildren = new Map();
          for (const child of this.fewChildren!) {
            this.manyChildren.set(child.char, child.node);
          }
          this.manyChildren.set(char, node);
          this.fewChildren = undefined;
          this.tier = 'many';
        }
        break;
      }
      case 'many':
        this.manyChildren!.set(char, node);
        break;
    }
  }

  getTier(): string {
    return this.tier;
  }
}
```

Hybrid approaches add implementation complexity. Use them only when profiling shows meaningful benefit. For most applications, a simple hash map implementation is sufficient and much easier to maintain.
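As a quick illustration of Strategy 2, here is a short usage sketch of the AdaptiveNode class defined above: with the 0.4 density threshold over a 26-letter alphabet, the node flips from map to array storage once it gains its 11th child (11/26 ≈ 0.42).

```typescript
// Demonstrates adaptive conversion: the node starts in 'map' mode and
// switches to 'array' mode once child density crosses the 40% threshold.
const node = new AdaptiveNode();
const letters = 'abcdefghijklmnopqrstuvwxyz';

for (let i = 0; i < 12; i++) {
  node.setChild(letters[i], new AdaptiveNode());
  console.log(`${i + 1} children -> mode: ${node.getMode()}`);
}
// 1..10 children -> mode: map   (10 / 26 ≈ 0.38 < 0.40)
// 11+ children   -> mode: array (11 / 26 ≈ 0.42 ≥ 0.40)
```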
The array vs hash map decision is influenced by language-specific characteristics. Here's guidance for common languages:
JavaScript/TypeScript:
- `Map` is well-optimized; generally prefer it for flexibility
- `Object` (as a hash map) has edge cases with numeric keys
- `Uint8Array`/`Int32Array` work for index-based approaches

Python:
- `dict` is extremely fast and memory-efficient
- `list` (as array) is flexible but has object overhead
- `dict` is almost always the right choice
- Use `__slots__` for class-based nodes to reduce memory

Java:
- `HashMap` has significant overhead per entry
- `EnumMap` works well for enum-based alphabets
- `TIntObjectHashMap` (Trove) for primitive keys

| Language | Small Alphabet (≤26) | Large Alphabet (>26) | Notes |
|---|---|---|---|
| JavaScript/TS | Array or Map (both fine) | Map | Map has excellent JS optimization |
| Python | dict (easier) | dict | Python dict is highly optimized |
| Java | Array (prefer) | HashMap | HashMap overhead is significant |
| C++ | array/vector | unordered_map | std::array is stack-allocated |
| Rust | Vec or [T; N] | HashMap | Arrays can be stack-allocated |
| Go | slice or array | map | Consider struct with array field |
| C# | Array | Dictionary | Array lookup is heavily optimized |
"""Python-optimized trie node using dict and __slots__""" class OptimizedTrieNode: """ Memory-efficient trie node using __slots__ Reduces per-object overhead significantly """ __slots__ = ('children', 'is_end_of_word') def __init__(self): self.children: dict[str, 'OptimizedTrieNode'] = {} self.is_end_of_word: bool = False def get_child(self, char: str) -> 'OptimizedTrieNode | None': return self.children.get(char) def set_child(self, char: str, node: 'OptimizedTrieNode') -> None: self.children[char] = node def has_child(self, char: str) -> bool: return char in self.children class OptimizedTrie: """Complete trie using optimized nodes""" __slots__ = ('root',) def __init__(self): self.root = OptimizedTrieNode() def insert(self, word: str) -> None: node = self.root for char in word: if char not in node.children: node.children[char] = OptimizedTrieNode() node = node.children[char] node.is_end_of_word = True def search(self, word: str) -> bool: node = self.root for char in word: if char not in node.children: return False node = node.children[char] return node.is_end_of_word def starts_with(self, prefix: str) -> bool: node = self.root for char in prefix: if char not in node.children: return False node = node.children[char] return True # Memory comparisonimport sys class StandardNode: def __init__(self): self.children = {} self.is_end_of_word = False optimized = OptimizedTrieNode()standard = StandardNode() print(f"Optimized node size: {sys.getsizeof(optimized)} bytes")print(f"Standard node size: {sys.getsizeof(standard)} bytes")# Optimized is typically 30-50% smallerThe array vs hash map question comes up frequently in interviews and code reviews. Here's how to handle it confidently.
In Interviews:
State your assumption first: "I'll use a hash map for children since it's flexible. If we know the alphabet is limited to lowercase letters, I'd switch to an array for better constant factors."
Show you understand the trade-off: "Arrays give O(1) access with better constants, but hash maps are more space-efficient for sparse nodes and handle any character set."
Ask clarifying questions: What characters can appear in the input? Is the alphabet fixed, or could it grow? Roughly how many words will be stored, and is memory a concern?
Default to hash map implementation: It's more general, handles edge cases, and interviewers rarely penalize the small performance difference.
In Code Reviews:
If you see an array-based implementation: confirm the alphabet really is fixed and small, that character-to-index conversion is bounds-checked (or input is validated upstream), and that iteration-heavy operations such as leaf checks and prefix enumeration aren't on a hot path.
If you see a hash map implementation: confirm that child lookup isn't the dominant cost in a latency-critical path, and that the flexibility is actually being used (arbitrary characters, sparse nodes, or a constrained memory budget).
Red flags in either case: hard-coded 'a' offsets in code that must handle mixed case or Unicode, missing bounds checks on index conversion, and switching structures "for performance" without profiling data.
"For this implementation, I'll use a hash map because it's more flexible—it handles any character and adapts to sparsity. If we profile and find child lookup is a bottleneck, and we're certain our alphabet is small and fixed, we could switch to an array for better constant factors. But let's not optimize prematurely."
The array vs hash map decision is fundamental to trie implementation, but it's not as complex as it might seem. Let's consolidate the key points:
| Use Array When... | Use Hash Map When... |
|---|---|
| Alphabet size ≤ 26-52 | Alphabet size > 52 or unknown |
| Expected density > 25% | Expected density < 20% |
| Lookup latency is critical | Space efficiency is critical |
| Input is well-controlled | Input may contain surprises |
| Iteration is rare | Prefix search is common |
Congratulations! You've completed the Trie Node Representation module. You now understand how to design trie nodes from first principles, choose between arrays and hash maps with confidence, navigate the full space-time trade-off landscape, and implement production-quality trie structures. This knowledge forms the foundation for all advanced trie techniques covered in subsequent modules.