In the realm of data structures, few exhibit the elegant simplicity of tries when it comes to time complexity. Unlike hash tables that promise average-case O(1) with occasional worst-case degradation, or balanced trees that guarantee O(log n) but with constant factors, tries offer something distinctly different: O(m) time complexity for all fundamental operations, where m is the length of the key.
This isn't O(n), where n is the number of keys stored. It's O(m), where m is the length of the specific key you're working with. The distinction is profound: searching a trie that holds a million 10-character strings takes exactly the same number of steps as searching one that holds ten 10-character strings.
In this page, we will dissect this remarkable property, understand why it holds, and appreciate the implications for real-world applications where tries excel.
By the end of this page, you will deeply understand why trie operations run in O(m) time, how this differs from other data structures, and why this property makes tries uniquely suitable for certain classes of problems. You'll be able to analyze any trie operation and confidently state its time complexity.
To truly grasp trie time complexity, we must first understand what m represents and why it—rather than n (the number of stored keys)—dominates the analysis.
Definition of m: m is the number of characters in the specific key being inserted, searched for, or deleted—not the number of keys stored in the trie.
The Fundamental Insight:
Every trie operation processes exactly m nodes in the worst case—one node per character of the key. No more, no less. The structure of the trie, built from potentially millions of keys, does not add to this count. You descend through exactly m levels, performing a constant-time operation at each level.
Consider searching for the word "algorithm" in a dictionary. With a hash table, you hash the entire word (O(m)) and then (ideally) do O(1) lookup, but collisions can degrade this. With a balanced BST storing strings, you compare strings at each node, potentially comparing up to m characters at each of O(log n) nodes—yielding O(m log n). With a trie, you simply follow 9 edges (one per character), regardless of dictionary size. A dictionary with 10 words or 10 million words takes the exact same number of steps.
Breaking Down the Constant-Time Operations:
At each node in a trie, we perform one of these operations:

- Check whether a child exists for the current character
- Create a new child node for the character (during insert)
- Move to the child node for the current character
The crucial question: are these operations truly O(1)?
The answer depends on how children are stored in each node:
| Child Storage | Lookup Time | Insert Time | Space per Node |
|---|---|---|---|
| Array (fixed alphabet) | O(1) | O(1) | O(Σ) |
| Hash Map | O(1) average | O(1) average | O(k) where k = children count |
| Sorted Array + Binary Search | O(log Σ) | O(Σ) | O(k) |
| Linked List | O(k) | O(k) | O(k) |
Where Σ is the alphabet size and k is the actual number of children at a node.
For the standard implementations (array or hash map), child operations are O(1), making the overall operation O(m) where we perform m such operations.
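To make the first two rows of the table concrete, here is a minimal sketch (the class names and `getChild` helper are illustrative, not from the original text) contrasting an array-based node against a map-based node:

```typescript
// Array-based node: one fixed slot per letter. O(1) indexed lookup,
// but O(Σ) space per node even when most slots stay empty.
class ArrayTrieNode {
  // 26 slots for lowercase 'a'..'z'; index = charCode - 97.
  children: (ArrayTrieNode | null)[] = new Array(26).fill(null);
  isEndOfWord = false;

  getChild(c: string): ArrayTrieNode | null {
    return this.children[c.charCodeAt(0) - 97]; // O(1) array index
  }
}

// Map-based node: only allocates entries for children that exist.
// O(1) average lookup, O(k) space where k = actual child count.
class MapTrieNode {
  children: Map<string, MapTrieNode> = new Map();
  isEndOfWord = false;

  getChild(c: string): MapTrieNode | null {
    return this.children.get(c) ?? null; // O(1) average hash lookup
  }
}
```

Either layout keeps the per-character step O(1), which is all the O(m) analysis below requires.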
Let's rigorously analyze the insert operation to understand exactly why it runs in O(m) time.
The Algorithm:
```
insert(word):
    node = root
    for each character c in word:
        if node.children[c] does not exist:
            node.children[c] = new TrieNode()
        node = node.children[c]
    node.isEndOfWord = true
```
Step-by-Step Complexity Analysis:
| Operation | Occurrences | Cost Each | Total Cost |
|---|---|---|---|
| Initialize node = root | 1 | O(1) | O(1) |
| Loop iteration (for each c) | m | O(1) | O(m) |
| Check if child exists | m | O(1)* | O(m) |
| Create new node (worst case) | m | O(1) | O(m) |
| Navigate to child | m | O(1) | O(m) |
| Set isEndOfWord flag | 1 | O(1) | O(1) |
*Assuming array-based or hash map-based children storage.
Total time complexity: O(1) + O(m) + O(m) + O(m) + O(m) + O(1) = O(m)
Best Case vs Worst Case:
Unlike many data structures, the trie insert has the same time complexity in all cases:
Best case: The word shares all characters with an existing word, and we only traverse existing nodes. Still O(m) traversals.
Worst case: The word is completely new, and we create m new nodes. Still O(m) operations.
Average case: Some prefix exists, some nodes are new. Still O(m).
This uniformity is a significant advantage—there are no surprising slowdowns.
What About Node Creation Cost?
When using array-based children, each new node allocates an array of size Σ (alphabet size). This is O(Σ) per node, but Σ is a constant (e.g., 26 for lowercase English letters). Since we create at most m nodes, the total allocation cost is O(m × Σ) = O(m) when Σ is constant.
When using hash map-based children, node creation is O(1), making the analysis cleaner.
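We can observe the "at most m nodes" bound directly by instrumenting the constructor with an allocation counter. This sketch assumes the map-based `TrieNode` used throughout this page; the `nodesCreated` counter is added purely for illustration:

```typescript
// Counter incremented by every TrieNode allocation (for illustration only).
let nodesCreated = 0;

class TrieNode {
  children: Map<string, TrieNode> = new Map();
  isEndOfWord = false;
  constructor() { nodesCreated++; }
}

class Trie {
  root = new TrieNode();
  insert(word: string): void {
    let node = this.root;
    for (const char of word) {
      if (!node.children.has(char)) {
        node.children.set(char, new TrieNode()); // at most m allocations
      }
      node = node.children.get(char)!;
    }
    node.isEndOfWord = true;
  }
}

const trie = new Trie();   // allocates the root
nodesCreated = 0;          // reset: count only insert-driven allocations
trie.insert("algorithm");  // brand-new word: creates m = 9 nodes
// nodesCreated === 9
trie.insert("algorithms"); // shares a 9-character prefix: creates only 1 node
// nodesCreated === 10
```

The second insert shows the best case from above: existing prefix nodes are reused, so the allocation count drops while the traversal count stays at O(m).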
```typescript
class TrieNode {
  children: Map<string, TrieNode> = new Map();
  isEndOfWord: boolean = false;
}

class Trie {
  root: TrieNode = new TrieNode();

  /**
   * Insert a word into the trie.
   * Time Complexity: O(m) where m = word.length
   *   - We iterate through exactly m characters
   *   - Each iteration performs O(1) map operations
   *   - Total: O(m)
   *
   * Space Complexity: O(m) worst case for new nodes
   *   - If word shares no prefix, we create m new nodes
   *   - Each node is O(1) space (hash map starts empty)
   */
  insert(word: string): void {
    let node = this.root;                        // O(1)
    for (const char of word) {                   // Exactly m iterations
      if (!node.children.has(char)) {            // O(1) hash lookup
        node.children.set(char, new TrieNode()); // O(1) average
      }
      node = node.children.get(char)!;           // O(1) hash lookup
    }
    node.isEndOfWord = true;                     // O(1)
  }

  /**
   * Demonstration: Insert "algorithm" step by step
   *
   * m = 9 (length of "algorithm")
   *
   * Step 1: char = 'a' → check/create child, move to child
   * Step 2: char = 'l' → check/create child, move to child
   * Step 3: char = 'g' → check/create child, move to child
   * Step 4: char = 'o' → check/create child, move to child
   * Step 5: char = 'r' → check/create child, move to child
   * Step 6: char = 'i' → check/create child, move to child
   * Step 7: char = 't' → check/create child, move to child
   * Step 8: char = 'h' → check/create child, move to child
   * Step 9: char = 'm' → check/create child, move to child, mark end
   *
   * Total: 9 iterations = O(9) = O(m) ✓
   */
}
```

The search operation follows a nearly identical pattern to insert, but with a crucial difference: it never creates nodes, only traverses existing ones.
The Algorithm:
```
search(word):
    node = root
    for each character c in word:
        if node.children[c] does not exist:
            return false
        node = node.children[c]
    return node.isEndOfWord
```
Why Search is O(m):
The search algorithm performs the following operations:

- Initialize node = root: O(1), once
- For each of the (at most) m characters: one child lookup and one pointer move, O(1) each
- Final isEndOfWord check: O(1), once
Total: O(1) + m × O(1) + O(1) = O(m)
The Power of Independence from n:
Consider a real-world scenario: a dictionary application containing 171,476 words (the Oxford English Dictionary word count). Finding whether "serendipity" (11 characters) exists requires exactly 11 node traversals—the same as if the dictionary contained only 10 words.
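We can demonstrate this independence from n empirically. The sketch below (the `searchCounted` helper is illustrative, not from the original text) instruments search to report how many edges it follows, then compares a tiny trie against one padded with a thousand extra keys:

```typescript
class TrieNode {
  children: Map<string, TrieNode> = new Map();
  isEndOfWord = false;
}

class Trie {
  root = new TrieNode();

  insert(word: string): void {
    let node = this.root;
    for (const char of word) {
      if (!node.children.has(char)) node.children.set(char, new TrieNode());
      node = node.children.get(char)!;
    }
    node.isEndOfWord = true;
  }

  // Returns [found, edgesFollowed] so we can observe the traversal count.
  searchCounted(word: string): [boolean, number] {
    let node = this.root;
    let edges = 0;
    for (const char of word) {
      const next = node.children.get(char);
      if (!next) return [false, edges]; // early termination
      node = next;
      edges++;
    }
    return [node.isEndOfWord, edges];
  }
}

const small = new Trie();
["serendipity", "cat"].forEach(w => small.insert(w));

const large = new Trie();
["serendipity", "cat"].forEach(w => large.insert(w));
for (let i = 0; i < 1000; i++) large.insert("word" + i); // pad with extra keys

// Both tries follow exactly 11 edges for the 11-character key:
// small.searchCounted("serendipity") → [true, 11]
// large.searchCounted("serendipity") → [true, 11]
```

The extra thousand keys change the trie's shape elsewhere, but the path spelled by "serendipity" is still exactly 11 edges long.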
Comparison with Alternative Data Structures:
| Data Structure | Search Complexity | Notes |
|---|---|---|
| Trie | O(m) | Independent of n, no comparisons needed |
| Hash Table | O(m) average, O(n × m) worst | Must hash entire key; collisions can degrade |
| Balanced BST (strings) | O(m × log n) | Compare strings at O(log n) nodes |
| Sorted Array + Binary Search | O(m × log n) | String comparison at each binary search step |
| Unsorted Array/List | O(n × m) | Must compare with all n strings |
Hash tables are often cited as O(1) lookup, but for string keys, hashing requires reading all m characters—making it O(m). The "O(1)" claim assumes the key can be hashed in constant time, which isn't true for variable-length strings. Thus, for string operations, tries and hash tables have similar O(m) average-case lookup, but tries offer additional capabilities (prefix operations) that hash tables cannot match.
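To see why hashing a string key is O(m), here is a sketch of one standard string hash, FNV-1a (chosen here as an illustrative example; real hash tables may use different functions, but all must read every character):

```typescript
// FNV-1a: the loop visits every character of the key, so computing the
// hash is inherently O(m) — the same order as a full trie traversal.
function fnv1a(key: string): number {
  let hash = 0x811c9dc5; // FNV offset basis (32-bit)
  for (let i = 0; i < key.length; i++) { // m iterations, one per character
    hash ^= key.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193); // multiply by the 32-bit FNV prime
  }
  return hash >>> 0; // force unsigned 32-bit result
}
```

The often-quoted "O(1) lookup" counts only the bucket access after this loop; preparing the key already cost O(m).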
```typescript
class Trie {
  root: TrieNode = new TrieNode();

  /**
   * Search for completely matching word in the trie.
   * Time Complexity: O(m) where m = word.length
   *
   * Analysis:
   *   - Best case: First character doesn't exist → O(1)
   *   - Worst case: All characters exist → O(m)
   *   - Both are O(m) in big-O terms
   */
  search(word: string): boolean {
    let node = this.root;                 // O(1)
    for (const char of word) {            // At most m iterations
      if (!node.children.has(char)) {     // O(1) lookup
        return false;                     // Early termination possible
      }
      node = node.children.get(char)!;    // O(1) lookup
    }
    return node.isEndOfWord;              // O(1) - crucial check!
  }

  /**
   * Example trace: Searching "cat" in trie with ["car", "card", "cat", "category"]
   *
   * Iteration 1: char='c' → exists, move to 'c' node
   * Iteration 2: char='a' → exists, move to 'a' node
   * Iteration 3: char='t' → exists, move to 't' node
   * Check: isEndOfWord = true (because "cat" was inserted)
   * Return: true
   *
   * Total operations: 3 + 1 = O(3) = O(m) ✓
   */

  /**
   * Example trace: Searching "can" in same trie
   *
   * Iteration 1: char='c' → exists, move to 'c' node
   * Iteration 2: char='a' → exists, move to 'a' node
   * Iteration 3: char='n' → does NOT exist!
   * Return: false (early termination)
   *
   * Total operations: 3 = O(3) = O(m) ✓
   */
}
```

The startsWith operation is what truly distinguishes tries from hash tables. It answers: "Does any word in the trie begin with this prefix?" This is where tries demonstrate their unique power.
The Algorithm:
```
startsWith(prefix):
    node = root
    for each character c in prefix:
        if node.children[c] does not exist:
            return false
        node = node.children[c]
    return true  // If we reached here, the prefix exists
```
Key Observation:
The startsWith operation is essentially search without the final isEndOfWord check. We only care whether the path exists, not whether it represents a complete word.
Why This is O(m):
The analysis is identical to search:

- Traverse at most m nodes, one per prefix character: m × O(1)
- Each child lookup is O(1) with array-based or hash map-based storage
- Return true on reaching the end of the prefix, with no isEndOfWord check: O(1)
With a hash table, checking if any word starts with a given prefix requires iterating through all n stored words—O(n × m) in the worst case. With a trie, it's always O(m) regardless of how many words exist. This is the fundamental reason tries power autocomplete systems, spell checkers, and IP routing tables.
Extended Prefix Operations:
The O(m) prefix traversal enables several powerful operations:

- startsWith(prefix): does any word begin with this prefix?
- countWordsWithPrefix(prefix): how many words share this prefix?
- getWordsWithPrefix(prefix): list every word with this prefix (the core of autocomplete)
Each of these starts with the O(m) prefix traversal, then performs additional work proportional to the results—not to the total trie size.
```typescript
class Trie {
  root: TrieNode = new TrieNode();

  /**
   * Check if any word in the trie starts with given prefix.
   * Time Complexity: O(m) where m = prefix.length
   *
   * This is the operation that makes tries uniquely valuable.
   * Hash tables cannot do this efficiently.
   */
  startsWith(prefix: string): boolean {
    let node = this.root;
    for (const char of prefix) {
      if (!node.children.has(char)) {
        return false;
      }
      node = node.children.get(char)!;
    }
    return true; // Prefix path exists
  }

  /**
   * Navigate to the node representing a prefix.
   * Time Complexity: O(m) where m = prefix.length
   *
   * This helper enables all extended prefix operations.
   */
  private findPrefixNode(prefix: string): TrieNode | null {
    let node = this.root;
    for (const char of prefix) {
      if (!node.children.has(char)) {
        return null;
      }
      node = node.children.get(char)!;
    }
    return node;
  }

  /**
   * Count how many words start with given prefix.
   * Time Complexity: O(m + k) where:
   *   - m = prefix length (to reach prefix node)
   *   - k = number of nodes in subtree (to count descendants)
   *
   * Note: k is NOT n (total words). It's specific to this prefix.
   */
  countWordsWithPrefix(prefix: string): number {
    const prefixNode = this.findPrefixNode(prefix); // O(m)
    if (!prefixNode) return 0;
    return this.countWords(prefixNode); // O(k)
  }

  private countWords(node: TrieNode): number {
    let count = node.isEndOfWord ? 1 : 0;
    for (const child of node.children.values()) {
      count += this.countWords(child);
    }
    return count;
  }

  /**
   * Get all words starting with given prefix.
   * Time Complexity: O(m + k) where:
   *   - m = prefix length
   *   - k = total characters in all matching words
   *
   * Space Complexity: O(k) for storing results
   */
  getWordsWithPrefix(prefix: string): string[] {
    const prefixNode = this.findPrefixNode(prefix); // O(m)
    if (!prefixNode) return [];
    const results: string[] = [];
    this.collectWords(prefixNode, prefix, results); // O(k)
    return results;
  }

  private collectWords(node: TrieNode, currentWord: string, results: string[]): void {
    if (node.isEndOfWord) {
      results.push(currentWord);
    }
    for (const [char, child] of node.children) {
      this.collectWords(child, currentWord + char, results);
    }
  }
}
```

The delete operation is more nuanced than insert or search, but still maintains O(m) complexity. The challenge is deciding which nodes to actually remove versus which to merely unmark.
The Complexity of Deletion:
When deleting "cat" from a trie that also contains "catalog": every node on the path 'c'→'a'→'t' is shared with "catalog", so we can only unmark the 't' node's isEndOfWord flag; no node may be removed.

When deleting "catalog" from a trie that also contains "cat": the four nodes beyond "cat" ('a', 'l', 'o', 'g') serve no other word and can be physically removed, while 'c', 'a', 't' must remain.
Two Approaches:

- Lazy deletion: traverse to the word's last node and unmark its isEndOfWord flag, leaving all nodes in place. Simple and O(m), but dead nodes accumulate.
- Eager deletion: additionally remove nodes that no longer lead to any word, reclaiming memory as the recursion unwinds.
Eager Deletion Algorithm:
```
delete(word):
    return deleteHelper(root, word, 0)

deleteHelper(node, word, depth):
    if depth == word.length:
        if not node.isEndOfWord:
            return false                 // word doesn't exist
        node.isEndOfWord = false
        return node.children.isEmpty()   // can delete this node?

    char = word[depth]
    if char not in node.children:
        return false                     // word doesn't exist

    shouldDeleteChild = deleteHelper(node.children[char], word, depth + 1)
    if shouldDeleteChild:
        delete node.children[char]
        return node.children.isEmpty() and not node.isEndOfWord

    return false
```
Why O(m):
Even with eager deletion, we process at most m nodes:

- Downward pass: the recursion descends through at most m nodes to reach the word's end—O(m)
- Upward pass: the recursion unwinds through those same m nodes, deleting orphaned ones—O(m)
Total: O(m) + O(m) = O(2m) = O(m)
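The eager pseudocode above does the node removal; the lazy variant listed in the summary table at the end of this page is even simpler. Here is a minimal sketch (the `deleteLazy` name is illustrative), reusing the map-based node from this page:

```typescript
class TrieNode {
  children: Map<string, TrieNode> = new Map();
  isEndOfWord = false;
}

class Trie {
  root = new TrieNode();

  insert(word: string): void {
    let node = this.root;
    for (const char of word) {
      if (!node.children.has(char)) node.children.set(char, new TrieNode());
      node = node.children.get(char)!;
    }
    node.isEndOfWord = true;
  }

  search(word: string): boolean {
    let node = this.root;
    for (const char of word) {
      const next = node.children.get(char);
      if (!next) return false;
      node = next;
    }
    return node.isEndOfWord;
  }

  // Lazy delete: walk down at most m nodes and flip one flag.
  // No structural changes; dead nodes linger until a later cleanup pass.
  deleteLazy(word: string): boolean {
    let node = this.root;
    for (const char of word) {
      const next = node.children.get(char);
      if (!next) return false; // word not present
      node = next;
    }
    if (!node.isEndOfWord) return false; // path exists but is only a prefix
    node.isEndOfWord = false;
    return true;
  }
}
```

Deleting "cat" from a trie holding "cat" and "catalog" this way leaves every node intact: search("cat") becomes false while search("catalog") still succeeds.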
```typescript
class Trie {
  root: TrieNode = new TrieNode();

  /**
   * Delete a word from the trie (eager deletion).
   * Time Complexity: O(m) where m = word.length
   *
   * We traverse down to the word's end (O(m)), then traverse back
   * up potentially deleting nodes (O(m)). Total: O(2m) = O(m).
   */
  delete(word: string): boolean {
    return this.deleteHelper(this.root, word, 0);
  }

  private deleteHelper(node: TrieNode, word: string, depth: number): boolean {
    // Base case: reached end of word
    if (depth === word.length) {
      // Word doesn't exist if not marked as end
      if (!node.isEndOfWord) {
        return false;
      }
      node.isEndOfWord = false;
      // Return true if this node can be deleted
      // (no children and no longer marks a word end)
      return node.children.size === 0;
    }

    const char = word[depth];
    const childNode = node.children.get(char);

    // Character path doesn't exist
    if (!childNode) {
      return false;
    }

    // Recurse to next level
    const shouldDeleteChild = this.deleteHelper(childNode, word, depth + 1);

    // Delete the child node if it's now orphaned
    if (shouldDeleteChild) {
      node.children.delete(char);
      // This node can be deleted if:
      //   - It has no remaining children
      //   - It doesn't mark another word's end
      return node.children.size === 0 && !node.isEndOfWord;
    }

    return false;
  }
}

/**
 * Example: Delete "cat" from trie containing ["car", "cat", "catalog"]
 *
 * Before deletion:
 *       root
 *        |
 *        c
 *        |
 *        a
 *       / \
 *      r*  t*
 *          |
 *          a
 *          |
 *          l
 *          |
 *          o
 *          |
 *          g*
 *
 * After deleting "cat":
 *   - Navigate to 't' (depth 3)
 *   - Unmark 't' as word end
 *   - 't' still has child 'a', so don't delete 't'
 *   - Return false (no deletion needed)
 *
 * Result: Only the isEndOfWord flag changes, O(m) = O(3)
 */
```

While the O(m) complexity is theoretically clean, real-world performance involves additional factors that can significantly impact actual running time.
Factors Affecting Real-World Performance:

- Cache locality: trie traversal chases one pointer per character, and each hop can miss the cache; a hash table touches far fewer memory locations per lookup.
- Allocation overhead: every new trie node is a separate heap allocation.
- Per-node structure cost: hash-map children add constant-factor overhead to every O(1) step.
| Operation | Trie (Theoretical) | Trie (Practical) | Hash Table | Notes |
|---|---|---|---|---|
| Insert (short key) | O(m) | ~200ns | ~80ns | Hash table wins for simple insert |
| Insert (long key) | O(m) | ~500ns | ~150ns | Trie overhead compounds |
| Search (exact) | O(m) | ~180ns | ~60ns | Hash table optimized for this |
| Prefix search | O(m) | ~180ns | O(n × m) | Trie dominates completely |
| Autocomplete (k results) | O(m + k) | ~1ms | O(n × m) | Trie's killer feature |
While tries have O(m) complexity for all operations, the constant factors are higher than hash tables for exact-match operations. Choose tries when you need prefix operations or lexicographic ordering—not as a general-purpose string storage. For simple key-value storage with string keys, hash tables are usually faster in practice.
When O(m) Beats O(1) Average:
Despite the practical overhead, tries win in specific scenarios:
Worst-case Guarantees: Hash tables can degrade to O(n) with bad hash functions or adversarial input. Tries maintain O(m) always.
Prefix-Heavy Workloads: Any operation involving prefixes is dramatically faster with tries.
Ordered Operations: Tries naturally support lexicographic iteration. Hash tables require sorting.
Longest Common Prefix: Finding the LCP of all stored strings is O(total characters) with tries, but much harder with hash tables.
Incremental Matching: Processing a string character-by-character (like streaming input) maps naturally to trie traversal.
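The longest-common-prefix point above can be made concrete with a short sketch (the `longestCommonPrefix` method is illustrative, not from the original text): once all strings are in a trie, the LCP is simply the path you can walk while each node has exactly one child and ends no word.

```typescript
class TrieNode {
  children: Map<string, TrieNode> = new Map();
  isEndOfWord = false;
}

class Trie {
  root = new TrieNode();

  insert(word: string): void {
    let node = this.root;
    for (const char of word) {
      if (!node.children.has(char)) node.children.set(char, new TrieNode());
      node = node.children.get(char)!;
    }
    node.isEndOfWord = true;
  }

  // Walk down while the path is unbranched and ends no word: past a branch
  // point or a word end, characters are no longer shared by every string.
  longestCommonPrefix(): string {
    let node = this.root;
    let prefix = "";
    while (node.children.size === 1 && !node.isEndOfWord) {
      const [char, child] = node.children.entries().next().value!;
      prefix += char;
      node = child;
    }
    return prefix;
  }
}
```

For ["flower", "flow", "flight"], the walk follows 'f', then 'l', then stops at the branch between 'o' and 'i', yielding "fl". The total work is bounded by the length of the shortest string, not by n.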
Let's consolidate our understanding of trie time complexity:
| Operation | Time Complexity | Key Insight |
|---|---|---|
| Insert | O(m) | Create at most m nodes, one per character |
| Search | O(m) | Traverse exactly m nodes in successful search |
| StartsWith | O(m) | Same as search, but no end-of-word check |
| Delete (lazy) | O(m) | Traverse + toggle flag |
| Delete (eager) | O(m) | Traverse down + backtrack up |
| Count prefix matches | O(m + k) | m to find prefix, k for subtree traversal |
| List prefix matches | O(m + k) | m for prefix, k characters in results |
You now have a rigorous understanding of why trie operations run in O(m) time. This foundation is essential for making informed decisions about when to use tries. In the next page, we'll explore the other side of the equation: space complexity, where tries reveal their more challenging characteristics.