Every time your application recalculates the same result for the same input, it wastes CPU cycles, increases latency, and burns through resources unnecessarily. In performance-critical systems, this redundancy compounds into observable slowdowns, higher infrastructure costs, and degraded user experience.
Memoization is the technique of caching the results of expensive function calls and returning the cached result when the same inputs occur again. It transforms pure functions into self-optimizing units that remember their past computations.
This isn't merely an optimization trick—it's a fundamental pattern that appears throughout software engineering, from dynamic programming algorithms to React's useMemo hook to database query caching. Understanding memoization deeply enables you to identify opportunities for dramatic performance improvements across your entire codebase.
By the end of this page, you will understand the theoretical foundations of memoization, implement memoization in multiple programming paradigms, recognize when memoization is appropriate versus harmful, and apply advanced memoization strategies in production systems.
Memoization derives from the Latin word memorandum ("to be remembered"). The term was coined by Donald Michie in 1968 to describe the automatic storage of function results. At its core, memoization exploits a fundamental property of pure functions: given the same inputs, they always produce the same outputs.
The Pure Function Requirement:
For memoization to work correctly, the function being memoized must be referentially transparent—its output must depend solely on its inputs, with no side effects. Consider these three functions:
```typescript
// ✅ PURE - Safe to memoize
function calculateTax(income: number, rate: number): number {
  return income * rate;
}

// ❌ IMPURE - Dangerous to memoize
let callCount = 0;
function calculateTaxWithSideEffect(income: number, rate: number): number {
  callCount++; // Side effect!
  console.log(`Call #${callCount}`); // Another side effect!
  return income * rate;
}

// ❌ IMPURE - Output depends on external state
function calculateTaxWithExternalDependency(income: number): number {
  const currentRate = fetchCurrentTaxRate(); // External dependency
  return income * currentRate;
}
```

Memoizing impure functions creates subtle bugs. If the external state changes but cached results remain, your application serves stale data. This is particularly dangerous because such bugs often look correct during testing but fail unpredictably in production when the underlying assumptions change.
The Time-Space Tradeoff:
Memoization exemplifies the classic time-space tradeoff in computer science. You exchange memory (storing cached results) for time (avoiding recomputation). This tradeoff isn't always favorable:
| Scenario | Memoization Effect | Recommendation |
|---|---|---|
| Expensive computation, repeated inputs | Dramatic speedup | ✅ Strongly recommended |
| Cheap computation, repeated inputs | Marginal speedup, memory overhead | ⚠️ Profile first |
| Expensive computation, unique inputs | No speedup, memory bloat | ❌ Avoid |
| Recursive algorithms with overlapping subproblems | Exponential → Polynomial | ✅ Essential |
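The rows above hinge on how often inputs actually repeat, which is best measured rather than guessed. One way to do that is to instrument the memoizer with a miss counter — a hypothetical sketch (`countingMemoize` is not a standard utility):

```typescript
// Sketch: a memoizer that also counts cache misses, so you can measure
// the hit rate for a realistic input distribution before committing.
function countingMemoize<T, R>(fn: (arg: T) => R) {
  const cache = new Map<T, R>();
  let misses = 0;

  const memoized = (arg: T): R => {
    if (cache.has(arg)) return cache.get(arg)!;
    misses++; // the underlying function actually ran
    const result = fn(arg);
    cache.set(arg, result);
    return result;
  };

  return { memoized, stats: () => ({ misses }) };
}

const { memoized: square, stats } = countingMemoize((n: number) => n * n);
[2, 3, 2, 2, 3, 4].forEach(square);
console.log(stats()); // { misses: 3 } for 6 calls — a 50% hit rate
```

If the measured hit rate stays near zero (the "unique inputs" row), the cache only costs memory and should be removed.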
The simplest memoization implementation uses a dictionary (hash map) to store computed results. The function input becomes the cache key, and the computed result becomes the value.
Basic Single-Argument Memoization:
```typescript
// Generic memoization for single-argument functions
function memoize<T, R>(fn: (arg: T) => R): (arg: T) => R {
  const cache = new Map<T, R>();

  return (arg: T): R => {
    // Check cache first
    if (cache.has(arg)) {
      return cache.get(arg)!;
    }

    // Compute and cache result
    const result = fn(arg);
    cache.set(arg, result);
    return result;
  };
}

// Example: Expensive Fibonacci calculation
function slowFibonacci(n: number): number {
  if (n <= 1) return n;
  return slowFibonacci(n - 1) + slowFibonacci(n - 2);
}

// Memoized version - transforms O(2^n) to O(n)
const fastFibonacci = memoize(function fib(n: number): number {
  if (n <= 1) return n;
  return fastFibonacci(n - 1) + fastFibonacci(n - 2);
});

// Performance comparison
console.time('slow');
slowFibonacci(40); // Takes ~1-2 seconds
console.timeEnd('slow');

console.time('fast');
fastFibonacci(40); // Takes <1 millisecond
console.timeEnd('fast');
```

Multi-Argument Memoization:
When functions accept multiple arguments, creating cache keys becomes more complex. You must serialize arguments into a consistent key format:
```typescript
// Multi-argument memoization with custom key serialization
function memoizeMulti<T extends unknown[], R>(
  fn: (...args: T) => R,
  keySerializer: (...args: T) => string = (...args) => JSON.stringify(args)
): (...args: T) => R {
  const cache = new Map<string, R>();

  return (...args: T): R => {
    const key = keySerializer(...args);

    if (cache.has(key)) {
      return cache.get(key)!;
    }

    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
}

// Example: Distance calculation between coordinates
interface Point { x: number; y: number; }

const calculateDistance = memoizeMulti(
  (p1: Point, p2: Point): number => {
    console.log('Computing distance...');
    return Math.sqrt(
      Math.pow(p2.x - p1.x, 2) + Math.pow(p2.y - p1.y, 2)
    );
  },
  // Custom serializer for Point objects
  (p1, p2) => `${p1.x},${p1.y}->${p2.x},${p2.y}`
);

// First call computes
calculateDistance({x: 0, y: 0}, {x: 3, y: 4}); // Logs "Computing distance..."
// Second call returns cached
calculateDistance({x: 0, y: 0}, {x: 3, y: 4}); // No log, returns 5
```

Beware: JSON.stringify serializes object keys in insertion order, so it doesn't produce a consistent key for equivalent objects. The objects `{a: 1, b: 2}` and `{b: 2, a: 1}` stringify differently. For object arguments, either sort keys before serializing or use a deterministic serialization library such as fast-json-stable-stringify.
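One way to build the deterministic key mentioned above is to recursively sort object keys before serializing. A minimal sketch — fine for illustration, though a battle-tested library like fast-json-stable-stringify handles more edge cases:

```typescript
// Sketch: JSON serialization with recursively sorted object keys, so that
// {a: 1, b: 2} and {b: 2, a: 1} produce identical cache keys.
function stableStringify(value: unknown): string {
  if (value === null || typeof value !== 'object') {
    return JSON.stringify(value);
  }
  if (Array.isArray(value)) {
    return '[' + value.map(stableStringify).join(',') + ']';
  }
  const obj = value as Record<string, unknown>;
  const body = Object.keys(obj)
    .sort()
    .map((k) => JSON.stringify(k) + ':' + stableStringify(obj[k]));
  return '{' + body.join(',') + '}';
}

console.log(stableStringify({ a: 1, b: 2 }) === stableStringify({ b: 2, a: 1 })); // true
```

Passing `stableStringify` as the `keySerializer` makes the cache insensitive to property insertion order.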
Production systems require more sophisticated memoization strategies that handle cache size limits, expiration, and memory pressure.
LRU (Least Recently Used) Memoization:
Unbounded caches grow indefinitely, eventually consuming all available memory. LRU caching maintains a fixed-size cache by evicting the least recently accessed entries:
```typescript
class LRUCache<K, V> {
  private cache = new Map<K, V>();

  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    if (!this.cache.has(key)) return undefined;

    // Move to end (most recently used)
    const value = this.cache.get(key)!;
    this.cache.delete(key);
    this.cache.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    // Remove if exists (to update position)
    if (this.cache.has(key)) {
      this.cache.delete(key);
    }

    // Evict oldest if at capacity (Maps iterate in insertion order,
    // so the first key is the least recently used)
    if (this.cache.size >= this.maxSize) {
      const oldestKey = this.cache.keys().next().value!;
      this.cache.delete(oldestKey);
    }

    this.cache.set(key, value);
  }

  has(key: K): boolean {
    return this.cache.has(key);
  }
}

function memoizeLRU<T, R>(
  fn: (arg: T) => R,
  maxCacheSize: number = 100
): (arg: T) => R {
  const cache = new LRUCache<T, R>(maxCacheSize);

  return (arg: T): R => {
    if (cache.has(arg)) {
      return cache.get(arg)!;
    }

    const result = fn(arg);
    cache.set(arg, result);
    return result;
  };
}
```

Time-Based Expiration (TTL Memoization):
For data that becomes stale over time, combine memoization with time-to-live semantics:
```typescript
interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

function memoizeWithTTL<T, R>(
  fn: (arg: T) => R,
  ttlMs: number
): (arg: T) => R {
  const cache = new Map<T, CacheEntry<R>>();

  return (arg: T): R => {
    const now = Date.now();
    const entry = cache.get(arg);

    // Return cached if valid and not expired
    if (entry && entry.expiresAt > now) {
      return entry.value;
    }

    // Compute, cache with expiration
    const result = fn(arg);
    cache.set(arg, { value: result, expiresAt: now + ttlMs });
    return result;
  };
}

// Example: Cache user data for 5 minutes.
// Note: wrapping an async function caches the Promise itself, so a
// rejected Promise also stays cached until its TTL expires.
const getUserProfile = memoizeWithTTL(
  async (userId: string) => {
    return await database.users.findById(userId);
  },
  5 * 60 * 1000 // 5 minutes
);
```

In object-oriented systems, memoization integrates naturally with class designs. Instance-level memoization caches results per object, while class-level memoization shares caches across all instances.
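A minimal sketch of the class-level variant: one `static` Map shared by every instance. (The `GeoCoder` class and its normalization logic are illustrative assumptions, not from the original; the approach is only safe when the result depends on the arguments alone, never on per-instance state.)

```typescript
// Sketch: class-level memoization - a static cache shared across instances.
class GeoCoder {
  private static cache = new Map<number, number>();
  static computeCount = 0; // exposed only to demonstrate cache sharing

  normalizeLongitude(deg: number): number {
    const cached = GeoCoder.cache.get(deg);
    if (cached !== undefined) return cached;

    GeoCoder.computeCount++;
    // Wrap any angle into the [-180, 180) range
    const result = ((deg + 180) % 360 + 360) % 360 - 180;
    GeoCoder.cache.set(deg, result);
    return result;
  }
}

new GeoCoder().normalizeLongitude(540);
new GeoCoder().normalizeLongitude(540); // hit: the cache outlives instances
console.log(GeoCoder.computeCount); // 1
```

The tradeoff mirrors instance-level caching: a shared static cache survives individual objects, but it also needs explicit invalidation if the underlying rule ever changes, and it grows for the lifetime of the process unless bounded.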
Instance-Level Memoization:
```typescript
// Bracket shape assumed by the calculator below
interface TaxBracket {
  floor: number;
  ceiling: number;
  rate: number;
}

class TaxCalculator {
  private cache = new Map<string, number>();

  constructor(
    private readonly baseRate: number,
    private readonly brackets: TaxBracket[]
  ) {}

  calculateTax(income: number, deductions: number): number {
    const key = `${income}:${deductions}`;

    if (this.cache.has(key)) {
      return this.cache.get(key)!;
    }

    // Expensive calculation with bracket application
    const taxableIncome = Math.max(0, income - deductions);
    let tax = 0;
    let remainingIncome = taxableIncome;

    for (const bracket of this.brackets) {
      const taxableInBracket = Math.min(
        remainingIncome,
        bracket.ceiling - bracket.floor
      );
      tax += taxableInBracket * bracket.rate;
      remainingIncome -= taxableInBracket;
      if (remainingIncome <= 0) break;
    }

    this.cache.set(key, tax);
    return tax;
  }

  // Allow cache invalidation when brackets change
  clearCache(): void {
    this.cache.clear();
  }
}
```

Decorator-Based Memoization:
Modern languages support decorators that make memoization declarative and clean:
```typescript
// TypeScript (experimental) decorator for method memoization
function Memoize() {
  return function (
    target: any,
    propertyKey: string,
    descriptor: PropertyDescriptor
  ) {
    const originalMethod = descriptor.value;
    // Symbol keeps the per-instance cache hidden from normal property access
    const cacheKey = Symbol(`__memoize_${propertyKey}`);

    descriptor.value = function (this: any, ...args: any[]) {
      if (!this[cacheKey]) {
        this[cacheKey] = new Map<string, unknown>();
      }

      const key = JSON.stringify(args);

      if (this[cacheKey].has(key)) {
        return this[cacheKey].get(key);
      }

      const result = originalMethod.apply(this, args);
      this[cacheKey].set(key, result);
      return result;
    };

    return descriptor;
  };
}

// Usage - clean and declarative
class PricingService {
  @Memoize()
  calculateDiscount(customerId: string, orderTotal: number): number {
    // Complex discount logic...
    return this.fetchCustomerTier(customerId) * orderTotal * 0.1;
  }

  @Memoize()
  async fetchProductPrice(productId: string): Promise<number> {
    return await this.productRepository.getPrice(productId);
  }
}
```

You now understand the memoization pattern deeply—from its theoretical foundations in pure functions to practical implementation strategies including LRU and TTL caching. Next, we'll explore lazy loading with caching, combining deferred computation with result caching for optimal resource utilization.