Cache bugs are among the most challenging issues to debug in software systems. They're intermittent, often timing-dependent, and can make systems behave in ways that seem to violate basic logic. A user sees their old profile picture despite updating it. A product shows the wrong price for some customers but not others. An API response includes data that shouldn't exist anymore.
The fundamental challenge is that caching adds a layer of indirection between the source of truth and what users see. When something goes wrong in that layer, the symptoms often appear far from the actual cause. A cache key collision in one service might manifest as inconsistent data in a completely different service downstream.
This page equips you with systematic debugging methodology—a structured approach to diagnosing cache issues that replaces guesswork with investigation, and trial-and-error with targeted analysis.
By the end of this page, you will master a systematic approach to cache debugging, understand common cache failure patterns and their signatures, learn techniques for reproducing elusive cache bugs, and develop skills in cache forensics—reconstructing what happened from available evidence.
When facing a cache issue, resist the temptation to start changing things randomly. Instead, follow a structured approach that maximizes the information gained from each investigation step.
The Cache Debugging Methodology:
Step 1: Characterize the Symptom
Precise symptom characterization dramatically narrows the debugging search space. Poor characterization wastes hours investigating the wrong hypotheses.
| Vague Symptom | Precise Characterization |
|---|---|
| "Users see wrong data" | "Users see their avatar from 2 days ago despite updating it yesterday" |
| "Cache isn't working" | "Hit rate dropped from 85% to 40% starting at 14:30 UTC" |
| "Slow responses" | "P95 latency increased from 50ms to 800ms for /api/products endpoint" |
| "Inconsistent behavior" | "10% of requests to /cart return stale prices, consistent per-user for ~1 hour" |
- **Who** is affected? (All users, specific users, specific regions?)
- **What** is wrong? (Wrong data, stale data, missing data?)
- **When** did it start? (Specific time, after deployment, gradually?)
- **Where** does it manifest? (Which endpoint, which service?)
- **Why** might it be happening? (Initial hypothesis to test)
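One lightweight way to enforce this discipline is to capture the five questions as a structured record before investigation begins. The sketch below is illustrative: the `SymptomCharacterization` shape and the `isActionable` length heuristic are assumptions, not part of any standard incident tooling.

```typescript
// Hypothetical structure for capturing the five characterization questions.
// Field names are illustrative; adapt to your incident tooling.
interface SymptomCharacterization {
  who: string;        // Affected population: all users, a cohort, a region?
  what: string;       // Wrong data, stale data, missing data?
  when: string;       // Onset: exact time, after a deploy, gradual?
  where: string;      // Endpoint, service, or cache tier where it manifests
  hypothesis: string; // Initial, testable explanation
}

// Crude gate: reject one-word answers before investigation begins.
// The 10-character minimum is an arbitrary illustrative threshold.
function isActionable(c: SymptomCharacterization): boolean {
  return (Object.values(c) as string[]).every(v => v.trim().length > 10);
}

const vague: SymptomCharacterization = {
  who: 'users', what: 'wrong', when: '?', where: 'app', hypothesis: 'cache',
};

const precise: SymptomCharacterization = {
  who: 'Logged-in users in the EU region only',
  what: 'Avatar from 2 days ago shown despite update yesterday',
  when: 'Started 14:30 UTC, immediately after the v2.3 deploy',
  where: 'GET /api/users/:id/profile behind the CDN tier',
  hypothesis: 'Profile update path skips CDN purge for EU edge nodes',
};
```

Writing the characterization down before touching anything also gives you a record to compare against once the root cause is known.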
Most cache issues fall into recognizable patterns. Familiarizing yourself with these patterns enables faster diagnosis—you can match symptoms to known patterns rather than investigating from scratch every time.
| Pattern | Symptoms | Common Causes | Investigation Approach |
|---|---|---|---|
| Stale Data | Users see outdated information that should have been updated | Missing invalidation, TTL too long, invalidation race condition | Compare cache content to database; trace invalidation flow |
| Cache Stampede | Sudden spike in backend load when cache entry expires | No stampede protection, popular key expiration | Check for concurrent cache misses on same key; correlate with backend latency |
| Key Collision | Different entities return each other's data | Hash collision, incorrect key generation | Examine actual cache keys; verify key uniqueness across entities |
| Serialization Mismatch | Errors or garbled data on cache read | Schema change without cache flush, version mismatch | Inspect raw cached bytes; compare with expected serialization format |
| Partial Invalidation | Some related caches invalidated, others stale | Incomplete invalidation logic, missed dependencies | Map all caches affected by operation; verify each is invalidated |
| Cache Warming Failure | Low hit rate after deployment/restart | Warming script failure, insufficient warming time | Check warming logs; compare entry count before/after restart |
| Memory Pressure Eviction | Unpredictable evictions, hit rate decline | Cache undersized for workload, memory leak | Monitor memory usage trend; check eviction counts by reason |
| Connection Pool Exhaustion | Timeouts, increased latency, errors | Pool too small, connection leaks, slow queries | Check connection pool metrics; look for leak patterns |
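Several of these patterns (warming failure, memory-pressure eviction, stampede) first surface as a hit-rate change, so comparing a recent window against a baseline is often the quickest automated first check. A minimal sketch, where the `WindowStats` shape and the 0.15 drop threshold are illustrative assumptions:

```typescript
// Minimal hit-rate anomaly check: compare a recent window against a baseline.
// The 0.15 (15-point) threshold is an illustrative assumption, not a standard.
interface WindowStats { hits: number; misses: number; }

function hitRate(w: WindowStats): number {
  const total = w.hits + w.misses;
  return total === 0 ? 0 : w.hits / total;
}

// Returns a finding string when the recent window's hit rate has dropped
// more than `threshold` below the baseline, else null.
function detectHitRateDrop(
  baseline: WindowStats,
  recent: WindowStats,
  threshold = 0.15,
): string | null {
  const drop = hitRate(baseline) - hitRate(recent);
  if (drop <= threshold) return null;
  return `Hit rate dropped from ${(hitRate(baseline) * 100).toFixed(0)}% ` +
         `to ${(hitRate(recent) * 100).toFixed(0)}%`;
}
```

Feeding such findings into your alerting turns "cache isn't working" reports into the precise characterizations described above.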
Deep Dive: The Stale Data Pattern
Stale data is the most common cache issue. It occurs when the cache contains outdated information that should have been invalidated. The challenge is understanding why the invalidation didn't happen or wasn't effective.
Stale Data Investigation Checklist:
```typescript
// Investigation tool for stale data issues
class StaleDataInvestigator {
  constructor(
    private cache: Redis,
    private database: Database,
    private logger: Logger
  ) {}

  async investigate(entityType: string, entityId: string): Promise<Investigation> {
    const investigation: Investigation = {
      timestamp: new Date().toISOString(),
      entityType,
      entityId,
      findings: [],
      conclusion: 'unknown',
    };

    // Step 1: Get current cache state
    const cacheKey = `${entityType}:${entityId}`;
    const cachedRaw = await this.cache.get(cacheKey);
    const ttl = await this.cache.ttl(cacheKey);

    investigation.cacheState = {
      key: cacheKey,
      exists: cachedRaw !== null,
      ttlSeconds: ttl,
      value: cachedRaw ? JSON.parse(cachedRaw) : null,
    };

    if (!cachedRaw) {
      investigation.findings.push('Cache entry does not exist');
      investigation.conclusion = 'cache_miss_not_stale_data';
      return investigation;
    }

    // Step 2: Get current database state
    const dbValue = await this.database.findById(entityType, entityId);
    investigation.databaseState = {
      value: dbValue,
      lastUpdated: dbValue?.updatedAt,
    };

    // Step 3: Compare cache vs database
    const serializedDb = JSON.stringify(dbValue);
    const serializedCache = JSON.stringify(investigation.cacheState.value);

    if (serializedDb === serializedCache) {
      investigation.findings.push('Cache matches database - no staleness detected');
      investigation.conclusion = 'cache_consistent';
      return investigation;
    }

    investigation.findings.push('STALE DATA DETECTED: Cache differs from database');

    // Step 4: Determine cache age
    const cacheTimestamp = investigation.cacheState.value?._cachedAt;
    if (cacheTimestamp) {
      const cacheAge = Date.now() - new Date(cacheTimestamp).getTime();
      investigation.findings.push(`Cache entry is ${Math.round(cacheAge / 1000)}s old`);

      // Compare to last database update
      if (dbValue?.updatedAt) {
        const dbUpdateTime = new Date(dbValue.updatedAt).getTime();
        const cacheTime = new Date(cacheTimestamp).getTime();

        if (cacheTime < dbUpdateTime) {
          investigation.findings.push(
            `Cache was written BEFORE database update (stale by ${Math.round((dbUpdateTime - cacheTime) / 1000)}s)`
          );
          investigation.conclusion = 'invalidation_missed';
        } else {
          investigation.findings.push('Cache was written AFTER database update - possible race condition');
          investigation.conclusion = 'race_condition';
        }
      }
    }

    // Step 5: Check invalidation logs
    const recentInvalidations = await this.logger.query({
      message: 'cache_invalidation',
      key: cacheKey,
      timestamp: { $gt: new Date(Date.now() - 3600000) } // Last hour
    });

    investigation.invalidationHistory = recentInvalidations;

    if (recentInvalidations.length === 0) {
      investigation.findings.push('No invalidation attempts found in last hour');
    } else {
      investigation.findings.push(`Found ${recentInvalidations.length} invalidation attempt(s)`);
      recentInvalidations.forEach(inv => {
        investigation.findings.push(`  - ${inv.timestamp}: ${inv.result}`);
      });
    }

    // Step 6: Check for re-population after invalidation
    const cacheWrites = await this.logger.query({
      message: 'cache_set',
      key: cacheKey,
      timestamp: { $gt: new Date(Date.now() - 3600000) }
    });

    if (cacheWrites.length > 1) {
      investigation.findings.push('Multiple cache writes detected - possible re-population with stale data');
      investigation.cacheWriteHistory = cacheWrites;
    }

    return investigation;
  }

  async generateReport(investigation: Investigation): Promise<string> {
    const report = `=== STALE DATA INVESTIGATION REPORT ===
Timestamp: ${investigation.timestamp}
Entity: ${investigation.entityType}:${investigation.entityId}

CACHE STATE:
  Key: ${investigation.cacheState.key}
  Exists: ${investigation.cacheState.exists}
  TTL: ${investigation.cacheState.ttlSeconds}s remaining

DATABASE STATE:
  Last Updated: ${investigation.databaseState?.lastUpdated || 'unknown'}

FINDINGS:
${investigation.findings.map(f => '  • ' + f).join('\n')}

CONCLUSION: ${investigation.conclusion}`;

    return report;
  }
}
```

Not every bug that looks like a cache issue is actually caused by the cache.
Before deep-diving into cache internals, verify that the cache is actually the culprit.
The Cache Bypass Test:
The most definitive way to isolate a cache issue is to temporarily bypass the cache and observe whether the problem persists. If the problem disappears when bypassing cache, the cache is definitely involved. If it persists, look elsewhere.
```typescript
// Production-safe cache bypass for debugging
class DebuggableCache<T> implements Cache<T> {
  private bypassedKeys: Set<string> = new Set();
  private globalBypass: boolean = false;

  constructor(
    private innerCache: Cache<T>,
    private metrics: CacheMetrics
  ) {}

  async get(key: string): Promise<T | null> {
    // Check for bypass conditions
    if (this.globalBypass || this.bypassedKeys.has(key)) {
      this.metrics.recordBypass(key);
      return null; // Force cache miss
    }
    return this.innerCache.get(key);
  }

  async set(key: string, value: T, ttlSeconds?: number): Promise<void> {
    if (this.globalBypass || this.bypassedKeys.has(key)) {
      return; // Skip caching entirely
    }
    await this.innerCache.set(key, value, ttlSeconds);
  }

  // Debug endpoints (protect with authentication in production!)
  async bypassKey(key: string, durationMs: number = 60000): Promise<void> {
    this.bypassedKeys.add(key);
    console.log(`[CACHE DEBUG] Bypassing key: ${key} for ${durationMs}ms`);

    setTimeout(() => {
      this.bypassedKeys.delete(key);
      console.log(`[CACHE DEBUG] Re-enabling key: ${key}`);
    }, durationMs);
  }

  async bypassPatternTemporarily(pattern: string, durationMs: number = 60000): Promise<void> {
    // For pattern-based bypassing, set a flag that get() checks
    console.log(`[CACHE DEBUG] Bypassing pattern: ${pattern} for ${durationMs}ms`);
  }

  async setGlobalBypass(enabled: boolean): Promise<void> {
    this.globalBypass = enabled;
    console.log(`[CACHE DEBUG] Global bypass: ${enabled}`);
  }
}

// API endpoint for debugging (PROTECT THIS!)
app.post('/debug/cache/bypass', authenticate, requireAdmin, async (req, res) => {
  const { key, pattern, duration, global } = req.body;

  if (global !== undefined) {
    await cache.setGlobalBypass(global);
    return res.json({ message: `Global bypass set to: ${global}` });
  }

  if (key) {
    await cache.bypassKey(key, duration || 60000);
    return res.json({ message: `Bypassing key: ${key}` });
  }

  res.status(400).json({ error: 'Specify key, pattern, or global' });
});
```

Additional Isolation Techniques:
Cache bypass in production should be surgical and time-limited:

- Global bypass can overwhelm your backend.
- Always set automatic re-enablement.
- Log all bypass operations for audit.
- Protect bypass endpoints with strong authentication.
When you've confirmed a cache issue, the next step is examining the actual cache state. For distributed caches like Redis, this means connecting directly to inspect keys, values, and metadata.
```shell
# Connect to Redis CLI
redis-cli -h cache-host -p 6379

# ===== KEY EXISTENCE AND METADATA =====

# Check if specific key exists
EXISTS product:12345

# Get key type
TYPE product:12345

# Get TTL remaining (seconds)
TTL product:12345

# Get TTL remaining (milliseconds, more precise)
PTTL product:12345

# Get key's idle time (seconds since last access)
OBJECT IDLETIME product:12345

# Get memory usage of specific key
MEMORY USAGE product:12345

# ===== EXAMINING VALUES =====

# Get string value
GET product:12345

# Get hash fields
HGETALL user:profile:789

# Get list elements
LRANGE recent-views:456 0 -1

# Get set members
SMEMBERS tags:electronics

# Get sorted set with scores
ZRANGE leaderboard 0 -1 WITHSCORES

# ===== KEY PATTERN SEARCH =====

# Find keys matching pattern (CAUTION: blocks on large datasets)
# Use SCAN in production instead
KEYS product:*

# Safer: Scan incrementally
SCAN 0 MATCH product:* COUNT 100

# Count keys matching pattern
# (Iterate SCAN, count matches - no built-in command)

# ===== DEBUGGING SPECIFIC ISSUES =====

# Check when key was last modified (if using Redis Streams or custom tracking)
# Note: Redis doesn't track this natively for all data types

# See recently modified keys (if using keyspace notifications)
PSUBSCRIBE __keyspace@0__:*

# Get all configuration
CONFIG GET *

# Get memory statistics
INFO memory

# Get cache hit/miss stats
INFO stats
# Look for: keyspace_hits, keyspace_misses

# ===== SAFE DATA INSPECTION =====

# Dump raw bytes (for binary data or unknown encoding)
DUMP product:12345

# Pretty-print JSON (run from the shell, not inside redis-cli, to pipe through jq)
redis-cli GET product:12345 | jq .

# ===== CAREFUL WITH THESE IN PRODUCTION =====

# Delete a key (for manual fix during debugging)
DEL product:12345

# Set a key to expire immediately
EXPIRE product:12345 0

# Flush entire database (NEVER DO IN PRODUCTION)
# FLUSHDB
```

Building a Forensics Toolkit:
Create scripts that automate common forensics tasks so you can execute them quickly during incidents:
```typescript
// Comprehensive cache forensics toolkit
class CacheForensics {
  constructor(private redis: Redis) {}

  // Get complete snapshot of a key's state
  async keySnapshot(key: string): Promise<KeySnapshot> {
    const [exists, type, ttl, idleTime, memoryUsage, value] = await Promise.all([
      this.redis.exists(key),
      this.redis.type(key),
      this.redis.ttl(key),
      this.redis.object('IDLETIME', key),
      this.redis.memory('USAGE', key),
      this.getValue(key)
    ]);

    return {
      key,
      exists: exists === 1,
      type,
      ttlSeconds: ttl,
      idleTimeSeconds: idleTime,
      memoryBytes: memoryUsage,
      value,
      capturedAt: new Date().toISOString(),
    };
  }

  // Compare cache value to expected value
  async compareToExpected(key: string, expected: any): Promise<Comparison> {
    const cached = await this.getValue(key);

    if (cached === null) {
      return { match: false, reason: 'Key does not exist in cache' };
    }

    const differences = this.deepDiff(cached, expected);
    return {
      match: differences.length === 0,
      differences,
      cached,
      expected,
    };
  }

  // Find all keys for an entity
  async findRelatedKeys(entityId: string): Promise<string[]> {
    const patterns = [
      `*:${entityId}`,
      `*:${entityId}:*`,
      `*:*:${entityId}`,
    ];

    const allKeys: string[] = [];
    for (const pattern of patterns) {
      let cursor = '0';
      do {
        const [newCursor, keys] = await this.redis.scan(
          cursor, 'MATCH', pattern, 'COUNT', 100
        );
        cursor = newCursor;
        allKeys.push(...keys);
      } while (cursor !== '0');
    }

    return [...new Set(allKeys)]; // Deduplicate
  }

  // Track key changes over time
  async watchKey(key: string, intervalMs: number, durationMs: number): Promise<KeyHistory> {
    const history: Array<{ timestamp: string; value: any; ttl: number }> = [];
    const startTime = Date.now();

    while (Date.now() - startTime < durationMs) {
      const [value, ttl] = await Promise.all([
        this.getValue(key),
        this.redis.ttl(key)
      ]);

      history.push({
        timestamp: new Date().toISOString(),
        value,
        ttl,
      });

      await new Promise(resolve => setTimeout(resolve, intervalMs));
    }

    return { key, history };
  }

  // Find keys with abnormal characteristics
  async findAnomalousKeys(pattern: string): Promise<AnomalousKeyReport> {
    const anomalies: AnomalousKey[] = [];
    let cursor = '0';

    do {
      const [newCursor, keys] = await this.redis.scan(
        cursor, 'MATCH', pattern, 'COUNT', 100
      );
      cursor = newCursor;

      for (const key of keys) {
        const snapshot = await this.keySnapshot(key);

        // Check for anomalies
        if (snapshot.ttlSeconds === -1) {
          anomalies.push({ key, issue: 'No TTL set (immortal key)' });
        }
        if (snapshot.memoryBytes && snapshot.memoryBytes > 1000000) {
          anomalies.push({ key, issue: `Large key: ${snapshot.memoryBytes} bytes` });
        }
        if (snapshot.idleTimeSeconds && snapshot.idleTimeSeconds > 3600) {
          anomalies.push({ key, issue: 'Idle for >1 hour (possibly orphaned)' });
        }
      }
    } while (cursor !== '0');

    return { pattern, anomalies, scannedAt: new Date().toISOString() };
  }

  private async getValue(key: string): Promise<any> {
    const type = await this.redis.type(key);

    switch (type) {
      case 'string': {
        const str = await this.redis.get(key);
        try { return JSON.parse(str!); } catch { return str; }
      }
      case 'hash':
        return this.redis.hgetall(key);
      case 'list':
        return this.redis.lrange(key, 0, -1);
      case 'set':
        return this.redis.smembers(key);
      case 'zset':
        return this.redis.zrange(key, 0, -1, 'WITHSCORES');
      default:
        return null;
    }
  }

  private deepDiff(obj1: any, obj2: any, path: string = ''): string[] {
    const diffs: string[] = [];
    // ... deep diff implementation
    return diffs;
  }
}
```

Cache bugs are often maddeningly difficult to reproduce. They depend on timing, on specific sequences of operations, on concurrent requests arriving in just the wrong order. But without reproduction, you can't verify fixes.
Strategies for Cache Bug Reproduction:
```typescript
// Framework for reproducing cache race conditions
class CacheRaceConditionReproducer {
  constructor(
    private cache: Cache<any>,
    private database: Database,
    private service: ProductService
  ) {}

  // Reproduce: Read returning stale data during write
  async reproduceReadWriteRace(): Promise<ReproductionResult> {
    const productId = 'test-product-race';
    const originalProduct = { id: productId, name: 'Original', version: 1 };
    const updatedProduct = { id: productId, name: 'Updated', version: 2 };

    // Setup: Cache and database have original
    await this.database.insert(originalProduct);
    await this.cache.set(`product:${productId}`, originalProduct, 3600);

    // Race: Start read, update, complete read

    // Create a read that will definitely see the old value
    const slowRead = new Promise(async (resolve) => {
      // Simulate a slow read that starts before update
      const cached = await this.cache.get(`product:${productId}`);
      await sleep(100); // Processing delay
      resolve({ operation: 'read', value: cached });
    });

    // Update that invalidates cache
    const update = new Promise(async (resolve) => {
      await sleep(10); // Start slightly after read
      await this.database.update(productId, updatedProduct);
      await this.cache.delete(`product:${productId}`);
      resolve({ operation: 'update', completed: true });
    });

    // Another read that happens after invalidation but uses stale variable
    const stalePossibleRead = new Promise(async (resolve) => {
      await sleep(120); // After update completes
      const result = await this.service.getProduct(productId);
      resolve({ operation: 'subsequentRead', value: result });
    });

    const allResults = await Promise.all([slowRead, update, stalePossibleRead]);

    // Analyze: Did the race occur?
    const subsequentReadResult = allResults.find(r => r.operation === 'subsequentRead');
    const hasStaleData = subsequentReadResult?.value?.version === 1;

    return {
      raceOccurred: hasStaleData,
      timeline: allResults,
      analysis: hasStaleData
        ? 'BUG: Subsequent read returned stale data (v1 instead of v2)'
        : 'PASS: Subsequent read correctly returned updated data',
    };
  }

  // Reproduce: Cache stampede on expiration
  async reproduceStampede(concurrency: number = 100): Promise<StampedeResult> {
    const productId = 'test-product-stampede';
    const product = { id: productId, name: 'Popular Product' };

    // Setup: Cache with very short TTL
    await this.database.insert(product);
    await this.cache.set(`product:${productId}`, product, 1); // 1 second TTL

    // Wait for expiration
    await sleep(1100);

    // Stampede: Many concurrent requests hitting expired cache
    const requestStartTimes: number[] = [];
    const databaseHits: number[] = [];
    let dbQueryCount = 0;

    const interceptedDatabase = new Proxy(this.database, {
      get(target, prop) {
        if (prop === 'findById') {
          return async (...args: any[]) => {
            dbQueryCount++;
            databaseHits.push(Date.now());
            return target.findById(...args);
          };
        }
        return target[prop as keyof Database];
      }
    });

    const service = new ProductService(this.cache, interceptedDatabase);

    // Fire concurrent requests
    const requests = Array.from({ length: concurrency }, async () => {
      requestStartTimes.push(Date.now());
      return service.getProduct(productId);
    });

    await Promise.all(requests);

    return {
      totalRequests: concurrency,
      databaseQueryCount: dbQueryCount,
      stampedeOccurred: dbQueryCount > 1,
      analysis: dbQueryCount > 1
        ? `STAMPEDE: ${dbQueryCount} database hits instead of 1`
        : 'PROTECTED: Only 1 database hit despite concurrent requests',
      timeline: {
        requestStarts: requestStartTimes,
        databaseHits,
      },
    };
  }

  // Reproduce: Invalidation race condition
  async reproduceInvalidationRace(): Promise<InvalidationRaceResult> {
    const productId = 'test-product-invalidation-race';

    // Timing scenario:
    // T0: Read starts (cache miss, starts DB fetch)
    // T1: Update happens (invalidates cache)
    // T2: Read completes (writes OLD data to cache)
    // T3: Subsequent reads get stale data!

    const v1 = { id: productId, name: 'Version 1', version: 1 };
    const v2 = { id: productId, name: 'Version 2', version: 2 };

    // Database starts with v1
    await this.database.insert(v1);

    // Simulate slow fetch that gets v1
    const slowFetch = async (): Promise<any> => {
      const data = await this.database.findById(productId); // Gets v1
      await sleep(200); // Slow network/processing
      return data;
    };

    // Start slow fetch
    const fetchPromise = slowFetch();

    // While fetch is in progress, update to v2
    await sleep(50);
    await this.database.update(productId, v2);
    await this.cache.delete(`product:${productId}`);

    // Fetch completes and writes v1 to cache
    const fetchedData = await fetchPromise;
    await this.cache.set(`product:${productId}`, fetchedData, 3600);

    // Check what's in cache now
    const cachedValue = await this.cache.get(`product:${productId}`);
    const dbValue = await this.database.findById(productId);

    return {
      raceOccurred: cachedValue?.version !== dbValue?.version,
      cachedVersion: cachedValue?.version,
      databaseVersion: dbValue?.version,
      analysis: cachedValue?.version === 1 && dbValue?.version === 2
        ? 'BUG: Cache contains v1 but database has v2 (invalidation race)'
        : 'PASS: Cache and database are consistent',
    };
  }
}

function sleep(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```

Consider running reproduction tests in CI with many iterations. Race conditions are probabilistic—a test that passes once might fail on the 100th run. ThreadSanitizer-style tools can also help detect data races in cache access patterns.
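That CI advice can be sketched as a small harness that runs a reproduction repeatedly and reports how often the race actually fires. The `reproduce` callback shape here is an assumption: any async check that resolves `true` when the bug occurred.

```typescript
// Minimal harness for probabilistic bugs: run a reproduction many times and
// report how often the race actually fired. The `reproduce` callback is an
// assumed shape (resolves true when the bug occurred), not a specific API.
async function measureRaceProbability(
  reproduce: () => Promise<boolean>,
  iterations: number,
): Promise<{ iterations: number; occurrences: number; rate: number }> {
  let occurrences = 0;
  for (let i = 0; i < iterations; i++) {
    // Run sequentially so each iteration starts from a clean state.
    if (await reproduce()) occurrences++;
  }
  return { iterations, occurrences, rate: occurrences / iterations };
}
```

For a fixed bug, fail the build when `rate > 0`; for a known-flaky protection mechanism, track the rate over time instead of gating on a single run.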
Once you've identified the root cause, applying the fix is only half the battle. You must verify the fix actually resolves the issue, doesn't introduce new problems, and handles edge cases.
| Issue | Common Fix | Verification Method |
|---|---|---|
| Missing invalidation | Add cache.delete() call after update | Test that update → read returns new value |
| Race condition during population | Use distributed lock or request coalescing | Run reproduction test; verify single DB hit |
| Incomplete cascading invalidation | Explicitly invalidate all related keys | Map dependencies; verify all are invalidated |
| Serialization mismatch | Flush cache after schema changes | Verify roundtrip: serialize → store → retrieve → deserialize |
| TTL too long | Reduce TTL or add event-based invalidation | Verify data freshness after underlying change |
| Cache key collision | Include additional identifiers in key | Generate keys for all entity types; verify uniqueness |
| Stampede vulnerability | Implement lock or probabilistic early refresh | Run stampede reproduction test under load |
```typescript
// Verify fixes for common cache bugs
describe('Cache Bug Fix Verification', () => {
  describe('Invalidation Fix Verification', () => {
    it('should return fresh data immediately after update', async () => {
      // Setup: Cache old data
      const original = { id: 'prod-1', name: 'Original', price: 10 };
      await service.getProduct('prod-1'); // Populates cache

      // Update
      const updated = { id: 'prod-1', name: 'Updated', price: 20 };
      await service.updateProduct('prod-1', updated);

      // Verify: Immediate read returns updated data
      const result = await service.getProduct('prod-1');
      expect(result.price).toBe(20);
    });

    it('should invalidate related caches on entity update', async () => {
      // Setup: Populate multiple related caches
      await service.getProduct('prod-1');           // product:prod-1
      await service.getProductsByCategory('cat-1'); // category:cat-1:products
      await service.getInventory('prod-1');         // inventory:prod-1

      // Update product
      await service.updateProduct('prod-1', { name: 'Updated' });

      // Verify: All related caches should be invalidated
      expect(await cache.get('product:prod-1')).toBeNull();
      expect(await cache.get('category:cat-1:products')).toBeNull();
      expect(await cache.get('inventory:prod-1')).toBeNull();
    });
  });

  describe('Stampede Prevention Verification', () => {
    it('should make exactly one database query under concurrent load', async () => {
      // Clear cache
      await cache.clear();

      // Fire 100 concurrent requests
      const requests = Array.from({ length: 100 }, () =>
        service.getProduct('prod-1')
      );
      await Promise.all(requests);

      // Verify: Only one database hit
      expect(database.queryCount).toBe(1);
    });
  });

  describe('Race Condition Fix Verification', () => {
    it('should not cache stale data during concurrent update', async () => {
      const results: number[] = [];

      // Run many iterations to catch probabilistic races
      for (let i = 0; i < 100; i++) {
        await cache.clear();

        // Setup: Start with v1
        database.setProduct('prod-1', { version: 1 });

        // Concurrent: Read (which populates cache) and update (which invalidates)
        await Promise.all([
          service.getProduct('prod-1'),
          (async () => {
            await sleep(5); // Small delay
            database.setProduct('prod-1', { version: 2 });
            await service.invalidateProduct('prod-1');
          })(),
        ]);

        // Final read should always get v2
        const final = await service.getProduct('prod-1');
        results.push(final.version);
      }

      // All iterations should return v2
      const staleCount = results.filter(v => v === 1).length;
      expect(staleCount).toBe(0);
    });
  });
});
```

Cache debugging requires systematic methodology rather than random experimentation. Understanding common failure patterns and having the right forensics tools enables rapid diagnosis of even complex cache issues.
What's next:
With debugging skills in place, we'll explore Cache Performance Tuning—optimizing cache configuration for maximum effectiveness based on workload characteristics.
You now have a systematic framework for debugging cache issues, familiarity with common failure patterns, techniques for cache forensics and bug reproduction, and approaches for verifying that fixes actually work.