Write-around caching isn't just a technique—it's a philosophy about cache efficiency. By now, you understand the mechanics: writes bypass the cache, reads populate it lazily, and the cache naturally fills with data that's proven valuable through actual access.
But knowing how write-around works isn't enough. The crucial skill is knowing when to apply it. This page synthesizes everything you've learned into a practical decision framework, with real-world examples that illustrate where write-around excels—and where it doesn't.
By the end of this page, you will have a clear decision framework for choosing write-around caching. You'll understand the workload patterns it serves best, see concrete industry examples, and be able to articulate precisely why write-around is (or isn't) the right choice for your system.
Deciding between caching strategies requires analyzing your workload across multiple dimensions. The following framework helps you systematically evaluate whether write-around is appropriate.
| Criterion | Write-Around | Write-Through | Write-Back |
|---|---|---|---|
| Read-heavy, diverse writes | ✅ Optimal | ⚠️ Cache pollution | ⚠️ Cache pollution |
| Write-heavy, sparse reads | ✅ Optimal | ❌ Wasted cache space | ❌ Wasted cache space |
| Immediate read-after-write | ❌ Cache miss | ✅ Optimal | ✅ Optimal |
| Strong durability needed | ✅ DB confirms write | ✅ DB confirms | ⚠️ Cache loss risk |
| Write latency critical | ⚠️ DB latency | ❌ Slowest | ✅ Fastest |
| Cache memory limited | ✅ Efficient use | ❌ May overflow | ❌ May overflow |
| Simple operations | ✅ Simple | ✅ Simple | ⚠️ Complex |
```typescript
interface WorkloadProfile {
  readsPerSecond: number;
  writesPerSecond: number;
  readAfterWritePercentage: number; // % of writes followed by immediate read
  uniqueWriteKeysPerDay: number;
  cacheSizeBytes: number;
  averageValueSizeBytes: number;
  readAfterWriteLatencyToleranceMs: number;
  dataLossTolerance: 'none' | 'some' | 'acceptable';
  staleDataToleranceSeconds: number;
}

// Return type (assumed shape): confidence is optional because
// not every rule assigns one
interface CachingRecommendation {
  strategy: 'write-through' | 'write-back' | 'write-around';
  reason: string;
  confidence?: number;
}

function recommendCachingStrategy(profile: WorkloadProfile): CachingRecommendation {
  const readWriteRatio = profile.readsPerSecond / profile.writesPerSecond;
  const cacheCapacity = profile.cacheSizeBytes / profile.averageValueSizeBytes;
  const writeWouldFillCache = profile.uniqueWriteKeysPerDay > cacheCapacity;

  // Decision tree
  if (profile.dataLossTolerance === 'none' && profile.readAfterWriteLatencyToleranceMs < 10) {
    return {
      strategy: 'write-through',
      reason: 'Requires both durability and immediate read-after-write',
    };
  }

  if (profile.dataLossTolerance === 'acceptable' && readWriteRatio > 5) {
    return {
      strategy: 'write-back',
      reason: 'Can tolerate some data loss, benefits from write amplification reduction',
    };
  }

  if (writeWouldFillCache && profile.readAfterWritePercentage < 20) {
    return {
      strategy: 'write-around',
      reason: 'High write volume with sparse read-after-write prevents cache pollution',
      confidence: 0.95,
    };
  }

  if (readWriteRatio > 10 && profile.staleDataToleranceSeconds > 60) {
    return {
      strategy: 'write-around',
      reason: 'Read-heavy with staleness tolerance allows lazy cache population',
      confidence: 0.85,
    };
  }

  // Default to write-through for safety
  return {
    strategy: 'write-through',
    reason: 'Conservative default when workload is unclear',
    confidence: 0.60,
  };
}
```

Theoretical analysis only goes so far. The best approach is often to implement write-around behind feature flags, measure actual cache efficiency, and compare against alternatives. Real workload data trumps predictions.
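Before running such an experiment, it helps to sanity-check the decision tree against a sample workload. Here's a hypothetical invocation for an analytics-style profile; every number below is illustrative, not taken from a real system:

```typescript
// Hypothetical workload: heavy event writes, sparse dashboard reads
const analyticsProfile: WorkloadProfile = {
  readsPerSecond: 200,
  writesPerSecond: 10_000,
  readAfterWritePercentage: 1,       // events almost never read right after write
  uniqueWriteKeysPerDay: 50_000_000, // far exceeds what the cache can hold
  cacheSizeBytes: 8 * 1024 ** 3,     // 8 GiB cache
  averageValueSizeBytes: 2048,       // ~4.2M entries of capacity
  readAfterWriteLatencyToleranceMs: 500,
  dataLossTolerance: 'none',
  staleDataToleranceSeconds: 300,
};

const rec = recommendCachingStrategy(analyticsProfile);
// Matches the cache-pollution rule: unique daily write keys (50M) exceed
// cache capacity (~4.2M entries) and read-after-write is under 20%.
// -> { strategy: 'write-around', reason: '...', confidence: 0.95 }
console.log(`${rec.strategy}: ${rec.reason}`);
```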
Write-around caching excels in specific scenarios where its characteristics—database-first writes, lazy cache population, and natural hot-data filtering—provide clear advantages.
```typescript
// Example: Analytics event tracking
// Pattern: Write-heavy, rarely read in real-time

class AnalyticsEventService {
  constructor(
    private cache: WriteAroundCache<AnalyticsEvent[]>,
    private database: Database<AnalyticsEvent>,
    private batchProcessor: BatchProcessor
  ) {}

  // Write Path: High volume (10,000+ events/second)
  // These events are NEVER read immediately after write
  async trackEvent(event: AnalyticsEvent): Promise<void> {
    // Write-around: Goes straight to database/stream
    await this.database.insertEvent(event);

    // Cache is NOT populated
    // Why? These events will be:
    // 1. Processed in hourly/daily batches
    // 2. Never queried individually in real-time
    // 3. Totaling millions per day (would overflow cache)
  }

  // Read Path: Occasional, pattern-based access
  // When events ARE read, cache helps subsequent reads
  async getEventsForUser(
    userId: string,
    timeRange: TimeRange
  ): Promise<AnalyticsEvent[]> {
    const cacheKey = `events:${userId}:${timeRange.hash()}`;

    // Check cache first
    const cached = await this.cache.get(cacheKey);
    if (cached !== null) return cached;

    // Miss: Query database (aggregation query)
    const events = await this.database.queryEvents(userId, timeRange);

    // Populate cache for subsequent requests
    // This specific query might be repeated (user viewing dashboard)
    await this.cache.set(cacheKey, events, 300); // 5 min TTL

    return events;
  }
}

// Why write-around works here:
// 1. Individual events (writes) are never read immediately
// 2. Aggregated results (reads) benefit from caching
// 3. Write volume (millions/day) would pollute cache
// 4. Read patterns are predictable (dashboards, reports)
```

| Use Case | Write Volume | Read Pattern | Write-Around Fit | Reason |
|---|---|---|---|---|
| Application logs | Very High | Rare, batch | ✅ Excellent | Prevents log flood in cache |
| Product catalog | Low | Very High | ✅ Excellent | Reads naturally cache hot items |
| Order history | Medium | Sparse | ✅ Good | Most orders never viewed again |
| User sessions | Medium | Immediate | ⚠️ Consider | Read-after-write latency critical |
| Real-time chat | High | Immediate | ❌ Poor | Messages read right after send |
Let's examine how major systems and industries apply write-around caching principles to solve real-world challenges.
Example 1: E-Commerce Product Discovery
A large e-commerce platform manages a long-tail catalog: merchants continuously edit and bulk-import products, while customer traffic concentrates on a small fraction of hot items.
Why Write-Around Works:
```typescript
class ProductCatalogCache {
  constructor(
    private cache: Cache<Product>, // assumed cache client interface
    private database: Database<Product>
  ) {}

  // Merchant updates product (write path)
  async updateProduct(productId: string, updates: ProductUpdates): Promise<void> {
    // Write-around: Update database directly
    await this.database.updateProduct(productId, updates);

    // Invalidate cache (optional, for faster consistency)
    await this.cache.delete(`product:${productId}`);

    // NOT populating cache because:
    // 1. Merchant may be editing, not publishing yet
    // 2. Product might have low traffic (long-tail catalog)
    // 3. Cache should hold products customers actually view
  }

  // Customer views product (read path)
  async getProduct(productId: string): Promise<Product | null> {
    // This is where cache population happens
    const cached = await this.cache.get(`product:${productId}`);
    if (cached) return cached;

    const product = await this.database.getProduct(productId);
    if (product) {
      // This product was accessed - worth caching
      await this.cache.set(`product:${productId}`, product, 600);
    }
    return product;
  }

  // Bulk catalog management (write path)
  async bulkImportProducts(products: Product[]): Promise<void> {
    // Large import - definitely write-around
    await this.database.bulkInsert(products);
    // Cache untouched

    // If we cached all 100,000 imported products,
    // we'd evict hot products customers are actually viewing!
  }
}
```

Example 2: Content Delivery Network (CDN)
CDNs use write-around at the edge layer: content published to the origin never pre-populates edge caches; edge nodes fetch and store objects only when users request them.
Why Write-Around (Pull-Based) Works:
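Only content with demonstrated regional demand occupies edge storage, and unpopular objects never consume capacity. The sketch below is a minimal illustration, assuming a hypothetical `originFetch` helper and an in-memory store; real CDNs add Cache-Control-driven TTLs, request coalescing, and tiered caches:

```typescript
// Minimal pull-based edge cache (write-around at the edge).
// `originFetch` and the Map-backed store are illustrative assumptions.
type CachedObject = { body: string; expiresAt: number };

class EdgeCache {
  private store = new Map<string, CachedObject>();

  constructor(
    private originFetch: (url: string) => Promise<string>,
    private ttlMs: number = 60_000
  ) {}

  async get(url: string): Promise<string> {
    const hit = this.store.get(url);
    if (hit && hit.expiresAt > Date.now()) {
      return hit.body; // Edge hit: no origin round-trip
    }

    // Miss: pull from origin, then cache because a user actually asked for it.
    // Publishing new content never touched this store - only reads populate it.
    const body = await this.originFetch(url);
    this.store.set(url, { body, expiresAt: Date.now() + this.ttlMs });
    return body;
  }
}
```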
Example 3: Social Media Feed Generation
Social platforms handle an enormous stream of new posts, most of which are seen by only a handful of followers, while a small fraction goes viral and dominates read traffic.
Write-Around Application:
```typescript
class SocialPostCache {
  constructor(
    private cache: Cache<Post>, // assumed cache client interface
    private database: Database<Post>,
    private metrics: MetricsCollector
  ) {}

  async createPost(userId: string, content: PostContent): Promise<Post> {
    // Write to database (primary storage)
    const post = await this.database.createPost(userId, content);

    // Write-around: NOT caching the new post globally
    // Reasons:
    // 1. Post might only be seen by a few followers
    // 2. If post goes viral, reads will cache it
    // 3. Prevents cache pollution from low-engagement content

    // However: Update the author's post list cache
    await this.invalidateUserPostsCache(userId);

    return post;
  }

  async getPost(postId: string): Promise<Post | null> {
    const cached = await this.cache.get(`post:${postId}`);
    if (cached) {
      this.metrics.recordCacheHit();
      return cached;
    }

    const post = await this.database.getPost(postId);
    if (post) {
      // Post is being read - cache it
      // TTL based on post age (older = longer cache, less likely to change)
      const ttl = this.calculateTTL(post.createdAt);
      await this.cache.set(`post:${postId}`, post, ttl);
    }

    this.metrics.recordCacheMiss();
    return post;
  }

  private calculateTTL(createdAt: Date): number {
    const ageHours = (Date.now() - createdAt.getTime()) / (1000 * 60 * 60);
    if (ageHours < 1) return 60;   // New: 1 min (might be edited)
    if (ageHours < 24) return 300; // Recent: 5 min
    return 3600;                   // Old: 1 hour (stable)
  }

  private async invalidateUserPostsCache(userId: string): Promise<void> {
    // Hypothetical key scheme: drop the author's cached post list so
    // their profile reflects the new post on the next read
    await this.cache.delete(`user-posts:${userId}`);
  }
}
```

In all these examples, the key insight is the same: writes don't predict reads. The data written might or might not be accessed, and caching speculatively would waste resources. Letting reads drive cache population naturally optimizes for actual demand.
Understanding when write-around is the wrong choice is just as important as knowing when it's right. These anti-patterns indicate you should consider alternative strategies.
```typescript
// ANTI-PATTERN: Write-around for checkout flow

class CheckoutService_WRONG {
  constructor(
    private cache: Cache<Order>, // assumed cache client interface
    private database: Database<Order>
  ) {}

  async placeOrder(cart: Cart, payment: Payment): Promise<Order> {
    // Create order in database (write-around)
    const order = await this.database.createOrder(cart, payment);
    // Cache NOT populated...

    // User redirected to confirmation page
    return order;
  }

  async getOrderConfirmation(orderId: string): Promise<Order | null> {
    // PROBLEM: User lands here IMMEDIATELY after placeOrder
    // Cache miss guaranteed!
    const cached = await this.cache.get(`order:${orderId}`);
    if (cached) return cached; // Will never hit on first view

    // User waits 20-50ms for database query
    // This feels slow after just completing checkout
    return this.database.getOrder(orderId);
  }
}

// CORRECT: Use write-through for checkout flow

class CheckoutService_CORRECT {
  constructor(
    private cache: Cache<Order>,
    private database: Database<Order>
  ) {}

  async placeOrder(cart: Cart, payment: Payment): Promise<Order> {
    const order = await this.database.createOrder(cart, payment);

    // Write-through: Populate cache immediately
    await this.cache.set(`order:${order.id}`, order, 3600);

    return order;
  }

  async getOrderConfirmation(orderId: string): Promise<Order | null> {
    // Cache HIT guaranteed - instant response
    const cached = await this.cache.get(`order:${orderId}`);
    if (cached) return cached;

    // Fallback to database (shouldn't happen for fresh orders)
    const order = await this.database.getOrder(orderId);
    if (order) await this.cache.set(`order:${orderId}`, order, 3600);
    return order;
  }
}
```

| Scenario | Read-After-Write? | Recommended Strategy | Rationale |
|---|---|---|---|
| Form submission → Confirmation | Always, immediate | Write-Through | Eliminate first-read latency |
| Product update → Product page | Sometimes, delayed | Write-Around + Invalidation | Invalidate stale, populate on demand |
| Log event → Dashboard (batch) | Eventually, aggregated | Write-Around (pure) | No immediate read expected |
| Chat message → Recipient view | Always, real-time | Write-Through or Pub/Sub | Real-time delivery required |
| User signup → Profile page | Always, immediate | Write-Through | User expects to see profile |
Even if 50ms of latency is 'fast' by technical standards, users notice it in workflows where they just performed an action. The mental model is: 'I just did something, show me the result NOW.' Write-around breaks this expectation; write-through preserves it.
Real-world systems rarely use a single caching strategy uniformly. The most effective architectures apply different strategies to different data types based on their access patterns.
The Hybrid Approach:
Categorize your data by access pattern and apply the appropriate strategy to each category:
| Data Category | Access Pattern | Strategy |
|---|---|---|
| User sessions | Read-after-write, security-critical | Write-Through |
| Product catalog | Write-sparse, read-heavy | Write-Around |
| Shopping cart | Read-after-write, user-facing | Write-Through |
| Order history | Write-once, rarely-read | Write-Around |
| Analytics events | Write-only, never real-time read | Write-Around (skip cache) |
| Leaderboards | Real-time, high write + read | Write-Back or Write-Through |
```typescript
// Hybrid cache that applies different strategies per data type

type CachingStrategy = 'write-through' | 'write-around' | 'write-back' | 'cache-skip';

interface DataTypeConfig {
  strategy: CachingStrategy;
  ttl: number;
  invalidateOnWrite: boolean;
}

class HybridCache<T> {
  private configs: Map<string, DataTypeConfig> = new Map([
    ['user:',      { strategy: 'write-through', ttl: 1800, invalidateOnWrite: true }],
    ['session:',   { strategy: 'write-through', ttl: 3600, invalidateOnWrite: true }],
    ['product:',   { strategy: 'write-around',  ttl: 600,  invalidateOnWrite: true }],
    ['order:',     { strategy: 'write-around',  ttl: 3600, invalidateOnWrite: false }],
    ['cart:',      { strategy: 'write-through', ttl: 1800, invalidateOnWrite: true }],
    ['analytics:', { strategy: 'cache-skip',    ttl: 0,    invalidateOnWrite: false }],
    ['log:',       { strategy: 'cache-skip',    ttl: 0,    invalidateOnWrite: false }],
  ]);

  constructor(
    private cache: Cache<T>, // assumed cache client interface
    private database: Database<T>
  ) {}

  async write(key: string, value: T): Promise<void> {
    const config = this.getConfig(key);

    switch (config.strategy) {
      case 'write-through':
        // Cache AND database together
        await Promise.all([
          this.database.write(key, value),
          this.cache.set(key, value, config.ttl),
        ]);
        break;

      case 'write-around':
        // Database only, optionally invalidate
        await this.database.write(key, value);
        if (config.invalidateOnWrite) {
          await this.cache.delete(key);
        }
        break;

      case 'write-back':
        // Cache first, async database
        await this.cache.set(key, value, config.ttl);
        this.asyncDatabaseWrite(key, value);
        break;

      case 'cache-skip':
        // Database only, no cache interaction
        await this.database.write(key, value);
        break;
    }
  }

  async read(key: string): Promise<T | null> {
    const config = this.getConfig(key);

    if (config.strategy === 'cache-skip') {
      // Direct database read for non-cached data
      return this.database.read(key);
    }

    // Standard cache-aside read for all other strategies
    const cached = await this.cache.get(key);
    if (cached !== null) return cached;

    const data = await this.database.read(key);
    if (data !== null) {
      await this.cache.set(key, data, config.ttl);
    }
    return data;
  }

  private getConfig(key: string): DataTypeConfig {
    for (const [prefix, config] of this.configs) {
      if (key.startsWith(prefix)) return config;
    }
    // Default to write-around
    return { strategy: 'write-around', ttl: 300, invalidateOnWrite: true };
  }

  private asyncDatabaseWrite(key: string, value: T): void {
    // Fire-and-forget persistence for write-back entries
    this.database.write(key, value).catch((err) => {
      console.error(`write-back persistence failed for ${key}`, err);
    });
  }
}
```

Don't over-engineer from day one. Start with write-around as your default (it's safest for durability and cache efficiency), then identify specific data types that need write-through based on user-facing latency requirements.
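To make the prefix-driven dispatch concrete, here's a hypothetical usage sketch, assuming `cache` and `database` instances compatible with the constructor above:

```typescript
const hybrid = new HybridCache<object>(cache, database);

// The key prefix selects the strategy:
await hybrid.write('session:abc123', { userId: 'u1' });  // write-through: cache + DB together
await hybrid.write('product:42', { name: 'Mug' });       // write-around: DB only, cache invalidated
await hybrid.write('analytics:evt9', { type: 'click' }); // cache-skip: DB only, cache untouched
await hybrid.write('misc:123', { x: 1 });                // unknown prefix: write-around default
```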
Adopting write-around caching in an existing system requires careful planning. Here's a practical approach to migration.
```typescript
class CacheMigrator {
  constructor(
    private oldCache: WriteThroughCache,
    private newCache: WriteAroundCache,
    private database: Database<any>,
    private featureFlags: FeatureFlags,
    private metrics: MetricsCollector
  ) {}

  async write(key: string, value: any): Promise<void> {
    const useNewStrategy = await this.shouldUseNewStrategy(key);

    if (useNewStrategy) {
      // Write-around: Database only
      await this.database.write(key, value);
      await this.newCache.invalidate(key);

      // Shadow: Also write to old cache for comparison
      if (await this.featureFlags.get('shadow_mode')) {
        await this.oldCache.set(key, value).catch(() => {});
      }

      this.metrics.record('write_strategy', 'write_around');
    } else {
      // Legacy write-through
      await Promise.all([
        this.database.write(key, value),
        this.oldCache.set(key, value),
      ]);
      this.metrics.record('write_strategy', 'write_through');
    }
  }

  async read(key: string): Promise<any> {
    const useNewStrategy = await this.shouldUseNewStrategy(key);

    if (useNewStrategy) {
      const result = await this.newCache.get(key);

      // Shadow comparison
      if (await this.featureFlags.get('shadow_mode')) {
        const oldResult = await this.oldCache.get(key).catch(() => null);
        if (JSON.stringify(result) !== JSON.stringify(oldResult)) {
          this.metrics.record('cache_divergence', key);
        }
      }

      return result;
    } else {
      return this.oldCache.get(key);
    }
  }

  private async shouldUseNewStrategy(key: string): Promise<boolean> {
    // Gradual rollout by data type AND traffic percentage
    const dataType = this.extractDataType(key);
    const rolloutPercent = await this.featureFlags.get(
      `write_around_rollout.${dataType}`
    );

    // Consistent hashing ensures same key always uses same strategy
    const keyHash = this.consistentHash(key);
    return keyHash < rolloutPercent;
  }

  private extractDataType(key: string): string {
    // Keys follow a 'type:id' convention; the prefix is the data type
    return key.split(':')[0];
  }

  private consistentHash(key: string): number {
    // Deterministic hash mapped to 0-99 so a key's bucket never changes
    let hash = 0;
    for (let i = 0; i < key.length; i++) {
      hash = (hash * 31 + key.charCodeAt(i)) >>> 0;
    }
    return hash % 100;
  }
}
```

During migration, monitor cache hit rates, database query volume, p99 latencies, and user-facing error rates closely. Be prepared to roll back quickly if metrics degrade unexpectedly.
When implementing write-around caching, set realistic performance expectations based on your workload characteristics.
| Workload | Expected Hit Rate | Write Latency | Read Latency (Hit) | Read Latency (Miss) |
|---|---|---|---|---|
| Read-heavy (95% reads) | 85-95% | 5-20ms | <2ms | 10-30ms |
| Balanced (50/50) | 60-80% | 5-20ms | <2ms | 10-30ms |
| Write-heavy (80% writes) | 40-60% | 5-20ms | <2ms | 10-30ms |
| Bulk import phase | 10-30% | 5-15ms | <2ms | 10-30ms |
| Steady state (post-warm) | 80-95% | 5-20ms | <2ms | 10-30ms |
```typescript
interface PerformanceBaseline {
  targetHitRate: number;
  maxWriteLatencyP99: number;
  maxReadHitLatencyP99: number;
  maxReadMissLatencyP99: number;
  maxDatabaseQPS: number;
}

// Assumed metric and result shapes for the validator below
interface CacheMetrics {
  hitRate: number;
  writeLatencyP99: number;
  dbQPS: number;
}

interface ValidationIssue {
  severity: 'warning' | 'critical';
  metric: string;
  actual: number;
  expected: number;
  suggestion: string;
}

interface ValidationResult {
  passed: boolean;
  issues: ValidationIssue[];
}

class PerformanceValidator {
  private baseline: PerformanceBaseline = {
    targetHitRate: 0.85,        // 85% cache hit rate
    maxWriteLatencyP99: 50,     // 50ms max write
    maxReadHitLatencyP99: 5,    // 5ms max cache hit
    maxReadMissLatencyP99: 100, // 100ms max cache miss
    maxDatabaseQPS: 1000,       // 1000 queries/sec to DB
  };

  validate(metrics: CacheMetrics): ValidationResult {
    const issues: ValidationIssue[] = [];

    if (metrics.hitRate < this.baseline.targetHitRate) {
      issues.push({
        severity: 'warning',
        metric: 'hitRate',
        actual: metrics.hitRate,
        expected: this.baseline.targetHitRate,
        suggestion: 'Consider increasing cache size or TTL',
      });
    }

    if (metrics.writeLatencyP99 > this.baseline.maxWriteLatencyP99) {
      issues.push({
        severity: 'critical',
        metric: 'writeLatencyP99',
        actual: metrics.writeLatencyP99,
        expected: this.baseline.maxWriteLatencyP99,
        suggestion: 'Database write performance degraded',
      });
    }

    if (metrics.dbQPS > this.baseline.maxDatabaseQPS * 0.8) {
      issues.push({
        severity: 'warning',
        metric: 'dbQPS',
        actual: metrics.dbQPS,
        expected: this.baseline.maxDatabaseQPS,
        suggestion: 'Approaching database capacity limit',
      });
    }

    return {
      passed: issues.filter(i => i.severity === 'critical').length === 0,
      issues,
    };
  }
}
```

Always measure current performance before implementing write-around. Without a baseline, you can't prove the new strategy is better. Track before/after metrics for hit rate, latency percentiles, database load, and cache memory usage.
Write-around caching is a powerful strategy for systems where writes don't predict reads. By letting read demand drive cache population, you build naturally efficient caches that hold truly valuable data without pollution from transient writes.
| Characteristic | Write-Around Behavior | Implication |
|---|---|---|
| Write path | Database only | Durability guaranteed |
| Read path | Cache → DB → Cache | Lazy population |
| First read after write | Cache miss | Higher latency |
| Cache pollution | Prevented | Efficient cache use |
| Consistency | Eventually consistent | Staleness possible |
| Complexity | Low | Easy to implement/operate |
The Expert Perspective:
As a Principal Engineer evaluating caching strategies, remember that write-around represents a philosophical choice: optimize for the common case, not the edge case. Most data written will never be read, or will be read much later when lazy loading is perfectly acceptable. By not caching speculatively, you preserve cache space for data with proven value.
The best engineers don't just implement write-around—they understand why it works for their specific workload and can articulate the trade-offs to stakeholders. This module has given you that depth of understanding.
Congratulations! You've completed the Write-Around Caching module. You now understand the complete mechanics—how writes bypass the cache, how reads populate it, what happens on cache misses, and when to apply this strategy. You're equipped to design, implement, and optimize write-around caching for production systems.