In the landscape of caching strategies, write-around caching represents a deliberate philosophical choice: not all data deserves immediate cache residency. Unlike write-through and write-back strategies that ensure written data lives in the cache, write-around takes a fundamentally different approach—it bypasses the cache entirely for write operations, sending data directly to the persistent storage layer.
This might initially seem counterintuitive. Isn't the whole point of caching to keep frequently accessed data in fast storage? The insight behind write-around is that writes don't predict reads. Just because data was written doesn't mean it will be read soon—or ever. Write-around recognizes this reality and optimizes accordingly.
By the end of this page, you will understand the complete mechanics of write-around caching—how data flows through the system, why the cache is bypassed during writes, how reads trigger cache population, and the architectural patterns that emerge from this strategy. You'll develop deep intuition for when write-around is the optimal choice.
Write-around caching implements a simple but powerful principle: writes go to the database; reads populate the cache. This creates an asymmetric data flow where the cache serves purely as a read accelerator rather than a write buffer.
The Core Data Flow:
Understanding write-around requires visualizing two distinct paths—the write path and the read path—that operate independently of each other.
The Key Insight:
The write path and read path are completely decoupled. A write operation has no effect on the cache—it doesn't add data, doesn't update existing entries, and doesn't invalidate stale data. Only read operations interact with the cache, and only when data isn't already present (cache miss) does the cache get populated.
This decoupling is both the strength and the weakness of write-around caching. It prevents cache pollution from transient writes but creates the potential for stale reads immediately after writes.
```typescript
class WriteAroundCache<T> {
  constructor(
    private cache: CacheStore<T>,
    private database: Database<T>
  ) {}

  // Write path: Database only, cache bypassed entirely
  async write(key: string, value: T): Promise<void> {
    await this.database.write(key, value);
    // Note: No cache interaction whatsoever
    // Cache may now contain stale data for this key!
  }

  // Read path: Check cache first, populate on miss
  async read(key: string): Promise<T | null> {
    // Step 1: Try cache first
    const cachedValue = await this.cache.get(key);
    if (cachedValue !== null) {
      return cachedValue; // Cache hit - serve from cache
    }

    // Step 2: Cache miss - fetch from database
    const dbValue = await this.database.read(key);
    if (dbValue !== null) {
      // Step 3: Populate cache for future reads
      await this.cache.set(key, dbValue);
    }

    return dbValue;
  }

  // Delete path: Remove from database (optionally invalidate cache)
  async delete(key: string): Promise<void> {
    await this.database.delete(key);
    // Optional: Invalidate cache to prevent stale reads
    await this.cache.delete(key);
  }
}
```

Notice that after a write, the cache may contain stale data. If key 'user:123' is in the cache with value A, and we write value B using write-around, the cache still contains A. Any reads before the cache entry expires will return stale data. This is the fundamental trade-off of write-around—you must decide if this staleness window is acceptable for your use case.
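To make that staleness window concrete, here is a minimal sketch, assuming CacheStore<T> and Database<T> are structural interfaces with the get/set/read/write/delete methods used above; the InMemoryStore class and demonstrateStaleness function are illustrative additions, not part of the original example:

```typescript
// Hypothetical in-memory stand-in satisfying both the CacheStore<T>
// and Database<T> shapes used by WriteAroundCache above
class InMemoryStore<T> {
  private data = new Map<string, T>();

  async get(key: string): Promise<T | null> { return this.data.get(key) ?? null; }
  async read(key: string): Promise<T | null> { return this.data.get(key) ?? null; }
  async set(key: string, value: T): Promise<void> { this.data.set(key, value); }
  async write(key: string, value: T): Promise<void> { this.data.set(key, value); }
  async delete(key: string): Promise<void> { this.data.delete(key); }
}

async function demonstrateStaleness(): Promise<void> {
  const store = new WriteAroundCache<string>(
    new InMemoryStore<string>(),
    new InMemoryStore<string>()
  );

  await store.write('user:123', 'A'); // DB has A; cache is empty
  await store.read('user:123');       // Miss, so cache now holds A

  await store.write('user:123', 'B'); // DB has B; cache STILL holds A

  console.log(await store.read('user:123')); // 'A' (stale until the entry expires)
}
```

Running demonstrateStaleness shows why pure write-around pairs naturally with a TTL: the stale entry for 'user:123' resolves only when it expires or is explicitly invalidated.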
Write-around caching creates distinct architectural patterns based on how the system handles the interplay between written data, cached data, and read requests. Understanding these patterns is essential for designing robust systems.
Pattern 1: Pure Write-Around (No Invalidation)
In its purest form, write-around never touches the cache during writes. This creates maximum write performance but relies entirely on TTL (Time-To-Live) for eventual consistency.
```typescript
// Pure Write-Around: Maximum write performance
async function pureWriteAround(key: string, data: Data): Promise<void> {
  // Single database write - cache completely ignored
  await database.write(key, data);

  // Staleness resolved by:
  // 1. TTL expiration on cache entries (eventually consistent)
  // 2. Natural read patterns bringing fresh data
  // 3. Background refresh processes (if implemented)
}

// Implications:
// - Writes: O(1) database operation only
// - Reads after write: May return stale data until TTL expires
// - Best for: Write-heavy workloads with tolerance for eventual consistency
```

Pattern 2: Write-Around with Invalidation
A more conservative approach adds cache invalidation to the write path. While still bypassing cache population, it removes stale entries to ensure subsequent reads fetch fresh data from the database.
```typescript
// Write-Around with Invalidation: Balance of performance and consistency
async function writeAroundWithInvalidation(key: string, data: Data): Promise<void> {
  // Step 1: Write to database
  await database.write(key, data);

  // Step 2: Invalidate cache entry (but don't populate)
  await cache.delete(key); // Fast O(1) operation

  // Next read will be a cache miss → fetch from DB → populate cache
}

// Implications:
// - Writes: O(1) database + O(1) cache delete
// - Reads after write: Cache miss guaranteed, fresh data loaded
// - Slightly slower writes, but immediate consistency on next read
```

Pattern 3: Write-Around with Background Refresh
Advanced implementations combine write-around with background processes that proactively refresh stale cache entries based on database change events or scheduled refreshes.
```typescript
// Write-Around with Background Refresh
class WriteAroundWithRefresh {
  private changeLog: ChangeEvent[] = [];

  async write(key: string, data: Data): Promise<void> {
    await database.write(key, data);

    // Emit change event for background processing
    this.changeLog.push({ key, timestamp: Date.now(), action: 'write' });
  }

  // Background worker refreshes hot cache entries
  async backgroundRefreshWorker(): Promise<void> {
    while (true) {
      const changes = await this.consumeChanges();

      for (const change of changes) {
        const isHotKey = await this.isFrequentlyAccessed(change.key);

        if (isHotKey) {
          // Proactively refresh cache for hot keys
          const freshData = await database.read(change.key);
          if (freshData !== null) {
            await cache.set(change.key, freshData);
          }
        }
        // Cold keys can wait for natural read-through
      }

      await sleep(REFRESH_INTERVAL);
    }
  }
}
```

| Pattern | Write Latency | Read Consistency | Complexity | Best Use Case |
|---|---|---|---|---|
| Pure Write-Around | Lowest | Eventually consistent (TTL-based) | Minimal | Logging, analytics, metrics |
| With Invalidation | Low-Medium | Consistent on next read | Low | User profiles, configurations |
| With Background Refresh | Lowest | Near real-time for hot data | High | Mixed workloads, critical hot data |
Write-around caching is often implemented as the write policy within a broader cache-aside (also called lazy-loading) architecture. Understanding this relationship clarifies how write-around fits into the larger caching ecosystem.
Cache-Aside Architecture:
In cache-aside, the application is responsible for managing the cache—the cache doesn't automatically handle reads or writes. The application must:

1. Check the cache before reading from the database
2. Populate the cache after a read miss
3. Decide what, if anything, to do with the cache on writes

Write-around provides the answer to #3: do nothing (or invalidate).
Why Write-Around is Natural for Cache-Aside:
Cache-aside already assumes the application manages cache population lazily—on read misses. Write-around extends this philosophy to writes: if we don't know that data will be read, why eagerly cache it? Let the read path handle cache population when (and if) the data is actually needed.
This creates a demand-driven cache where cache contents precisely reflect actual read patterns rather than write patterns. Paired with an LRU (Least Recently Used) eviction policy, the cache behaves as a pure read cache whose contents track recent access.
```typescript
class CacheAsideWithWriteAround<T> {
  constructor(
    private cache: Cache<T>,
    private database: Database<T>,
    private ttl: number = 3600 // 1 hour default
  ) {}

  // Cache-Aside Read Pattern (Lazy Loading)
  async get(key: string): Promise<T | null> {
    // 1. Check cache first
    const cached = await this.cache.get(key);
    if (cached !== null) {
      return cached; // Cache hit
    }

    // 2. Cache miss: Load from database
    const data = await this.database.get(key);
    if (data !== null) {
      // 3. Populate cache for future reads
      await this.cache.set(key, data, this.ttl);
    }

    return data;
  }

  // Write-Around Write Pattern (Database Only)
  async set(key: string, value: T): Promise<void> {
    // Write directly to database
    await this.database.set(key, value);

    // Optionally invalidate stale cache entry
    await this.cache.delete(key);

    // Note: NOT populating cache
    // If this data is needed, get() will load it
  }

  // Bulk write example: Write-around shines here
  async bulkInsert(records: Record<string, T>[]): Promise<void> {
    // All writes go to database
    await this.database.bulkInsert(records);

    // Cache is NOT polluted with bulk data
    // Only frequently-read records will enter cache naturally
  }
}
```

Write-around embraces 'lazy' as a virtue. Why do work that might never be needed? By deferring cache population to actual read demand, the system automatically optimizes cache contents for real access patterns. This is analogous to lazy evaluation in functional programming—compute only what's necessary.
Understanding the timing profile of write-around caching is crucial for capacity planning and setting performance expectations. Let's analyze the latency characteristics for different operations.
| Operation | Cache Involvement | Expected Latency | Components |
|---|---|---|---|
| Write (pure) | None | 5-20ms | Network → Database write → Response |
| Write (with invalidation) | Delete only | 5-25ms | Network → Database write + Cache delete → Response |
| Read (cache hit) | Read only | 1-5ms | Network → Cache read → Response |
| Read (cache miss) | Read + Write | 10-30ms | Network → Cache read → DB read → Cache write → Response |
| First read after write | Miss + Populate | 10-30ms | Same as cache miss path; always incurred (guaranteed fresh data) |
The First-Read Penalty:
A defining characteristic of write-around is the first-read penalty after writes. When data is written, the cache either doesn't have it (pure write-around) or has been invalidated (write-around with invalidation). Either way, the first read operation pays the full cost of a cache miss:
Write → [Database stores data]
Read → [Cache miss] → [Database read] → [Cache population] → [Return data]
This is the price paid for write-path simplicity. Whether this trade-off is acceptable depends on the read-after-write latency requirements of your application.
```typescript
interface LatencyMetrics {
  writeLatency: number;
  readHitLatency: number;
  readMissLatency: number;
  cacheHitRate: number;
}

class InstrumentedWriteAroundCache<T> {
  private hits = 0;
  private misses = 0;

  private metrics: LatencyMetrics = {
    writeLatency: 0,
    readHitLatency: 0,
    readMissLatency: 0,
    cacheHitRate: 0,
  };

  constructor(
    private cache: CacheStore<T>,
    private database: Database<T>
  ) {}

  async write(key: string, value: T): Promise<void> {
    const start = performance.now();

    await this.database.write(key, value);
    await this.cache.delete(key); // Invalidation

    this.metrics.writeLatency = this.ema(
      this.metrics.writeLatency,
      performance.now() - start
    );
  }

  async read(key: string): Promise<T | null> {
    const start = performance.now();

    const cached = await this.cache.get(key);
    if (cached !== null) {
      this.metrics.readHitLatency = this.ema(
        this.metrics.readHitLatency,
        performance.now() - start
      );
      this.recordHit();
      return cached;
    }

    // Cache miss path
    const data = await this.database.read(key);
    if (data !== null) {
      await this.cache.set(key, data);
    }

    this.metrics.readMissLatency = this.ema(
      this.metrics.readMissLatency,
      performance.now() - start
    );
    this.recordMiss();
    return data;
  }

  // Exponential moving average for smooth metrics
  private ema(current: number, sample: number, alpha = 0.1): number {
    return alpha * sample + (1 - alpha) * current;
  }

  // Hit-rate bookkeeping for the cacheHitRate metric
  private recordHit(): void {
    this.hits += 1;
    this.metrics.cacheHitRate = this.hits / (this.hits + this.misses);
  }

  private recordMiss(): void {
    this.misses += 1;
    this.metrics.cacheHitRate = this.hits / (this.hits + this.misses);
  }
}
```

While individual operations may vary, the amortized performance of write-around depends heavily on your read/write ratio. In read-heavy workloads (90%+ reads), the cache hit rate dominates and overall latency is excellent. In write-heavy workloads, the cache provides less benefit but also causes less overhead. This balance is why write-around often outperforms alternatives in aggregate metrics.
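To see how the read/write ratio drives those aggregate numbers, here is a small worked example using the illustrative latencies from the table above (the figures are assumptions for the calculation, not measurements):

```typescript
// Expected read latency: E[latency] = h * t_hit + (1 - h) * t_miss
function expectedReadLatency(hitRate: number, hitMs: number, missMs: number): number {
  return hitRate * hitMs + (1 - hitRate) * missMs;
}

// Read-heavy workload: 95% hit rate, 3ms hits, 20ms misses
console.log(expectedReadLatency(0.95, 3, 20)); // 3.85ms average read

// Colder cache: 50% hit rate with the same per-operation costs
console.log(expectedReadLatency(0.5, 3, 20)); // 11.5ms average read
```

Writes are unaffected by the hit rate in write-around, which is exactly why the strategy holds up well as the write share of the workload grows.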
Write-around caching implements an eventually consistent model with configurable staleness windows. Understanding the consistency guarantees—and limitations—is essential for choosing this strategy appropriately.
Staleness Window Analysis:
The staleness window is the period during which cache reads may return outdated data after a write. Understanding this window is critical for determining if write-around is appropriate.
```typescript
// Staleness window depends on implementation variant

// Pure Write-Around: Staleness = TTL - elapsed time
function pureWriteAroundStaleness(ttl: number, timeSinceWrite: number): number {
  // Maximum staleness equals remaining TTL
  return Math.max(0, ttl - timeSinceWrite);
  // Example: TTL=1hour, write happened 10min ago
  // Staleness window = 50 minutes (worst case)
}

// Write-Around with Invalidation: Staleness = 0
function invalidationStaleness(): number {
  // Invalidation removes stale entry
  // Next read fetches from database
  return 0; // No staleness after invalidation
}

// Practical staleness scenarios
const scenarios = {
  // User updates profile, reads immediately
  profileUpdate: {
    pattern: "write-then-read",
    pureWriteAround: "May see old profile for up to TTL",
    withInvalidation: "Cache miss, fresh data guaranteed",
    recommendation: "Use invalidation for user-facing updates",
  },

  // Analytics event written, never read by same user
  analyticsWrite: {
    pattern: "write-only",
    pureWriteAround: "No staleness concern (no read)",
    withInvalidation: "Unnecessary overhead",
    recommendation: "Pure write-around, skip invalidation",
  },

  // Inventory update, read by many other users
  inventoryUpdate: {
    pattern: "write-then-read-by-others",
    pureWriteAround: "Other users may see stale inventory",
    withInvalidation: "All users see fresh data on next read",
    recommendation: "Use invalidation, consider short TTL",
  },
};
```

| Use Case | Staleness Tolerance | Recommended Variant | TTL Strategy |
|---|---|---|---|
| User session data | Zero (security) | Write-Through or Invalidation | Short TTL + invalidation |
| Product catalog | Minutes acceptable | Pure Write-Around | 5-15 minute TTL |
| Inventory counts | Seconds | Invalidation | 30 second TTL + invalidation |
| Analytics/logs | Hours/Never read | Pure Write-Around | Long TTL or no caching |
| Social feed | Seconds to minutes | Invalidation + Background | 1-5 minute TTL |
| Configuration | Zero | Write-Through | Long TTL + version-based invalidation |
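The configuration row above mentions version-based invalidation. Here is a hedged sketch of that idea; the versioned key scheme, the Config type, and the module-level cache and database are illustrative assumptions, not part of the original text:

```typescript
// Version-based invalidation: embed a version in the cache key.
// Bumping the version on write orphans every old entry, so long TTLs
// are safe; stale versions simply age out without being read again.
let configVersion = 1; // in practice, persisted and shared across instances

async function writeConfig(key: string, value: Config): Promise<void> {
  await database.write(key, value); // write-around: database only
  configVersion += 1;               // old 'config:v1:*' entries become unreachable
}

async function readConfig(key: string): Promise<Config | null> {
  const versionedKey = `config:v${configVersion}:${key}`;

  const cached = await cache.get(versionedKey);
  if (cached !== null) {
    return cached; // Hit on the current version only
  }

  const fresh = await database.read(key);
  if (fresh !== null) {
    await cache.set(versionedKey, fresh); // Populate under the current version
  }
  return fresh;
}
```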
Be careful with pure write-around for user-facing updates. If a user changes their email address and immediately sees their old email displayed (from cache), they'll assume the update failed. This UX degradation may outweigh the performance benefits. For user-initiated updates with immediate visibility requirements, prefer write-around with invalidation.
To truly understand write-around, we must compare it with its siblings: write-through and write-back caching. Each strategy makes different trade-offs along the dimensions of latency, consistency, and failure resilience.
| Dimension | Write-Through | Write-Back | Write-Around |
|---|---|---|---|
| Write latency | High (cache + DB sync) | Low (cache only) | Medium (DB only) |
| Cache pollution | High (all writes cached) | High (all writes cached) | Low (reads only) |
| Data loss risk | None (DB confirmed) | High (cache loss = data loss) | None (DB confirmed) |
| Read-after-write consistency | Guaranteed | Guaranteed | Eventual (or invalidate) |
| Cache hit rate (write-heavy) | High (but wasted) | High (but wasted) | Low (but efficient) |
| Implementation complexity | Low | High (async + recovery) | Low |
When Write-Around Wins:
Write-around excels in scenarios that penalize the other strategies:
High write volume with sparse reads — If you're writing 100,000 records but only 1% will ever be read, write-through wastes cache space on 99,000 entries that will expire unused. Write-around keeps the cache lean (the sketch after this list puts rough numbers on the difference).
Cost-sensitive cache infrastructure — Cache memory is expensive. If your workload would fill the cache with write-through but only reads a fraction, write-around provides better cost efficiency.
Write bursts and bulk operations — Batch imports, ETL processes, and event logging can overwhelm caches. Write-around lets these operations complete without cache interference.
Strict durability requirements — Unlike write-back, write-around confirms database persistence before acknowledging writes. There's no window for data loss.
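To put rough numbers on the first scenario above, here is a back-of-envelope sketch; the ~1 KB entry size is an illustrative assumption:

```typescript
// Cache memory consumed by a 100,000-record bulk write
const totalRecords = 100_000;
const readFraction = 0.01; // only 1% of records are ever read
const entryBytes = 1_024;  // assumed average cached entry size

// Write-through caches every write; write-around caches only what is read
const writeThroughMB = (totalRecords * entryBytes) / 1_048_576;
const writeAroundMB = (totalRecords * readFraction * entryBytes) / 1_048_576;

console.log(writeThroughMB.toFixed(1)); // "97.7" MB of cache consumed
console.log(writeAroundMB.toFixed(1));  // "1.0" MB, roughly 100x leaner
```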
Many production systems use hybrid strategies—write-through for hot, frequently-read data and write-around for cold, rarely-read data. The key is understanding your access patterns and applying the appropriate strategy to each data category.
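A minimal sketch of such a hybrid router, assuming a caller-supplied hotness heuristic and the same Cache and Database shapes used earlier (this is one possible design, not a prescribed one):

```typescript
// Hybrid write policy: write-through for hot keys, write-around
// (with invalidation) for cold keys
class HybridWritePolicy<T> {
  constructor(
    private cache: Cache<T>,
    private database: Database<T>,
    private isHot: (key: string) => Promise<boolean> // e.g., backed by access counters
  ) {}

  async write(key: string, value: T): Promise<void> {
    await this.database.write(key, value); // durability first in both paths

    if (await this.isHot(key)) {
      await this.cache.set(key, value); // write-through: keep hot data warm
    } else {
      await this.cache.delete(key);     // write-around: invalidate and stay lean
    }
  }
}
```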
Write-around caching implements a simple but powerful philosophy: let read patterns drive cache contents. By bypassing the cache during writes, this strategy prevents cache pollution and optimizes for workloads where writes don't predict reads.
What's Next:
Now that you understand how write-around works at a mechanical level, the next page explores why writes go directly to the database—examining the architectural motivations, durability guarantees, and failure mode analysis that make database-first writes the right choice for write-around systems.
You now understand the fundamental mechanics of write-around caching—the data flow patterns, timing characteristics, consistency model, and how it compares to other strategies. You can visualize how writes bypass the cache while reads populate it on demand, and you understand when this trade-off is beneficial.