The defining characteristic of write-around caching is its database-first write path. Unlike write-back caching that buffers writes in fast cache storage, write-around sends every write operation directly to the database, completely bypassing the cache layer.
This isn't just a design choice—it's a deliberate trade-off that prioritizes durability, simplicity, and cache efficiency over raw write speed. Understanding why this design works requires examining the fundamental guarantees that databases provide and the risks that cache-buffered writes introduce.
By the end of this page, you will deeply understand the architectural motivations for database-first writes—the durability guarantees, transaction semantics, failure mode analysis, and the system design principles that make write-around a compelling choice for production systems where data integrity cannot be compromised.
Durability is the 'D' in ACID, and it represents a fundamental guarantee: once data is committed, it will survive system failures—crashes, power outages, hardware failures. Databases are engineered from the ground up to provide this guarantee. Caches are not.
Why Databases Are Durable:

Database durability isn't accidental—it's the result of decades of engineering specifically for this purpose:

- **Write-ahead logging (WAL):** every change is appended to a sequential log before the write is acknowledged, so committed writes can be replayed after a crash.
- **fsync on commit:** the database forces the log to physical disk rather than trusting OS buffers.
- **Replication:** committed writes can be copied to standby nodes before acknowledgment, surviving whole-machine failures.
- **Crash recovery:** on restart, the database replays the WAL to restore a consistent, committed state.
Why Caches Are Not Durable:

Caches are optimized for speed, not persistence. This optimization comes at the cost of durability:

- **RAM-resident data:** cache contents live in volatile memory and vanish on crash, restart, or power loss.
- **Eviction by design:** under memory pressure, caches silently discard entries to make room for new ones.
- **No commit protocol:** a cache `set` acknowledges as soon as memory is updated; there is no durable log to replay after a failure.
Redis can persist data to disk, but this doesn't make it a database. Redis persistence (RDB snapshots, AOF logs) is designed for cold restart, not crash consistency. There's a window (potentially seconds) where acknowledged writes can be lost. Write-back caching through Redis means accepting this data loss window. Write-around eliminates it.
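The size of that window is visible directly in Redis's own configuration. A minimal `redis.conf` fragment (the directive names are real Redis settings; the annotations are ours):

```conf
# Enable the append-only file (AOF) persistence log
appendonly yes

# How often the AOF is fsynced to disk:
#   always   - fsync on every write (durable, but slow)
#   everysec - fsync once per second (the default: up to ~1s of
#              acknowledged writes can be lost on a crash)
#   no       - let the OS decide (largest loss window)
appendfsync everysec
```

Even with `appendfsync always`, Redis still lacks the transaction and constraint machinery discussed below; persistence alone does not make it a database.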
```typescript
// Write-Back Pattern (Through Cache)
async function writeBack_DANGEROUS(key: string, data: Data): Promise<void> {
  // Step 1: Write to cache
  await cache.set(key, data);
  // Acknowledged! User thinks data is saved

  // Step 2: Async write to database (background)
  backgroundQueue.enqueue({ key, data });

  // DANGER WINDOW: If the system crashes here:
  // - Cache has the data (in RAM, will be lost)
  // - Database doesn't have it (not yet written)
  // - User's data is LOST
}

// Write-Around Pattern (Database First)
async function writeAround_SAFE(key: string, data: Data): Promise<void> {
  // Step 1: Write to database with durability guarantee
  await database.write(key, data);
  // Data is now on disk, replicated, and safe

  // Step 2: Optionally invalidate cache
  await cache.delete(key);

  // NO DANGER WINDOW:
  // - If crash after database.write: data is safe
  // - If crash before database.write: user sees error, retries
  // - Never in a state where user thinks data is saved but it isn't
}
```

System design isn't just about the happy path—it's about understanding what happens when things go wrong. Let's analyze failure scenarios for write-around compared to write-back caching.
| Failure Event | Write-Back Impact | Write-Around Impact |
|---|---|---|
| Cache node crash | ⚠️ Uncommitted data lost | ✅ No data loss (cache is optional) |
| Database node crash | ⚠️ Writes orphaned in cache | ✅ Write fails, client retries |
| Network partition: App ↔ Cache | ⚠️ Writes fail (primary path broken) | ✅ No impact (cache not in write path) |
| Network partition: App ↔ DB | ⚠️ Writes queue in cache indefinitely | ✅ Writes fail fast, client aware |
| Memory pressure (cache eviction) | ⚠️ Data evicted before DB sync | ✅ No risk (writes not in cache) |
| Full cache restart | ⚠️ Pending writes may be lost | ✅ No writes pending (all in DB) |
| Slow async sync (write-back) | ⚠️ Growing queue, memory pressure | N/A (no async sync needed) |
The Recovery Story Matters:
When analyzing failure modes, consider the complete recovery story: How do you get back to a consistent state after the failure?
Write-Back Recovery Complexity:
```typescript
// Write-Back: What happens after cache crash?
class WriteBackCacheRecovery {
  async recoverFromCacheCrash(): Promise<void> {
    // The cache contained:
    // 1. Data that was synced to DB (safe, but now missing from cache)
    // 2. Data that was NOT yet synced (LOST FOREVER)

    // Questions you cannot answer:
    // - Which keys had pending writes?
    // - What were the values?
    // - In what order should they be applied?

    // Your options:
    // Option A: Accept data loss, hope it doesn't matter
    // Option B: Replay from WAL (if you built one... for your cache)
    // Option C: Flag all users' data as potentially stale

    throw new Error("Data recovery is impossible without external log");
  }
}

// Write-Around: What happens after cache crash?
class WriteAroundCacheRecovery {
  async recoverFromCacheCrash(): Promise<void> {
    // The cache contained:
    // - Only READ data (loaded from database)
    // - No pending writes (writes went to DB)

    // Recovery is simple:
    // 1. Restart cache with empty state
    // 2. Reads will miss and repopulate
    // 3. No data was lost because DB has everything

    console.log("Cache restarted. Will repopulate on reads. Zero data loss.");
  }
}
```

Write-back caching essentially creates a distributed transaction problem: cache write + database write must both succeed atomically. Without sophisticated two-phase commit protocols, you're accepting either data loss or data inconsistency. Write-around sidesteps this entirely—there's only one write target, so atomicity is trivial.
Database-first writes in write-around naturally inherit the full transactional capabilities of your database. This is a significant advantage over cache-first approaches that must implement transaction semantics manually.
What Database Transactions Provide:
```typescript
class WriteAroundWithTransactions {
  async transferFunds(
    fromAccount: string,
    toAccount: string,
    amount: number
  ): Promise<void> {
    // Database transaction with full ACID guarantees
    await this.database.transaction(async (tx) => {
      // Debit source account
      const source = await tx.accounts.findUnique({
        where: { id: fromAccount },
      });
      if (!source || source.balance < amount) {
        throw new Error("Insufficient funds");
      }
      await tx.accounts.update({
        where: { id: fromAccount },
        data: { balance: { decrement: amount } },
      });

      // Credit destination account
      await tx.accounts.update({
        where: { id: toAccount },
        data: { balance: { increment: amount } },
      });

      // Log the transfer
      await tx.transfers.create({
        data: { from: fromAccount, to: toAccount, amount, timestamp: new Date() },
      });
    });
    // Transaction commits HERE - all or nothing

    // Cache invalidation (after successful commit)
    await this.cache.delete(`account:${fromAccount}`);
    await this.cache.delete(`account:${toAccount}`);

    // Why this order matters:
    // 1. Database transaction ensures atomicity
    // 2. Cache invalidation happens only after success
    // 3. If cache invalidation fails, data is still correct (just stale)
  }
}

// Write-back alternative: How would you ensure atomicity?
class WriteBackTransactionProblem {
  async transferFunds_BROKEN(
    fromAccount: string,
    toAccount: string,
    amount: number
  ): Promise<void> {
    // Write to cache first...
    await this.cache.set(`account:${fromAccount}`, { balance: 100 });
    await this.cache.set(`account:${toAccount}`, { balance: 200 });

    // What if the async database sync fails?
    // - Cache says transfer happened
    // - Database says it didn't
    // - Your audit log is inconsistent
    // - Regulatory compliance violated

    // You'd need to implement:
    // - Two-phase commit
    // - Compensation transactions
    // - Saga pattern
    // - All the complexity databases already solved
  }
}
```

The Constraint Enforcement Advantage:
Databases enforce constraints that caches cannot:
With write-around, all writes go through the database, so all constraints are enforced. With write-back, you'd either skip constraints (dangerous) or duplicate enforcement logic (error-prone).
Databases have spent 40+ years perfecting transaction handling, constraint enforcement, and durability. Write-around lets you leverage this investment fully. Write-back forces you to partially replicate it in application code—and you won't do it as well.
Let's examine the hardware and network layers involved in write-around's database-first architecture. Understanding this stack explains the latency profile and failure characteristics.
The Complete Write Path:
┌─────────────┐ ┌──────────────┐ ┌─────────────────┐ ┌────────────────┐
│ Application │────▶│ Network Hop │────▶│ Database Server │────▶│ Storage Engine │
│ Code │ │ (1-5ms) │ │ (Query Parse) │ │ (WAL + Data) │
└─────────────┘ └──────────────┘ └─────────────────┘ └────────────────┘
│
▼
┌─────────────────────┐
│ Disk/SSD Persistence│
│ (fsync guarantee) │
└─────────────────────┘
```typescript
interface WritePathLatencyProfile {
  // Each stage in the write path
  stages: {
    applicationPrepare: "50-200μs"; // Serialize, validate
    networkToDatabase: "0.1-5ms"; // TCP/IP to DB server
    databaseParsing: "10-100μs"; // SQL parsing/planning
    lockAcquisition: "10μs-10ms"; // Row/table locks
    indexMaintenance: "50μs-5ms"; // B-tree updates
    walWrite: "100μs-2ms"; // Sequential log write
    fsync: "0.5-10ms"; // Force to disk
    replicationWait: "0-20ms"; // If synchronous replication
    responseToApp: "0.1-5ms"; // Network return
  };
  // Total: typically 5-40ms for a single write
  // Compare to cache write: 0.1-1ms
}

// Why is this slower than cache? And why is it worth it?
const tradeoffAnalysis = {
  // Cache write: Fast but fragile
  cacheWrite: {
    latency: "0.1-1ms",
    durability: "None (RAM only)",
    constraints: "None enforced",
    transactions: "Not supported",
  },

  // Database write: Slower but reliable
  databaseWrite: {
    latency: "5-40ms",
    durability: "Full (WAL + fsync)",
    constraints: "All enforced",
    transactions: "Full ACID",
  },

  // The insight: Durability has a price, and it's worth paying
  insight: `
    A 10x latency increase for writes is acceptable when:
    1. Write volume is manageable (not millions/second)
    2. Data loss is not acceptable (financial, legal, safety)
    3. Consistency requirements are strong (audit trails, compliance)

    Write-around makes this trade-off explicit:
    PAY the latency cost on writes, SAVE it on reads (from cache)
  `,
};
```

| Component | Latency Contribution | What It Provides | Can It Fail? |
|---|---|---|---|
| Network to DB | 1-5ms | Connectivity to storage | Yes - retry with backoff |
| Query Parsing | 10-100μs | SQL interpretation | Yes - syntax errors |
| Lock Acquisition | 10μs-10ms | Concurrency control | Yes - deadlocks, timeouts |
| Index Updates | 100μs-5ms | Query performance | No (part of transaction) |
| WAL Write | 100μs-2ms | Crash recovery capability | Rare - disk failures |
| Fsync | 0.5-10ms | Durability guarantee | Very rare - hardware |
| Replication | 0-20ms | HA, disaster recovery | Yes - network partitions |
The move from HDD to SSD dramatically reduced the fsync penalty. HDD fsync could take 10-20ms (waiting for platters). NVMe SSD fsync is often under 100μs. This makes database-first writes far more viable than a decade ago. Modern write-around latencies are quite acceptable for most applications.
Beyond durability, bypassing the cache during writes serves multiple architectural purposes. Let's examine the complete rationale.
The Cache Pollution Problem:
Consider an e-commerce system that logs every product view. With write-through caching:
```typescript
// Scenario: E-commerce product view logging

// Write-Through: Every view cached (PROBLEMATIC)
async function logProductView_WriteThrough(
  userId: string,
  productId: string
): Promise<void> {
  const viewLog = {
    userId,
    productId,
    timestamp: Date.now(),
    sessionData: getSessionData(),
  };
  const key = `view:${userId}:${productId}:${viewLog.timestamp}`;

  // Write to cache AND database
  await cache.set(key, viewLog); // Pollutes cache!
  await database.write(key, viewLog);

  // Problems:
  // 1. Millions of view logs fill the cache
  // 2. View logs are rarely read (only for analytics)
  // 3. Hot product data gets evicted to make room for cold logs
  // 4. Cache hit rate for actual product queries drops
  // 5. User experience degrades (slower product pages)
}

// Write-Around: Views go straight to DB (OPTIMAL)
async function logProductView_WriteAround(
  userId: string,
  productId: string
): Promise<void> {
  const viewLog = {
    userId,
    productId,
    timestamp: Date.now(),
    sessionData: getSessionData(),
  };
  const key = `view:${userId}:${productId}:${viewLog.timestamp}`;

  // Write to database only
  await database.write(key, viewLog);
  // Cache is untouched - retains hot product data

  // Benefits:
  // 1. Cache stays clean and efficient
  // 2. Hot product data remains in cache
  // 3. Analytics queries go to database (appropriate for batch processing)
  // 4. Cache hit rate remains high for user-facing queries
  // 5. Better user experience (fast product pages)
}
```

Write-around naturally separates hot and cold data. Hot data (frequently read) lives in cache. Cold data (rarely read) lives only in the database. This separation happens automatically—you don't need complex logic to classify data. The read path handles it: if data is read, it enters the cache; if not, it stays out.
Database-first writes dramatically simplify operations compared to cache-first architectures. Let's examine the operational advantages.
```typescript
// Write-Around: Simple operation procedures

class WriteAroundOperations {
  // Cache maintenance: Just flush and restart
  async performCacheMaintenance(): Promise<void> {
    // Step 1: Flush cache
    await this.cache.flushAll();
    console.log("Cache flushed. Will repopulate on demand.");
    // That's it. No data loss, no recovery needed.
  }

  // Scaling: Add nodes without coordination
  async addCacheNode(): Promise<void> {
    // Step 1: Add node to cluster
    await this.cacheCluster.addNode(newNode);

    // Step 2: Enable routing
    await this.loadBalancer.addBackend(newNode);

    // New node starts empty, fills on demand
    // No data migration, no write re-routing
  }

  // Technology migration: Swap cache implementations
  async migrateCacheProvider(): Promise<void> {
    // Step 1: Deploy new cache (empty)
    const newCache = new MemcachedCache();

    // Step 2: Switch traffic (instant)
    this.cache = newCache;

    // Old cache data was ephemeral anyway
    // Reads will repopulate the new cache
    // Zero-downtime migration
  }

  // Debugging: Database is always correct
  async debugInconsistency(key: string): Promise<void> {
    const cacheValue = await this.cache.get(key);
    const dbValue = await this.database.get(key);

    if (cacheValue !== dbValue) {
      // Database is correct. Cache is stale.
      // Simple fix:
      await this.cache.delete(key);
      console.log("Stale cache entry removed. Next read will be fresh.");
    }
  }
}

// Write-Back: Complex operation procedures
class WriteBackOperations {
  async performCacheMaintenance(): Promise<void> {
    // Step 1: Stop writes (user impact!)
    await this.disableWrites();

    // Step 2: Drain write queue
    while (await this.hasUnsynced()) {
      await this.syncBatch();
    }

    // Step 3: Verify all data persisted
    await this.verifyDataIntegrity();

    // Step 4: Now safe to flush
    await this.cache.flushAll();

    // Step 5: Re-enable writes
    await this.enableWrites();

    // Total maintenance window: minutes to hours
  }
}
```

Ask any on-call engineer: would you rather debug 'cache is stale, invalidate and move on' or 'write queue is backed up, need to drain without losing data'? Write-around leads to simpler alerts, faster resolution, and fewer 3 AM pages.
Despite its advantages, database-first writes aren't universally optimal. Understanding the counter-cases helps you make informed decisions.
| Requirement | Write-Around | Write-Through | Write-Back |
|---|---|---|---|
| Strong durability | ✅ Best | ✅ Good | ❌ Poor |
| Low write latency | ⚠️ Medium | ❌ Worst | ✅ Best |
| Read-after-write speed | ⚠️ Poor (first read) | ✅ Best | ✅ Best |
| Cache efficiency | ✅ Best | ❌ Cache pollution | ❌ Cache pollution |
| Operational simplicity | ✅ Best | ✅ Good | ❌ Complex |
| Transaction support | ✅ Full DB ACID | ⚠️ Limited | ❌ Complex |
Production systems often use hybrid strategies: write-around for bulk/cold data, write-through for hot paths where read-after-write matters. The key is understanding your access patterns and applying the appropriate strategy per data category.
Database-first writes in write-around caching represent a philosophy: durability over speed, simplicity over optimization. By routing all writes through the database, we gain the full power of ACID transactions, constraint enforcement, and decades of engineering focused on data integrity.
What's Next:
With a deep understanding of why writes go to the database, we'll now examine the other side of write-around: how the cache is populated on reads. The next page explores the read path—cache miss behavior, lazy loading, and the demand-driven cache population that makes write-around efficient.
You now understand the complete rationale for database-first writes—durability guarantees, failure mode resilience, transaction semantics, cache pollution prevention, and operational simplicity. You can articulate why bypassing the cache during writes is a feature, not a limitation.