In the first page, we established that write-through caching writes to both cache and database synchronously. But what does "together" really mean in the context of distributed systems? Can we truly write to two different systems simultaneously? What happens when one succeeds and the other fails?
These questions cut to the heart of distributed systems theory. The Two Generals problem teaches us that two systems communicating over an unreliable channel can never guarantee perfect agreement. Yet write-through caching manages to provide strong consistency guarantees in practice. Understanding how it achieves this—and its limitations—is essential for building reliable systems.
This page examines the coordination mechanisms that make write-through caching work, the consistency models it provides, and the subtle edge cases that can still occur.
By the end of this page, you will understand how write-through achieves its consistency guarantees without true atomic transactions, the different levels of consistency it can provide, and how to handle the edge cases where perfect coordination isn't achievable.
When we say data is written to cache and database "together," we need to be precise about what this means—and what it doesn't mean.
What "Together" DOES Mean:
What "Together" DOES NOT Mean:
Write-through caching achieves consistency through careful ordering and error handling, not through distributed transactions. It trades true atomicity for simplicity and performance. This is a pragmatic engineering choice that works well in practice but has implications you must understand.
The Consistency Window:
Between the database write completing and the cache write completing, there's a brief window where the two systems are inconsistent:
Time │ Database State │ Cache State │ System Consistency
──────┼──────────────────┼───────────────┼────────────────────
T0 │ value = A │ value = A │ ✓ Consistent
T1 │ value = B │ value = A │ ✗ Inconsistent (window opens)
T2 │ value = B │ value = B │ ✓ Consistent (window closes)
This window typically lasts milliseconds (the time to update the cache), but it exists. During this window:

- Readers served from the cache still see the old value (A)
- Readers that bypass the cache, or miss, see the new value (B)
- Two clients can therefore read different values for the same key at the same moment
For most applications, this window is negligible. But for systems requiring strict linearizability, this is a crucial consideration.
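To see exactly where the window opens and closes in code, here is a minimal sketch; `db` and `cache` are hypothetical clients, not part of any specific library:

```typescript
interface Db { write(key: string, value: string): Promise<void>; }
interface Cache { set(key: string, value: string): Promise<void>; }

async function writeThrough(db: Db, cache: Cache, key: string, value: string) {
  await db.write(key, value);
  // T1: window open - the database holds B while the cache still serves A.
  // Any cache hit in this gap returns the stale value.
  await cache.set(key, value);
  // T2: window closed - both systems agree on B.
}
```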
The coordination protocol in write-through caching follows a specific sequence designed to maximize consistency while minimizing complexity. Let's examine each phase in detail.
Phase 1: Write Initiation
The application initiates a write operation. This phase establishes the write context:
┌─────────────────────────────────────────────────────────────┐
│ WRITE INITIATION │
├─────────────────────────────────────────────────────────────┤
│ 1. Application prepares write payload │
│ 2. Optionally acquire distributed lock (if needed) │
│ 3. Generate idempotency key (for retry safety) │
│ 4. Open database connection (or acquire from pool) │
│ 5. Begin implicit or explicit transaction │
└─────────────────────────────────────────────────────────────┘
```typescript
class WriteCoordinator {
  async executeWriteThrough<T>(
    key: string,
    writeOperation: () => Promise<T>,
    options: WriteThroughOptions = {}
  ): Promise<T> {
    const {
      idempotencyKey,
      lockTimeout = 5000,
      maxRetries = 3,
      cacheErrorPolicy = 'log-and-continue'
    } = options;

    // Phase 1: Write Initiation
    const writeContext = await this.initializeWrite(key, idempotencyKey);

    if (writeContext.alreadyProcessed) {
      // Idempotency check: this operation already succeeded
      return writeContext.previousResult;
    }

    // Optional: Acquire distributed lock for strict ordering
    const lock = options.requireLock
      ? await this.acquireLock(key, lockTimeout)
      : null;

    try {
      // Phase 2: Database Write
      const result = await this.executeDatabaseWrite(
        key,
        writeOperation,
        writeContext
      );

      // Phase 3: Cache Sync
      await this.syncCache(key, result, cacheErrorPolicy);

      // Phase 4: Finalize
      await this.finalizeWrite(writeContext, result);

      return result;
    } finally {
      // Always release lock
      if (lock) await lock.release();
    }
  }

  private async executeDatabaseWrite<T>(
    key: string,
    operation: () => Promise<T>,
    context: WriteContext
  ): Promise<T> {
    const startTime = performance.now();

    try {
      const result = await operation();
      this.metrics.recordDatabaseLatency(performance.now() - startTime);

      // Record successful write for idempotency
      await this.recordWriteSuccess(context, result);

      return result;
    } catch (error) {
      this.metrics.incrementDatabaseFailures();
      // Database failed - no cache update, fail fast
      throw new DatabaseWriteError(key, error);
    }
  }

  private async syncCache<T>(
    key: string,
    value: T,
    errorPolicy: CacheErrorPolicy
  ): Promise<void> {
    const startTime = performance.now();

    try {
      await this.cache.set(key, value, this.options.ttl);
      this.metrics.recordCacheLatency(performance.now() - startTime);
    } catch (error) {
      this.metrics.incrementCacheFailures();

      switch (errorPolicy) {
        case 'log-and-continue':
          // Database succeeded, accept temporary cache miss
          this.logger.error('Cache write failed', { key, error });
          break;

        case 'throw':
          // Strict mode: fail the entire operation
          // Note: Database write already succeeded - may need compensation
          throw new CacheWriteError(key, error);

        case 'retry':
          // Attempt async retry
          this.scheduleRetry(key, value);
          break;
      }
    }
  }
}
```

Phase 2: Database Write
The database write is the authoritative operation. Key considerations:

- The database is the source of truth: if this write fails, the operation aborts before the cache is touched
- The result is recorded against the idempotency key so that retries return the previous result instead of double-applying
- Database latency and failures are tracked separately from cache metrics
Phase 3: Cache Synchronization
After database success, update the cache:

- The cache write runs only after the database confirms success, preserving the database-first ordering
- A cache failure at this point is handled according to the configured error policy: log-and-continue, throw, or retry
Phase 4: Finalization
Complete the write operation:

- Mark the idempotency record as processed so retries short-circuit
- Release any distributed lock acquired during initiation
- Record end-to-end metrics for the write
Write-through caching provides different consistency guarantees depending on implementation choices. Understanding these models helps you choose the right configuration for your use case.
| Consistency Level | Guarantee | Implementation | Trade-off |
|---|---|---|---|
| Read-After-Write | Writer sees their own writes immediately | Default write-through behavior | No additional cost; standard behavior |
| Monotonic Reads | Once a value is read, older values are never seen | Per-client session caching + versioning | Requires version tracking |
| Monotonic Writes | Writes from same client appear in order | Sequence numbers or locks | Additional coordination overhead |
| Causal Consistency | Causally related operations appear in order | Vector clocks or hybrid logical clocks | Significant complexity increase |
| Strong Consistency | All clients see writes simultaneously | Distributed locking + synchronous replication | Highest latency, lowest availability |
Read-After-Write Consistency (Default):
This is what write-through provides naturally. If Client A writes a value and then reads it, they're guaranteed to see their write:
Client A: WRITE(key, 'value1') → SUCCESS
Client A: READ(key) → 'value1' ✓ (guaranteed)
Client B: READ(key) → 'value1' or 'old_value' (depends on timing)
This works because:

- The cache is updated synchronously before the write returns success
- By the time Client A issues its read, the cache already holds 'value1'
- Client B's read can race the write, so it may land before or after the cache update (see the sketch below)
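A minimal sketch of the writer-side guarantee, using a hypothetical client whose `write` resolves only after both the database and cache updates complete:

```typescript
// Hypothetical write-through client; names are illustrative.
interface WriteThroughClient {
  write(key: string, value: string): Promise<void>; // DB write, then cache write
  read(key: string): Promise<string | null>;        // cache first, then DB
}

async function demo(client: WriteThroughClient) {
  await client.write('user:42', 'value1'); // resolves after the cache is updated
  const seen = await client.read('user:42');
  // For this client, seen === 'value1' is guaranteed: the cache was already
  // updated before write() resolved (read-after-write consistency).
}
```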
Strengthening Consistency with Versions:
For stronger guarantees, implement version-aware caching:
```typescript
interface VersionedValue<T> {
  value: T;
  version: number;
  timestamp: number;
}

class VersionedWriteThroughCache {
  async write(key: string, value: any): Promise<VersionedValue<any>> {
    // Database write with version increment
    const result = await this.db.transaction(async (tx) => {
      const current = await tx.findUnique({ where: { key } });
      const newVersion = (current?.version ?? 0) + 1;

      return tx.upsert({
        where: { key },
        create: { key, value, version: newVersion, updatedAt: new Date() },
        update: { value, version: newVersion, updatedAt: new Date() },
      });
    });

    // Cache with version
    const versionedValue: VersionedValue<any> = {
      value: result.value,
      version: result.version,
      timestamp: result.updatedAt.getTime(),
    };

    await this.cache.set(key, versionedValue);
    return versionedValue;
  }

  async read(
    key: string,
    minVersion?: number
  ): Promise<VersionedValue<any> | null> {
    const cached = await this.cache.get<VersionedValue<any>>(key);

    // Version check: cache might be stale
    if (cached && minVersion && cached.version < minVersion) {
      // Client expects a newer version; bypass cache
      const fresh = await this.loadFromDatabase(key);
      if (fresh) {
        await this.cache.set(key, fresh);
      }
      return fresh;
    }

    if (cached) return cached;

    // Cache miss
    return this.loadFromDatabase(key);
  }

  // For causal consistency, clients track their "causal cut"
  async causalRead(
    key: string,
    clientCausalState: Map<string, number>
  ): Promise<VersionedValue<any> | null> {
    const expectedVersion = clientCausalState.get(key) ?? 0;
    return this.read(key, expectedVersion);
  }
}
```

The most challenging aspect of dual-write coordination is handling partial failures—when one system succeeds and the other fails. Let's examine each scenario and the strategies to address them.
Strategy 1: Accept Temporary Inconsistency
The pragmatic approach for most systems: accept that a cache write failure after DB success results in temporary staleness.
Database Write Success
↓
Cache Write Attempt
↓
┌───┴───┐
│ FAIL │
└───┬───┘
↓
Log Error + Return Success
↓
Next Read → Cache Miss → Load from DB → Cache Populated
↓
Consistency Restored
When to use: Most applications where occasional stale reads (for seconds) are acceptable.
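A minimal sketch of Strategy 1, mirroring the `log-and-continue` branch of `syncCache` above; the `db`, `cache`, and `logger` parameters are hypothetical stand-ins:

```typescript
async function writeAcceptingStaleness<T>(
  db: { write(key: string, value: T): Promise<T> },
  cache: { set(key: string, value: T): Promise<void> },
  logger: { error(msg: string, ctx: object): void },
  key: string,
  value: T
): Promise<T> {
  const result = await db.write(key, value); // authoritative; failures propagate
  try {
    await cache.set(key, result);
  } catch (error) {
    // Accept temporary staleness: the next cache miss reloads from the DB.
    logger.error('cache write failed after DB success', { key, error });
  }
  return result; // success is reported regardless of the cache outcome
}
```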
Strategy 2: Background Retry with Dead Letter Queue
For systems needing faster convergence, implement async retry:
```typescript
class CacheRetrySystem {
  private retryQueue: Queue<CacheRetryItem>;
  private deadLetterQueue: Queue<CacheRetryItem>;

  constructor() {
    this.retryQueue = new Queue('cache-retry', {
      attempts: 3,
      backoff: { type: 'exponential', delay: 1000 }
    });
    this.deadLetterQueue = new Queue('cache-retry-dlq');

    // Process retries
    this.retryQueue.process(async (job) => {
      await this.cache.set(job.data.key, job.data.value, job.data.ttl);
    });

    // Handle final failures
    this.retryQueue.on('failed', async (job, err) => {
      if (job.attemptsMade >= job.opts.attempts) {
        await this.deadLetterQueue.add(job.data);
        this.alerting.notify('CacheWriteFailedPermanently', job.data.key);
      }
    });
  }

  async handleCacheWriteFailure(
    key: string,
    value: any,
    ttl: number
  ): Promise<void> {
    // Enqueue retry immediately
    await this.retryQueue.add({
      key,
      value,
      ttl,
      failedAt: Date.now(),
    });
  }
}

// Integration with write-through
class RobustWriteThrough {
  async write(key: string, value: any): Promise<any> {
    const result = await this.database.write(key, value);

    try {
      await this.cache.set(key, result);
    } catch (error) {
      // Schedule retry instead of failing
      await this.retrySystem.handleCacheWriteFailure(key, result, 3600);
    }

    return result;
  }
}
```

Strategy 3: Compensating Transaction (Strict Mode)
For systems requiring strict consistency, rollback the database write on cache failure:
Database Write Success
↓
Cache Write Attempt
↓
┌───┴───┐
│ FAIL │
└───┬───┘
↓
Begin Compensating Transaction
↓
Rollback Database Write
↓
Return Failure to Client
Caution: This approach is complex and introduces its own failure modes. What if the rollback fails? You need careful saga-style orchestration. Generally not recommended unless absolutely necessary.
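For completeness, a minimal sketch of the compensating path, assuming a hypothetical `db` client that can read back and restore the previous value; the nested catch is exactly the failure mode the caution above warns about:

```typescript
async function strictWrite<T>(
  db: {
    read(key: string): Promise<T | null>;
    write(key: string, value: T | null): Promise<T | null>;
  },
  cache: { set(key: string, value: T | null): Promise<void> },
  key: string,
  value: T
): Promise<T | null> {
  const previous = await db.read(key); // snapshot for compensation
  const result = await db.write(key, value);
  try {
    await cache.set(key, result);
    return result;
  } catch (cacheError) {
    try {
      await db.write(key, previous); // compensating write: restore the snapshot
    } catch (rollbackError) {
      // The rollback itself failed: neither system is now trustworthy.
      // This is the case that demands saga-style orchestration.
      throw new AggregateError([cacheError, rollbackError], 'compensation failed');
    }
    throw cacheError; // surface the original cache failure
  }
}
```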
For 95% of applications, Strategy 1 (accept temporary inconsistency) with good monitoring is the right choice. Strategy 2 is appropriate for systems with high cache write failure rates. Strategy 3 should only be used when regulatory or business requirements mandate it.
When multiple clients write to the same key simultaneously, the "together" guarantee becomes more complex. We need to ensure that both the database and cache end up with the same final value.
The Race Condition Problem:
Consider two concurrent writes:
Time │ Client A │ Client B
───────┼─────────────────────────────┼─────────────────────────
T0 │ DB.write(key, 'A') │
T1 │ │ DB.write(key, 'B')
T2 │ DB returns success │
T3 │ │ DB returns success
T4 │ Cache.set(key, 'A') │
T5 │ │ Cache.set(key, 'B')
Database final value: 'B' (last write wins)
Cache final value: 'B' (last write wins)
Result: Consistent ✓
This case works correctly. But consider a different timing:
Time │ Client A │ Client B
───────┼─────────────────────────────┼─────────────────────────
T0 │ DB.write(key, 'A') │
T1 │ │ DB.write(key, 'B')
T2 │ DB returns success │
T3 │ │ DB returns success
T4 │ │ Cache.set(key, 'B')
T5 │ Cache.set(key, 'A') │
Database final value: 'B' (Client B wrote last)
Cache final value: 'A' (Client A's cache write happened last)
Result: Inconsistent ✗
The cache and database disagree because the cache writes happened in reverse order of the database writes.
Solution 1: Optimistic Versioning
Use versions to detect and resolve conflicts:
```typescript
class VersionedCacheWrite {
  async writeThrough(key: string, value: any): Promise<void> {
    // Database write returns version
    const { data, version } = await this.db.writeWithVersion(key, value);

    // Only cache if this version is still latest
    await this.conditionalCacheSet(key, data, version);
  }

  private async conditionalCacheSet(
    key: string,
    value: any,
    version: number
  ): Promise<void> {
    // Redis Lua script for atomic check-and-set
    const script = `
      local currentVersion = redis.call('HGET', KEYS[1], 'version')
      if currentVersion == false or tonumber(currentVersion) < tonumber(ARGV[1]) then
        redis.call('HSET', KEYS[1], 'value', ARGV[2], 'version', ARGV[1])
        return 1
      end
      return 0
    `;

    await this.redis.eval(script, {
      keys: [key],
      arguments: [version.toString(), JSON.stringify(value)]
    });
  }
}
```

Solution 2: Distributed Locking
Serialize writes to prevent concurrent access:
```typescript
class LockedWriteThrough {
  async writeThrough(key: string, value: any): Promise<void> {
    const lockKey = `lock:${key}`;
    const lock = await this.redlock.acquire(lockKey, 5000);

    try {
      // Writes are now serialized
      await this.db.write(key, value);
      await this.cache.set(key, value);
    } finally {
      await lock.release();
    }
  }
}

// Using Redlock algorithm for distributed locking
class Redlock {
  constructor(private clients: Redis[], private options: RedlockOptions) {}

  async acquire(resource: string, ttl: number): Promise<Lock> {
    const start = Date.now();
    const value = crypto.randomUUID();
    let successCount = 0;

    for (const client of this.clients) {
      try {
        const acquired = await client.set(
          resource, value, 'NX', 'PX', ttl
        );
        if (acquired) successCount++;
      } catch (e) {
        // Node failed, continue
      }
    }

    const drift = ttl * 0.01 + 2; // Clock drift compensation
    const validity = ttl - (Date.now() - start) - drift;

    if (successCount >= this.quorum && validity > 0) {
      return new Lock(this, resource, value, validity);
    }

    // Failed to acquire - release any successful locks
    await this.release(resource, value);
    throw new LockError('Failed to acquire lock');
  }

  private get quorum(): number {
    return Math.floor(this.clients.length / 2) + 1;
  }
}
```

Optimistic versioning has lower overhead but may result in cache updates being skipped (the right version still wins eventually). Distributed locking has higher overhead but guarantees strict ordering. Choose based on your concurrent write frequency and consistency requirements.
How you structure data in the cache affects the "together" guarantee. Different patterns have different trade-offs for consistency and performance.
| Pattern | Structure | Pros | Cons |
|---|---|---|---|
| Whole Object | Single key contains entire entity | Simple; atomic updates; easy invalidation | Large objects waste bandwidth; no partial updates |
| Field-Level | Separate key per field | Efficient partial updates; fine-grained caching | Many keys; complex coordination; race conditions |
| Hash Maps | Redis HSET with fields | Atomic field updates; efficient storage | Requires Redis; slightly more complex |
| Composite Keys | Hierarchical key structure | Namespace isolation; pattern-based invalidation | Key management complexity; larger key space |
Recommended Pattern: Whole Object with Versioning
For write-through caching, storing whole objects with version metadata provides the best balance of consistency and simplicity:
// Cache entry structure
{
"key": "user:12345",
"value": {
"id": 12345,
"name": "Alice",
"email": "alice@example.com",
"preferences": { "theme": "dark", "notifications": true }
},
"meta": {
"version": 42,
"updatedAt": 1704672000000,
"ttl": 3600
}
}
This structure ensures:

- The version field supports the conditional cache updates shown earlier, so stale writes never overwrite newer ones
- The updatedAt timestamp enables staleness checks and debugging
- Storing the whole object means readers never observe a partially updated entity
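Expressed as TypeScript types (names are illustrative, matching the JSON above):

```typescript
interface CacheEntryMeta {
  version: number;    // monotonically increasing, assigned by the database
  updatedAt: number;  // epoch milliseconds of the last successful write
  ttl: number;        // seconds until the entry expires
}

interface CacheEntry<T> {
  key: string;        // e.g. "user:12345"
  value: T;           // the whole entity, always replaced atomically
  meta: CacheEntryMeta;
}
```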
We've explored what it means to write data to cache and database "together" in write-through caching. Let's consolidate the key insights:

- "Together" means ordered, synchronous writes (database first, then cache), not an atomic distributed transaction
- A brief consistency window exists between the two writes; most systems can safely ignore it, but strict linearizability cannot
- Partial failures are usually best handled by accepting temporary inconsistency with good monitoring; background retries and compensating transactions serve stricter needs
- Concurrent writes require optimistic versioning or distributed locking to keep the cache and database converging on the same final value
What's Next:
With a deep understanding of how cache and database are coordinated, we're ready to explore the consistency benefits of write-through caching. The next page examines why write-through provides stronger guarantees than other caching patterns and when these benefits matter most.
You now understand the nuances of dual-write coordination in write-through caching—the ordering guarantees, consistency models, partial failure handling, and concurrent write strategies. This knowledge prepares you to implement robust write-through caching in production systems.