How do you ensure that your cache and database always agree? The patterns we've explored so far—cache-aside and read-through—focus on the read path. They populate the cache with data from the database. But what happens when data changes? How do you prevent the cache from serving stale data after an update?
The write-through cache pattern addresses this by making the cache an integral part of every write operation. When data is written, it goes through the cache to the database. The cache intercepts writes, updates its own storage, and synchronously persists to the underlying store. Only after both succeed does the write complete.
This synchronous, two-phase write ensures the cache and database are always consistent—at the cost of write latency.
By the end of this page, you will understand the write-through pattern deeply—its synchronous write semantics, consistency guarantees, performance implications, and exactly when it's the right choice versus other write-handling strategies.
In the write-through pattern, the cache sits in the write path between the application and the database. Every write operation follows a strict sequence:

1. The application issues a write to the cache.
2. The cache updates its own copy of the data.
3. The cache synchronously writes the data to the database.
4. Only after the database confirms the write does the cache acknowledge success to the application.
The key characteristic: synchronous persistence. The application blocks until the database write completes. The cache doesn't just buffer the write—it ensures durability before returning.
This differs fundamentally from write-behind (which writes asynchronously) and cache-aside (which leaves write handling to the application).
Just like read-through, the name describes data flow. Writes go "through" the cache to reach the database. The cache isn't a side storage—it's on the critical path. This makes the cache essential for writes, not just an optimization for reads.
Understanding the precise mechanics of write-through is essential for reasoning about its consistency properties and failure modes.
```typescript
/**
 * Write-Through Cache Implementation
 *
 * Writes to cache and database synchronously, ensuring consistency.
 */
interface WriteThroughCache<K, V> {
  get(key: K): Promise<V | null>;
  set(key: K, value: V): Promise<void>;
  delete(key: K): Promise<void>;
}

type DatabaseWriter<K, V> = {
  insert: (key: K, value: V) => Promise<void>;
  update: (key: K, value: V) => Promise<void>;
  delete: (key: K) => Promise<void>;
  get: (key: K) => Promise<V | null>;
};

class WriteThroughCacheImpl<K extends string, V> implements WriteThroughCache<K, V> {
  private cache: Map<K, V>;
  private database: DatabaseWriter<K, V>;
  private ttlSeconds: number;
  private expiryTimes: Map<K, number>;

  constructor(database: DatabaseWriter<K, V>, ttlSeconds: number) {
    this.cache = new Map();
    this.database = database;
    this.ttlSeconds = ttlSeconds;
    this.expiryTimes = new Map();
  }

  async get(key: K): Promise<V | null> {
    // Check cache first
    if (this.isValid(key)) {
      return this.cache.get(key) ?? null;
    }

    // Cache miss or expired - read from database
    const value = await this.database.get(key);
    if (value !== null) {
      this.cacheLocally(key, value);
    }
    return value;
  }

  async set(key: K, value: V): Promise<void> {
    // Write-through: update cache, then database synchronously

    // Step 1: Update local cache
    this.cacheLocally(key, value);

    try {
      // Step 2: Write through to database (synchronous)
      // Determine if this is insert or update
      const exists = await this.database.get(key);
      if (exists) {
        await this.database.update(key, value);
      } else {
        await this.database.insert(key, value);
      }
    } catch (error) {
      // Database write failed - roll back cache to maintain consistency
      this.cache.delete(key);
      this.expiryTimes.delete(key);
      throw new WriteThroughError(
        `Failed to write through to database for key ${key}`,
        error as Error
      );
    }
  }

  async delete(key: K): Promise<void> {
    // Delete from both cache and database

    // Step 1: Remove from cache
    this.cache.delete(key);
    this.expiryTimes.delete(key);

    // Step 2: Delete from database (synchronous)
    try {
      await this.database.delete(key);
    } catch (error) {
      // Database delete failed - but cache is already clear
      // This could leave orphaned database data
      throw new WriteThroughError(
        `Failed to delete from database for key ${key}`,
        error as Error
      );
    }
  }

  private cacheLocally(key: K, value: V): void {
    this.cache.set(key, value);
    this.expiryTimes.set(key, Date.now() + (this.ttlSeconds * 1000));
  }

  private isValid(key: K): boolean {
    const expiry = this.expiryTimes.get(key);
    return expiry !== undefined && expiry > Date.now();
  }
}

class WriteThroughError extends Error {
  constructor(message: string, public readonly cause: Error) {
    super(message);
    this.name = 'WriteThroughError';
  }
}
```

Critical observations:
Rollback on failure — If the database write fails after updating the cache, we must roll back the cache to maintain consistency. Otherwise, the cache would have data the database doesn't.
The application only sees one interface — The set() method abstracts both cache and database operations. The application doesn't know about the database write.
Latency includes database round-trip — Every set() waits for the database. Write-through cannot be faster than your database.
TTL still applies — Even with write-through, cache entries can expire. This handles cases where the database is updated through other channels.
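These observations can be exercised end to end. The sketch below is a minimal, self-contained illustration rather than the full implementation above: `FlakyDatabase` is a hypothetical in-memory store whose writes can be forced to fail, so the rollback behavior is directly observable.

```typescript
// Minimal write-through sketch demonstrating rollback on database failure.
// FlakyDatabase is a hypothetical stand-in whose writes can be made to fail.
class FlakyDatabase {
  private rows = new Map<string, string>();
  failNextWrite = false;

  async write(key: string, value: string): Promise<void> {
    if (this.failNextWrite) {
      this.failNextWrite = false;
      throw new Error('database unavailable');
    }
    this.rows.set(key, value);
  }

  async read(key: string): Promise<string | null> {
    return this.rows.get(key) ?? null;
  }
}

class TinyWriteThroughCache {
  private cache = new Map<string, string>();
  constructor(private db: FlakyDatabase) {}

  async set(key: string, value: string): Promise<void> {
    const previous = this.cache.get(key); // remember for rollback
    this.cache.set(key, value);           // step 1: update cache
    try {
      await this.db.write(key, value);    // step 2: synchronous DB write
    } catch (err) {
      // Roll back so the cache never holds data the database lacks
      if (previous === undefined) this.cache.delete(key);
      else this.cache.set(key, previous);
      throw err;
    }
  }

  peek(key: string): string | undefined {
    return this.cache.get(key);
  }
}

async function demo(): Promise<void> {
  const db = new FlakyDatabase();
  const cache = new TinyWriteThroughCache(db);

  await cache.set('user:1', 'Ada');                  // cache and DB both updated
  db.failNextWrite = true;
  await cache.set('user:1', 'Bob').catch(() => {});  // DB write fails
  // Cache rolled back to the last durable value
  console.log(cache.peek('user:1'));    // 'Ada'
  console.log(await db.read('user:1')); // 'Ada'
}

demo();
```

After the failed write, cache and database agree on the last durable value, which is exactly the consistency property the pattern promises.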
Write-through's primary value proposition is consistency. But what kind of consistency does it actually guarantee? And what happens when things fail?
What Write-Through Guarantees:

- Cache-database agreement after a successful write: both hold the new value before the caller is acknowledged.
- Read-your-writes through the cache: a read immediately after a successful write returns the new value.
- Immediate durability: data is persisted to the database before the write completes.
What Write-Through Does NOT Guarantee:

- Atomicity across cache and database: a crash between the two writes can leave them briefly inconsistent until rollback or TTL expiry repairs it.
- Consistency across multiple cache nodes: other instances' local caches can hold stale copies until invalidated.
- Protection against out-of-band writes: updates that bypass the cache are only reconciled when entries expire.
| Failure Point | What Happens | Data State | Recommended Handling |
|---|---|---|---|
| Cache write fails | Operation fails immediately | Neither updated | Propagate error to caller |
| Database write fails after cache update | Must rollback cache | Cache dirty → must clean | Rollback cache, propagate error |
| Cache crashes after successful DB write | Cache data lost | DB has data, cache empty | Reads will reload from DB (read-through) |
| Network partition during DB write | Operation times out | Unknown state | Retry with idempotency checks |
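The last row of the table deserves a sketch. After a timeout, the client cannot know whether the write was applied, so blind retries risk double-applying. A client-generated idempotency key makes retries safe. Everything below is a hypothetical stand-in: `IdempotentWriter` and its in-memory dedup set model a database-side mechanism such as a unique constraint on a request id.

```typescript
// Sketch of retry-with-idempotency for the "unknown state" failure row.
// The requestId dedup set is hypothetical; real systems often use a
// unique constraint or conditional write on the database side.
class IdempotentWriter {
  private applied = new Set<string>();   // stands in for a DB-side dedup table
  private rows = new Map<string, string>();
  failuresRemaining = 0;                 // simulate transient network errors

  async write(requestId: string, key: string, value: string): Promise<void> {
    if (this.failuresRemaining > 0) {
      this.failuresRemaining--;
      // Worst case for retries: the write lands but the ack is lost
      this.applied.add(requestId);
      this.rows.set(key, value);
      throw new Error('network timeout');
    }
    if (this.applied.has(requestId)) return; // retry of an applied write: no-op
    this.applied.add(requestId);
    this.rows.set(key, value);
  }

  read(key: string): string | undefined {
    return this.rows.get(key);
  }
}

async function writeWithRetry(
  db: IdempotentWriter, key: string, value: string, attempts = 3
): Promise<void> {
  const requestId = `${key}:${value}:req-1`; // stable across retries
  for (let i = 0; i < attempts; i++) {
    try {
      await db.write(requestId, key, value);
      return;
    } catch {
      // Retry with the SAME requestId, so a duplicate apply is harmless
    }
  }
  throw new Error('write failed after retries');
}
```

Because the request id is stable across retries, retrying after an ambiguous timeout is safe: the write is applied at most once.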
```typescript
/**
 * Enhanced Write-Through with Better Failure Handling
 *
 * Uses database-first approach to minimize inconsistency windows
 */
class RobustWriteThroughCache<K extends string, V> {
  private cache: Map<K, V>;
  private database: DatabaseWriter<K, V>;
  private pendingWrites: Set<K>; // Track in-flight writes

  constructor(database: DatabaseWriter<K, V>) {
    this.cache = new Map();
    this.database = database;
    this.pendingWrites = new Set();
  }

  async set(key: K, value: V): Promise<void> {
    // Prevent concurrent writes to same key
    if (this.pendingWrites.has(key)) {
      throw new ConcurrentWriteError(
        `Write already in progress for key ${key}`
      );
    }

    this.pendingWrites.add(key);

    try {
      // ALTERNATIVE: Database-first approach
      // Write to database first to ensure durability
      // Only update cache after confirmed persistence

      // Step 1: Persist to database first
      const exists = await this.database.get(key);
      if (exists) {
        await this.database.update(key, value);
      } else {
        await this.database.insert(key, value);
      }

      // Step 2: Update cache only after database success
      // If this fails, cache miss will reload from DB (acceptable)
      this.cache.set(key, value);
    } finally {
      this.pendingWrites.delete(key);
    }
  }

  /**
   * Idempotent write with version checking
   * Useful for retry scenarios
   */
  async setWithVersion(
    key: K,
    value: V & { version: number }
  ): Promise<boolean> {
    const current = await this.database.get(key) as (V & { version: number }) | null;

    // Optimistic locking: only update if version matches
    if (current && current.version !== value.version - 1) {
      return false; // Version mismatch - concurrent modification
    }

    // Proceed with write
    await this.set(key, value);
    return true;
  }
}

class ConcurrentWriteError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'ConcurrentWriteError';
  }
}

/**
 * Transactional Write-Through (when database supports it)
 *
 * Uses database transactions to ensure atomicity
 */
class TransactionalWriteThroughCache<K extends string, V> {
  private cache = new Map<K, V>();

  async setWithContext(
    key: K,
    value: V,
    context: TransactionContext
  ): Promise<void> {
    // Participate in an existing transaction
    // The cache update is deferred until transaction commits
    const txn = context.transaction;

    // Register the cache update to run on commit
    txn.onCommit(() => {
      this.cache.set(key, value);
    });

    // The database write happens within the transaction
    await txn.execute(async (db) => {
      const exists = await db.get(key);
      if (exists) {
        await db.update(key, value);
      } else {
        await db.insert(key, value);
      }
    });
  }
}
```

You cannot guarantee atomic updates across cache and database without distributed transactions. Write-through minimizes inconsistency but doesn't eliminate it. For critical financial data, consider using the database as the sole source of truth with cache-aside for reads, or implement proper distributed transactions.
Write-through and read-through are natural partners. Together, they create a cache that handles both reads and writes transparently, positioning the cache as the complete data access layer.
Read-Through + Write-Through (Unified Cache):
The application never directly interacts with the database for either reads or writes. The cache becomes a unified interface with consistent, predictable behavior.
```typescript
/**
 * Unified Read-Through + Write-Through Cache
 *
 * A complete data access layer that handles both reads and writes.
 */
interface UnifiedCache<K, V> {
  get(key: K): Promise<V | null>;
  set(key: K, value: V): Promise<void>;
  update(key: K, updater: (current: V | null) => V): Promise<void>;
  delete(key: K): Promise<void>;
}

interface StorageBackend<K, V> {
  read(key: K): Promise<V | null>;
  write(key: K, value: V, isNew: boolean): Promise<void>;
  remove(key: K): Promise<void>;
}

class ReadWriteThroughCache<K extends string, V> implements UnifiedCache<K, V> {
  private cache: Map<K, { value: V; expiresAt: number }>;
  private storage: StorageBackend<K, V>;
  private ttlMs: number;
  private inFlightLoads: Map<K, Promise<V | null>>;

  constructor(storage: StorageBackend<K, V>, ttlSeconds: number) {
    this.cache = new Map();
    this.storage = storage;
    this.ttlMs = ttlSeconds * 1000;
    this.inFlightLoads = new Map();
  }

  // ===== Read-Through: Get =====

  async get(key: K): Promise<V | null> {
    const now = Date.now();
    const cached = this.cache.get(key);

    // Cache hit
    if (cached && cached.expiresAt > now) {
      return cached.value;
    }

    // Cache miss or expired - load through to storage
    return this.loadThrough(key);
  }

  private async loadThrough(key: K): Promise<V | null> {
    // Request coalescing
    const existing = this.inFlightLoads.get(key);
    if (existing) {
      return existing;
    }

    const loadPromise = this.executeLoad(key);
    this.inFlightLoads.set(key, loadPromise);

    try {
      return await loadPromise;
    } finally {
      this.inFlightLoads.delete(key);
    }
  }

  private async executeLoad(key: K): Promise<V | null> {
    const value = await this.storage.read(key);
    if (value !== null) {
      this.cacheLocally(key, value);
    }
    return value;
  }

  // ===== Write-Through: Set =====

  async set(key: K, value: V): Promise<void> {
    // Determine if this is a new record
    const existing = this.cache.get(key) ?? await this.storage.read(key);
    const isNew = existing === null;

    // Write through to storage
    await this.storage.write(key, value, isNew);

    // Update cache after successful storage write
    this.cacheLocally(key, value);
  }

  // ===== Write-Through: Update =====

  async update(key: K, updater: (current: V | null) => V): Promise<void> {
    // Get current value (uses read-through)
    const current = await this.get(key);

    // Apply update function
    const newValue = updater(current);

    // Write through
    await this.set(key, newValue);
  }

  // ===== Write-Through: Delete =====

  async delete(key: K): Promise<void> {
    // Remove from storage first
    await this.storage.remove(key);

    // Then remove from cache
    this.cache.delete(key);
  }

  private cacheLocally(key: K, value: V): void {
    this.cache.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// ===== Usage Example =====

const userStorage: StorageBackend<string, User> = {
  read: async (id) => database.query('SELECT * FROM users WHERE id = $1', [id]),
  write: async (id, user, isNew) => {
    if (isNew) {
      await database.query(
        'INSERT INTO users (id, name, email) VALUES ($1, $2, $3)',
        [id, user.name, user.email]
      );
    } else {
      await database.query(
        'UPDATE users SET name = $2, email = $3 WHERE id = $1',
        [id, user.name, user.email]
      );
    }
  },
  remove: async (id) => database.query('DELETE FROM users WHERE id = $1', [id])
};

const userCache = new ReadWriteThroughCache<string, User>(userStorage, 3600);

// Application code - completely agnostic to database
const user = await userCache.get('123');                          // Read-through
await userCache.set('123', updatedUser);                          // Write-through
await userCache.update('123', u => ({ ...u, name: 'New Name' })); // Read + Write
await userCache.delete('123');                                    // Write-through delete
```

When you combine read-through and write-through, the cache becomes an abstraction layer over your data store. This enables powerful patterns: swapping storage backends, adding metrics transparently, implementing caching policies without changing application code, and more.
Write-through's consistency comes at a performance cost. Every write operation incurs the latency of database persistence. For write-heavy workloads, this can become a significant bottleneck.
Performance characteristics:
| Metric | Write-Through | Write-Behind | Cache-Aside (invalidate) |
|---|---|---|---|
| Write Latency | High (DB round-trip) | Low (cache only) | Medium (DB + delete) |
| Read Latency (cache hit) | Low | Low | Low |
| Read Latency (cache miss) | Medium (DB read) | Medium | Medium |
| Consistency | Strong (within cache) | Eventual | Eventual |
| Durability Guarantee | Immediate | Delayed | Immediate |
| Write Throughput | Limited by DB | Limited by queue | Limited by DB |
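A back-of-envelope model makes the write-latency rows concrete. The 0.2 ms cache and 5 ms database figures below are illustrative assumptions, not measurements:

```typescript
// Back-of-envelope write latency model (all figures are assumptions).
// Integer microseconds keep the arithmetic exact.
const cacheWriteUs = 200;   // assumed in-memory cache write: 0.2 ms
const dbWriteUs = 5_000;    // assumed database commit round-trip: 5 ms

// Write-through acknowledges after BOTH writes; write-behind after the cache only.
const writeThroughUs = cacheWriteUs + dbWriteUs;
const writeBehindAckUs = cacheWriteUs;

// Sequential writes on a single connection:
const writeThroughPerSec = Math.floor(1_000_000 / writeThroughUs);
const writeBehindPerSec = Math.floor(1_000_000 / writeBehindAckUs);

console.log(writeThroughPerSec); // 192, bounded by the database
console.log(writeBehindPerSec);  // 5000, bounded by the cache
```

Because the database round-trip dominates, write-through's per-connection throughput is essentially the database's; batching or concurrent connections are needed to go faster.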
Optimization strategies for write-through:

- Batch writes: collect writes over a short window and flush them in one database round-trip (the implementation below takes this approach).
- Use upserts where the database supports them, instead of a read-before-write existence check, halving the round-trips per write.
- Run writes to independent keys concurrently rather than serially, so latency is paid in parallel.
```typescript
/**
 * Batched Write-Through Cache
 *
 * Collects writes over a short window and flushes to database in batches.
 * This is a hybrid approach between pure write-through and write-behind.
 */
class BatchedWriteThroughCache<K extends string, V> {
  private cache: Map<K, V>;
  private writeBuffer: Map<K, V>;
  private flushPromise: Promise<void> | null;
  private batchSize: number;
  private maxDelayMs: number;
  private flushTimer: NodeJS.Timeout | null;

  constructor(
    // Storage must also support bulk writes for batching to pay off
    private storage: StorageBackend<K, V> & { writeBatch(entries: [K, V][]): Promise<void> },
    options: { batchSize: number; maxDelayMs: number }
  ) {
    this.cache = new Map();
    this.writeBuffer = new Map();
    this.flushPromise = null;
    this.batchSize = options.batchSize;
    this.maxDelayMs = options.maxDelayMs;
    this.flushTimer = null;
  }

  async set(key: K, value: V): Promise<void> {
    // Update cache immediately (fast path)
    this.cache.set(key, value);

    // Add to write buffer
    this.writeBuffer.set(key, value);

    // Schedule or trigger flush
    if (this.writeBuffer.size >= this.batchSize) {
      // Batch is full - flush immediately
      await this.flush();
    } else if (!this.flushTimer) {
      // Start timer for delayed flush
      this.scheduleFlush();
    }
  }

  private scheduleFlush(): void {
    this.flushTimer = setTimeout(async () => {
      await this.flush();
    }, this.maxDelayMs);
  }

  private async flush(): Promise<void> {
    // Wait for any in-progress flush
    if (this.flushPromise) {
      await this.flushPromise;
    }

    // Clear timer
    if (this.flushTimer) {
      clearTimeout(this.flushTimer);
      this.flushTimer = null;
    }

    // Nothing to flush
    if (this.writeBuffer.size === 0) {
      return;
    }

    // Swap buffers atomically
    const toFlush = this.writeBuffer;
    this.writeBuffer = new Map();

    // Execute batch write
    this.flushPromise = this.executeBatchWrite(toFlush);
    try {
      await this.flushPromise;
    } finally {
      this.flushPromise = null;
    }
  }

  private async executeBatchWrite(batch: Map<K, V>): Promise<void> {
    const entries = Array.from(batch.entries());

    // Use database batch/bulk insert
    await this.storage.writeBatch(entries);

    console.log(`Flushed batch of ${entries.length} writes`);
  }

  // Ensure pending writes are flushed on shutdown
  async close(): Promise<void> {
    await this.flush();
  }
}

// Usage: batching trades a little durability lag for throughput
const cache = new BatchedWriteThroughCache(storage, {
  batchSize: 100,  // Flush when 100 writes accumulated
  maxDelayMs: 50   // Or flush after 50ms, whichever comes first
});

await cache.set('user:1', user1); // May be acknowledged before the flush
await cache.set('user:2', user2); // Batched with user:1 if within the window
```

Batched write-through blurs the line between write-through and write-behind. A caller blocks only when its write fills the batch and triggers an immediate flush; otherwise the write is acknowledged while the flush is still pending, so durability is slightly delayed. Writes within the batch window share one database round-trip, giving better throughput but weaker consistency than pure write-through.
Write-through is a powerful pattern but isn't universally optimal. Its consistency guarantees come at a performance cost that not all applications can afford.
Write-through is ideal when:

- Reads follow writes closely and must see the new value (read-your-writes).
- Write volume is moderate, so paying a database round-trip per write is acceptable.
- Stale cache data is costly or incorrect, as with carts, profiles, and configuration.
- You already use read-through and want one consistent data access layer for both paths.
| Scenario | Recommended Pattern | Rationale |
|---|---|---|
| User profile updates | Write-Through | Users read their profile immediately after updating |
| Analytics events | Write-Behind | High volume, no read-back, eventual consistency OK |
| E-commerce cart | Write-Through | Cart must be consistent—visible to user and checkout |
| Session storage | Write-Behind | High update frequency, eventual consistency acceptable |
| Configuration changes | Write-Through + Broadcast | Must be immediately consistent across all readers |
| Log aggregation | Direct to database | Write-only, never cached, high volume |
Implementing write-through in distributed systems introduces additional challenges. Multiple cache nodes, network partitions, and concurrent writers complicate the consistency story.
Distributed Write-Through Challenges:

- Cache coherency: each application instance may hold its own local copy, so a write on one node must invalidate or update the others.
- Invalidation delivery: pub/sub messages can be delayed or lost, leaving windows of staleness on other nodes.
- Concurrent writers: two nodes writing the same key can interleave their cache and database updates in different orders.
- Network partitions: a node cut off from the shared cache or database cannot complete writes and may serve stale local data.
```typescript
/**
 * Distributed Write-Through with Cache Coherency
 *
 * Uses a shared cache tier (Redis) with pub/sub for invalidation.
 */
class DistributedWriteThroughCache<K extends string, V> {
  private localCache: Map<K, { value: V; expiresAt: number }>; // L1 cache
  private sharedCache: RedisClient;                            // L2 cache (shared)
  private database: DatabaseWriter<K, V>;
  private pubsub: PubSubClient;
  private instanceId: string;

  constructor(
    sharedCache: RedisClient,
    database: DatabaseWriter<K, V>,
    pubsub: PubSubClient
  ) {
    this.localCache = new Map();
    this.sharedCache = sharedCache;
    this.database = database;
    this.pubsub = pubsub;
    this.instanceId = generateInstanceId();

    // Listen for invalidation messages from other nodes
    this.pubsub.subscribe('cache:invalidate', this.handleInvalidation.bind(this));
  }

  async get(key: K): Promise<V | null> {
    // L1: Check local cache
    const local = this.localCache.get(key);
    if (local && local.expiresAt > Date.now()) {
      return local.value;
    }

    // L2: Check shared cache
    const shared = await this.sharedCache.get<V>(key);
    if (shared !== null) {
      this.cacheLocally(key, shared);
      return shared;
    }

    // Miss: Load from database and populate both caches
    const value = await this.database.get(key);
    if (value !== null) {
      await this.sharedCache.set(key, value, 3600);
      this.cacheLocally(key, value);
    }
    return value;
  }

  async set(key: K, value: V): Promise<void> {
    // Determine if insert or update
    const exists = await this.database.get(key);

    // Write-through to database (synchronous)
    if (exists) {
      await this.database.update(key, value);
    } else {
      await this.database.insert(key, value);
    }

    // Update shared cache (L2)
    await this.sharedCache.set(key, value, 3600);

    // Update local cache (L1)
    this.cacheLocally(key, value);

    // Broadcast invalidation to other nodes
    await this.pubsub.publish('cache:invalidate', {
      key,
      sourceInstance: this.instanceId,
      action: 'update'
    });
  }

  private handleInvalidation(message: InvalidationMessage): void {
    // Don't process our own messages
    if (message.sourceInstance === this.instanceId) {
      return;
    }

    // Invalidate local cache - next read will pull from shared cache
    this.localCache.delete(message.key as K);
    console.log(`Invalidated local cache for key ${message.key} from instance ${message.sourceInstance}`);
  }

  private cacheLocally(key: K, value: V): void {
    this.localCache.set(key, {
      value,
      expiresAt: Date.now() + 60000 // 1 minute local TTL
    });
  }
}

interface InvalidationMessage {
  key: string;
  sourceInstance: string;
  action: 'update' | 'delete';
}
```

In distributed write-through, consider a two-tier cache: fast local caches (L1) in each application instance, and a shared cache (L2) like Redis. Writes update L2 and broadcast invalidations. This balances performance (local reads) with consistency (shared writes).
The write-through cache pattern ensures consistency by synchronously persisting every write to the database before acknowledging success. This trades write latency for strong consistency guarantees.
What's Next:
In the next and final page of this module, we'll explore the write-behind cache pattern (also called write-back), where writes are acknowledged immediately and persisted asynchronously. This trades consistency for performance—the inverse of write-through's tradeoffs.
You now understand the write-through cache pattern—how writes flow synchronously through the cache to the database, the consistency guarantees this provides, and when this pattern is the right choice. This completes your understanding of cache write handling, with write-behind remaining as the final piece.