In a strongly consistent system, the mental model for reads and writes is simple: a write completes, subsequent reads see that write. But in eventually consistent systems, this intuitive model breaks down. A write might complete successfully, yet the next read returns stale data. Two clients reading the same key might see different values.
This isn't a bug—it's the fundamental nature of eventual consistency. The challenge is designing read and write patterns that work correctly despite this uncertainty. How do you build applications that remain correct when reads might be stale and writes might conflict?
By the end of this page, you will understand how to configure quorum-based reads and writes for tunable consistency, implement session guarantees like read-your-writes and monotonic reads, handle stale reads gracefully in application logic, and design write patterns that work correctly with eventual consistency.
The key insight is that eventual consistency doesn't mean "no consistency." Through careful design of read and write patterns, you can achieve practical consistency guarantees that satisfy application requirements while preserving the availability and performance benefits of eventual consistency.
Quorum-based operations are the foundation of tunable consistency in eventually consistent systems. By controlling how many replicas participate in reads and writes, you can balance consistency, availability, and latency.
The N, W, R Parameters:
N: the total number of replicas that store each key (the replication factor).
W: the number of replicas that must acknowledge a write before it is considered successful.
R: the number of replicas that must respond before a read returns a result.
The Fundamental Rule:
If W + R > N, reads always overlap with at least one replica that has the latest write.
This condition is necessary (but not sufficient) for strong consistency. Let's explore what different configurations provide (the table below assumes N = 3):
| Configuration | W + R | Consistency | Availability | Use Case |
|---|---|---|---|---|
| W=1, R=1 | 2 (< N) | Eventual | Highest | Metrics, logs, non-critical data |
| W=1, R=3 | 4 (> N) | Read-side strong | Lower read availability | Read-heavy, need current data |
| W=3, R=1 | 4 (> N) | Write-side strong | Lower write availability | Write-heavy, stale reads OK |
| W=2, R=2 | 4 (> N) | Balanced | Moderate | General purpose |
| W=3, R=3 | 6 (> N) | Strongest | Lowest | Critical data (rare in EC systems) |
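The intersection rule behind this table can be verified by brute force for small N: a read quorum is guaranteed to overlap a write quorum exactly when W + R > N, by the pigeonhole principle. A minimal sketch (the function names are illustrative):

```typescript
// Brute-force check of the quorum intersection rule for small N: every
// write quorum of size W shares at least one replica with every read
// quorum of size R exactly when W + R > N. Replicas are bitmask positions.
function quorumsAlwaysOverlap(N: number, W: number, R: number): boolean {
  for (let w = 0; w < 1 << N; w++) {
    if (popcount(w) !== W) continue;
    for (let r = 0; r < 1 << N; r++) {
      if (popcount(r) !== R) continue;
      if ((w & r) === 0) return false; // found disjoint quorums
    }
  }
  return true;
}

function popcount(x: number): number {
  let c = 0;
  for (; x; x >>= 1) c += x & 1;
  return c;
}

console.log(quorumsAlwaysOverlap(3, 2, 2)); // true  (W+R = 4 > 3)
console.log(quorumsAlwaysOverlap(3, 1, 1)); // false (W+R = 2 ≤ 3)
```

Note that W + R = N is not enough: for N=5, W=2, R=3, the quorums {1,2} and {3,4,5} are disjoint.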
```typescript
interface QuorumConfig {
  N: number; // Total replicas
  W: number; // Write quorum
  R: number; // Read quorum
}

class QuorumCoordinator {
  private replicas: ReplicaNode[];
  private config: QuorumConfig;

  constructor(replicas: ReplicaNode[], config: QuorumConfig) {
    if (replicas.length !== config.N) {
      throw new Error(`Expected ${config.N} replicas, got ${replicas.length}`);
    }
    this.replicas = replicas;
    this.config = config;
  }

  async write(key: string, value: Value): Promise<WriteResult> {
    const timestamp = Date.now();
    const versionedValue = { ...value, timestamp };

    // Send the write to all replicas and count acknowledgments.
    // (This sketch waits for every replica to settle; a production
    // coordinator returns as soon as W acknowledgments arrive.)
    const results = await Promise.allSettled(
      this.replicas.map(r => r.write(key, versionedValue))
    );

    const successes = results.filter(r => r.status === 'fulfilled');
    if (successes.length >= this.config.W) {
      return { success: true, acknowledgments: successes.length };
    } else {
      throw new Error(`Write quorum not met: ${successes.length}/${this.config.W}`);
    }
  }

  async read(key: string): Promise<Value> {
    // Send the read to all replicas; require at least R responses
    const results = await Promise.allSettled(
      this.replicas.map(r => r.read(key).then(value => ({ replica: r, value })))
    );

    const successes = results
      .filter(r => r.status === 'fulfilled')
      .map(r => (r as PromiseFulfilledResult<any>).value);

    if (successes.length < this.config.R) {
      throw new Error(`Read quorum not met: ${successes.length}/${this.config.R}`);
    }

    // Find the most recent value among the responses
    const mostRecent = successes.reduce((latest, current) =>
      current.value.timestamp > latest.value.timestamp ? current : latest
    );

    // Optionally trigger read repair for stale replicas
    this.triggerReadRepair(key, mostRecent.value, successes);

    return mostRecent.value;
  }

  private triggerReadRepair(
    key: string,
    correctValue: Value,
    responses: { replica: ReplicaNode; value: Value }[]
  ): void {
    for (const { replica, value } of responses) {
      if (value.timestamp < correctValue.timestamp) {
        // Fire-and-forget repair
        replica.write(key, correctValue).catch(console.error);
      }
    }
  }
}
```

Even with W + R > N, quorums don't provide linearizability. Concurrent reads and writes can still produce anomalies: a client might read a stale value while a concurrent write has not yet reached W replicas. True linearizability requires additional coordination, such as a consensus protocol.
Modern distributed databases offer tunable consistency, allowing you to choose different consistency levels per operation. This means you can use strong consistency for critical operations and eventual consistency for others, all within the same system.
Cassandra's Consistency Levels:
| Level | For Writes | For Reads | Trade-off |
|---|---|---|---|
| ANY | Write to any available node (includes hinted handoff) | N/A | Highest availability, weakest durability |
| ONE | Write to 1 replica | Read from 1 replica | Fast, eventual consistency |
| TWO | Write to 2 replicas | Read from 2 replicas | Slightly stronger |
| THREE | Write to 3 replicas | Read from 3 replicas | Stronger still |
| QUORUM | Write to majority (⌊N/2⌋ + 1) | Read from majority | Strong if W + R > N |
| LOCAL_QUORUM | Quorum in local datacenter | Quorum in local datacenter | Strong within DC, cross-DC eventual |
| EACH_QUORUM | Quorum in each datacenter | N/A | Strongest multi-DC write |
| ALL | Write to all replicas | Read from all replicas | Strongest, lowest availability |
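As a quick sanity check on the QUORUM row (assuming the usual majority definition of ⌊N/2⌋ + 1, which is how Cassandra computes it), the quorum size and the number of replica failures it tolerates fall out of two one-liners:

```typescript
// Majority size that QUORUM targets for replication factor N, and how
// many replica failures still leave a reachable quorum.
const majority = (n: number): number => Math.floor(n / 2) + 1;
const tolerableFailures = (n: number): number => n - majority(n);

console.log(majority(3), tolerableFailures(3)); // 2 1
console.log(majority(5), tolerableFailures(5)); // 3 2
// Even replication factors buy little: N=4 needs a quorum of 3 and
// tolerates only 1 failure, same as N=3.
```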
```typescript
// Different operations need different consistency levels
class UserService {
  private db: Database;

  // Password changes: critical, need strong consistency
  async changePassword(userId: string, newHash: string): Promise<void> {
    await this.db.write(
      { table: 'users', key: userId, field: 'passwordHash', value: newHash },
      { consistency: 'QUORUM' } // W=2, R=2 for N=3
    );

    // Verify the write completed by reading back
    const verified = await this.db.read(
      { table: 'users', key: userId, field: 'passwordHash' },
      { consistency: 'QUORUM' }
    );
    if (verified !== newHash) {
      throw new Error('Password update verification failed');
    }
  }

  // Profile view count: non-critical, eventual consistency fine
  async incrementViewCount(userId: string): Promise<void> {
    await this.db.write(
      { table: 'user_stats', key: userId, field: 'viewCount', op: 'INCREMENT' },
      { consistency: 'ONE' } // Fire and forget, eventually consistent
    );
  }

  // User display name: read from a local replica for speed
  async getDisplayName(userId: string): Promise<string> {
    return await this.db.read(
      { table: 'users', key: userId, field: 'displayName' },
      { consistency: 'LOCAL_ONE' } // Fast local read, might be slightly stale
    );
  }

  // Balance check before purchase: must be consistent
  async getBalance(userId: string): Promise<number> {
    return await this.db.read(
      { table: 'accounts', key: userId, field: 'balance' },
      { consistency: 'SERIAL' } // Linearizable read (Cassandra's strongest)
    );
  }
}
```

A good rule of thumb: use eventual consistency (ONE/LOCAL_ONE) for reads that can tolerate staleness (display data, analytics). Use quorum for data that must be correct (financial data, security). Use ALL sparingly—only when absolute correctness outweighs availability.
Between eventual and strong consistency lies a family of session guarantees that provide useful properties within a client session without requiring full linearizability. These guarantees often meet application needs while preserving the benefits of eventual consistency.
Key Session Guarantees:
Read-Your-Writes: after a client writes a value, its own subsequent reads reflect that write.
Monotonic Reads: once a client has seen a value, later reads never return anything older.
Monotonic Writes: a client's writes are applied in the order the client issued them.
Writes-Follow-Reads: a write that depends on a previously read value is ordered after that value.
```typescript
class SessionConsistentClient {
  private db: Database;
  private lastWriteTimestamp: Map<string, number> = new Map();
  private preferredReplica: ReplicaNode;

  async write(key: string, value: Value): Promise<void> {
    const timestamp = Date.now();

    // Write with timestamp
    await this.db.write(key, { ...value, timestamp });

    // Track when we last wrote this key
    this.lastWriteTimestamp.set(key, timestamp);
  }

  async read(key: string): Promise<Value> {
    const lastWrite = this.lastWriteTimestamp.get(key) || 0;

    // Strategy 1: Sticky sessions - read from the same replica you wrote to
    if (this.preferredReplica) {
      const value = await this.preferredReplica.read(key);
      if (value.timestamp >= lastWrite) {
        return value; // Replica has our write, we're good
      }
    }

    // Strategy 2: Read with minimum timestamp requirement
    const replicas = await this.db.getReplicas(key);
    for (const replica of replicas) {
      const value = await replica.read(key);
      if (value.timestamp >= lastWrite) {
        // Remember this replica for future reads
        this.preferredReplica = replica;
        return value;
      }
    }

    // Strategy 3: Block and wait for propagation
    return await this.waitForConsistency(key, lastWrite);
  }

  private async waitForConsistency(
    key: string,
    minTimestamp: number,
    maxWaitMs: number = 5000
  ): Promise<Value> {
    const startTime = Date.now();

    while (Date.now() - startTime < maxWaitMs) {
      const value = await this.db.read(key);
      if (value.timestamp >= minTimestamp) {
        return value;
      }
      // Wait and retry
      await new Promise(r => setTimeout(r, 100));
    }

    throw new Error('Timed out waiting for read-your-writes consistency');
  }
}
```

Implementation Techniques:
Sticky Sessions: Route all requests from a client to the same replica. That replica always has the client's writes. Simple but limits load balancing flexibility.
Version/Timestamp Tracking: Client tracks the version of its last write. Reads specify a minimum version and block or retry if the replica is behind.
Read from Leader: For critical reads, route to the primary replica that received the write. Guarantees freshness but sacrifices some read scalability.
Causal Tokens: Pass a token representing the client's causal history. Systems like MongoDB use this with causal sessions.
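The causal-token idea can be sketched in a few lines. This is an illustrative design, not a real client API — the class and method names are hypothetical: the client remembers the highest version it has observed and attaches it to every request, so a replica can wait, redirect, or reject if it has not caught up to that point.

```typescript
// Hypothetical causal-token sketch. The session tracks the newest version
// it has seen; servers must be at least that far along before answering,
// which yields read-your-writes and monotonic reads within the session.
class CausalSession {
  private lastSeenVersion = 0;

  // Call after every read or write with the version the server reported.
  // The token only moves forward, never backward.
  observe(version: number): void {
    this.lastSeenVersion = Math.max(this.lastSeenVersion, version);
  }

  // Attach this to each outgoing request, e.g. as a header or RPC field.
  afterVersion(): number {
    return this.lastSeenVersion;
  }
}
```

MongoDB's causal sessions work on this principle, shipping operation-time tokens with each request.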
Session guarantees apply to a single client's view of the system. They don't guarantee that different clients see consistent data with each other. For cross-client consistency, you need stronger models like linearizability or serializable isolation.
In eventually consistent systems, stale reads are not bugs—they're expected behavior. The application layer must be designed to handle them gracefully. Here are strategies for dealing with stale data:
```tsx
// React component example with stale data handling
interface UserProfileProps {
  userId: string;
}

function UserProfile({ userId }: UserProfileProps) {
  const [profile, setProfile] = useState<Profile | null>(null);
  const [isStale, setIsStale] = useState(false);
  const [isRefreshing, setIsRefreshing] = useState(false);

  useEffect(() => {
    // Initial fast fetch (eventual consistency)
    fetchProfile(userId, { consistency: 'ONE' })
      .then(data => {
        setProfile(data);
        // Check if data might be stale
        const ageMs = Date.now() - data.lastModified;
        if (ageMs > 5000) {
          setIsStale(true);
          // Trigger background refresh with stronger consistency
          refreshInBackground();
        }
      });
  }, [userId]);

  const refreshInBackground = async () => {
    setIsRefreshing(true);
    try {
      const freshData = await fetchProfile(userId, { consistency: 'QUORUM' });
      setProfile(freshData);
      setIsStale(false);
    } finally {
      setIsRefreshing(false);
    }
  };

  if (!profile) return <LoadingSkeleton />;

  return (
    <div className="profile-card">
      <h1>{profile.name}</h1>
      <p>{profile.bio}</p>

      {/* Show staleness indicator */}
      {isStale && (
        <div className="stale-indicator">
          <span>Data may be outdated</span>
          {isRefreshing ? (
            <Spinner size="small" />
          ) : (
            <button onClick={refreshInBackground}>Refresh</button>
          )}
        </div>
      )}

      {/* Show when data was last updated */}
      <small>
        Last updated: {formatRelativeTime(profile.lastModified)}
      </small>
    </div>
  );
}

// Backend: Optimistic update with verification
async function updateProfile(userId: string, changes: Partial<Profile>) {
  // Optimistic write
  await db.write(userId, changes, { consistency: 'ONE' });

  // Background verification
  setTimeout(async () => {
    const current = await db.read(userId, { consistency: 'QUORUM' });
    if (!deepEqual(current, { ...current, ...changes })) {
      // Write didn't propagate correctly, retry or alert
      console.warn('Profile update may have been lost, retrying...');
      await db.write(userId, changes, { consistency: 'QUORUM' });
    }
  }, 1000);
}
```

Users are more tolerant of stale data than you might expect, especially if you're transparent about it. A "Last updated 5 seconds ago" indicator is often acceptable. What frustrates users is when stale data causes actions to fail unexpectedly—always validate before critical actions.
Writing in eventually consistent systems requires careful design to handle the possibilities of conflicts, lost updates, and reordering. Here are essential patterns for robust writes:
```typescript
// PATTERN 1: Idempotent Writes with Unique Request IDs
async function idempotentWrite(
  key: string,
  value: any,
  requestId: string // Client-generated unique ID
): Promise<WriteResult> {
  // Check if this request was already processed
  const existing = await db.get(`requests:${requestId}`);
  if (existing) {
    return existing.result; // Return cached result
  }

  // Process the write
  await db.put(key, value);

  // Cache the result for future retries
  await db.put(`requests:${requestId}`, {
    result: 'success',
    timestamp: Date.now()
  });

  return { result: 'success' };
}

// PATTERN 2: Conditional Writes (Compare-and-Swap)
async function conditionalUpdate(
  key: string,
  updateFn: (current: Value) => Value
): Promise<{ success: boolean; retries: number }> {
  const MAX_RETRIES = 5;

  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    // Read current value with version
    const { value, version } = await db.getWithVersion(key);

    // Compute new value
    const newValue = updateFn(value);

    // Attempt conditional write
    const result = await db.putIfVersion(key, newValue, version);
    if (result.success) {
      return { success: true, retries: attempt };
    }

    // Version mismatch - another write happened, retry
    await delay(Math.pow(2, attempt) * 10); // Exponential backoff
  }

  return { success: false, retries: MAX_RETRIES };
}

// PATTERN 3: Append-Only Writes
interface Event {
  id: string;
  timestamp: number;
  type: string;
  payload: any;
}

async function appendEvent(streamId: string, event: Event): Promise<void> {
  // Events are immutable, no conflicts possible
  const eventKey = `${streamId}:${event.timestamp}:${event.id}`;
  await db.put(eventKey, event);
}

async function getState(streamId: string): Promise<State> {
  // Derive state by replaying events
  const events = await db.range(`${streamId}:`, `${streamId}~`);
  return events.reduce((state, event) => applyEvent(state, event), initialState);
}
```

Handling Write Conflicts:
When two clients write to the same key concurrently in an eventually consistent system, a conflict occurs. Common resolution strategies include last-writer-wins (LWW), keeping both versions as siblings for the application to merge (as Dynamo-style systems do with vector clocks), CRDTs that merge deterministically, and custom application-level merge functions.
The best strategy depends on your data and use case. LWW is common for its simplicity, but CRDTs or explicit merge logic provide better semantics for many scenarios.
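To make the trade-off concrete, here is a minimal sketch of two of these strategies side by side. The types and function names are illustrative; the set-union merge is the classic shopping-cart example (deletions would need tombstones, which are omitted):

```typescript
// A versioned value: the payload plus the timestamp of its last write.
interface Version<T> { value: T; timestamp: number }

// Last-writer-wins: keep the newer write, silently dropping the other.
// Simple, but one side's update is lost.
function lwwMerge<T>(a: Version<T>, b: Version<T>): Version<T> {
  return a.timestamp >= b.timestamp ? a : b;
}

// Explicit merge: union the two sets so neither side's additions are
// lost. This is how a cart survives concurrent "add item" writes.
function setUnionMerge(
  a: Version<Set<string>>,
  b: Version<Set<string>>
): Version<Set<string>> {
  return {
    value: new Set(Array.from(a.value).concat(Array.from(b.value))),
    timestamp: Math.max(a.timestamp, b.timestamp),
  };
}
```

With LWW, a concurrent "add socks" and "add shoes" leaves only one item in the cart; with the union merge, both survive.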
Combining read and write consistency levels creates different operational patterns, each suited to specific use cases:
| Pattern | Write Level | Read Level | Properties | Best For |
|---|---|---|---|---|
| Maximum Availability | ONE | ONE | Fastest, most available, least consistent | Analytics, metrics, logs |
| Strong Write, Fast Read | QUORUM | ONE | Durability guaranteed, reads may be stale | Write-once, read-many data |
| Fast Write, Strong Read | ONE | QUORUM | Writes may be lost, reads are fresh | Write-heavy, occasional accurate reads |
| Balanced | QUORUM | QUORUM | Overlapping quorums, practical consistency | Most production workloads |
| Strong Consistency | ALL | ONE | Ultra-durable writes, immediate visibility | Critical configuration |
| Paranoid | ALL | ALL | All replicas in sync | Financial, audit logs |
```typescript
const consistencyConfig = {
  // User authentication: High consistency for security
  'auth.passwords': { write: 'QUORUM', read: 'QUORUM' },
  'auth.sessions': { write: 'QUORUM', read: 'LOCAL_QUORUM' },
  'auth.tokens': { write: 'QUORUM', read: 'QUORUM' },

  // Financial data: Maximum consistency
  'payments.transactions': { write: 'ALL', read: 'QUORUM' },
  'payments.balances': { write: 'QUORUM', read: 'SERIAL' }, // Linearizable

  // User profile: Balanced, humans are tolerant of slight staleness
  'users.profiles': { write: 'QUORUM', read: 'ONE' },
  'users.preferences': { write: 'ONE', read: 'ONE' },

  // Social features: High availability, eventual consistency fine
  'social.followers': { write: 'ONE', read: 'ONE' },
  'social.likes': { write: 'ANY', read: 'ONE' }, // Can be lost, not critical
  'social.comments': { write: 'QUORUM', read: 'ONE' }, // Don't lose comments

  // Analytics: Maximum throughput
  'analytics.events': { write: 'ANY', read: 'ONE' },
  'analytics.aggregates': { write: 'ONE', read: 'ONE' },

  // System health: Fast access
  'health.heartbeats': { write: 'ONE', read: 'ONE' },
  'health.alerts': { write: 'QUORUM', read: 'ONE' }, // Don't miss alerts
};

function getConsistencyForKey(key: string): ConsistencyLevel {
  // Match key pattern to config
  for (const [pattern, config] of Object.entries(consistencyConfig)) {
    if (key.startsWith(pattern)) {
      return config;
    }
  }
  // Default: balanced
  return { write: 'QUORUM', read: 'ONE' };
}
```

Be careful with patterns like (W=ONE, R=ONE). While available, they can lead to data loss or permanent inconsistency. If W=1 and that replica fails before replicating, the write is lost. If R=1 and it reads from an out-of-sync replica, you get stale data with no way to detect it. Use these patterns only for truly non-critical data.
Working with eventual consistency requires careful design. Here are common mistakes that lead to bugs, data loss, or poor user experience:
```typescript
// ❌ ANTI-PATTERN: Read-Modify-Write without versioning
async function transferBad(from: string, to: string, amount: number) {
  const fromBalance = await db.read(`balance:${from}`);
  const toBalance = await db.read(`balance:${to}`);

  // PROBLEM: Another transaction might have modified these in between
  await db.write(`balance:${from}`, fromBalance - amount);
  await db.write(`balance:${to}`, toBalance + amount);
}

// ✅ BETTER: Use conditional writes with versioning
async function transferGood(from: string, to: string, amount: number) {
  let success = false;

  while (!success) {
    // Read with versions
    const { value: fromBalance, version: fromVersion } =
      await db.getWithVersion(`balance:${from}`);
    const { value: toBalance, version: toVersion } =
      await db.getWithVersion(`balance:${to}`);

    // Conditional writes - fail if either version changed
    const results = await Promise.all([
      db.putIfVersion(`balance:${from}`, fromBalance - amount, fromVersion),
      db.putIfVersion(`balance:${to}`, toBalance + amount, toVersion),
    ]);

    success = results.every(r => r.success);
    if (!success) {
      // Randomized backoff before retry
      await delay(Math.random() * 100);
    }
    // CAVEAT: the two conditional writes are still independent. If exactly
    // one succeeds, retrying the whole loop re-applies that side's delta a
    // second time. Atomicity across two keys requires a transaction or saga.
  }
}

// ❌ ANTI-PATTERN: Assuming immediate visibility
async function createAndReadBad(data: any) {
  await db.write('key', data, { consistency: 'ONE' });
  // PROBLEM: This read might go to a different replica that doesn't have the write yet
  return await db.read('key', { consistency: 'ONE' });
}

// ✅ CORRECT: Use matching consistency or session guarantees
async function createAndReadGood(data: any) {
  await db.write('key', data, { consistency: 'QUORUM' });
  // Read with quorum ensures overlap with the write quorum
  return await db.read('key', { consistency: 'QUORUM' });
}
```

Use tools like Jepsen to simulate network partitions, clock skew, and node failures. Consistency bugs are notoriously hard to find in normal testing but appear under stress. Jepsen has found bugs in nearly every distributed database it has tested.
Designing read and write operations for eventual consistency requires deliberate choices about consistency levels, session guarantees, and failure handling. Let's consolidate the key takeaways:
What's Next:
Now that we understand read and write patterns, the next page examines application-level handling—how to design application logic, user interfaces, and business processes that work correctly with eventual consistency. We'll explore specific patterns for common challenges like inventory management, collaborative editing, and financial transactions.
You now understand the essential read and write patterns for eventually consistent systems. These patterns—quorums, session guarantees, idempotent operations, and graceful stale handling—form the foundation for building reliable applications on eventually consistent infrastructure.