For decades, distributed systems were presented with a seemingly binary choice: strong consistency or eventual consistency. You could have one or the other, but the tradeoffs were fixed at design time. Once you chose an architecture, you lived with its limitations.
Modern quorum-based systems shatter this dichotomy. Through tunable consistency, they allow you to position your system anywhere on the consistency spectrum—and even more powerfully, to choose different positions for different operations at runtime.
This isn't just academic flexibility. It's the difference between a system that forces uniform tradeoffs on all data and one that optimizes each operation for its specific requirements. Your account balance can demand strict consistency while your analytics batch jobs accept eventual consistency. Your primary datacenter can provide linearizable reads while disaster recovery sites offer fast-but-stale fallbacks.
By the end of this page, you will be able to:

- describe the full spectrum of consistency levels available in tunable systems;
- select an appropriate consistency level for each operation;
- reason about the implications of consistency mismatches between reads and writes;
- implement dynamic consistency selection in applications;
- recognize patterns for degrading gracefully under system stress.
Rather than viewing consistency as a binary property, tunable consistency systems offer a spectrum of guarantees. Each level makes different tradeoffs between data freshness, availability, latency, and partition tolerance.
Understanding the Spectrum:
At one extreme, linearizability provides the illusion that all operations occur atomically at a single point in time—every read sees the most recent write as if there were only one copy of data. This is the strongest guarantee but requires the most coordination.
At the other extreme, eventual consistency guarantees only that if writes stop, all replicas will eventually converge to the same value. Reads may return stale data, and concurrent operations may see different views of the system.
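How replicas converge varies by system (anti-entropy, read repair, hinted handoff), but the core idea fits in a few lines. The sketch below is an illustration, not a real client: the in-memory "replicas" and the last-write-wins merge rule are assumptions standing in for a real replication protocol.

```typescript
interface Entry { value: string; timestamp: number }

// Three replicas that accepted different writes while diverged.
const replicas: Entry[] = [
  { value: 'v1', timestamp: 100 },
  { value: 'v2', timestamp: 250 },
  { value: 'v1', timestamp: 100 },
];

// One round of anti-entropy: every replica adopts the newest version seen anywhere.
function antiEntropy(nodes: Entry[]): Entry[] {
  const newest = nodes.reduce((a, b) => (b.timestamp > a.timestamp ? b : a));
  return nodes.map(() => ({ ...newest }));
}

// Once writes stop, exchanges like this one drive all replicas to the same value.
const converged = antiEntropy(replicas);
console.log(converged.every(r => r.value === 'v2')); // true
```

Until such an exchange completes, a read served by the first or third replica returns the stale `v1`, which is exactly the window of inconsistency eventual consistency permits.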
Between these extremes exist numerous intermediate levels, each providing specific guarantees that may be sufficient for particular use cases while offering better performance than stronger levels.
| Consistency Level | Guarantee | Latency | Availability | Use Case |
|---|---|---|---|---|
| Linearizability | Real-time ordering; acts as if single copy | Highest | Lowest | Financial transactions, locks |
| Sequential Consistency | Total order agreed by all; may not be real-time | High | Low | Distributed coordination |
| Causal Consistency | Cause-effect relationships preserved | Medium | Medium | Social feeds, messaging |
| Read-Your-Writes | Readers see their own writes | Medium | Medium | User sessions, editing |
| Monotonic Reads | Once seen, a value never 'goes back' | Low-Medium | High | Dashboards, reporting |
| Monotonic Writes | Writes from one client applied in order | Low | High | Logging, audit trails |
| Eventual Consistency | Replicas converge eventually | Lowest | Highest | Analytics, metrics, caches |
```typescript
// Cassandra-style consistency levels (representative of most systems)
enum ConsistencyLevel {
  // Single replica acknowledgment
  ANY = 'ANY',     // Write: any node (including hints). Read: N/A
  ONE = 'ONE',     // 1 replica
  TWO = 'TWO',     // 2 replicas
  THREE = 'THREE', // 3 replicas

  // Quorum-based
  QUORUM = 'QUORUM',             // floor(N/2) + 1 replicas (global)
  LOCAL_QUORUM = 'LOCAL_QUORUM', // Quorum in local datacenter
  EACH_QUORUM = 'EACH_QUORUM',   // Quorum in every datacenter

  // All replicas
  ALL = 'ALL', // All N replicas must respond

  // Local datacenter variants
  LOCAL_ONE = 'LOCAL_ONE', // 1 replica in local DC

  // Serial consistency (for lightweight transactions)
  SERIAL = 'SERIAL',             // Linearizable via Paxos
  LOCAL_SERIAL = 'LOCAL_SERIAL', // Linearizable within local DC
}

// Map consistency levels to effective N, W, R values
function resolveConsistencyLevel(
  level: ConsistencyLevel,
  replicationFactor: number,
  localRF?: number // For multi-DC
): { effectiveQuorum: number; scope: 'local' | 'global' | 'each' } {
  switch (level) {
    case ConsistencyLevel.ONE:
    case ConsistencyLevel.LOCAL_ONE:
      return {
        effectiveQuorum: 1,
        scope: level === ConsistencyLevel.LOCAL_ONE ? 'local' : 'global',
      };
    case ConsistencyLevel.TWO:
      return { effectiveQuorum: 2, scope: 'global' };
    case ConsistencyLevel.THREE:
      return { effectiveQuorum: 3, scope: 'global' };
    case ConsistencyLevel.QUORUM:
      return {
        effectiveQuorum: Math.floor(replicationFactor / 2) + 1,
        scope: 'global',
      };
    case ConsistencyLevel.LOCAL_QUORUM:
      if (!localRF) throw new Error('LOCAL_QUORUM requires local replication factor');
      return { effectiveQuorum: Math.floor(localRF / 2) + 1, scope: 'local' };
    case ConsistencyLevel.EACH_QUORUM:
      if (!localRF) throw new Error('EACH_QUORUM requires local replication factor');
      return { effectiveQuorum: Math.floor(localRF / 2) + 1, scope: 'each' };
    case ConsistencyLevel.ALL:
      return { effectiveQuorum: replicationFactor, scope: 'global' };
    case ConsistencyLevel.ANY:
      return { effectiveQuorum: 1, scope: 'global' }; // Special: includes hints
    default:
      throw new Error(`Unknown consistency level: ${level}`);
  }
}
```

Choosing the right consistency level for each operation requires systematic analysis. The following framework helps you navigate these decisions by asking the right questions about your data and operations.
Question 1: What is the cost of reading stale data?
This is the fundamental question. If showing a slightly outdated product price for a few seconds is acceptable, eventual consistency may suffice. If showing an incorrect account balance could lead to financial loss, strong consistency is mandatory.
Question 2: What is the cost of write unavailability?
Strong consistency (high W) means writes fail when too many nodes are unavailable. For logging systems, a failed write might mean lost data forever. For transactional systems, it might mean a user retry.
Question 3: What latency can users tolerate?
Cross-datacenter consistency adds 50-200ms. For interactive user experiences, this may be unacceptable. For batch operations, it's irrelevant.
Question 4: What happens during network partitions?
During partitions, you must choose: reject operations (CP) or accept potentially inconsistent operations (AP). Different data may warrant different choices.
```typescript
interface OperationCharacteristics {
  // Data criticality
  financialData: boolean;       // Money, inventory, etc.
  legalRequirements: boolean;   // Audit, compliance
  userFacingImmediacy: boolean; // User waiting for result?

  // Tolerance for issues
  staleReadTolerance: 'none' | 'seconds' | 'minutes' | 'hours';
  writeFailureTolerance: 'none' | 'retry_acceptable' | 'loss_acceptable';

  // Performance requirements
  maxLatencyMs: number;
  crossDatacenterAllowed: boolean;

  // Workload pattern
  readWriteRatio: number; // Reads per write
  operationsPerSecond: number;
}

function selectConsistencyLevel(
  op: OperationCharacteristics
): { read: ConsistencyLevel; write: ConsistencyLevel; reasoning: string[] } {
  const reasoning: string[] = [];
  let readLevel: ConsistencyLevel;
  let writeLevel: ConsistencyLevel;

  // Financial/Legal requires strong consistency
  if (op.financialData || op.legalRequirements) {
    readLevel = ConsistencyLevel.QUORUM;
    writeLevel = ConsistencyLevel.QUORUM;
    reasoning.push('Financial/legal data requires strong consistency (W+R > N)');

    // If cross-DC not allowed, use local quorum
    if (!op.crossDatacenterAllowed && op.maxLatencyMs < 100) {
      readLevel = ConsistencyLevel.LOCAL_QUORUM;
      writeLevel = ConsistencyLevel.LOCAL_QUORUM;
      reasoning.push('Latency constraint: using LOCAL_QUORUM');
    }

    return { read: readLevel, write: writeLevel, reasoning };
  }

  // User-facing with no stale tolerance
  if (op.userFacingImmediacy && op.staleReadTolerance === 'none') {
    readLevel = ConsistencyLevel.QUORUM;
    writeLevel = ConsistencyLevel.QUORUM;
    reasoning.push('User-facing with no stale tolerance: QUORUM for both');
    return { read: readLevel, write: writeLevel, reasoning };
  }

  // High read ratio with stale tolerance
  if (op.readWriteRatio > 100 && op.staleReadTolerance !== 'none') {
    writeLevel = ConsistencyLevel.ALL;
    readLevel = ConsistencyLevel.ONE;
    reasoning.push('Read-heavy workload: Write ALL, Read ONE (still strong: W=N, R=1, W+R > N)');
    return { read: readLevel, write: writeLevel, reasoning };
  }

  // High write volume with loss acceptable
  if (op.operationsPerSecond > 10000 && op.writeFailureTolerance === 'loss_acceptable') {
    writeLevel = ConsistencyLevel.ONE;
    readLevel = ConsistencyLevel.ONE;
    reasoning.push('High volume, loss acceptable: ONE for both (eventual consistency)');
    return { read: readLevel, write: writeLevel, reasoning };
  }

  // Minutes/hours stale tolerance
  if (op.staleReadTolerance === 'minutes' || op.staleReadTolerance === 'hours') {
    writeLevel = ConsistencyLevel.ONE;
    readLevel = ConsistencyLevel.ONE;
    reasoning.push('High stale tolerance: eventual consistency acceptable');
    return { read: readLevel, write: writeLevel, reasoning };
  }

  // Default: balanced quorum
  reasoning.push('Default selection: QUORUM for balanced consistency/performance');
  return { read: ConsistencyLevel.QUORUM, write: ConsistencyLevel.QUORUM, reasoning };
}

// Example applications
const accountBalance = selectConsistencyLevel({
  financialData: true,
  legalRequirements: true,
  userFacingImmediacy: true,
  staleReadTolerance: 'none',
  writeFailureTolerance: 'none',
  maxLatencyMs: 500,
  crossDatacenterAllowed: true,
  readWriteRatio: 10,
  operationsPerSecond: 100,
});
// Result: QUORUM/QUORUM - "Financial/legal data requires strong consistency"

const productCatalog = selectConsistencyLevel({
  financialData: false,
  legalRequirements: false,
  userFacingImmediacy: true,
  staleReadTolerance: 'seconds',
  writeFailureTolerance: 'retry_acceptable',
  maxLatencyMs: 50,
  crossDatacenterAllowed: false,
  readWriteRatio: 1000,
  operationsPerSecond: 10000,
});
// Result: ONE/ALL - "Read-heavy workload: Write ALL, Read ONE"
```

A common pattern is to start with QUORUM/QUORUM for everything during initial development. As you identify performance bottlenecks and understand your data semantics better, selectively relax consistency for specific operations. It's much safer to start strict and relax than to start loose and tighten later—the latter may require data migration to correct accumulated inconsistencies.
The power and danger of tunable consistency both stem from the same source: you can mix and match read and write levels. Understanding what happens when you combine different levels is crucial for avoiding subtle bugs.
The Consistency Equation Applies Per-Operation:
Remember: W + R > N must hold for the actual W and R used in each operation, not just your default settings. If you write with QUORUM but read with ONE, you don't have strong consistency—even if your defaults are both QUORUM.
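That arithmetic is worth checking directly. A minimal sketch for N = 3 (the helper name `isStrong` is illustrative, not part of any client API):

```typescript
// A write acknowledged by W replicas and a read consulting R replicas are
// guaranteed to overlap on at least one replica exactly when W + R > N.
function isStrong(w: number, r: number, n: number): boolean {
  return w + r > n;
}

const N = 3;
const QUORUM = Math.floor(N / 2) + 1; // 2

console.log(isStrong(QUORUM, QUORUM, N)); // true  - default QUORUM/QUORUM is safe
console.log(isStrong(QUORUM, 1, N));      // false - QUORUM write + ONE read is not
```

The second line is the trap: the write still pays quorum latency, but the read can land on the one replica the write missed.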
Cross-Operation Consistency:
Each read-write pair must be analyzed independently. If you write data in Operation A and read it in Operation B, the consistency guarantee is determined by A's write level and B's read level.
The following combinations assume a replication factor of N = 3:

| Write Level | Read Level | W | R | W+R | W+R > N? | Guarantee |
|---|---|---|---|---|---|---|
| QUORUM | QUORUM | 2 | 2 | 4 | Yes | Strong consistency |
| ALL | ONE | 3 | 1 | 4 | Yes | Strong (write-heavy optimized) |
| ONE | ALL | 1 | 3 | 4 | Yes | Strong (read-heavy optimized) |
| QUORUM | ONE | 2 | 1 | 3 | No | Eventual - RISK! |
| ONE | QUORUM | 1 | 2 | 3 | No | Eventual - RISK! |
| TWO | TWO | 2 | 2 | 4 | Yes | Strong consistency |
| ONE | ONE | 1 | 1 | 2 | No | Eventual consistency |
| LOCAL_QUORUM | LOCAL_QUORUM | 2* | 2* | 4* | Yes* | *Within local DC only |
```typescript
interface Operation {
  name: string;
  type: 'read' | 'write';
  consistency: ConsistencyLevel;
  key: string;
}

interface ConsistencyAnalysis {
  isStronglyConsistent: boolean;
  effectiveW: number;
  effectiveR: number;
  overlap: number;
  warning?: string;
}

function analyzeOperationPair(
  writeOp: Operation,
  readOp: Operation,
  replicationFactor: number
): ConsistencyAnalysis {
  if (writeOp.type !== 'write' || readOp.type !== 'read') {
    throw new Error('Must provide write operation first, then read operation');
  }
  if (writeOp.key !== readOp.key) {
    throw new Error('Operations must be for the same key');
  }

  const writeResolution = resolveConsistencyLevel(writeOp.consistency, replicationFactor);
  const readResolution = resolveConsistencyLevel(readOp.consistency, replicationFactor);

  const effectiveW = writeResolution.effectiveQuorum;
  const effectiveR = readResolution.effectiveQuorum;
  const sum = effectiveW + effectiveR;
  const overlap = sum - replicationFactor;
  const isStronglyConsistent = sum > replicationFactor;

  let warning: string | undefined;

  // Detect common mistakes
  if (!isStronglyConsistent && (
    writeOp.consistency === ConsistencyLevel.QUORUM ||
    readOp.consistency === ConsistencyLevel.QUORUM
  )) {
    warning = 'One operation uses QUORUM but combination is eventually consistent!';
  }

  // Detect scope mismatch
  if (writeResolution.scope !== readResolution.scope) {
    warning = `Scope mismatch: write scope=${writeResolution.scope}, read scope=${readResolution.scope}`;
  }

  return {
    isStronglyConsistent,
    effectiveW,
    effectiveR,
    overlap: Math.max(0, overlap),
    warning,
  };
}

// Example: Detecting a subtle bug
const buggyWrite: Operation = {
  name: 'updateUserProfile',
  type: 'write',
  consistency: ConsistencyLevel.QUORUM,
  key: 'user:123:profile',
};

const buggyRead: Operation = {
  name: 'getUserProfile',
  type: 'read',
  consistency: ConsistencyLevel.ONE, // Developer wanted speed...
  key: 'user:123:profile',
};

const analysis = analyzeOperationPair(buggyWrite, buggyRead, 3);
console.log(analysis);
/* Output:
{
  isStronglyConsistent: false, // BUG DETECTED!
  effectiveW: 2,
  effectiveR: 1,
  overlap: 0,
  warning: 'One operation uses QUORUM but combination is eventually consistent!'
}
*/

// This means: getUserProfile might not see recent updateUserProfile changes!
```

One of the most common bugs in quorum systems is inconsistent consistency levels across operations touching the same data. A developer sets read consistency to ONE for 'performance' without realizing they've broken the consistency guarantee. Always analyze the W + R > N equation for every code path, not just your defaults. Consider building runtime assertions or linting rules to detect these mismatches.
One of the most powerful—and most overlooked—capabilities of tunable consistency systems is the ability to adjust consistency levels at runtime based on current conditions. This enables sophisticated strategies like graceful degradation and adaptive consistency.
Graceful Degradation:
When system health degrades (nodes down, high latency, network issues), you can automatically relax consistency to maintain availability. The key is defining clear policies that balance business risk against operational resilience.
Adaptive Consistency:
More sophisticated systems can adjust consistency based on observed conditions such as node availability, error rates, and tail latency.
```typescript
interface SystemHealth {
  availableNodes: number;
  totalNodes: number;
  averageLatencyMs: number;
  p99LatencyMs: number;
  errorRate: number;
  recentTimeouts: number;
}

interface ConsistencyPolicy {
  name: string;
  readLevel: ConsistencyLevel;
  writeLevel: ConsistencyLevel;
  degradesTo?: ConsistencyPolicy;
}

const POLICIES = {
  STRICT: {
    name: 'STRICT',
    readLevel: ConsistencyLevel.QUORUM,
    writeLevel: ConsistencyLevel.QUORUM,
    degradesTo: undefined as any, // Set below
  },
  DEGRADED: {
    name: 'DEGRADED',
    readLevel: ConsistencyLevel.QUORUM,
    writeLevel: ConsistencyLevel.ONE,
    degradesTo: undefined as any,
  },
  EMERGENCY: {
    name: 'EMERGENCY',
    readLevel: ConsistencyLevel.ONE,
    writeLevel: ConsistencyLevel.ONE,
    degradesTo: undefined,
  },
};

POLICIES.STRICT.degradesTo = POLICIES.DEGRADED;
POLICIES.DEGRADED.degradesTo = POLICIES.EMERGENCY;

class AdaptiveConsistencyManager {
  private currentPolicy: ConsistencyPolicy = POLICIES.STRICT;
  private replicationFactor: number;
  private alertCallback: (message: string) => void;

  constructor(rf: number, alertCallback: (msg: string) => void) {
    this.replicationFactor = rf;
    this.alertCallback = alertCallback;
  }

  evaluateHealth(health: SystemHealth): ConsistencyPolicy {
    const oldPolicy = this.currentPolicy;

    // If we can't achieve quorum, degrade
    const quorumSize = Math.floor(this.replicationFactor / 2) + 1;
    if (health.availableNodes < quorumSize) {
      this.currentPolicy = POLICIES.EMERGENCY;
      this.alertCallback(`CRITICAL: Only ${health.availableNodes} nodes available, need ${quorumSize} for quorum`);
    }
    // If error rate is high, degrade
    else if (health.errorRate > 0.05) { // > 5% errors
      if (this.currentPolicy === POLICIES.STRICT) {
        this.currentPolicy = POLICIES.DEGRADED;
        this.alertCallback(`WARNING: Error rate ${health.errorRate * 100}%, degrading to ${this.currentPolicy.name}`);
      }
    }
    // If p99 latency is very high, degrade
    else if (health.p99LatencyMs > 1000) {
      if (this.currentPolicy === POLICIES.STRICT) {
        this.currentPolicy = POLICIES.DEGRADED;
        this.alertCallback(`WARNING: p99 latency ${health.p99LatencyMs}ms, degrading to ${this.currentPolicy.name}`);
      }
    }
    // If health is good, consider upgrading
    else if (health.errorRate < 0.01 && health.p99LatencyMs < 200) {
      if (this.currentPolicy === POLICIES.DEGRADED) {
        this.currentPolicy = POLICIES.STRICT;
        this.alertCallback(`INFO: Health restored, upgrading to ${this.currentPolicy.name}`);
      } else if (this.currentPolicy === POLICIES.EMERGENCY) {
        this.currentPolicy = POLICIES.DEGRADED;
        this.alertCallback(`INFO: Partial recovery, upgrading to ${this.currentPolicy.name}`);
      }
    }

    if (oldPolicy !== this.currentPolicy) {
      console.log(`Consistency policy changed: ${oldPolicy.name} -> ${this.currentPolicy.name}`);
    }

    return this.currentPolicy;
  }

  getReadConsistency(): ConsistencyLevel {
    return this.currentPolicy.readLevel;
  }

  getWriteConsistency(): ConsistencyLevel {
    return this.currentPolicy.writeLevel;
  }
}

// Usage in application
const consistencyManager = new AdaptiveConsistencyManager(3, alert => {
  console.log(alert); // Send to monitoring system
});

// Periodic health check (every 10 seconds)
setInterval(async () => {
  const health = await getSystemHealth();
  consistencyManager.evaluateHealth(health);
}, 10000);

// In request handlers
async function handleRequest(req: Request) {
  const readConsistency = consistencyManager.getReadConsistency();
  const writeConsistency = consistencyManager.getWriteConsistency();

  // Use dynamically determined consistency
  const data = await db.read(req.key, { consistency: readConsistency });
  // ...
}
```

Beyond per-operation and system-wide consistency, sophisticated applications implement session consistency—where the consistency level is determined by the user's context rather than the data being accessed.
Read-Your-Writes Guarantee:
The most common session consistency guarantee is "read-your-writes." If a user updates their profile, they should immediately see their changes—even if another user might briefly see stale data. This is a weaker guarantee than global strong consistency but sufficient for most user-facing applications.
Implementation Approaches:
```typescript
interface SessionState {
  userId: string;
  lastWriteTimestamp: number;
  lastWriteNode: string;
  consistencyPreference: 'strong' | 'eventual' | 'session';
}

class SessionConsistencyManager {
  private sessions: Map<string, SessionState> = new Map();

  recordWrite(sessionId: string, nodeId: string): void {
    const session = this.sessions.get(sessionId) || {
      userId: sessionId,
      lastWriteTimestamp: 0,
      lastWriteNode: '',
      consistencyPreference: 'session',
    };
    session.lastWriteTimestamp = Date.now();
    session.lastWriteNode = nodeId;
    this.sessions.set(sessionId, session);
  }

  async readWithSessionConsistency<T>(
    sessionId: string,
    key: string,
    replicas: Replica[]
  ): Promise<T> {
    const session = this.sessions.get(sessionId);

    if (!session || session.consistencyPreference === 'eventual') {
      // No session state or user accepts eventual - read from any node
      return this.readFromAny(key, replicas);
    }

    if (session.consistencyPreference === 'strong') {
      // User requires strong consistency - use quorum
      return this.readWithQuorum(key, replicas);
    }

    // Session consistency: must read value at least as fresh as last write
    const targetTimestamp = session.lastWriteTimestamp;

    // Strategy 1: Try to read from the node we wrote to (sticky session)
    const writeNode = replicas.find(r => r.id === session.lastWriteNode);
    if (writeNode) {
      const result = await writeNode.read(key);
      if (result && result.timestamp >= targetTimestamp) {
        return result.value;
      }
    }

    // Strategy 2: Query all nodes and find one with fresh enough data
    const responses = await Promise.all(
      replicas.map(async r => {
        try {
          const result = await r.read(key);
          return { nodeId: r.id, ...result };
        } catch {
          return null;
        }
      })
    );

    const freshResponse = responses.find(
      r => r && r.timestamp >= targetTimestamp
    );
    if (freshResponse) {
      return freshResponse.value;
    }

    // Strategy 3: Wait and retry (the write might still be propagating)
    await this.delay(50);
    return this.readWithSessionConsistency(sessionId, key, replicas);
  }

  private async readFromAny<T>(key: string, replicas: Replica[]): Promise<T> {
    const node = replicas[Math.floor(Math.random() * replicas.length)];
    const result = await node.read(key);
    return result.value;
  }

  private async readWithQuorum<T>(key: string, replicas: Replica[]): Promise<T> {
    const quorum = Math.floor(replicas.length / 2) + 1;
    const responses = await Promise.all(replicas.map(r => r.read(key)));
    const latest = responses.reduce((a, b) => a.timestamp > b.timestamp ? a : b);
    return latest.value;
  }

  private delay(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

// Usage in application
const sessionManager = new SessionConsistencyManager();

async function updateUserProfile(sessionId: string, profile: Profile) {
  const node = await selectWriteNode();
  await node.write(`profile:${sessionId}`, profile);
  sessionManager.recordWrite(sessionId, node.id);
}

async function getUserProfile(sessionId: string): Promise<Profile> {
  return sessionManager.readWithSessionConsistency(
    sessionId,
    `profile:${sessionId}`,
    allReplicas
  );
  // User will always see their own writes!
}
```

Session consistency is often the sweet spot for user-facing applications. It provides the illusion of strong consistency for the user's own data while allowing the system to optimize reads from other sources. Users naturally expect to see their own changes but are surprisingly tolerant of slight delays in seeing other people's changes.
For some operations, even QUORUM consistency isn't enough. When multiple clients might concurrently modify the same data, you need linearizability—the guarantee that operations appear to execute atomically in some order consistent with real-time.
The Quorum Limitation:
Quorum-based consistency guarantees that reads see the latest completed write, but it doesn't prevent a read-modify-write race. For example, suppose a balance starts at 5. Client A reads 5 and writes back 10 (applying +5); concurrently, Client B reads the same 5 and writes back 7 (applying +2), and B's write carries the later timestamp.

Both writes succeed, but one overwrites the other. The final value is 7, losing Client A's +5 increment.
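The race is easy to reproduce. In this sketch, a single in-memory last-write-wins register stands in for the replica set (an illustration, not a real client):

```typescript
// A last-write-wins register standing in for the replicated value.
class LwwRegister {
  private value = 5; // Starting balance
  private timestamp = 0;

  read(): number {
    return this.value;
  }

  // Accept a write only if its timestamp is newer (last write wins).
  write(value: number, timestamp: number): void {
    if (timestamp > this.timestamp) {
      this.value = value;
      this.timestamp = timestamp;
    }
  }
}

const register = new LwwRegister();

// Both clients read the starting value before either write lands.
const seenByA = register.read();
const seenByB = register.read();

register.write(seenByA + 5, 1); // Client A writes 10
register.write(seenByB + 2, 2); // Client B writes 7 with a later timestamp

console.log(register.read()); // 7 - Client A's +5 increment is silently lost
```

No choice of W and R fixes this: the quorum machinery faithfully replicates both writes, but the read-modify-write cycle itself is not atomic.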
Lightweight Transactions (Compare-and-Set):
Systems like Cassandra provide "lightweight transactions" using the Paxos consensus protocol. These allow conditional updates:
```sql
-- Standard write (races possible)
UPDATE accounts SET balance = 15 WHERE id = 'user-123';

-- Lightweight transaction (race-safe)
-- Only updates if current balance is 10
UPDATE accounts SET balance = 15 WHERE id = 'user-123'
IF balance = 10;
-- Returns: [applied] = true if successful, false if condition failed

-- Insert only if row doesn't exist
INSERT INTO accounts (id, balance) VALUES ('user-456', 100)
IF NOT EXISTS;

-- Compare-and-set multiple conditions
UPDATE accounts SET balance = 105, last_modified = toTimestamp(now())
WHERE id = 'user-123'
IF balance = 100 AND status = 'active';
```

```typescript
async function safeIncrement(
  accountId: string,
  amount: number,
  maxRetries: number = 5
): Promise<{ success: boolean; newBalance: number }> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    // Step 1: Read current balance with serial consistency
    const current = await db.query(
      'SELECT balance FROM accounts WHERE id = ?',
      [accountId],
      { consistency: ConsistencyLevel.SERIAL }
    );
    const currentBalance = current.rows[0].balance;
    const newBalance = currentBalance + amount;

    // Step 2: Conditional update
    const result = await db.query(
      'UPDATE accounts SET balance = ? WHERE id = ? IF balance = ?',
      [newBalance, accountId, currentBalance],
      { consistency: ConsistencyLevel.SERIAL } // LWT uses serial consistency
    );

    if (result.rows[0]['[applied]']) {
      // Success! Our update was applied
      return { success: true, newBalance };
    }

    // Condition failed - someone else updated first
    // Get the current value from the result and retry
    const actualBalance = result.rows[0].balance;
    console.log(`LWT retry: expected ${currentBalance}, found ${actualBalance}`);

    // Optional: exponential backoff to reduce contention
    await delay(Math.pow(2, attempt) * 10);
  }

  // All retries exhausted
  return { success: false, newBalance: NaN };
}

// Usage
const result = await safeIncrement('user-123', 50);
if (result.success) {
  console.log(`New balance: ${result.newBalance}`);
} else {
  console.error('Failed to update balance after max retries');
}
```

| Aspect | Regular QUORUM Write | Lightweight Transaction |
|---|---|---|
| Round trips | 1 | 4 (Paxos prepare/promise, read, propose/accept, commit) |
| Typical latency | 5-50ms | 20-200ms |
| Throughput impact | High | Low-Medium |
| Contention behavior | Last write wins | Serialized, may require retries |
| Use case | Independent updates | Conditional updates, counters, locks |
Lightweight transactions are 4-10x slower than regular writes due to the Paxos overhead. They should be reserved for operations that genuinely require linearizability: distributed locks, unique constraints, atomic counters, and compare-and-swap patterns. If you find yourself using LWT for everything, consider whether a different database or architecture would be more appropriate.
Even with a deep understanding of tunable consistency, it's easy to fall into patterns that seem reasonable but lead to subtle bugs or poor performance. Learning to recognize these anti-patterns can save significant debugging time.
```typescript
interface QueryMetrics {
  key: string;
  operation: 'read' | 'write';
  consistency: ConsistencyLevel;
  serviceName: string;
  timestamp: number;
}

class ConsistencyAntiPatternDetector {
  private recentQueries: Map<string, QueryMetrics[]> = new Map();
  private replicationFactor: number;

  constructor(rf: number) {
    this.replicationFactor = rf;
  }

  recordQuery(metrics: QueryMetrics): string[] {
    const warnings: string[] = [];
    const keyQueries = this.recentQueries.get(metrics.key) || [];
    keyQueries.push(metrics);

    // Keep only last 100 queries per key
    if (keyQueries.length > 100) keyQueries.shift();
    this.recentQueries.set(metrics.key, keyQueries);

    // Anti-pattern 1: ONE for reads after QUORUM writes
    const recentWrites = keyQueries.filter(q =>
      q.operation === 'write' && Date.now() - q.timestamp < 60000
    );
    if (metrics.operation === 'read' && metrics.consistency === ConsistencyLevel.ONE) {
      const hasQuorumWrite = recentWrites.some(w =>
        w.consistency === ConsistencyLevel.QUORUM ||
        w.consistency === ConsistencyLevel.LOCAL_QUORUM
      );
      if (hasQuorumWrite) {
        warnings.push(
          `ANTI-PATTERN: Reading key '${metrics.key}' with ONE after QUORUM write. ` +
          `This breaks strong consistency.`
        );
      }
    }

    // Anti-pattern 2: ALL used on hot path
    if (metrics.consistency === ConsistencyLevel.ALL) {
      warnings.push(
        `WARNING: Using ALL consistency for '${metrics.key}'. ` +
        `Any node failure will block this operation.`
      );
    }

    // Anti-pattern 3: Inconsistent levels from different services
    const servicesForKey = new Set(keyQueries.map(q => q.serviceName));
    if (servicesForKey.size > 1) {
      const writeConsistencies = [...new Set(
        keyQueries.filter(q => q.operation === 'write').map(q => q.consistency)
      )];
      const readConsistencies = [...new Set(
        keyQueries.filter(q => q.operation === 'read').map(q => q.consistency)
      )];

      if (writeConsistencies.length > 1 || readConsistencies.length > 1) {
        warnings.push(
          `ANTI-PATTERN: Key '${metrics.key}' accessed by multiple services ` +
          `with different consistency levels. ` +
          `Writes: ${writeConsistencies.join(', ')}, ` +
          `Reads: ${readConsistencies.join(', ')}`
        );
      }
    }

    return warnings;
  }
}

// Usage: wrap database client
function wrapWithAntiPatternDetection(db: Database): Database {
  const detector = new ConsistencyAntiPatternDetector(db.replicationFactor);

  return {
    ...db,
    async query(q: Query) {
      const warnings = detector.recordQuery({
        key: q.key,
        operation: q.type,
        consistency: q.consistency,
        serviceName: process.env.SERVICE_NAME || 'unknown',
        timestamp: Date.now(),
      });
      warnings.forEach(w => console.warn(w));
      return db.query(q);
    },
  };
}
```

Tunable consistency transforms distributed databases from rigid, one-size-fits-all systems into flexible platforms that can be precisely configured for each use case.
You now understand how to tune consistency across the entire spectrum, from per-operation selection to dynamic runtime adjustment. In the next page, we'll explore 'Sloppy Quorums'—an advanced technique that relaxes the strict quorum requirements to achieve even higher availability during node failures, at the cost of some consistency guarantees.