Understanding the mechanics of read-write locks is necessary, but not sufficient. The real skill lies in recognizing when to apply them, how to structure your code to maximize their benefits, and how to avoid subtle pitfalls that can lead to performance degradation or correctness bugs.
This page bridges theory and practice. We'll examine production-ready implementations across different domains, analyze common anti-patterns, and distill best practices from real-world experience. By the end, you'll have a toolkit of patterns you can apply immediately to your own systems.
Specifically, you'll take away production-ready patterns for thread-safe caches, configuration services, concurrent collections, and rate limiters; an understanding of common pitfalls like holding locks too long, lock-ordering issues, and upgrade/downgrade mistakes; and the ability to implement read-write lock solutions that are both correct and performant.
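Every example on this page calls into a `ReadWriteLock` with awaitable `readLock`/`readUnlock`/`writeLock`/`writeUnlock` methods, presumably built earlier in this module. If you're reading this page standalone, here is a minimal async sketch of that interface — an illustrative assumption, not necessarily the implementation from the earlier pages:

```typescript
type Waiter = { write: boolean; resolve: () => void };

// Minimal async read-write lock matching the interface used on this page.
// Sketch assumptions: a single JS event loop, roughly FIFO fairness
// (new readers don't barge past a waiting writer).
class ReadWriteLock {
  private readers = 0;
  private writerActive = false;
  private waiting: Waiter[] = [];

  async readLock(): Promise<void> {
    // Grant immediately unless a writer holds or is waiting for the lock
    if (!this.writerActive && !this.waiting.some(w => w.write)) {
      this.readers++;
      return;
    }
    await new Promise<void>(resolve => this.waiting.push({ write: false, resolve }));
  }

  async readUnlock(): Promise<void> {
    this.readers--;
    this.dispatch();
  }

  async writeLock(): Promise<void> {
    // Grant immediately only when the lock is completely idle
    if (!this.writerActive && this.readers === 0 && this.waiting.length === 0) {
      this.writerActive = true;
      return;
    }
    await new Promise<void>(resolve => this.waiting.push({ write: true, resolve }));
  }

  async writeUnlock(): Promise<void> {
    this.writerActive = false;
    this.dispatch();
  }

  private dispatch(): void {
    if (this.writerActive) return;
    if (this.waiting.length > 0 && this.waiting[0].write) {
      // Next waiter is a writer: grant exclusively once all readers leave
      if (this.readers === 0) {
        this.writerActive = true;
        this.waiting.shift()!.resolve();
      }
      return;
    }
    // Otherwise wake every reader queued ahead of the next writer
    while (this.waiting.length > 0 && !this.waiting[0].write) {
      this.readers++;
      this.waiting.shift()!.resolve();
    }
  }
}
```

This variant lets a batch of readers share the lock while a queued writer prevents new readers from barging, one point on the fairness spectrum discussed earlier in the module.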
Caches are the quintessential read-write lock use case. Reads vastly outnumber writes, data is typically small enough to be held while locked, and consistency is important. Let's build a production-quality cache with features you'd expect in a real system:
```typescript
interface CacheEntry<V> {
  value: V;
  expiresAt: number;
  lastAccessed: number;
  size: number;
}

interface CacheStats {
  hits: number;
  misses: number;
  evictions: number;
  expirations: number;
}

class ProductionCache<K, V> {
  private cache: Map<K, CacheEntry<V>> = new Map();
  private rwLock: ReadWriteLock = new ReadWriteLock();
  private maxSize: number;
  private defaultTtlMs: number;
  private stats: CacheStats = { hits: 0, misses: 0, evictions: 0, expirations: 0 };

  constructor(options: { maxSize?: number; defaultTtlMs?: number } = {}) {
    this.maxSize = options.maxSize ?? 10000;
    this.defaultTtlMs = options.defaultTtlMs ?? 300000; // 5 minutes
  }

  /**
   * Get a value from cache.
   * Returns undefined if not found or expired.
   */
  async get(key: K): Promise<V | undefined> {
    await this.rwLock.readLock();
    try {
      const entry = this.cache.get(key);
      if (!entry) {
        this.stats.misses++;
        return undefined;
      }
      // Check expiration (under read lock - we don't remove, just return undefined)
      if (Date.now() > entry.expiresAt) {
        this.stats.expirations++;
        return undefined; // Cleanup will happen lazily
      }
      // Update access time (safe to mutate individual entry under read lock
      // because we're not changing the Map structure)
      entry.lastAccessed = Date.now();
      this.stats.hits++;
      return entry.value;
    } finally {
      await this.rwLock.readUnlock();
    }
  }

  /**
   * Store a value in cache with optional TTL.
   * Triggers eviction if cache is full.
   */
  async set(key: K, value: V, options: { ttlMs?: number; size?: number } = {}): Promise<void> {
    await this.rwLock.writeLock();
    try {
      const now = Date.now();
      const entry: CacheEntry<V> = {
        value,
        expiresAt: now + (options.ttlMs ?? this.defaultTtlMs),
        lastAccessed: now,
        size: options.size ?? 1,
      };
      // Evict if necessary to make room
      await this.evictIfNeeded(1);
      this.cache.set(key, entry);
    } finally {
      await this.rwLock.writeUnlock();
    }
  }

  /**
   * Atomic get-or-compute: returns cached value or computes and caches.
   * Uses double-check pattern to minimize lock contention.
   */
  async getOrCompute(
    key: K,
    compute: () => Promise<V>,
    options: { ttlMs?: number; size?: number } = {}
  ): Promise<V> {
    // First check: read lock only
    const cached = await this.get(key);
    if (cached !== undefined) {
      return cached;
    }
    // Cache miss - need to compute
    // NOTE: This section has a race window where multiple threads
    // might compute the same value. For expensive computations,
    // consider adding a per-key lock or using a loading cache pattern.
    const value = await compute();
    await this.set(key, value, options);
    return value;
  }

  /**
   * Remove a specific key from cache.
   */
  async delete(key: K): Promise<boolean> {
    await this.rwLock.writeLock();
    try {
      return this.cache.delete(key);
    } finally {
      await this.rwLock.writeUnlock();
    }
  }

  /**
   * Clear all expired entries. Call periodically from a background task.
   */
  async cleanup(): Promise<number> {
    await this.rwLock.writeLock();
    try {
      const now = Date.now();
      let cleaned = 0;
      for (const [key, entry] of this.cache) {
        if (now > entry.expiresAt) {
          this.cache.delete(key);
          cleaned++;
        }
      }
      return cleaned;
    } finally {
      await this.rwLock.writeUnlock();
    }
  }

  /**
   * Get cache statistics.
   */
  async getStats(): Promise<CacheStats & { size: number; hitRate: number }> {
    await this.rwLock.readLock();
    try {
      const total = this.stats.hits + this.stats.misses;
      return {
        ...this.stats,
        size: this.cache.size,
        hitRate: total > 0 ? this.stats.hits / total : 0,
      };
    } finally {
      await this.rwLock.readUnlock();
    }
  }

  // Private: must be called while holding the write lock
  private async evictIfNeeded(spaceNeeded: number): Promise<void> {
    while (this.cache.size + spaceNeeded > this.maxSize) {
      // Find LRU entry
      let lruKey: K | undefined;
      let lruTime = Infinity;
      for (const [key, entry] of this.cache) {
        if (entry.lastAccessed < lruTime) {
          lruTime = entry.lastAccessed;
          lruKey = key;
        }
      }
      if (lruKey !== undefined) {
        this.cache.delete(lruKey);
        this.stats.evictions++;
      } else {
        break; // No more entries to evict
      }
    }
  }
}
```

Updating lastAccessed on the entry object is safe under the read lock because we're modifying the entry's contents, not the Map structure itself. The Map's keys and value references remain unchanged. This is a common pattern that avoids write-lock overhead for access-time tracking.
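The race window flagged in `getOrCompute` can be closed by memoizing the in-flight computation per key, so concurrent callers share one promise instead of racing. A hypothetical sketch of that loading pattern (the `LoadingMap` name is ours; in a single-threaded event loop the maps themselves need no lock, which is what makes this shape cheap):

```typescript
// Hypothetical per-key loading pattern: concurrent callers for the same
// key await a single shared computation instead of each computing.
class LoadingMap<K, V> {
  private values = new Map<K, V>();
  private inFlight = new Map<K, Promise<V>>();

  async getOrCompute(key: K, compute: () => Promise<V>): Promise<V> {
    const cached = this.values.get(key);
    if (cached !== undefined) return cached;

    // Join an existing computation for this key if one is already running
    const pending = this.inFlight.get(key);
    if (pending) return pending;

    const promise = compute()
      .then(value => {
        this.values.set(key, value);
        return value;
      })
      .finally(() => this.inFlight.delete(key)); // allow retry after failure
    this.inFlight.set(key, promise);
    return promise;
  }
}
```

Applied to `ProductionCache`, the same idea would mean storing the pending promise keyed by `key` before calling `compute`, so later callers await it rather than recomputing.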
Configuration services are among the most read-heavy workloads in distributed systems. Applications query configuration on nearly every request, while updates happen rarely (perhaps a few times per deployment cycle). This makes them ideal candidates for read-write locks.
Let's build a configuration service that supports layered values with priorities, atomic multi-key reads, versioning for cache invalidation, and change notifications:
```typescript
type ConfigValue = string | number | boolean | null;
type ConfigListener = (changes: Map<string, ConfigValue>) => void;

interface ConfigLayer {
  name: string;
  priority: number; // Higher = takes precedence
  values: Map<string, ConfigValue>;
}

class DynamicConfigService {
  private layers: ConfigLayer[] = [];
  private listeners: Set<ConfigListener> = new Set();
  private rwLock: ReadWriteLock = new ReadWriteLock();
  private version: number = 0;

  constructor() {
    // Initialize with default layer
    this.layers.push({
      name: 'defaults',
      priority: 0,
      values: new Map(),
    });
  }

  /**
   * Get a configuration value with type coercion.
   * Checks layers in priority order (highest first).
   */
  async get<T extends ConfigValue>(key: string, defaultValue: T): Promise<T> {
    await this.rwLock.readLock();
    try {
      // Search layers from highest to lowest priority
      const sortedLayers = [...this.layers].sort((a, b) => b.priority - a.priority);
      for (const layer of sortedLayers) {
        if (layer.values.has(key)) {
          const value = layer.values.get(key);
          if (value !== undefined) {
            return value as T;
          }
        }
      }
      return defaultValue;
    } finally {
      await this.rwLock.readUnlock();
    }
  }

  /**
   * Get multiple configuration values atomically.
   * All values reflect the same configuration snapshot.
   */
  async getMany<T extends Record<string, ConfigValue>>(
    keys: (keyof T)[],
    defaults: T
  ): Promise<T> {
    await this.rwLock.readLock();
    try {
      const result = { ...defaults };
      const sortedLayers = [...this.layers].sort((a, b) => b.priority - a.priority);
      for (const key of keys) {
        for (const layer of sortedLayers) {
          if (layer.values.has(key as string)) {
            const value = layer.values.get(key as string);
            if (value !== undefined) {
              result[key] = value as T[keyof T];
              break;
            }
          }
        }
      }
      return result;
    } finally {
      await this.rwLock.readUnlock();
    }
  }

  /**
   * Update configuration values in a specific layer.
   * Notifies all listeners of changes.
   */
  async updateLayer(
    layerName: string,
    updates: Map<string, ConfigValue>
  ): Promise<void> {
    const changes = new Map<string, ConfigValue>();
    await this.rwLock.writeLock();
    try {
      const layer = this.layers.find(l => l.name === layerName);
      if (!layer) {
        throw new Error(`Layer '${layerName}' not found`);
      }
      // Track changes for notifications
      for (const [key, newValue] of updates) {
        const oldValue = await this.getEffectiveValueUnsafe(key);
        layer.values.set(key, newValue);
        const effectiveValue = await this.getEffectiveValueUnsafe(key);
        if (oldValue !== effectiveValue) {
          changes.set(key, effectiveValue);
        }
      }
      this.version++;
    } finally {
      await this.rwLock.writeUnlock();
    }
    // Notify listeners OUTSIDE the lock
    if (changes.size > 0) {
      this.notifyListeners(changes);
    }
  }

  /**
   * Register a new configuration layer.
   */
  async registerLayer(name: string, priority: number): Promise<void> {
    await this.rwLock.writeLock();
    try {
      if (this.layers.some(l => l.name === name)) {
        throw new Error(`Layer '${name}' already exists`);
      }
      this.layers.push({
        name,
        priority,
        values: new Map(),
      });
      // Sort by priority for efficient lookups
      this.layers.sort((a, b) => b.priority - a.priority);
      this.version++;
    } finally {
      await this.rwLock.writeUnlock();
    }
  }

  /**
   * Bulk replace all values in a layer (useful for full config refresh).
   */
  async replaceLayer(
    layerName: string,
    newValues: Map<string, ConfigValue>
  ): Promise<void> {
    await this.rwLock.writeLock();
    try {
      const layer = this.layers.find(l => l.name === layerName);
      if (!layer) {
        throw new Error(`Layer '${layerName}' not found`);
      }
      layer.values = new Map(newValues);
      this.version++;
    } finally {
      await this.rwLock.writeUnlock();
    }
  }

  /**
   * Subscribe to configuration changes.
   */
  subscribe(listener: ConfigListener): () => void {
    this.listeners.add(listener);
    return () => this.listeners.delete(listener);
  }

  /**
   * Get current configuration version (for cache invalidation).
   */
  async getVersion(): Promise<number> {
    await this.rwLock.readLock();
    try {
      return this.version;
    } finally {
      await this.rwLock.readUnlock();
    }
  }

  // Private: called while holding read or write lock
  private async getEffectiveValueUnsafe(key: string): Promise<ConfigValue> {
    for (const layer of this.layers) {
      if (layer.values.has(key)) {
        return layer.values.get(key)!;
      }
    }
    return null;
  }

  private notifyListeners(changes: Map<string, ConfigValue>): void {
    for (const listener of this.listeners) {
      try {
        listener(changes);
      } catch (error) {
        console.error('Config listener error:', error);
      }
    }
  }
}
```

Notice that listener notifications happen AFTER releasing the write lock. This prevents potential deadlocks if listeners try to read configuration values, and reduces lock hold time. Always minimize the work done while holding locks.
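To see how the layered lookup behaves, the priority-resolution logic from `get` can be exercised in isolation. A standalone sketch of just that logic (not the service itself), using a hypothetical `resolve` helper:

```typescript
type ConfigValue = string | number | boolean | null;
interface ConfigLayer {
  name: string;
  priority: number; // Higher = takes precedence
  values: Map<string, ConfigValue>;
}

// Standalone sketch of the layered lookup used by get():
// the highest-priority layer that defines the key wins.
function resolve(layers: ConfigLayer[], key: string, fallback: ConfigValue): ConfigValue {
  const sorted = [...layers].sort((a, b) => b.priority - a.priority);
  for (const layer of sorted) {
    if (layer.values.has(key)) return layer.values.get(key)!;
  }
  return fallback;
}
```

A higher-priority "overrides" layer shadows the "defaults" layer for any key it defines, while missing keys fall through to the default value.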
Read-write locks can protect any data structure where reads are more frequent than writes. Let's examine two useful examples: a concurrent leaderboard built on a sorted structure, and a concurrent rate limiter.
A leaderboard needs fast range queries (top 10, rank around position X) while supporting score updates. This is a perfect fit for a read-write locked sorted structure.
```typescript
interface LeaderboardEntry {
  userId: string;
  score: number;
  metadata?: Record<string, any>;
}

class ConcurrentLeaderboard {
  // Using a sorted array for simplicity; production might use a skip list or tree
  private entries: LeaderboardEntry[] = [];
  private userIndex: Map<string, number> = new Map(); // userId -> array index
  private rwLock: ReadWriteLock = new ReadWriteLock();

  /**
   * Get top N entries from the leaderboard.
   */
  async getTopN(n: number): Promise<LeaderboardEntry[]> {
    await this.rwLock.readLock();
    try {
      return this.entries.slice(0, Math.min(n, this.entries.length));
    } finally {
      await this.rwLock.readUnlock();
    }
  }

  /**
   * Get entries around a specific rank (for "players near me" feature).
   */
  async getAroundRank(rank: number, count: number): Promise<{
    entries: LeaderboardEntry[];
    startRank: number;
  }> {
    await this.rwLock.readLock();
    try {
      const halfCount = Math.floor(count / 2);
      const startRank = Math.max(0, rank - halfCount);
      const endRank = Math.min(this.entries.length, startRank + count);
      return {
        entries: this.entries.slice(startRank, endRank),
        startRank: startRank + 1, // 1-indexed rank
      };
    } finally {
      await this.rwLock.readUnlock();
    }
  }

  /**
   * Get a user's current rank and score.
   */
  async getUserRank(userId: string): Promise<{ rank: number; entry: LeaderboardEntry } | null> {
    await this.rwLock.readLock();
    try {
      const index = this.userIndex.get(userId);
      if (index === undefined) {
        return null;
      }
      return {
        rank: index + 1, // 1-indexed
        entry: this.entries[index],
      };
    } finally {
      await this.rwLock.readUnlock();
    }
  }

  /**
   * Update a user's score. Maintains sorted order.
   */
  async updateScore(userId: string, newScore: number, metadata?: Record<string, any>): Promise<void> {
    await this.rwLock.writeLock();
    try {
      const existingIndex = this.userIndex.get(userId);
      if (existingIndex !== undefined) {
        // Remove from current position
        this.entries.splice(existingIndex, 1);
      }
      // Find insertion point (binary search for sorted insert)
      const insertIndex = this.findInsertionPoint(newScore);
      const entry: LeaderboardEntry = { userId, score: newScore, metadata };
      this.entries.splice(insertIndex, 0, entry);
      // Rebuild index (necessary because splice shifts indices)
      this.rebuildIndex();
    } finally {
      await this.rwLock.writeUnlock();
    }
  }

  /**
   * Remove a user from the leaderboard.
   */
  async removeUser(userId: string): Promise<boolean> {
    await this.rwLock.writeLock();
    try {
      const index = this.userIndex.get(userId);
      if (index === undefined) {
        return false;
      }
      this.entries.splice(index, 1);
      this.rebuildIndex();
      return true;
    } finally {
      await this.rwLock.writeUnlock();
    }
  }

  private findInsertionPoint(score: number): number {
    // Binary search for descending order
    let low = 0;
    let high = this.entries.length;
    while (low < high) {
      const mid = Math.floor((low + high) / 2);
      if (this.entries[mid].score > score) {
        low = mid + 1;
      } else {
        high = mid;
      }
    }
    return low;
  }

  private rebuildIndex(): void {
    this.userIndex.clear();
    this.entries.forEach((entry, index) => {
      this.userIndex.set(entry.userId, index);
    });
  }
}
```

Rate limiters are read-heavy (check limits frequently) with occasional writes (refill tokens, record usage). A read-write lock fits well:
```typescript
interface RateLimitBucket {
  tokens: number;
  lastRefill: number;
  blocked: boolean;
}

class ConcurrentRateLimiter {
  private buckets: Map<string, RateLimitBucket> = new Map();
  private rwLock: ReadWriteLock = new ReadWriteLock();
  private readonly maxTokens: number;
  private readonly refillRatePerSecond: number;

  constructor(maxTokens: number, refillRatePerSecond: number) {
    this.maxTokens = maxTokens;
    this.refillRatePerSecond = refillRatePerSecond;
  }

  /**
   * Check if a request is allowed without consuming a token (peek).
   * Useful for UI hints ("you can make N more requests").
   */
  async getAvailableTokens(key: string): Promise<number> {
    await this.rwLock.readLock();
    try {
      const bucket = this.buckets.get(key);
      if (!bucket) {
        return this.maxTokens; // New bucket would have full tokens
      }
      // Calculate tokens with simulated refill
      return this.calculateCurrentTokens(bucket);
    } finally {
      await this.rwLock.readUnlock();
    }
  }

  /**
   * Check if a key is currently blocked (hard rate limit).
   */
  async isBlocked(key: string): Promise<boolean> {
    await this.rwLock.readLock();
    try {
      const bucket = this.buckets.get(key);
      return bucket?.blocked ?? false;
    } finally {
      await this.rwLock.readUnlock();
    }
  }

  /**
   * Attempt to consume tokens. Returns whether the request is allowed,
   * plus the remaining token count and an optional retry hint.
   */
  async tryConsume(key: string, tokens: number = 1): Promise<{
    allowed: boolean;
    remaining: number;
    retryAfterMs?: number;
  }> {
    // Optimization: check with read lock first
    const preCheck = await this.quickCheck(key, tokens);
    if (!preCheck.mightSucceed) {
      return {
        allowed: false,
        remaining: preCheck.remaining,
        retryAfterMs: preCheck.retryAfterMs,
      };
    }
    // Might succeed - acquire write lock for actual consumption
    await this.rwLock.writeLock();
    try {
      let bucket = this.buckets.get(key);
      if (!bucket) {
        bucket = {
          tokens: this.maxTokens,
          lastRefill: Date.now(),
          blocked: false,
        };
        this.buckets.set(key, bucket);
      }
      // Perform refill
      this.refillBucket(bucket);
      if (bucket.blocked) {
        return { allowed: false, remaining: 0, retryAfterMs: 60000 };
      }
      if (bucket.tokens >= tokens) {
        bucket.tokens -= tokens;
        return { allowed: true, remaining: bucket.tokens };
      }
      // Not enough tokens
      const tokensNeeded = tokens - bucket.tokens;
      const retryAfterMs = (tokensNeeded / this.refillRatePerSecond) * 1000;
      return {
        allowed: false,
        remaining: bucket.tokens,
        retryAfterMs: Math.ceil(retryAfterMs),
      };
    } finally {
      await this.rwLock.writeUnlock();
    }
  }

  /**
   * Hard block a key (for abuse prevention).
   */
  async block(key: string): Promise<void> {
    await this.rwLock.writeLock();
    try {
      let bucket = this.buckets.get(key);
      if (!bucket) {
        bucket = { tokens: 0, lastRefill: Date.now(), blocked: true };
        this.buckets.set(key, bucket);
      } else {
        bucket.blocked = true;
        bucket.tokens = 0;
      }
    } finally {
      await this.rwLock.writeUnlock();
    }
  }

  private async quickCheck(key: string, tokens: number): Promise<{
    mightSucceed: boolean;
    remaining: number;
    retryAfterMs?: number;
  }> {
    await this.rwLock.readLock();
    try {
      const bucket = this.buckets.get(key);
      if (!bucket) {
        return { mightSucceed: true, remaining: this.maxTokens };
      }
      if (bucket.blocked) {
        return { mightSucceed: false, remaining: 0, retryAfterMs: 60000 };
      }
      const currentTokens = this.calculateCurrentTokens(bucket);
      if (currentTokens >= tokens) {
        return { mightSucceed: true, remaining: currentTokens };
      }
      const tokensNeeded = tokens - currentTokens;
      return {
        mightSucceed: false,
        remaining: currentTokens,
        retryAfterMs: Math.ceil((tokensNeeded / this.refillRatePerSecond) * 1000),
      };
    } finally {
      await this.rwLock.readUnlock();
    }
  }

  private calculateCurrentTokens(bucket: RateLimitBucket): number {
    const now = Date.now();
    const elapsed = (now - bucket.lastRefill) / 1000;
    const refilled = elapsed * this.refillRatePerSecond;
    return Math.min(this.maxTokens, bucket.tokens + refilled);
  }

  private refillBucket(bucket: RateLimitBucket): void {
    const now = Date.now();
    const elapsed = (now - bucket.lastRefill) / 1000;
    const refilled = elapsed * this.refillRatePerSecond;
    bucket.tokens = Math.min(this.maxTokens, bucket.tokens + refilled);
    bucket.lastRefill = now;
  }
}
```

Notice the quickCheck method that uses a read lock first. This is an optimization: if we can determine the request will fail without modifying state, we avoid the more expensive write lock. This pattern is especially effective when most requests are rejected (e.g., rate limiting abusive clients).
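The refill arithmetic shared by `calculateCurrentTokens` and `refillBucket` is worth checking in isolation: tokens accrue linearly with elapsed time and are capped at `maxTokens`. A standalone sketch of just that formula (the `currentTokens` helper is ours, extracted for illustration):

```typescript
// Token-bucket refill arithmetic, as used by calculateCurrentTokens:
// tokens grow at refillRatePerSecond since the last refill, capped at maxTokens.
function currentTokens(
  tokensAtRefill: number,
  lastRefillMs: number,
  nowMs: number,
  refillRatePerSecond: number,
  maxTokens: number
): number {
  const elapsedSeconds = (nowMs - lastRefillMs) / 1000;
  return Math.min(maxTokens, tokensAtRefill + elapsedSeconds * refillRatePerSecond);
}
```

For example, a bucket with 2 tokens and a 1-token/second refill rate holds 5 tokens after 3 seconds, and saturates at its capacity rather than growing without bound.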
Understanding what NOT to do is as important as knowing the correct patterns. Here are common mistakes that can lead to performance degradation, deadlocks, or correctness bugs.
```typescript
// ❌ BAD: Holding lock during I/O
async function badRefreshCache(): Promise<void> {
  await rwLock.writeLock();
  try {
    // This HTTP call might take 500ms-5s
    // ALL reads are blocked during this time!
    const newData = await fetch('https://api.example.com/data');
    cache = await newData.json();
  } finally {
    await rwLock.writeUnlock();
  }
}

// ✅ GOOD: Fetch outside the lock
async function goodRefreshCache(): Promise<void> {
  // Fetch data without holding any lock
  const newData = await fetch('https://api.example.com/data');
  const parsedData = await newData.json();

  // Only hold lock briefly for the actual update
  await rwLock.writeLock();
  try {
    cache = parsedData;
  } finally {
    await rwLock.writeUnlock();
  }
}
```
```typescript
// ❌ BAD: Nested locks with inconsistent ordering
class CacheA {
  private rwLock = new ReadWriteLock();
  private cacheB: CacheB;

  async syncWithB(): Promise<void> {
    await this.rwLock.writeLock(); // Lock A first
    try {
      const bData = await this.cacheB.getAll(); // This acquires Lock B
      // If CacheB.syncWithA() acquires B then A, DEADLOCK!
    } finally {
      await this.rwLock.writeUnlock();
    }
  }
}

// ✅ GOOD: Consistent lock ordering (alphabetical by resource)
async function syncCaches(cacheA: CacheA, cacheB: CacheB): Promise<void> {
  // Always acquire in same order: A before B
  await cacheA.rwLock.writeLock();
  try {
    await cacheB.rwLock.writeLock();
    try {
      // Now safe to sync
    } finally {
      await cacheB.rwLock.writeUnlock();
    }
  } finally {
    await cacheA.rwLock.writeUnlock();
  }
}
```
```typescript
// ❌ BAD: Read-then-write without holding lock
async function badIncrement(key: string): Promise<number> {
  await rwLock.readLock();
  let value: number;
  try {
    value = data.get(key) ?? 0;
  } finally {
    await rwLock.readUnlock();
  }
  // DANGER: Another thread could modify 'key' here!
  await rwLock.writeLock();
  try {
    data.set(key, value + 1); // Lost update if another thread incremented
    return value + 1;
  } finally {
    await rwLock.writeUnlock();
  }
}

// ✅ GOOD: Use write lock for entire read-modify-write
async function goodIncrement(key: string): Promise<number> {
  await rwLock.writeLock();
  try {
    const value = (data.get(key) ?? 0) + 1;
    data.set(key, value);
    return value;
  } finally {
    await rwLock.writeUnlock();
  }
}

// ✅ ALSO GOOD: Optimistic with retry (for high contention)
async function optimisticIncrement(key: string): Promise<number> {
  while (true) {
    // Snapshot current value and version under a read lock
    await rwLock.readLock();
    let currentValue: number;
    let currentVersion: number;
    try {
      currentValue = data.get(key) ?? 0;
      currentVersion = version;
    } finally {
      await rwLock.readUnlock();
    }

    await rwLock.writeLock();
    try {
      // Check if version changed while we weren't holding a lock
      if (version !== currentVersion) {
        continue; // Retry
      }
      const newValue = currentValue + 1;
      data.set(key, newValue);
      version++;
      return newValue;
    } finally {
      await rwLock.writeUnlock();
    }
  }
}
```
```typescript
// ❌ BAD: Single lock for all data
class BadCache<K, V> {
  private data: Map<K, V> = new Map();
  private rwLock = new ReadWriteLock(); // Single lock = bottleneck

  async get(key: K): Promise<V | undefined> {
    await this.rwLock.readLock();
    try {
      return this.data.get(key);
    } finally {
      await this.rwLock.readUnlock();
    }
  }
}

// ✅ GOOD: Striped locks for reduced contention
interface Stripe<K, V> {
  data: Map<K, V>;
  rwLock: ReadWriteLock;
}

class StripedCache<K, V> {
  private static STRIPE_COUNT = 16;
  private stripes: Stripe<K, V>[];

  constructor() {
    this.stripes = Array.from({ length: StripedCache.STRIPE_COUNT }, () => ({
      data: new Map<K, V>(),
      rwLock: new ReadWriteLock(),
    }));
  }

  private getStripe(key: K): Stripe<K, V> {
    // Hash key to determine stripe
    const hash = this.hashCode(key);
    const index = Math.abs(hash) % StripedCache.STRIPE_COUNT;
    return this.stripes[index];
  }

  async get(key: K): Promise<V | undefined> {
    const stripe = this.getStripe(key);
    await stripe.rwLock.readLock();
    try {
      return stripe.data.get(key);
    } finally {
      await stripe.rwLock.readUnlock();
    }
  }

  async set(key: K, value: V): Promise<void> {
    const stripe = this.getStripe(key);
    await stripe.rwLock.writeLock();
    try {
      stripe.data.set(key, value);
    } finally {
      await stripe.rwLock.writeUnlock();
    }
  }

  private hashCode(key: K): number {
    // Simple hash for strings; adapt for your key type
    const str = String(key);
    let hash = 0;
    for (let i = 0; i < str.length; i++) {
      hash = ((hash << 5) - hash) + str.charCodeAt(i);
      hash |= 0;
    }
    return hash;
  }
}
```

Here's a checklist for using read-write locks effectively in production systems, distilled from the patterns above:

- Never hold a lock across I/O or other slow operations; fetch or compute first, then lock briefly to swap in the result.
- When multiple locks are involved, acquire them in one consistent global order to prevent deadlock.
- Keep read-modify-write sequences under a single write lock, or use a versioned optimistic-retry scheme.
- Always release locks in finally blocks so errors can't leak a held lock.
- Run listener callbacks and notifications outside the lock.
- Stripe hot data structures across multiple locks when a single lock becomes a bottleneck.
- Confirm that reads really do dominate; if writes are frequent, a plain mutex may be simpler and just as fast.
Read-write locks are fundamental to many widely-used systems. Understanding how production systems use them provides valuable insight.
| System | Component | Usage | Notes |
|---|---|---|---|
| PostgreSQL | Buffer pool | Reader-writer locks protect each buffer page | Uses custom RW lock with trylock support |
| Linux Kernel | VFS inode cache | Read locks for lookups, write for modifications | High-performance seqlock variant for hot paths |
| Java ConcurrentHashMap | Segment locks (pre-Java 8) | Striped locks reducing write contention | Replaced with CAS plus per-bin synchronization in Java 8 |
| Redis | Key space | Mostly single-threaded, so commands need no keyspace locks | Background threads (lazy freeing, I/O) coordinate via their own synchronization |
| ZooKeeper | Data tree | Read locks for lookups, write lock for committed transactions | Local reads can be stale; clients call sync() for up-to-date reads |
| Kubernetes | API Server cache | Shared-exclusive locks for watch cache | Handles thousands of watches efficiently |
Production systems often use advanced variants: seqlocks (Linux: optimistic reads without write blocking), epoch-based reclamation (RCU: readers never block, writers wait for reader epochs to complete), or hybrid approaches (optimistic read with fallback to lock). These are beyond our scope but worth exploring for extreme-performance scenarios.
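Of these variants, the seqlock idea is simple enough to sketch: the writer bumps a sequence counter to an odd value before mutating and back to even after, while readers retry until they observe the same even counter before and after reading. An illustrative TypeScript sketch of the counter protocol only — real seqlocks (as in the Linux kernel) depend on memory barriers across true threads, which JavaScript's single event loop doesn't exercise:

```typescript
// Illustrative seqlock: readers never block writers; they retry instead.
class SeqLock<T> {
  private seq = 0; // odd while a write is in progress
  private data: T;

  constructor(initial: T) {
    this.data = initial;
  }

  // Writer: counter goes odd (write in flight), mutate, then even (complete).
  write(next: T): void {
    this.seq++; // now odd
    this.data = next;
    this.seq++; // now even
  }

  // Reader: retry until a stable, even sequence number brackets the read.
  read(): T {
    while (true) {
      const before = this.seq;
      if (before % 2 === 1) continue; // write in progress; retry
      const snapshot = this.data;
      if (this.seq === before) return snapshot; // nothing changed: consistent
    }
  }
}
```

The appeal is that readers pay only two counter loads on the fast path and never make a writer wait, which is why kernels reserve seqlocks for small, frequently read, rarely written data.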
We've covered practical applications of read-write locks across multiple domains: production caches with lazy expiration and LRU eviction, layered configuration services, leaderboards and rate limiters, the anti-patterns that cause deadlocks and lost updates, and how widely-used systems apply the same ideas.
Module Complete:
You've now worked through the Read-Write Lock pattern end to end: the readers-writers problem, implementation details and fairness policies, and production-ready usage patterns. You can implement read-write locks from primitives, choose appropriate fairness policies, apply the pattern to real-world systems, and avoid common pitfalls — the foundation for building scalable concurrent systems that maximize concurrency for read-heavy workloads while maintaining correctness during writes.