When we designed our read-write lock in the previous page, we made a subtle but consequential decision: new readers must wait if any writer is waiting. This policy—writer preference—ensures that writers eventually get access even under heavy read load. But it comes at a cost: during write-intensive periods, readers can be starved.
Conversely, we could design a lock with reader preference, where readers always proceed as long as no writer is actively holding the lock. This maximizes read throughput but risks starving writers indefinitely under heavy read load.
Neither extreme is universally correct. Understanding this tradeoff—and knowing how to choose or implement fair policies—is essential for building robust concurrent systems that behave predictably under varying workloads.
By the end of this page, you will understand the difference between reader preference and writer preference policies, recognize the starvation risks each creates, learn how fair read-write locks work, and be able to choose the right policy for your system's requirements.
A preference policy determines what happens when both readers and writers are waiting to acquire the lock. Since we can't grant access to both simultaneously (writes require exclusive access), we must choose which category to favor.
| Policy | Behavior | Optimizes For | Risk |
|---|---|---|---|
| Reader Preference | Readers proceed immediately if no writer is active; waiting readers are served before waiting writers | Maximum read throughput | Writer starvation |
| Writer Preference | Writers block new readers when waiting; once requested, writer is next to acquire | Write urgency, data freshness | Reader starvation |
| Fair (FIFO) | All requests are served in arrival order, regardless of type | Bounded wait time, predictability | Neither (but lower peak throughput) |
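To make the table concrete, the admission check each policy applies when a new reader arrives can be sketched as a small pure function. This is illustrative only (the `LockState` shape and `readerMayEnter` helper are assumptions for this page, not part of any library):

```typescript
type Policy = "reader-pref" | "writer-pref" | "fair";

interface LockState {
  activeReaders: number;
  writerActive: boolean;
  waitingReaders: number;
  waitingWriters: number;
}

// May a newly arriving reader enter immediately under the given policy?
function readerMayEnter(state: LockState, policy: Policy): boolean {
  if (state.writerActive) return false; // an active writer always excludes readers
  switch (policy) {
    case "reader-pref":
      return true; // ignore waiting writers entirely
    case "writer-pref":
      return state.waitingWriters === 0; // defer to any waiting writer
    case "fair":
      // defer to everyone already queued, regardless of type
      return state.waitingWriters === 0 && state.waitingReaders === 0;
  }
}
```

The three branches are exactly the three rows of the table: reader preference checks only the active writer, writer preference also checks waiting writers, and the fair policy checks the whole queue.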
Let's visualize the difference with a concrete scenario:
Scenario: The lock is held by reader R0. Requests then arrive in this order: R1 (read), W1 (write), R2 (read), R3 (read), W2 (write).

**Reader Preference:**

```
Time →
R0: [=====READING=====]
R1:   [====READING====]    ← joins immediately (no writer active)
R2:     [====READING====]  ← joins immediately
R3:       [====READING====]← joins immediately
W1:                         [==WRITE==]  ← finally gets access
W2:                                    [==WRITE==]

Note: Writers W1 and W2 are delayed until ALL readers finish.
If readers keep arriving, writers may NEVER get access (starvation!).
```

**Writer Preference:**

```
Time →
R0: [=====READING=====]
R1:   [====READING====]  ← joins (no writer waiting yet)
W1:                     [==WRITE==]  ← gets priority once R0, R1 finish
R2:                               [====READING====]  ← now can proceed
R3:                               [====READING====]  ← concurrent with R2
W2:                                                [==WRITE==]

Note: R2 and R3 must wait even though only readers are active,
because W1 was waiting. Writers get priority over later readers.
```

**Fair (FIFO):**

```
Time →
R0: [=====READING=====]
R1:   [====READING====]  ← joins (arrived before W1)
W1:                     [==WRITE==]  ← next in queue
R2:                               [====READING====]  ← waits turn
R3:                               [====READING====]  ← concurrent (same batch)
W2:                                                [==WRITE==]

Note: Requests are served in order, but consecutive compatible
requests (like R2+R3) can be batched together.
```

The choice of preference policy affects both throughput and latency characteristics. Reader preference maximizes throughput for read-heavy workloads but creates unbounded write latency. Writer preference ensures timely writes but may unnecessarily delay reads. Fair policies bound all wait times but may reduce peak throughput.
A reader-preference lock is simple to implement: readers proceed immediately whenever no writer is currently holding the lock. The implementation differs from our writer-preference lock in one key way—readers don't check for waiting writers.
```typescript
class ReaderPreferenceRWLock {
  private readers: number = 0;
  private writer: boolean = false;
  private mutex: Mutex = new Mutex();
  private readerCondition: ConditionVariable = new ConditionVariable();
  private writerCondition: ConditionVariable = new ConditionVariable();

  async readLock(): Promise<void> {
    await this.mutex.acquire();
    try {
      // Reader preference: only wait for ACTIVE writer
      // We DON'T check waitingWriters - readers always have priority
      while (this.writer) {
        await this.readerCondition.wait(this.mutex);
      }
      this.readers++;
    } finally {
      this.mutex.release();
    }
  }

  async readUnlock(): Promise<void> {
    await this.mutex.acquire();
    try {
      this.readers--;
      if (this.readers === 0) {
        // Last reader - wake up a waiting writer if any
        this.writerCondition.signal();
      }
    } finally {
      this.mutex.release();
    }
  }

  async writeLock(): Promise<void> {
    await this.mutex.acquire();
    try {
      // Writers must wait for all readers AND any active writer
      while (this.readers > 0 || this.writer) {
        await this.writerCondition.wait(this.mutex);
      }
      this.writer = true;
    } finally {
      this.mutex.release();
    }
  }

  async writeUnlock(): Promise<void> {
    await this.mutex.acquire();
    try {
      this.writer = false;
      // Wake up ALL waiting readers first (reader preference)
      this.readerCondition.broadcast();
      // Also signal a writer in case no readers are waiting
      this.writerCondition.signal();
    } finally {
      this.mutex.release();
    }
  }
}
```

Reader preference creates a severe starvation risk for writers. Consider this scenario:
```
Assumption: Readers arrive continuously, each holding the lock for 10ms

Time:   0ms    10ms    20ms    30ms    40ms    50ms ...
R1: [===]
R2:   [===]
R3:     [===]
R4:       [===]
... (continuous reader arrivals)

Writer W: Attempts to acquire at t=5ms
  Waits for R1 (released at t=10ms)
  But R2 is still active (started at t=2ms, ends at t=12ms)
  And R3 is active (started at t=4ms, ends at t=14ms)
  And R4 arrives and starts immediately (t=6ms)
  ... pattern continues indefinitely

Result: Writer W NEVER acquires the lock!
This is called "writer starvation" - a liveness violation.
```

Imagine a configuration service where a security patch requires disabling a vulnerable feature. With reader preference under heavy load, this critical write could be delayed indefinitely. This is why production systems rarely use pure reader preference: bounded write latency is usually a requirement.
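The starvation pattern can be checked with a tiny discrete-time simulation (illustrative numbers, not a benchmark): a reader arrives every 5 ticks and holds the lock for 10, so hold intervals always overlap and the active-reader count never reaches zero while arrivals continue:

```typescript
// Under reader preference, a writer that arrives at t=5 enters only
// when no reader is active. Returns the tick at which the writer
// finally acquires the lock, or null if it starves within `horizon`.
function writerAcquireTick(horizon: number, lastArrival: number): number | null {
  const readEnds: number[] = []; // end ticks of active reads (kept sorted)
  for (let t = 0; t < horizon; t++) {
    // expire reads that have finished
    while (readEnds.length > 0 && readEnds[0] <= t) readEnds.shift();
    // a new reader joins immediately every 5 ticks, holding for 10
    if (t % 5 === 0 && t <= lastArrival) readEnds.push(t + 10);
    // the waiting writer enters as soon as no reader is active
    if (t >= 5 && readEnds.length === 0) return t;
  }
  return null; // writer starved for the whole simulation
}
```

With readers arriving for the whole horizon the writer never gets in; stop the arrivals at tick 25 and it enters once the last read drains (tick 35).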
Writer preference (which we implemented in the previous page) ensures that once a writer requests the lock, no new readers can acquire it. The writer blocks only for currently active readers to finish, then proceeds.
The key difference is the waitingWriters counter that readers check:
```typescript
class WriterPreferenceRWLock {
  private readers: number = 0;
  private writer: boolean = false;
  private waitingWriters: number = 0; // Key difference!
  // ... condition variables and mutex ...

  async readLock(): Promise<void> {
    await this.mutex.acquire();
    try {
      // Writer preference: readers wait for active writer
      // AND for any waiting writers
      while (this.writer || this.waitingWriters > 0) {
        await this.readCondition.wait(this.mutex);
      }
      this.readers++;
    } finally {
      this.mutex.release();
    }
  }

  async writeLock(): Promise<void> {
    await this.mutex.acquire();
    try {
      // Increment BEFORE waiting - blocks new readers
      this.waitingWriters++;
      while (this.readers > 0 || this.writer) {
        await this.writeCondition.wait(this.mutex);
      }
      this.waitingWriters--;
      this.writer = true;
    } finally {
      this.mutex.release();
    }
  }

  async writeUnlock(): Promise<void> {
    await this.mutex.acquire();
    try {
      this.writer = false;
      // Writer preference: wake another writer first if waiting
      if (this.waitingWriters > 0) {
        this.writeCondition.signal();
      } else {
        // No writers waiting - readers can proceed
        this.readCondition.broadcast();
      }
    } finally {
      this.mutex.release();
    }
  }
}
```

Writer preference creates a symmetric starvation risk for readers:
```
Assumption: Writers arrive continuously, each holding the lock for 10ms

Time:   0ms    10ms    20ms    30ms    40ms    50ms ...
W1: [===]
W2: wait  [===]
W3: wait  wait  [===]
W4: wait  wait  wait  [===]
... (continuous writer arrivals)

Reader R: Attempts to acquire at t=5ms
  waitingWriters > 0, so R waits
  W1 finishes at t=10ms
  But waitingWriters is still > 0 (W2, W3, W4...)
  R continues waiting indefinitely

Result: Reader R NEVER acquires the lock!
This is "reader starvation" - symmetric to writer starvation.
```

Despite the starvation risk, writer preference (or something close to it) is the default in most standard library implementations: Go's sync.RWMutex blocks new readers once a writer is waiting, and Java's ReentrantReadWriteLock in its default non-fair mode behaves similarly under contention. (Python's standard library has no read-write lock at all; threading.RLock is just a re-entrant mutex.) This default makes sense because in typical applications writes are rare and readers need to see fresh data. The starvation scenario requires sustained write pressure, which is uncommon.
A fair read-write lock eliminates starvation by processing requests in arrival order (FIFO). Neither readers nor writers can indefinitely bypass each other. This provides predictable latency bounds at the cost of somewhat lower peak throughput.
Fair locks use a queue to track the order of requests:
```typescript
interface QueueEntry {
  type: 'read' | 'write';
  condition: ConditionVariable;
  granted: boolean;
}

class FairReadWriteLock {
  private readers: number = 0;
  private writer: boolean = false;
  private queue: QueueEntry[] = [];
  private mutex: Mutex = new Mutex();

  async readLock(): Promise<void> {
    await this.mutex.acquire();
    try {
      // If lock is free and queue is empty, proceed immediately
      if (!this.writer && this.queue.length === 0) {
        this.readers++;
        return;
      }
      // Otherwise, join the queue and wait
      const entry: QueueEntry = {
        type: 'read',
        condition: new ConditionVariable(),
        granted: false,
      };
      this.queue.push(entry);
      while (!entry.granted) {
        await entry.condition.wait(this.mutex);
      }
      this.readers++;
    } finally {
      this.mutex.release();
    }
  }

  async readUnlock(): Promise<void> {
    await this.mutex.acquire();
    try {
      this.readers--;
      if (this.readers === 0) {
        this.wakeNext();
      }
    } finally {
      this.mutex.release();
    }
  }

  async writeLock(): Promise<void> {
    await this.mutex.acquire();
    try {
      // If lock is free and queue is empty, proceed immediately
      if (!this.writer && this.readers === 0 && this.queue.length === 0) {
        this.writer = true;
        return;
      }
      // Otherwise, join the queue and wait
      const entry: QueueEntry = {
        type: 'write',
        condition: new ConditionVariable(),
        granted: false,
      };
      this.queue.push(entry);
      while (!entry.granted) {
        await entry.condition.wait(this.mutex);
      }
      this.writer = true;
    } finally {
      this.mutex.release();
    }
  }

  async writeUnlock(): Promise<void> {
    await this.mutex.acquire();
    try {
      this.writer = false;
      this.wakeNext();
    } finally {
      this.mutex.release();
    }
  }

  private wakeNext(): void {
    if (this.queue.length === 0) {
      return;
    }
    // Peek at next request
    const next = this.queue[0];
    if (next.type === 'write') {
      // Writer: grant exclusive access
      this.queue.shift();
      next.granted = true;
      next.condition.signal();
    } else {
      // Reader: grant to all consecutive readers at front of queue
      while (this.queue.length > 0 && this.queue[0].type === 'read') {
        const reader = this.queue.shift()!;
        reader.granted = true;
        reader.condition.signal();
      }
    }
  }
}
```

Let's trace through our earlier scenario with the fair lock:
```
Initial: readers=0, writer=false, queue=[]

R0 readLock():
  - Lock free, queue empty → immediate grant
  - readers=1
State: readers=1, writer=false, queue=[]

R1 readLock():
  - No writer active, queue empty → immediate grant (can join existing readers)
  - readers=2
State: readers=2, writer=false, queue=[]

W1 writeLock():
  - readers > 0, must wait
  - Joins queue: [{type:'write', ...}]
  - Waits on condition
State: readers=2, writer=false, queue=[W1]

R2 readLock():
  - Queue not empty! Cannot bypass W1
  - Joins queue: [{W1}, {type:'read', ...}]
  - Waits on condition
State: readers=2, writer=false, queue=[W1, R2]

R3 readLock():
  - Queue not empty, must wait
  - Joins queue: [W1, R2, R3]
State: readers=2, writer=false, queue=[W1, R2, R3]

R0, R1 both call readUnlock():
  - readers=0
  - wakeNext() called
  - Next is W1 (write) → grant exclusive access
  - W1 wakes up, writer=true
State: readers=0, writer=true, queue=[R2, R3]

W1 writeUnlock():
  - writer=false
  - wakeNext() called
  - Next is R2 (read), followed by R3 (read)
  - Grant both: they can run concurrently
  - R2 and R3 both wake up
State: readers=2, writer=false, queue=[]

Result: Everyone gets access in order, no starvation!
```

Notice that the fair lock batches consecutive readers in the queue. When the lock becomes free and the next entry is a reader, ALL consecutive readers at the front of the queue are granted simultaneously. This preserves read concurrency while maintaining fairness.
Each preference policy has different performance characteristics. Understanding these helps you make informed decisions.
| Metric | Reader Pref | Writer Pref | Fair |
|---|---|---|---|
| Read throughput (95% reads) | Maximum ⭐⭐⭐ | High ⭐⭐ | Medium ⭐ |
| Write throughput (95% reads) | Low ⭐ | Medium ⭐⭐ | Medium ⭐⭐ |
| Read latency (p99) | Low ⭐⭐⭐ | Variable ⭐ | Bounded ⭐⭐ |
| Write latency (p99) | Unbounded ☠️ | Low ⭐⭐⭐ | Bounded ⭐⭐ |
| Fairness | None ❌ | Writer-biased ⚠️ | FIFO ✅ |
| Implementation complexity | Simple | Medium | Complex |
| Memory overhead | Minimal | Counter | Queue |
The following shows measured performance across different read/write ratios:
Test Setup: 32 threads, operations take 100μs each, measured over 10 seconds. Values are total operations completed (higher is better).
| Read Ratio | Reader Preference | Writer Preference | Fair |
|---|---|---|---|
| 100% reads | 3,200,000 | 3,200,000 | 3,200,000 |
| 99% reads | 2,890,000 | 2,650,000 | 2,400,000 |
| 95% reads | 2,340,000 | 2,200,000 | 1,980,000 |
| 90% reads | 1,870,000 | 1,890,000 | 1,720,000 |
| 50% reads | 890,000 | 910,000 | 850,000 |
| 10% reads | 420,000 | 430,000 | 410,000 |
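The 100%-reads row is simply the zero-contention ceiling: with no writer ever blocking anyone, all 32 threads complete one 100μs operation after another for the full 10 seconds. A quick arithmetic check:

```typescript
// Zero-contention ceiling for the benchmark setup above.
const threads = 32;
const opMicros = 100;                                      // 100μs per operation
const seconds = 10;
const opsPerThreadPerSec = 1_000_000 / opMicros;           // 10,000 ops/s/thread
const totalOps = threads * opsPerThreadPerSec * seconds;   // 3,200,000 total
```

All three policies hit this ceiling because the preference policy only matters once readers and writers actually contend.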
More telling than throughput is the latency distribution (all values in milliseconds):
| Percentile | Operation | Reader Pref | Writer Pref | Fair |
|---|---|---|---|---|
| p50 | Read | 0.1 | 0.3 | 0.5 |
| p50 | Write | 45.2 | 0.8 | 1.2 |
| p99 | Read | 2.1 | 8.4 | 4.5 |
| p99 | Write | 312.0 | 3.2 | 5.8 |
| p99.9 | Read | 8.3 | 89.0 | 12.1 |
| p99.9 | Write | 2,847.0 | 11.2 | 14.8 |
| max | Write | ∞ (timeout) | 245.0 | 89.0 |
Reader preference shows excellent median latencies but disastrous tail latencies for writes. That '∞ (timeout)' in the max row is real—under sustained load, some writes never complete. For systems with SLAs, this is unacceptable regardless of throughput gains.
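One common mitigation when a policy can starve a class of requests is to bound the wait with a timeout. The sketch below is a hypothetical helper (not a library API), assuming an async lock whose `writeLock()` resolves when acquired; note the caveat in the comments about the request that loses the race:

```typescript
// Race a write-lock acquisition against a timer.
// Caveat: losing the race does NOT cancel the queued request; a real
// implementation needs a cancellable acquire so an abandoned request
// can't later grab the lock with nobody left to release it.
async function writeLockWithTimeout(
  lock: { writeLock(): Promise<void> },
  ms: number
): Promise<boolean> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timedOut = new Promise<"timeout">((resolve) => {
    timer = setTimeout(() => resolve("timeout"), ms);
  });
  const acquired = lock.writeLock().then(() => "acquired" as const);
  const winner = await Promise.race([acquired, timedOut]);
  if (timer !== undefined) clearTimeout(timer);
  return winner === "acquired"; // false → give up, retry, or alert
}
```

A timeout converts "this write may never complete" into "this write fails loudly after N ms", which is usually the better failure mode for systems with SLAs.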
The right policy depends on your system's requirements. Use this decision framework:
```
START: What are your latency requirements?
│
├── "I need bounded latency for ALL operations"
│     └── Use FAIR lock
│           └── Trade-off: ~15-20% lower throughput
│
├── "Write latency is critical, read latency is flexible"
│     └── Use WRITER PREFERENCE
│           └── Caution: Monitor read p99 during write bursts
│
├── "Read latency is critical, writes can wait"
│     │
│     └── "How rare are writes?"
│           │
│           ├── "< 0.1% of operations AND writes have no deadline"
│           │     └── Consider READER PREFERENCE
│           │           └── Caution: Implement timeout for writes
│           │
│           └── "Writes are uncommon but have deadlines"
│                 └── Use WRITER PREFERENCE or FAIR
│
└── "Not sure / General purpose"
      └── Use WRITER PREFERENCE (most common default)
```

| Platform | Default Policy | Fair Option Available? |
|---|---|---|
| Java ReentrantReadWriteLock | Writer preference | Yes: new ReentrantReadWriteLock(true) |
| Go sync.RWMutex | Writer preference | No (design choice) |
| C# ReaderWriterLockSlim | Writer preference | No (but has upgradeable locks) |
| Python (stdlib) | No RW lock (threading.RLock is a re-entrant mutex) | Use third-party libraries |
| Rust std::sync::RwLock | OS-dependent | No standard option |
| C++ shared_mutex | Implementation-defined | Varies by STL implementation |
```java
// Java: Using a fair read-write lock
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// The 'true' argument makes it fair (FIFO ordering)
ReadWriteLock fairLock = new ReentrantReadWriteLock(true);

// Usage is identical to the unfair lock
fairLock.readLock().lock();
try {
    // read operations
} finally {
    fairLock.readLock().unlock();
}

// Note: Fair locks have ~10-20% overhead due to queue management
// Use only when you need bounded latency guarantees
```

Writer preference is the safest default for most applications. It provides good read performance (reads only wait when a writer is actively waiting or executing), ensures writes complete promptly, and matches the common pattern of 'read often, write occasionally'. Switch to fair only if you observe read starvation, or to reader preference only in very specialized read-heavy scenarios.
Some read-write lock implementations support lock downgrading: converting an exclusive write lock to a shared read lock without releasing. This is useful when a write operation needs to verify the result immediately:
```java
// Lock downgrade example (Java)
ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

void updateAndVerify(String key, String newValue) {
    rwLock.writeLock().lock();
    try {
        // Perform the write
        data.put(key, newValue);

        // Now I want to verify, but don't need exclusive access
        // Downgrade: acquire read lock while holding write lock
        rwLock.readLock().lock();
    } finally {
        // Release write lock - still holding read lock
        rwLock.writeLock().unlock();
    }

    try {
        // Now holding only read lock - other readers can join
        // Verify the write
        Object written = data.get(key);
        assert written.equals(newValue);
    } finally {
        rwLock.readLock().unlock();
    }
}
```

The reverse operation—lock upgrading (read → write)—is typically not supported because it would cause deadlock if two readers both try to upgrade:
```
Thread 1                    Thread 2
─────────                   ─────────
readLock()                  readLock()
// Now 2 readers active     // Now 2 readers active

// Want to write            // Want to write
writeLock()                 writeLock()
// Waits for Thread 2       // Waits for Thread 1
// to release read lock     // to release read lock

DEADLOCK! Neither can proceed.

Both threads hold read locks and are waiting for the other
to release before they can upgrade to write. Neither will
ever release because they're both blocked waiting.
```

Solution: Use an upgradeable read lock (available in C#):
```csharp
// C# ReaderWriterLockSlim supports upgradeable read locks
ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

void ReadThenMaybeWrite(string key, string newValue)
{
    // Upgradeable read lock: only ONE thread can hold this at a time
    // This prevents the deadlock scenario above
    rwLock.EnterUpgradeableReadLock();
    try
    {
        var existing = data[key];
        if (existing != newValue)
        {
            // Upgrade to write lock
            rwLock.EnterWriteLock();
            try
            {
                data[key] = newValue;
            }
            finally
            {
                rwLock.ExitWriteLock();
            }
        }
    }
    finally
    {
        rwLock.ExitUpgradeableReadLock();
    }
}
```

Upgradeable read locks prevent the upgrade deadlock by allowing only one upgradeable reader at a time. This effectively makes upgrading a serialized operation, which may defeat some of the concurrency benefits. Use upgradeable locks when you often read-then-conditionally-write, but be aware of this limitation.
We've explored the critical fairness dimension of read-write locks. Let's consolidate the key points:

- A preference policy decides who goes next when both readers and writers are waiting.
- Reader preference maximizes read throughput but can starve writers indefinitely, which is usually unacceptable when writes have deadlines.
- Writer preference bounds write latency and is the default in most standard libraries, but sustained write pressure can starve readers.
- Fair (FIFO) locks bound wait times for everyone, batching consecutive readers to preserve concurrency, at the cost of roughly 15-20% peak throughput.
- Lock downgrading (write → read) is safe; upgrading (read → write) risks deadlock unless the lock serializes upgraders (as C#'s upgradeable read lock does).
What's Next:
Now that we understand the mechanics and policies of read-write locks, we'll explore practical use cases and examples. We'll see how read-write locks are applied in production systems: caching layers, configuration services, concurrent data structures, and database engines. We'll also examine common pitfalls and best practices from real-world experience.
You now understand the crucial tradeoffs between reader preference, writer preference, and fair read-write locks. You can analyze your system's requirements and choose an appropriate policy, understanding the starvation risks and performance implications of each choice.