Not all concurrent access is equal. Consider a popular web page's cache: millions of users read the content simultaneously, while a single editorial update might occur once per hour. The producer-consumer model we studied treats all access uniformly, but this scenario reveals a crucial asymmetry:
Reading doesn't modify data. Multiple readers can safely access the same data concurrently without any risk of corruption. Forcing them to wait for exclusive access wastes parallelism.
Writing modifies data. A writer must have exclusive access—no other readers or writers may be present, or they'll see inconsistent, partially-updated state.
This asymmetry motivates the Readers-Writers Problem, introduced by P.J. Courtois, F. Heymans, and D.L. Parnas in 1971. It's deceptively simple to state, yet its variations expose fundamental tradeoffs in concurrency design that remain relevant in every database, cache, and file system built today.
By the end of this page, you will understand all major variations of the readers-writers problem: first-readers (reader preference), first-writers (writer preference), and fair (FIFO) solutions. You'll analyze starvation conditions, implement each variation, understand the tradeoffs, and recognize readers-writers patterns in real-world systems from databases to read-write locks.
The readers-writers problem concerns access to a shared resource (typically a data structure, file, or database) by two types of processes:
Readers: Processes that only read the shared resource. Multiple readers may access simultaneously.
Writers: Processes that modify the shared resource. A writer requires exclusive access—no other readers or writers.
A correct solution must satisfy:
Reader Concurrency: Multiple readers may hold the lock simultaneously.
Writer Exclusivity: A writer holds the lock exclusively—no concurrent readers or writers.
No Corruption: Readers never see partially-written data.
These constraints are straightforward, but any solution must also address starvation—the indefinite postponement of a process that wants to proceed.
Different solutions prioritize different goals, leading to three classical variations:
| Variation | Priority | Reader Starvation? | Writer Starvation? | Typical Use Case |
|---|---|---|---|---|
| First Readers (Reader Preference) | Readers | No | Yes (under heavy read load) | Read-heavy caches |
| First Writers (Writer Preference) | Writers | Yes (under heavy write load) | No | Write-critical databases |
| Fair (No Preference) | FIFO Order | No | No | General purpose RW locks |
There is no readers-writers solution that simultaneously maximizes reader concurrency, guarantees writer timeliness, and ensures fairness. Understanding the tradeoffs allows you to choose the right solution for your specific workload.
The first readers solution prioritizes readers: as long as any reader is active, new readers may join immediately, even if writers are waiting. This maximizes read concurrency but risks writer starvation.
The core idea is to track the number of active readers. This requires coordinating two things:

- Exclusive access for writing, enforced by a semaphore (write_lock)
- Safe updates to the shared reader count, protected by a mutex (mutex)
```c
// First Readers Solution (Reader Preference)
// Writers may starve under continuous read load

#include <semaphore.h>
#include <stdbool.h>

sem_t mutex;       // Protects reader_count
sem_t write_lock;  // Exclusive access for writing
int reader_count = 0;

// Placeholder workload functions, defined elsewhere
void read_data(void);
void process_data(void);
void write_data(void);
void prepare_next_write(void);

void init() {
    sem_init(&mutex, 0, 1);
    sem_init(&write_lock, 0, 1);
}

void reader(void) {
    while (true) {
        // Entry section for readers
        sem_wait(&mutex);
        reader_count++;
        if (reader_count == 1) {
            // First reader locks out writers
            sem_wait(&write_lock);
        }
        sem_post(&mutex);

        // CRITICAL SECTION: Read the shared resource
        read_data();

        // Exit section for readers
        sem_wait(&mutex);
        reader_count--;
        if (reader_count == 0) {
            // Last reader releases for waiting writers
            sem_post(&write_lock);
        }
        sem_post(&mutex);

        // Remainder section
        process_data();
    }
}

void writer(void) {
    while (true) {
        // Entry section for writers
        sem_wait(&write_lock);  // May wait indefinitely if readers keep coming!

        // CRITICAL SECTION: Write to the shared resource
        write_data();

        // Exit section for writers
        sem_post(&write_lock);

        // Remainder section
        prepare_next_write();
    }
}
```

Consider this timeline with continuous read requests:
| Time | Event | reader_count | write_lock | Writer Status |
|---|---|---|---|---|
| T0 | Reader 1 arrives | 1 | Held by R1 | |
| T1 | Reader 2 arrives | 2 | Held | |
| T2 | Writer arrives | 2 | Held | Blocked |
| T3 | Reader 1 leaves | 1 | Held | Blocked |
| T4 | Reader 3 arrives | 2 | Held | Blocked |
| T5 | Reader 2 leaves | 1 | Held | Blocked |
| T6 | Reader 4 arrives | 2 | Held | Still blocked |
| ... | Readers keep coming | ≥1 | Held | STARVED |
The writer at T2 will never proceed if readers arrive faster than they complete. The reader count never drops to zero, so write_lock is never released for the writer.
This solution is appropriate when: (1) Reads vastly outnumber writes, (2) Read latency is critical and write latency is tolerable, (3) Writes are rare enough that starvation probability is acceptably low. Examples: configuration caches read millions of times but updated once a day, static content served from memory.
The first writers solution flips the priority: once a writer signals interest, no new readers may start. This prevents writer starvation but can starve readers when writers are frequent.
We introduce an additional mechanism: a read_lock semaphore that the first waiting writer acquires, blocking new readers from entering, plus a read_attempt turnstile that serializes reader arrivals so that readers queue behind the waiting writer.
```c
// First Writers Solution (Writer Preference)
// Readers may starve under continuous write load

#include <semaphore.h>
#include <stdbool.h>

sem_t mutex1;        // Protects reader_count
sem_t mutex2;        // Protects writer_count
sem_t read_lock;     // Blocks new readers when writers are waiting
sem_t write_lock;    // Exclusive access for writing
sem_t read_attempt;  // Ordering: ensures one reader attempt at a time

int reader_count = 0;
int writer_count = 0;

// Placeholder workload functions, defined elsewhere
void read_data(void);
void process_data(void);
void write_data(void);
void prepare_next_write(void);

void init() {
    sem_init(&mutex1, 0, 1);
    sem_init(&mutex2, 0, 1);
    sem_init(&read_lock, 0, 1);
    sem_init(&write_lock, 0, 1);
    sem_init(&read_attempt, 0, 1);
}

void reader(void) {
    while (true) {
        // Entry section for readers
        sem_wait(&read_attempt);  // Serialize reader arrivals
        sem_wait(&read_lock);     // May block if a writer is waiting/active
        sem_wait(&mutex1);
        reader_count++;
        if (reader_count == 1) {
            sem_wait(&write_lock);  // First reader blocks writers
        }
        sem_post(&mutex1);
        sem_post(&read_lock);     // Allow other readers or next in queue
        sem_post(&read_attempt);

        // CRITICAL SECTION: Read the shared resource
        read_data();

        // Exit section for readers
        sem_wait(&mutex1);
        reader_count--;
        if (reader_count == 0) {
            sem_post(&write_lock);  // Last reader releases writers
        }
        sem_post(&mutex1);

        // Remainder section
        process_data();
    }
}

void writer(void) {
    while (true) {
        // Entry section for writers
        sem_wait(&mutex2);
        writer_count++;
        if (writer_count == 1) {
            sem_wait(&read_lock);  // First writer blocks new readers
        }
        sem_post(&mutex2);
        sem_wait(&write_lock);     // Wait for exclusive access

        // CRITICAL SECTION: Write to the shared resource
        write_data();

        // Exit section for writers
        sem_post(&write_lock);
        sem_wait(&mutex2);
        writer_count--;
        if (writer_count == 0) {
            sem_post(&read_lock);  // Last writer allows readers again
        }
        sem_post(&mutex2);

        // Remainder section
        prepare_next_write();
    }
}
```

Now consider continuous write requests:
| Time | Event | writer_count | read_lock | Reader Status |
|---|---|---|---|---|
| T0 | Writer 1 arrives | 1 | Held by W | |
| T1 | Reader arrives | 1 | Held | Blocked |
| T2 | Writer 2 arrives | 2 | Held | Blocked |
| T3 | Writer 1 completes | 1 | Held | Blocked |
| T4 | Writer 2 writes | 1 | Held | Blocked |
| T5 | Writer 3 arrives | 2 | Held | Blocked |
| T6 | Writer 2 completes | 1 | Held | Still blocked |
| ... | Writers keep coming | ≥1 | Held | STARVED |
The reader at T1 never proceeds because writer_count never drops to zero. Each new writer increments the count before any previous writer decrements it.
This solution is appropriate when: (1) Write freshness is critical—readers must see the latest data, (2) Writes are time-sensitive (e.g., financial transactions), (3) Briefly delaying reads is acceptable. Examples: real-time bidding systems, order books, consensus logs.
The fair solution prevents starvation for both readers and writers by enforcing FIFO ordering. When a writer arrives, it queues behind all pending operations and blocks new readers from jumping ahead. This sacrifices some read concurrency for guaranteed progress.
The solution uses a turnstile (also called a serviceQueue)—a semaphore that serializes all arrivals:
```c
// Fair Readers-Writers Solution (Starvation-Free)
// Neither readers nor writers starve; FIFO ordering

#include <semaphore.h>
#include <stdbool.h>

sem_t mutex;       // Protects reader_count
sem_t write_lock;  // Exclusive access for writing
sem_t turnstile;   // Enforces FIFO ordering for ALL arrivals

int reader_count = 0;

// Placeholder workload functions, defined elsewhere
void read_data(void);
void process_data(void);
void write_data(void);
void prepare_next_write(void);

void init() {
    sem_init(&mutex, 0, 1);
    sem_init(&write_lock, 0, 1);
    sem_init(&turnstile, 0, 1);
}

void reader(void) {
    while (true) {
        // Entry section for readers
        sem_wait(&turnstile);  // Take a ticket in arrival order
        sem_wait(&mutex);
        reader_count++;
        if (reader_count == 1) {
            sem_wait(&write_lock);  // First reader locks out writers
        }
        sem_post(&mutex);
        sem_post(&turnstile);  // Release turnstile for next arrival

        // CRITICAL SECTION: Read the shared resource
        read_data();

        // Exit section for readers
        sem_wait(&mutex);
        reader_count--;
        if (reader_count == 0) {
            sem_post(&write_lock);  // Last reader releases writers
        }
        sem_post(&mutex);

        // Remainder section
        process_data();
    }
}

void writer(void) {
    while (true) {
        // Entry section for writers
        sem_wait(&turnstile);   // Take a ticket in arrival order
        sem_wait(&write_lock);  // Get exclusive access
        // Note: the writer holds the turnstile until it gets write_lock!
        // This blocks all later arrivals (readers AND writers)
        sem_post(&turnstile);   // Now release the turnstile

        // CRITICAL SECTION: Write to the shared resource
        write_data();

        // Exit section for writers
        sem_post(&write_lock);

        // Remainder section
        prepare_next_write();
    }
}
```

The turnstile creates a strict arrival order. Consider this scenario:
| Time | Event | Turnstile Queue | Action |
|---|---|---|---|
| T0 | R1 arrives | | R1 takes the turnstile, starts reading |
| T1 | R2 arrives | | R2 takes the turnstile (R1 released it), starts reading |
| T2 | W1 arrives | W1 | W1 takes the turnstile, holds it while waiting for write_lock |
| T3 | R3 arrives | R3→W1 | R3 blocked on the turnstile behind W1 |
| T4 | R4 arrives | R4→R3→W1 | R4 blocked on the turnstile |
| T5 | R1 exits | R4→R3→W1 | reader_count = 1 |
| T6 | R2 exits | R4→R3→W1 | reader_count = 0, write_lock released |
| T7 | W1 proceeds | R4→R3 | W1 gets write_lock, releases the turnstile, writes |
| T8 | R3 proceeds | R4 | R3 takes the turnstile, starts reading |
| T9 | R4 proceeds | | R4 takes the turnstile, starts reading |
Writer W1 cannot be overtaken by R3 or R4—they queue behind it. Readers R1 and R2 complete (they started before W1), then W1 writes, then R3 and R4 read.
The fair solution reduces read concurrency under write load. While a writer holds the turnstile, no new readers may start, even if there's room for more concurrent reads. This is the price of preventing starvation. Most real-world read-write lock implementations use this or similar approaches, preferring fairness over maximum throughput.
Modern programming languages provide read-write lock primitives that implement the readers-writers pattern. Understanding their semantics helps you use them correctly.
POSIX provides a standard read-write lock interface:
```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

typedef struct {
    int data;
    pthread_rwlock_t rwlock;
} shared_resource_t;

shared_resource_t resource = {0, PTHREAD_RWLOCK_INITIALIZER};

void* reader_thread(void* arg) {
    int id = *(int*)arg;
    while (1) {
        // Acquire read lock - blocks only if a writer holds the lock
        pthread_rwlock_rdlock(&resource.rwlock);

        // Multiple readers execute here concurrently
        printf("Reader %d: value = %d\n", id, resource.data);

        pthread_rwlock_unlock(&resource.rwlock);
        usleep(100000);  // 100ms between reads
    }
    return NULL;
}

void* writer_thread(void* arg) {
    int id = *(int*)arg;
    while (1) {
        // Acquire write lock - exclusive access
        pthread_rwlock_wrlock(&resource.rwlock);

        // Only one writer, no readers during this section
        resource.data++;
        printf("Writer %d: updated value to %d\n", id, resource.data);

        pthread_rwlock_unlock(&resource.rwlock);
        usleep(500000);  // 500ms between writes
    }
    return NULL;
}
```

Java provides a feature-rich read-write lock with configurable fairness:
```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Function;

public class SharedCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock(true); // true = fair
    private final Lock readLock = rwLock.readLock();
    private final Lock writeLock = rwLock.writeLock();

    // Multiple readers can call this concurrently
    public V get(K key) {
        readLock.lock();
        try {
            return cache.get(key);
        } finally {
            readLock.unlock();
        }
    }

    // Only one writer at a time; no concurrent readers
    public void put(K key, V value) {
        writeLock.lock();
        try {
            cache.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }

    // Read-then-write pattern: lock upgrade not directly supported!
    // Must release read, acquire write (check-then-act race possible)
    public V computeIfAbsent(K key, Function<K, V> mappingFunction) {
        readLock.lock();
        try {
            V value = cache.get(key);
            if (value != null) return value;
        } finally {
            readLock.unlock();
        }
        // Gap here! Another thread could compute the same key
        writeLock.lock();
        try {
            // Must re-check under write lock
            V value = cache.get(key);
            if (value != null) return value;
            value = mappingFunction.apply(key);
            cache.put(key, value);
            return value;
        } finally {
            writeLock.unlock();
        }
    }
}
```

Most read-write locks do not support atomic upgrade from read to write lock. Releasing the read lock creates a gap where another thread may modify the data. You must re-check conditions after acquiring the write lock, as shown in computeIfAbsent above.
Rust's ownership system makes incorrect rwlock usage a compile-time error:
```rust
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let data = Arc::new(RwLock::new(vec![1, 2, 3]));

    // Multiple readers - read() returns an RwLockReadGuard
    let reader_handles: Vec<_> = (0..5).map(|i| {
        let data = Arc::clone(&data); // Shared ownership per thread
        thread::spawn(move || {
            let guard = data.read().unwrap(); // Acquire read lock
            println!("Reader {}: {:?}", i, *guard);
            // Lock automatically released when guard goes out of scope
        })
    }).collect();

    // Single writer - write() returns an RwLockWriteGuard
    let writer_data = Arc::clone(&data);
    let writer_handle = thread::spawn(move || {
        let mut guard = writer_data.write().unwrap(); // Acquire write lock
        guard.push(4); // Exclusive access guaranteed
        println!("Writer: added element");
    });

    for h in reader_handles { h.join().unwrap(); }
    writer_handle.join().unwrap();

    // The type system prevents:
    // - Forgetting to unlock (RAII guards)
    // - Using data without the lock (access only through a guard)
    // - Data races (the borrow checker ensures exclusive mutable access)
}
```

Real-world readers-writers implementations involve additional complexities beyond the basic models.
RW locks aren't always faster than mutexes! They involve more overhead:
| Operation | Mutex | RW Lock (read) | RW Lock (write) |
|---|---|---|---|
| Uncontended acquire | ~20ns | ~30-50ns | ~30-50ns |
| Reader count maintenance | N/A | Required | Required |
| Cache coherency traffic | Minimal | More (shared counter) | More |
Rule of Thumb: use RW locks only when reads heavily outnumber writes and read critical sections are long enough to amortize the extra bookkeeping. For short critical sections with moderate write rates, a simple mutex may outperform an RW lock.
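To make the overhead concrete, here is a minimal single-threaded timing sketch. The class name `LockOverheadSketch`, the iteration count, and the `sink` accumulator are ours, and the absolute numbers it prints vary by JVM and hardware; the point is only that even an uncontended read-lock acquisition does more work than a plain mutex.

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockOverheadSketch {
    static long counter = 0;
    static long sink = 0;  // prevents the read loop from being optimized away

    public static void main(String[] args) {
        ReentrantLock mutex = new ReentrantLock();
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        final int iters = 1_000_000;

        // Uncontended mutex: lock, increment, unlock
        long t0 = System.nanoTime();
        for (int i = 0; i < iters; i++) {
            mutex.lock();
            try { counter++; } finally { mutex.unlock(); }
        }
        long mutexNs = System.nanoTime() - t0;

        // Uncontended read lock: lock, read, unlock
        t0 = System.nanoTime();
        for (int i = 0; i < iters; i++) {
            rw.readLock().lock();
            try { sink += counter; } finally { rw.readLock().unlock(); }
        }
        long rwReadNs = System.nanoTime() - t0;

        System.out.printf("mutex:   ~%d ns/op%n", mutexNs / iters);
        System.out.printf("rw read: ~%d ns/op%n", rwReadNs / iters);
    }
}
```

Run it a few times and with different iteration counts before drawing conclusions; JIT warmup dominates short runs.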
To reduce writer overhead and improve throughput, batch multiple writes:
```java
// Instead of: lock-write-unlock for each item
public void updateItems(List<Item> items) {
    writeLock.lock();
    try {
        for (Item item : items) {
            cache.put(item.key(), item);
        }
    } finally {
        writeLock.unlock();
    }
}
```
This reduces the number of lock acquisitions and maximizes the throughput benefit of exclusive access.
For extreme read-heavy workloads, the Linux kernel uses Read-Copy-Update (RCU): writers create a new copy of the data, publish it with an atomic pointer swap, and free the old copy only after all pre-existing readers have finished, so readers proceed without taking any lock at all.
RCU provides the best possible read performance but requires careful memory management and is typically kernel-level infrastructure.
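The core RCU idea, publishing a new version atomically while in-flight readers keep using the old one, can be sketched in user space with an atomic reference to an immutable snapshot. The `CopyOnWriteSketch` class below is a hypothetical illustration (garbage collection stands in for RCU's deferred free), not kernel RCU:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class CopyOnWriteSketch {
    // Readers dereference the current snapshot with a single atomic load;
    // they never block and never see a partially-updated list.
    private final AtomicReference<List<Integer>> snapshot =
        new AtomicReference<>(Collections.emptyList());

    public List<Integer> read() {
        return snapshot.get(); // wait-free: no lock, no retry
    }

    // Writers copy the current version, modify the copy, then publish it
    // atomically; a CAS failure means another writer won the race, so retry.
    public void append(int value) {
        while (true) {
            List<Integer> current = snapshot.get();
            List<Integer> updated = new ArrayList<>(current);
            updated.add(value);
            updated = Collections.unmodifiableList(updated);
            if (snapshot.compareAndSet(current, updated)) return; // published
        }
    }
}
```

Each write copies the whole structure, so this only pays off when reads vastly outnumber writes, the same regime where RCU shines.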
Some implementations support downgrade from write to read lock (unlike upgrade). This is safe: the thread already has exclusive access, so it can atomically release write exclusivity while retaining read access. Java's ReentrantReadWriteLock supports this.
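A minimal sketch of the downgrade pattern with `ReentrantReadWriteLock`; the `cacheValid` flag and `recompute` helper are illustrative stand-ins for whatever state the cache actually protects:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeSketch {
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private int data = 0;
    private boolean cacheValid = false;

    public int processCachedData() {
        rwl.readLock().lock();
        if (!cacheValid) {
            // Must release the read lock before acquiring write (no upgrade)
            rwl.readLock().unlock();
            rwl.writeLock().lock();
            try {
                if (!cacheValid) {          // re-check under write lock
                    data = recompute();
                    cacheValid = true;
                }
                // Downgrade: take the read lock while still holding write
                rwl.readLock().lock();
            } finally {
                rwl.writeLock().unlock();   // still holding the read lock
            }
        }
        try {
            return data;                    // read under read lock
        } finally {
            rwl.readLock().unlock();
        }
    }

    private int recompute() { return 42; }  // placeholder computation
}
```

Because the read lock is acquired before the write lock is released, no other writer can sneak in between the write and the subsequent read.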
The readers-writers pattern appears throughout systems software. Recognizing it helps you apply appropriate solutions.
Modern databases often avoid traditional RW locks using Multi-Version Concurrency Control (MVCC): each write creates a new version of a row, and each reader sees a consistent snapshot of the versions committed when its transaction began, so readers never block writers and writers never block readers.
This is conceptually similar to RCU: readers see old data while writers prepare new versions. PostgreSQL, MySQL (InnoDB), and Oracle all use MVCC for their default isolation levels.
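A toy sketch of the MVCC idea: versions keyed by a logical commit timestamp, with each reader pinning a snapshot timestamp. The `MvccSketch` class and its single global clock are our simplification; real engines add transaction-scoped visibility rules, write-conflict detection, and garbage collection of old versions:

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

public class MvccSketch {
    private final AtomicLong clock = new AtomicLong(0);
    // key -> (commit timestamp -> value); old versions stay readable
    private final Map<String, NavigableMap<Long, String>> versions =
        new ConcurrentHashMap<>();

    // Writer: install a new version at a fresh commit timestamp.
    public void put(String key, String value) {
        long commitTs = clock.incrementAndGet();
        versions.computeIfAbsent(key, k -> new ConcurrentSkipListMap<>())
                .put(commitTs, value);
    }

    // Reader: fix a snapshot timestamp once, at "transaction start".
    public long beginSnapshot() {
        return clock.get();
    }

    // Visibility rule: the newest version committed at or before the snapshot.
    public String get(String key, long snapshotTs) {
        NavigableMap<Long, String> vs = versions.get(key);
        if (vs == null) return null;
        Map.Entry<Long, String> e = vs.floorEntry(snapshotTs);
        return e == null ? null : e.getValue();
    }
}
```

A reader holding an old snapshot keeps seeing the old value even after a concurrent write commits, which is exactly the reader/writer decoupling the text describes.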
The Linux kernel provides several RW lock variants:
| Lock Type | Use Case | Behavior |
|---|---|---|
| `rwlock_t` | General purpose | Spinning lock, fair |
| `rw_semaphore` | May sleep | Sleeps if it cannot acquire |
| `seqlock` | Very short reads | Readers retry if a write occurred |
| RCU | Read-mostly | Zero-cost reads, deferred free |
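The seqlock idea can be sketched in user space: a writer bumps a sequence counter to an odd value before writing and to an even value after, and a reader retries whenever the counter is odd or changes during its read. This `SeqlockSketch` class is our illustration, assuming a single writer (or external mutual exclusion among writers):

```java
import java.util.concurrent.atomic.AtomicLong;

public class SeqlockSketch {
    private final AtomicLong seq = new AtomicLong(0);
    private volatile long x, y;  // the protected pair of values

    // Writer: odd sequence = write in progress, even = data consistent.
    // Assumes only one writer at a time.
    public void write(long a, long b) {
        seq.incrementAndGet();  // now odd: readers will retry
        x = a;
        y = b;
        seq.incrementAndGet();  // now even: safe to read
    }

    // Reader: lock-free; retries until it observes an even, unchanged
    // sequence number around its reads, guaranteeing a consistent pair.
    public long[] read() {
        while (true) {
            long s = seq.get();
            if ((s & 1) != 0) continue;       // write in progress, retry
            long a = x, b = y;
            if (seq.get() == s) {
                return new long[]{a, b};      // no write intervened
            }
        }
    }
}
```

Readers never block the writer, which is why seqlocks suit very short, read-mostly critical sections; the cost is that a reader may loop under heavy write traffic.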
Whenever you see asymmetric access patterns—many readers coexisting safely but writes requiring isolation—you're looking at a readers-writers problem. Selecting the right solution (and the right variation) depends on your read/write ratio and latency requirements.
Even with well-understood solutions, readers-writers implementations have subtle failure modes.
```java
// DEADLOCK: Two readers trying to upgrade
// Thread 1 and Thread 2 both hold read locks
// Thread 1 waits for Thread 2's read lock to be released so it can upgrade
// Thread 2 waits for Thread 1's read lock
// Neither can proceed!

void problematicUpgrade() {
    readLock.lock();
    try {
        if (needsUpdate()) {
            // WRONG: Still holding read lock!
            writeLock.lock();  // DEADLOCK if another thread is also upgrading
            try {
                update();
            } finally {
                writeLock.unlock();
            }
        }
    } finally {
        readLock.unlock();
    }
}

void safePattern() {
    readLock.lock();
    try {
        if (!needsUpdate()) return;
    } finally {
        readLock.unlock();  // Release first!
    }

    // Gap: must re-check the condition
    writeLock.lock();
    try {
        if (needsUpdate()) {  // Re-check!
            update();
        }
    } finally {
        writeLock.unlock();
    }
}
```

The readers-writers problem reveals the nuances of asymmetric concurrency, where not all accesses are equal.
What's Next:
We've now studied two fundamental synchronization problems: producer-consumer and readers-writers. In the next page, we'll tackle the Dining Philosophers Problem—a scenario that explores the complexities of resource allocation when multiple processes compete for multiple shared resources. This problem illuminates deadlock, livelock, and fairness in multi-resource scenarios.
You now possess comprehensive understanding of the readers-writers problem and its variations. You can analyze starvation behavior, implement each variation using semaphores or monitors, recognize the pattern in real-world systems, and avoid common pitfalls. This knowledge prepares you for designing any system with asymmetric concurrent access patterns.