Both Readers Preference and Writers Preference solutions suffer from starvation—they merely choose which class of threads to sacrifice. For many systems, this tradeoff is unacceptable. A web server cannot tell users "your request may never complete because writers are busy." A database cannot tell applications "your transaction may wait indefinitely."
The Fair Solution, also known as the Third Readers-Writers Problem or Starve-Free Solution, eliminates starvation for both readers and writers. It guarantees that every thread eventually accesses the resource, regardless of workload conditions. No thread can be bypassed indefinitely by later arrivals.
By the end of this page, you will understand: the formal definition of fairness in synchronization; how FIFO ordering prevents starvation; multiple implementation strategies for fair reader-writer locks; the performance cost of fairness guarantees; hybrid approaches that balance fairness and throughput; and when to choose fair solutions over preference-based ones.
Before implementing a fair solution, we need to precisely define what "fair" means in the context of reader-writer synchronization.
Bounded Waiting Property:
A solution provides bounded waiting if there exists a bound on the number of times other threads can bypass a waiting thread. More formally:
For any thread T that requests access at time t, there exists a finite number N such that at most N other threads can complete their access before T gains access.
This is distinct from simple progress (eventually getting access) because it guarantees a bound on waiting, not just eventual success.
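Stated symbolically, under the reading that the bound may depend on the system state when the request is made but not on later arrivals (this formalization is added here for precision; the notation is ours, not part of the original problem statement):

```latex
% Bounded waiting: for every thread T requesting access at time t, there is a
% finite bound N, determined by the state at time t, on how many other threads
% may complete an access before T enters.
\forall T,\, t:\ \exists\, N = N(\text{state at } t) < \infty \ \text{ such that }
\#\{\, T' \neq T \;:\; T' \text{ enters the critical section in } (t,\ \text{entry}(T)) \,\} \;\le\; N.

% FIFO special case: N is exactly the number of threads already queued at time t.
N_{\text{FIFO}}(t) \;=\; \#\{\, T' \;:\; T' \text{ requested before } t \text{ and has not yet entered} \,\}.
```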
FIFO Fairness:
The strongest fairness guarantee is First-In, First-Out (FIFO) ordering:
Threads gain access in the order they requested it.
With FIFO fairness, the bound N equals the number of threads ahead in the queue when the request was made. This is the most intuitive notion of fairness and what we'll primarily implement.
| Fairness Level | Guarantee | Starvation? | Implementation Complexity |
|---|---|---|---|
| None (Reader Pref) | Readers never wait behind queued writers | Writers can starve | Low |
| None (Writer Pref) | Writers never wait behind newly arriving readers | Readers can starve | Medium |
| Weak (Progress) | Every thread eventually proceeds | No, but waits are unbounded | Medium |
| Bounded Waiting | Limited bypass count | Never | Higher |
| FIFO (Strict) | Order-preserving access | Never | Highest |
Strict fairness limits concurrency. In FIFO ordering, if a writer is next in queue after active readers, no additional readers can start—even though they could safely proceed. This reduces reader concurrency. Fairness has a cost; we're trading throughput for predictability.
The conceptually simplest fair solution uses an explicit queue. All arriving threads (readers and writers) join a single queue. Access is granted in queue order, with the constraint that consecutive readers can proceed together.
Algorithm Sketch:

1. An arriving thread (reader or writer) appends an entry for itself to the queue and blocks on a private semaphore if it cannot proceed immediately.
2. When the resource becomes available, the thread at the head of the queue is woken.
3. If the head is a reader, all consecutive readers at the front of the queue are woken together and read concurrently.
4. If the head is a writer, it is woken alone and gets exclusive access.
5. On exit, the departing writer (or the last reader of a batch) wakes the next entry.
The Challenge:
Implementing this with semaphores is non-trivial because:

- Each waiting thread needs its own semaphore so it can be woken individually.
- Queue operations, the reader count, and the wakeup logic must all stay consistent under one mutex, or races appear.
- Waking "all consecutive readers" means walking the queue while other threads may be enqueueing.
- Queue entries must be allocated and freed without use-after-free races.
```c
/*
 * Fair Solution: Conceptual Pseudocode
 *
 * This shows the logical structure; actual implementation needs
 * careful semaphore placement to avoid races.
 */

typedef struct {
    ThreadType type;       // READER or WRITER
    semaphore wait_sem;    // Individual semaphore for this thread
} QueueEntry;

Queue waiting_queue;
semaphore queue_mutex = 1;
int active_readers = 0;
bool writer_active = false;

void reader_entry() {
    wait(queue_mutex);
    if (waiting_queue.empty() && !writer_active) {
        // No one waiting, no writer active: proceed immediately
        active_readers++;
        signal(queue_mutex);
    } else {
        // Must wait in queue
        QueueEntry* me = create_entry(READER);
        waiting_queue.enqueue(me);
        signal(queue_mutex);
        wait(me->wait_sem);    // Block until signaled
        // When we wake, we're allowed to proceed
        active_readers++;      // This needs protection too!
    }
}

void reader_exit() {
    wait(queue_mutex);
    active_readers--;
    if (active_readers == 0 && !waiting_queue.empty()) {
        // Last reader: wake next in queue
        wake_next_entry();
    }
    signal(queue_mutex);
}

void wake_next_entry() {
    QueueEntry* next = waiting_queue.peek();
    if (next->type == WRITER) {
        waiting_queue.dequeue();
        signal(next->wait_sem);
    } else {
        // Wake all consecutive readers
        while (!waiting_queue.empty() &&
               waiting_queue.peek()->type == READER) {
            QueueEntry* reader = waiting_queue.dequeue();
            signal(reader->wait_sem);
        }
    }
}
```

The conceptual algorithm above has subtle issues: active_readers modifications need synchronization with the wakeup logic; the queue access patterns create races; and memory management for queue entries is tricky. Let's develop a more rigorous implementation.
A practical fair solution uses a turnstile or service order semaphore. This approach doesn't explicitly maintain a queue but achieves FIFO ordering through semaphore semantics.
The Turnstile Technique:
A turnstile is a semaphore that each thread must pass through sequentially. Like a physical turnstile at a subway entrance, it ensures threads proceed one at a time in arrival order.
```c
/*
 * Fair Readers-Writers Solution (Turnstile/Service Order)
 *
 * Key insight: All threads (readers AND writers) pass through
 * the same turnstile, ensuring FIFO ordering at that point.
 *
 * Once through the turnstile:
 *  - Readers can proceed concurrently (if no writer active)
 *  - Writers get exclusive access
 */

#include <semaphore.h>

// Reader coordination
int read_count = 0;
semaphore read_mutex = 1;       // Protects read_count

// Resource access
semaphore resource = 1;         // Actual resource access

// Fairness mechanism
semaphore service_queue = 1;    // The "turnstile" - FIFO ordering

/*
 * FAIR READER PROTOCOL
 *
 * The service_queue ensures readers don't bypass waiting writers.
 * Each reader must "take a ticket" by passing through service_queue.
 */
void fair_reader(int id) {
    while (true) {
        // ========== ENTRY SECTION ==========
        /*
         * Step 1: Enter the service queue (turnstile)
         * This blocks if a writer is holding service_queue
         * FIFO semantics ensure we proceed in arrival order
         */
        wait(service_queue);     // Enter turnstile (may block behind writer)

        wait(read_mutex);        // Step 2: Lock read_count
        read_count++;            // Step 3: Register reader
        if (read_count == 1) {
            wait(resource);      // Step 4: First reader blocks writers
        }
        signal(read_mutex);      // Step 5: Unlock read_count

        signal(service_queue);   // Step 6: Exit turnstile - let next through

        // ========== CRITICAL SECTION ==========
        read_data(id);

        // ========== EXIT SECTION ==========
        wait(read_mutex);        // Step 7: Lock read_count
        read_count--;            // Step 8: Deregister reader
        if (read_count == 0) {
            signal(resource);    // Step 9: Last reader allows writers
        }
        signal(read_mutex);      // Step 10: Unlock read_count
    }
}

/*
 * FAIR WRITER PROTOCOL
 *
 * Writer acquires service_queue and HOLDS IT until done.
 * This blocks all new arrivals (readers OR writers).
 */
void fair_writer(int id) {
    while (true) {
        // ========== ENTRY SECTION ==========
        /*
         * Step 1: Acquire service_queue
         * This blocks new readers from starting
         * HOLDS service_queue until writing complete
         */
        wait(service_queue);     // Acquire turnstile (block new arrivals)
        wait(resource);          // Step 2: Wait for resource (active readers)

        // ========== CRITICAL SECTION ==========
        write_data(id);

        // ========== EXIT SECTION ==========
        signal(resource);        // Step 3: Release resource
        signal(service_queue);   // Step 4: Release turnstile
    }
}
```

How Fairness is Achieved:
Readers don't bypass writers: When a writer holds service_queue, arriving readers block at wait(service_queue). They cannot start until the writer finishes.
Writers don't bypass readers: When readers hold service_queue (briefly, one at a time), a writer waits its turn. The FIFO nature of semaphore wait queues preserves order.
Readers can still be concurrent: Between acquiring and releasing service_queue, a reader only does O(1) work. Multiple readers can pass through quickly and all be in the critical section.
The Key Insight:
Readers acquire and immediately release service_queue, while writers hold service_queue throughout their critical section. This asymmetry allows:

- Consecutive readers to pass through the turnstile in quick succession and read concurrently.
- A waiting writer to block every later arrival (reader or writer) until its write completes.
- FIFO ordering to be enforced at a single point (the turnstile) without maintaining an explicit queue.
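The protocol above is written against an abstract semaphore type. A minimal compilable rendering with POSIX unnamed semaphores looks like the following; the function names rw_init, fair_reader_once, and fair_writer_once are illustrative, read_data/write_data are assumed to be defined elsewhere, and since sem_t makes no FIFO-wakeup promise (discussed in the correctness section below), treat this as a structural sketch rather than a strict fairness guarantee.

```c
/*
 * Sketch: the turnstile protocol rendered with POSIX unnamed semaphores.
 * read_data()/write_data() are assumed to exist elsewhere; sem_t does not
 * guarantee FIFO wakeups, so this shows the structure, not strict ordering.
 */
#include <semaphore.h>

void read_data(int id);        /* assumed to exist elsewhere */
void write_data(int id);       /* assumed to exist elsewhere */

static int   read_count = 0;
static sem_t read_mutex;       /* protects read_count              */
static sem_t resource;         /* exclusive access to the data     */
static sem_t service_queue;    /* the turnstile                    */

void rw_init(void) {
    sem_init(&read_mutex, 0, 1);
    sem_init(&resource, 0, 1);
    sem_init(&service_queue, 0, 1);
}

void fair_reader_once(int id) {
    sem_wait(&service_queue);      /* pass through the turnstile        */
    sem_wait(&read_mutex);
    if (++read_count == 1)
        sem_wait(&resource);       /* first reader locks out writers    */
    sem_post(&read_mutex);
    sem_post(&service_queue);      /* release the turnstile immediately */

    read_data(id);                 /* shared critical section           */

    sem_wait(&read_mutex);
    if (--read_count == 0)
        sem_post(&resource);       /* last reader lets writers in       */
    sem_post(&read_mutex);
}

void fair_writer_once(int id) {
    sem_wait(&service_queue);      /* hold the turnstile while writing  */
    sem_wait(&resource);

    write_data(id);                /* exclusive critical section        */

    sem_post(&resource);
    sem_post(&service_queue);
}
```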
Let's trace through scenarios to see how the fair solution prevents starvation.
Scenario: Writer Arrives During Reader Activity
```text
Initial: read_count=0, service_queue=1, resource=1

Time 0: R1 arrives
  R1: wait(service_queue) ✓            [service_queue=0]
  R1: wait(read_mutex) ✓
  R1: read_count++ → 1
  R1: read_count==1: wait(resource) ✓  [resource=0]
  R1: signal(read_mutex)
  R1: signal(service_queue)            [service_queue=1]  ← R1 releases turnstile!
  R1: ** READING **

Time 1: W1 arrives (while R1 reading)
  W1: wait(service_queue) ✓            [service_queue=0]  ← W1 holds turnstile
  W1: wait(resource) BLOCKS            [resource=0, held by readers]

Time 2: R2 arrives (while R1 reading, W1 waiting)
  R2: wait(service_queue) BLOCKS       [service_queue=0]
      ← R2 CANNOT start! (Behind W1 in turnstile queue)

Time 3: R1 finishes
  R1: wait(read_mutex) ✓
  R1: read_count-- → 0
  R1: read_count==0: signal(resource)  [resource=1]
  R1: signal(read_mutex)

Time 3 (continued): W1 unblocks
  W1: wait(resource) ✓                 [resource=0]
  W1: ** WRITING **

Time 4: W1 finishes
  W1: signal(resource)                 [resource=1]
  W1: signal(service_queue)            [service_queue=1]

Time 4 (continued): R2 unblocks
  R2: wait(service_queue) ✓            [service_queue=0]
  R2: proceeds to read...
```

Key Observation: R2 arrived BEFORE W1 finished, but R2 was correctly blocked behind W1. No starvation for W1. R2 proceeds after W1, maintaining FIFO order.
Scenario: Multiple Threads, FIFO Ordering
```text
Arrival order: R1, R2, W1, R3, R4, W2

Execution order will be:
1. R1 starts immediately
2. R2 passes through turnstile quickly, joins R1 in reading
3. W1 acquires turnstile, waits for R1, R2 to finish
4. R3 queues behind W1 in turnstile
5. R4 queues behind R3 in turnstile
6. W2 queues behind R4 in turnstile
7. R1, R2 finish → W1 writes
8. W1 finishes → R3 passes through, R4 follows quickly
9. R3, R4 reading concurrently
10. R3, R4 finish → W2 writes

Actual concurrent access:
- R1, R2 concurrent (both readers, no intervening writer)
- W1 exclusive
- R3, R4 concurrent (both readers, no intervening writer)
- W2 exclusive

Notice: R3 and R4 are "batched" even though W2 is behind them
```

Consecutive readers in the queue still execute concurrently. The fairness mechanism doesn't force strict one-at-a-time access—it only ensures writers aren't bypassed. This preserves much of the reader concurrency benefit.
We now prove that the turnstile solution provides the desired fairness guarantees.
Theorem 1: No Writer Starvation

Statement: Every waiting writer eventually gains access.

Proof:

1. A waiting writer W blocks on at most two semaphores: first service_queue (at most) and then resource.
2. service_queue has FIFO wait semantics (by assumption; see the note below).
3. Each time the current holder releases service_queue, the next waiter acquires it, so W reaches the front after finitely many releases.
4. Once W holds service_queue, no new readers can start.
5. The readers already active finish in finite time, and the last one signals resource.
6. W then acquires resource and proceeds.

Theorem 2: No Reader Starvation

Statement: Every waiting reader eventually gains access.
Proof:

1. A waiting reader R blocks on service_queue (at most); read_mutex is only ever held for O(1) bookkeeping.
2. service_queue has FIFO wait semantics.
3. Every thread ahead of R holds service_queue for a finite time (bounded critical sections for writers, O(1) bookkeeping for readers).
4. Each time service_queue is released, the next waiter (reader or writer) acquires it.
5. Therefore R acquires service_queue after finitely many releases and proceeds.

Theorem 3: Bounded Waiting

Statement: There exists a bound on how many threads can bypass a waiting thread.
Proof:

1. Suppose thread T calls wait(service_queue) at time t.
2. Let N be the number of threads already waiting on (or holding) service_queue at time t.
3. By FIFO semantics, T acquires service_queue after at most N other threads.
4. Once T holds service_queue, it either:
   - is a writer, which then waits only for the read_count readers (those currently active) to finish, or
   - is a reader, which joins the active readers immediately (after incrementing read_count).
5. Therefore at most N plus the current read_count threads can complete their access before T, a finite bound.

These proofs assume FIFO semaphore wait queues. POSIX semaphores don't guarantee FIFO ordering—threads may be woken in any order. For strict fairness guarantees, use a lock with a documented ordering guarantee (such as Java's fair ReentrantReadWriteLock, discussed below) or build an explicit FIFO queue yourself, for example with a ticket scheme; one such sketch follows.
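To make the FIFO assumption concrete, here is a minimal sketch of an explicitly ordered fair reader-writer lock built on a pthread mutex, a condition variable, and a ticket counter. The names (fair_rwlock, fair_rdlock, and so on) are illustrative, not a standard API. Because each thread waits for its own ticket number and re-checks it after every wakeup, correctness does not depend on the wakeup order of the condition variable.

```c
/* Sketch: explicit-FIFO fair reader-writer lock using a ticket counter
 * and a condition variable. Illustrative names, not a standard API. */
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    unsigned long next_ticket;   /* next ticket to hand out        */
    unsigned long now_serving;   /* lowest ticket not yet admitted */
    int  active_readers;
    bool writer_active;
} fair_rwlock;

void fair_rwlock_init(fair_rwlock *l) {
    pthread_mutex_init(&l->m, NULL);
    pthread_cond_init(&l->cv, NULL);
    l->next_ticket = l->now_serving = 0;
    l->active_readers = 0;
    l->writer_active = false;
}

void fair_rdlock(fair_rwlock *l) {
    pthread_mutex_lock(&l->m);
    unsigned long ticket = l->next_ticket++;
    /* Wait until it is our turn and no writer is inside. */
    while (ticket != l->now_serving || l->writer_active)
        pthread_cond_wait(&l->cv, &l->m);
    l->active_readers++;
    l->now_serving++;                   /* admit the next ticket holder   */
    pthread_cond_broadcast(&l->cv);     /* consecutive readers batch up   */
    pthread_mutex_unlock(&l->m);
}

void fair_rdunlock(fair_rwlock *l) {
    pthread_mutex_lock(&l->m);
    if (--l->active_readers == 0)
        pthread_cond_broadcast(&l->cv); /* a waiting writer may proceed   */
    pthread_mutex_unlock(&l->m);
}

void fair_wrlock(fair_rwlock *l) {
    pthread_mutex_lock(&l->m);
    unsigned long ticket = l->next_ticket++;
    /* A writer additionally waits for active readers to drain. */
    while (ticket != l->now_serving || l->active_readers > 0 || l->writer_active)
        pthread_cond_wait(&l->cv, &l->m);
    l->writer_active = true;
    l->now_serving++;                   /* later arrivals queue behind us */
    pthread_mutex_unlock(&l->m);
}

void fair_wrunlock(fair_rwlock *l) {
    pthread_mutex_lock(&l->m);
    l->writer_active = false;
    pthread_cond_broadcast(&l->cv);
    pthread_mutex_unlock(&l->m);
}
```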
The turnstile approach isn't the only way to achieve fairness. Let's examine alternative strategies.
Approach 2: Ticket-Based Ordering
Inspired by ticket locks, each thread takes a number and waits for its turn:
```c
/*
 * Ticket-Based Fair Readers-Writers Lock
 *
 * Each thread takes a ticket; access granted when ticket is called.
 * Consecutive readers with adjacent tickets can proceed together.
 */

atomic_uint next_ticket = 0;
atomic_uint now_serving = 0;
int active_readers = 0;
uint current_batch_end = 0;   // Last ticket in current reader batch

void ticket_reader(int id) {
    // Take a ticket
    uint my_ticket = atomic_fetch_add(&next_ticket, 1);

    // Wait for our turn (or to join current reader batch)
    while (true) {
        uint serving = atomic_load(&now_serving);

        // Can proceed if: our ticket is being served,
        // OR we're in a reader batch and current is a reader batch
        if (my_ticket == serving) {
            // We're being served
            break;
        }
        if (my_ticket <= current_batch_end && active_readers > 0) {
            // We're part of current reader batch
            break;
        }

        // Not our turn yet - yield
        sched_yield();   // Or use condition variable for efficiency
    }

    // Entering - extend batch if consecutive readers behind us
    active_readers++;
    extend_batch_if_readers_follow(my_ticket);

    // === READING ===
    read_data(id);

    // Exiting
    active_readers--;
    if (active_readers == 0) {
        // Last in batch: advance to next ticket
        atomic_store(&now_serving, current_batch_end + 1);
        current_batch_end = 0;
    }
}
```

Approach 3: Phase-Based Fairness
This approach alternates between "reader phases" and "writer phases":
```c
/*
 * Phase-Based Fair Readers-Writers Lock
 *
 * System alternates: all waiting readers, then all waiting writers,
 * then all waiting readers, etc.
 *
 * Simpler than pure FIFO but still provides bounded waiting.
 */

enum Phase { READER_PHASE, WRITER_PHASE };

Phase current_phase = READER_PHASE;
int readers_in_phase = 0;
int writers_in_phase = 0;
int waiting_readers = 0;
int waiting_writers = 0;
semaphore mutex = 1;
semaphore reader_gate = 1;
semaphore writer_gate = 0;

void phase_reader(int id) {
    wait(mutex);
    if (current_phase == WRITER_PHASE || writers_in_phase > 0) {
        // Writer phase active - wait
        waiting_readers++;
        signal(mutex);
        wait(reader_gate);
        wait(mutex);
        waiting_readers--;
    }
    readers_in_phase++;
    signal(mutex);

    // === READING ===
    read_data(id);

    wait(mutex);
    readers_in_phase--;
    if (readers_in_phase == 0 && waiting_writers > 0) {
        switch_to_writer_phase();
    }
    signal(mutex);
}

void switch_to_writer_phase() {
    current_phase = WRITER_PHASE;
    // Wake one writer
    signal(writer_gate);
}

void switch_to_reader_phase() {
    current_phase = READER_PHASE;
    // Wake ALL waiting readers
    for (int i = 0; i < waiting_readers; i++) {
        signal(reader_gate);
    }
}
```

| Approach | Fairness Level | Complexity | Reader Concurrency | Overhead |
|---|---|---|---|---|
| Turnstile | FIFO | Low-Medium | Consecutive readers batch | Low |
| Ticket-Based | Strict FIFO | Higher | Full batching | Medium (spinning) |
| Phase-Based | Bounded (phase-level) | Medium | Full phase batching | Medium |
| Explicit Queue | Strict FIFO | Highest | Full batching | Higher (memory) |
Fair solutions trade some throughput for starvation freedom. Let's quantify this tradeoff.
Overhead Sources:
Serialization Point: All threads pass through service_queue (turnstile approach). This serialization adds latency.
Reduced Batching: In preference solutions, new readers can join ongoing reader sessions freely. In fair solutions, new readers after a waiting writer must wait.
Wakeup Overhead: Signaling and context-switching between phases costs CPU cycles.
Throughput Comparison:
Consider workloads with varying reader/writer mixes and per-access processing times R_time and W_time. The figures below are illustrative relative throughputs, normalized to Readers Preference on a read-heavy workload:
| Scenario | Readers Pref | Writers Pref | Fair (Turnstile) |
|---|---|---|---|
| Read-heavy (95/5) | 1.0x (baseline) | 0.8x | 0.85x |
| Balanced (70/30) | 0.9x | 0.85x | 0.82x |
| Write-heavy (30/70) | Variable (starvation) | 0.9x | 0.85x |
| Bursty writers | Variable | 0.95x | 0.9x |
| Uniform random | 0.95x | 0.87x | 0.83x |
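As a rough sanity check on why the fair turnstile preserves most reader concurrency in read-heavy mixes, consider an illustrative model: arrivals are independent, and each arrival is a reader with probability p. This is an assumption made for the back-of-envelope calculation below, not a property of any measured system.

```latex
% Expected number of consecutive readers admitted between two writers,
% assuming i.i.d. arrivals with reader probability p:
E[\text{readers per batch}] \;=\; \frac{p}{1-p}
% For p = 0.90 this is 9 readers per writer; for p = 0.95 it is 19.
% Even under strict turnstile ordering, large reader batches still form.
```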
Latency Characteristics:
| Metric | Readers Pref | Writers Pref | Fair |
|---|---|---|---|
| Avg Reader Latency | Lowest | Medium | Medium |
| Max Reader Latency | Low | Unbounded | Bounded |
| Avg Writer Latency | Medium-High | Low | Medium |
| Max Writer Latency | Unbounded | Low | Bounded |
| Tail Latency (p99) | Variable | Variable | Predictable |
The Fairness Tax:
Fair solutions typically see:

- Somewhat lower peak throughput than the best preference-based policy for a given workload (roughly 10-18% in the illustrative figures above)
- Slightly higher average latency for the previously favored class
- Bounded, predictable worst-case and tail latency for both readers and writers
For many systems, the predictability is worth the throughput cost. Unbounded latency is often worse than consistently slightly elevated latency.
SLA-bound systems (99th percentile latency requirements), user-facing services, and real-time systems often prefer fair locks despite lower peak throughput. The absence of catastrophic outliers is worth more than marginally higher average performance.
Pure fairness may be too expensive for some systems. Hybrid approaches provide tunable tradeoffs.
Approach 1: Fairness with Batching Threshold
Allow some reader batching before enforcing fairness:
```c
/*
 * Hybrid: Fair with Reader Batching Threshold
 *
 * Allow up to MAX_BATCH_SIZE readers to bypass waiting writers,
 * then enforce fairness.
 */

#define MAX_BATCH_SIZE 10

int batch_count = 0;
semaphore service_queue = 1;
// ... other variables as in fair solution

void hybrid_reader(int id) {
    wait(read_mutex);
    if (batch_count < MAX_BATCH_SIZE && read_count > 0) {
        // Join existing batch without acquiring turnstile.
        // resource is already held by the batch (read_count > 0),
        // so no wait(resource) is needed on this path.
        batch_count++;
        read_count++;
        signal(read_mutex);
    } else {
        // Normal fair path
        signal(read_mutex);
        wait(service_queue);
        wait(read_mutex);
        read_count++;
        batch_count = 1;   // Start new batch
        if (read_count == 1) wait(resource);
        signal(read_mutex);
        signal(service_queue);
    }

    // ... read and exit
}
```

Approach 2: Adaptive Fairness
Switch between policies based on observed starvation:
```c
/*
 * Adaptive Fairness: Switch policies based on starvation detection
 *
 * Normally use reader-preference for throughput.
 * If a writer waits too long, temporarily switch to fair mode.
 */

#define STARVATION_THRESHOLD_MS 100

atomic_uint64_t oldest_waiting_writer_time = 0;
bool fair_mode = false;

void* starvation_monitor(void* arg) {
    while (true) {
        uint64_t oldest = atomic_load(&oldest_waiting_writer_time);
        if (oldest > 0) {
            uint64_t wait_time = current_time_ms() - oldest;
            if (wait_time > STARVATION_THRESHOLD_MS && !fair_mode) {
                fair_mode = true;
                log("Switching to fair mode - writer starvation detected");
            }
        } else if (fair_mode) {
            // No waiting writers, can return to preference mode
            fair_mode = false;
            log("Returning to reader-preference mode");
        }
        usleep(10000);   // Check every 10ms
    }
}

void adaptive_reader(int id) {
    if (fair_mode) {
        // Use fair protocol
        fair_reader_entry();
    } else {
        // Use reader-preference protocol
        reader_preference_entry();
    }

    read_data(id);

    // Exit (same for both modes)
    reader_exit();
}
```

Production systems use various fair reader-writer lock implementations. Let's examine some notable examples.
Java's ReentrantReadWriteLock (fair mode):
```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

/*
 * Java's ReentrantReadWriteLock supports a fairness parameter.
 * When fair=true, threads acquire in approximately arrival order.
 */

ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock(true);  // fair=true
ReentrantReadWriteLock.ReadLock readLock = rwLock.readLock();
ReentrantReadWriteLock.WriteLock writeLock = rwLock.writeLock();

void reader() {
    readLock.lock();
    try {
        // Read data
    } finally {
        readLock.unlock();
    }
}

void writer() {
    writeLock.lock();
    try {
        // Write data
    } finally {
        writeLock.unlock();
    }
}

/*
 * Fair mode properties:
 *  - Uses a FIFO wait queue (AbstractQueuedSynchronizer)
 *  - Readers check if a writer thread is ahead of them
 *  - Writers wait for their turn in queue order
 *  - Supports lock downgrade (write → read) but not upgrade
 */
```

Linux Kernel's rwsem (fairness in modern kernels):
Linux kernel read-write semaphores have evolved toward fairness. Modern implementations use:

- A wait list of blocked readers and writers, serviced roughly in arrival order
- Optimistic spinning, letting a writer spin briefly while the current owner is running on another CPU instead of sleeping immediately
- A hand-off mechanism that grants the lock to the waiter at the head of the list once it has waited too long, preventing indefinite starvation
Go's sync.RWMutex:
Go's RWMutex provides writer preference with practical fairness:

- Once a writer calls Lock, subsequent RLock calls block until that writer has acquired and released the lock, so a stream of readers cannot starve it
- Readers that already hold the lock are allowed to finish before the writer proceeds
- As a consequence, recursive read locking is disallowed: a goroutine that re-acquires a read lock while a writer is waiting can deadlock
| Platform | Lock Type | Fairness | Notes |
|---|---|---|---|
| Java | ReentrantReadWriteLock | Optional FIFO | Configurable fair/nonfair |
| Go | sync.RWMutex | Writer priority + bounds | No starvation in practice |
| Linux Kernel | rwsem | Writer preference + spinning | Evolved over versions |
| POSIX | pthread_rwlock_t | Implementation-defined | glibc defaults to reader preference; check your platform |
| C++ std | shared_mutex | Unspecified | Varies by implementation |
| Rust | RwLock | Unspecified | OS-dependent behavior |
Fairness guarantees vary by platform and even by version. Always verify your specific platform's behavior. If strict fairness is required, explicitly use fair mode (Java) or implement your own guarantee on top of basic primitives.
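For example, on glibc you can ask the platform rwlock to prefer writers so they cannot be starved by a stream of readers. The attribute below is glibc-specific and non-portable, and it prevents writer starvation rather than providing strict FIFO fairness, so treat it as an illustration of "verify your platform" rather than a general solution.

```c
/*
 * Sketch (glibc-specific, non-portable): requesting writer-preferring
 * behavior from pthread_rwlock_t. Prevents writer starvation; does NOT
 * provide FIFO fairness.
 */
#define _GNU_SOURCE
#include <pthread.h>

pthread_rwlock_t lock;

int init_writer_preferring_lock(void) {
    pthread_rwlockattr_t attr;
    pthread_rwlockattr_init(&attr);
    pthread_rwlockattr_setkind_np(&attr,
        PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
    int rc = pthread_rwlock_init(&lock, &attr);
    pthread_rwlockattr_destroy(&attr);
    return rc;
}
```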
We have thoroughly examined fair solutions to the Readers-Writers Problem. Let's consolidate the key concepts:

- Bounded waiting and FIFO ordering are the formal fairness properties that rule out starvation for both readers and writers.
- The turnstile (service queue) technique achieves fairness with one extra semaphore: readers pass through it briefly, while writers hold it for their entire critical section.
- Consecutive readers still batch through the turnstile, so much of the reader concurrency benefit is preserved.
- The correctness proofs depend on FIFO wait queues, which basic POSIX semaphores do not guarantee.
- Ticket-based, phase-based, and explicit-queue designs offer alternative fairness/complexity tradeoffs.
- Fairness costs some throughput (the "fairness tax") but buys bounded, predictable latency.
- Hybrid and adaptive schemes tune the balance, and production locks (Java, Go, Linux rwsem) each make their own choice.
What's Next:
We've now examined the three classic variants of the Readers-Writers Problem: readers preference, writers preference, and fair solutions. The final page synthesizes these approaches, presenting complete Semaphore Solutions with side-by-side comparisons, implementation guidelines, and best practices for choosing and deploying reader-writer locks in production systems.
You now understand fair solutions to the Readers-Writers Problem: their formal properties, implementation strategies, performance characteristics, and real-world applications. This completes the theoretical foundation; the next page focuses on practical implementation patterns.