So far, we've examined condition variables in relative isolation—one condition variable for one waiting need. But real-world synchronization often involves multiple distinct conditions that threads must wait for.
Consider a bounded buffer: producers must wait for the buffer to be not full before adding an item, while consumers must wait for it to be not empty before removing one.
These are fundamentally different conditions. Using a single condition variable forces inefficiencies: when a consumer signals, we might wake a consumer instead of a producer. Using separate condition variables for each condition type enables precise signaling—wake exactly the threads that can make progress.
In this page, we'll explore the design patterns, tradeoffs, and implementation strategies for systems with multiple condition variables. You'll learn when to split conditions, how to coordinate multiple CVs with a single mutex, and how to solve classic problems with elegant multi-CV designs.
By the end of this page, you will understand: when and why to use multiple condition variables; how multiple CVs share a single mutex; the precise signaling pattern; classic synchronization solutions using multiple CVs; and advanced patterns for complex coordination.
Let's understand why a single condition variable often isn't enough.
The bounded buffer with one CV:
```c
// Bounded buffer with SINGLE condition variable
// Shows why this is suboptimal

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;  // One CV for everything
int count = 0;
#define SIZE 10

void producer(Item item) {
    pthread_mutex_lock(&mutex);
    while (count == SIZE) {
        pthread_cond_wait(&cond, &mutex);  // Wait: buffer not full
    }
    add_item(item);
    count++;
    pthread_cond_broadcast(&cond);  // Must broadcast! Why?
    pthread_mutex_unlock(&mutex);
}

void consumer(void) {
    pthread_mutex_lock(&mutex);
    while (count == 0) {
        pthread_cond_wait(&cond, &mutex);  // Wait: buffer not empty
    }
    consume_item();
    count--;
    pthread_cond_broadcast(&cond);  // Must broadcast!
    pthread_mutex_unlock(&mutex);
}

// Problems:
// 1. Must use broadcast (signal might wake wrong thread type)
// 2. When producer broadcasts, all consumers wake up
// 3. All but one consumer immediately wait again
// 4. Thundering herd! N consumers = N context switches for 1 item
```

Why broadcast is required:
With a single CV, suppose producer Thread A is waiting for space ("not full") while consumer Thread B is waiting for items ("not empty"), both queued on the same condition variable. Another consumer removes an item and calls signal: the single wakeup goes to an arbitrary waiter.

If Thread A is woken, it rechecks, finds "not full" now true (an item was just removed), and proceeds. But if Thread B is woken instead, it finds "not empty" still false and waits again—a wasted wakeup, while Thread A, the thread that could actually make progress, stays blocked.

The real issue is that the wrong thread might be woken, so broadcast is required to guarantee that the right thread eventually wakes.
With 100 waiting consumers and 1 waiting producer, a consumer that removes an item must broadcast so the producer is sure to wake. All 100 consumers wake as well, find nothing to consume, and immediately wait again, while the producer (the one thread that actually needed the wakeup) must first compete with all of them for the mutex. This is extremely wasteful.
The solution is elegant: use multiple condition variables that share one mutex.
The key insight: The mutex protects the shared state (the buffer and count). Each condition variable represents a different condition on that state. Same state, different conditions, multiple CVs.
```c
// Bounded buffer with MULTIPLE condition variables
// This is the correct, efficient design

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;  // CV for producers
pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;  // CV for consumers
int count = 0;
#define SIZE 10

void producer(Item item) {
    pthread_mutex_lock(&mutex);
    while (count == SIZE) {
        pthread_cond_wait(&not_full, &mutex);   // Wait on not_full
    }
    add_item(item);
    count++;
    pthread_cond_signal(&not_empty);  // Signal consumers! (no broadcast needed)
    pthread_mutex_unlock(&mutex);
}

void consumer(void) {
    pthread_mutex_lock(&mutex);
    while (count == 0) {
        pthread_cond_wait(&not_empty, &mutex);  // Wait on not_empty
    }
    consume_item();
    count--;
    pthread_cond_signal(&not_full);   // Signal producers! (no broadcast needed)
    pthread_mutex_unlock(&mutex);
}

// Benefits:
// 1. Signal (not broadcast) is correct - only wake matching waiters
// 2. Producer adding item signals not_empty - wakes consumer directly
// 3. Consumer removing item signals not_full - wakes producer directly
// 4. No wasted wakeups, no thundering herd
```

The relationship visualized:
```
            ┌─────────────────────────┐
            │          mutex          │
            │    (protects count,     │
            │      buffer state)      │
            └────────────┬────────────┘
                         │
           ┌─────────────┴─────────────┐
           │                           │
   ┌───────▼───────┐         ┌─────────▼───────┐
   │   not_full    │         │    not_empty    │
   │  (producers   │         │   (consumers    │
   │   wait here)  │         │    wait here)   │
   └───────────────┘         └─────────────────┘
```
Both condition variables are associated with the same mutex. The mutex is passed to wait() for both. But each CV has its own wait queue with threads waiting for that specific condition.
A single mutex can be associated with any number of condition variables. Each CV represents a different condition on the shared state protected by that mutex. This is the fundamental pattern for efficient multi-condition synchronization.
Using multiple CVs enables precise signaling: wake exactly the threads that can make progress. Let's formalize this pattern.
The pattern: each thread waits on the CV for the specific condition it needs, and each thread that changes shared state signals the CV whose condition it just made true.
Identifying distinct conditions:
| Condition | Who Waits? | When Signaled? |
|---|---|---|
| Buffer not full | Producers | After consumer removes item |
| Buffer not empty | Consumers | After producer adds item |
```c
// Rules for precise signaling

// Rule 1: Wait on the CV for YOUR condition
// If you are a producer waiting for space:
pthread_cond_wait(&not_full, &mutex);   // Wait on not_full

// If you are a consumer waiting for items:
pthread_cond_wait(&not_empty, &mutex);  // Wait on not_empty

// Rule 2: Signal the CV whose condition you affect
// When consumer takes an item (now there's space):
pthread_cond_signal(&not_full);   // Producers care about space

// When producer adds an item (now there's data):
pthread_cond_signal(&not_empty);  // Consumers care about data

// Rule 3: Signal, don't broadcast (when conditions are uniform)
// All producers wait for the same thing: space
// All consumers wait for the same thing: items
// So signal() is correct - wake one waiter, any waiter works

// Rule 4: Match signal quantity to resource quantity
// Adding 1 item -> signal once (wake 1 consumer)
// Adding N items -> either signal N times or broadcast
void producer_batch(Item* items, int n) {
    pthread_mutex_lock(&mutex);
    for (int i = 0; i < n; i++) {
        while (count == SIZE) {
            pthread_cond_wait(&not_full, &mutex);
        }
        add_item(items[i]);
        count++;
        pthread_cond_signal(&not_empty);  // One signal per item
    }
    pthread_mutex_unlock(&mutex);
}
```

Let's implement a complete, production-quality bounded buffer using multiple condition variables.
```c
#include <pthread.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t not_full;    // For producers waiting
    pthread_cond_t not_empty;   // For consumers waiting
    void** items;
    int capacity;
    int count;
    int head;
    int tail;
    bool shutdown;              // For graceful shutdown
} BoundedBuffer;

BoundedBuffer* buffer_create(int capacity) {
    BoundedBuffer* b = malloc(sizeof(BoundedBuffer));
    pthread_mutex_init(&b->mutex, NULL);
    pthread_cond_init(&b->not_full, NULL);
    pthread_cond_init(&b->not_empty, NULL);
    b->items = malloc(capacity * sizeof(void*));
    b->capacity = capacity;
    b->count = 0;
    b->head = 0;
    b->tail = 0;
    b->shutdown = false;
    return b;
}

// Put: blocks until space available or shutdown
// Returns: true if item was added, false on shutdown
bool buffer_put(BoundedBuffer* b, void* item) {
    pthread_mutex_lock(&b->mutex);
    while (b->count == b->capacity && !b->shutdown) {
        pthread_cond_wait(&b->not_full, &b->mutex);
    }
    if (b->shutdown) {
        pthread_mutex_unlock(&b->mutex);
        return false;
    }
    b->items[b->tail] = item;
    b->tail = (b->tail + 1) % b->capacity;
    b->count++;
    pthread_cond_signal(&b->not_empty);  // Precise: wake one consumer
    pthread_mutex_unlock(&b->mutex);
    return true;
}

// Get: blocks until item available or shutdown
// Returns: item, or NULL on shutdown with empty buffer
void* buffer_get(BoundedBuffer* b) {
    pthread_mutex_lock(&b->mutex);
    while (b->count == 0 && !b->shutdown) {
        pthread_cond_wait(&b->not_empty, &b->mutex);
    }
    if (b->count == 0 && b->shutdown) {
        pthread_mutex_unlock(&b->mutex);
        return NULL;
    }
    void* item = b->items[b->head];
    b->head = (b->head + 1) % b->capacity;
    b->count--;
    pthread_cond_signal(&b->not_full);   // Precise: wake one producer
    pthread_mutex_unlock(&b->mutex);
    return item;
}

// Shutdown: wake all waiters so they can exit
void buffer_shutdown(BoundedBuffer* b) {
    pthread_mutex_lock(&b->mutex);
    b->shutdown = true;
    pthread_cond_broadcast(&b->not_empty);  // Wake ALL consumers
    pthread_cond_broadcast(&b->not_full);   // Wake ALL producers
    pthread_mutex_unlock(&b->mutex);
}

void buffer_destroy(BoundedBuffer* b) {
    pthread_mutex_destroy(&b->mutex);
    pthread_cond_destroy(&b->not_full);
    pthread_cond_destroy(&b->not_empty);
    free(b->items);
    free(b);
}
```

Notice that buffer_shutdown() uses broadcast on both CVs. This is intentional: ALL waiting threads need to check the shutdown flag. This is the correct use of broadcast—when the state change affects all waiters, not just one.
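To show how the pieces fit together, here is a minimal usage sketch. It assumes the BoundedBuffer code above is in scope; the integer-as-pointer item encoding, thread counts, and loop bounds are illustrative choices, not part of the buffer's API.

```c
// Hypothetical driver: two producers, two consumers, then graceful shutdown.
#include <stdio.h>

void* producer_thread(void* arg) {
    BoundedBuffer* b = (BoundedBuffer*)arg;
    for (long i = 1; i <= 100; i++) {
        if (!buffer_put(b, (void*)i)) {   // false means shutdown was requested
            break;
        }
    }
    return NULL;
}

void* consumer_thread(void* arg) {
    BoundedBuffer* b = (BoundedBuffer*)arg;
    void* item;
    while ((item = buffer_get(b)) != NULL) {  // NULL means shutdown and buffer drained
        printf("consumed %ld\n", (long)item);
    }
    return NULL;
}

int main(void) {
    BoundedBuffer* b = buffer_create(10);
    pthread_t p[2], c[2];
    for (int i = 0; i < 2; i++) pthread_create(&p[i], NULL, producer_thread, b);
    for (int i = 0; i < 2; i++) pthread_create(&c[i], NULL, consumer_thread, b);

    for (int i = 0; i < 2; i++) pthread_join(p[i], NULL);  // let producers finish first
    buffer_shutdown(b);                                    // then wake any blocked consumers
    for (int i = 0; i < 2; i++) pthread_join(c[i], NULL);
    buffer_destroy(b);
    return 0;
}
```

Shutting down only after the producers have joined lets the consumers drain whatever is still buffered before buffer_get() starts returning NULL.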
The readers-writers problem is a perfect case study for multiple condition variables. Different conditions govern reader and writer access: a reader may proceed only when no writer is active (and, for fairness, none is waiting), while a writer may proceed only when no readers and no other writer are active.
Let's implement a fair solution where neither readers nor writers starve.
```c
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t can_read;    // Readers wait here
    pthread_cond_t can_write;   // Writers wait here
    int active_readers;         // Currently reading
    int active_writers;         // Currently writing (0 or 1)
    int waiting_writers;        // Writers waiting
    // For fairness: when writers are waiting, new readers wait
} RWLock;

void rwlock_init(RWLock* rw) {
    pthread_mutex_init(&rw->mutex, NULL);
    pthread_cond_init(&rw->can_read, NULL);
    pthread_cond_init(&rw->can_write, NULL);
    rw->active_readers = 0;
    rw->active_writers = 0;
    rw->waiting_writers = 0;
}

void rwlock_read_lock(RWLock* rw) {
    pthread_mutex_lock(&rw->mutex);
    // Wait if: writer is active OR writers are waiting (fairness)
    while (rw->active_writers > 0 || rw->waiting_writers > 0) {
        pthread_cond_wait(&rw->can_read, &rw->mutex);
    }
    rw->active_readers++;
    pthread_mutex_unlock(&rw->mutex);
}

void rwlock_read_unlock(RWLock* rw) {
    pthread_mutex_lock(&rw->mutex);
    rw->active_readers--;
    // If last reader and writers waiting, wake ONE writer
    if (rw->active_readers == 0 && rw->waiting_writers > 0) {
        pthread_cond_signal(&rw->can_write);
    }
    pthread_mutex_unlock(&rw->mutex);
}

void rwlock_write_lock(RWLock* rw) {
    pthread_mutex_lock(&rw->mutex);
    rw->waiting_writers++;
    // Wait for all readers and writers to finish
    while (rw->active_readers > 0 || rw->active_writers > 0) {
        pthread_cond_wait(&rw->can_write, &rw->mutex);
    }
    rw->waiting_writers--;
    rw->active_writers = 1;
    pthread_mutex_unlock(&rw->mutex);
}

void rwlock_write_unlock(RWLock* rw) {
    pthread_mutex_lock(&rw->mutex);
    rw->active_writers = 0;
    // Priority to waiting writers (maintains fairness)
    if (rw->waiting_writers > 0) {
        pthread_cond_signal(&rw->can_write);      // Wake one writer
    } else {
        pthread_cond_broadcast(&rw->can_read);    // Wake ALL readers
    }
    pthread_mutex_unlock(&rw->mutex);
}
```

Key observations:
- Two CVs for two conditions: readers wait on can_read, writers wait on can_write.
- Signal vs. broadcast used appropriately:
  - can_write: only one writer can proceed at a time, so signal wakes exactly one.
  - can_read: all waiting readers can proceed together, so broadcast wakes them all.
- Fairness through waiting_writers: new readers check waiting_writers > 0 and wait, preventing readers from starving writers.
| Event | Condition | CV Signaled |
|---|---|---|
| Last reader unlocks | If writers waiting | Signal can_write (one) |
| Writer unlocks | If writers waiting | Signal can_write (one) |
| Writer unlocks | If no writers waiting | Broadcast can_read (all) |
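To ground the table, here is a rough usage sketch of the lock. It assumes the RWLock code above is in scope; the shared counter, thread counts, and usleep() pacing are arbitrary illustration choices, not part of the RWLock API.

```c
// Hypothetical usage: several readers scan a shared value, one writer updates it.
#include <stdio.h>
#include <unistd.h>

static RWLock rw;
static int shared_value = 0;   // protected by rw

void* reader(void* arg) {
    (void)arg;
    for (int i = 0; i < 5; i++) {
        rwlock_read_lock(&rw);
        printf("reader sees %d\n", shared_value);  // many readers may be here at once
        rwlock_read_unlock(&rw);
        usleep(1000);
    }
    return NULL;
}

void* writer(void* arg) {
    (void)arg;
    for (int i = 0; i < 5; i++) {
        rwlock_write_lock(&rw);
        shared_value++;                            // exclusive access here
        rwlock_write_unlock(&rw);
        usleep(1000);
    }
    return NULL;
}

int main(void) {
    rwlock_init(&rw);
    pthread_t r[3], w;
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    pthread_join(w, NULL);
    return 0;
}
```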
Sometimes you need even finer control than "one CV per condition type." An advanced pattern uses one condition variable per waiting thread.
When this is useful: each waiter is waiting for a distinct, specific event (for example, the completion of its own request), so waking any other thread would be pointless and a per-thread CV gives perfectly targeted signaling.
Example: Request queue with ordering
```c
#include <pthread.h>
#include <stdlib.h>
#include <stdbool.h>

// Each request has its own condition variable
typedef struct Request {
    pthread_cond_t done_cond;   // Signal when this request is done
    void* data;                 // Request payload
    void* result;
    bool completed;
    struct Request* next;
} Request;

typedef struct {
    pthread_mutex_t mutex;
    Request* head;
    Request* tail;
} RequestQueue;

// Client creates a request and waits for its completion
void* submit_and_wait(RequestQueue* q, void* data) {
    // Create request with its own CV
    Request* req = malloc(sizeof(Request));
    pthread_cond_init(&req->done_cond, NULL);
    req->data = data;
    req->result = NULL;
    req->completed = false;
    req->next = NULL;

    // Enqueue
    pthread_mutex_lock(&q->mutex);
    if (q->tail) {
        q->tail->next = req;
    } else {
        q->head = req;
    }
    q->tail = req;

    // Wait for THIS specific request to complete
    while (!req->completed) {
        pthread_cond_wait(&req->done_cond, &q->mutex);
    }

    void* result = req->result;
    pthread_mutex_unlock(&q->mutex);

    // Cleanup
    pthread_cond_destroy(&req->done_cond);
    free(req);
    return result;
}

// Worker completes a specific request
void complete_request(RequestQueue* q, Request* req, void* result) {
    pthread_mutex_lock(&q->mutex);
    req->result = result;
    req->completed = true;
    pthread_cond_signal(&req->done_cond);  // Wake THIS specific waiter
    pthread_mutex_unlock(&q->mutex);
}
```

This pattern is common in database systems (waiting for specific transaction commits), RPC frameworks (waiting for specific responses), and async I/O (waiting for specific operation completions). The overhead of creating/destroying CVs is offset by perfect signaling precision.
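The listing above covers only the client side. A worker thread would dequeue a pending request, do the work, and call complete_request(). The sketch below is one plausible shape for that loop; the process_request() helper and the retry on an empty queue are assumptions made for illustration, not part of the original API (a real worker would normally block on its own "work available" CV rather than spin).

```c
// Hypothetical worker loop for the request queue above.
extern void* process_request(void* data);   // assumed to exist elsewhere

void* worker_loop(void* arg) {
    RequestQueue* q = (RequestQueue*)arg;
    for (;;) {
        // Dequeue the oldest pending request, if any
        pthread_mutex_lock(&q->mutex);
        Request* req = q->head;
        if (req) {
            q->head = req->next;
            if (q->head == NULL) {
                q->tail = NULL;
            }
        }
        pthread_mutex_unlock(&q->mutex);

        if (req == NULL) {
            continue;   // queue empty; a real worker would wait on a CV instead of spinning
        }

        void* result = process_request(req->data);  // do the actual work
        complete_request(q, req, result);           // wakes exactly the waiting client
    }
    return NULL;
}
```

One lifetime detail matters here: after complete_request() returns, the worker must not touch req again, because the woken client destroys the CV and frees the request as soon as it reacquires the mutex.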
A powerful pattern combines multiple CVs with state change notification: whenever state changes, signal all CVs that might care.
The pattern: centralize the signaling logic in a single helper that runs after every state change, checks each condition, and signals the CV whose condition now holds.
```c
// Complex resource manager with multiple conditions

typedef struct {
    pthread_mutex_t mutex;
    // Multiple conditions on the resource state
    pthread_cond_t resource_available;  // count > 0
    pthread_cond_t all_returned;        // count == total
    pthread_cond_t below_threshold;     // count < threshold
    int count;
    int total;
    int threshold;
} ResourcePool;

void notify_state_change(ResourcePool* p) {
    // After any state change, signal relevant CVs
    if (p->count > 0) {
        pthread_cond_signal(&p->resource_available);
    }
    if (p->count == p->total) {
        pthread_cond_broadcast(&p->all_returned);
    }
    if (p->count < p->threshold) {
        // This might be used for "low watermark" alerts
        pthread_cond_signal(&p->below_threshold);
    }
}

void return_resource(ResourcePool* p) {
    pthread_mutex_lock(&p->mutex);
    p->count++;
    notify_state_change(p);  // Let CVs figure out who cares
    pthread_mutex_unlock(&p->mutex);
}

// Wait for all resources returned (e.g., for shutdown)
void wait_all_returned(ResourcePool* p) {
    pthread_mutex_lock(&p->mutex);
    while (p->count != p->total) {
        pthread_cond_wait(&p->all_returned, &p->mutex);
    }
    pthread_mutex_unlock(&p->mutex);
}
```

Benefits of this pattern:
- Signaling logic is centralized: every update path calls the same helper, so no state change can forget to wake the threads whose condition just became true.
- Waiters stay precise: each thread still blocks on a single-purpose CV for exactly the condition it cares about.

Drawback: You might signal CVs that don't have waiters (minor overhead).
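For completeness, here is what the acquire side might look like under the same assumptions, reusing the ResourcePool fields and the notify_state_change() helper defined above; the function name is illustrative.

```c
// Hypothetical acquire-side counterpart: wait until a resource is available,
// take it, then re-run the shared notification helper.
void acquire_resource(ResourcePool* p) {
    pthread_mutex_lock(&p->mutex);
    while (p->count == 0) {
        pthread_cond_wait(&p->resource_available, &p->mutex);
    }
    p->count--;                  // take one resource
    notify_state_change(p);      // e.g., "count < threshold" may now hold
    pthread_mutex_unlock(&p->mutex);
}
```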
Different languages provide varying levels of support for multiple conditions.
Java with Lock and Condition:
```java
// Java: Multiple Conditions from single Lock

import java.util.concurrent.locks.*;

class BoundedBuffer<E> {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private Object[] items = new Object[100];
    private int count, head, tail;

    public void put(E item) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) {
                notFull.await();
            }
            items[tail] = item;
            tail = (tail + 1) % items.length;
            count++;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await();
            }
            @SuppressWarnings("unchecked")
            E item = (E) items[head];
            head = (head + 1) % items.length;
            count--;
            notFull.signal();
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```

C++ with condition_variable:
```cpp
// C++: Multiple condition_variables, one mutex

#include <mutex>
#include <condition_variable>
#include <queue>

template<typename T>
class BoundedBuffer {
    std::mutex mutex_;
    std::condition_variable not_full_;
    std::condition_variable not_empty_;
    std::queue<T> queue_;
    size_t capacity_;

public:
    explicit BoundedBuffer(size_t capacity) : capacity_(capacity) {}

    void put(T item) {
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [this] { return queue_.size() < capacity_; });
        queue_.push(std::move(item));
        lock.unlock();
        not_empty_.notify_one();
    }

    T take() {
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !queue_.empty(); });
        T item = std::move(queue_.front());
        queue_.pop();
        lock.unlock();
        not_full_.notify_one();
        return item;
    }
};
```

Python threading:
```python
# Python: Multiple Condition objects share underlying lock

import threading

class BoundedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.not_full:  # Acquires self.lock
            while len(self.items) >= self.capacity:
                self.not_full.wait()
            self.items.append(item)
            self.not_empty.notify()

    def take(self):
        with self.not_empty:  # Acquires self.lock
            while len(self.items) == 0:
                self.not_empty.wait()
            item = self.items.pop(0)
            self.not_full.notify()
            return item
```

| Language | Create Multiple CVs? | How to Share Lock |
|---|---|---|
| C (pthreads) | Yes, pthread_cond_t | Pass same mutex to wait() |
| Java | Yes, lock.newCondition() | CVs bound to Lock at creation |
| C++ | Yes, std::condition_variable | Pass same unique_lock to wait() |
| Python | Yes, Condition(lock) | Pass same Lock to Condition() |
| Rust | Yes, Condvar | Use with same Mutex via wait() |
Using multiple condition variables is central to efficient, precise synchronization. Let's consolidate the key points:
Design checklist:
1. List all distinct predicates threads wait for
2. Create one CV for each distinct predicate
3. Associate all CVs with the same mutex (protecting predicate state)
4. Each waiter: wait on the CV for its predicate
5. Each modifier: signal CVs whose predicates are affected
6. Use signal for uniform waiters, broadcast for:
- Different-predicate waiters on same CV (avoid this design)
- State changes affecting all waiters (shutdown)
- Multiple resources becoming available
Module complete:
You've now mastered condition variables—from their fundamental purpose, through wait and signal operations, their essential relationship with mutexes, and finally the power of multiple conditions for sophisticated synchronization.
Congratulations! You now have a comprehensive understanding of condition variables—the essential synchronization primitive for waiting on arbitrary conditions. Combined with mutexes, condition variables enable the elegant, efficient coordination patterns that underpin everything from thread pools to databases. In the next module, we'll explore signal semantics in depth: the difference between Mesa and Hoare semantics and their practical implications.