At the center of condition variable functionality lies pthread_cond_wait()—the operation that makes efficient, event-driven thread synchronization possible. This function performs what is arguably the most sophisticated primitive operation in POSIX threading: it atomically releases a mutex and suspends the calling thread, then reacquires the mutex before returning.
This atomicity is not a convenience—it is the foundation of correctness. Without atomic release-and-wait, there would be a window between releasing the mutex and blocking during which a signal could be lost. The thread would wait forever for a signal that already came and went.
pthread_cond_wait() is the mechanism by which threads transform from active CPU consumers into passive waiters, relying on other threads to wake them when conditions change. Understanding this function—its semantics, its guarantees, its subtleties—is essential for writing correct concurrent programs.
This page provides an exhaustive, production-grade examination of pthread_cond_wait(), covering its signature, semantics, atomicity guarantees, spurious wakeup handling, usage patterns, error conditions, and integration with system schedulers.
By completing this page, you will: (1) Master the pthread_cond_wait() function signature and semantics, (2) Understand the critical atomicity of release-and-wait, (3) Learn why spurious wakeups occur and how to handle them, (4) Know the correct while-loop pattern for condition checking, (5) Understand cancellation points and signal handling, and (6) Be able to use pthread_cond_timedwait() for timeout-bounded waits.
#include <pthread.h>
int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
Parameters:
cond: Pointer to a properly initialized condition variable. This is the condition variable on which the calling thread will wait.
mutex: Pointer to a mutex that the calling thread must already hold locked. This mutex is atomically released when the thread enters the wait state and automatically reacquired before the function returns.
Return Value:
0: Success. The thread was blocked, a signal (or spurious wakeup) occurred, and the mutex was reacquired.
EINVAL: Invalid parameters (for example, an uninitialized condition variable or mutex).

When a thread calls pthread_cond_wait(), the following occurs atomically:
The calling thread is added to the condition variable's wait queue.
The mutex is released.

The thread remains blocked until:
Another thread calls pthread_cond_signal() and this thread is selected to wake.
Another thread calls pthread_cond_broadcast(), which wakes all waiters.
A spurious wakeup occurs.

When the thread wakes, before returning:
The mutex is reacquired; the thread may block again while waiting for it.
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
bool data_ready = false;
int shared_data = 0;

// =====================================================
// Conceptual Expansion of pthread_cond_wait()
// (This is what the function does internally)
// =====================================================

/*
 * Conceptually, pthread_cond_wait(&cond, &mutex) behaves like:
 *
 * ATOMIC BEGIN:
 *     add_to_wait_queue(cond, current_thread);
 *     unlock(mutex);
 * ATOMIC END
 *
 * block_until_signaled_or_spurious();
 *
 * lock(mutex);   // May block waiting for mutex
 *
 * return 0;
 *
 * The atomicity of the first block is CRITICAL:
 * - If signal happens after unlock but before block, it's NOT lost
 * - Thread is in wait queue BEFORE mutex is released
 * - Signaler will see thread in queue (or thread already saw signal)
 */

// =====================================================
// Correct Usage Pattern
// =====================================================

void *consumer_thread(void *arg) {
    pthread_mutex_lock(&mutex);
    printf("Consumer: acquired mutex\n");

    // CRITICAL: Always use while, never if
    while (!data_ready) {
        printf("Consumer: data not ready, waiting...\n");

        // This call:
        // 1. Atomically releases mutex and blocks
        // 2. Thread sleeps here until signaled
        // 3. Reacquires mutex before returning
        pthread_cond_wait(&cond, &mutex);

        // After returning:
        // - We hold the mutex again
        // - But we MUST recheck data_ready (spurious wakeup!)
        printf("Consumer: woke up, rechecking condition...\n");
    }

    // Here: mutex is held AND data_ready is true
    printf("Consumer: processing data = %d\n", shared_data);
    int value = shared_data;
    data_ready = false;  // Consume the data

    pthread_mutex_unlock(&mutex);
    printf("Consumer: released mutex\n");
    return (void *)(long)value;
}

void *producer_thread(void *arg) {
    pthread_mutex_lock(&mutex);
    printf("Producer: acquired mutex\n");

    shared_data = 42;
    data_ready = true;

    printf("Producer: data produced, signaling...\n");
    pthread_cond_signal(&cond);  // Wake one waiter

    pthread_mutex_unlock(&mutex);
    printf("Producer: released mutex\n");
    return NULL;
}

A common source of bugs is forgetting that pthread_cond_wait() returns WITH THE MUTEX LOCKED. The function reacquired the mutex for you. If you call pthread_mutex_lock() again after wait returns, you will deadlock (unless the mutex is recursive). Always proceed directly to checking your condition or releasing the mutex.
The atomicity of pthread_cond_wait() is what separates correct condition variable usage from broken synchronization. To understand why, let's examine what would happen without it.
Consider a naive (incorrect) implementation without atomic release-and-wait:
// BROKEN: Non-atomic release and wait
void broken_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex) {
pthread_mutex_unlock(mutex); // Step 1: Release
// DANGER ZONE: Signal could happen here!
block_on_condition(cond); // Step 2: Block
pthread_mutex_lock(mutex); // Step 3: Reacquire
}
If a signal arrives between Step 1 and Step 2:
The signaler finds no thread on the wait queue, so the signal has no effect.
The waiter then blocks in Step 2, waiting for a signal that has already been delivered—and no further signal may ever come.

This is the lost wakeup problem—a signal that should have woken a thread instead disappears because the thread wasn't yet in the waiting state.
The POSIX specification requires atomicity but doesn't mandate how it is implemented. Common approaches:
1. Kernel-Level Atomic Operation
The entire wait sequence is implemented as a single kernel system call:
A futex() system call with the FUTEX_WAIT operation hands the whole check-and-block step to the kernel in one shot.

2. Internal Spinlock + Wait Queue

A short-lived internal lock protects the condition variable's wait queue, so enqueueing the thread and releasing the user's mutex appear as one indivisible step to any signaler.

3. Hardware-Assisted Primitives

Atomic instructions (compare-and-swap, load-linked/store-conditional) coordinate the queue and mutex state transitions without taking a separate lock.
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

// =====================================================
// Demonstration: How Lost Wakeups Would Occur
// (This shows the PROBLEM, not the solution)
// =====================================================

/*
 * Timeline of a lost wakeup (if pthread_cond_wait wasn't atomic):
 *
 * THREAD A (Consumer)        THREAD B (Producer)
 * -------------------        -------------------
 * lock(mutex)
 * check condition: false
 * unlock(mutex)
 *                            lock(mutex)
 *                            set condition = true
 *                            signal(cond)  <-- no one waiting!
 *                            unlock(mutex)
 * block(cond)  <-- FOREVER!
 *
 * The signal was "lost" because Thread A wasn't
 * in the wait queue when Thread B signaled.
 */

// =====================================================
// The Correct Behavior (What Actually Happens)
// =====================================================

/*
 * With atomic pthread_cond_wait:
 *
 * THREAD A (Consumer)                        THREAD B (Producer)
 * -------------------                        -------------------
 * lock(mutex)
 * check condition: false
 * [atomic: add to waitqueue + unlock(mutex)]
 *                                            lock(mutex)  <-- waits for unlock
 * [blocked in wait]
 *                                            set condition = true
 *                                            signal(cond)  <-- A is in queue, will wake
 * [woken up]                                 unlock(mutex)
 * lock(mutex)  <----------------------------
 * check condition: TRUE
 * proceed...
 *
 * No lost wakeup because A was in queue before mutex released.
 */

// =====================================================
// Key Insight: The Signaler Sees Correct State
// =====================================================

/*
 * The atomicity ensures that from the signaler's perspective:
 *
 * Either:
 *   1. Waiter is in queue (will be woken by signal)
 *   2. Waiter hasn't reached wait yet (will see condition is true)
 *
 * There's NEVER a state where:
 *   - Waiter checked condition (saw false)
 *   - Waiter hasn't entered queue yet
 *   - Signaler sets condition true and signals
 *   - Waiter enters queue (misses signal)
 *
 * This is impossible because the waiter is in queue
 * BEFORE the mutex is released.
 */

Think of it this way: the thread 'registers' for a wakeup call before 'going to sleep'. By the time the mutex is released (allowing signalers to run), the thread is guaranteed to be on the wakeup list. The atomicity means registration and sleep happen in one indivisible action.
One of the most important (and often misunderstood) aspects of pthread_cond_wait() is the possibility of spurious wakeups. The POSIX specification explicitly permits pthread_cond_wait() to return even when no thread has called pthread_cond_signal() or pthread_cond_broadcast().
A spurious wakeup occurs when a waiting thread is awakened without a corresponding signal. After such a wakeup:
The call returns 0, exactly as if a signal had been delivered.
The condition the thread was waiting for may still be false.
The thread must recheck the condition before proceeding.

This is not a bug—it is an explicitly permitted behavior that applications must handle.
Allowing spurious wakeups enables significant implementation flexibility:
1. Performance Optimization
Some implementations can use simpler, faster wait mechanisms that occasionally produce extra wakeups. The cost of these rare extra wakeups is far less than the cost of preventing them.
2. Multiprocessor Considerations
On SMP systems, threads might be woken due to cache coherency traffic or scheduling decisions on other CPUs.
3. Signal Handling
When a Unix signal is delivered to a waiting thread, some implementations wake the thread to handle the signal and then return from pthread_cond_wait() rather than re-entering the wait.
4. Implementation Simplification
Making spurious wakeups illegal would require complex tracking to ensure only "legitimate" wakeups return. This complexity isn't worth the minimal benefit.
Some developers dismiss spurious wakeups as 'theoretical'. They are not. On Linux with glibc/NPTL, spurious wakeups can occur when: (1) signals are delivered, (2) the thread is canceled and cancellation is deferred, (3) the futex implementation returns early, or (4) during high-contention scenarios. Code must handle them.
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

void process_work(void);     // provided elsewhere
void process_item(int item); // provided elsewhere

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
bool work_available = false;

// =====================================================
// WRONG: Using 'if' (Vulnerable to Spurious Wakeups)
// =====================================================

void *worker_BROKEN(void *arg) {
    pthread_mutex_lock(&mutex);

    if (!work_available) {  // <-- WRONG: 'if' instead of 'while'
        pthread_cond_wait(&cond, &mutex);
        // Spurious wakeup: work_available might STILL be false!
    }

    // BUG: May execute with work_available == false
    printf("Processing work...\n");
    process_work();  // Crashes or corrupts data

    pthread_mutex_unlock(&mutex);
    return NULL;
}

// =====================================================
// CORRECT: Using 'while' (Handles Spurious Wakeups)
// =====================================================

void *worker_CORRECT(void *arg) {
    pthread_mutex_lock(&mutex);

    while (!work_available) {  // <-- CORRECT: 'while' loop
        pthread_cond_wait(&cond, &mutex);
        // Spurious wakeup? Loop rechecks condition.
        // If still false, we wait again. No problem.
    }

    // SAFE: work_available is guaranteed true here
    printf("Processing work...\n");
    process_work();
    work_available = false;  // Mark work as consumed

    pthread_mutex_unlock(&mutex);
    return NULL;
}

// =====================================================
// Why While is Required: The Invariant
// =====================================================

/*
 * The while loop establishes a critical invariant:
 *
 * INVARIANT: Code after the while loop only executes when
 *            the condition is TRUE.
 *
 * This invariant holds regardless of:
 * - Spurious wakeups
 * - Stolen wakeups (broadcast, then another thread handles it first)
 * - Racing signals
 *
 * The 'if' version only guarantees: "a signal was sent at some point"
 * The 'while' version guarantees: "the condition is currently true"
 */

// =====================================================
// Complex Condition Example
// =====================================================

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t cond;
    int items[100];
    int count;
    bool shutdown;
} queue_t;

void *queue_consumer(void *arg) {
    queue_t *q = (queue_t *)arg;

    pthread_mutex_lock(&q->mutex);

    // Multiple conditions in while: wait while empty AND not shutdown
    while (q->count == 0 && !q->shutdown) {
        pthread_cond_wait(&q->cond, &q->mutex);
    }

    // Post-loop: either count > 0 OR shutdown is true (or both)
    if (q->count > 0) {
        int item = q->items[--q->count];
        pthread_mutex_unlock(&q->mutex);
        process_item(item);
    } else {
        // shutdown is true and count is 0
        pthread_mutex_unlock(&q->mutex);
        printf("Worker exiting due to shutdown\n");
    }

    return NULL;
}

Sometimes waiting indefinitely is unacceptable. Systems need to handle timeouts—detecting when expected events haven't occurred and taking alternative action. POSIX provides pthread_cond_timedwait() for this purpose.
int pthread_cond_timedwait(
pthread_cond_t *cond,
pthread_mutex_t *mutex,
const struct timespec *abstime
);
Parameters:
cond, mutex: Same as pthread_cond_wait().
abstime: Pointer to a struct timespec representing the absolute time at which the wait should time out. This is NOT a relative duration.

Return Values:
0: Woken by a signal (or spurious wakeup).
ETIMEDOUT: The absolute time was reached before a signal arrived.
EINVAL: Invalid parameters (including an invalid timespec).

Critical Note: The timeout is an absolute time, not a duration. You must compute the deadline relative to the appropriate clock.
#include <pthread.h>
#include <time.h>
#include <errno.h>
#include <stdio.h>
#include <stdbool.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
bool event_occurred = false;

// =====================================================
// Basic Timed Wait Pattern
// =====================================================

int wait_for_event_with_timeout(int timeout_seconds) {
    struct timespec deadline;
    int rc;

    // Get current time (using the same clock as cond var)
    // Default is CLOCK_REALTIME unless configured otherwise
    clock_gettime(CLOCK_REALTIME, &deadline);

    // Add timeout to get absolute deadline
    deadline.tv_sec += timeout_seconds;
    // Note: If adding nanoseconds, must handle overflow to seconds

    pthread_mutex_lock(&mutex);

    while (!event_occurred) {
        rc = pthread_cond_timedwait(&cond, &mutex, &deadline);

        if (rc == ETIMEDOUT) {
            // Timeout occurred, condition still false
            pthread_mutex_unlock(&mutex);
            return -1;  // Indicate timeout
        }
        // rc == 0: signaled (or spurious), recheck condition
    }

    // Event occurred
    pthread_mutex_unlock(&mutex);
    return 0;  // Success
}

// =====================================================
// Production Pattern: Monotonic Clock for Robustness
// =====================================================

// First, create condition variable with CLOCK_MONOTONIC
pthread_cond_t robust_cond;

void init_robust_condvar(void) {
    pthread_condattr_t attr;
    pthread_condattr_init(&attr);
    pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
    pthread_cond_init(&robust_cond, &attr);
    pthread_condattr_destroy(&attr);
}

int robust_wait_with_timeout(int timeout_ms) {
    struct timespec deadline;
    int rc;

    // Use CLOCK_MONOTONIC - immune to system time changes
    clock_gettime(CLOCK_MONOTONIC, &deadline);

    // Add milliseconds to deadline
    deadline.tv_sec += timeout_ms / 1000;
    deadline.tv_nsec += (timeout_ms % 1000) * 1000000L;

    // Handle nanosecond overflow
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec += 1;
        deadline.tv_nsec -= 1000000000L;
    }

    pthread_mutex_lock(&mutex);

    while (!event_occurred) {
        rc = pthread_cond_timedwait(&robust_cond, &mutex, &deadline);
        if (rc == ETIMEDOUT) {
            pthread_mutex_unlock(&mutex);
            return -1;
        }
    }

    pthread_mutex_unlock(&mutex);
    return 0;
}

// =====================================================
// Helper: Compute Deadline from Duration
// =====================================================

void deadline_from_now(struct timespec *deadline, clockid_t clock_id,
                       long timeout_ms) {
    clock_gettime(clock_id, deadline);

    long total_ns = deadline->tv_nsec + (timeout_ms % 1000) * 1000000L;
    deadline->tv_sec += timeout_ms / 1000 + total_ns / 1000000000L;
    deadline->tv_nsec = total_ns % 1000000000L;
}

The most common timedwait bug is treating abstime as a duration. It's an ABSOLUTE time point ("wake me at 3:00 PM") not a duration ("wake me in 5 seconds"). Always compute deadline = now + duration. Using a duration value directly will either timeout immediately (if duration < now) or wait for decades.
Using absolute time rather than relative duration is a deliberate design choice:
1. Resumption After Spurious Wakeup
With absolute time, if a spurious wakeup occurs, the thread can simply re-call pthread_cond_timedwait() with the same deadline. The total wait time is preserved.
With relative time, each re-wait would reset the duration, potentially waiting much longer than intended.
2. Deadline-Based Programming
Real systems care about deadlines, not durations. "Complete by 3:00 PM" is more useful than "complete within 1 hour" because the latter changes meaning over time.
3. Composability
When calling multiple timed operations, an absolute deadline can be passed through unchanged. Relative durations would need to be recalculated considering elapsed time.
| Clock | Behavior | When to Use |
|---|---|---|
| CLOCK_REALTIME | Wall-clock time; can jump forward/backward | Only when coordinating with external wall-clock events |
| CLOCK_MONOTONIC | Steady clock; never jumps | Default choice for internal timeouts; NTP-safe |
| CLOCK_MONOTONIC_RAW | Not adjusted; may drift | Rarely needed; avoid unless measuring hardware time |
pthread_cond_wait() is a cancellation point—a function where thread cancellation can take effect if the thread has been canceled and cancellation is enabled.
When a canceled thread is in pthread_cond_wait():
The thread is woken from the wait.
The mutex is reacquired on the canceled thread's behalf.
Cancellation cleanup handlers run, with the mutex held.
The thread then terminates.
The mutex reacquisition before cleanup is critical—cleanup handlers can safely access mutex-protected state.
#include <pthread.h>
#include <stdio.h>
#include <stdbool.h>

void do_critical_work(void);  // provided elsewhere

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
bool data_available = false;

// =====================================================
// Cancellation-Safe Wait Pattern
// =====================================================

void cleanup_handler(void *arg) {
    pthread_mutex_t *mtx = (pthread_mutex_t *)arg;
    printf("Cleanup: releasing mutex\n");
    pthread_mutex_unlock(mtx);
}

void *cancelable_worker(void *arg) {
    pthread_mutex_lock(&mutex);

    // Register cleanup handler to release mutex on cancellation
    pthread_cleanup_push(cleanup_handler, &mutex);

    while (!data_available) {
        // pthread_cond_wait is a cancellation point
        // If canceled while waiting:
        //   1. Mutex is reacquired
        //   2. cleanup_handler runs (unlocks mutex)
        //   3. Thread terminates
        pthread_cond_wait(&cond, &mutex);
    }

    // Process data...
    data_available = false;

    // Pop cleanup handler (0 = don't execute)
    pthread_cleanup_pop(0);

    pthread_mutex_unlock(&mutex);
    return NULL;
}

// =====================================================
// Disable Cancellation for Critical Sections
// =====================================================

void *critical_section_worker(void *arg) {
    int old_state;

    pthread_mutex_lock(&mutex);

    // Disable cancellation during critical work
    pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &old_state);

    while (!data_available) {
        // Still need cleanup handler for robustness
        pthread_cleanup_push(cleanup_handler, &mutex);

        // Temporarily re-enable cancellation for the wait
        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
        pthread_cond_wait(&cond, &mutex);

        // Disable again before checking condition
        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, NULL);

        pthread_cleanup_pop(0);
    }

    // Critical processing (cancellation disabled)
    do_critical_work();

    // Restore original cancellation state
    pthread_setcancelstate(old_state, NULL);

    pthread_mutex_unlock(&mutex);
    return NULL;
}

When a Unix signal (not a pthread_cond_signal() call) is delivered to a thread blocked in pthread_cond_wait():
POSIX Behavior:
The signal handler runs in the context of the interrupted thread.
pthread_cond_wait() may return after the handler completes, even though no condition signal occurred.

Implementation Variation:
Some implementations resume the wait after the handler returns; others return to the caller, which appears as a spurious wakeup. Portable code cannot rely on either behavior.
Unlike read() or write(), pthread_cond_wait() does not automatically restart when interrupted by a signal, regardless of the SA_RESTART flag in the signal handler registration. The thread may return from wait and must recheck the condition.
Notice how both cancellation and signal handling are naturally accommodated by the while-loop pattern. Whether the wait returns due to signal, cancellation attempt, spurious wakeup, or legitimate signal—the loop rechecks the condition. This is why the pattern is universal.
While pthread_cond_wait() rarely fails in practice, robust code must handle potential errors.
POSIX specifies that pthread_cond_wait() may fail with:
EINVAL: The condition variable or mutex was not properly initialized, or different mutexes were used with the same condition variable concurrently.
EPERM: The mutex is not owned by the calling thread (should have been locked before wait).
Note: These errors represent programming bugs, not runtime conditions. Well-formed code should never encounter them.
In addition to the above:
ETIMEDOUT: The absolute time specified has passed without a signal. This is not an error in the traditional sense—it's an expected outcome indicating the wait timed out.
EINVAL: The timespec value is invalid (negative tv_nsec, or tv_nsec ≥ 1 billion nanoseconds).
#include <pthread.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <time.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
bool ready = false;

// =====================================================
// Pattern 1: Defensive Error Checking
// =====================================================

int wait_with_error_check(void) {
    int rc;

    rc = pthread_mutex_lock(&mutex);
    if (rc != 0) {
        fprintf(stderr, "mutex lock failed: %d\n", rc);
        return -1;
    }

    while (!ready) {
        rc = pthread_cond_wait(&cond, &mutex);
        if (rc != 0) {
            // This should never happen in correct code
            fprintf(stderr, "FATAL: cond_wait failed: %d\n", rc);
            pthread_mutex_unlock(&mutex);
            return -1;
        }
    }

    pthread_mutex_unlock(&mutex);
    return 0;
}

// =====================================================
// Pattern 2: Timedwait with Proper Timeout Handling
// =====================================================

typedef enum {
    WAIT_SUCCESS,
    WAIT_TIMEOUT,
    WAIT_ERROR
} wait_result_t;

wait_result_t wait_with_timeout(int timeout_ms) {
    struct timespec deadline;
    int rc;

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += timeout_ms / 1000;
    deadline.tv_nsec += (long)(timeout_ms % 1000) * 1000000;
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec++;
        deadline.tv_nsec -= 1000000000L;
    }

    pthread_mutex_lock(&mutex);

    while (!ready) {
        rc = pthread_cond_timedwait(&cond, &mutex, &deadline);

        if (rc == ETIMEDOUT) {
            pthread_mutex_unlock(&mutex);
            return WAIT_TIMEOUT;
        }
        if (rc != 0) {
            // Unexpected error
            pthread_mutex_unlock(&mutex);
            return WAIT_ERROR;
        }
        // rc == 0: signaled, loop will recheck condition
    }

    pthread_mutex_unlock(&mutex);
    return WAIT_SUCCESS;
}

// =====================================================
// Pattern 3: Debug Builds with Extensive Checking
// =====================================================

#ifdef DEBUG

void debug_cond_wait(pthread_cond_t *c, pthread_mutex_t *m,
                     const char *file, int line) {
    // Verify mutex is locked (implementation-specific check)
    int rc = pthread_mutex_trylock(m);
    if (rc == 0) {
        // We got the lock, meaning it wasn't held!
        fprintf(stderr, "ERROR at %s:%d: wait called without lock\n",
                file, line);
        pthread_mutex_unlock(m);  // Release the accidental lock
        abort();
    }
    // EBUSY means mutex is already locked (correct)
    // EDEADLK means we own it (correct, for error-checking mutexes)

    rc = pthread_cond_wait(c, m);
    if (rc != 0) {
        fprintf(stderr, "ERROR at %s:%d: cond_wait returned %d\n",
                file, line, rc);
    }
}

#define COND_WAIT(c, m) debug_cond_wait(c, m, __FILE__, __LINE__)

#else
#define COND_WAIT(c, m) pthread_cond_wait(c, m)
#endif

Understanding the performance characteristics of pthread_cond_wait() helps design efficient concurrent systems.
When a thread calls pthread_cond_wait() and blocks:
The library enters the kernel (on Linux, via a futex system call).
The thread is placed on a wait queue and marked not runnable.
The scheduler context-switches to another thread.

Total blocking wait cost: roughly 10,000-20,000 CPU cycles (highly system-dependent).

When the thread is later woken:
The kernel moves the thread back onto a run queue.
The scheduler eventually dispatches it, restoring its context (and repopulating caches and TLB entries).
The thread reacquires the mutex, possibly blocking again.

Total wakeup cost: roughly 5,000-20,000 CPU cycles (highly system-dependent).
For very short waits (sub-microsecond), spinning can be more efficient than blocking because it avoids context switch overhead. For longer waits, blocking is essential to avoid wasting CPU.
Modern synchronization primitives (like Linux futexes) use adaptive strategies: spin briefly first, then block if the wait continues.
| Strategy | Short Wait (~100ns) | Medium Wait (~10μs) | Long Wait (~10ms) |
|---|---|---|---|
| Busy-wait (spin) | ~100 cycles ✓ | ~10,000 cycles | ~10,000,000 cycles ✗ |
| pthread_cond_wait (block) | ~20,000 cycles | ~20,000 cycles | ~20,000 cycles ✓ |
| Adaptive (spin then block) | ~100 cycles ✓ | ~1,000 cycles ✓ | ~21,000 cycles ≈ |
1. Batch Signals
If producing multiple items, consider signaling once after all are added:
pthread_mutex_lock(&mutex);
for (int i = 0; i < 100; i++) {
add_item_to_queue(items[i]);
}
pthread_cond_broadcast(&cond); // One broadcast instead of 100 signals
pthread_mutex_unlock(&mutex);
2. Avoid Unnecessary Signals
Signal only when the condition actually changes:
pthread_mutex_lock(&mutex);
bool was_empty = (count == 0);
add_item(&buffer, item);
if (was_empty) {
pthread_cond_signal(&cond); // Signal only if transitioning from empty
}
pthread_mutex_unlock(&mutex);
3. Prefer Signal Over Broadcast
Broadcast wakes ALL waiters; most will recheck and go back to sleep. Signal wakes only one—usually sufficient and more efficient.
The best optimization is avoiding contention altogether. Use per-thread work queues, sharding, or lock-free algorithms to reduce the number of threads competing for the same condition variable. A condition variable that's never waited on has zero cost.
We have explored pthread_cond_wait() comprehensively—the function that enables efficient, event-driven thread synchronization. Its atomic release-and-wait semantics solve the lost wakeup problem, while proper usage patterns handle spurious wakeups and cancellation.
You now understand pthread_cond_wait() at a deep, production-ready level. The next page explores pthread_cond_signal()—the operation that wakes a single waiting thread, complementing wait to enable efficient producer-consumer coordination.