After studying the Sleeping Barber, Cigarette Smokers, Producer-Consumer, Readers-Writers, and Dining Philosophers problems, you might notice something: the same patterns keep appearing. The specifics differ—barbers vs. philosophers, cigarettes vs. buffers—but the underlying synchronization challenges often reduce to a small set of recurring structures.
Recognizing these patterns is what separates experts from novices. When faced with a new synchronization problem, an expert doesn't start from scratch. They ask: "Is this a producer-consumer? A readers-writers with different constraints? A resource allocation with ordering?" And from pattern recognition, a solution path emerges.
This page distills the essential patterns that recur across synchronization problems. Master these, and you'll have a mental toolkit that applies far beyond the classic problems to real-world systems you encounter.
By the end of this page, you will:
- Identify six fundamental synchronization patterns
- Recognize when a new problem is an instance of a known pattern
- Understand how patterns compose to form more complex solutions
- Apply pattern-based thinking to analyze unfamiliar problems
- Build a vocabulary for discussing synchronization challenges
We'll examine six fundamental patterns that cover the vast majority of synchronization scenarios:
Beyond these fundamental patterns, we'll explore composite patterns that combine primitives:
| Pattern | Semaphore Usage | Purpose | Classic Example |
|---|---|---|---|
| Signaling | Binary, initial 0 | Notify occurrence | Wakeup notification |
| Rendezvous | Two binary semaphores | Mutual waiting | Thread synchronization point |
| Mutex | Binary, initial 1 | Exclusive access | Critical section |
| Multiplex | Counting, initial N | Limit concurrent access | Connection pool |
| Barrier | Counting + mutex | All-before-any | Parallel algorithm phases |
| Turnstile | Binary, initial 0 | Sequential release | Barrier implementation |
The Problem:
Thread A must wait until Thread B has completed some action. This is the most basic coordination: ensuring one event happens before another.
The Pattern:
Use a semaphore initialized to 0:
Key Properties:
```c
/*
 * Signaling Pattern
 *
 * Ensure statement a1 in Thread A executes BEFORE
 * statement b1 in Thread B.
 */

sem_t event_happened;   // sem_init(&event_happened, 0, 0): event hasn't happened yet

void* thread_a(void* arg) {
    do_preparation();

    // Signal that work is done
    sem_post(&event_happened);   // Now event has happened

    continue_work();
    return NULL;
}

void* thread_b(void* arg) {
    do_other_work();

    // Wait for event
    sem_wait(&event_happened);   // Block until thread_a signals

    // Guaranteed: thread_a's preparation is done
    use_preparation_result();
    return NULL;
}

/*
 * Why a semaphore (not just a flag)?
 *
 * The flag approach has a race condition:
 *   Thread B: while (!flag) { }  // Busy wait, or...
 *   Thread B: if (!flag) sleep(); else use_result();
 *   // If A sets flag between check and sleep: lost wakeup!
 *
 * A semaphore atomically checks the condition and goes to sleep,
 * and post is guaranteed to wake a waiter if any exists.
 */
```

Where This Pattern Appears:
Semaphore signals persist — if B signals before A waits, the signal isn't lost. This is fundamentally different from condition variables, where a signal to no waiter is lost. For simple event notification, semaphores are often cleaner.
The Problem:
Two threads must synchronize at a point—neither can proceed until both have arrived. This is mutual signaling: A waits for B, and B waits for A.
The Pattern:
Use two semaphores, one for each direction:
```c
/*
 * Rendezvous Pattern
 *
 * Both threads must reach the rendezvous point
 * before either can proceed.
 */

sem_t a_arrived;   // sem_init(..., 0, 0): A signals arrival, B waits
sem_t b_arrived;   // sem_init(..., 0, 0): B signals arrival, A waits

void* thread_a(void* arg) {
    statement_a1();

    // RENDEZVOUS POINT
    sem_post(&a_arrived);   // Signal: I'm here
    sem_wait(&b_arrived);   // Wait for B

    // Now guaranteed: both a1 and b1 have executed
    statement_a2();
    return NULL;
}

void* thread_b(void* arg) {
    statement_b1();

    // RENDEZVOUS POINT
    sem_post(&b_arrived);   // Signal: I'm here
    sem_wait(&a_arrived);   // Wait for A

    // Now guaranteed: both a1 and b1 have executed
    statement_b2();
    return NULL;
}

/*
 * IMPORTANT: Order matters!
 *
 * CORRECT: signal then wait (allows either to arrive first)
 *   A: post(a_arrived); wait(b_arrived);
 *   B: post(b_arrived); wait(a_arrived);
 *
 * DEADLOCK: wait then signal (both block before signaling)
 *   A: wait(b_arrived); post(a_arrived);  // A blocks
 *   B: wait(a_arrived); post(b_arrived);  // B blocks
 *   // Neither can signal, deadlock!
 */
```

The order of signal-then-wait is critical! If both threads wait before signaling, deadlock occurs. This is a common bug when implementing rendezvous.
Mnemonic: "Be polite—signal before you wait."
Where This Pattern Appears:
The Problem:
Allow at most N threads into a critical section simultaneously. This generalizes mutual exclusion (N=1) to limited concurrency.
The Pattern:
Use a semaphore initialized to N:
```c
/*
 * Multiplex Pattern
 *
 * Allow up to N threads in the critical section simultaneously.
 */

#define MAX_CONCURRENT 5
sem_t multiplex;   // sem_init(&multiplex, 0, MAX_CONCURRENT): 5 permits available

void* worker_thread(void* arg) {
    while (true) {
        do_other_work();

        // Enter limited-concurrency section
        sem_wait(&multiplex);   // Acquire one of N permits

        // CRITICAL SECTION
        // At most MAX_CONCURRENT threads are here simultaneously
        access_limited_resource();

        sem_post(&multiplex);   // Release permit
    }
    return NULL;
}

/*
 * Use cases:
 * - Connection pool with max connections
 * - Rate limiting: max N requests in flight
 * - Resource pool: max N concurrent users
 * - Thread pool: max N active workers
 */
```

Relationship to Mutex:
A mutex is a multiplex with N=1. In fact, some systems implement mutexes as binary semaphores internally.
Relationship to Bounded Buffer:
The empty_slots semaphore in a bounded buffer is a multiplex that limits how far producers can run ahead of consumers (to the buffer capacity). The full_slots semaphore similarly bounds consumers, preventing them from reading items that don't yet exist.
| Application | What N Represents | Resource Being Limited |
|---|---|---|
| Database connection pool | Max connections | Database sockets |
| Web server workers | Max concurrent requests | Handler threads |
| File handle pool | Max open files | OS file descriptors |
| GPU kernel dispatch | Max concurrent warps | Compute units |
| Bounded buffer slots | Buffer capacity | Array slots |
The Problem:
N threads must all reach a synchronization point before any of them can proceed. This is an extension of rendezvous to N parties.
The Pattern:
Use a counter protected by a mutex, and a semaphore for release:
```c
/*
 * Barrier Pattern (Basic, One-Shot)
 *
 * N threads wait until all have arrived,
 * then all proceed simultaneously.
 */

#define NUM_THREADS 4

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
sem_t barrier;      // sem_init(&barrier, 0, 0): threads block here
int count = 0;      // Threads that have arrived

void barrier_wait() {
    pthread_mutex_lock(&mutex);
    count++;
    if (count == NUM_THREADS) {
        // Last thread to arrive: release all others
        // Signal N-1 times (self doesn't need a signal)
        for (int i = 0; i < NUM_THREADS - 1; i++) {
            sem_post(&barrier);
        }
        // Reset for reuse (if needed)
        count = 0;
        pthread_mutex_unlock(&mutex);
    } else {
        // Not last: wait for release
        pthread_mutex_unlock(&mutex);
        sem_wait(&barrier);
    }
}

void* thread_func(void* arg) {
    int id = *(int*)arg;

    // Phase 1 work
    printf("Thread %d: phase 1 complete\n", id);

    barrier_wait();   // Wait for all threads

    // Phase 2 work (guaranteed all threads finished phase 1)
    printf("Thread %d: starting phase 2\n", id);
    return NULL;
}

/*
 * Warning: This basic barrier has a bug for reuse!
 * If phase 2 is fast and a thread re-enters barrier_wait
 * before all threads have exited, count is corrupted.
 *
 * See "Reusable Barrier" pattern for the solution.
 */
```

The basic barrier above is not safely reusable. If one thread completes phase 2 quickly and re-enters the barrier while others are still exiting phase 1, the count becomes corrupted. The Reusable Barrier pattern (using a turnstile) solves this.
Where Barriers Appear:
The Problem:
We need a barrier that can be used repeatedly without corruption. The key insight: threads must exit the barrier in a controlled way, not all at once.
The Turnstile Pattern:
A turnstile is a semaphore that starts at 0. When unlocked (signaled), threads pass through one at a time, each one re-locking it for the next.
```c
/*
 * Reusable Barrier Using Two Turnstiles
 *
 * A turnstile lets threads pass one at a time,
 * each relocking for the next.
 */

#define NUM_THREADS 4

typedef struct {
    pthread_mutex_t mutex;
    int count;
    sem_t turnstile1;   // First turnstile
    sem_t turnstile2;   // Second turnstile
} ReusableBarrier;

void barrier_init(ReusableBarrier* b) {
    pthread_mutex_init(&b->mutex, NULL);
    b->count = 0;
    sem_init(&b->turnstile1, 0, 0);   // Initially locked
    sem_init(&b->turnstile2, 0, 1);   // Initially open (for first use)
}

void barrier_wait(ReusableBarrier* b) {
    /* ========================
     * PHASE 1: ENTRY
     * ======================== */
    pthread_mutex_lock(&b->mutex);
    b->count++;
    if (b->count == NUM_THREADS) {
        // Last to enter: unlock turnstile1, lock turnstile2
        sem_wait(&b->turnstile2);   // Lock turnstile2
        sem_post(&b->turnstile1);   // Unlock turnstile1
    }
    pthread_mutex_unlock(&b->mutex);

    // Pass through turnstile1 (wait then signal next thread)
    sem_wait(&b->turnstile1);
    sem_post(&b->turnstile1);   // Let next thread through

    /* ========================
     * PHASE 2: EXIT
     * ======================== */
    pthread_mutex_lock(&b->mutex);
    b->count--;
    if (b->count == 0) {
        // Last to exit: unlock turnstile2, lock turnstile1
        sem_wait(&b->turnstile1);   // Lock turnstile1
        sem_post(&b->turnstile2);   // Unlock turnstile2
    }
    pthread_mutex_unlock(&b->mutex);

    // Pass through turnstile2
    sem_wait(&b->turnstile2);
    sem_post(&b->turnstile2);
}

/*
 * Why this works:
 *
 * - turnstile1 controls entry: locked until all arrive
 * - turnstile2 controls reuse: locked until all exit
 * - Each thread passes through each turnstile atomically
 *   (wait then immediately post, letting next thread through)
 *
 * The "double turnstile" ensures complete separation between
 * barrier uses, preventing early arrivals from corrupting count.
 */
```

The turnstile idiom: wait then immediately post.

```c
sem_wait(&turnstile);   // Enter
sem_post(&turnstile);   // Let next person in
```
This creates a "one at a time" gate. When the turnstile is locked (value 0), all threads block. When unlocked, one thread passes, immediately unlocking for the next.
The fundamental patterns combine to form higher-level structures. Recognizing these composites accelerates problem-solving.
Producer-Consumer = Signaling + Multiplex
Readers-Writers = Asymmetric Multiplex
```c
/*
 * Lightswitch Pattern
 *
 * Used in Readers-Writers: first reader turns on the light
 * (locks writers out), last reader turns off (allows writers).
 */

typedef struct {
    int counter;
    pthread_mutex_t mutex;
} Lightswitch;

void lightswitch_lock(Lightswitch* ls, sem_t* semaphore) {
    pthread_mutex_lock(&ls->mutex);
    ls->counter++;
    if (ls->counter == 1) {
        // First one in: acquire the semaphore
        sem_wait(semaphore);   // Lock out "the other side"
    }
    pthread_mutex_unlock(&ls->mutex);
}

void lightswitch_unlock(Lightswitch* ls, sem_t* semaphore) {
    pthread_mutex_lock(&ls->mutex);
    ls->counter--;
    if (ls->counter == 0) {
        // Last one out: release the semaphore
        sem_post(semaphore);   // Allow "the other side"
    }
    pthread_mutex_unlock(&ls->mutex);
}

/*
 * Usage in Readers-Writers:
 *
 *   Lightswitch reader_switch;
 *   sem_t room_empty;   // sem_init(&room_empty, 0, 1): room is empty
 *
 *   Reader:
 *     lightswitch_lock(&reader_switch, &room_empty);
 *     // READ
 *     lightswitch_unlock(&reader_switch, &room_empty);
 *
 *   Writer:
 *     sem_wait(&room_empty);   // Wait for all readers to leave
 *     // WRITE
 *     sem_post(&room_empty);
 *
 * Effect: Multiple readers can hold room_empty together,
 * but a writer waits until all readers are gone.
 */
```

Resource Ordering = Deadlock Prevention
When multiple resources must be acquired, acquire them in a fixed global order to prevent circular wait:
```c
/*
 * Resource Ordering Pattern
 *
 * Acquire locks in a fixed order to prevent deadlock.
 * Used in Dining Philosophers (pick lower-numbered fork first).
 */

// BAD: Arbitrary order can cause deadlock
void bad_acquire(int resource_a, int resource_b) {
    lock(&locks[resource_a]);
    lock(&locks[resource_b]);
    // If another thread does this in reverse order: deadlock possible
}

// GOOD: Always acquire in ascending order
void good_acquire(int resource_a, int resource_b) {
    int first  = min(resource_a, resource_b);
    int second = max(resource_a, resource_b);
    lock(&locks[first]);
    lock(&locks[second]);
    // No thread can hold 'second' while waiting for 'first',
    // because every thread acquires the lower-numbered lock first
}

/*
 * Why it works:
 *
 * Circular wait requires:
 *   Thread 1 holds A, waits for B
 *   Thread 2 holds B, waits for A
 *
 * With ordering, if Thread 1 holds A, it will get B before
 * any thread can hold B while waiting for A (because that
 * thread would have to acquire A first, which Thread 1 holds).
 */
```

| Composite Pattern | Building Blocks | Key Insight |
|---|---|---|
| Producer-Consumer | Signaling + Multiplex | Dual multiplex for capacity in both directions |
| Readers-Writers | Lightswitch + Mutex | Group lock acquisition/release |
| Dining Philosophers | Resource Ordering + Mutex | Fixed acquisition order breaks circular wait |
| Sleeping Barber | Signaling + Rendezvous + Bounded Buffer | Capacity-limited rendezvous |
| Cigarette Smokers | Signaling + Conditional Matching | Pushers provide missing conditional logic |
When faced with a new synchronization problem, use this systematic approach:
Step 1: Identify the Participants
Step 2: Identify the Resources
Step 3: Identify the Constraints
Step 4: Compose Patterns
Most real problems require multiple patterns. Identify each constraint, find its pattern, then compose:
Example Analysis: Print Server
Problem: Multiple applications submit print jobs. Jobs queue up (bounded capacity). A single printer processes jobs one at a time.
Analysis:
Pattern composition: Signaling (job ready) + Multiplex (queue capacity) + Mutex (queue access)
Experts don't think through these steps explicitly every time. After solving hundreds of problems, pattern matching becomes intuitive. They "see" the structure immediately.
But when stuck, returning to systematic analysis often reveals the pattern you couldn't see intuitively. The method is a scaffold until intuition develops.
Just as patterns represent good solutions, anti-patterns represent common mistakes. Recognizing anti-patterns helps avoid them.
Anti-Pattern 1: Wait Before Signal (in Rendezvous)
```c
// DEADLOCK: Both threads wait before signaling

void* thread_a(void* arg) {
    sem_wait(&b_arrived);   // Wait for B (but B is also waiting!)
    sem_post(&a_arrived);
    return NULL;
}

void* thread_b(void* arg) {
    sem_wait(&a_arrived);   // Wait for A (deadlock!)
    sem_post(&b_arrived);
    return NULL;
}

// Fix: Signal THEN wait
void* thread_a_fixed(void* arg) {
    sem_post(&a_arrived);   // Signal first
    sem_wait(&b_arrived);   // Then wait
    return NULL;
}
```

Anti-Pattern 2: Holding Mutex Across Blocking Wait
```c
// DEADLOCK: Holding mutex while waiting on semaphore

void* broken_consumer(void* arg) {
    pthread_mutex_lock(&mutex);
    sem_wait(&items);   // Block while holding mutex!
    // Other threads can't signal 'items' because they
    // need the mutex first... deadlock!

    // process item...
    pthread_mutex_unlock(&mutex);
    return NULL;
}

// Fix: Release mutex before blocking wait
void* fixed_consumer(void* arg) {
    sem_wait(&items);   // Wait OUTSIDE the mutex

    pthread_mutex_lock(&mutex);
    // process item...
    pthread_mutex_unlock(&mutex);
    return NULL;
}
```

Pattern recognition transforms synchronization from an intimidating field into a manageable skill. The same structures appear again and again; mastering them provides leverage across countless problems.
You now have a catalog of fundamental synchronization patterns and a method for applying them. These patterns are the vocabulary of concurrent programming—the building blocks from which all solutions are constructed. Next, we'll close this module with Solution Strategies, synthesizing everything into a systematic approach for tackling any synchronization problem.