Throughout this module, we've solved classic synchronization problems using monitors. Now we step back to identify the reusable patterns that emerge from these solutions. Just as design patterns revolutionized object-oriented programming, synchronization patterns provide a vocabulary and toolkit for building concurrent systems.
These patterns distill decades of hard-won experience into teachable, reusable solutions. A pattern not only solves a problem—it explains why the solution works and when to apply it. Mastering these patterns transforms you from someone who can implement specific solutions to someone who can design novel concurrent systems with confidence.
By the end of this page, you will:

• Master core monitor-based synchronization patterns
• Recognize anti-patterns and common implementation mistakes
• Apply template designs for rapid, correct implementation
• Understand testing and verification strategies for concurrent code
• Build a mental library of reusable synchronization solutions
Every monitor-based solution follows a canonical structure. Understanding this template enables rapid, correct implementation of new synchronization challenges.
The Template Structure:
```
monitor AbstractMonitor {
  private:
    // 1. Protected shared state
    StateVariables state;

    // 2. Condition variables for coordination
    condition cond1, cond2, ...;

    // 3. Helper predicates (pure functions)
    bool canProceed1() { return [condition on state]; }
    bool canProceed2() { return [condition on state]; }

  public:
    // 4. Entry procedures (main API)
    procedure operation1() {
        while (!canProceed1()) wait(cond1);
        // Modify state
        if ([signaling condition]) signal/broadcast(condX);
    }

    procedure operation2() {
        while (!canProceed2()) wait(cond2);
        // Modify state
        if ([signaling condition]) signal/broadcast(condX);
    }
}
```
The Four Essential Components:
```c
// Canonical monitor pattern implementation

typedef struct {
    // ======== 1. PROTECTED STATE ========
    // All shared state encapsulated here
    SomeState state;

    // ======== 2. SYNCHRONIZATION PRIMITIVES ========
    pthread_mutex_t lock;       // Monitor lock
    pthread_cond_t condition1;  // Named condition
    pthread_cond_t condition2;  // Named condition
} Monitor;

// ======== 3. HELPER PREDICATES ========
// Pure functions: depend only on state, no side effects

static bool can_proceed_1(Monitor *m) {
    // Define when operation1 can proceed
    return /* some condition on m->state */;
}

static bool can_proceed_2(Monitor *m) {
    // Define when operation2 can proceed
    return /* some condition on m->state */;
}

// ======== 4. MONITOR OPERATIONS ========
// Follow entry-wait-modify-signal-exit pattern

void operation1(Monitor *m) {
    // ENTRY: Acquire monitor
    pthread_mutex_lock(&m->lock);

    // WAIT: Block until predicate true
    while (!can_proceed_1(m)) {
        pthread_cond_wait(&m->condition1, &m->lock);
    }
    // CRITICAL: Predicate now guaranteed true

    // MODIFY: Update state
    modify_state_for_op1(&m->state);

    // SIGNAL: Wake threads whose conditions may now be true
    if (might_enable_condition2(&m->state)) {
        pthread_cond_signal(&m->condition2);
    }

    // EXIT: Release monitor
    pthread_mutex_unlock(&m->lock);
}

void operation2(Monitor *m) {
    pthread_mutex_lock(&m->lock);
    while (!can_proceed_2(m)) {
        pthread_cond_wait(&m->condition2, &m->lock);
    }
    modify_state_for_op2(&m->state);
    if (might_enable_condition1(&m->state)) {
        pthread_cond_signal(&m->condition1);
    }
    pthread_mutex_unlock(&m->lock);
}
```

Extracting wait conditions into named predicate functions (can_proceed_X) dramatically improves code clarity. The predicate encapsulates the "what" (condition to wait for) while the monitor operation handles the "how" (waiting and signaling). This separation makes the logic easier to verify and modify.
Different scenarios require different signaling strategies. Here are the primary wait-notify patterns.
Pattern 1: Single Waiter, Single Notifier
The simplest pattern: one thread waits for another's signal.
Use case: Request-response, parent-child synchronization
```c
// Pattern: Single waiter, single notifier
// Example: Wait for computation result

typedef struct {
    bool result_ready;
    int result;
    pthread_mutex_t lock;
    pthread_cond_t ready;
} FutureInt;

// Consumer: wait for result
int future_get(FutureInt *f) {
    pthread_mutex_lock(&f->lock);
    while (!f->result_ready) {
        pthread_cond_wait(&f->ready, &f->lock);
    }
    int value = f->result;
    pthread_mutex_unlock(&f->lock);
    return value;
}

// Producer: provide result
void future_set(FutureInt *f, int value) {
    pthread_mutex_lock(&f->lock);
    f->result = value;
    f->result_ready = true;
    pthread_cond_signal(&f->ready);  // Signal: one waiter
    pthread_mutex_unlock(&f->lock);
}
```

Pattern 2: Multiple Waiters, Any Can Proceed
Multiple threads wait; when condition becomes true, any one can proceed.
Use case: Resource pool, task queue
```c
// Pattern: Multiple waiters, one can proceed
// Example: Connection pool

typedef struct {
    int available;
    int max_connections;
    pthread_mutex_t lock;
    pthread_cond_t conn_available;
} ConnectionPool;

Connection *pool_acquire(ConnectionPool *pool) {
    pthread_mutex_lock(&pool->lock);
    while (pool->available == 0) {
        pthread_cond_wait(&pool->conn_available, &pool->lock);
    }
    pool->available--;
    Connection *conn = get_connection_from_pool(pool);
    pthread_mutex_unlock(&pool->lock);
    return conn;
}

void pool_release(ConnectionPool *pool, Connection *conn) {
    pthread_mutex_lock(&pool->lock);
    return_connection_to_pool(pool, conn);
    pool->available++;
    // Signal ONE waiter: only one connection freed
    pthread_cond_signal(&pool->conn_available);
    pthread_mutex_unlock(&pool->lock);
}
```

Pattern 3: Multiple Waiters, All Must Proceed
Multiple threads wait; when condition changes, all should re-evaluate.
Use case: Barrier, configuration change, shutdown
```c
// Pattern: Multiple waiters, all must wake
// Example: Shutdown notification

typedef struct {
    bool shutdown;
    pthread_mutex_t lock;
    pthread_cond_t shutdown_cond;
} ShutdownManager;

// Worker: check for shutdown
bool should_continue(ShutdownManager *sm) {
    pthread_mutex_lock(&sm->lock);
    bool cont = !sm->shutdown;
    pthread_mutex_unlock(&sm->lock);
    return cont;
}

void wait_for_shutdown(ShutdownManager *sm) {
    pthread_mutex_lock(&sm->lock);
    while (!sm->shutdown) {
        pthread_cond_wait(&sm->shutdown_cond, &sm->lock);
    }
    pthread_mutex_unlock(&sm->lock);
}

// Controller: trigger shutdown
void trigger_shutdown(ShutdownManager *sm) {
    pthread_mutex_lock(&sm->lock);
    sm->shutdown = true;
    // BROADCAST: All waiters must wake and see shutdown
    pthread_cond_broadcast(&sm->shutdown_cond);
    pthread_mutex_unlock(&sm->lock);
}
```

| Scenario | Signal or Broadcast | Rationale |
|---|---|---|
| One resource freed | Signal | One waiter can use it |
| Multiple resources freed | Broadcast or N signals | Multiple waiters may proceed |
| State change affects all | Broadcast | All must re-evaluate |
| Different conditions for different waiters | Broadcast | Unknown who can proceed |
| Uncertain who can proceed | Broadcast | Conservative correctness |
| Shutdown/termination | Broadcast | All threads must see it |
Using signal when broadcast is needed causes threads to miss wake-ups and hang. Using broadcast when signal is sufficient is merely inefficient—threads wake up, check the condition, and go back to sleep. Correctness trumps efficiency; default to broadcast if unsure.
Complex synchronization often involves state machines with multiple states and transitions. This pattern structures monitors around explicit states.
The Pattern:
```c
// State Machine Pattern: File Download Manager

typedef enum {
    IDLE,         // Ready to start download
    DOWNLOADING,  // Download in progress
    PAUSED,       // Download paused
    COMPLETED,    // Download finished
    ERROR         // Download failed
} DownloadState;

typedef struct {
    DownloadState state;
    int progress;                  // 0-100
    char *error_message;
    pthread_mutex_t lock;
    pthread_cond_t state_changed;  // Broadcast on any state change
} DownloadManager;

// Helper: check if state is one of the valid states
static bool is_in_state(DownloadManager *dm, DownloadState *valid, int count) {
    for (int i = 0; i < count; i++) {
        if (dm->state == valid[i]) return true;
    }
    return false;
}

// Wait until download reaches a terminal state
void wait_for_completion(DownloadManager *dm) {
    pthread_mutex_lock(&dm->lock);
    DownloadState terminal[] = {COMPLETED, ERROR};
    while (!is_in_state(dm, terminal, 2)) {
        pthread_cond_wait(&dm->state_changed, &dm->lock);
    }
    pthread_mutex_unlock(&dm->lock);
}

// Transition: IDLE -> DOWNLOADING
bool start_download(DownloadManager *dm) {
    pthread_mutex_lock(&dm->lock);
    // Can only start from IDLE
    if (dm->state != IDLE) {
        pthread_mutex_unlock(&dm->lock);
        return false;
    }
    // State transition
    dm->state = DOWNLOADING;
    dm->progress = 0;
    // Broadcast: state changed
    pthread_cond_broadcast(&dm->state_changed);
    pthread_mutex_unlock(&dm->lock);
    return true;
}

// Transition: DOWNLOADING -> PAUSED
bool pause_download(DownloadManager *dm) {
    pthread_mutex_lock(&dm->lock);
    if (dm->state != DOWNLOADING) {
        pthread_mutex_unlock(&dm->lock);
        return false;
    }
    dm->state = PAUSED;
    pthread_cond_broadcast(&dm->state_changed);
    pthread_mutex_unlock(&dm->lock);
    return true;
}

// Transition: PAUSED -> DOWNLOADING
bool resume_download(DownloadManager *dm) {
    pthread_mutex_lock(&dm->lock);
    if (dm->state != PAUSED) {
        pthread_mutex_unlock(&dm->lock);
        return false;
    }
    dm->state = DOWNLOADING;
    pthread_cond_broadcast(&dm->state_changed);
    pthread_mutex_unlock(&dm->lock);
    return true;
}

// Update progress (only while downloading)
bool update_progress(DownloadManager *dm, int progress) {
    pthread_mutex_lock(&dm->lock);
    if (dm->state != DOWNLOADING) {
        pthread_mutex_unlock(&dm->lock);
        return false;
    }
    dm->progress = progress;
    if (progress >= 100) {
        dm->state = COMPLETED;
        pthread_cond_broadcast(&dm->state_changed);
    }
    pthread_mutex_unlock(&dm->lock);
    return true;
}
```

Explicit states make the synchronization logic crystal clear:

• Verifiable: Each transition is explicit and can be checked
• Debuggable: Current state is observable
• Documentable: State diagram directly maps to code
• Extensible: New states/transitions are straightforward to add
The guard pattern (also called "guarded methods") makes the wait condition explicit in the method signature or documentation. Each operation has a guard predicate that must be true for the operation to proceed.
Structure:
```
procedure operation() when [guard condition]:
    // Guard is implicitly checked in while loop
    while not [guard condition]:
        wait(appropriate_condition)
    // Proceed with operation
    ...
```
This pattern is particularly useful when operations have different guard conditions.
```c
// Guard Pattern: Priority Queue with Blocking

typedef struct {
    PriorityHeap heap;
    int max_size;
    pthread_mutex_t lock;
    pthread_cond_t not_full;
    pthread_cond_t not_empty;
    pthread_cond_t high_priority_available;
} BlockingPriorityQueue;

// ============ GUARD PREDICATES ============
// Each clearly named, documenting the guard condition

static bool guard_can_insert(BlockingPriorityQueue *q) {
    return q->heap.size < q->max_size;
}

static bool guard_can_remove(BlockingPriorityQueue *q) {
    return q->heap.size > 0;
}

static bool guard_has_high_priority(BlockingPriorityQueue *q) {
    return q->heap.size > 0 &&
           peek_priority(&q->heap) >= HIGH_PRIORITY;
}

// ============ GUARDED OPERATIONS ============

// insert(item) when guard_can_insert
void pq_insert(BlockingPriorityQueue *q, void *item, int priority) {
    pthread_mutex_lock(&q->lock);
    // Guard: wait until can_insert
    while (!guard_can_insert(q)) {
        pthread_cond_wait(&q->not_full, &q->lock);
    }
    heap_insert(&q->heap, item, priority);
    // Signal based on what changed
    pthread_cond_signal(&q->not_empty);
    if (priority >= HIGH_PRIORITY) {
        pthread_cond_broadcast(&q->high_priority_available);
    }
    pthread_mutex_unlock(&q->lock);
}

// remove() when guard_can_remove
void *pq_remove(BlockingPriorityQueue *q) {
    pthread_mutex_lock(&q->lock);
    // Guard: wait until can_remove
    while (!guard_can_remove(q)) {
        pthread_cond_wait(&q->not_empty, &q->lock);
    }
    void *item = heap_extract_max(&q->heap);
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return item;
}

// remove_high_priority() when guard_has_high_priority
// Specialized operation: only remove high-priority items
void *pq_remove_high_priority(BlockingPriorityQueue *q) {
    pthread_mutex_lock(&q->lock);
    // Guard: wait until high priority available
    while (!guard_has_high_priority(q)) {
        pthread_cond_wait(&q->high_priority_available, &q->lock);
    }
    void *item = heap_extract_max(&q->heap);
    pthread_cond_signal(&q->not_full);
    // Note: don't signal high_priority_available unless
    // next item is also high priority
    if (guard_has_high_priority(q)) {
        pthread_cond_signal(&q->high_priority_available);
    }
    pthread_mutex_unlock(&q->lock);
    return item;
}
```

Many modern languages support guard conditions directly in syntax (e.g., Haskell's guards, Erlang's when clauses). In languages without native support, document guards clearly in comments or function names: dequeue_when_not_empty(), insert_when_has_space().
Knowing what NOT to do is as important as knowing what to do. Here are anti-patterns that lead to bugs, deadlocks, or poor performance.
Anti-Pattern 1: Holding the Lock During I/O or Long Operations
```c
// ❌ ANTI-PATTERN: Long lock hold

void process_request_bad(Monitor *m, Request *req) {
    pthread_mutex_lock(&m->lock);

    // Long operation while holding lock!
    ResponseData *data = fetch_from_database(req);  // ❌ Blocks for 100ms+
    write_to_file(data);                            // ❌ I/O under lock

    pthread_mutex_unlock(&m->lock);
    // Other threads blocked for the entire duration!
}

// ✅ PATTERN: Release lock during long operations

void process_request_good(Monitor *m, Request *req) {
    pthread_mutex_lock(&m->lock);
    // Copy what we need from shared state
    RequestParams params = copy_params(m, req);
    pthread_mutex_unlock(&m->lock);  // Release before long work

    // Long operation without holding lock
    ResponseData *data = fetch_from_database(&params);
    write_to_file(data);

    // Re-acquire if needed to update state
    pthread_mutex_lock(&m->lock);
    update_completion_status(m, req->id);
    pthread_mutex_unlock(&m->lock);
}
```

Anti-Pattern 2: Nested Locking (Deadlock Risk)
```c
// ❌ ANTI-PATTERN: Nested locking with inconsistent order

void transfer_bad(Account *from, Account *to, int amount) {
    pthread_mutex_lock(&from->lock);
    pthread_mutex_lock(&to->lock);  // ❌ Deadlock risk!
    // If concurrent transfer(A,B) and transfer(B,A):
    //   First locks A, second locks B
    //   First waits for B, second waits for A
    //   DEADLOCK!
    // ...
}

// ✅ PATTERN: Consistent lock ordering

void transfer_good(Account *from, Account *to, int amount) {
    // Always lock lower ID first
    Account *first = (from->id < to->id) ? from : to;
    Account *second = (from->id < to->id) ? to : from;

    pthread_mutex_lock(&first->lock);
    pthread_mutex_lock(&second->lock);
    // Safe: consistent ordering prevents cycles
    // ...
    pthread_mutex_unlock(&second->lock);
    pthread_mutex_unlock(&first->lock);
}
```

Anti-Pattern 3: Signal Outside Lock
```c
// ❌ ANTI-PATTERN: Signal outside lock (sometimes causes bugs)

void producer_bad(Buffer *b, void *item) {
    pthread_mutex_lock(&b->lock);
    while (b->count == b->capacity) {
        pthread_cond_wait(&b->not_full, &b->lock);
    }
    b->buffer[b->in] = item;
    b->in = (b->in + 1) % b->capacity;
    b->count++;
    pthread_mutex_unlock(&b->lock);

    pthread_cond_signal(&b->not_empty);  // ❌ Signal after unlock
    // With POSIX condition variables this particular code happens to be
    // safe, because consumers re-check b->count under the mutex. But if
    // any waiter tests the predicate without holding the lock, or the
    // buffer can be freed right after unlock, the signal can be lost or
    // can target destroyed state.
}

// ✅ PATTERN: Signal while holding lock

void producer_good(Buffer *b, void *item) {
    pthread_mutex_lock(&b->lock);
    while (b->count == b->capacity) {
        pthread_cond_wait(&b->not_full, &b->lock);
    }
    b->buffer[b->in] = item;
    b->in = (b->in + 1) % b->capacity;
    b->count++;
    pthread_cond_signal(&b->not_empty);  // ✅ Signal under lock
    pthread_mutex_unlock(&b->lock);
}
```

| Anti-Pattern | Problem | Solution |
|---|---|---|
| Long lock hold | Poor concurrency, starvation | Release lock during long ops |
| Nested locking | Deadlock risk | Consistent lock ordering |
| Signal outside lock | Lost wakeups | Signal while holding lock |
| if instead of while | Condition violation | Always use while for conditions |
| Forgetting to signal | Indefinite blocking | Review all exit paths |
| Wrong condition variable | Wrong thread wakes | Name CVs descriptively |
Anti-pattern bugs often work 99% of the time and fail under specific timing. They pass tests, ship to production, and cause intermittent issues that are nearly impossible to reproduce. The only defense is discipline: follow patterns rigorously, even when shortcuts seem to work.
Concurrent code is notoriously difficult to test because bugs depend on scheduling, which is non-deterministic. Here are strategies to improve confidence.
Strategy 1: Stress Testing
Run many threads performing rapid operations to increase the likelihood of triggering races.
```c
#include <pthread.h>
#include <assert.h>
#include <stdio.h>   // printf
#include <stdlib.h>  // rand

#define NUM_PRODUCERS 10
#define NUM_CONSUMERS 10
#define ITEMS_PER_THREAD 10000

BoundedBuffer buffer;
long produced_sum = 0;
long consumed_sum = 0;
pthread_mutex_t sum_lock;

void *producer(void *arg) {
    long local_sum = 0;
    for (int i = 0; i < ITEMS_PER_THREAD; i++) {
        int value = rand();
        local_sum += value;
        bounded_buffer_insert(&buffer, (void *)(long)value);
    }
    pthread_mutex_lock(&sum_lock);
    produced_sum += local_sum;
    pthread_mutex_unlock(&sum_lock);
    return NULL;
}

void *consumer(void *arg) {
    long local_sum = 0;
    for (int i = 0; i < ITEMS_PER_THREAD; i++) {
        int value = (int)(long)bounded_buffer_remove(&buffer);
        local_sum += value;
    }
    pthread_mutex_lock(&sum_lock);
    consumed_sum += local_sum;
    pthread_mutex_unlock(&sum_lock);
    return NULL;
}

void stress_test() {
    bounded_buffer_init(&buffer, 100);
    pthread_mutex_init(&sum_lock, NULL);

    pthread_t prod[NUM_PRODUCERS], cons[NUM_CONSUMERS];
    for (int i = 0; i < NUM_PRODUCERS; i++)
        pthread_create(&prod[i], NULL, producer, NULL);
    for (int i = 0; i < NUM_CONSUMERS; i++)
        pthread_create(&cons[i], NULL, consumer, NULL);

    for (int i = 0; i < NUM_PRODUCERS; i++)
        pthread_join(prod[i], NULL);
    for (int i = 0; i < NUM_CONSUMERS; i++)
        pthread_join(cons[i], NULL);

    // Invariant check: all produced values were consumed
    assert(produced_sum == consumed_sum);
    printf("Stress test passed: sum = %ld\n", produced_sum);
}
```

Strategy 2: Thread Sanitizer (TSan)
Use runtime tools that detect data races and synchronization errors.
```sh
# Compile with ThreadSanitizer
gcc -fsanitize=thread -g -O1 test.c -o test

# Run - TSan reports any races detected
./test
```
Strategy 3: Invariant Assertions
Embed assertions that check invariants; violations indicate bugs.
```c
void check_invariants(BoundedBuffer *bb) {
    pthread_mutex_lock(&bb->lock);
    assert(bb->count >= 0);
    assert(bb->count <= bb->capacity);
    assert(bb->in >= 0 && bb->in < bb->capacity);
    assert(bb->out >= 0 && bb->out < bb->capacity);
    pthread_mutex_unlock(&bb->lock);
}
```
Strategy 4: Model Checking
Tools like SPIN or TLA+ can exhaustively verify concurrent algorithms.
Even extensive testing cannot prove the absence of bugs in concurrent code. The execution space is too vast. Combine testing with code review, formal verification for critical sections, and adherence to proven patterns. Defense in depth is essential.
Let's consolidate the lessons from this module into actionable best practices.
Design Practices:
• Name condition variables descriptively: can_read, not_full are better than cond1

Implementation Practices:
• Always use while, never if: Mesa semantics require loop checking
• Prefer signal: Use broadcast only when multiple threads should wake

Testing Practices:
```c
// Template following all best practices

typedef struct {
    // ======== STATE ========
    // All shared state encapsulated

    // ======== INVARIANTS ========
    // INV1: 0 <= count <= capacity
    // INV2: in, out are valid indices

    // ======== SYNCHRONIZATION ========
    pthread_mutex_t lock;
    pthread_cond_t can_read;   // Clear name: when can read
    pthread_cond_t can_write;  // Clear name: when can write
} BestPracticeMonitor;

// ======== PREDICATES ========
// Separate, testable, documentable

static inline bool pred_can_read(BestPracticeMonitor *m) {
    return /* condition */;
}

static inline bool pred_can_write(BestPracticeMonitor *m) {
    return /* condition */;
}

// ======== INVARIANT CHECKER ========
// Can be called under lock for debugging

static void assert_invariants(BestPracticeMonitor *m) {
    assert(/* invariant 1 */);
    assert(/* invariant 2 */);
}

// ======== PUBLIC OPERATIONS ========
// Follow: lock -> while(!pred){wait} -> modify -> signal -> unlock

void operation(BestPracticeMonitor *m) {
    pthread_mutex_lock(&m->lock);

    while (!pred_can_read(m)) {  // while, not if
        pthread_cond_wait(&m->can_read, &m->lock);
    }

    // Modify state
    modify_state(m);

    // Assert invariants still hold (debug builds)
    assert_invariants(m);

    // Signal under lock
    if (pred_can_write(m)) {
        pthread_cond_signal(&m->can_write);
    }

    pthread_mutex_unlock(&m->lock);
}
```

Following these patterns and practices consistently leads to code that is:

• Correct: Bugs are rare because patterns prevent common errors
• Maintainable: Structure is clear, modifications are localized
• Verifiable: Invariants, predicates, and state machines are reviewable
• Testable: Encapsulation enables unit testing of synchronization logic
This page has equipped you with a toolkit of patterns for implementing monitor-based synchronization. These patterns are the distillation of decades of concurrent programming experience.
Module Complete:
You've now completed the Monitor-Based Solutions module. You can solve classic synchronization problems (bounded buffer, readers-writers, dining philosophers) using monitors, compare monitor and semaphore approaches, and apply proven implementation patterns.
These skills form the foundation for building reliable concurrent systems—from operating system kernels to distributed databases to high-performance servers.
Congratulations! You've mastered monitor-based synchronization, one of the most important topics in concurrent programming. You now have the knowledge and patterns to implement correct, efficient, and maintainable synchronization in any language that supports mutexes and condition variables.