Three conditions—mutual exclusion, hold-and-wait, and no preemption—create the possibility of waiting. But waiting alone does not constitute deadlock. A process waiting for a resource held by a running process will eventually be served when that process completes. Deadlock requires something more: a circular wait, where the waiting relationships form a closed cycle with no exit.
Circular wait is the keystone condition. When processes P₁, P₂, ..., Pₙ each wait for a resource held by the next process in the chain, and Pₙ waits for a resource held by P₁, the cycle is complete. No process can proceed because each is waiting for another in the cycle. This cyclic dependency converts static waiting into permanent, irresolvable deadlock.
By the end of this page, you will deeply understand the circular wait condition: its formal definition, how cycles emerge from resource request patterns, why cycles complete the deadlock requirements, detection through graph analysis, and critically, how imposing resource ordering prevents circular wait—one of the most widely used deadlock prevention strategies.
Formal Definition:
There exists a set of waiting processes {P₀, P₁, ..., Pₙ} such that P₀ is waiting for a resource held by P₁, P₁ is waiting for a resource held by P₂, ..., Pₙ₋₁ is waiting for a resource held by Pₙ, and Pₙ is waiting for a resource held by P₀.
Let's unpack this definition:
1. Ordered Chain — There is a sequence of processes, each waiting for the next. This creates a directional chain of dependencies.
2. Closure — The last process in the chain waits for the first, closing the loop. This transforms a chain into a cycle.
3. Everyone Blocked — Every process in the cycle is blocked, waiting for another process in the same cycle. There is no process that can proceed.
Coffman et al. stated this condition as: "There must be a circular chain of processes, each waiting for a resource held by the next." The circularity is essential—a linear chain of waiting is not deadlock because the end of the chain can eventually proceed.
Mathematical Formalization:
Define the wait-for relation: Pᵢ → Pⱼ if Pᵢ is waiting for a resource currently held by Pⱼ.
Circular wait exists when:
∃ P₀, P₁, ..., Pₙ : P₀ → P₁ → P₂ → ... → Pₙ → P₀
This is a cycle in the wait-for graph. The existence of any such cycle satisfies the circular wait condition.
Minimum Cycle Size:
The smallest possible cycle involves two processes:
P₀ → P₁ → P₀
P₀ waits for P₁, and P₁ waits for P₀. This is the classic two-process deadlock scenario. Larger cycles (3, 4, ... n processes) are also possible and increasingly difficult to detect during system design.
Circular waits don't appear spontaneously—they emerge from specific patterns of resource acquisition. Understanding these patterns helps identify and prevent potential deadlocks.
Pattern 1: Opposite Acquisition Order
The most common cause of circular wait is two (or more) processes acquiring resources in different orders:
Process A:                Process B:
─────────────             ─────────────
1. Acquire R1             1. Acquire R2
2. Acquire R2             2. Acquire R1
3. Use both resources     3. Use both resources
4. Release R2             4. Release R1
5. Release R1             5. Release R2
If both execute their step 1 concurrently:
```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>  /* for usleep() */

pthread_mutex_t lock_A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_B = PTHREAD_MUTEX_INITIALIZER;

void *thread_1(void *arg) {
    // Thread 1 acquires locks in order: A, then B
    printf("Thread 1: Attempting to acquire lock A\n");
    pthread_mutex_lock(&lock_A);
    printf("Thread 1: Acquired lock A\n");

    // Simulate work to increase chance of deadlock
    usleep(100);

    printf("Thread 1: Attempting to acquire lock B\n");
    pthread_mutex_lock(&lock_B);  // BLOCKS if Thread 2 holds B
    printf("Thread 1: Acquired lock B\n");

    // Critical section with both locks
    do_work_with_A_and_B();

    pthread_mutex_unlock(&lock_B);
    pthread_mutex_unlock(&lock_A);
    return NULL;
}

void *thread_2(void *arg) {
    // Thread 2 acquires locks in OPPOSITE order: B, then A
    printf("Thread 2: Attempting to acquire lock B\n");
    pthread_mutex_lock(&lock_B);
    printf("Thread 2: Acquired lock B\n");

    // Simulate work to increase chance of deadlock
    usleep(100);

    printf("Thread 2: Attempting to acquire lock A\n");
    pthread_mutex_lock(&lock_A);  // BLOCKS if Thread 1 holds A
    printf("Thread 2: Acquired lock A\n");

    // Critical section with both locks
    do_work_with_A_and_B();

    pthread_mutex_unlock(&lock_A);
    pthread_mutex_unlock(&lock_B);
    return NULL;
}

// LIKELY DEADLOCK if both threads run concurrently:
// Thread 1: Holds A, wants B
// Thread 2: Holds B, wants A
// Circular wait: Thread 1 → Thread 2 → Thread 1
```

Pattern 2: Transitive Chain Formation
Larger cycles can form through chains of intermediate processes:
Process P1: Holds R1, wants R2 → waits for whoever holds R2
Process P2: Holds R2, wants R3 → waits for whoever holds R3
Process P3: Holds R3, wants R1 → waits for whoever holds R1
Chain: P1 → P2 → P3 → P1 (cycle formed!)
These larger cycles are insidious because no single process appears to be doing anything wrong—each acquires just two resources. The cycle emerges from the combination of their behaviors.
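The emergent cycle can be made visible with a few lines of Python: map each resource to its holder, derive a wait-for edge for each process, and follow the chain. The process and resource names here simply mirror the hypothetical P1–P3/R1–R3 pattern above:

```python
# Hold/want sets from the three-process pattern above (illustrative names)
holds = {"P1": "R1", "P2": "R2", "P3": "R3"}
wants = {"P1": "R2", "P2": "R3", "P3": "R1"}

holder = {r: p for p, r in holds.items()}        # resource -> holding process
wait_for = {p: holder[wants[p]] for p in holds}  # process -> process it waits on

# Follow the chain from P1; arriving back at P1 reveals the cycle
chain, cur = ["P1"], wait_for["P1"]
while cur != "P1":
    chain.append(cur)
    cur = wait_for[cur]
chain.append("P1")
print(" → ".join(chain))  # P1 → P2 → P3 → P1
```

No single entry in `holds` or `wants` looks suspicious; the cycle only appears once the edges are combined.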
Circular waits are often emergent properties of the system, not visible in any single code path. Process A's code looks fine; Process B's code looks fine; but together they create deadlock. This is why deadlock analysis must consider the system holistically, not just individual processes.
Resource allocation graphs provide a powerful tool for visualizing and detecting circular waits. Understanding how to analyze these graphs is essential for deadlock detection.
Graph Components:
- Process nodes (drawn as circles): one node per process
- Resource nodes (drawn as squares): one node per resource type
- Request edge P → R: process P has requested resource R and is waiting for it
- Assignment edge R → P: resource R is currently allocated to process P
Cycle Detection for Single-Instance Resources:
When each resource type has exactly one instance, cycle detection is straightforward:
1. Construct the wait-for graph:
- For each process P holding resource R
- And each process Q requesting R
- Add edge Q → P (Q waits for P)
2. Apply standard cycle detection (DFS with back-edge detection)
3. If cycle found → DEADLOCK
If no cycle → No deadlock currently
```python
from typing import Dict, List, Set, Optional
from enum import Enum
from dataclasses import dataclass

class NodeState(Enum):
    UNVISITED = 0
    VISITING = 1   # Currently in DFS stack
    VISITED = 2    # DFS complete for this node

@dataclass
class Process:
    id: str
    holding: Set[str]      # Resources this process holds
    waiting_for: Set[str]  # Resources this process wants

class DeadlockDetector:
    """
    Detect circular wait in a system of processes and
    single-instance resources.
    """

    def __init__(self, processes: List[Process]):
        self.processes = {p.id: p for p in processes}
        self.wait_for_graph: Dict[str, List[str]] = {}
        self._build_wait_for_graph()

    def _build_wait_for_graph(self) -> None:
        """
        Build wait-for graph: edge from P to Q means P waits for Q.
        P waits for Q if P wants a resource that Q holds.
        """
        # Create resource -> holder mapping
        resource_holder: Dict[str, str] = {}
        for proc in self.processes.values():
            for resource in proc.holding:
                resource_holder[resource] = proc.id

        # Build edges: waiting process -> holding process
        for proc in self.processes.values():
            self.wait_for_graph[proc.id] = []
            for wanted_resource in proc.waiting_for:
                if wanted_resource in resource_holder:
                    holder = resource_holder[wanted_resource]
                    if holder != proc.id:  # Can't wait for yourself
                        self.wait_for_graph[proc.id].append(holder)

    def detect_cycle(self) -> Optional[List[str]]:
        """
        Detect cycle using DFS with three-color marking.
        Returns the cycle if found, None otherwise.
        """
        state = {pid: NodeState.UNVISITED for pid in self.processes}
        parent = {pid: None for pid in self.processes}

        for start_node in self.processes:
            if state[start_node] == NodeState.UNVISITED:
                cycle = self._dfs(start_node, state, parent)
                if cycle:
                    return cycle
        return None

    def _dfs(self, node: str, state: Dict, parent: Dict) -> Optional[List[str]]:
        """DFS with back-edge detection for cycle finding."""
        state[node] = NodeState.VISITING

        for neighbor in self.wait_for_graph.get(node, []):
            if state[neighbor] == NodeState.VISITING:
                # BACK EDGE FOUND - CYCLE DETECTED!
                # Reconstruct the cycle
                cycle = [neighbor]
                current = node
                while current != neighbor:
                    cycle.append(current)
                    current = parent[current]
                cycle.append(neighbor)
                return cycle[::-1]  # Reverse to get correct order
            elif state[neighbor] == NodeState.UNVISITED:
                parent[neighbor] = node
                cycle = self._dfs(neighbor, state, parent)
                if cycle:
                    return cycle

        state[node] = NodeState.VISITED
        return None

# Example usage
processes = [
    Process("P1", holding={"R1"}, waiting_for={"R2"}),
    Process("P2", holding={"R2"}, waiting_for={"R1"}),
]

detector = DeadlockDetector(processes)
cycle = detector.detect_cycle()

if cycle:
    print(f"DEADLOCK DETECTED! Cycle: {' → '.join(cycle)}")
    # Output: DEADLOCK DETECTED! Cycle: P1 → P2 → P1
else:
    print("No deadlock detected.")
```

For multi-instance resources, a cycle in the resource allocation graph is necessary but not sufficient for deadlock. Additional analysis (like the banker's algorithm) is needed because a process requesting a resource might be satisfiable from available instances even while other processes hold some instances.
Visual representation makes circular wait immediately apparent. Let's examine several scenarios to build intuition.
SCENARIO 1: Two-Process Deadlock (Classic)

P1 requests R2; R2 is assigned to P2; P2 requests R1; R1 is assigned to P1.

CYCLE: P1 → P2 → P1

SCENARIO 2: Three-Process Deadlock

P1 holds R1 and wants R2; P2 holds R2 and wants R3; P3 holds R3 and wants R1.

CYCLE: P1 (H:R1, W:R2) → P2 (H:R2, W:R3) → P3 (H:R3, W:R1) → P1

SCENARIO 3: No Deadlock (Acyclic)

P1 wants R1; R1 is held by P2; P2 is running and not waiting for any resource.
P2 will eventually release R1, and P1 will then acquire R1 and proceed.

NO CYCLE → NO DEADLOCK

Key Visual Insight:
In a deadlock cycle, you can trace a closed loop through the graph by following alternating request and assignment edges: P → R (request), then R → P′ (assignment), and so on. If the trace returns to its starting point, circular wait exists. If it reaches a non-waiting process or a free resource, there is no cycle through that path.
Understanding why all four conditions are necessary—and why circular wait is the "final piece"—deepens comprehension of deadlock theory.
The Logical Chain:
1. MUTUAL EXCLUSION
└── Resources cannot be shared
└── Therefore: requesting process must WAIT if resource is held
2. HOLD AND WAIT
└── Waiting process holds other resources
└── Therefore: those resources are unavailable to others
└── Creates potential for CHAIN of waiting
3. NO PREEMPTION
└── Resources cannot be taken from holders
└── Therefore: waiting is permanent until voluntary release
└── Chains become STATIC (unchanging without progress)
4. CIRCULAR WAIT
└── Waiting chain forms a CYCLE
└── No process in cycle can proceed (each waits for another in cycle)
└── No process in cycle will release (can't proceed without waited resource)
└── DEADLOCK: permanent, irresolvable stalemate
Why No Subset Suffices:
Without mutual exclusion: Resources are shareable; no waiting occurs; no deadlock.
Without hold-and-wait: Waiting processes hold nothing; no blocking chains form.
Without no-preemption: Resources can be taken; cycles can be broken by preemption.
Without circular wait: Waiting chains are linear; the end of the chain can proceed, triggering cascading progress.
Only with ALL FOUR: The cycle ensures everyone waits, non-preemption ensures waiting persists, hold-and-wait ensures waiting creates blocking, and mutual exclusion ensures blocking is necessary.
For single-instance resources, the four conditions are both necessary AND sufficient—their simultaneous occurrence guarantees deadlock. For multi-instance resources, they remain necessary but additional conditions determine sufficiency (e.g., whether requests can be satisfied from available instances).
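To make the necessary-but-not-sufficient point concrete, here is a sketch of the standard detection-by-reduction idea for multi-instance resources (process and resource names are illustrative): repeatedly "finish" any process whose outstanding requests can be met from available instances, releasing everything it holds; whatever remains unfinishable is deadlocked.

```python
def deadlocked_processes(available, holding, requests):
    """Graph-reduction deadlock check for multi-instance resources.
    Returns the set of processes that can never finish (empty = no deadlock)."""
    avail = dict(available)
    remaining = set(holding)
    progress = True
    while progress:
        progress = False
        for p in sorted(remaining):
            req = requests.get(p, {})
            # Can p's outstanding requests be satisfied right now?
            if all(avail.get(r, 0) >= n for r, n in req.items()):
                # Assume p runs to completion and releases all it holds
                for r, n in holding.get(p, {}).items():
                    avail[r] = avail.get(r, 0) + n
                remaining.discard(p)
                progress = True
    return remaining

# R1 has 2 instances (P1 and P3 each hold one); R2 has 1 (held by P2).
# The allocation graph contains the cycle P1 -> R2 -> P2 -> R1 -> P1,
# yet there is no deadlock: P3 will finish and free an R1 instance.
holding  = {"P1": {"R1": 1}, "P2": {"R2": 1}, "P3": {"R1": 1}}
requests = {"P1": {"R2": 1}, "P2": {"R1": 1}, "P3": {}}
print(deadlocked_processes({"R1": 0, "R2": 0}, holding, requests))  # set()
```

Dropping P3 from the example turns the same cycle into a genuine deadlock, since no reduction step can ever free an instance.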
Circular wait is often the most practical condition to break. The key technique is resource ordering: impose a total order on all resources and require that processes request resources only in increasing order.
Why Resource Ordering Prevents Circular Wait:
Proof by contradiction:
Assume a cycle exists: P₀ → P₁ → P₂ → ... → Pₙ → P₀
If Pᵢ → Pᵢ₊₁, then Pᵢ is waiting for resource Rⱼ held by Pᵢ₊₁
By resource ordering: Pᵢ can only request Rⱼ if Rⱼ > all resources Pᵢ holds
Since Pᵢ₊₁ holds Rⱼ and waits for Rₖ, by ordering: Rₖ > Rⱼ
Following the cycle: each request is for a strictly higher-numbered resource, so resource numbers strictly increase around the chain
But the cycle closes: Pₙ → P₀, so the chain of strictly increasing numbers returns to where it began
Some resource's number would then have to be strictly greater than itself
Contradiction! The cycle cannot exist.
∴ Resource ordering prevents circular wait.
```c
#include <pthread.h>
#include <stdio.h>

// Assign resource ordering: lock_A(1) < lock_B(2) < lock_C(3)
// Rule: Always acquire in increasing order: A before B before C

pthread_mutex_t lock_A = PTHREAD_MUTEX_INITIALIZER;  // Order = 1
pthread_mutex_t lock_B = PTHREAD_MUTEX_INITIALIZER;  // Order = 2
pthread_mutex_t lock_C = PTHREAD_MUTEX_INITIALIZER;  // Order = 3

// CORRECT: Both threads acquire locks in same order - NO DEADLOCK POSSIBLE
void *thread_1(void *arg) {
    // Needs locks A and B - acquire in order A, then B
    pthread_mutex_lock(&lock_A);  // Acquire order-1 lock first
    printf("Thread 1: Acquired lock A\n");

    pthread_mutex_lock(&lock_B);  // Then order-2
    printf("Thread 1: Acquired lock B\n");

    // Work with both locks
    do_work();

    pthread_mutex_unlock(&lock_B);
    pthread_mutex_unlock(&lock_A);
    return NULL;
}

void *thread_2(void *arg) {
    // Also needs locks A and B - MUST acquire in order A, then B
    pthread_mutex_lock(&lock_A);  // Same order as thread_1: A first
    printf("Thread 2: Acquired lock A\n");

    pthread_mutex_lock(&lock_B);  // Then B
    printf("Thread 2: Acquired lock B\n");

    // Work with both locks
    do_work();

    pthread_mutex_unlock(&lock_B);
    pthread_mutex_unlock(&lock_A);
    return NULL;
}

// Why this works:
// - If thread_1 holds A, thread_2 waits for A (holding nothing)
// - If thread_2 holds A, thread_1 waits for A (holding nothing)
// - No thread ever holds a higher resource while waiting for a lower one
// - Therefore, no cycle can form

// WRONG: A thread acquiring out of order would violate the protocol
void *thread_WRONG(void *arg) {
    // VIOLATION: Acquiring B before A (order 2 before order 1)
    pthread_mutex_lock(&lock_B);  // ERROR: Acquiring higher-order lock first
    pthread_mutex_lock(&lock_A);  // Now holds higher, wants lower = VIOLATION
    // This creates deadlock potential!
    return NULL;
}
```

In practice, resource ordering is often based on: memory address (lower address = lower order), creation time (earlier = lower), logical hierarchy (parent before child), or an explicit numbering scheme.
The Linux kernel uses static ordering defined by lock class annotations and verified by the lockdep tool.
Lock ordering is the most widely deployed deadlock prevention technique in production systems. Let's examine how real systems implement it.
Linux Kernel Lock Ordering:
The Linux kernel defines strict lock ordering rules. The lockdep subsystem dynamically verifies that locks are always acquired in consistent order.
Kernel Lock Hierarchy (simplified example):
1. i_mutex (inode mutex) ← Acquire first
2. mapping->tree_lock
3. page->mapping->i_mmap_mutex
4. mm->mmap_sem
5. mm->page_table_lock ← Acquire last
Rule: If holding lock at level N, can only acquire locks at level > N
When lockdep detects an out-of-order acquisition, it warns (or panics in debug builds), catching potential deadlocks before they cause hangs.
```python
import threading
from functools import total_ordering

@total_ordering
class OrderedLock:
    """
    Lock wrapper that enforces acquisition ordering.
    Locks must be acquired in order of their assigned order number.
    """

    # Thread-local storage for tracking held locks
    _held_locks = threading.local()

    def __init__(self, name: str, order: int):
        self.name = name
        self.order = order
        self._lock = threading.Lock()

    def __lt__(self, other):
        return self.order < other.order

    def __eq__(self, other):
        return self.order == other.order

    def acquire(self):
        """Acquire lock, enforcing ordering protocol."""
        held = getattr(self._held_locks, 'locks', [])

        # Check ordering: all held locks must have lower order
        for held_lock in held:
            if held_lock.order >= self.order:
                raise RuntimeError(
                    f"Lock ordering violation! "
                    f"Cannot acquire {self.name} (order={self.order}) "
                    f"while holding {held_lock.name} (order={held_lock.order}). "
                    f"Higher-order lock held while requesting lower-order lock."
                )

        self._lock.acquire()
        held.append(self)
        self._held_locks.locks = held
        print(f"[Thread {threading.current_thread().name}] "
              f"Acquired {self.name} (order {self.order})")

    def release(self):
        """Release lock, updating held locks list."""
        self._lock.release()
        held = getattr(self._held_locks, 'locks', [])
        held.remove(self)
        self._held_locks.locks = held
        print(f"[Thread {threading.current_thread().name}] "
              f"Released {self.name}")

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *args):
        self.release()

# Define locks with explicit ordering
lock_users = OrderedLock("users", order=1)                # Lowest order
lock_accounts = OrderedLock("accounts", order=2)
lock_transactions = OrderedLock("transactions", order=3)  # Highest order

def transfer_funds(from_user, to_user, amount):
    """Safe transfer: acquires locks in defined order."""
    with lock_users:                 # Order 1 - first
        with lock_accounts:          # Order 2 - second
            with lock_transactions:  # Order 3 - third
                # All three locks held in order
                perform_transfer(from_user, to_user, amount)

def unsafe_transfer(from_user, to_user, amount):
    """VIOLATION: Would raise RuntimeError."""
    with lock_transactions:  # Order 3
        with lock_users:     # Order 1 - VIOLATION!
            # RuntimeError: Cannot acquire users (order=1)
            # while holding transactions (order=3)
            pass
```

Database Lock Ordering:
Databases often use data-based ordering rather than hardcoded hierarchies:
-- Ordering strategy: Always lock by primary key order
-- Transfer from account 100 to account 50:
-- CORRECT: Lock lower ID first (50), then higher (100)
SELECT * FROM accounts WHERE id = 50 FOR UPDATE;
SELECT * FROM accounts WHERE id = 100 FOR UPDATE;
-- Another transfer: from 200 to 150
SELECT * FROM accounts WHERE id = 150 FOR UPDATE; -- Lower first
SELECT * FROM accounts WHERE id = 200 FOR UPDATE;
-- This ordering is CONSISTENT across all transactions
-- No circular wait possible regardless of transfer direction
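The same data-based convention carries over to application code: sort the keys before locking, so every transaction touches rows in the same order regardless of transfer direction. A minimal sketch, where the lock table and function names are illustrative:

```python
import threading

# Hypothetical in-process row locks keyed by account primary key
account_locks = {acct_id: threading.Lock() for acct_id in (50, 100, 150, 200)}

def lock_accounts(*ids):
    """Acquire account locks in ascending primary-key order,
    mirroring the SQL convention above."""
    acquired = []
    for acct_id in sorted(set(ids)):  # data-based ordering: sort the keys
        account_locks[acct_id].acquire()
        acquired.append(account_locks[acct_id])
    return acquired

def transfer(src, dst, amount):
    held = lock_accounts(src, dst)  # same lock order for both directions
    try:
        pass  # debit src, credit dst
    finally:
        for lock in reversed(held):
            lock.release()

transfer(100, 50, 25.0)  # locks 50 first, then 100
transfer(50, 100, 10.0)  # also locks 50 first, then 100 - no circular wait
```

Because the order is derived from the data itself, no hardcoded hierarchy needs to be maintained as the set of accounts grows.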
Lock ordering requires knowing all needed resources upfront, and it cannot accommodate genuine partial-order conflicts (cases where one context legitimately needs A before B and another needs B before A). For such cases, alternative strategies such as lock timeouts or the try-lock pattern may be necessary.
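One such alternative is the try-lock (back-off) pattern: block on at most one lock at a time, and release everything on failure, trading a small risk of livelock (mitigated by random jitter) for immunity to circular wait. A minimal sketch using Python's `threading` locks:

```python
import threading
import time
import random

lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_both(first, second, retries=100):
    """Block on at most one lock at a time: take `first`, then *try* `second`.
    On failure, release `first` and retry, so this thread never waits while
    holding a lock - a circular wait can never include it."""
    for _ in range(retries):
        first.acquire()
        if second.acquire(blocking=False):  # try-lock: never blocks
            return True
        first.release()                       # back off: hold nothing while waiting
        time.sleep(random.uniform(0, 0.001))  # jitter reduces livelock risk
    return False

got_both = acquire_both(lock_a, lock_b)
if got_both:
    try:
        pass  # critical section protected by both locks
    finally:
        lock_b.release()
        lock_a.release()
```

Note the trade-off: this pattern gives up the guarantee of bounded waiting that strict ordering provides, which is why the retry count and jitter matter in practice.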
In complex systems with many resources and dynamic resource needs, managing circular wait becomes challenging. Understanding advanced scenarios prepares you for real-world system design.
Multi-Layer Systems:
Modern systems have resources at multiple layers:
Application Layer: [Application locks, file handles, connections]
↓
Middleware Layer: [Connection pools, cache entries, sessions]
↓
Database Layer: [Row locks, page locks, table locks]
↓
OS Layer: [File locks, mutexes, semaphores]
↓
Hardware Layer: [Device access, memory regions]
Circular wait can span layers: one process holds a database connection while waiting for an application lock, while another process holds the application lock while waiting for a database connection. Cross-layer deadlocks are particularly difficult to diagnose.
Distributed Circular Wait:
In distributed systems, circular wait spans machines:
Node A (NYC): Process P1 holds lock L1, waits for L2 on Node B
Node B (London): Process P2 holds lock L2, waits for L3 on Node C
Node C (Tokyo): Process P3 holds lock L3, waits for L1 on Node A
Cycle: P1 → P2 → P3 → P1 (spans 3 continents!)
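Detecting such a cycle without a global view is the job of edge-chasing algorithms such as Chandy-Misra-Haas: a blocked process sends a probe along its wait-for edge, each blocked recipient forwards it onward, and the initiator seeing its own probe return proves a cycle. A single-machine sketch of the idea (the labels follow the three-node example above; a real implementation would exchange probe messages between nodes):

```python
def probe(initiator, wait_for):
    """Edge-chasing sketch: forward a probe along wait-for edges.
    If the probe ever arrives back at the initiator, a cycle exists."""
    seen = set()
    cur = wait_for.get(initiator)
    while cur is not None and cur not in seen:
        if cur == initiator:
            return True  # probe returned to sender: distributed deadlock
        seen.add(cur)
        cur = wait_for.get(cur)  # forward the probe to the next waiter
    return False

# Wait-for edges as each node would report them locally
edges = {"P1": "P2", "P2": "P3", "P3": "P1"}
print(probe("P1", edges))                     # True: the cycle spans all 3 nodes
print(probe("P1", {"P1": "P2", "P2": "P3"}))  # False: linear chain, P3 not blocked
```

The appeal of edge chasing is that no node ever needs the full wait-for graph; each only knows its own outgoing edge.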
Distributed cycle detection is expensive: the wait-for graph is scattered across nodes, so finding a cycle requires exchanging messages or assembling a global snapshot, all over links with latency and possible failures. Mitigations vary by system type:
| System Type | Challenges | Primary Mitigation |
|---|---|---|
| Single-threaded | None - no concurrency | N/A |
| Multi-threaded, few locks | Manageable manually | Lock ordering documentation |
| Multi-threaded, many locks | Hard to track all orderings | Automated lock order checking (lockdep) |
| Multi-process, shared memory | OS-level synchronization | Hierarchy by subsystem |
| Database-centric | Complex transaction patterns | Deadlock detection + victim selection |
| Microservices | Distributed locks | Timeouts, idempotent retries |
| Global scale | Network partitions, clock skew | Consensus protocols, lease-based locks |
We have examined circular wait—the fourth and final necessary condition for deadlock—in comprehensive depth.
What's Next:
We have now thoroughly examined all four necessary conditions for deadlock. The next page synthesizes this knowledge, examining why all four conditions must hold simultaneously for deadlock to occur. Understanding this interplay is crucial for selecting optimal prevention strategies and for recognizing when a system is or is not vulnerable to deadlock.
You now possess a thorough understanding of circular wait as a deadlock condition: its definition, emergence from resource request patterns, detection through graph analysis, its role completing deadlock requirements, and prevention through resource ordering. Next, we examine the synthesis of all four conditions and their simultaneous necessity.