In 1971, Edward G. Coffman, Jr., along with Melvin J. Elphick and Arie Shoshani, published a landmark paper that crystallized our understanding of deadlocks. They identified four conditions that must all hold simultaneously for a deadlock to occur. These conditions, now known as the Coffman Conditions, are each necessary for deadlock (and, when every resource has only a single instance, jointly sufficient as well).
This insight is transformative: if you can eliminate any single condition, deadlock becomes impossible. Prevention strategies all target one or more of these conditions.
By the end of this page, you will have a deep understanding of all four Coffman conditions—not just what they are, but why they're necessary, how they manifest in real code, and which conditions are practical targets for prevention. This knowledge is the foundation for designing deadlock-free systems.
```
┌─────────────────────────────────────────────────────────────────┐
│                   DEADLOCK REQUIRES ALL FOUR                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  1. MUTUAL EXCLUSION            2. HOLD AND WAIT                 │
│  ┌───────────────────┐          ┌───────────────────┐            │
│  │ Only one process  │          │ Process holds     │            │
│  │ can use resource  │          │ resources while   │            │
│  │ at a time         │          │ waiting for more  │            │
│  └───────────────────┘          └───────────────────┘            │
│                                                                  │
│  3. NO PREEMPTION               4. CIRCULAR WAIT                 │
│  ┌───────────────────┐          ┌───────────────────┐            │
│  │ Resources cannot  │          │ P1 → P2 → P3 → P1 │            │
│  │ be taken away     │          │ Circular chain of │            │
│  │ forcibly          │          │ waiting processes │            │
│  └───────────────────┘          └───────────────────┘            │
│                                                                  │
│         Break ANY ONE condition  =  No Deadlock                  │
└─────────────────────────────────────────────────────────────────┘
```

Definition: At least one resource must be held in a non-shareable mode. That is, only one process at a time can use the resource. If another process requests the resource, the requesting process must wait until the resource has been released.
Why it's necessary for deadlock:
If all resources could be shared by multiple processes simultaneously, there would be no waiting—every process could access what it needs immediately. Deadlock requires waiting, and waiting requires exclusive access.
The fundamental tension:
Mutual exclusion exists for good reason: many resources are inherently non-shareable. Consider a shared counter, where exclusive access prevents corruption but opens the door to deadlock:
```typescript
class SharedCounter {
  private value: number = 0;
  readonly mutex = new Mutex(); // exposed so coordinating code can lock across counters

  // Without mutual exclusion: DATA CORRUPTION
  async incrementUnsafe(): Promise<number> {
    // Race condition: read-modify-write is not atomic
    const current = this.value;
    await simulateWork(); // Context switch possible here!
    this.value = current + 1;
    return this.value;
  }

  // With mutual exclusion: CORRECT but enables deadlock
  async incrementSafe(): Promise<number> {
    await this.mutex.acquire(); // Only one thread at a time
    try {
      const current = this.value;
      await simulateWork();
      this.value = current + 1;
      return this.value;
    } finally {
      this.mutex.release();
    }
  }
}

// If two SharedCounters need to update atomically...
// We've introduced potential for deadlock
class AtomicTransfer {
  async transfer(from: SharedCounter, to: SharedCounter, amount: number) {
    await from.mutex.acquire(); // Hold this
    await to.mutex.acquire();   // Wait for this - DEADLOCK RISK
    try {
      // Transfer logic
    } finally {
      to.mutex.release();
      from.mutex.release();
    }
  }
}
```

We often cannot eliminate mutual exclusion because the resource genuinely cannot be shared safely. This makes mutual exclusion the hardest condition to break. Prevention strategies more commonly target the other three conditions.
When mutual exclusion CAN be reduced:
Some resources can be redesigned to allow more sharing:
- Read-write locks: Allow multiple readers OR a single writer. Reduces mutual exclusion for read-heavy workloads.
- Spooling: Instead of giving exclusive printer access, jobs write to a spool queue. Only the spooler daemon has exclusive access.
- Optimistic concurrency: Instead of locking, attempt operations and detect conflicts. Retry on conflict. (See the sketch after this list.)
- Immutable data structures: If data never changes, it can be shared freely without locks.
- Copy-on-write: Share data until a write is needed, then create a copy.
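Of these, optimistic concurrency is worth a closer look, since it removes locking entirely. Below is a minimal sketch of the idea; OptimisticCounter and its version-checked commit are illustrative names, and real systems implement the same pattern with compare-and-swap instructions or row versions in a database.

```typescript
interface VersionedValue {
  version: number;
  value: number;
}

class OptimisticCounter {
  private current: VersionedValue = { version: 0, value: 0 };

  read(): VersionedValue {
    return { ...this.current };
  }

  // Compare-and-swap style commit: succeeds only if nobody wrote in between
  tryCommit(expectedVersion: number, newValue: number): boolean {
    if (this.current.version !== expectedVersion) {
      return false; // conflict detected
    }
    this.current = { version: expectedVersion + 1, value: newValue };
    return true;
  }
}

// Usage: retry on conflict instead of waiting on a lock.
// (In JavaScript the interleaving would come from async code or
// multiple clients; the loop illustrates the general pattern.)
function increment(counter: OptimisticCounter): void {
  while (true) {
    const snapshot = counter.read();
    if (counter.tryCommit(snapshot.version, snapshot.value + 1)) return;
    // Another writer got there first; retry with fresh state
  }
}
```

Because nothing is ever locked, there is nothing to hold while waiting, so the deadlock conditions cannot arise; the cost is wasted work when commits conflict and must be retried. The read-write lock approach from the first item in the list is shown next: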
```typescript
class ReadWriteCounter {
  private value: number = 0;
  private rwLock = new ReadWriteLock();

  // Multiple threads can read simultaneously
  async getValue(): Promise<number> {
    await this.rwLock.acquireRead(); // Shared access
    try {
      return this.value;
    } finally {
      this.rwLock.releaseRead();
    }
  }

  // Only one thread can write
  async increment(): Promise<number> {
    await this.rwLock.acquireWrite(); // Exclusive access
    try {
      this.value += 1;
      return this.value;
    } finally {
      this.rwLock.releaseWrite();
    }
  }
}

// Result: Read operations don't block each other
// Deadlock risk remains for write operations
```

Definition: A process is currently holding at least one resource and is waiting to acquire additional resources that are currently being held by other processes.
Why it's necessary for deadlock:
If processes released all their resources before requesting new ones, there would be no 'holding while waiting'—no chain of dependencies could form. The deadlock cycle requires that each process is simultaneously a holder (of what others need) and a waiter (for what others hold).
The pattern in code:
```typescript
async function holdAndWaitExample() {
  // HOLD: Acquire first resource
  await resourceA.acquire();
  console.log("Holding resource A");

  // WAIT: Try to acquire second resource while holding first
  // If another thread holds B and waits for A -> DEADLOCK
  await resourceB.acquire(); // <-- This is the "wait" part
  console.log("Now holding both A and B");

  try {
    // Critical section using both resources
    await performOperation(resourceA, resourceB);
  } finally {
    resourceB.release();
    resourceA.release();
  }
}

// The window between acquiring A and acquiring B is dangerous.
// Another thread could be in the same window with B acquired,
// waiting for A.
```

Breaking Hold and Wait:
This condition is a practical target for prevention. Two main strategies exist:
Strategy 1: All-or-Nothing (Atomic acquisition)
Require that a process requests all resources it will need before beginning execution. Either all resources are granted, or none are.
```typescript
class ResourceManager {
  private resources: Map<string, Mutex> = new Map();
  private managerLock = new Mutex();

  async acquireAll(resourceIds: string[]): Promise<void> {
    await this.managerLock.acquire();
    try {
      // Check if ALL resources are available
      for (const id of resourceIds) {
        const resource = this.resources.get(id);
        if (resource?.isLocked) {
          // Cannot acquire all - must wait
          // Release manager lock and retry
          throw new ResourcesNotAvailableError(resourceIds);
        }
      }
      // All are available - acquire atomically
      for (const id of resourceIds) {
        await this.resources.get(id)!.acquire();
      }
    } finally {
      this.managerLock.release();
    }
  }

  async releaseAll(resourceIds: string[]): Promise<void> {
    for (const id of resourceIds) {
      this.resources.get(id)?.release();
    }
  }
}

// Usage: No hold-and-wait possible
async function safeOperation() {
  // Request everything upfront
  await resourceManager.acquireAll(['A', 'B', 'C']);
  try {
    // Already have all resources - no waiting while holding
    await performOperation();
  } finally {
    await resourceManager.releaseAll(['A', 'B', 'C']);
  }
}
```

Strategy 2: Release Before Request
Require that a process releases all currently held resources before requesting new ones.
```typescript
async function releaseBeforeRequestPattern() {
  // Phase 1: Work with resource A only
  await resourceA.acquire();
  try {
    await workWithA();
  } finally {
    resourceA.release(); // Release A BEFORE requesting B
  }

  // Phase 2: Work with resource B only
  await resourceB.acquire();
  try {
    await workWithB();
  } finally {
    resourceB.release();
  }

  // No hold-and-wait: never holding one while waiting for another
}

// Problem: What if we need BOTH resources simultaneously?
// We must redesign the operation or use all-or-nothing.
```

Databases use a related discipline called two-phase locking: a transaction acquires locks as it needs them (growing phase) and releases them only at commit or rollback (shrinking phase). The conservative variant, which requests every lock up front, is essentially the all-or-nothing strategy and does break hold-and-wait. Ordinary strict two-phase locking does not (transactions keep holding locks while waiting for more), which is why databases also rely on deadlock detection between transactions.
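To make the two-phase discipline concrete, here is a minimal sketch that reuses the hypothetical Mutex primitive from the earlier examples; TwoPhaseTransaction and its methods are illustrative names, not a real database API.

```typescript
class TwoPhaseTransaction {
  private held: Mutex[] = [];
  private shrinking = false;

  // Growing phase: locks may be acquired, but earlier locks stay held,
  // so hold-and-wait is still present between concurrent transactions.
  async lock(mutex: Mutex): Promise<void> {
    if (this.shrinking) {
      throw new Error('2PL violation: cannot acquire after releasing');
    }
    await mutex.acquire();
    this.held.push(mutex);
  }

  // Shrinking phase: release everything at once
  // (strict 2PL releases only here; rollback would release the same way)
  commit(): void {
    this.shrinking = true;
    for (const mutex of this.held.reverse()) {
      mutex.release();
    }
    this.held = [];
  }
}
```

A conservative variant would call lock for every mutex before doing any work, which turns this into the all-or-nothing strategy shown above.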
Definition: Resources cannot be forcibly taken away from a process. A resource can only be released voluntarily by the process holding it, after that process has completed its task.
Why it's necessary for deadlock:
If resources could be preempted (taken away forcibly), the system could break any cycle by reclaiming resources from one of the waiting processes. The permanent nature of deadlock depends on resources being held indefinitely without possibility of forced release.
The challenge:
Many resources genuinely cannot be preempted without causing harm: forcibly taking a mutex away, for example, would leave its critical section half-finished.
When preemption IS possible:
Some resources have state that can be saved and restored, making preemption safe:
| Resource Type | Preemptable? | Reason |
|---|---|---|
| CPU time slices | Yes | Context can be saved and restored perfectly |
| Memory pages | Yes | Can be swapped to disk and restored later |
| Mutex/lock | No | Critical section would be left incomplete |
| Database row lock | No | Transaction integrity would be violated |
| Printer (mid-job) | No | Physical output cannot be 'undone' |
| Network socket | Partially | Can be closed, but data may be lost |
```typescript
class PreemptableResourceManager {
  private holder: string | null = null;
  private state: any = null;

  // (getPriority, saveState, restoreState, waitForRelease, and emitEvent
  //  are helpers elided from this sketch)
  async acquire(processId: string, priority: number): Promise<void> {
    if (this.holder === null) {
      this.holder = processId;
      return;
    }

    // If higher priority process needs resource
    const holderPriority = this.getPriority(this.holder);
    if (priority > holderPriority) {
      // Preempt: force current holder to release
      const previousHolder = this.holder;
      this.saveState(previousHolder);        // Save current holder's state
      this.notifyPreemption(previousHolder); // Force rollback

      // Grant to higher priority process
      this.holder = processId;
      this.restoreState(processId); // Restore if previously preempted
    } else {
      // Wait in queue
      await this.waitForRelease(processId);
    }
  }

  private notifyPreemption(processId: string): void {
    // Tell the process to roll back its work
    // and release the resource
    this.emitEvent(`preemption:${processId}`, {
      resource: this,
      savedState: this.state,
    });
  }
}

// Usage: Process must handle preemption gracefully
// (id, priority, performOperation, rollback, and scheduleRetry are elided)
class PreemptionAwareWorker {
  async doWork(resourceManager: PreemptableResourceManager) {
    try {
      await resourceManager.acquire(this.id, this.priority);
      await this.performOperation();
    } catch (error) {
      if (error instanceof PreemptionError) {
        // Roll back and retry later
        await this.rollback();
        await this.scheduleRetry();
      }
      throw error;
    }
  }
}
```

Database deadlock detection works by selecting a 'victim' transaction to abort. This is a form of preemption: the transaction's locks are forcibly released, and it must roll back and retry. The difference from true preemption is that the entire transaction's work is discarded, not saved and resumed.
Strategies to enable preemption:
- Checkpointing: Periodically save state so work can be resumed after preemption
- Idempotent operations: Design operations so they can be safely retried
- Compensation transactions: For each operation, define an 'undo' operation
- Timeout-based release: If a lock isn't released within a time limit, assume failure and preempt (see the sketch after this list)
- Priority-based preemption: Higher-priority processes can force lower-priority to yield
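The timeout strategy above preempts the holder; a simpler and very common variant, sketched below, has the waiter time out, release everything it holds, and retry. The sketch reuses the hypothetical Mutex, resourceA/resourceB, and performOperation helpers used throughout this page; acquireWithTimeout and LockTimeoutError are illustrative names.

```typescript
class LockTimeoutError extends Error {}

async function acquireWithTimeout(mutex: Mutex, ms: number): Promise<void> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new LockTimeoutError('lock wait timed out')), ms);
  });
  try {
    // NOTE: a production version would also cancel the pending acquire so the
    // lock is not granted after we have already given up on it.
    await Promise.race([mutex.acquire(), timeout]);
  } finally {
    clearTimeout(timer);
  }
}

// Usage: back off and retry instead of holding A while waiting on B forever
async function transferWithBackoff(): Promise<void> {
  while (true) {
    await resourceA.acquire();
    try {
      await acquireWithTimeout(resourceB, 100);
      try {
        await performOperation(resourceA, resourceB);
        return; // success
      } finally {
        resourceB.release();
      }
    } catch (err) {
      if (!(err instanceof LockTimeoutError)) throw err;
      // Could not get B in time: fall through, release A, and try again
    } finally {
      resourceA.release();
    }
  }
}
```

Either way, the effect is the same: a would-be deadlock is broken because one participant gives up, releases what it holds, and retries.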
Definition: There exists a set {P₀, P₁, ..., Pₙ} of waiting processes such that P₀ is waiting for a resource held by P₁, P₁ is waiting for a resource held by P₂, ..., Pₙ₋₁ is waiting for a resource held by Pₙ, and Pₙ is waiting for a resource held by P₀.
Why it's necessary for deadlock:
Without a cycle, there's always at least one process that isn't waiting for anything in the chain—that process can complete, release its resources, and allow others to proceed. The cycle is what makes the wait permanent.
Visualizing circular wait:
```
              ┌──────────┐
              │    P₀    │
              └────┬─────┘
       waits       │       holds
       for R₁      │       R₀
                   ▼
   ┌──────────┐          ┌──────────┐
┌──│    P₁    │◄─────────│    Pₙ    │──┐
│  └──────────┘          └──────────┘  │
│                                      │
│  waits for R₂          waits for R₀  │
│       │                      ▲       │
│       ▼                      │       │
│  ┌──────────┐          ┌──────────┐  │
│  │    P₂    │─────────►│   ...    │  │
│  └──────────┘          └──────────┘  │
│                                      │
└──────────────────────────────────────┘

P₀ holds R₀, waits for R₁ (held by P₁)
P₁ holds R₁, waits for R₂ (held by P₂)
...
Pₙ holds Rₙ, waits for R₀ (held by P₀)

CYCLE COMPLETE → DEADLOCK
```

Breaking circular wait is the most commonly used prevention strategy because it doesn't require changing resource semantics. The key technique is resource ordering.
```typescript
// Assign a global order to all resources
enum ResourceOrder {
  DATABASE_CONNECTION = 1,
  USER_CACHE = 2,
  SESSION_LOCK = 3,
  FILE_SYSTEM = 4,
  NETWORK_SOCKET = 5,
}

// Rule: Always acquire resources in ascending order
class OrderedResourceManager {
  private resources: Map<ResourceOrder, Mutex> = new Map(); // one mutex per resource
  private heldResources: Set<ResourceOrder> = new Set();

  async acquire(resource: ResourceOrder): Promise<void> {
    // Enforce ordering: can only acquire higher-numbered resources
    const maxHeld = Math.max(...this.heldResources, 0);
    if (resource <= maxHeld) {
      throw new Error(
        `Resource ordering violation: Cannot acquire ${resource} ` +
        `while holding resources up to ${maxHeld}. ` +
        `Release higher-numbered resources first.`
      );
    }

    await this.resources.get(resource)!.acquire();
    this.heldResources.add(resource);
  }

  release(resource: ResourceOrder): void {
    this.resources.get(resource)!.release();
    this.heldResources.delete(resource);
  }
}

// BEFORE (deadlock possible):
// Thread 1: acquire(A), acquire(B)
// Thread 2: acquire(B), acquire(A)

// AFTER (deadlock impossible):
// Thread 1: acquire(A), acquire(B) // A < B, valid
// Thread 2: acquire(A), acquire(B) // Must acquire in same order!
// Both threads acquire A then B → No cycle possible
```

If all threads acquire resources in the same order (say, always acquire locks in alphabetical order by name), then no circular wait can form. Thread A waiting for lock X means Thread A doesn't hold any lock that comes after X. Any thread holding X must also follow the ordering, so it can only be waiting for locks after X—never forming a cycle back to anything Thread A holds.
Practical considerations for resource ordering:
```typescript
class BankAccount {
  readonly id: string;
  private balance: number;
  private lock = new Mutex();

  constructor(id: string, initialBalance: number) {
    this.id = id;
    this.balance = initialBalance;
  }

  // ORDER accounts by ID to prevent deadlock
  static orderAccounts(a: BankAccount, b: BankAccount): [BankAccount, BankAccount] {
    return a.id < b.id ? [a, b] : [b, a];
  }

  static async transfer(
    from: BankAccount,
    to: BankAccount,
    amount: number
  ): Promise<void> {
    // Always acquire in consistent order regardless of transfer direction
    const [first, second] = BankAccount.orderAccounts(from, to);

    await first.lock.acquire();
    try {
      await second.lock.acquire();
      try {
        if (from.balance < amount) {
          throw new Error('Insufficient funds');
        }
        from.balance -= amount;
        to.balance += amount;
      } finally {
        second.lock.release();
      }
    } finally {
      first.lock.release();
    }
  }
}

// Now these can run concurrently without deadlock:
await Promise.all([
  BankAccount.transfer(accountA, accountB, 100),
  BankAccount.transfer(accountB, accountA, 50),
]);
// Both acquire locks in same order (by ID), no circular wait
```

The four conditions are not independent—they form a logical progression that enables deadlock. Understanding their relationship helps in choosing prevention strategies:
```
MUTUAL EXCLUSION
   │
   │ Creates scarcity - resources cannot be shared
   ▼
HOLD AND WAIT
   │
   │ Creates chains - processes hold while waiting
   ▼
NO PREEMPTION
   │
   │ Creates permanence - chains cannot be broken
   ▼
CIRCULAR WAIT
   │
   │ Creates loops - the chain closes into a cycle
   ▼
DEADLOCK
```

Each condition builds on the previous:

- Mutual exclusion → resources are contested
- Hold and wait → contested resources form chains
- No preemption → chains persist
- Circular wait → chains form cycles → deadlock

| Condition | Ease of Breaking | Common Approach | When Practical |
|---|---|---|---|
| Mutual Exclusion | Very Difficult | Use shareable resources where possible | Only for inherently shareable data (read-only, spooled) |
| Hold and Wait | Moderate | Atomic acquisition of all resources | When resources needed are known upfront |
| No Preemption | Moderate to Difficult | Allow rollback and retry | When operations are idempotent or have compensation |
| Circular Wait | Easy | Impose global resource ordering | Almost always practical; the preferred approach |
In most systems, breaking circular wait through resource ordering is the most practical prevention strategy. It doesn't require changing resource semantics, doesn't reduce concurrency, and can be enforced at compile time in some languages. This is why lock ordering is emphasized in concurrent programming guides.
Combining strategies:
Real systems often layer several approaches, for example enforcing a lock ordering as the primary prevention while also using lock timeouts and runtime deadlock detection as backstops.
This defense-in-depth approach acknowledges that deadlock prevention may not be 100% effective, especially in complex systems with many developers.
For critical systems, we want to prove that deadlock cannot occur, not just hope our prevention measures work. This requires formal reasoning about the four conditions.
```typescript
// Using the TypeScript type system to enforce lock ordering at compile time
type LockLevel = 1 | 2 | 3 | 4 | 5;

// For each highest level already held (0 = nothing held), the levels that
// may still be acquired. Indexing this table gives the compile-time check.
type Above = {
  0: 1 | 2 | 3 | 4 | 5;
  1: 2 | 3 | 4 | 5;
  2: 3 | 4 | 5;
  3: 4 | 5;
  4: 5;
  5: never;
};

class TypedLock<Level extends LockLevel> {
  private locked = false;

  constructor(readonly level: Level) {}

  acquire(): void {
    this.locked = true;
  }
  release(): void {
    this.locked = false;
  }
}

// Context tracks the highest lock level currently held
type LockContext<Max extends 0 | LockLevel> = { maxHeld: Max };

// Can only acquire a lock whose level is above every lock already held
function acquireOrdered<Max extends 0 | LockLevel, Target extends Above[Max]>(
  ctx: LockContext<Max>,
  lock: TypedLock<Target>,
): LockContext<Target> {
  // Runtime check mirrors the compile-time constraint
  if (lock.level <= ctx.maxHeld) {
    throw new Error(
      `Order violation: cannot acquire ${lock.level} while holding up to ${ctx.maxHeld}`
    );
  }
  lock.acquire();
  return { maxHeld: lock.level };
}

// Example usage
const lockA = new TypedLock(1);
const lockB = new TypedLock(2);
const lockC = new TypedLock(3);

function criticalSection() {
  const ctx0: LockContext<0> = { maxHeld: 0 };
  const ctx1 = acquireOrdered(ctx0, lockA); // OK: nothing held yet
  const ctx2 = acquireOrdered(ctx1, lockC); // OK: 3 > 1
  // const ctx3 = acquireOrdered(ctx2, lockB); // TYPE ERROR: 2 is not above 3
}
```

Proof structure for resource ordering:
To prove that strict resource ordering prevents deadlock:
Assume deadlock exists: There is a cycle P₀ → R₀ → P₁ → R₁ → ... → Pₙ → Rₙ → P₀
Apply ordering constraint: Each Pᵢ holds Rᵢ₋₁ and waits for Rᵢ. By ordering rule, order(Rᵢ₋₁) < order(Rᵢ).
Derive contradiction: Following the cycle, order(R₀) < order(R₁) < ... < order(Rₙ). But P₀ holds Rₙ while waiting for R₀, so the ordering rule also requires order(Rₙ) < order(R₀). Together these give order(R₀) < order(R₀), which is impossible.
Conclusion: No cycle can exist under strict ordering. Deadlock is impossible. ∎
For mission-critical systems, tools like TLA+, SPIN, or Java Pathfinder can model-check concurrent systems to verify absence of deadlock. These tools exhaustively explore all possible interleavings of thread execution to prove that no deadlock state is reachable.
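To give a feel for what "exhaustively explore all interleavings" means, here is a toy checker in the same spirit (a minimal sketch of the idea only; it is not how TLA+ or SPIN actually work, and hasDeadlock and ThreadSpec are illustrative names). Each modeled thread acquires its locks in a fixed order and then releases them all; the checker enumerates every reachable state and reports whether any state leaves an unfinished thread with no possible move.

```typescript
type ThreadSpec = { name: string; locks: string[] }; // locks acquired in this order

type State = {
  holders: Map<string, number>; // lock name -> index of the thread holding it
  pc: number[];                 // per-thread progress counter
};

function hasDeadlock(threads: ThreadSpec[]): boolean {
  const encode = (s: State) =>
    JSON.stringify([[...s.holders.entries()].sort(), s.pc]);
  const seen = new Set<string>();
  const stack: State[] = [{ holders: new Map(), pc: threads.map(() => 0) }];

  while (stack.length > 0) {
    const s = stack.pop()!;
    const key = encode(s);
    if (seen.has(key)) continue;
    seen.add(key);

    let anyRunnable = false;
    let allDone = true;

    threads.forEach((thread, i) => {
      const step = s.pc[i];
      if (step > thread.locks.length) return; // this thread already finished
      allDone = false;

      const advance = (holders: Map<string, number>) => {
        const pc = s.pc.map((p, j) => (j === i ? p + 1 : p));
        stack.push({ holders, pc });
      };

      if (step === thread.locks.length) {
        // Final step: release every lock this thread holds
        anyRunnable = true;
        const holders = new Map(s.holders);
        for (const lock of thread.locks) holders.delete(lock);
        advance(holders);
      } else if (!s.holders.has(thread.locks[step])) {
        // Next lock is free: acquire it and advance
        anyRunnable = true;
        advance(new Map(s.holders).set(thread.locks[step], i));
      }
    });

    // Deadlock state: someone still has work to do, but nobody can move
    if (!allDone && !anyRunnable) return true;
  }
  return false;
}

// Opposite acquisition orders: an interleaving exists that deadlocks
console.log(hasDeadlock([
  { name: 'T1', locks: ['A', 'B'] },
  { name: 'T2', locks: ['B', 'A'] },
])); // true

// Consistent global ordering: no reachable state is deadlocked
console.log(hasDeadlock([
  { name: 'T1', locks: ['A', 'B'] },
  { name: 'T2', locks: ['A', 'B'] },
])); // false
```

Real model checkers add richer modeling languages, partial-order reduction, and liveness checking, but the core idea is the same exhaustive search over reachable states.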
We've done a deep dive into the theoretical foundation of deadlock prevention. To consolidate: deadlock requires mutual exclusion, hold and wait, no preemption, and circular wait to hold at the same time; eliminating any one of them is enough to prevent it, and in practice imposing a global lock ordering to break circular wait is usually the most practical choice.
What's next:
Now that we understand the conditions required for deadlock, the next page will explore prevention strategies in depth—practical techniques for eliminating each condition in real systems, with code examples and trade-off analysis.
You now have a rigorous understanding of the four Coffman conditions. This theoretical foundation is essential—you can't effectively prevent what you don't understand. The next page will translate this understanding into practical prevention strategies you can apply in your systems.