Consider a thought experiment: if the operating system could forcibly take resources from processes at any time, would deadlock be possible? Two processes waiting for each other's resources could simply have those resources preempted and reallocated—deadlock dissolved. Yet this rarely happens for most resource types. The no preemption condition—processes cannot be forcibly deprived of resources they hold—is the third necessary condition for deadlock.
No preemption means that once a process acquires a resource, that resource belongs to the process until the process voluntarily releases it. The operating system, scheduler, or other processes cannot commandeer the resource. This protection is often essential for correctness, but it also means that waiting processes must wait indefinitely until voluntary release occurs—which may never happen if the holder is itself waiting.
By the end of this page, you will thoroughly understand no preemption as a deadlock condition: its formal definition, the deep reasons why preemption is problematic for many resources, the contrast with preemptible resources like CPU time, and how introducing preemption can serve as a deadlock prevention strategy when feasible.
Formal Definition:
Resources cannot be preempted; a resource can only be released voluntarily by the process holding it, after that process has completed its task.
Let's dissect this definition:
1. Preemption — The forcible taking of a resource from a process against its will. The process has no say in when or whether the resource is taken.
2. Voluntary Release — The exclusive mechanism for resource deallocation. The holding process decides when to release, not the system or other processes.
3. Task Completion Linkage — Resources are held until the process completes whatever task required them. Partial completion followed by forced release is not permitted.
Coffman et al. stated this condition as: "Resources already allocated cannot be forcibly removed from the processes holding them." This emphasizes that the system lacks the power to reclaim resources—only the holding process can perform the release.
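As a concrete illustration of voluntary release (an addition to the original definition, not part of the Coffman formulation), POSIX mutexes enforce exactly this rule: with an error-checking mutex, an unlock attempted by any thread other than the owner fails with EPERM. The sketch below assumes a POSIX system and compilation with -pthread.

```c
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock;

/* A thread that tries to "preempt" the resource by unlocking a mutex
 * it does not own. With an error-checking mutex this is rejected. */
static void *intruder(void *arg) {
    (void)arg;
    int rc = pthread_mutex_unlock(&lock);   /* not the owner */
    if (rc == EPERM)
        printf("intruder: unlock refused (EPERM); only the holder may release\n");
    return NULL;
}

int main(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&lock, &attr);

    pthread_mutex_lock(&lock);              /* main thread becomes the holder */

    pthread_t t;
    pthread_create(&t, NULL, intruder, NULL);
    pthread_join(t, NULL);

    pthread_mutex_unlock(&lock);            /* voluntary release by the holder */
    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}
```

With a default (non-error-checking) mutex, the same cross-thread unlock is undefined behavior rather than a reported error; in neither case is it a supported way to preempt the holder.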
Mathematical Formalization:
Define a state transition function for resource allocation:
release(R, Pᵢ, t) → R becomes free at time t
The no-preemption condition states:
∀R, Pᵢ, t: release(R, Pᵢ, t) ⟹ Pᵢ initiated release(R) at time t
That is, every release event is caused by the holding process itself. There is no system-initiated release.
Contrast with Preemptive Systems:
In a preemptive model:
preempt(R, Pᵢ, t) → R taken from Pᵢ at time t, without Pᵢ's consent
If preemption were possible for all resources, the system could break any deadlock by preempting resources from one process in the cycle and granting them to another. No preemption disables this escape valve.
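To make that escape valve concrete, here is a hedged sketch of what a deadlock resolver could look like if a universal preempt operation existed. The types and primitives (cycle_t, preempt, grant) are invented for illustration; no real system offers them for arbitrary resources, which is exactly what the no-preemption condition captures.

```c
#include <stdio.h>

/* Hypothetical deadlock breaker: only workable if every resource supported
 * a safe preempt() operation, which most resources do not. */

typedef struct {
    int process_id;    /* this process...                                  */
    int resource_id;   /* ...holds this resource and waits for the next
                          edge's resource in the cycle                     */
} wait_edge_t;

typedef struct {
    wait_edge_t edges[16];
    int length;
} cycle_t;

/* Invented primitives: real systems cannot provide these for locks,
 * printers, transactions, or most other resources. */
static void preempt(int resource_id, int from_process) {
    printf("preempt R%d from P%d\n", resource_id, from_process);
}
static void grant(int resource_id, int to_process) {
    printf("grant   R%d to   P%d\n", resource_id, to_process);
}

/* Break the cycle by forcibly reassigning a single resource. */
static void break_deadlock(const cycle_t *cycle) {
    wait_edge_t victim = cycle->edges[0];                     /* any member      */
    int waiter = cycle->edges[cycle->length - 1].process_id;  /* who wants it    */
    preempt(victim.resource_id, victim.process_id);
    grant(victim.resource_id, waiter);                        /* cycle dissolved */
}

int main(void) {
    /* P1 holds R1 and waits for R2; P2 holds R2 and waits for R1. */
    cycle_t c = { .edges = { { 1, 1 }, { 2, 2 } }, .length = 2 };
    break_deadlock(&c);
    return 0;
}
```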
Unlike the CPU, which the scheduler preempts routinely, most resources cannot be safely preempted. Understanding why reveals fundamental constraints that make the no-preemption condition necessary for correctness.
| Resource | If Preempted Mid-Operation | Result |
|---|---|---|
| Mutex lock | Other process enters critical section concurrently | Data race, corruption, undefined behavior |
| Printer | Print job interrupted mid-page | Garbled output, paper waste, potential jam |
| Write lock on file | File left with partial writes | Data corruption, invalid file format |
| Database transaction | Partial commit | Violated ACID properties, inconsistent data |
| Network socket mid-send | Incomplete message transmitted | Protocol violation, connection reset |
| GPU compute context | Kernel execution interrupted | Undefined results, GPU state corrupted |
These consequences are unacceptable. Systems enforce no-preemption not by choice, but by necessity. The alternative—resource corruption and state inconsistency—is far worse than the deadlock risk introduced by non-preemptible resources.
Not all resources are non-preemptible. Understanding which resources can be safely preempted is crucial for system design and deadlock prevention strategies.
What Makes a Resource Preemptible?
A resource can be preempted if and only if:
State is Saveable — The complete state of the resource's use by the process can be captured and stored.
State is Restorable — The saved state can be restored later, allowing the process to continue exactly where it left off.
Preemption is Invisible — From the preempted process's perspective, preemption never happened—execution resumes seamlessly.
CPU and memory satisfy these criteria: process state is saved in the PCB, memory state in swap space, and resumption is transparent. A mutex lock does not satisfy these criteria: taking the lock away invalidates the mutual exclusion guarantee that the process depends on. The process would need to re-acquire the lock on resumption, potentially reintroducing the original deadlock.
CPU preemption is the gold standard for preemptible resources. The entire process state is saved (registers, PC, stack pointer, flags), and restoration is perfect. If all resources worked like CPUs—with complete state capture and transparent restoration—no-preemption would not be a deadlock concern. The challenge is that most resources lack this property.
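A minimal sketch of why the CPU meets all three criteria follows. The cpu_context_t and pcb_t structures are simplified stand-ins for illustration; real kernels save and restore this state in architecture-specific code inside the scheduler.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified, hypothetical view of the state a context switch must preserve. */
typedef struct {
    uint64_t general_regs[16];   /* rax, rbx, ..., r15        */
    uint64_t program_counter;    /* where to resume execution */
    uint64_t stack_pointer;
    uint64_t flags;
} cpu_context_t;

typedef struct {
    int           pid;
    cpu_context_t context;       /* saved CPU state lives in the PCB */
} pcb_t;

/* Criterion 1: state is saveable; a complete snapshot fits in the PCB. */
void save_context(pcb_t *pcb, const cpu_context_t *running) {
    memcpy(&pcb->context, running, sizeof *running);
}

/* Criteria 2 and 3: state is restorable and the preemption is invisible;
 * the process resumes at the exact instruction it was interrupted on. */
void restore_context(const pcb_t *pcb, cpu_context_t *running) {
    memcpy(running, &pcb->context, sizeof *running);
}

int main(void) {
    cpu_context_t running = { .program_counter = 0x401000, .stack_pointer = 0x7ffdf000 };
    pcb_t pcb = { .pid = 42 };

    save_context(&pcb, &running);      /* preempt: park the state in the PCB */
    running = (cpu_context_t){ 0 };    /* another process now uses the CPU   */
    restore_context(&pcb, &running);   /* reschedule: resume transparently   */

    assert(running.program_counter == 0x401000);
    return 0;
}
```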
At the heart of the no-preemption condition lies a fundamental problem: many resource operations are irreversible. Once performed, they cannot be undone, making preemption semantically meaningless or actively harmful.
Classes of Irreversibility:
1. Physical Irreversibility:
Physical actions in the real world cannot be reversed:
- Ink printed on paper
- Holes punched in tape
- Signals transmitted over wire
- Robot arm movements
- Chemical processes (etching, deposition)
Preemption here doesn't just pause the operation—it leaves physical artifacts that cannot be cleaned up.
2. Logical Irreversibility:
Some operations create persistent logical state that cannot be safely rolled back:
- Deleted file (without journaling/snapshot)
- Committed transaction (in many systems)
- Sent network packet (received by remote)
- Published event (consumed by subscribers)
- Incremented counter (observed by others)
Even if we could 'undo' these, other processes may have already observed and acted on the state, creating cascading inconsistencies.
```c
// Example: File write - irreversible once partially complete
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

// Illustrative payload buffers
static char header[64], data_part1[512], data_part2[512], checksum[16];

void process_a(void) {
    int fd = open("critical_data.bin", O_WRONLY);
    flock(fd, LOCK_EX);                          // Acquire exclusive lock

    // Begin multi-block write operation
    write(fd, header, sizeof(header));           // Block 1: ✓ Written
    write(fd, data_part1, sizeof(data_part1));   // Block 2: ✓ Written

    // === IF PREEMPTION HAPPENED HERE ===
    // The file now has:
    // - Valid header
    // - First part of data
    // - Missing second part of data
    // - Missing checksum/footer
    //
    // THIS FILE IS NOW CORRUPT
    // Preemption left the file in an invalid intermediate state

    write(fd, data_part2, sizeof(data_part2));   // Block 3: Not written
    write(fd, checksum, sizeof(checksum));       // Block 4: Not written

    flock(fd, LOCK_UN);                          // Release lock
    close(fd);
}

// The critical insight: we cannot simply "undo" the partial write
// 1. We don't know what the original contents were
// 2. Other processes might read the corrupt intermediate state
// 3. If this was a journal or log, "undoing" creates gaps
//
// Non-preemptibility is ESSENTIAL for such operations
```

Some systems support 'compensating actions'—new operations that logically undo previous ones. But compensating actions are not preemption: they are new forward operations that require their own resources and can themselves deadlock. They also reveal the intermediate state to the system, which may violate isolation guarantees.
Understanding how no-preemption manifests in real systems illuminates the practical constraints system designers face.
Kernel Locks:
Operating system kernels protect critical data structures with locks:
spin_lock(&inode->i_lock); // Acquire inode lock
// Manipulate inode metadata
inode->i_size = new_size;
inode->i_mtime = current_time;
inode->i_blocks = calculate_blocks(new_size);
spin_unlock(&inode->i_lock); // Release inode lock
If the kernel could preempt the process holding i_lock and give the lock to another process, the inode would be in an inconsistent state:
- i_size updated
- i_mtime updated
- i_blocks NOT YET updated (inconsistent with i_size!)

The second process would operate on corrupt metadata. Non-preemptible locks are essential for kernel integrity.
Database Systems:
Databases implement row and page locks that cannot be preempted:
-- Transaction T1
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
-- T1 now holds X-lock on row 1
-- This lock CANNOT be preempted by the DBMS
-- T1's transaction integrity depends on this guarantee
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- ... more work ...
COMMIT; -- Lock released on commit
Database locks are non-preemptible because:
- Preempting a row or page lock mid-transaction would expose uncommitted changes to other transactions, violating isolation.
- The holder's updates may be only partially applied, so anyone who acquired the preempted lock would see inconsistent data.
- The only safe way to take a transaction's locks away is to abort the entire transaction and roll back its changes, which is the controlled form of preemption discussed below.
Device Drivers:
Device drivers hold hardware state that cannot be preempted:
// DMA transfer in progress
void dma_write_handler(struct device *dev) {
// Hardware operation: DMA engine is actively copying data
// from memory to device.
//
// We CANNOT preempt this operation because:
// 1. The DMA engine is using physical addresses
// 2. If we free the memory buffer, DMA writes to freed memory
// 3. Hardware has no concept of "pause and resume"
// 4. Device might enter error state if protocol violated
wait_for_dma_complete(); // Must wait for hardware
release_buffer(); // Only safe after DMA done
}
Hardware often cannot be "preempted" in any meaningful sense—it operates on its own timeline, and software must coordinate with it, not dictate to it.
Since no-preemption is a necessary condition for deadlock, introducing preemption where feasible can prevent deadlocks. However, this requires careful consideration of which resources can be safely preempted.
```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

// Protocol: If we can't get a resource, release all and retry
bool try_acquire_with_preemption(
        int process_id,
        resource_t *held_resources,
        int held_count,
        resource_t *wanted) {

    // Try to acquire the wanted resource without blocking
    if (try_lock(wanted)) {
        return true;  // Got it! No preemption needed
    }

    // Cannot acquire wanted resource - must preempt ourselves
    printf("Process %d: Cannot acquire resource, preempting self\n", process_id);

    // Release ALL currently held resources
    for (int i = 0; i < held_count; i++) {
        release(held_resources[i]);
        printf("Process %d: Released resource %d\n",
               process_id, held_resources[i].id);
    }

    // Save state for restart (application-specific)
    save_checkpoint(process_id);

    // Re-request ALL resources (including newly wanted one)
    // This will block until all are available
    wait_for_all_resources(process_id, held_resources, held_count, wanted);

    // Restore state and continue
    restore_checkpoint(process_id);
    return true;  // Eventually succeeded
}

// This protocol PREVENTS deadlock by ensuring:
// 1. No process ever blocks while holding resources
// 2. Resources are only held by running processes
// 3. Wait cycles cannot form (all waiters release first)
//
// Tradeoff: Wasted work, potential livelock if unlucky
```

Database transaction abort is a form of controlled preemption. When deadlock is detected, the DBMS aborts one transaction, releasing all its locks and rolling back its changes. This preempts the transaction's resource holdings while maintaining data consistency through the rollback mechanism.
Safe preemption requires rollback—the ability to undo the effects of partial execution and restore a consistent state. Understanding rollback mechanisms reveals when and how preemption becomes feasible.
Database Transaction Rollback:
Original State: Account A = $1000, Account B = $500
Transaction:
1. BEGIN
2. Read A: $1000
3. Write A: $900 (debit $100) -- logged in undo log
4. [PREEMPTION/ABORT HERE]
Rollback Process:
1. Examine undo log: "A was $1000, changed to $900"
2. Restore A: A = $1000
3. Release all locks
4. Transaction state = ABORTED
Final State: Account A = $1000, Account B = $500 (unchanged)
Databases make preemption safe through write-ahead logging (WAL): before a change is applied to the data, its before-image is recorded in a durable log, so every change made by a transaction can be undone if that transaction is aborted.
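As a toy, in-memory sketch of that idea (hypothetical names; a real WAL also handles redo records, durability, and crash recovery), the log records the before-image ahead of each write, so aborting a transaction is just a backward replay of its log entries:

```c
#include <stdio.h>

#define LOG_CAPACITY 64

/* Before-image record: enough information to undo one write. */
typedef struct {
    int  tx_id;
    int  account;     /* which item was changed  */
    long old_value;   /* value before the change */
} undo_record_t;

static long          balances[2] = { 1000, 500 };   /* accounts A and B */
static undo_record_t undo_log[LOG_CAPACITY];
static int           log_len = 0;

/* Write-ahead rule: log the before-image, then apply the change. */
static void tx_write(int tx_id, int account, long new_value) {
    undo_log[log_len++] = (undo_record_t){ tx_id, account, balances[account] };
    balances[account] = new_value;
}

/* Abort = controlled preemption: walk the log backwards and restore
 * every before-image written by this transaction. */
static void tx_abort(int tx_id) {
    for (int i = log_len - 1; i >= 0; i--)
        if (undo_log[i].tx_id == tx_id)
            balances[undo_log[i].account] = undo_log[i].old_value;
}

int main(void) {
    tx_write(/*tx_id=*/1, /*account A=*/0, 900);   /* debit $100, logged first */
    tx_abort(1);                                   /* preempted: roll back     */
    printf("A = %ld, B = %ld\n", balances[0], balances[1]);
    return 0;
}
```

Running the sketch prints A = 1000, B = 500, matching the rollback trace above.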
```python
import os
import pickle
import time


class Checkpointer:
    """
    Enables preemption by saving and restoring process state.

    For preemption to be safe, we must be able to:
    1. Save ALL relevant state before releasing resources
    2. Restore that state when resources are re-acquired
    3. Continue execution exactly where we left off
    """

    def __init__(self, process_id):
        self.process_id = process_id
        self.checkpoint_file = f"/tmp/checkpoint_{process_id}.pkl"

    def save_checkpoint(self, state: dict) -> None:
        """
        Save process state before preemption.

        State must include everything needed to resume:
        - Local variables
        - Data structure contents
        - Progress indicators
        - Partial results
        """
        checkpoint = {
            'state': state,
            'resources_held': self.get_current_resources(),
            'execution_point': self.get_execution_point(),
            'timestamp': time.time()
        }
        with open(self.checkpoint_file, 'wb') as f:
            pickle.dump(checkpoint, f)
        print(f"Process {self.process_id}: Checkpoint saved")

    def restore_checkpoint(self) -> dict:
        """
        Restore process state after re-acquiring resources.
        Returns the saved state so the process can continue.
        """
        if not os.path.exists(self.checkpoint_file):
            raise RuntimeError("No checkpoint to restore")

        with open(self.checkpoint_file, 'rb') as f:
            checkpoint = pickle.load(f)

        # Clean up checkpoint file
        os.remove(self.checkpoint_file)
        print(f"Process {self.process_id}: Checkpoint restored")
        return checkpoint['state']

    def preempt_and_release(self, current_state: dict) -> None:
        """
        Voluntarily preempt: save state and release all resources.
        """
        self.save_checkpoint(current_state)

        # Release all held resources
        for resource in self.get_current_resources():
            release_resource(resource)
            print(f"Process {self.process_id}: Released {resource}")


# Key insight: This works for COMPUTATIONAL state
# It does NOT work for:
# - Partial I/O operations
# - Hardware interactions
# - Irreversible side effects
#
# Preemption is possible only when state is fully capturable
```

Rollback has limits: external side effects (network sends, hardware commands, user notifications) cannot be truly rolled back. At best, compensating actions can be performed. This is why true preemption is limited to computational resources with saveable state, not I/O or hardware resources.
Two classic schemes introduce controlled preemption to prevent deadlock in database systems: Wait-Die and Wound-Wait. Both use transaction timestamps to break symmetry and decide which transactions can preempt others.
Timestamp-Based Priority:
Both schemes assign timestamps to transactions at their start. Older transactions (smaller timestamps) have higher priority than younger transactions (larger timestamps).
Transaction T1 starts at time 100 → Timestamp = 100 (older, higher priority)
Transaction T2 starts at time 200 → Timestamp = 200 (younger, lower priority)
Wait-Die Scheme:
When transaction Tᵢ requests a resource held by Tⱼ:
IF Tᵢ is OLDER than Tⱼ:
Tᵢ WAITS for Tⱼ to release
(older waits for younger - safe)
ELSE (Tᵢ is YOUNGER than Tⱼ):
Tᵢ DIES (rolls back and restarts)
(younger dies, releases resources)
Key Property: Only older transactions wait; younger transactions are preempted (die). This prevents cycles because waiting always goes from older to younger—a partial order that cannot form cycles.
Wound-Wait Scheme:
When transaction Tᵢ requests a resource held by Tⱼ:
IF Tᵢ is OLDER than Tⱼ:
Tᵢ WOUNDS Tⱼ (forces Tⱼ to abort)
Tᵢ takes the resource
(older preempts younger)
ELSE (Tᵢ is YOUNGER than Tⱼ):
Tᵢ WAITS for Tⱼ
(younger waits for older - safe)
Key Property: Only younger transactions wait; older transactions preempt younger ones. This also prevents cycles because waiting always goes from younger to older.
```python
import time
from enum import Enum


class PreemptionAction(Enum):
    WAIT = "wait"
    DIE = "die"
    WOUND = "wound"


class Transaction:
    def __init__(self, id: int):
        self.id = id
        self.timestamp = time.time()  # Older = smaller timestamp

    def is_older_than(self, other: 'Transaction') -> bool:
        return self.timestamp < other.timestamp


# Wait-Die Implementation
def wait_die(requester: Transaction, holder: Transaction) -> PreemptionAction:
    """
    Wait-Die Scheme:
    - Older transaction waits for younger
    - Younger transaction dies (aborts and restarts)

    Prevents deadlock: waits only go old → young (acyclic)
    """
    if requester.is_older_than(holder):
        # Requester is older: it waits for the younger holder
        # Older transactions are more "valuable" - they've done more work
        print(f"T{requester.id} (older) waits for T{holder.id} (younger)")
        return PreemptionAction.WAIT
    else:
        # Requester is younger: it dies and restarts later
        # Younger transactions have done less work - cheaper to restart
        print(f"T{requester.id} (younger) dies, T{holder.id} (older) continues")
        return PreemptionAction.DIE


# Wound-Wait Implementation
def wound_wait(requester: Transaction, holder: Transaction) -> PreemptionAction:
    """
    Wound-Wait Scheme:
    - Older transaction wounds (aborts) younger holder
    - Younger transaction waits for older

    Prevents deadlock: waits only go young → old (acyclic)
    """
    if requester.is_older_than(holder):
        # Requester is older: it wounds (aborts) the younger holder
        # Older transaction takes priority
        print(f"T{requester.id} (older) wounds T{holder.id} (younger)")
        return PreemptionAction.WOUND
    else:
        # Requester is younger: it waits for the older holder
        print(f"T{requester.id} (younger) waits for T{holder.id} (older)")
        return PreemptionAction.WAIT


# Both schemes guarantee no deadlock because:
# - All wait edges point in the same direction (timestamp order)
# - A DAG cannot have cycles
```

Wait-Die tends to cause more restarts (a younger requester may die and restart repeatedly while the older holder keeps the resource), but it never disturbs a lock holder: the transaction that aborts is always the requester itself. Wound-Wait produces fewer restarts but forcibly aborts younger transactions that already hold the resource. In practice, Wound-Wait is often preferred because it wastes less work: the transactions it aborts are young and cheap to restart, whereas Wait-Die can roll the same young transaction back many times.
We have examined no preemption—the third necessary condition for deadlock—in comprehensive depth. Let's consolidate the key insights:
- No preemption means a resource is released only by the voluntary action of the process holding it; the system cannot reclaim it.
- For most resources (locks, files, devices, transactions), forced preemption would corrupt state, so non-preemptibility is a correctness requirement rather than a design oversight.
- A resource is safely preemptible only when its state can be saved, restored, and the interruption made invisible to the process; CPU and memory qualify, most other resources do not.
- Where checkpointing or rollback is possible, controlled preemption (release-and-retry protocols, transaction aborts, Wait-Die and Wound-Wait) prevents deadlock by attacking this condition.
What's Next:
Mutual exclusion, hold-and-wait, and no-preemption together still cannot guarantee deadlock—unless the waiting relationships form a cycle. The next page examines the circular wait condition: how cyclic dependencies emerge, why they complete the deadlock requirements, and how breaking circular wait through resource ordering is one of the most practical deadlock prevention strategies.
You now possess a thorough understanding of no preemption as a deadlock condition: its definition, necessity for correctness, relationship to state saveability, irreversibility constraints, preemption as prevention, rollback mechanisms, and timestamp-based schemes. Next, we examine circular wait to complete our analysis of the four necessary conditions.