The no-preemption condition states that resources cannot be forcibly taken away from a process—they can only be released voluntarily when the process no longer needs them. This condition is essential for deadlock: if we could simply take resources from waiting processes, we could break any potential deadlock cycle.
Breaking the no-preemption condition means allowing resource preemption: the system can forcibly revoke a resource from one process and give it to another. This is a powerful deadlock prevention mechanism, but it comes with significant constraints on which resources can be preempted safely.
Preemption is only feasible for resources whose state can be saved and restored. The process losing the resource must not be left in an inconsistent state, and when the resource is returned, the process must be able to continue from where it left off.
Why preemption breaks deadlock:

Consider the classic deadlock scenario: process P1 holds resource R1 and is waiting for R2, while process P2 holds R2 and is waiting for R1. Neither can proceed, and neither will voluntarily release what it holds.

With preemption enabled, the system can revoke R2 from the waiting P2 and grant it to P1. P1 completes and releases both resources, after which P2 reacquires what it lost and finishes.

The deadlock is broken because resources can be taken away, preventing the indefinite wait.
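This scenario can be sketched in miniature. The following is a hedged toy model, not an OS implementation: the `owner`/`wants` arrays and the `request` helper are invented for illustration. It shows the cycle forming without preemption, and the same request succeeding when the system may take a resource from a holder that is itself blocked.

```c
#include <stdbool.h>

#define NPROC 2
#define NRES  2

// Toy state: owner[r] = process holding resource r (-1 if free),
// wants[p] = resource that process p is blocked on (-1 if none).
static int owner[NRES] = {-1, -1};
static int wants[NPROC] = {-1, -1};

// Is the holder of resource r itself blocked waiting for something?
static bool holder_is_waiting(int r) {
    return owner[r] != -1 && wants[owner[r]] != -1;
}

// Request resource r for process p. Returns true if granted.
// With preemption: if the current holder is itself waiting, take r away.
bool request(int p, int r, bool allow_preemption) {
    if (owner[r] == -1) {               // free: grant normally
        owner[r] = p;
        return true;
    }
    if (allow_preemption && holder_is_waiting(r)) {
        int victim = owner[r];
        wants[victim] = r;              // victim must reacquire r later
        owner[r] = p;                   // preempt: hand r to p
        return true;
    }
    wants[p] = r;                       // block: the deadlock ingredient
    return false;
}
```

Without preemption, P0 holding R0 and P1 holding R1 each block on the other's resource and the cycle is permanent; with preemption, P0's request for R1 succeeds precisely because P1 is already waiting.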
Preemption in the context of resources means forcibly taking a resource away from a process before that process has voluntarily released it. This is analogous to CPU preemption, where the operating system can forcibly suspend a running process to schedule another.
Formal Definition:

A resource R is preemptible if:

- R's complete state can be captured (saved) at any point
- R's saved state can be restored later, allowing the holder to continue from where it left off
- Saving and restoring R's state causes no irreversible external side effects
- Preemption is transparent: the preempted process cannot observe that it happened

Not all resources satisfy these conditions. Understanding which resources are preemptible is crucial for applying this prevention strategy.
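One way to express these conditions in code is a save/restore interface that a resource must supply before the system will preempt it. This is a sketch with invented names (`preemptible_ops_t`, the counter resource), not a real OS API: a resource is a candidate for preemption only if both operations exist and `restore` exactly reverses `save`.

```c
#include <stdlib.h>

// A resource is preemptible only if it supplies both operations:
// save must capture complete state; restore must reproduce it exactly.
typedef struct {
    void* (*save)(void* resource);                  // capture full state
    void  (*restore)(void* resource, void* saved);  // put it back
} preemptible_ops_t;

// Example resource whose state is trivially capturable: a counter.
typedef struct { long value; } counter_t;

static void* counter_save(void* res) {
    long* snapshot = malloc(sizeof(long));
    *snapshot = ((counter_t*)res)->value;
    return snapshot;
}

static void counter_restore(void* res, void* saved) {
    ((counter_t*)res)->value = *(long*)saved;
    free(saved);
}

const preemptible_ops_t counter_ops = { counter_save, counter_restore };

// Preempt: snapshot state so the holder can later continue where it left off.
void* preempt_resource(const preemptible_ops_t* ops, void* resource) {
    return ops->save(resource);
}

void restore_resource(const preemptible_ops_t* ops, void* resource, void* saved) {
    ops->restore(resource, saved);
}
```

A printer mid-job cannot implement this interface honestly: no `restore` can pull ink back off the page, which is exactly why it appears in the non-preemptible rows of the table below.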
| Resource Type | Preemptible? | Reason | Preemption Mechanism |
|---|---|---|---|
| CPU | Yes | State fully captured in registers/PCB | Context switch saves all execution state |
| Memory pages | Yes | Contents can be saved to disk | Page-out to swap, page-in when needed |
| CPU registers | Yes | Finite, well-defined state | Saved/restored during context switch |
| Virtual memory address space | Yes | Page table entries, mappings | Process can be swapped out entirely |
| Printer (mid-job) | No | Paper already has partial output | Cannot 'un-print' what's been printed |
| Tape drive (mid-write) | No | Tape position, partial writes | Sequential medium, no random access |
| Mutex/Lock | Dangerous | May leave protected data inconsistent | Would violate critical section guarantees |
| Database lock | Sometimes | If transaction can be rolled back | Transaction abort and rollback |
| File (mid-write) | Sometimes | Depends on write semantics | May need to rollback partial writes |
| Network connection | Partially | TCP state is complex | Connection may timeout, need retransmit |
CPU and memory are easily preemptible because their state is well-defined and can be fully captured. Physical devices mid-operation are generally NOT preemptible because the physical world doesn't have an 'undo' button. Synchronization primitives are dangerous to preempt because they protect invariants.
The CPU is the classic example of a preemptible resource. Modern operating systems regularly preempt the CPU from running processes, demonstrating that preemption can work smoothly when properly implemented.
Why CPU preemption works: the CPU's entire execution state (general-purpose registers, instruction pointer, flags, floating-point state, page-table base) fits in a fixed-size structure that a context switch can save and restore atomically, and the preempted process cannot tell it was ever suspended.
```c
// CPU Preemption - How the scheduler implements it

typedef struct {
    // General purpose registers
    uint64_t rax, rbx, rcx, rdx;
    uint64_t rsi, rdi, rbp, rsp;
    uint64_t r8, r9, r10, r11, r12, r13, r14, r15;

    // Instruction pointer and flags
    uint64_t rip;            // Where execution was interrupted
    uint64_t rflags;         // CPU flags

    // Segment registers
    uint16_t cs, ds, es, fs, gs, ss;

    // Floating point / SIMD state
    uint8_t fpu_state[512];  // FXSAVE area

    // Additional state
    uint64_t cr3;            // Page table base (address space)
} cpu_context_t;

typedef struct process {
    pid_t pid;
    cpu_context_t context;   // Saved CPU state
    process_state_t state;   // RUNNING, READY, BLOCKED, etc.
    struct process* next;    // For scheduler queues
} process_t;

// Timer interrupt handler - triggers preemption
void timer_interrupt_handler(interrupt_frame_t* frame) {
    process_t* current = get_current_process();

    // Step 1: Save current process state
    save_cpu_context(&current->context, frame);
    current->state = READY;

    // Step 2: Add to ready queue
    scheduler_enqueue(current);

    // Step 3: Select next process
    process_t* next = scheduler_dequeue();

    // Step 4: Restore next process state
    restore_cpu_context(&next->context, frame);
    next->state = RUNNING;
    set_current_process(next);

    // Step 5: Return to next process
    // (interrupt return uses modified frame)
}

/*
 * KEY OBSERVATION:
 *
 * From the process's perspective, it has no idea preemption occurred.
 * The save/restore is transparent and complete.
 *
 * This is the gold standard for preemptible resources:
 * - Complete state capture
 * - Atomic save/restore
 * - No side effects
 * - Transparent to the preempted entity
 *
 * For deadlock prevention, we want to achieve this for OTHER resources.
 */
```

The CPU preemption model informs resource preemption design.
When considering whether a resource can be preempted, we ask:

1. Can the resource's state be completely captured?
2. Can the state be saved and restored atomically?
3. Is preemption free of irreversible side effects?
4. Is preemption transparent to the process that loses the resource?
Resources that answer 'yes' to all four questions are candidates for preemption-based deadlock prevention.
Memory is another resource that can be preempted. The operating system can take physical memory frames away from one process and give them to another. This is the basis of virtual memory and swapping.
How memory preemption works: the OS writes a victim page's contents to swap (if dirty), marks its page table entry not-present, and hands the physical frame to another process. When the original owner next touches the page, a page fault occurs and the OS reads the page back from swap into a fresh frame.

From the process's perspective, memory access appears continuous; the preemption is transparent.
```c
// Memory Preemption via Paging

typedef struct {
    uint64_t frame_number;   // Physical frame if present
    bool present;            // Is page in physical memory?
    bool dirty;              // Has page been modified?
    bool referenced;         // Has page been accessed recently?
    uint64_t swap_location;  // Location in swap if not present
} page_table_entry_t;

// Memory pressure handler - preempts pages from processes
void memory_pressure_handler() {
    while (free_frames_count() < MINIMUM_FREE_FRAMES) {
        // Select a page to preempt (e.g., using LRU)
        page_table_entry_t* victim = select_victim_page();
        process_t* victim_owner = get_page_owner(victim);
        uint64_t frame = victim->frame_number;  // remember before clearing

        // Step 1: If dirty, write to swap
        if (victim->dirty) {
            victim->swap_location = allocate_swap_slot();
            write_to_swap(frame, victim->swap_location);
        }

        // Step 2: Mark page as not present
        victim->present = false;
        victim->frame_number = 0;

        // Step 3: Invalidate TLB entry
        invalidate_tlb_entry(victim_owner->pid, get_page_address(victim));

        // Step 4: Add frame to free list
        add_to_free_list(frame);

        // The victim process doesn't know its page is gone
        // until it tries to access it (page fault)
    }
}

// Page fault handler - restores preempted pages
void page_fault_handler(uint64_t faulting_address) {
    process_t* current = get_current_process();
    page_table_entry_t* pte = get_page_table_entry(current, faulting_address);

    if (!pte->present && pte->swap_location != 0) {
        // Page was preempted - restore it

        // Step 1: Allocate a new frame
        uint64_t new_frame = allocate_frame();
        if (new_frame == 0) {
            // No free frames - preempt another page first
            memory_pressure_handler();
            new_frame = allocate_frame();
        }

        // Step 2: Read from swap
        read_from_swap(pte->swap_location, new_frame);
        free_swap_slot(pte->swap_location);

        // Step 3: Update page table
        pte->frame_number = new_frame;
        pte->present = true;
        pte->swap_location = 0;

        // Step 4: Process can now continue - it never knew the preemption happened
        return;  // Restart the faulting instruction
    }

    // Handle other fault types (e.g., invalid access)
    handle_segfault(current, faulting_address);
}

/*
 * DEADLOCK PREVENTION THROUGH MEMORY PREEMPTION:
 *
 * Scenario: Multiple processes need memory, total demand exceeds physical RAM
 *
 * Without preemption: Processes block indefinitely waiting for memory
 *
 * With preemption:
 * - OS preempts pages from waiting processes
 * - Frees memory for processes that can proceed
 * - When preempted processes run, their pages fault and are restored
 * - No deadlock possible - memory is always available (virtually)
 */
```

Every system using virtual memory with demand paging is already using resource preemption! The OS regularly takes physical memory from processes and gives it to others. This is why memory deadlock is rare in modern systems—preemption is built into the memory management subsystem.
Several formal protocols exist for using preemption to prevent deadlock. These protocols define the conditions under which resources are preempted and how processes respond.
```c
// Protocol 1: Preempt on Wait
// If a process cannot get a resource, all its resources are preempted

typedef struct {
    int held[MAX_RESOURCES];         // Resources currently held
    int waiting_for[MAX_RESOURCES];  // Resources we're waiting for
    bool preempted;                  // Have we been preempted?
    void* saved_state;               // State before preemption
} process_t;

result_t request_resource_protocol1(process_t* proc, int resource_id) {
    mutex_lock(&rm.lock);

    if (rm.available[resource_id] > 0) {
        // Resource available - allocate normally
        rm.available[resource_id]--;
        proc->held[resource_id]++;
        mutex_unlock(&rm.lock);
        return SUCCESS;
    }

    // Resource not available - PREEMPT ALL our held resources
    proc->preempted = true;
    save_process_state(proc);

    for (int i = 0; i < MAX_RESOURCES; i++) {
        rm.available[i] += proc->held[i];
        proc->waiting_for[i] += proc->held[i];  // Need these back later
        proc->held[i] = 0;
    }
    cond_broadcast(&rm.available_cond);  // Released resources may unblock others

    // Add the new request to our waiting set
    proc->waiting_for[resource_id]++;

    // Now wait for ALL resources (holding nothing)
    while (!can_allocate_all(proc->waiting_for)) {
        cond_wait(&rm.available_cond, &rm.lock);
    }

    // All resources available - allocate them
    for (int i = 0; i < MAX_RESOURCES; i++) {
        rm.available[i] -= proc->waiting_for[i];
        proc->held[i] = proc->waiting_for[i];
        proc->waiting_for[i] = 0;
    }

    // Restore state and continue
    restore_process_state(proc);
    proc->preempted = false;

    mutex_unlock(&rm.lock);
    return SUCCESS;
}

// Protocol 2: Preempt from Waiting Processes
result_t request_resource_protocol2(process_t* requester, int resource_id) {
    mutex_lock(&rm.lock);

    if (rm.available[resource_id] > 0) {
        rm.available[resource_id]--;
        requester->held[resource_id]++;
        mutex_unlock(&rm.lock);
        return SUCCESS;
    }

    // Resource not available - find who has it
    process_t* holder = find_holder(resource_id);

    if (holder != NULL && is_waiting(holder)) {
        // Holder is waiting for something else - PREEMPT from holder
        save_process_state(holder);
        holder->held[resource_id]--;
        holder->waiting_for[resource_id]++;  // Holder now waits for this too

        // Give resource to requester
        requester->held[resource_id]++;
        mutex_unlock(&rm.lock);
        return SUCCESS;
    }

    // Holder is not waiting (actively using resource) - must wait
    while (rm.available[resource_id] == 0) {
        cond_wait(&rm.available_cond, &rm.lock);
    }
    rm.available[resource_id]--;
    requester->held[resource_id]++;
    mutex_unlock(&rm.lock);
    return SUCCESS;
}

/*
 * COMPARISON:
 *
 * Protocol 1 (Preempt on Wait):
 * + Simple to implement
 * + Guaranteed deadlock-free
 * - May preempt resources unnecessarily
 * - Higher overhead (save/restore state frequently)
 * - Potential for poor resource utilization
 *
 * Protocol 2 (Preempt from Waiting):
 * + Only preempts when necessary
 * + Better resource utilization
 * - More complex implementation
 * - May cascade (preempted process preempts another)
 * - Fairness concerns (some processes may starve)
 */
```

While preemption is effective for CPU, memory, and some logical resources, many resources cannot be safely preempted. Understanding why helps us know when preemption-based prevention is applicable.
Categories of non-preemptible resources:

- Physical devices mid-operation (printers, tape drives): the physical world has no 'undo' button
- Sequential media: position and partial output cannot be rolled back
- Synchronization primitives (mutexes, semaphores): preempting them exposes the inconsistent state they exist to protect
Case Study: Why Mutex Preemption is Dangerous
Consider a banking application:
```c
mutex_lock(&account_mutex);
balance = read_balance(account_id);
balance -= withdrawal_amount;
// ** PREEMPTION POINT - mutex taken away **
write_balance(account_id, balance);
mutex_unlock(&account_mutex);
```
If the mutex is preempted after reading but before writing:

- Another thread acquires the mutex and reads the not-yet-updated balance
- Both threads compute a new balance from the same starting value
- Whichever write lands last silently overwrites the other: a lost update
The mutex exists to prevent exactly this scenario. Preempting it defeats its purpose.
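The lost update can be reproduced deterministically without any threads by interleaving the read-modify-write steps by hand. The following toy sketch (the single-cell `stored_balance` and the helper functions are invented for illustration) performs the exact interleaving a preempted mutex would permit:

```c
// Simulate two withdrawals whose read-modify-write sequences interleave
// exactly as they would if the mutex were preempted mid-critical-section.
static int stored_balance = 100;

static int  read_stored_balance(void)   { return stored_balance; }
static void write_stored_balance(int v) { stored_balance = v; }

// Returns the final balance after two "threads" each withdraw amt,
// with thread B's read happening before thread A's write (the
// interleaving that mutex preemption would allow).
int interleaved_withdrawals(int amt) {
    int a = read_stored_balance();  // A reads 100
    int b = read_stored_balance();  // B reads 100 (mutex was preempted!)
    a -= amt;
    b -= amt;
    write_stored_balance(a);        // A writes 100 - amt
    write_stored_balance(b);        // B overwrites with 100 - amt: lost update
    return read_stored_balance();
}
```

Two withdrawals of 30 from a balance of 100 should leave 40, but this interleaving leaves 70; one withdrawal simply vanishes. An unpreemptible mutex makes the interleaving impossible.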
This is why deadlock prevention via preemption is limited in scope. Locks, semaphores, and similar synchronization primitives protect critical sections precisely because they cannot be preempted. For lock-based deadlocks, we must use other prevention strategies (resource ordering, hold-and-wait prevention) or detection/recovery.
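Resource ordering, the alternative mentioned above, is worth a brief sketch here. The idea: impose a total order on locks (here, by address) and always acquire in that order, so no circular wait can form. The toy `lock_t` type and helper names are invented for illustration; with pthreads, the same pattern applies to `pthread_mutex_t` pointers.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { bool held; } lock_t;  // toy lock for illustration

// Total order on locks: rank by address.
static lock_t* first_of(lock_t* a, lock_t* b) {
    return ((uintptr_t)a < (uintptr_t)b) ? a : b;
}
static lock_t* second_of(lock_t* a, lock_t* b) {
    return ((uintptr_t)a < (uintptr_t)b) ? b : a;
}

// Every caller acquires the lower-address lock first, regardless of the
// order its arguments arrive in - so two threads needing {a, b} can never
// end up each holding one and waiting for the other.
void acquire_pair(lock_t* a, lock_t* b) {
    first_of(a, b)->held = true;
    second_of(a, b)->held = true;
}

void release_pair(lock_t* a, lock_t* b) {
    second_of(a, b)->held = false;  // release in reverse order
    first_of(a, b)->held = false;
}
```

The key property is that `first_of(a, b)` and `first_of(b, a)` return the same lock, so the acquisition sequence is independent of which thread calls with which argument order.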
In database systems, transaction abort and rollback serves as a form of preemption. When a transaction holds locks and must be terminated (for deadlock resolution or timeout), the database rolls back all changes, releasing the locks. This is preemption of logical resources.
Why database preemption works:

- The undo log records every change, so a victim's work can be rolled back to a consistent state
- Locks are logical resources; releasing them has no irreversible physical side effects
- The application is written to handle aborts by retrying the transaction
```c
// Transaction Rollback as Resource Preemption

typedef struct {
    transaction_id_t id;
    list_t read_set;       // Objects read by this transaction
    list_t write_set;      // Objects modified by this transaction
    list_t undo_log;       // Log for rollback
    list_t held_locks;     // Locks held by this transaction
    transaction_state_t state;
} transaction_t;

void abort_transaction(transaction_t* txn);

// Deadlock detected - select victim and abort
void resolve_deadlock(list_t* deadlocked_transactions) {
    // Select victim (e.g., youngest transaction, smallest, etc.)
    transaction_t* victim = select_deadlock_victim(deadlocked_transactions);

    // PREEMPT: abort and rollback the victim
    abort_transaction(victim);
}

void abort_transaction(transaction_t* txn) {
    // Step 1: Apply undo log in reverse order
    for (int i = txn->undo_log.count - 1; i >= 0; i--) {
        undo_record_t* record = txn->undo_log.items[i];
        switch (record->type) {
            case UNDO_UPDATE:
                // Restore old value
                write_data(record->object_id, record->old_value);
                break;
            case UNDO_INSERT:
                // Delete the inserted record
                delete_data(record->object_id);
                break;
            case UNDO_DELETE:
                // Re-insert the deleted record
                insert_data(record->object_id, record->old_value);
                break;
        }
    }

    // Step 2: Release all locks (this is the preemption!)
    for (int i = 0; i < txn->held_locks.count; i++) {
        release_lock(txn->held_locks.items[i]);  // Others can now acquire these
    }

    // Step 3: Mark transaction as aborted
    txn->state = ABORTED;

    // Step 4: Notify application (may retry)
    notify_transaction_aborted(txn->id);
}

// Application handles abort by retrying
// (result codes stand in for the exception handling of higher-level languages)
void application_logic() {
    int retries = 0;
    while (retries < MAX_RETRIES) {
        transaction_t* txn = begin_transaction();

        // Perform operations...
        if (transfer_funds(account_a, account_b, amount) == SUCCESS &&
            commit_transaction(txn) == SUCCESS) {
            return;  // Success!
        }

        // Transaction was preempted (aborted for deadlock)
        retries++;
        wait_random_backoff();  // Prevent livelock
        // Loop will retry
    }
    report_failure("Transaction failed after max retries");
}

/*
 * ANALYSIS:
 *
 * Database abort/rollback IS preemption of locks:
 * 1. Transaction holds locks (resources)
 * 2. Deadlock detected
 * 3. Victim selected
 * 4. Victim's locks forcibly released (preempted)
 * 5. Victim's work undone (state restored)
 * 6. Other transactions proceed
 *
 * The key is the UNDO LOG - it makes preemption safe by enabling
 * consistent state restoration.
 */
```

The lesson from databases is that resources can be made preemptible by maintaining enough information to restore consistent state after preemption. Write-ahead logging, checkpointing, and undo records are all techniques that enable preemption of resources that would otherwise be non-preemptible.
Understanding when preemption-based prevention is practical requires careful analysis of costs and benefits.
| Factor | Consideration | Impact on Design |
|---|---|---|
| State size | How much state must be saved? | Large state → high preemption cost |
| Save/restore time | How long does preemption take? | Long time → throughput reduction |
| Frequency | How often will preemption occur? | Frequent → overhead dominates |
| Idempotency | Can work be safely redone? | Non-idempotent → complex recovery |
| Fairness | Who gets preempted? | Always same victim → starvation |
| Cascading | Does preemption cause more preemption? | Cascading → system instability |
| Progress guarantee | Will preempted process eventually complete? | No guarantee → starvation possible |
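The first three rows of the table combine into a back-of-envelope estimate. A sketch (the numbers below are illustrative, not measured):

```c
// CPU time lost to preemption, in parts per million:
// preemptions/second * microseconds per save/restore = us lost per second = ppm.
long preemption_overhead_ppm(long preemptions_per_sec, long save_restore_us) {
    return preemptions_per_sec * save_restore_us;
}
```

For example, 1000 preemptions per second at 5 microseconds each costs 5000 ppm (0.5%) of CPU time, which is negligible; the same rate with a 5 ms save/restore (a resource with large state) would consume the entire machine, which is why state size and save/restore time dominate the feasibility analysis.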
Real-World Application:
In practice, preemption-based deadlock prevention is primarily used for:

- CPU scheduling (context switching)
- Memory management (paging and swapping)
- Database transactions (abort and rollback)
For application-level synchronization primitives (mutexes, semaphores), preemption is usually not used. Instead, prevention via resource ordering or detection with recovery is preferred.
We've examined how breaking the no-preemption condition can prevent deadlock. Let's consolidate our understanding:

- Preemption breaks deadlock by forcibly revoking resources, so no process waits indefinitely
- It is only safe for resources whose complete state can be saved and restored; CPU and memory are the ideal cases
- Physical devices mid-operation and synchronization primitives generally cannot be preempted safely
- Techniques such as undo logging can make otherwise non-preemptible resources, like database locks, preemptible
What's next:
The final prevention strategy attacks the circular wait condition by imposing a total ordering on resources and requiring processes to acquire resources in order. This is often the most practical prevention strategy for application-level synchronization, as it doesn't require preemption or advance knowledge of all resource needs.
You now understand how preemption can prevent deadlock when resources support state save and restore. While powerful for virtual resources like CPU and memory, preemption is limited for physical devices and synchronization primitives, which is why resource ordering remains the most widely applicable prevention strategy.