When a higher-priority process becomes ready while a lower-priority process is running, the scheduler faces a critical decision: interrupt the running process immediately, or let it continue until it voluntarily yields? This decision defines the distinction between preemptive and non-preemptive scheduling.
The choice has profound implications for system responsiveness, throughput, complexity, and correctness. Understanding both approaches—and when to apply each—is essential for systems programming and real-time application development.
Understand the mechanics of preemptive and non-preemptive scheduling, analyze their tradeoffs, recognize appropriate use cases for each, and predict scheduling behavior given process arrival patterns.
In non-preemptive (cooperative) priority scheduling, once a process starts executing, it retains the CPU until it terminates, blocks on I/O or another event, or voluntarily yields.
A higher-priority process arriving while a lower-priority process runs must wait. The scheduler only considers priorities at decision points—when the running process releases the CPU.
"""Non-Preemptive Priority Scheduling SimulationHigher priority number = higher priority (runs first)"""from dataclasses import dataclassfrom typing import Listimport heapq @dataclassclass Process: pid: str arrival: int burst: int priority: int start: int = -1 finish: int = -1 def non_preemptive_priority(processes: List[Process]) -> List[Process]: """Simulate non-preemptive priority scheduling.""" time = 0 ready_queue = [] # Max-heap: (-priority, arrival, process) completed = [] waiting = list(processes) while waiting or ready_queue: # Add newly arrived processes to ready queue for p in waiting[:]: if p.arrival <= time: heapq.heappush(ready_queue, (-p.priority, p.arrival, p)) waiting.remove(p) if not ready_queue: time = min(p.arrival for p in waiting) # Jump to next arrival continue # Select highest priority (lowest -priority value) _, _, current = heapq.heappop(ready_queue) current.start = time current.finish = time + current.burst time = current.finish # Process runs to completion completed.append(current) return completed # Example: P2 (high priority) arrives while P1 runs - must waitprocesses = [ Process("P1", arrival=0, burst=10, priority=1), # Low priority Process("P2", arrival=2, burst=4, priority=3), # High priority Process("P3", arrival=3, burst=2, priority=2), # Medium priority] result = non_preemptive_priority(processes)print("Non-Preemptive Priority Schedule:")print(f"{'PID':<6}{'Arrival':<10}{'Start':<10}{'Finish':<10}{'Wait':<10}")for p in result: wait = p.start - p.arrival print(f"{p.pid:<6}{p.arrival:<10}{p.start:<10}{p.finish:<10}{wait:<10}")# P2 arrives at t=2 but waits until P1 finishes at t=10In preemptive priority scheduling, the scheduler can interrupt a running process when a higher-priority process becomes ready. The interrupted process returns to the ready queue, and the higher-priority process begins execution immediately.
Preemption occurs when a higher-priority process arrives or unblocks, or when priorities change (the running process is demoted, or a ready process is boosted) so that the running process is no longer the highest-priority runnable one.
"""Preemptive Priority Scheduling SimulationDemonstrates immediate preemption on higher-priority arrival"""from dataclasses import dataclass, fieldfrom typing import Listimport heapq @dataclassclass Process: pid: str arrival: int burst: int priority: int remaining: int = field(init=False) timeline: List[tuple] = field(default_factory=list) def __post_init__(self): self.remaining = self.burst def preemptive_priority(processes: List[Process]) -> List[Process]: """Simulate preemptive priority scheduling.""" time = 0 ready_queue = [] completed = [] waiting = sorted(processes, key=lambda p: p.arrival) current = None while waiting or ready_queue or current: # Add newly arrived processes while waiting and waiting[0].arrival <= time: p = waiting.pop(0) heapq.heappush(ready_queue, (-p.priority, p.arrival, p)) # Check for preemption if current and p.priority > current.priority: heapq.heappush(ready_queue, (-current.priority, current.arrival, current)) current.timeline.append((current.timeline[-1][1] if current.timeline else time, time)) current = None # Select highest priority if no current process if not current and ready_queue: _, _, current = heapq.heappop(ready_queue) start = time if not current: time = waiting[0].arrival if waiting else time + 1 continue # Find next event (arrival or completion) next_arrival = waiting[0].arrival if waiting else float('inf') completion = time + current.remaining if completion <= next_arrival: current.remaining = 0 current.timeline.append((time, completion)) time = completion completed.append(current) current = None else: current.remaining -= (next_arrival - time) if time < next_arrival: current.timeline.append((time, next_arrival)) time = next_arrival return completed # Same example: P2 (high priority) NOW preempts P1processes = [ Process("P1", arrival=0, burst=10, priority=1), Process("P2", arrival=2, burst=4, priority=3), Process("P3", arrival=3, burst=2, priority=2),] result = preemptive_priority(processes)print("Preemptive Priority 
Schedule:")for p in result: print(f"{p.pid}: {p.timeline}")# P2 preempts P1 at t=2, runs t=2-6# P3 runs t=6-8, P1 resumes t=8-16| Criterion | Non-Preemptive | Preemptive |
|---|---|---|
| Response Time (high-priority) | Unpredictable, depends on running process | Minimal, bounded by interrupt latency |
| Throughput | Higher (fewer switches) | Lower (more switches) |
| Context Switch Overhead | Low | Higher |
| Implementation Complexity | Simple | Complex (requires interrupt handling) |
| Starvation of Low Priority | Less severe | More severe without aging |
| Critical Section Safety | Naturally safe | Requires explicit protection |
| Real-time Suitability | Soft real-time only | Hard real-time capable |
| Predictability | Process-level determinism | Priority-level determinism |
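The response-time row can be made concrete with the three-process example from the simulations above. A minimal arithmetic sketch (the start times come from the two schedules already shown):

```python
# High-priority P2 arrives at t=2 in both scenarios.
arrival_p2 = 2

# Non-preemptive: P2 must wait for P1's full 10-unit burst, starting at t=10.
np_start_p2 = 10
np_response = np_start_p2 - arrival_p2  # 8 time units

# Preemptive: P2 preempts P1 and starts immediately at t=2.
p_start_p2 = 2
p_response = p_start_p2 - arrival_p2    # 0 time units

print(f"Non-preemptive response time for P2: {np_response}")
print(f"Preemptive response time for P2:     {p_response}")
```

The gap scales with the longest burst of any lower-priority process: under non-preemption, a high-priority task's worst-case response time is bounded only by the longest critical burst in the system.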
Many real systems use hybrid approaches: preemption between priority classes but non-preemptive (or round-robin) within a class. Linux SCHED_FIFO is preemptive across priorities but non-preemptive among equal-priority processes.
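The hybrid policy can be sketched as a set of per-class FIFO queues where selection is always preemptive across classes but rotation within a class only happens at a time-slice boundary. This is an illustrative model, not the actual Linux implementation; all names are hypothetical:

```python
from collections import deque
from typing import Deque, Dict, Optional

class HybridScheduler:
    """Toy model: preemptive between priority classes,
    round-robin within a single class."""

    def __init__(self) -> None:
        self.queues: Dict[int, Deque[str]] = {}  # priority class -> FIFO queue

    def enqueue(self, priority: int, task: str) -> None:
        self.queues.setdefault(priority, deque()).append(task)

    def pick_next(self) -> Optional[str]:
        """Always serve the highest non-empty priority class."""
        ready = [p for p, q in self.queues.items() if q]
        if not ready:
            return None
        return self.queues[max(ready)].popleft()

    def time_slice_expired(self, priority: int, task: str) -> None:
        """Round-robin within a class: rotate to the back of its own queue."""
        self.queues.setdefault(priority, deque()).append(task)

sched = HybridScheduler()
sched.enqueue(1, "low-A")
sched.enqueue(1, "low-B")
sched.enqueue(3, "high-A")

first = sched.pick_next()            # higher class always wins
second = sched.pick_next()
sched.time_slice_expired(1, second)  # slice expires: rotate within class 1
third = sched.pick_next()
print(first, second, third)          # high-A low-A low-B
```

Note that a strict SCHED_FIFO-style policy would omit `time_slice_expired`: an equal-priority task keeps the CPU until it blocks or yields.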
Implementing preemptive scheduling requires careful attention to context switching, critical-section protection, and (in modern kernels) kernel preemption itself.
When preempting, the kernel must save the complete execution context: CPU registers, program counter, stack pointer, FPU state, and potentially SIMD registers. This context must be restored exactly when the process resumes.
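A toy model makes the save/restore contract concrete. This is a sketch, not kernel code; the `CPUContext` fields and the `saved` table are illustrative, and a real context would also carry FPU/SIMD and MMU state:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CPUContext:
    """Toy model of the per-process state saved on preemption."""
    program_counter: int = 0
    stack_pointer: int = 0
    registers: Dict[str, int] = field(default_factory=dict)

saved: Dict[str, CPUContext] = {}  # pid -> saved context

def preempt(pid: str, live: CPUContext) -> None:
    # Snapshot the complete context so execution can resume exactly here
    saved[pid] = CPUContext(live.program_counter, live.stack_pointer,
                            dict(live.registers))

def resume(pid: str) -> CPUContext:
    # Hand the saved context back; it must be restored bit-for-bit
    return saved.pop(pid)

ctx = CPUContext(program_counter=0x4005D0, stack_pointer=0x7FFE10,
                 registers={"rax": 42})
preempt("P1", ctx)
restored = resume("P1")
print(hex(restored.program_counter), restored.registers)
```

The invariant the kernel must uphold is exactly the one the assertion-style check expresses: the resumed process cannot observe that it was ever interrupted.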
Code sequences that must not be interrupted require explicit protection. Options include disabling preemption, disabling interrupts, or acquiring a lock (spinlock or mutex) around the critical section.
Modern kernels like Linux support kernel preemption—preempting even kernel code paths (with careful exclusions). This improves latency but adds complexity.
```c
/* Linux kernel preemption control examples */

#include <linux/preempt.h>
#include <linux/spinlock.h>

/* Disable preemption for critical section */
void critical_operation(void)
{
    preempt_disable();  /* Increment preempt count */

    /* Critical code - cannot be preempted */
    do_critical_work();

    preempt_enable();   /* Decrement; may reschedule */
}

/* Spinlock implicitly disables preemption */
static DEFINE_SPINLOCK(my_lock);

void spinlock_protected(void)
{
    spin_lock(&my_lock);    /* Disables preemption + acquires lock */

    /* Protected code */
    modify_shared_data();

    spin_unlock(&my_lock);  /* Re-enables preemption */
}

/* Check if preemption is safe */
void conditional_yield(void)
{
    if (need_resched() && preemptible()) {
        /* Voluntary preemption point */
        cond_resched();
    }
}
```

With priority-based preemption, low-priority processes may never run if higher-priority work constantly arrives. The next page examines this starvation problem and its manifestations.