Throughout this module, we've examined two contrasting scheduling algorithms: LRTF and HRRN.
These algorithms exist within a broader landscape that includes FCFS, SJF, SRTF, Priority Scheduling, and Round Robin. Understanding when each is appropriate requires analyzing their properties across multiple dimensions.
This final section synthesizes everything into a comprehensive comparison, equipping you with the knowledge to make informed scheduling decisions in real-world systems.
By the end of this page, you will be able to compare algorithms across multiple performance metrics, select appropriate algorithms for specific workload characteristics, understand the fundamental tradeoffs in scheduler design, and apply this knowledge to real system design decisions.
Let's begin with a comprehensive overview of each algorithm's key characteristics.
| Algorithm | Selection Criterion | Preemptive? | Starvation Risk | Burst Knowledge |
|---|---|---|---|---|
| FCFS | Arrival order | No | None | Not needed |
| SJF | Shortest burst time | No | High (long jobs) | Required |
| SRTF | Shortest remaining time | Yes | High (long jobs) | Required |
| LRTF | Longest remaining time | Yes | Extreme (short jobs) | Required |
| HRRN | Highest response ratio | No | None (aging) | Required |
| Priority | Assigned priority | Configurable | High (low priority) | Not needed |
| Round Robin | Time slice rotation | Yes (forced) | None | Not needed |
| MLFQ | Queue level + feedback | Yes | Low (with boost) | Learned |
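The Selection Criterion column maps directly onto code: each algorithm is simply a different key function over the ready queue. A minimal sketch (the `Proc` fields and sample values below are illustrative assumptions, not a specific benchmark):

```python
from dataclasses import dataclass

@dataclass
class Proc:
    arrival: int
    burst: int
    remaining: int
    wait: int

# Each algorithm is min() over the ready queue with a different key.
# "Longest" and "highest" criteria negate their key so min() still works.
fcfs_key = lambda p: p.arrival                      # earliest arrival first
sjf_key  = lambda p: p.burst                        # shortest burst first
srtf_key = lambda p: p.remaining                    # shortest remaining first
lrtf_key = lambda p: -p.remaining                   # longest remaining first
hrrn_key = lambda p: -(p.wait + p.burst) / p.burst  # highest response ratio first

ready = [Proc(0, 8, 6, 5), Proc(1, 4, 4, 4), Proc(4, 2, 2, 1)]
print(min(ready, key=srtf_key).burst)   # 2  (shortest remaining time)
print(min(ready, key=hrrn_key).burst)   # 4  (ratio (4+4)/4 = 2.0 is highest)
```

Notice that HRRN's key depends on `wait`, which grows with time; this is why HRRN must recompute keys at every decision, a cost revisited in the complexity section below.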
Key Observations:
Knowledge Requirements: SJF, SRTF, LRTF, and HRRN all require burst time knowledge—either exact or estimated. In practice, this is often unavailable.
Starvation Patterns: The starvation risk varies widely. FCFS and RR have no starvation by design. SJF/SRTF starve long jobs. LRTF starves short jobs. HRRN's aging eliminates starvation.
Preemption Trade-offs: Preemptive algorithms respond faster to high-priority arrivals but incur context switch overhead.
Let's compare algorithms across standard scheduling metrics using a common benchmark workload.
Benchmark Workload:
| Process | Arrival | Burst |
|---|---|---|
| P1 | 0 | 8 |
| P2 | 1 | 4 |
| P3 | 2 | 9 |
| P4 | 3 | 5 |
| P5 | 4 | 2 |
Algorithm Performance Results:
| Algorithm | Avg Waiting | Avg Turnaround | Context Switches | Max Waiting |
|---|---|---|---|---|
| FCFS | 11.4 | 17.0 | 0 | 22 |
| SJF (non-preemptive) | 8.2 | 13.8 | 0 | 17 |
| SRTF | 6.6 | 12.2 | 1 | 17 |
| LRTF | 18.4 | 24.0 | 20+ | 22 |
| HRRN | 8.2 | 13.8 | 0 | 17 |
| Round Robin (q=3) | 14.6 | 20.2 | 6 | 17 |
(Context switches are counted as preemptions, so non-preemptive algorithms score 0; LRTF is re-evaluated each time unit, hence its large count.)
Analysis:
Best Average Waiting Time: SRTF (6.6)
Worst Average Waiting Time: LRTF (18.4)
HRRN Position: 8.2 (identical to SJF on this workload, and close to SRTF's optimum)
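These averages can be re-derived with a short simulation. The sketch below (hypothetical helper functions; unit-time stepping and min-by-remaining tie-breaking are modeling assumptions that can shift preemptive results slightly) computes FCFS and SRTF waiting times for the benchmark:

```python
def fcfs(jobs):
    """jobs: list of (pid, arrival, burst); returns {pid: waiting_time}."""
    t, waits = 0, {}
    for pid, arrival, burst in sorted(jobs, key=lambda j: j[1]):
        t = max(t, arrival)       # CPU may idle until the job arrives
        waits[pid] = t - arrival  # waiting = start - arrival (no preemption)
        t += burst
    return waits

def srtf(jobs):
    """Unit-time SRTF simulation; returns {pid: waiting_time}."""
    remaining = {pid: b for pid, _, b in jobs}
    arrival = {pid: a for pid, a, _ in jobs}
    burst = {pid: b for pid, _, b in jobs}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= t]
        if not ready:
            t += 1
            continue
        p = min(ready, key=lambda q: remaining[q])  # shortest remaining first
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            finish[p] = t
            del remaining[p]
    return {p: finish[p] - arrival[p] - burst[p] for p in finish}

jobs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5), ("P5", 4, 2)]
fw, sw = fcfs(jobs), srtf(jobs)
print("FCFS avg wait:", sum(fw.values()) / 5)  # 11.4
print("SRTF avg wait:", sum(sw.values()) / 5)  # 6.6
```

The same loop structure extends to LRTF (swap `min` for `max` on remaining time) and is a useful way to sanity-check any scheduling table by hand.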
Context Switch Cost:
The table shows pure scheduling time, but context switches add real overhead (1-100+ microseconds each). At high switch rates (like LRTF), this overhead can dominate. Always factor switch cost into real-world comparisons.
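The impact is easy to estimate with back-of-envelope arithmetic. A sketch (the switch rates and the 10 µs cost are assumed figures within the 1-100+ microsecond range above):

```python
def overhead_fraction(switches_per_sec, switch_cost_us):
    """Fraction of each second of CPU time lost to context switching."""
    return switches_per_sec * switch_cost_us / 1e6

print(overhead_fraction(1_000, 10))    # 0.01 -> 1% of CPU time lost
print(overhead_fraction(100_000, 10))  # 1.0  -> switching consumes everything
```

At 1,000 switches/second the cost is negligible; at 100,000 switches/second with the same 10 µs cost, the CPU does nothing but switch. This is the regime a pathological policy like LRTF approaches.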
Efficiency metrics (average waiting time) don't capture the full picture. Fairness measures how equitably CPU time is distributed.
Fairness Metrics:
| Algorithm | Jain's Index | Max/Min Ratio | CV (σ/μ) | Starvation Possible? |
|---|---|---|---|---|
| FCFS | 0.75 | ∞ (min=0) | 0.89 | No |
| SJF | 0.65 | ∞ (min=0) | 1.05 | Yes (long jobs) |
| SRTF | 0.63 | ∞ (min=0) | 1.12 | Yes (long jobs) |
| LRTF | 0.58 | ∞ (min=0) | 1.25 | Yes (short jobs) |
| HRRN | 0.82 | 8.0 | 0.72 | No |
| Round Robin | 0.88 | 3.0 | 0.45 | No |
Key Fairness Insights:
Round Robin is fairest: By design, RR distributes CPU time most evenly. Every process gets regular time slices.
HRRN is second fairest: The aging mechanism ensures no process waits disproportionately. Jain's index of 0.82 is excellent.
SJF/SRTF sacrifice fairness for efficiency: Low Jain's index reflects that long jobs bear disproportionate wait burden.
LRTF is unfair in the opposite direction: Short jobs suffer while long jobs are prioritized.
FCFS is moderately fair: First-come ordering is fair in a procedural sense, though random arrivals can cause variance.
There's generally an inverse relationship between efficiency (minimizing average wait) and fairness (equalizing wait times). HRRN achieves an exceptional balance: average waiting time close to SJF's, with high fairness (Jain's index 0.82). This is why HRRN is valuable despite not being strictly optimal for either metric.
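The fairness columns follow standard formulas, sketched below. The sample waiting times are illustrative (an evenly spread RR-like case versus a skewed SJF-like case), not the benchmark's actual values:

```python
import statistics

def jains_index(xs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 = perfectly even."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

def coeff_variation(xs):
    """CV = sigma / mu (population std dev over mean); lower = more even."""
    return statistics.pstdev(xs) / statistics.mean(xs)

even = [10, 10, 10, 10]   # RR-like: waits spread evenly
skew = [1, 1, 1, 37]      # SJF-like: one long job absorbs nearly all waiting
print(jains_index(even))  # 1.0
print(jains_index(skew))  # ~0.29 -- heavily penalized for concentration
```

Jain's index rewards even distributions regardless of scale, which is why RR scores highest in the table even though its average wait is mediocre.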
Beyond waiting time, schedulers affect system-wide metrics like throughput (jobs/second) and CPU utilization.
Throughput Analysis:
For work-conserving schedulers (all except those allowing idle CPU despite available work), throughput is largely determined by workload, not algorithm choice. Context-switch overhead, however, does vary by algorithm, so effective throughput varies with it.
CPU Utilization:
Utilization = (Total burst time) / (Makespan)
For our benchmark:
```python
"""Throughput and Utilization Analysis

Compare schedulers based on system-wide efficiency metrics."""

from dataclasses import dataclass


@dataclass
class ScheduleResult:
    algorithm: str
    makespan: int     # Total time to complete all jobs
    cpu_time: int     # Time CPU spent executing (not switching)
    switch_time: int  # Time spent on context switches
    idle_time: int    # Time CPU was idle


def compute_metrics(result: ScheduleResult, total_burst: int) -> dict:
    """Compute throughput and utilization metrics."""
    # Effective throughput: jobs completed per unit time.
    # In this model, all jobs complete, so compare makespans.

    # CPU utilization: useful work / total time
    utilization = result.cpu_time / result.makespan

    # Overhead ratio: switching time / useful time
    overhead = result.switch_time / result.cpu_time if result.cpu_time > 0 else 0

    # Theoretical minimum makespan (no overhead)
    theoretical_min = total_burst + max(0, result.idle_time)
    efficiency = theoretical_min / result.makespan

    return {
        'utilization': utilization,
        'overhead': overhead,
        'efficiency': efficiency,
    }


# Example comparison (switch-time totals are illustrative)
results = [
    ScheduleResult("FCFS", 28, 28, 0, 0),
    ScheduleResult("SJF", 28, 28, 0, 0),
    ScheduleResult("SRTF", 29, 28, 1, 0),    # 1 unit switch overhead
    ScheduleResult("LRTF", 38, 28, 10, 0),   # 10 units switch overhead (many switches)
    ScheduleResult("HRRN", 28, 28, 0, 0),
    ScheduleResult("RR-q3", 32, 28, 4, 0),   # 4 units switch overhead
]

total_burst = 28

print("Algorithm     | Utilization | Overhead | Efficiency")
print("-" * 55)
for r in results:
    m = compute_metrics(r, total_burst)
    print(f"{r.algorithm:13} | {m['utilization']:11.1%} | {m['overhead']:8.1%} | {m['efficiency']:10.1%}")

# Output:
# Algorithm     | Utilization | Overhead | Efficiency
# -------------------------------------------------------
# FCFS          |      100.0% |     0.0% |     100.0%
# SJF           |      100.0% |     0.0% |     100.0%
# SRTF          |       96.6% |     3.6% |      96.6%
# LRTF          |       73.7% |    35.7% |      73.7%
# HRRN          |      100.0% |     0.0% |     100.0%
# RR-q3         |       87.5% |    14.3% |      87.5%
```

LRTF's 73.7% efficiency demonstrates how excessive context switching degrades throughput: over a quarter of CPU time is wasted switching. In real systems with heavier switch costs, this would be even worse.
For interactive systems, response time (time from request to first response) matters more than throughput.
Response Time Characteristics:
| Algorithm | Short Job Response | Long Job Response | Interactive Suitability |
|---|---|---|---|
| FCFS | Variable (depends on queue) | Variable | Poor |
| SJF | Excellent | Poor (starvation risk) | Moderate |
| SRTF | Excellent (preemptive) | Poor (starvation risk) | Good for short |
| LRTF | Terrible | Excellent | Terrible |
| HRRN | Good | Good (aging) | Moderate |
| Round Robin | Good (bounded by quantum) | Good (bounded) | Excellent |
| MLFQ | Excellent (high-priority queue) | Good (after boost) | Excellent |
Interactive System Requirements:
Interactive systems (desktops, web servers) require low, predictable response times; bounded worst-case waits; and freedom from starvation.
Best choices for interactive systems: MLFQ and Round Robin, the two algorithms rated Excellent in the table above.
HRRN for interactive systems?
HRRN is non-preemptive, which limits its interactive suitability. A long-running job blocks all others. However, if job lengths are bounded and reasonably short, HRRN performs well.
Human perception research suggests ~100ms is the threshold for 'instantaneous' response. Interactive schedulers should target response times below this. MLFQ with short high-priority quantums (5-10ms) achieves this for interactive processes.
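The quantum bound can be made concrete: a newly arrived process waits at most one quantum (plus switch cost) for each other ready process before its first slice. A sketch, with an illustrative function name and assumed inputs:

```python
def rr_worst_case_response(n_ready, quantum_ms, switch_ms=0.0):
    """Upper bound on time until a newly queued process first runs under RR:
    every other ready process gets at most one full quantum ahead of it."""
    return (n_ready - 1) * (quantum_ms + switch_ms)

print(rr_worst_case_response(20, 5))  # 95.0 ms -- under the ~100 ms threshold
```

With a 5 ms quantum, even 20 competing processes keep first response under the perception threshold; a 10 ms quantum with the same load would not (190 ms). This is the arithmetic behind choosing short quanta for interactive queues.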
Different workloads have different optimal schedulers. Here's a decision matrix based on workload characteristics.
| Workload Type | Best Algorithm | Second Choice | Avoid |
|---|---|---|---|
| Batch processing (known lengths) | SJF/SRTF | HRRN | LRTF |
| Batch processing (unknown lengths) | FCFS or HRRN | RR | SJF (no data) |
| Interactive desktop | MLFQ | RR | FCFS, SJF |
| Real-time systems | EDF, Rate Monotonic | Priority | FCFS, RR |
| Web server (short requests) | SRTF | SJF | FCFS |
| Mixed interactive + batch | MLFQ + Priority | HRRN | Pure SJF |
| Database (varied queries) | HRRN | MLFQ | FCFS |
| HPC cluster (long jobs) | FCFS / Fair Share | Priority | SJF |
Decision Factors: whether burst times are known or estimable, how much starvation the workload can tolerate, how responsiveness trades off against throughput, and how much implementation complexity the system can afford.
There is no universally optimal scheduler. Each algorithm embodies design tradeoffs. The 'best' scheduler depends on workload characteristics, performance priorities, and system constraints. Mastery lies in matching algorithm to context.
Practical scheduler selection must consider implementation complexity and runtime overhead.
| Algorithm | Data Structure | Selection Time | Maintenance | Memory |
|---|---|---|---|---|
| FCFS | Queue | O(1) | O(1) enqueue | O(n) |
| SJF | Min-heap by burst | O(1) | O(log n) insert | O(n) |
| SRTF | Min-heap by remaining | O(1) | O(log n) per event | O(n) |
| LRTF | Max-heap by remaining | O(1) | O(log n) per event | O(n) |
| HRRN | List or heap | O(n) or O(n log n) | O(1) or O(log n) | O(n) |
| Priority | Priority queue | O(1) or O(log n) | O(log n) | O(n) |
| RR | Circular queue | O(1) | O(1) | O(n) |
| MLFQ | Multiple queues | O(q), q = number of queues | O(1) per queue | O(n) |
HRRN Implementation Note:
HRRN's O(n) selection time (when using a list) stems from needing to recompute all response ratios at each decision. This can be optimized by caching ratios and recomputing them lazily, or by maintaining a max-heap and re-heapifying periodically as ratios drift.
For small n (< 1000), O(n) selection is negligible. For large n, optimization matters.
```c
/**
 * Optimized HRRN Implementation
 *
 * Uses lazy ratio computation to reduce selection
 * overhead for large process counts.
 */

typedef struct {
    int pid;
    int arrival_time;
    int burst_time;
    float cached_ratio;
    int ratio_compute_time;  // When ratio was last computed
} Process;

/**
 * Optimized selection: only recompute ratios when necessary
 */
Process* select_next_optimized(Process* ready[], int count, int current_time) {
    if (count == 0) return NULL;
    if (count == 1) return ready[0];

    float max_ratio = -1;
    Process* best = NULL;

    // First pass: update cached ratios
    for (int i = 0; i < count; i++) {
        Process* p = ready[i];
        // Only recompute if cache is stale
        if (p->ratio_compute_time < current_time) {
            int wait = current_time - p->arrival_time;
            p->cached_ratio = 1.0f + (float)wait / p->burst_time;
            p->ratio_compute_time = current_time;
        }
    }

    // Second pass: find maximum
    for (int i = 0; i < count; i++) {
        if (ready[i]->cached_ratio > max_ratio) {
            max_ratio = ready[i]->cached_ratio;
            best = ready[i];
        }
    }
    return best;
}

/**
 * Alternative: use a heap with periodic re-heapification.
 *
 * Maintain a max-heap by cached ratio (MaxHeap and rebuild_heap
 * assumed defined elsewhere). Reheapify:
 * - When a new process arrives
 * - When significant time passes (ratios may have reordered)
 */
#define REHEAP_INTERVAL 10  // Reheapify every 10 time units

void maybe_reheapify(MaxHeap* heap, int current_time, int* last_heapify) {
    if (current_time - *last_heapify >= REHEAP_INTERVAL) {
        // Update all ratios and rebuild the heap
        for (int i = 0; i < heap->size; i++) {
            Process* p = heap->nodes[i];
            int wait = current_time - p->arrival_time;
            p->cached_ratio = 1.0f + (float)wait / p->burst_time;
        }
        rebuild_heap(heap);
        *last_heapify = current_time;
    }
}
```

This module's two focal algorithms—LRTF and HRRN—represent opposite ends of design philosophies. Let's crystallize their contrast.
The Philosophical Spectrum:
```
LRTF ◄───────────────────────────────────────────► SRTF
Worst efficiency                      Best efficiency
Short job starvation              Long job starvation

FCFS ────────── HRRN ────────── SJF
Fair but        Balanced        Optimal but
slow            tradeoff        unfair
```
HRRN occupies the balanced middle—achieving most of SJF's efficiency while eliminating its starvation problem. LRTF exists at the opposite extreme solely to define the worst case.
LRTF teaches us what to avoid; HRRN teaches us what's achievable. Together, they bracket the space of scheduling possibilities. Understanding both makes you a more complete systems engineer.
Use the quick reference below to guide scheduler selection for real systems.
Quick Reference:
| If you need... | Use... |
|---|---|
| Minimal average wait (known bursts) | SRTF |
| No starvation (known bursts) | HRRN |
| Simple, deterministic | FCFS |
| Interactive responsiveness | MLFQ or RR |
| Hard real-time guarantees | EDF or Rate Monotonic |
| Minimal implementation | FCFS or RR |
We've completed a comprehensive exploration of LRTF and HRRN, understanding them within the broader landscape of CPU scheduling.
What You've Achieved:
You can now:
This knowledge forms a foundation for understanding OS schedulers, designing custom scheduling policies, and making informed system architecture decisions.
Congratulations! You've mastered LRTF, HRRN, aging mechanisms, and comprehensive algorithm comparison. You now possess deep understanding of this critical aspect of CPU scheduling—knowledge that will serve you in systems design, performance optimization, and technical interviews.