In our study of CPU scheduling algorithms, we've encountered algorithms designed to minimize waiting time, maximize throughput, and optimize system responsiveness. Shortest Job First (SJF) and its preemptive variant Shortest Remaining Time First (SRTF) represent the theoretical optimum for minimizing average waiting time.
Now, we turn to an algorithm that does precisely the opposite: Longest Remaining Time First (LRTF).
At first glance, studying an algorithm that deliberately maximizes what we typically want to minimize seems pointless—even perverse. Why would any operating system textbook dedicate significant coverage to a scheduling policy that no production system would willingly adopt?
The answer lies in the profound educational value of understanding through contrast. By rigorously analyzing LRTF, we develop deeper intuition for why optimal algorithms work, what makes scheduling genuinely challenging, and how algorithm design decisions ripple through system behavior. LRTF is not a practical scheduler—it is a pedagogical instrument of remarkable power.
By the end of this page, you will understand LRTF's formal definition and operation, analyze its performance characteristics mathematically, recognize why it represents a theoretical worst-case bound, and appreciate how studying extremes illuminates optimal design. This counterintuitive approach will sharpen your intuition for all scheduling algorithms.
Longest Remaining Time First (LRTF) is a preemptive CPU scheduling algorithm that, at every scheduling decision point, selects the process with the longest remaining burst time for execution.
Formal Definition:
Given a set of processes P = {p₁, p₂, ..., pₙ} where each process pᵢ has:
- an arrival time aᵢ,
- a total CPU burst time bᵢ, and
- a remaining time rᵢ(t) at any time t (with rᵢ(aᵢ) = bᵢ).
At any scheduling decision point t, LRTF selects process pⱼ such that:
pⱼ = argmax{rᵢ(t) | pᵢ ∈ ReadyQueue(t)}
Where ReadyQueue(t) is the set of all processes that have arrived (aᵢ ≤ t) and have not yet completed (rᵢ(t) > 0).
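The selection rule itself is a one-line argmax. Here is a minimal Python sketch; the (pid, remaining_time) tuple representation and the lowest-pid tie-break are illustrative, not part of the formal definition:

```python
def lrtf_select(ready_queue):
    """Return the (pid, remaining_time) pair with the longest
    remaining time; ties go to the lower pid."""
    return max(ready_queue, key=lambda p: (p[1], -p[0]))

ready = [(1, 4), (2, 2), (3, 8)]  # pids with their remaining burst times
print(lrtf_select(ready))  # (3, 8): P3 has the longest remaining time
```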
LRTF is inherently preemptive. When a new process arrives with a longer burst time than the currently executing process's remaining time, the scheduler immediately preempts the running process and switches to the newly arrived longer job. This is the mirror image of SRTF's preemption behavior.
Scheduling Decision Points:
LRTF triggers scheduling decisions at:
- the arrival of a new process (which may carry a longer burst than anything running),
- the completion of the running process, and
- every clock tick, because the running process's remaining time steadily shrinks and a waiting process can overtake it even without a new arrival.
Unlike non-preemptive algorithms, LRTF continuously monitors the ready queue and can interrupt the running process at any moment when a "longer" candidate appears.
| Event | Action | Rationale |
|---|---|---|
| New process P arrives with burst B | If B > remaining time of running process, preempt | Always schedule the longest remaining job |
| Running process completes | Select process with max remaining time from ready queue | Standard scheduling decision |
| Tie between processes | Keep the running process; otherwise break ties by arrival time (FCFS) or process ID | Deterministic behavior guarantee |
| Ready queue empty | CPU remains idle | No schedulable work available |
Understanding LRTF's behavior requires tracing through its execution mechanics with precision. The algorithm maintains a max-heap (or equivalent priority structure) of ready processes, keyed by remaining burst time.
Data Structures for LRTF Implementation:
```c
// Process Control Block for LRTF scheduling
typedef struct {
    int pid;               // Process identifier
    int arrival_time;      // Time when process entered system
    int burst_time;        // Total CPU time required
    int remaining_time;    // CPU time still needed
    int start_time;        // First time process got CPU (-1 if never)
    int completion_time;   // Time when process finished
    int waiting_time;      // Total time spent waiting
    int turnaround_time;   // Total time in system
} Process;

// Max-heap node for ready queue
typedef struct {
    Process* process;
    int priority;          // Priority = remaining_time (higher = more priority)
} HeapNode;

// Ready queue as max-heap
typedef struct {
    HeapNode* nodes;
    int size;
    int capacity;
} MaxHeap;

// LRTF maintains:
// 1. MaxHeap ready_queue - processes ready to run, ordered by remaining time
// 2. Process* running    - currently executing process (NULL if CPU idle)
// 3. int current_time    - global simulation clock
```

The LRTF Scheduling Loop:
The core algorithm operates as an event-driven simulation:
```c
#include <limits.h>  // INT_MAX
// Heap helpers (create_max_heap, insert_heap, extract_max, is_empty),
// sort_by_arrival, and min are assumed to be defined elsewhere.

void lrtf_schedule(Process processes[], int n) {
    MaxHeap ready_queue = create_max_heap(n);
    int current_time = 0;
    int completed = 0;
    int process_index = 0;  // Next process to arrive

    // Sort processes by arrival time
    sort_by_arrival(processes, n);

    while (completed < n) {
        // Add all processes that have arrived by current_time
        while (process_index < n &&
               processes[process_index].arrival_time <= current_time) {
            insert_heap(&ready_queue, &processes[process_index]);
            process_index++;
        }

        if (is_empty(&ready_queue)) {
            // No process ready - CPU idle, jump to next arrival
            if (process_index < n) {
                current_time = processes[process_index].arrival_time;
            }
            continue;
        }

        // Extract process with LONGEST remaining time
        Process* current = extract_max(&ready_queue);

        // Determine how long this process runs before the next arrival
        int next_arrival = (process_index < n)
                               ? processes[process_index].arrival_time
                               : INT_MAX;
        int run_time = min(current->remaining_time, next_arrival - current_time);

        // LRTF must also re-evaluate as the running process's remaining
        // time shrinks: a waiting process can overtake it even without a
        // new arrival. With other processes waiting, run at most one
        // time unit before rechecking the heap.
        if (!is_empty(&ready_queue)) {
            run_time = min(run_time, 1);
        }

        // Execute for run_time units
        if (current->start_time == -1) {
            current->start_time = current_time;
        }
        current->remaining_time -= run_time;
        current_time += run_time;

        if (current->remaining_time == 0) {
            // Process completed
            current->completion_time = current_time;
            current->turnaround_time = current->completion_time - current->arrival_time;
            current->waiting_time = current->turnaround_time - current->burst_time;
            completed++;
        } else {
            // Process preempted - reinsert to heap
            insert_heap(&ready_queue, current);
        }
    }
}
```

LRTF causes frequent preemptions because every new arrival potentially has a longer remaining time than the current process (which has been running and thus has decreasing remaining time).
This creates significant context switch overhead—a hidden cost beyond just poor scheduling decisions.
Let's trace through LRTF execution with a concrete example to solidify understanding.
Process Set:
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 6 |
| P2 | 1 | 2 |
| P3 | 2 | 8 |
| P4 | 3 | 3 |
Step-by-Step Execution Trace:
Rather than narrating each time step in prose, the complete execution is easiest to follow as an event table, one row per time unit:
| Time | Event | Running | Ready Queue (remaining) | Action |
|---|---|---|---|---|
| 0 | P1 arrives | — | [P1(6)] | Start P1 |
| 1 | P2 arrives | P1(5) | [P2(2)] | P1 > P2, continue P1 |
| 2 | P3 arrives | P1(4) | [P2(2), P3(8)] | P3 > P1, preempt to P3 |
| 3 | P4 arrives | P3(7) | [P1(4), P2(2), P4(3)] | P3 > all, continue P3 |
| 4 | — | P3(6) | [P1(4), P4(3), P2(2)] | P3 > all, continue P3 |
| 5 | — | P3(5) | [P1(4), P4(3), P2(2)] | P3 > all, continue P3 |
| 6 | — | P3(4) | [P1(4), P4(3), P2(2)] | Tie P3=P1, continue P3 (running process keeps CPU) |
| 7 | — | P3(3) | [P1(4), P4(3), P2(2)] | P1 > P3, preempt to P1 |
| 8 | — | P1(3) | [P3(3), P4(3), P2(2)] | Tie, continue P1 |
| 9 | — | P1(2) | [P3(3), P4(3), P2(2)] | P3=P4 > P1, preempt to P3 |
| 10 | — | P3(2) | [P4(3), P1(2), P2(2)] | P4 > P3, preempt to P4 |
| 11 | — | P4(2) | [P3(2), P1(2), P2(2)] | Tie, continue P4 |
| 12 | — | P4(1) | [P3(2), P1(2), P2(2)] | Others > P4, preempt to P3 |
| 13 | — | P3(1) | [P1(2), P2(2), P4(1)] | P1=P2 > P3, preempt to P1 |
| 14 | — | P1(1) | [P2(2), P3(1), P4(1)] | P2 > P1, preempt to P2 |
| 15 | — | P2(1) | [P1(1), P3(1), P4(1)] | Tie, continue P2 |
| 16 | P2 done | — | [P1(1), P3(1), P4(1)] | P2 completes, pick any |
| 17 | P1 done | — | [P3(1), P4(1)] | P1 completes |
| 18 | P3 done | — | [P4(1)] | P3 completes |
| 19 | P4 done | — | [] | All complete |
Gantt Chart:
|P1|P1|P3|P3|P3|P3|P3|P1|P1|P3|P4|P4|P3|P1|P2|P2|P1|P3|P4|
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
Observations:
Compare this to SRTF on the same workload: SRTF would complete P2 by time 3 (2 time units of execution). Under LRTF, P2 waits until time 14 to even resume, completing at time 16. A 2-unit job takes 15 time units in the system!
Let's compute the performance metrics for our example to quantify LRTF's behavior.
Computing Metrics:
| Process | Arrival | Burst | Completion | Turnaround | Waiting |
|---|---|---|---|---|---|
| P1 | 0 | 6 | 17 | 17 - 0 = 17 | 17 - 6 = 11 |
| P2 | 1 | 2 | 16 | 16 - 1 = 15 | 15 - 2 = 13 |
| P3 | 2 | 8 | 18 | 18 - 2 = 16 | 16 - 8 = 8 |
| P4 | 3 | 3 | 19 | 19 - 3 = 16 | 16 - 3 = 13 |
| Average | — | — | — | 16.0 | 11.25 |
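The averages can be checked mechanically from the completion times in the table (turnaround = completion − arrival, waiting = turnaround − burst); a quick sketch:

```python
rows = {  # pid: (arrival, burst, completion) from the LRTF table above
    "P1": (0, 6, 17),
    "P2": (1, 2, 16),
    "P3": (2, 8, 18),
    "P4": (3, 3, 19),
}
turnaround = {p: c - a for p, (a, b, c) in rows.items()}
waiting = {p: turnaround[p] - b for p, (a, b, c) in rows.items()}

print(sum(turnaround.values()) / len(rows))  # 16.0
print(sum(waiting.values()) / len(rows))     # 11.25
```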
Comparison with SRTF (Optimal):
For the same process set, SRTF would yield:
| Process | Arrival | Burst | Completion | Turnaround | Waiting |
|---|---|---|---|---|---|
| P1 | 0 | 6 | 11 | 11 | 5 |
| P2 | 1 | 2 | 3 | 2 | 0 |
| P3 | 2 | 8 | 19 | 17 | 9 |
| P4 | 3 | 3 | 6 | 3 | 0 |
| Average | — | — | — | 8.25 | 3.5 |
The Performance Gap:
| Metric | LRTF | SRTF (Optimal) | Ratio |
|---|---|---|---|
| Average Turnaround | 16.0 | 8.25 | ~1.94x worse |
| Average Waiting | 11.25 | 3.5 | ~3.2x worse |
| Context Switches | ~10 | ~4 | ~2.5x worse |
LRTF doesn't just perform poorly—it represents a theoretical bound on how bad scheduling can get while still maintaining work conservation (never leaving the CPU idle when work is available).
LRTF and SRTF form a duality in scheduling theory. If SRTF minimizes average waiting time for a given workload, LRTF maximizes it. This isn't coincidence—it's a mathematical consequence of the greedy selection criterion being exactly inverted.
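This duality can be made concrete in code: in a unit-time simulator, flipping a single sign in the selection key turns LRTF into SRTF. The sketch below assumes a particular tie-break (running process first, then earlier arrival); under other tie-breaks the per-process completions can shuffle within the final time slots, but the averages match the tables above:

```python
def simulate(jobs, longest=True):
    """Unit-time preemptive scheduler. jobs: {pid: (arrival, burst)}.

    longest=True selects the LONGEST remaining time (LRTF); False
    selects the shortest (SRTF). Only the sign of the key differs.
    Ties favor the running process, then the earlier arrival.
    """
    remaining = {p: b for p, (a, b) in jobs.items()}
    completion, t, running = {}, 0, None
    while remaining:
        ready = [p for p in remaining if jobs[p][0] <= t]
        if not ready:          # CPU idle until the next arrival
            t += 1
            continue
        sign = 1 if longest else -1
        running = max(ready, key=lambda p: (sign * remaining[p],
                                            p == running, -jobs[p][0]))
        remaining[running] -= 1
        t += 1
        if remaining[running] == 0:
            completion[running] = t
            del remaining[running]
    return completion

jobs = {1: (0, 6), 2: (1, 2), 3: (2, 8), 4: (3, 3)}
for longest in (True, False):
    done = simulate(jobs, longest)
    waits = [done[p] - a - b for p, (a, b) in jobs.items()]
    print("LRTF" if longest else "SRTF", sum(waits) / len(waits))
# LRTF 11.25
# SRTF 3.5
```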
If LRTF is impractical, why dedicate serious study to it? The value lies in multiple dimensions:
In technical interviews, demonstrating knowledge of LRTF signals deep understanding. When asked about scheduling algorithms, mentioning that SJF/SRTF are optimal and LRTF represents the theoretical worst case shows you understand the full spectrum, not just memorized algorithms.
LRTF has genuine applications in theoretical computer science, even if not in practical scheduling:
1. Approximation Algorithm Analysis
When designing approximation algorithms for NP-hard scheduling problems (like minimizing weighted completion time), researchers use LRTF as a baseline for proving approximation ratios. If an algorithm is provably never worse than LRTF, that's a minimum quality guarantee.
2. Online Algorithm Competitive Ratio
In online scheduling (where jobs arrive over time and decisions are irrevocable), the competitive ratio compares online performance to optimal offline performance. LRTF helps establish:
- an upper bound on how poorly any work-conserving policy can perform, and
- a baseline for measuring how close an online algorithm gets to the offline optimum.
3. Game-Theoretic Scheduling
In multi-agent scheduling where jobs belong to selfish agents, Nash equilibria analysis sometimes involves LRTF-like strategies as worst-case outcomes for non-cooperative behavior.
4. Adversarial Inputs
For robust algorithm design, LRTF represents the scheduling an adversary would impose if they controlled the scheduler. Analyzing system behavior under LRTF reveals maximum damage from malicious scheduling.
```python
"""Competitive ratio analysis using LRTF as worst-case bound.

For single-machine scheduling with total weighted completion time:
- Offline optimal: O(P) where P = sum of processing times
- LRTF worst case: O(n * P) where n = number of jobs
- Competitive ratio of online algorithms: often compared between these bounds
"""

# simulate_lrtf / simulate_srtf are assumed to be defined elsewhere.

def compute_lrtf_competitive_ratio(jobs):
    """
    Compute how much worse LRTF is compared to SRTF (optimal).

    Args:
        jobs: List of (arrival_time, burst_time) tuples

    Returns:
        Ratio of LRTF_avg_waiting / SRTF_avg_waiting
    """
    lrtf_metrics = simulate_lrtf(jobs)
    srtf_metrics = simulate_srtf(jobs)

    if srtf_metrics['avg_waiting'] == 0:
        return float('inf')  # SRTF achieved zero waiting

    return lrtf_metrics['avg_waiting'] / srtf_metrics['avg_waiting']

# For any workload, this ratio is >= 1
# For adversarial workloads, it can grow with O(n)
```

LRTF represents the systematic application of the convoy effect. Recall that in FCFS scheduling, a "convoy" forms when short jobs queue behind a long job, waiting unnecessarily.
LRTF guarantees this pattern by always choosing the longest job. Every short job must wait for every long job to complete first. This is the convoy effect elevated to a scheduling policy.
The Convoy Cascade:
- The longest job seizes the CPU while every shorter job queues behind it.
- As the running job's remaining time shrinks, the next-longest job takes over, keeping the shortest work perpetually at the back of the queue.
- Short jobs receive the CPU only at the very end, after all longer work has drained.
Quantifying the Convoy Damage:
Consider a workload with:
- 1 long job requiring 1000ms of CPU time, and
- a batch of short jobs requiring 99ms of CPU time in total.
Under SRTF: Short jobs complete first (99ms total), then long job (1000ms)
Under LRTF: Long job runs first for 1000ms, forcing every short job to wait behind it before receiving any CPU time
This is an 18x degradation in average waiting time from a single scheduling decision difference.
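The two orderings can be compared with a few lines of arithmetic. The exact split of the 99ms of short work is not specified above, so 99 jobs of 1ms each (all arriving at t=0) is an assumption here; the precise multiplier depends on that split:

```python
def avg_waiting(bursts):
    """Average waiting time when jobs, all arriving at t=0, run
    non-preemptively in the given order."""
    waits, t = [], 0
    for burst in bursts:
        waits.append(t)   # each job waits for everything scheduled before it
        t += burst
    return sum(waits) / len(waits)

shorts_first = [1] * 99 + [1000]   # SRTF-like order
long_first = [1000] + [1] * 99     # LRTF-like (convoy) order

print(avg_waiting(shorts_first))   # 49.5
print(avg_waiting(long_first))     # 1038.51
```

With this assumed split the convoy order inflates average waiting by roughly a factor of twenty; however the short work is divided, a single ordering decision dominates the metric.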
In real systems, convoy effects impose invisible taxes on performance. LRTF shows the maximum possible convoy tax. When debugging slow systems, ask: 'Is our scheduling inadvertently creating convoys?' LRTF is the conceptual extreme to avoid.
We've explored LRTF not as a practical algorithm but as a theoretical instrument that illuminates scheduling design.
What's Next:
Having understood the theoretical worst case (LRTF), we now turn to an algorithm that addresses one of the most critical problems in practical scheduling: starvation. The Highest Response Ratio Next (HRRN) algorithm elegantly balances efficiency and fairness, preventing the indefinite postponement that pure SJF can cause.
HRRN introduces the concept of aging—automatically increasing a process's priority the longer it waits. This simple mechanism transforms scheduling from a pure optimization problem into a nuanced balance of throughput and fairness.
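As a preview, the response ratio HRRN computes for each waiting process is (waiting time + burst time) / burst time; a tiny sketch with illustrative numbers:

```python
def response_ratio(waiting_time, burst_time):
    """HRRN priority: grows as a process waits, scaled by job length."""
    return (waiting_time + burst_time) / burst_time

print(response_ratio(0, 2))    # 1.0  -- a short job that just arrived
print(response_ratio(18, 2))   # 10.0 -- the same short job after waiting 18 units
print(response_ratio(18, 20))  # 1.9  -- a long job that waited just as long
```

Short jobs age quickly toward high priority while long jobs still rise steadily, which is exactly the starvation-prevention balance the next section develops.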
You now understand LRTF as a theoretical construct: its formal definition, execution mechanics, performance characteristics, and role in scheduling theory. This understanding of the worst case prepares you to appreciate the elegance of HRRN's starvation-prevention mechanism in the next section.