Operating systems make thousands of scheduling decisions per second, and the quality of these decisions determines whether applications feel responsive or sluggish, whether batch jobs complete in minutes or hours, and whether system resources are utilized efficiently or wasted. To understand, evaluate, and optimize scheduling behavior, we must be able to quantify it.
Process timing calculations form the quantitative backbone of CPU scheduling analysis. These calculations aren't merely academic exercises—they're the metrics that kernel developers, performance engineers, and system architects use daily to evaluate scheduling algorithms, diagnose performance bottlenecks, and make informed design decisions.
In technical interviews, numerical problems involving process timing are exceptionally common because they test both conceptual understanding and computational precision. A candidate who can correctly calculate turnaround times across different scheduling algorithms demonstrates genuine mastery of how schedulers work—not just surface-level familiarity with terminology.
By the end of this page, you will be able to confidently compute all standard scheduling metrics (turnaround time, waiting time, response time, throughput, CPU utilization) for any given process set under any scheduling algorithm. You will develop systematic approaches that eliminate calculation errors and build intuition for how different algorithms affect these metrics.
Before diving into calculations, we must establish precise definitions for the timing metrics we'll compute. Confusion in these definitions is the primary source of calculation errors, so invest time here to internalize these concepts completely.
The Process Timeline Model:
Every process follows a well-defined timeline from creation to termination. Understanding this timeline is essential for accurate timing calculations:
Arrival Time (AT): The instant when a process enters the ready queue and becomes available for scheduling. This is when the process transitions from 'non-existent' (from the scheduler's perspective) to 'ready'.
First Execution Time (FET): The instant when the process first receives CPU time. This is when the process transitions from 'ready' to 'running' for the first time.
Completion Time (CT): The instant when the process finishes all its CPU bursts and terminates. This is when the process transitions from 'running' to 'terminated'.
Burst Time (BT) / Service Time: The total CPU time required by the process to complete its work. This is a property of the process itself, not dependent on scheduling.
I/O Time: Time spent waiting for I/O operations. For pure CPU-bound scheduling problems, this is often zero or ignored.
Burst time is NOT how long a process takes from arrival to completion—that's turnaround time. Burst time is purely the CPU time the process needs. A process with a 5ms burst time might have a 20ms turnaround time if it waits 15ms in the ready queue.
Derived Metrics:
From these fundamental timestamps, we derive the key scheduling metrics:
Turnaround Time (TAT):
TAT = Completion Time - Arrival Time
TAT = CT - AT
Turnaround time measures the total time a process spends in the system from arrival to completion. It encompasses all waiting time, execution time, and any I/O time. This is the metric users most directly experience—it's how long they wait for their job to complete.
Waiting Time (WT):
WT = Turnaround Time - Burst Time
WT = TAT - BT
WT = (CT - AT) - BT
Waiting time measures how long a process spends in the ready queue waiting for CPU time. It excludes execution time but includes time spent waiting while other processes run. This metric directly reflects the overhead introduced by the scheduling algorithm.
Response Time (RT):
RT = First Execution Time - Arrival Time
RT = FET - AT
Response time measures how long a user waits before seeing any response from their process. For interactive systems, this is often more important than turnaround time—users want to see something happening quickly, even if the entire job takes longer.
| Metric | Formula | Measures | Optimized By |
|---|---|---|---|
| Turnaround Time | CT - AT | Total time in system | SJF, SRTF for batch |
| Waiting Time | TAT - BT | Time spent not executing | SJF minimizes average |
| Response Time | FET - AT | Time to first execution | Round Robin for interactive |
| Throughput | Processes / Time | System productivity | Minimizing context switches |
| CPU Utilization | (Busy Time / Total Time) × 100 | Resource efficiency | Avoiding idle time |
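The three derived formulas in the table can be wired together in a few lines. As a minimal sketch, the timestamps below describe the hypothetical process from the earlier note: a 5ms burst that arrives at t=0, waits 15ms, and completes at t=20.

```python
def derive_metrics(arrival, first_run, completion, burst):
    """Return (turnaround, waiting, response) for one process."""
    tat = completion - arrival   # TAT = CT - AT
    wt = tat - burst             # WT  = TAT - BT
    rt = first_run - arrival     # RT  = FET - AT
    return tat, wt, rt

# 5 ms burst, arrives at t=0, first runs at t=15, completes at t=20
tat, wt, rt = derive_metrics(arrival=0, first_run=15, completion=20, burst=5)
print(tat, wt, rt)  # 20 15 15
```

Note that this non-preemptive case gives WT = RT = 15; under preemption the two can differ, as discussed below.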
For non-preemptive algorithms, response time equals waiting time (the process runs uninterrupted once started). For preemptive algorithms, waiting time may be greater than response time because the process might be preempted and wait again after its initial execution.
FCFS is the simplest scheduling algorithm and provides an ideal starting point for timing calculations. Processes execute in arrival order, without preemption, making the mathematics straightforward.
FCFS Execution Model:
The FCFS Calculation Algorithm:
For each process in arrival order:
Start Time = max(Completion Time of previous process, Arrival Time of current process)
Completion Time = Start Time + Burst Time
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
The max() function handles the case where there's a gap between the completion of one process and the arrival of the next (CPU idle time).
Detailed Example: FCFS Scheduling
Consider the following process set:
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 24 |
| P2 | 1 | 3 |
| P3 | 2 | 3 |
Step-by-Step Calculation:
Process P1: arrives at t=0 and starts immediately. Start = 0, CT = 0 + 24 = 24, TAT = 24 - 0 = 24, WT = 24 - 24 = 0, RT = 0.
Process P2: Start = max(24, 1) = 24, CT = 24 + 3 = 27, TAT = 27 - 1 = 26, WT = 26 - 3 = 23, RT = 24 - 1 = 23.
Process P3: Start = max(27, 2) = 27, CT = 27 + 3 = 30, TAT = 30 - 2 = 28, WT = 28 - 3 = 25, RT = 27 - 2 = 25.
```python
def fcfs_scheduling(processes):
    """
    Calculate FCFS scheduling metrics.

    Args:
        processes: List of (process_id, arrival_time, burst_time) tuples

    Returns:
        Dictionary with timing metrics for each process
    """
    # Sort by arrival time
    sorted_processes = sorted(processes, key=lambda x: x[1])

    results = {}
    current_time = 0

    for pid, arrival_time, burst_time in sorted_processes:
        # CPU may be idle if process arrives after current_time
        start_time = max(current_time, arrival_time)

        # If there's a gap, CPU was idle
        if start_time > current_time:
            idle_time = start_time - current_time
            print(f"CPU idle for {idle_time}ms before {pid}")

        completion_time = start_time + burst_time
        turnaround_time = completion_time - arrival_time
        waiting_time = turnaround_time - burst_time
        response_time = start_time - arrival_time

        results[pid] = {
            'arrival_time': arrival_time,
            'burst_time': burst_time,
            'start_time': start_time,
            'completion_time': completion_time,
            'turnaround_time': turnaround_time,
            'waiting_time': waiting_time,
            'response_time': response_time
        }

        current_time = completion_time

    return results

# Example from above
processes = [('P1', 0, 24), ('P2', 1, 3), ('P3', 2, 3)]
results = fcfs_scheduling(processes)
for pid, metrics in results.items():
    print(f"{pid}: TAT={metrics['turnaround_time']}, "
          f"WT={metrics['waiting_time']}, RT={metrics['response_time']}")
```

Average Metrics:
Average Turnaround Time = (24 + 26 + 28) / 3 = 78 / 3 = 26 ms
Average Waiting Time = (0 + 23 + 25) / 3 = 48 / 3 = 16 ms
Average Response Time = (0 + 23 + 25) / 3 = 48 / 3 = 16 ms
Throughput = 3 processes / 30 ms = 0.1 processes/ms = 100 processes/second
The Convoy Effect:
Notice how P2 and P3 (with very short burst times) had to wait 23-25ms because P1 (a long process) arrived first. This is the convoy effect—short processes stuck behind a long process, dramatically increasing average waiting time.
FCFS is non-preemptive, so Response Time = Waiting Time for all processes. The algorithm is fair in arrival order but suffers from the convoy effect. For FCFS, there's only one possible schedule for a given process set—no optimization is possible.
SJF (Shortest Job First) and SRTF (Shortest Remaining Time First) are optimization algorithms that minimize average waiting time. Understanding their timing calculations is essential because they represent the theoretical optimum for this metric.
Non-Preemptive SJF:
When a process completes, the scheduler selects the job with the shortest burst time from the ready queue.
Preemptive SRTF (Shortest Remaining Time First):
At every time unit (or at least at every arrival and completion), compare the remaining time of the running process with that of every ready process, and run whichever is smallest.
SJF Example with the Same Process Set:
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 24 |
| P2 | 1 | 3 |
| P3 | 2 | 3 |
Non-Preemptive SJF Analysis: At t=0, only P1 has arrived, so P1 runs to completion (0-24). At t=24, P2 and P3 (both BT=3) are waiting; breaking the tie by arrival order, P2 runs 24-27 and P3 runs 27-30.
This gives the same result as FCFS! Non-preemptive SJF only helps when multiple processes are waiting at a decision point.
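That claim is easy to check with a short non-preemptive SJF sketch (tie-break assumed to be earlier arrival first):

```python
def sjf_nonpreemptive(processes):
    """Non-preemptive SJF: at each completion, pick the shortest ready job.
    processes: list of (pid, arrival, burst) tuples.
    Returns {pid: (completion, turnaround, waiting)}."""
    remaining = list(processes)
    time, results = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                        # CPU idle: jump to next arrival
            time = min(p[1] for p in remaining)
            continue
        # Shortest burst wins; earlier arrival breaks ties
        pid, at, bt = min(ready, key=lambda p: (p[2], p[1]))
        time += bt
        results[pid] = (time, time - at, time - at - bt)
        remaining.remove((pid, at, bt))
    return results

# Same process set as above: only P1 is ready at t=0, so the schedule
# matches FCFS exactly (average WT = 16 ms)
res = sjf_nonpreemptive([('P1', 0, 24), ('P2', 1, 3), ('P3', 2, 3)])
print(res)  # {'P1': (24, 24, 0), 'P2': (27, 26, 23), 'P3': (30, 28, 25)}
```

Because P1 is the only ready process at t=0, SJF has no shorter job to choose, and the convoy effect survives intact.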
SRTF Example (Preemptive):
| Time | Running | Ready Queue | Event |
|---|---|---|---|
| 0 | P1 | [] | P1 arrives and starts |
| 1 | P2 | [P1(23)] | P2 arrives; its burst (3) < P1's remaining (23) → preempt P1 |
| 2 | P2 | [P1(23), P3(3)] | P3 arrives; P2's remaining (2) < P3's burst (3) → no preemption |
| 4 | P3 | [P1(23)] | P2 completes, P3 starts (BT=3 < 23) |
| 7 | P1 | [] | P3 completes, P1 resumes |
| 30 | - | [] | P1 completes |
SRTF Calculations:
Process P1: first runs at t=0, preempted at t=1, resumes at t=7, completes at t=30. CT = 30, TAT = 30 - 0 = 30, WT = 30 - 24 = 6, RT = 0 - 0 = 0.
Process P2: runs t=1 to t=4. CT = 4, TAT = 4 - 1 = 3, WT = 3 - 3 = 0, RT = 1 - 1 = 0.
Process P3: runs t=4 to t=7. CT = 7, TAT = 7 - 2 = 5, WT = 5 - 3 = 2, RT = 4 - 2 = 2.
| Metric | FCFS | SRTF | Improvement |
|---|---|---|---|
| P1 Turnaround | 24 | 30 | -6 (worse) |
| P2 Turnaround | 26 | 3 | +23 (better) |
| P3 Turnaround | 28 | 5 | +23 (better) |
| Average TAT | 26.0 | 12.67 | 51% improvement |
| Average WT | 16.0 | 2.67 | 83% improvement |
| P1 Response | 0 | 0 | Same |
| P2 Response | 23 | 0 | +23 (better) |
| P3 Response | 25 | 2 | +23 (better) |
SRTF can achieve dramatically lower average waiting times by allowing short processes to 'jump ahead' of long ones. However, this comes at the cost of increased turnaround time for long processes (P1 went from 24 to 30) and potential starvation if short processes keep arriving.
```python
def srtf_scheduling(processes):
    """
    Calculate SRTF (Shortest Remaining Time First) scheduling metrics.
    Uses a timeline simulation approach.
    """
    # Create working copy with remaining times
    procs = {pid: {'at': at, 'bt': bt, 'remaining': bt,
                   'start': None, 'completion': None}
             for pid, at, bt in processes}

    current_time = 0
    completed = 0
    n = len(processes)

    while completed < n:
        # Find processes that have arrived and still need CPU time
        ready = [(pid, data) for pid, data in procs.items()
                 if data['at'] <= current_time and data['remaining'] > 0]

        if not ready:
            # CPU idle, jump to next arrival
            current_time = min(data['at'] for data in procs.values()
                               if data['remaining'] > 0)
            continue

        # Select process with shortest remaining time
        pid, data = min(ready, key=lambda x: x[1]['remaining'])

        # Record first execution time
        if data['start'] is None:
            data['start'] = current_time

        # Find next event (arrival or completion)
        future_arrivals = [d['at'] for d in procs.values()
                           if d['at'] > current_time and d['remaining'] > 0]
        if future_arrivals:
            next_arrival = min(future_arrivals)
            run_time = min(data['remaining'], next_arrival - current_time)
        else:
            run_time = data['remaining']

        data['remaining'] -= run_time
        current_time += run_time

        if data['remaining'] == 0:
            data['completion'] = current_time
            completed += 1

    # Calculate metrics
    results = {}
    for pid, data in procs.items():
        tat = data['completion'] - data['at']
        wt = tat - data['bt']
        rt = data['start'] - data['at']
        results[pid] = {
            'turnaround_time': tat,
            'waiting_time': wt,
            'response_time': rt
        }
    return results
```

Round Robin (RR) introduces the concept of time quantum (time slice), making calculations more complex but also more representative of real interactive systems. Each process receives a fixed time slice, after which it's preempted and placed at the end of the ready queue.
Round Robin Calculation Complexity:
Unlike FCFS or even SRTF, Round Robin requires tracking the ready-queue order, each process's remaining burst time, and where newly arriving processes are placed relative to preempted ones.
Key RR Timing Rules:
- A process that exhausts its quantum is preempted and moved to the tail of the ready queue.
- A process arriving at the same instant another is preempted enters the queue ahead of the preempted process (a common convention, used below; always state yours).
- A process that completes within its quantum releases the CPU immediately.
Round Robin Example:
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 10 |
| P2 | 1 | 4 |
| P3 | 2 | 2 |
Time Quantum = 3ms
Execution Timeline:
| Time | Running | Action | Ready Queue (after action) |
|---|---|---|---|
| 0 | P1 | P1 starts | [] |
| 1 | P1 | P2 arrives | [P2] |
| 2 | P1 | P3 arrives | [P2, P3] |
| 3 | P2 | P1 preempted (7 remaining), P2 starts | [P3, P1] |
| 6 | P3 | P2 preempted (1 remaining), P3 starts | [P1, P2] |
| 8 | P1 | P3 completes, P1 resumes | [P2] |
| 11 | P2 | P1 preempted (4 remaining), P2 starts | [P1] |
| 12 | P1 | P2 completes, P1 resumes | [] |
| 16 | - | P1 completes | [] |
Calculations:
Process P1: first runs at t=0, completes at t=16. TAT = 16 - 0 = 16, WT = 16 - 10 = 6, RT = 0 - 0 = 0.
Process P2: first runs at t=3, completes at t=12. TAT = 12 - 1 = 11, WT = 11 - 4 = 7, RT = 3 - 1 = 2.
Process P3: first runs at t=6, completes at t=8. TAT = 8 - 2 = 6, WT = 6 - 2 = 4, RT = 6 - 2 = 4.
| Process | AT | BT | CT | TAT | WT | RT |
|---|---|---|---|---|---|---|
| P1 | 0 | 10 | 16 | 16 | 6 | 0 |
| P2 | 1 | 4 | 12 | 11 | 7 | 2 |
| P3 | 2 | 2 | 8 | 6 | 4 | 4 |
| Average | | | | 11.0 | 5.67 | 2.0 |
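The timeline above can be reproduced with a short simulation sketch. The tie-break convention (arrivals enter the queue before the re-queued preempted process) matches the trace at t=3, where P2 and P3 sit ahead of P1.

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate Round Robin. processes: list of (pid, arrival, burst).
    Arrivals at a preemption instant enter the queue before the preempted
    process. Returns {pid: {'turnaround', 'waiting', 'response'}}."""
    procs = {pid: {'at': at, 'bt': bt, 'rem': bt, 'first': None, 'ct': None}
             for pid, at, bt in processes}
    arrivals = deque(sorted(processes, key=lambda p: p[1]))
    queue, time = deque(), 0

    def admit(now):
        # Move every process that has arrived by `now` into the ready queue
        while arrivals and arrivals[0][1] <= now:
            queue.append(arrivals.popleft()[0])

    admit(time)
    while queue or arrivals:
        if not queue:                        # CPU idle: jump to next arrival
            time = arrivals[0][1]
            admit(time)
        pid = queue.popleft()
        p = procs[pid]
        if p['first'] is None:               # record first execution time
            p['first'] = time
        run = min(quantum, p['rem'])
        time += run
        p['rem'] -= run
        admit(time)                          # arrivals during / at end of slice
        if p['rem'] > 0:
            queue.append(pid)                # preempted: back of the queue
        else:
            p['ct'] = time
    return {pid: {'turnaround': p['ct'] - p['at'],
                  'waiting': p['ct'] - p['at'] - p['bt'],
                  'response': p['first'] - p['at']}
            for pid, p in procs.items()}

r = round_robin([('P1', 0, 10), ('P2', 1, 4), ('P3', 2, 2)], quantum=3)
print(r['P1'])  # {'turnaround': 16, 'waiting': 6, 'response': 0}
```

Re-running the same function with different `quantum` values is a quick way to experiment with the time-slice trade-offs discussed next.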
The choice of time quantum dramatically affects Round Robin performance. Too small → excessive context switching overhead. Too large → approaches FCFS behavior. A common heuristic is to set the quantum so that 80% of CPU bursts complete within one quantum.
Varying Time Quantum Analysis:
Let's see how different time quanta affect average waiting time for our example. (Convention: arrivals at a preemption instant enter the ready queue ahead of the preempted process, as in the trace above; "context switches" counts every handoff of the CPU from one process to another.)

| Time Quantum | P1 WT | P2 WT | P3 WT | Average WT | Context Switches |
|---|---|---|---|---|---|
| 1 ms | 6 | 5 | 3 | 4.67 | 10 |
| 2 ms | 6 | 5 | 2 | 4.33 | 5 |
| 3 ms | 6 | 7 | 4 | 5.67 | 5 |
| 5 ms | 6 | 4 | 7 | 5.67 | 3 |
| 10 ms (= max BT) | 0 | 9 | 12 | 7.0 | 2 |
| ∞ (FCFS) | 0 | 9 | 12 | 7.0 | 2 |
The optimal quantum depends on the process mix and optimization goals (minimize waiting time vs. maximize responsiveness vs. minimize overhead).
Priority scheduling introduces external ordering based on process importance. Calculations must account for priority comparisons at each decision point, and whether the system is preemptive or non-preemptive.
Priority Convention:
Priority numbering varies by system: some systems treat smaller numbers as higher priority (the convention in most textbooks and in UNIX nice values), while others treat larger numbers as higher priority.
Always clarify which convention is in use before solving problems!
Non-Preemptive Priority:
When the current process completes, select the highest-priority ready process.
Preemptive Priority:
If a higher-priority process arrives while a lower-priority process is running, preempt immediately.
Priority Scheduling Example:
| Process | Arrival Time | Burst Time | Priority (lower = higher) |
|---|---|---|---|
| P1 | 0 | 8 | 3 |
| P2 | 1 | 4 | 1 |
| P3 | 2 | 3 | 2 |
| P4 | 3 | 1 | 4 |
Non-Preemptive Priority:
| Time | Event |
|---|---|
| 0-8 | P1 runs (only process at t=0) |
| 8-12 | P2 runs (highest priority: 1) |
| 12-15 | P3 runs (priority 2 beats P4's priority 4) |
| 15-16 | P4 runs |
Preemptive Priority:
| Time | Running | Event |
|---|---|---|
| 0 | P1 | P1 starts (priority 3) |
| 1 | P2 | P2 arrives with priority 1 → preempt P1 |
| 5 | P3 | P2 completes; P3 (priority 2) outranks P1 (priority 3) and P4 (priority 4) |
| 8 | P1 | P3 completes; P1 resumes |
| 15 | P4 | P1 completes; P4 runs |
| 16 | - | P4 completes |
| Process | AT | BT | Priority | CT | TAT | WT | RT |
|---|---|---|---|---|---|---|---|
| P1 | 0 | 8 | 3 | 15 | 15 | 7 | 0 |
| P2 | 1 | 4 | 1 | 5 | 4 | 0 | 0 |
| P3 | 2 | 3 | 2 | 8 | 6 | 3 | 3 |
| P4 | 3 | 1 | 4 | 16 | 13 | 12 | 12 |
| Average | | | | | 9.5 | 5.5 | 3.75 |
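The preemptive-priority trace can be checked with a short 1ms-step simulation; this is a sketch assuming the lower-number-wins convention and arrival-order tie-breaks.

```python
def priority_preemptive(processes):
    """Preemptive priority scheduling (lower number = higher priority),
    simulated in 1 ms steps. processes: list of (pid, arrival, burst, prio).
    Returns {pid: (completion, turnaround, waiting)}."""
    rem = {pid: bt for pid, at, bt, pr in processes}
    info = {pid: (at, bt, pr) for pid, at, bt, pr in processes}
    time, done = 0, {}
    while rem:
        ready = [pid for pid in rem if info[pid][0] <= time]
        if not ready:                        # CPU idle until next arrival
            time += 1
            continue
        # Highest priority wins; earlier arrival breaks ties
        pid = min(ready, key=lambda p: (info[p][2], info[p][0]))
        rem[pid] -= 1
        time += 1
        if rem[pid] == 0:
            at, bt, _ = info[pid]
            done[pid] = (time, time - at, time - at - bt)
            del rem[pid]
    return done

r = priority_preemptive([('P1', 0, 8, 3), ('P2', 1, 4, 1),
                         ('P3', 2, 3, 2), ('P4', 3, 1, 4)])
print(r['P4'])  # (16, 13, 12) — the starved low-priority process
```

The per-millisecond loop is inefficient for long traces but mirrors exactly how the hand calculation proceeds: re-evaluate priorities at every time step.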
Notice P4 (lowest priority) has extreme waiting time (12ms for a 1ms job). In real systems with continuous arrivals, low-priority processes might wait indefinitely. This is 'starvation.' Solutions include aging (gradually increasing priority of waiting processes) or priority boosting.
Multilevel Feedback Queue (MLFQ) calculations are the most complex because processes can move between queues based on behavior. This requires tracking queue levels, time allotments, and promotion/demotion rules.
MLFQ Parameters to Track: the number of queues, each queue's time quantum (and allotment), the demotion and promotion rules, and any periodic priority-boost interval.
Standard MLFQ Rules:
- A process in a higher-priority queue always runs before processes in lower queues.
- Processes in the same queue run Round Robin with that queue's quantum.
- New processes enter the highest-priority queue.
- A process that uses its full quantum (or allotment) is demoted one level.
- Optionally, all processes are periodically boosted back to the top queue to prevent starvation.
MLFQ Example:
Configuration:
| Process | Arrival Time | Burst Time | I/O Behavior |
|---|---|---|---|
| P1 | 0 | 20 | CPU-bound |
| P2 | 2 | 6 | I/O at 3ms |
Execution Trace:
| Time | Q0 | Q1 | Q2 | Running | Event |
|---|---|---|---|---|---|
| 0 | P1 | | | P1 | P1 enters Q0 |
| 2 | P2, P1 | | | P1 | P2 enters Q0 |
| 4 | P2 | P1 | | P2 | P1 used full quantum → demoted to Q1 |
| 7 | P2 | P1 | | P2 | P2 does I/O after 3ms, stays at Q0 level |
| 7 | | P1 | | P1 | P1 runs from Q1 (no Q0 processes) |
| 10 | P2 | P1 | | P2 | P2 returns from I/O, resumes in Q0 |
| 13 | | P1 | | P1 | P2 finishes, P1 continues |
| 15 | | | P1 | P1 | P1 used full Q1 quantum → demoted to Q2 |
| 27 | | | | - | P1 finishes |
Results: from the trace, P1 completes at t=27 (TAT = 27 - 0 = 27) and P2 completes at t=13 (TAT = 13 - 2 = 11).
For MLFQ problems: (1) Draw the queue state at each time unit, (2) Track remaining time AND current queue for each process, (3) Remember that queue checks happen at quantum boundaries and arrivals, (4) I/O behavior affects queue movement. Always verify that total execution time equals sum of burst times.
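These bookkeeping rules can be captured in a generic MLFQ sketch. Since the quanta and I/O model of the traced example aren't fully specified, this is a simplified version: three queues with assumed quanta of 2, 4, and 8ms, no I/O, and demotion whenever a full quantum is consumed.

```python
from collections import deque

def mlfq(processes, quanta=(2, 4, 8)):
    """Sketch of a 3-level MLFQ without I/O. A job that uses its full
    quantum is demoted one level; jobs always run from the highest
    non-empty queue. `quanta` are assumed per-level time slices (not the
    configuration of the traced example). processes: (pid, arrival, burst).
    Returns {pid: completion_time}."""
    queues = [deque() for _ in quanta]
    arrivals = deque(sorted(processes, key=lambda p: p[1]))
    rem = {pid: bt for pid, at, bt in processes}
    time, ct = 0, {}

    def admit(now):
        # New arrivals always enter the top queue (Q0)
        while arrivals and arrivals[0][1] <= now:
            queues[0].append(arrivals.popleft()[0])

    admit(time)
    while any(queues) or arrivals:
        level = next((i for i, q in enumerate(queues) if q), None)
        if level is None:                    # CPU idle until next arrival
            time = arrivals[0][1]
            admit(time)
            continue
        pid = queues[level].popleft()
        run = min(quanta[level], rem[pid])
        time += run
        rem[pid] -= run
        admit(time)
        if rem[pid] == 0:
            ct[pid] = time
        else:                                # used full quantum: demote
            queues[min(level + 1, len(queues) - 1)].append(pid)
    return ct

# Hypothetical set: P1 is demoted after its first 2 ms slice, letting the
# short job P2 finish early; P1 then drains its remaining 3 ms from Q1.
print(mlfq([('P1', 0, 5), ('P2', 0, 2)]))
```

Real MLFQ implementations add allotments, I/O handling (blocked jobs keep their level), and periodic boosts; this sketch covers only the demotion mechanics needed for hand calculations.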
Beyond per-process metrics, system-wide metrics assess overall scheduler effectiveness.
Throughput:
Throughput = Number of completed processes / Total time period
From our FCFS example:
Throughput = 3 processes / 30 ms = 0.1 processes/ms = 100 processes/second
CPU Utilization:
CPU Utilization = (Total busy time / Total elapsed time) × 100%
Example with Idle Time:
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 5 |
| P2 | 10 | 3 |
| P3 | 12 | 4 |
FCFS Timeline: P1 runs 0-5, CPU idle 5-10, P2 runs 10-13, P3 runs 13-17.
Total elapsed time = 17 ms
Total busy time = 5 + 3 + 4 = 12 ms
Total idle time = 5 ms
CPU Utilization = 12/17 × 100% = 70.6%
Throughput = 3/17 = 0.176 processes/ms
Real systems include context switch time in calculations. If each switch costs 0.1ms and we have 5 context switches, add 0.5ms to total time. This reduces effective CPU utilization and throughput while slightly increasing all turnaround times.
| Metric | Without Overhead | With Overhead | Impact |
|---|---|---|---|
| Total Time | 17.0 ms | 17.5 ms | +2.9% |
| CPU Utilization | 70.6% | 68.6% | -2.0% |
| Effective Throughput | 0.176/ms | 0.171/ms | -2.8% |
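The overhead table is a direct application of the two system-wide formulas. A quick sketch, using the example's numbers and the assumed 5 switches at 0.1ms each:

```python
# Idle-time example: bursts 5 + 3 + 4 = 12 ms busy, timeline ends at t = 17,
# plus (assumed) 5 context switches costing 0.1 ms each.
busy, elapsed = 12, 17
switches, switch_cost = 5, 0.1

util = busy / elapsed * 100                       # without overhead
total_with_overhead = elapsed + switches * switch_cost
util_with_overhead = busy / total_with_overhead * 100
throughput = 3 / total_with_overhead              # processes per ms

print(f"{util:.1f}% {util_with_overhead:.1f}% {throughput:.3f}/ms")
# 70.6% 68.6% 0.171/ms
```

Switch cost enlarges the denominator only: the CPU is busy for the same 12ms of useful work, so utilization and throughput both drop.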
Gantt charts are the standard visualization for CPU scheduling. They provide a timeline view that makes calculations easier to verify and understand.
Gantt Chart Format:
| P1 | P2 | P3 | | P2 |
0 5 8 10 12 15
Each box represents a continuous execution period for one process. The timeline below shows when each period starts and ends. Gaps indicate idle time.
Reading Metrics from Gantt Charts: a process's completion time is the right edge of its last block; its response time is the left edge of its first block minus its arrival time; its waiting time is its turnaround time minus the total width of its blocks (its burst time).
Always draw a Gantt chart when solving scheduling problems, even if not required. It prevents arithmetic errors by making the timeline visual. Double-check that the total of all execution blocks equals the sum of all burst times.
Timing calculations have several common error patterns. Recognizing these helps you debug your work and avoid losing points in exams or interviews.
Verification Checklist:
✓ Sum of all process burst times = Total CPU busy time
✓ For each process: BT ≤ TAT
✓ For each process: WT ≥ 0
✓ For each process: RT ≤ WT (in preemptive systems, RT < WT is possible)
✓ Completion times are in valid order (a process can't complete before it starts)
✓ Gantt chart accounts for all time from first arrival to last completion
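The checklist translates directly into code. A sketch of a checker (the per-process checks only; the dict keys are names chosen here, not a standard API):

```python
def verify_schedule(procs):
    """Sanity-check computed metrics. procs: list of dicts with keys
    at, bt, ct, tat, wt, rt. Returns a list of violations (empty = OK)."""
    errors = []
    for p in procs:
        if p['tat'] != p['ct'] - p['at']:
            errors.append(f"TAT != CT - AT for {p}")
        if p['bt'] > p['tat']:
            errors.append(f"BT > TAT for {p}")
        if p['wt'] < 0:
            errors.append(f"negative WT for {p}")
        if p['rt'] > p['wt']:
            errors.append(f"RT > WT for {p}")
    return errors

# The FCFS example from earlier passes every check:
fcfs = [dict(at=0, bt=24, ct=24, tat=24, wt=0, rt=0),
        dict(at=1, bt=3, ct=27, tat=26, wt=23, rt=23),
        dict(at=2, bt=3, ct=30, tat=28, wt=25, rt=25)]
print(verify_schedule(fcfs))  # []
```

Running your hand-computed tables through a checker like this catches the most common arithmetic slips (negative waiting times, TAT that doesn't match CT - AT) before you present an answer.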
When two processes have equal priority/burst time, how do you break the tie? Common conventions: (1) Earlier arrival time wins, (2) Lower process ID wins, (3) FCFS order within the tie. The problem should specify, but if it doesn't, state your assumption.
Work through these problems to solidify your understanding. Try each problem yourself before checking the solution.
Given processes: P1(AT=0, BT=6), P2(AT=2, BT=3), P3(AT=4, BT=1), P4(AT=5, BT=4). Calculate average waiting time for both FCFS and non-preemptive SJF. Which algorithm performs better?
FCFS Order: P1 → P2 → P3 → P4
| Process | AT | BT | CT | TAT | WT |
|---|---|---|---|---|---|
| P1 | 0 | 6 | 6 | 6 | 0 |
| P2 | 2 | 3 | 9 | 7 | 4 |
| P3 | 4 | 1 | 10 | 6 | 5 |
| P4 | 5 | 4 | 14 | 9 | 5 |
Average WT = (0+4+5+5)/4 = 3.5 ms

Non-Preemptive SJF Order: P1 → P3 → P2 → P4 (only P1 has arrived at t=0; at t=6 the ready queue holds P2, P3, and P4, and SJF picks the shortest burst first)

| Process | AT | BT | CT | TAT | WT |
|---|---|---|---|---|---|
| P1 | 0 | 6 | 6 | 6 | 0 |
| P3 | 4 | 1 | 7 | 3 | 2 |
| P2 | 2 | 3 | 10 | 8 | 5 |
| P4 | 5 | 4 | 14 | 9 | 5 |

Average WT = (0+2+5+5)/4 = 3.0 ms, so non-preemptive SJF performs better (3.0 ms vs 3.5 ms).
Given processes: P1(AT=0, BT=5), P2(AT=1, BT=3), P3(AT=2, BT=1). Time quantum = 2ms. Calculate average response time and average turnaround time.
Gantt Chart:
| P1 | P2 | P3 | P1 | P2 | P1 |
0 2 4 5 7 8 9
Detailed Timeline:
| Process | AT | BT | First Run | CT | RT | TAT |
|---|---|---|---|---|---|---|
| P1 | 0 | 5 | 0 | 9 | 0 | 9 |
| P2 | 1 | 3 | 2 | 8 | 1 | 7 |
| P3 | 2 | 1 | 4 | 5 | 2 | 3 |
Average RT = (0+1+2)/3 = 1.0 ms
Average TAT = (9+7+3)/3 = 6.33 ms
What's Next:
Having mastered process timing calculations, we'll move to memory-related numerical problems: page table size calculations. This involves understanding address translation, page table structures, and the trade-offs between page size and table overhead.
You now have comprehensive knowledge of process timing calculations across all major scheduling algorithms. Practice with varied process sets and scheduling parameters to build speed and accuracy for interviews and exams.