Imagine you're waiting in line at a coffee shop. The person who arrived before you gets served before you—regardless of whether they're ordering a simple espresso or an elaborate 12-modifier latte. This intuitive concept of "first in, first out" forms the basis of the oldest and simplest CPU scheduling algorithm: First-Come, First-Served (FCFS).
In operating systems, FCFS scheduling allocates CPU time to processes in the exact order they arrive in the ready queue. While this sounds perfectly fair and straightforward, this simplicity conceals profound implications for system performance—implications that shaped the evolution of all subsequent scheduling algorithms.
Understanding FCFS isn't just about learning an algorithm; it's about understanding why more complex scheduling algorithms were invented, and developing the analytical framework to evaluate scheduling tradeoffs.
By the end of this page, you will understand the complete mechanics of FCFS scheduling, including how processes are selected for execution, the data structures involved, the algorithm's formal properties, and the mathematical framework for analyzing its performance. You will be able to trace FCFS execution step-by-step and implement it from first principles.
At its heart, FCFS scheduling follows a single, inviolable rule: the process that arrives first in the ready queue is the one that gets the CPU next. This rule has no exceptions—priority, process size, expected runtime, and urgency are all irrelevant. The only criterion is arrival order.
The Ready Queue as the Foundation:
The ready queue is the central data structure in FCFS scheduling. It contains all processes that are ready to execute—meaning they have all the resources they need except CPU time. When a process transitions from the new state to the ready state (after being admitted by the long-term scheduler), it joins the back of the ready queue. When a process completes I/O and transitions from waiting to ready, it also joins the back of the ready queue.
The short-term scheduler (CPU scheduler) always selects the process at the front of this queue for execution. Once selected, the process runs until one of two events occurs: it terminates, or it blocks (typically by issuing an I/O request) and moves to a device's waiting queue.
Critically, in pure FCFS, there is no preemption—the scheduler never forcibly removes a running process from the CPU. This characteristic makes FCFS a non-preemptive scheduling algorithm.
The non-preemptive nature of FCFS means that once a process starts executing, it keeps the CPU until it voluntarily relinquishes it. This has profound implications: a long-running process can monopolize the CPU, starving other processes of execution time. This is the root cause of FCFS's performance limitations.
Formal Algorithm Definition:
The FCFS scheduling algorithm can be formally defined as follows:

1. When a process becomes ready, append it to the tail of the ready queue.
2. When the CPU becomes free, remove the process at the head of the ready queue and dispatch it.
3. Run the dispatched process without preemption until it terminates or blocks.
4. If the ready queue is empty, the CPU idles until a process arrives.
This simplicity is FCFS's greatest strength for implementation but also the source of its performance weaknesses.
Implementing FCFS scheduling requires understanding the underlying data structures and the scheduler's integration with the rest of the operating system. Let's examine each component in detail.
The Ready Queue Data Structure:
The ready queue in FCFS is implemented as a simple queue data structure—typically a linked list or a circular buffer. Each node in the queue contains a pointer to a Process Control Block (PCB), along with next and possibly previous pointers for queue management.
In most OS implementations, the ready queue is implemented as a doubly-linked list for O(1) insertion at the tail and O(1) removal from the head:
This implementation guarantees:

- O(1) insertion at the tail when a process becomes ready
- O(1) removal from the head when the scheduler selects the next process
- Strict FIFO ordering—a process can never be overtaken by a later arrival
```c
/* FCFS Scheduler Implementation - Kernel-Level */
#include <stddef.h>
#include <stdbool.h>

/* Architecture-specific idle routine (e.g., HLT); provided elsewhere */
void cpu_idle(void);

/* Process Control Block (simplified) */
typedef struct PCB {
    int pid;              /* Process ID */
    int arrival_time;     /* Time when process arrived */
    int burst_time;       /* Total CPU time required */
    int remaining_time;   /* Remaining CPU time */
    int waiting_time;     /* Time spent waiting in ready queue */
    int turnaround_time;  /* Total time in system */
    int completion_time;  /* Time when process completed */
    enum { NEW, READY, RUNNING, WAITING, TERMINATED } state;
    struct PCB *next;     /* Next PCB in queue */
    struct PCB *prev;     /* Previous PCB in queue */
} PCB;

/* Ready Queue Structure */
typedef struct {
    PCB *head;   /* Front of queue (next to schedule) */
    PCB *tail;   /* Back of queue (most recent arrival) */
    int count;   /* Number of processes in queue */
} ReadyQueue;

/* Global scheduler state */
static ReadyQueue ready_queue = { NULL, NULL, 0 };
static PCB *current_process = NULL;  /* Currently running process */
static int current_time = 0;         /* System clock */

/* Initialize the ready queue */
void init_ready_queue(void) {
    ready_queue.head = NULL;
    ready_queue.tail = NULL;
    ready_queue.count = 0;
}

/* Enqueue a process at the tail of the ready queue */
void enqueue_process(PCB *process) {
    process->next = NULL;
    process->prev = ready_queue.tail;
    process->state = READY;
    if (ready_queue.tail != NULL) {
        ready_queue.tail->next = process;
    }
    ready_queue.tail = process;
    if (ready_queue.head == NULL) {
        ready_queue.head = process;
    }
    ready_queue.count++;
}

/* Dequeue the process at the head of the ready queue */
PCB* dequeue_process(void) {
    if (ready_queue.head == NULL) {
        return NULL;  /* Queue is empty */
    }
    PCB *process = ready_queue.head;
    ready_queue.head = process->next;
    if (ready_queue.head != NULL) {
        ready_queue.head->prev = NULL;
    } else {
        ready_queue.tail = NULL;  /* Queue is now empty */
    }
    process->next = NULL;
    process->prev = NULL;
    ready_queue.count--;
    return process;
}

/* Check if ready queue is empty */
bool is_queue_empty(void) {
    return ready_queue.head == NULL;
}

/* FCFS Scheduler - Main scheduling decision */
PCB* fcfs_schedule(void) {
    /* FCFS Rule: Select the process at the head of ready queue */
    if (is_queue_empty()) {
        return NULL;  /* No process to schedule - CPU will go idle */
    }
    PCB *selected = dequeue_process();
    selected->state = RUNNING;
    /* Calculate waiting time: current_time - arrival_time */
    selected->waiting_time = current_time - selected->arrival_time;
    return selected;
}

/* Context switch to selected process */
void dispatch_process(PCB *process) {
    current_process = process;
    /* Restore CPU registers from PCB */
    /* Load program counter, stack pointer, general registers, etc. */
    /* (Hardware-specific implementation) */
    /* Transfer control to the process */
    /* The process will run until it terminates or blocks */
}

/* Handle process completion */
void handle_process_completion(PCB *process) {
    process->completion_time = current_time;
    process->turnaround_time = process->completion_time - process->arrival_time;
    process->state = TERMINATED;
    current_process = NULL;
    /* Cleanup: release resources, notify parent, etc. */
    /* (Implementation depends on OS design) */
}

/* Handle process blocking (I/O request) */
void handle_process_block(PCB *process, int device_id) {
    (void)device_id;  /* Used when adding to the device queue */
    process->state = WAITING;
    current_process = NULL;
    /* Add to the appropriate device queue */
    /* (Implementation depends on I/O subsystem) */
}

/* Main scheduler loop (simplified) */
void scheduler_main_loop(void) {
    while (true) {
        /* Check if current process is still running */
        if (current_process == NULL) {
            /* CPU is idle - need to schedule */
            PCB *next = fcfs_schedule();
            if (next != NULL) {
                dispatch_process(next);
            } else {
                /* No process to run - enter idle state */
                cpu_idle();
            }
        }
        /* Advance system clock */
        current_time++;
        /* Check for process arrivals, I/O completions, etc. */
        /* (Event-driven in real implementations) */
    }
}
```

Key Implementation Observations:
The code above illustrates several critical aspects of FCFS implementation:
O(1) Scheduling Overhead: The scheduling decision is trivially simple—just dequeue from the head. This gives FCFS the lowest possible scheduler overhead of any algorithm.
No Comparison Logic: Unlike priority-based or SJF schedulers, FCFS never compares processes. It doesn't need to examine burst times, priorities, or any other attributes.
FIFO Discipline: The queue strictly maintains arrival order. The enqueue operation always adds to the tail; dequeue always removes from the head.
Waiting Time Calculation: Each process's waiting time is simply current_time - arrival_time at the moment it starts executing.
Non-Preemptive Execution: Once dispatched, a process runs until it voluntarily yields (termination or blocking). There's no timer interrupt handling for preemption.
Understanding FCFS requires examining its theoretical properties—the formal characteristics that determine its behavior under various conditions. These properties provide the analytical foundation for comparing FCFS with other scheduling algorithms.
Queueing Theory Perspective:
FCFS is the scheduling discipline assumed in classical queueing theory's M/M/1 queue. In Kendall's notation, M/M/1 means: the first M denotes Markovian (Poisson) arrivals with exponentially distributed interarrival times, the second M denotes exponentially distributed service times, and the 1 denotes a single server—here, the CPU.
For an M/M/1 queue with arrival rate λ and service rate μ, the key performance metrics are:

- Utilization: ρ = λ / μ (the queue is stable only when ρ < 1)
- Average number of processes in the system: L = ρ / (1 − ρ)
- Average time in the system: W = 1 / (μ − λ)
- Average waiting time before service: Wq = ρ / (μ − λ)
These formulas reveal a fundamental property: as CPU utilization approaches 100% (ρ → 1), average waiting time and queue length approach infinity. This means FCFS performs reasonably at low utilization but degrades dramatically under heavy load.
| Utilization (ρ) | Avg Processes in System (L) | Relative Delay (1/(1−ρ)) | System Behavior |
|---|---|---|---|
| 10% | 0.11 processes | 1.11x | Excellent responsiveness |
| 50% | 1.00 processes | 2.00x | Good performance |
| 70% | 2.33 processes | 3.33x | Noticeable delays |
| 90% | 9.00 processes | 10.00x | Severe delays |
| 95% | 19.00 processes | 20.00x | Near-collapse |
| 99% | 99.00 processes | 100.00x | System unusable |
Order Statistics and Distribution:
In FCFS, process execution order is completely determined by arrival order. If we denote the arrival time of process Pi as A(Pi), then A(Pi) < A(Pj) implies Pi is dispatched before Pj—and, because execution is non-preemptive, Pi also completes before Pj (assuming neither process blocks).
This determinism has important implications: the schedule is fully predictable from arrival times alone, runs are reproducible, and no process can ever be bypassed by a later arrival.
FCFS is inherently starvation-free because it treats all processes equally based on arrival time. Every process that enters the ready queue will eventually reach the front and get CPU time. This guarantee doesn't exist in all scheduling algorithms—priority scheduling, for instance, can starve low-priority processes indefinitely.
Mathematical Properties:
Let's formalize the key metrics for FCFS with n processes:
Definitions (processes indexed 0 through n−1 in arrival order):

- AT[i] — arrival time of process i
- BT[i] — CPU burst time of process i
- CT[i] — completion time of process i
- TAT[i] — turnaround time (total time in the system)
- WT[i] — waiting time in the ready queue
For FCFS, these values are calculated as:
CT[0] = AT[0] + BT[0] (first process)
CT[i] = max(CT[i-1], AT[i]) + BT[i] (subsequent processes)
TAT[i] = CT[i] - AT[i]
WT[i] = TAT[i] - BT[i]
Average TAT = Σ TAT[i] / n
Average WT = Σ WT[i] / n
The max(CT[i-1], AT[i]) term handles the case where a process arrives after the previous one has completed—the CPU may be idle between completions if there's a gap in arrivals.
Let's trace through a complete FCFS scheduling scenario to solidify understanding. We'll follow each process from arrival through completion, calculating all performance metrics.
Scenario: Four Processes with Varying Burst Times
| Process | Arrival Time | Burst Time (CPU) | Description |
|---|---|---|---|
| P1 | 0 | 24 ms | Long computational task |
| P2 | 1 | 3 ms | Quick I/O response handler |
| P3 | 2 | 6 ms | Medium computation |
| P4 | 3 | 4 ms | Short utility process |
Execution Timeline:
t=0: P1 arrives and enters the ready queue. Since the CPU is idle and P1 is the only process, the scheduler selects P1 immediately.
t=1: P1 continues execution (23 ms remaining). P2 arrives and enters the ready queue.
t=2: P1 continues execution (22 ms remaining). P3 arrives and joins behind P2.
t=3: P1 continues execution (21 ms remaining). P4 arrives and joins at the back.
t=24: P1 completes execution. Scheduler selects P2 (front of queue).
t=27: P2 completes execution (ran for 3 ms). Scheduler selects P3.
t=33: P3 completes execution (ran for 6 ms). Scheduler selects P4.
t=37: P4 completes execution. All processes finished.
Performance Metrics Calculation:
| Process | AT | BT | Start Time | CT | TAT | WT |
|---|---|---|---|---|---|---|
| P1 | 0 | 24 | 0 | 24 | 24 - 0 = 24 | 24 - 24 = 0 |
| P2 | 1 | 3 | 24 | 27 | 27 - 1 = 26 | 26 - 3 = 23 |
| P3 | 2 | 6 | 27 | 33 | 33 - 2 = 31 | 31 - 6 = 25 |
| P4 | 3 | 4 | 33 | 37 | 37 - 3 = 34 | 34 - 4 = 30 |
| Average | — | — | — | — | (24+26+31+34)/4 = 28.75 | (0+23+25+30)/4 = 19.5 |
Notice how the short processes (P2: 3ms, P4: 4ms) have waiting times vastly larger than their execution times—P2 waited 23ms to run for just 3ms! This is the essence of FCFS's performance problem: a single long process at the front can cause disproportionate delays for all subsequent processes, regardless of their size.
Alternative Arrival Order Analysis:
What if the same processes arrived in a different order? Let's recalculate with a more favorable ordering:
| Process | AT | BT | CT | TAT | WT |
|---|---|---|---|---|---|
| P2 | 0 | 3 | 3 | 3 | 0 |
| P4 | 0 | 4 | 7 | 7 | 3 |
| P3 | 0 | 6 | 13 | 13 | 7 |
| P1 | 0 | 24 | 37 | 37 | 13 |
| Average | — | — | — | (3+7+13+37)/4 = 15 | (0+3+7+13)/4 = 5.75 |
Dramatic Improvement: with the shortest jobs first, average waiting time drops from 19.5 ms to 5.75 ms (roughly a 70% reduction) and average turnaround time drops from 28.75 ms to 15 ms—nearly half—even though the total work performed is identical.
This comparison reveals why FCFS is so sensitive to arrival order—and why algorithms like Shortest Job First (SJF) can achieve significantly better average performance by reordering execution intelligently.
One of FCFS's greatest strengths is its minimal computational overhead. Let's analyze the complexity of each scheduler operation:
| Operation | Time Complexity | Explanation |
|---|---|---|
| Scheduling Decision | O(1) | Always select head of queue—no comparison needed |
| Process Insertion | O(1) | Append to tail of linked list |
| Process Removal | O(1) | Remove from head of linked list |
| Find Next Process | O(1) | Head pointer provides direct access |
| Queue Traversal | O(n) | Rarely needed in FCFS |
Space Complexity:
FCFS requires O(n) space for n processes, with constant overhead per process: beyond its ordinary bookkeeping fields, each PCB carries only the two queue-link pointers (next and prev), and the queue itself needs just a head pointer, a tail pointer, and a count.
Comparison with Other Algorithms:
| Algorithm | Scheduling Decision | Additional Data Structures |
|---|---|---|
| FCFS | O(1) | Simple FIFO queue |
| SJF (Non-preemptive) | O(n) or O(log n) | Sorted list or min-heap |
| SRTF (Preemptive SJF) | O(log n) | Min-heap for remaining time |
| Priority Scheduling | O(n) or O(log n) | Sorted list or priority queue |
| Round Robin | O(1) | FIFO queue + timer |
| Multi-Level Queue | O(k) | k queues for k priority levels |
In embedded systems, simple batch environments, and other contexts where scheduler overhead must be absolutely minimized, FCFS's O(1) scheduling decision can be a significant advantage. Every microsecond spent in the scheduler is time not spent executing useful workloads.
To fully understand FCFS, we must situate it within the broader landscape of scheduling algorithms. Scheduling algorithms can be classified along several dimensions, and FCFS occupies a specific position in each:

- Preemption: non-preemptive—a running process is never forcibly removed from the CPU
- Selection criterion: arrival time only; burst time, priority, and deadlines are ignored
- Fairness: starvation-free, since every process eventually reaches the head of the queue
- Overhead: O(1) scheduling decision, the minimum possible
- Optimality: not optimal for average waiting time, as the convoy effect demonstrates
Historical Context:
FCFS was the scheduling algorithm used in early batch processing systems of the 1950s and 1960s. Programs were submitted as jobs (often on punch cards), and the operator ran them in the order received. The concept of time-sharing—multiple users interacting with a computer simultaneously—didn't exist yet, so FCFS's poor response time characteristics weren't a concern.
As computing evolved toward interactive use (time-sharing systems like CTSS and Multics in the 1960s), FCFS proved inadequate, spurring the development of Round Robin (preemptive time slicing for responsiveness), Shortest Job First (minimizing average waiting time), priority scheduling, and multi-level queue designs that combine these ideas.
FCFS remains important not for widespread use in modern general-purpose systems, but as the conceptual baseline against which other algorithms are measured, as a building block inside more complex schedulers (for example, ordering processes within a single priority level), and as a practical choice in simple batch and embedded contexts.
Despite its limitations, FCFS remains in active use in specific contexts where its properties are advantageous. Understanding when FCFS is appropriate deepens understanding of scheduling tradeoffs.
Many database systems use FCFS for transaction queuing at the lock manager level. Transactions waiting for a lock are typically granted access in arrival order to prevent starvation and provide predictable behavior. However, this can lead to convoy effects when a long-running transaction holds locks for extended periods.
We've established a comprehensive understanding of the First-Come, First-Served scheduling algorithm. Let's consolidate the essential concepts:

- FCFS selects processes strictly in arrival order, using a simple FIFO ready queue.
- It is non-preemptive: a dispatched process keeps the CPU until it terminates or blocks.
- Scheduling decisions cost O(1)—the lowest overhead of any scheduling algorithm.
- It is starvation-free but highly sensitive to arrival order: one long process ahead of several short ones inflates average waiting time dramatically (the convoy effect).
- Queueing analysis shows waiting time growing without bound as utilization approaches 100%.
What's Next:
In the next page, we'll examine the FIFO Queue data structure in detail—how it's implemented, its properties, and its central role in FCFS scheduling. We'll see how the queue maintains arrival order guarantees and explore variations used in real operating systems.
You now understand the foundational mechanics of FCFS scheduling—the core algorithm, its implementation, theoretical properties, and position in the scheduling landscape. This knowledge forms the basis for understanding why more sophisticated algorithms were developed and how to analyze any scheduling algorithm's characteristics.