Every time you open multiple applications on your computer, listen to music while browsing the web, or run multiple programs simultaneously, an invisible scheduler is making thousands of decisions per second: Which task gets to run next? For how long? What happens when new tasks arrive?
Operating system schedulers, job queue processors, printer spoolers, customer service systems, and manufacturing assembly lines all face the same fundamental challenge: managing a stream of work items that arrive over time and must be processed according to specific rules.
The queue is the natural data structure for these problems. Its FIFO property ensures fairness (tasks are served in the order they arrive), and its dynamic nature allows tasks to be added and removed as the simulation progresses. Understanding queue-based scheduling simulation unlocks insights into how real systems manage concurrent workloads.
By the end of this page, you will understand how to simulate task scheduling using queues, implement round-robin scheduling algorithms, handle time-based task processing, and analyze the performance characteristics of different scheduling strategies. These skills apply directly to operating systems concepts and distributed systems design.
Before diving into implementations, let's establish the conceptual model for task scheduling:
Core Components:
Tasks (Jobs/Processes): Units of work to be completed. Each task has properties like an arrival time, a burst time (total processing required), and optionally a priority or deadline.
Ready Queue: Where tasks wait to be processed. This is our focus—the queue data structure.
Processor (CPU/Worker): The entity that executes tasks. May be one (single-processor) or many (multi-processor).
Scheduler: The algorithm that decides which task runs next.
```typescript
// Core task scheduling model

interface Task {
  id: string;
  arrivalTime: number;    // When task enters the system
  burstTime: number;      // Total processing time needed
  remainingTime: number;  // Time left to complete (for preemptive)
  priority?: number;      // Optional priority level
  deadline?: number;      // Optional deadline
  // Tracking metrics
  startTime?: number;      // When execution first began
  completionTime?: number; // When task finished
}

interface SchedulerMetrics {
  totalTasks: number;
  avgWaitingTime: number;    // Time spent in ready queue
  avgTurnaroundTime: number; // Total time from arrival to completion
  avgResponseTime: number;   // Time from arrival to first execution
  cpuUtilization: number;    // Percentage of time CPU was busy
  throughput: number;        // Tasks completed per unit time
}

// The ready queue - heart of the scheduling system
class ReadyQueue {
  private queue: Task[] = [];

  enqueue(task: Task): void {
    this.queue.push(task);
  }

  dequeue(): Task | undefined {
    return this.queue.shift(); // FIFO: dequeue from front
  }

  peek(): Task | undefined {
    return this.queue[0];
  }

  isEmpty(): boolean {
    return this.queue.length === 0;
  }

  size(): number {
    return this.queue.length;
  }
}

/*
Key insight: The queue serves as a "holding area" for tasks
awaiting their turn at the processor. The scheduling algorithm
determines the ORDER in which tasks are added and removed.

Different algorithms = different queue disciplines:
- FCFS: Pure FIFO, tasks processed in arrival order
- Round Robin: Circular FIFO with time limits
- Priority: May require priority queue, not simple FIFO
*/
```

Scheduling Metrics Explained:
Different scheduling algorithms optimize for different metrics. Understanding these tradeoffs is key to selecting the right approach.
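To make the definitions concrete, here is a small illustrative helper (the names mirror the Task fields used in the code on this page, but `taskMetrics` itself is a sketch, not part of the schedulers below) that derives all three per-task metrics from recorded timestamps:

```typescript
// A completed task's timestamps, as recorded by a simulator.
interface CompletedTask {
  arrivalTime: number;    // when the task entered the system
  startTime: number;      // when it first began executing
  completionTime: number; // when it finished
  burstTime: number;      // total CPU time it required
}

// Derive the three standard scheduling metrics for one task.
function taskMetrics(t: CompletedTask) {
  const turnaround = t.completionTime - t.arrivalTime; // total time in system
  const waiting = turnaround - t.burstTime;            // time spent not executing
  const response = t.startTime - t.arrivalTime;        // delay before first run
  return { turnaround, waiting, response };
}
```

In non-preemptive scheduling, waiting time and response time coincide (a task runs to completion once started); preemptive algorithms pull them apart, which is why round-robin can improve response time without reducing total work.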
The simplest scheduling algorithm is First-Come-First-Served (FCFS), also known as First-In-First-Out (FIFO) scheduling. Tasks are processed in the exact order they arrive—a direct application of the queue's natural behavior.
Algorithm:

1. When a task arrives, enqueue it at the back of the ready queue.
2. When the processor becomes free, dequeue the task at the front.
3. Run that task to completion (no preemption).
4. Repeat until all tasks are done.
```typescript
// First-Come-First-Served (FCFS) Scheduler Simulation

interface Task {
  id: string;
  arrivalTime: number;
  burstTime: number;
  startTime?: number;
  completionTime?: number;
}

interface SimulationResult {
  schedule: { time: number; taskId: string; action: string }[];
  metrics: {
    avgWaitingTime: number;
    avgTurnaroundTime: number;
    avgResponseTime: number;
  };
}

function fcfsScheduler(tasks: Task[]): SimulationResult {
  // Sort by arrival time (in case not already sorted)
  const sortedTasks = [...tasks].sort((a, b) => a.arrivalTime - b.arrivalTime);

  const readyQueue: Task[] = [];
  const schedule: { time: number; taskId: string; action: string }[] = [];
  const completed: Task[] = [];

  let currentTime = 0;
  let taskIndex = 0; // Next task to arrive

  while (completed.length < tasks.length) {
    // Add all tasks that have arrived by currentTime
    while (taskIndex < sortedTasks.length &&
           sortedTasks[taskIndex].arrivalTime <= currentTime) {
      readyQueue.push(sortedTasks[taskIndex]);
      schedule.push({
        time: sortedTasks[taskIndex].arrivalTime,
        taskId: sortedTasks[taskIndex].id,
        action: 'arrived'
      });
      taskIndex++;
    }

    if (readyQueue.length > 0) {
      // Dequeue next task (FCFS: front of queue)
      const task = readyQueue.shift()!;

      // Record start time (response time reference)
      task.startTime = currentTime;
      schedule.push({ time: currentTime, taskId: task.id, action: 'started' });

      // Execute task to completion (non-preemptive)
      currentTime += task.burstTime;
      task.completionTime = currentTime;
      completed.push(task);
      schedule.push({ time: currentTime, taskId: task.id, action: 'completed' });
    } else {
      // CPU idle - jump to next task arrival
      if (taskIndex < sortedTasks.length) {
        currentTime = sortedTasks[taskIndex].arrivalTime;
      }
    }
  }

  // Calculate metrics
  let totalWaiting = 0;
  let totalTurnaround = 0;
  let totalResponse = 0;

  for (const task of completed) {
    const waitingTime = task.startTime! - task.arrivalTime;
    const turnaroundTime = task.completionTime! - task.arrivalTime;
    const responseTime = task.startTime! - task.arrivalTime;

    totalWaiting += waitingTime;
    totalTurnaround += turnaroundTime;
    totalResponse += responseTime;
  }

  return {
    schedule,
    metrics: {
      avgWaitingTime: totalWaiting / tasks.length,
      avgTurnaroundTime: totalTurnaround / tasks.length,
      avgResponseTime: totalResponse / tasks.length
    }
  };
}

/*
Example:
Tasks: [
  { id: "P1", arrivalTime: 0, burstTime: 24 },
  { id: "P2", arrivalTime: 1, burstTime: 3 },
  { id: "P3", arrivalTime: 2, burstTime: 3 }
]

FCFS Execution:
Time 0-24:  P1 runs (arrived first)
Time 24-27: P2 runs
Time 27-30: P3 runs

Timeline:
|-------- P1 (24) --------|-- P2 --|- P3 -|
0                         24       27     30

Metrics:
P1: Waiting = 0,  Turnaround = 24
P2: Waiting = 23, Turnaround = 26
P3: Waiting = 25, Turnaround = 28

Average waiting time: (0 + 23 + 25) / 3 = 16

Notice the "convoy effect": Short tasks wait behind long ones!
*/
```

FCFS suffers from the convoy effect: if a long task arrives first, all subsequent short tasks must wait, dramatically increasing average waiting time. Imagine one person with 100 grocery items in front of 10 people with 1 item each. FCFS is fair but not optimal for average waiting time.
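The convoy effect is easy to quantify. As a minimal sketch (assuming, for simplicity, that every task arrives at time 0, so waiting time depends only on processing order):

```typescript
// Average waiting time for tasks that all arrive at time 0,
// processed non-preemptively in the given order.
function avgWaiting(bursts: number[]): number {
  let elapsed = 0;
  let totalWait = 0;
  for (const b of bursts) {
    totalWait += elapsed; // each task waits for everything ahead of it
    elapsed += b;
  }
  return totalWait / bursts.length;
}

// avgWaiting([24, 3, 3]) → (0 + 24 + 27) / 3 = 17  (long task first)
// avgWaiting([3, 3, 24]) → (0 + 3 + 6) / 3 = 3     (short tasks first)
```

The same workload yields very different average waiting times depending purely on order, which is the intuition behind shortest-job-first scheduling.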
Round-Robin (RR) scheduling addresses FCFS's fairness issues by giving each task a fixed time slice called a quantum. When a task's quantum expires, it is preempted and re-queued at the back, allowing the next task to run.
Algorithm:

1. Dequeue the task at the front of the ready queue.
2. Run it for at most one quantum.
3. If it finishes within the quantum, record its completion.
4. Otherwise, preempt it and enqueue it at the back of the queue.
5. Repeat until all tasks are complete.
Round-robin uses the queue circularly—tasks cycle through until completion.
```typescript
// Round-Robin Scheduler Simulation

interface TaskRR {
  id: string;
  arrivalTime: number;
  burstTime: number;
  remainingTime: number;
  firstRunTime?: number;
  completionTime?: number;
}

interface RRResult {
  ganttChart: { start: number; end: number; taskId: string }[];
  metrics: {
    avgWaitingTime: number;
    avgTurnaroundTime: number;
    avgResponseTime: number;
  };
}

function roundRobinScheduler(tasks: TaskRR[], quantum: number): RRResult {
  // Initialize remaining times
  const tasksCopy = tasks.map(t => ({ ...t, remainingTime: t.burstTime }));
  const sortedTasks = tasksCopy.sort((a, b) => a.arrivalTime - b.arrivalTime);

  const readyQueue: TaskRR[] = [];
  const ganttChart: { start: number; end: number; taskId: string }[] = [];
  const completed: TaskRR[] = [];

  let currentTime = 0;
  let taskIndex = 0;

  // Add initial tasks
  while (taskIndex < sortedTasks.length &&
         sortedTasks[taskIndex].arrivalTime <= currentTime) {
    readyQueue.push(sortedTasks[taskIndex]);
    taskIndex++;
  }

  while (completed.length < tasks.length) {
    if (readyQueue.length > 0) {
      const task = readyQueue.shift()!;

      // Track first run (for response time)
      if (task.firstRunTime === undefined) {
        task.firstRunTime = currentTime;
      }

      // Execute for quantum or remaining time, whichever is smaller
      const executeTime = Math.min(quantum, task.remainingTime);
      ganttChart.push({
        start: currentTime,
        end: currentTime + executeTime,
        taskId: task.id
      });

      currentTime += executeTime;
      task.remainingTime -= executeTime;

      // Add newly arrived tasks BEFORE re-adding current task
      while (taskIndex < sortedTasks.length &&
             sortedTasks[taskIndex].arrivalTime <= currentTime) {
        readyQueue.push(sortedTasks[taskIndex]);
        taskIndex++;
      }

      // If task not complete, add back to queue
      if (task.remainingTime > 0) {
        readyQueue.push(task);
      } else {
        task.completionTime = currentTime;
        completed.push(task);
      }
    } else {
      // CPU idle
      if (taskIndex < sortedTasks.length) {
        currentTime = sortedTasks[taskIndex].arrivalTime;
        readyQueue.push(sortedTasks[taskIndex]);
        taskIndex++;
      }
    }
  }

  // Calculate metrics
  let totalWaiting = 0, totalTurnaround = 0, totalResponse = 0;

  for (const task of completed) {
    const turnaround = task.completionTime! - task.arrivalTime;
    const waiting = turnaround - task.burstTime;
    const response = task.firstRunTime! - task.arrivalTime;

    totalWaiting += waiting;
    totalTurnaround += turnaround;
    totalResponse += response;
  }

  return {
    ganttChart,
    metrics: {
      avgWaitingTime: totalWaiting / tasks.length,
      avgTurnaroundTime: totalTurnaround / tasks.length,
      avgResponseTime: totalResponse / tasks.length
    }
  };
}

/*
Example with quantum = 4:
Tasks: [
  { id: "P1", arrivalTime: 0, burstTime: 24 },
  { id: "P2", arrivalTime: 1, burstTime: 3 },
  { id: "P3", arrivalTime: 2, burstTime: 3 }
]

RR Execution (Q=4):
Time 0-4:   P1 runs (20 remaining) → requeue
Time 4-7:   P2 runs (0 remaining)  → DONE
Time 7-10:  P3 runs (0 remaining)  → DONE
Time 10-14: P1 runs (16 remaining) → requeue
Time 14-18: P1 runs (12 remaining) → requeue
Time 18-22: P1 runs (8 remaining)  → requeue
Time 22-26: P1 runs (4 remaining)  → requeue
Time 26-30: P1 runs (0 remaining)  → DONE

Timeline:
|P1-Q1|P2-P3|P1-Q2|P1-Q3|P1-Q4|P1-Q5|P1-Q6|
0     4     10    14    18    22    26    30

Metrics:
P1: Turnaround = 30 - 0 = 30, Waiting = 30 - 24 = 6
P2: Turnaround = 7 - 1 = 6,   Waiting = 6 - 3 = 3
P3: Turnaround = 10 - 2 = 8,  Waiting = 8 - 3 = 5

Avg waiting: (6 + 3 + 5) / 3 = 4.67 (vs 16 for FCFS!)

Much fairer for short tasks!
*/
```

Choosing the Quantum:
The quantum size critically affects performance: too large, and round-robin degenerates into FCFS, since most tasks finish before being preempted; too small, and context-switch overhead consumes a growing share of CPU time.
Typical values in real operating systems range from 10 to 100 milliseconds.
A common guideline: set quantum such that roughly 80% of tasks complete within a single quantum. This balances responsiveness with efficiency, minimizing context switches for most tasks while still ensuring fairness.
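One way to apply that guideline in code is to take a high percentile of observed burst times. This is an illustrative sketch under the assumption that historical burst data is available; `chooseQuantum` is a hypothetical helper, not a standard API:

```typescript
// Pick a quantum so that roughly `percentile` of tasks finish
// within a single time slice (default: the 80% guideline).
function chooseQuantum(burstTimes: number[], percentile = 0.8): number {
  if (burstTimes.length === 0) return 1; // fallback when no history exists
  const sorted = [...burstTimes].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil(percentile * sorted.length) - 1);
  return sorted[idx];
}

// chooseQuantum([3, 3, 4, 5, 24]) → 5
// A quantum of 5 lets the four short tasks finish in one slice,
// while the long 24-unit task is still preempted regularly.
```

Real kernels tune this value empirically rather than computing it per workload, but the percentile view captures the tradeoff.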
Real systems often have tasks with different characteristics requiring different treatment:
Multi-Level Queue (MLQ) scheduling uses multiple queues, each with its own scheduling algorithm and priority.
```typescript
// Multi-Level Queue (MLQ) Scheduler

type TaskType = 'realtime' | 'interactive' | 'batch';

interface MLQTask {
  id: string;
  type: TaskType;
  arrivalTime: number;
  burstTime: number;
  remainingTime: number;
  priority: number; // Higher = more urgent
}

class MultiLevelQueueScheduler {
  // Three queues with different priorities
  private realtimeQueue: MLQTask[] = [];    // Highest priority, FCFS
  private interactiveQueue: MLQTask[] = []; // Medium priority, Round-Robin
  private batchQueue: MLQTask[] = [];       // Lowest priority, FCFS

  private interactiveQuantum = 4;

  addTask(task: MLQTask): void {
    switch (task.type) {
      case 'realtime':
        this.realtimeQueue.push(task);
        break;
      case 'interactive':
        this.interactiveQueue.push(task);
        break;
      case 'batch':
        this.batchQueue.push(task);
        break;
    }
  }

  getNextTask(): { task: MLQTask; quantum: number } | null {
    // Priority: realtime > interactive > batch
    if (this.realtimeQueue.length > 0) {
      const task = this.realtimeQueue.shift()!;
      return { task, quantum: Infinity }; // Run to completion
    }

    if (this.interactiveQueue.length > 0) {
      const task = this.interactiveQueue.shift()!;
      return { task, quantum: this.interactiveQuantum };
    }

    if (this.batchQueue.length > 0) {
      const task = this.batchQueue.shift()!;
      return { task, quantum: Infinity }; // Run to completion
    }

    return null; // All queues empty
  }

  requeue(task: MLQTask): void {
    // After quantum expires, re-add to appropriate queue
    this.addTask(task);
  }
}

/*
Example scenario:

Time 0: Batch task B1 (burst 100) arrives
Time 5: Interactive task I1 (burst 10) arrives
Time 7: Realtime task R1 (burst 5) arrives

Execution:
0-5:    B1 runs (95 remaining)
5-7:    I1 runs (8 remaining)  ← preempts B1!
7-12:   R1 runs to completion  ← preempts I1!
12-16:  I1 runs (4 remaining)
16-20:  I1 runs to completion
20-115: B1 runs to completion

Higher-priority queues always preempt lower ones!
*/
```

Pure MLQ can cause starvation: if higher-priority queues always have tasks, lower-priority tasks never run. Solutions include time-slicing between queues or aging (promoting tasks that have waited too long).
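Aging can be sketched as a periodic pass over the lower queues. This is an illustrative helper (the `AgingTask` shape and `applyAging` function are hypothetical; real schedulers track wait times per task inside the kernel):

```typescript
// A task annotated with its queue level and when it was last enqueued.
interface AgingTask {
  id: string;
  level: number;      // 0 = highest-priority queue
  enqueuedAt: number; // time it entered its current queue
}

// Promote any task that has waited longer than `threshold`
// one level up, so it eventually reaches a queue that gets CPU time.
function applyAging(
  queues: AgingTask[][], // index 0 = highest priority
  currentTime: number,
  threshold: number      // max tolerated wait before promotion
): void {
  for (let level = 1; level < queues.length; level++) {
    const stay: AgingTask[] = [];
    for (const task of queues[level]) {
      if (currentTime - task.enqueuedAt >= threshold) {
        task.level = level - 1;
        task.enqueuedAt = currentTime;  // reset the wait clock
        queues[level - 1].push(task);   // promote to next-higher queue
      } else {
        stay.push(task);
      }
    }
    queues[level] = stay; // keep only unpromoted tasks at this level
  }
}
```

Calling this on each scheduling tick bounds how long any task can starve: even a batch job climbs one level per threshold interval until it runs.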
Many scheduling problems involve events that occur at specific times: tasks arriving, timers expiring, or resources becoming available. Event-driven simulation uses a queue (often a priority queue sorted by event time) to process events in chronological order.
The Pattern: maintain a queue of future events ordered by time; repeatedly remove the earliest event, advance the simulation clock directly to its timestamp, handle it, and schedule any new events it generates.
```typescript
// Event-Driven Task Scheduling Simulation

type EventType = 'TASK_ARRIVAL' | 'TASK_COMPLETE' | 'QUANTUM_EXPIRE';

interface Event {
  time: number;
  type: EventType;
  taskId?: string;
  task?: Task;
}

interface Task {
  id: string;
  arrivalTime: number;
  burstTime: number;
  remainingTime: number;
}

class EventDrivenSimulator {
  private eventQueue: Event[] = []; // Sorted by time
  private readyQueue: Task[] = [];
  private currentTask: Task | null = null;
  private currentTime = 0;
  private quantum = 4;
  private log: string[] = [];

  constructor(tasks: Task[]) {
    // Schedule all arrival events
    for (const task of tasks) {
      this.scheduleEvent({
        time: task.arrivalTime,
        type: 'TASK_ARRIVAL',
        task: { ...task, remainingTime: task.burstTime }
      });
    }
  }

  private scheduleEvent(event: Event): void {
    // Insert maintaining time order
    let i = 0;
    while (i < this.eventQueue.length && this.eventQueue[i].time <= event.time) {
      i++;
    }
    this.eventQueue.splice(i, 0, event);
  }

  run(): void {
    while (this.eventQueue.length > 0) {
      const event = this.eventQueue.shift()!;
      this.currentTime = event.time;
      this.handleEvent(event);
    }

    console.log("Simulation Log:");
    this.log.forEach(line => console.log(line));
  }

  private handleEvent(event: Event): void {
    switch (event.type) {
      case 'TASK_ARRIVAL':
        this.handleArrival(event.task!);
        break;
      case 'QUANTUM_EXPIRE':
        this.handleQuantumExpire();
        break;
      case 'TASK_COMPLETE':
        this.handleCompletion();
        break;
    }
  }

  private handleArrival(task: Task): void {
    this.log.push(`[Time ${this.currentTime}] Task ${task.id} arrived`);
    this.readyQueue.push(task);
    this.trySchedule();
  }

  private handleQuantumExpire(): void {
    if (this.currentTask && this.currentTask.remainingTime > 0) {
      this.log.push(
        `[Time ${this.currentTime}] Quantum expired for ${this.currentTask.id}, ` +
        `${this.currentTask.remainingTime} remaining`
      );
      this.readyQueue.push(this.currentTask);
      this.currentTask = null;
      this.trySchedule();
    }
  }

  private handleCompletion(): void {
    this.log.push(
      `[Time ${this.currentTime}] Task ${this.currentTask!.id} completed`
    );
    this.currentTask = null;
    this.trySchedule();
  }

  private trySchedule(): void {
    if (this.currentTask !== null) return;    // CPU busy
    if (this.readyQueue.length === 0) return; // Nothing to run

    this.currentTask = this.readyQueue.shift()!;
    const runTime = Math.min(this.quantum, this.currentTask.remainingTime);

    this.log.push(
      `[Time ${this.currentTime}] Starting ${this.currentTask.id} ` +
      `for ${runTime} units`
    );

    this.currentTask.remainingTime -= runTime;

    if (this.currentTask.remainingTime === 0) {
      this.scheduleEvent({
        time: this.currentTime + runTime,
        type: 'TASK_COMPLETE'
      });
    } else {
      this.scheduleEvent({
        time: this.currentTime + runTime,
        type: 'QUANTUM_EXPIRE'
      });
    }
  }
}

/*
This event-driven approach is how real simulators work:
- No busy-waiting or polling
- Jump directly from event to event
- Accurate timing without iterating every time unit
- Scales to millions of events efficiently
*/
```

Advantages of Event-Driven Simulation: the simulator never busy-waits or polls, jumps directly from event to event, keeps exact timing without stepping through every time unit, and scales to millions of events.
This pattern extends beyond scheduling to network simulation, discrete event systems, and game engines.
Beyond CPU scheduling, queues are foundational in distributed systems and background processing. Job queues decouple producers (who create work) from consumers (who process work).
Common patterns include worker pools (bounded concurrency), rate-limited queues (throttled calls to external services), and priority job queues (urgent work first):
```typescript
// Common Job Queue Patterns

// Pattern 1: Simple Worker Pool
class WorkerPool {
  private jobQueue: (() => Promise<void>)[] = [];
  private activeWorkers = 0;
  private maxWorkers: number;

  constructor(maxWorkers: number) {
    this.maxWorkers = maxWorkers;
  }

  enqueue(job: () => Promise<void>): void {
    this.jobQueue.push(job);
    this.tryProcessNext();
  }

  private async tryProcessNext(): Promise<void> {
    if (this.activeWorkers >= this.maxWorkers) return;
    if (this.jobQueue.length === 0) return;

    const job = this.jobQueue.shift()!;
    this.activeWorkers++;

    try {
      await job();
    } finally {
      this.activeWorkers--;
      this.tryProcessNext(); // Check for more work
    }
  }
}

// Usage:
const pool = new WorkerPool(4); // 4 concurrent workers
for (let i = 0; i < 100; i++) {
  pool.enqueue(async () => {
    console.log(`Processing job ${i}`);
    await simulateWork(1000); // 1 second
  });
}

// Pattern 2: Rate-Limited Queue
class RateLimitedQueue {
  private queue: { job: () => Promise<void>; retries: number }[] = [];
  private processing = false;
  private requestsPerSecond: number;

  constructor(rps: number) {
    this.requestsPerSecond = rps;
  }

  enqueue(job: () => Promise<void>): void {
    this.queue.push({ job, retries: 0 });
    this.startProcessing();
  }

  private async startProcessing(): Promise<void> {
    if (this.processing) return;
    this.processing = true;

    while (this.queue.length > 0) {
      const { job, retries } = this.queue.shift()!;

      try {
        await job();
      } catch (error) {
        if (retries < 3) {
          // Re-queue at the back to retry later (up to 3 attempts)
          this.queue.push({ job, retries: retries + 1 });
        }
      }

      // Wait to enforce rate limit
      await sleep(1000 / this.requestsPerSecond);
    }

    this.processing = false;
  }
}

// Pattern 3: Priority Job Queue
interface PriorityJob {
  priority: number; // Higher = more urgent
  job: () => Promise<void>;
}

class PriorityJobQueue {
  private queue: PriorityJob[] = [];

  enqueue(job: () => Promise<void>, priority: number): void {
    // Insert in priority order
    const item = { priority, job };
    let i = 0;
    while (i < this.queue.length && this.queue[i].priority >= priority) {
      i++;
    }
    this.queue.splice(i, 0, item);
  }

  async processNext(): Promise<boolean> {
    if (this.queue.length === 0) return false;
    const { job } = this.queue.shift()!;
    await job();
    return true;
  }
}

async function simulateWork(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function sleep(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}

/*
These patterns appear in:
- Message brokers (RabbitMQ, Kafka)
- Task queues (Celery, Sidekiq, Bull)
- Cloud services (AWS SQS, Azure Queue)
- Background job processors
*/
```

| Pattern | Use Case | Queue Type | Key Feature |
|---|---|---|---|
| Worker Pool | CPU-bound parallel work | FIFO | Concurrency limit |
| Rate Limited | API calls, external services | FIFO | Request throttling |
| Priority Queue | Mixed importance tasks | Priority-ordered | Urgent tasks first |
| Dead Letter | Handling failures | Failed items | Error recovery |
| Delay Queue | Scheduled execution | Time-sorted | Future scheduling |
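The dead-letter pattern from the table can be sketched as follows. This is an illustrative in-memory version (the `DeadLetterQueue` class is hypothetical; production brokers such as RabbitMQ or AWS SQS implement dead-lettering at the infrastructure level):

```typescript
// Jobs that fail more than maxRetries times are parked in a separate
// "dead letter" list for inspection instead of being retried forever.
class DeadLetterQueue<T> {
  private main: { item: T; attempts: number }[] = [];
  private dead: { item: T; error: unknown }[] = [];

  constructor(private maxRetries: number) {}

  enqueue(item: T): void {
    this.main.push({ item, attempts: 0 });
  }

  async processNext(handler: (item: T) => Promise<void>): Promise<void> {
    const entry = this.main.shift();
    if (!entry) return;
    try {
      await handler(entry.item);
    } catch (error) {
      entry.attempts++;
      if (entry.attempts > this.maxRetries) {
        this.dead.push({ item: entry.item, error }); // give up: park for inspection
      } else {
        this.main.push(entry); // retry later, from the back of the queue
      }
    }
  }

  get deadLetters() {
    return this.dead;
  }
}
```

Separating failures this way keeps one poison message from blocking the queue, and preserves the failed payload and error for debugging or manual replay.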
Queue-based task scheduling appears throughout computing and beyond. Understanding these examples reinforces the pattern and illustrates its versatility.
Once you recognize the queue-based scheduling pattern, you'll see it everywhere: from call centers to hospital triage, from elevator algorithms to network packet switching. The abstraction is universal.
Task scheduling simulation showcases the queue's role in managing ordered work. Let's consolidate the key concepts:

FCFS: Pure FIFO; simple and fair by arrival order, but vulnerable to the convoy effect.
Round-Robin: FIFO plus a time quantum; preemption keeps short tasks responsive.
Multi-Level Queues: Separate queues per task class, with priorities between them and aging to prevent starvation.
Event-Driven Simulation: A time-ordered event queue lets the simulator jump from event to event without polling.
Job Queues: Decouple producers from consumers; worker pools, rate limiting, and priority ordering all build on the same FIFO core.
What's Next:
We've seen how queues manage work over time. The next page examines the most fundamental queue behavior pattern: First-Come-First-Serve (FCFS) Processing—a broader look at how FIFO ordering applies beyond CPU scheduling to any system where order of arrival determines order of service.
You now understand queue-based task scheduling: from simple FCFS to sophisticated multi-level systems. These patterns form the foundation of operating systems, distributed processing, and countless real-world applications where work must be managed fairly and efficiently.