When tasked with implementing a queue, most developers instinctively reach for an array—and for good reason. Arrays are the most fundamental data structure, offering O(1) access by index and cache-friendly memory layouts. The translation from abstract queue operations to array mechanics seems straightforward: elements enter at one end and exit from the other.
But this intuitive approach conceals a devastating inefficiency. As we'll discover, a linear array implementation of a queue creates a paradox: a queue that becomes progressively less useful even as you use it correctly. This page explores why the naive approach fails and sets the stage for the elegant solution.
By the end of this page, you will understand: (1) How to implement a queue using a simple linear array, (2) Why the front and rear pointers must move in one direction, (3) The critical 'wasted space' problem that makes linear implementations impractical, and (4) Why shifting elements is not a viable solution at scale.
Before diving into implementation, let's establish why arrays are a natural choice for queue implementation and understand the constraints they impose.
Why Arrays Work for Queues:
Arrays provide a contiguous block of memory where elements occupy adjacent locations. This property offers several advantages:

- O(1) access to any element by its index
- A cache-friendly layout, since neighboring elements sit next to each other in memory
- Simple, predictable memory usage for a known capacity
However, arrays also impose fixed constraints:

- A static array's capacity is fixed when it is created
- Every slot consumes memory whether or not it currently holds an element
- Elements cannot be inserted or removed at arbitrary positions without moving their neighbors
This page focuses on static arrays with fixed capacity. Dynamic arrays (like Python lists or JavaScript arrays) can resize, but the fundamental 'wasted space' problem we're about to discover applies to both—resizing doesn't eliminate the issue, it merely delays it.
The Queue Contract:
Recall that a queue must support these operations (sketched as a TypeScript interface below):

- enqueue(element): add an element at the rear
- dequeue(): remove and return the element at the front
- front() / peek(): return the front element without removing it
- isEmpty() / isFull(): report whether the queue is empty or at capacity
- size(): report how many elements are currently stored
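To make that contract concrete, here is one way it could be written down in TypeScript. The Queue&lt;T&gt; interface below is illustrative only; the implementations on this page define these methods directly on a class rather than implementing a shared interface.

```typescript
// An illustrative statement of the queue contract; the implementations on
// this page define these methods directly on a class.
interface Queue<T> {
  enqueue(element: T): void; // add an element at the rear
  dequeue(): T;              // remove and return the element at the front
  front(): T;                // return the front element without removing it
  isEmpty(): boolean;        // true when no elements are stored
  isFull(): boolean;         // true when no more elements can be accepted
  size(): number;            // number of elements currently stored
}
```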
The challenge is implementing these operations efficiently using an array's indexed structure. Let's start with the most straightforward approach.
The linear queue design uses two pointers—front and rear—to track the logical boundaries of the queue within the array.
Initial State:
Imagine an array of capacity 5. Initially, both front and rear are set to indicate an empty queue. Different implementations use slightly different conventions, but a common approach is:
- front = 0: points to the first element (or where it will be)
- rear = -1: points to the last element (or -1 if the queue is empty)

Alternatively, some implementations use:

- front = 0, rear = 0, where rear points to the next available slot

We'll use the first convention, where rear points to the last element. This makes the logic easier to follow.
```typescript
class LinearQueue<T> {
  private data: (T | undefined)[];
  private frontIndex: number;
  private rearIndex: number;
  private capacity: number;
  private size: number;

  constructor(capacity: number) {
    this.capacity = capacity;
    this.data = new Array(capacity).fill(undefined);
    this.frontIndex = 0;
    this.rearIndex = -1; // No elements yet
    this.size = 0;
  }

  // isEmpty: O(1) - Check if queue has no elements
  isEmpty(): boolean {
    return this.size === 0;
  }

  // isFull: O(1) - Check if queue has reached capacity
  isFull(): boolean {
    return this.size === this.capacity;
  }

  // getSize: O(1) - Return current number of elements
  getSize(): number {
    return this.size;
  }
}
```

Understanding the Pointers:
| Variable | Purpose | Initial Value |
|---|---|---|
| front | Index of the first element in the queue | 0 |
| rear | Index of the last element in the queue | -1 (empty) |
| size | Count of elements currently in the queue | 0 |
| capacity | Maximum number of elements the array can hold | User-specified |
The relationship between these values is critical:
- When size === 0, the queue is empty (regardless of front/rear values)
- When size === capacity, the queue is full
- The elements currently in the queue occupy the index range [front, rear] (inclusive)

The enqueue operation adds an element to the rear of the queue. In a linear implementation, this means checking for overflow, advancing rear one step to the right, and writing the new element into that slot:
```typescript
// Enqueue: O(1) - Add element to the rear of the queue
enqueue(element: T): void {
  // Overflow check: in a linear queue, once rear reaches the last index,
  // no further enqueues are possible -- even if earlier slots are empty
  if (this.rearIndex === this.capacity - 1) {
    throw new Error("Queue Overflow: Cannot enqueue past the end of the array");
  }

  // Move rear pointer forward and place element
  this.rearIndex++;
  this.data[this.rearIndex] = element;
  this.size++;
}

/*
  Step-by-step visualization for enqueueing A, B, C into a capacity-5 queue:

  Initial state:
    indices: [0] [1] [2] [3] [4]
    data:    [ ] [ ] [ ] [ ] [ ]
    front = 0, rear = -1, size = 0

  After enqueue('A'):
    indices: [0] [1] [2] [3] [4]
    data:    [A] [ ] [ ] [ ] [ ]
    front = 0, rear = 0, size = 1

  After enqueue('B'):
    indices: [0] [1] [2] [3] [4]
    data:    [A] [B] [ ] [ ] [ ]
    front = 0, rear = 1, size = 2

  After enqueue('C'):
    indices: [0] [1] [2] [3] [4]
    data:    [A] [B] [C] [ ] [ ]
    front = 0, rear = 2, size = 3
*/
```

Enqueue is O(1) because we're performing a constant number of operations: one comparison, one increment, one array write, and one size increment. The operation's time doesn't depend on how many elements are already in the queue.
Why Rear Moves Right:
Notice that rear always increases. Each new element is placed at a higher index than the previous element. This creates a mental model of elements "flowing" from left to right through the array:
```
                    EXIT POINT  ENTRY POINT
                         ↓           ↓
[ ] [ ] [ ] [ ] [ ]  →  [A] [B] [C] [ ] [ ]
                         ↑       ↑
                       front   rear
```
This unidirectional flow is intuitive but, as we'll see, creates a fundamental problem.
The dequeue operation removes and returns the element at the front of the queue. In our linear implementation:
```typescript
// Dequeue: O(1) - Remove and return element from the front
dequeue(): T {
  // Check for underflow condition
  if (this.isEmpty()) {
    throw new Error("Queue Underflow: Cannot dequeue from an empty queue");
  }

  // Retrieve element at front
  const element = this.data[this.frontIndex] as T;

  // Clear the slot (helps garbage collection)
  this.data[this.frontIndex] = undefined;

  // Move front pointer forward
  this.frontIndex++;
  this.size--;

  return element;
}

/*
  Continuing from the previous state (A, B, C in the queue):

  Before dequeue():
    indices: [0] [1] [2] [3] [4]
    data:    [A] [B] [C] [ ] [ ]
    front = 0, rear = 2, size = 3

  After dequeue() → returns 'A':
    indices: [0] [1] [2] [3] [4]
    data:    [ ] [B] [C] [ ] [ ]
    front = 1, rear = 2, size = 2

  After dequeue() → returns 'B':
    indices: [0] [1] [2] [3] [4]
    data:    [ ] [ ] [C] [ ] [ ]
    front = 2, rear = 2, size = 1
*/
```

The Critical Observation:
Look carefully at what happens to the array after dequeue operations:
```
index:   [0]     [1]     [2]      [3]  [4]
data:    [ ]     [ ]     [C]      [ ]  [ ]
          ↑       ↑       ↑
      WASTED  WASTED front/rear
```
Indices 0 and 1 are now permanently unusable. We removed elements from those positions, but we cannot place new elements there because rear only moves forward.
This is the beginning of the wasted space problem.
Both front and rear only move in one direction (toward higher indices). Once the front passes an index, that slot is abandoned forever. Once the rear reaches the end of the array, no more enqueue operations are possible—even if most of the array is empty!
The front (or peek) operation returns the element at the front of the queue without removing it. This is a simple read operation:
```typescript
// Front/Peek: O(1) - View front element without removal
// (The index field is named frontIndex so that front() is free to be
//  the accessor method's name.)
front(): T {
  if (this.isEmpty()) {
    throw new Error("Queue is empty: No front element");
  }
  return this.data[this.frontIndex] as T;
}

// Alternative name for the same operation
peek(): T {
  return this.front();
}
```

This operation is straightforward and doesn't contribute to the wasted space problem. It's included for completeness.
Complete Linear Queue Implementation:
Here's the full implementation, combining all the pieces:
```typescript
class LinearQueue<T> {
  private data: (T | undefined)[];
  private frontIndex: number;
  private rearIndex: number;
  private capacity: number;
  private count: number;

  constructor(capacity: number) {
    if (capacity <= 0) {
      throw new Error("Capacity must be positive");
    }
    this.capacity = capacity;
    this.data = new Array(capacity).fill(undefined);
    this.frontIndex = 0;
    this.rearIndex = -1;
    this.count = 0;
  }

  isEmpty(): boolean {
    return this.count === 0;
  }

  isFull(): boolean {
    return this.count === this.capacity;
  }

  size(): number {
    return this.count;
  }

  enqueue(element: T): void {
    // In a linear queue, "no room" means rear has reached the last index --
    // even if earlier slots have been freed by dequeues
    if (this.rearIndex === this.capacity - 1) {
      throw new Error("Queue Overflow");
    }
    this.rearIndex++;
    this.data[this.rearIndex] = element;
    this.count++;
  }

  dequeue(): T {
    if (this.isEmpty()) {
      throw new Error("Queue Underflow");
    }
    const element = this.data[this.frontIndex] as T;
    this.data[this.frontIndex] = undefined;
    this.frontIndex++;
    this.count--;
    return element;
  }

  front(): T {
    if (this.isEmpty()) {
      throw new Error("Queue is empty");
    }
    return this.data[this.frontIndex] as T;
  }

  // Utility: Display queue state for debugging
  display(): void {
    console.log(`Queue state: front=${this.frontIndex}, rear=${this.rearIndex}, size=${this.count}`);
    console.log(`Array: [${this.data.map(e => e ?? '_').join(', ')}]`);
  }
}
```

Now let's witness the fundamental flaw of the linear queue implementation. Consider a queue with capacity 5 that processes a steady stream of tasks:
```typescript
const queue = new LinearQueue<string>(5);

// Normal usage pattern: add some tasks, process some tasks
queue.enqueue("Task1"); // [T1] [ ]  [ ]  [ ]  [ ]   front=0, rear=0
queue.enqueue("Task2"); // [T1] [T2] [ ]  [ ]  [ ]   front=0, rear=1
queue.enqueue("Task3"); // [T1] [T2] [T3] [ ]  [ ]   front=0, rear=2

queue.dequeue();        // [ ]  [T2] [T3] [ ]  [ ]   front=1, rear=2
queue.dequeue();        // [ ]  [ ]  [T3] [ ]  [ ]   front=2, rear=2

queue.enqueue("Task4"); // [ ]  [ ]  [T3] [T4] [ ]   front=2, rear=3
queue.enqueue("Task5"); // [ ]  [ ]  [T3] [T4] [T5]  front=2, rear=4

// Now the queue has 3 elements (T3, T4, T5)
// But rear is at the last index (4)!

queue.dequeue();        // [ ]  [ ]  [ ]  [T4] [T5]  front=3, rear=4
// Queue has 2 elements, but...

queue.enqueue("Task6"); // ❌ OVERFLOW! rear cannot exceed capacity-1

// The enqueue is rejected as if the queue were full, even though indices
// [0], [1], [2] are empty! We have 2 elements in a capacity-5 queue,
// but cannot add more.

/*
  Visual representation:

  index:  [0]     [1]     [2]     [3]   [4]
  data:   [ ]     [ ]     [ ]     [T4]  [T5]
           ↑       ↑       ↑              ↑
        WASTED  WASTED  WASTED   rear (stuck at end)

  3 slots (60%) are wasted and unrecoverable!
*/
```

In a linear queue, after enough operations, you can reach a state where the queue appears 'full' but is mostly empty! The rear pointer hits the end of the array, blocking further enqueues, while dequeued slots at the beginning remain forever unused. This is not a minor inefficiency—it's a fundamental design flaw that makes linear queues impractical for any serious use.
Quantifying the Problem:
Let's calculate the waste ratio in a typical usage pattern:
| Total Operations | Elements in Queue | Wasted Slots | Waste Percentage |
|---|---|---|---|
| 5 enqueues | 5 | 0 | 0% |
| 3 dequeues | 2 | 3 | 60% |
| 2 more dequeues | 0 | 5 | 100% |
After the queue becomes empty through normal usage, every slot is wasted. You cannot enqueue a single element even though the entire array is available!
The Mathematical Inevitability:
For a queue of capacity n:
- After n enqueue operations, rear = n - 1 (at the last index)
- rear cannot decrease
- Therefore, after n total enqueues, the linear queue is permanently exhausted

This means a linear queue can only ever process n elements in its entire lifetime, regardless of how large its capacity is.
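To see this lifetime limit in action, here is a short sketch that uses the complete LinearQueue implementation from earlier on this page. The queue has capacity 3 and is drained back to empty after every enqueue, yet it still cannot accept a fourth element:

```typescript
// A capacity-3 linear queue accepts exactly 3 enqueues in its lifetime,
// even though it is emptied again after every single enqueue.
const lifetime = new LinearQueue<number>(3);

for (let i = 1; i <= 3; i++) {
  lifetime.enqueue(i);             // rear advances to 0, then 1, then 2
  console.log(lifetime.dequeue()); // 1, 2, 3 - the queue is empty again
}

// Every slot is now physically unused (size = 0), but rear is stuck at the
// last index, so the fourth enqueue fails.
try {
  lifetime.enqueue(4);
} catch (e) {
  console.log((e as Error).message); // "Queue Overflow"
}
```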
An obvious "fix" for the wasted space problem is to shift elements after each dequeue, reclaiming the freed slot:
```
Before dequeue:      [A] [B] [C] [ ] [ ]    front=0, rear=2
Dequeue returns A:   [ ] [B] [C] [ ] [ ]    front=1, rear=2   ← Linear approach
With shifting:       [B] [C] [ ] [ ] [ ]    front=0, rear=1   ← Shifted approach
```
By shifting all elements left after removal, we keep front at 0, allowing reuse of slots at the end.
```typescript
// Shifting Dequeue: O(n) - A tempting but costly "fix"
dequeueWithShift(): T {
  if (this.isEmpty()) {
    throw new Error("Queue Underflow");
  }

  // Get front element
  const element = this.data[0] as T;

  // Shift ALL remaining elements left by one position
  for (let i = 0; i < this.count - 1; i++) {
    this.data[i] = this.data[i + 1]; // O(n) copying!
  }

  // Clear the last occupied slot
  this.data[this.count - 1] = undefined;

  // Rear moves back since we shifted
  this.rearIndex--;
  this.count--;

  return element;
}

/*
  Analysis:
  - Each dequeue now takes O(n) time (shifting n-1 elements)
  - For n dequeue operations: O(n²) total time
  - Memory moves are especially expensive due to cache behavior
*/
```

Shifting solves the space problem but destroys time efficiency. Dequeue becomes O(n) instead of O(1). For a queue processing millions of operations, this is unacceptable. We've traded wasted space for wasted time—neither solution is viable.
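As an aside, using a plain JavaScript/TypeScript array as a queue via push() and shift() is essentially this shifting approach: shift() removes the first element and reindexes the remaining ones, which is O(n) in the general case (engines may optimize some patterns, but you should not rely on that for large queues).

```typescript
// A plain array used as a queue: push() enqueues at the back,
// shift() dequeues from the front by moving every remaining element left.
const naiveQueue: string[] = [];

naiveQueue.push("Task1");
naiveQueue.push("Task2");
naiveQueue.push("Task3");

const first = naiveQueue.shift(); // "Task1"; "Task2" and "Task3" are reindexed
console.log(first, naiveQueue);   // Task1 [ 'Task2', 'Task3' ]
```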
Comparative Analysis:
| Approach | Enqueue | Dequeue | Space Efficiency | Practical? |
|---|---|---|---|---|
| Linear (no shift) | O(1) | O(1) | Terrible (degrades to 0%) | ❌ No |
| Linear (with shift) | O(1) | O(n) | Perfect (100%) | ❌ No |
| Ideal solution | O(1) | O(1) | Perfect (100%) | ✅ Yes |
We need both O(1) operations AND full space utilization. The shifting approach sacrifices time for space. The non-shifting approach sacrifices space for time. Neither is acceptable.
Is there a way to have both?
Yes—and the insight is surprisingly elegant. Instead of treating the array as a one-way street where pointers only move forward, what if we allowed them to wrap around to the beginning?
The fundamental problem with linear queues is that pointers are trapped in a one-directional flow. Once rear reaches the end, it's stuck—even if the beginning of the array is empty.
The Key Insight:
What if, instead of viewing the array as a line with a beginning and end:
[0] → [1] → [2] → [3] → [4] → (end, stuck!)
We viewed it as a circle where the end connects back to the beginning?
```
         [0]
      ↗       ↘
   [4]           [1]
      ↖       ↙
      [3]  ←  [2]
```
When rear reaches index 4 and needs to advance, instead of stopping, it wraps around to index 0 (if that slot is empty from a previous dequeue).
This is the circular queue concept: treat the array as a ring buffer where indices cycle through the available positions.
```typescript
// The mathematical trick: modular arithmetic
// Instead of:  rear++
// We use:      rear = (rear + 1) % capacity

// Example with capacity = 5:
let rear = 3;
rear = (rear + 1) % 5; // rear = 4
rear = (rear + 1) % 5; // rear = 0 (wrapped around!)
rear = (rear + 1) % 5; // rear = 1

// The % operator ensures indices stay in the valid range [0, capacity-1]
// and naturally "wrap" from the end back to the beginning
```

Why This Solves the Problem:
With circular indexing:
- rear reaches the last index and needs to advance → wraps to index 0
- front reaches the last index and needs to advance → wraps to index 0
- Slots freed by earlier dequeues become reachable again, so no space is permanently wasted

Visual Transformation:
Linear view (problematic):
[ ] [ ] [ ] [T4] [T5] ← rear stuck at end, slots 0-2 wasted
Circular view (solution):
```
           [0][ ]
        ↗          ↘
  [4][T5]             [1][ ]
        ↖          ↙
      [3][T4]  ←  [2][ ]
```
Next enqueue goes to slot [0], then [1], then [2]!
The circular queue reuses all slots efficiently while maintaining O(1) operations. But implementing it correctly requires careful pointer management, which we'll explore in the next page.
Circular queues solve both problems simultaneously: O(1) enqueue, O(1) dequeue, and perfect space utilization. The key is modular arithmetic for pointer advancement. However, circular queues introduce a new challenge: distinguishing between a full queue and an empty queue (both can have front == rear). We'll address this in upcoming pages.
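As a preview, here is a minimal sketch of those circular movements. It keeps an explicit size counter, which is one way to sidestep the full-versus-empty ambiguity; the next pages build the complete implementation and compare the alternatives.

```typescript
// Preview sketch only: the core of a circular queue is that both pointers
// advance with modular arithmetic instead of plain increments.
class CircularQueueSketch<T> {
  private data: (T | undefined)[];
  private front = 0;  // index of the first element
  private rear = -1;  // index of the last element
  private size = 0;   // explicit count resolves the full-vs-empty ambiguity

  constructor(private capacity: number) {
    this.data = new Array(capacity).fill(undefined);
  }

  enqueue(element: T): void {
    if (this.size === this.capacity) throw new Error("Queue Overflow");
    this.rear = (this.rear + 1) % this.capacity; // wraps from the end back to 0
    this.data[this.rear] = element;
    this.size++;
  }

  dequeue(): T {
    if (this.size === 0) throw new Error("Queue Underflow");
    const element = this.data[this.front] as T;
    this.data[this.front] = undefined;
    this.front = (this.front + 1) % this.capacity; // front wraps too
    this.size--;
    return element;
  }
}
```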
This page established why the naive linear array queue implementation fails in practice and motivated the need for a more sophisticated approach.
What We Covered:

- How to implement a queue with a linear array and two pointers (front and rear)
- Why both pointers move only toward higher indices
- The wasted space problem: slots freed by dequeue can never be reused, so the queue eventually rejects enqueues even when it is mostly empty
- Why shifting elements after each dequeue restores space but degrades dequeue to O(n)
- The circular (ring buffer) idea that resolves both problems at once
What's Next:
The next page dives into the circular queue implementation in detail. We'll explore:

- How enqueue and dequeue advance their pointers with modular arithmetic
- How to manage the front and rear pointers correctly as they wrap around the array
- How to distinguish a full queue from an empty one
You now understand why linear array queues are impractical for real use and why circular queues are necessary. The insight of treating arrays as rings rather than lines is fundamental to efficient queue implementation. Next, we'll implement this elegant solution and tackle the new challenges it introduces.