How does JavaScript, famously single-threaded, handle thousands of concurrent connections? How does Node.js serve millions of requests without spawning millions of threads? The answer lies in a deceptively simple abstraction: the event loop.
The event loop is the runtime heartbeat that makes asynchronous programming possible. It continuously monitors for completed operations, dispatches their callbacks, and manages the flow of an application's execution. Without understanding the event loop, async code behavior appears magical—or worse, unpredictable.
This page demystifies the event loop: its architecture, phases, underlying OS mechanisms, and how different platforms implement this critical abstraction.
By the end of this page, you will understand event loop architecture and phases, I/O multiplexing mechanisms (select, poll, epoll, kqueue), the JavaScript/Node.js event loop in detail, task queues and microtask queues, and how event loops achieve high concurrency on single threads.
An event loop is a programming pattern where a program waits for and dispatches events or messages. It's the central execution model for event-driven and asynchronous programming.
The core algorithm:
while (application_is_running) {
events = wait_for_events(); // Block until something happens
for (event in events) {
dispatch(event); // Call the appropriate handler
}
}
That's it. The event loop is fundamentally a forever loop that blocks until something happens, then dispatches each event to its handler.
Why 'loop'?
The term emphasizes the continuous, iterative nature. Unlike a linear program that runs from start to end, an event-driven program loops continuously, responding to external stimuli.
# A simple event loop (conceptual implementation)
import heapq
import select
import time

class SimpleEventLoop:
    def __init__(self):
        self.running = True
        self.callbacks = {}  # fd -> callback
        self.timers = []     # (deadline, callback)

    def add_reader(self, fd, callback):
        """Register callback for when fd is readable"""
        self.callbacks[fd] = callback

    def add_timer(self, delay, callback):
        """Register callback to run after delay seconds"""
        deadline = time.time() + delay
        heapq.heappush(self.timers, (deadline, callback))

    def run_forever(self):
        """The actual event loop"""
        while self.running:
            # Calculate how long until the next timer
            timeout = self._next_timer_timeout()

            # Wait for I/O or timeout
            # select() is the I/O multiplexing syscall
            readable, _, _ = select.select(
                list(self.callbacks.keys()),  # fds to watch
                [],                           # writable (unused)
                [],                           # errors (unused)
                timeout                       # max wait time
            )

            # Process ready I/O handlers
            for fd in readable:
                callback = self.callbacks[fd]
                callback(fd)

            # Process expired timers
            self._run_expired_timers()

    def _next_timer_timeout(self):
        if not self.timers:
            return None  # Block indefinitely
        deadline, _ = self.timers[0]
        return max(0, deadline - time.time())

    def _run_expired_timers(self):
        now = time.time()
        while self.timers and self.timers[0][0] <= now:
            _, callback = heapq.heappop(self.timers)
            callback()

# Usage
loop = SimpleEventLoop()
loop.add_reader(socket.fileno(), handle_socket_data)
loop.add_timer(5.0, lambda: print("5 seconds passed"))
loop.run_forever()

Key properties of event loops: a single thread of execution, callbacks that run to completion without preemption, and efficient waiting on many event sources at once. Because everything shares one thread, blocking anywhere blocks everything.
Event loop vs thread-per-connection:
| Aspect | Event Loop | Thread-per-Connection |
|---|---|---|
| Threads | 1 (usually) | N (one per connection) |
| Memory | Low (event state only) | High (stack per thread) |
| Context switches | Minimal | Frequent |
| Parallelism | None (need worker threads) | Natural |
| Complexity | Callbacks/async | Linear code |
| Best for | I/O-bound | CPU-bound |
Event loops power many systems: GUI frameworks (Windows message loop, GTK main loop), game engines (game loop), web servers (nginx, Node.js), GUI toolkits (Qt, wxWidgets), and embedded systems. The pattern predates JavaScript by decades.
At the heart of every event loop is I/O multiplexing—the ability to monitor multiple I/O sources simultaneously and be notified when any becomes ready. This is how a single thread can 'wait' on thousands of sockets without blocking on each one.
The problem without multiplexing:
// Without multiplexing: Can only wait on ONE socket
read(socket1, buffer, size); // Blocks until socket1 has data
// Can't check socket2, socket3, etc. while blocked!
With multiplexing:
// With multiplexing: Wait on ALL sockets at once
int ready = select(maxfd, &read_set, NULL, NULL, timeout);
// Returns when ANY socket in read_set has data
// Now we know WHICH ones are ready
The major I/O multiplexing mechanisms:
| Mechanism | Platform | Complexity | Max FDs | Notes |
|---|---|---|---|---|
| select() | POSIX (all Unix) | O(n) | ~1024 | Oldest, most portable, limited |
| poll() | POSIX | O(n) | Unlimited | No FD limit, but still O(n) |
| epoll | Linux | O(1) | Unlimited | Scalable, edge/level triggered |
| kqueue | BSD/macOS | O(1) | Unlimited | Similar to epoll, different API |
| IOCP | Windows | O(1) | Unlimited | Completion ports, true async |
| io_uring | Linux 5.1+ | O(1) | Unlimited | Newest, lowest overhead |
// ========================================
// select() - The original (1983)
// ========================================

fd_set read_fds;
FD_ZERO(&read_fds);
FD_SET(socket1, &read_fds);
FD_SET(socket2, &read_fds);

struct timeval timeout = {5, 0};  // 5 seconds

// Wait for any socket to be readable
int ready = select(max_fd + 1, &read_fds, NULL, NULL, &timeout);

if (ready > 0) {
    if (FD_ISSET(socket1, &read_fds)) handle_socket1();
    if (FD_ISSET(socket2, &read_fds)) handle_socket2();
}

// Problem: Must rebuild fd_set every time, O(n) scan

// ========================================
// epoll - Linux scalable I/O (2002)
// ========================================

// Create epoll instance (once)
int epfd = epoll_create1(0);

// Add sockets to monitor (once per socket)
struct epoll_event ev;
ev.events = EPOLLIN;  // Interested in reads
ev.data.fd = socket1;
epoll_ctl(epfd, EPOLL_CTL_ADD, socket1, &ev);

// Wait for events (in the loop)
struct epoll_event events[MAX_EVENTS];
int nfds = epoll_wait(epfd, events, MAX_EVENTS, timeout_ms);

// Only ready FDs are returned - O(1) per ready FD!
for (int i = 0; i < nfds; i++) {
    int fd = events[i].data.fd;
    handle_ready_socket(fd);
}

// ========================================
// kqueue - BSD/macOS (2000)
// ========================================

int kq = kqueue();

// Register interest
struct kevent change;
EV_SET(&change, socket1, EVFILT_READ, EV_ADD, 0, 0, NULL);
kevent(kq, &change, 1, NULL, 0, NULL);

// Wait for events
struct kevent events[MAX_EVENTS];
int nev = kevent(kq, NULL, 0, events, MAX_EVENTS, &timeout);

for (int i = 0; i < nev; i++) {
    int fd = events[i].ident;
    handle_ready_socket(fd);
}

Edge-triggered vs level-triggered:
A crucial distinction in I/O multiplexing:
Level-triggered: 'Socket has data' (repeated if not drained)
Edge-triggered: 'Socket received data' (once per arrival)
Edge-triggered is more efficient (fewer wakeups) but requires draining all data immediately. Level-triggered is safer but can cause spurious wakeups.
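The distinction can be sketched with a toy notifier in JavaScript (a simulation only, not a real epoll/kqueue API; Watcher, arrive, and poll are invented names):

```javascript
// Toy model of readiness notification in level- vs edge-triggered modes.
class Watcher {
  constructor(mode) {
    this.mode = mode;     // 'level' or 'edge'
    this.buffer = [];     // pending bytes on the simulated socket
    this.armed = false;   // edge mode: notify only on new arrivals
  }
  arrive(chunk) {         // data arrives on the socket
    this.buffer.push(chunk);
    this.armed = true;
  }
  poll() {                // one event-loop wait: were we woken?
    if (this.mode === 'level') {
      return this.buffer.length > 0;  // keeps firing until drained
    }
    const fired = this.armed;          // edge: fires once per arrival
    this.armed = false;
    return fired;
  }
  read() { return this.buffer.shift(); }  // drain one chunk
}

const level = new Watcher('level');
level.arrive('a'); level.arrive('b');
console.log(level.poll(), level.poll());  // true true - re-notified while undrained

const edge = new Watcher('edge');
edge.arrive('a'); edge.arrive('b');
console.log(edge.poll(), edge.poll());    // true false - must drain on first wakeup
```

The level-triggered watcher keeps reporting readiness until the buffer is drained; the edge-triggered one reports only the transition, which is why edge-triggered code must read until exhaustion on every wakeup.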
Why O(1) matters at scale:
With 10,000 connections but only 100 active at any moment:
| Mechanism | Work per wait | With 10K connections, 100 active |
|---|---|---|
| select/poll | Scan all 10,000 FDs | 10,000 operations |
| epoll/kqueue | Return only 100 ready | 100 operations |
For high-concurrency servers, this 100x difference is existential.
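A quick simulation makes the bookkeeping difference concrete (selectStyle and epollStyle are illustrative names; in reality the kernel does this scanning, but the asymptotics are the same):

```javascript
// Simulating one wait call: how much work per mechanism?
const TOTAL = 10_000;  // registered connections
const ACTIVE = 100;    // connections with data right now

const ready = new Set();
while (ready.size < ACTIVE) ready.add(Math.floor(Math.random() * TOTAL));

// select()/poll(): every registered fd is scanned on every wait
function selectStyle() {
  let scanned = 0;
  const readyFds = [];
  for (let fd = 0; fd < TOTAL; fd++) {
    scanned++;
    if (ready.has(fd)) readyFds.push(fd);
  }
  return { scanned, readyFds };
}

// epoll/kqueue: the kernel maintains the ready list and returns only it
function epollStyle() {
  return { scanned: ready.size, readyFds: [...ready] };
}

console.log(selectStyle().scanned);  // 10000 operations for 100 ready sockets
console.log(epollStyle().scanned);   // 100 operations - scales with activity
```

Both find the same 100 ready sockets, but select-style scanning pays for every idle connection on every single wait.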
Node.js uses libuv, a C library that abstracts I/O multiplexing across platforms: epoll on Linux, kqueue on macOS/BSD, IOCP on Windows. Your JavaScript code doesn't need to know which mechanism is underneath—libuv handles it.
The Node.js event loop is the most well-documented implementation, making it an excellent case study. Understanding it helps reason about async behavior in JavaScript and similar systems.
The loop phases:
The Node.js event loop has six distinct phases, each with its own queue of callbacks:
   ┌───────────────────────────┐
┌─>│          timers           │  setTimeout, setInterval callbacks
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │     pending callbacks     │  I/O callbacks deferred to next iteration
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │       idle, prepare       │  Internal use only
│  └─────────────┬─────────────┘      ┌───────────────┐
│  ┌─────────────┴─────────────┐      │   incoming:   │
│  │           poll            │<─────┤  connections, │
│  └─────────────┬─────────────┘      │   data, etc.  │
│  ┌─────────────┴─────────────┐      └───────────────┘
│  │           check           │  setImmediate callbacks
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
└──┤      close callbacks      │  socket.on('close'), etc.
   └───────────────────────────┘

Between each phase, Node.js drains the process.nextTick queue and the microtask queue.

Detailed phase descriptions:
1. Timers Phase
Executes callbacks scheduled by setTimeout() and setInterval(). Timers specify a minimum delay—actual execution may be later depending on other callbacks.
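A small Node.js sketch makes the "minimum delay" point visible: synchronous work keeps the loop from entering the timers phase, so even a 0ms timer fires late.

```javascript
// setTimeout(fn, 0) means "run no sooner than ~0ms", not "run at exactly 0ms".
let lateBy = -1;
const scheduled = Date.now();

setTimeout(() => {
  lateBy = Date.now() - scheduled;
  console.log(`Timer fired ${lateBy}ms after scheduling`);
  // lateBy is roughly 50ms here: the busy loop below kept the event loop
  // out of the timers phase until the synchronous work finished
}, 0);

// Synchronous work delays entry into the timers phase
const start = Date.now();
while (Date.now() - start < 50) { /* busy-wait ~50ms */ }
```

The same effect applies to any callback: a timer's delay is a lower bound, and actual firing time depends on whatever else the loop is doing.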
2. Pending Callbacks Phase
Executes I/O callbacks deferred from the previous loop iteration. Some system operations (like TCP errors) deliver their callbacks here.
3. Idle, Prepare Phase
Used internally by Node.js. Not directly accessible from JavaScript.
4. Poll Phase
The most important phase, with two main functions: calculating how long it should block and poll for I/O, then processing events in the poll queue (executing I/O callbacks). If no timers or setImmediate callbacks are pending, the loop can block here waiting for new events.
5. Check Phase
Executes setImmediate() callbacks. Runs immediately after poll completes.
6. Close Callbacks Phase
Executes close event callbacks (e.g., socket.on('close', ...)).
// Demonstrating event loop phases and ordering

console.log('1. Script start (sync)');

setTimeout(() => {
  console.log('6. setTimeout callback (timers phase)');
}, 0);

setImmediate(() => {
  console.log('7. setImmediate callback (check phase)');
});

Promise.resolve().then(() => {
  console.log('4. Promise.then (microtask)');
});

process.nextTick(() => {
  console.log('3. process.nextTick (before microtasks)');
});

const fs = require('fs');
fs.readFile(__filename, () => {
  console.log('8. fs.readFile callback (poll phase)');

  setTimeout(() => console.log('10. nested setTimeout'), 0);
  setImmediate(() => console.log('9. nested setImmediate'));
  // Inside an I/O callback: setImmediate fires before setTimeout
});

console.log('2. Script end (sync)');

/*
 * Output order:
 * 1. Script start (sync)
 * 2. Script end (sync)
 * 3. process.nextTick (before microtasks)
 * 4. Promise.then (microtask)
 *
 * Then event loop phases begin:
 * 5. (depends on timing - setTimeout vs setImmediate race in the main module)
 * 6. setTimeout or setImmediate
 * 7. setImmediate or setTimeout
 * 8. fs.readFile callback (poll phase)
 * 9. nested setImmediate (check phase - always after I/O)
 * 10. nested setTimeout (next iteration's timers phase)
 */

In the main module, setTimeout(fn, 0) and setImmediate(fn) have unpredictable ordering—it depends on process performance. But within an I/O callback, setImmediate ALWAYS fires before setTimeout, because the check phase runs after the poll phase in the same iteration.
JavaScript engines distinguish between two types of async tasks: macrotasks (tasks) and microtasks. Understanding this distinction is crucial for predicting execution order.
Macrotasks (Tasks):
- setTimeout, setInterval
- setImmediate (Node.js)
- requestAnimationFrame (browsers)

Microtasks:
- Promise.then, Promise.catch, Promise.finally
- process.nextTick (Node.js - technically before microtasks)
- queueMicrotask()
- MutationObserver (browsers)

The critical rule:
Microtasks are processed to completion between every macrotask. After each macrotask (or phase in Node.js), the engine drains the entire microtask queue before proceeding.
// ========================================
// Microtask queue drains completely
// ========================================

console.log('Start');

setTimeout(() => console.log('Timeout 1'), 0);
setTimeout(() => console.log('Timeout 2'), 0);

Promise.resolve()
  .then(() => {
    console.log('Promise 1');
    return Promise.resolve();
  })
  .then(() => {
    console.log('Promise 2');
    // Add MORE to the microtask queue
    Promise.resolve().then(() => console.log('Promise 3'));
  });

console.log('End');

/*
 * Output:
 * Start
 * End
 * Promise 1
 * Promise 2
 * Promise 3  <- Microtask queue fully drained!
 * Timeout 1  <- Only THEN the next macrotask
 * Timeout 2
 */

// ========================================
// Infinite microtask queue = starvation!
// ========================================

function evilMicrotasks() {
  Promise.resolve().then(() => {
    console.log('Microtask running...');
    evilMicrotasks();  // Endlessly add to the microtask queue
  });
}

evilMicrotasks();
setTimeout(() => console.log('This NEVER runs!'), 0);

// The microtask queue never empties, so the timer never fires.
// This starves macrotasks - the event loop is blocked!

// ========================================
// process.nextTick vs Promise.then (Node.js)
// ========================================

Promise.resolve().then(() => console.log('Promise'));
process.nextTick(() => console.log('nextTick'));

/*
 * Output:
 * nextTick  <- nextTick queue processed BEFORE microtasks
 * Promise
 *
 * Node.js has TWO queues between phases:
 * 1. nextTick queue (process.nextTick)
 * 2. Microtask queue (Promises)
 * The nextTick queue always runs first!
 */

// ========================================
// Browser rendering and microtasks
// ========================================

// In browsers:
// 1. Run a macrotask (e.g., a click handler)
// 2. Drain the microtask queue
// 3. RENDER if needed (repaint)
// 4. Next macrotask

// This means: Promises resolve BEFORE render,
// but setTimeout runs AFTER render.

button.addEventListener('click', () => {
  element.style.color = 'red';
  Promise.resolve().then(() => {
    // Runs before repaint!
    element.style.color = 'blue';
  });
  // User only sees blue (red never rendered)
});

Visual model of event loop with queues:
┌─────────────────────────────────────────────────────────────┐
│ JAVASCRIPT ENGINE │
│ ┌─────────────┐ │
│ │ Call Stack │ ← Sync code executes here │
│ └──────┬──────┘ │
│ │ │
│ ↓ │
│ When stack is empty, check: │
│ │
│ 1. ┌─────────────────┐ │
│ │ nextTick queue │ ← process.nextTick (Node.js) │
│ └────────┬────────┘ │
│ ↓ (drain completely) │
│ 2. ┌─────────────────┐ │
│ │ Microtask queue │ ← Promise.then, queueMicrotask │
│ └────────┬────────┘ │
│ ↓ (drain completely) │
│ 3. ┌─────────────────┐ │
│ │ Macrotask queue │ ← setTimeout, I/O callbacks │
│ └────────┬────────┘ │
│ ↓ (process ONE, then back to step 1) │
└─────────────────────────────────────────────────────────────┘
Use queueMicrotask() for work that must complete before the next render but after current sync code. Use setTimeout(..., 0) when you want to yield to the event loop and allow rendering/I/O to proceed. Choose based on urgency vs responsiveness.
The browser event loop differs from Node.js in important ways. It must coordinate JavaScript execution with rendering, user events, and the DOM.
Browser event loop steps (simplified):
1. Take the oldest task from a task queue and run it to completion
2. Drain the microtask queue completely
3. If it is time to render: run requestAnimationFrame callbacks, then style, layout, and paint
4. Repeat

The render opportunity:
Browsers typically aim for 60fps, meaning ~16.67ms between frames. The event loop checks if rendering is needed after each task/microtask cycle. Long-running synchronous code blocks rendering.
// ========================================
// Browser event loop and rendering
// ========================================

// Long task blocks rendering
function blockingWork() {
  const start = Date.now();
  while (Date.now() - start < 3000) {
    // Busy loop for 3 seconds
  }
  console.log('Done blocking');
  // Browser was frozen for 3 seconds - no rendering!
}

// Breaking up work to allow rendering
async function nonBlockingWork(items) {
  for (let i = 0; i < items.length; i++) {
    processItem(items[i]);

    // Yield to the event loop every 10 items
    if (i % 10 === 0) {
      // Use setTimeout(0) or requestAnimationFrame
      await new Promise(resolve => setTimeout(resolve, 0));
      // Browser can now render!
    }
  }
}

// ========================================
// requestAnimationFrame
// ========================================

// Runs before the NEXT repaint (not after the current macrotask)
function animate() {
  element.style.left = (parseInt(element.style.left) + 1) + 'px';
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);

// The proper way to schedule work before render:
// - Runs at the display refresh rate (typically 60fps)
// - Batches DOM reads/writes
// - Pauses when the tab is hidden

// ========================================
// Event loop with user events
// ========================================

button.addEventListener('click', () => {
  console.log('1. Click handler start');

  Promise.resolve().then(() => {
    console.log('3. Microtask from click handler');
  });

  console.log('2. Click handler end');
});

// When the user clicks:
// 1. Browser queues the click event as a macrotask
// 2. Event loop picks up the click task
// 3. Handler runs synchronously (logs 1, then 2)
// 4. Microtask queue drains (logs 3)
// 5. Potential render
// 6. Next macrotask

// ========================================
// Input event handling
// ========================================

input.addEventListener('input', (e) => {
  // This handler should be FAST
  // Long processing here = laggy typing

  // BAD: Heavy computation inline
  // const result = heavyComputation(e.target.value);
  // display(result);

  // GOOD: Defer heavy work
  clearTimeout(debounceTimer);
  debounceTimer = setTimeout(() => {
    const result = heavyComputation(e.target.value);
    display(result);
  }, 300);  // Debounce 300ms
});

Web Workers: Escaping the main thread
For CPU-intensive work, browsers provide Web Workers—true parallel threads that don't block the main event loop:
// main.js
const worker = new Worker('worker.js');
worker.postMessage({ data: largeDataset });
worker.onmessage = (event) => {
// Result received from worker
console.log('Computed in parallel:', event.data.result);
};
// Main thread continues normally - no blocking!
// worker.js
self.onmessage = (event) => {
const result = heavyComputation(event.data);
self.postMessage({ result });
};
Key differences from the main thread:
- No DOM access: workers cannot read or modify the page directly
- Separate memory: data sent via postMessage is copied (structured clone), not shared
- Blocking is safe: heavy computation in a worker doesn't freeze the UI or the main event loop
Google's Web Vitals guidelines suggest tasks should complete in under 50ms to maintain responsiveness. Longer tasks should be broken up using setTimeout, requestIdleCallback, or Web Workers. Long tasks block user interactions and cause visible jank.
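One common way to honor that 50ms budget is to time-slice the work and yield when the budget is spent. A sketch (yieldToLoop and BUDGET_MS are illustrative names, not a standard API):

```javascript
// Time-slicing: process items until ~50ms have elapsed, then yield
// to the event loop so input, timers, and rendering can run.
const BUDGET_MS = 50;
const yieldToLoop = () => new Promise(resolve => setTimeout(resolve, 0));

async function processAll(items, processItem) {
  let sliceStart = Date.now();
  for (const item of items) {
    processItem(item);
    if (Date.now() - sliceStart >= BUDGET_MS) {
      await yieldToLoop();       // let queued events run
      sliceStart = Date.now();   // start a fresh budget
    }
  }
}

// Usage: processAll(bigArray, expensiveFn) processes everything in order
// while keeping individual tasks under the ~50ms guideline.
```

In browsers, requestIdleCallback or the newer scheduler APIs can replace the setTimeout yield; the structure stays the same.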
Event loops aren't unique to JavaScript. Understanding implementations across systems reveals common patterns and design tradeoffs.
# Python asyncio event loop

import asyncio

async def main():
    # Create tasks (coroutines scheduled on the event loop)
    task1 = asyncio.create_task(fetch_data('url1'))
    task2 = asyncio.create_task(fetch_data('url2'))

    # Await completion
    results = await asyncio.gather(task1, task2)
    return results

# Run the event loop
asyncio.run(main())

# Low-level access to the loop
loop = asyncio.get_event_loop()

# Schedule a callback (like setTimeout)
loop.call_later(1.0, my_callback)

# Schedule at an absolute time
loop.call_at(loop.time() + 5.0, my_callback)

# Add I/O handlers
loop.add_reader(socket.fileno(), handle_readable)
loop.add_writer(socket.fileno(), handle_writable)

# Run until complete
loop.run_until_complete(some_coroutine())

# Run forever (servers)
loop.run_forever()

"""
Python asyncio characteristics:
- Single-threaded coroutine-based concurrency
- Uses selectors (epoll/kqueue/select) for I/O
- Explicit async/await everywhere
- Tasks are scheduled coroutines
- run_in_executor() for thread pool work
"""

Every system solves the same problem: waiting efficiently for multiple events and dispatching handlers. The core loop structure is identical across languages and platforms. Differences are in API design, thread models, and underlying I/O mechanisms.
Working with event loops requires understanding their cooperative nature. Violating the event loop's expectations causes performance problems, missed events, and blocked interfaces.
// ========================================
// BAD: Blocking the event loop
// ========================================

// WRONG: Synchronous file read blocks everything
const data = fs.readFileSync('large-file.txt');

// RIGHT: Async keeps the event loop free
const data2 = await fs.promises.readFile('large-file.txt');

// ========================================
// BAD: Long-running sync code
// ========================================

// WRONG: Blocks for the entire computation
function processLargeArray(arr) {
  return arr.map(item => heavyComputation(item));
}

// RIGHT: Break into chunks
async function processLargeArrayAsync(arr) {
  const results = [];
  const CHUNK_SIZE = 100;

  for (let i = 0; i < arr.length; i += CHUNK_SIZE) {
    const chunk = arr.slice(i, i + CHUNK_SIZE);
    const processed = chunk.map(item => heavyComputation(item));
    results.push(...processed);

    // Yield to the event loop between chunks
    await new Promise(resolve => setImmediate(resolve));
  }
  return results;
}

// ========================================
// Memory leak: Accumulating listeners
// ========================================

// WRONG: Adding a listener without ever removing it
socket.on('data', handleData);  // Every connection adds one!

// RIGHT: Remove listeners when done
function handleConnection(socket) {
  const handler = (data) => processData(data);
  socket.on('data', handler);
  socket.on('close', () => {
    socket.removeListener('data', handler);
  });
}

// ========================================
// Monitoring event loop lag (Node.js)
// ========================================

const start = process.hrtime.bigint();
setImmediate(() => {
  const lag = process.hrtime.bigint() - start;
  if (lag > 50_000_000n) {  // 50ms in nanoseconds
    console.warn(`Event loop lag: ${lag / 1_000_000n}ms`);
  }
});

// Or use monitorEventLoopDelay
const { monitorEventLoopDelay } = require('perf_hooks');
const histogram = monitorEventLoopDelay();
histogram.enable();

setInterval(() => {
  // percentile() reports nanoseconds; convert to milliseconds
  console.log(`Loop delay p99: ${histogram.percentile(99) / 1e6}ms`);
}, 5000);

Don't block the event loop. Period. If you can't make something async, move it to a worker thread. If you can't use a worker, break it into chunks. The event loop is shared by everyone—blocking it affects all users.
The event loop is the engine of asynchronous programming, enabling single-threaded systems to achieve remarkable concurrency through efficient waiting and dispatch.
What's next:
With callbacks, promises, and event loops understood, we can explore the syntactic sugar that makes async code readable: async/await. This pattern transforms callback-based and promise-based code into something that looks and reads like synchronous code, while maintaining all async benefits.
You now understand the event loop—the mechanism that makes async programming possible on a single thread. From I/O multiplexing to task queues, you can reason about when and how your callbacks execute. Next, we complete the async journey with async/await patterns.