On the previous page, we saw how Programmed I/O forces the CPU into an endless checking loop—polling device status registers thousands or millions of times, burning cycles while waiting for slow hardware. This is fundamentally inefficient.
What if devices could announce their readiness, rather than requiring the CPU to constantly ask? What if, instead of:
"Are you ready yet? Are you ready yet? Are you ready yet? ..."
We could have:
"CPU, go do other work. I'll interrupt you when I need attention."
This is exactly what Interrupt-Driven I/O provides. Hardware interrupts are asynchronous signals that cause the processor to suspend its current work and execute a specific handler routine. When applied to I/O, interrupts allow devices to notify the CPU precisely when data is available or an operation completes.
By the end of this page, you will understand:

- The interrupt mechanism, from hardware signal to software handler
- How interrupt-driven I/O improves CPU utilization
- Interrupt latency, overhead, and the interrupt coalescing problem
- Interrupt controller architecture (PIC, APIC, MSI/MSI-X)
- Practical implementation patterns in device drivers
- When interrupts are better than polling, and when they're not
An interrupt is an asynchronous signal that diverts the processor from its current execution path to handle an event requiring immediate attention. When an interrupt occurs, the CPU finishes its current instruction, saves enough state to resume later, looks up the handler for the interrupt's vector, executes that handler, and then restores the saved state to continue the interrupted work.
This mechanism transforms I/O from synchronous polling to event-driven operation.
Interrupt-Driven I/O Workflow:
┌──────────────────────────────────────────────────────────┐
│ Interrupt-Driven I/O Read                                │
├──────────────────────────────────────────────────────────┤
│ 1. CPU → Device: Issue read command                      │
│ 2. CPU: Return to other work (run other processes, idle) │
│ 3. Device: Prepares data (takes as long as needed)       │
│ 4. Device → Interrupt Controller → CPU: Signal interrupt │
│ 5. CPU: Save state, jump to Interrupt Service Routine    │
│ 6. ISR: Read data from device, store in memory           │
│ 7. ISR: Signal completion to waiting process             │
│ 8. CPU: Restore state, resume interrupted work           │
└──────────────────────────────────────────────────────────┘
The Key Difference from Polling:
In polling, the CPU actively checks the device during step 3. In interrupt-driven I/O, the CPU is free during step 3—it can execute other instructions, run other processes, or even enter a low-power sleep state.
Polling is like checking your mailbox every 5 minutes to see if a letter arrived. Interrupt-driven I/O is like having a doorbell—you can do other things until someone rings it. The doorbell model is obviously more efficient for your time, though installing and maintaining the doorbell has its own costs.
Multiple devices need to interrupt the CPU, but the processor has limited interrupt input pins. Interrupt controllers solve this by multiplexing many device interrupt signals onto the CPU's interrupt interface.
Evolution of x86 Interrupt Controllers:
8259A PIC (Programmable Interrupt Controller): Original PC design with 8 interrupt lines (IRQ0-IRQ7). Two cascaded PICs provide 15 usable IRQs.
I/O APIC (Advanced Programmable Interrupt Controller): Modern design supporting 24+ interrupt inputs, multiprocessor delivery, and programmable routing.
MSI/MSI-X (Message Signaled Interrupts): Modern PCI/PCIe approach where devices write to memory to signal interrupts, eliminating physical interrupt lines entirely.
| Mechanism | Era | Interrupt Count | Multi-CPU Support | Notes |
|---|---|---|---|---|
| 8259A PIC | 1981+ | 15 IRQs | Single CPU | Legacy, still present for compatibility |
| I/O APIC | 1997+ | 24+ per APIC | Yes | Required for SMP, supports redirection |
| MSI | 2003+ | 1 per device | Yes | No shared lines, reduces latency |
| MSI-X | 2007+ | 2048 per device | Yes | Per-queue interrupts for modern devices |
Message Signaled Interrupts (MSI/MSI-X):
MSI represents a fundamental shift in interrupt architecture. Instead of asserting a physical signal line, devices generate interrupts by writing a specific value to a specific memory address.
When the device writes to the message address, the memory controller recognizes this as an interrupt message and routes it directly to the target CPU's Local APIC.
Advantages of MSI/MSI-X:

- No shared interrupt lines: every interrupt identifies its source exactly, so handlers never have to probe "was it my device?"
- Lower latency: the interrupt message travels the same path as the data, with no separate wire to synchronize
- Ordering with DMA: the interrupt write cannot pass the data writes that preceded it, so data is guaranteed visible when the ISR runs
- Multiple vectors per device (up to 2048 with MSI-X), each individually routable to a specific CPU
Modern NVMe SSDs require MSI-X. They use one interrupt vector per I/O queue, allowing completion notifications to target the CPU that submitted the command. This eliminates cross-CPU coordination overhead and maximizes throughput. A typical NVMe drive might use 32+ MSI-X vectors for optimal performance.
When an interrupt fires, the CPU transfers control to an Interrupt Service Routine (ISR)—also called an interrupt handler. Writing correct, efficient ISRs is one of the most challenging aspects of systems programming.
ISR Execution Context Constraints:
ISRs execute in a special context with severe restrictions: they cannot sleep or block, cannot call anything that might sleep (blocking memory allocation, mutexes, disk I/O), often run with further interrupts disabled, have only a small dedicated stack, and must finish quickly to keep system latency bounded.
These constraints arise because the interrupted code is in an unknown state—it might hold locks, be in the middle of a critical section, or have inconsistent data structures.
```c
/*
 * Interrupt Handler Pattern: Top-Half / Bottom-Half
 *
 * Modern device drivers split interrupt handling into two parts:
 * - Top Half: Runs in interrupt context, does minimal work
 * - Bottom Half: Runs later in a safer context, does the heavy lifting
 */

#include <stdint.h>
#include <stdbool.h>

/* Hardware register definitions */
#define DEVICE_STATUS_REG  0x00
#define DEVICE_DATA_REG    0x04
#define DEVICE_IRQ_CLEAR   0x08

#define STATUS_RX_READY    0x01
#define STATUS_TX_COMPLETE 0x02
#define STATUS_ERROR       0x80

/* Assume these kernel primitives exist */
extern uint8_t *mmio_base;
extern uint32_t mmio_read32(void *addr);
extern void mmio_write32(void *addr, uint32_t value);
extern void spin_lock(void *lock);
extern void spin_unlock(void *lock);
extern void schedule_work(void (*func)(void *), void *data);
extern void wake_up_process(void *wait_queue);

struct device_context {
    void *lock;
    uint8_t rx_buffer[4096];
    int rx_read_ptr;
    int rx_write_ptr;
    void *rx_wait_queue;
    bool tx_complete;
    uint32_t error_count;
};

static struct device_context dev_ctx;

/* Declared here so the top half can reference them */
extern void device_reset_sequence(struct device_context *ctx);
static void device_bottom_half(void *data);

/*
 * TOP HALF: Interrupt Handler (runs in IRQ context)
 *
 * This function is called directly when the device interrupts.
 * It must:
 * - Execute quickly
 * - Not sleep or block
 * - Do minimal work
 * - Acknowledge the interrupt
 * - Schedule deferred work if needed
 */
int device_interrupt_handler(int irq, void *data)
{
    struct device_context *ctx = (struct device_context *)data;
    uint32_t status;
    bool need_bottom_half = false;

    /* Read device status to determine interrupt cause */
    status = mmio_read32(mmio_base + DEVICE_STATUS_REG);

    /* Check if this device actually generated the interrupt */
    /* (Important for shared interrupt lines) */
    if (!(status & (STATUS_RX_READY | STATUS_TX_COMPLETE | STATUS_ERROR))) {
        return 0;  /* IRQ_NONE: Not our interrupt */
    }

    /* Handle RX ready: read data immediately (time-critical) */
    if (status & STATUS_RX_READY) {
        /* Read data from hardware before it's overwritten */
        uint32_t word = mmio_read32(mmio_base + DEVICE_DATA_REG);

        /* Quick copy to buffer (no complex processing here) */
        spin_lock(ctx->lock);
        ctx->rx_buffer[ctx->rx_write_ptr++ & 0xFFF] = word & 0xFF;
        spin_unlock(ctx->lock);

        /* Wake up any process waiting for data */
        wake_up_process(ctx->rx_wait_queue);
    }

    /* Handle TX complete: just set a flag */
    if (status & STATUS_TX_COMPLETE) {
        ctx->tx_complete = true;
        /* Wake up transmitting process */
    }

    /* Handle errors: schedule bottom half for complex processing */
    if (status & STATUS_ERROR) {
        ctx->error_count++;
        need_bottom_half = true;  /* Need deferred error handling */
    }

    /* CRITICAL: Acknowledge the interrupt to the device */
    /* If we don't do this, the device keeps asserting IRQ */
    mmio_write32(mmio_base + DEVICE_IRQ_CLEAR, status);

    /* Schedule bottom half if complex work is needed */
    if (need_bottom_half) {
        schedule_work(device_bottom_half, ctx);
    }

    return 1;  /* IRQ_HANDLED: We handled it */
}

/*
 * BOTTOM HALF: Deferred Work (runs in process context)
 *
 * This function runs later, outside of interrupt context.
 * It CAN:
 * - Sleep and block
 * - Allocate memory
 * - Perform complex processing
 * - Take mutex locks
 */
static void device_bottom_half(void *data)
{
    struct device_context *ctx = (struct device_context *)data;

    /* Now we can do time-consuming error recovery. This might involve:
     * - Reading status registers
     * - Logging detailed error information
     * - Resetting the device
     * - Reallocating buffers
     * - Notifying user space
     */

    /* Example: Complex error handling that we couldn't do in the ISR */
    if (ctx->error_count > 100) {
        /* Reset device - might involve sleeps, locks, etc. */
        device_reset_sequence(ctx);
        ctx->error_count = 0;
    }
}
```

Linux provides multiple bottom-half mechanisms: Softirqs (high priority, run immediately after the ISR), Tasklets (serialized, single-CPU execution guarantee), Workqueues (full process context, can sleep), and Threaded IRQs (the entire handler runs in a kernel thread). Modern drivers increasingly use threaded IRQs for simplicity, as they provide full kernel context with minimal complexity.
While interrupt-driven I/O frees the CPU from polling, interrupts themselves are not free. Interrupt latency is the delay between a device signaling an interrupt and the ISR beginning execution.
Components of Interrupt Latency:
| Component | Typical Latency | Notes |
|---|---|---|
| MSI-X delivery | ~500-1000 ns | Memory write to APIC |
| APIC processing | ~100-200 ns | Priority check, CPU selection |
| CPU interrupt recognition | ~50-100 ns | Finish current instruction |
| State saving (hardware) | ~100-200 ns | Push to interrupt stack |
| IDT lookup + dispatch | ~100 ns | Vector table access |
| ISR prologue | ~100-500 ns | Additional register saves |
| Total (best case) | ~1-3 µs | Assuming no contention |
| Worst case | 10-100+ µs | With cache misses, disabled IRQs |
Factors That Increase Latency:

- Interrupt-disabled critical sections in the kernel (the interrupt waits until they end)
- Cache and TLB misses on the handler path
- A higher-priority interrupt being serviced first
- System Management Mode (SMM) and other firmware interference
- CPUs waking from deep sleep states
Interrupt Overhead:
Beyond latency, each interrupt consumes CPU cycles for processing. A single interrupt might cost several thousand cycles in total: hardware state save and restore, handler execution, cache and TLB pollution that slows the interrupted code afterward, and any scheduler work triggered by wakeups.
High-speed devices can generate thousands of interrupts per second. A 10 Gigabit NIC at full speed with minimum-size (64-byte) frames can generate 14.88 million packets/second—if each packet caused an interrupt, the CPU would spend all its time in interrupt handlers. This is why modern devices use interrupt coalescing and adaptive polling (NAPI).
To prevent interrupt storms from overwhelming the CPU, devices implement interrupt coalescing (also called interrupt moderation or interrupt throttling). Instead of interrupting for every event, the device batches multiple events into a single interrupt.
Coalescing Strategies:

- Count threshold: interrupt only after N events have accumulated
- Time delay: after the first event, wait T microseconds before interrupting, batching anything that arrives meanwhile
- Absolute timeout: cap the total wait so a lone event is never delayed indefinitely
- Adaptive moderation: the hardware or driver tunes these parameters dynamically based on observed traffic
```c
/*
 * Interrupt Coalescing Configuration
 *
 * Example: Configuring interrupt coalescing on an Intel NIC (e1000e style)
 */

#include <stdint.h>
#include <stdbool.h>

/* Interrupt Throttle Registers */
#define E1000_ITR   0x00C4   /* Interrupt Throttling Rate */
#define E1000_EITR  0x01500  /* Extended Interrupt Throttle (per-vector) */

/* Interrupt Delay Registers */
#define E1000_RDTR  0x02820  /* RX Delay Timer Register */
#define E1000_RADV  0x0282C  /* RX Interrupt Absolute Delay Timer */

/* Coalescing parameters */
struct irq_coalesce_params {
    uint32_t rx_usecs;      /* RX interrupt delay (microseconds) */
    uint32_t rx_max_frames; /* RX interrupt frame threshold */
    uint32_t tx_usecs;      /* TX interrupt delay */
    uint32_t tx_max_frames; /* TX interrupt frame threshold */
    bool adaptive;          /* Enable adaptive coalescing */
};

extern void mmio_write32(void *base, uint32_t offset, uint32_t value);
extern uint32_t mmio_read32(void *base, uint32_t offset);

/*
 * Configure interrupt coalescing for a network adapter
 *
 * This demonstrates the tradeoff between throughput and latency:
 * - Lower delay = lower latency, more interrupts
 * - Higher delay = higher throughput, fewer interrupts
 */
void configure_irq_coalescing(void *hw_base, struct irq_coalesce_params *params)
{
    /*
     * Interrupt Throttle Rate (ITR)
     *
     * This limits how frequently interrupts can fire.
     * Value is in units of 256 nanoseconds.
     *
     * Examples:
     *   ITR = 0:     No throttling (interrupt immediately)
     *   ITR = 488:   ~8000 interrupts/second maximum (125 µs interval)
     *   ITR = 1953:  ~2000 interrupts/second maximum (500 µs interval)
     *   ITR = 20000: ~200 interrupts/second maximum (5 ms interval)
     */
    if (!params->adaptive) {
        uint32_t itr_value = params->rx_usecs * 4; /* µs to 256ns units (1 µs ≈ 4 units) */
        mmio_write32(hw_base, E1000_ITR, itr_value);
    } else {
        /* Adaptive mode: hardware adjusts based on traffic */
        mmio_write32(hw_base, E1000_ITR, 0x8000 | (params->rx_usecs * 4));
    }

    /*
     * RX Delay Timer (RDTR)
     *
     * After a packet arrives, wait this long before generating an interrupt.
     * More packets arriving during the delay will be processed together.
     * Value is in 1.024 µs units.
     */
    uint32_t rdtr = params->rx_usecs;
    if (rdtr > 0) {
        mmio_write32(hw_base, E1000_RDTR, rdtr);
    }

    /*
     * RX Absolute Delay Timer (RADV)
     *
     * Maximum time to wait before generating an interrupt, regardless
     * of packet count. Prevents starvation when traffic is steady
     * but never reaches the count threshold.
     */
    uint32_t radv = params->rx_usecs * 4;  /* Longer absolute limit */
    if (radv > 0) {
        mmio_write32(hw_base, E1000_RADV, radv);
    }
}

/*
 * Adaptive Interrupt Moderation
 *
 * This algorithm adjusts coalescing dynamically based on observed
 * interrupt rate and throughput. Similar to the Linux e1000e driver.
 */
struct adaptive_itr_state {
    uint64_t bytes_per_second;
    uint32_t packets_per_second;
    uint32_t current_itr;
};

void adjust_itr_adaptive(struct adaptive_itr_state *state)
{
    uint32_t new_itr;

    /* Calculate current throughput characteristics */
    uint32_t avg_packet_size = state->bytes_per_second /
        (state->packets_per_second ? state->packets_per_second : 1);

    /* Low latency profile: small packets, interactive traffic */
    if (avg_packet_size < 256 && state->packets_per_second < 8000) {
        new_itr = 70;    /* ~56,000 interrupts/sec max - low latency */
    }
    /* Medium profile: mixed traffic */
    else if (avg_packet_size < 1024 && state->packets_per_second < 100000) {
        new_itr = 196;   /* ~20,000 interrupts/sec max */
    }
    /* Bulk profile: large packets, throughput-oriented */
    else {
        new_itr = 488;   /* ~8,000 interrupts/sec max - high throughput */
    }

    /* Apply hysteresis: don't thrash on borderline conditions */
    if (new_itr != state->current_itr) {
        /* Only change if the difference is significant */
        int diff = (int)new_itr - (int)state->current_itr;
        if (diff > 50 || diff < -50) {
            state->current_itr = new_itr;
            /* Update hardware ITR register */
        }
    }
}
```

There is no universally "best" coalescing setting. Interactive applications (SSH, gaming, trading) need low latency, so use minimal coalescing. Bulk transfer workloads (file servers, backups) benefit from aggressive coalescing. Database servers often need a middle ground. The Linux `ethtool -c` / `-C` commands show and set coalescing parameters.
Neither pure interrupt-driven I/O nor pure polling is optimal for all scenarios. Modern systems use hybrid approaches that adapt based on workload characteristics.
When Interrupts Win:

- Events are infrequent or unpredictable (keyboards, management traffic)
- The CPU has other useful work to do, or can sleep to save power
- Device latency is long relative to interrupt overhead
When Polling Wins:

- Event rates are so high that per-event interrupt overhead dominates
- Latency requirements are tighter than interrupt delivery allows
- A core can be dedicated to the device (busy-polling)
- Interrupts are unavailable or untrustworthy (early boot, debugging)
| Scenario | Recommended Approach | Reasoning |
|---|---|---|
| Keyboard input | Interrupts | Low frequency, unpredictable timing |
| Mouse movement | Interrupts | Moderate frequency, user-perceptible latency |
| 10 Gbps NIC at full load | Hybrid (NAPI) | Too many events for pure interrupts |
| NVMe SSD at queue depth 32 | Polling + Interrupts | Poll briefly, interrupt if not ready |
| Boot console output | Polling | Simple, no interrupt setup needed |
| Debug serial port | Polling | Works when interrupts are broken |
| Real-time audio | Hybrid | Predictable callback timing needed |
Linux NAPI: The Hybrid Model
Network drivers in Linux use NAPI (New API), which combines interrupts and polling: the first packet raises an interrupt; the handler disables further RX interrupts and schedules a poll routine; the kernel then polls the RX ring, processing up to a fixed budget of packets per pass; when the ring drains, the driver re-enables interrupts.
This approach:

- Keeps latency low under light load (interrupts fire immediately)
- Degrades gracefully to pure polling under heavy load
- Prevents receive livelock, where interrupt handling starves packet processing
- Amortizes fixed costs across many packets
Linux io_uring pushes this even further with 'interrupt-less' (polling mode) submission queues. Applications can submit I/O and poll for completions entirely in user space, avoiding per-operation syscall overhead. For NVMe SSDs, this achieves millions of IOPS from a single thread—impossible with traditional interrupt-driven syscalls.
Let's tie everything together with a complete interrupt-driven serial port driver that demonstrates all the concepts we've covered:
```c
/*
 * Interrupt-Driven Serial Port Driver
 *
 * Complete example demonstrating:
 * - Interrupt handler registration
 * - Top-half / bottom-half split
 * - Buffer management
 * - Synchronization between ISR and read/write functions
 */

#include <stdint.h>
#include <stdbool.h>

/* Simulated kernel primitives */
typedef void (*irq_handler_t)(int irq, void *data);
extern int request_irq(int irq, irq_handler_t handler, unsigned long flags,
                       const char *name, void *data);
extern void free_irq(int irq, void *data);
extern void wake_up_process(void *wait_queue);
extern void wait_on_queue(void *wait_queue);
extern void spin_lock(void *lock);
extern void spin_unlock(void *lock);

/* Hardware port addresses (16550-compatible UART) */
#define SERIAL_BASE 0x3F8
#define SERIAL_DATA (SERIAL_BASE + 0)
#define SERIAL_IER  (SERIAL_BASE + 1)  /* Interrupt Enable Register */
#define SERIAL_IIR  (SERIAL_BASE + 2)  /* Interrupt Identification Register */
#define SERIAL_FCR  (SERIAL_BASE + 2)  /* FIFO Control Register */
#define SERIAL_LCR  (SERIAL_BASE + 3)  /* Line Control Register */
#define SERIAL_MCR  (SERIAL_BASE + 4)  /* Modem Control Register */
#define SERIAL_LSR  (SERIAL_BASE + 5)  /* Line Status Register */
#define SERIAL_IRQ  4

/* Interrupt Enable Register bits */
#define IER_RDA  0x01  /* Received Data Available */
#define IER_THRE 0x02  /* Transmitter Holding Register Empty */

/* Line Status Register bits */
#define LSR_DR   0x01  /* Data Ready */
#define LSR_THRE 0x20  /* Transmitter Holding Register Empty */

/* Ring buffer structure for received data */
#define BUFFER_SIZE 1024

struct serial_buffer {
    uint8_t data[BUFFER_SIZE];
    volatile int head;  /* Write pointer (ISR writes here) */
    volatile int tail;  /* Read pointer (reader reads here) */
};

struct serial_device {
    struct serial_buffer rx_buf;
    struct serial_buffer tx_buf;
    void *rx_wait_queue;  /* Processes waiting for RX data */
    void *tx_wait_queue;  /* Processes waiting for TX space */
    void *lock;           /* Spinlock for buffer access */
    uint32_t rx_count;    /* Statistics */
    uint32_t tx_count;
    uint32_t irq_count;
    bool tx_running;      /* TX currently active? */
};

static struct serial_device serial_dev;

/* Port I/O functions */
static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint8_t inb(uint16_t port) {
    uint8_t ret;
    __asm__ volatile ("inb %1, %0" : "=a"(ret) : "Nd"(port));
    return ret;
}

/* Buffer helper functions */
static inline bool buffer_empty(struct serial_buffer *buf) {
    return buf->head == buf->tail;
}
static inline bool buffer_full(struct serial_buffer *buf) {
    return ((buf->head + 1) % BUFFER_SIZE) == buf->tail;
}
static inline void buffer_put(struct serial_buffer *buf, uint8_t c) {
    buf->data[buf->head] = c;
    buf->head = (buf->head + 1) % BUFFER_SIZE;
}
static inline uint8_t buffer_get(struct serial_buffer *buf) {
    uint8_t c = buf->data[buf->tail];
    buf->tail = (buf->tail + 1) % BUFFER_SIZE;
    return c;
}

/*
 * INTERRUPT HANDLER (Top Half)
 *
 * Called when the serial port generates an interrupt.
 * Must be fast and non-blocking.
 */
void serial_interrupt_handler(int irq, void *data) {
    struct serial_device *dev = (struct serial_device *)data;
    uint8_t iir, lsr;

    dev->irq_count++;

    /* Check interrupt identification register */
    iir = inb(SERIAL_IIR);

    /* Process while interrupts pending (IIR bit 0 = 0 means pending) */
    while ((iir & 0x01) == 0) {
        int irq_type = (iir >> 1) & 0x07;

        switch (irq_type) {
        case 0x06:  /* Character timeout: stale data sitting in FIFO */
        case 0x02:  /* Received data available or FIFO trigger */
            /* Read all available data from FIFO */
            while (inb(SERIAL_LSR) & LSR_DR) {
                uint8_t c = inb(SERIAL_DATA);
                if (!buffer_full(&dev->rx_buf)) {
                    buffer_put(&dev->rx_buf, c);
                    dev->rx_count++;
                }
                /* Else: buffer overflow - drop character */
            }
            /* Wake up any process waiting for data */
            wake_up_process(dev->rx_wait_queue);
            break;

        case 0x01:  /* Transmitter holding register empty */
            /* Send next bytes from TX buffer */
            if (!buffer_empty(&dev->tx_buf)) {
                /* Load up to FIFO size (16 bytes for 16550) */
                for (int i = 0; i < 16 && !buffer_empty(&dev->tx_buf); i++) {
                    outb(SERIAL_DATA, buffer_get(&dev->tx_buf));
                    dev->tx_count++;
                }
            } else {
                /* TX buffer empty - disable TX interrupt */
                outb(SERIAL_IER, IER_RDA);  /* RX only */
                dev->tx_running = false;
            }
            /* Wake up any process waiting for TX space */
            wake_up_process(dev->tx_wait_queue);
            break;

        case 0x03:  /* Line status (error) */
            lsr = inb(SERIAL_LSR);
            (void)lsr;  /* Handle errors - would log or signal process */
            break;
        }

        /* Check for more pending interrupts */
        iir = inb(SERIAL_IIR);
    }
}

/*
 * READ FUNCTION (Called from process context)
 *
 * Blocks until data is available in the RX buffer.
 */
int serial_read(char *buf, int count) {
    struct serial_device *dev = &serial_dev;
    int bytes_read = 0;

    while (bytes_read < count) {
        /* Wait for data to be available */
        while (buffer_empty(&dev->rx_buf)) {
            /* Sleep until the ISR wakes us */
            wait_on_queue(dev->rx_wait_queue);
            /* (In real code, handle signals and return EINTR) */
        }

        /* Copy data from buffer to user */
        spin_lock(dev->lock);
        while (!buffer_empty(&dev->rx_buf) && bytes_read < count) {
            buf[bytes_read++] = buffer_get(&dev->rx_buf);
        }
        spin_unlock(dev->lock);
    }

    return bytes_read;
}

/*
 * WRITE FUNCTION (Called from process context)
 *
 * Adds data to the TX buffer and starts transmission if needed.
 */
int serial_write(const char *buf, int count) {
    struct serial_device *dev = &serial_dev;
    int bytes_written = 0;

    while (bytes_written < count) {
        /* Wait for space in TX buffer */
        while (buffer_full(&dev->tx_buf)) {
            wait_on_queue(dev->tx_wait_queue);
        }

        /* Add data to TX buffer */
        spin_lock(dev->lock);
        while (!buffer_full(&dev->tx_buf) && bytes_written < count) {
            buffer_put(&dev->tx_buf, buf[bytes_written++]);
        }

        /* Start transmission if not already running */
        if (!dev->tx_running && !buffer_empty(&dev->tx_buf)) {
            dev->tx_running = true;
            outb(SERIAL_IER, IER_RDA | IER_THRE);  /* Enable TX interrupt */
            /* Kick-start: send first byte immediately */
            outb(SERIAL_DATA, buffer_get(&dev->tx_buf));
        }
        spin_unlock(dev->lock);
    }

    return bytes_written;
}

/*
 * INITIALIZATION
 */
int serial_init(void) {
    struct serial_device *dev = &serial_dev;

    /* Initialize buffers */
    dev->rx_buf.head = dev->rx_buf.tail = 0;
    dev->tx_buf.head = dev->tx_buf.tail = 0;
    dev->tx_running = false;

    /* Configure hardware: 115200 baud, 8N1 */
    outb(SERIAL_LCR, 0x80);  /* DLAB on */
    outb(SERIAL_DATA, 0x01); /* Divisor low (115200) */
    outb(SERIAL_IER, 0x00);  /* Divisor high */
    outb(SERIAL_LCR, 0x03);  /* 8N1, DLAB off */
    outb(SERIAL_FCR, 0xC7);  /* Enable FIFO, 14-byte threshold */
    outb(SERIAL_MCR, 0x0B);  /* DTR, RTS, OUT2 (enables IRQ) */

    /* Register interrupt handler */
    int err = request_irq(SERIAL_IRQ, serial_interrupt_handler, 0,
                          "serial", dev);
    if (err)
        return err;

    /* Enable RX interrupt (TX enabled when we have data) */
    outb(SERIAL_IER, IER_RDA);

    return 0;
}

void serial_cleanup(void) {
    struct serial_device *dev = &serial_dev;

    /* Disable interrupts */
    outb(SERIAL_IER, 0x00);

    /* Unregister handler */
    free_irq(SERIAL_IRQ, dev);
}
```

Production drivers handle many additional concerns: support for multiple ports, DMA for high-speed serial, modem control signals, flow control (hardware and software), baud rate configuration, line discipline layers, error statistics, power management, and much more. The example above captures the core interrupt-driven pattern.
Interrupt-driven I/O transforms the CPU from an active polling agent to a responsive event processor, dramatically improving system efficiency and enabling true multiprogramming.
What's Next:
Interrupt-driven I/O solves the polling problem but still requires the CPU to move every byte of data between device and memory. For large transfers, this remains a significant burden. The final page in this module explores Direct Memory Access (DMA)—where devices transfer data directly to/from memory without CPU involvement, achieving the ultimate in I/O efficiency.
You now understand interrupt-driven I/O from hardware mechanisms through software implementation patterns. This knowledge is essential for device driver development, real-time systems design, and understanding modern operating system I/O stacks.