Unix provides a rich toolkit of IPC mechanisms: pipes, FIFOs, message queues, shared memory, signals, and sockets. Each was designed for specific use cases and carries distinct trade-offs. Choosing the wrong mechanism leads to unnecessary complexity, poor performance, or subtle bugs.
This page synthesizes everything we've learned into practical guidance. We'll map requirements to mechanisms, provide decision matrices for common scenarios, and give concrete recommendations backed by real-world experience.
By the end, you'll be able to look at any IPC requirement and confidently select the mechanism that fits best—not the one you're most familiar with, or the one that seems "fastest," but the one that's actually appropriate for the situation.
By the end of this page, you will understand the complete IPC mechanism landscape, know which mechanism to use for common scenarios, be able to evaluate trade-offs for novel requirements, avoid common selection mistakes, and apply a systematic selection process.
Let's first establish a complete catalog of the IPC mechanisms available on Unix-like systems, with their key characteristics:
| Mechanism | Type | Naming | Directionality | Primary Use |
|---|---|---|---|---|
| Anonymous Pipe | Byte stream | Inherited FD | Unidirectional | Parent-child, shell pipelines |
| Named Pipe (FIFO) | Byte stream | Filesystem path | Unidirectional | Unrelated processes, simple IPC |
| POSIX Message Queue | Messages | Name (/name) | Bidirectional* | Work queues, request-response |
| System V Message Queue | Messages | Numeric key | Bidirectional* | Legacy, portable IPC |
| Shared Memory (POSIX) | Memory region | Name (/name) | N/A | High-throughput, low-latency |
| Shared Memory (System V) | Memory region | Numeric key | N/A | Legacy, maximum portability |
| Memory-Mapped File | Memory region | Filesystem path | N/A | Persistence, file-based sharing |
| Unix Domain Socket (STREAM) | Byte stream | Path or abstract | Bidirectional | Client-server, complex protocols |
| Unix Domain Socket (DGRAM) | Messages | Path or abstract | Bidirectional | Local messaging, D-Bus transport |
| Signals | Notification | Process ID | Sender to receiver | Events, interrupts, control |
| eventfd | Counter/notification | Inherited FD | Bidirectional | User-space events, polling |
| Semaphores (POSIX) | Synchronization | Name (/name) | N/A | Coordination, mutual exclusion |
*Message queues are conceptually bidirectional—any process can send to or receive from the same queue—but a single queue doesn't establish a connection between specific peers.
Quick Characteristic Reference:
| Characteristic | Anon Pipe | Named Pipe | POSIX MsgQ | Unix Stream | Unix Dgram | Shared Memory | Signal |
|---|---|---|---|---|---|---|---|
| Unrelated processes | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Message boundaries | ✗ | ✗ | ✓ | ✗ | ✓ | N/A | N/A |
| Bidirectional | ✗ | ✗* | ✓ | ✓ | ✓ | N/A | ✗ |
| Multiple readers | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ |
| Multiple writers | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Pass file descriptors | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ |
| Priority support | ✗ | ✗ | ✓ | ✗ | ✗ | N/A | ✓** |
| select/poll/epoll | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗*** |
| Zero-copy potential | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | N/A |
| Cross-machine capable | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |

*Named pipes can be opened at both ends for read/write, but it's complex.
**Signals have implicit priority (real-time signals can queue).
***Signals can be waited for with signalfd, which IS poll-able.

Network IPC (for comparison):

| Mechanism | Unrelated processes | Bidirectional | Message boundaries | Multiple readers | Multiple writers | Pass FDs | Priority |
|---|---|---|---|---|---|---|---|
| TCP Socket | ✓ | ✓ | ✗ | ✓ | N/A | ✗ | ✗ |
| UDP Socket | ✓ | ✓ | ✓ | N/A | ✓ | ✗ | ✗ |

Linux has added newer mechanisms like io_uring (asynchronous I/O with shared ring buffers), memfd (anonymous memory-backed files), and pidfd (process file descriptors). These extend the toolkit but are Linux-specific and beyond our scope. The core mechanisms above work across Unix variants.
Let's map common scenarios to recommended mechanisms. These matrices encode experience from production systems.
Scenario 1: Parent-Child Communication
| Requirement | Recommended | Reason |
|---|---|---|
| Simple one-way data flow | Anonymous pipe | Simplest setup, inherited after fork(), no cleanup needed |
| Bidirectional conversation | Unix domain socketpair() | Single call creates bidirectional channel |
| Structured messages | Unix domain socket (SEQPACKET) | Message boundaries + connection semantics |
| Large data transfer | Anonymous pipe + shared memory | Pipe for coordination, shm for bulk data |
| Process exit notification | wait()/waitpid() + exit status | Built into process model, no IPC needed |
Scenario 2: Unrelated Process Communication
| Requirement | Recommended | Reason |
|---|---|---|
| Client-server request-response | Unix domain socket | Bidirectional, credential passing, async-capable |
| Work queue (one producer, many consumers) | POSIX message queue | Designed for this; atomic delivery to one consumer |
| Publish-subscribe (fan-out) | Unix domain socket + application routing | Sockets support multiple connections; route in app |
| High-throughput data sharing | Shared memory + notification mechanism | Zero-copy sharing; socket/eventfd for notification |
| Simple notification/event | Signal or eventfd | Lightweight; no data beyond 'something happened' |
| Configuration sharing | Memory-mapped file (read-only) | Persistent, multiple readers, no coordination |
Scenario 3: System Service Communication
| Requirement | Recommended | Reason |
|---|---|---|
| Desktop service (Linux) | D-Bus over Unix socket | Standard for Linux desktop; introspection, discovery |
| Custom daemon protocol | Unix domain socket (STREAM) | Connection-oriented, can pass credentials/FDs |
| Log aggregation | Unix domain socket (DGRAM) | Connectionless logging; syslog uses this |
| IPC across containers | TCP/gRPC or shared volumes | Containers have separate namespaces |
| High-security privilege separation | Socket with SO_PEERCRED | Kernel-verified credentials, minimal attack surface |
Scenario 4: Performance-Critical IPC
| Requirement | Recommended | Reason |
|---|---|---|
| Lowest possible latency | Shared memory + spinlock | Sub-microsecond; no kernel involvement |
| Highest throughput (bulk) | Shared memory + ring buffer | Zero-copy, batched operations |
| Mixed (control + data) | Socket for control, shared memory for data | Hybrid: explicit control, fast data |
| Producer-consumer (SPSC) | Lock-free shared memory queue | No locks = no blocking = predictable latency |
| Multi-producer multi-consumer | Consider message queue first, then shm | Lock contention often negates shm advantage |
If your scenario matches multiple rows with different recommendations, prioritize: (1) correctness requirements first, (2) security needs second, (3) performance last. A correct, secure system that's slightly slower is always better than a fast system that's buggy or insecure.
Let's walk through complete decision processes for common use cases.
Use Case: Build a pipeline like cmd1 | cmd2 | cmd3
Requirements: related processes spawned by a common parent, one-way byte-stream data flow between stages, and automatic cleanup when the pipeline exits.
Decision: Anonymous Pipe
```c
// Implementing: cmd1 | cmd2
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int pipefd[2];
pipe(pipefd);  // pipefd[0] = read end, pipefd[1] = write end

pid_t pid1 = fork();
if (pid1 == 0) {
    // Child 1: cmd1
    close(pipefd[0]);                // Don't need read end
    dup2(pipefd[1], STDOUT_FILENO);  // stdout → pipe write
    close(pipefd[1]);
    execlp("cmd1", "cmd1", NULL);
}

pid_t pid2 = fork();
if (pid2 == 0) {
    // Child 2: cmd2
    close(pipefd[1]);                // Don't need write end
    dup2(pipefd[0], STDIN_FILENO);   // stdin ← pipe read
    close(pipefd[0]);
    execlp("cmd2", "cmd2", NULL);
}

// Parent
close(pipefd[0]);
close(pipefd[1]);
waitpid(pid1, NULL, 0);
waitpid(pid2, NULL, 0);

// Why anonymous pipe is perfect:
// ✓ Automatic cleanup when both ends close
// ✓ Flow control (writer blocks when buffer full)
// ✓ Works seamlessly with exec'd processes
// ✓ No naming, no permissions, no cleanup code
```

Learning from others' mistakes is efficient. Here are common IPC selection errors and how to avoid them.
```c
// ❌ MISTAKE: Polling shared memory for changes
while (1) {
    if (shared->flag == READY) {  // Spinning! Wasting CPU!
        process(shared->data);
        shared->flag = PROCESSED;
    }
    // Even with usleep(), this is inefficient and has latency issues
}

// ✓ CORRECT: Use condition variable or eventfd
pthread_mutex_lock(&shared->mutex);
while (shared->flag != READY) {
    pthread_cond_wait(&shared->cond, &shared->mutex);  // Sleeps efficiently
}
process(shared->data);
shared->flag = PROCESSED;
pthread_cond_signal(&shared->cond);
pthread_mutex_unlock(&shared->mutex);

// ❌ MISTAKE: Named pipe for bidirectional
int fd = open("/tmp/myfifo", O_RDWR);  // Seems to work...
write(fd, request, len);
read(fd, response, len);  // Might read your own request back!

// ✓ CORRECT: Use Unix domain socket for bidirectional
int sock = socket(AF_UNIX, SOCK_STREAM, 0);
connect(sock, ...);
write(sock, request, len);
read(sock, response, len);  // Receives peer's response, not your request

// ❌ MISTAKE: TCP for local IPC
// Creates unnecessary 3-way handshake, Nagle delay, etc.
int sock = socket(AF_INET, SOCK_STREAM, 0);
connect(sock, &(struct sockaddr_in){
    .sin_family = AF_INET,
    .sin_addr.s_addr = htonl(INADDR_LOOPBACK),
    .sin_port = htons(8080)
}, ...);

// ✓ CORRECT: Unix domain socket for local IPC
int sock = socket(AF_UNIX, SOCK_STREAM, 0);
connect(sock, &(struct sockaddr_un){
    .sun_family = AF_UNIX,
    .sun_path = "/var/run/myservice.sock"
}, ...);
// Faster, no TCP overhead, credential passing available
```

The most common mistake is choosing shared memory 'because it's faster' without measuring whether IPC is actually a bottleneck. In most applications, computation and I/O dominate; IPC overhead is negligible. Optimize IPC only after profiling proves it matters.
If your code needs to run on multiple Unix variants, Windows, or embedded systems, IPC mechanism choice is constrained by portability.
| Mechanism | Linux | macOS | FreeBSD | Windows | Notes |
|---|---|---|---|---|---|
| Anonymous Pipe | ✓ | ✓ | ✓ | ✓* | Windows pipes have different semantics |
| Named Pipe (FIFO) | ✓ | ✓ | ✓ | ✗ | Windows named pipes are different |
| Unix Domain Socket | ✓ | ✓ | ✓ | ✓** | Windows 10+ has Unix socket support |
| POSIX Message Queue | ✓ | ✗ | ✗ | ✗ | macOS lacks full implementation |
| POSIX Shared Memory | ✓ | ✓ | ✓ | ✗ | Windows has different APIs |
| System V IPC | ✓ | ✓ | ✓ | ✗ | Maximum Unix portability |
| TCP/UDP Socket | ✓ | ✓ | ✓ | ✓ | Universal, with platform differences |
| Signals | ✓ | ✓ | ✓ | ✗*** | Windows signals are very limited |
*Windows anonymous pipes are similar but not identical to Unix pipes.
**Windows 10 1803+ supports AF_UNIX sockets, but with limitations.
***Windows has a completely different signal model; only SIGINT, SIGTERM, etc. in very limited form.
Recommendation by Target:
If portability is critical, consider IPC abstraction libraries: Boost.Interprocess (C++), libuv (C), ZeroMQ (many languages), or gRPC (for network+local). These hide platform differences and provide consistent APIs.
Use this checklist when choosing an IPC mechanism. Work through each question to narrow your options.
```text
IPC MECHANISM SELECTION CHECKLIST
═══════════════════════════════════════════════════════════════════════

□ PROCESS RELATIONSHIP
  [ ] Related (parent-child/siblings)? → Anonymous pipe, inherited sockets OK
  [ ] Unrelated processes?             → Need named mechanism (paths, network)

□ DATA CHARACTERISTICS
  [ ] Message-oriented or byte stream?
  [ ] What's the typical message size?  <1KB / 1KB-1MB / >1MB
  [ ] What's the transfer frequency?    <100/s / 100-10K/s / >10K/s
  [ ] Message boundaries important?    → Avoid pipes, use MQ or SEQPACKET

□ COMMUNICATION PATTERN
  [ ] Unidirectional?    → Pipe or MQ
  [ ] Bidirectional?     → Socket or shared memory
  [ ] Request-response?  → Socket (natural for this)
  [ ] Publish-subscribe? → Socket + app routing, or MQ per subscriber
  [ ] Producer-consumer? → MQ or shared memory ring buffer

□ PERFORMANCE REQUIREMENTS
  [ ] Latency critical (<1μs)?      → Shared memory with careful sync
  [ ] Throughput critical (>1GB/s)? → Shared memory for data, socket for control
  [ ] CPU usage matters?            → Shared memory avoids copy overhead
  [ ] 'Normal' performance fine?    → Message passing is usually sufficient

□ SECURITY REQUIREMENTS
  [ ] Need peer authentication?               → Unix socket (SO_PEERCRED) or TLS
  [ ] Audit trail required?                   → Message passing (observable)
  [ ] Processes don't fully trust each other? → Avoid shared memory
  [ ] Cross-user or cross-privilege?          → Socket with credential check

□ OPERATIONAL REQUIREMENTS
  [ ] Need to monitor IPC health? → Socket (connection state), MQ (queue depth)
  [ ] Debugging priority?         → Message passing (log messages)
  [ ] Crash recovery needs?       → Persistent MQ, or socket reconnection
  [ ] Hot restart/upgrade?        → Socket (new process takes over)

□ PORTABILITY
  [ ] Linux-only?    → Everything available
  [ ] Need macOS?    → Avoid POSIX MQ
  [ ] Need Windows?  → Use sockets or abstraction library
  [ ] Need embedded? → Check RTOS support

AFTER CHECKLIST - RECOMMENDATION:
─────────────────────────────────
Most answers favor simplicity        → Unix Domain Socket
Need message priority/queueing       → POSIX Message Queue
Need maximum performance + expertise → Shared Memory + Semaphore
Parent-child simple flow             → Anonymous Pipe
Notification only                    → Signal or eventfd
```

If you're unsure after the checklist, choose Unix domain socket. It's the most versatile local IPC mechanism—bidirectional, supports credentials and FD passing, works with async I/O, and is widely understood. You can always specialize later.
Sometimes you realize the initial IPC choice was wrong, or requirements changed. Here's how to migrate between mechanisms with minimal disruption.
From pipe/FIFO to Unix domain socket: read/write become send/recv, and the main work is connection setup; you gain bidirectionality and additional features.

From Unix domain socket to TCP: change AF_UNIX to AF_INET/AF_INET6 and add authentication (TLS or app-level); the code structure is mostly unchanged.

Abstraction Layer Pattern
For flexibility in IPC choice, abstract the communication behind an interface:
```c
// Abstract IPC interface - swap implementation without changing callers

typedef struct ipc_channel ipc_channel_t;

// Generic interface
ipc_channel_t* ipc_create(const char *config);
void ipc_destroy(ipc_channel_t *ch);
int ipc_send(ipc_channel_t *ch, const void *msg, size_t len);
int ipc_recv(ipc_channel_t *ch, void *buf, size_t len);

// Implementations (compile different ones as needed)

// --- Unix Socket Implementation ---
struct ipc_channel {
    int sock_fd;
};

ipc_channel_t* ipc_create_socket(const char *path) {
    ipc_channel_t *ch = malloc(sizeof(*ch));
    ch->sock_fd = socket(AF_UNIX, SOCK_STREAM, 0);
    // connect or bind...
    return ch;
}

int ipc_send_socket(ipc_channel_t *ch, const void *msg, size_t len) {
    return send(ch->sock_fd, msg, len, 0);
}

// --- Shared Memory Implementation ---
struct ipc_channel {
    void *shm_addr;
    sem_t *sem;
};

int ipc_send_shm(ipc_channel_t *ch, const void *msg, size_t len) {
    sem_wait(ch->sem);
    memcpy(ch->shm_addr, msg, len);
    sem_post(ch->sem);
    // Notify receiver somehow...
    return len;
}

// Caller code unchanged when switching implementations:
ipc_channel_t *ch = ipc_create("/var/run/myapp");
ipc_send(ch, &message, sizeof(message));
```

If you're uncertain about IPC requirements, invest in an abstraction layer upfront. The overhead is minimal, and the flexibility to swap mechanisms without restructuring your application is valuable.
We've developed a complete framework for IPC mechanism selection. Let's consolidate the key principles:
Module Complete: IPC Overview
You've completed the IPC Overview module! You now understand:
The subsequent modules in this chapter will dive deep into specific IPC mechanisms—pipes, FIFOs, message queues, shared memory, signals—with implementation details and advanced patterns.
Congratulations! You've mastered the foundational concepts of Inter-Process Communication. You understand why IPC is needed, the two fundamental paradigms (shared memory and message passing), how to compare them, and how to select the right mechanism for any situation. You're ready to dive into specific IPC mechanisms in the upcoming modules.