Imagine two colleagues in different offices who need to collaborate. Instead of sharing a whiteboard (shared memory), they exchange sealed envelopes through a mailroom. Each message is a discrete unit—sent, delivered, and received. The mailroom handles routing, ensures delivery order, and neither colleague ever directly accesses the other's office.
This is the message passing model of IPC. Processes communicate by sending and receiving messages through channels managed by the operating system. The kernel acts as an intermediary—providing structure, safety, and synchronization that shared memory lacks.
Message passing trades some raw performance for significant advantages: processes don't need to agree on memory layouts, synchronization happens automatically with each message, and the communication is explicit and traceable. This model dominates in distributed systems, microservices, and many local IPC scenarios.
By the end of this page, you will understand the message passing abstraction, the key design dimensions (naming, synchronization, buffering), how message passing mechanisms implement these concepts, the trade-offs compared to shared memory, and when to choose message passing for your IPC needs.
At its core, message passing provides two fundamental operations:

- send(destination, message) — place a message into a channel
- receive(source, &message) — take the next message out of a channel
This abstraction seems simple, but the details of how these operations behave define dramatically different messaging systems. Before examining specific mechanisms, let's understand the conceptual model.
The Channel Abstraction
A message passing channel is a logical conduit between processes. Messages flow from sender to receiver(s), but unlike shared memory, processes never directly access each other's address space. The kernel (or a messaging infrastructure) owns the channel and mediates all communication.
```
Message Passing vs. Shared Memory Conceptual Model
═══════════════════════════════════════════════════

SHARED MEMORY MODEL:

┌─────────────────┐                      ┌─────────────────┐
│ Process A       │                      │ Process B       │
│                 │                      │                 │
│ Write to addr   │────────┬────────────►│ Read from addr  │
│                 │        │             │                 │
└─────────────────┘        │             └─────────────────┘
                      ┌────▼────┐
                      │ Shared  │
                      │ Memory  │  ← Both directly access
                      │ Region  │    the same memory
                      └─────────┘

MESSAGE PASSING MODEL:

┌─────────────────┐                      ┌─────────────────┐
│ Process A       │                      │ Process B       │
│                 │                      │                 │
│ send(B, msg)    │                      │ recv(A, &msg)   │
│                 │                      │                 │
└────────┬────────┘                      └────────▲────────┘
         │                                        │
         │    ┌─────────────────────────────┐     │
         └───►│ Message Channel             │─────┘
              │ (kernel-managed buffer)     │
              │                             │
              │ [msg1] [msg2] [msg3] ...    │  ← Kernel owns
              └─────────────────────────────┘    the channel
```

Key Properties of Message Passing:
Isolation is maintained: Senders never access receivers' memory and vice versa. Messages are copied in and out of the channel.
Explicit communication: Every data exchange requires explicit send/receive calls. There's no ambient sharing—you know exactly when data moves.
Kernel involvement: The kernel handles buffering, synchronization, and delivery. This adds overhead but provides guarantees.
Message boundaries: Unlike byte streams, message-oriented channels preserve message boundaries. If you send a 100-byte message, the receiver gets a 100-byte message (not 50 bytes twice).
Ordering guarantees: Most message passing systems guarantee that messages from sender A to receiver B arrive in the order sent (FIFO ordering).
Message passing's explicit nature is its safety advantage. You can't accidentally corrupt another process's data—you don't have access to it. Every piece of data in your address space is under your control. Bugs are more likely to be local to one process rather than causing system-wide corruption.
How do processes find each other? This is the naming problem in message passing. Two fundamental approaches exist: direct and indirect naming.
Direct Naming
In direct naming, processes explicitly name each other:
```c
// Direct Naming - Symmetric (both sides name each other)

// Process A:
send(process_B, message);

// Process B:
receive(process_A, &message);

// Direct Naming - Asymmetric (sender names, receiver accepts any)

// Process A:
send(process_B, message);        // Sender specifies recipient

// Process B (server pattern):
receive(&sender_id, &message);   // Accept from any sender
reply(sender_id, response);      // Reply to whoever sent
```

Advantages of Direct Naming:

- Simple: no channels or mailboxes to create and manage
- The receiver knows exactly which process each message came from
- A link exists between each named pair without extra setup
Disadvantages:

- Tight coupling: each process must know the other's identity (e.g., its PID) in advance
- Limited modularity: renaming or replacing a process means updating every process that names it
- Awkward for many-to-many patterns such as work queues or services with multiple clients
Indirect Naming (Mailboxes/Ports)
In indirect naming, processes communicate through mailboxes (also called ports or channels) rather than naming each other directly:
```c
// Indirect Naming - Mailbox Model
// Both processes send to/receive from a mailbox, not each other

// Create a mailbox
mailbox_t mailbox = create_mailbox("/my_mailbox");

// Process A (sender):
send(mailbox, message);              // Send to mailbox, not to a process

// Process B (receiver):
receive(mailbox, &message);          // Receive from mailbox

// Multiple senders, multiple receivers possible!
// Process C (another sender):
send(mailbox, another_message);

// Process D (another receiver):
receive(mailbox, &yet_another_message);

// Who gets which message? Depends on the mailbox semantics:
// - One-to-one: Exactly one receiver gets each message
// - Broadcasting: All receivers get all messages (rare)
```

| Mechanism | Naming Model | Details |
|---|---|---|
| Pipes (anonymous) | Implicit/direct | Created by parent, inherited by children—no global name |
| Named Pipes (FIFO) | Indirect | Filesystem path /tmp/myfifo names the channel |
| POSIX Message Queues | Indirect | Named like /my_queue; any process can access |
| Unix Domain Sockets | Indirect | Filesystem path or abstract namespace |
| Windows Named Pipes | Indirect | Named like \\.\pipe\my_pipe |
| Signals | Direct | Sent to a specific PID |
| D-Bus | Indirect + Service Names | Bus addresses and service names (e.g., org.freedesktop.NetworkManager) |
Indirect naming decouples senders from receivers. A web service can be restarted, scaled to multiple processes, or replaced entirely—as long as the new processes use the same mailbox, senders are unaffected. This is why most production IPC uses indirect naming.
Perhaps the most important design dimension is synchronization: when do send and receive operations complete, and how do they coordinate sender and receiver?
Blocking (Synchronous) Communication
With blocking communication, operations wait until they complete:
```
Blocking Communication Timeline
════════════════════════════════

BLOCKING SEND (waits for receiver):
─────────────────────────────────────────────────────────────────────►
Sender:   |--send()--| (blocked)           |------- continues -------
                 │                         ▲
                 │ message in              │ receiver accepted
                 ▼ kernel buffer           │ the message
─────────────────────────────────────────────────────────────────────►
Receiver:                   |--receive()--|

BLOCKING RECEIVE (waits for sender):
─────────────────────────────────────────────────────────────────────►
Receiver: |--receive()--| (blocked)        |---- continues (with message)
                 │                         ▲
                 │ waiting for             │ sender provided
                 ▼ a message               │ a message
─────────────────────────────────────────────────────────────────────►
Sender:                     |--send()--|

With blocking semantics:
• Sender cannot proceed until receiver acknowledges
• Receiver cannot proceed until a message arrives
• Provides natural flow control and synchronization
• Simple to reason about - no race conditions
```

Non-Blocking (Asynchronous) Communication
With non-blocking communication, operations return immediately:
```
Non-Blocking Communication Timeline
═════════════════════════════════════

NON-BLOCKING SEND:
─────────────────────────────────────────────────────────────────────►
Sender:   |--send()--| message queued, returns immediately
               |       continues without waiting for receiver
               ▼
      ┌─────────────────────┐
      │ Message Buffer      │  (kernel holds message)
      │ [msg in queue]      │
      └─────────────────────┘
               ▲
Receiver:      └── receive() later gets the message

NON-BLOCKING RECEIVE:
─────────────────────────────────────────────────────────────────────►
Receiver: |--receive()--| returns immediately (may be empty!)
               │  if message available: returns message
               │  if no message: returns error/indication
               ▼
          continue doing other work...
               │
               └─► poll again later with receive()

Non-blocking semantics:
• Sender doesn't wait - messages buffered
• Receiver doesn't wait - checks and moves on
• Requires explicit polling or event notification
• Can lead to buffer overflow if sender is faster
• Enables parallelism and responsiveness
```

| Mechanism | Default Mode | Alternative Mode | Notes |
|---|---|---|---|
| Pipes | Blocking | O_NONBLOCK flag | Blocking read waits for data; blocking write waits if pipe full |
| POSIX Message Queues | Blocking | O_NONBLOCK or timeouts | mq_timedreceive() allows timeout |
| Unix Domain Sockets | Blocking | O_NONBLOCK or async I/O | Full async I/O support with select/poll/epoll |
| Signals | Asynchronous | sigwait() for synchronous | Signals interrupt the process asynchronously |
| Windows Named Pipes | Blocking | OVERLAPPED I/O | Windows async mechanism |
Rendezvous Communication
A special case of synchronous communication is rendezvous, where both sender and receiver must reach the communication point simultaneously: neither side proceeds until the other arrives, and the message is handed over directly with no buffering.
Rendezvous is rare in Unix IPC but common in programming language primitives (Go's unbuffered channels, Erlang's receive with no mailbox queue, CSP-style synchronous channels).
```go
// Go's unbuffered channels provide rendezvous semantics
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int)    // Unbuffered channel
	done := make(chan bool) // Keeps main alive until the receiver finishes

	// Goroutine 1 (sender)
	go func() {
		fmt.Println("Sender: about to send")
		ch <- 42 // Blocks until receiver is ready
		fmt.Println("Sender: sent!")
	}()

	// Goroutine 2 (receiver)
	go func() {
		time.Sleep(time.Second) // Delay receiver
		fmt.Println("Receiver: about to receive")
		value := <-ch // Blocks until sender is ready
		fmt.Println("Receiver: got", value)
		done <- true
	}()

	<-done

	// Possible output:
	// Sender: about to send
	// (1 second delay)
	// Receiver: about to receive
	// Sender: sent!
	// Receiver: got 42

	// Note: the sender was blocked for ~1 second waiting for the receiver!
}
```

Blocking operations are simpler but can cause deadlocks if you're not careful. Non-blocking operations are more flexible but require managing polling/events. Rendezvous forces tight synchronization but guarantees no buffering overflow. Choose based on your application's coordination needs.
Buffering refers to how many messages can be in transit at once—stored in the channel between send and receive. This interacts with synchronization to determine system behavior.
| Buffer Capacity | Behavior | Implications |
|---|---|---|
| Zero (Rendezvous) | No messages stored; sender and receiver must synchronize | Strongest coordination; sender always knows receiver got message; no memory for buffers |
| Bounded (N messages) | Up to N messages can be queued; sender blocks when full | Flow control built-in; prevents runaway memory use; common in practice |
| Unbounded | Unlimited messages can be queued | Sender never blocks on buffer; can exhaust memory; rarely truly unbounded |
Bounded Buffering in Practice
Most real-world IPC uses bounded buffering. The buffer size represents a trade-off:
```
Effect of Buffer Size on Producer-Consumer Communication
═══════════════════════════════════════════════════════════

Scenario: Producer generates messages at 100/sec,
          Consumer processes at 80/sec

Buffer Size = 0 (Rendezvous):
────────────────────────────────────────────────────────────
Producer blocks on EVERY send waiting for consumer.
Effective rate: 80 messages/sec (limited by consumer)
Producer is idle 20% of the time.

Buffer Size = 10:
────────────────────────────────────────────────────────────
Initially: Buffer fills at 20 messages/sec (100 produced - 80 consumed)
After 0.5 seconds: Buffer full (10 messages)
Then: Producer blocks until consumer catches up.
Effective rate: Still 80 messages/sec average
But: Produces in bursts, handles temporary producer speed
     variations better.

Buffer Size = 1000:
────────────────────────────────────────────────────────────
Takes 50 seconds to fill the buffer
Allows producer to work ahead significantly
But: 50 seconds of work lost if consumer crashes before draining buffer
Memory usage: 1000 messages worth of RAM

Buffer Size = Unbounded (conceptual):
────────────────────────────────────────────────────────────
Buffer grows indefinitely at 20 messages/sec
After 1 hour: 72,000 messages buffered (potentially gigabytes!)
Eventually: System runs out of memory
This is why "unbounded" is dangerous.
```

Flow Control
Bounded buffers provide natural flow control: the sender is slowed down when the receiver can't keep up. This prevents the sender from overwhelming the receiver with data.
Without flow control:

- A fast producer queues data faster than the consumer drains it
- Buffers grow until memory is exhausted or messages are dropped
- Latency climbs as the backlog lengthens
With bounded buffer:

- When the buffer fills, send blocks (or fails with EAGAIN in non-blocking mode)
- The producer is automatically throttled to the consumer's pace
- Memory use is capped at the buffer size
```c
#include <mqueue.h>
#include <fcntl.h>
#include <stdio.h>

int main() {
    // POSIX message queues have configurable max messages
    struct mq_attr attr;
    attr.mq_flags = 0;
    attr.mq_maxmsg = 10;      // Maximum 10 messages in queue
    attr.mq_msgsize = 1024;   // Maximum 1024 bytes per message
    attr.mq_curmsgs = 0;      // Current messages (read-only after creation)

    mqd_t mq = mq_open("/flow_controlled_queue", O_CREAT | O_RDWR, 0666, &attr);
    if (mq == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    // Now sending behavior:
    // - First 10 sends succeed immediately (buffer has space)
    // - 11th send BLOCKS until a message is received
    // - This provides automatic flow control!

    char msg[] = "test message";
    for (int i = 0; i < 20; i++) {
        printf("Sending message %d... ", i);

        // mq_send blocks when queue is full (unless O_NONBLOCK)
        if (mq_send(mq, msg, sizeof(msg), 0) == 0) {
            printf("sent!\n");
        }
    }

    // With O_NONBLOCK flag:
    // - mq_send returns -1 with errno=EAGAIN when full
    // - Caller decides whether to retry, drop, or wait

    mq_close(mq);
    mq_unlink("/flow_controlled_queue");
    return 0;
}
```

Too small a buffer causes excessive blocking and poor throughput. Too large a buffer wastes memory and delays backpressure signaling. Size buffers based on expected burst size and acceptable latency, not arbitrarily. Monitor buffer fill levels in production to tune appropriately.
Now let's map these design dimensions to concrete IPC mechanisms. Unix provides several message passing mechanisms, each with different characteristics.
| Mechanism | Naming | Directionality | Message Boundaries | Typical Use |
|---|---|---|---|---|
| Pipes (anonymous) | Implicit (fork inheritance) | Unidirectional | Byte stream (no boundaries) | Parent-child, shell pipelines |
| Named Pipes (FIFOs) | Filesystem path | Unidirectional | Byte stream | Unrelated processes, simple IPC |
| POSIX Message Queues | Named (/name) | Messages to/from queue | True messages (preserved) | Request-response, work queues |
| Unix Domain Sockets (DGRAM) | Path or abstract | Bidirectional messages | True messages | Local client-server, D-Bus |
| Unix Domain Sockets (STREAM) | Path or abstract | Bidirectional byte stream | Byte stream | Local client-server, complex protocols |
| Signals | Process ID | Sender to receiver | Signal number only (tiny) | Notifications, control signals |
Byte Stream vs. Message-Oriented
A critical distinction is whether the mechanism preserves message boundaries:
Byte Stream (Pipes, TCP sockets, STREAM sockets):

- Data is a continuous flow of bytes with no built-in message boundaries
- One write may be split across several reads, or several writes merged into one read
- The application must add its own framing (length prefixes or delimiters)
Message-Oriented (POSIX MQ, DGRAM sockets, UDP):

- Each send produces one discrete message; each receive returns exactly one message
- Messages are never merged or split by the transport
- No framing protocol is needed; boundaries come for free
```c
// Problem: Byte streams don't preserve message boundaries

#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>   // htonl / ntohl

// Sender sends two messages:
write(pipe_fd, "Hello", 5);   // First message
write(pipe_fd, "World", 5);   // Second message

// Receiver might see:
// - "HelloWorld" in one read (messages merged)
// - "He" then "lloWor" then "ld" (split across reads)
// - "Hello" then "World" (lucky alignment, not guaranteed!)

// Solution: Message framing protocol

// Length-prefixed framing:
void send_message(int fd, const char *msg, size_t len) {
    uint32_t netLen = htonl(len);           // Network byte order
    write(fd, &netLen, sizeof(netLen));     // Send length first
    write(fd, msg, len);                    // Then message body
}

int recv_message(int fd, char *buf, size_t bufSize) {
    uint32_t netLen;
    if (read(fd, &netLen, sizeof(netLen)) != sizeof(netLen)) {
        return -1;   // Connection closed or error
    }
    size_t len = ntohl(netLen);
    if (len > bufSize) {
        return -1;   // Message too large for buffer
    }

    // Read exactly 'len' bytes (may need loop for partial reads)
    size_t total = 0;
    while (total < len) {
        ssize_t n = read(fd, buf + total, len - total);
        if (n <= 0) return -1;
        total += n;
    }
    return len;
}

// Alternative: Delimiter-based framing (like HTTP headers)
// Each message ends with "\n" or "\r\n"
// Simpler but fails if message contains the delimiter
```

POSIX Message Queues: True Messages
```c
#include <mqueue.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>

int main() {
    struct mq_attr attr = {
        .mq_maxmsg = 10,     // Max 10 messages
        .mq_msgsize = 256    // Max 256 bytes per message
    };

    // Server: create the queue for receiving
    mqd_t mq = mq_open("/example_queue", O_CREAT | O_RDONLY, 0666, &attr);

    // Client: open the same queue for sending
    mqd_t client_mq = mq_open("/example_queue", O_WRONLY);

    const char *msg1 = "First message";
    const char *msg2 = "Second message";

    // These are sent as SEPARATE messages
    mq_send(client_mq, msg1, strlen(msg1), 1);   // Priority 1
    mq_send(client_mq, msg2, strlen(msg2), 2);   // Priority 2

    // Receiver gets them as separate messages, in priority order!
    // Higher priority (2) comes first, then (1)
    char buffer[256];
    unsigned int priority;

    // Receive a complete message (boundaries preserved automatically!)
    ssize_t bytes = mq_receive(mq, buffer, sizeof(buffer), &priority);
    if (bytes > 0) {
        printf("Received message: '%.*s' (priority: %u)\n",
               (int)bytes, buffer, priority);
    }

    mq_close(mq);
    mq_close(client_mq);
    mq_unlink("/example_queue");   // Cleanup
    return 0;
}
```

POSIX message queues support message priorities. Higher priority messages are delivered first, regardless of send order. This is useful for out-of-band control messages that should skip ahead of data messages. Most pipes and sockets don't support priorities natively.
Unix domain sockets deserve special attention as they're the most versatile local IPC mechanism. They support both byte stream and message-oriented communication, bidirectional data flow, and multiple connection patterns.
| Type | Behavior | Use Case |
|---|---|---|
| SOCK_STREAM | Connection-oriented byte stream (like TCP) | Client-server with long sessions, complex protocols |
| SOCK_DGRAM | Connectionless messages (like UDP) | Simple request-response, messages up to ~128KB |
| SOCK_SEQPACKET | Connection-oriented messages | Best of both: connections with message boundaries (rare) |
```c
// Unix Domain Socket Server (SOCK_STREAM example)

#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

#define SOCKET_PATH "/tmp/my_socket"

int main() {
    // Create socket
    int server_fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (server_fd == -1) {
        perror("socket");
        return 1;
    }

    // Bind to a filesystem path
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SOCKET_PATH, sizeof(addr.sun_path) - 1);

    // Remove existing socket file if present
    unlink(SOCKET_PATH);

    if (bind(server_fd, (struct sockaddr*)&addr, sizeof(addr)) == -1) {
        perror("bind");
        return 1;
    }

    // Listen for connections
    if (listen(server_fd, 5) == -1) {
        perror("listen");
        return 1;
    }

    printf("Server listening on %s\n", SOCKET_PATH);

    // Accept a connection
    int client_fd = accept(server_fd, NULL, NULL);
    if (client_fd == -1) {
        perror("accept");
        return 1;
    }

    printf("Client connected!\n");

    // Exchange data
    char buffer[256];
    ssize_t n = read(client_fd, buffer, sizeof(buffer) - 1);
    if (n > 0) {
        buffer[n] = '\0';
        printf("Received: %s\n", buffer);

        const char *response = "Hello from server!";
        write(client_fd, response, strlen(response));
    }

    close(client_fd);
    close(server_fd);
    unlink(SOCKET_PATH);
    return 0;
}
```

Special Features of Unix Domain Sockets:
1. File Descriptor Passing
Unix domain sockets can pass file descriptors between processes. This is enormously powerful—one process can open a file, database connection, or network socket, then pass the open descriptor to another process.
```c
// Sending a file descriptor over Unix domain socket

#include <sys/socket.h>
#include <sys/un.h>
#include <fcntl.h>

void send_fd(int socket, int fd_to_send) {
    struct msghdr msg = {0};
    struct iovec iov[1];
    char buf[1] = {0};   // Must send at least 1 byte of data

    iov[0].iov_base = buf;
    iov[0].iov_len = 1;
    msg.msg_iov = iov;
    msg.msg_iovlen = 1;

    // Control message carries the file descriptor
    char cmsgbuf[CMSG_SPACE(sizeof(int))];
    msg.msg_control = cmsgbuf;
    msg.msg_controllen = sizeof(cmsgbuf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;   // "Sending file descriptor rights"
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    *((int*)CMSG_DATA(cmsg)) = fd_to_send;

    sendmsg(socket, &msg, 0);
}

// Use case: A privileged daemon opens /etc/shadow (which only root can read)
// and passes the open fd to an unprivileged process for processing.
// The unprivileged process can read the file through the fd even though
// it couldn't have opened it itself!
```

2. Credential Passing
Unix domain sockets can reliably identify the peer process. The kernel provides the peer's PID, UID, and GID—these cannot be forged.
```c
// Getting peer credentials (SO_PEERCRED)

#define _GNU_SOURCE   // struct ucred is a GNU extension on glibc
#include <sys/socket.h>
#include <unistd.h>
#include <stdio.h>

void check_peer(int socket) {
    struct ucred cred;
    socklen_t len = sizeof(cred);

    if (getsockopt(socket, SOL_SOCKET, SO_PEERCRED, &cred, &len) == 0) {
        printf("Peer process ID: %d\n", cred.pid);
        printf("Peer user ID: %d\n", cred.uid);
        printf("Peer group ID: %d\n", cred.gid);

        // Security decision based on peer identity
        if (cred.uid != 0 && cred.uid != getuid()) {
            printf("Rejecting connection from unprivileged user\n");
            close(socket);
            return;
        }
    }
}

// This is how D-Bus, systemd, and many security-sensitive daemons
// authenticate clients without passwords!
```

Unix domain sockets are often the best choice for local IPC: they're bidirectional, support both streams and messages, can pass file descriptors and credentials, integrate with select/poll/epoll for async I/O, and perform nearly as well as pipes while offering much more functionality. They're the foundation of D-Bus, systemd, and most modern Linux service communication.
Now that we've explored both paradigms, let's compare them directly. Understanding when to choose each is crucial for effective systems design.
| Aspect | Message Passing | Shared Memory |
|---|---|---|
| Synchronization | Implicit (kernel handles) | Explicit (must implement) |
| Data isolation | Complete (processes never share memory) | None (same memory visible to all) |
| Performance overhead | Copy into kernel, copy out (2 copies) | Zero copies after setup |
| Latency | Microseconds (system call overhead) | Nanoseconds (memory access) |
| Throughput | Limited by copy bandwidth (~GB/s) | Memory bandwidth (~tens of GB/s) |
| Programming difficulty | Simpler (explicit send/receive) | Harder (synchronization, memory ordering) |
| Debugging | Easier (messages are observable) | Harder (race conditions, corruption) |
| Scalability | Works across machines (with sockets) | Same machine only |
| Security | Auditable, policy-enforceable | Hard to audit, all-or-nothing access |
| Failure modes | Connection drops, messages lost | Silent corruption, crashes |
Decision Framework:

- Choose message passing when you need isolation, debuggability, security auditing, or the option to distribute across machines
- Choose shared memory when you need maximum throughput or minimum latency for large or high-frequency data on one machine
- When in doubt, start with message passing; switch to shared memory only after profiling shows the copies are the bottleneck
Hybrid Approaches
In practice, many systems use both:
Control via message passing + bulk data via shared memory: Send "process region X" via a socket, with region X being in shared memory. Chrome does this—Mojo messages might reference shared memory buffers for large payloads.
Initial discovery via message passing, then shared memory: Processes connect via sockets, negotiate a shared memory segment, then switch to shared memory for high-frequency data.
Message passing for rare events, shared memory for state: Configuration changes come via messages; current state is in shared memory for fast access.
Modern systems increasingly favor message passing for its safety and debuggability, accepting the performance cost. Containers, microservices, and security-conscious designs all push toward explicit, auditable communication. Shared memory is reserved for performance hotspots where it's truly necessary.
We've thoroughly explored the message passing model for IPC. Let's consolidate the key insights:

- Message passing exchanges discrete, kernel-mediated messages instead of sharing memory: explicit, isolated, and traceable
- Naming can be direct (processes name each other) or indirect (mailboxes, paths, queues); indirect naming decouples senders from receivers
- Synchronization ranges from blocking through non-blocking to rendezvous; buffering ranges from zero through bounded to (dangerously) unbounded
- Unix offers pipes, FIFOs, POSIX message queues, Unix domain sockets, and signals, which differ in naming, directionality, and message boundaries
- Message passing trades copy overhead for safety; hybrid designs combine message-passing control with shared-memory bulk data
What's Next: Comparing IPC Models
With both the shared memory and message passing models fully explored, the next page provides a comprehensive comparison of these fundamental approaches. We'll examine performance benchmarks, use case analyses, and decision frameworks to help you choose the right IPC model for any given situation.
You now understand the message passing model for IPC—its abstraction, design dimensions, mechanisms, and trade-offs. Combined with your understanding of shared memory, you have the foundation to analyze any IPC scenario. Next, we'll put these models side-by-side for a comprehensive comparison.