With multiple IPC mechanisms available—pipes, shared memory, sockets, signals—why would you choose message queues? Each IPC mechanism has its strengths, and message queues excel in specific scenarios that other mechanisms handle poorly.
Message queues provide a unique combination of properties: preserved message boundaries, asynchronous decoupling of sender and receiver, built-in kernel synchronization, message persistence independent of any process, and selective reception by type or priority.
Understanding when these advantages matter—and when they don't—is essential for making sound IPC architecture decisions.
By the end of this page, you will understand the specific advantages of message queues, how they compare to other IPC mechanisms, when message queues are the right choice, when they're the wrong choice, and how to make principled IPC architecture decisions.
The most fundamental advantage of message queues over pipes and sockets is message boundary preservation. When you send a message, it arrives as a discrete unit, not as a stream of bytes that might be fragmented or coalesced.
Consider sending structured data through a pipe:
// Sender
write(pipe_fd, "Hello", 5);
write(pipe_fd, "World", 5);
// Receiver - what happens?
read(pipe_fd, buffer, 100);
// buffer might contain:
// - "HelloWorld" (both writes coalesced)
// - "Hel" (partial read)
// - "Hello" (lucky case, but not guaranteed)
Pipes are byte streams. The OS makes no guarantees about how read() calls correspond to write() calls. This forces you to implement framing—length prefixes, delimiters, or fixed-size records.
With message queues, boundaries are automatic:
// Sender
mq_send(mq, "Hello", 5, 0);
mq_send(mq, "World", 5, 0);
// Receiver
len = mq_receive(mq, buffer, size, &prio); // len = 5, buffer = "Hello"
len = mq_receive(mq, buffer, size, &prio); // len = 5, buffer = "World"
// Guaranteed: each receive gets exactly one message
No framing code needed. Each mq_send() creates exactly one message that exactly one mq_receive() retrieves.
Message boundaries are most valuable when: (1) messages are variable-length, (2) messages contain binary data (so you can't use delimiters), (3) the protocol has many message types, (4) you want to minimize parsing code. If all your messages are fixed-size structs, pipes work fine.
Message queues decouple producers from consumers in time. The sender doesn't need the receiver to be ready—messages queue up and wait. This is fundamentally different from pipes (which block when full) and shared memory (which requires synchronization).
A producer can send a burst of messages and terminate immediately; the messages wait in the queue. A consumer can start later, attach to the queue, and process everything at its own pace.
The message queue acts as a buffer that absorbs rate mismatches:
Producer: [====fast burst====].....[burst].....[burst]
↓
[Message Queue Buffer]
↓
Consumer: [--steady consumption at average rate--]
Without this buffer, either the producer blocks (back-pressure) or messages are lost (overflow). Message queues provide a managed middle ground.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <mqueue.h>
#include <fcntl.h>
#include <sys/wait.h>
#include <time.h>

#define QUEUE_NAME "/async_demo"
#define MAX_MSG_SIZE 128

// ================================================
// Demonstration: Producer-Consumer Decoupling
// Producer sends burst, terminates.
// Consumer starts later, processes all messages.
// ================================================

void producer(int message_count) {
    struct mq_attr attr = {
        .mq_maxmsg = 50,
        .mq_msgsize = MAX_MSG_SIZE
    };

    mqd_t mq = mq_open(QUEUE_NAME, O_CREAT | O_WRONLY, 0644, &attr);
    if (mq == (mqd_t)-1) {
        perror("producer mq_open");
        exit(1);
    }

    printf("[Producer] Starting burst of %d messages\n", message_count);

    for (int i = 1; i <= message_count; i++) {
        char msg[MAX_MSG_SIZE];
        snprintf(msg, sizeof(msg), "Message #%d from PID %d", i, getpid());
        mq_send(mq, msg, strlen(msg), i % 10);  // Priority based on message number
    }

    printf("[Producer] Sent all %d messages\n", message_count);
    printf("[Producer] Terminating (messages remain queued)\n");

    mq_close(mq);
    // Note: NOT calling mq_unlink - queue persists
}

void consumer(void) {
    printf("\n[Consumer] Starting up...\n");

    mqd_t mq = mq_open(QUEUE_NAME, O_RDONLY | O_NONBLOCK);
    if (mq == (mqd_t)-1) {
        perror("consumer mq_open");
        exit(1);
    }

    struct mq_attr attr;
    mq_getattr(mq, &attr);
    printf("[Consumer] Found queue with %ld messages waiting\n", attr.mq_curmsgs);

    char buffer[MAX_MSG_SIZE + 1];  // +1 for the NUL terminator added below
    unsigned int priority;
    ssize_t len;
    int count = 0;

    while ((len = mq_receive(mq, buffer, attr.mq_msgsize, &priority)) > 0) {
        buffer[len] = '\0';
        printf("[Consumer] Received (prio=%u): %s\n", priority, buffer);
        count++;
        usleep(100000);  // Slow processing (100ms per message)
    }

    printf("[Consumer] Processed %d messages total\n", count);

    mq_close(mq);
    mq_unlink(QUEUE_NAME);  // Cleanup
}

int main(void) {
    // Clean up any existing queue
    mq_unlink(QUEUE_NAME);

    printf("=== Async Decoupling Demo ===\n\n");

    // Producer runs, sends messages, terminates
    producer(10);

    // Simulate delay before consumer starts
    printf("\n[Main] Producer terminated. Waiting 2 seconds...\n");
    sleep(2);

    // Consumer starts and finds messages waiting
    consumer();

    printf("\n=== Demo Complete ===\n");
    printf("Notice: Producer terminated BEFORE consumer started.\n");
    printf("Messages were preserved in the queue.\n");

    return EXIT_SUCCESS;
}
Decoupling isn't free. If producers consistently outpace consumers, the queue fills up. You need either: (1) blocking sends (backpressure), (2) message dropping (lossy), or (3) dynamic queue sizing. Choose based on whether data loss or producer blocking is acceptable.
Unlike shared memory, which requires explicit locking mechanisms, message queues provide built-in synchronization. The kernel handles all mutual exclusion, making message passing inherently thread-safe and process-safe.
Using shared memory requires careful synchronization:
// Shared memory approach - complex!
// Producer
pthread_mutex_lock(&shm->mutex);
memcpy(shm->buffer, data, size);
shm->data_ready = 1;
pthread_mutex_unlock(&shm->mutex);
pthread_cond_signal(&shm->cond);
// Consumer
pthread_mutex_lock(&shm->mutex);
while (!shm->data_ready)
pthread_cond_wait(&shm->cond, &shm->mutex);
memcpy(local_buf, shm->buffer, size);
shm->data_ready = 0;
pthread_mutex_unlock(&shm->mutex);
This is error-prone: forgotten unlocks, deadlocks, race conditions, signal-safety issues.
// Message queue approach - simple!
// Producer
mq_send(mq, data, size, priority); // Done!
// Consumer
mq_receive(mq, buffer, size, &priority); // Done!
No locks, no condition variables, no deadlock risk. The kernel guarantees that each send and receive is atomic, that blocked receivers are woken when messages arrive, and that concurrent access from multiple processes is serialized correctly.
| Aspect | Shared Memory | Message Queues |
|---|---|---|
| Mutual exclusion | Application (mutex) | Kernel (automatic) |
| Wake on data | Application (condvar) | Kernel (blocking receive) |
| Atomicity | Application (careful) | Kernel (per-message) |
| Deadlock risk | High (lock ordering) | None (lockless) |
| Multi-process safety | Complex (robust mutexes) | Automatic |
| Code complexity | High | Low |
The built-in synchronization comes with overhead. Each send/receive involves a system call and kernel-mediated wakeup. For extremely high-throughput scenarios (millions of messages/second), shared memory with careful locking may outperform message queues. Benchmark your specific use case.
Messages in a queue persist in kernel memory independently of any process. This has powerful implications for system design.
If a sender crashes or terminates normally, messages it already sent remain queued:
Time →
Sender:   [running] ──X (crash)
Queue:       M1 M2 M3 ········· M1 M2 M3 (messages preserved)
Receiver:                       [starts] → receives M1, M2, M3
This is fundamentally different from pipes, where writer termination closes the pipe (readers get EOF).
System V queues persist until explicitly deleted (msgctl(IPC_RMID)) or system reboot. This can cause resource leaks:
# Find orphaned System V queues
ipcs -q
# Remove specific queue
ipcrm -q <msqid>
# Remove all queues owned by user
ipcrm -q $(ipcs -q | awk '/0x/ {print $2}')
POSIX queues use reference counting but the name persists until mq_unlink():
# Find POSIX queues (Linux)
ls -la /dev/mqueue/
# Remove orphaned queue
rm /dev/mqueue/myqueue
Use atexit() handlers, signal handlers, or explicit cleanup code to remove queues on normal termination.
Message queues allow receivers to selectively receive messages based on type (System V) or priority (POSIX), without consuming unwanted messages. This enables sophisticated message routing patterns that are impossible with streams.
Pattern 1: Multiple Consumers, Different Types
Queue: [type1] [type2] [type1] [type3] [type2]
          │       │                │
          ▼       ▼                ▼
    Consumer A  Consumer B   Consumer C
    (type 1)    (type 2)     (type 3)
Each consumer receives only its type; others remain queued.
Pattern 2: Per-Client Response Channels
Server sends responses with type = client_pid:
Queue: [resp(1234)] [resp(5678)] [resp(1234)]
            │            │            │
            ▼            ▼            ▼
      Client 1234   Client 5678   Client 1234
      (type=1234)   (type=5678)   (type=1234)
Pattern 3: Priority-Based Processing
Receive with msgtyp = -100:
→ Gets lowest type first (highest priority)
→ Types 1, 2, 3 processed before types 50, 60, 100
POSIX message queues do NOT support selective reception by type—only by priority (highest first, always). If you need type-based selection, use System V queues or implement multiple POSIX queues. This is often the deciding factor between POSIX and System V.
Without message type selection, you'd need a separate queue per consumer, application-level dispatch code to route messages, or a scheme for re-queuing messages a consumer reads but doesn't want.
System V's type-based reception handles all these scenarios with a single queue and no application-level routing logic.
Let's systematically compare message queues with other IPC mechanisms to understand when each is appropriate.
| Feature | Message Queues | Pipes | Shared Memory | Sockets |
|---|---|---|---|---|
| Message boundaries | Preserved | No (stream) | N/A | Depends (UDP yes, TCP no) |
| Bidirectional | Yes | No (half-duplex) | Yes | Yes |
| Synchronization | Built-in | Basic (EOF) | Manual | Built-in |
| Process relationship | Any | Related (anon) or any (named) | Any | Any |
| Network capable | No | No | No | Yes |
| Persistence | Kernel (survives process exit) | No | Yes (until removed) | No |
| Type/Priority | Yes | No | No | No (application layer) |
| Performance | Medium | High | Highest | Medium-Low |
| Complexity | Low | Low | High | Medium |
Use this decision framework to choose the right IPC mechanism for your application:
Need network communication?
├── Yes → Sockets (TCP/UDP)
└── No:
Data naturally structured as discrete messages?
├── Yes:
│ Need type-based selection?
│ ├── Yes → System V Message Queues
│ └── No:
│ Need async notification?
│ ├── Yes → POSIX Message Queues
│ └── No → Either message queue type works
└── No (stream/bulk data):
Need maximum performance?
├── Yes → Shared Memory (+ semaphores)
└── No:
Related processes only?
├── Yes → Anonymous Pipes
└── No → Named Pipes (FIFOs)
| Scenario | Recommended IPC | Reason |
|---|---|---|
| Shell command pipeline | Anonymous pipe | Simplest, designed for this |
| Video frame processing | Shared memory | Lowest latency for large data |
| Task distribution to workers | Message queue | Built-in queuing and priority |
| Request-response protocol | Message queue | Clean message boundaries |
| Log shipping between processes | Named pipe or MQ | Both work; MQ if priority needed |
| Microservice communication | Sockets | Network capability required |
| Database connection | Sockets/shared memory | Depends on locality |
Real systems often combine IPC mechanisms: message queues for control/signaling plus shared memory for bulk data. A message says 'data is ready in shared memory buffer 3'—best of both worlds.
We've comprehensively analyzed what makes message queues valuable and when to choose them. The key insights: message queues preserve message boundaries, decouple producers from consumers in time, provide kernel-managed synchronization, persist messages independently of any process, and support selective reception by type (System V) or priority (POSIX).
Congratulations! You've completed the Message Queues module. You now understand System V and POSIX message queues, message typing and priority, and when to choose message queues over other IPC mechanisms. You're equipped to design robust inter-process communication systems using message-based architectures.