No process is an island. Modern software systems consist of multiple processes working together—web servers communicating with database backends, microservices exchanging messages, shell pipelines passing data between commands. Beyond a single machine, networked applications span the globe, connecting billions of devices.
The operating system provides communication services that enable this cooperation. These services span two domains: Inter-Process Communication (IPC) for processes on the same machine, and networking for communication across machines. Both share fundamental challenges: data transfer, synchronization, and reliability—but address them at different scales and with different trade-offs.
By the end of this page, you will understand the major IPC mechanisms provided by operating systems (pipes, shared memory, message queues, sockets), how network communication is layered and abstracted, the socket API that unifies local and network communication, and the key trade-offs between different communication approaches. This knowledge is essential for building distributed systems and understanding modern software architecture.
Operating systems deliberately isolate processes—each has its own address space, preventing direct memory access between processes. This isolation is essential for security and stability, but it creates a challenge: how do processes share data and coordinate actions?
Common communication scenarios:
Producer-consumer patterns: One process generates data, another consumes it (e.g., cat file | grep pattern | wc -l)
Client-server architecture: Clients request services from servers
Parallel processing: Multiple workers coordinate on a shared task
Signaling and coordination: Notifying processes of events
Communication models:
Two fundamental models underlie all IPC mechanisms:
Shared Memory Model: Processes map a common region of physical memory into their address spaces and exchange data by reading and writing it directly; the kernel is involved only in setup.
Message Passing Model: Processes exchange data through kernel-mediated send and receive operations; the kernel copies each message between address spaces.
Shared Memory:                    Message Passing:

┌─────────────┐                   ┌─────────────┐
│  Process A  │                   │  Process A  │
│  (writes)   │                   │  send(msg)  │──┐
└──────┬──────┘                   └─────────────┘  │
       │                                           │
       ▼                                           ▼
 ┌───────────┐                    ┌────────────────────┐
 │  Shared   │                    │       Kernel       │
 │  Memory   │                    │  (message queue/   │
 │  Region   │                    │   buffer/channel)  │
 └───────────┘                    └────────────────────┘
       ▲                                           │
       │                                           │
┌──────┴──────┐                   ┌─────────────┐  │
│  Process B  │                   │  Process B  │  │
│  (reads)    │                   │  recv(msg)  │◄─┘
└─────────────┘                   └─────────────┘
Shared memory excels when: large amounts of data exchanged, low latency critical, processes are trusted, you can manage synchronization correctly.
Message passing excels when: communication is structured/typed, processes are untrusted, simplicity is valued, crossing machine boundaries (networking inherently uses message passing).
Pipes are the quintessential Unix IPC mechanism—simple, elegant, and powerful. A pipe is a unidirectional byte stream connecting two processes: one writes, the other reads.
Anonymous pipes:
Created with the pipe() system call, anonymous pipes exist only in memory and are inherited by child processes:
int pipefd[2];
pipe(pipefd);
// pipefd[0] = read end
// pipefd[1] = write end
The shell uses pipes extensively for command chaining:
$ cat /var/log/syslog | grep error | tail -20
This creates two pipes:
┌──────┐     ┌──────┐     ┌──────┐     ┌──────┐     ┌──────┐
│ cat  │────▶│ pipe │────▶│ grep │────▶│ pipe │────▶│ tail │
└──────┘     └──────┘     └──────┘     └──────┘     └──────┘
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <sys/wait.h>

/**
 * Demonstrates pipe communication between parent and child processes
 */
int main() {
    int pipefd[2];
    pid_t pid;
    char buffer[256];

    /* Create pipe before fork */
    if (pipe(pipefd) == -1) {
        perror("pipe");
        return 1;
    }

    pid = fork();

    if (pid == 0) {
        /* CHILD PROCESS - will read from pipe */
        close(pipefd[1]);   /* Close unused write end */

        ssize_t bytes = read(pipefd[0], buffer, sizeof(buffer) - 1);
        if (bytes > 0) {
            buffer[bytes] = '\0';
            printf("Child received: %s\n", buffer);
        }
        close(pipefd[0]);
    } else {
        /* PARENT PROCESS - will write to pipe */
        close(pipefd[0]);   /* Close unused read end */

        const char *message = "Hello from parent!";
        write(pipefd[1], message, strlen(message));
        close(pipefd[1]);   /* Close write end - sends EOF to reader */

        wait(NULL);         /* Wait for child */
    }

    return 0;
}

/*
 * Pipe behavior:
 * - Writing to a pipe with no reader: SIGPIPE signal (broken pipe)
 * - Reading from empty pipe with writers: blocks until data
 * - Reading from empty pipe with no writers: returns 0 (EOF)
 * - Pipes have limited buffer (typically 64KB on Linux)
 * - Writing to full pipe: blocks until space available
 */

Named pipes (FIFOs):
Anonymous pipes only work between related processes (parent-child). Named pipes (FIFOs) appear in the filesystem, allowing any process to connect:
# Create named pipe
$ mkfifo /tmp/myfifo
$ ls -l /tmp/myfifo
prw-r--r-- 1 user user 0 Jan 15 10:00 /tmp/myfifo
^                      ^
p = pipe               size 0 (it's a channel, not storage)
# Terminal 1: Reader (blocks until writer connects)
$ cat /tmp/myfifo
# Terminal 2: Writer
$ echo "Hello through FIFO" > /tmp/myfifo
# Terminal 1 now shows: Hello through FIFO
FIFO use cases: decoupling producers and consumers in shell scripts, simple same-machine client-server setups, and feeding data to a long-running process such as a log collector (see the sketch after the atomicity note below).
POSIX guarantees that writes of PIPE_BUF bytes or fewer (typically 4KB) are atomic—they won't interleave with other writes. This is crucial when multiple processes write to the same pipe (e.g., log aggregation). Writes larger than PIPE_BUF may be interleaved.
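To make the atomicity guarantee concrete, here is a minimal sketch of a writer that logs fixed-size lines to a shared FIFO; the path and message format are illustrative, not from the original. Because each write is well under PIPE_BUF, any number of processes running this code can write to the same FIFO without their lines interleaving:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

#define FIFO_PATH "/tmp/log_fifo"   /* hypothetical path */

int main() {
    mkfifo(FIFO_PATH, 0666);        /* ignores EEXIST for brevity */

    /* open() blocks until a reader (e.g., `cat /tmp/log_fifo`) connects */
    int fd = open(FIFO_PATH, O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Each line is far below PIPE_BUF (at least 512 bytes per POSIX,
       4096 on Linux), so concurrent writers never interleave bytes */
    char line[128];
    int len = snprintf(line, sizeof(line), "[pid %d] event logged\n",
                       (int)getpid());
    write(fd, line, len);

    close(fd);
    return 0;
}

A reader running cat /tmp/log_fifo sees one intact line per writer, which is exactly the log-aggregation pattern mentioned above.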
Shared memory is the fastest IPC mechanism—processes map the same physical memory into their address spaces, enabling direct data access without kernel involvement.
How it works:
┌────────────────────────────────────────────────────────────────┐
│                        Physical Memory                         │
│   ┌────────────────────────────────────────────────────────┐   │
│   │                 Shared Memory Segment                  │   │
│   │                         (4 KB)                         │   │
│   └────────────────────────────────────────────────────────┘   │
│          ▲                                    ▲                │
└──────────┼────────────────────────────────────┼────────────────┘
           │                                    │
    ┌──────┴──────┐                      ┌──────┴──────┐
    │  Process A  │                      │  Process B  │
    │ ┌─────────┐ │                      │ ┌─────────┐ │
    │ │  0x7FF  │ │ ◄─────────────────── │ │  0x7AA  │ │
    │ │ Virtual │ │    Same physical     │ │ Virtual │ │
    │ │ Address │ │       memory!        │ │ Address │ │
    │ └─────────┘ │                      │ └─────────┘ │
    └─────────────┘                      └─────────────┘
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define SHM_NAME "/my_shared_mem"
#define SHM_SIZE 4096

/**
 * POSIX shared memory example
 * Run writer first, then reader
 */

/* Writer process */
int writer_main() {
    /* Create and open shared memory object */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);
    if (fd == -1) {
        perror("shm_open");
        return 1;
    }

    /* Set size */
    ftruncate(fd, SHM_SIZE);

    /* Map into address space */
    char *ptr = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Write data - it's just memory now */
    const char *message = "Hello from shared memory!";
    strcpy(ptr, message);
    printf("Writer: wrote '%s'\n", message);

    /* Unmap and close (segment persists until shm_unlink) */
    munmap(ptr, SHM_SIZE);
    close(fd);
    return 0;
}

/* Reader process */
int reader_main() {
    /* Open existing shared memory */
    int fd = shm_open(SHM_NAME, O_RDONLY, 0666);
    if (fd == -1) {
        perror("shm_open");
        return 1;
    }

    /* Map read-only */
    char *ptr = mmap(NULL, SHM_SIZE, PROT_READ, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Read data */
    printf("Reader: read '%s'\n", ptr);

    /* Cleanup */
    munmap(ptr, SHM_SIZE);
    close(fd);

    /* Remove the name; memory is freed once all mappings are gone */
    shm_unlink(SHM_NAME);
    return 0;
}

/*
 * Key points:
 * - shm_open() creates named shared memory in /dev/shm/
 * - mmap() maps it into process address space
 * - Changes visible to all processes mapping the same segment
 * - MUST use synchronization (semaphores, mutexes) for safe access
 * - Link with -lrt on Linux
 */

Synchronization is mandatory:
Shared memory provides no built-in synchronization. Without explicit coordination, processes can observe partially written data or corrupt shared structures. Common synchronization mechanisms:
/* Shared memory structure with synchronization */
typedef struct {
pthread_mutex_t mutex; /* Must be initialized with PTHREAD_PROCESS_SHARED */
int counter;
char data[256];
} SharedData;
/* Access pattern */
pthread_mutex_lock(&shared->mutex);
shared->counter++;
snprintf(shared->data, sizeof(shared->data), "Count: %d", shared->counter);
pthread_mutex_unlock(&shared->mutex);
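The access pattern above assumes the mutex was already initialized for cross-process use. Here is a minimal sketch of that one-time setup, performed by whichever process creates the segment (SharedData is the struct defined above):

#include <pthread.h>

/* Run once, after mmap(), by the process that creates the segment */
void init_shared_mutex(SharedData *shared) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);

    /* Mark the mutex usable across process boundaries, not just threads */
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);

    pthread_mutex_init(&shared->mutex, &attr);
    pthread_mutexattr_destroy(&attr);
}

Without PTHREAD_PROCESS_SHARED, the mutex is only valid within one process, and locking it from another process is undefined behavior.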
Shared memory is powerful but dangerous: • Race conditions — Without locks, data corruption is likely • Deadlocks — Improper lock ordering causes hangs • Memory corruption — One buggy process can crash others • Security — Processes sharing memory are trusting each other fully
Prefer message passing unless shared memory's performance is essential.
Message queues provide structured, kernel-mediated message passing. Unlike pipes (byte streams), message queues preserve message boundaries—each send is received as a discrete unit.
POSIX message queue characteristics: message boundaries are preserved (each send is received as one unit), every message carries a priority and the highest-priority message is delivered first, receives block by default (mq_timedreceive offers a timeout), and queues persist in the kernel until mq_unlink(), visible under /dev/mqueue/ on Linux.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mqueue.h>
#include <fcntl.h>
#include <sys/stat.h>

#define QUEUE_NAME "/my_queue"
#define MAX_MSG_SIZE 256
#define MAX_MSGS 10

/**
 * POSIX message queue example
 * Compile with: gcc -o mq_demo mq_demo.c -lrt
 */

/* Sender process */
int sender_main() {
    /* Create and open queue */
    struct mq_attr attr = {
        .mq_flags = 0,
        .mq_maxmsg = MAX_MSGS,
        .mq_msgsize = MAX_MSG_SIZE
    };

    mqd_t mq = mq_open(QUEUE_NAME, O_CREAT | O_WRONLY, 0666, &attr);
    if (mq == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    /* Send messages with different priorities */
    const char *msg1 = "Low priority message";
    const char *msg2 = "URGENT: High priority message";
    const char *msg3 = "Normal priority message";

    mq_send(mq, msg1, strlen(msg1) + 1, 1);   /* Priority 1 (low) */
    mq_send(mq, msg2, strlen(msg2) + 1, 10);  /* Priority 10 (high) */
    mq_send(mq, msg3, strlen(msg3) + 1, 5);   /* Priority 5 (medium) */

    printf("Sender: sent 3 messages\n");
    mq_close(mq);
    return 0;
}

/* Receiver process */
int receiver_main() {
    mqd_t mq = mq_open(QUEUE_NAME, O_RDONLY);
    if (mq == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    char buffer[MAX_MSG_SIZE];
    unsigned int priority;

    /* Receive messages - delivered in priority order! */
    while (1) {
        ssize_t bytes = mq_receive(mq, buffer, MAX_MSG_SIZE, &priority);
        if (bytes >= 0) {
            printf("Received (priority %u): %s\n", priority, buffer);
        } else {
            break;  /* Queue closed or error */
        }
    }

    mq_close(mq);
    mq_unlink(QUEUE_NAME);
    return 0;
}

/*
 * Output order (highest priority first):
 * Received (priority 10): URGENT: High priority message
 * Received (priority 5): Normal priority message
 * Received (priority 1): Low priority message
 *
 * Key features:
 * - Messages delivered with boundaries intact
 * - Priority ordering within queue
 * - Blocking receive (or use mq_timedreceive)
 * - Queue persists in /dev/mqueue/
 * - mq_notify() for async notification
 */

| Mechanism | Data Type | Sync | Performance | Related Processes? |
|---|---|---|---|---|
| Anonymous Pipe | Byte stream | Built-in | High | Yes (parent-child) |
| Named Pipe (FIFO) | Byte stream | Built-in | High | No (filesystem name) |
| Shared Memory | Arbitrary | Manual | Highest | No (named segment) |
| Message Queue | Messages | Built-in | Medium | No (named queue) |
| Unix Socket | Byte stream/datagram | Built-in | High | No (filesystem/abstract) |
| Signals | Signal number only | Async | Low overhead | No (by PID) |
Unix has two IPC API families: • System V IPC — Older: shmget/shmat, msgget/msgsnd, semget/semop. Uses numeric keys. • POSIX IPC — Newer: shm_open/mmap, mq_open/mq_send, sem_open. Uses string names.
POSIX IPC is generally preferred for new code—cleaner API, filesystem-like naming, better integration with file I/O patterns.
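For contrast, here is a minimal sketch of the System V equivalent of the earlier shm_open() example; the key path and project id are arbitrary illustrations:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main() {
    /* Numeric key derived from an existing path plus a project id */
    key_t key = ftok("/tmp", 'A');

    /* Create (or open) a 4 KB segment identified by that key */
    int shmid = shmget(key, 4096, IPC_CREAT | 0666);
    if (shmid == -1) { perror("shmget"); return 1; }

    /* Attach the segment to this process's address space */
    char *ptr = shmat(shmid, NULL, 0);
    if (ptr == (void *)-1) { perror("shmat"); return 1; }

    strcpy(ptr, "Hello from System V shared memory!");
    printf("Wrote: %s\n", ptr);

    shmdt(ptr);                      /* Detach */
    shmctl(shmid, IPC_RMID, NULL);   /* Mark segment for removal */
    return 0;
}

Note the numeric-key plumbing that ftok() requires; the string names and file-descriptor semantics of POSIX IPC are one reason it is preferred for new code.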
Sockets are the most versatile communication mechanism—they provide a unified API for both local IPC and network communication. The same code that communicates between processes on one machine can communicate across the internet with minimal changes.
Socket types:
Stream sockets (SOCK_STREAM): Reliable, ordered, connection-oriented byte streams. TCP over IP or local Unix sockets.
Datagram sockets (SOCK_DGRAM): Unreliable, unordered discrete messages. UDP over IP or local Unix sockets.
Raw sockets (SOCK_RAW): Direct access to lower-level protocols. For implementing protocols or network tools.
Socket domains (address families):
AF_UNIX (AF_LOCAL): Local (same machine) communication. Addressed by filesystem path or abstract name.
AF_INET: IPv4 network communication. Addressed by IP:port.
AF_INET6: IPv6 network communication. Addressed by IPv6:port.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define PORT 8080

/**
 * TCP Server - accepts connections and echoes data
 */
int server_main() {
    /* Create socket */
    int server_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (server_fd < 0) {
        perror("socket");
        return 1;
    }

    /* Allow address reuse */
    int opt = 1;
    setsockopt(server_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));

    /* Bind to port */
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_addr.s_addr = INADDR_ANY,  /* Accept on all interfaces */
        .sin_port = htons(PORT)
    };

    if (bind(server_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* Listen for connections */
    listen(server_fd, 10);  /* Backlog of 10 pending connections */
    printf("Server listening on port %d\n", PORT);

    /* Accept and handle connections */
    while (1) {
        struct sockaddr_in client_addr;
        socklen_t client_len = sizeof(client_addr);

        int client_fd = accept(server_fd, (struct sockaddr *)&client_addr,
                               &client_len);
        if (client_fd < 0) {
            perror("accept");
            continue;
        }

        printf("Client connected: %s:%d\n",
               inet_ntoa(client_addr.sin_addr),
               ntohs(client_addr.sin_port));

        /* Echo received data */
        char buffer[256];
        ssize_t bytes;
        while ((bytes = read(client_fd, buffer, sizeof(buffer))) > 0) {
            write(client_fd, buffer, bytes);  /* Echo back */
        }

        close(client_fd);
        printf("Client disconnected\n");
    }

    close(server_fd);
    return 0;
}

/**
 * TCP Client - connects and sends data
 */
int client_main() {
    /* Create socket */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Connect to server */
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_addr.s_addr = inet_addr("127.0.0.1"),
        .sin_port = htons(PORT)
    };

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    printf("Connected to server\n");

    /* Send and receive */
    const char *message = "Hello, Server!";
    write(fd, message, strlen(message));

    char buffer[256];
    ssize_t bytes = read(fd, buffer, sizeof(buffer) - 1);
    if (bytes >= 0) {
        buffer[bytes] = '\0';
        printf("Server response: %s\n", buffer);
    }

    close(fd);
    return 0;
}

Unix domain sockets:
For local IPC, Unix domain sockets offer similar performance to pipes but with the full socket API (including datagram mode):
/* Server: create Unix domain socket */
int fd = socket(AF_UNIX, SOCK_STREAM, 0);
struct sockaddr_un addr = {
.sun_family = AF_UNIX,
.sun_path = "/tmp/my_socket" /* Or abstract: "\0my_socket" */
};
bind(fd, (struct sockaddr *)&addr, sizeof(addr));
listen(fd, 10);
/* ... accept/read/write as TCP ... */
/* Client: connect to Unix domain socket */
int fd = socket(AF_UNIX, SOCK_STREAM, 0);
connect(fd, (struct sockaddr *)&addr, sizeof(addr));
/* ... read/write as TCP ... */
Unix sockets vs. pipes: Unix sockets are bidirectional (pipes are one-way), support both stream and datagram modes, and can pass open file descriptors and process credentials between processes as ancillary data. Pipes remain the simpler choice when a one-way byte stream between related processes is all you need. A sketch of descriptor passing follows.
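Here is a minimal sketch of the sending side of descriptor passing, using sendmsg() with SCM_RIGHTS ancillary data; the helper name send_fd is our own:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send an open file descriptor across a connected Unix domain socket.
   The kernel installs a duplicate of it in the receiving process. */
int send_fd(int sock, int fd_to_send) {
    char dummy = 'x';                      /* Must send at least 1 byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {                                /* Properly aligned cmsg buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;

    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof(u.buf)
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;         /* "This message carries fds" */
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

    return sendmsg(sock, &msg, 0) == -1 ? -1 : 0;
}

The receiver makes the matching recvmsg() call and finds a new, valid descriptor in the ancillary data; privilege-separated daemons use this to hand accepted connections to worker processes.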
When you use TCP sockets, the OS network stack handles: • Segmentation — Breaking data into packets • Sequencing — Numbering and reordering packets • Reliability — Retransmitting lost packets • Flow control — Preventing sender from overwhelming receiver • Congestion control — Adapting to network conditions
You write to a socket; the kernel handles the complexity of reliable delivery.
Operating systems implement network communication through a layered protocol stack. Understanding this architecture helps diagnose problems and optimize performance.
The TCP/IP model:
Application Layer
┌─────────────────────────────────────────────────────────┐
│          HTTP, HTTPS, FTP, SSH, DNS, SMTP, etc.         │
│              Application-specific protocols             │
└───────────────────────────┬─────────────────────────────┘
                            │ Socket API
Transport Layer             │
┌───────────────────────────┴─────────────────────────────┐
│  TCP (reliable streams)  │  UDP (unreliable datagrams)  │
│     Port numbers, flow control, congestion control      │
└───────────────────────────┬─────────────────────────────┘
                            │
Network Layer               │
┌───────────────────────────┴─────────────────────────────┐
│                  IP (Internet Protocol)                 │
│            Addressing, routing, fragmentation           │
└───────────────────────────┬─────────────────────────────┘
                            │
Link Layer                  │
┌───────────────────────────┴─────────────────────────────┐
│                   Ethernet, Wi-Fi, etc.                 │
│           MAC addresses, framing, local delivery        │
└───────────────────────────┬─────────────────────────────┘
                            │
Physical Layer              │
┌───────────────────────────┴─────────────────────────────┐
│     Cables, radio signals, electrical specifications    │
└─────────────────────────────────────────────────────────┘
Data encapsulation:
As data descends the stack, each layer adds its header:
Application:                                  [ HTTP Request ]
                                                     ↓
TCP:                              [TCP Header][ HTTP Request ]
                                                     ↓
IP:                    [IP Header][TCP Header][ HTTP Request ]
                                                     ↓
Ethernet:  [Eth Header][IP Header][TCP Header][ HTTP Request ][Eth Trailer]
OS network subsystem components:
Network device drivers: Interface with physical/virtual network hardware
Protocol implementations: TCP, UDP, IP, ICMP, ARP, etc.
Socket layer: Translates socket API calls to protocol operations
Routing subsystem: Determines how to reach destination addresses
Filtering/Firewall: netfilter (Linux), Windows Filtering Platform
QoS/Traffic control: Prioritization, rate limiting, traffic shaping
# View network interfaces
$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel
# View routing table
$ ip route show
default via 192.168.1.1 dev eth0 proto dhcp metric 100
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.50
# View active connections
$ ss -tuln
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
tcp LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
| Characteristic | TCP | UDP |
|---|---|---|
| Connection | Connection-oriented | Connectionless |
| Reliability | Guaranteed delivery, ordering | Best-effort, may lose/reorder |
| Flow control | Yes (sliding window) | No |
| Use cases | HTTP, SSH, file transfer | DNS, gaming, streaming, VoIP |
| Overhead | Higher (headers, handshake) | Lower (minimal overhead) |
| Message boundaries | No (stream) | Yes (datagrams) |
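The examples earlier on this page use TCP; for comparison, here is a minimal UDP sketch with sendto()/recvfrom() (the port number is illustrative). Note the absence of listen(), accept(), and connect():

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define PORT 9090   /* illustrative port */

/* UDP receiver: no connection setup - just bind and read datagrams */
int receiver_main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_addr.s_addr = INADDR_ANY,
        .sin_port = htons(PORT)
    };
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    char buf[512];
    struct sockaddr_in src;
    socklen_t src_len = sizeof(src);

    /* Each recvfrom() returns exactly one datagram - boundaries kept */
    ssize_t n = recvfrom(fd, buf, sizeof(buf) - 1, 0,
                         (struct sockaddr *)&src, &src_len);
    if (n >= 0) {
        buf[n] = '\0';
        printf("Got '%s' from %s\n", buf, inet_ntoa(src.sin_addr));
    }
    close(fd);
    return 0;
}

/* UDP sender: each sendto() is an independent, best-effort datagram */
int sender_main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = {
        .sin_family = AF_INET,
        .sin_addr.s_addr = inet_addr("127.0.0.1"),
        .sin_port = htons(PORT)
    };
    const char *msg = "Hello, UDP!";
    sendto(fd, msg, strlen(msg), 0, (struct sockaddr *)&dst, sizeof(dst));
    close(fd);
    return 0;
}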
High-performance applications minimize data copies between user space and kernel: • sendfile() — Send file directly to socket without user-space copy • splice() — Move data between file descriptors in kernel • io_uring — Asynchronous I/O with minimal system calls • DPDK — Bypass kernel entirely for maximum throughput
For web servers sending files, sendfile() can nearly double throughput compared to read()/write() loops.
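As a sketch of what this looks like in practice, the following Linux-specific helper (the function name is our own) sends a whole file to a connected socket via sendfile():

#include <fcntl.h>
#include <unistd.h>
#include <sys/sendfile.h>   /* Linux-specific */
#include <sys/stat.h>

/* Send an entire file to a connected socket without copying the data
   through user space. Returns 0 on success, -1 on error. */
int send_file_zero_copy(int sock_fd, const char *path) {
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0) return -1;

    struct stat st;
    fstat(file_fd, &st);

    off_t offset = 0;
    while (offset < st.st_size) {
        /* Kernel moves bytes file -> socket directly; offset advances */
        ssize_t sent = sendfile(sock_fd, file_fd, &offset,
                                st.st_size - offset);
        if (sent <= 0) { close(file_fd); return -1; }
    }
    close(file_fd);
    return 0;
}

A read()/write() loop would copy every byte into a user-space buffer and back out again; sendfile() keeps the data in the kernel the whole way.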
We've explored the communication services that enable processes to cooperate locally and across networks. Let's consolidate the key insights:
Processes are isolated by default; IPC mechanisms open controlled channels through that isolation
Shared memory is the fastest mechanism but demands manual synchronization; message passing trades some speed for safety and structure
Pipes, FIFOs, message queues, and sockets each suit different relationships between processes
Sockets unify local and network communication behind a single API
The layered network stack lets applications delegate the mechanics of reliable delivery to the kernel
Module complete:
With communication services covered, we've explored all the major OS service categories: user interfaces, program execution, I/O operations, file system manipulation, and communication. These services form the foundation upon which all applications are built—understanding them deeply empowers you to build more effective software and debug complex system issues.
Congratulations! You've completed the Operating System Services module. You now understand the essential services that OSes provide to users and applications—from the interfaces we use to interact, through program execution and I/O, to file systems and inter-process communication. This knowledge forms the foundation for deeper study of operating system internals.