Your laptop is currently running perhaps fifty applications—each one a separate process with its own memory space, execution context, and communication needs. Your web browser maintains multiple connections to different servers. Your email client checks for new messages. A video streaming app buffers media. A cloud sync service uploads files. Background system services handle updates, security, and diagnostics.
All these processes share a single network connection, a single IP address. Yet when a packet arrives for Netflix, it reaches the Netflix application—not your email client, not your web browser. How does this work?
The answer lies in process-to-process delivery—the transport layer's defining capability. While the network layer gets data to the right machine, the transport layer gets data to the right process within that machine. This seemingly simple extension transforms how we use networked computers.
By completing this page, you will deeply understand how port numbers enable process identification, how the combination of IP addresses and ports creates globally unique socket addresses, how multiplexing and demultiplexing work in practice, and why this mechanism is fundamental to virtually all modern network applications.
To appreciate the elegance of process-to-process delivery, we must first understand the problem it solves. Consider the challenge facing network operating systems:
The Multi-Process Reality:
Modern operating systems are fundamentally multi-process environments. At any moment, dozens or hundreds of processes run concurrently: browsers, mail clients, media players, cloud sync agents, and background system services.
Many of these processes communicate over the network—simultaneously, continuously. They send data to different destinations and receive data from different sources.
What IP Provides (And What It Doesn't):
The Internet Protocol assigns addresses to network interfaces, not processes. An IP address like 192.168.1.100 identifies a specific machine on the network. When a packet arrives at this address, the operating system's IP stack receives it. But the IP protocol provides no mechanism to specify which process on that machine should receive the data.
| Layer | Address Type | Identifies | Example | Uniqueness Scope |
|---|---|---|---|---|
| Data Link | MAC Address | Network interface | 00:1A:2B:3C:4D:5E | Local network (LAN) |
| Network | IP Address | Host/interface on internet | 192.168.1.100 | Internet (global) |
| Transport | Port Number | Process/service on host | 443 | Within a single host |
| Transport | Socket Address | Specific endpoint | 192.168.1.100:443 | Internet (global) |
The Conceptual Gap:
Without transport layer addressing, every network-aware process would receive every incoming packet. The operating system would have no principled way to route data. Applications would need to inspect every arriving packet themselves and rely on ad hoc conventions to decide which data belonged to whom.
This approach would waste CPU cycles on irrelevant traffic, break process isolation, and expose every application's data to every other.
The transport layer's process addressing mechanism solves all these problems through a deceptively simple concept: port numbers.
Early networked systems often ran a single networked program per machine, so host-level addressing sufficed. Process-to-process delivery became critical as time-sharing systems emerged, allowing multiple users and programs to share a single computer. The transport layer arose to manage this multiplicity.
Port numbers are 16-bit unsigned integers that identify specific processes or services on a host. Combined with an IP address, a port number creates a complete endpoint address for network communication.
The Mathematics of Port Space:
With 16 bits, port numbers range from 0 to 65,535, providing 65,536 possible values: large enough to support thousands of simultaneous services and connections per host, yet compact enough that the source and destination ports together occupy only four bytes of every segment header.
Port Number Ranges:
The Internet Assigned Numbers Authority (IANA) divides the port space into three ranges with different allocation policies:
| Range | Name | Ports | Usage | Assignment |
|---|---|---|---|---|
| Well-Known | System Ports | 0-1023 | Standard services | IANA assigned, require root/admin privilege |
| Registered | User Ports | 1024-49151 | Registered applications | IANA registered, no privilege required |
| Dynamic | Ephemeral Ports | 49152-65535 | Client-side connections | OS assigned, temporary use |
Well-Known Ports (0-1023):
These ports are reserved for standardized services. When you connect to a web server, you assume port 80 (HTTP) or 443 (HTTPS). When you send email, SMTP servers listen on port 25. This standardization is essential—clients need to know where to find services.
Common well-known port assignments:
| Port | Protocol | Service |
|---|---|---|
| 20/21 | TCP | FTP (data/control) |
| 22 | TCP | SSH |
| 23 | TCP | Telnet |
| 25 | TCP | SMTP (email sending) |
| 53 | TCP/UDP | DNS |
| 67/68 | UDP | DHCP (server/client) |
| 80 | TCP | HTTP |
| 110 | TCP | POP3 (email retrieval) |
| 143 | TCP | IMAP (email retrieval) |
| 443 | TCP | HTTPS |
| 3306 | TCP | MySQL (registered, not well-known) |
On Unix-like systems, binding to ports 0-1023 requires superuser (root) privileges—a security measure preventing unprivileged processes from impersonating critical services.
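Well-known assignments like those in the table above ship with the operating system in a services database (`/etc/services` on Unix-like systems), which the standard socket API can query by name. A minimal sketch, assuming a standard services file is present:

```python
import socket

# Look up well-known ports from the OS services database
# (reads /etc/services on Unix-like systems)
https_port = socket.getservbyname("https", "tcp")
ssh_port = socket.getservbyname("ssh", "tcp")
print(https_port, ssh_port)  # 443 22
```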
Port 0 has special meaning—it requests the OS to assign any available ephemeral port. When a client application doesn't care which local port it uses (most don't), it binds to port 0 and the kernel selects an unused dynamic port. This is how thousands of browser tabs can each have unique source ports automatically.
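This "you pick" behavior is visible from any socket API. A Python sketch:

```python
import socket

# Binding to port 0 asks the kernel for any free ephemeral port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))       # port 0: let the OS choose
port = sock.getsockname()[1]      # the port the OS actually assigned
print(port)                       # nonzero, from the dynamic range
sock.close()
```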
Ephemeral Ports (49152-65535):
When a client application initiates a connection, it needs a source port to receive responses. The client typically doesn't care which port it uses—it just needs any available one. The OS assigns a temporary (ephemeral) port from the dynamic range.
Different operating systems historically used different ephemeral ranges. Linux defaults to 32768-60999 (tunable via /proc/sys/net/ipv4/ip_local_port_range), older Windows versions used 1025-5000, and IANA recommends 49152-65535, which modern Windows follows.
Understanding ephemeral ports matters for high-connection-count servers—you can exhaust the ephemeral port range with too many concurrent connections to the same destination.
A port number alone identifies a process locally—on a single host. But for Internet-wide communication, we need globally unique identification. This is achieved through socket addresses.
Definition:
A socket address (sometimes called a transport address or socket endpoint) is the combination of an IP address and a port number:
Socket Address = IP Address + Port Number
Example: 192.168.1.100:443
IPv6 Example: [2001:db8::1]:443
This combination is globally unique—no two endpoints anywhere on the Internet share the same socket address (assuming no NAT complications).
The Five-Tuple:
For TCP connections, uniqueness is determined by a five-tuple: the protocol (TCP), source IP address, source port, destination IP address, and destination port.
This five-tuple uniquely identifies a connection. Multiple connections can share the same destination IP:port as long as different source IP:port combinations are used.
| Connection | Source IP:Port | Destination IP:Port | Status |
|---|---|---|---|
| Browser Tab 1 | 192.168.1.100:51234 | 93.184.216.34:443 | Unique ✅ |
| Browser Tab 2 | 192.168.1.100:51235 | 93.184.216.34:443 | Unique ✅ |
| Browser Tab 3 | 192.168.1.100:51236 | 93.184.216.34:443 | Unique ✅ |
| Different Device | 192.168.1.101:51234 | 93.184.216.34:443 | Unique ✅ |
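The pattern in the table above can be reproduced on a single machine: two TCP connections to the same destination socket address stay distinct purely because of their source ports. A sketch using Python's standard socket module, with loopback addresses standing in for the public IPs:

```python
import socket

# A local listener stands in for a server at one fixed socket address
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()
dest = server.getsockname()

# Two client connections to the SAME destination socket address
c1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c1.connect(dest)
c2.connect(dest)

# The OS assigned each connection its own ephemeral source port,
# so the two five-tuples differ even though the destination matches
p1, p2 = c1.getsockname()[1], c2.getsockname()[1]
print(p1 != p2)  # True
```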
Why This Matters for Servers:
A web server running on port 443 can handle thousands of simultaneous connections—all to the same local socket address. How? Each connection has a different client socket address. The server's OS maintains the connection table indexed by the full five-tuple.
When a packet arrives at the server:
1. The OS extracts the source and destination IP:port pairs from the IP and TCP headers.
2. It looks up the full four-tuple in the connection table.
3. It delivers the payload to the matching connection's socket buffer.
A SYN that matches no existing connection is handed to the listening socket, which creates a new entry.
Socket Binding:
In socket programming, servers bind to a specific local socket address:
// Server binds to listen on port 443
bind(socket, {address: "0.0.0.0", port: 443})
listen(socket)
// Client binds to any available ephemeral port
bind(socket, {address: "0.0.0.0", port: 0}) // Often implicit
connect(socket, {address: "93.184.216.34", port: 443})
The address 0.0.0.0 means "all interfaces"—the server accepts connections arriving on any network interface.
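The same pattern in runnable form, using Python's socket API. Port 0 stands in for 443 here so the sketch runs without root privileges:

```python
import socket

# Server side: bind to all interfaces, then listen.
# A real HTTPS server would bind port 443 (which requires root);
# port 0 lets the OS pick a free port so this sketch runs anywhere.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # ease restarts
srv.bind(("0.0.0.0", 0))
srv.listen()
server_port = srv.getsockname()[1]

# Client side: no explicit bind -- connect() makes the OS assign
# an ephemeral source port automatically
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", server_port))
client_ip, client_port = cli.getsockname()
print(client_port)  # the OS-chosen ephemeral source port
```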
A socket address cannot be immediately reused after a connection closes—the OS keeps it in TIME_WAIT state for safety. This can cause 'Address already in use' errors when rapidly restarting servers. The SO_REUSEADDR socket option allows rebinding to TIME_WAIT addresses in most cases.
The mechanism that enables process-to-process delivery has two complementary operations: multiplexing at the sender and demultiplexing at the receiver. Together, these operations allow multiple applications to share a single network interface.
Multiplexing (At Sender):
Multiplexing is the process of gathering data from multiple application processes, encapsulating each chunk with transport headers (including port numbers), and passing the resulting segments to the network layer.
Think of it as multiple postal customers at a single post office—each customer's letters are tagged with sender/recipient addresses and merged into a single outgoing mail stream.
Demultiplexing (At Receiver):
Demultiplexing is the reverse—examining incoming segments' destination port numbers and directing each segment's payload to the correct receiving process.
Think of it as the mail room in a large office building—incoming mail is sorted by recipient and delivered to individual offices.
How Demultiplexing Works:
When a segment arrives at the transport layer, demultiplexing follows this process:
1. The OS reads the destination port from the segment header (and, for TCP, the source IP and port as well).
2. It looks up the matching socket in its socket table.
3. It appends the segment's payload to that socket's receive buffer.
4. It wakes any process blocked waiting to read from that socket.
If no socket matches, the segment is rejected: TCP replies with a RST, UDP triggers an ICMP Port Unreachable.
UDP vs TCP Demultiplexing:
The demultiplexing process differs slightly between the two protocols:
UDP Demultiplexing: the OS matches only the destination IP and port. Datagrams from many different senders all arrive on the same socket; the application tells them apart by the source address reported with each datagram.
TCP Demultiplexing: the OS matches the full four-tuple (source IP, source port, destination IP, destination port). Each established connection maps to its own socket.
This difference explains why UDP servers typically have one socket serving all clients, while TCP servers create a new socket for each connected client.
| Protocol | Demux Fields | Socket Per | Implication |
|---|---|---|---|
| UDP | Destination port only | Service (all clients share) | Simple, fast, stateless |
| TCP | Four-tuple (src IP, src port, dst IP, dst port) | Connection (each client unique) | Complex, slower, stateful |
In Unix systems, sockets are file descriptors. When an application calls read() on a socket, the OS's transport layer returns data from that socket's receive buffer. The demultiplexing has already happened—data is in the correct buffer based on port matching. Applications never see 'wrong' data intended for other processes.
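The one-socket-serves-all behavior of UDP is easy to observe: two senders target the same port, and a single receiving socket gets both datagrams, each tagged with its sender's socket address. A sketch:

```python
import socket

# One UDP socket stands in for a whole service: demultiplexing
# uses the destination port only, so every client shares it
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(2.0)
addr = server.getsockname()

# Two distinct clients (each gets its own ephemeral source port
# on first sendto) target the same server port
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.sendto(b"from-a", addr)
b.sendto(b"from-b", addr)

# Both datagrams arrive on the single server socket; recvfrom()
# reports each sender's (IP, port) so replies can be routed back
m1, src1 = server.recvfrom(1024)
m2, src2 = server.recvfrom(1024)
print(sorted([m1, m2]))  # [b'from-a', b'from-b']
```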
Process-to-process delivery enables the dominant pattern of Internet communication: the client-server model. Understanding how ports work in this model clarifies their practical significance.
Server Behavior:
Servers provide services on well-known ports. A web server listens on port 80 (HTTP) or 443 (HTTPS). The server binds the well-known port, listens for incoming connections, accepts each one, and serves the client's requests on the resulting connection.
The server's port is fixed and advertised—clients know where to connect.
Client Behavior:
Clients initiate communication. They need to know the server's IP address and well-known port, obtain an ephemeral source port from the OS, and initiate the connection (or send the first datagram).
A Complete Transaction Example:
Let's trace a complete HTTPS request from your browser:
1. The browser creates a socket; the OS assigns an ephemeral port (say 51234).
2. The OS sends a SYN from 192.168.1.100:51234 to the server's port 443.
3. The TCP handshake and TLS negotiation complete.
4. The browser sends the encrypted HTTP request.
5. The server sends its response back to 192.168.1.100:51234.
6. Your OS demultiplexes the response to the exact socket, and thus the exact tab, that initiated the request.
The server never knows or cares about the client's ephemeral port—it just uses whatever the client provides. The client's port exists solely to identify which process receives the response.
Concurrent Connections:
Modern browsers open multiple concurrent connections to the same server—often 6-8 per origin. Each connection has a different client ephemeral port, making them distinguishable despite sharing the same server port.
With only ~16,000 ephemeral ports (typical OS range) and a 2-minute TIME_WAIT, a client can exhaust ports when making rapid connections to the same server. This limits sustained connection rates to roughly 133 new connections per second per destination. High-performance systems may expand the ephemeral range or reduce TIME_WAIT duration.
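The arithmetic behind that limit is worth making explicit. The figures below assume the IANA dynamic range and a 120-second TIME_WAIT:

```python
# Ephemeral port exhaustion, back of the envelope
ephemeral_ports = 65535 - 49152 + 1      # IANA dynamic range: 16,384 ports
time_wait = 120                          # seconds a closed connection lingers
max_rate = ephemeral_ports / time_wait   # sustained new connections/sec
print(ephemeral_ports, round(max_rate))  # 16384 137
```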
The relationship between processes and ports is more nuanced than a simple one-to-one mapping. Understanding these relationships clarifies common architectures and potential pitfalls.
One Process, Multiple Ports:
A single process can bind to multiple ports simultaneously. Examples: a web server listening on both 80 (HTTP) and 443 (HTTPS), or a DNS server listening on port 53 over both TCP and UDP.
The process creates multiple sockets, each bound to a different port. The OS demultiplexes to the correct socket; the process reads from each as needed.
One Port, Multiple Processes (Special Cases):
Normally, only one process can bind to a port. But exceptions exist: a listening socket inherited across fork() is shared by parent and children, and the SO_REUSEPORT option lets multiple independent processes each bind their own socket to the same port:
Process 1: bind(socket1, {:8080}, SO_REUSEPORT) ✅
Process 2: bind(socket2, {:8080}, SO_REUSEPORT) ✅
Kernel distributes incoming connections between them
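The pseudocode above can be exercised directly on systems that support SO_REUSEPORT (Linux 3.9+, modern BSDs). A minimal sketch:

```python
import socket

# First worker binds an arbitrary free port with SO_REUSEPORT set
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
s1.bind(("127.0.0.1", 0))
port = s1.getsockname()[1]
s1.listen()

# Second worker binds the SAME port: allowed because both set the option
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
s2.bind(("127.0.0.1", port))
s2.listen()

# The kernel will now load-balance incoming connections between them
print(s1.getsockname()[1] == s2.getsockname()[1])  # True
```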
| Architecture | Process:Port Ratio | Use Case | Example |
|---|---|---|---|
| Single-threaded server | 1:1 | Simple services | Basic Redis |
| Multi-port server | 1:N | Multi-protocol services | nginx (HTTP + HTTPS) |
| Pre-fork model | N:1 (shared socket) | Traditional parallelism | Apache mpm_prefork |
| SO_REUSEPORT model | N:1 (separate sockets) | Modern parallelism | nginx with workers |
| Connection-per-process | N:N (each connection) | Process isolation | PostgreSQL per-connection |
Port Inheritance Across Fork:
When a process forks, the child inherits all open file descriptors—including sockets. This means parent and child can both accept() on the same listening socket, and the kernel hands each incoming connection to exactly one of them.
This is the basis of the pre-fork model used by Apache and early nginx.
Port Conflicts:
If a process tries to bind a port that's already bound, the bind() call fails with EADDRINUSE ("Address already in use").
This is why multiple web servers can't typically run on the same port without explicit sharing mechanisms.
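The failure mode is easy to demonstrate with two sockets in one process:

```python
import errno
import socket

# Occupy an arbitrary free port with a listening socket
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
port = first.getsockname()[1]
first.listen()

# A second bind to the same port fails with EADDRINUSE
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    conflict = None
except OSError as e:
    conflict = e.errno
print(conflict == errno.EADDRINUSE)  # True
```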
Applications should handle bind failures gracefully. Common causes: another service using the port, previous instance in TIME_WAIT, insufficient privileges for low ports, or port blocked by firewall. Good applications provide clear error messages indicating which port couldn't be bound and why.
Port numbers have profound security implications. Because they identify services, they're fundamental to network security mechanisms.
Firewall Port Filtering:
Most firewalls implement port-based access control: allow inbound TCP 443 to the web tier, allow inbound TCP 22 only from the management network, deny everything else by default.
These rules don't examine packet contents—they simply allow or deny based on port numbers. This is fast and effective for controlling service access.
Port Scanning:
Attackers use port scanning to discover services: probing a range of ports (SYN scans, connect scans, UDP probes) and classifying each as open, closed, or filtered based on the response.
Tools like Nmap automate this process. Defensive measures include minimizing the set of open ports, dropping (rather than rejecting) unsolicited packets, rate-limiting probes, and alerting on scan patterns via intrusion detection systems.
| State | TCP Behavior | UDP Behavior | Meaning |
|---|---|---|---|
| Open | SYN-ACK received | Response received or no ICMP error | Service accepting connections |
| Closed | RST received | ICMP Port Unreachable | No service, but host is up |
| Filtered | No response (timeout) | No response (timeout) | Firewall blocking/dropping packets |
| Open\|Filtered | N/A (TCP-specific) | No response (could be either) | UDP ambiguity |
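The "Closed" row of the table can be reproduced locally: connecting to a port with no listener draws a RST from the kernel, which Python surfaces as ConnectionRefusedError. A sketch:

```python
import socket

# Find a loopback port with no listener: bind, note the number, close.
# (A bound-but-never-connected socket frees its port immediately.)
tmp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tmp.bind(("127.0.0.1", 0))
closed_port = tmp.getsockname()[1]
tmp.close()

# A TCP connect to that port gets a RST back: the scanner's
# "closed" verdict (port reachable, but no service behind it)
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.settimeout(2.0)
try:
    probe.connect(("127.0.0.1", closed_port))
    verdict = "open"
except ConnectionRefusedError:
    verdict = "closed"
print(verdict)  # closed
```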
Port-Based Network Segmentation:
Enterprise networks often segment traffic by port: database ports (such as 3306 or 5432) reachable only from the application tier, management ports (such as 22) reachable only from an admin network, and public ports (80/443) exposed to the Internet.
This provides defense in depth—compromised web servers can't directly attack databases.
Non-Standard Ports:
Services can run on unusual ports to reduce automated scan noise (security through obscurity), to run multiple instances on one host, or to avoid requiring root privileges for ports below 1024.
However, port numbers don't guarantee service type—a web server could run on any port. Deep packet inspection examines actual protocol behavior, not just ports.
Every open port is an attack surface. Best practice: expose only the ports your service absolutely requires. Internal services should be completely invisible from the Internet. Use VPNs, bastion hosts, or zero-trust networking to access internal services rather than exposing them publicly.
We've explored how the transport layer extends network addressing to reach specific processes. Let's consolidate the key concepts:
- Port numbers are 16-bit identifiers that name processes and services within a host.
- A socket address (IP address + port) is a globally unique communication endpoint.
- The five-tuple (protocol, source IP:port, destination IP:port) distinguishes concurrent connections.
- Multiplexing merges data from many processes into one outgoing stream; demultiplexing sorts incoming segments to the correct sockets.
- Well-known (0-1023), registered (1024-49151), and ephemeral (49152-65535) ranges organize the port space for servers and clients alike.
What's Next:
Process-to-process delivery is the transport layer's most fundamental capability, but it's just the beginning. The next page explores end-to-end communication—the principles and mechanisms that enable reliable data exchange between processes regardless of the network path between them.
You now understand how port numbers, socket addresses, and multiplexing/demultiplexing enable multiple applications to share a single network connection. This process-to-process delivery mechanism is the foundation of all network application communication—from simple web requests to complex distributed systems.