When a video conferencing application sends your voice across the ocean, it needs the transport layer to be fast—a 300ms delay makes conversation awkward. When a banking application transfers money, it needs the transport layer to be reliable—losing even one acknowledgment could mean duplicate transactions. When a file synchronization service uploads gigabytes of data, it needs the transport layer to be efficient—maximizing throughput without overloading the network.
The transport layer offers a menu of services to applications. Different protocols provide different combinations of guarantees—and applications must choose wisely. Selecting the wrong transport service can make your application slow, unreliable, network-unfriendly, or all three.
This page systematically examines the services the transport layer can provide, which protocols offer which services, and how applications should choose based on their requirements.
By completing this page, you will understand the complete taxonomy of transport services—reliability, ordering, connection semantics, congestion control, flow control, and more. You'll learn which standard protocols (TCP, UDP, SCTP, QUIC) provide which services, and how to match application requirements to appropriate transport choices.
Transport services can be organized into categories based on the guarantees they provide. Understanding this taxonomy helps you reason about what applications need and what protocols offer.
Category 1: Delivery Guarantees (reliable vs. best-effort)
Category 2: Ordering Guarantees (ordered, unordered, or per-stream)
Category 3: Duplicate Handling (duplicates suppressed or possible)
Category 4: Connection Semantics (connection-oriented vs. connectionless)
Category 5: Data Boundaries (byte stream vs. message-oriented)
Category 6: Rate Management (flow control and congestion control)
Category 7: Advanced Features (multi-homing, multi-streaming, and similar)

The table below summarizes how the major transport protocols cover these categories:

| Service Category | Service Options | TCP | UDP | SCTP | QUIC |
|---|---|---|---|---|---|
| Reliability | Reliable / Best-effort | Reliable | Best-effort | Reliable | Reliable |
| Ordering | Ordered / Unordered / Per-stream | Ordered | Unordered | Per-stream | Per-stream |
| Duplicates | No duplicates / Possible | No duplicates | Possible | No duplicates | No duplicates |
| Connection | Connected / Connectionless | Connected | Connectionless | Connected | Connected |
| Stream type | Byte stream / Message-oriented | Byte stream | Message | Message | Byte stream |
| Congestion control | Yes / No | Yes | No* | Yes | Yes |
| Flow control | Yes / No | Yes | No | Yes | Yes |
| Multi-homing | Yes / No | No | No | Yes | No* |

\* UDP applications can implement congestion control themselves (see Rate Management below); QUIC does not offer true multi-homing, though it does support connection migration.
Some services applications might want are NOT provided by any standard transport protocol: guaranteed timing (no transport bounds latency), guaranteed throughput (delivery rate is only best-effort), and guaranteed bandwidth reservation (which requires network-layer support such as IntServ). Applications needing these must use specialized techniques or accept best-effort behavior.
The most fundamental transport service division is between reliable and best-effort delivery. This choice shapes nearly everything else about how a protocol works.
Reliable Delivery (TCP, SCTP, QUIC):
Reliable delivery guarantees that data sent will either:
- be delivered to the receiving process, complete and uncorrupted, or
- produce an error report to the sender if delivery proves impossible (for example, the connection breaks).

There's no silent failure—the sender always knows the outcome.
Mechanisms required (all of which appear in the toy sender sketched below):
- Sequence numbers to identify data and detect gaps
- Acknowledgments so the sender learns what arrived
- Timers and retransmission to recover from losses
- Checksums to detect corruption
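These mechanisms can be illustrated with a toy stop-and-wait sender over UDP. This is a minimal sketch, not a production protocol: the address, timeout, retry count, and the `seq|payload` packet format are all assumptions made for illustration.

```python
import socket

def reliable_send(data: bytes, dest=("127.0.0.1", 9999), timeout=0.5, max_tries=5):
    """Toy stop-and-wait reliability: sequence number, ACK, timer, retransmit."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)                     # timer to detect a lost packet or ACK
    seq = 0
    packet = f"{seq}|".encode() + data           # sequence number identifies this data
    for _ in range(max_tries):
        sock.sendto(packet, dest)                # (re)transmit
        try:
            ack, _ = sock.recvfrom(1024)         # wait for the acknowledgment
            if ack == f"ACK{seq}".encode():
                return True                      # receiver confirmed delivery
        except OSError:
            continue                             # timeout (or ICMP error): assume loss, retry
    return False                                 # no silent failure: the sender knows it failed
```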
Best-Effort Delivery (UDP):
Best-effort means the transport layer makes one attempt to deliver data. If it fails:
- the data is simply gone: no retransmission, no error report, no notification to either side.

This sounds terrible—why would anyone choose it?
The Reliability-Latency Trade-off:
Reliability comes at a cost: latency. When a packet is lost:
- the loss must first be detected (via duplicate acknowledgments or a retransmission timeout), which takes at least on the order of a round-trip time,
- the data must then be retransmitted, costing another round trip,
- and with in-order delivery, everything behind the lost packet waits in the meantime.

For real-time applications, this delay is unacceptable. A video call with 500ms gaps is worse than one with occasional visual artifacts. These applications prefer:
- to skip lost data and keep going: a dropped frame becomes a brief glitch instead of a growing delay.
Partial Reliability:
Some modern protocols offer partial reliability—controlled data loss:
- PR-SCTP (RFC 3758) lets the sender abandon retransmission of a message after a time or retry limit,
- QUIC's DATAGRAM extension (RFC 9221) carries unreliable datagrams alongside QUIC's reliable streams.

This bridges the gap between TCP's full reliability and UDP's complete lack of it.
Transport reliability means data reached the receiving process. It doesn't guarantee the application stored it durably—the process could crash after receiving but before persisting. Application-level acknowledgment (e.g., database commit confirmation) provides durability; transport reliability doesn't.
Beyond reliability, applications often care about the order in which data arrives. The transport layer offers several ordering models.
Ordered Delivery (TCP):
TCP guarantees all data arrives in the order it was sent. If bytes 1-100 are sent before bytes 101-200, the application receives them in that order, always.
This is powerful but has a cost: head-of-line blocking.
If packet 2 is lost, packets 3, 4, 5... must wait in the receive buffer. The application can't access any of them until packet 2 arrives. For multiplexed connections (HTTP/2 over TCP), a lost packet for one stream blocks all streams.
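A small simulation makes this cost concrete. The sketch below, with made-up segment numbers and arrival order, models a receive buffer that can only release data to the application in sequence.

```python
def deliver_in_order(arrivals):
    """Toy in-order delivery: segments wait in the buffer until the next expected one arrives."""
    expected, buffered, delivered = 1, set(), []
    for seg in arrivals:
        buffered.add(seg)
        while expected in buffered:          # release everything that is now contiguous
            buffered.remove(expected)
            delivered.append(expected)
            expected += 1
        print(f"segment {seg} arrived, application has seen {delivered}")
    return delivered

# Segment 2 is "lost" until the end: segments 3-5 sit in the buffer,
# invisible to the application, until 2 finally shows up.
deliver_in_order([1, 3, 4, 5, 2])
```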
Unordered Delivery (UDP):
UDP delivers datagrams in whatever order they arrive. If packet 2 arrives before packet 1, the application receives packet 2 first.
For applications that don't need ordering (DNS queries, some game protocols), this avoids head-of-line blocking.
Per-Stream Ordering (SCTP, QUIC):
This hybrid approach offers the best of both worlds:
- within a single stream, data is delivered reliably and in order,
- but streams are independent, so a loss on one stream never blocks delivery on another.
HTTP/3 (over QUIC) uses this: each HTTP request is a stream, so a lost packet affects only that request, not others.
| Model | Guarantee | Head-of-Line Blocking | Use Case | Protocol |
|---|---|---|---|---|
| Fully Ordered | All data in sequence | Yes (all data) | File transfer, traditional HTTP | TCP |
| Unordered | No ordering guarantee | No | DNS, UDP-based games | UDP |
| Per-Stream Ordered | Order within streams | Only within stream | HTTP/3, multiplexed protocols | QUIC, SCTP |
| Partially Ordered | Application defines | Configurable | Custom protocols | Custom design |
Message Boundaries:
A related question: does the transport preserve message boundaries?
Byte Stream (TCP, QUIC streams): data is a continuous sequence of bytes with no built-in boundaries. Three sends of 100 bytes each might be read as one 300-byte chunk, or as several reads of arbitrary sizes. Applications must frame their own messages (length prefixes, delimiters), as the sketch below illustrates.
Message-Oriented (UDP, SCTP): each message sent is delivered as exactly that message, or not at all. Boundaries are preserved: one send corresponds to one receive.
Why Byte Stream?
Byte streams are natural for:
- continuous data such as file transfers and terminal sessions,
- protocols that define their own framing on top (HTTP messages, TLS records).
Why Messages?
Messages are natural for:
- request/response exchanges such as DNS queries,
- records, events, and signaling, where each unit stands on its own.
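The difference shows up directly in the sockets API. The sketch below (port numbers and framing scheme are arbitrary choices for illustration) frames messages by hand on a TCP byte stream, while UDP preserves boundaries on its own.

```python
import socket

def tcp_send_framed(sock: socket.socket, message: bytes) -> None:
    """TCP is a byte stream: add a 4-byte length prefix so the receiver can find boundaries."""
    sock.sendall(len(message).to_bytes(4, "big") + message)

def udp_send(message: bytes, dest=("127.0.0.1", 5005)) -> None:
    """UDP is message-oriented: each sendto() arrives as exactly one recvfrom(), or not at all."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(message, dest)
```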
QUIC's multi-streaming solves HTTP/2's head-of-line blocking problem. HTTP/2 over TCP multiplexes requests, but a single lost packet blocks all requests. HTTP/3 over QUIC gives each request its own stream—lost packets affect only that request. This dramatically improves performance on lossy networks.
Transport protocols differ fundamentally in whether they establish connections before data transfer.
Connection-Oriented (TCP, SCTP, QUIC):
Before exchanging data, endpoints perform a handshake:
- the client proposes a connection, the server accepts, and both sides confirm (TCP's SYN, SYN-ACK, ACK exchange).

This establishes:
- agreed-upon initial sequence numbers,
- buffers and window sizes on both ends,
- the shared state that reliability, ordering, and rate control depend on.
Connectionless (UDP):
No handshake. The first packet is either a request or data. Each packet is independent—there's no shared state between sender and receiver.
Trade-offs:
Connection Setup Latency:
Connection overhead matters for short interactions:
| Protocol | Handshake RTTs | Total to First Data |
|---|---|---|
| TCP | 1.5 RTT | 1.5-2 RTT |
| TCP + TLS 1.2 | 3 RTT | 3+ RTT |
| TCP + TLS 1.3 | 2 RTT | 2+ RTT (1 RTT resumption) |
| QUIC | 1 RTT | 1 RTT (0 RTT resumption) |
| UDP | 0 RTT | 0 RTT |
For a DNS query (one request, one response), TCP's connection setup roughly doubles latency compared to UDP's immediate request. For a long-lived web session, setup latency is negligible.
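A rough back-of-the-envelope calculation shows the effect. The sketch below assumes a 50 ms round-trip time (an arbitrary example value) and that the request rides on the last handshake packet where possible; real stacks vary.

```python
# Round trips until the first response arrives, for one request/response exchange.
rtt_ms = 50
rtts_to_first_response = {
    "UDP":            1,   # request + response, no setup
    "QUIC":           2,   # 1-RTT handshake, then request/response (0-RTT on resumption)
    "TCP":            2,   # handshake; the request piggybacks on the final ACK
    "TCP + TLS 1.3":  3,
    "TCP + TLS 1.2":  4,
}
for name, rtts in rtts_to_first_response.items():
    print(f"{name:14s} ~{rtts * rtt_ms} ms")
```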
Connection State Costs:
Servers maintaining millions of connections consume significant memory: each connection carries kernel state plus send and receive buffers, often tens of kilobytes. At roughly 10 KB per connection, one million open connections already means about 10 GB of memory.
Connectionless servers avoid this—they maintain no per-client state. This is why DNS servers traditionally used UDP.
QUIC allows '0-RTT' connection resumption—if you've connected to a server before, you can send data in the very first packet with no handshake delay. This combines connection-oriented reliability with connectionless-like latency for repeat visitors. It's a major advancement for web performance.
The transport layer's rate control services prevent senders from overwhelming receivers or networks. These are among the most important—and most underappreciated—transport functions.
Flow Control:
Flow control prevents faster senders from overwhelming slower receivers:
- the receiver advertises how much free buffer space it has,
- the sender never keeps more unacknowledged data in flight than that advertised window (see the sketch below).

This creates backpressure—if the application doesn't read data, the receive buffer fills, the advertised window shrinks toward zero, and sends eventually block. This feedback prevents buffer overflows.
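The following toy sender, with made-up window sizes, shows the core rule: bytes in flight never exceed the receiver's advertised window.

```python
class WindowedSender:
    """Toy flow control: never exceed the receiver's advertised window."""

    def __init__(self, advertised_window: int):
        self.window = advertised_window    # receiver: "I have this much buffer free"
        self.in_flight = 0                 # bytes sent but not yet acknowledged

    def try_send(self, nbytes: int) -> bool:
        if self.in_flight + nbytes > self.window:
            return False                   # backpressure: the sender must wait
        self.in_flight += nbytes
        return True

    def on_ack(self, nbytes: int, new_window: int) -> None:
        self.in_flight -= nbytes           # acknowledged data has left the pipe
        self.window = new_window           # receiver advertises fresh buffer space

sender = WindowedSender(advertised_window=4096)
print(sender.try_send(3000))               # True: fits within the window
print(sender.try_send(2000))               # False: would overflow the receiver's buffer
sender.on_ack(3000, new_window=4096)
print(sender.try_send(2000))               # True: the window reopened after the ACK
```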
Congestion Control:
Congestion control prevents all senders collectively from overwhelming the network:
- senders probe for available capacity and treat packet loss (or rising delay) as a congestion signal,
- when congestion is signaled they slow down sharply; otherwise they speed up gradually (the AIMD pattern, sketched after the table below).
| Protocol | Flow Control | Congestion Control | Implications |
|---|---|---|---|
| TCP | Window advertisement in header | AIMD + slow start + fast retransmit | Network-friendly, receiver-protected |
| UDP | None | None | Sender can overwhelm receiver and network |
| SCTP | Window per association | Similar to TCP | Multi-stream with unified control |
| QUIC | Stream and connection-level | Pluggable (typically CUBIC) | Modern, flexible congestion control |
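A few lines of code capture the AIMD pattern named in the table. This is a simplified model: the window is counted in segments per round trip, and the loss pattern is made up.

```python
def aimd(rtts, loss_at):
    """Additive increase, multiplicative decrease over a number of round trips."""
    cwnd, history = 1.0, []
    for rtt in range(rtts):
        if rtt in loss_at:
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease when loss signals congestion
        else:
            cwnd += 1.0                 # additive increase: one more segment per round trip
        history.append(cwnd)
    return history

# The window climbs linearly and halves whenever the network signals congestion.
print(aimd(rtts=10, loss_at={5, 8}))
```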
Why UDP Lacks Control:
UDP has no flow or congestion control. Why?
- UDP is designed to be minimal: no connection state, no feedback loop, no extra latency,
- and many UDP applications (real-time media, games) want to control their own pacing.

But this is dangerous:
- an uncontrolled sender can flood a slower receiver,
- and it keeps transmitting at full rate straight through congestion, crowding out well-behaved TCP flows.
Application-Layer Rate Control:
UDP applications that send significant data should implement their own congestion control: real-time media stacks adapt their bitrate to measured loss and delay, and QUIC builds full congestion control on top of UDP. A simple starting point is to pace or cap the send rate, as in the sketch below.
Responsible network citizens don't send unlimited UDP traffic.
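One simple way to cap an application's UDP send rate is a token bucket. The sketch below is pacing only, not real congestion control (it does not react to loss), and the rate and burst values are arbitrary examples.

```python
import time

class TokenBucket:
    """Cap the average send rate while allowing short bursts."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate, self.capacity = rate_bytes_per_s, burst_bytes
        self.tokens, self.last = burst_bytes, time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False                      # caller should delay or drop this packet

bucket = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=10_000)   # roughly 1 Mbit/s
# if bucket.allow(len(payload)): sock.sendto(payload, dest)
```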
TCP's congestion control is a 'social contract' among Internet users. TCP senders voluntarily reduce their rate when the network is congested. If everyone used protocols without congestion control, the Internet would collapse. This is why UDP should be used only for low-bandwidth or real-time applications that implement their own control.
Historically, base transport protocols (TCP, UDP) provided no security. Data traversed networks in plaintext, vulnerable to eavesdropping, tampering, and impersonation. Modern applications require more.
Traditional Model: Security as Add-On
Security was added as a layer above transport:
- TLS runs on top of TCP (HTTPS and most other secure application protocols),
- DTLS runs on top of UDP for datagram traffic.

This works but has inefficiencies:
- the transport and TLS handshakes run one after the other, adding round trips,
- and the transport headers themselves stay visible to (and modifiable by) middleboxes.

(A minimal example of this layered model appears after the comparison table below.)
Modern Model: Integrated Security (QUIC)
QUIC integrates TLS 1.3 directly:
- the transport and cryptographic handshakes are combined into a single exchange,
- and nearly everything, including most transport header fields, is encrypted.
| Protocol Stack | Confidentiality | Integrity | Authentication | Setup Latency |
|---|---|---|---|---|
| TCP (raw) | ❌ None | Checksum only | ❌ None | 1.5 RTT |
| TCP + TLS 1.2 | ✅ Full encryption | ✅ MAC on data | ✅ Certificates | 3-4 RTT |
| TCP + TLS 1.3 | ✅ Full encryption | ✅ AEAD | ✅ Certificates | 2-3 RTT (1 RTT resume) |
| UDP (raw) | ❌ None | Optional checksum | ❌ None | 0 RTT |
| UDP + DTLS | ✅ Full encryption | ✅ AEAD | ✅ Certificates | 2+ RTT |
| QUIC | ✅ Full (including headers) | ✅ AEAD on all data | ✅ TLS 1.3 integrated | 1 RTT (0 RTT resume) |
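For the traditional layered model, Python's standard library shows how little the application has to do: TLS simply wraps an ordinary TCP socket. The hostname and request below are placeholders for illustration.

```python
import socket
import ssl

hostname = "example.com"                          # placeholder host
context = ssl.create_default_context()            # validates the server certificate by default

with socket.create_connection((hostname, 443)) as raw_tcp:                 # TCP handshake
    with context.wrap_socket(raw_tcp, server_hostname=hostname) as tls:    # TLS handshake on top
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200))                      # encrypted on the wire, plaintext here
```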
What Security Services Provide:
Confidentiality: Nobody but endpoints can read the data
Integrity: Nobody can modify data in transit without detection
Authentication: You know you're talking to the right party
Forward Secrecy: Past sessions can't be decrypted if long-term keys are compromised
The Encryption Trend:
The Internet is moving toward ubiquitous encryption:
- the large majority of web traffic is now HTTPS,
- HTTP/3 over QUIC makes encryption mandatory rather than optional,
- and even DNS increasingly runs over TLS or HTTPS (DoT, DoH).
This closes the gap where transport provided no security.
QUIC encrypts even transport headers (packet numbers, acknowledgments). This prevents middleboxes from inspecting or modifying transport behavior, preserving the end-to-end principle for transport semantics. It also enables protocol evolution—new QUIC features can't be broken by middleboxes that don't understand them.
Choosing the right transport protocol requires understanding your application's requirements and mapping them to available services.
The Decision Framework:
Must all data arrive?
- Yes: you need a reliable transport (TCP, SCTP, QUIC).
- No, because stale data is useless anyway: best-effort (UDP) or partial reliability serves you better.

Does order matter?
- Everything in one sequence: TCP's ordered byte stream is fine.
- Only within independent requests or messages: per-stream ordering (QUIC, SCTP) avoids head-of-line blocking.
- Not at all: UDP is simplest.

Is latency critical?
- For short transactions, handshake overhead dominates: prefer UDP, or QUIC with 0-RTT resumption.
- For real-time media, retransmission delay dominates: prefer loss tolerance over full reliability.

Is security required?
- Almost always yes today: use TLS over TCP, DTLS over UDP, or QUIC with its integrated TLS 1.3. (A compact code version of this framework appears below.)
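The same framework can be condensed into a small helper. This is an illustrative sketch only: the parameter names and the suggested protocols are assumptions that compress the table that follows, not an authoritative mapping.

```python
def suggest_transport(needs_all_data: bool, needs_order: bool,
                      latency_critical: bool, needs_security: bool) -> str:
    """Map coarse application requirements to a starting transport choice."""
    if not needs_all_data and latency_critical:
        return "UDP + application rate control (add DTLS/SRTP if security is needed)"
    if needs_all_data and latency_critical:
        return "QUIC: reliable, per-stream ordering, TLS 1.3 built in"
    if needs_all_data and needs_order:
        return "TCP + TLS" if needs_security else "TCP"
    return "UDP for simple request/response; TCP or QUIC otherwise"

print(suggest_transport(needs_all_data=True, needs_order=True,
                        latency_critical=False, needs_security=True))   # -> TCP + TLS
```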
| Application Type | Key Requirements | Recommended Protocol | Why |
|---|---|---|---|
| Web browsing | Reliable, secure, fast | QUIC (HTTP/3) or TCP+TLS (HTTP/2) | Modern web benefits from QUIC's 0-RTT and no HoL blocking |
| File transfer | Reliable, ordered | TCP | Complete data integrity essential |
| Email / messaging | Reliable, secure | TCP + TLS | Messages must arrive intact and secure |
| DNS queries | Fast, simple | UDP (or DoH/DoT for security) | Short transactions, retry is cheap |
| Video streaming | High throughput, some loss OK | QUIC or UDP + application reliability | Can buffer; loss tolerance varies |
| Video calling | Low latency, loss tolerant | UDP + WebRTC (SRTP) | Can't buffer; prefer small glitches to delay |
| Online gaming | Low latency, partial reliability | UDP + custom protocol | Current state matters; old state obsolete |
| SSH/Terminal | Reliable, ordered, secure | TCP (SSH's own encryption) | Every keystroke must arrive correctly |
| Database queries | Reliable, ordered | TCP + TLS | Query results must be complete |
| IoT sensors | Simple, low overhead | UDP or MQTT (TCP) | Depends on reliability requirements |
Common Mistakes:
Mistake 1: Using TCP when UDP is better. Forcing real-time audio or game state through TCP means retransmissions and in-order delivery for data that is already stale by the time it arrives.
Mistake 2: Using UDP when TCP is better. Rebuilding reliability, ordering, and congestion control on top of UDP for bulk transfer usually yields a slower, buggier TCP.
Mistake 3: Ignoring congestion control. A UDP application that transmits at full rate regardless of loss degrades the network for everyone, including itself.
Mistake 4: Assuming TCP ordering is free. In-order delivery implies head-of-line blocking: one lost packet stalls everything behind it, which hurts multiplexed protocols badly.
The Modern Default:
For most new applications:
- default to TCP + TLS, or QUIC/HTTP/3 where your platform supports it,
- reach for raw UDP only when latency or loss tolerance genuinely demands it, and then take responsibility for rate control and security yourself.
The transport landscape is changing. QUIC is becoming the de facto choice for web applications (HTTP/3). WebTransport extends QUIC to more use cases. Traditional TCP+TLS remains essential but may gradually give way to QUIC-based alternatives. When starting new projects, evaluate whether modern options like QUIC fit your needs.
We've surveyed the full range of services the transport layer offers. Let's consolidate the essential knowledge:
- Reliability: TCP, SCTP, and QUIC guarantee delivery; UDP is best-effort.
- Ordering: fully ordered (TCP), unordered (UDP), or per-stream (QUIC, SCTP); ordering brings head-of-line blocking.
- Connections: handshakes buy shared state and guarantees at the cost of setup latency and per-connection server memory.
- Rate management: flow control protects the receiver, congestion control protects the network; UDP provides neither.
- Security: TLS and DTLS add it above the transport; QUIC builds TLS 1.3 in.
- Choice: match your application's reliability, ordering, latency, and security requirements to the protocol that actually provides them.
What's Next:
We've explored what the transport layer offers. The final page of this module examines the host's responsibilities in implementing transport services—how operating systems structure transport protocol implementations, handle connection state, and manage the complex dance of timers, buffers, and events that make transport work.
You now understand the complete menu of transport services and how to match application requirements to protocol choices. This knowledge is essential for designing network applications—choosing the wrong transport has lasting performance and reliability consequences.