Consider a phone conversation. Both parties can speak and listen simultaneously—there's no need to say "over" and wait for a response like with a walkie-talkie. Ideas flow freely in both directions, interruptions are possible, and the conversation feels natural.
TCP works the same way. A single TCP connection provides two independent, simultaneous communication channels—one in each direction. Data can flow from client to server at the same time as data flows from server to client. Neither side needs to wait for the other to finish.
This full-duplex capability is so fundamental that we often take it for granted. But it's not a given—many communication systems are half-duplex (one direction at a time) or simplex (one direction only). TCP's full-duplex design enables efficient, natural communication patterns that match how humans think about conversations.
In this page, we'll explore how TCP implements full-duplex communication: independent sequence spaces, simultaneous data flow, piggybacking, half-close operations, and the impact on protocol design.
By the end of this page, you will understand how TCP provides full-duplex communication through independent sequence spaces, how piggybacking optimizes ACK transmission, the mechanics of half-close for graceful shutdown, and how full-duplex capabilities influence application protocol design. You'll see TCP connections as two independent byte streams sharing a single conversation.
Before diving into TCP's full-duplex implementation, let's understand the three communication modes:
Simplex: One Direction Only
Data flows in only one direction, permanently. One side is always the sender; the other is always the receiver.
Examples: Television broadcasting, keyboard input to computer, traditional pager systems.
Half-Duplex: Both Directions, But Not Simultaneously
Data can flow in both directions, but only one direction at a time. When one side transmits, the other must wait.
Examples: Walkie-talkies, traditional Ethernet (CSMA/CD on shared medium), some satellite links.
Full-Duplex: Both Directions Simultaneously
Data flows in both directions at the same time. Both sides can send and receive concurrently.
Examples: Telephone calls, TCP connections, modern full-duplex Ethernet.
| Mode | Direction | Simultaneous? | Turn-Taking? | Example |
|---|---|---|---|---|
| Simplex | One way | N/A | No (sender only sends) | Broadcast radio |
| Half-Duplex | Both ways | No | Yes (explicit or implicit) | Walkie-talkie |
| Full-Duplex | Both ways | Yes | No (concurrent) | Phone call, TCP |
TCP is inherently full-duplex.
Every TCP connection supports simultaneous bidirectional data flow from the moment it's established. There's no mode switching, no signaling required, no turn-taking protocol. Both endpoints can write to and read from the connection at any time.
This is not just a convenience—it's an architectural foundation that enables efficient protocol design and matches natural communication patterns.
Full-duplex means both directions can carry data simultaneously, but it doesn't mean the traffic must be symmetric. A file download might have 99% of data flowing server→client, with only ACKs flowing client→server. The capability exists in both directions; usage depends on the application.
TCP implements full-duplex by maintaining two completely independent byte streams—one in each direction. Each stream has its own sequence number space, acknowledgment stream, flow-control window, and retransmission state.
When Host A communicates with Host B over TCP, there are actually two logical streams:
A → B stream: bytes written by A, sequenced with A's numbers, acknowledged by B
B → A stream: bytes written by B, sequenced with B's numbers, acknowledged by A
These streams are independent. The sequence numbers for A→B have no relationship to the sequence numbers for B→A.
TCP Connection: Host A ←→ Host B

╔══════════════════════════════════════════════════════════════════════╗
║                               HOST A                                 ║
╠═══════════════════════════════════╦══════════════════════════════════╣
║  SENDING (A → B)                  ║  RECEIVING (B → A)               ║
║  ──────────────────────           ║  ────────────────────            ║
║  ISN: 100                         ║  IRS: 500                        ║
║  SND.NXT: 650 (next seq to send)  ║  RCV.NXT: 1200 (next expected)   ║
║  SND.UNA: 500 (oldest unACKed)    ║  RCV.WND: 16384 (recv window)    ║
║  SND.WND: 8192 (peer's recv wnd)  ║                                  ║
╚═══════════════════════════════════╩══════════════════════════════════╝
                          ↕  Network  ↕
╔══════════════════════════════════════════════════════════════════════╗
║                               HOST B                                 ║
╠═══════════════════════════════════╦══════════════════════════════════╣
║  SENDING (B → A)                  ║  RECEIVING (A → B)               ║
║  ──────────────────────           ║  ────────────────────            ║
║  ISN: 500                         ║  IRS: 100                        ║
║  SND.NXT: 1200                    ║  RCV.NXT: 650                    ║
║  SND.UNA: 1100                    ║  RCV.WND: 8192                   ║
║  SND.WND: 16384                   ║                                  ║
╚═══════════════════════════════════╩══════════════════════════════════╝

Note: A's SND.WND = B's RCV.WND (they're the same window, seen from different sides).
A's ISN (100) becomes B's IRS (100), and vice versa.

Each direction is managed independently:
| Aspect | A → B Stream | B → A Stream |
|---|---|---|
| Sequence tracking | A's sequence numbers | B's sequence numbers |
| ACK generation | B sends ACKs to A | A sends ACKs to B |
| Flow control | B's receive window limits A | A's receive window limits B |
| Congestion control | A's cwnd for A→B | B's cwnd for B→A |
| Retransmission | A retransmits unACKed A→B data | B retransmits unACKed B→A data |
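The per-direction state in the table above can be modeled as a small data structure. This is a toy sketch, not a real TCP implementation—field names follow the RFC 793 conventions from the diagram, and the values are Host A's view from that diagram:

```python
# Toy model: one host's view of one TCP connection. Actions on the
# sending direction never touch the receiving direction's state.
from dataclasses import dataclass

@dataclass
class DirectionState:
    snd_nxt: int   # next sequence number to send (our direction)
    snd_una: int   # oldest unacknowledged byte (our direction)
    rcv_nxt: int   # next byte expected from the peer (their direction)
    rcv_wnd: int   # receive window we advertise (their direction)

    def on_send(self, nbytes: int) -> None:
        """Sending data advances only our SND.NXT."""
        self.snd_nxt += nbytes

    def on_ack(self, ack: int) -> None:
        """An ACK from the peer advances only our SND.UNA."""
        self.snd_una = max(self.snd_una, ack)

# Host A's state, matching the diagram above
a_state = DirectionState(snd_nxt=650, snd_una=500, rcv_nxt=1200, rcv_wnd=16384)

a_state.on_send(100)             # A sends 100 bytes toward B
assert a_state.snd_nxt == 750    # A->B sequence space advanced...
assert a_state.rcv_nxt == 1200   # ...while the B->A stream is untouched
```

The point of the sketch is the independence: `on_send` and `on_ack` only ever modify the A→B bookkeeping; nothing A sends changes what A expects to receive.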
Why independence matters:
This independence is crucial for correct operation: a lost segment in one direction triggers retransmission only in that direction; flow control can throttle one direction while the other runs at full speed; and one side can finish sending (half-close) while the other continues to transmit.
Every TCP segment contains both a sequence number (for data being sent in this direction) and an acknowledgment number (for data received from the other direction). This dual field structure is what enables piggybacking—combining data transmission with ACK generation in a single segment.
The TCP header structure reflects full-duplex operation. Every segment carries fields for both sending and receiving:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Source Port | Destination Port |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Sequence Number | ← For THIS segment's data
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Acknowledgment Number | ← For OTHER direction's data
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Offset| Rsv | Flags | Window Size | ← Receive window for OTHER direction
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Checksum | Urgent Pointer |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Key bidirectional fields:
Sequence Number: the position of this segment's data in the sender's own byte stream
Acknowledgment Number: the next byte expected from the peer—bookkeeping for the other direction's stream
Window Size: how much more data the sender of this segment can accept from the peer
A single segment can carry both data and acknowledgment:
Segment from A to B:
- Seq=1000, Len=500 → A is sending bytes 1000-1499 to B
- ACK=2500 → A confirms receipt of B's bytes up to 2499
- Window=8192 → A has 8KB of receive buffer available
This single segment:
1. Sends data in the A→B direction
2. Acknowledges data in the B→A direction
3. Provides flow control for the B→A direction
This combination is called piggybacking and is a major efficiency gain from full-duplex operation.
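The dual-field layout can be made concrete by packing the fixed 20-byte header for the piggybacked segment above (Seq=2000, ACK=1500). A sketch using Python's struct module—the port numbers are arbitrary placeholders and the checksum is left as zero for illustration:

```python
import struct

def pack_tcp_header(sport, dport, seq, ack, flags, window):
    # Data offset = 5 32-bit words (no options), OR'd with the flag bits
    offset_flags = (5 << 12) | flags
    return struct.pack('!HHIIHHHH', sport, dport, seq, ack,
                       offset_flags, window, 0, 0)  # checksum=0, urgent=0

ACK_FLAG = 0x10
hdr = pack_tcp_header(40000, 80, seq=2000, ack=1500,
                      flags=ACK_FLAG, window=8192)
assert len(hdr) == 20

# One header carries both directions' bookkeeping at once:
sport, dport, seq, ack = struct.unpack('!HHII', hdr[:12])
assert seq == 2000    # data for this direction
assert ack == 1500    # acknowledgment for the other direction
```

A real stack would also compute the checksum over a pseudo-header, but the layout above shows why no extra segment is needed to carry the ACK.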
A TCP segment is the unit of data at the transport layer. It becomes an IP packet when encapsulated. The terms are sometimes used interchangeably, but precisely, a segment is TCP's PDU (Protocol Data Unit) and a packet is IP's PDU.
Piggybacking is the practice of including ACK information in data-carrying segments, rather than sending separate ACK-only segments. This is a natural consequence of full-duplex operation and provides significant efficiency gains.
Without piggybacking:
A → B: [SEQ=1000, DATA=500 bytes]
B → A: [ACK=1500] ← ACK-only segment
B → A: [SEQ=2000, DATA=500 bytes] ← Data segment
Total segments: 3
With piggybacking:
A → B: [SEQ=1000, DATA=500 bytes]
B → A: [SEQ=2000, ACK=1500, DATA=500 bytes] ← ACK piggybacked on data
Total segments: 2
Piggybacking saves bandwidth (each avoided ACK-only segment costs roughly 40 bytes of TCP/IP headers plus link-layer framing), per-packet processing at routers and hosts, and interrupt overhead at the receiver.
Delayed ACKs enable piggybacking:
To maximize piggybacking opportunities, TCP uses delayed ACKs: instead of immediately acknowledging received data, the receiver waits briefly (up to 500 ms per RFC 1122; typically 40-200 ms in practice).
During this delay, if the application generates data to send, the ACK is piggybacked on the outgoing segment. If no data is generated, a standalone ACK is sent after the delay expires.
Delayed ACK rules (RFC 5681):
An ACK SHOULD be generated for at least every second full-sized segment
An ACK MUST be generated within 500 ms of the arrival of the first unacknowledged segment
Out-of-order segments should be acknowledged immediately (the duplicate ACKs enable fast retransmit)
Delayed ACKs can interact poorly with the Nagle algorithm, causing "ACK delay" where the sender waits for an ACK before sending small writes, but the receiver delays the ACK hoping to piggyback. This creates latency issues for interactive applications. Solutions include TCP_NODELAY (disabling Nagle) or TCP_QUICKACK (disabling delayed ACKs).
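The delayed-ACK decision can be sketched as a small predicate. This is a simplified simulation of the rules paraphrased above, not an implementation of any real stack's logic; the 200 ms delay is an assumed typical value, not the RFC cap:

```python
MSS = 1460          # assumed maximum segment size
ACK_DELAY_MS = 200  # typical implementation value; the RFC cap is 500 ms

def ack_now(unacked_segments: int, data_to_send: bool, timer_ms: int) -> bool:
    """Decide whether the receiver should emit an ACK right now."""
    if data_to_send:                  # piggyback opportunity: ACK rides on data
        return True
    if unacked_segments >= 2:         # every second full segment -> immediate ACK
        return True
    return timer_ms >= ACK_DELAY_MS   # otherwise wait for the delay timer

assert ack_now(1, data_to_send=True, timer_ms=0)        # piggybacked immediately
assert ack_now(2, data_to_send=False, timer_ms=0)       # second segment forces ACK
assert not ack_now(1, data_to_send=False, timer_ms=50)  # still holding the ACK
assert ack_now(1, data_to_send=False, timer_ms=200)     # timer expired: standalone ACK
```

The third case is exactly the window where Nagle and delayed ACKs can deadlock each other: the sender is holding data waiting for this ACK, while the receiver is holding the ACK hoping for data to piggyback on.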
Full-duplex means both directions can carry data at the exact same time. This isn't just alternating quickly—it's true simultaneous transmission.
How is this possible?
At the physical layer, modern networks (like Ethernet with full-duplex switches) can transmit and receive simultaneously on separate wire pairs or frequencies. At the TCP layer, each host can transmit segments from its send buffer, receive and process incoming segments, and generate ACKs for received data—all concurrently.
Timeline of simultaneous transmission:
Time    Host A                             Host B
──────────────────────────────────────────────────────────────────
t=0     Sends: Seq=1000 ──────────▶        Sends: Seq=5000 ──────────▶
t=1     ◀────────── Receives: Seq=5000     Sends more: Seq=5500 ────▶
t=2     Sends more: Seq=1500 ────▶         ◀──────── Receives: Seq=1000
t=3     ◀────────── Receives: Seq=5500     ◀──────── Receives: Seq=1500
t=4     Sends: Seq=2000, ACK=6000 ──▶      Sends: Seq=6000, ACK=2000 ──▶

At any given moment:
- Segments may be in transit in BOTH directions
- Each host is sending AND receiving
- ACKs flow in both directions
- Neither host "waits its turn"

Network contains:

Host A                                              Host B
  |  [Seq=1500]──────────────────────────────▶        |
  |  ◀──────────────────────────────[Seq=5500]        |
  |  [ACK=5500]──────────────────────────────▶        |
  |  ◀──────────────────────────────[ACK=1500]        |

Why this matters for applications:
Request streaming: A client can start sending a large upload while the server is still sending results from a previous request
Bidirectional protocols: WebSocket, gRPC streaming, and other protocols leverage full-duplex for natural bidirectional communication
No artificial latency: Unlike half-duplex, there's no overhead from direction switching
Efficient bulk transfers: Both directions can saturate available bandwidth simultaneously
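Simultaneous transmission is easy to observe in code. The sketch below uses `socket.socketpair()`—a connected full-duplex pair (AF_UNIX on POSIX) standing in for a TCP connection—where both endpoints write before either reads, so neither side waits its turn:

```python
import socket

a, b = socket.socketpair()

a.sendall(b'upload chunk from A')   # A transmits...
b.sendall(b'results from B')        # ...while B transmits too

# Both messages were in flight concurrently; now each side reads.
from_a = b.recv(1024)
from_b = a.recv(1024)
assert from_a == b'upload chunk from A'
assert from_b == b'results from B'

a.close()
b.close()
```

If the channel were half-duplex, the second `sendall` would have to wait until the first message had been read; here both writes complete immediately and each side drains its own receive direction independently.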
WebSocket (RFC 6455) was specifically designed to expose TCP's full-duplex capability to web applications. Unlike HTTP/1.x's request-response pattern, WebSocket allows server-initiated pushes and truly bidirectional data flow, fully utilizing the underlying TCP connection's capabilities.
Because TCP's two directions are independent, each can be closed separately. This is called half-close: one side finishes sending while the other continues.
How half-close works:
1. One side calls shutdown(SHUT_WR) (or close()), which sends a FIN for its direction
2. The peer ACKs the FIN; that direction of the stream is now closed
3. The other direction stays fully open: the peer keeps sending, and the FIN sender keeps receiving and ACKing
4. The connection fully closes only when the peer eventually sends its own FIN
During the half-closed state, the side that sent the FIN can no longer send data, but it continues to receive, acknowledge, and advertise its window; the other side can send as much data as it wants before closing its own direction.
Use cases for half-close:
Signaling end of input: A client sends a query, then uses shutdown(SHUT_WR) to signal "I'm done." The server knows no more requests are coming and can send a complete response without waiting.
Streaming responses: The client sends a short request, then closes its send side. The server streams a large response (video, large dataset) while the client consumes it.
rsh/rlogin protocols: Historic remote shell protocols used half-close to signal end of stdin while keeping stdout/stderr open.
API for half-close:
// Close only the write side (send FIN)
shutdown(socket, SHUT_WR); // Can still read
// Close only the read side (less common)
shutdown(socket, SHUT_RD); // Can still write
// Close both (equivalent to close)
shutdown(socket, SHUT_RDWR);
// close() closes both directions and the socket
close(socket);
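The same calls exist in Python's socket module, which makes half-close easy to demonstrate locally. A sketch over `socket.socketpair()` (standing in for a TCP connection): the client finishes sending with `shutdown(SHUT_WR)`, the server reads until EOF so it knows the request is complete, and the client's receive side stays open for the response:

```python
import socket

client, server = socket.socketpair()

client.sendall(b'QUERY')
client.shutdown(socket.SHUT_WR)      # "I'm done sending" -- FIN for this direction

# Server reads until EOF: the half-close is its end-of-request signal.
request = b''
while chunk := server.recv(1024):
    request += chunk
assert request == b'QUERY'

server.sendall(b'RESPONSE')          # server's send direction is still open
server.close()

reply = client.recv(1024)            # client can still read after SHUT_WR
assert reply == b'RESPONSE'
client.close()
```

This is the "signaling end of input" pattern from above: the server never has to guess whether more request bytes are coming.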
When a host receives a FIN but hasn't sent its own FIN, the connection enters CLOSE_WAIT state. This can last indefinitely if the application doesn't close its end. Monitoring for CLOSE_WAIT accumulation is important—it often indicates applications not properly closing connections.
Each direction has independent flow control. Host A's receive window controls B→A data; Host B's receive window controls A→B data.
Why independent flow control?
The two directions may have completely different characteristics:
Asymmetric data transfer: A downloads a 1GB file from B, but only sends acknowledgments back. A needs a large receive buffer; its send buffer can be small.
Asymmetric processing: A sends data quickly, but B's application is slow to read. B's receive window shrinks; A must slow down. Meanwhile, B might have a large window for A.
Asymmetric network: The path from A to B might have different bandwidth than B to A (common with cable/DSL).
| Aspect | A → B Direction | B → A Direction |
|---|---|---|
| Flow Control | B's rwnd limits A's sending | A's rwnd limits B's sending |
| Congestion Control | A's cwnd for A→B path | B's cwnd for B→A path |
| Data Rate | Can saturate A→B bandwidth | Can saturate B→A bandwidth |
| Loss Recovery | A handles A→B losses | B handles B→A losses |
Window Advertisement in Each Direction:
Segment from A to B:
Window field = A's receive window (for B→A data)
"I can accept X more bytes from you"
Segment from B to A:
Window field = B's receive window (for A→B data)
"I can accept Y more bytes from you"
The window field always advertises the sender's incoming capacity, which controls the outgoing rate of the remote host.
Zero window in one direction:
If B's receive buffer fills (slow application), B advertises rwnd=0. A stops sending data (A→B is paused). But B→A can continue normally—B can still send, and A's window is unaffected.
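The zero-window scenario can be sketched as two independent window counters—a toy model of per-direction flow control, not a real implementation:

```python
class Direction:
    """One direction of a connection, gated by the receiver's window."""
    def __init__(self, rwnd: int):
        self.rwnd = rwnd          # window advertised by this direction's receiver

    def can_send(self, nbytes: int) -> bool:
        return nbytes <= self.rwnd

stalled = Direction(rwnd=0)       # B's buffer is full: A->B must pause
flowing = Direction(rwnd=16384)   # A has plenty of room: B->A keeps going

assert not stalled.can_send(1)    # A cannot send even one byte toward B
assert flowing.can_send(8000)     # B's sending toward A is unaffected
```

One counter hitting zero says nothing about the other—which is exactly why a slow reader on one side doesn't silence the whole conversation.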
When debugging TCP performance, consider each direction separately. A connection might have excellent throughput in one direction but be limited in the other. Tools like Wireshark can show window sizes and congestion behavior for each direction independently.
Application protocols can leverage full-duplex in different ways. Understanding these patterns helps in protocol design and implementation.
Pattern 1: Request-Response (Underutilizes Full-Duplex)
Classic HTTP/1.x: client sends request, waits, server sends response. This is essentially half-duplex usage of a full-duplex channel.
Client → Server: Request
Client waits...
Server → Client: Response
Client sends next request...
Pattern 2: Pipelining (Better Utilization)
HTTP/1.1 pipelining: client sends multiple requests without waiting. Still request-response, but overlapped.
Client → Server: Request 1
Client → Server: Request 2
Client → Server: Request 3
Server → Client: Response 1
Server → Client: Response 2
Server → Client: Response 3
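The pipelined exchange above can be sketched over a local socket pair, with a toy line-based protocol standing in for HTTP (the request/response strings are illustrative):

```python
import socket
import threading

client, server = socket.socketpair()

def serve():
    """Answer each newline-terminated request, in order."""
    buf = b''
    handled = 0
    while handled < 3:
        buf += server.recv(1024)
        while b'\n' in buf:
            line, buf = buf.split(b'\n', 1)
            server.sendall(b'OK ' + line + b'\n')
            handled += 1
    server.close()

t = threading.Thread(target=serve)
t.start()

# Pipelined: all three requests go out before any response is read.
client.sendall(b'GET /1\nGET /2\nGET /3\n')

responses = b''
while responses.count(b'\n') < 3:
    responses += client.recv(1024)
t.join()
client.close()

assert responses == b'OK GET /1\nOK GET /2\nOK GET /3\n'
```

Because the channel is full-duplex, responses can start flowing back while later requests are still in transit—the client never has to drain a response before issuing the next request.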
Pattern 3: Multiplexing (HTTP/2)
Multiple independent streams interleaved on one connection. Both directions carry frames for multiple streams.
Client → Server: [Stream 1 headers]
Client → Server: [Stream 2 headers]
Server → Client: [Stream 1 headers]
Client → Server: [Stream 1 data]
Server → Client: [Stream 2 headers]
Server → Client: [Stream 1 data]
...
Pattern 4: True Bidirectional (Full Utilization)
WebSocket, gRPC streaming: both sides send whenever they want, independently.
Client → Server: Message A
Server → Client: Message 1
Client → Server: Message B } These can happen
Server → Client: Message 2 } simultaneously
Server → Client: Message 3
Client → Server: Message C
Full-duplex is a capability, not a requirement. If your application naturally follows request-response patterns, there's no need to force bidirectional messaging. Use full-duplex when both sides genuinely need to initiate communication independently.
TCP's full-duplex capability transforms a single connection into two independent communication channels. Let's consolidate what we've learned:
TCP provides a reliable, ordered, full-duplex byte stream between applications. These characteristics together create a powerful abstraction: two independent, dependable communication channels over a single connection, enabling everything from simple web requests to complex real-time protocols.
Module Complete:
With this page, we've covered all five fundamental characteristics of TCP: reliable delivery, ordered delivery, the byte-stream abstraction, connection-oriented communication between applications, and full-duplex operation.
These characteristics form the foundation for everything else in TCP. The next modules will build on this foundation, exploring sequence numbers in depth, header format details, the three-way handshake, state diagrams, and connection termination.