The sliding window mechanism exists for one fundamental purpose: pipelining. By allowing multiple frames to be 'in flight' simultaneously—traveling through the network before any acknowledgments return—pipelining transforms channel utilization from Stop-and-Wait's fraction of a percent to near-100% efficiency.
Pipelining is the key that unlocks high-performance reliable data transfer over high bandwidth-delay product networks. This page examines pipelining mechanics in complete detail: the timing, the efficiency formulas, the error recovery implications, and the real-world performance characteristics.
By the end of this page, you will understand pipelining timing diagrams and how frames overlap in transit, efficiency calculations for pipelined protocols, the relationship between window size and channel utilization, error recovery costs in pipelined systems, and how Go-Back-N and Selective Repeat differ in pipelined performance.
Stop-and-Wait Revisited:
In Stop-and-Wait, the sender transmits one frame and then idles, waiting for the acknowledgment to traverse the network back. The channel sits empty during this wait time.
Sender:    [Frame 0]──────────────── (idle, waiting for ACK) ────────────────▲ ACK 0 arrives
Channel:   >>>>>>>>>>>>>>  (silent for most of the round trip)  <<<<<<<<<<<<<<
Receiver:              [receives Frame 0]──────[ACK 0 sent]──────► travels back to the sender
The Pipelining Insight:
Nothing prevents the sender from transmitting additional frames during that wait time. If we allow up to W frames in-flight:
Sender: [Frame 0][Frame 1][Frame 2][Frame 3]──────────────────────────────
Channel: >>>>>>>>>>>>>>>>>>>>>>>>>>> ...
Receiver: [receives] [receives] [receives] ...
With enough frames in the pipeline, we achieve continuous transmission—the sender never idles.
The 'Pipe' Metaphor:
Think of the network as a pipe between sender and receiver. The pipe's capacity is set by how fast bits can be pushed in (bandwidth) and how long each bit spends in transit (propagation delay).
Stop-and-Wait pushes one frame into the pipe, then waits for confirmation before pushing the next. The pipe is mostly empty.
Pipelining fills the pipe with multiple frames. The sender keeps pushing frames until the pipe is full (window exhausted), then uses returning ACKs to 'make room' for more.
Bandwidth-Delay Product:
The bandwidth-delay product (BDP) quantifies the pipe's capacity:
BDP = Bandwidth × RTT (bits)
= R × (2 × d/v) bits
For optimal utilization, the window should hold at least BDP worth of data.
For a 10 Gbps link with 50 ms RTT: BDP = 10¹⁰ bit/s × 0.05 s = 5 × 10⁸ bits = 62.5 megabytes. Your window needs to hold 62.5 MB of data to fully utilize this link. This explains why modern TCP uses window scaling—the original 64 KB limit was far too small for high-BDP paths.
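As a sanity check on that arithmetic, here is a short Python sketch; the 1500-byte frame size is an assumed value used only to express the window in frames:

```python
import math

bandwidth_bps = 10e9          # 10 Gbps link
rtt_s = 0.05                  # 50 ms round-trip time

bdp_bits = bandwidth_bps * rtt_s            # bits "in flight" when the pipe is full
bdp_bytes = bdp_bits / 8
print(f"BDP = {bdp_bits:.2e} bits = {bdp_bytes / 1e6:.1f} MB")   # 5.00e+08 bits = 62.5 MB

frame_bytes = 1500                          # assumed frame size, for illustration
frames_in_flight = math.ceil(bdp_bytes / frame_bytes)
print(f"Window must cover about {frames_in_flight:,} frames of {frame_bytes} B")
```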
Let's formalize the timing of pipelined transmission to derive efficiency formulas.
Time Components:
| Symbol | Name | Formula |
|---|---|---|
| Tₜ | Transmission time | L / R (frame size / bandwidth) |
| Tₚ | Propagation delay (one-way) | d / v (distance / speed) |
| RTT | Round-trip time | 2 × Tₚ (ignoring processing) |
| a | Normalized delay | Tₚ / Tₜ |
The Timing Diagram:
Pipelined Transmission: Timing Diagram (Window = 4)

Time:     0      Tₜ     2Tₜ    3Tₜ    4Tₜ            Tₜ+RTT
Sender:   [F0]───[F1]───[F2]───[F3]───(window full)───[F4]───[F5]...
                                                       ▲ ACK 0 arrives, window opens
Receiver:     ...[F0]  ...[F1]  ...[F2]  ...[F3]       (each frame arrives Tₚ after it is sent)
              ACK 0 ◄   ACK 1 ◄  ACK 2 ◄  ACK 3 ◄

Timeline Analysis:
• Frame 0 starts transmission at t = 0
• Frame 0 finishes transmission at t = Tₜ
• Frame 0 arrives at the receiver at t = Tₜ + Tₚ
• ACK 0 leaves the receiver at t = Tₜ + Tₚ (ignoring processing)
• ACK 0 arrives at the sender at t = Tₜ + 2Tₚ = Tₜ + RTT
• Frames 1, 2, 3 are transmitted during [Tₜ, 4Tₜ]
• The window opens for Frame 4 at t = Tₜ + RTT
• The sender is busy transmitting from t = 0 to t = 4Tₜ (4 frames); the next transmission is possible at t = max(4Tₜ, Tₜ + RTT)

If 4Tₜ ≥ Tₜ + RTT (i.e., 3Tₜ ≥ RTT, or 3 ≥ 2a): continuous transmission; the ACK arrives before the window is exhausted.
If 4Tₜ < Tₜ + RTT (i.e., 3Tₜ < RTT, or 3 < 2a): dead time between window exhaustion and the first ACK's arrival.

Condition for Continuous Transmission:
The sender transmits continuously if frames keep the channel busy until the first ACK returns:
Time to transmit W frames ≥ Time for first ACK to return
W × Tₜ ≥ Tₜ + RTT
(W - 1) × Tₜ ≥ RTT = 2Tₚ
W - 1 ≥ 2a
W ≥ 2a + 1
Optimal Window Size:
W_optimal = ⌈2a + 1⌉
Any window size ≥ 2a + 1 achieves 100% utilization.
The formula W ≥ 2a + 1 includes '+1' because the first frame spends Tₜ being transmitted before propagation begins. Just having 2a frames isn't enough—you need one more to cover that initial transmission time.
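The condition is easy to evaluate numerically. In the sketch below, the frame size, bandwidth, distance, and propagation speed are illustrative assumptions (a satellite-like hop), not values from the text:

```python
import math

def optimal_window(frame_bits: float, bandwidth_bps: float,
                   distance_m: float, prop_speed_mps: float = 2e8) -> tuple:
    """Smallest window satisfying W >= 2a + 1 (continuous transmission)."""
    t_t = frame_bits / bandwidth_bps      # transmission time  Tt = L / R
    t_p = distance_m / prop_speed_mps     # propagation delay  Tp = d / v
    a = t_p / t_t                         # normalized propagation delay
    return a, math.ceil(2 * a + 1)

# Illustrative example: 12,000-bit frames at 1 Mbps over a ~36,000 km satellite hop
a, w = optimal_window(12_000, 1e6, 36_000_000, prop_speed_mps=3e8)
print(f"a = {a:.0f}  ->  need W >= {w} frames for 100% utilization")
```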
Let's derive the efficiency (utilization) formulas for pipelined transmission.
Without Errors:
Channel utilization U is the fraction of time the sender is transmitting useful data:
Pipelined Protocol Efficiency (No Errors)

Definitions:
  Tₜ = frame transmission time = L / R
  Tₚ = one-way propagation delay
  a  = Tₚ / Tₜ (normalized propagation delay)
  W  = window size (frames)

Cycle Time Analysis:

Case 1: Window smaller than optimal (W < 2a + 1)
  The sender transmits W frames in time W × Tₜ, then idles until the first ACK returns at Tₜ + RTT.
  Cycle time = max(W × Tₜ, Tₜ + RTT) = Tₜ + RTT = Tₜ(1 + 2a)
  Useful transmission time = W × Tₜ
  Utilization: U = (W × Tₜ) / (Tₜ(1 + 2a)) = W / (1 + 2a)

Case 2: Window at or above optimal (W ≥ 2a + 1)
  ACKs arrive before the window empties, so the sender transmits continuously.
  Utilization: U = 1 (100%)

Combined Formula:
  U(W, a) = W / (1 + 2a)   if W < 1 + 2a
  U(W, a) = 1              if W ≥ 1 + 2a

Example Calculations:
  Satellite link, a = 250 (Tₚ = 250 ms, Tₜ = 1 ms):
    Optimal window: W ≥ 1 + 2(250) = 501 frames
    Stop-and-Wait (W = 1): U = 1 / 501 ≈ 0.002 = 0.2%
    W = 10: U = 10 / 501 ≈ 2.0%;  W = 100: U ≈ 20%;  W = 501: U = 100%
  Local Ethernet, a = 0.01 (Tₚ = 0.01 ms, Tₜ = 1 ms):
    Optimal window: W ≥ 1.02, so W = 2 frames suffices
    Stop-and-Wait (W = 1): U = 1 / 1.02 ≈ 98%;  W = 2: U = 100%
  Note: for short links, even small windows achieve high efficiency.

| W | a=0.1 (LAN) | a=1 | a=10 | a=100 | a=250 (satellite) |
|---|---|---|---|---|---|
| 1 | 83% | 33% | 4.8% | 0.5% | 0.2% |
| 3 | 100% | 100% | 14% | 1.5% | 0.6% |
| 7 | 100% | 100% | 33% | 3.5% | 1.4% |
| 15 | 100% | 100% | 71% | 7.5% | 3.0% |
| 63 | 100% | 100% | 100% | 31% | 13% |
| 127 | 100% | 100% | 100% | 63% | 25% |
| 511 | 100% | 100% | 100% | 100% | 100% |
With a = 250 (satellite), HDLC's normal 7-frame window achieves only 1.4% utilization. Extended HDLC (127-frame window) achieves 25%. Even that leaves 75% of capacity idle! High-BDP links demand large windows—this is why protocols like TCP evolved window scaling.
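These figures follow directly from the combined formula. As a quick check, a small Python function reproduces a few of the table entries above (the specific spot checks are the only additions here):

```python
def utilization(w: int, a: float) -> float:
    """Error-free channel utilization U(W, a) = min(1, W / (1 + 2a))."""
    return min(1.0, w / (1 + 2 * a))

# Spot checks against the table above
print(f"W=7,   a=250: {utilization(7, 250):.1%}")     # ~1.4%  (HDLC window, satellite)
print(f"W=127, a=250: {utilization(127, 250):.0%}")   # ~25%   (extended HDLC)
print(f"W=15,  a=10 : {utilization(15, 10):.0%}")     # ~71%
```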
In real networks, frames get corrupted or lost. Error recovery introduces additional overhead that reduces effective throughput.
Error Model:
Let p = probability a frame is received in error (or lost).
Successful transmission probability = 1 - p
Go-Back-N with Errors:
When a frame is lost in Go-Back-N, the receiver discards everything after the gap, and the sender, once its timer expires, retransmits the lost frame and everything sent after it, so a single loss can force up to W frames to be re-sent.
Expected transmissions per frame (geometric series):
E[transmissions] = 1 + p × W + p² × 2W + p³ × 3W + ...
≈ 1 / (1 - p × W) for p × W << 1
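A quick numeric check of that approximation; the values of p and W below are arbitrary illustrations:

```python
def expected_tx_series(p: float, w: int, terms: int = 50) -> float:
    """Partial sum of E[transmissions] = 1 + pW + 2p²W + 3p³W + ..."""
    return 1 + sum(k * (p ** k) * w for k in range(1, terms))

p, w = 0.01, 8
print(f"series     ≈ {expected_tx_series(p, w):.4f}")   # ≈ 1.082
print(f"1/(1 - pW) ≈ {1 / (1 - p * w):.4f}")             # ≈ 1.087
```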
Efficiency with Frame Errors

Parameters:
  p = frame error probability
  W = window size
  a = normalized propagation delay (Tₚ/Tₜ)

Go-Back-N efficiency (with errors):
  When a frame fails, all W frames in the window must be retransmitted; if the window fills the pipe (W ≥ 1 + 2a), that is roughly 1 + 2a frames.
  Average transmissions per successful frame:
    Nr_GBN = (1 − p) + (1 + 2a)p + (1 + 2a)²p² + ... = (1 − p) / (1 − (1 + 2a)p) ≈ 1 + 2ap   for small p
  For W ≥ 1 + 2a:  U_GBN = (1 − p) / (1 + 2ap)
  For W < 1 + 2a:  U_GBN = W(1 − p) / [(1 + 2a)(1 − p + Wp)]

Selective Repeat efficiency (with errors):
  When a frame fails, only that frame is retransmitted; other frames are buffered and delivered once the gap fills.
  Average transmissions per successful frame:
    Nr_SR = 1 + p + p² + p³ + ... = 1 / (1 − p)
  For W ≥ 1 + 2a:  U_SR = 1 − p
  For W < 1 + 2a:  U_SR = W(1 − p) / (1 + 2a)

Comparison (a = 100, W = 201, p = 0.01 = 1%):
  Go-Back-N:        U_GBN = (1 − 0.01) / (1 + 2 × 100 × 0.01) = 0.99 / 3 = 33%
  Selective Repeat: U_SR = 1 − 0.01 = 99%
  A 1% error rate costs GBN roughly two-thirds of its throughput; SR stays near-optimal.

Summary: GBN degradation grows with a × p (sensitive to both the error rate and the bandwidth-delay product); SR degradation grows with p alone, independent of a. For high-BDP, error-prone links (satellite), SR vastly outperforms GBN.

The table below assumes a = 100 with a full window (W ≥ 1 + 2a), as in the comparison scenario; the last column is how far GBN falls short of SR, in percentage points.

| Error Rate (p) | GBN Efficiency | SR Efficiency | GBN Shortfall vs. SR (pct. points) |
|---|---|---|---|
| 0% (p=0) | 100% | 100% | — |
| 0.1% | 83% | 99.9% | 17% |
| 0.5% | 50% | 99.5% | 50% |
| 1% | 33% | 99% | 66% |
| 2% | 20% | 98% | 79% |
| 5% | 9% | 95% | 86% |
GBN is acceptable for low-error, low-BDP links (short distances, wired connections). SR is essential for high-error or high-BDP links (satellite, wireless, long-haul). Modern TCP uses SACK to achieve SR-like efficiency while maintaining compatibility.
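The table can be regenerated from the closed-form expressions above; a brief sketch, assuming a = 100 and a full window (W ≥ 1 + 2a) as in the comparison scenario:

```python
def u_gbn(p: float, a: float) -> float:
    """Go-Back-N utilization with frame error rate p, full window (W >= 1 + 2a)."""
    return (1 - p) / (1 + 2 * a * p)

def u_sr(p: float) -> float:
    """Selective Repeat utilization with frame error rate p, full window."""
    return 1 - p

a = 100
for p in (0.001, 0.005, 0.01, 0.02, 0.05):
    gbn, sr = u_gbn(p, a), u_sr(p)
    print(f"p = {p:.3f}:  GBN {gbn:6.1%}   SR {sr:6.1%}   shortfall {sr - gbn:5.1%}")
```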
Even with optimal window sizes, errors cause pipeline stalls—periods when the sender cannot usefully transmit, waiting for retransmission or gap resolution.
Go-Back-N Pipeline Stall:
When a frame is lost, the receiver discards everything after the gap, the duplicate ACKs it returns make no progress, and the sender must wait for its retransmission timer to expire before going back and resending the window.
Stall duration ≈ RTT + timeout (often 2× RTT as a conservative timeout)
Selective Repeat Pipeline Stall:
With SR, the receiver buffers the frames that arrive after the gap and the sender retransmits only the missing frame, so later frames keep flowing.
Stall duration ≈ minimal (only the lost frame's slot is affected)
Pipeline Behavior During Loss Event

Scenario: W = 4, Frame 1 lost, Frames 0, 2, 3 received correctly

Go-Back-N:
  Sender:   [F0][F1][F2][F3]_______________[F1'][F2'][F3'][F4]...
                 ▲ lost                     └── retransmissions, then new data
  Receiver: [OK]  X  [discard][discard]     [OK][OK][OK][OK]
            ACK1     ACK1     ACK1          ACK2 ACK3 ACK4 ACK5
  Timeline:
    T = 0:        send F0-F3
    T = 4Tₜ:      window full, waiting for ACKs
    T = Tₜ+RTT:   ACK 1 arrives (acknowledging F0)
    T = 4Tₜ+RTT:  duplicate ACK 1s for F2, F3 (no progress)
    T = 4Tₜ+TO:   TIMEOUT; retransmit F1, F2, F3
    T = ...+RTT:  ACKs for the retransmissions arrive
  Stall: ≈ RTT + timeout (potentially 2-3× RTT)

Selective Repeat:
  Sender:   [F0][F1][F2][F3][F4][F5][F6]...[F1']...
                 ▲ lost                     └── retransmission of F1 only
  Receiver: [OK]  X  [buffer][buffer]...    [OK → deliver 1, 2, 3]
            ACK0     ACK2    ACK3           ACK1
  Timeline:
    T = 0-6Tₜ:    send F0-F6 (continuous; assumes the window is large enough)
    T = Tₜ+RTT:   ACK 0 arrives, window slides to [1-7]
    T = 3Tₜ+RTT:  ACK 2 arrives, but F1 is still missing, so the window cannot slide past it
    T = ...:      NAK for F1 (or F1's timer expires); retransmit F1 only
    T = ...+RTT:  ACK 1 arrives, window slides, buffered frames are delivered
  Stall: only F1's slot is affected; other frames proceed. Overall delay ≈ 1 RTT for the lost frame, none for the others.

Key Insight: in GBN, a loss creates a "bubble" in the pipeline and all subsequent work is wasted; in SR, a loss creates a "hole" that is filled without disrupting the rest of the pipe.

Latency Implications:
For latency-sensitive applications, pipeline stalls matter beyond pure throughput: every frame queued behind the stall is delayed, not just the one that was lost.
For a satellite link (RTT = 500 ms), a GBN stall means 0.5-1.5 seconds of blocked data. This causes visible pauses in streaming, sluggish interactive sessions, and disrupted real-time applications.
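To put rough numbers on that, a back-of-envelope sketch of the two recovery styles on this satellite link; the 2× RTT timeout is the conservative assumption mentioned earlier, not a measured value:

```python
rtt = 0.5                       # satellite round-trip time (seconds)
timeout = 2 * rtt               # conservative timeout assumed above

# Go-Back-N: the pipeline is blocked from the loss until the timeout fires
# and the retransmitted window's ACKs return (roughly RTT + timeout).
gbn_stall = rtt + timeout

# Selective Repeat: only the lost frame is delayed, by roughly one extra RTT.
sr_delay = rtt

print(f"GBN stall ≈ {gbn_stall:.1f} s, blocking all in-order delivery")
print(f"SR delay  ≈ {sr_delay:.1f} s, and only for the lost frame")
```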
TCP's 'fast retransmit' (triggered by 3 duplicate ACKs) reduces stall time by detecting loss before timeout. This hybrid approach—using duplicate ACKs as implicit NAKs—brings some SR benefits to cumulative-ACK systems.
Beyond the theoretical formulas, several practical factors affect pipelined performance:
1. ACK Compression and Loss:
In practice, ACKs may be delayed, lost, or bunched together ("compressed") by queueing on the return path.
Cumulative ACKs provide robustness—a single ACK(n) implicitly acknowledges all frames up to n, so lost intermediate ACKs don't cause retransmissions.
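As an illustration, here is a minimal, hypothetical sketch of sender-side bookkeeping under cumulative ACKs; the class and method names are invented for the example:

```python
from collections import deque

class CumulativeAckSender:
    """Toy sender window: ACK(n) implicitly acknowledges every frame with seq <= n."""

    def __init__(self) -> None:
        self.unacked = deque()           # sequence numbers awaiting acknowledgment

    def send(self, seq: int) -> None:
        self.unacked.append(seq)

    def on_ack(self, n: int) -> None:
        # Slide the window past everything the cumulative ACK covers.
        while self.unacked and self.unacked[0] <= n:
            self.unacked.popleft()

sender = CumulativeAckSender()
for seq in range(5):
    sender.send(seq)
sender.on_ack(3)                 # ACKs 0-2 were lost in transit; ACK(3) still covers them
print(list(sender.unacked))      # [4] -- no retransmissions needed
```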
2. Receiver Buffer Pressure:
Large windows at the sender mean potentially large out-of-order buffers at the receiver; the window the receiver advertises must reflect the buffer space it can actually commit.
3. Congestion Considerations:
Pipelining creates bursts of packets that can overflow queues in intermediate routers and cause congestion.
Modern protocols such as TCP therefore combine pipelining with congestion control, growing and shrinking the effective window in response to observed loss and delay.
The interplay between flow control window (receiver-limited) and congestion window (network-limited) determines actual transmission rate.
A large window enables high throughput but doesn't guarantee it. If the network path has bottleneck bandwidth B, the maximum throughput is B regardless of window size; the window merely ensures you CAN use that bandwidth. Congestion or receiver limits may still constrain you below it.
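A quick calculation makes the point; the window, RTT, and bottleneck values below are illustrative assumptions:

```python
# Achievable rate is bounded by min(window / RTT, bottleneck bandwidth).
window_bytes = 4 * 1024 * 1024          # 4 MB window (illustrative)
rtt_s = 0.05                            # 50 ms RTT (illustrative)
bottleneck_bps = 100e6                  # 100 Mbps bottleneck (illustrative)

window_limited_bps = window_bytes * 8 / rtt_s
achievable_bps = min(window_limited_bps, bottleneck_bps)

print(f"window-limited rate: {window_limited_bps / 1e6:.0f} Mbps")
print(f"achievable rate:     {achievable_bps / 1e6:.0f} Mbps")   # capped by the bottleneck
```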
Pipelining is the mechanism that transforms sliding window protocols from correct-but-slow to correct-and-fast. The key insights: a window of at least 2a + 1 frames keeps the channel full; below that, utilization is W / (1 + 2a); the window must cover the bandwidth-delay product; and the choice between Go-Back-N and Selective Repeat determines how gracefully that utilization survives errors.
Module Complete:
This completes our comprehensive treatment of the Sliding Window Protocol: the window abstraction, sender and receiver mechanics, sequence number requirements, and the pipelining and efficiency analysis covered on this page.
With this foundation, you're prepared to study the specific protocol variants—Go-Back-N and Selective Repeat—in complete detail in subsequent modules.
Congratulations! You now possess a comprehensive understanding of sliding window protocols—from the window abstraction through sender/receiver mechanics, sequence number mathematics, to pipelining efficiency. This knowledge forms the foundation for understanding all modern reliable data transfer protocols, from HDLC to TCP.