Over the preceding pages, we've traced the evolution of TCP congestion control from Tahoe's crisis response in 1988 through CUBIC's modern dominance. Each variant solved specific problems while introducing new trade-offs. Now we synthesize this knowledge into a unified framework for understanding, selecting, and diagnosing TCP congestion control algorithms.
This comparison page serves that purpose by examining the variants across four dimensions: algorithmic behavior, performance characteristics, deployment status, and appropriate use cases.
By the end of this page, you will be able to compare TCP variants' behaviors under different scenarios, select appropriate congestion control for specific use cases, diagnose performance issues related to congestion control, and understand how each variant fits into TCP's evolution.
Let's begin with a comprehensive comparison matrix covering all major aspects of each TCP variant.
| Aspect | Tahoe | Reno | NewReno | CUBIC |
|---|---|---|---|---|
| Year Introduced | 1988 | 1990 | 1999 (RFC 2582) | 2006 (Linux default) |
| Standard Document | RFC 5681 (historical) | RFC 5681 (historical) | RFC 6582 | RFC 8312 |
| Slow Start | Yes (exp growth) | Yes (exp growth) | Yes (exp growth) | Yes + HyStart |
| Congestion Avoidance | Linear (1 MSS/RTT) | Linear (1 MSS/RTT) | Linear (1 MSS/RTT) | Cubic function |
| Fast Retransmit | 3 dup ACKs | 3 dup ACKs | 3 dup ACKs | 3 dup ACKs |
| Fast Recovery | No (back to SS) | Yes (stay in CA) | Yes (handles partial ACKs) | Yes (NewReno-style) |
| Loss Response (FR) | cwnd = 1 MSS | cwnd = ssthresh | cwnd stays elevated during FR | cwnd = 0.7 × cwnd |
| Loss Response (RTO) | cwnd = 1 MSS | cwnd = 1 MSS | cwnd = 1 MSS | cwnd = 1 MSS |
| ssthresh on loss | cwnd/2 | cwnd/2 | cwnd/2 | cwnd × 0.7 |
| RTT Fairness | Poor (RTT-dependent) | Poor (RTT-dependent) | Poor (RTT-dependent) | Good (time-based) |
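The loss-response rows of the matrix can be condensed into a short sketch. This is illustrative Python, not a real TCP stack: the function names are mine, and the simplified Fast Recovery ignores window inflation and SACK.

```python
# A minimal sketch of the matrix's loss-response rules (windows in MSS).
# Illustrative only: real stacks also inflate cwnd during Fast Recovery.

# Fraction of cwnd kept as the new ssthresh after a loss.
BETA = {"tahoe": 0.5, "reno": 0.5, "newreno": 0.5, "cubic": 0.7}

def on_triple_dupack(variant: str, cwnd: float) -> tuple[float, float]:
    """(new_cwnd, new_ssthresh) after fast retransmit (3 dup ACKs)."""
    ssthresh = BETA[variant] * cwnd
    if variant == "tahoe":
        return 1.0, ssthresh          # no Fast Recovery: restart Slow Start
    return ssthresh, ssthresh         # Fast Recovery: continue in CA

def on_rto(variant: str, cwnd: float) -> tuple[float, float]:
    """(new_cwnd, new_ssthresh) after a retransmission timeout."""
    return 1.0, BETA[variant] * cwnd  # every variant collapses to 1 MSS
```

Note how the RTO path is identical everywhere: the timeout response is the one behavior all four variants share.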
The key differences lie in the Fast Recovery and loss-response rows: Tahoe resets to cwnd = 1 on every loss, Reno and NewReno resume congestion avoidance at half the window (with NewReno surviving multiple losses per window), and CUBIC keeps 70% of the window and regrows it along a time-based cubic curve.
Let's visualize how each algorithm behaves under the same network conditions. Consider a scenario: all four variants run over the same path, experiencing a single packet loss at t = 5s and a two-packet loss event at t = 10s.
Each algorithm's cwnd trajectory reveals its fundamental character.
[Figure: Congestion Window Over Time. cwnd (segments, up to ~850) plotted against time (s, 0-14) for all four variants, with a single packet loss at t = 5s and a multiple-loss event (2 packets) at t = 10s. CUBIC's curve runs highest, then NewReno, Reno, and Tahoe at the bottom.]

Observations:
- Tahoe: deep drops to cwnd = 1 on every loss event
- Reno: shallow drop on the first loss; deep drop on the second (RTO timeout)
- NewReno: shallow drops on both events (handles multiple losses)
- CUBIC: shallowest drops (30% reduction) plus the fastest recovery

Interpreting the Trajectories:
Tahoe (bottom): The characteristic deep valleys after each loss event. Every loss triggers a complete reset to cwnd=1, followed by Slow Start. Recovery takes the longest.
Reno (lower middle): Shallow recovery from the first loss (Fast Recovery works). However, the second event (multiple losses) causes a timeout, resulting in a Tahoe-like deep valley.
NewReno (upper middle): Both loss events show moderate recovery. The multiple-loss event at t=10s is handled within Fast Recovery without timeout.
CUBIC (top): Shallowest drops (retaining 70% of window) and fastest recovery. The time-based cubic function rapidly returns to W_max vicinity.
When analyzing packet captures or system metrics, the cwnd trajectory pattern immediately reveals which congestion control behavior is active. Deep valleys = aggressive backoff (Tahoe-like). Shallow valleys = modern congestion control. Cubic curves = CUBIC specifically.
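To see why the trajectory shape is diagnostic, here is a toy Reno-style simulation. It is my own simplification (RTT-granularity steps, no recovery-phase detail), but it reproduces the sawtooth you would recognize in a capture:

```python
# Toy model: Slow Start doubles cwnd per RTT up to ssthresh, Congestion
# Avoidance adds 1 MSS per RTT, and each loss halves the window (Reno-style).
def reno_trajectory(rtts: int, loss_at: set[int], ssthresh: float = 32.0):
    cwnd, samples = 1.0, []
    for t in range(rtts):
        samples.append(cwnd)
        if t in loss_at:              # fast retransmit + Fast Recovery
            ssthresh = cwnd / 2
            cwnd = ssthresh           # shallow valley, not a reset to 1
        elif cwnd < ssthresh:
            cwnd *= 2                 # Slow Start: exponential rise
        else:
            cwnd += 1                 # Congestion Avoidance: linear climb
    return samples

traj = reno_trajectory(12, loss_at={8})
# Exponential rise, linear climb, then a halving at t = 8: a Reno sawtooth.
```

Swapping the halving for `cwnd = 1.0` yields the Tahoe-style deep valley; keeping 70% instead would mimic CUBIC's shallow one.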
Different network conditions favor different algorithms. Understanding these trade-offs is essential for system administration and performance tuning.
| Network Condition | Best Performer | Worst Performer | Notes |
|---|---|---|---|
| Low BDP (LAN) | All similar | None | TCP-friendliness makes CUBIC match Reno |
| High BDP (WAN) | CUBIC | Tahoe | CUBIC's time-based growth shines |
| Single packet loss | Reno/NewReno/CUBIC | Tahoe | Fast Recovery avoids Slow Start |
| Multiple packet loss | NewReno/CUBIC | Reno | Reno needs RTO for second+ losses |
| Mixed RTT competition | CUBIC | Tahoe/Reno/NewReno | CUBIC's RTT-fairness |
| Bursty loss | CUBIC | Tahoe | CUBIC's gentler reduction preserves momentum |
| Very high loss rate | All struggle | All struggle | Loss-based CC reaches floor quickly |
Throughput Estimation:
For loss-based TCP variants, steady-state throughput can be estimated using the Mathis formula:
Throughput ≈ (MSS / RTT) × (C / √p)
Where MSS is the maximum segment size, RTT is the round-trip time, p is the packet loss probability, and C is a constant on the order of 1 (≈ √(3/2) ≈ 1.22 in the original Mathis derivation).
This formula shows that throughput scales inversely with RTT for Reno/NewReno but is more resilient for CUBIC due to its time-based growth.
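The formula is easy to evaluate directly. A quick sketch (the function name is mine; C = √(3/2) as in the Mathis derivation above):

```python
from math import sqrt

def mathis_throughput(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Steady-state Reno-style throughput estimate, in bits per second."""
    c = sqrt(1.5)                     # ≈ 1.22, the Mathis constant
    return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss_rate))

# 1460-byte MSS, 100 ms RTT, 0.01% loss: roughly 14 Mbps
print(round(mathis_throughput(1460, 0.100, 0.0001) / 1e6, 1))
```

Halving the RTT exactly doubles the estimate, which is the RTT-unfairness of Reno/NewReno made explicit.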
Let's analyze specific real-world scenarios to understand how each variant would perform.
SCENARIO 1: Web Page Load (Short Flow)
======================================
Setup: 100ms RTT, 10 Mbps, 200 KB page (14 segments at 1460B each)

Timeline (starting from handshake complete):

Tahoe/Reno/NewReno:
  RTT 0: SS, cwnd=1, send 1 seg
  RTT 1: SS, cwnd=2, send 2 segs
  RTT 2: SS, cwnd=4, send 4 segs
  RTT 3: SS, cwnd=8, send 7 segs (completes transfer)
  Total: 3 RTTs = 300ms

CUBIC:
  Same behavior (still in Slow Start, TCP-friendliness active)
  Total: 3 RTTs = 300ms

Conclusion: For very short flows, all variants perform similarly. Slow Start dominates; congestion control differences are irrelevant.

SCENARIO 2: Large File Transfer (Long Flow)
===========================================
Setup: 100ms RTT, 100 Mbps, 1 GB file, 0.1% random loss

Expected behavior:
  BDP = 100 Mbps × 100ms = 1.25 MB = ~856 segments
  With 0.1% loss, expect loss roughly every 1000 segments

Tahoe:
  - Reaches ~850 seg window
  - Single loss → cwnd = 1, ssthresh = 425
  - Climb back to 425: 9 RTTs (Slow Start)
  - Climb to 850: 425 more RTTs (42.5 seconds!)
  - Average window: ~400 segments (47% of optimal)

Reno:
  - Single loss → cwnd = 425, no Slow Start
  - Climb back to 850: 425 RTTs (42.5 seconds)
  - But if multiple losses occur: back to Tahoe behavior
  - Average window: ~500 segments (59% of optimal)

NewReno:
  - Handles multiple losses gracefully
  - More consistent recovery
  - Average window: ~550 segments (65% of optimal)

CUBIC:
  - Loss → cwnd = 595 (70% of 850)
  - K = ∛(850 × 0.3 / 0.4) = 8.6 seconds
  - Returns to 850 in ~8.6 seconds (vs. 42.5 for Reno)
  - Average window: ~700 segments (82% of optimal)

Conclusion: CUBIC achieves ~75% higher throughput than Tahoe and ~40% higher than Reno on this high-BDP path.
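Scenario 2's CUBIC numbers follow directly from the cubic window function of RFC 8312, W(t) = C(t − K)³ + W_max with K = ∛(W_max(1 − β)/C). A quick check, in a simplified form that ignores the TCP-friendly region:

```python
# CUBIC's post-loss window (RFC 8312, simplified: segments and seconds,
# no TCP-friendly region). C = 0.4 and beta = 0.7, as in Scenario 2.
C_CUBIC, BETA = 0.4, 0.7

def cubic_k(w_max: float) -> float:
    """Seconds until the window returns to w_max after a loss."""
    return (w_max * (1 - BETA) / C_CUBIC) ** (1 / 3)

def cubic_window(t: float, w_max: float) -> float:
    """Window size t seconds after the loss event."""
    return C_CUBIC * (t - cubic_k(w_max)) ** 3 + w_max

k = cubic_k(850)
print(round(k, 1))                    # ≈ 8.6 s, matching Scenario 2
print(round(cubic_window(0, 850)))    # 595 segments right after the loss
print(round(cubic_window(k, 850)))    # back at 850 segments at t = K
```

The concave-then-convex shape around t = K is exactly the plateau near W_max described earlier.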
SCENARIO 3: Competing Flows with Different RTTs
===============================================
Setup: Two flows sharing a 100 Mbps bottleneck
  Flow A: 20ms RTT, Flow B: 200ms RTT
  Fair share: 50 Mbps each

Reno/NewReno actual share:
  Bandwidth ∝ 1/RTT
  Flow A: ~90 Mbps (10× RTT advantage)
  Flow B: ~10 Mbps
  Fairness ratio: 9:1 (extremely unfair)

CUBIC actual share:
  Time-based growth equalizes
  Flow A: ~55 Mbps
  Flow B: ~45 Mbps
  Fairness ratio: ~1.2:1 (much fairer)

Conclusion: CUBIC provides dramatically fairer bandwidth allocation when flows have different RTTs.

SCENARIO 4: Burst Loss Event (3 packets lost)
=============================================
Setup: cwnd = 100 segments; segments 50, 53, and 58 lost

Reno:
  - Detects loss of seg 50 via 3 dupacks
  - Fast Recovery: retransmit 50, cwnd = 50
  - Partial ACK reveals seg 53 loss → Reno exits FR!
  - Must wait for RTO on seg 53 (~1 second)
  - Then another RTO on seg 58
  - Total recovery time: ~2-3 seconds

NewReno:
  - Detects seg 50, retransmits
  - Partial ACK (seg 53) → stay in FR, retransmit 53
  - Partial ACK (seg 58) → stay in FR, retransmit 58
  - Full ACK → exit FR
  - Total recovery time: ~3 RTTs (~300ms)

CUBIC (with NewReno-style recovery):
  - Same as NewReno recovery
  - Plus gentler window reduction (70% vs 50%)
  - Total recovery time: ~3 RTTs, higher post-recovery cwnd

Conclusion: NewReno-style partial ACK handling is critical for multiple-loss scenarios.

Given the analysis above, here are practical guidelines for selecting and configuring TCP congestion control.
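As a back-of-the-envelope check on Scenario 4's recovery times, the difference reduces to a rough model (my own simplification; real RTO values adapt to the measured RTT):

```python
def recovery_time(variant: str, n_losses: int,
                  rtt: float = 0.1, rto: float = 1.0) -> float:
    """Rough seconds to repair n losses in one window, per Scenario 4."""
    if variant == "newreno":
        return n_losses * rtt             # one hole retransmitted per RTT
    if variant == "reno":
        # First loss repaired via Fast Recovery; each later loss costs an RTO.
        return rtt + (n_losses - 1) * rto
    raise ValueError(variant)

print(recovery_time("reno", 3))       # ~2.1 s: the "~2-3 seconds" above
print(recovery_time("newreno", 3))    # ~0.3 s: the "~3 RTTs (~300ms)" above
```

The gap grows linearly with the number of losses per window, which is why burst loss is where Reno falls apart.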
| Use Case | Recommended | Avoid |
|---|---|---|
| General Internet server | CUBIC | Tahoe, Reno |
| High-latency links (satellite) | CUBIC or Hybla | Any RTT-dependent variant |
| Data center (with ECN) | DCTCP | Standard loss-based CC |
| Video streaming | BBR or CUBIC | Tahoe |
| IoT/Embedded | NewReno (if resource-limited) | CUBIC (may be overkill) |
| Research/Education | All (for comparison) | N/A |
# Check current algorithm
cat /proc/sys/net/ipv4/tcp_congestion_control
# List available algorithms
cat /proc/sys/net/ipv4/tcp_available_congestion_control
# Change algorithm (persistent requires sysctl.conf)
echo 'bbr' | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
When TCP connections underperform, congestion control behavior is often involved. Here's how to diagnose common issues.
SYMPTOM: Very slow throughput despite available bandwidth
=========================================================
Possible Causes:
1. Stuck in Slow Start (ssthresh too low)
   - Diagnose: Check ss -i output for the 'ssthresh' value
   - Fix: If ssthresh is artificially low, identify the cause of past losses
2. RTT-unfairness (competing with low-RTT flows)
   - Diagnose: Compare your RTT vs. competing flows
   - Fix: Ensure CUBIC is active (not Reno)
3. Receiver window limited
   - Diagnose: Check netstat for 'Recv-Q' > 0
   - Fix: Receiver-side buffer tuning

SYMPTOM: Periodic throughput drops
=========================================================
Possible Causes:
1. Normal congestion control oscillation
   - Diagnose: Loss events correlate with drops
   - Fix: Expected behavior; consider RED/ECN at the router
2. Multiple losses causing RTO (Reno behavior)
   - Diagnose: Long pauses (~1-3 seconds)
   - Fix: Upgrade to NewReno or CUBIC if using Reno
3. CUBIC's plateau region
   - Diagnose: cwnd stable near the previous W_max
   - Fix: Expected; the window will grow convexly if capacity is available

SYMPTOM: Poor performance on high-latency paths
=========================================================
Possible Causes:
1. Linear CA growth too slow (Reno/NewReno)
   - Diagnose: cwnd grows ~1 segment/RTT (very slow if RTT is large)
   - Fix: Use CUBIC for time-based growth
2. Initial window too small
   - Diagnose: First RTT carries only 1-4 segments
   - Fix: Enable RFC 6928 IW10 if not already
3. Bufferbloat adding latency
   - Diagnose: RTT much higher than baseline during transfer
   - Fix: Consider BBR for latency-limited paths

DIAGNOSTIC TOOLS:
=================
# Live stats with cwnd, ssthresh, rtt
ss -i

# Detailed per-socket statistics (requires decoding)
cat /proc/net/tcp

# Wireshark/tcpdump for packet-level analysis
tcpdump -i eth0 'tcp port 80' -w capture.pcap

# netstat for a retransmission summary
netstat -s | grep -i retrans

# iperf3 for controlled testing
iperf3 -c server -p 5201 --get-server-output

# Relevant sysctl tunables
sysctl net.ipv4.tcp_congestion_control
sysctl net.core.rmem_max net.core.wmem_max

Understanding how TCP congestion control evolved provides context for current and future developments.
TCP CONGESTION CONTROL EVOLUTION
================================

1974  Original TCP (RFC 793)
      - No congestion control whatsoever
      - Relied on receiver flow control only

1986  Internet congestion collapse

1988  TCP Tahoe (Van Jacobson, BSD 4.3 Tahoe)
      + Slow Start
      + Congestion Avoidance (AIMD)
      + Fast Retransmit
      - Full cwnd reset on all losses

1990  TCP Reno (BSD 4.3 Reno)
      + Fast Recovery (avoid SS on 3 dupacks)
      - Cannot handle multiple losses per window

1996  TCP SACK (RFC 2018)
      + Selective acknowledgments
      + Precise loss information
      + Parallel retransmission

1999  TCP NewReno (RFC 2582/3782/6582)
      + Partial ACK handling
      + Complete multiple-loss recovery
      - Still one retransmit per RTT

2004  TCP BIC (Binary Increase Congestion control)
      + Binary search for capacity
      + Better high-BDP performance
      - Complex, aggressive

2006  TCP CUBIC (Linux default)
      + Cubic window function
      + RTT-independent growth
      + TCP-friendliness mode
      = Currently the dominant algorithm

2016  TCP BBR (Google)
      + Model-based (not loss-based)
      + Estimates bandwidth and RTT
      + Minimizes buffer occupancy
      - Fairness debates ongoing

2021+ Future: BBRv2/v3, QUIC CC, machine-learning CC
      - Continued evolution
      - Application-layer congestion control
      - AI/ML-assisted algorithms

Early algorithms (Tahoe through CUBIC) are reactive: they wait for loss, then adjust. Modern approaches (BBR, some QUIC variants) are proactive: they actively estimate network capacity and adjust preemptively. This shift reduces latency and buffer occupancy but requires more sophisticated modeling.
While CUBIC dominates today's Internet, congestion control continues to evolve. Understanding emerging approaches prepares you for the future.
The CUBIC → BBR Debate:
BBR represents a philosophical shift from loss-based to model-based congestion control. The debate is ongoing:
| Aspect | CUBIC Argument | BBR Argument |
|---|---|---|
| Safety | Loss signal is unambiguous | Models may be wrong |
| Buffers | Some buffering smooths bursts | Buffers cause latency |
| Fairness | Well-understood, converges | BBRv1 had issues; improving |
| Complexity | Simple, proven | More complex modeling |
| Latency | Fills buffers (higher latency) | Minimizes buffers (low latency) |
For most deployments, CUBIC remains the safe choice. BBR offers advantages for specific use cases (streaming, latency-sensitive applications) with careful deployment.
Follow the IETF TCPM working group, the QUIC working group, and Google's BBR development for the latest in congestion control. The Linux kernel networking list also tracks implementation changes.
We've completed our comprehensive journey through TCP congestion control variants, from Tahoe's emergency response to CUBIC's sophisticated time-based algorithms. Let's consolidate the key takeaways from the entire module:
| Variant | Key Innovation | When Released | Current Use |
|---|---|---|---|
| Tahoe | Slow Start + CA + Fast Retransmit | 1988 | Historical only |
| Reno | Fast Recovery | 1990 | Historical only |
| NewReno | Partial ACK handling | 1999 | Fallback/embedded |
| CUBIC | Time-based cubic growth | 2006 | Dominant (~65%) |
| BBR | Model-based, latency-minimizing | 2016 | Growing (~20%) |
Your Congestion Control Toolkit: with the knowledge from this module, you can compare the variants' behaviors under different scenarios, select an appropriate algorithm for a specific use case, and diagnose performance issues related to congestion control.
Congestion control is one of the most elegant examples of distributed algorithm design—end hosts cooperating to share network capacity fairly and efficiently, using only local information. Mastering this topic provides deep insight into how the Internet actually works.
Congratulations! You've completed the TCP Variants module with comprehensive understanding of Tahoe, Reno, NewReno, and CUBIC. You can now analyze, compare, and diagnose TCP congestion control behaviors across the algorithms that power the modern Internet.