We've examined propagation delay (how long signals take to travel) and transmission delay (how long it takes to push bits onto the wire). Now we combine these to understand Round-Trip Time (RTT)—the total time for a complete request-response cycle.
In Stop-and-Wait ARQ, RTT is the critical metric because the sender cannot transmit the next frame until the acknowledgment completes its round trip. Every nanosecond added to RTT is a nanosecond the sender sits idle, wasting channel capacity.
Consider what happens when you ping a server:
C:\> ping google.com
Pinging google.com [142.250.185.46] with 32 bytes of data:
Reply from 142.250.185.46: bytes=32 time=14ms TTL=118
Reply from 142.250.185.46: bytes=32 time=13ms TTL=118
Reply from 142.250.185.46: bytes=32 time=12ms TTL=118
Reply from 142.250.185.46: bytes=32 time=14ms TTL=118
That 12-14ms represents your RTT to Google—the time for your packet to reach Google's server and for the response to return. For Stop-and-Wait, this would limit you to roughly 71-83 round trips per second, regardless of your bandwidth.
By the end of this page, you will understand all components that contribute to RTT, calculate RTT for various network configurations, distinguish between theoretical minimum RTT and measured RTT, appreciate how RTT directly constrains Stop-and-Wait performance, and analyze RTT variability and its implications.
The Complete RTT Cycle:
For Stop-and-Wait ARQ, RTT represents the time from when the sender begins transmitting a frame until it receives the acknowledgment. This cycle includes:
1. Transmission of Data Frame (Tt_data)
2. Propagation to Receiver (Tp)
3. Receiver Processing (Tproc_rx)
4. Transmission of ACK Frame (Tt_ack)
5. Propagation of ACK (Tp)
6. Sender Processing (Tproc_tx)
The Complete RTT Formula:
$$\text{RTT} = T_t^{\text{data}} + T_p + T_{\text{proc}}^{\text{rx}} + T_t^{\text{ack}} + T_p + T_{\text{proc}}^{\text{tx}}$$
Simplifying with common assumptions:
$$\text{RTT} = T_t^{\text{data}} + 2T_p + T_t^{\text{ack}} + T_{\text{proc}}$$
Standard Simplifications:
In most textbook problems and practical analyses, the ACK transmission time and the processing delays are assumed negligible compared with the data transmission and propagation delays.
Simplified RTT for Analysis:
$$\text{RTT} \approx T_t + 2T_p$$
This is the formula we use for Stop-and-Wait utilization:
$$U = \frac{T_t}{\text{RTT}} = \frac{T_t}{T_t + 2T_p}$$
The simplified RTT formula assumes negligible ACK transmission time and processing delay. This is valid for most scenarios but fails when: (1) ACKs carry piggybacked data (larger ACK frames), (2) highly asymmetric links are used (e.g., satellite uplink/downlink), (3) processing-constrained devices are involved (IoT, embedded systems), or (4) queuing delays are significant (congested networks).
The Simplest Case:
For a direct connection between sender and receiver:
$$\text{RTT} = T_t + 2T_p = \frac{L}{B} + \frac{2d}{v}$$
Example 1: Campus Network
Two buildings connected by 1 km fiber, 1 Gbps link, 1500-byte frames:
$$T_t = \frac{12,000}{10^9} = 12 \text{ μs}$$ $$2T_p = \frac{2 \times 1000}{2 \times 10^8} = 10 \text{ μs}$$ $$\text{RTT} = 12 + 10 = 22 \text{ μs}$$
Utilization: U = 12/22 = 54.5%
Example 2: Cross-Country Link
New York to San Francisco (4000 km), 10 Gbps, 1500-byte frames:
$$T_t = \frac{12,000}{10^{10}} = 1.2 \text{ μs}$$ $$2T_p = \frac{2 \times 4,000,000}{2 \times 10^8} = 40 \text{ ms} = 40,000 \text{ μs}$$ $$\text{RTT} = 1.2 + 40,000 = 40,001.2 \text{ μs} \approx 40 \text{ ms}$$
Utilization: U = 1.2/40,001.2 ≈ 0.003%
Propagation utterly dominates!
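The single-link model can be expressed as a small calculator. This is a minimal Python sketch of RTT = Tt + 2Tp (function name and units are our own); the two calls reproduce the campus and cross-country examples above:

```python
# Single-link Stop-and-Wait model: RTT = Tt + 2*Tp, U = Tt / RTT.

def rtt_and_utilization(frame_bits, bandwidth_bps, distance_m, v=2e8):
    """Return (rtt_seconds, utilization) for one Stop-and-Wait link."""
    t_t = frame_bits / bandwidth_bps   # transmission delay L/B
    t_p = distance_m / v               # one-way propagation delay d/v
    rtt = t_t + 2 * t_p
    return rtt, t_t / rtt

# Example 1: campus network, 1 km fiber at 1 Gbps, 1500-byte frames
rtt, u = rtt_and_utilization(12_000, 1e9, 1_000)
print(f"Campus: RTT = {rtt * 1e6:.0f} us, U = {u:.1%}")        # 22 us, 54.5%

# Example 2: cross-country, 4,000 km fiber at 10 Gbps
rtt2, u2 = rtt_and_utilization(12_000, 1e10, 4_000_000)
print(f"Cross-country: RTT = {rtt2 * 1e3:.1f} ms, U = {u2:.4%}")  # 40.0 ms, 0.0030%
```

The same function generates every row of the table below by varying distance and bandwidth.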
| Link Type | Distance | Bandwidth | Tt | 2Tp | RTT | Utilization |
|---|---|---|---|---|---|---|
| Office LAN | 100 m | 1 Gbps | 12 μs | 1 μs | 13 μs | 92.3% |
| Campus backbone | 2 km | 10 Gbps | 1.2 μs | 20 μs | 21.2 μs | 5.7% |
| Metro link | 50 km | 10 Gbps | 1.2 μs | 500 μs | 501.2 μs | 0.24% |
| Regional WAN | 500 km | 100 Gbps | 0.12 μs | 5 ms | 5.00012 ms | 0.002% |
| Cross-country | 4,000 km | 100 Gbps | 0.12 μs | 40 ms | 40 ms | 0.0003% |
| GEO Satellite | 71,600 km | 100 Mbps | 120 μs | 477 ms | 477 ms | 0.025% |
Notice how utilization drops catastrophically as distance increases. This is the fundamental limitation of Stop-and-Wait: no matter how much bandwidth you add, RTT imposes an absolute ceiling on effective throughput. With RTT = 40 ms, maximum Stop-and-Wait frame rate is 25 frames/second, yielding 25 × 1500 × 8 = 300 Kbps—on a 100 Gbps link!
Store-and-Forward Effects:
In multi-hop networks, frames are transmitted and received at each intermediate device. For n hops:
Forward Path: at each hop, the data frame incurs transmission, propagation, queuing, and processing delay.
Return Path (ACK): the acknowledgment incurs the same four components at each hop on the way back.
Complete RTT for n Hops (Store-and-Forward):
$$\text{RTT} = \sum_{i=1}^{n}(T_t^{\text{data}(i)} + T_p^{(i)} + T_{\text{queue}}^{(i)} + T_{\text{proc}}^{(i)}) + \sum_{i=1}^{n}(T_t^{\text{ack}(i)} + T_p^{(i)} + T_{\text{queue}}^{(i)} + T_{\text{proc}}^{(i)})$$
Simplifying with identical link rates and negligible ACK transmission, queuing, and processing:
$$\text{RTT} \approx n \cdot T_t + 2 \sum_{i=1}^{n} T_p^{(i)}$$
Example: Three-Hop Internet Path
Path from client to server with three router hops:
Hop 1: 100 km fiber, 1 Gbps
Hop 2: 2,000 km fiber, 10 Gbps
Hop 3: 50 km fiber, 100 Mbps
Frame: 1500 bytes = 12,000 bits
Transmission Delays: $$T_t^{(1)} = \frac{12,000}{10^9} = 12 \text{ μs}$$ $$T_t^{(2)} = \frac{12,000}{10^{10}} = 1.2 \text{ μs}$$ $$T_t^{(3)} = \frac{12,000}{10^8} = 120 \text{ μs}$$
Propagation Delays: $$T_p^{(1)} = \frac{100,000}{2 \times 10^8} = 0.5 \text{ ms}$$ $$T_p^{(2)} = \frac{2,000,000}{2 \times 10^8} = 10 \text{ ms}$$ $$T_p^{(3)} = \frac{50,000}{2 \times 10^8} = 0.25 \text{ ms}$$
Total Forward Delay: $$T_{\text{forward}} = (12 + 1.2 + 120) \text{ μs} + (0.5 + 10 + 0.25) \text{ ms}$$ $$= 133.2 \text{ μs} + 10.75 \text{ ms} = 10.88 \text{ ms}$$
RTT (assuming symmetric return, negligible ACK): $$\text{RTT} \approx 2 \times 10.75 \text{ ms} + 133.2 \text{ μs} = 21.63 \text{ ms}$$
Note: Transmission delay is small compared to propagation in this WAN scenario.
In the example above, Hop 3 (100 Mbps) contributes the largest transmission delay (120 μs) despite being the shortest in distance. However, Hop 2 dominates overall delay due to its 2,000 km length (10 ms propagation). For latency optimization, identify whether transmission or propagation is the bottleneck—they require different solutions.
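The three-hop computation can be reproduced with a short sketch of the simplified formula RTT ≈ Σ Tt + 2Σ Tp (the hop list is taken from the example above; variable names are our own):

```python
# Three-hop store-and-forward RTT with negligible ACK transmission
# and processing: data is re-serialized at each hop, the ACK adds
# only propagation on the return path.

FRAME_BITS = 12_000   # 1500-byte frame
V = 2e8               # propagation speed in fiber, m/s

hops = [
    (100_000, 1e9),     # hop 1: 100 km, 1 Gbps
    (2_000_000, 1e10),  # hop 2: 2,000 km, 10 Gbps
    (50_000, 1e8),      # hop 3: 50 km, 100 Mbps
]

t_t_total = sum(FRAME_BITS / bw for _, bw in hops)  # per-hop transmission
t_p_total = sum(d / V for d, _ in hops)             # one-way propagation
rtt = t_t_total + 2 * t_p_total
print(f"RTT ~= {rtt * 1e3:.2f} ms")                 # 21.63 ms
```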
The Gap Between Theory and Reality:
Theoretical RTT (RTT_min) considers only propagation delays—the absolute minimum determined by physics:
$$\text{RTT}_{\text{min}} = \frac{2d}{v}$$
Measured RTT includes all real-world delays:
$$\text{RTT}_{\text{measured}} = \text{RTT}_{\text{min}} + T_{\text{queuing}} + T_{\text{processing}} + T_{\text{transmission}}$$
Where Does the Extra Time Go?
1. Queuing Delay (Highly Variable): time spent waiting in router buffers; it grows with congestion and can dominate on loaded paths.
2. Processing Delay: per-hop time for route lookup, checksum verification, and forwarding decisions at each router.
3. Transmission Delay: the frame is serialized onto the wire at every hop; small on fast links, but repeated once per hop.
Example: New York to London RTT Analysis
Cable distance: approximately 6,000 km
Theoretical minimum: $$\text{RTT}_{\text{min}} = \frac{2 \times 6,000,000}{2 \times 10^8} = 60 \text{ ms}$$
Measured RTT (typical): 70-90 ms
Where does the extra 10-30 ms go?
| Component | Estimated Delay |
|---|---|
| Extra cable routing | +5-10 ms |
| Router processing (6-8 hops) | +1-3 ms |
| Queuing (variable) | +2-20 ms |
| Transmission (multiple hops) | < 1 ms |
| Total additional | +10-30 ms |
RTT Inflation Factor: $$\text{Inflation} = \frac{\text{RTT}_{\text{measured}}}{\text{RTT}_{\text{min}}} = \frac{80}{60} = 1.33 \text{ (typical)}$$
Real-world RTT is typically 20-50% higher than theoretical minimum.
| Route | Distance | RTT_min | Typical Measured | Inflation |
|---|---|---|---|---|
| Same data center | < 1 km | < 0.01 ms | 0.1-0.5 ms | 10-50× |
| Same city | 50 km | 0.5 ms | 1-5 ms | 2-10× |
| Cross-country (US) | 4,000 km | 40 ms | 50-70 ms | 1.25-1.75× |
| Transatlantic | 6,000 km | 60 ms | 70-90 ms | 1.15-1.5× |
| US to Asia | 12,000 km | 120 ms | 150-200 ms | 1.25-1.67× |
| GEO Satellite | 72,000 km | 480 ms | 500-600 ms | 1.04-1.25× |
Notice that short-distance routes show the highest inflation (10-50×) while long-distance routes show lower inflation (1.1-1.7×). This is because fixed overheads (processing, transmission) dominate when propagation is small, but become negligible relative to propagation delay on long links. For Stop-and-Wait analysis, long-distance RTT is well-approximated by 2×propagation delay.
Why RTT Fluctuates:
Unlike propagation delay (constant) and transmission delay (constant for fixed frame size), measured RTT varies significantly over time:
Sources of RTT Variability:
Queuing Delay Variation: buffer occupancy at each router changes with instantaneous traffic load.
Route Changes: routing updates can shift packets onto longer or shorter paths mid-connection.
Processing Variation: router CPU load and scheduling alter per-hop processing time.
Wireless/Mobile Effects: link-layer retransmissions, signal fading, and handoffs add bursty delay.
Characterizing RTT Distribution:
Minimum RTT (RTT_min): the best value observed; approximates the propagation-dominated floor of the path.
Mean RTT (RTT_avg): the average over the sample window; what a sender typically experiences.
Maximum RTT (RTT_max): the worst value observed; driven by queuing spikes and route changes.
Jitter (RTT Variation):
$$\text{Jitter} = \text{RTT}_{\text{max}} - \text{RTT}_{\text{min}}$$
Or as standard deviation:
$$\sigma_{\text{RTT}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\text{RTT}_i - \text{RTT}_{\text{avg}})^2}$$
| Environment | RTT Min | RTT Avg | RTT Max | Jitter | σ RTT |
|---|---|---|---|---|---|
| Wired LAN | 0.1 ms | 0.3 ms | 2 ms | 1.9 ms | 0.3 ms |
| Enterprise WAN | 10 ms | 15 ms | 50 ms | 40 ms | 8 ms |
| Home broadband | 15 ms | 25 ms | 100 ms | 85 ms | 15 ms |
| Cellular 4G | 30 ms | 50 ms | 200 ms | 170 ms | 40 ms |
| Satellite Internet | 500 ms | 550 ms | 800 ms | 300 ms | 60 ms |
RTT variability complicates Stop-and-Wait timeout configuration. Set timeout too low (based on RTT_min) and you'll get spurious retransmissions. Set it too high (based on RTT_max) and you'll waste time waiting after genuine losses. Adaptive timeout algorithms must track RTT distribution dynamically.
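The distribution statistics defined above (min, mean, max, jitter, σ) are straightforward to compute from a sample set. A minimal Python sketch, using made-up sample values for illustration:

```python
# Characterizing an RTT sample set: min, mean, max, jitter = max - min,
# and population standard deviation (matching the sigma formula above).
import statistics

samples_ms = [14.1, 13.2, 12.8, 14.0, 19.5, 13.1, 12.9, 25.2]  # illustrative

rtt_min = min(samples_ms)
rtt_avg = statistics.fmean(samples_ms)
rtt_max = max(samples_ms)
jitter = rtt_max - rtt_min
sigma = statistics.pstdev(samples_ms)  # population form, as in the formula

print(f"min={rtt_min} avg={rtt_avg:.1f} max={rtt_max} "
      f"jitter={jitter:.1f} sigma={sigma:.1f} (ms)")
```

Note how the single 25.2 ms outlier inflates both the jitter and σ, which is exactly what makes timeout selection hard.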
Active Measurement:
1. ICMP Ping: The simplest and most common method:
ping -c 4 target.example.com
Measures RTT for ICMP Echo Request/Reply packets.
Limitations: routers may rate-limit or deprioritize ICMP, and ICMP traffic may be treated differently from the application traffic you actually care about.
2. TCP Handshake Timing: Measure time from SYN to SYN-ACK:
Time SYN sent: T1
Time SYN-ACK received: T2
RTT ≈ T2 - T1
Advantage: Measures the actual TCP path
Limitation: Includes server processing time
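As a rough illustration of this method, one can time a TCP `connect()` call, which returns once the SYN/SYN-ACK exchange completes. This is a hedged sketch (host and port are placeholders), and it slightly overstates RTT by local connection-setup overhead:

```python
# Approximate RTT via TCP handshake timing: connect() completes
# after the SYN / SYN-ACK exchange.
import socket
import time

def tcp_rtt(host, port=80, timeout=3.0):
    """Return approximate RTT in seconds for one TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return time.perf_counter() - start

# rtt = tcp_rtt("example.com")   # placeholder target
# print(f"handshake RTT ~= {rtt * 1000:.1f} ms")
```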
3. HTTP Timing: Measure time from request start to first byte:
Time of Request (TOR) = T1
Time to First Byte (TTFB) = T2
RTT ≈ TTFB - TOR - server_processing
Advantage: Application-realistic
Limitation: Hard to separate RTT from server latency
Passive Measurement:
Observe RTT from ongoing traffic without injecting probes:
1. TCP Timestamp Echo: TCP timestamps let the receiver echo the sender's timestamp back in ACKs, so the sender can compute RTT from its own clock without per-segment bookkeeping.
2. ACK Observation: For each data segment, measure time until ACK received: $$\text{RTT}_i = T_{\text{ACK}_i} - T_{\text{DATA}_i}$$
RTT Estimation (Exponential Weighted Moving Average):
TCP uses EWMA to smooth RTT measurements:
$$\text{SRTT}_{\text{new}} = (1 - \alpha) \cdot \text{SRTT}_{\text{old}} + \alpha \cdot \text{RTT}_{\text{measured}}$$
Typically α = 1/8 = 0.125
Retransmission Timeout (RTO) Calculation:
$$\text{RTO} = \text{SRTT} + 4 \cdot \text{RTTVAR}$$
Where RTTVAR tracks RTT deviation:
$$\text{RTTVAR}_{\text{new}} = (1 - \beta) \cdot \text{RTTVAR}_{\text{old}} + \beta \cdot |\text{SRTT} - \text{RTT}_{\text{measured}}|$$
Typically β = 1/4 = 0.25
The RTO formula includes 4× the RTT variation to handle jitter. If RTT is stable (low RTTVAR), RTO stays close to SRTT. If RTT is volatile (high RTTVAR), RTO expands to avoid spurious retransmissions. This adaptive behavior is crucial for protocols operating over diverse network conditions.
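The SRTT/RTTVAR/RTO update rules above can be sketched as a small estimator class. This uses α = 1/8 and β = 1/4 as stated; the first-sample initialization (SRTT = R, RTTVAR = R/2) follows RFC 6298, and the sample values fed in are illustrative:

```python
# TCP-style adaptive RTO: EWMA of RTT (SRTT) plus 4x the smoothed
# deviation (RTTVAR), per the formulas above.

class RttEstimator:
    ALPHA = 1 / 8   # SRTT gain
    BETA = 1 / 4    # RTTVAR gain

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def update(self, rtt_sample):
        if self.srtt is None:                # first measurement (RFC 6298)
            self.srtt = rtt_sample
            self.rttvar = rtt_sample / 2
        else:                                # RTTVAR updated before SRTT
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - rtt_sample))
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt_sample
        return self.rto

    @property
    def rto(self):
        return self.srtt + 4 * self.rttvar

est = RttEstimator()
for sample in [0.100, 0.110, 0.095, 0.300]:  # seconds; last sample is a spike
    rto = est.update(sample)
print(f"SRTT={est.srtt * 1000:.1f} ms, RTO={rto * 1000:.1f} ms")
```

Feeding in the 300 ms spike widens RTTVAR, and the RTO expands well above SRTT, which is the adaptive behavior described above.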
The Fundamental Constraint:
Stop-and-Wait sends one frame per RTT. Maximum frame rate:
$$\text{Frame Rate}_{\text{max}} = \frac{1}{\text{RTT}}$$
Maximum throughput:
$$\text{Throughput}_{\text{max}} = \frac{L}{\text{RTT}}$$
This is independent of bandwidth!
Example Analysis:
| RTT | Max Frame Rate | Max Throughput (1500B) | % of 1 Gbps |
|---|---|---|---|
| 1 ms | 1,000/sec | 12 Mbps | 1.2% |
| 10 ms | 100/sec | 1.2 Mbps | 0.12% |
| 50 ms | 20/sec | 240 Kbps | 0.024% |
| 100 ms | 10/sec | 120 Kbps | 0.012% |
| 500 ms | 2/sec | 24 Kbps | 0.0024% |
Key Insight: At 500 ms RTT (satellite), Stop-and-Wait on a 1 Gbps link achieves only 24 Kbps—slower than a 56K modem!
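The ceiling table above can be regenerated directly from Throughput_max = L/RTT; a minimal sketch:

```python
# Stop-and-Wait throughput ceiling: L / RTT, independent of bandwidth.

FRAME_BITS = 12_000  # 1500-byte frame

for rtt_ms in (1, 10, 50, 100, 500):
    tput = FRAME_BITS / (rtt_ms / 1000)      # bits per second
    print(f"RTT {rtt_ms:>3} ms -> {tput / 1e3:,.0f} Kbps")
```

Note that no link-bandwidth term appears anywhere in the loop: that is the entire point.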
The RTT Sensitivity Problem:
Small changes in RTT cause proportional changes in throughput:
$$\frac{\Delta \text{Throughput}}{\text{Throughput}} = -\frac{\Delta \text{RTT}}{\text{RTT}}$$
Example: if RTT rises from 50 ms to 55 ms (+10%), maximum throughput falls from 240 Kbps to about 218 Kbps (roughly −10%).
This sensitivity explains why Stop-and-Wait performance varies significantly with network conditions.
Comparison with Sliding Window:
Sliding window protocols decouple throughput from RTT by allowing multiple frames in flight:
| Protocol | Window Size | Frames in Flight | Throughput (50ms RTT, 1 Gbps) |
|---|---|---|---|
| Stop-and-Wait | 1 | 1 | 240 Kbps |
| GBN (W=8) | 8 | 8 | 1.92 Mbps |
| GBN (W=100) | 100 | 100 | 24 Mbps |
| GBN (W=4167) | 4167 | 4167 | 1 Gbps (full utilization) |
Window size needed for full utilization:
$$W = \frac{\text{RTT} \times B}{L} = \frac{0.05 \times 10^9}{12,000} \approx 4,167 \text{ frames}$$
Stop-and-Wait has an absolute throughput ceiling of L/RTT, regardless of link bandwidth. Upgrading from 100 Mbps to 10 Gbps provides ZERO throughput improvement if RTT is the bottleneck. This is why understanding RTT is crucial before investing in bandwidth upgrades.
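The required window size is just the bandwidth-delay product expressed in frames. A small sketch (function name is ours) reproducing the W ≈ 4,167 result above:

```python
# Window size for full utilization: W = RTT * B / L, rounded up.
import math

def window_for_full_utilization(rtt_s, bandwidth_bps, frame_bits):
    """Frames that must be in flight so the sender never idles."""
    return math.ceil(rtt_s * bandwidth_bps / frame_bits)

w = window_for_full_utilization(0.05, 1e9, 12_000)
print(w)  # 4167 frames for 50 ms RTT on a 1 Gbps link
```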
This page provided comprehensive coverage of Round-Trip Time—the complete cycle time that determines how fast Stop-and-Wait can operate.
You now have a thorough understanding of Round-Trip Time and its critical role in Stop-and-Wait performance. In the final page of this module, we'll derive and apply the comprehensive efficiency formula, bringing together all timing components for complete protocol analysis.