In the previous page, we discovered that Stop-and-Wait utilization depends critically on the ratio of propagation delay to transmission time. Now we must understand propagation delay in depth—because it represents a fundamental physical constraint that no amount of engineering can overcome.
The speed of light in a vacuum is approximately 299,792,458 meters per second (~3 × 10⁸ m/s). This cosmic speed limit means that even with infinitely fast computers and infinitely wide bandwidth, information cannot travel faster than about 300,000 kilometers per second.
For a signal traveling from New York to London (approximately 5,500 km via undersea fiber), the minimum possible one-way delay is:
$$T_p^{\text{min}} = \frac{5,500,000 \text{ m}}{3 \times 10^8 \text{ m/s}} \approx 18.3 \text{ ms}$$
In practice, fiber optic cables achieve about 2/3 of light speed, giving:
$$T_p^{\text{actual}} \approx \frac{5,500,000}{2 \times 10^8} \approx 27.5 \text{ ms}$$
This delay is irreducible. It exists regardless of how fast our processors are, how wide our bandwidth is, or how advanced our protocols become. Understanding propagation delay is understanding a fundamental limit of the physical universe.
By the end of this page, you will understand the physics underlying propagation delay, calculate propagation delay for various media and network topologies, appreciate how geography impacts protocol performance, and recognize the difference between transmission time (controllable) and propagation delay (fixed by physics).
What is Propagation?
Propagation is the process by which a signal (electromagnetic wave, light pulse, or electrical voltage change) travels through a transmission medium. When we transmit a bit, we're creating a change in some physical property—voltage on a wire, light intensity in fiber, or electromagnetic field strength in wireless.
This change propagates outward from the source at a velocity determined by the medium's physical properties.
The Propagation Velocity Formula:
In a vacuum, electromagnetic waves travel at the speed of light:
$$c = 3 \times 10^8 \text{ m/s}$$
In other media, the velocity is reduced by the refractive index (n):
$$v = \frac{c}{n}$$
Where c is the speed of light in a vacuum and n ≥ 1 is the medium's refractive index. Typical values:
| Medium | Refractive Index (n) | Propagation Velocity (v) | % of Light Speed |
|---|---|---|---|
| Vacuum | 1.00 | 3.00 × 10⁸ m/s | 100% |
| Air (wireless) | ~1.0003 | ~3.00 × 10⁸ m/s | ~100% |
| Glass fiber (typical) | ~1.50 | 2.00 × 10⁸ m/s | ~67% |
| Copper cable (coax) | ~1.40 | 2.14 × 10⁸ m/s | ~71% |
| Twisted pair (Cat6) | ~1.43-1.58 | 1.90-2.10 × 10⁸ m/s | ~63-70% |
For most network calculations, we use v = 2 × 10⁸ m/s for both fiber and copper cables. This approximation (about 2/3 the speed of light) is sufficiently accurate for protocol analysis and simplifies calculations. Wireless propagation uses 3 × 10⁸ m/s.
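The velocity formula is easy to check numerically. A minimal sketch (the `velocity` helper name is ours, not from any library):

```python
C = 299_792_458          # speed of light in a vacuum, m/s

def velocity(n: float) -> float:
    """Propagation velocity v = c / n for a medium with refractive index n."""
    return C / n

# Glass fiber, n ~ 1.5: recovers the standard 2 x 10^8 m/s approximation
print(velocity(1.5) / 1e8)   # ~2.0
```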
Why Different Media Have Different Velocities:
Fiber Optic Cable: Light travels through glass, which has a refractive index of approximately 1.5. The light undergoes total internal reflection, bouncing along the fiber core. Each reflection adds minuscule delays, and the glass itself slows propagation compared to vacuum.
Copper Cable: Electrical signals in copper propagate as electromagnetic waves guided by the conductor. The velocity depends on the cable's capacitance and inductance per unit length. Insulation material and cable geometry affect these properties.
Wireless/Satellite: In free space (atmosphere or vacuum), electromagnetic waves travel at essentially the speed of light. However, satellite links have enormous distances—geostationary satellites orbit at 35,786 km altitude, creating one-way delays of approximately 120 ms even at light speed.
The Fundamental Formula:
Propagation delay is simply distance divided by velocity:
$$T_p = \frac{d}{v}$$
Where d is the distance between sender and receiver (in meters) and v is the propagation velocity of the medium (in m/s).
Example 1: Local Area Network
A sender and receiver are 200 meters apart in an office building, connected by Cat6 Ethernet (v ≈ 2 × 10⁸ m/s).
$$T_p = \frac{200}{2 \times 10^8} = 1 \times 10^{-6} \text{ s} = 1 \text{ μs}$$
This is negligible—fully acceptable for Stop-and-Wait.
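The delay formula translates directly into code. A small sketch with our own helper name and the velocity constants used throughout this page:

```python
# T_p = d / v, in seconds.
V_FIBER_COPPER = 2e8   # m/s, the standard 2/3-of-light approximation
V_FREE_SPACE = 3e8     # m/s, for wireless and satellite links

def propagation_delay(distance_m: float, velocity_mps: float = V_FIBER_COPPER) -> float:
    """One-way propagation delay in seconds for a given path length."""
    return distance_m / velocity_mps

# Example 1: 200 m of Cat6 -> 1 microsecond
print(propagation_delay(200))
# Up-and-down to a GEO satellite at 35,786 km -> roughly 0.239 s
print(propagation_delay(2 * 35_786_000, V_FREE_SPACE))
```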
Example 2: Metropolitan Area Network
Two data centers 50 km apart connected by fiber optic:
$$T_p = \frac{50,000}{2 \times 10^8} = 2.5 \times 10^{-4} \text{ s} = 250 \text{ μs} = 0.25 \text{ ms}$$
Still relatively small, but becoming significant for high-bandwidth links.
Example 3: Transcontinental Link
New York to Los Angeles fiber connection (~4,000 km):
$$T_p = \frac{4,000,000}{2 \times 10^8} = 0.02 \text{ s} = 20 \text{ ms}$$
Round-trip delay: 40 ms. At 10 Gbps, a 1500-byte (12,000-bit) frame takes only 1.2 μs to transmit, so Stop-and-Wait utilization would be 1.2 μs / (1.2 μs + 40 ms) ≈ 0.003%.
Example 4: Transatlantic Link
New York to London undersea fiber (~6,000 km cable route):
$$T_p = \frac{6,000,000}{2 \times 10^8} = 0.03 \text{ s} = 30 \text{ ms}$$
Example 5: Geostationary Satellite
Earth to GEO satellite (35,786 km altitude). Signal travels up and back down:
$$\text{Total distance} = 2 \times 35,786 \text{ km} = 71,572 \text{ km}$$ $$T_p = \frac{71,572,000}{3 \times 10^8} \approx 239 \text{ ms}$$
For a complete request-response, the round-trip is approximately 478 ms—making Stop-and-Wait utterly impractical.
| Network Scenario | Distance | Medium | One-Way Delay | Round-Trip |
|---|---|---|---|---|
| Office LAN | 100 m | Cat6 | 0.5 μs | 1 μs |
| Campus network | 1 km | Fiber | 5 μs | 10 μs |
| Metro area | 50 km | Fiber | 250 μs | 500 μs |
| Regional WAN | 500 km | Fiber | 2.5 ms | 5 ms |
| Cross-country | 4,000 km | Fiber | 20 ms | 40 ms |
| Transatlantic | 6,000 km | Undersea fiber | 30 ms | 60 ms |
| Pacific crossing | 10,000 km | Undersea fiber | 50 ms | 100 ms |
| GEO satellite | 71,600 km | Free space | 239 ms | 478 ms |
Why Round-Trip Matters for Protocols:
In Stop-and-Wait ARQ, the sender must wait for the acknowledgment before transmitting the next frame. This means the total waiting time includes the frame's propagation to the receiver and the acknowledgment's propagation back.
Round-Trip Time (RTT):
$$\text{RTT} = 2 \times T_p \quad \text{(symmetric links)}$$
This is why our utilization formula includes 2Tp, not just Tp:
$$U = \frac{T_t}{T_t + 2T_p}$$
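The utilization formula can be sketched as a one-line helper (the function name is our own, and the ACK is assumed to be negligibly small):

```python
def stop_and_wait_utilization(frame_bits: float, bandwidth_bps: float,
                              prop_delay_s: float) -> float:
    """U = T_t / (T_t + 2*T_p) for a symmetric link, ignoring ACK size."""
    t_t = frame_bits / bandwidth_bps          # transmission time
    return t_t / (t_t + 2 * prop_delay_s)     # fraction of time the link is busy

# 1500-byte frame at 10 Gbps across the country (T_p = 20 ms):
u = stop_and_wait_utilization(12_000, 10e9, 20e-3)
print(f"{u:.4%}")   # 0.0030% -- the link sits idle almost the entire time
```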
Asymmetric Links:
In reality, forward and reverse paths may have different propagation delays:
$$\text{RTT} = T_p^{\text{forward}} + T_p^{\text{return}}$$
This occurs when the forward and return paths take different routes through the network, or when the two directions use different media (for example, a satellite downlink paired with a terrestrial return path).
The 'ping' utility measures RTT directly. When you ping a server and see '45 ms', that's the round-trip time—data traveling to the server and response coming back. However, ping RTT includes not just propagation delay but also processing delays at each hop, queuing delays in routers, and transmission times. For high-precision analysis, these components must be separated.
Components of Measured RTT:
When measuring RTT in real networks, the observed value includes several components:
$$\text{RTT}_{\text{measured}} = 2T_p + T_{\text{queue}} + T_{\text{process}} + T_{\text{transmit}}$$
Where Tqueue is time spent waiting in router buffers, Tprocess is per-hop processing time (header parsing, route lookup, checksums), and Ttransmit is the sum of per-hop transmission times.
For protocol efficiency analysis, we often isolate propagation delay because it is the fixed, irreducible floor: queuing, processing, and transmission delays vary with load, hardware, and frame size, but 2Tp never shrinks.
The Speed-of-Light RTT:
The theoretical minimum RTT is determined purely by distance:
$$\text{RTT}_{\text{min}} = \frac{2d}{v} = \frac{2d}{2 \times 10^8} = \frac{d}{10^8}$$
Any measured RTT higher than this represents additional network overhead.
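This lower bound is a useful sanity check against `ping` output. A sketch, where the 72 ms "measured" value is purely hypothetical:

```python
def min_rtt_ms(distance_km: float, velocity_mps: float = 2e8) -> float:
    """Speed-of-light-in-fiber lower bound on RTT, in milliseconds."""
    return 2 * distance_km * 1000 / velocity_mps * 1000

# NYC -> London great-circle distance (~5,500 km):
floor = min_rtt_ms(5_500)       # 55.0 ms minimum round trip
measured = 72.0                 # hypothetical ping result, ms
overhead = measured - floor     # queuing + processing + route detour
print(f"{floor:.1f} ms floor, {overhead:.1f} ms overhead")
```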
Propagation delay calculations become more complex in multi-hop networks. Understanding how topology affects delay is essential for accurate analysis.
Point-to-Point Links:
The simplest case—direct connection between sender and receiver:
$$T_p = \frac{d}{v}$$
Multi-Hop Networks:
When packets traverse multiple links through routers/switches:
$$T_p^{\text{total}} = \sum_{i=1}^{n} T_p^{(i)} = \sum_{i=1}^{n} \frac{d_i}{v_i}$$
Where n is the number of hops and each hop may have different distance and velocity.
Example: Three-Hop Path
Hop 1: 100 km fiber (v = 2×10⁸ m/s) → Tp = 0.5 ms
Hop 2: 500 km fiber (v = 2×10⁸ m/s) → Tp = 2.5 ms
Hop 3: 50 km copper (v = 2×10⁸ m/s) → Tp = 0.25 ms
Total: Tp = 0.5 + 2.5 + 0.25 = 3.25 ms
In multi-hop networks, each intermediate device (router/switch) must receive the entire frame before forwarding. This adds transmission time at each hop, not just propagation delay. Total delay = Σ(propagation) + Σ(transmission). Cut-through switching can reduce this, but most routers use store-and-forward.
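The store-and-forward sum can be sketched as follows; the per-link 1 Gbps bandwidth is an assumption we add for illustration, not part of the three-hop example above:

```python
def store_and_forward_delay(hops, frame_bits):
    """End-to-end delay for store-and-forward hops.

    hops: list of (distance_m, velocity_mps, bandwidth_bps) per link.
    Each link contributes d/v (propagation) plus L/B, because the full
    frame must be received before it can be forwarded.
    """
    total = 0.0
    for distance_m, velocity_mps, bandwidth_bps in hops:
        total += distance_m / velocity_mps     # propagation on this link
        total += frame_bits / bandwidth_bps    # (re)transmission on this link
    return total

# The three-hop example above, assuming 1 Gbps on every link:
hops = [(100e3, 2e8, 1e9), (500e3, 2e8, 1e9), (50e3, 2e8, 1e9)]
delay = store_and_forward_delay(hops, 12_000)
print(f"{delay * 1e3:.3f} ms")   # 3.25 ms propagation + 3 x 12 us transmission
```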
Satellite Link Configurations:
Geostationary Earth Orbit (GEO): altitude 35,786 km. One-way ground-to-satellite propagation is ~120 ms, so an RTT through the satellite is roughly 480 ms or more.
Medium Earth Orbit (MEO): altitudes of roughly 2,000-35,786 km (GPS satellites orbit near 20,200 km), giving typical RTTs on the order of 100-200 ms.
Low Earth Orbit (LEO): altitudes of roughly 160-2,000 km (modern broadband constellations fly near 550 km), giving typical RTTs of a few tens of milliseconds.
Comparison for Stop-and-Wait:
| Orbit Type | Typical RTT | Stop-and-Wait Efficiency (100 Mbps, 1500B frame) |
|---|---|---|
| GEO | 480 ms | 0.025% |
| MEO | 200 ms | 0.06% |
| LEO | 40 ms | 0.3% |
Submarine Cable Routes:
Transcontinental communication relies on undersea fiber optic cables. The actual cable length is typically longer than the great-circle distance due to seabed topography, routing around hazards such as trenches and fishing or anchoring zones, and the locations of available coastal landing stations.
Example: Transatlantic Routes
| Cable System | Route | Length | Propagation Delay |
|---|---|---|---|
| MAREA | Virginia Beach - Bilbao | 6,600 km | 33 ms |
| TAT-14 | NJ - UK - Denmark - France - Germany | 15,000 km | 75 ms |
| AC-1 | US East Coast - UK - Germany - Netherlands | 14,000 km | 70 ms |
The direct great-circle distance (NYC to London: ~5,500 km) suggests 27 ms, but actual cable routing adds 20-50% more delay.
Definition:
The bandwidth-delay product (BDP) represents the maximum amount of data that can be "in flight" on a network link at any given time—the capacity of the "pipe":
$$\text{BDP} = \text{Bandwidth} \times \text{Delay} = B \times T_p$$
Alternatively, for round-trip considerations:
$$\text{BDP}_{\text{RTT}} = B \times \text{RTT} = B \times 2T_p$$
Physical Interpretation:
Imagine the network link as a pipe. The bandwidth determines how much data flows in per second. The propagation delay determines how long the pipe is (in time). The product tells us how much data can be inside the pipe simultaneously.
Example Calculations:
1. Office LAN: 1 Gbps × 1 μs = 1,000 bits (about 125 bytes), a small fraction of one frame.
2. Cross-Country Link: 10 Gbps × 20 ms = 200,000,000 bits = 25 MB.
3. GEO Satellite: 1 Gbps × 240 ms = 240,000,000 bits = 30 MB.
To achieve 100% utilization, you need to keep the 'pipe' completely full at all times. This means having BDP bits in transit continuously. Stop-and-Wait can only have one frame (typically 12,000 bits) in transit, regardless of BDP. When BDP = 25 MB but frame size = 1,500 bytes, you're using 0.006% of the pipe's capacity.
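These pipe-capacity figures are a one-line multiplication. A sketch (helper name is ours) reproducing the cross-country numbers from the note above:

```python
def bdp_bits(bandwidth_bps: float, delay_s: float) -> float:
    """Bandwidth-delay product: bits that can be 'in the pipe' one way."""
    return bandwidth_bps * delay_s

# Cross-country link: 10 Gbps with a 20 ms one-way delay.
bdp = bdp_bits(10e9, 20e-3)       # 2e8 bits in flight
frame = 12_000                    # one 1500-byte frame
print(bdp / 8 / 1e6)              # 25.0 MB of pipe capacity
print(f"{frame / bdp:.4%}")       # 0.0060% -- Stop-and-Wait's share of the pipe
```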
Why BDP Determines Protocol Requirements:
The bandwidth-delay product directly relates to our parameter 'a':
$$a = \frac{T_p}{T_t} = \frac{T_p}{L/B} = \frac{T_p \cdot B}{L} = \frac{\text{BDP}}{L}$$
Therefore:
$$a = \frac{\text{Bandwidth-Delay Product}}{\text{Frame Size}}$$
This reveals the fundamental relationship: the parameter a is simply the BDP measured in frames. In other words, the BDP tells us how many frames "fit in the pipe":
$$\text{Frames in flight} = \frac{\text{BDP}}{L} = a$$
For efficient transmission, we want at least as many frames in flight as 'a' suggests—but Stop-and-Wait only allows one frame at a time, regardless of how many could fit.
| Scenario | Bandwidth | One-Way Delay | BDP | Frames Needed* |
|---|---|---|---|---|
| Office LAN | 1 Gbps | 1 μs | 1,000 bits | < 1 |
| Campus backbone | 10 Gbps | 50 μs | 500 kb (62.5 KB) | 42 |
| Metro WAN | 10 Gbps | 1 ms | 10 Mb (1.25 MB) | 833 |
| Cross-country | 100 Gbps | 20 ms | 2 Gb (250 MB) | 166,667 |
| Transatlantic | 100 Gbps | 30 ms | 3 Gb (375 MB) | 250,000 |
| GEO satellite | 1 Gbps | 240 ms | 240 Mb (30 MB) | 20,000 |
*Frames needed = BDP / 12,000 bits (assuming 1500-byte frames). This column shows how many frames could theoretically be "in the pipe" simultaneously.
The LFN Problem:
Networks with large bandwidth-delay products are affectionately called "Long Fat Networks" (LFNs, pronounced "elephants"). These networks pose unique challenges for reliable data transfer protocols.
RFC 1323 Definition:
A "long" network has a large RTT (high propagation delay). A "fat" network has high bandwidth. When both are present, the BDP becomes enormous.
Characteristics of LFNs: the BDP dwarfs typical protocol window sizes, a single lost frame can idle the pipe for a full RTT, and sequence-number spaces sized for small networks wrap around quickly.
Example LFN Scenarios:
| Network Type | Bandwidth | RTT | BDP |
|---|---|---|---|
| Satellite Internet | 50 Mbps | 500 ms | 3.125 MB |
| 10G Ethernet cross-country | 10 Gbps | 60 ms | 75 MB |
| 100G research network | 100 Gbps | 100 ms | 1.25 GB |
These BDP values far exceed original TCP's 64 KB window, necessitating extensions like window scaling (RFC 1323).
The acronym LFN = "Long Fat Network" creates a clever pronunciation ("elephant"). This reflects the challenge: like an elephant, LFNs are powerful but require special handling. Protocols designed for mice (small networks) don't scale to elephants.
LFN Protocol Solutions:
Networks with large BDPs require protocols that can keep many frames in flight simultaneously (sliding windows), advertise windows larger than the classic 64 KB limit (window scaling), and retransmit selectively rather than draining the pipe on every loss (selective acknowledgments).
Stop-and-Wait on LFNs:
For a 100 Gbps × 50 ms LFN (BDP = 625 MB):
Stop-and-Wait would use 0.00012% of a 100 Gbps link—effectively reducing it to 120 Kbps. This is why sliding window protocols are essential for LFNs.
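The LFN numbers above follow directly from the utilization formula; a quick sketch of the arithmetic:

```python
# 100 Gbps link, 50 ms one-way delay, 1500-byte (12,000-bit) frames.
B, Tp, L = 100e9, 50e-3, 12_000

Tt = L / B                # 0.12 microseconds to serialize one frame
U = Tt / (Tt + 2 * Tp)    # ~1.2e-6, i.e. 0.00012% utilization
print(f"U = {U:.6%}, effective throughput = {B * U / 1e3:.0f} kbps")
```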
Network delay consists of multiple components. Understanding how propagation delay relates to other delays is essential for complete network analysis.
The Four Types of Network Delay:
1. Propagation Delay (Tp): the signal's travel time through the medium, d/v, fixed by distance and physics.
2. Transmission Delay (Tt): the time to push all of a frame's bits onto the link, L/B, set by frame size and bandwidth.
3. Processing Delay (Tproc): per-hop time for header parsing, checksum verification, and route lookup, typically microseconds in modern routers.
4. Queuing Delay (Tqueue): time spent waiting in router buffers, varying with congestion from zero to many milliseconds.
Total End-to-End Delay:
$$T_{\text{total}} = T_t + T_p + T_{\text{proc}} + T_{\text{queue}}$$
For multi-hop paths with n links and n-1 routers:
$$T_{\text{total}} = \sum_{i=1}^{n}(T_t^{(i)} + T_p^{(i)}) + \sum_{j=1}^{n-1}(T_{\text{proc}}^{(j)} + T_{\text{queue}}^{(j)})$$
Relative Importance Across Network Types:
| Network Type | Dominant Delay | Secondary | Negligible |
|---|---|---|---|
| LAN (100m) | Transmission | Propagation | Processing, Queuing |
| Metro WAN (50km) | Propagation | Transmission | Processing, Queuing |
| Internet path (congested) | Queuing | Propagation | Processing, Transmission |
| Intercontinental fiber | Propagation | Queuing | Transmission, Processing |
| Satellite (GEO) | Propagation | Processing | Transmission, Queuing |
Every other delay component can potentially be reduced through engineering: faster processors reduce processing delay, higher bandwidth reduces transmission delay, better traffic management reduces queuing delay. But propagation delay is fundamentally limited by the speed of light and network geography. This makes it the ultimate bottleneck for latency-sensitive applications across long distances.
This page provided an in-depth exploration of propagation delay—the physics-constrained component of network latency that fundamentally limits Stop-and-Wait efficiency.
You now have a thorough understanding of propagation delay and its role in determining Stop-and-Wait efficiency. In the next page, we'll examine transmission delay—the other fundamental timing component—and explore how frame size and bandwidth interact to determine transmission time.