When you press Enter to send a message, a complex journey begins. That message transforms into electrical or optical signals that race through cables at velocities approaching the speed of light, navigating physical constraints, enduring degradation, and emerging at the destination to be reconstructed into meaningful data.
Signal propagation is the study of this journey—how electromagnetic waves travel through guided media, what affects their speed, and how these physical realities shape the networks we build. Understanding propagation isn't merely academic; it directly influences network architecture, protocol design, cable selection, and even the physical distance constraints that determine where you can locate servers relative to users.
By the end of this page, you will understand how signals propagate through different media, calculate propagation delay for any cable run, interpret the relationship between frequency, wavelength, and data transmission, and reason about how propagation physics shape network protocols and architecture.
Every guided medium exhibits a propagation velocity—the speed at which electromagnetic waves travel through it. This velocity is slower than the speed of light in vacuum (c ≈ 299,792 km/s) because signals must travel through a physical material.
The velocity factor (VF), also called velocity of propagation (VoP), expresses propagation velocity as a fraction of the speed of light:
VF = v / c
Where:

- VF = velocity factor (a dimensionless value between 0 and 1)
- v = propagation velocity in the medium
- c = speed of light in vacuum (≈ 299,792 km/s)
Different media have different velocity factors based on their physical properties:
| Medium | Velocity Factor | Propagation Velocity | Primary Limiting Factor |
|---|---|---|---|
| Vacuum (reference) | 1.00 | 299,792 km/s | Physical limit of light |
| Air (free space) | ~1.00 | ~299,792 km/s | Minimal atmospheric effects |
| Single-mode fiber | ~0.67 | ~200,000 km/s | Refractive index of glass |
| Multimode fiber | ~0.66 | ~198,000 km/s | Refractive index + modal effects |
| Coaxial cable | 0.66-0.87 | 198,000-260,000 km/s | Dielectric material properties |
| Cat6 twisted pair | ~0.64 | ~192,000 km/s | Insulation dielectric |
| Cat5e twisted pair | ~0.64 | ~192,000 km/s | Insulation dielectric |
At first glance, these differences seem small—all velocities are still in the 200,000 km/s range. But in networking, microseconds matter. A transatlantic cable run of 6,000 km introduces ~30 ms of propagation delay (one-way). For latency-sensitive applications like financial trading, this delay is the fundamental limit—no protocol optimization can overcome the speed of light through fiber.
In Electrical Conductors:
The velocity factor in copper cables is determined primarily by the dielectric constant (ε) of the insulation material:
VF = 1 / √ε
Common insulation materials:

- Solid polyethylene: ε ≈ 2.25, VF ≈ 0.67
- PTFE (Teflon): ε ≈ 2.1, VF ≈ 0.69
- Foam polyethylene: ε ≈ 1.4–1.6, VF ≈ 0.79–0.85
Foam dielectrics achieve higher velocity factors by incorporating air (ε = 1) into the insulation.
In Optical Fiber:
The velocity factor in fiber is determined by the refractive index (n) of the glass core:
VF = 1 / n
For silica glass, n ≈ 1.48, yielding VF ≈ 0.67. This is a fundamental material property—achieving significantly higher velocities would require materials other than glass.
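As a quick check of the two formulas, here is a minimal Python sketch (the material values are representative examples, not an exhaustive list):

```python
# Velocity factor from dielectric constant (copper) or refractive index (fiber).
import math

def vf_copper(dielectric_constant: float) -> float:
    """VF = 1 / sqrt(epsilon) for an electrical conductor's insulation."""
    return 1 / math.sqrt(dielectric_constant)

def vf_fiber(refractive_index: float) -> float:
    """VF = 1 / n for an optical fiber core."""
    return 1 / refractive_index

print(round(vf_copper(2.25), 2))  # solid polyethylene -> ~0.67
print(round(vf_copper(1.5), 2))   # foam polyethylene  -> ~0.82
print(round(vf_fiber(1.48), 2))   # silica glass       -> ~0.68
```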
Propagation delay is the time required for a signal to travel from source to destination through a medium. It's a fundamental component of network latency that cannot be reduced by technology—only by shortening the physical path.
t_prop = d / v
Where:

- t_prop = propagation delay (seconds)
- d = distance traveled (meters or kilometers)
- v = propagation velocity of the medium (VF × c)
For fiber optic cable with VF ≈ 0.67: propagation delay ≈ 5 microseconds per kilometer (5 μs/km). For copper cable with VF ≈ 0.64: propagation delay ≈ 5.2 microseconds per kilometer. This provides a quick mental estimate for network design.
Example 1: Data Center Cable Run
A Cat6a cable runs 80 meters from a server to a top-of-rack switch.

t_prop = 80 m / 192,000,000 m/s ≈ 0.42 μs

For data center applications, this sub-microsecond delay is negligible compared to processing and queuing delays.
Example 2: Cross-Continental Fiber Link
A fiber connection spans 4,000 km from New York to Los Angeles.

t_prop = 4,000 km × 5 μs/km = 20 ms one-way, or 40 ms round-trip

This 40 ms round-trip is the absolute minimum latency—actual latency will be higher due to routing, processing, and any non-geodesic paths.
Example 3: Transatlantic Submarine Cable
The London-to-New York submarine cable path is approximately 5,600 km.

t_prop = 5,600 km × 5 μs/km = 28 ms one-way, or 56 ms round-trip

Financial trading firms pay millions for slightly shorter cable routes to gain milliseconds of advantage.
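To make the arithmetic reusable, here is a short Python sketch (not part of the original examples) that reproduces the three estimates above; the velocity factors 0.64 and 0.67 are the approximate values from the table earlier on this page:

```python
# Propagation delay for a cable run, from distance and velocity factor.
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def propagation_delay_ms(distance_km: float, velocity_factor: float) -> float:
    """One-way propagation delay in milliseconds."""
    velocity_km_per_s = velocity_factor * C_KM_PER_S
    return distance_km / velocity_km_per_s * 1_000  # seconds -> ms

# Example 1: 80 m of Cat6a copper (VF ~ 0.64)
print(propagation_delay_ms(0.080, 0.64) * 1_000, "us")   # ~0.42 us

# Example 2: 4,000 km of fiber (VF ~ 0.67)
one_way = propagation_delay_ms(4_000, 0.67)
print(one_way, "ms one-way,", 2 * one_way, "ms round-trip")  # ~20 ms / ~40 ms

# Example 3: 5,600 km transatlantic fiber
print(2 * propagation_delay_ms(5_600, 0.67), "ms round-trip")  # ~56 ms
```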
| Distance | Fiber One-Way | Fiber Round-Trip | Context |
|---|---|---|---|
| 100 m | 0.5 μs | 1 μs | Maximum Ethernet twisted pair run |
| 500 m | 2.5 μs | 5 μs | Multimode fiber in building |
| 1 km | 5 μs | 10 μs | Campus network |
| 10 km | 50 μs | 100 μs | Metropolitan connection |
| 100 km | 500 μs | 1 ms | Regional network |
| 1,000 km | 5 ms | 10 ms | Inter-city backbone |
| 10,000 km | 50 ms | 100 ms | Trans-continental |
| 20,000 km | 100 ms | 200 ms | Half global circumference |
Signals in transmission media are fundamentally waves, characterized by frequency and wavelength. Understanding this wave nature is essential for grasping bandwidth, signal quality, and transmission capacity.
v = f × λ
Where:

- v = propagation velocity in the medium (m/s)
- f = frequency (Hz)
- λ = wavelength (m)
Rearranged: λ = v / f
This means that for a given propagation medium, higher frequencies have shorter wavelengths.
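For intuition, the following small Python sketch (assuming a twisted-pair velocity factor of 0.64) computes the physical wavelength at the bandwidths listed in the table below:

```python
# Wavelength on a copper pair: lambda = v / f, with v = VF * c.
v = 0.64 * 299_792_458          # propagation velocity in m/s

for f_mhz in (100, 250, 500):   # Cat5e, Cat6, Cat6a bandwidths
    wavelength_m = v / (f_mhz * 1e6)
    print(f"{f_mhz} MHz -> {wavelength_m:.2f} m")
# 100 MHz -> 1.92 m, 250 MHz -> 0.77 m, 500 MHz -> 0.38 m
```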
Networking standards specify the frequency at which cables must perform:
| Cable Category | Specified Bandwidth (Maximum Frequency) |
|---|---|
| Cat5 | 100 MHz |
| Cat5e | 100 MHz |
| Cat6 | 250 MHz |
| Cat6a | 500 MHz |
| Cat7 | 600 MHz |
| Cat8 | 2,000 MHz |
Why do higher data rates need higher frequencies?
Digital signals can be viewed as combinations of many frequency components. A square wave (the ideal digital signal) theoretically contains an infinite series of ever-higher odd harmonics. To faithfully transmit higher data rates, the medium must pass higher frequencies.
The Nyquist theorem tells us that maximum data rate is proportional to bandwidth:
Maximum Data Rate ≤ 2 × Bandwidth × log2(Signal Levels)
For binary signaling: Maximum Rate ≤ 2 × Bandwidth
In practice, multilevel signaling (like PAM-4) increases bits per symbol, enabling higher data rates within the same bandwidth.
The ultimate limit on channel capacity is given by C = B × log2(1 + SNR), where B is bandwidth and SNR is signal-to-noise ratio. This formula shows that capacity increases with bandwidth and SNR. No communication system can exceed this theoretical limit—it represents the physics of information transmission.
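The two limits are easy to compute directly. A minimal Python sketch follows; the 250 MHz bandwidth and 30 dB SNR are illustrative values chosen here, not figures from the text:

```python
# Nyquist and Shannon limits for a channel.
import math

def nyquist_limit_bps(bandwidth_hz: float, signal_levels: int) -> float:
    """Maximum data rate for noiseless multilevel signaling."""
    return 2 * bandwidth_hz * math.log2(signal_levels)

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Theoretical capacity limit in the presence of noise."""
    return bandwidth_hz * math.log2(1 + snr_linear)

bw = 250e6                                   # e.g. Cat6's 250 MHz
print(nyquist_limit_bps(bw, 2))              # binary signaling: 500 Mbps
print(nyquist_limit_bps(bw, 4))              # PAM-4: 1 Gbps
snr_db = 30
print(shannon_capacity_bps(bw, 10 ** (snr_db / 10)))  # ~2.49 Gbps at 30 dB SNR
```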
Fiber optic systems operate at specific wavelengths of light, measured in nanometers (nm). The choice of wavelength profoundly affects performance:
850 nm (Shortwave): Used with multimode fiber and low-cost VCSEL transmitters for short-reach links; attenuation is relatively high (~2.5 dB/km).

1310 nm (O-band): The zero-dispersion wavelength of standard single-mode fiber; moderate attenuation (~0.35 dB/km) makes it common for metro and access links.

1550 nm (C-band): The attenuation minimum of silica fiber (~0.2 dB/km) and the operating range of EDFA amplifiers; the workhorse of long-haul DWDM.

1625 nm (L-band): Extends DWDM capacity beyond the C-band; also used for in-service monitoring of live fibers.
| Band | Wavelength Range | Attenuation | Primary Use |
|---|---|---|---|
| First Window | 800-900 nm | ~2.5 dB/km | Short-reach multimode |
| O-band | 1260-1360 nm | ~0.35 dB/km | Single-mode, low dispersion |
| E-band | 1360-1460 nm | ~0.4 dB/km | Historical water peak, now usable |
| S-band | 1460-1530 nm | ~0.25 dB/km | Extended WDM systems |
| C-band | 1530-1565 nm | ~0.2 dB/km | Main long-haul DWDM band |
| L-band | 1565-1625 nm | ~0.22 dB/km | Extended capacity in DWDM |
A fascinating consequence of propagation velocity is that a bit occupies a physical length on the cable. This bit length (or bit distance) has important implications for understanding network behavior.
Bit Length = Propagation Velocity / Data Rate
Example: 1 Gbps Ethernet over Cat6

Bit Length = 192,000,000 m/s / 1,000,000,000 bps = 0.192 m

Each bit occupies about 19 centimeters on the cable!
Example: 10 Gbps Ethernet over Cat6a

Bit Length = 192,000,000 m/s / 10,000,000,000 bps = 0.0192 m

At 10 Gbps, each bit is less than 2 cm long.
| Data Rate | Bit Length (Copper VF=0.64) | Bit Length (Fiber VF=0.67) |
|---|---|---|
| 10 Mbps | 19.2 m | 20.0 m |
| 100 Mbps | 1.92 m | 2.0 m |
| 1 Gbps | 19.2 cm | 20.0 cm |
| 10 Gbps | 1.92 cm | 2.0 cm |
| 25 Gbps | 7.7 mm | 8.0 mm |
| 100 Gbps | 1.92 mm | 2.0 mm |
| 400 Gbps | 0.48 mm | 0.5 mm |
Bit length provides concrete intuition about data transmission. Consider: a 100-meter Ethernet cable at 1 Gbps contains about 520 bits 'in flight' at any moment (100m / 0.192m per bit). At 10 Gbps, that same cable holds 5,200 bits in flight. This 'bits in flight' count has implications for protocol design, buffering, and flow control.
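A short Python sketch of the same calculation, using the 0.64 copper velocity factor assumed above, gives the bit length and bits-in-flight figures directly:

```python
# Bit length and "bits in flight" on a cable.
def bit_length_m(data_rate_bps: float, velocity_factor: float = 0.64) -> float:
    """Physical length occupied by one bit on the medium."""
    velocity = velocity_factor * 299_792_458  # m/s
    return velocity / data_rate_bps

def bits_in_flight(cable_m: float, data_rate_bps: float, vf: float = 0.64) -> float:
    return cable_m / bit_length_m(data_rate_bps, vf)

print(bit_length_m(1e9))           # ~0.192 m per bit at 1 Gbps
print(bits_in_flight(100, 1e9))    # ~520 bits on a 100 m cable
print(bits_in_flight(100, 10e9))   # ~5,200 bits at 10 Gbps
```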
The bandwidth-delay product (BDP) generalizes this concept to any link:
BDP = Bandwidth × Round-Trip Delay
The result is the amount of data that can be 'in flight' on the network—transmitted but not yet acknowledged. This is critical for:
TCP Window Sizing: For maximum throughput, TCP's receive window must be at least as large as the BDP. Otherwise, the sender idles waiting for acknowledgments.
Example: Trans-Pacific Link

Assume a 10 Gbps link with a 150 ms round-trip time:

BDP = 10,000,000,000 bps × 0.150 s = 1,500,000,000 bits ≈ 187.5 MB

To fully utilize this link, TCP requires a 187.5 MB window—far exceeding traditional 64 KB window sizes. This is why window scaling options are essential for modern networks.
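The calculation is a one-liner; here is a minimal sketch using the assumed trans-Pacific figures above:

```python
# Bandwidth-delay product: data "in flight" on a link.
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    return bandwidth_bps * rtt_s / 8

window = bdp_bytes(10e9, 0.150)     # 10 Gbps, 150 ms RTT
print(window / 1e6, "MB")           # 187.5 MB
print(window / 65_535)              # ~2,861x a classic unscaled 64 KB window
```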
Buffer Sizing: Switches and routers need sufficient buffer to absorb BDP worth of data during congestion. Under-buffered equipment drops packets on high-BDP links.
Protocol Design: Protocols expecting small networks (sub-millisecond RTT) may behave poorly on high-BDP paths. Connection establishment, flow control algorithms, and timeout calculations all depend on BDP assumptions.
Network latency comprises multiple components. Understanding and distinguishing them is essential for performance analysis.
Propagation delay depends on distance and medium velocity. It's the same whether you send 1 byte or 1 gigabyte—it's the time for the first bit to arrive after it leaves the source.
Transmission delay depends on packet size and link bandwidth. It's the time to serialize all bits of a packet onto the wire.
Analogy: Imagine a highway between two cities. A single car's travel time between them is fixed by the distance and the speed limit—that is propagation delay. How long it takes a group of cars to get onto the highway depends on how many there are and how quickly the on-ramp feeds them—that is transmission delay.
A caravan of 100 cars still takes the same time for the lead car to reach the destination (propagation), but takes longer to completely depart the on-ramp (transmission).
Example 1: LAN Scenario (Short Distance, High Bandwidth)
Propagation: 100 m / 192,000,000 m/s = 0.52 μs
Transmission (1,500-byte frame at 1 Gbps): (1500 × 8 bits) / 1,000,000,000 bps = 12 μs
Transmission dominates (12 μs >> 0.52 μs)
Example 2: WAN Scenario (Long Distance, High Bandwidth)
Propagation: 3,000,000 m / 200,000,000 m/s = 15 ms
Transmission (1,500-byte frame at 10 Gbps): (1500 × 8 bits) / 10,000,000,000 bps = 1.2 μs
Propagation dominates (15 ms >> 1.2 μs)
Example 3: Low-Bandwidth Legacy Link
Propagation: 1,000 m / 200,000,000 m/s = 5 μs
Transmission (1,500-byte frame at 1 Mbps): (1500 × 8 bits) / 1,000,000 bps = 12 ms
Transmission dominates even over moderate distance due to low bandwidth
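The three scenarios can be compared with a short Python sketch (1,500-byte frames assumed throughout, velocities as used in the calculations above):

```python
# Propagation vs. transmission delay for the three scenarios above.
def propagation_s(distance_m: float, velocity_m_s: float) -> float:
    return distance_m / velocity_m_s

def transmission_s(frame_bytes: int, bandwidth_bps: float) -> float:
    return frame_bytes * 8 / bandwidth_bps

scenarios = [
    ("LAN",    100,       192e6, 1e9),   # 100 m copper at 1 Gbps
    ("WAN",    3_000_000, 200e6, 10e9),  # 3,000 km fiber at 10 Gbps
    ("Legacy", 1_000,     200e6, 1e6),   # 1 km fiber at 1 Mbps
]
for name, dist, vel, bw in scenarios:
    p, t = propagation_s(dist, vel), transmission_s(1500, bw)
    print(f"{name}: propagation {p * 1e6:.2f} us, transmission {t * 1e6:.2f} us")
```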
Signal propagation characteristics profoundly influence network protocol design. Understanding these influences reveals why protocols work the way they do.
Ethernet requires a minimum frame size of 64 bytes. This isn't arbitrary—it's a direct consequence of propagation delay and collision detection.
The CSMA/CD Constraint:
In half-duplex Ethernet, a station must detect collisions before finishing transmission. If a collision occurs at the far end of the segment, the collision signal must propagate back before the sender finishes transmitting.
For 10BASE-5 Ethernet:

- Data rate: 10 Mbps (one bit time = 0.1 μs)
- Maximum network diameter: roughly 2,500 m across repeated segments
- Worst-case round-trip delay budget (the slot time): 51.2 μs
- Minimum frame: 10 Mbps × 51.2 μs = 512 bits = 64 bytes
Packets smaller than 64 bytes would complete transmission before collisions could be detected, breaking the CSMA/CD mechanism.
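The reasoning can be expressed as a tiny calculation; this sketch uses the classic 10BASE-5 figures cited above:

```python
# CSMA/CD: a frame must outlast the worst-case round-trip so the sender
# is still transmitting when a far-end collision signal returns.
DATA_RATE_BPS = 10e6      # 10 Mbps
SLOT_TIME_S = 51.2e-6     # worst-case round-trip budget for the segment

min_frame_bits = DATA_RATE_BPS * SLOT_TIME_S
print(min_frame_bits, "bits =", min_frame_bits / 8, "bytes")  # 512 bits = 64 bytes
```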
Modern switched Ethernet operates in full-duplex mode with no collisions, making CSMA/CD obsolete for point-to-point links. However, the 64-byte minimum remains for backward compatibility and because frame check sequences need minimum data to be effective.
TCP's congestion control algorithms depend heavily on round-trip time (RTT), which is dominated by propagation delay over long distances.
Slow Start: TCP doubles its congestion window every RTT. On a 100 ms RTT link, reaching full bandwidth takes many seconds. On a 1 ms LAN, it's nearly instant.
CUBIC, BBR, and High-BDP Networks: Modern congestion control algorithms like CUBIC and BBR are specifically designed for high-BDP networks where traditional algorithms (Reno, NewReno) performed poorly. BBR explicitly measures propagation delay to optimize bandwidth utilization.
Protocol timeouts must account for propagation delay:
- Too short: false timeouts cause unnecessary retransmissions, wasting bandwidth and causing congestion.
- Too long: real failures aren't detected quickly, degrading user experience.
TCP dynamically measures RTT and adjusts timeouts (RTO = SRTT + 4 × RTTVAR), but this requires careful estimation especially when RTT varies.
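A minimal sketch of that estimator follows, using the standard RFC 6298-style weights (the RTT samples are hypothetical, and the clock-granularity term is omitted):

```python
# Smoothed RTT estimation and RTO = SRTT + 4 * RTTVAR.
ALPHA, BETA = 1 / 8, 1 / 4

def update(srtt: float, rttvar: float, sample: float) -> tuple[float, float, float]:
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)  # variance first
    srtt = (1 - ALPHA) * srtt + ALPHA * sample                # then smoothed RTT
    rto = srtt + 4 * rttvar
    return srtt, rttvar, rto

# Hypothetical samples (ms) on a path whose latency suddenly jumps.
srtt, rttvar = 100.0, 25.0
for sample in (100, 102, 98, 150, 155):
    srtt, rttvar, rto = update(srtt, rttvar, sample)
    print(f"sample={sample} ms  srtt={srtt:.1f} ms  rto={rto:.1f} ms")
```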
Chatty Protocols: Protocols requiring many round-trips suffer severely on high-latency links. NFS historically required 5-7 RTTs to open a file—unusable across WANs. Modern protocols like HTTP/2 and gRPC are designed to minimize round-trips.
Request Batching: Pipelining (HTTP/1.1), multiplexing (HTTP/2), and request batching amortize propagation delay across multiple operations.
Prefetching and Speculation: Applications that predict future requests and issue them speculatively can hide propagation delay. Web browsers prefetch DNS and TCP connections; databases prefetch likely-accessed blocks.
Edge Computing and CDNs: When propagation delay to origin is intolerable, move computation closer to users. CDNs exist precisely because propagation delay to distant origins degrades user experience. Edge functions execute at locations near users, minimizing RTT.
At human scales, light is instantaneous. At network scales, it's frustratingly slow. Light takes roughly 100 ms to reach the opposite side of the Earth through fiber—about 200 ms for a full circuit. No technology will ever beat this limit. Protocol design, caching, and edge computing are responses to this fundamental constraint.
Understanding signal propagation enables practical engineering decisions across many domains.
Cable Length Budget: Modern data centers carefully manage cable lengths. With 100 m copper limits, server rows must be positioned within copper reach of distribution switches. When requirements exceed copper distance limits, fiber is deployed—at higher cost but with greater reach.
Latency Optimization: In high-frequency trading environments, even meters matter. Servers are placed to minimize cable runs to network equipment. Premium colocation fees are paid for positions closest to exchange matching engines.
Synchronization: Distributed systems requiring tight time synchronization must account for cable length. PTP (Precision Time Protocol) deployments measure and compensate for propagation delay to achieve sub-microsecond accuracy.
Geographic Placement: Decisions about where to place data centers, edge nodes, and points of presence are fundamentally constrained by propagation delay. Users more than 3,000 km from a server experience noticeable latency (≥30 ms RTT minimum through fiber).
| RTT | User Perception | Suitable For |
|---|---|---|
| < 50 ms | Imperceptible | All interactive applications |
| 50-100 ms | Slightly noticeable | Web browsing, most apps |
| 100-200 ms | Noticeable delay | Web OK, real-time stressed |
| 200-500 ms | Clearly delayed | Email, browsing tolerable |
| > 500 ms | Frustrating | Only batch/offline use |
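A small helper makes the placement reasoning concrete; the thresholds are taken from the table above and the ~5 μs/km fiber rule of thumb:

```python
# Minimum fiber RTT for a server at a given distance, and how users perceive it.
def min_rtt_ms(distance_km: float) -> float:
    return 2 * distance_km * 0.005   # ~5 us/km of fiber, each direction

def perception(rtt_ms: float) -> str:
    if rtt_ms < 50:  return "imperceptible"
    if rtt_ms < 100: return "slightly noticeable"
    if rtt_ms < 200: return "noticeable delay"
    if rtt_ms < 500: return "clearly delayed"
    return "frustrating"

for km in (500, 3_000, 10_000):
    rtt = min_rtt_ms(km)
    print(f"{km} km -> >= {rtt:.0f} ms RTT ({perception(rtt)})")
```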
Industrial and automotive networks use Time-Sensitive Networking to guarantee bounded latency. TSN implementations must precisely account for propagation delays through each link segment to enforce end-to-end timing guarantees.
Financial markets have driven extreme optimization of propagation delay:

- Trading firms colocate servers inside exchange data centers, with cable lengths equalized for fairness.
- Dedicated low-latency fiber routes follow the straightest possible geographic path between major trading hubs.
- Microwave and millimeter-wave links are used where terrain allows, because radio through air propagates at nearly c—faster than light through glass fiber.
Geostationary satellites orbit at 35,786 km altitude. Round-trip to satellite and back:

t = (2 × 35,786 km) / 299,792 km/s ≈ 239 ms

A request-response exchange traverses that path twice, so effective RTT approaches 500 ms before any terrestrial or processing delay. This makes geostationary satellites unsuitable for interactive applications. Modern LEO constellations (Starlink) orbit at ~550 km, reducing RTT to ~20-40 ms—comparable to terrestrial fiber for many routes.
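The arithmetic, as a small Python sketch (it assumes the satellite sits directly overhead, so real slant-range delays are somewhat larger):

```python
# RTT for one request-response over a single ground-satellite-ground hop.
C_KM_S = 299_792  # radio travels at roughly c through space and atmosphere

def hop_rtt_ms(altitude_km: float) -> float:
    return 4 * altitude_km / C_KM_S * 1_000   # up, down, up, down

print(hop_rtt_ms(35_786))   # GEO: ~477 ms before any terrestrial delay
print(hop_rtt_ms(550))      # LEO: ~7 ms (real paths add routing and processing)
```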
We've explored the physics and implications of signal propagation through guided media. Let's consolidate the key concepts:

- Velocity factor: signals travel at roughly 64-67% of c in copper and fiber—about 200,000 km/s.
- Propagation delay: about 5 μs per kilometer of fiber, a physical limit no protocol can overcome.
- Frequency, wavelength, and bandwidth: v = f × λ; higher data rates require media that pass higher frequencies, bounded ultimately by the Shannon capacity.
- Bit length and bandwidth-delay product: bits occupy physical length on the cable, and BDP determines how much data must be in flight to keep a link full.
- Propagation vs. transmission delay: the distance-dominated and bandwidth-dominated components of latency.
- Protocol and architecture consequences: minimum frame sizes, TCP windowing and congestion control, timeouts, CDNs, and edge computing all respond to propagation physics.
What's Next:
The next page explores media characteristics in depth—the specific properties of different cable types including attenuation profiles, bandwidth limits, crosstalk behavior, and how these characteristics determine appropriate applications for each medium type.
You now understand how signals propagate through guided media and can calculate delays, understand bandwidth relationships, and reason about how propagation physics influence network and protocol design. This knowledge is fundamental for performance analysis and network architecture decisions.