In the previous page, we explored propagation delay—the irreducible time governed by physics and geography. Now we turn to transmission delay, the second critical timing component. Unlike propagation delay, transmission delay is controllable through engineering choices.
Transmission delay represents the time required to push all the bits of a frame from the sender's network interface onto the transmission medium. Think of it like pouring water from a bucket onto a conveyor belt: the propagation delay is how long the belt takes to carry the water across the room, but the transmission delay is how long it takes to empty the bucket onto the belt.
Here's the key insight: You can empty the bucket faster with a wider spout (more bandwidth), or you can use a smaller bucket (smaller frame). Both approaches reduce transmission delay. This controllability makes transmission delay a primary design variable in network engineering.
For Stop-and-Wait ARQ, transmission delay is the only time the channel is productively used. The ratio of transmission delay to total cycle time directly determines efficiency. Understanding how to calculate and optimize transmission delay is essential for protocol analysis.
By the end of this page, you will understand how to calculate transmission delay for any frame size and bandwidth, analyze the relationship between frame size and protocol efficiency, appreciate bandwidth as a multiplier of transmission speed, and recognize the tradeoffs in frame size selection.
Definition:
Transmission delay (also called emission delay or store-and-forward delay) is the time required to push all bits of a frame onto the transmission medium:
$$T_t = \frac{L}{B}$$
Where:
- $T_t$ = transmission delay (seconds)
- $L$ = frame length (bits)
- $B$ = link bandwidth (bits per second)
Physical Interpretation:
A network interface card (NIC) transmits bits sequentially at the link's data rate. If the bandwidth is 100 Mbps (100 million bits per second), the NIC can push 100 million bits onto the wire every second—or equivalently, one bit every 10 nanoseconds.
For a 12,000-bit frame at 100 Mbps:
$$T_t = \frac{12,000}{100 \times 10^6} = 120 \times 10^{-6} \text{ s} = 120 \text{ μs}$$
The NIC spends 120 microseconds pushing all 12,000 bits onto the wire, one after another.
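The formula is easy to mechanize. A minimal Python sketch (the function name is our own):

```python
def transmission_delay(frame_bits: float, bandwidth_bps: float) -> float:
    """Time to push all bits of a frame onto the medium: T_t = L / B (seconds)."""
    return frame_bits / bandwidth_bps

# 12,000-bit frame at 100 Mbps
t = transmission_delay(12_000, 100e6)
print(f"{t * 1e6:.0f} us")  # 120 us
```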
Transmission delay is determined by bandwidth and frame size—both engineering variables. Propagation delay is determined by distance and signal velocity—physics constants. Even with infinite bandwidth (Tt → 0), you cannot reduce propagation delay. Even with a 1-meter link (Tp → 0), you cannot transmit a frame instantaneously if bandwidth is finite.
The Bit Emission Process:
When a sender transmits a frame:
1. At t = 0, the first bit is pushed onto the medium.
2. Bits follow one after another at the link rate—one bit every 1/B seconds.
3. At t = Tt, the last bit leaves the sender.
4. The first bit reaches the receiver at t = Tp; the last bit arrives at t = Tt + Tp.
Note that the last bit arrives exactly Tt after the first bit: every bit travels at the same propagation velocity, so while in flight the frame stretches across Tt × v meters of the medium.
Frame Length on the Wire:
At any instant, a frame in transit occupies a physical length on the transmission medium:
$$\text{Frame length on wire} = T_t \times v = \frac{L}{B} \times v = \frac{L \times v}{B}$$
Example: a 1500-byte frame (12,000 bits) on a 1 Gbps fiber link with v = 2 × 10⁸ m/s:
$$\text{Wire length} = \frac{12,000 \times 2 \times 10^8}{10^9} = 2,400 \text{ meters} = 2.4 \text{ km}$$
The entire 1500-byte frame, while in transit, occupies 2.4 km of fiber!
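The same arithmetic can be run for other links; a sketch, with `frame_length_on_wire` being our own name:

```python
def frame_length_on_wire(frame_bits: float, bandwidth_bps: float,
                         velocity_mps: float = 2e8) -> float:
    """Physical span a frame occupies while in transit: T_t * v (meters)."""
    return frame_bits / bandwidth_bps * velocity_mps

# 12,000-bit frame on a 1 Gbps fiber link
print(frame_length_on_wire(12_000, 1e9))  # 2400.0 meters
```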
The Inverse Relationship:
Transmission delay is inversely proportional to bandwidth:
$$T_t = \frac{L}{B} \propto \frac{1}{B}$$
Doubling the bandwidth halves the transmission time. This relationship drives the constant pursuit of higher bandwidth in network technology.
Bandwidth Evolution and Impact:
The history of Ethernet illustrates this relationship dramatically:
| Generation | Bandwidth | Transmission Time (1500-byte frame) | Era |
|---|---|---|---|
| Original Ethernet | 10 Mbps | 1.2 ms | 1980s |
| Fast Ethernet | 100 Mbps | 120 μs | 1995 |
| Gigabit Ethernet | 1 Gbps | 12 μs | 1998 |
| 10 Gigabit Ethernet | 10 Gbps | 1.2 μs | 2002 |
| 40 Gigabit Ethernet | 40 Gbps | 300 ns | 2010 |
| 100 Gigabit Ethernet | 100 Gbps | 120 ns | 2010 |
| 400 Gigabit Ethernet | 400 Gbps | 30 ns | 2017 |
The Efficiency Paradox Revisited:
Counter-intuitively, increasing bandwidth can decrease Stop-and-Wait efficiency! Here's why:
As bandwidth increases:
- Transmission delay Tt = L/B shrinks.
- Propagation delay Tp stays fixed (distance and physics do not change).
- The ratio a = Tp/Tt grows.
- Utilization U = 1/(1 + 2a) falls.
Example: 1000 km link, 1500-byte frame
| Bandwidth | Tt | Tp | a | Utilization |
|---|---|---|---|---|
| 1 Mbps | 12 ms | 5 ms | 0.42 | 54.5% |
| 10 Mbps | 1.2 ms | 5 ms | 4.17 | 10.7% |
| 100 Mbps | 120 μs | 5 ms | 41.7 | 1.18% |
| 1 Gbps | 12 μs | 5 ms | 417 | 0.12% |
| 10 Gbps | 1.2 μs | 5 ms | 4,167 | 0.012% |
Increasing bandwidth from 1 Mbps to 10 Gbps improves raw capacity 10,000×, but Stop-and-Wait utilization drops from 54.5% to 0.012%—a 4,542× drop in efficiency!
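The table above can be reproduced in a few lines of Python; adding an effective-throughput column makes the RTT ceiling explicit (a sketch, names our own):

```python
def stop_and_wait_utilization(frame_bits, bandwidth_bps, distance_m,
                              velocity_mps=2e8):
    """U = Tt / (Tt + 2*Tp), i.e. 1 / (1 + 2a) with a = Tp / Tt."""
    t_t = frame_bits / bandwidth_bps
    t_p = distance_m / velocity_mps
    return t_t / (t_t + 2 * t_p)

# 1000 km link, 1500-byte (12,000-bit) frame
for bw in [1e6, 10e6, 100e6, 1e9, 10e9]:
    u = stop_and_wait_utilization(12_000, bw, 1_000_000)
    print(f"{bw / 1e6:>8.0f} Mbps  U = {u:9.4%}  effective = {u * bw / 1e6:.3f} Mbps")
```

The effective-throughput column flattens out near 1.2 Mbps at high bandwidths, exactly the RTT-bound behavior described below.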
With Stop-and-Wait, upgrading from 1 Gbps to 10 Gbps on a 1000 km link provides essentially no throughput improvement. Both links achieve roughly 1.2 Mbps effective throughput (12,000 bits per ~10 ms round trip) because both are limited by RTT, not bandwidth. This is why sliding window protocols are essential for high-bandwidth links.
The Direct Relationship:
Transmission delay is directly proportional to frame size:
$$T_t = \frac{L}{B} \propto L$$
Larger frames mean longer transmission times. However, for Stop-and-Wait, larger frames actually improve efficiency:
$$a = \frac{T_p}{T_t} = \frac{T_p \cdot B}{L} \propto \frac{1}{L}$$
Since a decreases with larger L, utilization U = 1/(1+2a) increases.
Intuitive Explanation:
With larger frames:
- Each transmission keeps the channel busy longer (larger Tt).
- The fixed waiting cost of 2Tp per frame is amortized over more bits.
- A larger fraction of every cycle is spent doing useful work.
Example: 100 Mbps link, 10 km distance
Tp = 10,000 / (2 × 10⁸) = 50 μs
| Frame Size | Tt | a | Utilization | Throughput |
|---|---|---|---|---|
| 100 bytes | 8 μs | 6.25 | 7.4% | 7.4 Mbps |
| 500 bytes | 40 μs | 1.25 | 28.6% | 28.6 Mbps |
| 1,500 bytes | 120 μs | 0.42 | 54.5% | 54.5 Mbps |
| 9,000 bytes | 720 μs | 0.07 | 87.7% | 87.7 Mbps |
Increasing frame size from 100 bytes to 9,000 bytes (jumbo frame) improves throughput from 7.4 Mbps to 87.7 Mbps—an 11.8× improvement!
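The frame-size sweep above can be checked the same way (a sketch with our own variable names):

```python
BANDWIDTH = 100e6        # 100 Mbps
T_P = 10_000 / 2e8       # 10 km at 2e8 m/s = 50 us

for size_bytes in [100, 500, 1500, 9000]:
    t_t = size_bytes * 8 / BANDWIDTH
    u = t_t / (t_t + 2 * T_P)
    print(f"{size_bytes:>5} B  U = {u:5.1%}  throughput = {u * BANDWIDTH / 1e6:4.1f} Mbps")
```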
If larger frames improve efficiency, why not use enormous frames? Several constraints apply: (1) Standard Ethernet MTU is 1500 bytes—larger frames require special configuration. (2) Larger frames increase error probability—one bit error corrupts more data. (3) Larger frames increase latency for other traffic—unfair in multi-user environments. (4) Buffer memory requirements scale with frame size.
Frame Size Constraints:
1. Maximum Transmission Unit (MTU):
| Network Type | Standard MTU | Notes |
|---|---|---|
| Ethernet (standard) | 1,500 bytes | Default, universal |
| Jumbo Frames | 9,000 bytes | Requires end-to-end support |
| Token Ring | 4,464 bytes | Legacy |
| FDDI | 4,352 bytes | Legacy |
| PPPoE | 1,492 bytes | 8 bytes for PPPoE header |
| VPN tunnels | Variable | Depends on encryption overhead |
2. Error Recovery Cost:
If a frame is corrupted, the entire frame must be retransmitted. For larger frames:
- The probability that at least one bit error hits the frame grows with its length.
- Each retransmission wastes more time and bandwidth.
Error cost analysis: with bit error rate p, the probability a frame of L bits is corrupted is 1 − (1 − p)^L ≈ pL for small p, so the expected retransmission overhead grows roughly linearly with frame size.
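A quick numerical illustration of how frame-error probability grows with frame size, assuming independent bit errors at an illustrative rate of 10⁻⁷:

```python
# probability that a frame of n bits contains at least one bit error,
# assuming independent errors at rate p (illustrative value, not from a standard)
p = 1e-7

for size_bytes in [100, 1500, 9000, 65_000]:
    n = size_bytes * 8
    frame_error = 1 - (1 - p) ** n
    print(f"{size_bytes:>6} B  P(frame error) = {frame_error:.4%}")
```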
3. Latency Sensitivity:
Large frames monopolize the link during transmission. For real-time traffic:
- A queued voice or gaming packet must wait for the large frame to finish.
- At 10 Mbps, a 9,000-byte jumbo frame occupies the link for 7.2 ms, a significant slice of a typical voice latency budget.
4. Fragmentation:
When frames exceed path MTU, they must be fragmented:
- Each fragment carries its own header, adding overhead.
- In IPv4, loss of any single fragment forces retransmission of the entire original datagram.
- Fragmentation and reassembly consume CPU and buffer resources at routers and hosts.
Bit Time:
The bit time is the duration of a single bit on the transmission medium:
$$T_{\text{bit}} = \frac{1}{B}$$
Transmission delay is simply frame size multiplied by bit time:
$$T_t = L \times T_{\text{bit}} = L \times \frac{1}{B} = \frac{L}{B}$$
Example Bit Times:
| Bandwidth | Bit Time | Interpretation |
|---|---|---|
| 10 Mbps | 100 ns | One bit every 100 nanoseconds |
| 100 Mbps | 10 ns | One bit every 10 nanoseconds |
| 1 Gbps | 1 ns | One bit every 1 nanosecond |
| 10 Gbps | 100 ps | One bit every 100 picoseconds |
| 100 Gbps | 10 ps | One bit every 10 picoseconds |
Physical Significance:
At 10 Gbps (100-picosecond bit time), a signal in fiber (v ≈ 2 × 10⁸ m/s) travels only 2 centimeters during one bit time! This illustrates the incredible precision required in high-speed network equipment.
Slot Time in Ethernet:
In CSMA/CD Ethernet, the slot time is critical for collision detection. It represents the maximum time to detect a collision—the time for a signal to propagate to the farthest point and back, plus jam signal:
$$T_{\text{slot}} = \text{Maximum RTT in collision domain} + \text{Jam time}$$
For 10 Mbps Ethernet: slot time = 512 bit times = 51.2 μs, supporting a network diameter of roughly 2,500 m.
For 100 Mbps Fast Ethernet: slot time = 512 bit times = 5.12 μs, shrinking the diameter to roughly 200 m.
For Gigabit Ethernet: slot time = 4,096 bit times ≈ 4.096 μs; the slot was enlarged to 512 bytes (with carrier extension) to preserve a usable diameter of roughly 200 m.
Why Slot Time Matters:
For collision detection to work reliably, the sender must still be transmitting when a collision signal returns. The minimum frame size must be at least the slot time:
$$L_{\text{min}} = B \times T_{\text{slot}}$$
For 10 Mbps Ethernet: L_min = 10 × 10⁶ × 51.2 × 10⁻⁶ = 512 bits = 64 bytes
This is why Ethernet has a 64-byte minimum frame size!
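The minimum-frame calculation in Python (a sketch; the helper name is our own):

```python
def min_frame_size_bits(bandwidth_bps: float, slot_time_s: float) -> float:
    """The sender must still be transmitting when a collision echo returns,
    so the frame must outlast the slot time: L_min = B * T_slot."""
    return bandwidth_bps * slot_time_s

bits = min_frame_size_bits(10e6, 51.2e-6)
print(f"{bits / 8:.0f} bytes")  # 64 bytes
```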
The slot time constraint creates an inverse relationship between bandwidth and network diameter. Faster networks must be smaller (in physical extent) to maintain collision detection capability. This is one reason why modern Gigabit+ networks use full-duplex switched Ethernet, eliminating CSMA/CD and its slot time requirements entirely.
When calculating transmission delay, we must consider the total frame size, not just the payload. Every layer adds overhead:
Ethernet Frame Structure:
| Component | Size | Purpose |
|---|---|---|
| Preamble | 7 bytes | Synchronization |
| SFD | 1 byte | Start Frame Delimiter |
| Destination MAC | 6 bytes | Recipient address |
| Source MAC | 6 bytes | Sender address |
| Type/Length | 2 bytes | Protocol identification |
| Payload | 46-1500 bytes | Actual data |
| FCS | 4 bytes | Frame Check Sequence |
| IFG | 12 bytes | Inter-Frame Gap |
| Total Overhead | 38 bytes | Including preamble, SFD, and IFG |
True Transmission Time:
For a 1500-byte payload:
$$L_{\text{total}} = 8 + 6 + 6 + 2 + 1500 + 4 + 12 = 1538 \text{ bytes} = 12,304 \text{ bits}$$
(Including preamble and IFG)
Overhead percentage: 38/1538 = 2.5%
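A quick tally of the frame layout above (field sizes from the table; the dict keys are our own labels):

```python
# byte counts for a full Ethernet frame carrying a 1500-byte payload
fields = {
    "preamble+SFD": 8, "dst MAC": 6, "src MAC": 6, "type": 2,
    "payload": 1500, "FCS": 4, "IFG": 12,
}
total = sum(fields.values())
overhead = total - fields["payload"]
print(total, overhead, f"{overhead / total:.1%}")  # 1538 38 2.5%
```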
Layer Encapsulation:
Real network traffic has multiple layers of headers:
| Layer | Protocol | Typical Header Size |
|---|---|---|
| Physical | Ethernet preamble | 8 bytes |
| Data Link | Ethernet header + FCS | 18 bytes (+ 12 IFG) |
| Network | IPv4 | 20-60 bytes |
| Network | IPv6 | 40 bytes (fixed) |
| Transport | TCP | 20-60 bytes |
| Transport | UDP | 8 bytes |
| Application | HTTP/2 | Variable |
Example: HTTPS Web Request Overhead
For a small 100-byte HTTP response payload:
| Component | Size |
|---|---|
| HTTP response headers | ~50 bytes |
| TLS record | 5 bytes |
| TCP header | 20 bytes |
| IP header | 20 bytes |
| Ethernet header | 14 bytes |
| Ethernet FCS | 4 bytes |
| Preamble + IFG | 20 bytes |
| Total | 233 bytes |
Overhead ratio: 133/233 = 57%
For small payloads, overhead can exceed the actual data!
Small packets (voice, online gaming, IoT sensors) suffer severe overhead penalties. A 20-byte gaming update wrapped in the same ~133 bytes of headers yields a 153-byte frame that is 87% overhead! This is why protocol designers work to aggregate small payloads and why header compression (VoIP, mobile networks) is critical for efficiency.
Efficiency with Overhead:
When analyzing Stop-and-Wait efficiency, consider two separate losses:
- Channel utilization U: the fraction of each cycle spent transmitting.
- Payload efficiency: the fraction of transmitted bits that carry actual data.
Payload Efficiency:
$$\eta_{\text{payload}} = \frac{L_{\text{payload}}}{L_{\text{total}}}$$
Combined Efficiency (utilization × payload efficiency):
$$\eta_{\text{combined}} = U \times \eta_{\text{payload}} = \frac{T_t}{T_t + 2T_p} \times \frac{L_{\text{payload}}}{L_{\text{total}}}$$
Example: with U = 54.5% and η_payload = 1460/1538 ≈ 94.9%, the combined efficiency is 0.545 × 0.949 ≈ 51.7%.
Both utilization losses and overhead losses compound!
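A quick check that the two losses multiply, using assumed link parameters (100 Mbps, 10 km) and a 1460-byte TCP payload in a 1538-byte frame:

```python
def combined_efficiency(frame_bits, payload_bits, bandwidth_bps,
                        distance_m, velocity_mps=2e8):
    """Channel utilization times payload fraction."""
    t_t = frame_bits / bandwidth_bps
    t_p = distance_m / velocity_mps
    utilization = t_t / (t_t + 2 * t_p)
    return utilization * (payload_bits / frame_bits)

eta = combined_efficiency(1538 * 8, 1460 * 8, 100e6, 10_000)
print(f"{eta:.1%}")
```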
Let's work through several transmission delay calculations with varying complexity.
Example 1: Basic Calculation
A host transmits a 500-byte frame over a 10 Mbps link.
$$T_t = \frac{L}{B} = \frac{500 \times 8}{10 \times 10^6} = \frac{4000}{10^7} = 400 \times 10^{-6} = 400 \text{ μs}$$
Example 2: Unit Conversion Challenge
A 2.5 KB file is transmitted over a 45 Mbps T3 link.
$$L = 2.5 \times 1024 \times 8 = 20,480 \text{ bits}$$ $$T_t = \frac{20,480}{45 \times 10^6} = 455 \times 10^{-6} \approx 455 \text{ μs}$$
Example 3: High-Speed Link
A standard 1500-byte Ethernet frame on a 40 Gbps backbone.
$$T_t = \frac{1500 \times 8}{40 \times 10^9} = \frac{12,000}{4 \times 10^{10}} = 3 \times 10^{-7} = 300 \text{ ns}$$
Example 4: Including Overhead
Calculate the transmission time for a 1460-byte TCP payload over 100 Mbps Ethernet.
Total frame size: 1460 (TCP payload) + 20 (TCP header) + 20 (IP header) = 1500 bytes of Ethernet payload; adding the 38 bytes of Ethernet overhead (including preamble, SFD, and IFG) gives 1538 bytes.
$$T_t = \frac{1538 \times 8}{100 \times 10^6} = \frac{12,304}{10^8} = 123.04 \text{ μs}$$
Vs. payload-only calculation: $$T_t^{\text{payload}} = \frac{1460 \times 8}{10^8} = 116.8 \text{ μs}$$
Overhead adds 5.3% to transmission time.
Example 5: Comparative Analysis
Compare transmission times for the same 1500-byte frame across different technologies:
| Technology | Bandwidth | Tt |
|---|---|---|
| Dial-up modem | 56 Kbps | 214 ms |
| DSL | 10 Mbps | 1.2 ms |
| Fast Ethernet | 100 Mbps | 120 μs |
| Gigabit Ethernet | 1 Gbps | 12 μs |
| 10G Ethernet | 10 Gbps | 1.2 μs |
| 100G Ethernet | 100 Gbps | 120 ns |
| 400G Ethernet | 400 Gbps | 30 ns |
Transmission time spans 7 orders of magnitude!
Putting bit time and frame transmission time together for a 1500-byte frame:
| Bandwidth | Bit Time | Frame Tt (1500 bytes) | Frames/second |
|---|---|---|---|
| 1 Mbps | 1 μs | 12 ms | 83 |
| 10 Mbps | 100 ns | 1.2 ms | 833 |
| 100 Mbps | 10 ns | 120 μs | 8,333 |
| 1 Gbps | 1 ns | 12 μs | 83,333 |
| 10 Gbps | 100 ps | 1.2 μs | 833,333 |
| 100 Gbps | 10 ps | 120 ns | 8.3 million |
In multi-hop networks, frames are transmitted multiple times—once on each link. This significantly affects end-to-end delay.
Store-and-Forward Networks:
Most network devices (routers, switches) use store-and-forward processing:
1. Receive the entire frame into a buffer.
2. Verify the frame check sequence (FCS).
3. Look up the output port and retransmit the frame.
Each hop adds a complete transmission delay:
$$T_{\text{total}} = \sum_{i=1}^{n} T_t^{(i)} + \sum_{i=1}^{n} T_p^{(i)}$$
For n hops with identical links:
$$T_{\text{total}} = n \cdot T_t + \sum_{i=1}^{n} T_p^{(i)}$$
Example: 3-Hop Path
A packet traverses three 1 Gbps links with distances 100 km, 500 km, and 50 km.
Frame size: 1500 bytes
Transmission delays: $$T_t = 3 \times \frac{12,000}{10^9} = 3 \times 12 \text{ μs} = 36 \text{ μs}$$
Propagation delays: $$T_p = \frac{100,000 + 500,000 + 50,000}{2 \times 10^8} = \frac{650,000}{2 \times 10^8} = 3.25 \text{ ms}$$
Total: 3.25 ms + 36 μs ≈ 3.29 ms
Propagation dominates!
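The three-hop arithmetic generalizes directly (a sketch; the link-list format is our own):

```python
def store_and_forward_delay(frame_bits, links, velocity_mps=2e8):
    """End-to-end delay over store-and-forward hops.

    links: sequence of (bandwidth_bps, distance_m) pairs, one per hop.
    Each hop contributes a full transmission delay plus its propagation delay.
    """
    return sum(frame_bits / b + d / velocity_mps for b, d in links)

# three 1 Gbps links: 100 km, 500 km, 50 km; 1500-byte frame
delay = store_and_forward_delay(12_000, [(1e9, 100e3), (1e9, 500e3), (1e9, 50e3)])
print(f"{delay * 1e3:.2f} ms")  # 3.29 ms
```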
Some switches use cut-through switching, which begins forwarding as soon as the destination address is read (first 6 bytes). This eliminates most of the store-and-forward delay, reducing hop-by-hop transmission delay to near-zero. However, it cannot perform error checking and is only used in low-error environments.
Heterogeneous Networks:
Real networks have links with different bandwidths. Each link contributes its own transmission delay:
$$T_t^{\text{total}} = \frac{L}{B_1} + \frac{L}{B_2} + ... + \frac{L}{B_n}$$
Example: Campus to Internet
Path: Workstation → Building Switch → Core Router → ISP Router → Internet
| Link | Bandwidth | Distance | Tt | Tp |
|---|---|---|---|---|
| Workstation → Building | 1 Gbps | 100 m | 12 μs | 0.5 μs |
| Building → Core | 10 Gbps | 1 km | 1.2 μs | 5 μs |
| Core → ISP | 10 Gbps | 50 km | 1.2 μs | 250 μs |
| ISP → Internet | 100 Gbps | 200 km | 0.12 μs | 1 ms |
Total Tt: 12 + 1.2 + 1.2 + 0.12 = 14.52 μs
Total Tp: 0.5 + 5 + 250 + 1000 = 1,255.5 μs ≈ 1.26 ms
The slowest link (1 Gbps) dominates transmission delay despite being the shortest!
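The same per-link sums, computed for the campus path (values from the example; the tuple layout is our own):

```python
# (bandwidth_bps, distance_m) per hop, from the campus-to-Internet example
path = [(1e9, 100), (10e9, 1_000), (10e9, 50_000), (100e9, 200_000)]
frame_bits = 12_000  # 1500-byte frame

total_tt = sum(frame_bits / bw for bw, _ in path)
total_tp = sum(dist / 2e8 for _, dist in path)
print(f"Tt = {total_tt * 1e6:.2f} us, Tp = {total_tp * 1e6:.1f} us")
```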
The Bottleneck Effect:
Overall throughput is limited by the slowest link—the bottleneck:
$$\text{Throughput}_{\text{max}} = \min(B_1, B_2, ..., B_n)$$
Upgrading non-bottleneck links provides no throughput benefit.
This page provided comprehensive coverage of transmission delay—the controllable timing component that determines how long it takes to push frame bits onto the transmission medium.
You now have a thorough understanding of transmission delay and its role in Stop-and-Wait efficiency. In the next page, we'll synthesize our knowledge of propagation and transmission delays to analyze Round-Trip Time (RTT)—the complete cycle time that determines protocol efficiency.