Throughput is the ultimate measure of network utility—how much data actually flows from source to destination per unit time. While bandwidth represents the theoretical maximum capacity, throughput reflects what users actually experience. The gap between the two reveals inefficiencies, bottlenecks, and design limitations.
Why throughput calculations matter: Understanding throughput helps you predict application performance, diagnose bottlenecks, size network infrastructure, and answer interview questions accurately. A network engineer who can calculate expected throughput under various conditions provides immense value to any organization.
By the end of this page, you will: (1) Distinguish between bandwidth, throughput, and goodput, (2) Calculate maximum throughput limited by window size, (3) Apply the TCP throughput equation (Mathis formula), (4) Analyze end-to-end throughput with bottleneck links, and (5) Account for overhead in realistic throughput estimates.
These three terms are often confused but have distinct meanings that matter for accurate calculations and clear communication.
Bandwidth (Capacity): The maximum theoretical data rate a link can carry, determined by physical properties and technology. Often stated in Mbps or Gbps.
Throughput: The actual rate at which data is successfully delivered over a path, including all protocol overhead. Always ≤ bandwidth.
Goodput: The rate at which useful application data is delivered, excluding protocol overhead. Always ≤ throughput.
The relationship is: $$Goodput \leq Throughput \leq Bandwidth$$
| Metric | What It Measures | Example (100 Mbps link) | Affected By |
|---|---|---|---|
| Bandwidth | Link capacity (theoretical max) | 100 Mbps | Physical layer technology |
| Throughput | Bits actually transferred (including headers) | 85 Mbps | Protocol efficiency, errors, congestion |
| Goodput | User data delivered (payload only) | 78 Mbps | All above + header overhead |
Several factors reduce throughput below bandwidth:
| Factor | Impact | Typical Reduction |
|---|---|---|
| Protocol overhead | Headers consume bandwidth | 3-10% |
| ACK traffic | Return path carries acknowledgments | 3-5% |
| Retransmissions | Lost packets must be re-sent | Variable (1-50%+) |
| Window limitations | Can't fill the pipe fully | Variable |
| Congestion control | Sender throttles voluntarily | Variable |
| Half-duplex operation | Switching direction takes time | Up to 50% |
| Coding overhead | FEC in wireless systems | 10-50% |
For well-tuned TCP connections on low-loss paths, expect throughput around 80-95% of link bandwidth. For poorly-tuned connections (small windows, high RTT) or lossy paths (wireless), throughput can drop to 10-50% of bandwidth. Always ask yourself: what's limiting throughput in this scenario?
When a reliable transport protocol (like TCP) uses windowed flow control, the window size can become the limiting factor for throughput.
With a sliding window protocol, the maximum throughput is:
$$Throughput_{max} = \frac{Window\ Size}{RTT}$$
This follows because the sender can have at most one window's worth of unacknowledged data in flight, and it takes one full RTT for acknowledgments to return and allow the window to slide forward.
Throughput is window-limited when: $$\frac{Window\ Size}{RTT} < Bandwidth$$
Or equivalently, when: $$Window\ Size < Bandwidth \times RTT = BDP$$
The throughput equation becomes:
$$Throughput = \min\left(Bandwidth, \frac{Window}{RTT}\right)$$
If Window ≥ BDP, throughput can reach link bandwidth (assuming no other limits). If Window < BDP, throughput is capped at Window/RTT regardless of link speed.
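The min() relationship above can be sketched as a small helper. This is a minimal illustration (the function name and units are my own choices, not from the text); it reproduces the window-limited case worked through in the example that follows.

```python
def throughput_bps(bandwidth_bps: float, window_bytes: float, rtt_s: float) -> float:
    """Throughput is the lesser of link bandwidth and the window-limited rate."""
    window_limited = window_bytes * 8 / rtt_s  # convert bytes to bits, divide by RTT
    return min(bandwidth_bps, window_limited)

# 64 KB window, 150 ms RTT, 1 Gbps link: window-limited to ~3.5 Mbps
print(f"{throughput_bps(1e9, 64 * 1024, 0.150) / 1e6:.2f} Mbps")
```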
Window size: 64 KB = 65,536 bytes = 524,288 bits
RTT: 150 ms (US West Coast to Tokyo)
Link bandwidth: 1 Gbps

Window-limited throughput:
TP = Window / RTT
= 524,288 / 0.15
= 3,495,253 bps
≈ **3.5 Mbps**
Bandwidth: 1 Gbps = 1000 Mbps
Utilization = 3.5 / 1000 = **0.35%**

Despite having a 1 Gbps link, this connection achieves only 3.5 Mbps (0.35% utilization) because the window is too small. The BDP is 1 Gbps × 0.15s = 150 Mb = 18.75 MB. With only a 64 KB window, the pipe is nearly empty. Solution: enable window scaling to increase the window to at least 18.75 MB.
Link bandwidth: 10 Gbps
RTT: 50 ms (New York to Los Angeles)
Target: 100% utilization

Required window = BDP = Bandwidth × RTT
= 10 × 10^9 × 0.05
= 500,000,000 bits
= 62,500,000 bytes
= **62.5 MB**
With 14-bit window scaling (max scale = 2^14 = 16384):
Max window = 65535 × 16384 = 1.07 GB ✓ Sufficient

To fully utilize this 10 Gbps link, TCP needs a 62.5 MB window—about 1000x the default 64 KB. This is achievable with window scaling (RFC 7323), which modern operating systems enable by default. OS socket buffer sizes should also be set to at least this value.
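The required-window calculation is just the BDP rearranged. A minimal sketch (function name is illustrative) that reproduces the 62.5 MB figure from this example:

```python
def required_window_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Window needed to fill the pipe: the bandwidth-delay product, in bytes."""
    return bandwidth_bps * rtt_s / 8

# 10 Gbps link, 50 ms RTT
print(f"{required_window_bytes(10e9, 0.050) / 1e6:.1f} MB")  # 62.5 MB
```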
When packet loss is present, TCP's congestion control limits throughput even if window size is adequate. The Mathis formula (also called the TCP throughput equation) models this relationship:
$$Throughput \approx \frac{MSS}{RTT} \times \frac{C}{\sqrt{p}}$$
Where:
- MSS = maximum segment size (bytes)
- RTT = round-trip time (seconds)
- p = packet loss probability
- C ≈ 1.22, a constant derived from TCP's AIMD behavior
This can also be written as:
$$Throughput \approx \frac{1.22 \times MSS}{RTT \times \sqrt{p}}$$
Why 1/√p? TCP's AIMD (Additive Increase, Multiplicative Decrease) congestion control oscillates around an average window size. Analysis shows the average window is proportional to 1/√p. Specifically, the average congestion window in segments is approximately:
$$W_{avg} \approx \frac{1.22}{\sqrt{p}}$$
Why MSS/RTT? Each window delivers W segments of MSS bytes each, and this happens once per RTT. So:
$$Throughput = \frac{W \times MSS}{RTT} = \frac{1.22 \times MSS}{RTT \times \sqrt{p}}$$
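The Mathis formula translates directly to code. This is a small sketch under the same assumptions as the derivation above (the function name is my own); it reproduces the worked example below (MSS 1460 B, RTT 80 ms, 0.1% loss → ≈ 5.63 Mbps).

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis approximation: throughput ≈ 1.22 × MSS / (RTT × √p), in bits/s."""
    return 1.22 * mss_bytes * 8 / (rtt_s * math.sqrt(loss_rate))

print(f"{mathis_throughput_bps(1460, 0.080, 0.001) / 1e6:.2f} Mbps")  # ≈ 5.63 Mbps
```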
| Loss Rate (p) | Relative Throughput (∝ 1/√p) | Real-World Scenario |
|---|---|---|
| 0.01% (10^-4) | 100× baseline | Well-managed wired network |
| 0.1% (10^-3) | 31.6× baseline | Lightly congested network |
| 1% (10^-2) | 10× baseline | Congested or wireless |
| 5% (5×10^-2) | 4.5× baseline | Very congested/poor wireless |
| 10% (10^-1) | 3.2× baseline | Severe problems |
MSS: 1460 bytes = 11,680 bits
RTT: 80 ms
Loss rate: 0.1% (p = 0.001)

Throughput = 1.22 × MSS / (RTT × √p)
= 1.22 × 1460 × 8 / (0.08 × √0.001)
= 14,249.6 / (0.08 × 0.0316)
= 14,249.6 / 0.00253
= **5.63 Mbps**

Even 0.1% loss on an 80 ms RTT path limits TCP throughput to about 5.6 Mbps. On a 10 Gbps link, this represents only 0.056% utilization! This explains why long-distance, lossy paths (like satellite or poorly-managed WAN links) often feel slow despite high bandwidth. Solutions: loss-tolerant protocols (QUIC), forward error correction, or reducing loss through better network engineering.
Doubling RTT halves throughput. Quadrupling loss rate halves throughput (due to √). A path with 2x RTT and 4x loss has 25% the throughput of the baseline. This is why international connections over congested networks perform so poorly—both factors multiply against performance.
In any multi-link path, the bottleneck link is the link with the minimum bandwidth. It limits the maximum possible throughput for the entire path.
$$Throughput_{path} \leq \min(R_1, R_2, \ldots, R_n)$$
No matter how fast other links are, data cannot flow faster than the slowest link allows. This is analogous to water flowing through pipes—the narrowest pipe limits overall flow.
In practice, the bottleneck might not be the lowest-bandwidth link if:
- Another link is heavily congested, so queuing rather than capacity is the constraint
- A middlebox (firewall, NAT, load balancer) limits packet-processing rate
- The endpoints themselves (CPU, disk, application) cannot keep up
When N flows share a bottleneck link with capacity C, each flow gets approximately C/N (under fair queuing). But if flows have different RTTs, shorter-RTT flows tend to grab more bandwidth (RTT unfairness in TCP).
Link 1 (Access): 100 Mbps
Link 2 (Metro): 1 Gbps
Link 3 (Core): 10 Gbps
Link 4 (Server): 10 Gbps

Bottleneck = min(100, 1000, 10000, 10000) Mbps
= **100 Mbps** (Link 1)
Max throughput = 100 Mbps
Note: The multi-gigabit core and server links are irrelevant; the access link determines performance.

This is the typical access network bottleneck scenario. Home/office connections (Link 1) are usually the constraint. Upgrading core network links provides no benefit—only upgrading the access link improves user experience. This is why 'last mile' infrastructure matters so much.
Bottleneck link: 1 Gbps
Number of flows: 100 (all TCP, similar RTTs)
Assuming fair sharing

Per-flow throughput ≈ 1 Gbps / 100
= 10 Mbps per flow
Actual throughput may vary based on:
- RTT differences (shorter RTT gets more)
- Loss rates on each path
- Window size limitations

With 100 flows sharing a 1 Gbps link, each gets roughly 10 Mbps under fair conditions. In reality, flows with shorter RTTs react faster to available bandwidth and capture more capacity (TCP RTT unfairness). This is why video streaming from a nearby CDN performs better than from a distant server, even if they share the same bottleneck.
Real throughput calculations must account for protocol overhead at each layer. The effective throughput (goodput) excludes headers:
$$Goodput = Throughput \times \frac{Payload}{Payload + Headers}$$
Or equivalently:
$$Goodput = Throughput \times \eta_{transmission}$$
Where η is the transmission efficiency from the previous section.
For TCP/IP over Ethernet, typical overhead per 1500-byte frame:
| Layer | Header Size | Notes |
|---|---|---|
| Ethernet | 18 bytes | 14 header + 4 FCS |
| IP | 20 bytes | Without options |
| TCP | 20 bytes | Without options |
| Total overhead | 58 bytes | Per packet |
| MSS (payload) | 1460 bytes | 1500-byte MTU minus IP/TCP headers |
Transmission efficiency = 1460 / (1460 + 58) = 96.2%
So for a 1 Gbps link: Goodput ≈ 962 Mbps of actual application data.
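The goodput formula is a one-line scaling. A minimal sketch (function name and default values are illustrative, using the TCP/IP-over-Ethernet overhead figures above):

```python
def goodput_bps(throughput_bps: float, payload_bytes: int = 1460,
                header_bytes: int = 58) -> float:
    """Scale throughput by transmission efficiency to exclude header overhead."""
    return throughput_bps * payload_bytes / (payload_bytes + header_bytes)

print(f"{goodput_bps(1e9) / 1e6:.0f} Mbps")  # ≈ 962 Mbps on a 1 Gbps link
```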
Link bandwidth: 1 Gbps
RTT: 20 ms
Window size: 4 MB (adequate for BDP)
Loss rate: 0.01%
MSS: 1460 bytes
Protocol overhead: 58 bytes per packet

Step 1: Check window limitation
BDP = 1 Gbps × 20 ms = 20 Mb = 2.5 MB
Window (4 MB) > BDP ✓ Not window limited
Step 2: Check loss limitation
TP_mathis = 1.22 × 1460 × 8 / (0.02 × √0.0001)
= 14249.6 / (0.02 × 0.01)
= 14249.6 / 0.0002
= 71.2 Mbps
Step 3: Actual throughput is min of bandwidth and loss-limited:
Throughput = min(1000 Mbps, 71.2 Mbps) = 71.2 Mbps
Step 4: Apply overhead
Goodput = 71.2 × (1460/1518) = 71.2 × 0.962
= **68.5 Mbps**

Despite a 1 Gbps link and adequate window, 0.01% loss limits throughput to ~71 Mbps, and after overhead we get 68.5 Mbps of application data—less than 7% of link capacity! Even small loss rates (1 in 10,000 packets) can devastate TCP performance on moderate-RTT paths. This is why loss matters more than raw bandwidth for many applications.
When frames or packets are lost and must be retransmitted, effective throughput decreases. The impact depends on the loss probability and the recovery mechanism used.
With loss probability p and ARQ (Automatic Repeat reQuest):
Average transmissions per packet: $$E[transmissions] = \frac{1}{1-p}$$
Effective throughput: $$Throughput_{effective} = Throughput_{raw} \times (1-p)$$
For example, 10% loss means 1/(1-0.1) = 1.11 transmissions per packet, or 90% of raw throughput.
Go-Back-N: When a frame is lost, all subsequent frames are retransmitted. Expected transmissions per frame: $$E_{GBN} = \frac{1 - p + Wp}{1-p}$$
Where W is the window size.
Selective Repeat: Only lost frames are retransmitted. Expected transmissions per frame: $$E_{SR} = \frac{1}{1-p}$$
Selective Repeat is more efficient under loss because Go-Back-N wastes bandwidth retransmitting correctly-received frames.
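The two expected-transmission formulas can be compared side by side. A minimal sketch (function names are my own) that reproduces the GBN-vs-SR worked example that follows (p = 0.077, W = 20, 100 Mbps link):

```python
def expected_tx_gbn(p: float, window: int) -> float:
    """Expected transmissions per frame under Go-Back-N: (1 - p + Wp) / (1 - p)."""
    return (1 - p + window * p) / (1 - p)

def expected_tx_sr(p: float) -> float:
    """Expected transmissions per frame under Selective Repeat: 1 / (1 - p)."""
    return 1 / (1 - p)

p, w = 0.077, 20
print(f"GBN: {100 / expected_tx_gbn(p, w):.1f} Mbps")  # ≈ 37.5 Mbps
print(f"SR:  {100 / expected_tx_sr(p):.1f} Mbps")      # ≈ 92.3 Mbps
```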
Link bandwidth: 100 Mbps
Frame size: 1000 bytes
Window size: 20 frames
Bit error rate: 10^-5
Frame error rate: 1 - (1-10^-5)^8000 ≈ 0.077 (7.7%)

Raw throughput: 100 Mbps
Go-Back-N:
E_GBN = (1 - 0.077 + 20×0.077) / (1 - 0.077)
= (0.923 + 1.54) / 0.923
= 2.67 frames/frame
Throughput_GBN = 100 / 2.67 = **37.5 Mbps**
Selective Repeat:
E_SR = 1 / (1 - 0.077) = 1.083 frames/frame
Throughput_SR = 100 / 1.083 = **92.3 Mbps**

With 7.7% frame loss and window=20, GBN achieves only 37.5% of capacity while SR achieves 92.3%. The difference is dramatic because GBN retransmits many correctly-received frames. This is why modern protocols (TCP SACK, QUIC) use selective acknowledgment—the efficiency gain is substantial under loss.
Different application-layer protocols have vastly different throughput characteristics based on their design and use patterns.
| Protocol | Typical Efficiency | Limiting Factors | Optimization Strategy |
|---|---|---|---|
| HTTP/1.1 | 60-80% | Serial requests, connection overhead | Persistent connections, pipelining |
| HTTP/2 | 80-95% | Head-of-line blocking (at TCP layer) | Multiplexing, header compression |
| HTTP/3 (QUIC) | 85-97% | Encryption CPU cost, UDP blocking on some networks | Independent streams (no TCP HOL), per-stream reliability |
| FTP (data transfer) | 90-97% | Separate control connection | Passive mode, large buffers |
| SMB/CIFS | 50-80% | Chatty protocol, many round trips | SMB3 multichannel, larger reads |
| VoIP (RTP) | 60-80% | Small packets, high overhead | Header compression, larger packetization |
| Video streaming | 85-95% | Adaptive bitrate algorithm | CDN placement, pre-buffering |
For request-response protocols with small messages, throughput is often measured in transactions per second rather than bits per second:
$$TPS = \frac{1}{RTT + Processing\ Time}$$
Or with pipelining (N concurrent requests):
$$TPS = \frac{N}{RTT + Processing\ Time}$$
For example, a database query with 10 ms RTT and 5 ms processing yields TPS = 1 / 0.015 ≈ 67 transactions per second, regardless of link bandwidth. With 10 requests pipelined, this rises to about 667 TPS.
For chatty protocols (many small transactions), throughput is limited by RTT and server processing, not bandwidth. A 10 Gbps link doesn't help if each transaction requires a round trip. This is why edge computing, connection pooling, and protocol redesign (batching, multiplexing) matter more than raw bandwidth for many workloads.
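The TPS formulas above amount to a one-line calculation. A minimal sketch (function name is illustrative), using the database-query example's numbers:

```python
def tps(rtt_s: float, processing_s: float, concurrency: int = 1) -> float:
    """Transactions/second for a request-response protocol: N / (RTT + processing)."""
    return concurrency / (rtt_s + processing_s)

print(f"{tps(0.010, 0.005):.1f} TPS serial")                      # ≈ 66.7
print(f"{tps(0.010, 0.005, concurrency=10):.0f} TPS pipelined")   # ≈ 667
```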
When multiple flows or bidirectional traffic share a link, calculating aggregate throughput requires careful consideration of directionality and sharing.
Half-duplex (one direction at a time): $$Throughput_{total} = Bandwidth$$
Direction switching incurs overhead, typically 5-15% efficiency loss.
Full-duplex (simultaneous both directions): $$Throughput_{total} = 2 \times Bandwidth$$
Each direction can transmit at full rate simultaneously.
Many access technologies have asymmetric bandwidth:
| Technology | Download | Upload | Ratio |
|---|---|---|---|
| Cable (DOCSIS 3.1) | 1 Gbps | 35 Mbps | 28:1 |
| DSL (ADSL2+) | 24 Mbps | 3.3 Mbps | 7:1 |
| Satellite (GEO) | 100 Mbps | 3 Mbps | 33:1 |
| Fiber (GPON) | 2.5 Gbps | 1.25 Gbps | 2:1 |
For asymmetric links, the direction of data flow matters: $$Throughput_{download} \neq Throughput_{upload}$$
Link: 10 Mbps download, 2 Mbps upload (asymmetric DSL)
Video call requirements:
- 720p video: 1.5 Mbps each direction
- Audio: 100 kbps each direction
- ACK traffic: negligible

Required throughput:
- Download: 1.5 + 0.1 = 1.6 Mbps ✓ (< 10 Mbps)
- Upload: 1.5 + 0.1 = 1.6 Mbps ✓ (< 2 Mbps)
Video call works with margin:
- Download utilization: 16%
- Upload utilization: 80%
If second call added:
- Download: 3.2 Mbps ✓
- Upload: 3.2 Mbps ✗ (exceeds 2 Mbps!)
Second participant would need to reduce quality or experience congestion.

Upload bandwidth is often the constraint for symmetric applications like video calls. One 720p call uses 80% of upload, leaving little headroom. This asymmetry explains why 'my internet is fast but video calls are choppy'—fast download doesn't help when upload is saturated.
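The feasibility check in this example generalizes to any number of symmetric calls. A minimal sketch (function name and the 1.6 Mbps per-call figure come from the example above; the helper itself is my own):

```python
def calls_fit(down_cap_mbps: float, up_cap_mbps: float, calls: int,
              per_call_mbps: float = 1.6) -> bool:
    """Check whether N symmetric video calls fit within each direction's capacity."""
    need = calls * per_call_mbps
    return need <= down_cap_mbps and need <= up_cap_mbps

print(calls_fit(10, 2, 1))  # True: 1.6 Mbps fits both directions
print(calls_fit(10, 2, 2))  # False: 3.2 Mbps exceeds the 2 Mbps uplink
```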
When solving throughput problems, identify which factor is the bottleneck:
Step 1: List all potential limits (link bandwidth, window-limited rate, the Mathis loss limit, and any application or processing constraints)
Step 2: Calculate each limit's throughput
Step 3: The actual throughput is the minimum $$Throughput = \min(BW, Window/RTT, Mathis\ limit, ...)$$
Step 4: Apply efficiency factor for goodput $$Goodput = Throughput \times \frac{Payload}{Payload + Headers}$$
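The four-step framework above can be combined into a single estimator. This is a sketch under the section's assumptions (function name and parameter defaults are my own); it reproduces the earlier combined worked example (1 Gbps, 4 MB window, 20 ms RTT, 0.01% loss → ≈ 68.5 Mbps of goodput).

```python
import math

def estimate_goodput_bps(bandwidth_bps: float, window_bytes: float, rtt_s: float,
                         loss_rate: float, mss_bytes: int = 1460,
                         header_bytes: int = 58) -> float:
    """Throughput = min of all limits; goodput then strips header overhead."""
    limits = [bandwidth_bps, window_bytes * 8 / rtt_s]      # bandwidth, window/RTT
    if loss_rate > 0:
        # Mathis loss limit: 1.22 × MSS / (RTT × √p)
        limits.append(1.22 * mss_bytes * 8 / (rtt_s * math.sqrt(loss_rate)))
    throughput = min(limits)
    return throughput * mss_bytes / (mss_bytes + header_bytes)

print(f"{estimate_goodput_bps(1e9, 4 * 2**20, 0.020, 0.0001) / 1e6:.1f} Mbps")
```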
Throughput calculations reveal what users actually experience, not just theoretical capacity. The key insights:
- Actual throughput is the minimum of all limits: link bandwidth, Window/RTT, and the Mathis loss limit.
- The window must cover the bandwidth-delay product (BDP) to fill the pipe.
- Even small loss rates devastate TCP throughput via the 1/√p effect, especially on high-RTT paths.
- The bottleneck link—often the access link—caps end-to-end throughput, no matter how fast the rest of the path is.
- Goodput is always lower than throughput once header overhead is subtracted.
You now understand how to calculate network throughput under various conditions. The next page covers subnetting problems—how to divide IP address space, calculate host counts, and design addressing schemes for real network deployments.