In a shared-medium network, multiple stations compete for access to the same communication channel. A fundamental question arises: Does every station get a fair share of the medium? The answer has profound implications for network behavior, user experience, and system design.
Fairness in networking means that stations with equal traffic demands receive approximately equal service—throughput, latency, and success rates should be comparable across stations. Binary Exponential Backoff, despite its elegant design, exhibits subtle fairness characteristics that network designers and administrators must understand.
This page examines fairness from multiple angles: the capture effect and its causes, short-term versus long-term fairness, the 'New Station Advantage,' mathematical analysis of access probability, and practical strategies for ensuring equitable network access.
Before analyzing whether BEB is "fair," we must define what fairness means in the context of MAC protocols. Several definitions exist, each capturing a different aspect of equitable access.
Types of Fairness:
1. Throughput Fairness: Over a long period, each station transmits an equal number of frames (or bytes). If station A sends 1000 frames per minute, station B should also send approximately 1000 frames per minute, assuming equal demand.
2. Access Fairness: Each station has an equal probability of successfully transmitting at any given opportunity. This is a stronger requirement than throughput fairness—it demands fairness at every moment, not just on average.
3. Delay Fairness: Frames from different stations experience similar average delays. Even if throughput is equal, one station suffering consistently higher delays would be unfair.
4. Jain's Fairness Index: A mathematical measure ranging from 0 to 1:
Fairness Index = (Σ xi)² / (n × Σ xi²)
Where xi is the throughput of station i and n is the number of stations. An index of 1 means perfect fairness; lower values indicate unfairness.
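The index is straightforward to compute; a minimal sketch of the formula above:

```python
def jains_index(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2)."""
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    return total * total / (n * sum_sq)

# Perfect fairness: every station gets the same throughput.
print(jains_index([100, 100, 100, 100]))          # → 1.0
# One station captures most of the channel.
print(round(jains_index([400, 50, 50, 50]), 3))   # → 0.451
```

Note how a single dominant station pulls the index well below 1 even though three of the four stations are treated identically.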
| Measure | Formula/Definition | Perfect Value | What It Captures |
|---|---|---|---|
| Throughput Fairness | All stations have equal throughput | Equal rates | Long-term bandwidth share |
| Access Probability Fairness | P(success) equal for all stations | Equal probabilities | Per-attempt fairness |
| Jain's Index | (Σxi)²/(n × Σxi²) | 1.0 | Aggregate statistical fairness |
| Delay Fairness | Similar average delay per station | Equal delays | Quality of experience |
| Max-Min Fairness | Maximize minimum allocation | Optimal allocation | Protecting weakest stations |
Different applications prioritize different fairness types. Real-time applications care about delay fairness; bulk transfers care about throughput fairness. BEB's behavior differs across these dimensions, as we'll explore.
The most significant fairness issue in Binary Exponential Backoff is the capture effect—a phenomenon where a station that just successfully transmitted has a higher probability of transmitting again, effectively "capturing" the channel.
How Capture Occurs:
Consider two stations, A and B, after a collision:
```
Time →

═══ INITIAL COLLISION ═══
Both A and B: collision count = 1, window = [0, 1]
Station A: k = 0 (immediate retry)
Station B: k = 1 (wait 1 slot)

═══ A TRANSMITS SUCCESSFULLY ═══
A sends frame, completes successfully
A's collision counter resets to 0

═══ A HAS ANOTHER FRAME ═══
A: New frame, collision count = 0, waits IFG only
A: Begins transmitting before B's backoff expires!

═══ B STILL WAITING ═══
B: Original backoff not yet expired, or...
B: Sees carrier, must defer!
B: Counter still at 1 from original collision

Result: A has transmitted two frames while B transmitted zero
This is the "capture" effect - A captured the channel

═══ WORSE CASE ═══
If this pattern repeats:
- A continually transmits with small backoff windows
- B's collision counter grows (if it collides again)
- B's window grows, making it even less competitive
- A dominates the channel

The "rich get richer" problem in BEB.
```

The core unfairness: a station with collision count 0 uses window [0,1] (2 choices). A station with collision count 5 uses window [0,31] (32 choices). The first station is 16× more likely to select k=0 for immediate access. This asymmetry creates capture.
Mathematical Model of Capture:
Let station A have just transmitted successfully (collision count reset to 0) and station B have collision count n from a previous collision.
P(A wins next opportunity) = P(A selects small k) × P(B selects larger k)
For A with window [0,1] and B with window [0, 2^n - 1]:
P(A selects 0) = 1/2
P(B selects 0) = 1/2^n
P(A selects 0 AND B selects >0) = (1/2) × (1 - 1/2^n)
As n increases, P(A wins) approaches 1/2, but more importantly,
A on average selects lower values than B.
The expected waiting time for A is 0.5 slots; for B with n = 5, it is 15.5 slots, a 31× advantage for A.
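These quantities can be computed directly; the helper below simply restates the formulas above, with window sizes as defined in the text:

```python
def capture_odds(n):
    """For station A with window [0, 1] and station B with window
    [0, 2^n - 1]: P(A picks 0 and B picks > 0), plus both stations'
    expected backoff waits in slots."""
    p_a_wins = 0.5 * (1 - 1 / 2**n)     # P(A=0) * P(B>0)
    e_wait_a = 0.5                      # mean of uniform {0, 1}
    e_wait_b = (2**n - 1) / 2           # mean of uniform {0, ..., 2^n - 1}
    return p_a_wins, e_wait_a, e_wait_b

p, ea, eb = capture_odds(5)
print(round(p, 4), ea, eb)   # → 0.4844 0.5 15.5
```

As n grows, `p_a_wins` climbs toward 1/2 while B's expected wait doubles with every collision, which is the asymmetry behind capture.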
Binary Exponential Backoff exhibits different fairness characteristics depending on the time scale considered. Understanding this distinction is crucial for evaluating real-world performance.
Short-Term Fairness: Poor
In the short term (seconds to minutes), BEB can be quite unfair: the capture effect lets one station win several consecutive transmission opportunities while its competitors sit out long backoffs. This short-term unfairness manifests as bursty per-station throughput, delay spikes for the losing stations, and temporarily skewed access probabilities.
Long-Term Fairness: Good
Over longer periods, BEB achieves approximate fairness: all stations follow identical rules, so each is equally likely to be the beneficiary of any given capture episode. Over many episodes the roles of "winner" and "loser" rotate, throughput evens out, and Jain's index approaches 1.
Why the Discrepancy?
The difference arises from BEB's memoryless design: the collision counter resets to zero after every success, so no advantage or disadvantage persists beyond the current contention episode. Bias is intense but short-lived, and it averages out across episodes.
The time constant of this averaging depends on traffic patterns. Under sustained heavy load, unfairness persists longer. Under variable load, fairness improves faster.
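This averaging can be seen in a toy simulation. The sketch below is a deliberately simplified slotted model (two saturated stations, windows sized as described earlier, simultaneous transmissions treated as collisions), not the full CSMA/CD state machine:

```python
import random

def simulate_beb(n_slots=200_000, seed=7):
    """Two saturated stations in a simplified slotted model: a station
    transmits when its backoff reaches 0; a lone transmitter succeeds
    and resets its counter; simultaneous transmitters collide and
    double their windows. Returns the winner of each successful slot."""
    random.seed(seed)
    coll = [0, 0]                                  # collision counters
    window = lambda c: 2 ** min(max(c, 1), 10)     # [0,1] up to [0,1023]
    backoff = [random.randrange(2) for _ in range(2)]
    winners = []
    for _ in range(n_slots):
        ready = [i for i in (0, 1) if backoff[i] == 0]
        if len(ready) == 1:                        # success: counter resets
            i = ready[0]
            winners.append(i)
            coll[i] = 0
            backoff[i] = random.randrange(window(0))
        elif len(ready) == 2:                      # collision: windows double
            for i in ready:
                coll[i] += 1
                backoff[i] = random.randrange(window(coll[i]))
        for i in (0, 1):                           # waiting stations count down
            if i not in ready and backoff[i] > 0:
                backoff[i] -= 1
    return winners

def jain(counts):
    n = len(counts)
    return sum(counts) ** 2 / (n * sum(c * c for c in counts))

winners = simulate_beb()
# Long-term: Jain's index over the whole run.
long_term = jain([winners.count(0), winners.count(1)])
# Short-term: Jain's index within each window of 20 successes, averaged.
chunks = [winners[i:i + 20] for i in range(0, len(winners) - 20, 20)]
short_term = sum(jain([c.count(0), c.count(1)]) for c in chunks) / len(chunks)
print(f"long-term Jain: {long_term:.3f}, mean short-window Jain: {short_term:.3f}")
```

Under this model the whole-run index sits near 1 while the average over short windows is visibly lower, which is exactly the short-term/long-term split described above.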
Applications with real-time requirements suffer more from short-term unfairness. A VoIP call experiencing a 100ms delay spike is noticeably degraded, even if average delay is good. Bulk file transfers care less about short-term variations—total time matters, not moment-to-moment throughput.
A specific manifestation of the capture effect is the new station advantage—when a station begins transmitting for the first time, it has a competitive edge over stations already experiencing backoff.
The Scenario:
D has a significant advantage: while A, B, and C are waiting out large backoff values, D consistently chooses from the smallest possible window.
```
Scenario: A, B, C at collision count 4; D at collision count 0

Station A: window [0, 15], E[backoff] = 7.5 slots
Station B: window [0, 15], E[backoff] = 7.5 slots
Station C: window [0, 15], E[backoff] = 7.5 slots
Station D: window [0, 1],  E[backoff] = 0.5 slots

D's expected backoff is 15× shorter than the others'!

P(D selects 0) = 1/2  = 50%
P(A selects 0) = 1/16 = 6.25%
P(B selects 0) = 1/16 = 6.25%
P(C selects 0) = 1/16 = 6.25%

P(D wins if all contend simultaneously):
= P(D selects smaller than all others)
≈ P(D=0 and others>0) + P(D=1 and others>1)
≈ 0.5 × (1 - 1/16)³ + 0.5 × (1 - 2/16)³
≈ 0.5 × 0.82 + 0.5 × 0.67
≈ 0.75

D wins 75% of the time against 3 established stations!

This is the "new station advantage" - newcomers are favored
over stations that have been struggling with congestion.
```

Is This Good or Bad?
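The ≈ 0.75 figure can be checked by exact enumeration over the discrete windows; a small sketch with the station counts and window sizes from the scenario above:

```python
def p_new_station_wins(established=3, est_window=16, new_window=2):
    """Exact probability that newcomer D draws a strictly smaller
    backoff than every established station (ties count as collisions).
    Counts, over all equally likely draws, the outcomes where all
    established stations draw a value greater than D's."""
    wins = 0
    total = new_window * est_window ** established
    for d in range(new_window):
        wins += (est_window - 1 - d) ** established
    return wins / total

p = p_new_station_wins()
print(round(p, 3))  # → 0.747
```

The exact value, 6119/8192 ≈ 0.747, confirms the back-of-envelope estimate in the scenario.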
Arguments both ways exist.
Against (unfair): stations that have already suffered collisions are penalized further; the "rich get richer" dynamic rewards exactly the stations that need no help; and under sustained load, a heavily backed-off station can be starved by a stream of newcomers.
For (beneficial in practice): a new station usually has little backlogged traffic, so serving it promptly improves overall responsiveness; the advantage is temporary, since the newcomer's window doubles the moment it collides; and over time every station occasionally plays the newcomer role, so the long-run effect washes out.
The original Ethernet designers chose not to address this, and their reasoning is instructive.
The new station advantage is a consequence of BEB's elegant simplicity. Adding fairness would require tracking global state or coordinating between stations—exactly what CSMA/CD was designed to avoid. The designers accepted imperfect fairness for simplicity and robustness.
BEB's fairness characteristics vary significantly with network load. Understanding this relationship helps predict network behavior and set appropriate expectations.
Light Load (< 30% utilization): collisions are rare, backoff is seldom invoked, and every station gets essentially immediate access, so fairness is excellent.
| Load Level | Utilization | Collision Rate | Jain's Index | Delay Variance |
|---|---|---|---|---|
| Very Light | < 20% | < 1% | 0.99 | Very Low |
| Light | 20-40% | 1-5% | 0.95 | Low |
| Moderate | 40-60% | 5-15% | 0.90 | Moderate |
| Heavy | 60-80% | 15-30% | 0.80-0.90 | High |
| Saturated | > 80% | > 30% | < 0.80 | Very High |
Moderate Load (30-60% utilization): collisions are frequent enough for backoff windows to diverge; fairness remains good overall, but brief capture episodes begin to appear.
Heavy Load (60-80% utilization): collision counters climb and windows spread widely, so the capture effect becomes pronounced and Jain's index drops into the 0.80-0.90 range.
Saturated Load (> 80% utilization): the channel spends much of its time resolving collisions; short-term fairness collapses, delay variance soars, and heavily backed-off stations can approach starvation.
Ethernet networks should be designed for 30-50% average utilization. Above 60%, both efficiency and fairness degrade significantly. In the classic slotted contention model, the probability that a contention slot yields a success falls toward 1/e ≈ 37% as the number of contending stations grows, so heavy load wastes most slots on collisions or idle backoff, and fairness suffers well before this efficiency limit is reached.
While BEB's fairness limitations are inherent to its design, several strategies can improve practical fairness in Ethernet networks.
1. Use Switches Instead of Hubs:
The most effective solution: eliminate collisions entirely.
2. Protocol-Level Alternatives:
Some environments use alternative MAC protocols with better fairness: token passing (Token Ring), fixed-slot TDMA, and centrally controlled polling all guarantee each station a turn by construction.
3. Modified BEB Algorithms:
Research has proposed BEB variants with better fairness: for example, decreasing the contention window gradually after a success instead of resetting it to the minimum, so a recent winner retains no large advantage over its competitors.
These aren't standard Ethernet but appear in academic research and some proprietary systems.
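One line of such research, in the spirit of MACAW's MILD rule (multiplicative increase, linear decrease), grows the window on collision but shrinks it only one step on success. The sketch below uses illustrative constants and is not a standard implementation:

```python
import random

class MILDBackoff:
    """Backoff in the spirit of MACAW's MILD rule: multiply the
    contention window on collision, shrink it by one on success
    (rather than resetting to the minimum, as BEB does).
    Constants here are illustrative, not from any standard."""
    def __init__(self, w_min=2, w_max=1024):
        self.w_min, self.w_max = w_min, w_max
        self.window = w_min

    def on_collision(self):
        """Multiplicative increase by 1.5×, capped at w_max."""
        self.window = min(int(self.window * 1.5), self.w_max)
        return random.randrange(self.window)

    def on_success(self):
        """Linear decrease: congestion history fades slowly, so a
        recent winner keeps a window similar to its rivals'."""
        self.window = max(self.window - 1, self.w_min)
        return random.randrange(self.window)
```

Because a successful station's window stays close to everyone else's, the winner gains no large head start on the next contention, which removes the root cause of capture at the cost of slower adaptation when congestion clears.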
In practice, the transition from hubs to switches has made BEB fairness largely irrelevant. Modern switched Ethernet doesn't use CSMA/CD at all in normal operation. The fairness concerns remain academically interesting and relevant for wireless networks (which use similar backoff mechanisms) but are solved in wired Ethernet by eliminating shared media.
Comparing BEB's fairness with alternative MAC protocols illuminates the trade-offs involved in protocol design.
Token Ring (IEEE 802.5):
Token passing provides perfect short-term fairness: a station may transmit only while holding the token, and the token circulates in a fixed order, so no station can send a second frame before every other station has had a chance to send one.
However: the token must circulate even when the ring is idle, ring membership must be actively maintained, and a lost token requires a recovery procedure, so this fairness comes at a real cost in complexity and overhead.
| Protocol | Short-Term Fairness | Long-Term Fairness | Under Light Load | Under Heavy Load |
|---|---|---|---|---|
| BEB (Ethernet) | Poor | Good | Excellent efficiency | Fairness degrades |
| Token Ring | Perfect | Perfect | Wasted token rotations | Stable, fair |
| TDMA | Perfect | Perfect | Wasted slots if idle | Fixed allocation |
| Polling | Perfect | Perfect | Poll overhead | Controller bottleneck |
| CSMA/CA (WiFi) | Moderate | Good | Similar to Ethernet | Better than BEB |
CSMA/CA (WiFi 802.11):
WiFi uses a modified backoff similar to BEB but with improvements: a station's backoff countdown pauses while the medium is busy and resumes afterward, so time already waited is never lost. This "residual backoff" means stations that have waited longer are progressively closer to transmitting, rather than redrawing from scratch.
These modifications improve short-term fairness while maintaining BEB's adaptive nature.
Why Ethernet Kept Simple BEB:
Token Ring prioritized guaranteed fairness at the cost of complexity. Ethernet prioritized simplicity and efficiency, accepting imperfect fairness. The market chose Ethernet—simplicity won, and switches eventually eliminated the fairness concerns anyway.
We've thoroughly examined fairness considerations in Binary Exponential Backoff: the capture effect and its "rich get richer" dynamic, the gap between poor short-term and good long-term fairness, the new station advantage, how fairness degrades with load, and why switched Ethernet ultimately made these concerns moot.
Module Complete:
With this page, we've completed our comprehensive examination of Exponential Backoff in Ethernet. You now understand the algorithm's mathematical foundations, its collision-resolution behavior, its performance under load, and its fairness trade-offs.
This knowledge enables deep understanding of Ethernet behavior, network troubleshooting, and appreciation for elegant protocol design.
Congratulations! You've mastered Binary Exponential Backoff—from its mathematical foundations to practical fairness implications. This collision resolution mechanism, simple yet profound, has enabled Ethernet's decades of success and continues to influence modern protocol design.