Having understood why backoff is necessary and how collisions are detected and handled, we now turn to the precise mechanics of the Binary Exponential Backoff (BEB) algorithm as specified in IEEE 802.3. This section provides the exact formulation that every Ethernet NIC implements, explaining each component and design decision.
The BEB algorithm is deceptively simple in its specification—just a few lines capture decades of research into optimal collision resolution. Yet every parameter choice represents a carefully considered trade-off between recovery speed, fairness, and stability. Understanding these choices transforms the algorithm from a 'magic formula' into a comprehensible engineering solution.
This page presents the official IEEE 802.3 backoff algorithm, explains the truncation mechanism and why it's necessary, walks through step-by-step examples, and provides pseudocode suitable for implementation.
The Binary Exponential Backoff algorithm is formally specified in IEEE 802.3 Section 4.2.3.2.5 (Collision Backoff and Retransmission). The algorithm can be stated precisely as follows:
Official Algorithm:
After experiencing collision number n (where n ranges from 1 to 16):
```
Binary Exponential Backoff (IEEE 802.3)
════════════════════════════════════════

Given:
    n      = collision number (1 ≤ n ≤ 16)
    T_slot = slot time (512 bit-times at 10 Mbps = 51.2 μs)

Algorithm:
    1. k ← min(n, 10)                 // Truncate at 10 collisions
    2. r ← random(0, 2^k - 1)         // Uniform random integer
    3. T_backoff ← r × T_slot         // Calculate wait time
    4. Wait T_backoff
    5. If medium idle, retransmit
    6. If n > 16, abort and report error

Window Evolution:
    n=1:     k=1,  range [0, 1],    2 choices,     max wait = 1 slot
    n=2:     k=2,  range [0, 3],    4 choices,     max wait = 3 slots
    n=3:     k=3,  range [0, 7],    8 choices,     max wait = 7 slots
    n=4:     k=4,  range [0, 15],   16 choices,    max wait = 15 slots
    n=5:     k=5,  range [0, 31],   32 choices,    max wait = 31 slots
    n=6:     k=6,  range [0, 63],   64 choices,    max wait = 63 slots
    n=7:     k=7,  range [0, 127],  128 choices,   max wait = 127 slots
    n=8:     k=8,  range [0, 255],  256 choices,   max wait = 255 slots
    n=9:     k=9,  range [0, 511],  512 choices,   max wait = 511 slots
    n=10:    k=10, range [0, 1023], 1024 choices,  max wait = 1023 slots
    n=11-16: k=10 (truncated), same as n=10
```

Three magic numbers define the algorithm: 2 (the base), 10 (the truncation limit), and 16 (the retry limit). Each is carefully chosen based on mathematical analysis and practical experience.
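The algorithm above maps directly to a few lines of code. A minimal Python sketch (the function names `backoff_window` and `backoff_time_us` are ours, not from the standard):

```python
import random

SLOT_TIME_US = 51.2          # 512 bit-times at 10 Mbps
MAX_BACKOFF_EXPONENT = 10    # truncation limit
MAX_RETRIES = 16             # abort threshold

def backoff_window(n: int) -> int:
    """Upper bound of the backoff range [0, 2^min(n,10) - 1] after collision n."""
    return 2 ** min(n, MAX_BACKOFF_EXPONENT) - 1

def backoff_time_us(n: int) -> float:
    """Random backoff delay (in microseconds) after collision number n."""
    if n > MAX_RETRIES:
        raise RuntimeError("excessive collisions: abort and report error")
    r = random.randint(0, backoff_window(n))   # uniform over the window
    return r * SLOT_TIME_US
```

Note that the window depends only on `min(n, 10)`, so collisions 11 through 16 reuse the n=10 window, exactly as in the evolution table.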
A critical aspect of the BEB algorithm is the truncation of the exponent at 10. After the 10th collision, the backoff window stops growing and remains at [0, 1023] for collisions 11 through 16. This truncation is not arbitrary—it addresses several important concerns.
Why Truncate?
1. Bounded Maximum Delay:
Without truncation, the maximum backoff delay would grow unboundedly: by the 16th collision the window would reach [0, 65535] slots, a maximum wait of over 3.3 seconds at 10 Mbps. Such delays are unacceptable for interactive applications and would cause TCP timeouts and application failures.
2. Practical Window Size:
A window of 1024 slots is already extremely large—about 52 ms at 10 Mbps. This provides adequate randomization for typical congestion levels. Larger windows provide diminishing returns:
| Collision # | Without Truncation | With Truncation at 10 | Savings |
|---|---|---|---|
| 10 | 52.4 ms (1023 slots) | 52.4 ms (1023 slots) | 0% |
| 11 | 104.8 ms (2047 slots) | 52.4 ms (1023 slots) | 50% |
| 12 | 209.7 ms (4095 slots) | 52.4 ms (1023 slots) | 75% |
| 13 | 419.4 ms (8191 slots) | 52.4 ms (1023 slots) | 87.5% |
| 14 | 838.8 ms (16383 slots) | 52.4 ms (1023 slots) | 93.75% |
| 15 | 1677.7 ms (32767 slots) | 52.4 ms (1023 slots) | 96.875% |
| 16 | 3355.4 ms (65535 slots) | 52.4 ms (1023 slots) | 98.4375% |
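The delays in this table follow directly from the window formula. A small sketch to reproduce them (the function name is ours):

```python
SLOT_US = 51.2  # slot time at 10 Mbps

def max_backoff_ms(n: int, truncate: bool = True) -> float:
    """Maximum possible backoff (in ms) after collision n, with or
    without the exponent truncation at 10."""
    k = min(n, 10) if truncate else n
    return (2 ** k - 1) * SLOT_US / 1000.0
```

Evaluating `max_backoff_ms(16, truncate=False)` shows the untruncated worst case exceeding the truncated one by a factor of 64.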
3. Fairness Considerations:
Truncation also improves fairness. If backoff windows continued growing indefinitely, a station experiencing many collisions would wait increasingly longer, while new arrivals with fresh collision counters would have small windows. This creates unfairness.
With truncation, stations experiencing repeated collisions eventually reach the same maximum window as each other, restoring fairness at high collision counts.
4. Implementation Simplicity:
A 10-bit random number generator is sufficient when truncated at 10. Hardware random number generators with exactly 10 bits of output can be used directly without modulo operations or additional processing.
Truncating at a smaller value (say, 8) would limit delay further but reduce the algorithm's ability to handle heavy congestion. Analysis shows that 10 provides an excellent balance: the window grows sufficiently to handle realistic congestion levels while keeping delays bounded for interactive traffic.
Let's trace through several scenarios to solidify understanding of how the BEB algorithm operates in practice.
Example 1: Simple Two-Station Collision
Station A and Station B both have frames to transmit. They detect an idle medium and begin transmitting simultaneously.
```
Time 0.0:      Both A and B begin transmitting
Time ~25 μs:   Collision detected (signals overlap at medium midpoint)
Time ~25.5 μs: Both stations transmit jam signal (32 bits)
Time ~29 μs:   Jam complete, both prepare for backoff

═══ COLLISION 1 ═══
Both A and B: n=1, k=min(1,10)=1, window=[0,1]

Station A: r_A = random(0,1) = 0
           Backoff = 0 × 51.2 μs = 0 μs (immediate retry)
Station B: r_B = random(0,1) = 1
           Backoff = 1 × 51.2 μs = 51.2 μs

Time ~29 μs:   Station A immediately transmits (r=0)
Time ~29 μs:   Station B begins waiting (r=1, must wait 51.2 μs)
Time ~29 μs:   Station A's carrier sensed by B
Time ~1.2 ms:  Station A completes frame successfully
Time ~1.21 ms: IFG (96 bit-times = 9.6 μs)
Time ~1.22 ms: Station B transmits successfully

Resolution: 1 collision, both frames delivered
```

Example 2: Extended Collision Sequence
Now consider a case where both stations repeatedly select the same random value:
```
═══ COLLISION 1 ═══
n=1, k=1, window=[0,1]
Station A: r=1, wait 1 slot
Station B: r=1, wait 1 slot (same choice!)
→ Both transmit at T+51.2 μs → COLLISION 2

═══ COLLISION 2 ═══
n=2, k=2, window=[0,3]
Station A: r=2, wait 2 slots
Station B: r=2, wait 2 slots (unlucky again!)
→ Both transmit at T+102.4 μs → COLLISION 3

═══ COLLISION 3 ═══
n=3, k=3, window=[0,7]
Station A: r=3, wait 3 slots
Station B: r=5, wait 5 slots (finally different!)
→ A transmits at T+153.6 μs, succeeds
→ B senses carrier, defers
→ After A completes, B transmits successfully

Resolution: 3 collisions before success
Total overhead: ~150 μs of collision/backoff time

Statistics:
- Probability of exactly matching after collision 1: 1/2 = 50%
- Probability of matching twice: 1/2 × 1/4 = 12.5%
- Probability of matching three times: 1/2 × 1/4 × 1/8 = 1.56%

Most collisions resolve quickly; extended sequences are rare.
```

The probability of k consecutive matching selections is 1/2^(1+2+3+...+k) = 1/2^(k(k+1)/2). For k=5 consecutive matches, this is about 1/32768 or 0.003%. Extended collision sequences are extremely rare.
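The matching probabilities quoted above can be computed for any k. A short sketch (function name ours):

```python
def p_consecutive_matches(k: int) -> float:
    """Probability that two stations draw identical backoff values in
    collisions 1 through k. After collision i the window holds 2^min(i,10)
    values, so the per-collision match probability is 2^-min(i,10); for
    k <= 10 the product is 2^-(k(k+1)/2)."""
    exponent = sum(min(i, 10) for i in range(1, k + 1))
    return 2.0 ** -exponent
```

For example, `p_consecutive_matches(5)` gives 1/32768, the 0.003% figure above.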
The following pseudocode represents a complete, implementation-ready specification of the BEB algorithm as implemented in Ethernet NICs:
```
// Constants
const int SLOT_TIME_BITS = 512;       // 512 bit-times = 51.2 μs at 10 Mbps
const int MAX_BACKOFF_EXPONENT = 10;  // Truncation limit
const int MAX_RETRIES = 16;           // Maximum collision count
const int JAM_SIZE_BITS = 32;         // Jam signal size

// State per frame
int collision_count = 0;

/**
 * Transmit a frame with CSMA/CD and BEB
 * @param frame The frame data to transmit
 * @return SUCCESS, or EXCESSIVE_COLLISION_ERROR
 */
TransmitResult transmit_with_csma_cd(Frame frame) {
    collision_count = 0;

    while (true) {
        // Step 1: Carrier Sense - wait for idle medium
        while (medium_is_busy()) {
            wait_one_bit_time();
        }

        // Step 2: Inter-Frame Gap
        wait_bit_times(96);  // IFG = 96 bit-times

        // Step 3: Re-check medium (might have become busy during IFG)
        if (medium_is_busy()) {
            continue;  // Defer and restart
        }

        // Step 4: Begin transmission
        start_transmission(frame);

        // Step 5: Monitor for collision during transmission
        bool collision_detected = false;
        while (transmission_in_progress()) {
            if (detect_collision()) {
                collision_detected = true;
                break;
            }
        }

        // Step 6: Handle result
        if (!collision_detected) {
            return SUCCESS;  // Frame transmitted successfully
        }

        // Collision occurred!
        // Step 7: Send jam signal
        transmit_jam_signal(JAM_SIZE_BITS);

        // Step 8: Increment collision counter
        collision_count++;

        // Step 9: Check retry limit
        if (collision_count > MAX_RETRIES) {
            return EXCESSIVE_COLLISION_ERROR;
        }

        // Step 10: Calculate backoff
        int k = min(collision_count, MAX_BACKOFF_EXPONENT);
        int window = (1 << k) - 1;          // 2^k - 1, using bit shift
        int r = random_uniform(0, window);  // Inclusive range

        // Step 11: Wait for backoff period
        wait_bit_times(r * SLOT_TIME_BITS);

        // Loop back to retry transmission
    }
}

/**
 * Generate uniform random integer in range [min, max] (inclusive)
 * CRITICAL: Must use high-quality randomness!
 */
int random_uniform(int min, int max) {
    // Use hardware RNG or well-seeded PRNG
    // Seed should incorporate MAC address for uniqueness
    return random_in_range(min, max);
}
```

The random_uniform function MUST produce truly random or high-quality pseudorandom numbers. If stations use similar PRNGs with similar seeds, they may repeatedly select the same backoff values, causing pathological collision patterns. Hardware NICs typically use true random number generators or unique seeds based on MAC addresses.
The BEB algorithm has several important mathematical properties that characterize its performance. Understanding these helps predict network behavior under various conditions.
Expected Backoff Time:
For a station experiencing its n-th collision:
Expected backoff slots = (2^min(n,10) - 1) / 2
This is because the station selects uniformly from [0, 2^k - 1], and the expected value of a uniform distribution on [min, max] is (min + max) / 2.
| Collision # | Window [0, max] | Expected Slots | Expected Time (μs) |
|---|---|---|---|
| 1 | [0, 1] | 0.5 | 25.6 |
| 2 | [0, 3] | 1.5 | 76.8 |
| 3 | [0, 7] | 3.5 | 179.2 |
| 4 | [0, 15] | 7.5 | 384.0 |
| 5 | [0, 31] | 15.5 | 793.6 |
| 6 | [0, 63] | 31.5 | 1,612.8 |
| 7 | [0, 127] | 63.5 | 3,251.2 |
| 8 | [0, 255] | 127.5 | 6,528.0 |
| 9 | [0, 511] | 255.5 | 13,081.6 |
| 10 | [0, 1023] | 511.5 | 26,188.8 |
| 11-16 | [0, 1023] | 511.5 | 26,188.8 |
Total Expected Delay:
For a frame that ultimately succeeds after n collisions, the total expected backoff is:
Total Expected Backoff = Σ (expected backoff for collision i), i=1 to n
= Σ (2^min(i,10) - 1)/2, i=1 to n
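Both expectations are straightforward to compute. A sketch (function names ours):

```python
def expected_backoff_slots(n: int) -> float:
    """Expected backoff after collision n: mean of uniform [0, 2^min(n,10) - 1]."""
    return (2 ** min(n, 10) - 1) / 2

def total_expected_backoff_slots(n: int) -> float:
    """Sum of expected backoffs over collisions 1..n."""
    return sum(expected_backoff_slots(i) for i in range(1, n + 1))
```

For example, a frame that succeeds after 3 collisions waits an expected 0.5 + 1.5 + 3.5 = 5.5 slots in total.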
Variance and Fairness:
The variance of backoff time increases with collision count:
Variance of backoff after collision n ≈ (2^min(n,10))^2 / 12 (the exact variance of a discrete uniform over W = 2^k values is (W^2 - 1)/12 slots²)
High variance means outcomes are less predictable. This is intentional—it provides maximum spreading of retry attempts, minimizing the probability of repeated collisions.
```
Probability Analysis for Two Contending Stations
═══════════════════════════════════════════════════

P(success on 1st try) = 0 (guaranteed collision if both start together)

After collision 1:
P(success) = P(A ≠ B) = 1 - 1/2 = 0.5

After collision 2:
P(success) = 1 - 1/4 = 0.75

After collision n:
P(success) = 1 - 1/2^min(n,10)

Expected collisions until success (two stations):
E[collisions] = Σ k × P(exactly k collisions) ≈ 1.64 collisions on average

For N stations with exponential backoff:
E[collisions] grows on the order of ln(N)

This logarithmic growth is the key to Ethernet's scalability!
```

The most important property: expected collisions grow as O(log N) with N stations. This means Ethernet can handle significant station increases with only modest increases in collision overhead—the foundation of its remarkable scalability.
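The two-station expectation can be checked with a quick Monte Carlo simulation (a sketch of the contention process, not NIC code; names ours):

```python
import random

def collisions_until_success(trials: int = 50_000, seed: int = 7) -> float:
    """Monte Carlo estimate of E[collisions] for two stations that start
    transmitting together. The first attempt always collides; after
    collision n each station draws from [0, 2^min(n,10) - 1], and the
    contention resolves once they draw different values."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        n = 1  # first attempt is a guaranteed collision
        while True:
            k = min(n, 10)
            if rng.randrange(2 ** k) != rng.randrange(2 ** k):
                break      # different backoffs: next attempt succeeds
            n += 1         # same backoff: they collide again
        total += n
    return total / trials
```

With enough trials the estimate settles around 1.6, matching the analytical expectation.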
Implementing BEB in hardware or simulation requires attention to several practical details not explicit in the basic algorithm specification.
Random Number Generation:
The quality of random number generation significantly affects BEB performance: stations using similar PRNGs with similar seeds may repeatedly select the same backoff values, so implementations rely on hardware random sources or seeds derived from the unique MAC address.
Timing Precision:
BEB requires timing accurate to slot-time granularity:
Hardware implementations use crystal-locked counters synchronized to the bit clock.
State Management:
Each frame in progress maintains its own collision counter:
The collision counter is associated with a specific frame, not with the station. If Station A has two frames queued and the first succeeds, the second frame starts with collision counter = 0 regardless of what happened to the first frame.
While standard Ethernet uses the IEEE 802.3 BEB algorithm, various extensions and variations have been proposed and used in related protocols.
Modified Truncation Points:
Some implementations use different truncation values, trading a tighter delay bound against reduced tolerance for heavy congestion.
Binary Exponential Backoff with MILD (Multiplicative Increase, Linear Decrease):
Some wireless protocols combine BEB with faster recovery: the contention window is increased multiplicatively on each collision (as in BEB) but decreased linearly after each success, rather than resetting all the way to the minimum.
This provides faster adaptation but more complex implementation.
| Protocol | Base | Growth | Truncation | Reset on Success |
|---|---|---|---|---|
| IEEE 802.3 Ethernet | 2 | Exponential | 10 collisions | Yes (to 0) |
| IEEE 802.11 WiFi | 2 | Exponential | CWmax (varies) | Yes (to CWmin) |
| Token Ring (recovery) | N/A | N/A | N/A | N/A (no backoff) |
| TCP Congestion | 2 | Exponential | 64 seconds | No (continues growing) |
| Exponential Backoff in APIs | 2 | Exponential | Application-defined | Varies |
Slot Time Variations:
Different physical layers define different slot times:
| Physical Layer | Slot Time | Bits per Slot | Minimum Frame |
|---|---|---|---|
| 10BASE-X | 51.2 μs | 512 bits | 64 bytes |
| 100BASE-X | 5.12 μs | 512 bits | 64 bytes |
| 1000BASE-X | 4.096 μs | 4096 bits | 512 bytes (with extension) |
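Since 1 Mbps corresponds to 1 bit per microsecond, the slot-time column follows directly from slot size and line rate:

```python
def slot_time_us(slot_bits: int, rate_mbps: float) -> float:
    """Slot time in microseconds: slot size in bits divided by the
    line rate in Mbps (1 Mbps = 1 bit per microsecond)."""
    return slot_bits / rate_mbps
```

This reproduces 51.2 μs for 10 Mbps, 5.12 μs for 100 Mbps, and 4.096 μs for gigabit with its enlarged 4096-bit slot.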
Application-Level Exponential Backoff:
The BEB concept has influenced retry strategies throughout computing, from TCP retransmission timers to the retry policies built into cloud APIs and distributed systems.
These follow the same principle: wait longer after each failure to reduce load.
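As an illustration, one widely used application-level variant adds jitter to the exponential schedule; a sketch (parameter choices are ours, not from any particular API):

```python
import random

def retry_delay_s(attempt: int, base_s: float = 1.0, cap_s: float = 64.0) -> float:
    """Exponential backoff with 'full jitter': the delay before retry
    number `attempt` is drawn uniformly from [0, min(cap, base * 2^attempt)],
    echoing BEB's uniform draw from a doubling window."""
    return random.uniform(0.0, min(cap_s, base_s * 2.0 ** attempt))
```

The cap plays the same role as BEB's truncation at 10: it bounds the worst-case delay while preserving the doubling behavior at low attempt counts.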
Binary Exponential Backoff has become a universal pattern for handling contention and failures in distributed systems. Understanding it in the Ethernet context provides insight applicable across networking, cloud computing, and software engineering.
We've thoroughly examined the Binary Exponential Backoff algorithm—from its formal specification and truncation rationale to its mathematical properties and practical implementation details.
What's Next:
The next page examines maximum retries—what happens when a frame experiences 16 collisions, why this limit exists, and the error handling mechanisms that kick in when BEB cannot resolve contention.
You now have implementation-ready knowledge of the Binary Exponential Backoff algorithm. This understanding enables you to analyze network behavior, predict collision resolution times, and even implement BEB in simulators or custom protocols.