In 1980, 10 Mbps Ethernet was revolutionary. Today, 400 Gbps Ethernet is deployed in hyperscale data centers, with 800 Gbps now standardized and 1.6 Tbps on the horizon. This represents a 40,000-fold increase in speed while maintaining fundamental protocol compatibility—an engineering achievement unparalleled in computing history.
To put this in perspective: if automobile speeds had improved proportionally since 1980, cars would now travel at over 4 million miles per hour—fast enough to reach the moon in under four minutes.
This page traces Ethernet's speed evolution, examining the engineering breakthroughs that enabled each generational leap and the design decisions that maintained compatibility across this remarkable progression.
By the end of this page, you will understand the technical challenges of increasing Ethernet speed, the key standards at each generation (Fast Ethernet, Gigabit, 10G, 40G, 100G+), and how protocol adaptations like full-duplex operation and flow control enabled higher speeds while preserving the Ethernet frame format.
For fifteen years, 10 Mbps was the only Ethernet speed. This long period wasn't stagnation—it was consolidation. During this time, Ethernet transitioned from thick coaxial cable to thin coaxial to twisted pair to fiber, establishing the physical layer flexibility that would enable future speed increases.
The 10 Mbps ecosystem:
| Standard | Year | Medium | Max Distance | Key Feature |
|---|---|---|---|---|
| 10BASE5 | 1983 | Thick coax (10mm) | 500m | Original IEEE standard, vampire taps |
| 10BASE2 | 1985 | Thin coax (5mm) | 185m | BNC connectors, 'Cheapernet' |
| 10BASE-T | 1990 | Cat 3 UTP | 100m | Star topology, RJ-45 connectors |
| 10BASE-FL | 1993 | Multi-mode fiber | 2,000m | Fiber optic, longer distances |
| 10BASE-FB | 1993 | Multi-mode fiber | 2,000m | Backbone fiber, synchronous |
| 10BASE-FP | 1993 | Multi-mode fiber | 500m | Passive star, no electronics in hub |
Why 10 Mbps worked for 15 years:
In the 1980s and early 1990s, 10 Mbps was more than adequate for typical office applications:
Network utilization rarely exceeded 30% in typical office environments. Shared 10 Mbps was sufficient for dozens of users.
The pressure for more speed:
By the mid-1990s, several factors created demand for faster Ethernet:
As networks grew larger and applications more demanding, the shared bandwidth model of CSMA/CD became increasingly strained. A 50-station network sharing 10 Mbps gave each station an average of 200 Kbps—painfully slow for emerging graphical applications. The pressure for speed was really pressure to escape the shared bandwidth paradigm.
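The per-station arithmetic above is easy to check. A minimal sketch (the function name is illustrative; real CSMA/CD throughput under load is lower still, since collisions and backoff waste additional time):

```python
# Average per-station share of a shared (hub-based) Ethernet segment.
# Idealized: ignores collision and backoff overhead, which grow with load.

def per_station_kbps(link_mbps: float, stations: int) -> float:
    """Average bandwidth per station on a shared segment, in Kbps."""
    return link_mbps * 1000 / stations

print(per_station_kbps(10, 50))  # 50 stations sharing 10 Mbps -> 200.0 Kbps
```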
The IEEE 802.3u standard, ratified in 1995, defined Fast Ethernet at 100 Mbps. This 10x speed increase presented significant engineering challenges while maintaining the fundamental Ethernet protocol.
The core challenge: CSMA/CD timing
Recall that Ethernet's minimum frame size (64 bytes) relates directly to the collision detection requirement. At 10 Mbps, the 512-bit slot time is 51.2 μs, comfortably covering a round trip across a multi-kilometer collision domain.
At 100 Mbps, the same 512 bit times = 5.12 μs. In that time a signal covers only about 1,000 meters of cable (propagation in copper is roughly two-thirds the speed of light); after allowing for the round trip and repeater delays, the maximum collision domain shrinks to roughly 200 meters.
Increasing speed 10x without changing the minimum frame size reduces the maximum collision domain by the same factor. At 100 Mbps, the maximum hub-to-hub distance dropped to about 205 meters using copper. This physical constraint influenced the entire Fast Ethernet architecture.
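The slot-time arithmetic can be sketched directly, assuming a signal propagation speed of about 2×10⁸ m/s in twisted pair (roughly two-thirds of c). Real limits are tighter, because repeaters and transceiver electronics consume part of the time budget:

```python
# Rough collision-domain arithmetic: the 512-bit slot time must cover a
# signal's round trip across the network. The theoretical one-way reach
# halves with every 10x speed increase.

PROPAGATION_M_PER_S = 2e8  # assumed signal speed in copper (~2/3 c)

def slot_time_us(speed_mbps: float, slot_bits: int = 512) -> float:
    """Slot time in microseconds; one bit time in µs is 1/Mbps."""
    return slot_bits / speed_mbps

def max_one_way_m(speed_mbps: float, slot_bits: int = 512) -> float:
    """Theoretical one-way reach, ignoring repeater/electronics delays."""
    t = slot_bits / (speed_mbps * 1e6)   # slot time in seconds
    return PROPAGATION_M_PER_S * t / 2   # halve for the round trip

for mbps in (10, 100, 1000):
    print(mbps, slot_time_us(mbps), max_one_way_m(mbps))
```

At 10 Mbps the theoretical reach is kilometers; at 100 Mbps it drops to roughly 500 m (about 200 m in practice), and at 1000 Mbps to about 50 m.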
Fast Ethernet standards:
| Standard | Medium | Distance | Key Characteristics |
|---|---|---|---|
| 100BASE-TX | Cat 5 UTP (2 pairs) | 100m | Most common, uses MLT-3 encoding |
| 100BASE-T4 | Cat 3 UTP (4 pairs) | 100m | Uses existing Cat 3 wiring, 8B6T encoding |
| 100BASE-FX | Multi-mode fiber | 412m (half-duplex), 2km (full-duplex) | Fiber optic, 4B5B encoding |
Encoding schemes:
Fast Ethernet couldn't simply transmit at 100 MHz on copper—the electrical characteristics of twisted pair cables would cause unacceptable signal degradation. Instead, sophisticated encoding schemes were developed:
100BASE-TX (4B5B + MLT-3 encoding): data bits are first expanded by 4B5B coding (4 data bits become 5 code bits, yielding 125 Mbaud), then MLT-3 steps the line through three voltage levels (+1, 0, -1, 0), cutting the fundamental signal frequency to about 31.25 MHz, low enough for Cat 5 cable.
100BASE-T4 (8B6T encoding): each byte is mapped to six ternary (three-level) symbols spread across three of the four Cat 3 pairs, keeping the symbol rate on each pair to a modest 25 Mbaud.
These clever encoding techniques allowed 100 Mbps over cables designed for 10 Mbps operation.
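The MLT-3 idea can be shown as a tiny state machine: the line steps through the cycle 0, +1, 0, -1, advancing on every 1 bit and holding on every 0 bit. A minimal sketch (function name is my own):

```python
# Minimal MLT-3 encoder sketch. The line level advances through the
# cycle 0, +1, 0, -1 for each 1 bit and holds for each 0 bit, so even
# an all-ones stream completes a full cycle only every 4 bits,
# concentrating signal energy at a quarter of the bit rate.

MLT3_CYCLE = [0, 1, 0, -1]

def mlt3_encode(bits):
    state = 0  # index into MLT3_CYCLE; line starts at level 0
    out = []
    for b in bits:
        if b == 1:
            state = (state + 1) % 4  # advance on a 1 bit
        out.append(MLT3_CYCLE[state])  # hold level on a 0 bit
    return out

print(mlt3_encode([1, 1, 1, 1, 0, 1]))  # -> [1, 0, -1, 0, 0, 1]
```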
Despite 100BASE-T4's ability to use existing Cat 3 wiring, 100BASE-TX became dominant. The cost of upgrading to Cat 5 was modest, and Cat 5's superior characteristics simplified transceiver design and provided headroom for future improvements. By the late 1990s, Cat 5 installation was standard practice.
While Fast Ethernet increased raw speed 10x, a parallel development would effectively double network capacity again: full-duplex operation.
Half-duplex vs. full-duplex: in half-duplex operation, stations share one medium, transmit one at a time, and rely on CSMA/CD to arbitrate access. In full-duplex operation, a link carries traffic in both directions simultaneously over separate wire pairs or fibers, and collisions cannot occur.
The switch transformation:
Full-duplex Ethernet required abandoning hubs for switches. Unlike hubs, which repeat signals to all ports, switches read each frame's destination MAC address, forward it only to the relevant port, and buffer frames when the output port is busy. Each switch port becomes its own collision-free segment.
With switches, each station has a dedicated 100 Mbps channel to the switch. Two stations can exchange data at 100 Mbps in each direction simultaneously—effectively 200 Mbps of throughput per station pair.
In a switch-based, full-duplex network, CSMA/CD serves no purpose—there's no shared medium to arbitrate. Stations simply transmit whenever they have data. This observation becomes critical at higher speeds where CSMA/CD's collision domain constraints would otherwise limit network diameter.
Flow control in full-duplex: IEEE 802.3x PAUSE
With CSMA/CD's natural backpressure removed, full-duplex networks needed an alternative congestion control mechanism. IEEE 802.3x (1997) introduced the PAUSE frame: a congested receiver sends a MAC Control frame (EtherType 0x8808, opcode 0x0001) to the reserved multicast address 01-80-C2-00-00-01, telling its link partner to stop transmitting for a specified number of 512-bit-time quanta.
PAUSE is a Layer 2 mechanism—it operates below TCP/IP and can stop traffic instantly. However, it's a blunt instrument: it stops all traffic on a link, regardless of destination or priority. This limitation led to later developments like Priority Flow Control (PFC) for data center applications.
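The PAUSE frame's wire layout can be sketched from the 802.3x field definitions. This is an illustrative byte-layout builder, not a working network tool (real NICs generate PAUSE frames in hardware, and the FCS is omitted here):

```python
# Sketch of the on-the-wire layout of an IEEE 802.3x PAUSE frame.
# MAC Control frames use EtherType 0x8808 and opcode 0x0001, and are
# sent to the reserved multicast address 01-80-C2-00-00-01. The pause
# time is expressed in quanta of 512 bit times.
import struct

PAUSE_DEST = bytes.fromhex("0180C2000001")  # reserved multicast address
ETHERTYPE_MAC_CONTROL = 0x8808
OPCODE_PAUSE = 0x0001

def build_pause_frame(src_mac: bytes, quanta: int) -> bytes:
    """Return a PAUSE frame (preamble and FCS omitted), padded to 60 bytes."""
    payload = struct.pack("!HH", OPCODE_PAUSE, quanta)
    frame = (PAUSE_DEST + src_mac
             + struct.pack("!H", ETHERTYPE_MAC_CONTROL) + payload)
    return frame.ljust(60, b"\x00")  # pad to the minimum frame size

frame = build_pause_frame(bytes.fromhex("020000000001"), quanta=0xFFFF)
print(len(frame), frame[:6].hex())
```

A quanta value of 0xFFFF requests the maximum pause; sending a follow-up frame with quanta 0 resumes traffic early.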
| Metric | 100 Mbps Half-Duplex | 100 Mbps Full-Duplex |
|---|---|---|
| Peak throughput per direction | 100 Mbps (shared) | 100 Mbps (dedicated) |
| Total bidirectional throughput | 100 Mbps | 200 Mbps |
| Collision handling | Required (CSMA/CD) | Not applicable |
| Distance limit (copper) | ~200m (collision domain) | 100m (signal quality only) |
| Equipment required | Hub or switch | Switch only |
| Latency variability | High (collision backoff) | Low (deterministic) |
IEEE 802.3z, ratified in 1998, defined Gigabit Ethernet (GbE) at 1000 Mbps. This represented another 10x speed increase and another set of engineering challenges.
The half-duplex challenge:
At 1000 Mbps, the 512-bit slot time shrinks to just 512 nanoseconds. A signal travels only about 100 meters in copper in that time, making even a single cable run from station to switch problematic for CSMA/CD.
The IEEE's solution was controversial: carrier extension. For half-duplex Gigabit Ethernet, the slot time was increased to 4096 bits (512 bytes), allowing approximately 200-meter collision domains. Short frames (under 512 bytes) were padded with carrier extension symbols—essentially meaningless signals that maintained transmission long enough for collision detection.
This was inefficient for small frames:
| Frame Size | Carrier Extension | Efficiency |
|---|---|---|
| 64 bytes | 448 bytes | 12.5% |
| 128 bytes | 384 bytes | 25% |
| 256 bytes | 256 bytes | 50% |
| 512 bytes | 0 bytes | 100% |
| 1518 bytes | 0 bytes | 100% |
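The table's figures follow from a single rule: every half-duplex GbE transmission is extended to the 512-byte slot, so efficiency is the frame's share of that slot. A minimal sketch:

```python
# Carrier-extension overhead in half-duplex Gigabit Ethernet: frames
# shorter than the 512-byte slot are padded with extension symbols,
# so short frames carry mostly filler.

SLOT_BYTES = 512  # half-duplex GbE slot time expressed in bytes

def extension_bytes(frame_bytes: int) -> int:
    """Carrier extension needed to fill the slot (0 for large frames)."""
    return max(0, SLOT_BYTES - frame_bytes)

def efficiency(frame_bytes: int) -> float:
    """Fraction of the transmission that is actual frame data."""
    return frame_bytes / max(frame_bytes, SLOT_BYTES)

for size in (64, 128, 256, 512, 1518):
    print(size, extension_bytes(size), f"{efficiency(size):.1%}")
```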
Frame bursting:
To mitigate carrier extension's inefficiency, frame bursting was also introduced. A station with multiple frames to send could transmit them in a burst, with only the first frame requiring carrier extension. Subsequent frames in the burst could use minimal inter-frame gaps, amortizing the extension overhead across multiple frames.
The practical reality: Full-duplex dominated
In practice, half-duplex Gigabit Ethernet was rarely deployed. By 1998, switches were affordable, and the performance advantages of full-duplex were overwhelming. Half-duplex GbE's carrier extension and frame bursting were implemented mainly for standards compliance.
Gigabit Ethernet effectively marked the end of CSMA/CD's relevance. While technically supported for backward compatibility, virtually all Gigabit and faster Ethernet deployments use full-duplex point-to-point links with switches. The algorithm that defined Ethernet for 25 years became a historical footnote.
Gigabit Ethernet standards:
| Standard | Medium | Distance | Year | Key Notes |
|---|---|---|---|---|
| 1000BASE-SX | Multi-mode fiber (850nm) | 220-550m | 1998 | Short wavelength, LED-based, data centers |
| 1000BASE-LX | Multi-mode/Single-mode fiber (1310nm) | 550m MM, 5km SM | 1998 | Long wavelength, laser-based |
| 1000BASE-CX | Twinaxial copper | 25m | 1998 | Short-range, intra-rack connections |
| 1000BASE-T | Cat 5e UTP (4 pairs) | 100m | 1999 | Runs over installed Cat 5/5e copper plant |
1000BASE-T: Engineering marvel
1000BASE-T (IEEE 802.3ab, 1999) achieved Gigabit speeds over standard Cat 5e twisted pair, the same cabling class as 100BASE-TX. This required extraordinary signal processing: simultaneous transmission and reception on all four pairs (250 Mbps per pair), echo cancellation to separate the two directions sharing each pair, near-end and far-end crosstalk cancellation, PAM-5 modulation with trellis coding, and DSP-based adaptive equalization.
The transceivers contained more processing power than entire 1980s computers. Yet the standard maintained plug-compatibility with existing cabling infrastructure.
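The throughput arithmetic behind 1000BASE-T can be sketched from the figures above: four pairs, each carrying PAM-5 symbols at 125 Mbaud with 2 data bits per symbol (the fifth level supports the trellis coding rather than extra data):

```python
# How 1000BASE-T reaches 1 Gbps over Cat 5e: all four pairs carry
# PAM-5 symbols at 125 Mbaud simultaneously in both directions, with
# each symbol conveying 2 data bits per pair.

PAIRS = 4
BAUD_PER_PAIR = 125e6      # symbols per second on each pair
DATA_BITS_PER_SYMBOL = 2   # per pair, after coding

throughput_bps = PAIRS * BAUD_PER_PAIR * DATA_BITS_PER_SYMBOL
print(throughput_bps / 1e9)  # -> 1.0 (Gbps, per direction)
```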
IEEE 802.3ae, ratified in 2002, defined 10 Gigabit Ethernet (10GbE). At this speed, supporting CSMA/CD would limit collision domains to under 20 meters—clearly impractical. Accordingly, 10GbE was defined as full-duplex only, formally ending CSMA/CD's role in Ethernet.
10GbE: Two distinct markets
10GbE standards addressed two different application domains: a LAN PHY running at the full 10.3125 Gbaud line rate for data centers and campus backbones, and a WAN PHY running at 9.953 Gbps for direct compatibility with SONET/SDH OC-192 transport networks.
10 Gigabit Ethernet standards:
| Standard | Medium | Distance | Notes |
|---|---|---|---|
| 10GBASE-SR | MM fiber (850nm) | 26-400m | Short range, OM1 to OM4 fiber |
| 10GBASE-LR | SM fiber (1310nm) | 10 km | Long range, standard for data center interconnects |
| 10GBASE-ER | SM fiber (1550nm) | 40 km | Extended range, metro/WAN applications |
| 10GBASE-LRM | MM fiber (1310nm) | 220m | Reaches legacy MM fiber deployments |
| 10GBASE-CX4 | 4× twinax (InfiniBand cables) | 15m | Early copper option, now obsolete |
| 10GBASE-T | Cat 6a/7 UTP | 100m | RJ-45 copper, 802.3an (2006) |
| 10GBASE-CR | SFP+ Direct Attach Copper | 7m | Low-cost short-range data center connections |
The rise of 40G and 100G:
IEEE 802.3ba (2010) defined 40 and 100 Gigabit Ethernet. Rather than increasing per-lane speeds beyond manufacturing limits, these standards used parallel lanes: 40GbE as 4 × 10 Gbps lanes and 100GbE as 10 × 10 Gbps or 4 × 25 Gbps lanes, bonded into a single logical link at the MAC layer.
Higher speeds: 200G, 400G, and beyond
IEEE 802.3bs (2017) defined 200 and 400 Gigabit Ethernet using higher lane rates (50 Gbps, then 100 Gbps per lane): 200GbE as 4 × 50G lanes and 400GbE as 8 × 50G lanes, with later PHYs using 4 × 100G.
IEEE 802.3df (2024) defined 800 Gigabit Ethernet, and work on 1.6 Terabit Ethernet is underway.
Notice the pattern: as manufacturing limits constrain single-lane speeds, standards use more parallel lanes or wavelengths. A 400G link might use 4 × 100G lanes, 8 × 50G lanes, or even wavelength-division multiplexing. The MAC layer doesn't care—it sees a single logical 400G link.
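The lane-aggregation pattern reduces to simple multiplication. A sketch (the configuration labels are illustrative; actual PHY names such as 400GBASE-DR4 tie specific lane counts to specific media):

```python
# Lane parallelism: the MAC layer sees one logical link whose rate is
# lanes x per-lane speed, regardless of how the PHY splits the signal.

def link_gbps(lanes: int, lane_gbps: int) -> int:
    return lanes * lane_gbps

configs = {
    "40G":           (4, 10),
    "100G (early)":  (10, 10),
    "100G (later)":  (4, 25),
    "400G (8-lane)": (8, 50),
    "400G (4-lane)": (4, 100),
}
for name, (lanes, rate) in configs.items():
    print(name, link_gbps(lanes, rate))
```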
Let's consolidate Ethernet's speed evolution with a comprehensive timeline:
| Year | Speed | Standard | Key Innovation |
|---|---|---|---|
| 1973 | 2.94 Mbps | Experimental | CSMA/CD proof of concept at Xerox PARC |
| 1983 | 10 Mbps | 802.3 (10BASE5) | First IEEE standard, thick coax |
| 1990 | 10 Mbps | 802.3i (10BASE-T) | Twisted pair star topology |
| 1995 | 100 Mbps | 802.3u (Fast Ethernet) | 10x speed, MLT-3 encoding |
| 1998 | 1 Gbps | 802.3z (GbE fiber) | Carrier extension, fiber PHYs |
| 1999 | 1 Gbps | 802.3ab (1000BASE-T) | GbE over Cat 5e copper |
| 2002 | 10 Gbps | 802.3ae (10GbE) | Full-duplex only, WAN PHY option |
| 2006 | 10 Gbps | 802.3an (10GBASE-T) | 10 GbE over Cat 6a copper |
| 2010 | 40/100 Gbps | 802.3ba | Parallel lanes, data center focus |
| 2016 | 25 Gbps | 802.3by | Single-lane 25G for server connections |
| 2017 | 200/400 Gbps | 802.3bs | 50G/100G per lane |
| 2024 | 800 Gbps | 802.3df | 100G lanes, 8 × 100G configuration |
| TBD | 1.6 Tbps | 802.3dj (in progress) | 200G lanes planned |
The doubling period:
Observe that Ethernet speed has roughly doubled every 2-3 years, consistent with Moore's Law for the underlying semiconductor technology.
Each generational leap required innovations in encoding, optics, signal processing, and manufacturing. Yet through all these changes, the fundamental Ethernet frame format remains compatible with the 1982 Ethernet II specification.
A 400 Gbps Ethernet frame contains the same fields as a 10 Mbps frame from 1983: preamble, SFD, destination MAC, source MAC, type/length, payload, and FCS. This compatibility means a frame can traverse links of vastly different speeds (with appropriate buffering), enabling gradual network upgrades without protocol translation.
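The "doubling every 2-3 years" observation can be checked with a quick calculation, using the 1983 (10 Mbps) and 2017 (400 Gbps) milestones from the timeline above:

```python
# Average doubling period implied by two points on the speed timeline:
# total doublings = log2(rate ratio); period = elapsed years / doublings.
import math

def doubling_period_years(y0: int, rate0_mbps: float,
                          y1: int, rate1_mbps: float) -> float:
    doublings = math.log2(rate1_mbps / rate0_mbps)
    return (y1 - y0) / doublings

# 10 Mbps in 1983 to 400,000 Mbps in 2017 -> roughly 2.2 years/doubling
print(round(doubling_period_years(1983, 10, 2017, 400_000), 2))
```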
Each speed increase required more sophisticated encoding to transmit higher bit rates over physical media. The progression of encoding schemes tells the story of increasing engineering complexity:
| Speed | Encoding | Line Rate | Efficiency | Key Technique |
|---|---|---|---|---|
| 10 Mbps | Manchester | 20 Mbaud | 50% | Self-clocking, simple but inefficient |
| 100 Mbps | 4B5B + MLT-3 | 125 Mbaud | 80% | Reduces fundamental frequency |
| 1 Gbps (fiber) | 8B10B | 1.25 Gbaud | 80% | DC-balanced, run-length limited |
| 1 Gbps (copper) | PAM-5 | 125 Mbaud | 80% | 5 voltage levels, 4 pairs, DSP |
| 10 Gbps | 64B/66B | 10.3125 Gbaud | 97% | Minimal overhead, scrambled |
| 50 Gbps lanes | 64B/66B + PAM-4 | 26.5625 Gbaud | 97% | 4 levels halve baud rate |
Why encoding efficiency matters:
High-efficiency encoding is critical at higher speeds:
At 100 Gbps and beyond, every percent of encoding overhead translates to billions of additional symbols per second—requiring faster electronics, more power, and higher manufacturing precision.
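The relationship is direct: line rate = data rate ÷ encoding efficiency ÷ bits per symbol, so every point of overhead means extra symbols the electronics must generate and recover each second. A sketch reproducing two rows of the table:

```python
# Line rate implied by a data rate, an encoding efficiency, and the
# number of bits carried per symbol (1 for NRZ, 2 for PAM-4).

def line_rate_gbaud(data_gbps: float, efficiency: float,
                    bits_per_symbol: int = 1) -> float:
    return data_gbps / efficiency / bits_per_symbol

print(line_rate_gbaud(10, 64 / 66))      # 64B/66B NRZ  -> 10.3125 GBd
print(line_rate_gbaud(1, 0.8))           # 8B10B NRZ    -> 1.25 GBd
print(line_rate_gbaud(1, 0.5))           # Manchester   -> 2.0 GBd equivalent
```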
The PAM-4 revolution:
For 25+ Gbps lanes, PAM-4 (Pulse Amplitude Modulation with 4 levels) became essential: by encoding 2 bits in each of four voltage levels, PAM-4 halves the symbol rate needed for a given bit rate compared with two-level NRZ signaling.
The trade-off: PAM-4's closer voltage levels require better signal-to-noise ratio, more sophisticated equalization, and more powerful forward error correction (FEC).
Modern high-speed Ethernet standards mandate Forward Error Correction (FEC) to compensate for increased error rates from sophisticated modulation. 400GBASE-SR8 uses RS(544,514) Reed-Solomon FEC, which can correct up to 15 symbol errors per 544-symbol codeword. The processing power for this FEC exceeds entire computers from Ethernet's early days.
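Reed-Solomon correction capacity follows directly from the code parameters: an RS(n, k) code carries n − k parity symbols and corrects t = (n − k) / 2 symbol errors per codeword. A sketch using the parameters of the two FEC codes common in high-speed Ethernet PHYs:

```python
# Correctable symbol errors per codeword for an RS(n, k) code:
# n - k parity symbols correct t = (n - k) // 2 symbol errors.

def rs_correctable(n: int, k: int) -> int:
    return (n - k) // 2

print(rs_correctable(544, 514))  # "KP4" FEC, PAM-4 PHYs -> 15
print(rs_correctable(528, 514))  # "KR4" FEC, NRZ PHYs   -> 7
```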
We've traced Ethernet's remarkable speed evolution from 10 Mbps to 800 Gbps and beyond. Let's consolidate the key insights: each 10x leap demanded new encoding and physical-layer engineering; full-duplex switching retired CSMA/CD; lane parallelism carried speeds past the limits of single transceivers; and through it all, the frame format stayed compatible.
What's next:
Having surveyed the speed evolution, we'll now examine specific milestones in detail: the jump from 10 Mbps to 100 Gbps. The next page explores the engineering challenges and solutions at each major speed tier.
You now understand Ethernet's speed evolution from 10 Mbps to 800+ Gbps. The key themes—encoding advances, full-duplex operation, lane parallelism, and frame format preservation—explain how Ethernet maintained compatibility while achieving a 40,000-fold speed increase. Next, we'll examine the specific 10 Mbps to 100 Gbps transition in detail.