In our study of ALOHA protocols, we encountered a fundamental limitation: stations transmit blindly, without any awareness of what other stations are doing. Pure ALOHA achieves only 18.4% efficiency, and even slotted ALOHA peaks at 36.8%. The majority of the channel capacity is wasted on collisions.
But consider a simple observation from everyday human communication: We don't just start talking randomly in a conversation. We listen first to see if someone else is speaking. This seemingly obvious insight—applied systematically to computer networks—led to Carrier Sense Multiple Access (CSMA) protocols, which form the foundation of modern Ethernet and dramatically improve upon ALOHA.
This page explores carrier sensing: the fundamental mechanism that allows stations to detect ongoing transmissions before deciding whether to transmit themselves.
By the end of this page, you will understand: (1) Why carrier sensing fundamentally improves channel utilization, (2) How stations detect whether the channel is busy or idle, (3) The physical and logical mechanisms behind carrier sensing, (4) Why carrier sensing alone cannot eliminate all collisions, and (5) The concept of propagation delay and its critical role in CSMA effectiveness.
Before diving into carrier sensing, let's precisely understand the problem it solves. In ALOHA protocols, collisions occur because stations transmit without any knowledge of channel state.
Consider a shared radio channel with five stations (A, B, C, D, E). Each station generates frames at random times. When station A transmits, it has no idea whether:

- The channel is currently idle,
- Another station (say, B) is already in the middle of a transmission, or
- Another station is about to begin transmitting at nearly the same moment.
This blindness to channel activity is the root cause of ALOHA's inefficiency. Stations transmit into an ongoing transmission, causing collisions that destroy both frames.
| Scenario | What Happens | Result |
|---|---|---|
| Station A transmits | B is already transmitting | Collision: Both frames destroyed |
| Station A transmits | B starts during A's transmission | Collision: Both frames destroyed |
| Station A transmits | Channel was idle | Success: Frame delivered |
The key observation:
Many collisions in ALOHA are avoidable. If station A could simply detect that station B was already transmitting, it could wait until B finishes. This would eliminate the most obvious source of collisions—transmitting into an already-busy channel.
This insight leads to a simple but powerful principle: Listen Before Talk (LBT). Before transmitting, sense the channel to determine if it's busy. If busy, defer. If idle, transmit.
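The listen-before-talk rule can be sketched as a simple control loop. This is a minimal illustration, not real driver code: `channel_busy()` and `send_frame()` are hypothetical stand-ins for the hardware carrier-detect signal and the transmit path.

```python
import random
import time

def channel_busy():
    """Hypothetical stand-in for the hardware carrier-detect signal."""
    return random.random() < 0.3  # pretend the channel is busy 30% of the time

def send_frame(frame):
    """Hypothetical stand-in for handing the frame to the transmitter."""
    print(f"transmitting {frame!r}")

def listen_before_talk(frame):
    # Defer as long as the channel is sensed busy.
    while channel_busy():
        time.sleep(0.001)  # keep listening; real hardware senses continuously
    # Channel appears idle: transmit. Collisions are still possible,
    # because another station may have sensed idle at the same moment.
    send_frame(frame)

listen_before_talk("hello")
```

Note that the loop only captures the "defer if busy" half of the rule; what a station does while deferring is exactly the persistence question taken up at the end of this page.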
Carrier sensing implements the 'listen before talk' principle: stations monitor the channel for ongoing transmissions before attempting their own. This simple idea eliminates the most egregious collision type—transmitting when the channel is obviously busy.
Carrier sensing is the process by which a station determines whether the shared communication channel is currently in use. The term "carrier" refers to the carrier signal—the electromagnetic wave or electrical signal that carries the actual data.
When a station transmits on a shared medium:

- Its signal propagates across the medium toward every other station,
- The signal raises the energy level on the medium above the idle level, and
- Every other station's receiver can observe that energy.

A station performing carrier sensing samples the medium and measures whether:

- The signal energy exceeds an idle threshold (channel busy), or
- Only background noise is present (channel idle).
Physical carrier sensing uses hardware to detect signals on the medium. Virtual carrier sensing (used in WiFi) uses a software timer called NAV (Network Allocation Vector) based on duration fields in frame headers. A station considers the channel busy if EITHER physical OR virtual sensing indicates busy.
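The "busy if EITHER indicates busy" rule combines cleanly into one predicate. The sketch below is a simplification, not the 802.11 state machine: the energy threshold value and the NAV bookkeeping are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CarrierSense:
    energy_threshold_dbm: float = -82.0  # assumed idle/busy detection threshold
    nav_expires_at: float = 0.0          # virtual carrier sense (NAV) timer

    def physical_busy(self, measured_power_dbm: float) -> bool:
        # Physical sensing: signal energy above the idle threshold.
        return measured_power_dbm > self.energy_threshold_dbm

    def virtual_busy(self, now: float) -> bool:
        # Virtual sensing: the NAV timer, set from duration fields in
        # overheard frame headers, has not yet expired.
        return now < self.nav_expires_at

    def channel_busy(self, measured_power_dbm: float, now: float) -> bool:
        # Busy if EITHER mechanism reports busy.
        return self.physical_busy(measured_power_dbm) or self.virtual_busy(now)

cs = CarrierSense()
cs.nav_expires_at = 10.0  # an overheard header claimed the medium until t=10
print(cs.channel_busy(measured_power_dbm=-95.0, now=5.0))   # True: NAV still set
print(cs.channel_busy(measured_power_dbm=-95.0, now=12.0))  # False: idle both ways
```

The key property is that virtual sensing can declare the channel busy even when no energy is currently measurable, which matters for wireless stations that cannot hear each other directly.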
Let's trace through the carrier sensing process step by step. We'll use Ethernet as our example, but the principles apply broadly to any CSMA system.
Step-by-Step Carrier Sensing in Ethernet:

1. A frame becomes ready to send and is handed to the NIC.
2. The NIC monitors its receive circuitry for carrier (signal activity on the medium).
3. If carrier is present, the station defers and continues listening.
4. When the carrier drops, the station waits out the Inter-Frame Gap (IFG).
5. If the channel is still idle after the IFG, the station begins transmitting.
The Inter-Frame Gap (IFG):
The IFG is a mandatory quiet period between frames. In 10 Mbps Ethernet, the IFG is 9.6 microseconds (96 bit times). For faster Ethernet variants, the IFG scales proportionally:
| Ethernet Speed | Bit Time | IFG Duration |
|---|---|---|
| 10 Mbps | 100 ns | 9.6 μs |
| 100 Mbps | 10 ns | 960 ns |
| 1 Gbps | 1 ns | 96 ns |
| 10 Gbps | 0.1 ns | 9.6 ns |
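The IFG values in the table follow directly from the 96-bit-time rule; a quick check:

```python
def ifg_seconds(rate_bps: float, ifg_bits: int = 96) -> float:
    """Inter-frame gap duration: 96 bit times at the given line rate."""
    bit_time = 1.0 / rate_bps
    return ifg_bits * bit_time

for name, rate in [("10 Mbps", 10e6), ("100 Mbps", 100e6),
                   ("1 Gbps", 1e9), ("10 Gbps", 10e9)]:
    print(f"{name}: IFG = {ifg_seconds(rate) * 1e9:.1f} ns")
```

At 10 Mbps this yields 9,600 ns (9.6 μs), matching the first table row; each tenfold speed increase shrinks the gap tenfold.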
The IFG serves multiple purposes:

- It gives receivers time to process the previous frame and prepare for the next one,
- It lets receiver electronics (e.g., clock-recovery circuits) resettle between frames, and
- It provides an unambiguous boundary separating consecutive frames.
Carrier sensing dramatically reduces collisions compared to ALOHA by eliminating the most obvious collision scenario: transmitting when someone else is clearly in the middle of their transmission.
Quantifying the improvement:
In ALOHA, a station transmits whenever it has data, regardless of channel state. If the channel is busy 50% of the time on average, then roughly 50% of all transmission attempts will immediately collide with ongoing transmissions.
With carrier sensing, this entire category of collisions is eliminated. A station waits if the channel is busy, only transmitting when it appears idle. This alone can more than double the effective throughput.
The intuition:
Think of a busy highway on-ramp without traffic lights (ALOHA) versus one with traffic lights (CSMA). Without lights, cars blindly merge, causing frequent collisions and gridlock. With lights that detect highway traffic, cars wait for a gap before merging—dramatically improving flow.
But notice: even with the traffic light, collisions can still happen. If two cars on different on-ramps both see a gap at the same time, both may try to merge into the same spot. This is analogous to the collisions that can still occur in CSMA, which we'll explore next.
Carrier sensing eliminates 'obvious' collisions—transmitting into an already-busy channel. This alone dramatically improves throughput. The remaining collisions occur in a much smaller window: when multiple stations sense idle simultaneously.
If carrier sensing tells us when the channel is busy, why do collisions still occur in CSMA? The answer lies in a fundamental physical constraint: signals do not travel instantaneously.
Light travels at approximately 300,000 km/s in a vacuum, and electrical/optical signals in cables travel at about 60-80% of this speed (roughly 180,000-240,000 km/s). While this sounds incredibly fast, in networking terms it creates measurable delays.
Example: A 2 km Ethernet segment
Consider two stations, A and B, at opposite ends of a 2 km coaxial cable:

- Propagation speed in the cable: roughly 200,000 km/s (2 × 10⁸ m/s)
- Propagation delay: 2,000 m ÷ (2 × 10⁸ m/s) = 10 μs
This 10 μs propagation delay creates a critical vulnerability window:
| Time | Station A | Station B |
|---|---|---|
| t=0 | Senses idle, starts transmitting | Senses idle (A's signal hasn't arrived yet!) |
| t=5μs | Transmitting | Still senses idle, could start transmitting |
| t=9μs | Transmitting | Still senses idle |
| t=10μs | Transmitting | A's signal finally arrives — too late! |
If station B starts transmitting before t=10μs (before A's signal reaches B), a collision occurs. Both A and B will have their frames destroyed.
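The timeline above can be checked numerically. This is a sketch using the example's own numbers (a 2 km cable and 2 × 10⁸ m/s propagation speed):

```python
def propagation_delay_s(distance_m: float, speed_m_per_s: float = 2e8) -> float:
    """Time for a signal to cross the given cable length."""
    return distance_m / speed_m_per_s

def collide(start_a_s: float, start_b_s: float, distance_m: float) -> bool:
    """Two stations at opposite ends collide if the later starter begins
    before the earlier one's signal has had time to reach it."""
    tau = propagation_delay_s(distance_m)
    return abs(start_a_s - start_b_s) < tau

tau = propagation_delay_s(2_000)                # 2 km cable
print(f"propagation delay: {tau * 1e6:.0f} us") # 10 us
print(collide(0.0, 9e-6, 2_000))    # True: B starts at t=9 us, inside the window
print(collide(0.0, 11e-6, 2_000))   # False: A's signal reached B at t=10 us
```

The `collide` predicate is exactly the "vulnerable period" of the next subsection: only start times within τ of each other can collide.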
Carrier sensing can only detect transmissions that have had time to propagate to the sensing station. During the propagation delay window, a distant transmission is 'invisible' to other stations. This window—equal to the propagation delay—is when CSMA collisions can occur.
Vulnerable Period in CSMA:
The vulnerable period in CSMA equals the propagation delay (τ) of the network. Compare this to ALOHA:
| Protocol | Vulnerable Period | Explanation |
|---|---|---|
| Pure ALOHA | 2 × T_frame | Any overlapping transmission causes collision |
| Slotted ALOHA | T_frame | Only same-slot transmissions collide |
| CSMA | τ (propagation delay) | Only transmissions started within τ of each other collide |
Since propagation delay (τ) is typically much smaller than frame transmission time (T_frame), CSMA's vulnerable period is dramatically shorter, leading to far fewer collisions.
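The comparison in the table can be made concrete with the 2 km / 10 Mbps numbers from the earlier example (the 1,500-byte frame size is an illustrative assumption):

```python
frame_bits = 1_500 * 8           # assumed 1,500-byte frame
rate_bps = 10e6                  # 10 Mbps Ethernet
t_frame = frame_bits / rate_bps  # frame transmission time
tau = 2_000 / 2e8                # 2 km cable at 2e8 m/s

print(f"Pure ALOHA vulnerable period:    {2 * t_frame * 1e6:.0f} us")
print(f"Slotted ALOHA vulnerable period: {t_frame * 1e6:.0f} us")
print(f"CSMA vulnerable period:          {tau * 1e6:.0f} us")
```

For these numbers the vulnerable period shrinks from 2,400 μs (pure ALOHA) to 1,200 μs (slotted) to just 10 μs (CSMA), a reduction of more than two orders of magnitude.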
The effectiveness of carrier sensing depends critically on the ratio between propagation delay and frame transmission time. This ratio is captured in a fundamental parameter called 'a':
$$a = \frac{\tau}{T_{frame}} = \frac{\text{Propagation Delay}}{\text{Transmission Time}}$$
Where:

- τ (tau) is the propagation delay: the time for a signal to travel between the two most distant stations, and
- T_frame is the transmission time: the time to place an entire frame on the medium (frame size ÷ data rate).
Interpreting the 'a' parameter:
| Value of 'a' | Network Characteristic | Carrier Sensing Effectiveness |
|---|---|---|
| a << 1 | Propagation delay much smaller than frame time | Excellent: Very short vulnerable period |
| a ≈ 0.01 | Typical LAN (short distances, large frames) | Very good: CSMA works very well |
| a ≈ 0.1 | Fast LAN or small frames | Good: Still significant benefit |
| a ≈ 1 | Propagation ≈ transmission time | Poor: Vulnerable period comparable to frame time |
| a >> 1 | Satellite/long-distance, small frames | Very poor: CSMA degrades toward ALOHA |
Example calculations:
Scenario 1: Classic 10 Mbps Ethernet LAN

- Maximum span: about 2,500 m, so τ ≈ 2,500 m ÷ (2 × 10⁸ m/s) = 12.5 μs
- A 1,500-byte frame at 10 Mbps: T_frame = 12,000 bits ÷ 10⁷ bits/s = 1.2 ms
- a = 12.5 μs ÷ 1,200 μs ≈ 0.01
Scenario 2: Geosynchronous Satellite

- Ground-to-ground path via the satellite: roughly 72,000 km, so τ ≈ 240 ms
- The same 1,500-byte frame at 10 Mbps: T_frame = 1.2 ms
- a = 240 ms ÷ 1.2 ms = 200
With a = 200, the propagation delay is 200 times the frame transmission time. By the time a satellite station detects another station's transmission, 200 frame-times have already elapsed; a continuously transmitting station would already have sent 200 frames! Carrier sensing provides almost no benefit in satellite networks, so they must use other approaches, such as reservation-based protocols.
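Both scenarios follow directly from the definition of a. The numbers below are illustrative assumptions consistent with the scenarios above (a 2,500 m LAN span, a ~240 ms ground-to-ground satellite path, 1,500-byte frames at 10 Mbps):

```python
def a_parameter(tau_s: float, frame_bits: int, rate_bps: float) -> float:
    """a = propagation delay / frame transmission time."""
    t_frame = frame_bits / rate_bps
    return tau_s / t_frame

frame_bits = 1_500 * 8  # assumed 1,500-byte frame

# Scenario 1: 10 Mbps LAN, 2,500 m maximum span at 2e8 m/s
lan_tau = 2_500 / 2e8
print(f"LAN:       a = {a_parameter(lan_tau, frame_bits, 10e6):.4f}")

# Scenario 2: geosynchronous satellite, ~240 ms ground-to-ground
sat_tau = 0.240
print(f"Satellite: a = {a_parameter(sat_tau, frame_bits, 10e6):.0f}")
```

The LAN lands near a ≈ 0.01 (the "very good" row of the table), while the satellite lands at a = 200, deep in the "very poor" regime.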
The key insight:
CSMA is highly effective when a is small—that is, when propagation delay is a small fraction of frame transmission time. This is why CSMA (and specifically CSMA/CD) became the foundation of Ethernet:

- LAN distances are short (tens to hundreds of meters), keeping τ tiny,
- Frames are large relative to the propagation delay, and
- The resulting a << 1 makes the vulnerable period negligible.
For networks with large 'a' (long distances, short frames, high speeds), CSMA provides diminishing returns, and other MAC protocols become necessary.
Carrier sensing is implemented differently depending on the physical medium. Understanding these implementations helps explain the capabilities and limitations of CSMA in various network types.
Coaxial and Twisted-Pair Carrier Sensing:
In traditional Ethernet, carrier sensing is performed by monitoring voltage levels on the cable:
Manchester Encoding: Classic 10 Mbps Ethernet uses Manchester encoding, in which (per the IEEE 802.3 convention) a logical '1' is represented by a low-to-high voltage transition at mid-bit and a '0' by a high-to-low transition. An idle cable has no transitions.
Carrier Detect Circuitry: The NIC contains analog circuitry (CD - Carrier Detect) that monitors for voltage transitions. Any transition activity indicates carrier present.
Threshold Detection: The receiver compares voltage levels against thresholds. If voltage swings exceed the idle threshold, carrier is detected.
Modern Twisted-Pair (100BASE-TX, 1000BASE-T):
Modern Ethernet uses more sophisticated encoding (MLT-3, PAM-5) but the principle remains: the PHY chip (physical layer chip) continuously monitors the receive pair for valid signal patterns and reports 'carrier sense' status to the MAC layer.
Modern switched Ethernet eliminates the shared medium entirely: each switch port forms its own collision domain containing a single device. Carrier sensing still occurs, but collisions are impossible in full-duplex mode. CSMA concepts remain relevant for wireless (WiFi) and legacy shared-media networks.
Carrier sensing represents a fundamental improvement over ALOHA's 'transmit blindly' approach. By implementing 'listen before talk,' CSMA protocols dramatically reduce collisions and improve channel utilization.
What's next:
We've established that carrier sensing determines whether the channel is idle. But what should a station do when it has data to send? The answer to this question leads to different persistence strategies:

- 1-persistent CSMA: keep sensing and transmit the moment the channel goes idle,
- Non-persistent CSMA: if the channel is busy, wait a random time before sensing again, and
- p-persistent CSMA: when the channel is idle, transmit with probability p (deferring otherwise).
Each strategy offers different tradeoffs between throughput and collision probability. The next page explores 1-persistent CSMA in depth.
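As a preview, the strategies differ only in what a station does around the sensing result. This is a schematic sketch under stated assumptions: `channel_idle`, `transmit`, and `wait` are hypothetical placeholders passed in by the caller, not a real MAC interface.

```python
import random

def one_persistent(channel_idle, transmit, wait):
    # Sense continuously; transmit the instant the channel goes idle.
    while not channel_idle():
        pass  # keep sensing
    transmit()

def non_persistent(channel_idle, transmit, wait):
    # If busy, back off for a random time before sensing again.
    while not channel_idle():
        wait(random.uniform(0, 1e-3))  # assumed backoff range
    transmit()

def p_persistent(channel_idle, transmit, wait, p=0.1, slot=51.2e-6):
    # When idle, transmit with probability p; otherwise defer one slot.
    while True:
        while not channel_idle():
            pass
        if random.random() < p:
            transmit()
            return
        wait(slot)
```

The tradeoff each choice makes between throughput and collision probability is the subject of the following pages.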
You now understand carrier sensing—the fundamental mechanism underlying all CSMA protocols. You can explain how stations detect channel activity, why propagation delay limits CSMA effectiveness, and how the 'a' parameter determines when CSMA is beneficial. Next, we'll explore what actions stations take based on sensing results.