A network architect faces a critical decision: which controlled access protocol best fits the application requirements? This isn't a theoretical exercise—the wrong choice can mean wasted bandwidth, missed latency deadlines, or a network that fails completely when a single component goes down.
Each protocol represents a different philosophy for coordinating channel access.
This comparison synthesizes everything we've learned, providing a decision framework for selecting—or combining—controlled access mechanisms.
By the end of this page, you will understand how to choose between polling, token passing, and reservation based on network requirements—topology constraints, traffic patterns, latency requirements, fault tolerance needs, and implementation complexity. You'll also see how modern protocols combine these approaches.
Let's begin with a comprehensive comparison of fundamental characteristics across all three controlled access protocol families.
| Characteristic | Polling | Token Passing | Reservation |
|---|---|---|---|
| Control architecture | Centralized (primary/secondary) | Distributed (peer-to-peer) | Centralized or distributed |
| Collision behavior | None (explicit permission) | None (token holder only) | None in data period; possible in request period |
| Single point of failure | Primary station | None (with recovery protocols) | Controller (if centralized) |
| Latency determinism | Bounded, predictable | Bounded, predictable | Bounded after reservation |
| Fairness guarantee | By polling schedule | By token circulation | By allocation algorithm |
| Overhead type | Poll/response messages | Token circulation | Reservation signaling |
| Topology flexibility | Any (star, bus, etc.) | Ring (physical or logical) | Any |
| Station autonomy | Low (slave stations) | High (peer stations) | Medium (request needed) |
Protocol Philosophy:
| Protocol | Core Philosophy |
|---|---|
| Polling | "Ask permission before speaking" — The controller explicitly grants each station the right to transmit. Maximum control, minimum autonomy. |
| Token Passing | "Pass the talking stick" — Stations cooperatively share the token, self-organizing who transmits when. Maximum autonomy, distributed control. |
| Reservation | "Book your appointment" — Stations request future access, allowing the system to schedule efficiently. Balances control and autonomy. |
Understanding the core philosophy helps predict behavior. Polling is paternalistic—the controller knows best. Token passing is democratic—stations cooperate as equals. Reservation is bureaucratic—central planning optimizes allocation. Each philosophy has contexts where it excels.
Efficiency depends critically on the traffic pattern. Let's compare how each protocol performs under different conditions.
Efficiency Formulas:
| Protocol | Efficiency Formula | Key Parameters |
|---|---|---|
| Polling | η_poll = D / (D + N×P) | D=data time, N=stations, P=poll overhead |
| Token (early) | η_token = 1 / (1 + a/N) | a=propagation ratio, N=active stations |
| Reservation | η_res = D / (D + R) | D=data period, R=reservation period |
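These formulas are simple enough to express directly in code. The sketch below is a minimal illustration of the three expressions above; the function and parameter names (polling_efficiency, data_time, and so on) are assumptions for this example, not part of any standard.

```python
def polling_efficiency(data_time, n_stations, poll_overhead):
    """eta_poll = D / (D + N*P): useful data time over total cycle time."""
    return data_time / (data_time + n_stations * poll_overhead)


def token_efficiency_early_release(a, n_active):
    """eta_token = 1 / (1 + a/N): a is the propagation-to-transmission ratio."""
    return 1 / (1 + a / n_active)


def reservation_efficiency(data_period, reservation_period):
    """eta_res = D / (D + R): data period over data-plus-reservation period."""
    return data_period / (data_period + reservation_period)
```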
Under Various Traffic Loads:
| Traffic Pattern | Polling | Token Passing | Reservation |
|---|---|---|---|
| Light (5% stations active) | Poor (95% empty polls) | Good (idle token passes quickly) | Poor (reservation overhead dominates) |
| Medium (50% active) | Moderate | Good | Good |
| Heavy (90%+ active) | Good (most polls return data) | Excellent (near 100% utilization) | Excellent (full data period use) |
| Bursty/Variable | Poor-Moderate | Good (adapts naturally) | Good (if request channel efficient) |
| Continuous stream | Good | Excellent | Excellent (single reservation) |
Quantitative Example:
Consider a network with 100 stations, 1000-byte data frames, 10-byte poll messages, a propagation ratio a = 0.5, and 10 of the 100 stations (10%) active in a typical cycle.
Polling Efficiency:
One cycle: 100 polls × 10 bytes + 10 data frames × 1000 bytes
= 1000 bytes + 10,000 bytes
η_poll = 10,000 / 11,000 = 90.9%
But with only 1% active: η = 1000 / 2000 = 50%
Token Passing Efficiency:
η_token = 1 / (1 + 0.5/100) = 1 / 1.005 = 99.5%
(Activity rate doesn't affect efficiency—only adds latency)
Reservation Efficiency:
With a reservation period of 100 mini-slots, each 1 byte:
Reservation period = 100 bytes
Data period (10 active) = 10 × 1000 bytes = 10,000 bytes
η_res = 10,000 / 10,100 = 99.0%
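The same figures fall out of a few lines of arithmetic; the snippet below simply re-evaluates the three calculations above, with byte counts standing in for transmission times.

```python
# Polling: 100 polls x 10 bytes of overhead, 10 frames x 1000 bytes of data
print(10_000 / (10_000 + 100 * 10))  # 0.909 -> 90.9%
print(1_000 / (1_000 + 100 * 10))    # 0.500 -> 50% when only 1% of stations are active

# Token passing with early release: a = 0.5, N = 100 stations
print(1 / (1 + 0.5 / 100))           # 0.995 -> 99.5%

# Reservation: 100 one-byte mini-slots, then 10 x 1000-byte data frames
print(10_000 / (10_000 + 100))       # 0.990 -> 99.0%
```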
Polling efficiency depends heavily on activity rate—empty polls waste bandwidth. Token passing efficiency is independent of activity—the token simply circulates faster when stations are idle. Reservation efficiency depends on reservation overhead relative to data period length.
For real-time applications, latency bounds are often more important than raw throughput.
Latency Formulas:
| Protocol | Worst-Case Latency | Average Latency |
|---|---|---|
| Polling | (N-1) × poll_cycle | N/2 × poll_cycle |
| Token | N × THT + τ | N/2 × avg_hold + τ/2 |
| Reservation | frame_period + transmission | frame_period/2 + transmission |
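Put into code, the worst-case bounds look like this. A minimal sketch assuming consistent time units; the function and parameter names (poll_cycle, tht, tau, frame_period) are illustrative assumptions.

```python
def polling_worst_case(n_stations, poll_cycle):
    """(N-1) x poll_cycle: wait while every other station is polled once."""
    return (n_stations - 1) * poll_cycle


def token_worst_case(n_stations, tht, tau):
    """N x THT + tau: every station holds the token fully, plus ring propagation."""
    return n_stations * tht + tau


def reservation_worst_case(frame_period, transmission_time):
    """frame_period + transmission: wait up to one frame to reserve, then transmit."""
    return frame_period + transmission_time


# Example: 100 stations at 1 ms per poll -> 99 ms worst-case polling delay
print(polling_worst_case(100, 1.0))  # 99.0
```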
Comparative Analysis:
| Aspect | Polling | Token Passing | Reservation |
|---|---|---|---|
| Worst-case bound | Tight, predictable | Tight, predictable | Tight, predictable |
| Dependence on N | Linear | Linear | Sublinear (slots, not stations) |
| Load sensitivity | Low (poll cycle fixed) | High (THT usage varies) | Medium (contention in request) |
| Priority support | Easy (reorder poll list) | Moderate (reservation bits) | Easy (priority scheduling) |
| Jitter | Low | Moderate (variable hold times) | Low (fixed slots) |
Latency Under Load:
Latency (ms)
  ↑
  │
100 ┼                                  ....  Polling (saturated)
    │                              ....
 80 ┼                          ....
    │                      ....
 60 ┼                  ....────────────────── Token (THT limited)
    │              ....
 40 ┼          ....
    │      ...._____________________________ Reservation (frame period)
 20 ┼  ....
    │...
    └────────────────────────────────────────► Load (%)
         20      40       60      80      100
Key Observation:
Unlike controlled access protocols, CSMA/CD has UNBOUNDED worst-case latency under heavy load. Collisions can repeat indefinitely. For any application requiring latency guarantees, controlled access protocols (polling, token, reservation) are mandatory.
Reliability is critical for production networks. How does each protocol handle failures?
| Failure Scenario | Polling | Token Passing | Reservation |
|---|---|---|---|
| Primary/Controller failure | Network halts completely | N/A (no central controller) | Network halts (if centralized) |
| Station failure | Timeout, poll skipped | Token lost recovery, bypass | Slot unused |
| Link failure | Stations beyond break isolated | Ring wrap (if dual ring) | Depends on topology |
| Message corruption | CRC retry | Token regeneration | Reservation retry |
| Lost token/poll | Controller regenerates poll | Active Monitor regenerates | Reservation timeout/retry |
Recovery Mechanisms:
Polling:
Primary-Backup Architecture:
- Primary polls secondaries
- Backup monitors primary via heartbeat
- On primary failure, backup takes over
- Recovery time: heartbeat timeout + takeover
Typical recovery: 1-10 seconds
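As a rough illustration of that primary-backup pattern, the sketch below promotes a backup controller when heartbeats stop. The class, method names, and the 3-second timeout are assumptions made up for this example, not taken from any specific product.

```python
import time


class BackupController:
    """Monitors the primary's heartbeats and takes over polling if it goes silent."""

    def __init__(self, heartbeat_timeout=3.0):
        self.heartbeat_timeout = heartbeat_timeout  # seconds of silence tolerated
        self.last_heartbeat = time.monotonic()
        self.is_active_primary = False

    def on_heartbeat(self):
        """Record that the primary is still alive."""
        self.last_heartbeat = time.monotonic()

    def check_primary(self):
        """Run periodically; promote this backup if the primary has gone silent."""
        silent_for = time.monotonic() - self.last_heartbeat
        if not self.is_active_primary and silent_for > self.heartbeat_timeout:
            self.is_active_primary = True
            self.start_polling_secondaries()  # recovery ~ heartbeat timeout + takeover

    def start_polling_secondaries(self):
        """Placeholder: begin the normal poll cycle as the new primary."""
```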
Token Passing:
Distributed Recovery:
- Active Monitor detects lost token (timer)
- Claim Token Process elects new AM
- New AM generates fresh token
- Ring wraps around failed stations
Typical recovery: 50-500 milliseconds
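A simplified sketch of the lost-token timer that drives this recovery follows; the timer handling is condensed and the class and method names are assumptions, not the full IEEE 802.5 claim-token procedure.

```python
class ActiveMonitorTimer:
    """Regenerates the token if none circulates within the expected rotation time."""

    def __init__(self, max_rotation_time):
        self.max_rotation_time = max_rotation_time  # roughly N x THT + ring latency
        self.time_since_token = 0.0

    def on_token_seen(self):
        """Reset the timer every time a valid token passes this station."""
        self.time_since_token = 0.0

    def tick(self, elapsed):
        """Advance the timer; if it expires, purge the ring and issue a new token."""
        self.time_since_token += elapsed
        if self.time_since_token > self.max_rotation_time:
            self.regenerate_token()

    def regenerate_token(self):
        """Simplified stand-in for the ring purge + fresh-token sequence."""
        self.time_since_token = 0.0
```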
Reservation:
Graceful Degradation:
- Unused reservations timeout
- New reservations fill gaps
- Central controller can have backup
Typical recovery: 1 frame period (request retry)
Token passing's distributed nature provides better resilience (no central point of failure) but requires complex recovery protocols. Polling's centralized control makes failure obvious but fatal without redundancy. Choose based on failure tolerance requirements and acceptable complexity.
Complexity affects development cost, debugging difficulty, and system reliability. Let's compare implementation challenges.
| Aspect | Polling | Token Passing | Reservation |
|---|---|---|---|
| Primary station logic | Complex (scheduling, timeout, retry) | Simple (standard station) | Complex (allocation algorithm) |
| Secondary station logic | Simple (respond to poll) | Moderate (token handling) | Moderate (request, slot use) |
| Protocol state machine | Simple | Moderate (multiple states) | Complex (request states) |
| Failure recovery | Simple (primary handles) | Complex (distributed recovery) | Moderate (timeout/retry) |
| Priority implementation | Easy (reorder poll list) | Complex (reservation bits) | Moderate (priority queues) |
| Ring maintenance | N/A | Complex (add/remove stations) | N/A |
| Timing requirements | Relaxed | Strict (token timers) | Moderate (slot timing) |
```python
# Simplified complexity comparison of protocol state machines

class PollingSecondary:
    """Simple polling secondary - minimal state."""

    class State:
        IDLE = "idle"
        RESPONDING = "responding"

    def __init__(self):
        self.state = self.State.IDLE
        self.tx_buffer = []

    def receive_poll(self):
        if self.tx_buffer:
            self.state = self.State.RESPONDING
            return self.send_data()
        else:
            return self.send_nak()

    def send_data(self):
        """Placeholder: transmit the queued frame in response to the poll."""
        return self.tx_buffer.pop(0)

    def send_nak(self):
        """Placeholder: tell the primary there is nothing to send."""
        return None


class TokenRingStation:
    """Token Ring station - moderate complexity."""

    class State:
        IDLE = "idle"            # Waiting for token
        REPEAT = "repeat"        # Relaying someone else's frame
        TRANSMIT = "transmit"    # Holding token, sending
        DRAIN = "drain"          # Waiting for our frame to return
        CLAIM = "claim"          # Lost token recovery
        BEACON = "beacon"        # Fault recovery
        STANDBY = "standby_monitor"
        ACTIVE = "active_monitor"

    def __init__(self):
        self.state = self.State.IDLE
        self.timer_token_rotation = None
        self.timer_transmit = None
        self.priority = 0
        self.saved_priority = 0
        # ... many more state variables

# Full implementation requires ~500+ lines for proper
# token handling, priority, monitor functions, recovery


class ReservationStation:
    """Reservation protocol station - moderate complexity."""

    class State:
        NO_RESERVATION = "no_reservation"
        CONTENDING = "contending"          # In request phase
        AWAITING_GRANT = "awaiting_grant"
        RESERVED = "reserved"              # Have a slot
        TRANSMITTING = "transmitting"
        RELEASING = "releasing"

    def __init__(self):
        self.state = self.State.NO_RESERVATION
        self.my_slots = []
        self.backoff_counter = 0
        # Moderate complexity - clear state transitions
```

Complexity Summary:
| Protocol | Lines of Code (Estimate) | Key Complexity Source |
|---|---|---|
| Polling Primary | ~200-300 | Scheduling, timeouts, failover |
| Polling Secondary | ~50-100 | Very simple |
| Token Ring Station | ~500-1000 | Recovery, priority, ring maintenance |
| FDDI Station | ~1500-2000 | TTR, dual ring, full recovery |
| Reservation Station | ~200-400 | Request protocol, slot management |
These estimates are for core MAC functionality. Add PHY layer, management interfaces, diagnostics, and edge cases, and real implementations are 10-50× larger. Token Ring's complexity contributed to its higher cost compared to Ethernet.
The "right" protocol depends on the application domain. Here's a mapping of protocols to their ideal use cases.
| Application Domain | Best Protocol | Rationale |
|---|---|---|
| Industrial SCADA | Polling | Centralized control, simple slaves, deterministic scan |
| Factory automation (PLCs) | Token Bus / Polling | Deterministic timing, existing bus infrastructure |
| Office LAN (historical) | Token Ring | Fair access, bounded latency, high reliability |
| MAN/WAN backbone | FDDI / Reservation | Long distances, guaranteed bandwidth |
| Satellite VSAT | Reservation (DAMA) | Expensive bandwidth, bursty traffic |
| Wireless voice | PRMA (Reservation) | Statistical multiplexing, voice activity |
| Avionics/Military | MIL-STD-1553 (Polling) | Certified determinism, simple stations |
| Building automation | Polling / Token | Low-speed, many devices, reliability |
Decision Tree for Protocol Selection:
┌─ Need centralized control?
│ ├─ YES → Polling
│ │ ├─ Many idle stations? → Consider reservation hybrid
│ │ └─ Real-time critical? → Add primary redundancy
│ │
│ └─ NO → Distributed control
│ ├─ Ring topology feasible?
│ │ ├─ YES → Token Ring/FDDI
│ │ │ └─ Very large ring? → Must use early release
│ │ │
│ │ └─ NO → Create logical ring (Token Bus)
│ │
│ └─ Bus/Star topology?
│ └─ Reservation (with request channel design)
│
└─ Traffic pattern?
├─ Continuous, regular → Polling or Token
├─ Bursty, irregular → Reservation (R-ALOHA style)
└─ Voice/Real-time → PRMA or priority-enhanced token
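The tree above can be collapsed into a small selection function, shown below as a rough sketch; the inputs and recommendation strings are illustrative simplifications of the decision tree, not a definitive rule set.

```python
def select_protocol(centralized_control, ring_feasible, traffic_pattern):
    """Map coarse requirements to a protocol family.

    traffic_pattern is 'continuous', 'bursty', or 'realtime'.
    """
    if centralized_control:
        return ("Polling (add primary redundancy if real-time critical; "
                "consider a reservation hybrid if many stations sit idle)")
    if traffic_pattern == "realtime":
        return "PRMA or priority-enhanced token passing"
    if traffic_pattern == "bursty":
        return "Reservation (R-ALOHA style request channel)"
    if ring_feasible:
        return "Token Ring / FDDI (use early token release on large rings)"
    return "Token Bus (logical ring) or reservation, depending on topology"


# Example: a distributed system with a feasible ring and continuous traffic
print(select_protocol(False, True, "continuous"))
```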
Each protocol excels in its niche. Polling dominates where central control is natural (SCADA). Token passing thrives where peer equality and fault tolerance matter (LANs, backbones). Reservation wins where bandwidth is precious and traffic is variable (satellite, cellular). The engineer's job is matching protocol to requirements.
Modern systems often combine multiple controlled access concepts to achieve the best of all worlds.
Example Hybrids:
| System | Contention Component | Controlled Component | Benefit |
|---|---|---|---|
| LTE/5G Uplink | RACH (Random Access) | Scheduled (Polling-like) | Fast initial access + efficient data |
| DOCSIS 3.1 | Contention mini-slots | Scheduled data slots | Flexible request + guaranteed BW |
| Wi-Fi 6 OFDMA | CSMA/CA fallback | Trigger-based scheduling | Low latency for critical traffic |
| Reservation ALOHA | Slot acquisition | Reserved ongoing use | Handle both bursty and continuous |
| Bluetooth | Polling (piconet) | TDMA slots | Simple + deterministic data phase |
LTE Uplink: Hybrid Polling + Contention:
Initial Access (RACH - Contention):
UE ──[Random Access Preamble]──> eNodeB
eNodeB ──[Random Access Response]──> UE
UE ──[Connection Request]──> eNodeB
Data Transfer (Scheduled - Polling-like):
UE ──[Buffer Status Report]──> eNodeB
eNodeB ──[Uplink Grant (slot allocation)]──> UE
UE ──[Data in allocated slots]──> eNodeB
Benefit:
- Contention for initial access (no pre-assignment needed)
- Scheduling for data (no collisions, QoS control)
Wi-Fi 6 OFDMA: Hybrid Contention + Controlled:
Traditional Mode (CSMA/CA - Contention):
Station contends for channel
If win, transmit entire frame
Trigger-Based Mode (Scheduled - Controlled):
AP sends Trigger Frame with RU assignments
Stations transmit simultaneously in assigned RUs
No contention within RU - deterministic access
Benefit:
- Legacy compatibility via CSMA/CA
- Low-latency, multi-user via OFDMA scheduling
Modern protocols increasingly adapt their behavior based on load and traffic type. Light load uses contention (fast access). Heavy load switches to scheduling (efficient, fair). Real-time traffic gets priority/reserved slots. The MAC layer becomes an intelligent resource allocator rather than a fixed algorithm.
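The sketch below is a toy illustration of that load-adaptive behavior: a hypothetical MAC that flips between a contention mode and a scheduled mode. The threshold, mode names, and class are assumptions invented for this illustration, not a real protocol's API.

```python
class AdaptiveMac:
    """Toy access-mode selector: contention at light load, scheduling under heavy load."""

    CONTENTION = "contention"  # CSMA/CA-style random access: fast when lightly loaded
    SCHEDULED = "scheduled"    # grant/trigger-based allocation: efficient and fair when busy

    def __init__(self, load_threshold=0.5):
        self.load_threshold = load_threshold
        self.mode = self.CONTENTION

    def choose_mode(self, channel_utilization, has_realtime_traffic):
        """Pick the access mode for the next frame period."""
        if has_realtime_traffic or channel_utilization > self.load_threshold:
            self.mode = self.SCHEDULED   # bounded latency, no collisions
        else:
            self.mode = self.CONTENTION  # minimal overhead at light load
        return self.mode


# Example: 80% utilization pushes the MAC into scheduled mode
print(AdaptiveMac().choose_mode(0.8, has_realtime_traffic=False))  # "scheduled"
```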
Let's consolidate everything into actionable guidelines for controlled access protocol selection.
| If You Need... | Choose | Because |
|---|---|---|
| Centralized control | Polling | Natural master-slave hierarchy |
| Distributed resilience | Token Passing | No single point of failure |
| Flexible bandwidth allocation | Reservation | Dynamic slot assignment |
| Minimum station complexity | Polling (secondary role) | Slaves just respond |
| Real-time guarantees | Any controlled access | Bounded latency |
| Voice traffic efficiency | PRMA (Reservation) | Exploits activity factor |
| Long-distance links | Token (early release) or DAMA | Handles propagation delay |
| Sparse, bursty traffic | Reservation (contention-based) | Request only when needed |
| Dense, continuous traffic | Polling or Token | Low per-frame overhead |
Module Complete:
You have now mastered controlled access protocols for medium access control: how polling, token passing, and reservation coordinate channel access, how their efficiency and latency compare under different traffic loads, how each handles failures, what each costs to implement, and how modern hybrid protocols combine them.
This knowledge forms the foundation for understanding deterministic networking—essential for industrial control, real-time systems, and quality-of-service guarantees in modern networks.
Congratulations! You've completed the Controlled Access module. You now have the analytical tools and conceptual understanding to design or select controlled access protocols for any networking application requiring collision-free, deterministic channel access.