Imagine a classroom where every student shouts out answers simultaneously—chaos ensues, nobody is heard, and learning grinds to a halt. Now imagine the teacher systematically calling on each student in turn: "Alice, do you have something to say? No? Bob, how about you?" This orderly approach is precisely what polling achieves in computer networks.
Unlike contention-based protocols (ALOHA, CSMA/CD) where stations compete for the channel and collisions are inevitable, controlled access protocols eliminate collisions entirely by coordinating which station may transmit at any given moment. Polling represents the centralized approach to this coordination—a primary station (controller) sequentially queries secondary stations, granting each an exclusive opportunity to transmit.
This seemingly simple concept has profound implications for network determinism, fairness, and real-time guarantees that contention-based systems simply cannot provide.
By the end of this page, you will understand the complete mechanics of polling protocols—how primary stations coordinate channel access, the poll/select protocol structure, round-robin versus priority-based polling strategies, overhead analysis, and why polling remains essential for industrial control systems, satellite networks, and real-time applications where predictable latency matters more than raw throughput.
Before we dive into polling, we must understand why contention-based protocols—despite their simplicity and widespread use—fail catastrophically in certain applications.
The Contention Paradigm:
In protocols like CSMA/CD (classic shared Ethernet), stations compete for the shared medium: each station listens first, transmits when the channel appears idle, and backs off for a random interval after a collision.
This works remarkably well for bursty, best-effort traffic. But it has fundamental limitations:
| Limitation | Consequence | Impact on Applications |
|---|---|---|
| Non-deterministic access | Unbounded worst-case latency | Fatal for real-time control systems |
| Collision overhead | Wasted bandwidth during contention | Throughput degrades under heavy load |
| Unfairness possible | Some stations may dominate | Starvation of low-priority traffic |
| Hidden terminal problem | Collision detection fails in wireless | Severe degradation in WLANs |
| Load-dependent performance | Efficiency drops as load increases | Unpredictable behavior at scale |
Consider a factory floor with 100 sensors reporting to a central controller. If each sensor uses CSMA/CD and a critical alarm sensor needs to transmit during peak load, it might wait indefinitely while collisions resolve. In industrial control, this delay could mean equipment damage or safety hazards. Controlled access becomes mandatory.
The Controlled Access Solution:
Controlled access protocols eliminate these problems through coordination:
| Property | Contention (CSMA/CD) | Controlled Access (Polling) |
|---|---|---|
| Collisions | Occur and must be resolved | Never occur |
| Access delay | Variable, unbounded | Bounded, deterministic |
| Fairness | Statistical, best-effort | Guaranteed by design |
| Overhead type | Collisions, backoff | Polling messages |
| Complexity | Distributed, simple | Centralized, moderate |
Polling achieves these guarantees through a master-slave architecture where one station controls all medium access.
Polling is a centralized controlled access protocol where a designated primary station (also called the controller, master, or poller) systematically queries secondary stations (also called slaves or polled stations) for data.
Key Architectural Components:
Primary Station (Controller): owns the channel, maintains the polling list, and decides which secondary may transmit at any given moment.
Secondary Stations (Slaves): transmit only when explicitly polled or selected by the primary; they never initiate transmission on their own.
Polling List: the ordered list of secondary addresses the primary works through, which defines the service order and therefore each station's worst-case wait.
The Poll/Select Protocol:
Polling systems typically implement two complementary operations:
Poll Operation (Secondary → Primary): the primary asks a secondary whether it has data to send. If it does, the secondary transmits and the primary acknowledges; if not, the secondary answers with a NAK and the primary moves on.
Select Operation (Primary → Secondary): the primary informs a specific secondary that it has data for it. The secondary acknowledges that it is ready, and the primary then transmits the data.
This bidirectional capability allows the controller to both collect data from sensors and send commands to actuators—essential for industrial control systems.
Think of the primary station as a traffic officer at an intersection without traffic lights. Instead of letting cars (stations) compete and potentially collide, the officer explicitly directs each car when to go. No collisions occur, but every movement requires the officer's explicit permission.
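To make the two operations concrete, here is a toy, in-memory sketch of the poll/select exchanges. The Secondary class and the helper names are illustrative only, not part of any standard; no real link layer is modeled.

```python
# Toy in-memory model of poll/select; message kinds are plain strings here.
DATA, NAK, ACK = "DATA", "NAK", "ACK"

class Secondary:
    def __init__(self, address: int):
        self.address = address
        self.outbox = []   # frames waiting to go up to the primary
        self.inbox = []    # frames delivered down by the primary

    def on_poll(self):
        """Answer a poll: hand over one frame, or NAK if nothing is queued."""
        return (DATA, self.outbox.pop(0)) if self.outbox else (NAK, None)

    def on_select(self, frame):
        """Accept a frame pushed down by the primary."""
        self.inbox.append(frame)
        return ACK

def poll(secondary: Secondary):
    """Primary collects data from a secondary (data flows secondary -> primary)."""
    kind, frame = secondary.on_poll()
    return frame if kind == DATA else None

def select(secondary: Secondary, frame) -> bool:
    """Primary delivers data to a secondary (data flows primary -> secondary)."""
    return secondary.on_select(frame) == ACK

# Collect a sensor reading, then push a command to an actuator
sensor = Secondary(address=1)
sensor.outbox.append("temp=72.4")
actuator = Secondary(address=2)
print(poll(sensor))                                    # 'temp=72.4'
print(select(actuator, "valve=OPEN"), actuator.inbox)  # True ['valve=OPEN']
```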
The efficiency of polling depends critically on the overhead of poll and response messages. Let's examine the typical message formats used in polling protocols.
Poll Message Structure:
+----------+-------------+----------+-----------+
|  Header  |   Address   |   Type   |    FCS    |
| (1 byte) | (1-2 bytes) | (1 byte) | (2 bytes) |
+----------+-------------+----------+-----------+
Type field values:
0x01 = POLL (request data from secondary)
0x02 = SELECT (prepare to receive data)
0x03 = EOT (end of transmission, release line)
Response Message Structure:
+----------+-------------+----------+-------------+-----------+
|  Header  |   Address   |   Type   |    Data     |    FCS    |
| (1 byte) | (1-2 bytes) | (1 byte) | (0-N bytes) | (2 bytes) |
+----------+-------------+----------+-------------+-----------+
Type field values:
0x10 = DATA (contains data payload)
0x11 = NAK (no data to send)
0x12 = ACK (ready to receive / received OK)
```python
# Polling Protocol Message Types
class MessageType:
    POLL = 0x01    # Primary -> Secondary: "Do you have data?"
    SELECT = 0x02  # Primary -> Secondary: "I have data for you"
    EOT = 0x03     # End of transmission
    DATA = 0x10    # Secondary -> Primary: "Here is my data"
    NAK = 0x11     # Secondary -> Primary: "Nothing to send"
    ACK = 0x12     # Acknowledgment


class PollMessage:
    """Poll message from primary to secondary."""

    def __init__(self, address: int):
        self.header = 0x7E  # Flag byte
        self.address = address
        self.msg_type = MessageType.POLL
        self.fcs = self.compute_fcs()

    def compute_fcs(self) -> int:
        # Simplified 16-bit checksum; real protocols use CRC-16
        return (self.header + self.address + self.msg_type) & 0xFFFF

    def to_bytes(self) -> bytes:
        return bytes([
            self.header,
            self.address >> 8,    # High byte of address
            self.address & 0xFF,  # Low byte of address
            self.msg_type,
            self.fcs >> 8,
            self.fcs & 0xFF
        ])

    def size(self) -> int:
        return 6  # bytes (poll message overhead)


class DataResponse:
    """Data response from secondary to primary."""

    def __init__(self, address: int, data: bytes):
        self.header = 0x7E
        self.address = address
        self.msg_type = MessageType.DATA
        self.data = data
        self.fcs = self.compute_fcs()

    def compute_fcs(self) -> int:
        # Simplified 16-bit checksum over header, address, type, and data
        return (self.header + self.address + self.msg_type + sum(self.data)) & 0xFFFF

    def overhead(self) -> int:
        return 6  # Header + Address + Type + FCS

    def total_size(self) -> int:
        return self.overhead() + len(self.data)
```
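With these classes in hand, the fixed cost of a single exchange can be checked directly. A quick usage example (the checksum above is a simplification, not a real CRC):

```python
# Overhead of polling one station that answers with 100 bytes of data
poll = PollMessage(address=0x0001)
response = DataResponse(address=0x0001, data=bytes(100))

print(poll.size())            # 6   -> bytes spent just asking
print(response.overhead())    # 6   -> framing bytes around the payload
print(response.total_size())  # 106 -> 100 useful bytes + 6 bytes of framing
```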
Polling Overhead Analysis:

For each station polled, the overhead includes the poll message itself, the fixed framing of the response (or an entire NAK frame when there is no data), and the round-trip propagation delay between primary and secondary.

For a network of N stations in which only a fraction f have data to send:
Per-cycle overhead time = N × (T_poll + T_resp_hdr + RTT)
Useful time per cycle   = f × N × T_data
Efficiency              = Useful time / (Useful time + Per-cycle overhead time)

where T_poll, T_resp_hdr, and T_data are the transmission times of the poll message, the response framing (or NAK), and an average data payload.
This overhead becomes significant when few polled stations actually have data (small f), when payloads are small relative to the fixed framing, and when the station count N or the propagation delay is large.
If a secondary has no data, it still must respond with NAK, consuming channel time. In networks where most stations rarely have data (sparse traffic), polling efficiency drops dramatically. This is why polling works best when stations consistently have data to send or when deterministic timing matters more than efficiency.
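The drop-off under sparse traffic is easy to quantify with the efficiency formula above. A minimal sketch, using illustrative numbers for a 1 Mbps link rather than figures from any particular deployment:

```python
def polling_efficiency(n_stations: int, f: float, t_poll: float,
                       t_resp_hdr: float, rtt: float, t_data: float) -> float:
    """Fraction of channel time carrying useful data in one polling cycle."""
    overhead = n_stations * (t_poll + t_resp_hdr + rtt)
    useful = f * n_stations * t_data
    return useful / (useful + overhead)

# 1 Mbps link: 6-byte framing (48 us), 100-byte payloads (800 us), 200 us RTT
for f in (0.9, 0.3, 0.05):
    eff = polling_efficiency(20, f, t_poll=48e-6, t_resp_hdr=48e-6,
                             rtt=200e-6, t_data=800e-6)
    print(f"f = {f:.2f}: efficiency = {eff:.0%}")
# f = 0.90: efficiency = 71%
# f = 0.30: efficiency = 45%
# f = 0.05: efficiency = 12%
```

Dense traffic keeps the channel busy with data; sparse traffic spends most of the cycle on polls, NAKs, and propagation delay.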
The simplest and most common polling strategy is round-robin polling, where the primary station cycles through all secondary stations in a fixed, repeating sequence.
Algorithm:
current_station = 0
while True:
    send_poll(station_list[current_station])
    response = wait_for_response(timeout)
    if response == TIMEOUT:
        mark_station_unresponsive(current_station)
    elif response.type == DATA:
        process_data(response.data)
    # NAK: station has nothing to send, move on
    current_station = (current_station + 1) % num_stations
Properties of Round-Robin Polling:
| Property | Characteristic |
|---|---|
| Fairness | Perfect - each station gets equal opportunity |
| Maximum wait | Bounded: (N-1) × per-station poll time (less than one full cycle) |
| Complexity | O(1) per poll decision |
| Adaptability | None - static scheduling |
| Overhead | Proportional to N (all stations polled) |
Calculating Polling Cycle Time:
For N stations with:
- P = poll message size (bytes)
- D = average data message size (bytes)
- R = NAK (no-data) response size (bytes)
- C = channel bit rate (bits per second)
- τ = one-way propagation delay
- f = fraction of polled stations that have data
Cycle time formula:
T_cycle = N × [T_poll + T_response + 2τ]
where:
T_poll = P × 8 / C (poll transmission time)
T_response = f × (D × 8 / C) + (1-f) × (R × 8 / C)
Example Calculation:
Given:
- N = 20 stations
- P = 6 bytes, R = 6 bytes, D = 100 bytes
- C = 1 Mbps
- f = 0.3 (30% of stations have data when polled)
- τ = 100 μs (so 2τ = 200 μs per exchange)
T_poll = 6 × 8 / 1,000,000 = 48 μs
T_response_avg = 0.3 × (100 × 8 / 1,000,000) + 0.7 × (6 × 8 / 1,000,000)
= 0.3 × 800 μs + 0.7 × 48 μs
= 240 μs + 33.6 μs = 273.6 μs
Per-station time = 48 + 273.6 + 200 = 521.6 μs
T_cycle = 20 × 521.6 μs = 10.43 ms
Maximum latency for any station = T_cycle ≈ 10.43 ms
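The same arithmetic, written as a small helper (the function and parameter names are mine; the values are those of the example above):

```python
def polling_cycle_time(n, poll_bytes, data_bytes, nak_bytes,
                       bit_rate, f, prop_delay):
    """T_cycle = N x (T_poll + T_response_avg + 2*tau), all in seconds."""
    t_poll = poll_bytes * 8 / bit_rate
    t_data = data_bytes * 8 / bit_rate
    t_nak = nak_bytes * 8 / bit_rate
    t_response = f * t_data + (1 - f) * t_nak
    return n * (t_poll + t_response + 2 * prop_delay)

t_cycle = polling_cycle_time(n=20, poll_bytes=6, data_bytes=100, nak_bytes=6,
                             bit_rate=1_000_000, f=0.3, prop_delay=100e-6)
print(f"T_cycle = {t_cycle * 1000:.2f} ms")   # ~10.43 ms, as computed above
```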
The critical insight is that maximum latency is BOUNDED and PREDICTABLE. Unlike CSMA/CD where a station might wait indefinitely during heavy collisions, a polled station knows it will be served within at most one polling cycle time. This determinism is invaluable for real-time systems.
Round-robin polling treats all stations equally, but many applications require differentiated service. Priority polling and weighted polling address this need.
Priority Polling:
Stations are assigned priority levels. Higher-priority stations are polled more frequently or before lower-priority stations.
Strategy 1: Priority Classes with Multiple Rounds
Polling sequence per cycle:
[High priority x 4]: A, B
[Medium priority x 2]: C, D, E
[Low priority x 1]: F, G, H, I
Resulting pattern: A, B, A, B, A, B, A, B, C, D, E, C, D, E, F, G, H, I
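One way to realize Strategy 1 is to expand each priority class into the flat per-cycle sequence shown above. A minimal sketch (the helper name and the dictionary encoding of classes are illustrative):

```python
def build_class_sequence(classes: dict[int, list[str]]) -> list[str]:
    """classes maps polls-per-cycle -> the stations in that priority class."""
    sequence: list[str] = []
    for repetitions, members in sorted(classes.items(), reverse=True):
        sequence.extend(members * repetitions)   # repeat the whole class as a block
    return sequence

pattern = build_class_sequence({4: ["A", "B"], 2: ["C", "D", "E"], 1: ["F", "G", "H", "I"]})
print(pattern)
# ['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B', 'C', 'D', 'E', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
```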
Strategy 2: Priority Ordering
Polling sequence:
All high-priority stations first
Then medium-priority
Then low-priority
Pattern: A, B, C, D, E, F, G, H, I (where A,B are high; C,D,E medium; F,G,H,I low)
```python
from dataclasses import dataclass
from enum import IntEnum
from typing import List


class Priority(IntEnum):
    HIGH = 3
    MEDIUM = 2
    LOW = 1


@dataclass
class Station:
    address: int
    priority: Priority
    weight: int = 1  # For weighted polling


class WeightedPriorityPoller:
    """Implements weighted priority polling."""

    def __init__(self, stations: List[Station]):
        self.stations = sorted(stations, key=lambda s: (-s.priority, s.address))
        self.polling_list = self._build_weighted_list()
        self.current_index = 0

    def _build_weighted_list(self) -> List[Station]:
        """Build polling list with priority-based repetition."""
        poll_list = []
        for station in self.stations:
            # Higher priority = more polls per cycle
            repetitions = station.priority * station.weight
            poll_list.extend([station] * repetitions)
        return poll_list

    def next_station(self) -> Station:
        """Get next station to poll."""
        station = self.polling_list[self.current_index]
        self.current_index = (self.current_index + 1) % len(self.polling_list)
        return station

    def get_poll_ratio(self, station: Station) -> float:
        """Calculate what fraction of polls go to this station."""
        station_polls = self.polling_list.count(station)
        return station_polls / len(self.polling_list)


# Example usage
stations = [
    Station(address=1, priority=Priority.HIGH, weight=2),    # Critical sensor
    Station(address=2, priority=Priority.HIGH, weight=1),    # Safety sensor
    Station(address=3, priority=Priority.MEDIUM, weight=1),  # Process sensor
    Station(address=4, priority=Priority.LOW, weight=1),     # Status sensor
]

poller = WeightedPriorityPoller(stations)

# Station 1 gets: 3 (HIGH) × 2 (weight) = 6 polls per cycle
# Station 2 gets: 3 (HIGH) × 1 (weight) = 3 polls per cycle
# Station 3 gets: 2 (MEDIUM) × 1 (weight) = 2 polls per cycle
# Station 4 gets: 1 (LOW) × 1 (weight) = 1 poll per cycle
```

Service Guarantees with Priority Polling:
| Priority Level | Poll Frequency | Max Latency | Use Case |
|---|---|---|---|
| High (weight 3) | 3× per cycle | T_cycle / 3 | Safety alarms, emergency stops |
| Medium (weight 2) | 2× per cycle | T_cycle / 2 | Process control, feedback loops |
| Low (weight 1) | 1× per cycle | T_cycle | Status monitoring, logging |
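Continuing the WeightedPriorityPoller example above, a rough worst-case wait can be estimated by assuming a station's polls are spread evenly across the cycle. The cycle length below is the 10.43 ms figure borrowed from the round-robin example, purely for illustration:

```python
# Worst-case wait ≈ T_cycle / (polls this station receives per cycle).
T_CYCLE_MS = 10.43   # illustrative cycle length, reused from the earlier example

for station in stations:
    polls = poller.polling_list.count(station)
    print(f"Station {station.address}: {polls} polls/cycle, "
          f"worst-case wait ≈ {T_CYCLE_MS / polls:.2f} ms")
```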
Trade-offs of Priority Polling: high-priority stations gain low, tightly bounded latency, but only at the cost of longer and more variable waits for lower-priority stations.
If a high-priority station has large amounts of data and transmits for a long time when polled, lower-priority stations are delayed even further. Some systems implement transmission limits (maximum bytes per poll) to prevent high-priority stations from monopolizing the channel.
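A common safeguard is a per-poll byte budget. A minimal sketch, where the limit and helper name are hypothetical:

```python
MAX_BYTES_PER_POLL = 512   # hypothetical per-poll transmission limit

def frames_for_this_poll(queue: list[bytes]) -> list[bytes]:
    """Dequeue frames until the per-poll byte budget would be exceeded."""
    budget, batch = MAX_BYTES_PER_POLL, []
    while queue and len(queue[0]) <= budget:
        frame = queue.pop(0)
        budget -= len(frame)
        batch.append(frame)
    return batch

# A station with four 200-byte frames sends only two of them this poll
queue = [bytes(200)] * 4
print(len(frames_for_this_poll(queue)), "frames sent,", len(queue), "still queued")
# -> 2 frames sent, 2 still queued
```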
A variant of centralized polling is Hub Polling (also called Roll-Call Polling), commonly used in multipoint networks where the primary station is at the center of a star or tree topology.
Hub Polling Characteristics:
- The primary sits at the hub of the star or tree, and all traffic passes through it
- Secondaries transmit only in response to a poll addressed to them
- Unresponsive stations are detected by a timeout on the poll and skipped
Hub Polling Sequence:
Primary: POLL Station_1
Wait for response (with timeout)
If DATA → accept and acknowledge
If NAK → note nothing to send
If TIMEOUT → mark station as failed
Primary: POLL Station_2
... (repeat sequence)
Hub Polling in Mainframe Environments:
Historically, hub polling was extensively used in IBM mainframe networks (SNA - Systems Network Architecture):
Cluster Controller (Primary)
↓ Poll
3270 Terminal (Secondary)
↓ Response
Cluster Controller
↓ Poll
Printer (Secondary)
↓ NAK (nothing to print)
Cluster Controller
↓ Poll
3270 Terminal (next)
...
Efficiency Considerations for Hub Polling:
| Factor | Impact on Efficiency |
|---|---|
| Number of terminals | Linear increase in cycle time |
| Terminal response time | Adds to per-poll overhead |
| Propagation delay | Multiplied by 2N (poll + response for each) |
| Failed terminals | Timeout delays add significant overhead |
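The last row is worth quantifying: every dead terminal costs a full timeout on every cycle. A quick illustration, reusing the per-station time from the earlier round-robin example and an assumed 5 ms poll timeout:

```python
N = 20
per_station_us = 521.6   # healthy station: poll + response + propagation (earlier example)
timeout_us = 5000        # assumed timeout before declaring a station unresponsive

for failed in (0, 1, 2):
    cycle_us = (N - failed) * per_station_us + failed * timeout_us
    print(f"{failed} failed terminals -> cycle time {cycle_us / 1000:.2f} ms")
# 0 -> 10.43 ms, 1 -> 14.91 ms, 2 -> 19.39 ms
```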
Hub polling was the dominant paradigm for mainframe-terminal communication from the 1970s through the 1990s. IBM's SDLC (Synchronous Data Link Control) and its ISO standardization as HDLC (High-level Data Link Control) include comprehensive polling support. These protocols influenced practically all subsequent WAN protocols.
Polling offers a fundamentally different set of trade-offs compared to contention-based protocols. Understanding these trade-offs is essential for selecting the appropriate protocol for a given application.
| Criterion | Favor Polling | Favor Contention (CSMA) |
|---|---|---|
| Traffic pattern | Continuous, regular | Bursty, irregular |
| Latency requirements | Bounded, guaranteed | Best-effort acceptable |
| Number of stations | Moderate (< 50) | Large (100+) |
| Data availability | Most stations have data | Few stations active at once |
| Failure tolerance | Single point OK | Distributed resilience needed |
| Network management | Centralized preferred | Autonomous stations preferred |
| Real-time constraints | Hard real-time | Soft or no real-time |
Despite being considered "old-fashioned" compared to modern switched networks, polling remains critically important in several domains where its deterministic properties are essential.
Case Study: Modbus Polling in Industrial Control
Modbus Master (SCADA Server)
↓ Query (Read Holding Registers)
Modbus Slave (PLC, Address 01)
↓ Response (Register values: temperature, pressure)
Modbus Master
↓ Query (Read Input Status)
Modbus Slave (RTU, Address 02)
↓ Response (Digital inputs: switch states)
Typical polling cycle: 100ms - 1000ms
Maximum slaves per network: 247 (Modbus limit)
Modbus remains one of the most widely deployed industrial protocols, and its polling-based architecture ensures that every sensor is read at predictable intervals—essential for process control where a missed reading could mean unsafe operating conditions.
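Structurally, such a master is just the round-robin loop from earlier driven by a fixed cycle timer. Here is a minimal sketch: the read_slave stub stands in for a real Modbus "Read Holding Registers" request, and the addresses and cycle period are arbitrary examples.

```python
import time

SLAVE_ADDRESSES = [1, 2]   # arbitrary example addresses (Modbus allows 1-247)
POLL_CYCLE_S = 0.5         # 500 ms, within the 100 ms - 1000 ms range above

def read_slave(address: int) -> list[int]:
    """Stub standing in for a real 'Read Holding Registers' request."""
    return [0, 0]           # e.g., temperature and pressure registers

def run_master(cycles: int) -> None:
    for _ in range(cycles):
        start = time.monotonic()
        for address in SLAVE_ADDRESSES:   # roll-call: one query per slave
            registers = read_slave(address)
            print(f"slave {address}: registers = {registers}")  # stand-in for real processing
        # sleep out the rest of the period so the cycle time stays fixed
        time.sleep(max(0.0, POLL_CYCLE_S - (time.monotonic() - start)))

run_master(cycles=3)
```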
While Ethernet and IP have taken over most general networking, polling persists wherever deterministic behavior trumps raw throughput. Modern industrial Ethernet protocols (PROFINET, EtherNet/IP, EtherCAT) often layer deterministic mechanisms on top of Ethernet to achieve polling-like guarantees.
We've explored polling as a fundamental controlled access protocol. Let's consolidate the key concepts:
- A primary station coordinates all medium access; secondaries transmit only when polled or selected
- Collisions never occur, so access delay is bounded and predictable
- Round-robin polling gives perfect fairness; priority and weighted polling trade some of that fairness for faster service to critical stations
- Poll messages, NAK responses, and propagation delay are pure overhead, so efficiency suffers under sparse traffic
- Polling underpins industrial protocols such as Modbus and the classic SDLC/HDLC mainframe networks
What's Next:
In the next page, we'll explore Token Passing—a distributed controlled access protocol that eliminates the single point of failure inherent in polling while maintaining collision-free operation. We'll see how stations pass a special token to coordinate access without requiring a central controller.
You now understand polling as a controlled access protocol—its centralized architecture, message formats, round-robin and priority scheduling, efficiency trade-offs, and real-world applications. Next, we'll discover how token passing achieves similar determinism in a fully distributed manner.