We've established why flow control is necessary, analyzed sender-receiver speed dynamics, and explored buffer management strategies. Now we turn to the mechanisms—the specific protocols and procedures that implement flow control in practice.
Flow control mechanisms vary dramatically in complexity, efficiency, and applicability. From the simplest Stop-and-Wait protocol that restricts a sender to one outstanding frame, to sophisticated credit-based systems that can fully utilize multi-gigabit links, each mechanism represents a different point in the tradeoff space between simplicity, efficiency, and implementation cost.
In this page, we'll systematically examine each major flow control mechanism, understanding its operation, analyzing its efficiency, and identifying its appropriate use cases.
By the end of this page, you will understand Stop-and-Wait and its efficiency limitations, master Sliding Window protocol operation and window size selection, learn Ethernet PAUSE and Priority Flow Control, explore credit-based and rate-based mechanisms, and be able to select appropriate mechanisms for given scenarios.
Stop-and-Wait is the simplest flow control mechanism: the sender transmits one frame, then waits for an acknowledgment (ACK) before transmitting the next. This fundamentally prevents overflow because the receiver has time to process each frame before the next arrives.
Protocol Operation
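In sketch form, the sender's side of the protocol looks like the following. This is a minimal illustration only; the `link` object and its `send_frame`/`wait_for_ack` methods are assumed placeholders, not a real API.

```python
def stop_and_wait_send(link, frames, timeout=1.0, max_retries=5):
    """Illustrative Stop-and-Wait sender: one frame outstanding at a time."""
    seq = 0  # a 1-bit alternating sequence number is enough for Stop-and-Wait
    for payload in frames:
        for _ in range(max_retries):
            link.send_frame(seq=seq, payload=payload)    # transmit one frame
            ack = link.wait_for_ack(timeout=timeout)     # block until ACK or timeout
            if ack is not None and ack.seq == seq:
                break                                    # acknowledged: next frame
        else:
            raise RuntimeError("frame not acknowledged after retries")
        seq ^= 1                                         # alternate 0 / 1
```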
Why It Works for Flow Control
Stop-and-Wait inherently limits the sender's rate. The maximum transmission rate is:
$$R_{max} = \frac{L}{RTT + T_{proc}}$$
Where $L$ is the frame length in bits, $RTT$ is the round-trip time, and $T_{proc}$ is the receiver's per-frame processing time.
Even if the sender could transmit faster, it's blocked waiting for ACKs. This 'pacing' prevents receiver overflow.
Efficiency Analysis
The critical metric is link utilization—the fraction of time spent actually transmitting data versus waiting.
$$Utilization = \frac{T_{trans}}{T_{trans} + RTT}$$
Where $T_{trans}$ is the time to put one frame on the link (frame size divided by link rate) and $RTT$ is the round-trip time spent waiting for the acknowledgment.
Or, in terms of the propagation-bandwidth product:
$$U = \frac{1}{1 + 2a}$$
Where a = (propagation delay × bandwidth) / frame size = one-way BDP/frame size
This reveals the fundamental problem: as links get faster or longer, utilization plummets.
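To make the falloff concrete, here is a small calculation of utilization for a few example links; the frame size, distances, and rates are illustrative assumptions.

```python
# Stop-and-Wait utilization and throughput for a few illustrative links.
FRAME_BITS = 12_000  # 1500-byte frame

links = [
    ("Short LAN, 1 Gbps", 1e9, 1e-6),          # (name, bandwidth bps, RTT seconds)
    ("Metro 50 km, 10 Gbps", 10e9, 500e-6),
    ("Continental 3000 km, 10 Gbps", 10e9, 30e-3),
]

for name, bw, rtt in links:
    a = (rtt / 2) * bw / FRAME_BITS            # one-way BDP measured in frames
    utilization = 1 / (1 + 2 * a)              # same as T_trans / (T_trans + RTT)
    throughput = utilization * bw
    print(f"{name}: U = {utilization:.4f}, throughput ≈ {throughput / 1e6:.2f} Mbps")
```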
Stop-and-Wait's simplicity comes at a severe efficiency cost for high-bandwidth or high-delay links. It's appropriate for serial connections, low-speed wireless, or links where simplicity of implementation outweighs throughput concerns. Never use it for high-performance connections.
Sliding Window protocols solve Stop-and-Wait's efficiency problem by allowing multiple frames to be 'in flight' simultaneously. The sender maintains a window of sequence numbers it's allowed to transmit; the receiver controls flow by adjusting this window.
Core Concepts
Sender Window (Ws): Range of sequence numbers the sender may transmit without waiting for acknowledgment. As ACKs arrive, the window 'slides' forward.
Receiver Window (Wr): Range of sequence numbers the receiver will accept. Frames outside this range are discarded or cause errors.
Window Size: Maximum number of unacknowledged frames allowed. Larger window = more frames in flight = higher potential throughput.
Sequence Number Space: Typically 0 to 2^n - 1 for n-bit sequence numbers. Must be larger than window size to avoid ambiguity.
Flow Control via Window Adjustment
The receiver controls the sending rate by adjusting the window it advertises: shrinking the window (down to zero) throttles or halts the sender, while advertising a larger window as buffer space frees up lets the sender put more unacknowledged frames in flight.
Window Advertisement Process
Efficiency with Sliding Window
For window size W frames:
$$U = \begin{cases} 1 & \text{if } W \geq 1 + 2a \\ \frac{W}{1 + 2a} & \text{if } W < 1 + 2a \end{cases}$$
Where a = propagation delay × bandwidth / frame size
This means: if the window is large enough to cover the BDP (plus one for the frame being sent), we achieve full utilization.
| Link Scenario | BDP over RTT (bits) | BDP (1500-byte frames) | Min Window Size |
|---|---|---|---|
| LAN 100m, 1 Gbps | 1,000 bits | < 1 frame | 1-2 frames |
| Metro 50 km, 10 Gbps | 5 Mbits | 417 frames | ~420 frames |
| Continental 3000 km, 10 Gbps | 300 Mbits | 25,000 frames | ~25,000 frames |
| Satellite 35,000 km, 50 Mbps | 35 Mbits | 2,917 frames | ~3,000 frames |
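The window sizes in the table can be reproduced to within rounding with a quick calculation, assuming 1500-byte frames and propagation at roughly 2×10⁸ m/s in fiber:

```python
# Minimum sliding-window size to keep a link fully utilized (W >= 1 + 2a).
def min_window_frames(distance_m, bandwidth_bps, frame_bits=12_000, prop_speed=2e8):
    rtt = 2 * distance_m / prop_speed          # round-trip propagation delay (s)
    bdp_bits = rtt * bandwidth_bps             # bits in flight over one round trip
    return bdp_bits / frame_bits + 1

print(round(min_window_frames(50_000, 10e9)))       # metro: ~418 frames
print(round(min_window_frames(3_000_000, 10e9)))    # continental: ~25,001 frames
```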
The sequence number space must be large enough relative to the window size to avoid ambiguity between old and new frames: for Go-Back-N the space must satisfy seq# space ≥ W + 1, and for Selective Repeat it must satisfy seq# space ≥ 2W. Using insufficient sequence numbers causes protocol failures that are very difficult to debug.
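As a quick sanity check of that rule, the largest safe window for a given number of sequence-number bits can be computed directly (a rule-of-thumb helper, not tied to any particular implementation):

```python
# Largest safe send window for n-bit sequence numbers.
def max_safe_window(seq_bits, selective_repeat=False):
    space = 2 ** seq_bits
    return space // 2 if selective_repeat else space - 1

print(max_safe_window(3))                          # Go-Back-N, 3-bit numbers: 7
print(max_safe_window(3, selective_repeat=True))   # Selective Repeat: 4
```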
Ethernet PAUSE provides link-level flow control for full-duplex Ethernet. When a receiver's buffers approach exhaustion, it sends a PAUSE frame instructing the sender to stop transmission for a specified duration.
PAUSE Frame Structure
| Field | Size | Value/Purpose |
|---|---|---|
| Destination MAC | 6 bytes | 01:80:C2:00:00:01 (reserved multicast) |
| Source MAC | 6 bytes | Sender's MAC address |
| EtherType | 2 bytes | 0x8808 (MAC Control) |
| MAC Control Opcode | 2 bytes | 0x0001 (PAUSE) |
| Pause Time | 2 bytes | Quanta of pause (1 quantum = 512 bit times) |
| Padding | 42 bytes | Zero padding to minimum frame size |
| FCS | 4 bytes | Frame check sequence |
Pause Duration Calculation
Pause time is measured in 'quanta,' where 1 quantum = time to transmit 512 bits.
For 10 Gbps Ethernet, one quantum is 512 bits ÷ 10 Gbps = 51.2 ns, so the maximum pause time (65,535 quanta) is roughly 3.36 ms; a pause time of zero tells the sender to resume immediately.
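The same conversion works for any link speed; a small helper, using the quantum definition from the table above:

```python
# PAUSE quantum duration and maximum pause time for a given link speed.
def pause_times(link_bps, quanta=0xFFFF):
    quantum = 512 / link_bps              # 1 quantum = 512 bit times (seconds)
    return quantum, quanta * quantum

quantum, max_pause = pause_times(10e9)    # 10 Gbps Ethernet
print(f"quantum = {quantum * 1e9:.1f} ns, max pause = {max_pause * 1e3:.2f} ms")
# quantum = 51.2 ns, max pause = 3.36 ms
```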
Protocol Operation
Advantages of PAUSE Frames
Limitations of PAUSE Frames
When to Use PAUSE
When a device sends PAUSE, its upstream sender stops transmitting, but that sender's own buffers then begin to fill, potentially triggering another PAUSE one hop further upstream. This cascade can propagate across multiple hops, freezing large portions of the network, which is why PAUSE is generally avoided in shared network environments.
Priority Flow Control (PFC) extends Ethernet PAUSE to support per-priority-class flow control. Instead of stopping all traffic, PFC can pause only specific priority classes while allowing others to continue.
The Problem PFC Solves
Consider a link carrying both storage traffic (must be lossless) and general data (can tolerate drops). With basic PAUSE, keeping the storage traffic lossless means pausing the entire link, so a burst of low-priority data stalls the storage traffic as well; not pausing means storage frames get dropped.
PFC allows each priority class to be paused independently: the storage class stays lossless because only that priority is paused when its buffers fill, while the general data classes keep transmitting (and simply drop frames under their own congestion).
PFC Frame Structure
PFC uses the same MAC Control frame type as PAUSE (0x8808) but with a different opcode:
| Field | Size | Value/Purpose |
|---|---|---|
| Destination MAC | 6 bytes | 01:80:C2:00:00:01 |
| Source MAC | 6 bytes | Sender's MAC address |
| EtherType | 2 bytes | 0x8808 (MAC Control) |
| Opcode | 2 bytes | 0x0101 (PFC) |
| Priority Enable Vector | 2 bytes | Bitmap: which priorities to pause |
| Time[0] through Time[7] | 16 bytes | Pause time for each of 8 priorities |
| Padding | 26 bytes | Zero padding |
| FCS | 4 bytes | Frame check sequence |
Priority Enable Vector
An 8-bit bitmap (carried in the low-order byte of the 2-byte field) indicating which priorities are being paused: bit n corresponds to priority n, and a set bit means the pause duration in Time[n] applies to that priority.
Each paused priority has its own timer, allowing independent control.
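The sketch below assembles the control-frame payload described in the table above, pausing a single priority. It is an illustration of the field layout, not a production frame builder; the packing helper and byte handling here are assumptions for the example.

```python
import struct

# Assemble the MAC Control payload of a PFC frame: opcode, priority enable
# vector, and the eight per-priority pause timers, per the layout above.
def pfc_payload(pause_quanta_by_priority):
    opcode = 0x0101
    enable_vector = 0
    times = [0] * 8
    for prio, quanta in pause_quanta_by_priority.items():
        enable_vector |= 1 << prio        # bit n pauses priority n
        times[prio] = quanta              # Time[n] is honored only if bit n is set
    return struct.pack("!HH8H", opcode, enable_vector, *times)

payload = pfc_payload({3: 0xFFFF})        # pause priority 3 for the maximum time
print(payload.hex())                      # 20-byte payload, before padding and FCS
```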
| Feature | Standard PAUSE | Priority Flow Control |
|---|---|---|
| Granularity | All traffic | Per-priority class (8 classes) |
| Frame type | 0x8808, opcode 0x0001 | 0x8808, opcode 0x0101 |
| Pause timers | Single timer | 8 independent timers |
| Mixed traffic support | Poor (all stops) | Excellent (selective pause) |
| Complexity | Low | Medium |
| Use case | Simple point-to-point | Converged networks (DCB) |
Data Center Bridging (DCB) Context
PFC is a key component of Data Center Bridging (DCB), a set of IEEE standards for lossless Ethernet in data centers: Priority Flow Control itself (802.1Qbb), Enhanced Transmission Selection (ETS) for bandwidth allocation among traffic classes and the DCBX capability-exchange protocol (both in 802.1Qaz), and Quantized Congestion Notification (QCN, 802.1Qau) for end-to-end congestion management.
Together, these enable 'converged' data center networks carrying storage, compute, and management traffic on shared Ethernet infrastructure.
PFC Best Practices
Modern switches implement 'PFC watchdog'—if a priority remains paused for an abnormally long time (indicating potential deadlock or misconfiguration), the watchdog disables PFC for that priority, dropping frames but allowing the network to recover. This prevents indefinite freezes at the cost of some data loss.
Credit-based flow control represents the most sophisticated approach to preventing receiver overflow. Instead of reactive PAUSE signals, the receiver proactively grants 'credits' representing permission to send a specific amount of data. The sender can only transmit when it has credits; credits are replenished as the receiver frees buffer space.
Core Mechanism
Key Advantages
Credit Sizing
For lossless operation, total credits issued must be:
$$\text{Credits} \geq \text{Buffer Size} + \text{BDP}$$
Where BDP accounts for credits in transit from receiver to sender plus data in transit from sender to receiver.
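A minimal sketch of the sender-side bookkeeping this implies; the class and its integer frame-credit unit are illustrative assumptions, not a specific vendor's implementation.

```python
# Sender-side credit accounting sketch: transmit only while credits remain,
# replenish when the receiver returns credits for freed buffer space.
class CreditSender:
    def __init__(self, initial_credits):
        self.credits = initial_credits          # granted by the receiver up front

    def can_send(self, frames=1):
        return self.credits >= frames

    def on_transmit(self, frames=1):
        assert self.can_send(frames), "no credits: sender must stall, never drop"
        self.credits -= frames

    def on_credit_return(self, frames):
        self.credits += frames                  # receiver freed buffer space

sender = CreditSender(initial_credits=8)
while sender.can_send():
    sender.on_transmit()                        # sends 8 frames, then stalls
sender.on_credit_return(2)                      # two buffers freed downstream
print(sender.credits)                           # 2 credits available again
```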
Credit-Based in Practice: InfiniBand
InfiniBand High-Speed Interconnect uses credit-based flow control extensively:
Virtual Lanes (VLs): each physical link carries multiple virtual lanes, and every VL has its own dedicated receive buffering and its own credit count, so congestion on one lane does not block the others.
Credit Exchange: link partners continuously exchange per-VL credit updates in link-level flow control packets as receive buffers are freed, replenishing the sender's credit balance.
Guaranteed Lossless: because a port transmits on a VL only while it holds credits for that VL, frames are never dropped due to receive buffer overflow.
Credit-Based in Fibre Channel
Fibre Channel, the storage networking standard, uses Buffer-to-Buffer (BB) credits: each port advertises its receive buffer count (BB_Credit) during link login, the transmitter decrements its credit balance for every frame it sends, and each R_RDY primitive returned by the receiver restores one credit. Long-distance links must be provisioned with enough BB credits to cover the link's bandwidth-delay product, or throughput drops off sharply.
| Characteristic | PAUSE Frames | Sliding Window | Credit-Based |
|---|---|---|---|
| Control granularity | Binary (on/off) | Window size | Byte/frame precise |
| Proactive/Reactive | Reactive | Mixed | Proactive |
| BDP handling | Must size thresholds | Window ≥ BDP | Credits ≥ Buffer + BDP |
| Implementation complexity | Low | Medium | High |
| Hardware requirements | Timer logic | Sequence tracking | Per-flow credit counters |
| Typical use case | Ethernet link | WAN protocols | InfiniBand, Fibre Channel |
Credit-based systems can suffer 'credit starvation' if credit return messages are delayed or lost. Robust implementations include credit update timeouts, redundant credit signaling, and credit recovery protocols. Without these, a single lost credit message could permanently reduce link throughput.
Rate-based flow control takes a fundamentally different approach: instead of controlling how much data is outstanding, it controls the rate at which data is transmitted. The sender is limited to a specific transmission rate that the receiver can sustain.
Approach
Rate Limiting Mechanisms
Token Bucket: tokens accumulate at the permitted rate up to a configured burst size; transmitting a frame consumes tokens, so short bursts are allowed while the long-term average rate stays capped (a minimal sketch follows below).
Leaky Bucket: traffic drains from a queue at a constant rate regardless of how bursty the arrivals are, producing a smooth output profile at the cost of added queuing delay.
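The sketch below shows a token-bucket limiter of the kind described in the first item; parameter values are illustrative, and real rate limiters usually live in hardware or the NIC/switch datapath.

```python
import time

# Token-bucket rate limiter sketch: tokens accrue at `rate_bps`, capped at
# `burst_bits`; a frame may be sent only if enough tokens cover its size.
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def try_send(self, frame_bits):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bits:
            self.tokens -= frame_bits
            return True                 # frame conforms: transmit now
        return False                    # over the allowance: delay or drop

bucket = TokenBucket(rate_bps=100e6, burst_bits=1_500_000)
print(bucket.try_send(12_000))          # True: burst allowance covers the frame
```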
Rate Negotiation
For rate-based flow control to work, both parties must agree on the rate:
Explicit Negotiation: the rate is agreed at connection setup, either configured administratively or signaled during capability exchange, and stays fixed for the life of the session.
Dynamic Adjustment: the receiver (or the network) sends periodic feedback asking the sender to raise or lower its rate as buffer occupancy and processing capacity change.
Quantized Congestion Notification (QCN)
IEEE 802.1Qau QCN implements rate-based flow control for Ethernet: congestion points (switch queues) sample their occupancy and send feedback messages toward the traffic sources, and reaction points at those sources cut their transmission rate in proportion to the feedback, then gradually recover when the feedback stops.
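A simplified sketch of the congestion-point side of such a scheme, computing feedback from queue occupancy and queue growth. The variable names and the weight value are illustrative assumptions; the standard defines the exact encoding and constants.

```python
# Simplified QCN-style congestion-point feedback (not the exact 802.1Qau encoding).
# Negative feedback is generated when the queue sits above its target or is growing.
def qcn_feedback(queue_len, prev_queue_len, q_eq, w=2.0):
    q_off = queue_len - q_eq              # how far above the operating point we are
    q_delta = queue_len - prev_queue_len  # how fast the queue is growing
    fb = -(q_off + w * q_delta)
    return fb if fb < 0 else None         # send a feedback message only when congested

print(qcn_feedback(queue_len=150, prev_queue_len=120, q_eq=100))  # -110: throttle sources
print(qcn_feedback(queue_len=60,  prev_queue_len=80,  q_eq=100))  # None: no congestion
```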
Rate-based flow control excels when traffic characteristics are predictable and consistent (CBR audio/video), when guaranteed service levels are required, or when the overhead of per-frame acknowledgments is prohibitive. It's less suitable for bursty, unpredictable traffic where window-based approaches can better match instantaneous available capacity.
Selecting the appropriate flow control mechanism requires understanding your specific requirements and constraints. Here's a systematic decision framework:
Decision Factors
Key factors include the link's bandwidth-delay product, whether the traffic can tolerate loss, whether multiple traffic classes share the link, what the hardware supports, and how much implementation complexity is acceptable.
| Scenario | Recommended Mechanism | Rationale |
|---|---|---|
| Low-speed serial link | Stop-and-Wait | Simple, adequate efficiency at low BDP |
| LAN switch port, mixed traffic | No flow control or tail drop | TCP handles loss recovery, simplicity |
| Storage network (iSCSI, NFS) | PAUSE frames | Lossless required, single traffic class |
| Converged data center | PFC with DCB | Lossless for storage, lossy for data |
| InfiniBand cluster | Credit-based | Native to InfiniBand, RDMA requirements |
| Fibre Channel SAN | BB Credits | Standard FC flow control, long distance |
| Video streaming server | Rate-based + PAUSE backup | Smooth delivery, prevent overflow |
| High-speed WAN | Sliding Window (TCP) | Transport layer handles flow control |
Common Combinations
Real-world systems often combine multiple mechanisms:
Enterprise Network (edges): end-to-end flow control is left to the transport layer (TCP sliding window), with little or no link-level flow control on access ports.
Data Center Spine-Leaf: PFC protects the lossless storage/RDMA classes, typically combined with congestion notification so that pausing is a last resort rather than the primary control.
Storage Array Backend: credit-based flow control (Fibre Channel BB credits or InfiniBand credits) provides guaranteed lossless delivery.
High-Frequency Trading: link-level flow control is often disabled entirely and links are kept underutilized, since the jitter introduced by pausing is worse than an occasional drop.
Mismatched flow control configurations between link partners cause problems. If one side expects PAUSE and the other ignores it, drops occur. If one side sends unexpected PAUSE, the other may ignore it or treat it as an error. Always verify configuration consistency across links.
We've examined the major flow control mechanisms used at the Data Link Layer, from simple Stop-and-Wait to sophisticated credit-based systems. Let's consolidate the key takeaways: Stop-and-Wait is trivially simple but its utilization collapses as the bandwidth-delay product grows; Sliding Window reaches full utilization once the window covers the BDP; Ethernet PAUSE provides easy link-level losslessness but stops all traffic and can cascade across hops; PFC adds per-priority granularity for converged data center networks; credit-based schemes (InfiniBand, Fibre Channel) are proactive and precisely lossless; rate-based schemes suit predictable, constant-rate traffic; and the right choice depends on BDP, loss tolerance, and traffic mix.
What's Next
The final page in this module examines feedback-based control in depth—how receivers communicate their state to senders, the design of feedback loops, and the stability considerations that determine whether flow control oscillates or converges smoothly. This completes our study of flow control at the Data Link Layer.
You now understand the major flow control mechanisms at the Data Link Layer and can select appropriate mechanisms for different scenarios. From simple Stop-and-Wait to sophisticated credit-based systems, each serves specific use cases. Next, we'll explore the feedback dynamics that make these mechanisms work—or fail—in practice.