The Internet was designed as a best-effort network—every packet receives the same treatment regardless of its content or urgency. While this egalitarian approach served the early Internet well, modern applications have wildly different requirements. A real-time video call cannot tolerate the same delays acceptable for email. A stock trading update must arrive faster than a software update download.
The Traffic Class field in IPv6 addresses this fundamental challenge. This 8-bit field, directly corresponding to IPv4's Type of Service (ToS) field, enables networks to classify packets into priority categories and provide differentiated treatment based on application requirements. It's the primary mechanism through which Quality of Service (QoS) is implemented at the network layer.
By the end of this page, you will understand the Traffic Class field structure – the 6-bit DSCP (Differentiated Services Code Point) and 2-bit ECN (Explicit Congestion Notification) subfields, how DiffServ per-hop behaviors (PHB) work, the standard DSCP values and their meanings, how ECN enables congestion signaling without packet drops, and practical QoS implementation scenarios using Traffic Class.
The Traffic Class field occupies bits 4-11 of the IPv6 header (immediately after the 4-bit Version field). Though only 8 bits, this field is subdivided into two distinct components with different purposes:
| Component | Bits | Range | Purpose | Defined By |
|---|---|---|---|---|
| DSCP | 6 bits (4-9) | 0-63 | Per-hop behavior classification | RFC 2474 |
| ECN | 2 bits (10-11) | 0-3 | Congestion notification | RFC 3168 |
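The bit layout above can be sketched in a few lines of Python. This is an illustrative helper (not from the source), which splits the Traffic Class out of the first 32 bits of an IPv6 header:

```python
def parse_traffic_class(header: bytes) -> tuple[int, int]:
    """Split an IPv6 header's Traffic Class into (DSCP, ECN).

    The first 32 bits are: Version (4) | Traffic Class (8) | Flow Label (20).
    """
    word = int.from_bytes(header[:4], "big")
    traffic_class = (word >> 20) & 0xFF   # bits 4-11 of the header
    return traffic_class >> 2, traffic_class & 0x3   # DSCP (high 6), ECN (low 2)

# Version=6, Traffic Class=0xB8 (DSCP 46 = EF, ECN 00), Flow Label=0
dscp, ecn = parse_traffic_class(bytes.fromhex("6b800000"))
print(dscp, ecn)  # 46 0
```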
Historical Context: From ToS to DiffServ
In IPv4's original design (RFC 791), the 8-bit Type of Service field was intended to carry a 3-bit Precedence value plus individual flags requesting low delay, high throughput, or high reliability.
This design proved problematic—the flags were too coarse, implementations were inconsistent, and the model didn't scale. In the late 1990s, the IETF redefined the field as part of the Differentiated Services (DiffServ) architecture: the six high-order bits became the DSCP (RFC 2474), and the remaining two bits were later assigned to ECN (RFC 3168).
IPv6's Traffic Class field was designed from the start to use this modern DiffServ interpretation.
The DSCP field's values are designed to be backward-compatible with IPv4 Precedence bits. The three high-order DSCP bits map to IPv4 Precedence, allowing interoperability between DiffServ-aware and legacy devices.
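This backward compatibility is simple arithmetic, sketched below in Python (illustrative helpers, not from the source): a legacy device reads only the three high-order DSCP bits as IPv4 Precedence, and the Class Selector codepoints are exactly the precedence values followed by three zero bits.

```python
def dscp_to_precedence(dscp: int) -> int:
    """A legacy device interprets the three high-order DSCP bits as IPv4 Precedence."""
    return (dscp & 0x3F) >> 3

def class_selector(precedence: int) -> int:
    """CSx codepoints (RFC 2474): the precedence bits followed by 000."""
    return (precedence & 0x7) << 3

print(class_selector(6))        # 48 -> CS6, same on-the-wire bits as Precedence 6
print(dscp_to_precedence(46))   # 5  -> EF appears as Precedence 5 to legacy gear
```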
DiffServ is the dominant QoS architecture for IP networks. It operates on a simple but powerful principle: classify packets at the edge, treat them consistently throughout the core.
Core Concepts
1. Traffic Classification (Edge)
At network entry points, packets are classified and marked with appropriate DSCP values based on criteria such as source and destination addresses, transport protocol and port numbers, ingress interface, or application identity (e.g., via deep packet inspection).
2. Per-Hop Behavior (Core)
Routers in the network core examine DSCP values and apply Per-Hop Behaviors (PHBs)—standardized treatment policies including Default (best effort), Expedited Forwarding (EF), Assured Forwarding (AF), and the Class Selector (CS) behaviors.
3. Aggregation
DiffServ aggregates many individual flows into a small number of classes, scaling to large networks without per-flow state.
Why DiffServ Scales
Unlike earlier QoS approaches (like IntServ with RSVP) that required per-flow reservation state at every router, DiffServ pushes all complexity to the network edge and keeps the core stateless:
| Property | IntServ (Per-Flow) | DiffServ (Per-Class) |
|---|---|---|
| State at routers | O(flows) | O(classes) ≈ O(1) |
| Signaling | Required for each flow | None in core |
| Scalability | Poor at scale | Excellent |
| Granularity | Individual flows | Traffic aggregates |
| Configuration | Dynamic, complex | Static, simple |
With only 64 possible DSCP values and typically 4-8 distinct PHBs deployed, routers need minimal hardware resources for DiffServ—just a few queues and scheduling policies.
DiffServ's brilliant insight is that most applications don't need unique treatment—they can be grouped into classes with similar needs. Voice and video both need low latency; file transfers and email both tolerate delay. By serving classes rather than flows, DiffServ achieves QoS at Internet scale.
The 6-bit DSCP field allows 64 possible values (0-63). The IETF has standardized several code points while allowing network-specific customization:
Standard Per-Hop Behaviors
| PHB Name | DSCP Value | Binary | Traffic Type | Treatment |
|---|---|---|---|---|
| Default (BE) | 0 (CS0) | 000000 | Best-effort traffic | Standard treatment, no guarantees |
| CS1 | 8 | 001000 | Scavenger/Background | Lower than best-effort (bulk data) |
| CS2 | 16 | 010000 | OAM/Management | Network operations, management traffic |
| CS3 | 24 | 011000 | Broadcast video | Streaming media |
| CS4 | 32 | 100000 | Real-time interactive | Video conferencing |
| CS5 | 40 | 101000 | Signaling | Control plane traffic |
| CS6 | 48 | 110000 | Network control | Routing protocols |
| CS7 | 56 | 111000 | Network control | Highest priority reserved |
| EF | 46 | 101110 | Expedited Forwarding | Voice, low latency required |
| AF11 | 10 | 001010 | Assured Forwarding 1 Low | Business data, low drop |
| AF12 | 12 | 001100 | Assured Forwarding 1 Med | Business data, med drop |
| AF13 | 14 | 001110 | Assured Forwarding 1 High | Business data, high drop |
| AF21 | 18 | 010010 | Assured Forwarding 2 Low | Critical apps, low drop |
| AF41 | 34 | 100010 | Assured Forwarding 4 Low | Multimedia, low drop |
Expedited Forwarding (EF) — DSCP 46
EF is designed for traffic requiring low loss, low latency, low jitter, and assured bandwidth (RFC 3246).
Routers implement EF with a strict priority queue—EF packets are served before any other class. This makes EF ideal for VoIP and real-time video.
Assured Forwarding (AF) — RFC 2597
AF provides four classes (AF1-AF4), each with three drop precedence levels:
AFxy where:
x = Class (1-4): Higher class = higher priority
y = Drop Precedence (1-3): Higher = more likely dropped under congestion
For example, AF31 and AF33 receive the same queuing treatment, but under congestion AF33 packets are discarded before AF31 packets.
Within an AF class, all packets receive the same delay treatment, but under congestion, higher drop precedence packets are discarded first. This allows traffic shaping: in-profile traffic gets low drop precedence, out-of-profile traffic gets high drop precedence.
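The AF numbering encodes directly into DSCP values: the codepoint for AFxy is 8x + 2y. A small Python sketch (illustrative, not from the source) makes the pattern explicit:

```python
def af_dscp(cls: int, drop: int) -> int:
    """DSCP for AF class x, drop precedence y: value = 8x + 2y (RFC 2597)."""
    if not (1 <= cls <= 4 and 1 <= drop <= 3):
        raise ValueError("AF class is 1-4, drop precedence is 1-3")
    return 8 * cls + 2 * drop

print(af_dscp(1, 1))  # 10 -> AF11
print(af_dscp(4, 1))  # 34 -> AF41
print(af_dscp(2, 3))  # 22 -> AF23
```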
The final 2 bits of the Traffic Class field (bits 10-11) carry ECN—a mechanism that allows routers to signal congestion without dropping packets. This is a significant improvement over traditional congestion detection, which relied on packet loss as the signal.
The Problem with Drop-Based Congestion Signaling
Traditionally, when routers experience congestion, they drop packets. Senders detect drops (via timeouts or duplicate ACKs) and reduce sending rate. This works, but lost data must be retransmitted (adding latency), congestion is detected only after damage is done, and the loss response causes sawtooth throughput oscillations.
ECN Solution
With ECN, routers mark packets with congestion indication instead of dropping them. The destination echoes this marking to the source, which reduces sending rate without data loss.
| Binary | Name | Meaning | Behavior |
|---|---|---|---|
| 00 | Not-ECT | Non-ECN-Capable Transport | Normal packets, may be dropped |
| 01 | ECT(1) | ECN-Capable Transport (1) | ECN capable, sender can respond |
| 10 | ECT(0) | ECN-Capable Transport (0) | ECN capable, sender can respond |
| 11 | CE | Congestion Experienced | Router marked congestion |
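The router-side behavior the table implies can be sketched as follows. This is a simplified Python illustration (not from the source): under congestion, an ECN-capable packet is marked CE rather than dropped, while a Not-ECT packet gets no marking (a real router would drop it instead).

```python
NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def router_mark(traffic_class: int, congested: bool) -> int:
    """If congested and the packet is ECN-capable, set CE in the low 2 bits."""
    ecn = traffic_class & 0x3
    if congested and ecn in (ECT0, ECT1):
        return (traffic_class & ~0x3) | CE
    return traffic_class  # Not-ECT: unchanged here; a real router would drop it

marked = router_mark(0xB8 | ECT0, congested=True)  # EF packet, ECN-capable
print(marked & 0x3 == CE)  # True
```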
ECN-TCP Interaction
TCP uses two header flags to support ECN (RFC 3168): ECE (ECN-Echo) and CWR (Congestion Window Reduced).
Connection Setup: the client sends a SYN with both ECE and CWR set; an ECN-capable server replies with a SYN-ACK carrying ECE, negotiating ECN for the connection.
Data Transfer: when a router marks a packet CE, the receiver sets ECE in its ACKs until the sender responds; the sender reduces its congestion window and sets CWR to acknowledge the signal.
This signaling prevents the severe throughput drops that occur with packet loss while still achieving congestion control.
ECN is particularly valuable in data centers where latency matters and networks are well-engineered. Protocols like DCTCP and DCQCN use ECN for fine-grained congestion control, achieving both low latency and high throughput—impossible with drop-based signaling alone.
Implementing Traffic Class-based QoS requires coordination across network elements:
1. Classification and Marking (Edge Routers)
Classifiers examine packets and assign DSCP values:
Policy: Voice Traffic Classification
IF:
Protocol = UDP AND
Destination Port IN (5060-5061, 16384-32767) AND
Packet Size < 200 bytes
THEN:
Mark DSCP = EF (46)
Policy: Video Traffic Classification
IF:
Protocol = TCP AND
Destination Port = 443 AND
Source matches known video CDN
THEN:
Mark DSCP = AF41 (34)
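The pseudo-policy above translates almost line for line into code. Here is a Python sketch (field names and the CDN check are illustrative assumptions, not from any real classifier API):

```python
EF, AF41 = 46, 34

def classify(proto: str, dst_port: int, size: int, src_is_video_cdn: bool = False) -> int:
    """Return the DSCP to mark, per the edge classification policy above."""
    # Voice: small UDP packets to SIP/RTP port ranges
    if proto == "UDP" and (5060 <= dst_port <= 5061 or 16384 <= dst_port <= 32767) and size < 200:
        return EF
    # Video: HTTPS from a known video CDN
    if proto == "TCP" and dst_port == 443 and src_is_video_cdn:
        return AF41
    return 0  # everything else: best effort

print(classify("UDP", 5060, 120))                          # 46 (EF)
print(classify("TCP", 443, 1400, src_is_video_cdn=True))   # 34 (AF41)
print(classify("TCP", 80, 1400))                           # 0
```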
2. Policing and Shaping (Edge)
Policers enforce per-class rate limits, dropping or remarking out-of-profile packets; shapers instead buffer excess traffic to smooth bursts down to the contracted rate.
3. Queuing and Scheduling (Core Routers)
Core routers map DSCP values to queues and apply scheduling:
| Queue | DSCP Values | Scheduler | Weight/Priority |
|---|---|---|---|
| Priority Queue | EF (46) | Strict Priority | Served first always |
| Queue 1 | AF4x (34,36,38) | WFQ | 40% |
| Queue 2 | AF3x (26,28,30) | WFQ | 30% |
| Queue 3 | AF2x (18,20,22) | WFQ | 20% |
| Queue 4 | AF1x (10,12,14) | WFQ | 10% |
| Default Queue | 0 (Best Effort) | WFQ | Remaining |
Common scheduling algorithms include Strict Priority (SP), Weighted Fair Queuing (WFQ), Weighted Round Robin (WRR), and Deficit Round Robin (DRR).
Strict priority queues must be rate-limited! If left unlimited, EF traffic can starve all other classes. Typical deployments limit the priority queue to 10-30% of link capacity.
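Such a cap is commonly implemented with a token bucket. The following minimal Python sketch (illustrative; rates and burst size are arbitrary, not from the source) admits EF packets only while tokens remain, so out-of-profile voice traffic can be dropped or remarked rather than starving other queues:

```python
import time

class TokenBucket:
    """Minimal token-bucket policer used to cap a strict-priority (EF) queue."""
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8           # refill rate in bytes per second
        self.capacity = burst_bytes        # maximum burst
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False   # out-of-profile: drop or remark instead of queuing

bucket = TokenBucket(rate_bps=100_000_000, burst_bytes=15_000)  # e.g., 100 Mb/s cap
print(bucket.allow(1500))  # True: first packet is within the burst allowance
```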
Let's design a complete QoS policy for an enterprise network supporting voice, video, business applications, and general traffic:
The business requirements map onto the following per-class policy:
| Traffic Type | DSCP | Queue | Bandwidth | Drop Treatment |
|---|---|---|---|---|
| VoIP (RTP) | EF (46) | Priority | 10% max | Never drop |
| Video Conferencing | AF41 (34) | Class 1 | 30% min | WRED, low drop |
| Interactive Business Apps | AF31 (26) | Class 2 | 25% min | WRED, med drop |
| Bulk Business Data | AF21 (18) | Class 3 | 15% min | WRED, higher drop |
| Best Effort (Web, Email) | 0 | Default | 15% | Tail drop |
| Background/Scavenger | CS1 (8) | Scavenger | 5% | Aggressive drop |
Implementation on Router Interface:
class-map match-all VOIP
 match dscp ef
class-map match-all VIDEO
 match dscp af41 af42 af43
class-map match-all INTERACTIVE
 match dscp af31 af32 af33
class-map match-all BUSINESS-DATA
 match dscp af21 af22 af23
class-map match-all SCAVENGER
 match dscp cs1

policy-map WAN-EDGE
 class VOIP
  priority percent 10
 class VIDEO
  bandwidth percent 30
  random-detect dscp-based
 class INTERACTIVE
  bandwidth percent 25
  random-detect dscp-based
 class BUSINESS-DATA
  bandwidth percent 15
  random-detect dscp-based
 class SCAVENGER
  bandwidth percent 5
 class class-default
  bandwidth percent 15
  fair-queue

interface GigabitEthernet0/0
 service-policy output WAN-EDGE
This policy ensures voice always gets priority treatment, video and business apps receive guaranteed bandwidth, and scavenger traffic only uses residual capacity.
Enterprise networks define 'trust boundaries' where DSCP markings from endpoints are accepted. Untrusted traffic (from user devices, internet) gets remarked at ingress. Trusted traffic (from VoIP phones, known servers) keeps original markings.
Service providers use Traffic Class to offer tiered services and maintain SLAs:
Multi-Tenant QoS
ISPs serving multiple enterprise customers must isolate QoS domains while providing consistent treatment:
| SLA Tier | Description | Typical DSCP | Latency | Jitter | Loss |
|---|---|---|---|---|---|
| Real-Time | VoIP, live video | EF (46) | < 50ms | < 5ms | < 0.1% |
| Premium Data | Critical apps | AF41-43 | < 100ms | < 20ms | < 0.5% |
| Standard Data | Business apps | AF21-23 | < 200ms | < 50ms | < 1% |
| Best Effort | General internet | 0 | No guarantee | No guarantee | No guarantee |
Cross-Domain QoS Challenges
When traffic crosses multiple providers, DSCP markings are often remarked or stripped at domain boundaries, PHB-to-DSCP mappings differ between operators, and no single party can guarantee end-to-end treatment.
Solutions include inter-provider SLAs with agreed DSCP mappings, explicit remarking tables at domain boundaries, and carrying customer markings transparently inside MPLS or tunnel encapsulations.
QoS works well within managed networks (enterprises, single providers) but breaks across the public Internet. Cross-provider traffic is typically best-effort. Applications needing guaranteed quality often use dedicated circuits or SDN overlays.
While Traffic Class functions identically to IPv4's redefined ToS field, several IPv6-specific aspects affect QoS implementation:
Tunneling and Traffic Class
When IPv6 packets are tunneled (e.g., 6to4, Teredo, GRE), the inner Traffic Class must somehow map onto the outer header's ToS/Traffic Class field:
Option 1: Copy Traffic Class
Inner IPv6: Traffic Class = 46 (EF)
Outer IPv4: ToS = 46 (EF) → Same treatment
Option 2: Uniform Tunnel Class
Inner IPv6: Traffic Class = 46 (EF)
Outer IPv4: ToS = 0 → Tunnel treated as best-effort
Most implementations copy Traffic Class to outer header to preserve QoS treatment. RFC 2983 provides guidance on this topic.
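The two options reduce to a one-line decision at encapsulation time. A Python sketch (illustrative; the function and mode names are ours, loosely following RFC 2983's uniform/pipe terminology):

```python
def outer_traffic_class(inner_tc: int, uniform: bool = True) -> int:
    """Choose the outer header's ToS/Traffic Class when encapsulating.

    uniform=True: copy the inner marking so the tunnel receives the same PHB.
    uniform=False ('pipe'): the outer marking is set independently (0 here).
    """
    return inner_tc if uniform else 0

print(outer_traffic_class(46))                 # 46: EF treatment preserved
print(outer_traffic_class(46, uniform=False))  # 0: tunnel treated as best-effort
```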
IPv6 Flow Label + Traffic Class Synergy
The combination enables powerful optimization:
First packet of flow:
1. Examine Traffic Class → Determine PHB
2. Examine Flow Label → Cache (Src, Dst, FL) → PHB mapping
Subsequent packets:
1. Lookup (Src, Dst, FL) in cache
2. Apply cached PHB directly
3. Skip Traffic Class examination if cached
This reduces per-packet processing while maintaining correct QoS treatment.
Smart implementations cache not just the routing decision but the complete QoS treatment for a flow. The Flow Label enables this caching; Traffic Class provides the initial classification. Together they enable wire-speed QoS with minimal per-packet overhead.
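The caching scheme described above can be sketched with a plain dictionary. This is an illustrative Python model (not router code): the first packet of a flow is classified from its Traffic Class, and later packets hit the cache keyed on (source, destination, Flow Label).

```python
flow_cache: dict[tuple[str, str, int], int] = {}

def lookup_phb(src: str, dst: str, flow_label: int, traffic_class: int) -> int:
    """Classify on Traffic Class once, then reuse the cached PHB (here, the DSCP)."""
    key = (src, dst, flow_label)
    if key not in flow_cache:
        flow_cache[key] = traffic_class >> 2   # full classification, first packet only
    return flow_cache[key]

first = lookup_phb("2001:db8::1", "2001:db8::2", 0xABCDE, 0xB8)
later = lookup_phb("2001:db8::1", "2001:db8::2", 0xABCDE, 0)  # TC skipped once cached
print(first, later)  # 46 46
```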
The Traffic Class field is IPv6's primary mechanism for Quality of Service, enabling differentiated treatment across networks of any scale.
What's Next
Having covered Traffic Class (QoS) and Flow Label (flow identification), we'll next examine the Hop Limit field—IPv6's replacement for IPv4's Time to Live. Understanding Hop Limit is essential for comprehending packet lifetime, routing loop prevention, and diagnostic tools like traceroute.
You now understand the Traffic Class field—its DSCP and ECN components, how DiffServ provides scalable QoS, common per-hop behaviors, and practical implementation. This knowledge is fundamental for anyone designing, operating, or troubleshooting QoS-enabled IPv6 networks.