At the heart of circuit switching lies a contractual promise: once a circuit is established, the network guarantees that specific resources—bandwidth, switch capacity, transmission facilities—remain available for the duration of the session. This is resource reservation, the mechanism that transforms a network of shared infrastructure into what appears to be a private, dedicated connection.
Resource reservation is what enables the guaranteed Quality of Service that distinguishes circuit switching from best-effort packet networks. It's also the source of circuit switching's efficiency limitations. Understanding resource reservation deeply reveals both the power and the constraints of this architectural approach.
On this page, we examine exactly what resources are reserved, how reservation decisions are made, the granularity at which resources are allocated, and how reservation concepts have evolved and persisted into modern networking.
By the end of this page, you will understand the mechanics of resource reservation in circuit-switched networks: the types of resources reserved, allocation granularity, time slot assignment in TDM systems, bandwidth management strategies, and how reservation concepts appear in modern QoS-aware networks like MPLS-TE and RSVP.
Resource reservation in circuit switching encompasses multiple categories of network resources. Each category must be available and allocated for a circuit to function correctly.
1. Transmission Bandwidth:
The most fundamental reserved resource. For each hop in the circuit path, sufficient transmission capacity must be allocated:
2. Switch Port Capacity:
Each switch in the path allocates:
3. Processing Resources:
During setup and teardown:
| Resource Type | Where Reserved | Duration | Impact of Exhaustion |
|---|---|---|---|
| Transmission bandwidth | Each link in path | Entire call duration | Blocking (no circuit possible) |
| Switch fabric connections | Each switch | Entire call duration | Blocking at that switch |
| Trunk interface circuits | Each trunk endpoint | Entire call duration | Trunk group busy |
| Time slots (TDM) | Each TDM link | Entire call duration | Slot exhaustion = blocking |
| Signaling capacity | SS7 network | Setup/teardown only | Delayed setup, possible timeout |
| Call processing | Switch CPUs | Transient (ms-seconds) | Call attempt throttling |
| Memory buffers | Minimal in circuit switching | Transient | Generally not a constraint |
Contrast with packet switching:
In packet-switched networks, resources are not reserved:
Circuit switching's reservation model trades statistical efficiency for predictable performance. The resources allocated to an idle circuit cannot serve other traffic—but the circuit owner enjoys guaranteed, consistent service.
Resource reservation creates a fundamental tradeoff: predictability vs. efficiency. Reserving resources guarantees they're available when needed but wastes them when not fully utilized. Packet switching shares resources efficiently but cannot guarantee availability. Modern QoS systems attempt to blend both approaches.
In digital circuit-switched networks, Time Division Multiplexing (TDM) is the primary mechanism for creating multiple logical circuits on a single physical link. Understanding time slot assignment is essential for comprehending how resources are actually reserved.
TDM fundamentals:
TDM divides a transmission link's capacity into recurring frames, each containing multiple time slots. Each slot carries data for one circuit during each frame period:
T1 (DS-1) structure: 24 time slots of 64 kbps per 125 μs frame, plus one framing bit per frame, for an aggregate rate of 1.544 Mbps.
E1 structure: 32 time slots of 64 kbps per 125 μs frame (30 bearer channels plus one framing slot and one signaling slot), for an aggregate rate of 2.048 Mbps.
Time slot assignment process:
1. Slot selection at originating switch:
When a call is set up:
2. Slot interchange at intermediate switches:
Here's a critical complexity: the incoming slot number may differ from the available outgoing slot. Switches perform Time Slot Interchange (TSI):
Incoming: Slot 5 on Trunk A
Needed: Slot 12 on Trunk B (slot 5 busy on B)
Switch performs:
1. Read sample from Slot 5, Trunk A
2. Buffer the sample
3. Write sample to Slot 12, Trunk B
This happens in real-time, every 125 μs, maintaining synchronous timing.
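To make the interchange concrete, here is a minimal sketch in Python (not switch firmware): a static connection map and a per-frame buffer, with slot indices and sample values chosen purely for illustration.

```python
# Minimal sketch of a Time Slot Interchange (TSI) stage.
# Assumptions (not from the text): slot values are single bytes, one frame
# is processed per 125 us tick, and the connection map is static.

FRAME_SLOTS = 24  # T1-style frame with 24 DS-0 slots

# Connection map: incoming slot on Trunk A -> outgoing slot on Trunk B
connection_map = {5: 12}

def tsi_frame(incoming_frame: list[int]) -> list[int]:
    """Write each incoming slot into a buffer, then read it out on its
    reserved outgoing slot (steps 1-3 above)."""
    buffer = list(incoming_frame)              # steps 1-2: read and buffer the samples
    outgoing_frame = [0] * FRAME_SLOTS
    for in_slot, out_slot in connection_map.items():
        outgoing_frame[out_slot] = buffer[in_slot]   # step 3: write to the outgoing slot
    return outgoing_frame

# One 125 us frame: the sample arriving in slot 5 leaves in slot 12.
frame_in = [0] * FRAME_SLOTS
frame_in[5] = 0xA7
print(tsi_frame(frame_in)[12])  # 167
```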
3. End-to-end slot chain:
A complete circuit might use different slot numbers on each link:
| Hop | Link | Slot |
|---|---|---|
| 1 | Subscriber to Local Exchange | Channel 1 |
| 2 | Local to Tandem | Slot 15 |
| 3 | Tandem to Tandem | Slot 3 |
| 4 | Tandem to Remote Local | Slot 22 |
| 5 | Remote Local to Subscriber | Channel 1 |
The TSI at each switch maps between these slots seamlessly.
The 64 kbps slot size comes from voice digitization: 8,000 samples/second × 8 bits/sample = 64,000 bps. This sampling rate (Nyquist rate for 4 kHz voice) became the fundamental unit of digital telephony, and even non-voice circuits often use 64 kbps channels or multiples thereof (n×64 kbps).
The granularity of resource reservation determines how finely capacity can be allocated. Finer granularity improves efficiency but increases complexity. Different technologies offer different granularity options.
Historical progression:
| Technology | Minimum Allocation Unit | Typical Maximum | Granularity Step |
|---|---|---|---|
| Analog FDM | 4 kHz voice channel | 600 voice channels | 4 kHz |
| DS-0 (64 kbps) | 1 time slot = 64 kbps | 24-32 slots per frame | 64 kbps |
| DS-1 (T1/E1) | 1.544/2.048 Mbps | Groups of T1/E1s | ~1.5/2 Mbps |
| DS-3 (T3) | 44.736 Mbps | Multiple DS-3s | ~45 Mbps |
| SONET STS-1 | 51.84 Mbps | STS-192 (10 Gbps) | ~52 Mbps |
| SONET VT1.5 | 1.728 Mbps | 28 per STS-1 | ~1.7 Mbps |
| OTN ODU0 | 1.244 Gbps | ODUflex container | ~1.25 Gbps |
| OTN ODUflex | 1.25 Gbps increments | Up to ODU4 | Flexible |
The granularity problem:
If your application needs 100 Mbps but the smallest allocatable unit is 155 Mbps (STS-3), you waste 55 Mbps—35% overhead.
Conversely, if you need 200 Mbps, you must allocate STS-3c (155 Mbps) + STS-1 (52 Mbps) = 207 Mbps, or step up to STS-12 (622 Mbps), wasting even more.
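A quick way to see the cost is to compute the fraction of a container left unused when a demand is rounded up. The sketch below uses an illustrative container menu at standard SONET line rates.

```python
# Sketch: capacity wasted when a demand must be rounded up to a fixed
# container size. The container menu is illustrative.

CONTAINERS_MBPS = {"STS-1": 51.84, "STS-3c": 155.52, "STS-12c": 622.08}

def waste_for(demand_mbps: float) -> dict[str, float]:
    """Percentage of each large-enough container left unused by the demand."""
    return {name: round(100 * (size - demand_mbps) / size, 1)
            for name, size in CONTAINERS_MBPS.items() if size >= demand_mbps}

print(waste_for(100))  # {'STS-3c': 35.7, 'STS-12c': 83.9}
```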
Solutions to granularity constraints:
1. Virtual Concatenation (VCAT):
SONET Virtual Concatenation allows combining non-contiguous smaller containers (a sketch of the resulting capacity math follows this list):
2. Link Capacity Adjustment Scheme (LCAS):
Builds on VCAT to allow dynamic adjustment:
3. ODUflex in OTN:
Optical Transport Network's flexible containers:
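Both VCAT and ODUflex attack the granularity problem the same way: build a right-sized channel out of smaller increments rather than rounding up to the next large container. A minimal sketch of the VCAT-style calculation, assuming STS-1 members (ODUflex is analogous with roughly 1.25 Gbps increments):

```python
# Sketch: group n smaller containers (VCAT-style) instead of one big one.
# Member size is illustrative (STS-1 = 51.84 Mbps).

import math

STS1_MBPS = 51.84

def vcat_group(demand_mbps: float) -> tuple[int, float]:
    """Number of STS-1 members needed and the resulting waste percentage."""
    members = math.ceil(demand_mbps / STS1_MBPS)
    capacity = members * STS1_MBPS
    return members, round(100 * (capacity - demand_mbps) / capacity, 1)

print(vcat_group(100))  # (2, 3.5) -> two STS-1 members carry 100 Mbps with ~3.5% waste
```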
Finer granularity reduces waste but increases management complexity. Every allocatable unit needs tracking, fault monitoring, and performance management. A 10 Gbps link could be managed as 1 unit or 156,250 × 64 kbps units—the latter offers flexibility but enormous overhead. Practical systems balance these concerns.
Beyond transmission links, circuit switching requires reserving resources within switching systems. The switch fabric—the internal mechanism that connects input ports to output ports—is a critical reservation point.
Switch fabric architectures:
1. Space Division Switching:
Physically separate paths through the switch for each circuit:
Crossbar reservation:
Output Ports
1 2 3 4
┌───┬───┬───┬───┐
1 │ X │ │ │ │ ← Port 1→1 connected
├───┼───┼───┼───┤
I 2 │ │ │ X │ │ ← Port 2→3 connected
n ├───┼───┼───┼───┤
p 3 │ │ │ │ X │ ← Port 3→4 connected
u ├───┼───┼───┼───┤
t 4 │ │ X │ │ │ ← Port 4→2 connected
└───┴───┴───┴───┘
Each circuit reserves one crosspoint. A 1000×1000 crossbar needs 1,000,000 crosspoints.
2. Time Division Switching:
Shares physical paths using time multiplexing:
TDM switch reservation:
Slot 5 Input → Write to Memory Location 205
Slot 12 Output ← Read from Memory Location 205
Connection table:
| Input Slot | Memory Location | Output Slot | Output Trunk |
|------------|-----------------|-------------|---------------|
| 5 | 205 | 12 | Trunk B |
| 8 | 208 | 3 | Trunk C |
Reservation = allocating memory location and connection table entry.
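Conceptually, the reservation is just a claimed speech-memory location plus a connection table entry. A minimal sketch, with illustrative field names, memory range, and trunk labels:

```python
# Sketch of a time-division switch connection table: reserving a circuit
# means claiming a free memory location and recording the slot mapping.
# Memory range, slots, and trunk names are illustrative.

from dataclasses import dataclass

@dataclass
class Connection:
    in_slot: int
    mem_loc: int
    out_slot: int
    out_trunk: str

connection_table: list[Connection] = []
free_mem = list(range(205, 461))   # available speech-memory locations

def reserve(in_slot: int, out_slot: int, out_trunk: str) -> Connection:
    mem = free_mem.pop(0)                      # reservation = memory + table entry
    conn = Connection(in_slot, mem, out_slot, out_trunk)
    connection_table.append(conn)
    return conn

def release(conn: Connection) -> None:
    connection_table.remove(conn)              # explicit release returns both resources
    free_mem.append(conn.mem_loc)

print(reserve(5, 12, "Trunk B"))
# Connection(in_slot=5, mem_loc=205, out_slot=12, out_trunk='Trunk B')
```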
3. Combined Time-Space-Time (TST) Switching:
Large switches combine both approaches:
Reservation must secure resources at all stages for a circuit to be established.
Some switch fabrics are 'blocking'—not every input-output combination can be connected simultaneously. Multi-stage fabrics may have internal congestion points where multiple paths compete for middle-stage resources. Non-blocking fabrics guarantee any connection pattern is possible, but at higher cost. This internal blocking is distinct from trunk blocking covered earlier.
Resource reservations are not eternal—they have lifecycles that must be actively managed. Understanding reservation lifetime management is essential for network reliability and efficiency.
Reservation states:
Soft state vs. hard state:
Reservation systems differ in how they maintain state:
Hard state (traditional circuit switching):
Soft state (RSVP and similar):
Lifetime management mechanisms:
1. Explicit release:
2. Timeout release:
3. Audit and reconciliation:
Modern systems often combine hard and soft state. Core reservations use hard state for efficiency; edge signaling uses soft state for robustness. Periodic audits (soft-state-like) ensure hard-state reservations remain consistent across network elements.
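The distinction is easy to state in code. A rough sketch (class names and the timeout value are illustrative): hard state persists until explicitly released, while soft state disappears unless it is periodically refreshed.

```python
# Sketch contrasting hard-state and soft-state reservations.

import time

class SoftReservation:
    """Soft state: the reservation expires unless it is refreshed."""
    def __init__(self, lifetime_s: float = 30.0):
        self.lifetime_s = lifetime_s
        self.last_refresh = time.monotonic()

    def refresh(self) -> None:
        self.last_refresh = time.monotonic()   # endpoints re-signal periodically

    def is_alive(self) -> bool:
        return time.monotonic() - self.last_refresh < self.lifetime_s

class HardReservation:
    """Hard state: the reservation persists until an explicit release."""
    def __init__(self):
        self.active = True

    def release(self) -> None:
        self.active = False
```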
Resource reservation doesn't happen in isolation—it's intimately connected to traffic engineering, the practice of optimizing how traffic flows through the network. Reservation decisions affect network utilization, and traffic engineering goals constrain reservation choices.
Traffic engineering objectives:
Reservation strategies:
1. First-fit allocation:
2. Best-fit allocation:
3. Load-balanced allocation:
4. Constraint-based routing:
| Strategy | Speed | Fragmentation | Load Balance | Best For |
|---|---|---|---|---|
| First-fit | Very fast | High | Poor | Simple, stable traffic patterns |
| Best-fit | Fast | Low | Poor | Variable-size allocations |
| Load-balanced | Medium | Medium | Excellent | High aggregate traffic |
| Constraint-based | Slow | Low | Good | QoS-sensitive applications |
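The first two strategies are simple enough to sketch. Assuming an illustrative set of candidate trunks with free capacity in Mbps:

```python
# Sketch of first-fit vs best-fit trunk selection for a bandwidth request.
# Trunk names and capacities are illustrative.

trunks = {"A": 40, "B": 10, "C": 25}   # free Mbps per candidate trunk

def first_fit(free: dict[str, int], demand: int) -> str | None:
    """Take the first trunk with enough free capacity (fast, fragments more)."""
    for name, avail in free.items():
        if avail >= demand:
            return name
    return None

def best_fit(free: dict[str, int], demand: int) -> str | None:
    """Take the trunk whose free capacity most tightly fits the demand."""
    candidates = [(avail, name) for name, avail in free.items() if avail >= demand]
    return min(candidates)[1] if candidates else None

print(first_fit(trunks, 20))  # 'A' (first with room)
print(best_fit(trunks, 20))   # 'C' (tightest fit, preserves A for larger requests)
```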
Bandwidth protection and preemption:
Not all circuits are equal. Networks implement priority classes that affect reservation behavior:
Protection classes:
Preemption behavior:
When a high-priority request cannot be satisfied:
Preemption ensures critical services (emergency calls, priority customers) can always get capacity, but requires careful policy design to avoid abuse.
If preempted circuits try to re-establish and must preempt other circuits, cascading preemption can destabilize the network. Policies typically limit preemption depth and include hold-off timers to prevent rapid re-preemption cycles.
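A rough sketch of a preemption policy on a single full link, under assumptions not stated in the text: lower numbers mean higher priority, the lowest-priority circuits are evicted first, and a request that still cannot fit evicts nothing.

```python
# Sketch: priority preemption on a full link. Capacities, IDs, and the
# eviction policy are illustrative.

LINK_CAPACITY = 100
circuits = [  # (circuit_id, bandwidth_mbps, priority) -- lower number = higher priority
    ("c1", 40, 3), ("c2", 40, 2), ("c3", 20, 1),
]

def admit(bw: int, priority: int) -> list[str]:
    """Admit a request, preempting lower-priority circuits if needed.
    Returns the IDs of preempted circuits (empty if none or if refused)."""
    used = sum(b for _, b, _ in circuits)
    preempted = []
    for cid, b, p in sorted(circuits, key=lambda c: -c[2]):  # worst priority first
        if used + bw <= LINK_CAPACITY:
            break
        if p > priority:
            preempted.append(cid)
            used -= b
    if used + bw > LINK_CAPACITY:
        return []                              # cannot fit even with preemption; evict nothing
    for cid in preempted:
        circuits[:] = [c for c in circuits if c[0] != cid]
    circuits.append(("new", bw, priority))
    return preempted

print(admit(30, 1))  # ['c1'] -- the priority-3 circuit is preempted
```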
While traditional circuit switching used implicit reservation (connection establishment = resource allocation), modern networks implement explicit reservation protocols. These bring circuit-switching-like guarantees to packet-based infrastructure.
RSVP (Resource Reservation Protocol):
RSVP enables end hosts to request specific QoS from the network:
How it works:
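In outline, Path messages travel downstream from sender to receiver, and Resv messages travel back upstream, installing the reservation hop by hop with admission control at each router. The sketch below is a heavy simplification that compresses the exchange into a single all-or-nothing pass; router names, capacities, and fields are illustrative, not actual RSVP messages.

```python
# Sketch of RSVP-style hop-by-hop admission control (simplified, not real RSVP).

path = [  # routers along the sender->receiver path with free capacity in Mbps
    {"name": "R1", "free": 400},
    {"name": "R2", "free": 150},
    {"name": "R3", "free": 90},
]

def reserve(flow_mbps: float) -> bool:
    """Admit the flow at every hop or roll back entirely (all-or-nothing)."""
    admitted = []
    for router in path:
        if router["free"] < flow_mbps:
            for r in admitted:                 # tear down partial reservations
                r["free"] += flow_mbps
            return False
        router["free"] -= flow_mbps
        admitted.append(router)
    return True

print(reserve(80))   # True  -- every hop has at least 80 Mbps free
print(reserve(80))   # False -- R2 now has only 70 Mbps free, so roll back
```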
Key RSVP concepts:
MPLS-TE (Traffic Engineering):
MPLS-TE creates reserved paths through an MPLS network:
Signaling options:
Capabilities:
Segment Routing:
The latest evolution simplifies reservation:
Modern networking increasingly blends circuit and packet concepts. MPLS-TE paths provide circuit-like guarantees over packet infrastructure. Segment routing adds explicit paths without per-flow state. 5G network slicing uses virtualized reservation. The clear boundary between circuit and packet switching is dissolving into a spectrum of flow management approaches.
Pure circuit switching reserves resources conservatively—if a connection might need capacity, that capacity is fully reserved. But most traffic is bursty; circuits are often idle. Oversubscription and statistical multiplexing exploit this to improve efficiency.
The insight:
Not all reserved circuits are active simultaneously. If we reserve 100 Mbps for 10 users each 'needing' 10 Mbps, but they're only active 20% of the time on average, the expected load is just 20 Mbps and 80 Mbps of reserved capacity sits idle.
Statistical multiplexing instead shares that 100 Mbps among many more users, betting that not everyone peaks simultaneously.
Oversubscription ratio:
The oversubscription ratio is total subscribed capacity divided by physical capacity:
Oversubscription Ratio = Total Subscribed Capacity / Physical Capacity
Example:
- Physical capacity: 100 Mbps
- 50 users each with 10 Mbps 'guaranteed'
- Total subscribed: 500 Mbps
- Ratio: 500 / 100 = 5:1
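Whether a given ratio is safe depends on how bursty the traffic really is. A back-of-the-envelope sketch, assuming (purely for illustration) that each user is independently active 20% of the time, estimates how often the example link above would actually be overloaded:

```python
# Sketch: probability that oversubscribed users overload the link, under an
# illustrative binomial model of independent 20%-active users.

from math import comb

USERS, PER_USER_MBPS, LINK_MBPS, P_ACTIVE = 50, 10, 100, 0.20
max_simultaneous = LINK_MBPS // PER_USER_MBPS   # 10 users can peak at once

def p_overload() -> float:
    """P(more than max_simultaneous users active) under the binomial model."""
    return 1 - sum(comb(USERS, k) * P_ACTIVE**k * (1 - P_ACTIVE)**(USERS - k)
                   for k in range(max_simultaneous + 1))

print(f"{p_overload():.3f}")  # ~0.4 -- at 5:1, demand exceeds capacity a large fraction of the time
```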
Common oversubscription ratios:
| Network Type | Typical Ratio | Justification |
|---|---|---|
| Carrier backbone | 1:1 (none) | Must honor SLAs strictly |
| Enterprise WAN | 2:1 to 4:1 | Traffic is predictable |
| Residential broadband | 10:1 to 50:1 | Highly bursty, price-sensitive |
| Mobile data | 20:1 to 100:1 | Very bursty, mobile constraints |
Risks of oversubscription:
When statistical assumptions fail:
Managing oversubscription:
Traditional circuit switching's inefficiency is also its simplicity. By never oversubscribing, it never faces congestion decisions. The tradeoff between efficiency (oversubscription) and predictability (no oversubscription) is fundamental to network design. Different applications warrant different points on this spectrum.
We have comprehensively explored how resources are reserved in circuit-switched networks. Let's consolidate the essential insights:
What's next:
Now that we understand resource reservation mechanics, we'll examine the PSTN (Public Switched Telephone Network)—the ultimate example of circuit switching at global scale. You'll see how dedicated paths, connection establishment, and resource reservation combine to create the most reliable communication network ever built.
You now possess deep understanding of resource reservation in circuit-switched networks. This knowledge extends to modern QoS systems, traffic engineering, and any domain where guaranteed capacity is essential. You understand not just what gets reserved, but how, why, and with what tradeoffs.