The routing table is comprehensive but complex. It contains policy information, backup routes, and protocol-specific attributes—details essential for routing decisions but unnecessary (and too slow to query) for per-packet forwarding.
Enter the forwarding table (also known as the Forwarding Information Base or FIB). This is the streamlined, speed-optimized derivative of the routing table, designed for one purpose: looking up the next-hop for packets as fast as physically possible.
By the end of this page, you will understand how forwarding tables are constructed from routing tables, the data structures and algorithms used for fast lookups, hardware implementation techniques, and how routers maintain FIB consistency during routing changes.
A forwarding table (Forwarding Information Base or FIB) is a data structure optimized for fast lookup that contains the essential information needed to forward packets: destination prefix, output interface, and next-hop (often pre-resolved to MAC address).
Formal Definition:
A forwarding table is a hardware-optimized data structure derived from the routing table that maps destination prefixes to forwarding actions, enabling per-packet lookup at line rate.
Key Characteristics:
| Aspect | RIB (Routing Table) | FIB (Forwarding Table) |
|---|---|---|
| Purpose | Route selection and policy | Packet forwarding |
| Contains | All routes (best and backup) | Only best routes |
| Attributes | Full protocol information | Minimal (prefix, next-hop, interface) |
| Location | Route processor (CPU) | Line cards (hardware) |
| Updated by | Routing protocols | Derived from RIB changes |
| Lookup speed | Milliseconds (software) | Nanoseconds (hardware) |
| Data structure | Tree, hash table (flexible) | Trie, TCAM (optimized) |
Despite the routing table's prominence in documentation and troubleshooting, the FIB is what actually touches packets. A route in the RIB doesn't forward traffic until it's installed in the FIB. Understanding this distinction is crucial for troubleshooting.
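The RIB-to-FIB derivation can be sketched in a few lines. This is an illustrative model only (field names like `admin_distance` and the tuple layout are assumptions, not any vendor's actual data structures): keep one best route per prefix, drop everything else.

```python
# Sketch: deriving a minimal FIB from a richer RIB.
# Illustrative model only -- not any vendor's actual data structures.
from dataclasses import dataclass

@dataclass
class RibRoute:
    prefix: str          # e.g. "10.1.0.0/16"
    next_hop: str
    interface: str
    admin_distance: int  # lower is preferred
    metric: int          # tiebreaker within a protocol
    protocol: str        # "ospf", "bgp", ... (kept in RIB, dropped in FIB)

def build_fib(rib):
    """Keep only the best route per prefix; strip protocol attributes."""
    best = {}
    for r in rib:
        cur = best.get(r.prefix)
        if cur is None or (r.admin_distance, r.metric) < (cur.admin_distance, cur.metric):
            best[r.prefix] = r
    # FIB entry = prefix -> (next_hop, interface), nothing more
    return {p: (r.next_hop, r.interface) for p, r in best.items()}

rib = [
    RibRoute("10.1.0.0/16", "10.0.0.2", "Gig0/0", 110, 20, "ospf"),
    RibRoute("10.1.0.0/16", "10.0.0.9", "Gig0/1", 200, 0, "bgp"),  # backup: stays RIB-only
]
fib = build_fib(rib)
print(fib["10.1.0.0/16"])  # ('10.0.0.2', 'Gig0/0')
```

Note how the BGP backup route never appears in the FIB: it exists only in the RIB until the OSPF route is withdrawn.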
A FIB entry is stripped down to the essentials—everything needed to forward a packet, nothing more:
Core FIB Entry Fields:
```
Router# show ip cef 192.168.1.0/24
192.168.1.0/24
  nexthop 10.0.0.2 GigabitEthernet0/0

Router# show ip cef 192.168.1.0/24 detail
192.168.1.0/24, epoch 2, flags [rib only nolabel, rib defined all labels]
  recursive via 10.0.0.2
    attached to GigabitEthernet0/0

Router# show adjacency GigabitEthernet0/0 detail
Protocol Interface                 Address
IP       GigabitEthernet0/0        10.0.0.2(7)
                                   0 packets, 0 bytes
                                   epoch 0
                                   sourced in sev-epoch 2
                                   Encap length 14
                                   0A002B001122 0A002B003344 0800
                                   L2 destination address byte offset 0
                                   L2 destination address byte length 6
```

Understanding Adjacencies:
The adjacency table is a companion to the FIB. While the FIB entry says 'send to next-hop 10.0.0.2 via Gig0/0,' the adjacency entry contains the actual Layer 2 encapsulation—destination MAC address, source MAC, ethertype, and any required VLAN tags.
By pre-computing this encapsulation, the router avoids per-packet ARP lookups. The forwarding engine simply looks up the destination in the FIB, fetches the linked adjacency, prepends the pre-built Layer 2 header, and transmits the frame.
This is how nanosecond-class forwarding becomes possible.
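The payoff of pre-computation is easy to see in code. This sketch builds the 14-byte Ethernet rewrite (destination MAC, source MAC, EtherType) once at adjacency-resolution time; the per-packet hot path is then a single concatenation. The MAC addresses mirror the illustrative `show adjacency` output above.

```python
# Sketch: why pre-computed adjacencies are fast. The 14-byte Ethernet
# rewrite is built ONCE when ARP resolves; forwarding just prepends it.

def build_rewrite(dst_mac: str, src_mac: str, ethertype: int = 0x0800) -> bytes:
    mac = lambda s: bytes.fromhex(s.replace(":", ""))
    return mac(dst_mac) + mac(src_mac) + ethertype.to_bytes(2, "big")

# Done once, at ARP-resolution time (illustrative MACs from the output above):
rewrite = build_rewrite("0a:00:2b:00:11:22", "0a:00:2b:00:33:44")
assert len(rewrite) == 14  # "Encap length 14", as in the adjacency output

# Per packet: one concatenation, no ARP lookup in the hot path
def forward(ip_packet: bytes) -> bytes:
    return rewrite + ip_packet
```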
The FIB must perform longest-prefix match (LPM) for every packet at line rate. Various algorithms and data structures have been developed to achieve this:
1. Binary Tries (Patricia Tries)
A trie (retrieval tree) where each bit of the IP address determines the path through the tree. For every bit position, go left for 0, right for 1. The deepest node reached with a valid prefix is the match.
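A minimal binary trie captures the idea, though as a sketch only: a real Patricia trie compresses one-child chains so lookup skips uninteresting bits, and production FIBs use far more compact node encodings.

```python
# Minimal binary trie for IPv4 longest-prefix match (a sketch of the idea,
# not a production Patricia trie -- real ones compress one-child chains).
import ipaddress

class TrieNode:
    __slots__ = ("children", "entry")
    def __init__(self):
        self.children = [None, None]  # index by address bit: 0 = left, 1 = right
        self.entry = None             # (next_hop, interface) if a prefix ends here

class BinaryTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix: str, entry):
        net = ipaddress.ip_network(prefix)
        bits = int(net.network_address)
        node = self.root
        for i in range(net.prefixlen):
            bit = (bits >> (31 - i)) & 1
            if node.children[bit] is None:
                node.children[bit] = TrieNode()
            node = node.children[bit]
        node.entry = entry

    def lookup(self, addr: str):
        """Walk bit by bit; remember the deepest valid prefix seen."""
        bits = int(ipaddress.ip_address(addr))
        node, best = self.root, None
        for i in range(32):
            if node.entry is not None:
                best = node.entry
            node = node.children[(bits >> (31 - i)) & 1]
            if node is None:
                break
        else:
            if node.entry is not None:
                best = node.entry
        return best

fib = BinaryTrie()
fib.insert("10.0.0.0/8", ("10.0.0.1", "Gig0/0"))
fib.insert("10.1.0.0/16", ("10.0.0.2", "Gig0/0"))   # more specific wins
print(fib.lookup("10.1.5.10"))   # ('10.0.0.2', 'Gig0/0')
print(fib.lookup("10.9.9.9"))    # ('10.0.0.1', 'Gig0/0')
```

The worst case is one step per address bit (32 for IPv4, 128 for IPv6), which is exactly the depth problem multi-bit tries attack.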
2. Multi-Bit Tries
Process multiple bits per step (e.g., 8 bits), creating a tree with higher fanout but fewer levels.
3. TCAM (Ternary Content-Addressable Memory)
Specialized hardware that searches all entries in parallel. Each entry can match 0, 1, or 'don't care' for each bit—perfect for prefix matching.
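A toy software model makes TCAM semantics concrete. Each entry is a (value, mask) pair; hardware compares the search key against every entry simultaneously and a priority encoder returns the first hit, so software must program entries longest-prefix-first for first-match to equal longest-prefix match. The loop below stands in for what the hardware does in parallel.

```python
# Toy model of TCAM matching: each entry is (value, mask); all entries are
# compared "in parallel" and the first (highest-priority) hit wins.
import ipaddress

def tcam_entry(prefix: str):
    net = ipaddress.ip_network(prefix)
    value = int(net.network_address)
    mask = int(net.netmask)       # 1 bits = "must match", 0 bits = "don't care"
    return (value, mask)

def tcam_lookup(tcam, addr: str):
    key = int(ipaddress.ip_address(addr))
    for action, (value, mask) in tcam:   # hardware does this step in parallel
        if key & mask == value:          # per-entry ternary compare
            return action                # priority encoder: first hit wins
    return None

# Entries sorted longest-prefix-first (the software's job when programming TCAM)
tcam = [
    ("Gig0/0 via 10.0.0.2", tcam_entry("10.1.0.0/16")),
    ("Gig0/0 via 10.0.0.1", tcam_entry("10.0.0.0/8")),
    ("Gig0/1 via 10.0.0.3", tcam_entry("0.0.0.0/0")),   # default: matches anything
]
print(tcam_lookup(tcam, "10.1.5.10"))    # Gig0/0 via 10.0.0.2
print(tcam_lookup(tcam, "172.16.0.1"))   # Gig0/1 via 10.0.0.3
```

The ordering requirement is also why deletions fragment TCAM: removing a mid-table entry leaves a hole that later insertions at the right priority may not be able to reuse without moving entries.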
Hybrid Approaches:
Modern routers often combine these techniques—for example, trie-based lookup in fast SRAM for the bulk of the unicast table, with TCAM reserved for ACLs, QoS classification, and other match types that need ternary wildcards.
The Bottom Line: The choice of FIB data structure directly determines forwarding performance. This is why router architecture is a specialized discipline—general-purpose computing cannot match purpose-built forwarding hardware.
IPv6's 128-bit addresses are four times wider than IPv4's 32 bits. TCAM entries become 4x wider, and trie depth increases significantly. This drives continued innovation in LPM algorithms and hardware.
The FIB must accurately reflect the routing table's best routes. When routing changes occur, the FIB must be updated to match. This process has several stages:
FIB Population Flow: routing protocols install candidate routes in the RIB; best-path selection chooses one route per prefix; the FIB manager resolves each winner's next-hop and adjacency; and the result is downloaded to the hardware FIB—on distributed platforms, replicated to every line card.
Synchronization Challenges:
Delay Between RIB and FIB: There's inherent delay between a route appearing in the RIB and being installed in hardware FIB. During this window, packets may be forwarded incorrectly.
Batch Updates: To reduce overhead, FIB managers often batch multiple updates together. This improves efficiency but increases latency for individual updates.
TCAM Fragmentation: Deleting routes can leave holes in TCAM. Over time, these require compaction (moving entries), which can be disruptive.
Capacity Limits: If the FIB hardware fills up, new routes cannot be installed. This is a serious failure mode requiring careful capacity planning.
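The batching trade-off described above can be sketched as a small state machine (an illustrative model; real FIB managers batch on timers and change volume, and the names here are invented): hardware-programming operations are amortized across several RIB changes, at the cost of a window in which RIB and hardware FIB disagree.

```python
# Sketch of a FIB manager that batches RIB changes before pushing them to
# line-card hardware. Illustrative model only; names are invented.

class FibManager:
    def __init__(self, batch_size: int = 3):
        self.pending = []          # RIB changes not yet programmed in hardware
        self.hardware_fib = {}     # stand-in for the line-card TCAM/trie
        self.batch_size = batch_size
        self.flushes = 0           # count of hardware programming operations

    def rib_change(self, op: str, prefix: str, entry=None):
        self.pending.append((op, prefix, entry))
        if len(self.pending) >= self.batch_size:
            self.flush()           # amortize the hardware-programming cost

    def flush(self):
        for op, prefix, entry in self.pending:
            if op == "add":
                self.hardware_fib[prefix] = entry
            else:
                self.hardware_fib.pop(prefix, None)
        self.pending.clear()
        self.flushes += 1

mgr = FibManager()
mgr.rib_change("add", "10.0.0.0/8", ("10.0.0.1", "Gig0/0"))
mgr.rib_change("add", "10.1.0.0/16", ("10.0.0.2", "Gig0/0"))
# Until the batch flushes, RIB and hardware FIB disagree (the RIB/FIB delay):
assert "10.0.0.0/8" not in mgr.hardware_fib
mgr.rib_change("delete", "10.0.0.0/8")     # third change triggers the flush
assert mgr.flushes == 1 and "10.1.0.0/16" in mgr.hardware_fib
```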
During routing convergence (e.g., after a link failure), different line cards may have different FIB contents momentarily. This can cause micro-loops or packet drops. Techniques like 'ordered FIB' ensure updates are applied in a sequence that prevents loops.
High-performance routers implement the FIB in specialized hardware. Understanding this hardware is essential for capacity planning and performance analysis:
TCAM (Ternary Content-Addressable Memory)
TCAM is the dominant technology for line-rate FIB lookup. Unlike regular CAM (which matches exact values), TCAM allows 'don't care' bits—essential for prefix matching.
How TCAM works: each entry stores a value plus a per-bit mask marking which bits must match and which are "don't care"; the search key is presented to every entry simultaneously; all entries whose unmasked bits match assert a hit; and a priority encoder selects the winner (entries are ordered so the longest prefix wins). The entire lookup completes in a single memory cycle.
| Technology | Lookup Speed | Power | Cost | Typical Use |
|---|---|---|---|---|
| TCAM | O(1), single cycle | Very high | Expensive | Core/edge router FIB |
| SRAM | O(1), hash/direct | Medium | Moderate | Adjacency tables, caches |
| DRAM | O(1), but slow access | Low | Cheap | Large RIBs, backup FIB |
| Custom ASIC | Configurable | Optimized | High NRE | Vendor-specific solutions |
| NPU/FPGA | Programmable | Medium | Moderate | Software-defined networking |
TCAM Capacity Considerations:
TCAM is expensive, power-hungry, and heat-generating, so capacity is bounded: depending on the platform, a high-end router's FIB TCAM holds on the order of hundreds of thousands to a few million IPv4 prefixes, with each IPv6 entry consuming several times the space of an IPv4 entry. When TCAM fills up, new routes cannot be programmed in hardware; depending on the platform, affected traffic is punted to (much slower) software forwarding or the excess routes are simply not used.
Always verify FIB hardware capacity before deploying routers. Commands like 'show platform hardware capacity' or 'show hardware capacity' reveal limits. Plan for headroom—running at 90% FIB capacity leaves no room for growth or emergencies.
Cisco Express Forwarding (CEF) is a widely deployed example of FIB-based forwarding. Studying CEF illustrates practical FIB implementation:
CEF Components:
FIB Table: The actual forwarding table, a modified radix tree optimized for fast lookup. Contains prefix-to-adjacency mappings.
Adjacency Table: Contains L2 rewrite information (MAC addresses, encapsulation). Linked from FIB entries.
Central CEF (software): FIB maintained in software on the route processor, which performs all forwarding. Suitable for low-end routers.
Distributed CEF (dCEF): FIB replicated to line card hardware. Each line card forwards independently, without involving the route processor.
CEF Adjacency Types:
| Type | Description | Forwarding Action |
|---|---|---|
| Cached | Normal adjacency with L2 info | Forward with rewrite |
| Punt | Needs CPU processing | Send to route processor |
| Glean | Directly connected, ARP incomplete | ARP then forward |
| Drop | Null route or filtered | Discard silently |
| Discard | Administratively dropped | Discard, may log |
| Receive | Destined for router itself | Local delivery |
```
! Verify CEF is enabled
Router# show ip cef
Prefix              Next Hop        Interface
0.0.0.0/0           10.0.0.1        GigabitEthernet0/0
10.0.0.0/24         attached        GigabitEthernet0/0
10.1.0.0/16         10.0.0.2        GigabitEthernet0/0
192.168.1.0/24      10.0.0.3        GigabitEthernet0/1

! Check specific prefix in CEF
Router# show ip cef 10.1.5.10
10.1.0.0/16
  nexthop 10.0.0.2 GigabitEthernet0/0

! Verify adjacency resolution
Router# show adjacency summary
Adjacency Table has 4 adjacencies
  4 complete adjacencies

! Troubleshoot CEF consistency
Router# show ip cef exact-route 10.1.5.10 192.168.1.1
10.1.5.10 -> 192.168.1.1 => GigabitEthernet0/0 (next hop 10.0.0.2)
```

The RIB and FIB should contain the same best routes. Inconsistencies cause traffic issues. Use 'show ip cef inconsistency' to detect mismatches. Common causes: software bugs, memory corruption, or incomplete updates.
Software-Defined Networking (SDN) introduces a new paradigm for FIB management. Instead of routing protocols populating the FIB, a centralized controller programs forwarding tables directly:
Traditional vs. SDN Forwarding:
Traditional: each router runs distributed routing protocols, computes its own RIB, and derives its own FIB locally.
SDN: a centralized controller computes forwarding state for the whole network and programs each device's tables directly over a southbound API such as OpenFlow.
OpenFlow and Flow Tables:
OpenFlow extends the forwarding table concept to 'flow tables' with richer matching: a rule can match on many header fields at once (ingress port, MAC addresses, VLAN tags, IP addresses, TCP/UDP ports), any of which may be wildcarded; each rule carries a priority and an action list (forward, rewrite headers, drop, send to controller); and per-rule counters track flow statistics.
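The generalization is easy to model. This sketch (field names and the `rule` helper are invented for illustration, not OpenFlow's actual API) shows rules matching several fields with wildcards, evaluated in priority order:

```python
# Sketch of the flow-table generalization: match several header fields
# (any of which may be wildcarded) instead of just a destination prefix.
# Field names and helpers are illustrative, not OpenFlow's actual API.

WILDCARD = None

def rule(action, **fields):
    return (fields, action)

def flow_lookup(table, packet: dict):
    for fields, action in table:     # first matching rule wins (priority order)
        if all(packet.get(k) == v for k, v in fields.items() if v is not WILDCARD):
            return action
    return ["drop"]                  # table-miss behavior (often "send to controller")

table = [
    rule(["forward:2"], ip_dst="10.1.5.10", tcp_dst=80),  # pin web traffic to port 2
    rule(["forward:1"], ip_dst="10.1.5.10"),              # everything else to port 1
    rule(["controller"], ip_dst=WILDCARD),                # unknown: ask the controller
]
print(flow_lookup(table, {"ip_dst": "10.1.5.10", "tcp_dst": 80}))  # ['forward:2']
print(flow_lookup(table, {"ip_dst": "10.1.5.10", "tcp_dst": 22}))  # ['forward:1']
```

Note that a classic FIB is just the special case where every rule matches only `ip_dst` under a prefix mask.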
P4 (Programming Protocol-Independent Packet Processors):
P4 goes further, allowing the packet parser and match-action pipeline itself to be programmed: the operator defines which headers exist, how they are parsed, what each table matches on, and which actions apply—so the hardware is no longer limited to fixed protocols like Ethernet and IPv4.
Implications:
SDN and programmable data planes blur the line between RIB and FIB. The 'routing table' may be an abstraction maintained by the controller, with flow rules pushed directly to hardware tables that don't resemble traditional FIBs.
As programmable ASICs and P4 mature, the fixed-function FIB is evolving into programmable match-action tables. The conceptual distinction between 'FIB' (prefix → next-hop) and 'flow table' (arbitrary match → action) is becoming increasingly blurred.
We've explored the forwarding table as the hardware-optimized structure that enables line-rate packet forwarding. Let's consolidate the key concepts.
You now understand the forwarding table as the hardware-optimized structure that actually moves packets at line rate. Next, we'll tie everything together by examining the control plane vs. data plane separation—the architectural foundation that enables routing and forwarding to be optimized independently.