When you access a website halfway around the world, your request traverses a remarkable infrastructure—submarine cables spanning oceans, fiber optic networks crossing continents, and massive data centers interconnecting thousands of networks. This is the Internet backbone, and it runs entirely on BGP.
The backbone isn't a single network or company—it's an emergent structure arising from thousands of independent networks choosing to interconnect. Understanding how these interconnections work, how traffic flows between networks, and how BGP enables global reachability is essential for anyone working with Internet infrastructure at scale.
By the end of this page, you will understand the Internet's tier hierarchy, how Internet Exchange Points enable efficient interconnection, the economics of transit and peering, how content delivery works at scale, and the role of BGP in maintaining global Internet connectivity.
The Internet's structure is often described as a hierarchy of network "tiers," though the reality is more nuanced than a simple pyramid.
Tier 1 Networks:
Tier 1 networks sit at the top of the Internet hierarchy. They have the defining characteristic of being able to reach every destination on the Internet without paying for transit—either through direct peering or through peering with other Tier 1 networks.
| Characteristic | Definition |
|---|---|
| Transit-Free | Can reach entire Internet via settlement-free peering |
| Global Reach | Network presence on multiple continents |
| Peer with Each Other | All Tier 1s have direct peering relationships |
| Backbone Capacity | Multi-terabit network infrastructure |
| Customer Revenue | Provide transit to lower-tier networks |
Major Tier 1 Networks (as of recent classifications):
The exact list is debated, as the criteria for "Tier 1" aren't formally defined, and the landscape evolves through mergers and peering changes.
| Tier | Description | Transit? | Examples |
|---|---|---|---|
| Tier 1 | Global backbone, settlement-free only | Never pays | Lumen, NTT, Cogent |
| Tier 2 | Regional/national, peers + buys transit | Some paid | Regional ISPs, carriers |
| Tier 3 | Local, mostly buys transit | Mostly paid | Local ISPs, enterprises |
| Content | Large content providers, hybrid | Varies | Google, Meta, Netflix, Amazon |
Tier 2 Networks:
Tier 2 networks have significant infrastructure but rely on at least one transit provider to reach parts of the Internet. They typically peer extensively at regional IXPs, sell transit to smaller networks, and buy transit from one or more Tier 1 or large Tier 2 providers.
Tier 3 Networks:
Tier 3 networks are primarily consumers of transit. They include local access ISPs, enterprises, universities, and other organizations that buy full Internet connectivity from upstream providers and peer only opportunistically, if at all.
The Tier System Reality:
The neat tier hierarchy is a simplification. In practice:
Content Networks: Companies like Google, Meta, and Netflix have massive networks that peer directly with most ISPs, bypassing the traditional hierarchy entirely.
Hybrid Relationships: A network might be Tier 2 in one region and Tier 3 in another.
Peering vs. Transit: The same two networks might have peering in one location and transit in another.
Constant Evolution: Mergers, new submarine cables, and shifting traffic patterns continuously reshape the landscape.
The tier hierarchy has flattened significantly over the past decade. Content delivery networks, cloud providers, and large enterprises increasingly peer directly with eyeball networks (ISPs serving end users), bypassing traditional transit. Most Internet traffic now flows from large content networks directly to access providers with minimal transit.
Understanding the economics of Internet interconnection is essential for network operators. These relationships directly influence routing policy and network design.
Transit Agreements:
In a transit relationship, one network pays another for full Internet connectivity:
| Aspect | Detail |
|---|---|
| Payment | Customer pays provider (monthly, per Mbps) |
| Routes Received | Full Internet routing table |
| Routes Advertised | Customer's routes (and their customers') |
| Traffic Ratio | Provider carries traffic to/from Internet |
| SLA | Provider commits to uptime, capacity |
Transit pricing varies significantly by region and traffic volume; billing is typically per Mbps against a committed rate, with the per-Mbps price falling steeply at higher commit levels and in well-connected markets.
Peering Agreements:
In a peering relationship, networks exchange traffic without payment:
| Aspect | Detail |
|---|---|
| Payment | None (settlement-free) |
| Routes Received | Peer's routes and their customers' routes |
| Routes Advertised | Own routes and customers' routes |
| Traffic Ratio | Ideally balanced, but varies |
| SLA | Usually best-effort |
Types of Peering:
1. Public Peering (IXP): Both networks connect to a shared IXP switch fabric and exchange traffic across it, often reaching many peers over a single port.
2. Private Network Interconnect (PNI): A dedicated physical cross-connect between two networks, typically used once traffic volume outgrows what a shared fabric port economically supports.
3. Paid Peering: One network pays the other for the interconnection but receives only the peer's routes, not full transit; common when traffic ratios are heavily asymmetric.
Why Peer?
Peering reduces transit costs, shortens paths (lowering latency), keeps traffic local, and gives both networks more control over how traffic flows between them.
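The cost argument can be made concrete with simple arithmetic. The sketch below compares monthly transit spend against flat IXP fees for traffic that could be shifted to peers; all prices and fee names are hypothetical placeholders, not real quotes.

```python
# Sketch: transit vs. IXP peering break-even for shiftable traffic.
# All dollar figures are illustrative assumptions, not market prices.

def transit_cost(mbps: float, price_per_mbps: float) -> float:
    """Monthly transit cost under simple per-Mbps commit billing."""
    return mbps * price_per_mbps

def peering_cost(port_fee: float, cross_connect: float, membership: float) -> float:
    """Monthly IXP cost: flat fees, independent of traffic volume."""
    return port_fee + cross_connect + membership

# Hypothetical: 4 Gbps of traffic that peers at the IXP could absorb
transit = transit_cost(4000, 0.75)      # 4000 Mbps at $0.75/Mbps
peering = peering_cost(1500, 300, 200)  # flat monthly IXP fees
print(f"transit ${transit:.0f}/mo vs peering ${peering:.0f}/mo")
```

Because IXP fees are flat while transit scales with volume, peering wins once enough traffic can be offloaded, which is why high-traffic networks peer aggressively.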
Peering Policies:
Networks define peering policies that specify requirements such as minimum traffic volume, presence at common locations, roughly balanced traffic ratios, and a 24x7 NOC. Policies are commonly described as open (peer with anyone), selective (evaluated case by case), or restrictive (rarely peer).
PeeringDB (peeringdb.com) is the de facto database for peering information. Networks list their peering policies, IXP presence, and contact information. Before requesting peering, check PeeringDB to understand the target network's policy and find mutual exchange points.
Internet Exchange Points (IXPs) are physical locations where networks interconnect to exchange traffic. They're the meeting grounds of the Internet, reducing costs and improving performance for all participants.
How IXPs Work:
An IXP typically consists of a shared Layer 2 switch fabric, colocation space for member routers, optional route servers, and a neutral operator that maintains the fabric. Each member connects a router port to the fabric and receives an address on the IXP's shared peering LAN.
IXP Architecture:
```
[Member A] ---+
              |
[Member B] ---+---- IXP Switch Fabric ----+--- Route Server
              |                           |
[Member C] ---+                      [Member D]
              |
[Member E] ---+
```
Members connect to the switch fabric and can establish bilateral BGP sessions with any other member, peer multilaterally through route servers, or do both.
Major IXPs (approximate figures):
| IXP | Location | Peak Traffic | Members |
|---|---|---|---|
| DE-CIX Frankfurt | Germany | 14 Tbps | 1100+ |
| AMS-IX | Netherlands | 12 Tbps | 900+ |
| LINX | London | 8 Tbps | 900+ |
| Equinix IX (various) | Global | 10+ Tbps combined | 2000+ unique |
| NAPAfrica | South Africa | 3 Tbps | 600+ |
| IX.br | Brazil | 20 Tbps (largest by traffic) | 2000+ |
| HKIX | Hong Kong | 2 Tbps | 300+ |
Route Servers:
Route servers simplify IXP peering by providing a central BGP "hub":
Without Route Server: every pair of members that wants to peer must maintain its own bilateral BGP session, which scales quadratically with membership.
With Route Server: each member maintains one session per route server (typically two for redundancy) and receives routes from all other participants through it. The route server redistributes routes but does not forward traffic.
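The scaling difference is easy to quantify: a full bilateral mesh needs one session per member pair, while route servers need only a couple of sessions per member. A quick sketch:

```python
# Sketch: BGP session counts at an IXP with n members, comparing a full
# bilateral mesh against a pair of redundant route servers.

def bilateral_sessions(n: int) -> int:
    """Every member pair maintains one BGP session: n choose 2."""
    return n * (n - 1) // 2

def route_server_sessions(n: int, servers: int = 2) -> int:
    """Each member peers only with the route servers."""
    return n * servers

for n in (10, 100, 1000):
    print(f"{n} members: mesh={bilateral_sessions(n)}, "
          f"route servers={route_server_sessions(n)}")
```

At 1,000 members the mesh would need 499,500 sessions versus 2,000 via route servers, which is why large IXPs could not function without them.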
Route Server Policies:
Members can configure which routes to receive/send via the route server:
```
! Accept routes from all RS participants
route-map RS-IN permit 10
!
! Advertise to all RS participants except AS 65003
route-map RS-OUT permit 10
 set community 0:65003 additive
! 0:ASN = "do not announce to ASN" (common route-server convention)
```
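The 0:ASN convention above can be modeled in a few lines. This is a simplified sketch of the export decision only, not any specific route-server implementation; the community values are the ones from the example.

```python
# Sketch of a route server's export filter applying the 0:ASN
# "do not announce to ASN" community convention shown above.

def should_export(route_communities: set, target_asn: int) -> bool:
    """Return True if the route server should announce this route
    to the participant with the given ASN."""
    if f"0:{target_asn}" in route_communities:
        return False  # sender asked that this ASN not receive the route
    return True       # default: announce to all participants

route = {"0:65003", "65000:100"}          # communities on a received route
print(should_export(route, 65003))        # suppressed for AS 65003
print(should_export(route, 65004))        # announced to everyone else
```

Real route servers usually support richer variants (for example, announce-to-none and announce-only-to communities), but the matching logic follows this pattern.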
IXP Membership Considerations: port and colocation costs, whether enough of your traffic actually goes to other members, the need for a router at the exchange (or a remote-peering arrangement), and the operational overhead of maintaining many BGP sessions.
Most IXPs operate as neutral parties—they don't inspect traffic, don't sell transit, and treat all members equally. This neutrality is essential for trust. Members connect knowing the IXP won't favor competitors or monetize traffic data.
The largest sources of Internet traffic—video streaming, web content, software updates—come from content networks that have radically reshaped Internet topology.
Content Network Strategies:
1. Traditional CDN (Akamai, Cloudflare, Fastly): Thousands of cache locations deployed in or near many networks; content is served from whichever cache is closest to the requesting user.
2. Private Content Networks (Netflix Open Connect, Google GGC): The content provider operates its own global backbone and places its own cache appliances directly inside ISP networks.
3. Cloud Provider Edge (AWS CloudFront, Azure CDN): CDN services integrated with the provider's cloud platform and served from the provider's global edge locations.
BGP and Content Delivery:
Content networks use BGP to announce anycast prefixes from many locations, steer traffic by announcing different prefixes (or more-specifics) at different sites, and influence peer behavior with communities and AS-path prepending.
Anycast in Practice:
Anycast allows the same IP address to be announced from multiple locations:
```
Location: NYC         Location: LON         Location: TOK
     |                     |                     |
     | Announce            | Announce            | Announce
     | 192.0.2.0/24        | 192.0.2.0/24        | 192.0.2.0/24
     v                     v                     v
  Internet              Internet              Internet
```
Users automatically route to the "closest" (by BGP metrics) instance. This provides automatic failover when a site withdraws its announcement, lower latency for most users, and natural dispersion of DDoS traffic across sites.
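"Closest" here usually means shortest AS path, since that is the first tiebreaker most viewpoints reach after local preference. The sketch below picks the winning anycast site by path length; the AS numbers and paths are illustrative, not real routes.

```python
# Sketch: from one BGP viewpoint, the anycast site whose announcement
# arrives with the shortest AS path wins best-path selection
# (assuming equal local preference).

def best_anycast_site(paths: dict) -> str:
    """Pick the site whose announcement has the shortest AS path."""
    return min(paths, key=lambda site: len(paths[site]))

# AS paths seen for the same prefix 192.0.2.0/24 from three sites
paths = {
    "NYC": [64500, 64510],                   # two ASes away
    "LON": [64500, 64520, 64530],            # three ASes away
    "TOK": [64500, 64540, 64550, 64560],     # four ASes away
}
print(best_anycast_site(paths))  # NYC
```

Note that AS-path length is a coarse proxy for distance, which is why anycast occasionally sends users to a geographically distant site.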
Embedded Cache Model:
Netflix's Open Connect and Google's GGC place cache servers inside ISP networks:
```
[End User] → [ISP Access] → [Embedded Cache] → Content
                  ↓
             [ISP Core] → [Internet] → [Content Origin]
                          (only for uncached content)
```
Benefits: most content is served from inside the ISP's own network, which reduces the ISP's upstream transit costs, cuts latency for users, and offloads the content provider's backbone.
BGP may or may not be involved—some deployments use private routing or simple static routes.
Modern Internet traffic is dominated by a handful of content networks. Netflix, Google/YouTube, Meta, Amazon, Microsoft, and Apple together account for over 50% of Internet traffic. These networks peer directly with most ISPs, meaning the traditional transit hierarchy carries less and less of the actual traffic.
International Internet connectivity relies primarily on submarine cables—fiber optic cables laid across ocean floors connecting continents. These are the true backbone of global connectivity.
Submarine Cable Infrastructure:
| Statistic | Value |
|---|---|
| Total cables in service | ~500+ |
| Total cable length | >1.4 million km |
| Capacity | Hundreds of Tbps per cable |
| Lifespan | 25+ years |
| Cost | $100M-$1B+ per cable |
Cable Ownership Models: cables are financed either by consortia of carriers that share capacity, by private owners (increasingly the large content and cloud providers), or through long-term capacity leases (IRUs) on cables owned by others.
BGP and Submarine Cables:
Submarine cables appear in BGP as segments of AS paths:
```
User in USA → US ISP (AS 65001) → Cable Landing (US)
                                        ↓
                          Submarine Cable (no AS, Layer 1)
                                        ↓
Cable Landing (EU) → EU ISP (AS 65002) → Destination
```
At cable landing stations, networks peer or exchange traffic. The cable itself is transparent to BGP—it's just the physical medium.
Notable Submarine Cables:
| Cable | Route | Capacity | Owners |
|---|---|---|---|
| MAREA | Virginia ↔ Spain | 200+ Tbps | Microsoft, Meta |
| Dunant | Virginia ↔ France | 250 Tbps | Google |
| 2Africa | Africa ring + extensions | 180 Tbps | Meta consortium |
| PEACE | Asia ↔ Europe via Pakistan | 96 Tbps | PCCW, others |
| SEA-ME-WE 5 | Singapore ↔ France | 24 Tbps | Consortium |
| JUPITER | US ↔ Japan, Philippines | 60 Tbps | Amazon, Meta, others |
Cable Landing Points and Diversity:
Cable landing points are critical infrastructure: many cables share a small number of landing stations, so a single station outage, or damage to the shallow-water segments near it, can sever several cables at once.
Resilience Considerations:
Operators seeking resilience buy capacity on geographically diverse cables, avoid routes that share landing stations or chokepoints (such as the Suez and Malacca corridors), and maintain restoration agreements with other carriers.
Lessons from Outages:
Submarine cable damage (anchors, earthquakes, sabotage) has caused significant outages: the 2006 Hengchun earthquake cut multiple cables near Taiwan, the 2008 Mediterranean cable cuts degraded connectivity across the Middle East and South Asia, and the 2022 volcanic eruption severed Tonga's only international cable for weeks.
BGP enables automatic rerouting when cables fail—but only if diverse paths exist and BGP sessions are configured for redundancy.
Despite hundreds of cables, traffic concentration at cable landing points and through a small number of major carriers creates systemic risk. A few well-placed failures could significantly impact global Internet connectivity. Network operators should plan for diverse transit and carefully consider geographic dependencies.
The Internet backbone's reliance on BGP creates significant security challenges. Understanding threats and mitigations is essential for anyone operating production networks.
Major BGP Security Threats:
| Threat | Description | Impact |
|---|---|---|
| Prefix Hijacking | Malicious AS announces someone else's prefix | Traffic interception, blackhole |
| Route Leak | Accidental propagation of routes beyond intended scope | Traffic misdirection, overload |
| AS Path Manipulation | Inserting false ASes in path | Bypass filtering, traffic attraction |
| BGP Session Hijacking | Attacker takes over BGP session | Full control of routing |
| DDoS via BGP | Exploit routing to amplify attacks | Service disruption |
Notable BGP Security Incidents:
Pakistan/YouTube (2008): Pakistan Telecom hijacked YouTube's prefixes to block access domestically; leak propagated globally.
China Telecom (2010, 2019+): Suspicious route announcements affecting US government and military prefixes.
Cloudflare Leak (2019): Customer's route leak through Verizon caused widespread Cloudflare service degradation.
Russian Hijacks (2022): Multiple incidents during Ukraine conflict, including Twitter route hijacking.
Defense Mechanisms:
1. RPKI (Resource Public Key Infrastructure):
Cryptographically sign route origins: resource holders publish Route Origin Authorizations (ROAs) binding a prefix (and a maximum prefix length) to the AS authorized to originate it. Routers fetch validated data from RPKI validators and can drop or deprioritize announcements whose origin does not match.
RPKI Adoption (as of 2024): roughly half of announced IPv4 space is covered by ROAs, and a growing number of major transit providers and IXP route servers drop RPKI-invalid routes.
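The origin-validation outcome for any announcement follows the three-state logic of RFC 6811. The sketch below implements those semantics against a toy ROA list; the prefixes and AS numbers are made up for illustration.

```python
# Sketch of RPKI origin validation (RFC 6811 semantics): Valid if some
# covering ROA matches the origin AS and the announced length is within
# maxLength; Invalid if covered but mismatched; NotFound if no ROA covers it.
import ipaddress

def validate(prefix: str, origin_as: int, roas) -> str:
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_as in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.subnet_of(roa_net):
            covered = True
            if origin_as == roa_as and net.prefixlen <= max_len:
                return "Valid"
    return "Invalid" if covered else "NotFound"

roas = [("192.0.2.0/24", 24, 64500)]                # (prefix, maxLength, ASN)
print(validate("192.0.2.0/24", 64500, roas))        # Valid
print(validate("192.0.2.0/25", 64500, roas))        # Invalid: beyond maxLength
print(validate("192.0.2.0/24", 64666, roas))        # Invalid: wrong origin
print(validate("198.51.100.0/24", 64500, roas))     # NotFound
```

Note that a more-specific hijack of a ROA-covered prefix becomes Invalid automatically, which is why operators are advised to keep maxLength tight.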
2. BGP Prefix Filtering: providers filter customer announcements against IRR and RPKI data, apply maximum-prefix limits on sessions, and reject bogons (private, reserved, and unallocated space).
3. BGPSec (Emerging):
Cryptographically sign AS paths: each AS signs the update as it forwards it, preventing path forgery. The per-update cryptography is expensive, and deployment remains minimal.
4. Mutually Agreed Norms for Routing Security (MANRS):
Industry initiative with commitments to filter incorrect routing information, prevent traffic with spoofed source addresses, facilitate coordination between operators, and publish routing data to enable validation.
Use BGP monitoring services to detect hijacks of your prefixes: RIPE RIS, RouteViews, BGPStream, Cloudflare Radar, or commercial services. Configure alerts for unexpected origin AS changes or more-specific announcements. Early detection enables rapid response to hijacking incidents.
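The two alert conditions named above, an unexpected origin AS and a more-specific announcement, are simple to check against a monitoring feed. This is a minimal sketch; the prefixes, AS numbers, and feed format are hypothetical, and real monitors (RIPE RIS, BGPStream) provide far richer data.

```python
# Sketch: flag suspicious announcements of prefixes you originate,
# from a stream of (prefix, origin AS) observations.
import ipaddress

MY_PREFIXES = {"203.0.113.0/24": 64500}   # hypothetical: prefix -> expected origin

def check(announced_prefix: str, origin_as: int):
    """Return an alert string, or None if the announcement looks expected."""
    ann = ipaddress.ip_network(announced_prefix)
    for mine, expected_as in MY_PREFIXES.items():
        owned = ipaddress.ip_network(mine)
        if ann == owned and origin_as != expected_as:
            return f"ALERT: origin changed to AS{origin_as} for {mine}"
        if ann != owned and ann.subnet_of(owned):
            return f"ALERT: more-specific {ann} of {mine} from AS{origin_as}"
    return None

print(check("203.0.113.0/24", 64666))   # origin-change alert
print(check("203.0.113.0/25", 64500))   # more-specific alert
print(check("203.0.113.0/24", 64500))   # None: expected announcement
```

The more-specific case matters most in practice, because a /25 of your /24 wins longest-prefix matching everywhere it propagates.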
Not every router on the Internet carries the complete global routing table. The routers that do form the Default-Free Zone (DFZ)—they have no default route because they know explicit paths to every destination.
Default-Free Zone Characteristics:
| Aspect | Detail |
|---|---|
| Definition | Routers carrying full Internet routing table |
| No Default Route | Every destination has explicit entry |
| Members | Tier 1 cores, major transit provider edges, large enterprises |
| Table Size | ~950,000+ IPv4 prefixes, ~200,000+ IPv6 prefixes (2024) |
| Growth Rate | ~5-10% per year |
Routing Table Growth:
The DFZ routing table has grown dramatically:
| Year | IPv4 Prefixes | IPv6 Prefixes |
|---|---|---|
| 2000 | ~80,000 | ~200 |
| 2010 | ~350,000 | ~4,000 |
| 2020 | ~850,000 | ~100,000 |
| 2024 | ~950,000+ | ~200,000+ |
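The growth rates implied by these rounded figures are easy to extract. The sketch below computes compound annual growth from the 2010 and 2024 values; it is illustrative arithmetic only, not a forecast.

```python
# Sketch: compound annual growth rate (CAGR) of the DFZ routing table
# from the rounded figures in the table above.

def cagr(start: float, end: float, years: int) -> float:
    """Annualized growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

ipv4 = cagr(350_000, 950_000, 14)   # 2010 -> 2024
ipv6 = cagr(4_000, 200_000, 14)
print(f"IPv4 ~{ipv4:.1%}/yr, IPv6 ~{ipv6:.1%}/yr")
```

The IPv4 result (around 7% per year) lands inside the 5-10% range cited above, while IPv6 compounds several times faster from its smaller base.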
Factors Driving Growth: more multihomed networks, prefix deaggregation for traffic engineering, fragmentation of IPv4 blocks through address transfers, and ongoing IPv6 deployment.
Hardware Requirements:
DFZ routers require significant resources:
| Resource | Requirement |
|---|---|
| Memory | 16-32 GB for full table with multiple peers |
| CPU | Fast convergence requires capable processors |
| TCAM | Hardware forwarding requires large forwarding tables |
| FIB | 2M+ entries for IPv4+IPv6 with path diversity |
Reducing DFZ Participation:
Not every router needs the full table: edge routers can run with only a default route, accept customer and peer routes plus a default, or request a regional subset of routes from their transit provider.
Example Partial Table Request:
```
! Request only North American routes plus a default
! (the community value marking regional routes is provider-specific)
ip community-list standard NA-ROUTES permit 65001:840
ip prefix-list DEFAULT-ONLY seq 5 permit 0.0.0.0/0
!
route-map TRANSIT-IN permit 10
 match community NA-ROUTES
route-map TRANSIT-IN permit 20
 match ip address prefix-list DEFAULT-ONLY
! Implicit deny drops everything else
!
neighbor 10.0.0.1 route-map TRANSIT-IN in
```
Convergence Implications:
A full BGP table means longer initial convergence (loading and processing nearly a million routes), sustained CPU and memory load from routine update churn, and far more state to reconcile after a session reset.
Smaller networks should carefully consider whether DFZ participation is necessary.
Many predicted IPv4 table growth would exhaust router capacity. While tables are large, hardware has kept pace. IPv6 tables are smaller but growing faster. The bigger concern may be churn (update rate) rather than absolute size—each update requires processing across the DFZ.
BGP has evolved significantly since its creation in 1989, but the Internet continues to change. Several developments are shaping the future of inter-domain routing.
Emerging Technologies:
1. Segment Routing:
SR enables source routing across network domains: the ingress router encodes a path as an ordered list of segments (MPLS labels or, with SRv6, IPv6 addresses), removing per-flow state from the core and enabling fine-grained traffic engineering.
2. Secure Path Validation:
Beyond RPKI origin validation: ASPA (Autonomous System Provider Authorization) records, under IETF standardization, let networks publish their legitimate transit providers so that route leaks and implausible paths can be detected.
3. Software-Defined Networking:
SDN principles applied to inter-domain routing: centralized controllers learning topology via BGP-LS, programmatic egress selection, and intent-based policy replacing hand-maintained route-maps.
4. AI/ML for Routing:
Machine learning applications: anomaly and hijack detection in global BGP feeds, traffic prediction for capacity planning, and assisted root-cause analysis of routing incidents.
BGP's Evolution by Era:
| Era | Focus | Key Developments |
|---|---|---|
| 1990s | Basic connectivity | BGP-4, path vector routing, AS architecture |
| 2000s | Scalability | Route reflectors, confederations, MP-BGP |
| 2010s | Performance | ADD-PATH, fast failover, BFD integration |
| 2020s | Security | RPKI adoption, MANRS, improved filtering |
| 2030s+ | Automation | Segment Routing, AI-driven operations, zero-trust routing? |
Challenges Ahead:
Security at Scale: Full path validation remains elusive; new attack vectors emerge
Table Growth: More networks, more prefixes, more paths to process
Convergence Speed: Users expect instant failover; BGP takes seconds to minutes
Automation Complexity: More automation enables faster changes but also faster mistakes
Geopolitical Factors: Internet fragmentation, regulatory restrictions, sovereignty concerns
The Irreplaceable Protocol:
Despite decades of proposals, no BGP replacement has emerged. The protocol is deeply entrenched, incrementally extensible, expressive enough to encode real-world business policy, and proven at a scale no alternative has matched.
BGP will likely remain the Internet's inter-domain routing protocol for decades to come, evolving gradually as new needs emerge.
Follow NANOG, RIPE, APNIC meetings and mailing lists for operational routing discussions. Track IETF IDR working group for BGP protocol developments. Participate in communities like MANRS to stay connected with routing security best practices.
The Internet backbone is a remarkable emergent structure—thousands of independent networks choosing to interconnect, running BGP to exchange routing information. Understanding this infrastructure is essential for anyone operating networks at scale.
Module Complete:
You have now completed Module 6: BGP Operation. You understand the complete operational mechanics of the Border Gateway Protocol—from session establishment through route advertisement, best path selection, policy routing, and the protocol's critical role in the Internet backbone.
This knowledge positions you to design, operate, and troubleshoot BGP in production networks, from single-homed enterprises to global backbone providers.
Congratulations! You have mastered BGP Operation—the protocol that makes the Internet work. You now understand BGP sessions, route advertisement, best path selection, policy routing, and the Internet backbone. This foundation enables you to work with inter-domain routing at any scale, from enterprise multihoming to Internet backbone operations.