Few organizations exist entirely in the cloud. Most enterprises maintain on-premises data centers, colocation facilities, branch offices, and legacy systems that must communicate with cloud workloads. Cloud connectivity encompasses the technologies and architectures that bridge these environments securely and reliably.
Connectivity decisions profoundly impact application performance, security posture, operational complexity, and cost. A video conferencing application that tolerates VPN latency variability differs fundamentally from a financial trading system requiring predictable sub-millisecond latency. Understanding the full spectrum of connectivity options enables architects to match solutions to requirements.
This page examines the complete cloud connectivity landscape: Site-to-Site VPN for encrypted tunnels over the internet, dedicated connections like AWS Direct Connect and Azure ExpressRoute, SD-WAN overlays, inter-cloud connectivity patterns, and the decision frameworks for selecting appropriate solutions.
By completing this page, you will: (1) Understand VPN technologies and their cloud implementations, (2) Master dedicated connectivity options (Direct Connect, ExpressRoute, Cloud Interconnect), (3) Design resilient hybrid connectivity architectures, (4) Evaluate SD-WAN and its role in cloud networking, (5) Apply decision frameworks for connectivity selection.
Site-to-Site VPN creates encrypted tunnels over the public internet, enabling private communication between your on-premises network and cloud VPCs. It's the most accessible connectivity option—requiring only an internet connection and a capable VPN device.
IPsec Protocol Suite:
Cloud VPNs use IPsec (Internet Protocol Security) for encryption and authentication. IPsec operates in tunnel mode, encapsulating entire IP packets within encrypted outer packets.
Original Packet:

┌──────────────────────────────────────┐
│ IP Header (10.0.1.5 → 172.16.2.10)   │
│ TCP/UDP Header                       │
│ Application Data                     │
└──────────────────────────────────────┘

IPsec Tunnel Mode Encapsulation:

┌────────────────────────────────────────────────────────────┐
│ Outer IP Header (VPN Gateway IPs)                          │
│ ESP Header (Security Parameters Index, Sequence Number)    │
│ ┌─────────────────────────────────────────────┐ ← Encrypted│
│ │ Original IP Header (10.0.1.5 → 172.16.2.10) │            │
│ │ TCP/UDP Header                              │            │
│ │ Application Data                            │            │
│ └─────────────────────────────────────────────┘            │
│ ESP Trailer + Auth Data                                    │
└────────────────────────────────────────────────────────────┘
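Tunnel-mode encapsulation adds header and trailer bytes to every packet, which shrinks the usable payload. The sketch below estimates the effective inner MTU using typical ESP overhead figures; the exact numbers depend on the negotiated cipher and whether NAT traversal is in use, so treat the values as illustrative assumptions rather than provider-published constants.

```python
# Rough effective-MTU estimate for IPsec tunnel mode (illustrative overhead values).
# Overhead varies with the cipher, integrity algorithm, and NAT traversal.

def effective_mtu(path_mtu: int = 1500,
                  outer_ip: int = 20,        # new outer IPv4 header
                  nat_t_udp: int = 8,        # UDP header when NAT traversal is active
                  esp_header: int = 8,       # SPI + sequence number
                  esp_iv: int = 8,           # IV (typical for AES-GCM)
                  esp_trailer_auth: int = 22) -> int:  # padding + ICV, approximate
    overhead = outer_ip + nat_t_udp + esp_header + esp_iv + esp_trailer_auth
    return path_mtu - overhead

if __name__ == "__main__":
    mtu = effective_mtu()
    print(f"Inner packets larger than ~{mtu} bytes will be fragmented")
    # Many operators clamp TCP MSS to (effective MTU - 40 bytes of IP + TCP headers).
    print(f"Suggested TCP MSS clamp: ~{mtu - 40} bytes")
```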
IKE (Internet Key Exchange):
Before data can be encrypted, the VPN endpoints must agree on cryptographic parameters and exchange keys. IKE (IKEv2 in most modern deployments) negotiates:

- Encryption algorithm (e.g., AES-256)
- Integrity/hash algorithm (e.g., SHA-256)
- Diffie-Hellman group for key exchange
- Authentication method (pre-shared key or certificates)
- Security association (SA) lifetimes, after which keys are renegotiated

The major providers package IPsec/IKE into managed VPN services:
| Provider | Service Name | Max Bandwidth | HA Options | Key Features |
|---|---|---|---|---|
| AWS | Site-to-Site VPN | 1.25 Gbps per tunnel | 2 tunnels per connection; multiple connections | Accelerated VPN (Global Accelerator) |
| Azure | VPN Gateway | 1.25 Gbps (VpnGw3) | Active-Active; Zone-redundant | Route-based and policy-based options |
| GCP | Cloud VPN | 3 Gbps per tunnel (HA VPN) | HA VPN with 4 tunnels | Classic VPN (single tunnel) deprecated |
On-Premises Side: a customer gateway device (hardware router, firewall, or software appliance) with a static public IP address and IPsec/IKE support; BGP support is strongly recommended for dynamic routing.
Cloud Side: a managed VPN termination point attached to the target network: a Virtual Private Gateway or Transit Gateway on AWS, a VPN Gateway on Azure, or a Cloud VPN gateway (HA VPN) on GCP.
Why Two Tunnels?
Cloud VPNs establish two tunnels to different endpoints for high availability. If one tunnel fails, traffic fails over to the other. AWS creates two tunnels to different physical routers in different data centers.
On-Premises                 Tunnel 1                AWS VPN Endpoint 1
VPN Device   ─────────────────────────────────────► (AZ-A)

             ─────────────────────────────────────► AWS VPN Endpoint 2
                            Tunnel 2                (AZ-B)
Both tunnels should be active (active-active configuration) for maximum resilience. Many customer gateways support ECMP (Equal-Cost Multi-Path) to utilize both tunnels simultaneously.
VPN over the internet inherits the internet's limitations: variable latency, packet loss, and congestion. AWS specifies 1.25 Gbps per tunnel, but you're unlikely to achieve that consistently. Expect 300-800 Mbps in practice, with latency spikes during internet congestion. For predictable, high-bandwidth requirements, dedicated connections are essential.
Static Routing: you manually configure which prefixes traverse the tunnel on both sides. Simple for a handful of stable networks, but every new subnet requires coordinated changes, and failover depends on tunnel liveness detection rather than routing-protocol convergence.
BGP (Border Gateway Protocol): the endpoints exchange routes dynamically over the tunnels. New subnets propagate automatically, failed tunnels are withdrawn from the routing table, and ECMP across multiple tunnels becomes possible.
Recommendation: Always use BGP if your equipment supports it. The operational benefits of automatic routing far outweigh the initial configuration complexity.
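As a concrete illustration of the BGP-based setup, here is a minimal boto3 sketch that creates a customer gateway, a virtual private gateway, and a Site-to-Site VPN connection with dynamic routing enabled. The ASN, public IP, and VPC ID are placeholders; tagging and error handling are omitted.

```python
# Minimal sketch (boto3) of an AWS Site-to-Site VPN using BGP.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Customer gateway: represents your on-premises VPN device.
cgw = ec2.create_customer_gateway(
    BgpAsn=65010,                 # placeholder on-premises private ASN
    PublicIp="203.0.113.10",      # placeholder public IP of your VPN device
    Type="ipsec.1",
)["CustomerGateway"]

# 2. Virtual private gateway attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1", AmazonSideAsn=64512)["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
                       VpnGatewayId=vgw["VpnGatewayId"])

# 3. VPN connection: StaticRoutesOnly=False enables BGP over both tunnels.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
    Options={"StaticRoutesOnly": False},
)["VpnConnection"]

# AWS returns configuration for two tunnels; bring both up for high availability.
print(vpn["VpnConnectionId"])
```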
Standard VPN traffic traverses the public internet all the way from your premises to the AWS Region hosting your virtual private gateway. Accelerated VPN instead routes traffic onto AWS's global network at the nearest edge location, then uses the AWS backbone to reach the VPN endpoint.
Benefits: lower jitter and packet loss, more consistent throughput, and reduced latency for sites far from the target Region, because only the short first hop to an AWS edge location rides the public internet.
Consideration: Additional cost (~$0.035/hour per VPN connection + data processing fees).
For enterprise workloads requiring predictable performance, dedicated connections provide private, high-bandwidth links that bypass the public internet entirely. Each major cloud provider offers equivalent services with different names and capabilities.
Direct Connect establishes a dedicated network connection from your premises to AWS. Traffic never touches the public internet.
Connection Types:
| Type | Bandwidth Options | Physical Requirement |
|---|---|---|
| Dedicated Connection | 1 Gbps, 10 Gbps, 100 Gbps | Cross-connect at Direct Connect location |
| Hosted Connection | 50 Mbps to 10 Gbps | Partner provides last-mile connectivity |
Architecture:
Your Data Center AWS
┌──────────────┐ ┌───────────────────┐ ┌──────────────┐
│ Network Gear │────│ Partner/Carrier │────│ Direct Connect│
│ │ │ Circuit │ │ Location │
└──────────────┘ └───────────────────┘ └──────┬───────┘
│
Cross-Connect
│
┌────────────────────────────────▼──────┐
│ AWS Network │
│ │
│ ┌─────────────────────────────┐ │
│ │ Direct Connect Gateway │ │
│ │ (or Virtual Private GW) │ │
│ └────────────┬────────────────┘ │
│ │ │
│ ┌────────────▼────────────────┐ │
│ │ Your VPCs │ │
│ └─────────────────────────────┘ │
└───────────────────────────────────────┘
Virtual Interfaces (VIFs): a single Direct Connect connection is divided into logical VIFs, each an 802.1Q VLAN with its own BGP session. Private VIFs reach VPC resources (via a Virtual Private Gateway or Direct Connect Gateway), Public VIFs reach AWS public endpoints such as S3 and DynamoDB, and Transit VIFs connect through a Direct Connect Gateway to Transit Gateways.
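A hedged boto3 sketch of provisioning a private VIF on an existing connection follows; the connection ID, Direct Connect gateway ID, VLAN, ASN, and peering addresses are placeholders you would replace with real values.

```python
# Sketch (boto3): create a private VIF on an existing Direct Connect connection.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

vif = dx.create_private_virtual_interface(
    connectionId="dxcon-EXAMPLE",                 # placeholder connection ID
    newPrivateVirtualInterface={
        "virtualInterfaceName": "prod-private-vif",
        "vlan": 101,                              # 802.1Q tag agreed with AWS
        "asn": 65010,                             # placeholder BGP ASN
        "amazonAddress": "169.254.255.1/30",      # placeholder BGP peering /30
        "customerAddress": "169.254.255.2/30",
        "directConnectGatewayId": "example-dx-gateway-id",  # placeholder
    },
)
print(vif["virtualInterfaceState"])  # e.g. 'pending' until the BGP session comes up
```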
| Aspect | AWS Direct Connect | Azure ExpressRoute | GCP Cloud Interconnect |
|---|---|---|---|
| Max Bandwidth | 100 Gbps | 100 Gbps (ExpressRoute Direct) | 200 Gbps (2×100 Gbps LAG) |
| Hosted Option | Partner Hosted Connection | ExpressRoute Provider | Partner Interconnect |
| Global Reach | Direct Connect Gateway | Global Reach add-on | Global Routing / VLAN attachment |
| Private Access | Private VIF to VPC | Private Peering to VNet | VLAN attachment to VPC |
| Cloud Service Access | Public VIF | Microsoft Peering | Cloud Router with NAT |
| Redundancy Model | Multiple connections + locations | Zone-redundant + geo-redundant | Dedicated: 99.99% SLA with diversity |
ExpressRoute provides private connectivity to Azure and Microsoft cloud services. Unlike Direct Connect's VIF model, ExpressRoute uses "peerings":
Private Peering: Connects to your Azure VNets via private IP addresses.
Microsoft Peering: Connects to Microsoft 365 and Azure public services via Microsoft's IP addresses.
ExpressRoute Circuits: the logical connection between your network and Microsoft, identified by a service key. Circuits are purchased in bandwidth tiers from 50 Mbps to 10 Gbps through a connectivity provider (or up to 100 Gbps with ExpressRoute Direct), and each circuit can carry both peering types.
Global Reach: Connects ExpressRoute circuits in different regions, enabling your on-premises sites to communicate through Microsoft's backbone. A data center in Singapore can reach a data center in London via Microsoft's network rather than the public internet.
Dedicated Interconnect: direct physical links between your network and Google's at a supported colocation facility, provisioned as 10 Gbps or 100 Gbps circuits that can be bundled into a LAG.
Partner Interconnect: capacity from 50 Mbps to 50 Gbps delivered through a supported service provider, useful when you cannot reach a Google colocation facility or need smaller increments.
VLAN Attachments: the logical layer on top of either interconnect type; each attachment associates the connection with a Cloud Router in a specific VPC and region, and BGP over the attachment exchanges routes.
Dedicated connections require physical circuit provisioning—expect weeks to months. Start the process early. In the interim, use VPN as temporary connectivity. Design your architecture so VPN and dedicated connections are additive; when the dedicated connection comes online, routing preferences shift traffic automatically without application changes.
Single Circuit = Single Point of Failure
A single dedicated connection provides no more availability than the fiber it runs on: one cable cut, port failure, or facility outage severs your cloud connectivity. Production architectures require redundancy:
Pattern 1: Dual Circuits at Same Location
Two connections terminating at the same Direct Connect location protect against device and port failures, but not against loss of the facility itself.
Pattern 2: Dual Circuits at Different Locations
Circuits at two different locations add facility diversity and underpin the highest-availability dedicated connectivity designs.
Pattern 3: Dedicated + VPN Backup
A lower-cost compromise: a VPN over the internet carries traffic only when the dedicated connection fails, controlled by BGP path preference.
On-Premises                              AWS Region
┌─────────────┐
│             │──► Direct Connect (Primary) ──────────► VPC
│ Core Router │
│             │──► VPN over Internet (Backup) ────────► VPC
└─────────────┘          ↑ Lower BGP preference
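The backup path only earns its keep if it is actually healthy when the dedicated connection fails, so monitor tunnel state continuously. A minimal boto3 sketch (with a placeholder VPN connection ID) is shown below.

```python
# Sketch: verify that the backup VPN's tunnels are healthy so failover
# from Direct Connect will actually work when needed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_vpn_connections(VpnConnectionIds=["vpn-0123456789abcdef0"])
for tunnel in resp["VpnConnections"][0]["VgwTelemetry"]:
    print(f"{tunnel['OutsideIpAddress']}: {tunnel['Status']} "
          f"({tunnel['AcceptedRouteCount']} routes accepted)")
    # Alert if either tunnel is DOWN -- a failed backup is otherwise
    # discovered at the worst possible time.
```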
Selecting the right connectivity option requires evaluating multiple dimensions: bandwidth, latency, reliability, security, cost, and time to deploy. No single option dominates all dimensions—the choice involves trade-offs aligned with your specific requirements.
| Dimension | VPN | Dedicated Connection | Accelerated VPN |
|---|---|---|---|
| Bandwidth | 1.25 Gbps per tunnel (variable actual) | 10-100 Gbps (dedicated line rate) | 1.25 Gbps (more consistent than VPN) |
| Latency | Variable (internet-dependent) | Consistent, predictable | More consistent than standard VPN |
| Reliability | Subject to internet outages | Private path; provider SLAs | AWS backbone reduces internet variability |
| Cost | Low (~$0.05/hour + data) | High (port fees + data transfer) | Medium (VPN + Global Accelerator) |
| Deployment Time | Hours to days | Weeks to months | Hours to days |
| Encryption | Built-in (IPsec) | Optional (MACsec) or add VPN overlay | Built-in (IPsec) |
Question 1: Do you need more than 1 Gbps of consistent bandwidth? If yes, only a dedicated connection delivers it reliably.
Question 2: Is latency consistency critical (sub-10 ms, predictable)? If yes, choose a dedicated connection; internet-based VPN cannot guarantee it.
Question 3: Do compliance or regulatory requirements mandate private connectivity that avoids the public internet? If yes, a dedicated connection is required.
Question 4: Can you wait weeks for circuit provisioning? If not, start with VPN today and migrate once the dedicated circuit is live.
Question 5: Is cost the primary constraint? If yes, Site-to-Site VPN has the lowest entry cost. (A small decision-helper sketch follows this list.)
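The questions above can be codified. The sketch below is an illustrative helper that mirrors this page's decision logic; the thresholds are the ones discussed here, not universal rules.

```python
# Illustrative decision helper mirroring the five questions above.
def recommend_connectivity(required_gbps: float,
                           needs_predictable_latency: bool,
                           compliance_requires_private: bool,
                           can_wait_weeks: bool,
                           cost_is_primary_constraint: bool) -> str:
    if required_gbps > 1 or needs_predictable_latency or compliance_requires_private:
        if can_wait_weeks:
            return "Dedicated connection (use VPN as interim/backup)"
        return "Start with VPN (or Accelerated VPN) now; order a dedicated circuit"
    if cost_is_primary_constraint:
        return "Site-to-Site VPN"
    return "Site-to-Site VPN; consider Accelerated VPN if consistency matters"

print(recommend_connectivity(5, True, False, True, False))
# -> Dedicated connection (use VPN as interim/backup)
```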
Many organizations deploy both VPN and dedicated connections. VPN serves as backup for dedicated connection failures. VPN also provides immediate connectivity for new sites while dedicated circuits are provisioned. Design your routing (BGP) to prefer dedicated paths, with automatic failover to VPN when dedicated is unavailable.
Consider a scenario: 5 TB monthly data transfer between on-premises and AWS, requiring 200 Mbps average bandwidth.
Option A: Site-to-Site VPN
VPN Connection: 2 tunnels × $0.05/hour × 730 hours = $73/month
Data Transfer: 5 TB × $0.02/GB egress* = ~$100/month
──────────────────────────────────────────────────────────
Total: ~$175/month
(*Egress from AWS; ingress is free)
Option B: Direct Connect 500 Mbps Hosted
Port Fee: $0.30/hour × 730 hours = $219/month
Data Transfer: 5 TB × $0.02/GB (reduced DX rate) = ~$100/month
──────────────────────────────────────────────────────────
Total: ~$320/month
Analysis: at 5 TB/month, VPN costs roughly half as much as the hosted Direct Connect option; at this volume the dedicated circuit is justified by consistency and latency, not by savings.
Break-even typically occurs at 10+ TB/month or when latency requirements justify the premium.
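The arithmetic above is easy to re-run for other volumes. The sketch below parameterizes it using this page's illustrative rates; in practice, internet egress rates are typically higher than Direct Connect data transfer rates, which is what produces a break-even point as volume grows, so substitute your region's current pricing.

```python
# Re-runs the worked example with the page's illustrative figures as parameters.
HOURS_PER_MONTH = 730

def vpn_monthly(tb_transferred: float, tunnel_hourly: float = 0.05,
                tunnels: int = 2, egress_per_gb: float = 0.02) -> float:
    return tunnels * tunnel_hourly * HOURS_PER_MONTH + tb_transferred * 1000 * egress_per_gb

def dx_hosted_monthly(tb_transferred: float, port_hourly: float = 0.30,
                      egress_per_gb: float = 0.02) -> float:
    return port_hourly * HOURS_PER_MONTH + tb_transferred * 1000 * egress_per_gb

for tb in (5, 10, 20):
    print(f"{tb} TB/month: VPN ~${vpn_monthly(tb):.0f}, "
          f"DX hosted ~${dx_hosted_monthly(tb):.0f}")
```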
Software-Defined WAN (SD-WAN) is transforming enterprise connectivity by abstracting the underlying transport (MPLS, broadband, LTE) and providing intelligent path selection, application-aware routing, and simplified management. SD-WAN increasingly integrates directly with cloud providers.
Traditional WAN Challenge: branch traffic is backhauled over MPLS to a central data center before reaching the internet or cloud, adding latency to SaaS and IaaS traffic and consuming expensive private-circuit bandwidth.
SD-WAN Solution: an overlay built across whatever transports are available (MPLS, broadband, LTE/5G) with centralized policy, application-aware path selection, and direct breakout to cloud services from the branch.
SD-WAN vendors offer pre-integrated cloud connectivity:
AWS Transit Gateway Connect: a native attachment type that links SD-WAN virtual appliances to a Transit Gateway using GRE tunnels and BGP, offering higher aggregate bandwidth than IPsec VPN attachments.
Azure Virtual WAN: a managed hub-and-spoke service with built-in SD-WAN partner integration; branch devices or partner hubs connect into Microsoft's global backbone.
GCP Network Connectivity Center: a hub-and-spoke model in which SD-WAN router appliances running in Google Cloud act as spokes and exchange routes with Cloud Router.
1. Direct Internet Access (DIA) to Cloud
Branch users access cloud and SaaS applications directly from the branch internet connection instead of backhauling through the data center, cutting latency and freeing private-circuit capacity.
2. Multi-Cloud Connectivity
SD-WAN provides unified connectivity across multiple cloud providers: virtual SD-WAN appliances in each provider's VPC or VNet join the same overlay fabric, so branches reach AWS, Azure, and GCP workloads under one routing and security policy.
3. Application Performance Optimization
SD-WAN identifies application traffic and applies QoS: real-time applications such as voice and video are steered onto the path currently meeting their loss, latency, and jitter targets, while bulk traffic uses remaining capacity (a toy path-selection sketch follows).
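The sketch below illustrates the idea of application-aware path selection: each application class has loss and latency targets, and traffic is steered onto the best transport currently meeting them. The SLA thresholds and path metrics are illustrative assumptions, not vendor defaults.

```python
# Toy illustration of application-aware path selection.
from dataclasses import dataclass

@dataclass
class PathMetrics:
    name: str
    latency_ms: float
    loss_pct: float

# Per-application SLA thresholds (illustrative).
APP_SLAS = {
    "voice":  {"max_latency_ms": 150, "max_loss_pct": 1.0},
    "video":  {"max_latency_ms": 250, "max_loss_pct": 2.0},
    "backup": {"max_latency_ms": 1000, "max_loss_pct": 5.0},
}

def select_path(app: str, paths: list[PathMetrics]) -> str:
    sla = APP_SLAS[app]
    eligible = [p for p in paths
                if p.latency_ms <= sla["max_latency_ms"]
                and p.loss_pct <= sla["max_loss_pct"]]
    # Prefer the lowest-latency path that meets the SLA; otherwise best effort.
    best = min(eligible or paths, key=lambda p: p.latency_ms)
    return best.name

paths = [PathMetrics("mpls", 18, 0.1),
         PathMetrics("broadband", 35, 0.4),
         PathMetrics("lte", 60, 1.5)]
print(select_path("voice", paths))   # -> mpls
print(select_path("video", paths))   # -> mpls
```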
SD-WAN with direct internet access from branches means security inspection moves from centralized data center to distributed branches. Ensure your SD-WAN solution includes adequate security features (NGFW, IPS, URL filtering, sandbox) or integrates with cloud-delivered security (SASE). Don't sacrifice security for performance.
Organizations increasingly deploy workloads across multiple cloud providers. Connecting these clouds—and ensuring they communicate efficiently—presents unique challenges.
Option 1: VPN Between Clouds
Each cloud terminates a VPN; tunnels traverse the public internet.
      AWS VPC                          Azure VNet
┌────────────────┐                ┌────────────────┐
│                │                │                │
│  Virtual       │     IPsec      │  VPN           │
│  Private GW    │◄── over ──────►│  Gateway       │
│                │    Internet    │                │
│                │                │                │
└────────────────┘                └────────────────┘

Pros: Low cost, quick to deploy.
Cons: Internet latency, variable performance, limited bandwidth.
Option 2: Third-Party Cloud Exchange
Services like Equinix Cloud Exchange Fabric, Megaport, or PacketFabric provide private interconnects between cloud providers.
     AWS                  Cloud Exchange              Azure
┌────────────┐         ┌─────────────────┐         ┌──────────────┐
│            │         │                 │         │              │
│  Direct    │◄───────►│   Equinix ECX   │◄───────►│ ExpressRoute │
│  Connect   │         │    (Private)    │         │              │
│            │         │                 │         │              │
└────────────┘         └─────────────────┘         └──────────────┘

Pros: Private connectivity, consistent performance, single provider for multi-cloud.
Cons: Monthly port fees, exchange provider dependency.
| Option | Bandwidth | Latency | Cost | Complexity |
|---|---|---|---|---|
| VPN | 1-2 Gbps | Variable | Low | Low |
| Cloud Exchange | 10+ Gbps | Consistent | Medium-High | Medium |
| Co-lo + Interconnects | 100 Gbps | Lowest | High | High |
| SD-WAN Overlay | Dependent on underlay | Variable | Medium | Medium |
Option 3: Colocation-Based Connectivity
Deploy your own network equipment in a colocation facility with direct interconnects to multiple cloud providers.
┌───────────────────────────────────────────────────────────┐
│ Colocation Facility (Equinix, etc.) │
│ │
│ ┌───────────────┐ │
│ │ Your Routers/ │ │
│ │ Firewalls │ │
│ └───┬───────┬───┘ │
│ │ │ │
│ ┌───▼───┐ ┌─▼───────┐ ┌─────────────┐ │
│ │ AWS │ │ Azure │ │ GCP Cloud │ │
│ │ Direct│ │ Express │ │ Interconnect│ │
│ │Connect│ │ Route │ │ │ │
│ └───────┘ └─────────┘ └─────────────┘ │
│ │
└───────────────────────────────────────────────────────────┘
Pros: Maximum control, lowest latency, highest bandwidth.
Cons: Requires physical infrastructure, highest cost, longest deployment.
1. IP Address Overlap
Different clouds may have been provisioned independently with overlapping CIDRs. Before interconnecting, audit address space across every environment (a quick overlap-check sketch appears after this list).
2. Asymmetric Routing
Traffic between clouds may take different paths outbound versus inbound, potentially traversing different security controls and breaking stateful inspection.
3. Inconsistent Security Models
AWS security groups, Azure NSGs, and GCP firewall rules have different semantics. Maintaining consistent policy requires abstraction or automation.
4. Egress Cost Accumulation
Data leaving any cloud incurs egress charges. Multi-cloud architectures with high inter-cloud traffic face compounded egress costs.
5. Hybrid DNS Complexity
Each cloud has its own DNS service. Cross-cloud resolution requires careful forwarding configuration.
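As noted under challenge 1, auditing address space before interconnection can be automated with a few lines of Python; the names and CIDRs below are placeholders illustrating two typical collisions, and in practice you would pull the real ranges from each provider's API or your IPAM.

```python
# Quick audit for overlapping CIDRs before interconnecting clouds.
import ipaddress
from itertools import combinations

cidrs = {
    "aws-prod-vpc":   "10.0.0.0/16",
    "azure-hub-vnet": "10.0.0.0/16",   # overlaps with aws-prod-vpc
    "gcp-data-vpc":   "10.1.0.0/16",
    "on-prem-core":   "10.1.0.0/20",   # overlaps with gcp-data-vpc
}

for (name_a, net_a), (name_b, net_b) in combinations(cidrs.items(), 2):
    if ipaddress.ip_network(net_a).overlaps(ipaddress.ip_network(net_b)):
        print(f"OVERLAP: {name_a} ({net_a}) <-> {name_b} ({net_b})")
```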
Multi-cloud often introduces more complexity than value. Before designing multi-cloud connectivity, honestly assess whether you need resources in multiple clouds or if one provider plus disaster recovery would suffice. True multi-cloud (portable workloads, failover between clouds) is expensive and operationally demanding. Many 'multi-cloud' deployments are actually different apps in different clouds with minimal cross-cloud traffic.
Cloud connectivity bridges your cloud environments with on-premises networks, branches, and other clouds. The right connectivity choice depends on your bandwidth, latency, reliability, and cost requirements.
What's Next:
With cloud connectivity understood, we conclude this module by exploring Hybrid Cloud architectures—the design patterns, operational considerations, and best practices for integrating on-premises and cloud environments into cohesive, manageable infrastructure.
You now possess comprehensive knowledge of cloud connectivity options, from VPN basics to enterprise-grade dedicated connections. You can evaluate trade-offs, design resilient architectures, and select appropriate solutions matched to workload requirements. The final page completes your understanding with hybrid cloud integration patterns.