At the heart of every cloud deployment lies a virtual network—an isolated, software-defined network environment that provides the connectivity fabric for cloud resources. Whether called a VPC (Virtual Private Cloud) in AWS and GCP or a VNet (Virtual Network) in Azure, the concept is identical: a logical network that you define and control, completely isolated from other customers' networks on the same physical infrastructure.
Virtual networks represent one of the most important abstractions in cloud computing. They transform networking from a hardware-centric discipline—purchasing routers, running cables, configuring VLANs—into a software-defined practice where networks are declared, version-controlled, and instantiated via API calls.
This page explores virtual networks at depth: their architectural foundations, IP addressing and CIDR planning, subnet strategies, routing mechanics, and the sophisticated isolation techniques that enable multi-tenancy on shared infrastructure.
By completing this page, you will: (1) Understand the core abstractions comprising virtual networks, (2) Master CIDR notation and IP address planning for cloud environments, (3) Design effective subnet architectures for various workload patterns, (4) Comprehend virtual routing tables and traffic flow, (5) Appreciate the isolation mechanisms that enable secure multi-tenancy.
A virtual network is a software-defined representation of a traditional network. It provides all the essential functions of a physical network—IP addressing, subnetting, routing, firewall rules—implemented entirely in software running on the cloud provider's physical infrastructure.
Every virtual network comprises these fundamental building blocks:
1. Address Space (CIDR Block)
The foundation of any virtual network is its IP address space, defined using CIDR (Classless Inter-Domain Routing) notation. When you create a VPC/VNet, you specify an IP range—typically from private RFC 1918 space (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
2. Subnets
Subnets divide the virtual network's address space into smaller segments. Each subnet:
- Occupies a CIDR block drawn from the parent network's address space, non-overlapping with other subnets
- Resides in exactly one availability zone
- Associates with one route table and with subnet-level security controls (network ACLs)
3. Route Tables
Route tables contain rules that determine how traffic flows within the virtual network and to external destinations. Each subnet associates with exactly one route table, though one route table can serve multiple subnets.
4. Gateways
Gateways provide connectivity to destinations outside the virtual network:
- Internet Gateways for bidirectional public internet access
- NAT Gateways for outbound-only internet access from private subnets
- VPN Gateways for encrypted connectivity to on-premises networks
5. Security Controls
Layered controls regulate traffic at different granularities:
- Network ACLs (stateless) at the subnet boundary
- Security groups (stateful) at the instance or interface level
Under the hood, cloud providers implement virtual networks using sophisticated SDN (Software-Defined Networking) technologies. While the specific implementations are proprietary, the general architecture involves:
Hypervisor-Level Encapsulation
Virtual machines see a standard network interface, but their traffic is actually encapsulated before leaving the host. Techniques like VXLAN (Virtual eXtensible LAN) or proprietary encapsulation wrap customer packets in outer headers containing provider-controlled routing information.
┌─────────────────────────────────────────────────────┐
│ Customer's Perspective │
│ │
│ VM-A (10.0.1.5) ──────TCP─────→ VM-B (10.0.2.10) │
│ │
└─────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────┐
│ Provider's Implementation │
│ │
│ Host-1 Encapsulated Host-2 │
│ ┌──────┐ Tunnel ┌──────┐ │
│ │ VM-A │ ─→ [VXLAN(customer pkt)] ─→│ VM-B │ │
│ └──────┘ Provider backbone └──────┘ │
│ Fabric Switch │
└─────────────────────────────────────────────────────┘
Distributed Virtual Routers
Routing isn't performed by a single physical device. Instead, routing logic is distributed across every hypervisor host. When a VM sends a packet, the hypervisor immediately knows the next hop without consulting a central router, enabling wire-speed performance.
Control Plane / Data Plane Separation
The control plane (route programming, policy distribution) operates separately from the data plane (packet forwarding). API calls to modify routes update a central controller, which propagates changes to all affected hypervisors. This separation allows massive scale without control plane bottlenecks.
Virtual networks are virtual in the same sense as virtual machines. Just as a VM appears to have dedicated CPU and memory but actually shares physical hardware, a virtual network appears to own its IP address space and routing infrastructure but actually shares physical switches, routers, and fiber with thousands of other customers. The cloud provider's job is making this sharing invisible, secure, and performant.
Thoughtful IP address planning is critical for cloud deployments. Unlike on-premises networks where you might stretch a range or renumber painfully, cloud networks often can't be resized after creation. Poor initial planning creates lasting constraints.
CIDR notation expresses an IP range as a base address and prefix length. The prefix length (after the /) indicates how many leading bits define the network; remaining bits identify hosts within that network.
| CIDR Block | Network Bits | Host Bits | Total Addresses | Usable Hosts* |
|---|---|---|---|---|
| /8 | 8 | 24 | 16,777,216 | 16,777,214 |
| /16 | 16 | 16 | 65,536 | 65,534 |
| /20 | 20 | 12 | 4,096 | 4,094 |
| /24 | 24 | 8 | 256 | 254 |
| /28 | 28 | 4 | 16 | 14 |
| /32 | 32 | 0 | 1 | 1 (single host) |
*In cloud environments, providers reserve additional addresses per subnet (typically 5), reducing usable hosts further.
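The figures above are easy to reproduce with Python's standard ipaddress module; a quick sketch, assuming the AWS-style five reserved addresses per subnet:

```python
import ipaddress

def usable_hosts(cidr: str, provider_reserved: int = 5) -> int:
    """Total addresses in a CIDR block minus the provider's per-subnet
    reservations (5 in AWS: network, router, DNS, future use, broadcast)."""
    net = ipaddress.ip_network(cidr)
    return max(net.num_addresses - provider_reserved, 0)

for cidr in ["10.0.1.0/24", "10.0.0.0/20", "10.0.1.0/28"]:
    net = ipaddress.ip_network(cidr)
    print(f"{cidr}: {net.num_addresses} total, {usable_hosts(cidr)} usable in-cloud")
```

For a /24 this prints 256 total and 251 usable; for a /28, 16 total and only 11 usable, which is why very small subnets waste a large fraction of their space.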
Virtual networks should use private IPv4 addresses:
| Range | CIDR | Total Addresses | Typical Use |
|---|---|---|---|
| 10.0.0.0 – 10.255.255.255 | 10.0.0.0/8 | 16,777,216 | Large enterprises, cloud VPCs |
| 172.16.0.0 – 172.31.255.255 | 172.16.0.0/12 | 1,048,576 | Medium deployments, often conflicts with Docker default |
| 192.168.0.0 – 192.168.255.255 | 192.168.0.0/16 | 65,536 | Small networks, home networks (avoid due to conflicts) |
Common VPC Size: /16 (65,536 addresses)
A /16 provides substantial room for growth while remaining manageable: 65,536 addresses, enough to carve 256 /24 subnets (or 16 /20 subnets) across tiers and availability zones.
Enterprise Strategy: Hierarchical Allocation
For organizations with many accounts and environments:
10.0.0.0/8 (Enterprise supernet)
├── 10.0.0.0/12 (Production)
│ ├── 10.0.0.0/16 (Prod Account 1)
│ ├── 10.1.0.0/16 (Prod Account 2)
│ └── ...
├── 10.16.0.0/12 (Staging)
│ ├── 10.16.0.0/16 (Stage Account 1)
│ └── ...
├── 10.32.0.0/12 (Development)
│ ├── 10.32.0.0/16 (Dev Account 1)
│ └── ...
└── 10.48.0.0/12 (Corporate / On-Premises)
This approach:
- Keeps every allocation non-overlapping by construction
- Lets routes and firewall rules summarize an entire environment with a single /12 prefix
- Leaves unallocated /12 blocks available for future environments
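A plan like this can be generated rather than hand-maintained; a sketch using Python's standard ipaddress module (the environment names are illustrative):

```python
import ipaddress

enterprise = ipaddress.ip_network("10.0.0.0/8")

# Carve the /8 supernet into /12 environment blocks, in order:
# 10.0.0.0/12, 10.16.0.0/12, 10.32.0.0/12, 10.48.0.0/12, ...
environments = dict(zip(["prod", "staging", "dev", "corp"],
                        enterprise.subnets(new_prefix=12)))

# Carve an environment /12 into /16 account-sized VPCs.
prod_vpcs = list(environments["prod"].subnets(new_prefix=16))

print(environments["dev"])   # 10.32.0.0/12
print(prod_vpcs[0], prod_vpcs[1])  # 10.0.0.0/16 10.1.0.0/16
```

Because each child block is derived from its parent, overlap is impossible by construction, and every environment collapses to one summarizable prefix.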
VPC peering requires non-overlapping CIDR blocks. If VPC-A uses 10.0.0.0/16 and VPC-B also uses 10.0.0.0/16, they cannot be peered—the provider can't know which 10.0.1.5 you mean. Plan your address space from day one with a global registry of CIDR allocations. IPAM (IP Address Management) tools are essential for large deployments.
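A minimal overlap check for a CIDR registry, again using the standard ipaddress module (the VPC names and blocks are hypothetical):

```python
import ipaddress

def peering_conflicts(vpc_cidrs: dict) -> list:
    """Return every pair of VPCs whose CIDR blocks overlap and
    therefore cannot be peered."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in vpc_cidrs.items()}
    names = sorted(nets)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]

registry = {
    "vpc-a": "10.0.0.0/16",
    "vpc-b": "10.0.0.0/16",   # duplicate block: cannot peer with vpc-a
    "vpc-c": "10.1.0.0/16",
}
print(peering_conflicts(registry))  # [('vpc-a', 'vpc-b')]
```

Running a check like this against the global allocation registry before creating a VPC is a lightweight substitute for (or complement to) a full IPAM tool.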
Cloud providers reserve several addresses in each subnet. Using AWS as an example, in a 10.0.1.0/24 subnet:
| Address | Reservation |
|---|---|
| 10.0.1.0 | Network address |
| 10.0.1.1 | Reserved for VPC router |
| 10.0.1.2 | Reserved for DNS |
| 10.0.1.3 | Reserved for future use |
| 10.0.1.255 | Broadcast address (not used in VPC, but reserved) |
Practical Impact: A /24 subnet provides 256 addresses, but only 251 are usable. For a /28 (16 addresses), only 11 are usable—over 31% overhead.
Cloud providers support dual-stack (IPv4 + IPv6) configurations:
AWS: Assign an Amazon-provided /56 IPv6 block to the VPC; create /64 subnets
Azure: Assign a /48 or /56 IPv6 block; subnets get /64
GCP: Assign a /48 per VPC; /64 per subnet
IPv6 addresses in cloud are globally unique and publicly routable by default (no NAT). Security groups/firewalls control access.
Why IPv6 Matters for Cloud:
- The address space is effectively inexhaustible, eliminating CIDR-exhaustion planning for IP-hungry container workloads
- No NAT is required, simplifying routing and removing NAT-related costs for IPv6 traffic
- A growing set of clients, carriers, and government mandates require IPv6 reachability
Subnets serve two primary purposes: availability zone distribution (resilience) and network segmentation (security). Effective subnet architecture achieves both while remaining maintainable.
Cloud providers organize physical data centers into Availability Zones (AZs)—isolated failure domains with independent power, cooling, and networking. Subnets are AZ-bound: a subnet resides in exactly one AZ.
Best Practice: Create mirror subnets in each AZ. If you have a 'web' subnet in AZ-A, create an identical 'web' subnet in AZ-B and AZ-C. Load balancers distribute traffic across AZs, and Auto Scaling places instances in multiple AZs.
VPC: 10.0.0.0/16
├── AZ-A (us-east-1a)
│ ├── public-a: 10.0.1.0/24 (ALB, bastion)
│ ├── private-a: 10.0.10.0/24 (application tier)
│ └── data-a: 10.0.20.0/24 (databases, caches)
│
├── AZ-B (us-east-1b)
│ ├── public-b: 10.0.2.0/24
│ ├── private-b: 10.0.11.0/24
│ └── data-b: 10.0.21.0/24
│
└── AZ-C (us-east-1c)
├── public-c: 10.0.3.0/24
├── private-c: 10.0.12.0/24
└── data-c: 10.0.22.0/24
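A mirrored layout like the one above can be derived programmatically rather than hand-numbered; a sketch in which the tier offsets (public=1, private=10, data=20) follow the numbering convention of the diagram:

```python
import ipaddress

def plan_subnets(vpc_cidr: str, azs: list, tier_offsets: dict) -> dict:
    """Mirror one /24 per tier into every AZ. tier_offsets gives the
    starting third octet for each tier; AZs take consecutive blocks."""
    vpc = ipaddress.ip_network(vpc_cidr)
    all_24s = list(vpc.subnets(new_prefix=24))
    plan = {}
    for tier, offset in tier_offsets.items():
        for i, az in enumerate(azs):
            # Name subnets by tier plus the AZ's trailing letter, e.g. "public-a".
            plan[f"{tier}-{az[-1]}"] = str(all_24s[offset + i])
    return plan

plan = plan_subnets("10.0.0.0/16",
                    ["us-east-1a", "us-east-1b", "us-east-1c"],
                    {"public": 1, "private": 10, "data": 20})
print(plan["public-a"], plan["private-b"], plan["data-c"])
# 10.0.1.0/24 10.0.11.0/24 10.0.22.0/24
```

Generating the plan keeps the mirroring exact as AZs or tiers are added, and the output can feed directly into infrastructure-as-code definitions.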
The distinction between public and private subnets is purely a matter of routing:
Public Subnets: the route table directs 0.0.0.0/0 to an Internet Gateway, so instances with public IPs are reachable from the internet.
Private Subnets: there is no route to an Internet Gateway; instances reach the internet outbound only, via a NAT Gateway, and accept no unsolicited inbound traffic.
Network Segmentation Principle:
Limit the blast radius of a compromised system. A web server breach shouldn't immediately grant access to your database. The layered architecture typically includes:
| Tier | Visibility | Route to IGW | Security Group From | Security Group To |
|---|---|---|---|---|
| Public/DMZ | Internet-facing | Yes | 0.0.0.0/0 (port 443/80) | Private tier only |
| Application | Private | No (NAT for egress) | Public tier, same tier | Data tier, public tier |
| Data | Private, restricted | No (rarely needs egress) | Application tier only | Application tier only |
Subnet size planning depends on workload type:
Container/Kubernetes Workloads:
Container platforms like EKS, AKS, and GKE consume IP addresses rapidly. In AWS EKS with VPC CNI, each pod gets a VPC IP address. A node might run 50+ pods, each consuming an address.
EKS Cluster with 100 nodes × 50 pods/node = 5,000 IP addresses
A /24 subnet (251 usable) is woefully insufficient.
A /20 subnet per AZ (4,091 usable each, over 12,000 across three AZs) provides room to grow.
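The arithmetic above as a quick capacity check; a sketch assuming one VPC IP per pod and five provider-reserved addresses per subnet, with the cluster's pods spread across one subnet per AZ:

```python
def subnet_capacity(prefix_len: int, reserved: int = 5) -> int:
    """Usable IPv4 addresses in a subnet after provider reservations."""
    return 2 ** (32 - prefix_len) - reserved

pods = 100 * 50                          # 100 nodes x 50 pods/node = 5,000 pod IPs
print(pods <= subnet_capacity(24))       # False: a single /24 holds only 251
print(pods <= 3 * subnet_capacity(20))   # True: three /20s hold 12,273
```

Running this kind of check against projected node counts before creating subnets is far cheaper than renumbering a live cluster.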
Traditional VM Workloads:
VMs consume fewer addresses. A /24 per tier per AZ often suffices:
3 AZs × /24 per tier (251 usable each) ≈ 750 instances per tier, over 2,250 across all three tiers
Serverless Workloads:
Lambda (AWS), Functions (Azure), and Cloud Functions (GCP) can deploy inside VPCs for private resource access. Each concurrent invocation (depending on configuration) may consume an IP address.
Recommendation: Size generously. IP address space in virtual networks costs nothing, and running out of addresses mid-deployment forces painful re-architecture.
Beyond your application subnets, reserve space for: managed services needing VPC placement (RDS, ElastiCache, managed NAT gateways), future networking services (Transit Gateway attachments), VPN tunnel interfaces, and administrative infrastructure (bastion hosts, monitoring agents). A common mistake is allocating all VPC CIDR space to application subnets, leaving no room for these essential components.
Route tables define how traffic moves within and beyond your virtual network. Understanding routing is essential for both connectivity and security—traffic can only flow where routes exist.
A route table is a set of routing rules. Each rule specifies:
- A destination: the CIDR block matched against a packet's destination IP
- A target: where matching traffic is forwarded (local, an internet gateway, a NAT gateway, a peering connection, a VPN gateway)
Routing follows longest prefix match: when multiple rules could match, the most specific (longest prefix) wins.
Example: public subnet route table
| Destination | Target | Purpose |
|---|---|---|
| 10.0.0.0/16 | local | All VPC traffic stays within VPC |
| 0.0.0.0/0 | igw-1234abcd | Internet traffic via Internet Gateway |
Example: private subnet route table
| Destination | Target | Purpose |
|---|---|---|
| 10.0.0.0/16 | local | All VPC traffic stays within VPC |
| 0.0.0.0/0 | nat-1234abcd | Outbound internet via NAT Gateway |
| 10.1.0.0/16 | pcx-peer1234 | Traffic to peered VPC via peering connection |
| 172.16.0.0/12 | vgw-vpn1234 | On-premises traffic via VPN Gateway |
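Longest-prefix-match evaluation over a table like this one can be sketched in a few lines of Python (the route target IDs are the illustrative ones from the table):

```python
import ipaddress

routes = [
    ("10.0.0.0/16", "local"),
    ("0.0.0.0/0", "nat-1234abcd"),
    ("10.1.0.0/16", "pcx-peer1234"),
    ("172.16.0.0/12", "vgw-vpn1234"),
]

def lookup(dest_ip: str) -> str:
    """Pick the route with the longest matching prefix, as the VPC router does."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), target) for cidr, target in routes
               if dest in ipaddress.ip_network(cidr)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.3.7"))       # local: 10.0.0.0/16 beats 0.0.0.0/0
print(lookup("10.1.5.9"))       # pcx-peer1234: traffic to the peered VPC
print(lookup("93.184.216.34"))  # nat-1234abcd: falls through to the default route
```

Note that rule order never matters; only prefix length decides which route wins.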
Every route table implicitly includes a route for the VPC's CIDR with target 'local'. This route cannot be modified or deleted. It ensures all intra-VPC traffic routes correctly without configuration.
Implication: If you need to inspect or firewall east-west traffic (between subnets within the same VPC), you can't do it with route tables alone. The 'local' route always wins. You need network firewall services that intercept traffic before routing occurs.
Let's trace a request from a private subnet instance to the internet:
1. Application on 10.0.10.5 initiates HTTPS request to api.example.com
2. DNS resolution (via VPC DNS at .2 address) returns 93.184.216.34
3. Instance constructs packet:
Source: 10.0.10.5:54321 → Destination: 93.184.216.34:443
4. Hypervisor evaluates route table for subnet 10.0.10.0/24:
- 10.0.0.0/16 local? No, 93.x doesn't match.
- 0.0.0.0/0 via nat-gateway? Yes, longest matching prefix.
5. Packet routes to NAT Gateway in public subnet.
6. NAT Gateway performs SNAT:
New Source: 52.1.2.3:port (NAT Gateway's public IP)
Preserves: Destination: 93.184.216.34:443
7. NAT Gateway's route table has 0.0.0.0/0 → Internet Gateway.
8. Packet exits to internet via IGW.
9. Response returns to 52.1.2.3:port (NAT Gateway).
10. NAT Gateway reverse-translates, delivers to 10.0.10.5:54321.
NAT Gateways and Internet Gateways are stateful for TCP/UDP connections. Return traffic for an established connection automatically follows the reverse path—you don't need explicit routes for responses. This is why a private instance can reach the internet: it initiates connections outbound through NAT, and responses follow the connection state back.
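The SNAT step in the trace can be modeled as a small connection-tracking table; a minimal sketch, not a faithful NAT implementation (the addresses and ports follow the worked example):

```python
class NatGateway:
    """Minimal stateful source-NAT: rewrite outbound sources to the
    gateway's public IP and remember the mapping for return traffic."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 1024
        self.conntrack = {}  # public port -> (private_ip, private_port)

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        public_port = self.next_port
        self.next_port += 1
        self.conntrack[public_port] = (src_ip, src_port)
        return (self.public_ip, public_port, dst_ip, dst_port)

    def inbound(self, dst_port):
        # Return traffic only: ports with no mapping are dropped (None).
        return self.conntrack.get(dst_port)

nat = NatGateway("52.1.2.3")
pkt = nat.outbound("10.0.10.5", 54321, "93.184.216.34", 443)
print(pkt)                 # ('52.1.2.3', 1024, '93.184.216.34', 443)
print(nat.inbound(1024))   # ('10.0.10.5', 54321)
print(nat.inbound(9999))   # None: unsolicited inbound has no state
```

The last line is the whole security story of a private subnet: without an existing conntrack entry, inbound packets have nowhere to go.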
Transit Gateway / Virtual WAN Routing
For complex topologies (hub-and-spoke, many VPCs), centralized routing hubs simplify management. Instead of meshing N VPCs with N×(N-1)/2 peering connections, connect all VPCs to a Transit Gateway.
┌────────────────────────────────────────────────────────┐
│ Transit Gateway │
│ │
│ Route Table: │
│ - 10.0.0.0/16 → attachment-vpc-prod │
│ - 10.1.0.0/16 → attachment-vpc-dev │
│ - 10.2.0.0/16 → attachment-vpc-shared │
│ - 172.16.0.0/12 → attachment-vpn-onprem │
│ - 0.0.0.0/0 → attachment-egress-vpc (for inspection) │
│ │
└────────────────────────────────────────────────────────┘
▲ ▲ ▲ ▲
│ │ │ │
┌───┴───┐ ┌───┴───┐ ┌────┴────┐ ┌───┴───┐
│ Prod │ │ Dev │ │ Shared │ │ VPN │
│ VPC │ │ VPC │ │ Svc VPC │ │ OnPrem│
└───────┘ └───────┘ └─────────┘ └───────┘
Prefix List and Route Table Updates
For frequently changing destinations (like on-premises networks), use prefix lists or customer gateway advertisements (BGP). Routes update dynamically without manual intervention.
Private Endpoint/Service Routing
Cloud services (S3, DynamoDB, Storage Accounts) typically have public endpoints. Routing all this traffic through NAT or internet paths is inefficient and incurs egress costs. VPC Endpoints (AWS), Private Endpoints (Azure), and Private Service Connect (GCP) create internal routes to these services:
New route: s3.us-east-1.amazonaws.com (prefix list) → vpc-endpoint-s3
Now S3 traffic stays entirely within AWS's network, faster and cheaper.
Cloud multi-tenancy requires absolute isolation between customers. If Company A's traffic could leak to Company B, cloud computing would be fundamentally broken. Providers employ multiple layers of isolation.
The hypervisor is the first line of defense. Virtual machines from different customers may run on the same physical host, but:
- Hardware virtualization (CPU memory protection, IOMMU) isolates each VM's memory and device access
- The virtual switch delivers each VM only its own traffic; promiscuous mode sees nothing
- MAC and IP anti-spoofing enforcement prevents a VM from impersonating another
Virtual networks are isolated at the SDN layer:
Encapsulation: Customer traffic is wrapped in provider-controlled headers. Customer A's packet with destination 10.0.1.5 is encapsulated with metadata identifying it belongs to Customer A's VPC. Even if Customer B also has a 10.0.1.5, the provider's network fabric routes based on the encapsulation, never mixing traffic.
Control Plane Isolation: API calls to manage VPC settings are authenticated and authorized. Customer A cannot see, enumerate, or modify Customer B's VPCs—they're invisible.
Data Plane Isolation: The physical underlay network (provider's switches/routers) treats all customer traffic as opaque payloads. Routing decisions happen within encapsulation headers that customers cannot influence.
| Layer | Mechanism | What It Prevents |
|---|---|---|
| Hardware | CPU memory protection, IOMMU | VMs reading each other's memory; device passthrough attacks |
| Hypervisor | Virtual switch filtering, MAC enforcement | Traffic sniffing; MAC spoofing; promiscuous mode |
| SDN/Overlay | VPC-aware encapsulation (VXLAN, proprietary) | Cross-customer packet routing; IP conflicts between tenants |
| Control Plane | IAM, API authentication, account isolation | Unauthorized configuration access; cross-tenant enumeration |
| Customer Config | Security groups, NACLs, private subnets | Unintended exposure of customer resources within their own VPC |
Within your own VPC, subnets and security groups provide internal isolation:
Subnet-Level (Network ACLs):
- Stateless: return traffic must be explicitly allowed
- Support both allow and deny rules, evaluated in numbered order
- Apply to every instance in the subnet

Instance-Level (Security Groups):
- Stateful: return traffic for allowed connections is automatically permitted
- Allow rules only; anything not explicitly allowed is denied
- Attach to individual instances or network interfaces

Defense in Depth: Layer both controls. Even if a misconfigured security group allows too much, the NACL provides a backstop.
Reference security groups by ID, not CIDR blocks, whenever possible. Instead of allowing 'port 3306 from 10.0.10.0/24', allow 'port 3306 from sg-application-tier'. This decouples security from IP addressing—as instances scale or IPs change, security policy remains correct. It also enables reasoning about security in terms of roles ("apps can access databases") rather than addresses.
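With group-to-group references, "can X reach Y?" becomes a lookup over rules rather than over addresses; a minimal sketch with hypothetical group IDs:

```python
# Each rule: (source group, destination group) -> set of allowed ports.
sg_rules = {
    ("sg-application-tier", "sg-database-tier"): {3306},  # apps -> MySQL
    ("sg-public-alb", "sg-application-tier"): {8080},     # ALB -> apps
}

def allowed(src_sg: str, dst_sg: str, port: int) -> bool:
    """True if some rule permits src_sg to reach dst_sg on the port."""
    return port in sg_rules.get((src_sg, dst_sg), set())

print(allowed("sg-application-tier", "sg-database-tier", 3306))  # True
print(allowed("sg-public-alb", "sg-database-tier", 3306))        # False
```

Because the policy names roles rather than CIDRs, it stays correct no matter which IPs the members of each group currently hold.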
Cloud services (S3, Azure Storage, Cloud Storage) traditionally have public endpoints. Traffic to them leaves your VPC, traverses the internet (even within the provider's network), and creates attack surface.
Interface Endpoints (PrivateLink):
Create an elastic network interface inside your VPC that routes to the service. The service appears as a private IP within your VPC:
Without endpoint:
Instance → NAT → Internet → s3.amazonaws.com (public IP)
With endpoint:
Instance → vpce-s3-interface (10.0.5.100) → AWS internal fabric → S3
Gateway Endpoints:
For high-volume services like S3 and DynamoDB (AWS), gateway endpoints add routes to your route tables pointing to a special gateway. No ENI created, no per-hour charge.
Service Endpoints (Azure):
Similar concept—configure subnets to route traffic to Azure services over Microsoft backbone rather than internet.
Private Service Connect (GCP):
Create a private connection to Google APIs or customer services running in other VPCs.
Security Benefit: Service endpoints keep traffic off the public internet, reduce NAT Gateway egress costs, and enable stricter security policies ("S3 bucket only accessible from this VPC endpoint").
Virtual networks are the foundational abstraction for cloud connectivity. Mastering their design and operation is essential for building secure, scalable cloud architectures.
What's Next:
With virtual network fundamentals established, we'll explore Virtual Private Clouds (VPCs) in greater depth—the specific implementation patterns for designing production-ready network infrastructure across major cloud providers.
You now possess deep understanding of virtual network architecture, IP addressing, subnet design, routing, and isolation. These concepts form the backbone of every cloud deployment. The remaining pages build on this foundation with specific implementation patterns and connectivity options.