Kubernetes doesn't just run containers—it orchestrates entire distributed systems. With thousands of pods across hundreds of nodes, traditional port-based container networking breaks down. How do you address a specific pod among thousands of ephemeral workloads? How do you route traffic to services that scale up and down dynamically? How do you enforce security policies in a constantly changing environment?
Kubernetes answers these questions with a fundamentally different networking model: every pod gets its own IP address, there's no NAT between pods, and sophisticated abstractions (Services, Ingress, Network Policies) handle discovery, load balancing, and security. Understanding this model is essential for anyone operating Kubernetes in production.
By the end of this page, you will understand Kubernetes' fundamental networking requirements, the pod networking model, how Services abstract ephemeral pod IPs into stable endpoints, Ingress controllers for HTTP routing, Network Policies for security, and how CNI plugins implement these abstractions. You'll see how Kubernetes networking differs fundamentally from Docker's approach.
Kubernetes imposes three fundamental requirements on any compliant networking implementation. These requirements are non-negotiable—any CNI plugin or network solution must satisfy them:
- Every pod gets its own IP address, and all pods can communicate with all other pods without NAT
- All nodes (and node agents such as the kubelet) can communicate with all pods without NAT
- The IP a pod sees itself as is the same IP that every other pod and node sees it as
Why these requirements matter:
These requirements simplify application development dramatically. Traditional container networking (like Docker's published ports) forces applications to be aware of the mapping between container ports and host ports. Kubernetes' flat network means:
- Applications listen on their natural ports; there are no host-port conflicts to manage
- Any pod can reach any other pod directly by IP, regardless of which node either runs on
- The addresses that appear in logs and packet captures are the real pod IPs
The cost is implementation complexity: creating a flat network across potentially thousands of nodes requires sophisticated solutions (VXLAN, BGP, eBPF, etc.) provided by CNI plugins.
| Aspect | Docker (Bridge) | Kubernetes |
|---|---|---|
| Pod/Container IP visibility | Private to host | Globally routable in cluster |
| Cross-host communication | Requires port mapping or overlay | Native; any pod reaches any pod |
| Port usage | Host port conflicts possible | Each pod has full port space |
| NAT | Required for external access | No NAT within cluster |
| Service discovery | Per-network DNS or manual | Cluster-wide DNS + Services |
| IP per workload | Often shared (port mapping) | Every pod has unique IP |
NAT breaks many protocols (SIP, FTP active mode, some game protocols) and complicates debugging. Kubernetes' no-NAT requirement means any network protocol works without modification, and packet captures show real addresses—invaluable for production troubleshooting.
A pod is Kubernetes' fundamental unit of deployment—one or more containers that share network namespace, IPC namespace, and storage. From a networking perspective, all containers in a pod share the same network stack: they have the same IP address and can communicate via localhost.
Pod network namespace structure:
Every pod starts with an infrastructure ("pause") container whose only job is to hold the pod's network namespace open; the application containers then join that namespace and share its IP address and interfaces.
When the kubelet creates a pod, it:
- Creates the pause container, which owns the pod's network namespace
- Invokes the configured CNI plugin, which allocates the pod's IP and wires the namespace to the node's network (typically via a veth pair)
- Starts the application containers inside that shared namespace
Inter-container communication within a pod:
Containers in the same pod communicate via localhost. This enables powerful patterns: sidecar proxies, log and metrics exporters, and debug containers can sit next to the main application and reach it over the loopback interface with no extra network configuration.
One IP address, many ports:
All containers in a pod share the IP address but must coordinate on ports. Container A on port 8080 and Container B on port 9090 both use 10.244.1.5, but different ports. This is intentional—it encourages cohesive, tightly-coupled containers that form a logical unit.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: web
    image: nginx:alpine
    ports:
    - containerPort: 80     # Web server accessible on pod IP:80
  - name: metrics
    image: prom/prometheus:latest
    ports:
    - containerPort: 9090   # Prometheus accessible on pod IP:9090
                            # Also reachable from web container via localhost:9090
  - name: debug
    image: nicolaka/netshoot
    command: ["sleep", "infinity"]
    # Debug container shares same network namespace
    # Can see/debug network traffic from other containers
```
```bash
# View pod's assigned IP address
kubectl get pod multi-container-pod -o wide
# NAME                  READY   STATUS    IP           NODE
# multi-container-pod   3/3     Running   10.244.1.5   node-1

# All containers see the same network configuration
kubectl exec multi-container-pod -c web -- ip addr
kubectl exec multi-container-pod -c metrics -- ip addr
kubectl exec multi-container-pod -c debug -- ip addr
# All output: 10.244.1.5 (same IP!)

# Container can reach others via localhost
kubectl exec multi-container-pod -c debug -- curl localhost:80
# Returns nginx response from 'web' container

kubectl exec multi-container-pod -c debug -- curl localhost:9090
# Returns Prometheus response from 'metrics' container

# Pod-to-pod communication (across cluster)
kubectl exec multi-container-pod -c debug -- curl 10.244.2.10:8080
# Directly reaches another pod's IP
```

Pods are ephemeral—they're created, destroyed, and rescheduled constantly. Their IP addresses change with each incarnation. Kubernetes Services solve this problem by providing stable network endpoints that abstract away pod ephemerality.
A Service is a stable IP address (and DNS name) that load-balances traffic across a dynamic set of pods selected by labels. Clients connect to the Service; Kubernetes routes traffic to healthy pods.
| Type | Cluster IP | External Access | Use Case |
|---|---|---|---|
| ClusterIP (default) | Assigned from the service CIDR | None (internal only) | Internal microservice communication |
| NodePort | Yes (ClusterIP also assigned) | Any node IP on ports 30000-32767 | Development; simple external access |
| LoadBalancer | Yes (ClusterIP also assigned) | Cloud provider load balancer | Production external services |
| ExternalName | None (CNAME only) | DNS alias to an external hostname | Bridge to external services |
| Headless | None (clusterIP: None) | Direct pod IPs via DNS | StatefulSets; direct pod access |
```yaml
# ClusterIP Service (default) - Internal only
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  type: ClusterIP          # Can be omitted (default)
  selector:
    app: backend
    tier: api
  ports:
  - name: http
    port: 80               # Service port (what clients use)
    targetPort: 8080       # Pod port (what app listens on)
    protocol: TCP
---
# NodePort Service - External access via node IPs
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080        # Optional; auto-assigned if omitted
---
# LoadBalancer Service - Cloud LB provisioning
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 443
    targetPort: 8443
  # Cloud controller provisions external LB
---
# Headless Service - Direct pod IPs
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  clusterIP: None          # Makes it headless
  selector:
    app: postgres
  ports:
  - port: 5432
# DNS returns all pod IPs, not a single VIP
```

How Services work: kube-proxy and iptables
Kubernetes Services are typically implemented by kube-proxy, which runs on every node and programs the data plane:
iptables mode (default):
kube-proxy writes DNAT rules for every Service; a connection to a ClusterIP is rewritten to one of the backing pod IPs, chosen at random. Simple and battle-tested, but rules are evaluated linearly, which becomes costly with many thousands of Services.
IPVS mode (high performance):
kube-proxy programs the kernel's IPVS (IP Virtual Server) subsystem instead. IPVS uses hash tables for lookups and supports multiple load-balancing algorithms (round-robin, least connections, and others), so it scales better in large clusters.
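A quick way to confirm which mode kube-proxy is using, as a sketch: the ConfigMap layout below assumes a kubeadm-style installation, and ipvsadm must be installed on the node.

```bash
# kube-proxy's mode is set in its configuration (kubeadm stores it in this ConfigMap)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -A1 "mode:"

# In IPVS mode, each Service VIP appears as a virtual server with its pod backends
sudo ipvsadm -Ln

# In iptables mode, the equivalent state lives in the nat table instead
sudo iptables -t nat -L KUBE-SERVICES -n | head
```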
```bash
# View Service details
kubectl get svc backend-api -o wide
# NAME          TYPE        CLUSTER-IP     PORT(S)   SELECTOR
# backend-api   ClusterIP   10.96.145.82   80/TCP    app=backend,tier=api

# View endpoints (actual pod IPs behind the Service)
kubectl get endpoints backend-api
# NAME          ENDPOINTS
# backend-api   10.244.1.4:8080,10.244.2.7:8080,10.244.3.2:8080

# DNS resolution for Services
kubectl run dns-test --rm -it --image=busybox -- nslookup backend-api
# Server:    10.96.0.10
# Address:   10.96.0.10:53
# Name:      backend-api.default.svc.cluster.local
# Address:   10.96.145.82

# Full DNS pattern: <service>.<namespace>.svc.<cluster-domain>

# View kube-proxy's iptables rules (on a node)
sudo iptables -t nat -L KUBE-SERVICES -n | grep backend-api
# Shows DNAT rules routing ClusterIP to pod IPs
```

Services get DNS entries in the format: <service-name>.<namespace>.svc.cluster.local. Within the same namespace, use just <service-name>. Across namespaces, use <service-name>.<namespace>. For headless services, DNS returns A records for each pod.
Services provide Layer 4 (TCP/UDP) load balancing. For Layer 7 (HTTP/HTTPS) routing—hostname-based routing, path-based routing, TLS termination—Kubernetes provides the Ingress resource.
An Ingress defines HTTP routing rules mapping external requests to internal Services. An Ingress Controller (nginx, Traefik, HAProxy, AWS ALB, etc.) implements these rules—Kubernetes itself doesn't include one by default.
12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    # Ingress controller-specific annotations
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
spec:
  ingressClassName: nginx          # Which controller handles this
  tls:
  - hosts:
    - api.example.com
    - web.example.com
    secretName: example-tls        # Kubernetes Secret with TLS cert
  rules:
  # Host-based routing
  - host: api.example.com
    http:
      paths:
      - path: /v1
        pathType: Prefix
        backend:
          service:
            name: api-v1
            port:
              number: 80
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: api-v2
            port:
              number: 80
  # Different host
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-frontend
            port:
              number: 80
      - path: /static
        pathType: Prefix
        backend:
          service:
            name: cdn-service
            port:
              number: 80
```

Ingress Controller capabilities:
Popular Ingress controllers provide features beyond basic routing:
| Feature | nginx-ingress | Traefik | HAProxy | AWS ALB |
|---|---|---|---|---|
| TLS termination | ✅ | ✅ | ✅ | ✅ |
| Rate limiting | ✅ | ✅ | ✅ | ✅ |
| OAuth/OIDC auth | ✅ | ✅ | ✅ | ✅ |
| WebSocket support | ✅ | ✅ | ✅ | ✅ |
| gRPC support | ✅ | ✅ | ✅ | ✅ |
| Canary deployments | ✅ | ✅ | ✅ | Limited |
| Custom error pages | ✅ | ✅ | ✅ | Limited |
| Request tracing | Via integration | Built-in | Via integration | AWS X-Ray |
Kubernetes Gateway API is the next-generation replacement for Ingress, providing more expressive, extensible, and role-oriented configuration. It supports TCP/UDP routing, traffic splitting, and cross-namespace routing—capabilities limited in Ingress. As of 2024, Gateway API is GA and recommended for new deployments.
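For comparison, a minimal Gateway API equivalent of host- and path-based routing might look like the following sketch (the gatewayClassName and the backend Service name are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
spec:
  gatewayClassName: nginx          # Which Gateway controller implements this
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-routes
spec:
  parentRefs:
  - name: main-gateway             # Attach this route to the Gateway above
  hostnames:
  - api.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /v1
    backendRefs:
    - name: api-v1                 # Backing Service and port
      port: 80
```

The split between Gateway (infrastructure, typically owned by platform teams) and HTTPRoute (routing rules, owned by application teams) is the role-oriented design the note above refers to.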
By default, Kubernetes allows all pods to communicate with all other pods—a flat, permissive network. In production, this is often undesirable. Network Policies implement microsegmentation, controlling which pods can communicate with which others.
Network Policies are firewall rules at the pod level: you specify which ingress (incoming) and egress (outgoing) traffic is allowed. Once a pod is selected by any policy for a given direction, all traffic in that direction that isn't explicitly allowed is denied.
```yaml
# Default deny all ingress traffic to database pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-deny-all
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  # No ingress rules = deny all ingress
---
# Allow backend pods to access database pods on port 5432
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-allow-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 5432
---
# Allow frontend to reach backend API only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    - namespaceSelector:
        matchLabels:
          environment: production
    ports:
    - protocol: TCP
      port: 8080
---
# Restrict egress: backend can only reach database and external APIs
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 5432
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8        # Block internal IPs
        - 172.16.0.0/12
        - 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 443
```

Network Policy enforcement:
Network Policies are enforced by the CNI plugin, not by Kubernetes itself. Not all CNI plugins support Network Policies:
| CNI Plugin | Network Policy Support |
|---|---|
| Calico | ✅ Full support (L3-L4) |
| Cilium | ✅ Full + L7 policies |
| Weave | ✅ Full support |
| Flannel | ❌ No support* |
| AWS VPC CNI | ✅ With Calico addon |
| Azure CNI | ✅ With Azure Network Policy |
*Flannel requires adding Calico for policy enforcement
In production, always implement default-deny policies and explicitly allow required traffic. Start by denying all ingress/egress, then incrementally add allow rules. This 'zero-trust' approach prevents lateral movement if a pod is compromised.
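A minimal starting point for that zero-trust posture might look like the sketch below (the namespace name is illustrative); note the companion policy that re-allows DNS, which otherwise breaks as soon as egress is denied:

```yaml
# Deny all ingress and egress for every pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}           # Empty selector = every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
---
# Re-allow DNS so pods can still resolve Service names
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}         # Any namespace...
      podSelector:
        matchLabels:
          k8s-app: kube-dns         # ...but only the cluster DNS pods
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```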
Every Kubernetes cluster runs a DNS server (typically CoreDNS) that provides name resolution for Services and Pods. DNS is fundamental to service discovery—applications use DNS names, not IP addresses, to find each other.
DNS records Kubernetes creates:
| Resource | DNS Format | Record Type | Returns |
|---|---|---|---|
| Service | <svc>.<ns>.svc.cluster.local | A/AAAA | ClusterIP |
| Headless Service | <svc>.<ns>.svc.cluster.local | A/AAAA | All pod IPs |
| Pod (headless svc) | <pod-ip-dashed>.<svc>.<ns>.svc.cluster.local | A/AAAA | Pod IP |
| StatefulSet Pod | <pod-name>.<svc>.<ns>.svc.cluster.local | A/AAAA | Specific pod IP |
| Service Port | _<port>._<proto>.<svc>.<ns>.svc.cluster.local | SRV | Port + target |
| ExternalName Service | <svc>.<ns>.svc.cluster.local | CNAME | External hostname |
```bash
# View CoreDNS pods
kubectl get pods -n kube-system -l k8s-app=kube-dns
# NAME                       READY   STATUS    RESTARTS   AGE
# coredns-558bd4d5db-abcde   1/1     Running   0          15d
# coredns-558bd4d5db-fghij   1/1     Running   0          15d

# View CoreDNS ConfigMap
kubectl get configmap coredns -n kube-system -o yaml

# DNS resolution examples from a pod
kubectl run dns-test --rm -it --image=busybox:1.28 -- sh

# Within the same namespace
nslookup my-service
# Name:      my-service
# Address:   10.96.100.50

# Cross-namespace
nslookup postgres.database.svc.cluster.local
# Name:      postgres.database.svc.cluster.local
# Address:   10.96.200.100

# StatefulSet pods (stable DNS per pod)
nslookup postgres-0.postgres.database.svc.cluster.local
# Returns specific pod IP - crucial for stateful apps

# SRV records for port discovery
nslookup -query=SRV _http._tcp.my-service.default.svc.cluster.local

# External name service
nslookup external-db.default.svc.cluster.local
# Returns CNAME: rds.aws.amazon.com
```

Pod DNS configuration:
Each pod's /etc/resolv.conf is configured to use the cluster DNS:
```
nameserver 10.96.0.10   # CoreDNS ClusterIP
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```
The search directive allows short names: my-service automatically expands to my-service.default.svc.cluster.local.
The ndots:5 option means names with fewer than 5 dots are tried with search domains first. This can add latency for external lookups (like api.github.com, which has only 2 dots). For performance-sensitive applications, consider customizing this.
High-traffic clusters often hit DNS rate limits. Solutions: 1) Use NodeLocal DNSCache (runs DNS cache on each node), 2) Scale CoreDNS replicas, 3) Reduce ndots value, 4) Use FQDNs with trailing dot to bypass search expansion.
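Tuning ndots is done per pod (or per pod template) via dnsConfig; a minimal sketch, with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: external-api-client          # Illustrative pod that mostly calls external FQDNs
spec:
  dnsConfig:
    options:
    - name: ndots
      value: "2"                     # Names with 2+ dots skip search-domain expansion
  containers:
  - name: app
    image: alpine:3.19
    command: ["sleep", "infinity"]
```

With ndots set to 2, a lookup for api.github.com goes straight to the upstream resolver instead of first being tried against default.svc.cluster.local and the other search domains.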
Kubernetes doesn't implement pod networking itself—it delegates to CNI (Container Network Interface) plugins. A CNI plugin is responsible for:
- Allocating an IP address for each pod (IPAM)
- Creating the pod's network interface and wiring it into the node (typically a veth pair)
- Programming whatever routes, overlay tunnels, or cloud constructs make that IP reachable from every other node
- Optionally, enforcing Network Policies
| CNI Plugin | Networking Mode | Network Policy | Notable Features |
|---|---|---|---|
| Calico | BGP (L3) or VXLAN | ✅ Advanced | High performance; enterprise support; eBPF mode |
| Cilium | eBPF (kernel bypass) | ✅ L3-L7 | API-aware policies; observability; service mesh |
| Flannel | VXLAN overlay | ❌ | Simple; lightweight; requires Calico for policies |
| Weave | VXLAN + mesh | ✅ | Encryption; multicast; simple setup |
| AWS VPC CNI | Native VPC networking | ✅ (addon) | Pod IPs from VPC; no overlay; high performance |
| Azure CNI | Native VNET | ✅ (addon) | Pod IPs from VNET; Azure integration |
| Antrea | Open vSwitch | ✅ | VMware-backed; Windows support; observability |
CNI plugin architectures:
Overlay networks (VXLAN, Geneve):
Pod traffic is encapsulated in UDP packets between nodes, so the underlying network only needs node-to-node reachability. Easiest to deploy anywhere, at the cost of encapsulation overhead and an extra layer to debug (Flannel, Weave, Calico or Cilium in VXLAN mode).
Routed networks (BGP):
Each node advertises its pod CIDR to the network via BGP, so pod IPs are natively routable with no encapsulation. Best raw performance, but requires a network that can participate in, or at least tolerate, BGP (Calico's default mode).
Native cloud integration:
Pods receive IPs directly from the cloud network (VPC/VNET), making them first-class citizens of the cloud's routing, security groups, and load balancers. There is no overlay, but pod density per node can be limited by per-instance IP quotas (AWS VPC CNI, Azure CNI, GKE).
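Whatever the architecture, the plugin's footprint is visible on each node. A quick inspection sketch: the paths are the conventional defaults, and the interface-name patterns and the 10.244.0.0/16 pod CIDR are examples that vary by plugin and cluster.

```bash
# CNI configuration the kubelet hands to the plugin (filename varies by plugin)
ls /etc/cni/net.d/
cat /etc/cni/net.d/*.conflist

# Plugin binaries invoked for every pod sandbox
ls /opt/cni/bin/

# One node-side veth per pod, plus any overlay device (vxlan, flannel, cali*, etc.)
ip -brief link | grep -E 'veth|vxlan|flannel|cali'

# Routes installed for other nodes' pod CIDRs
ip route | grep 10.244.
```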
For cloud environments: use the cloud-native CNI (AWS VPC CNI, Azure CNI, GKE) for best performance and integration. For on-prem or advanced needs: Calico for performance and policies, Cilium for eBPF observability and L7 policies. For simplicity: Flannel + Calico (policy only).
We've covered Kubernetes networking comprehensively—from fundamental requirements to advanced policy enforcement.
What's next:
Kubernetes networking provides the foundation for pod communication. In the next page, we'll explore Service Meshes—an advanced pattern that adds observability, security, and traffic management at the application layer, using sidecar proxies to intercept and control all pod network traffic.
You now understand Kubernetes networking deeply—from the flat pod network to Services, Ingress, Network Policies, and DNS. These concepts form the foundation for operating any Kubernetes cluster in production.