By default, Kubernetes is completely permissive: every Pod can communicate with every other Pod in the cluster, regardless of namespace. This flat network model simplifies initial development but creates significant security risks:

• A compromised Pod can probe and attack any other workload in the cluster (lateral movement)
• Sensitive backends such as databases are reachable from every Pod, not just the applications that need them
• There is no network boundary between namespaces, teams, or environments sharing the cluster
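You can see the flat model for yourself by starting a throwaway Pod and calling a workload in a different namespace. A minimal sketch, assuming a Service named `api` listening on port 8080 exists in a `production` namespace:

```bash
# Launch a temporary Pod in the default namespace
kubectl run nettest --rm -it --image=busybox --restart=Never -- sh

# Inside the Pod: reach a Service in a DIFFERENT namespace.
# With no Network Policies in place, this succeeds from any Pod.
wget -qO- -T 2 http://api.production.svc.cluster.local:8080
```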
Network Policies are Kubernetes' answer to this problem. They act as Pod-level firewalls, allowing you to define exactly which Pods can communicate with which other Pods, on which ports, from which namespaces.
By the end of this page, you will understand Network Policy concepts, syntax, and patterns. You'll learn to implement zero-trust networking, isolate namespaces, protect databases, and debug policy-related connectivity issues.
A Network Policy is a Kubernetes resource that specifies how Pods can communicate with each other and with external endpoints. It uses label selectors to identify groups of Pods and then defines ingress (incoming) and/or egress (outgoing) rules.
Network Policies are not enforced by Kubernetes itself; they require a CNI (Container Network Interface) plugin that supports them. Popular options include Calico, Cilium, Weave Net, and Antrea. Cloud providers vary: GKE and AKS support them natively, while EKS has historically required installing a policy engine such as Calico (recent versions of the Amazon VPC CNI add native support).
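Because enforcement depends on the CNI, verify it before you rely on policies. A quick smoke test, sketched under the assumption that you can create and delete a scratch namespace:

```bash
# Create a scratch namespace with a deny-all ingress policy
kubectl create namespace np-test
kubectl apply -n np-test -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF

# Run a web server and wait for it to become ready
kubectl run web -n np-test --image=nginx --port=80
kubectl expose pod web -n np-test --port=80
kubectl wait --for=condition=Ready pod/web -n np-test --timeout=60s

# Try to reach it from another Pod in the same namespace
kubectl run probe -n np-test --rm -it --image=busybox --restart=Never -- \
  wget -qO- -T 2 http://web

# If the wget SUCCEEDS, your CNI is NOT enforcing Network Policies.
kubectl delete namespace np-test
```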
Without any Network Policies, a Pod is "non-isolated": it accepts traffic from any source and can send traffic to any destination.
With a Network Policy selecting a Pod, that Pod becomes "isolated" for the policy's traffic types: only connections explicitly allowed by at least one policy are permitted, and everything else is denied.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
  namespace: production
spec:
  # Which Pods does this policy apply to?
  podSelector:
    matchLabels:
      app: api-server
  # What types of traffic does this policy restrict?
  policyTypes:
    - Ingress   # Controls incoming traffic
    - Egress    # Controls outgoing traffic
  # Rules defining allowed incoming traffic
  ingress:
    - from:
        # Allow from Pods with specific labels
        - podSelector:
            matchLabels:
              app: web-frontend
        # Allow from Pods in a specific namespace
        - namespaceSelector:
            matchLabels:
              environment: production
        # Allow from IP blocks (external)
        - ipBlock:
            cidr: 10.0.0.0/8
            except:
              - 10.1.0.0/16   # But not this subnet
      ports:
        - protocol: TCP
          port: 8080
        - protocol: TCP
          port: 9090
  # Rules defining allowed outgoing traffic
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    - to:
        # Allow DNS resolution (essential!)
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
```

Network Policy selectors are powerful but subtle. Understanding their behavior is essential for writing correct policies.
| Selector | Matches | Example Use |
|---|---|---|
| podSelector | Pods in the SAME namespace | Allow web → api within namespace |
| namespaceSelector | All Pods in matching namespaces | Allow from any Pod in 'monitoring' ns |
| podSelector + namespaceSelector | Specific Pods in specific namespaces | Allow prometheus in monitoring ns |
| ipBlock | External IP ranges | Allow traffic from corporate network |
The placement of selectors within the YAML structure determines AND vs OR logic:
```yaml
# OR logic: two separate array items
- from:
    - podSelector:        # Match A
        matchLabels:
          app: web
    - namespaceSelector:  # OR Match B
        matchLabels:
          name: monitoring

# AND logic: both selectors in the SAME array item
- from:
    - podSelector:        # Match A AND B
        matchLabels:
          app: prometheus
      namespaceSelector:
        matchLabels:
          name: monitoring
```
```yaml
# Example 1: Allow from specific Pods in SAME namespace (podSelector only)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
  # This ONLY allows frontend Pods in the 'production' namespace
---
# Example 2: Allow from specific namespace (namespaceSelector only)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-namespace
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              purpose: monitoring
  # This allows ALL Pods in any namespace labeled purpose=monitoring
---
# Example 3: Allow specific Pods from specific namespace (AND logic)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-from-monitoring
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        # IMPORTANT: Same array item = AND logic
        - namespaceSelector:
            matchLabels:
              name: monitoring
          podSelector:
            matchLabels:
              app: prometheus
  # This ONLY allows Pods labeled app=prometheus
  # AND in namespaces labeled name=monitoring
---
# Example 4: Multiple sources with OR logic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-multiple-sources
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Separate array items = OR logic
        - podSelector:
            matchLabels:
              app: frontend        # Allow frontend in same namespace
        - podSelector:
            matchLabels:
              app: mobile-bff      # OR mobile-bff in same namespace
        - namespaceSelector:
            matchLabels:
              purpose: monitoring  # OR any Pod in monitoring namespaces
```

• podSelector: {} matches all Pods in the scope
• namespaceSelector: {} matches all namespaces
• Combined: namespaceSelector: {} + podSelector: {} matches everything in the cluster
• Careful: a rule with an empty or omitted from/to allows traffic from/to everywhere, while listing a policyType with no rules at all denies all traffic of that type (see the sketch below)
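To make the combined case concrete, here is a minimal fragment (it would sit under a policy's spec; the surrounding policy is assumed):

```yaml
  ingress:
    - from:
        # One peer item with BOTH empty selectors = every Pod in every namespace
        - namespaceSelector: {}
          podSelector: {}
```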
Zero-trust networking starts with denying all traffic by default and then explicitly allowing only necessary communication. This is implemented using Default Deny policies that select all Pods but define no allow rules.
```yaml
# Default deny all ingress traffic to all Pods in namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}   # Selects ALL Pods in the namespace
  policyTypes:
    - Ingress       # Denies all incoming traffic
  # No ingress rules = deny all ingress
---
# Default deny all egress traffic from all Pods in namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress        # Denies all outgoing traffic
  # No egress rules = deny all egress
---
# Default deny both ingress AND egress (most restrictive)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  # No rules = deny everything
---
# IMPORTANT: Allow DNS when denying egress!
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}   # Applies to all Pods
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

When implementing egress deny policies, you MUST allow DNS traffic to kube-dns/CoreDNS. Without DNS, Pods cannot resolve Service names and most applications will fail. This is one of the most common Network Policy mistakes.
Here are battle-tested patterns for common security requirements:
Databases should only be accessible from application Pods that need them:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-protection
  namespace: production
spec:
  # Apply to database Pods
  podSelector:
    matchLabels:
      app: postgresql
      tier: database
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Only allow from API servers
    - from:
        - podSelector:
            matchLabels:
              app: api-server
              tier: backend
      ports:
        - protocol: TCP
          port: 5432
    # Allow from backup jobs in ops namespace
    - from:
        - namespaceSelector:
            matchLabels:
              name: ops
          podSelector:
            matchLabels:
              app: backup-agent
      ports:
        - protocol: TCP
          port: 5432
  egress:
    # Only allow DNS (for initial setup/configuration)
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
```

Isolate namespaces so tenants/teams cannot access each other:
```yaml
# Apply this to each namespace that should be isolated
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation
  namespace: team-alpha   # Repeat for each namespace
spec:
  podSelector: {}   # All Pods in namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Only allow traffic from within same namespace
    - from:
        - podSelector: {}   # Any Pod in THIS namespace
    # Allow from Ingress controllers (external traffic)
    - from:
        - namespaceSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
          podSelector:
            matchLabels:
              app.kubernetes.io/component: controller
    # Allow from monitoring namespace
    - from:
        - namespaceSelector:
            matchLabels:
              purpose: monitoring
  egress:
    # Allow to same namespace
    - to:
        - podSelector: {}
    # Allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
    # Allow to external (internet) - optional
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8      # Exclude internal cluster IPs
              - 172.16.0.0/12
              - 192.168.0.0/16
```

Classic web application with frontend, backend, and database tiers:
```yaml
# Frontend: Allow from Ingress, allow to Backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
  namespace: myapp
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 8080
    - to:
        # DNS
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
---
# Backend: Allow from Frontend, allow to Database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: myapp
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              tier: database
      ports:
        - protocol: TCP
          port: 5432
    - to:
        # DNS
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    - to:
        # External APIs
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
      ports:
        - protocol: TCP
          port: 443
---
# Database: Only allow from Backend (most restrictive)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
  namespace: myapp
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 5432
  egress:
    - to:
        # DNS only
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
```

Egress policies are often neglected, but they are critical for preventing data exfiltration, containing compromised workloads, and blocking access to sensitive endpoints such as cloud metadata services. The following patterns cover these cases:
```yaml
# Pattern: Allow only specific external endpoints
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restricted-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payment-service
  policyTypes:
    - Egress
  egress:
    # Allow internal cluster traffic
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    # Allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
    # Allow specific external payment gateway IPs
    - to:
        - ipBlock:
            cidr: 198.51.100.0/24   # Stripe API servers
      ports:
        - protocol: TCP
          port: 443
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24    # PayPal API servers
      ports:
        - protocol: TCP
          port: 443
---
# Pattern: Block egress to metadata service (security hardening)
# The instance metadata service (169.254.169.254) is a common attack vector
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-metadata-service
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Allow everything EXCEPT metadata service
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # Block cloud metadata service
---
# Pattern: Allow egress only to internal cluster
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-only-egress
  namespace: sensitive-data
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Allow to all internal cluster IPs only
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8     # Adjust to your cluster CIDR
        - ipBlock:
            cidr: 172.16.0.0/12
    # Explicitly allow DNS (an empty 'to' matches any destination)
    - to:
      ports:
        - protocol: UDP
          port: 53
```

Cloud instance metadata services (169.254.169.254) expose sensitive credentials and information. Attackers who compromise a Pod can query this endpoint to escalate privileges. Always consider blocking this in production environments.
Understanding how multiple Network Policies interact is essential for complex environments.
```yaml
# Policy A: Allow from frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
---
# Policy B: Allow from monitoring
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - protocol: TCP
          port: 9090

# RESULT: the 'api' Pods allow:
# - TCP 8080 from frontend Pods in the production namespace (from Policy A)
# - TCP 9090 from any Pod in namespaces labeled monitoring (from Policy B)
# All other ingress traffic is denied (the Pods are isolated because
# at least one policy selects them)
```

| Scenario | Result | Explanation |
|---|---|---|
| No policies select a Pod | All traffic allowed | Pod is not isolated |
| One policy selects, no rules | All traffic of that type denied | Isolation without allows |
| Multiple policies select | Union of all rules | Additive behavior |
| Policy allows port 80, another allows port 443 | Both ports allowed | Rules combined |
| Two policies, one for ingress, one for egress | Both types restricted per their rules | Independent policy types |
There's no way to create a 'higher priority deny rule' that overrides an allow. If you need to prevent traffic that another policy allows, you must either remove/modify that policy or redesign your label structure to prevent the unwanted allow rule from matching.
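For example, suppose an existing policy allows ingress from any Pod labeled `team: payments`, but you want to exclude a batch worker that carries that label. You cannot add a deny rule; instead, narrow the allow. A sketch with illustrative labels:

```yaml
# BEFORE: allows EVERY Pod labeled team=payments
ingress:
  - from:
      - podSelector:
          matchLabels:
            team: payments

# AFTER: select on an additional, more specific label, so the
# batch worker (which lacks role=api-client) no longer matches
ingress:
  - from:
      - podSelector:
          matchLabels:
            team: payments
            role: api-client
```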
The standard Kubernetes NetworkPolicy API covers common use cases, but CNI plugins like Calico and Cilium offer extended capabilities through custom resources.
Calico extends Network Policies with cluster-wide policies and host-level controls:
```yaml
# Calico GlobalNetworkPolicy - applies across all namespaces
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-external-egress
spec:
  # Apply to all Pods in all namespaces
  selector: all()
  types:
    - Egress
  egress:
    # Allow internal cluster traffic
    - action: Allow
      destination:
        nets:
          - 10.0.0.0/8
    # Allow DNS
    - action: Allow
      protocol: UDP
      destination:
        ports:
          - 53
    # Deny everything else (explicit deny available in Calico)
    - action: Deny
      destination:
        notNets:
          - 10.0.0.0/8
---
# Calico supports application layer (L7) policies
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: l7-policy
  namespace: production
spec:
  selector: app == 'api'
  ingress:
    - action: Allow
      http:
        methods: ["GET", "POST"]
        paths:
          - exact: /api/health
          - prefix: /api/v1/
    - action: Deny
```

Cilium uses eBPF for high-performance, feature-rich network policies:
```yaml
# Cilium L7 HTTP-aware policy
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-api-policy
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api/public/.*"
              - method: "POST"
                path: "/api/v1/.*"
                headers:
                  - 'Content-Type: application/json'
---
# Cilium DNS-aware egress policy
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: dns-aware-egress
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: payment-service
  egress:
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
    # Allow egress by DNS name (resolved automatically)
    - toFQDNs:
        - matchName: "api.stripe.com"
        - matchPattern: "*.amazonaws.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```

| Feature | Standard K8s | Calico | Cilium |
|---|---|---|---|
| L3/L4 Policies | ✅ | ✅ | ✅ |
| Namespace Selectors | ✅ | ✅ | ✅ |
| Global/Cluster Policies | ❌ | ✅ | ✅ |
| L7 HTTP Policies | ❌ | ✅ (Enterprise) | ✅ |
| DNS-Based Egress | ❌ | Limited | ✅ |
| Explicit Deny Rules | ❌ | ✅ | ✅ |
| Host-Level Policies | ❌ | ✅ | ✅ |
| Policy Visualization | ❌ | ✅ | ✅ (Hubble) |
Network Policy issues can be frustrating to debug because blocked traffic often fails silently. Here's a systematic approach:
```bash
# Step 1: Verify CNI supports Network Policies
kubectl get pods -n kube-system | grep -E 'calico|cilium|weave'
# If using EKS without Calico, policies won't work!

# Step 2: List all policies affecting a namespace
kubectl get networkpolicies -n production

# Step 3: Describe a specific policy
kubectl describe networkpolicy my-policy -n production

# Step 4: Check what policies select a specific Pod
# Get Pod labels first
kubectl get pod my-pod -n production --show-labels

# Then find policies that match those labels
kubectl get networkpolicies -n production -o yaml | \
  grep -B20 "app: my-app"   # Search for matching selectors

# Step 5: Test connectivity from source to destination
# Start a debug pod
kubectl run -it --rm netshoot --image=nicolaka/netshoot -n production -- bash

# Inside debug pod:
# nc -zv <service-name> <port>    # Test TCP connectivity
# curl -v <service-name>:<port>   # Test HTTP
# nslookup <service-name>         # Test DNS

# Step 6: Use network policy dry-run tools
# For Calico:
calicoctl get networkpolicies -n production -o yaml
calicoctl node status

# For Cilium:
cilium policy get -n production
cilium endpoint list
hubble observe --namespace production   # Live traffic flow

# Step 7: Temporarily disable policies to isolate issue
# Create allow-all policy (testing only!)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: debug-allow-all
  namespace: production
spec:
  podSelector: {}
  ingress:
    - {}
  egress:
    - {}
  policyTypes:
    - Ingress
    - Egress
EOF

# Don't forget to delete after testing!
kubectl delete networkpolicy debug-allow-all -n production
```

| Symptom | Cause | Solution |
|---|---|---|
| All traffic blocked | Default deny without allow rules | Add specific ingress/egress rules |
| DNS not working | Egress policy blocking DNS | Add rule allowing UDP 53 to kube-dns |
| Policies not enforcing | CNI doesn't support policies | Install Calico/Cilium or enable provider support |
| Some Pods work, others don't | Label mismatch in selectors | Verify Pod labels match policy selectors |
| Cross-namespace blocked | Missing namespaceSelector | Add namespaceSelector to from/to rules |
| Health checks failing | Policy blocking kubelet probes | Allow from node CIDR (see sketch below) |
| In-cluster API clients failing | Egress to the Kubernetes API server blocked | Allow egress to the API server (kubernetes.default.svc) |
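For the health-check row above, one common fix is to allow ingress from the node network so kubelet probes can reach your Pods. A minimal sketch; the CIDR is an assumption, substitute your cluster's node subnet:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kubelet-probes
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.240.0.0/16   # ASSUMPTION: node CIDR; adjust for your cluster
```

Note that some CNI plugins already exempt kubelet probes from policy enforcement, so verify whether this rule is actually needed in your environment.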
Use visualization tools to understand traffic flow:
• Cilium Hubble: Real-time network flow visibility
• Calico Enterprise: Policy visualization dashboard
• Network Policy Viewer: Open-source tool for visualizing policies
• Kube-network-policies: CLI tool to preview policy effects
We've comprehensively covered Kubernetes Network Policies. Let's consolidate the key takeaways:
• Kubernetes networking is flat and permissive by default; Network Policies add Pod-level firewalling
• Enforcement requires a CNI plugin that supports policies (Calico, Cilium, Weave Net, Antrea)
• Selector placement determines AND vs OR logic; empty selectors match everything in their scope
• Start zero-trust with default-deny policies, and always allow DNS when restricting egress
• Multiple policies are additive: there are only allow rules, no deny overrides
• CNI extensions from Calico and Cilium add cluster-wide, L7, and DNS-aware policies
What's Next:
You now understand Network Policies, from fundamentals through advanced patterns: you can implement zero-trust networking, protect databases, isolate namespaces, and troubleshoot policy issues. With Pod communication secured, the next concern is reliable service discovery. In the next page, we'll explore DNS in Kubernetes: how CoreDNS works, the DNS records created for Services, and how to debug DNS issues.