Docker didn't merely containerize applications—it revolutionized how developers think about networking. Before Docker, deploying a networked application meant wrestling with IP configurations, firewall rules, and port assignments. Docker introduced declarative network configuration: describe what you want, and the platform handles the how.
But beneath Docker's elegant CLI and compose files lies a sophisticated networking subsystem. Understanding this system—its drivers, its service discovery mechanisms, its integration with the host network—is essential for debugging production issues, optimizing performance, and designing scalable container architectures.
By the end of this page, you will master Docker's networking architecture: the built-in network drivers and when to use each, custom bridge network configuration, Docker's embedded DNS server, inter-container communication patterns, and networking in Docker Compose. You'll understand how to diagnose connectivity issues and optimize network performance for containerized applications.
Docker's networking is built on a plugin architecture called the Container Network Model (CNM). This model defines three fundamental building blocks: sandboxes (a container's isolated network stack), endpoints (the connections that join a sandbox to a network), and networks (groups of endpoints that can communicate directly). Together, they provide a complete networking solution for containers.
The libnetwork implementation:
Docker's networking is implemented by libnetwork, a Go library that realizes the CNM: it manages the built-in and plugin network drivers, handles IP address management (IPAM), and backs the embedded DNS-based service discovery covered later in this page.
When you run Docker network commands, you're interacting with libnetwork through the Docker daemon. Understanding this architecture helps when troubleshooting: issues can originate at the driver level, the libnetwork level, or the kernel level.
```bash
# List all Docker networks
docker network ls
# Output:
# NETWORK ID     NAME      DRIVER    SCOPE
# 7fca4eb8c647   bridge    bridge    local
# 9f904ee27bf5   host      host      local
# cf03ee007fb4   none      null      local

# Inspect the default bridge network
docker network inspect bridge
# Shows: subnet, gateway, IPAM config, connected containers

# Key fields in output:
# "Driver": "bridge"       - Network driver in use
# "IPAM.Config[].Subnet"   - IP address range (typically 172.17.0.0/16)
# "IPAM.Config[].Gateway"  - Gateway IP (typically 172.17.0.1)
# "Containers"             - Map of connected container details

# View network from container's perspective
docker run -it --rm alpine sh -c "ip addr && ip route && cat /etc/resolv.conf"
# Shows container's network configuration
```

Docker includes several built-in network drivers, each implementing a different networking paradigm. The driver you choose fundamentally affects container isolation, performance, and connectivity patterns.
| Driver | Scope | Isolation | Performance | Primary Use Case |
|---|---|---|---|---|
| bridge | Local | Network namespace | Good (veth + bridge) | Default; most single-host deployments |
| host | Local | None | Best (no virtualization) | Performance-critical; network tooling |
| overlay | Swarm | VXLAN encapsulation | Good (encryption optional) | Multi-host container communication |
| macvlan | Local | Unique MAC per container | Excellent (L2 bypass) | Direct network integration; legacy apps |
| ipvlan | Local | L3 routing | Excellent (L3 bypass) | Environments with MAC restrictions |
| none | Local | Complete | N/A | Security isolation; custom networking |
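The isolation differences in the table are easy to observe from a shell. The following is a minimal sketch, assuming a running Docker daemon and the `alpine` image (the demo skips itself when Docker is unavailable), that contrasts the `none`, `host`, and default `bridge` drivers by listing the network interfaces each container actually sees:

```bash
#!/bin/sh
# Hedged demo: compare drivers by the interfaces visible inside a container.
# Skipped entirely when no Docker daemon is reachable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  echo "--- none: loopback only, complete isolation"
  docker run --rm --network none alpine ip -o link show

  echo "--- host: the host's own interfaces, no isolation"
  docker run --rm --network host alpine ip -o link show

  echo "--- bridge (default): lo plus a veth-backed eth0"
  docker run --rm alpine ip -o link show
else
  echo "docker unavailable: skipping driver comparison"
fi
DRIVER_DEMO=done
```

With `none` you should see only `lo`; with `host`, the same interface list as `ip -o link show` on the host itself; with `bridge`, an `eth0` that is one end of a veth pair attached to `docker0`.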
The bridge driver creates a Linux bridge (virtual switch) on the Docker host. Containers attach via veth pairs, receiving IP addresses from the bridge's subnet. The default docker0 bridge is created automatically, but custom bridges offer significant advantages.
Default bridge vs. Custom bridges:
Docker's default docker0 bridge lacks important features that custom user-defined bridges provide:
| Feature | Default Bridge | User-defined Bridge |
|---|---|---|
| DNS resolution by container name | ❌ (IP only) | ✅ (automatic) |
| Automatic DNS alias for services | ❌ | ✅ |
| Container isolation | All containers connected | Scoped to network |
| --link support | Required for name resolution | Not needed |
| Live connect/disconnect | ❌ | ✅ |
| Configurable options | Limited | Full control |
```bash
# Create a custom bridge network with specific configuration
docker network create \
  --driver bridge \
  --subnet 10.10.0.0/24 \
  --gateway 10.10.0.1 \
  --ip-range 10.10.0.128/25 \
  --opt "com.docker.network.bridge.name"="my-bridge" \
  --opt "com.docker.network.bridge.enable_icc"=true \
  --opt "com.docker.network.bridge.enable_ip_masquerade"=true \
  --opt "com.docker.network.driver.mtu"=1500 \
  my-custom-network

# Run containers on the custom network
docker run -d --name web --network my-custom-network nginx
docker run -d --name api --network my-custom-network myapi:latest

# Containers can now resolve each other by name!
docker exec web ping -c 2 api
# PING api (10.10.0.129): 56 data bytes
# 64 bytes from 10.10.0.129: seq=0 ttl=64 time=0.091 ms

# This works because Docker provides embedded DNS for user-defined networks
```

One of Docker's most powerful networking features is its embedded DNS server, which provides automatic service discovery for containers on user-defined networks. This DNS server intercepts DNS queries from containers and resolves container names to IP addresses without external configuration.
How Docker DNS works:
Every container on a user-defined network has its /etc/resolv.conf configured to use 127.0.0.11 as the nameserver—Docker's internal DNS server. When a container makes a DNS query:

1. The query is sent to the embedded DNS server at 127.0.0.11.
2. If the name matches a container name or network alias on a network the querying container shares, Docker DNS answers with that container's IP address.
3. Otherwise, the query is forwarded to the DNS servers configured on the Docker host.
```bash
# Create a custom network and run containers
docker network create app-network
docker run -d --name database --network app-network postgres:14
docker run -d --name cache --network app-network redis:7
docker run -d --name web --network app-network nginx

# Examine DNS configuration inside a container
docker exec web cat /etc/resolv.conf
# Output:
# nameserver 127.0.0.11
# options ndots:0

# DNS resolution works automatically
docker exec web nslookup database
# Server:    127.0.0.11
# Address:   127.0.0.11:53
# Non-authoritative answer:
# Name:      database
# Address:   172.18.0.2

# Containers can communicate using names
docker exec web ping -c 2 database
docker exec web ping -c 2 cache

# Network aliases for multiple names
docker run -d \
  --name api-v1 \
  --network app-network \
  --network-alias api \
  --network-alias backend \
  myapi:latest

# Now 'api', 'backend', and 'api-v1' all resolve to this container
```

When multiple containers share the same network alias, Docker DNS returns all IP addresses in round-robin order. This provides basic load balancing without additional infrastructure. However, for production workloads, consider using a proper load balancer (like Docker Swarm's ingress routing or Kubernetes Services) as DNS-based load balancing has limitations with caching clients.
DNS search domains and options:
Docker allows customization of container DNS behavior:
```bash
# Custom DNS servers (bypass Docker DNS)
docker run --dns 8.8.8.8 --dns 8.8.4.4 nginx

# Custom search domains
docker run --dns-search example.com nginx
# Allows 'ping server' to resolve as 'ping server.example.com'

# Custom DNS options
docker run --dns-opt timeout:3 --dns-opt attempts:2 nginx
```
Important limitation: Docker's embedded DNS only works on user-defined networks. Containers on the default bridge network cannot resolve each other by name—they must use IP addresses or the deprecated --link flag.
| Query Type | Resolved By | Returns |
|---|---|---|
| Container name on same network | Docker DNS | Container IP |
| Network alias | Docker DNS | IPs of all containers with that alias |
| Swarm service name | Docker DNS | Virtual IP (VIP) of the service |
| External domain | Host DNS (forwarded) | External IP(s) |
| Container name on different network | Host DNS (fails) | NXDOMAIN |
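The NXDOMAIN row in the table is easy to reproduce. Below is a hedged sketch, assuming a running Docker daemon and the `alpine` image (the names `dns-demo` and `target` are illustrative, and the demo cleans up after itself), showing that the same container name resolves on a user-defined network but not on the default bridge:

```bash
#!/bin/sh
# Demonstrate: name resolution works on a user-defined network,
# fails on the default bridge. Skipped when Docker is unavailable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker network create dns-demo >/dev/null
  docker run -d --name target --network dns-demo alpine sleep 60 >/dev/null

  # Same user-defined network: Docker DNS (127.0.0.11) resolves the name
  docker run --rm --network dns-demo alpine nslookup target

  # Default bridge: no embedded DNS for container names
  docker run --rm alpine nslookup target || echo "default bridge: NXDOMAIN (expected)"

  # Clean up the demo containers and network
  docker rm -f target >/dev/null
  docker network rm dns-demo >/dev/null
fi
DNS_DEMO=done
```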
Docker Compose simplifies multi-container application networking by automatically creating a dedicated network for each project. All services defined in a docker-compose.yml are placed on this network, enabling seamless inter-service communication using service names.
Default Compose networking behavior:
When you run docker-compose up, Compose:
- Creates a default bridge network for the project
- Names each network `{project}_{network}` (the default network is `{project}_default`)
- Attaches every service container to the appropriate networks, registering each service name with Docker's embedded DNS
```yaml
version: "3.9"

services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      - frontend
      - backend
    depends_on:
      - api

  api:
    build: ./api
    networks:
      - backend
      - database
    environment:
      - DATABASE_URL=postgres://db:5432/myapp
      # 'db' is resolvable because both are on 'database' network

  db:
    image: postgres:14
    networks:
      - database
    volumes:
      - db-data:/var/lib/postgresql/data

  cache:
    image: redis:7-alpine
    networks:
      backend:
        aliases:
          - redis
          - session-store
          # Multiple aliases for the same service

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true  # No external connectivity
  database:
    driver: bridge
    internal: true
    ipam:
      config:
        - subnet: 172.28.0.0/16

volumes:
  db-data:
```

Key networking concepts in Compose:
- **Internal networks:** services on networks marked `internal: true` have no external gateway—containers can only communicate with each other, not the outside world.
- **External networks:** networks marked `external: true` are not created or removed by Compose; they must already exist—useful for sharing networks across multiple Compose projects.
```yaml
version: "3.9"

services:
  app:
    image: myapp:latest
    networks:
      - shared-services  # Connect to existing network

networks:
  shared-services:
    external: true
    # Must be created before docker-compose up:
    # docker network create shared-services

# Another pattern: use the host network
services:
  monitoring:
    image: prometheus:latest
    network_mode: host  # Shares host's network namespace

# Container network mode - share another container's network
services:
  debug-sidecar:
    image: nicolaka/netshoot
    network_mode: "service:api"  # Shares api's network namespace
    # Can see api's network interfaces, useful for debugging
```

Publishing (exposing) container ports to the host or external networks is fundamental to making containerized services accessible. Docker provides multiple port publishing modes, each suited to different scenarios.
| Syntax | Mode | Host Port | Binding | Use Case |
|---|---|---|---|---|
| `-p 8080:80` | Standard | 8080 | 0.0.0.0:8080 | General external access |
| `-p 80` | Random | Random high port | 0.0.0.0:XXXXX | Development; avoid conflicts |
| `-p 127.0.0.1:8080:80` | Localhost only | 8080 | 127.0.0.1:8080 | Local development; security |
| `-p 192.168.1.100:8080:80` | Specific interface | 8080 | 192.168.1.100:8080 | Multi-NIC hosts |
| `-p 8080:80/udp` | UDP protocol | 8080 | 0.0.0.0:8080/udp | UDP services (DNS, QUIC) |
| `-p 8080:80/tcp -p 8080:80/udp` | Both protocols | 8080 | 0.0.0.0:8080 (both) | Services using both TCP/UDP |
```bash
# Standard port mapping
docker run -d -p 8080:80 nginx
# Host port 8080 → Container port 80

# Random port assignment (useful in CI/CD)
docker run -d -p 80 nginx
docker port <container_id> 80
# Output: 0.0.0.0:32768

# Bind to localhost only (not accessible from network)
docker run -d -p 127.0.0.1:8080:80 nginx
# Only accessible from the Docker host itself

# Multiple port mappings
docker run -d \
  -p 80:80 \
  -p 443:443 \
  -p 8080:8080 \
  myapp

# Port ranges
docker run -d -p 8000-8010:8000-8010 myapp
# Maps host:8000-8010 to container:8000-8010

# View all port mappings
docker port <container_id>
# 80/tcp -> 0.0.0.0:8080
# 443/tcp -> 0.0.0.0:8443
```

When you publish a port, Docker modifies iptables to allow incoming traffic, potentially bypassing host firewall rules (like ufw or firewalld). This can expose services unexpectedly. For production, consider using `-p 127.0.0.1:PORT:PORT` and placing a reverse proxy (nginx, traefik) in front, or use Docker's `--iptables=false` flag and manage firewall rules manually.
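A common middle ground is the `DOCKER-USER` iptables chain, which Docker creates but never modifies and which is evaluated before Docker's own forwarding rules. The following is a hedged sketch (requires root on the Docker host; the interface `eth0` and subnet `203.0.113.0/24` are illustrative) that restricts access to published ports; it only applies itself when the chain actually exists:

```bash
#!/bin/sh
# Hedged sketch: lock down published ports via the DOCKER-USER chain.
# Only applied when running as root on a host where Docker has created
# the chain; otherwise it is a no-op.
if [ "$(id -u)" = "0" ] && iptables -nL DOCKER-USER >/dev/null 2>&1; then
  # Insert (-I), do not append (-A): the chain ends in a RETURN rule,
  # and rules appended after it would never be evaluated.
  iptables -I DOCKER-USER -i eth0 -j DROP
  iptables -I DOCKER-USER -i eth0 -s 203.0.113.0/24 -j ACCEPT
  iptables -I DOCKER-USER -i eth0 -m conntrack \
    --ctstate ESTABLISHED,RELATED -j ACCEPT
  # Resulting order: established/related, trusted subnet, drop, RETURN
fi
FW_DEMO=done
```

Because `DOCKER-USER` takes precedence over the accept rules Docker installs for published ports, these rules survive container restarts, unlike rules placed directly in the `FORWARD` chain.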
```dockerfile
FROM nginx:alpine

# EXPOSE is documentation - does NOT publish
EXPOSE 80
EXPOSE 443

# Port publishing happens at runtime:
# docker run -p 80:80 -p 443:443 myimage
# or
# docker run -P myimage  # publishes 80 and 443 to random ports
```

Network issues in containerized environments can be challenging to diagnose. Docker's abstraction layers—namespaces, bridges, NAT, and overlay networks—add complexity. A systematic approach to debugging is essential.
1. Confirm the container is running: `docker ps` shows the target container up and healthy
2. Check network attachment: `docker inspect <container>` → `NetworkSettings` → `Networks`
3. Test DNS resolution: `docker exec <c> nslookup <target>`
4. Verify port mappings: `docker port <container>`
5. Inspect NAT rules: `sudo iptables -t nat -L -n` for NAT issues
6. Examine bridges: `brctl show` and `bridge link show` for bridge issues
```bash
# Essential debugging commands

# 1. Get container's network configuration
docker inspect --format='{{json .NetworkSettings.Networks}}' <container> | jq

# 2. Check if container can reach another
docker exec <container> ping -c 3 <target_name_or_ip>

# 3. DNS debugging
docker exec <container> nslookup <service_name>
docker exec <container> cat /etc/resolv.conf

# 4. Use netshoot for comprehensive debugging
docker run -it --rm --network container:<target> nicolaka/netshoot
# Gives you tcpdump, netstat, ss, curl, nmap, iptables, etc.

# 5. Capture traffic on Docker bridge
sudo tcpdump -i docker0 -nn

# 6. View Docker's iptables chains
sudo iptables -t nat -L DOCKER -n -v
sudo iptables -L DOCKER-USER -n -v

# 7. Check connectivity from host to container
ping $(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container>)

# 8. Trace packet path (requires kernel tracing tools)
sudo strace -e trace=network docker exec <container> curl http://target:80
```

nicolaka/netshoot is an essential tool for Docker network debugging. It contains all common networking utilities (tcpdump, netstat, dig, curl, etc.) and can attach to another container's network namespace for in-depth troubleshooting: `docker run --rm -it --network container:myapp nicolaka/netshoot`
| Issue | Symptoms | Diagnosis | Solution |
|---|---|---|---|
| Container can't reach external internet | ping 8.8.8.8 fails | Check iptables masquerade rule | Enable forwarding: `sysctl -w net.ipv4.ip_forward=1`; restart Docker to restore its iptables rules |
| DNS resolution fails inside container | nslookup google.com fails | Check /etc/resolv.conf points to 127.0.0.11 | Ensure using user-defined network, not default bridge |
| Container-to-container fails (same host) | ping by name or IP fails | Check both containers on same network | docker network connect <net> <container> |
| Published port not accessible | curl localhost:8080 fails | docker port shows mapping; check iptables | Verify no host firewall blocking; check binding address |
| Overlay network not working | Cross-host communication fails | Check UDP 4789 (VXLAN) is open between hosts | Open firewall for UDP 4789, TCP 7946, UDP 7946 |
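Several rows of the table can be checked from the host with read-only commands. A minimal sketch (assumes a Linux host; port 8080 is an illustrative example, and nothing is modified):

```bash
#!/bin/sh
# Read-only host checks matching the troubleshooting table above.

# 1. Container egress needs IP forwarding enabled on the host
FWD=$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo unknown)
echo "net.ipv4.ip_forward=$FWD"  # must be 1 for NATed internet access

# 2. A published port should show a host-side listener
#    (prints the count of TCP listeners bound to :8080; 0 means none)
if command -v ss >/dev/null 2>&1; then
  ss -tln | grep -c ':8080 ' || true
fi

# 3. Overlay networks need VXLAN + gossip ports open between hosts:
#    UDP 4789 (data plane), TCP/UDP 7946 (control plane)
echo "check remote hosts with: nc -zvu <host> 4789"
```

If `ip_forward` is 0, containers on bridge networks can reach each other but not the internet, matching the first row of the table.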
Docker networking introduces overhead compared to bare-metal networking. Understanding and minimizing this overhead is critical for performance-sensitive applications.
- Use `--network host` for latency-critical applications.
- Disable inter-container communication on bridges that don't need it: `--opt com.docker.network.bridge.enable_icc=false`
```bash
# Benchmark network performance with iperf3

# Server container
docker run -d --name iperf-server --network my-network \
  networkstatic/iperf3 -s

# Client container (measure throughput)
docker run -it --rm --network my-network \
  networkstatic/iperf3 -c iperf-server

# Compare with host network mode
docker run -d --name iperf-host --network host \
  networkstatic/iperf3 -s -p 5202

docker run -it --rm --network host \
  networkstatic/iperf3 -c 127.0.0.1 -p 5202

# Typical results:
# Bridge network: ~20-40 Gbps (on 100G host)
# Host network: ~90-95 Gbps (near line rate)

# Measure latency with ping
docker run -it --rm --network my-network alpine \
  ping -c 100 iperf-server | tail -1

# Check for MTU issues
docker run -it --rm --network my-network alpine \
  ping -c 1 -s 1472 -M do iperf-server
# -s 1472 + 28 bytes header = 1500 MTU
# If this fails, there's an MTU mismatch
```

For most applications, Docker's networking overhead is negligible. Focus on optimization only when: (1) handling >100k packets/second, (2) latency-sensitive real-time applications, or (3) throughput requirements exceed 10 Gbps. Premature optimization of container networking often provides minimal benefit while complicating operations.
We've covered Docker's networking system in depth—from the Container Network Model architecture to practical debugging techniques.
What's next:
Docker's networking was designed for single-host and simple multi-host scenarios. In the next page, we'll explore Kubernetes networking—a fundamentally different model designed for orchestrating thousands of containers across hundreds of nodes, with flat networking, service abstraction, and sophisticated ingress patterns.
You now have a comprehensive understanding of Docker's networking capabilities. From the Container Network Model to embedded DNS, from network drivers to debugging strategies—you're equipped to design, operate, and troubleshoot Docker networking in production environments.