Modern container technologies like Docker and Kubernetes depend on a deceptively simple yet profoundly powerful kernel feature: network namespaces. When you run a container that appears to have its own network interfaces, IP addresses, routing tables, and firewall rules—all isolated from the host and other containers—network namespaces are the mechanism making this possible.
A network namespace provides a complete, independent instance of the network stack: its own network interfaces, IP addresses, routing tables, firewall rules, sockets, and views of /proc/net and /sys/class/net.
This isolation is complete enough that processes in different namespaces cannot directly communicate via networking—they might as well be on different physical machines. Yet it's achieved without any hardware support or virtualization overhead; it's purely a kernel abstraction that partitions the data structures we explored in previous pages.
By the end of this page, you will understand network namespace architecture, including the struct net data structure, namespace creation and lifecycle, how network structures are namespace-aware, virtual network devices for cross-namespace communication, and practical namespace manipulation with ip netns and system calls.
Network namespaces are one of several namespace types in Linux (alongside PID, mount, UTS, IPC, user, cgroup, and time namespaces). While the other namespaces isolate specific resources, network namespaces isolate the entire networking subsystem.
What's isolated in a network namespace:
Each network namespace contains independent instances of:
| Component | Description | Isolation Impact |
|---|---|---|
| Interfaces | Network devices (eth0, lo, veth, etc.) | Each namespace sees only its own devices |
| Addresses | IP addresses on interfaces | Same IP can exist in multiple namespaces |
| Routes | Routing tables | Independent routing decisions |
| Firewall | iptables/nftables rules | Per-namespace packet filtering |
| Sockets | Socket port bindings | Same port can be bound in each namespace |
| /proc/net | Network proc filesystem | Shows only namespace-local state |
| /sys/class/net | Network sysfs | Shows only namespace-local devices |
init_net: The Root Namespace
Every Linux system starts with a single network namespace called init_net. This is the "root" namespace where the initial network configuration lives. Newly created namespaces start out nearly empty, containing only a loopback interface in the down state.
Processes inherit their parent's network namespace by default. When Docker starts a container, it creates a new network namespace and moves the container's processes into it—from the container's perspective, it has its own complete networking stack.
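A quick way to see this inheritance in practice is to compare namespace identities from userspace. The sketch below is illustrative (it is not part of any container runtime): it stat()s /proc/&lt;pid&gt;/ns/net for two PIDs and compares the inode numbers. Each network namespace is backed by a unique nsfs inode, so equal inodes mean the processes share a namespace.

```c
/* Illustrative check: do two processes share a network namespace?
 * Compares the nsfs inode behind /proc/<pid>/ns/net. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>

static ino_t netns_inode(const char *pid)
{
    char path[64];
    struct stat st;

    snprintf(path, sizeof(path), "/proc/%s/ns/net", pid);
    if (stat(path, &st) == -1) {
        perror(path);
        exit(1);
    }
    return st.st_ino;     /* unique per network namespace */
}

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <pid1> <pid2>\n", argv[0]);
        return 1;
    }

    printf("%s\n", netns_inode(argv[1]) == netns_inode(argv[2])
                       ? "same network namespace"
                       : "different network namespaces");
    return 0;
}
```

Comparing a containerized process's PID against PID 1 this way tells you immediately whether the container really received its own network namespace.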
```c
/**
 * struct net - Network namespace structure
 *
 * This structure contains all namespace-specific networking state.
 * Every network-related kernel structure references a struct net
 * to determine which namespace it belongs to.
 */
struct net {
    /* Reference counting */
    refcount_t count;                     /* Current references */
    refcount_t passive;                   /* Pending cleanup refs */

    /* List of all namespaces */
    struct list_head list;                /* Global namespace list */

    /* Namespace identification */
    spinlock_t nsid_lock;
    atomic_t fnhe_genid;                  /* Flow cache generation */

    /* Network devices in this namespace */
    struct list_head dev_base_head;       /* Device list head */
    struct hlist_head *dev_name_head;     /* Hash by name */
    struct hlist_head *dev_index_head;    /* Hash by ifindex */
    unsigned int dev_base_seq;            /* Sequence number */
    int ifindex;                          /* Next ifindex to assign */

    /* Loopback device for this namespace */
    struct net_device *loopback_dev;

    /* Proc and sysfs entries */
    struct proc_dir_entry *proc_net;
    struct proc_dir_entry *proc_net_stat;

    /* Namespace parameters */
    struct ctl_table_set sysctls;         /* Sysctl values */

    /* Protocol-specific namespace state */
    struct netns_core core;
    struct netns_mib mib;
    struct netns_unix unx;                /* Unix domain sockets */
    struct netns_packet packet;           /* AF_PACKET sockets */
    struct netns_ipv4 ipv4;               /* IPv4 state */
    struct netns_ipv6 ipv6;               /* IPv6 state */
    struct netns_nf nf;                   /* Netfilter state */
    struct netns_ct ct;                   /* Connection tracking */
    struct netns_nf_frag nf_frag;         /* Fragment tracking */
    struct netns_xfrm xfrm;               /* IPsec/XFRM state */

    /* Network policy (xfrm, etc.) */
    struct net_generic *gen;

    /* User namespace owner */
    struct user_namespace *user_ns;

    /* ... more fields ... */
};

/* The initial (root) network namespace */
struct net init_net = {
    .count          = REFCOUNT_INIT(1),
    .dev_base_head  = LIST_HEAD_INIT(init_net.dev_base_head),
    /* ... initialized at boot ... */
};
EXPORT_SYMBOL(init_net);

/**
 * netns_ipv4 - IPv4 namespace-specific state
 *
 * Contains all IPv4 configuration that varies per namespace
 */
struct netns_ipv4 {
    /* Sysctl parameters (unique per namespace) */
    int sysctl_icmp_echo_ignore_all;
    int sysctl_tcp_ecn;
    int sysctl_ip_forward;
    int sysctl_ip_default_ttl;

    /* FIB (routing) tables */
    struct fib_table __rcu *fib_main;
    struct fib_table __rcu *fib_default;
    struct hlist_head *fib_table_hash;
    unsigned int fib_rules_require_fldissect;

    /* TCP metrics */
    struct tcp_metrics_block __rcu *tcp_metrics_hash;

    /* Netfilter */
    struct xt_table *iptable_filter;
    struct xt_table *iptable_nat;
    struct xt_table *iptable_mangle;
    struct xt_table *iptable_raw;

    /* ... more IPv4 state ... */
};
```

The struct net pointer appears throughout Linux networking code. Functions that access network state (lookup routes, find sockets, access devices) typically receive a struct net parameter or extract it from the current task (current->nsproxy->net_ns). This pervasive namespace awareness is what enables complete network isolation.
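As a rough sketch of that pattern: the accessors sock_net(), dev_net(), and current->nsproxy->net_ns are real kernel helpers, but the wrapper function below is purely illustrative and does not come from the kernel source.

```c
/* Illustrative only: the three most common ways kernel code resolves
 * which network namespace it should operate in. */
static struct net *resolve_net_example(struct sock *sk, struct net_device *dev)
{
    if (sk)
        return sock_net(sk);              /* namespace the socket was created in */
    if (dev)
        return dev_net(dev);              /* namespace the device currently lives in */
    return current->nsproxy->net_ns;      /* namespace of the calling process */
}
```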
Network namespaces can be created and manipulated through system calls, the ip netns utility, or container runtimes. Understanding the low-level mechanisms helps debug complex containerization issues.
System calls for namespace management:
| System Call | Purpose | Flags for Network NS |
|---|---|---|
| clone() | Create process in new namespace | CLONE_NEWNET |
| unshare() | Move current process to new namespace | CLONE_NEWNET |
| setns() | Join existing namespace via fd | CLONE_NEWNET |
```c
/**
 * Creating a new network namespace
 *
 * Each method below is shown as a standalone program.
 *
 * Method 1: clone() with CLONE_NEWNET
 * Creates a child process in a new network namespace
 */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int child_func(void *arg)
{
    /* This code runs in a new network namespace */
    /* Only loopback exists, and it's down */
    system("ip link show");

    /* Bring up loopback */
    system("ip link set lo up");

    /* Configure network... */
    return 0;
}

int main()
{
    char stack[65536];    /* Child stack */

    /* Create child in new namespaces */
    pid_t pid = clone(child_func, stack + sizeof(stack),
                      CLONE_NEWNET | CLONE_NEWNS | SIGCHLD, NULL);

    waitpid(pid, NULL, 0);
    return 0;
}

/**
 * Method 2: unshare() to move current process
 */
int main()
{
    /* Move self to new network namespace */
    if (unshare(CLONE_NEWNET) == -1) {
        perror("unshare");
        return 1;
    }

    /* Now in new namespace - only loopback exists */
    execlp("bash", "bash", NULL);
    return 0;
}

/**
 * Method 3: setns() to join existing namespace
 *
 * Namespaces persist as files under /proc/[pid]/ns/
 * Opening and passing the fd to setns joins that namespace
 */
int main()
{
    /* Open another process's network namespace */
    int fd = open("/proc/1234/ns/net", O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* Join that namespace */
    if (setns(fd, CLONE_NEWNET) == -1) {
        perror("setns");
        return 1;
    }
    close(fd);

    /* Now share networking with PID 1234 */
    execlp("bash", "bash", NULL);
    return 0;
}
```

Using ip netns for namespace management:
The ip netns command provides convenient tools for working with named network namespaces:
```bash
# Create a new named namespace
sudo ip netns add mynamespace

# List all named namespaces
ip netns list

# Execute command in namespace
sudo ip netns exec mynamespace ip link show

# Delete namespace
sudo ip netns delete mynamespace

# Monitor namespace creation/deletion
ip netns monitor
```
Named namespaces are stored as bind mounts under /var/run/netns/. This persistence mechanism keeps the namespace alive even when no processes are running in it.
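For illustration, here is a minimal C sketch of what ip netns add effectively does, assuming root privileges and an existing /var/run/netns directory (the namespace name demo is arbitrary):

```c
/* Rough sketch of `ip netns add demo`: unshare into a new network
 * namespace, then bind-mount its nsfs entry so it outlives this process. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mount.h>

int main(void)
{
    const char *path = "/var/run/netns/demo";

    /* Create an empty mount point file (like `touch`) */
    int fd = open(path, O_RDONLY | O_CREAT | O_EXCL, 0);
    if (fd < 0) { perror("open"); return 1; }
    close(fd);

    /* Move this process into a brand-new network namespace */
    if (unshare(CLONE_NEWNET) == -1) { perror("unshare"); return 1; }

    /* Bind-mount our nsfs entry onto the named path; the kernel keeps
     * the namespace alive as long as this mount exists. */
    if (mount("/proc/self/ns/net", path, "none", MS_BIND, NULL) == -1) {
        perror("mount");
        return 1;
    }

    printf("persistent namespace created at %s\n", path);
    return 0;
}
```

Deleting the namespace is then just an unmount of that path followed by unlink(), which is essentially what ip netns delete performs.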
```c
/**
 * copy_net_ns - Create a new network namespace
 *
 * Called during clone(CLONE_NEWNET) or unshare(CLONE_NEWNET).
 * Creates and initializes a new struct net.
 */
struct net *copy_net_ns(unsigned long flags,
                        struct user_namespace *user_ns,
                        struct net *old_net)
{
    struct net *net;
    int rv;

    if (!(flags & CLONE_NEWNET))
        return get_net(old_net);    /* Share existing namespace */

    /* Allocate new namespace structure */
    net = net_alloc();
    if (!net)
        return ERR_PTR(-ENOMEM);

    net->user_ns = user_ns;

    /* Initialize all subsystem namespace state */
    rv = setup_net(net, user_ns);
    if (rv < 0) {
        put_net(net);
        return ERR_PTR(rv);
    }

    return net;
}

/**
 * setup_net - Initialize network namespace
 *
 * Calls init functions for each networking subsystem
 * to set up namespace-local state.
 */
static int setup_net(struct net *net, struct user_namespace *user_ns)
{
    const struct pernet_operations *ops;
    int error;

    /* Initialize reference counts */
    refcount_set(&net->count, 1);
    refcount_set(&net->passive, 1);

    /* Initialize device lists */
    INIT_LIST_HEAD(&net->dev_base_head);

    /* Call each subsystem's init function */
    list_for_each_entry(ops, &pernet_list, list) {
        if (ops->init) {
            error = ops->init(net);
            if (error < 0)
                goto out_undo;
        }
    }

    /* Create loopback device */
    error = loopback_net_init(net);
    if (error < 0)
        goto out_undo;

    return 0;

out_undo:
    /* Cleanup on failure */
    list_for_each_entry_continue_reverse(ops, &pernet_list, list) {
        if (ops->exit)
            ops->exit(net);
    }
    return error;
}

/**
 * pernet_operations - Per-namespace initialization hooks
 *
 * Each networking subsystem registers these to set up
 * its namespace-specific state.
 */
struct pernet_operations {
    struct list_head list;

    /* Called when namespace is created */
    int (*init)(struct net *net);

    /* Called when namespace is destroyed */
    void (*exit)(struct net *net);

    /* ... batch operations for efficiency ... */
};

/* Example: IPv4 routing table initialization */
static struct pernet_operations fib_net_ops = {
    .init = fib_net_init,
    .exit = fib_net_exit,
};

static int __init fib4_init(void)
{
    return register_pernet_subsys(&fib_net_ops);
}
```

A network namespace exists as long as there's either a process in it or a persistent reference (like a bind mount at /var/run/netns/ or an open file descriptor). When all references are gone, the kernel destroys the namespace and frees all associated resources. Network devices are moved back to init_net or deleted depending on type.
Network namespaces achieve complete isolation, but containers and VMs need to communicate—with each other and with the external network. Linux provides several types of virtual network devices to connect namespaces while maintaining controlled communication boundaries.
Key virtual device types:
| Device Type | Purpose | Namespace Behavior | Use Case |
|---|---|---|---|
| veth | Virtual Ethernet pair | Each end in different namespace | Container-to-host, container-to-container |
| bridge | Software L2 switch | Exists in one namespace | Connecting multiple containers |
| macvlan | Multiple MACs on one parent | Can be moved to namespace | Direct network attachment |
| ipvlan | IP-based virtual LAN | Can be moved to namespace | Scalable container networking |
| vxlan | Virtual L2 over L3 | Tunnel between namespaces/hosts | Multi-host container networking |
| tun/tap | User-space tunnel | Can be moved to namespace | VPNs, virtual machines |
Veth Pairs: The Container Networking Workhorse
Veth (virtual Ethernet) pairs are the most common mechanism for container networking. A veth pair creates two connected virtual interfaces—packets sent on one appear on the other. By placing one end in the container namespace and the other in the host namespace (attached to a bridge), containers gain network connectivity.
```bash
# Create a veth pair
ip link add veth0 type veth peer name veth1

# Move one end to container namespace
ip link set veth1 netns container_ns

# Configure host end
ip addr add 10.0.0.1/24 dev veth0
ip link set veth0 up

# Configure container end (from within namespace)
ip netns exec container_ns ip addr add 10.0.0.2/24 dev veth1
ip netns exec container_ns ip link set veth1 up
ip netns exec container_ns ip route add default via 10.0.0.1
```
```c
/**
 * struct veth_priv - Private data for veth device
 *
 * Each veth device stores a reference to its peer,
 * enabling packet forwarding between namespace boundaries.
 */
struct veth_priv {
    struct net_device __rcu *peer;    /* Other end of the pair */
    atomic64_t dropped;               /* TX drops */
    struct bpf_prog *_xdp_prog;       /* XDP program */
    struct xdp_rxq_info xdp_rxq;
};

/**
 * veth_newlink - Create a new veth pair
 */
static int veth_newlink(struct net *src_net, struct net_device *dev,
                        struct nlattr *tb[], struct nlattr *data[],
                        struct netlink_ext_ack *extack)
{
    struct net_device *peer;
    struct net *net;

    /* Determine peer namespace (default: same as requester) */
    net = rtnl_link_get_net(src_net, tb);

    /* Create peer device */
    peer = rtnl_create_link(net, peer_name, ...);

    /* Link the two devices together */
    priv = netdev_priv(dev);
    rcu_assign_pointer(priv->peer, peer);

    peer_priv = netdev_priv(peer);
    rcu_assign_pointer(peer_priv->peer, dev);

    /* Register both devices */
    register_netdevice(dev);
    register_netdevice(peer);

    return 0;
}

/**
 * veth_xmit - Transmit packet through veth pair
 *
 * Packets sent on one end are directly delivered to the peer,
 * crossing namespace boundaries transparently.
 */
static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
{
    struct veth_priv *priv = netdev_priv(dev);
    struct net_device *rcv;

    /* Get peer device */
    rcu_read_lock();
    rcv = rcu_dereference(priv->peer);
    if (!rcv || !(rcv->flags & IFF_UP))
        goto drop;

    /* Transfer packet to peer device */
    skb->dev = rcv;
    skb->pkt_type = PACKET_HOST;

    /* Switch to peer's network namespace */
    skb->protocol = eth_type_trans(skb, rcv);

    /* Deliver to peer's receive path */
    if (likely(dev_forward_skb(rcv, skb) == NET_RX_SUCCESS)) {
        /* Update statistics */
        u64_stats_update_begin(&priv->stats.syncp);
        priv->stats.rx_packets++;
        priv->stats.rx_bytes += skb->len;
        u64_stats_update_end(&priv->stats.syncp);
    }

    rcu_read_unlock();
    return NETDEV_TX_OK;

drop:
    atomic64_inc(&priv->dropped);
    rcu_read_unlock();
    kfree_skb(skb);
    return NETDEV_TX_OK;
}

/**
 * Moving devices between namespaces
 *
 * Most virtual devices can be moved between namespaces.
 * This is how container runtimes assign network interfaces.
 */
int dev_change_net_namespace(struct net_device *dev, struct net *net,
                             const char *pat)
{
    /* Lock both source and destination namespaces */

    /* Unregister from source namespace */
    unlist_netdevice(dev);

    /* Change device's namespace reference */
    dev_net_set(dev, net);

    /* Assign new ifindex in destination namespace */
    dev->ifindex = dev_new_index(net);

    /* Register in destination namespace */
    list_netdevice(dev);

    /* Notify userspace in both namespaces */
    rtmsg_ifinfo(RTM_DELLINK, dev, ...);    /* Old namespace */
    rtmsg_ifinfo(RTM_NEWLINK, dev, ...);    /* New namespace */

    return 0;
}
```

A Linux bridge acts as a virtual switch. Docker's default bridge network creates a bridge (docker0) and attaches each container's veth pair to it. The bridge forwards packets between containers on the same host, while NAT rules enable external connectivity. This is how containers on the same host communicate even though they're in separate namespaces.
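To make the bridge model concrete, here is a hedged sketch that creates a bridge and enslaves an interface using the legacy bridge ioctls (the same calls brctl issues). The bridge name demo0 and port veth0 are assumptions for illustration, and root privileges are required.

```c
/* Illustrative sketch: create a bridge and add a port to it, roughly
 * what `brctl addbr demo0; brctl addif demo0 veth0` does. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>

int main(void)
{
    int fd = socket(AF_LOCAL, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Create the bridge device */
    if (ioctl(fd, SIOCBRADDBR, "demo0") < 0)
        perror("SIOCBRADDBR");

    /* Enslave an existing interface (e.g. the host end of a veth pair) */
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "demo0", IFNAMSIZ - 1);
    ifr.ifr_ifindex = if_nametoindex("veth0");    /* assumes veth0 exists */
    if (ifr.ifr_ifindex == 0 || ioctl(fd, SIOCBRADDIF, &ifr) < 0)
        perror("SIOCBRADDIF");

    close(fd);
    return 0;
}
```

Modern tooling (iproute2, container runtimes) does the same thing via rtnetlink rather than these ioctls, but the effect is identical: a software switch in the host namespace with veth ports attached to it.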
Container runtimes use network namespaces to implement various networking models. Understanding these models helps when debugging container connectivity or designing container infrastructure.
Common container networking approaches include bridge networking (the Docker default), host networking (sharing the host's namespace directly), macvlan/ipvlan attachment to a physical interface, and overlay networks such as VXLAN for multi-host communication.
Docker Bridge Networking in Detail:
When Docker starts a container with default (bridge) networking, it:

- creates (or reuses) a bridge named docker0 (typically 172.17.0.1/16) in the host namespace
- creates a veth pair, attaches the host end to docker0, and moves the other end into the container's namespace as eth0
- assigns the container an address from the bridge subnet, with the bridge as its default gateway
- adds a MASQUERADE (NAT) rule so container traffic can reach external networks through the host

```bash
# View Docker's iptables rules
sudo iptables -t nat -L POSTROUTING
# Chain POSTROUTING
# MASQUERADE  all  --  172.17.0.0/16  anywhere

# View bridge and connected veths
brctl show docker0
# bridge name   bridge id           STP enabled   interfaces
# docker0       8000.0242c0a8ff01   no            veth1234567
```
```bash
#!/bin/bash
# Simplified CNI bridge plugin logic
# This shows what happens when Kubernetes sets up pod networking
# (helpers like $(random) and $(random_mac) are placeholders, not real commands)

# Input: Container namespace path from kubelet
CONTAINER_NS="/proc/12345/ns/net"
CONTAINER_IF="eth0"
HOST_IF="veth$(random)"
BRIDGE="cni0"
IP_ADDR="10.244.1.5/24"
GATEWAY="10.244.1.1"

# Create veth pair
ip link add $HOST_IF type veth peer name $CONTAINER_IF

# Move container end to container namespace
ip link set $CONTAINER_IF netns $CONTAINER_NS

# Attach host end to bridge
ip link set $HOST_IF master $BRIDGE
ip link set $HOST_IF up

# Configure container end (enter namespace)
nsenter --net=$CONTAINER_NS /bin/bash <<EOF
    # Set MAC address
    ip link set $CONTAINER_IF address $(random_mac)

    # Set IP address
    ip addr add $IP_ADDR dev $CONTAINER_IF

    # Bring up interface
    ip link set $CONTAINER_IF up
    ip link set lo up

    # Add default route
    ip route add default via $GATEWAY
EOF

# Enable IP forwarding on host
echo 1 > /proc/sys/net/ipv4/ip_forward

# Add iptables rules for pod-to-external traffic
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE

echo "Container network configured: $IP_ADDR"
```

In Kubernetes, each Pod gets its own network namespace, but all containers within a Pod share that namespace. This is why containers in the same Pod can reach each other via localhost. The pause container holds the namespace alive, and CNI plugins configure the networking before app containers start.
Socket operations are inherently namespace-aware. When a process creates a socket, it's bound to that process's network namespace. Socket lookups, port bindings, and routing decisions all occur within the context of the socket's namespace.
How sockets track their namespace:
```c
/**
 * Socket namespace tracking
 *
 * Every struct sock has a reference to its network namespace.
 * This determines which namespace's routing, firewall, etc. applies.
 */
struct sock_common {
    /* ... connection tuple ... */

    /* Network namespace this socket belongs to */
    possible_net_t skc_net;
};

/* Helper to get namespace from socket */
static inline struct net *sock_net(const struct sock *sk)
{
    return read_pnet(&sk->sk_net);
}

/**
 * Socket creation binds to creating process's namespace
 */
static int inet_create(struct net *net, struct socket *sock, int protocol,
                       int kern)
{
    struct sock *sk;

    /* Allocate socket in caller's namespace */
    sk = sk_alloc(net, PF_INET, GFP_KERNEL, answer_prot, kern);

    /* sk->sk_net now references 'net' */

    sock_init_data(sock, sk);
    return 0;
}

/**
 * Socket lookup respects namespace boundaries
 *
 * When finding a socket for incoming packet, only sockets
 * in the packet's namespace are considered.
 */
struct sock *__inet_lookup_established(struct net *net,
                                       struct inet_hashinfo *hashinfo,
                                       const __be32 saddr, const __be16 sport,
                                       const __be32 daddr, const u16 hnum,
                                       const int dif, const int sdif)
{
    struct sock *sk;

    /* Hash lookup in established connections */
    sk_for_each_rcu(sk, &head->chain) {
        /* CHECK: Socket must be in same namespace as packet */
        if (!net_eq(sock_net(sk), net))
            continue;

        /* CHECK: Connection tuple matches */
        if (sk->sk_hash == hash &&
            inet_match(sk, saddr, daddr, ports, dif, sdif))
            return sk;
    }

    return NULL;
}

/**
 * Port binding respects namespace boundaries
 *
 * Same port can be bound in different namespaces because
 * each namespace has independent port space.
 */
int inet_csk_get_port(struct sock *sk, unsigned short snum)
{
    struct inet_hashinfo *hinfo = sk->sk_prot->h.hashinfo;
    struct inet_bind_hashbucket *head;
    struct net *net = sock_net(sk);

    /* Check existing bindings in THIS namespace only */
    inet_bind_bucket_for_each(tb, &head->chain) {
        /* Only check bindings in same namespace */
        if (!net_eq(ib_net(tb), net))
            continue;
        if (tb->port == snum)
            goto bind_conflict_check;
    }

    /* Port is available in this namespace */
    goto success;

bind_conflict_check:
    /* Check if binding conflicts with existing users */
    /* (considers SO_REUSEADDR, SO_REUSEPORT, etc.) */
}
```

Practical implications:
Port reuse across namespaces: Two containers can both bind to port 80 because they're in different namespaces with independent port spaces.
Socket visibility: A process can only see sockets in its namespace. netstat and ss show only namespace-local sockets.
Connection routing: Outgoing connections use the namespace's routing table; incoming packets are delivered based on the namespace of the receiving interface.
File descriptor passing: If a socket fd is passed to a process in a different namespace (via Unix domain socket), the socket still operates in its original namespace—the fd is just a reference.
If a container process receives an open socket file descriptor from the host namespace (e.g., via inherited fd or passed socket), it can communicate through that socket using host networking. This is a potential security concern; container runtimes close inherited fds before executing container processes.
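The port-reuse point above is easy to demonstrate directly. The following standalone program is illustrative (it requires root, since unshare(CLONE_NEWNET) needs CAP_SYS_ADMIN, and the port number is arbitrary): it binds the same TCP port twice, once in the original namespace and once in a freshly created one.

```c
/* Demonstration: the same TCP port can be bound in two network namespaces. */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

static int bind_port(const char *who, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror(who);
        return -1;
    }
    printf("%s: bound port %u\n", who, port);
    return fd;
}

int main(void)
{
    int fd = bind_port("parent (original netns)", 8080);

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: new network namespace, independent port space */
        if (unshare(CLONE_NEWNET) == -1) { perror("unshare"); return 1; }
        bind_port("child (new netns)", 8080);
        return 0;
    }
    waitpid(pid, NULL, 0);
    close(fd);
    return 0;
}
```

If you remove the unshare() call so both binds happen in one namespace, the second bind fails with EADDRINUSE; that difference is exactly the per-namespace bind-hash check shown in inet_csk_get_port above.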
Each network namespace maintains independent routing tables and firewall rules. This enables containers to have completely different network topologies and security policies—a container might have a default route to the host's veth while the host routes to the physical network.
Per-namespace routing:
```bash
# Host routing table
$ ip route
default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel src 192.168.1.100
172.17.0.0/16 dev docker0 proto kernel src 172.17.0.1

# Container routing table (completely independent)
$ ip netns exec container ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel src 172.17.0.2

# Each namespace has its own routing rules
$ ip rule
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default

# Container might have different rules
$ ip netns exec container ip rule
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
```

Per-namespace firewall:
Netfilter state (iptables/nftables rules) is per-namespace. This is crucial for security isolation:
```bash
# Host firewall rules
sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source       destination
DROP       all  --  10.0.0.0/8   anywhere

# Container has independent rules (default empty)
sudo ip netns exec container iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source       destination
# (empty - container's rules are separate)
```
This independence means that the host's iptables rules do not filter traffic inside container namespaces, and each container or pod can carry its own packet-filtering policy without affecting the host or other containers.
```c
/**
 * Routing operations are namespace-aware
 *
 * Each route lookup uses the namespace from the socket
 * or flow information structure.
 */
struct rtable *ip_route_output_key_hash(struct net *net, struct flowi4 *fl4,
                                        struct fib_result *res,
                                        const struct sk_buff *skb)
{
    /* Look up route in THIS namespace's FIB tables */
    struct fib_table *tb;
    int err;

    /* Get namespace's main routing table */
    tb = fib_get_table(net, RT_TABLE_MAIN);

    /* Perform route lookup */
    err = fib_table_lookup(tb, fl4, res, ...);

    /* res now contains route from this namespace */
    return __ip_route_output_key(net, res, fl4);
}

/**
 * FIB table management per namespace
 */
struct fib_table *fib_get_table(struct net *net, u32 id)
{
    struct hlist_head *head;
    struct fib_table *tb;

    /* Each namespace has its own hash of FIB tables */
    head = &net->ipv4.fib_table_hash[h];

    hlist_for_each_entry_rcu(tb, head, tb_hlist) {
        if (tb->tb_id == id)
            return tb;
    }

    return NULL;
}

/**
 * Netfilter hooks are namespace-aware
 *
 * Rules are registered per-namespace and only apply
 * to traffic in that namespace.
 */
int nf_register_net_hook(struct net *net, const struct nf_hook_ops *ops)
{
    struct nf_hook_entries *hooks;

    /* Register hook in this specific namespace */
    hooks = nf_hook_entries_head(net, ops->pf, ops->hooknum);

    /* Add to namespace's hook list */
    return nf_hook_entries_insert_raw(hooks, ops);
}

/* When processing packets, use packet's namespace */
static int ip_rcv_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
{
    /* 'net' comes from skb->dev->nd_net */

    /* Routing lookup uses this namespace */
    if (!skb_dst(skb)) {
        err = ip_route_input_noref(skb, iph->daddr, iph->saddr,
                                   iph->tos, skb->dev);
        /* Uses net->ipv4.fib_* tables */
    }

    return dst_input(skb);
}
```

Kubernetes NetworkPolicy is typically implemented by the CNI plugin manipulating iptables/nftables in each pod's namespace. When you create a NetworkPolicy, the network plugin translates it into firewall rules injected into affected pod namespaces. This is why NetworkPolicy requires a CNI plugin that supports it.
Network namespaces provide isolated views of /proc/net and /sys/class/net. Processes see only their namespace's network state, which is essential for container isolation and prevents information leakage between tenants.
/proc/net isolation:
```bash
# Host /proc/net shows all host sockets, routes, etc.
$ cat /proc/net/tcp
  sl  local_address rem_address   st tx_queue rx_queue ...
   0: 0100007F:0050 00000000:0000 0A 00000000:00000000 ...
   1: 00000000:0016 00000000:0000 0A 00000000:00000000 ...
   ...many more...

# Container sees only its own sockets
$ ip netns exec container cat /proc/net/tcp
  sl  local_address rem_address   st tx_queue rx_queue ...
   0: 0100007F:0050 00000000:0000 0A 00000000:00000000 ...
# Only one socket - the container's

# Host /sys/class/net shows all host interfaces
$ ls /sys/class/net
docker0  eth0  lo  veth1234567  wlan0

# Container sees only its own interfaces
$ ip netns exec container ls /sys/class/net
eth0  lo

# Network statistics are also namespaced
$ cat /proc/net/dev
Inter-|   Receive              | Transmit
 face |bytes      packets errs drop ...
  eth0: 1234567890 9876543 0    0 ...
    lo: 1234       100     0    0 ...
docker0: 0          0      0    0 ...

$ ip netns exec container cat /proc/net/dev
Inter-|   Receive              | Transmit
 face |bytes      packets errs drop ...
  eth0: 123456     1000    0    0 ...
    lo: 100        10      0    0 ...
```

Implementation of namespaced procfs:
The kernel implements per-namespace /proc/net by checking the reading process's namespace:
```c
/* In net/core/net-procfs.c */
static int seq_show(struct seq_file *seq, void *v)
{
    struct net *net = seq_file_net(seq);    /* Get reader's namespace */

    /* Iterate only devices in this namespace */
    for_each_netdev(net, dev) {
        seq_printf(seq, "%6s: ...", dev->name);
    }
}
```
The mount namespace also plays a role: container runtimes typically mount a new /proc for containers, and /proc/net is a symlink to /proc/self/net, which automatically shows the reading process's network namespace view.
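You can verify that symlink with a few lines of C (or simply by running readlink /proc/net from a shell); the program below is just an illustrative check.

```c
/* Tiny check of the /proc plumbing described above: /proc/net is a
 * symlink to self/net, so every reader sees its own namespace's view. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[256];
    ssize_t n = readlink("/proc/net", buf, sizeof(buf) - 1);

    if (n < 0) { perror("readlink"); return 1; }
    buf[n] = '\0';
    printf("/proc/net -> %s\n", buf);    /* prints "self/net" */
    return 0;
}
```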
When debugging container networking, use nsenter or ip netns exec to run diagnostic commands in the container's namespace. Commands like ip addr, netstat, ss, and iptables will show the container's view. Remember that /proc/net inside the container shows container state, while /proc/net on the host shows host state.
Network namespaces are the foundation of container networking isolation. Understanding their implementation illuminates how containers achieve network independence while remaining efficient enough for production workloads.
What's next:
With namespace isolation understood, we'll dive into the core of Linux networking: TCP/IP implementation. You'll see how Linux implements the TCP state machine, manages connection tables, handles congestion control, and achieves the performance that powers the modern internet.
You now understand Linux network namespaces—the kernel feature that enables container network isolation. This knowledge is essential for debugging container networking issues, understanding Kubernetes pod networking, and designing containerized application architectures.