When you type docker run nginx and a web server starts in milliseconds, a sophisticated orchestration of components springs into action. Understanding Docker's architecture transforms container usage from magic to mastery—enabling you to troubleshoot issues, optimize performance, and make informed decisions about container infrastructure.
Docker's architecture has evolved significantly since its inception. What started as a monolithic daemon has become a modular stack of pluggable components, each with well-defined responsibilities. This evolution mirrors the broader shift in container technology toward open standards and composable tooling.
In this page, we'll trace the complete path from typing a command to a running container, examining each component along the way.
By the end of this page, you will understand Docker's client-server architecture, the role of the Docker daemon, how containerd manages container lifecycles, how runc creates containers, and the complete execution flow from 'docker run' to running process. You'll also learn how these components interact with kernel features to create isolation.
Docker is not a single program but an ecosystem of tools that work together. Understanding the major components and their relationships is essential before diving into details.
Core Components:
| Component | Role | Process |
|---|---|---|
| Docker CLI | User interface, sends commands | docker |
| Docker Daemon | API server, orchestrates operations | dockerd |
| containerd | Container lifecycle management | containerd |
| runc | OCI runtime, creates containers | Invoked per container |
| containerd-shim | Keeps containers alive | One per container |
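A quick way to see these components on a running host is to ask Docker itself. This is a minimal sketch; exact versions, paths, and systemd unit names vary by installation:

# Client and server component versions (the Server section lists containerd and runc)
$ docker version

# The daemon and containerd typically run as separate systemd services
$ systemctl status docker containerd

# The long-running processes behind them
$ ps -eo pid,ppid,cmd | grep -E '[d]ockerd|[c]ontainerd'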
DOCKER ARCHITECTURE STACK
=========================

USER SPACE

  Docker CLI (docker)
      Commands: run, build, push, pull, exec, logs, ...
          │  REST API (Unix socket or TCP)
          ▼
  Docker Daemon (dockerd)
      • Image management        • Network management
      • Build operations        • Volume management
      • Swarm orchestration     • API authentication
          │  gRPC API
          ▼
  containerd
      • Container lifecycle     • Image distribution
      • Execution supervision   • Snapshot management
      • Task management         • Namespace isolation
          │
          ▼
  containerd-shim (one per container)
          │
          ▼
  runc (spawns the container process, then exits)
          │
          ▼
  Container process (your app)

═════════════════════════════════════════════════════════════
LINUX KERNEL
  Namespaces │ cgroups │ OverlayFS │ seccomp │ Capabilities

Originally, the Docker daemon did everything, which made it a single point of failure: if the daemon restarted, all containers died. The modern architecture separates concerns. containerd manages containers independently, so containers survive daemon restarts. This design enables live upgrades and improves reliability.
The Docker CLI (docker command) is the primary interface for interacting with Docker. It communicates with the Docker daemon via a REST API, making it possible to control Docker remotely.
Communication Methods:
- Unix socket (default): /var/run/docker.sock
- TCP: tcp://host:2376 (with TLS) or tcp://host:2375 (insecure)
- SSH: ssh://user@host
# Docker CLI sends REST API requests to the daemon
# You can interact with the API directly:

# List containers via Unix socket
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json | jq
[
  {
    "Id": "abc123...",
    "Names": ["/nginx"],
    "Image": "nginx:latest",
    "State": "running",
    "Status": "Up 2 hours"
  }
]

# Create a container via API (equivalent to 'docker create')
$ curl --unix-socket /var/run/docker.sock -X POST \
    -H "Content-Type: application/json" \
    -d '{"Image": "alpine", "Cmd": ["echo", "hello"]}' \
    http://localhost/containers/create?name=test

# Start the container
$ curl --unix-socket /var/run/docker.sock -X POST \
    http://localhost/containers/test/start

# The docker CLI is essentially a user-friendly wrapper around these API calls

# View what API calls docker makes with debugging enabled
$ DOCKER_DEBUG=1 docker ps 2>&1 | head -20
# Shows: GET /v1.41/containers/json

Environment Configuration:
The Docker CLI reads several environment variables:
| Variable | Purpose | Example |
|---|---|---|
| DOCKER_HOST | Daemon address | tcp://192.168.1.100:2376 |
| DOCKER_TLS_VERIFY | Require TLS | 1 |
| DOCKER_CERT_PATH | TLS certificates | /etc/docker/certs |
| DOCKER_CONTEXT | Named context | production |
| DOCKER_CONFIG | Config directory | ~/.docker |
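For example, pointing the CLI at a remote daemon is just a matter of exporting these variables before running a command. A small sketch; the host address and certificate path are placeholders:

# Talk to a remote daemon over TLS for this shell session
$ export DOCKER_HOST=tcp://192.168.1.100:2376
$ export DOCKER_TLS_VERIFY=1
$ export DOCKER_CERT_PATH=/etc/docker/certs

# Every subsequent command now runs against the remote daemon
$ docker ps

# Unset to return to the local daemon
$ unset DOCKER_HOST DOCKER_TLS_VERIFY DOCKER_CERT_PATH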
Docker contexts allow switching between multiple Docker environments (local, staging, production) with a single command: docker context use production.
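A sketch of the context workflow; the context name and SSH address below are examples:

# Create a context that reaches a remote daemon over SSH
$ docker context create production --docker "host=ssh://deploy@prod.example.com"

# List available contexts (the active one is marked with *)
$ docker context ls

# Switch the CLI to that environment
$ docker context use production

# Run a one-off command against another context without switching
$ docker --context default ps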
Access to the Docker socket (/var/run/docker.sock) grants root-equivalent privileges on the host. Anyone who can send commands to the daemon can mount the host filesystem, start privileged containers, and escape isolation. Protect this socket carefully and avoid mounting it into containers unless absolutely necessary.
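To audit who can reach the socket on a host, a small sketch; group membership and listening addresses differ per installation:

# Socket ownership and permissions (typically root:docker, mode 660)
$ ls -l /var/run/docker.sock

# Members of the docker group effectively have root on this host
$ getent group docker

# Check whether the daemon is also listening on TCP
$ sudo ss -tlnp | grep dockerd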
The Docker daemon (dockerd) is the core service that manages Docker objects—images, containers, networks, and volumes. It exposes the Docker API and coordinates with containerd for container operations.
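You can inspect the running daemon and confirm which settings are in effect. A sketch using standard systemd and Docker commands:

# Daemon status and recent logs
$ systemctl status docker
$ journalctl -u docker --since "10 minutes ago"

# Effective storage driver, logging driver, and registered runtimes
$ docker info --format 'storage={{.Driver}} logging={{.LoggingDriver}}'
$ docker info | grep -A3 'Runtimes'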
Daemon Responsibilities:

- Exposing the Docker REST API and authenticating requests
- Image management (pull, push, build, local storage)
- Network management (bridges, IP allocation, port mappings)
- Volume management
- Delegating container execution to containerd
- Swarm orchestration (when enabled)

A typical configuration file looks like this:
// /etc/docker/daemon.json - Docker daemon configuration
// (comments are shown here for explanation only; a real daemon.json must be valid JSON without comments)
{
  // Storage driver (overlay2 is recommended for most cases)
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.size=50G"
  ],

  // Logging configuration
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },

  // Network configuration
  "bip": "172.17.0.1/16",                  // Bridge network IP
  "default-address-pools": [               // IP pools for custom networks
    {"base": "172.80.0.0/16", "size": 24}
  ],

  // Security options
  "userns-remap": "default",               // Enable user namespaces
  "no-new-privileges": true,               // Default security option

  // Runtime options
  "default-runtime": "runc",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime"
    },
    "kata": {
      "path": "/usr/bin/kata-runtime"
    }
  },

  // Performance
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 10,

  // Remote API access (use with caution!)
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tls": true,
  "tlscacert": "/etc/docker/ca.pem",
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem"
}

Data Directory Structure:
Docker stores its data in /var/lib/docker/. Understanding this structure helps with troubleshooting and maintenance:
/var/lib/docker/
├── containers/ # Container metadata, logs, config
│ └── <container-id>/
│ ├── config.v2.json # Container configuration
│ ├── hostconfig.json # Host bindings, resources
│ └── <container-id>-json.log # Container logs
├── image/ # Image metadata by storage driver
│ └── overlay2/
│ ├── imagedb/ # Image database
│ ├── layerdb/ # Layer database
│ └── repositories.json
├── overlay2/ # Actual layer filesystems (if using overlay2)
│ ├── <layer-id>/
│ │ ├── diff/ # Layer contents
│ │ ├── link # Short layer identifier
│ │ └── lower # Parent layer chain
│ └── l/ # Shortened symlinks to layers
├── network/ # Network configuration
├── volumes/ # Named volume data
└── buildkit/ # BuildKit cache and state
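Since image layers, container logs, and volumes all live under this directory, it is the first place to look when a host runs out of disk. A quick sketch, assuming the default data root:

# Docker's own accounting of images, containers, volumes, and build cache
$ docker system df

# Raw disk usage of the data directory
$ sudo du -sh /var/lib/docker/ /var/lib/docker/overlay2/ /var/lib/docker/containers/

# Reclaim space from unused objects (review what it will delete first)
$ docker system prune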
containerd is an industry-standard container runtime that manages the complete container lifecycle—from image transfer and storage to container execution and supervision. It's a CNCF (Cloud Native Computing Foundation) graduated project, used by Docker, Kubernetes, and many other platforms.
Why containerd exists:
Docker extracted containerd as a standalone project so that the core runtime could evolve and ship independently of Docker releases, be reused directly by other platforms such as Kubernetes, and build on open OCI standards rather than Docker-specific interfaces. containerd exposes its functionality as a set of gRPC services:
| Service | Function | Key Operations |
|---|---|---|
| Content | Image content storage | Blobs, manifests, configs |
| Snapshots | Filesystem layer management | Create layers, mount, unmount |
| Images | Image metadata | Create, list, delete images |
| Containers | Container metadata | Create, update, delete containers |
| Tasks | Running container processes | Start, stop, pause, resume, exec |
| Events | Lifecycle notifications | Subscribe to container events |
| Namespaces | Multi-tenancy isolation | Separate Docker from Kubernetes |
# Interact with containerd directly using ctr (containerd CLI)

# List namespaces (Docker uses the 'moby' namespace)
$ ctr namespaces list
NAME     LABELS
default
moby
k8s.io

# List images in Docker's namespace
$ ctr -n moby images list
REF                               TYPE                     DIGEST            SIZE
docker.io/library/nginx:latest    application/vnd.oci...   sha256:abc123...  142.6 MiB
docker.io/library/alpine:latest   application/vnd.oci...   sha256:def456...  5.33 MiB

# List running containers (tasks)
$ ctr -n moby tasks list
TASK               PID     STATUS
abc123def456...    12345   RUNNING
789012abc345...    67890   RUNNING

# Inspect a container's process tree from the host
$ ps aux | grep 'containerd'
root   1234  0.5  1.2  /usr/bin/containerd
root   5678  0.0  0.1  containerd-shim -namespace moby -id abc123... -address /run/containerd/containerd.sock

# The shim keeps the container alive independently of containerd and dockerd

containerd uses namespaces (different from Linux kernel namespaces) to separate different clients. Docker uses the 'moby' namespace, Kubernetes uses 'k8s.io'. This allows both to use the same containerd instance without conflicts—each sees only their own containers.
runc is the OCI (Open Container Initiative) reference runtime—the actual tool that creates containers. When containerd needs to start a container, it generates an OCI runtime bundle and invokes runc to create the container process.
What runc does:
runc is the last step before your container process runs. It:

- Reads the OCI bundle (a config.json plus a root filesystem)
- Creates the namespaces and cgroups that isolate and constrain the container
- Mounts the root filesystem and other required filesystems
- Drops capabilities and applies seccomp filters
- Executes the container's entrypoint as PID 1 inside the new namespaces
- Exits once the process is running (the shim takes over supervision)
// Simplified OCI runtime spec (config.json)
// This is what containerd generates and runc reads
{
  "ociVersion": "1.0.2",
  "process": {
    "terminal": false,
    "user": { "uid": 0, "gid": 0 },
    "args": [
      "/bin/sh", "-c",
      "while true; do date; sleep 1; done"
    ],
    "env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "MY_VAR=my_value"
    ],
    "cwd": "/",
    "capabilities": {
      "bounding": ["CAP_NET_BIND_SERVICE", "CAP_CHOWN"],
      "effective": ["CAP_NET_BIND_SERVICE"],
      "permitted": ["CAP_NET_BIND_SERVICE", "CAP_CHOWN"]
    },
    "rlimits": [
      {"type": "RLIMIT_NOFILE", "hard": 1024, "soft": 1024}
    ],
    "noNewPrivileges": true
  },
  "root": {
    "path": "rootfs",          // Container's root filesystem
    "readonly": false
  },
  "hostname": "my-container",
  "mounts": [
    {"destination": "/proc", "type": "proc", "source": "proc"},
    {"destination": "/dev", "type": "tmpfs", "source": "tmpfs"},
    {"destination": "/sys", "type": "sysfs", "source": "sysfs", "options": ["ro"]}
  ],
  "linux": {
    "namespaces": [
      {"type": "pid"},
      {"type": "network"},
      {"type": "ipc"},
      {"type": "uts"},
      {"type": "mount"}
    ],
    "resources": {
      "memory": {
        "limit": 536870912     // 512 MB limit
      },
      "cpu": {
        "shares": 1024,
        "quota": 100000,
        "period": 100000       // quota/period = 1 CPU core limit
      }
    },
    "seccomp": {
      "defaultAction": "SCMP_ACT_ERRNO",
      "architectures": ["SCMP_ARCH_X86_64"],
      "syscalls": [
        {"names": ["read", "write", "exit", "..."], "action": "SCMP_ACT_ALLOW"}
      ]
    }
  }
}

Running runc directly:
You can use runc independently of Docker to understand exactly what it does:
# Create a minimal container with runc directly

# 1. Create a bundle directory
$ mkdir -p /tmp/mycontainer/rootfs

# 2. Extract a filesystem (from docker export or manually)
$ docker export $(docker create alpine) | tar -C /tmp/mycontainer/rootfs -xf -

# 3. Generate a default spec
$ cd /tmp/mycontainer
$ runc spec
# This creates config.json with defaults

# 4. Edit config.json to set:
#    "args": ["sh"]
#    "terminal": true

# 5. Run the container!
$ sudo runc run mycontainer
/ # hostname
mycontainer
/ # ps
PID   USER     TIME  COMMAND
    1 root      0:00 sh
    8 root      0:00 ps
/ # exit

# We just ran a container without Docker!

runc isn't the only OCI runtime. Kata Containers runs each container in a lightweight VM for stronger isolation. gVisor intercepts system calls and implements them in userspace for security. nvidia-container-runtime enables GPU access. Docker can be configured to use any OCI-compliant runtime.
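Back in Docker, alternative runtimes registered in daemon.json (like the nvidia and kata entries shown earlier) can be selected per container. A sketch, assuming those runtimes are actually installed and registered on the host:

# List the runtimes the daemon knows about
$ docker info | grep -A5 'Runtimes'

# Start a container with a non-default OCI runtime
$ docker run --rm --runtime=kata alpine uname -a

# Or make it the default for every container via "default-runtime" in daemon.json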
The containerd-shim is a small process that sits between containerd and the container's main process. It serves critical functions that enable containers to outlive their management infrastructure.
Why the shim exists:
runc is designed to create containers and exit. Once a container is running, runc is no longer needed. But something needs to:

- Keep the container's STDIN/STDOUT/STDERR streams open so logging, attach, and exec continue to work
- Act as the container process's parent, collect its exit status, and report it back to containerd
- Hold the container's file descriptors and state so it keeps running even if containerd or dockerd restarts

The shim handles all these responsibilities. It's reparented to init (PID 1) and continues running as long as the container runs.
PROCESS HIERARCHY DURING CONTAINER LIFECYCLE
============================================

PHASE 1: Container Creation

  systemd (PID 1)
  └── containerd
      └── containerd-shim (creates container)
          └── runc init
              └── runc create (sets up namespaces, cgroups)

PHASE 2: Container Running

  systemd (PID 1)
  ├── containerd (manages container lifecycle)
  │
  └── containerd-shim (reparented to init, independent!)
      └── nginx (container's main process)
          ├── nginx: worker
          └── nginx: worker

  NOTE: runc has exited! It's no longer needed.
        The shim holds the container's file descriptors and state.

PHASE 3: containerd or dockerd Restarts

  systemd (PID 1)
  ├── containerd (NEW) (reconnects to existing containers)
  │
  └── containerd-shim (STILL RUNNING, container unaffected!)
      └── nginx (container continues serving requests)
          ├── nginx: worker
          └── nginx: worker

  Container survived daemon restart! Zero downtime.

View shim processes:

$ ps aux | grep containerd-shim
root  12345  containerd-shim -namespace moby -id abc123 -address /run/containerd/...

This architecture enables 'daemonless' containers—Podman, for example, doesn't require a persistent daemon. Each container has its conmon process (similar to a shim) that manages it independently. The container management tool can exit, and containers keep running.
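You can observe this survival behavior directly. A sketch, assuming "live-restore": true is set in /etc/docker/daemon.json (without it, dockerd stops containers when it shuts down):

# Start a container and note its ID and the shim's PID
$ docker run -d --name web nginx
$ ps aux | grep containerd-shim

# Restart the daemon
$ sudo systemctl restart docker

# The container is still Up, with the same shim PID and its uptime intact
$ docker ps --filter name=web
$ ps aux | grep containerd-shim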
Now let's trace the complete path from typing docker run nginx to a running container. Understanding this flow demystifies container operations and aids troubleshooting.
Step-by-Step Execution:
COMPLETE FLOW: docker run -d nginx
==================================

STEP 1: CLI PROCESSING (docker)
  • Parse command arguments (-d, nginx)
  • Construct API request body
  • Send HTTP POST to /containers/create
      │  REST API via /var/run/docker.sock
      ▼
STEP 2: DOCKER DAEMON (dockerd)
  • Receive API request
  • Check if image exists locally
  • If not found: pull from registry (docker.io/library/nginx:latest)
      - Authenticate if needed
      - Download layers (only missing ones)
      - Verify digests
  • Prepare container configuration
  • Set up networking (allocate IP, port mappings)
  • Create volume mounts
  • Send create request to containerd
      │  gRPC API
      ▼
STEP 3: CONTAINERD
  • Receive container creation request
  • Create container record in metadata store
  • Create snapshot from image layers (using overlay2)
  • Prepare OCI runtime bundle:
      - Generate config.json from Docker container config
      - Set up rootfs pointing to snapshot
  • Start a new containerd-shim process
  • Tell shim to create the container
      │
      ▼
STEP 4: CONTAINERD-SHIM
  • Receive bundle path and container ID
  • Fork and prepare to become independent
  • Invoke runc with the bundle
      │
      ▼
STEP 5: RUNC
  • Parse config.json
  • Create new namespaces:
      - PID namespace (container sees only its processes)
      - MNT namespace (container has its own filesystem view)
      - NET namespace (container has its own network stack)
      - UTS namespace (container has its own hostname)
      - IPC namespace (container has isolated IPC)
  • Set up cgroups for resource limits
  • Mount rootfs and other filesystems
  • Drop capabilities
  • Apply seccomp filters
  • Execute the container's entrypoint: nginx -g "daemon off;"
  • Exit (runc is no longer needed)
      │
      ▼
STEP 6: CONTAINER RUNNING
  • nginx master process runs as PID 1 inside the container
  • containerd-shim monitors the process
  • Container is now serving HTTP on its allocated port
  • dockerd returns container ID to CLI
  • CLI prints container ID to user
      $ e4b2f3c8a9d1...

TIME: ~500 ms from command to running container (if image cached)
      ~30 s or more if the image needs to be pulled

This flow helps identify where problems occur: image pull failures (Step 2), OCI spec errors (Steps 4-5), process startup failures (Step 5), or networking issues (Steps 2 and 6). Use 'docker logs' and 'journalctl -u docker' to diagnose issues at each level.
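To watch this flow on a real host, stream the daemon's event feed in one terminal while launching a container in another. A small sketch using standard commands:

# Terminal 1: stream container lifecycle events (create, start, die, ...)
$ docker events --filter type=container

# Terminal 2: launch a container and time it
$ time docker run -d nginx

# Dig deeper when something fails
$ docker logs <container-id>
$ journalctl -u docker -u containerd --since "5 minutes ago"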
We've traced through Docker's complete architecture from CLI to running container. Let's consolidate the key concepts:

- The Docker CLI is a REST API client; dockerd is the API server that manages images, networks, and volumes
- dockerd delegates container execution to containerd over gRPC
- containerd prepares OCI runtime bundles and supervises containers through a per-container shim
- runc creates the namespaces and cgroups, starts the container process, and exits
- The shim outlives runc and the daemons, which is what lets containers survive daemon restarts
- All isolation ultimately comes from Linux kernel features: namespaces, cgroups, OverlayFS, seccomp, and capabilities
What's next:
Now that we understand Docker's runtime architecture, the next page explores container images—how they're built, structured, stored, and distributed. We'll examine Dockerfiles, layer caching, image optimization, and registry operations.
You now understand Docker's complete architecture—from CLI through daemon, containerd, runc, and shim to running container. This knowledge is essential for troubleshooting container issues, optimizing performance, and understanding how modern container platforms like Kubernetes interface with container runtimes.