While container technologies existed before Docker (LXC, OpenVZ, Solaris Zones), Docker transformed containerization from a systems engineering curiosity into a mainstream development practice. Launched in 2013, Docker provided the user experience layer that made containers accessible to every developer.
Docker's genius wasn't in inventing containers—it was in making them usable. The Dockerfile format, the layered image system, the Docker Hub registry, and the simple CLI combined to create a workflow that felt natural to developers accustomed to version control and package managers.
Today, Docker Desktop has millions of active users, and Docker's image format became the de facto standard (later formalized as OCI). Understanding Docker is essential for any engineer working with modern infrastructure.
By the end of this page, you will understand Docker's architecture, master essential Docker commands, learn to write effective Dockerfiles, understand multi-stage builds, and develop best practices for containerized development workflows. You'll have the practical skills to build, run, and ship containerized applications.
Docker uses a client-server architecture. Understanding this architecture helps you diagnose issues and configure Docker for different environments.
Core Components:
| Component | Role | Details |
|---|---|---|
| Docker Client (docker) | User interface | CLI that sends commands to the daemon. Can connect to local or remote daemons. |
| Docker Daemon (dockerd) | Background service | Listens for API requests, manages images, containers, networks, and volumes. |
| containerd | Container runtime | Manages container lifecycle (create, start, stop). Industry-standard runtime. |
| runc | OCI runtime | Low-level tool that creates namespaces and cgroups. Actually runs containers. |
| Docker Registry | Image storage | Stores and distributes Docker images. Docker Hub is the default public registry. |
```
┌───────────────────────────────────────────────────────────────────┐
│                            DOCKER HOST                            │
│                                                                   │
│  ┌─────────────────────────────────────────────────────────────┐  │
│  │                  Docker Daemon (dockerd)                    │  │
│  │                                                             │  │
│  │  ┌───────────────────────────────────────────────────────┐  │  │
│  │  │                      containerd                       │  │  │
│  │  │      ┌─────────┐     ┌─────────┐     ┌─────────┐      │  │  │
│  │  │      │Container│     │Container│     │Container│      │  │  │
│  │  │      │    A    │     │    B    │     │    C    │      │  │  │
│  │  │      │ (runc)  │     │ (runc)  │     │ (runc)  │      │  │  │
│  │  │      └─────────┘     └─────────┘     └─────────┘      │  │  │
│  │  └───────────────────────────────────────────────────────┘  │  │
│  │                                                             │  │
│  │   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐     │  │
│  │   │   Images    │    │   Volumes   │    │  Networks   │     │  │
│  │   └─────────────┘    └─────────────┘    └─────────────┘     │  │
│  └─────────────────────────────────────────────────────────────┘  │
│                                 ▲                                 │
│                                 │ REST API (unix socket or TCP)   │
│                                 ▼                                 │
│  ┌────────────────────┐           ┌─────────────────────────────┐ │
│  │ Docker CLI Client  │◀─────────▶│       Docker Compose        │ │
│  │     (docker)       │           │ (docker-compose / docker)   │ │
│  └────────────────────┘           └─────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────┘
                                  ▲
                                  │ Push / Pull
                                  ▼
┌───────────────────────────────────────────────────────────────────┐
│                          Docker Registry                          │
│             (Docker Hub, ECR, GCR, ACR, Harbor, etc.)             │
└───────────────────────────────────────────────────────────────────┘
```
Key architectural insights:
- The CLI never runs containers itself; it sends REST API requests to dockerd over a Unix socket (or TCP for remote hosts).
- dockerd delegates container lifecycle to containerd, which invokes runc to create the namespaces and cgroups for each container.
- Because the interface is an API, the same client can manage local or remote daemons.
- Images move between hosts only through registries, via push and pull.
Docker Desktop (for Windows/Mac) includes Docker Engine plus additional tooling: a VM to run the Linux kernel, a GUI, Kubernetes integration, and file system syncing. On Linux, Docker Engine runs natively without a VM. Docker Desktop is free for personal and small business use but requires a paid subscription for larger organizations.
Mastering Docker begins with its command-line interface. The Docker CLI follows a consistent pattern: docker <object> <command> [options]. Let's cover the commands you'll use daily.
Container Lifecycle Commands:
```shell
# Pull an image from registry
docker pull nginx:1.25

# Run a container (pull if needed)
docker run nginx:1.25

# Run interactively with terminal
docker run -it ubuntu:22.04 /bin/bash

# Run detached (background) with port mapping and name
docker run -d --name webserver -p 8080:80 nginx:1.25

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Stop a running container (SIGTERM, then SIGKILL after 10s)
docker stop webserver

# Start a stopped container
docker start webserver

# Restart a container
docker restart webserver

# Remove a stopped container
docker rm webserver

# Force remove a running container
docker rm -f webserver

# Remove all stopped containers
docker container prune
```
Image Management Commands:
```shell
# List local images
docker images

# Pull specific version
docker pull python:3.11-slim

# Build image from Dockerfile in current directory
docker build -t myapp:1.0 .

# Build with specific Dockerfile
docker build -f Dockerfile.prod -t myapp:prod .

# Tag an image
docker tag myapp:1.0 myregistry.com/myapp:1.0

# Push to registry
docker push myregistry.com/myapp:1.0

# Remove an image
docker rmi myapp:1.0

# Remove unused images
docker image prune

# Remove ALL unused images (not just dangling)
docker image prune -a

# Inspect image layers and metadata
docker image inspect nginx:1.25

# View image history (layers)
docker history nginx:1.25
```
Debugging and Inspection Commands:
```shell
# View container logs
docker logs webserver

# Follow logs in real-time
docker logs -f webserver

# Show last 100 lines with timestamps
docker logs --tail 100 -t webserver

# Execute command in running container
docker exec webserver cat /etc/nginx/nginx.conf

# Get interactive shell in running container
docker exec -it webserver /bin/sh

# View container resource usage
docker stats

# View container processes
docker top webserver

# Inspect container details (JSON output)
docker inspect webserver

# Copy files from container to host
docker cp webserver:/etc/nginx/nginx.conf ./nginx.conf

# Copy files from host to container
docker cp ./custom.conf webserver:/etc/nginx/conf.d/
```
Many Docker commands have shortened forms: 'docker container ls' is the same as 'docker ps', and 'docker image ls' is the same as 'docker images'. Container IDs can be abbreviated—you only need enough characters to be unique (often 3-4). Install Docker's shell completions to get tab completion for commands and container names.
The docker run command has dozens of flags that control container behavior. Understanding the most important flags is crucial for both development and production.
Essential docker run Flags:
| Flag | Purpose | Example |
|---|---|---|
| -d, --detach | Run in background | docker run -d nginx |
| -p, --publish | Map container port to host | docker run -p 8080:80 nginx |
| --name | Assign container name | docker run --name web nginx |
| -e, --env | Set environment variable | docker run -e DB_HOST=localhost app |
| --env-file | Read env vars from file | docker run --env-file .env app |
| -v, --volume | Mount volume or bind mount | docker run -v data:/app/data app |
| --mount | Mount with explicit options | docker run --mount type=bind,src=.,dst=/app app |
| -w, --workdir | Set working directory | docker run -w /app node npm start |
| --network | Connect to network | docker run --network mynet app |
| --restart | Restart policy | docker run --restart unless-stopped app |
| --rm | Remove container on exit | docker run --rm ubuntu cat /etc/os-release |
| -it | Interactive tty (combined) | docker run -it ubuntu bash |
Resource Constraint Flags:
```shell
# Memory limits
docker run --memory=512m myapp                    # Hard limit at 512 MB
docker run --memory=512m --memory-swap=1g myapp   # 512 MB RAM + 512 MB swap

# CPU limits
docker run --cpus=2 myapp               # Limit to 2 CPU cores
docker run --cpu-shares=512 myapp       # Relative weight (default 1024)
docker run --cpuset-cpus="0,1" myapp    # Pin to specific cores

# Combined production example
docker run -d \
  --name myapp \
  --memory=2g \
  --memory-swap=2g \
  --cpus=1.5 \
  --restart=unless-stopped \
  --health-cmd="curl -f http://localhost/health || exit 1" \
  --health-interval=30s \
  --health-timeout=10s \
  --health-retries=3 \
  -p 8080:8080 \
  myapp:latest
```
When a container exceeds its memory limit, the Linux OOM (Out-of-Memory) Killer terminates it. Docker reports this as exit code 137. Always set memory limits in production to prevent one container from consuming all host memory. Set --memory-swap equal to --memory to disable swap and make behavior predictable.
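The exit code 137 mentioned above is not arbitrary: shells and container runtimes report death-by-signal as 128 plus the signal number, and the OOM Killer uses SIGKILL (signal 9). A quick sketch—the container name `myapp` is hypothetical, and the docker command is shown commented since it needs a stopped container to inspect:

```shell
# Death-by-signal exit codes are 128 + signal number;
# SIGKILL is signal 9, so an OOM-killed container exits with 137
echo $((128 + 9))   # prints 137

# To confirm an OOM kill on a stopped container (hypothetical name "myapp"):
# docker inspect --format '{{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' myapp
```

Exit code 143 (128 + 15) similarly indicates a clean SIGTERM shutdown, e.g. from `docker stop`.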
Security-Related Flags:
```shell
# Run as specific user (avoid running as root)
docker run --user 1000:1000 myapp

# Read-only filesystem
docker run --read-only myapp

# Drop all capabilities, add only needed ones
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp

# Disable privilege escalation
docker run --security-opt=no-new-privileges:true myapp

# Custom seccomp profile
docker run --security-opt seccomp=/path/to/profile.json myapp

# Limit processes (fork bomb protection)
docker run --pids-limit=100 myapp

# Never run in production: gives container full host access
# docker run --privileged myapp   # DANGEROUS
```
A Dockerfile is a text file containing instructions to build a Docker image. Each instruction adds a layer to the image (RUN, COPY, and ADD create filesystem layers; the rest record metadata). Understanding how to write efficient Dockerfiles is fundamental to containerization.
Dockerfile Instruction Reference:
| Instruction | Purpose | Example |
|---|---|---|
| FROM | Base image to start from | FROM python:3.11-slim |
| WORKDIR | Set working directory | WORKDIR /app |
| COPY | Copy files from host to image | COPY . /app |
| RUN | Execute command during build | RUN pip install -r requirements.txt |
| ENV | Set environment variable | ENV NODE_ENV=production |
| ARG | Build-time variable | ARG VERSION=1.0 |
| EXPOSE | Document exposed ports (no publish) | EXPOSE 8080 |
| USER | Set user for subsequent commands | USER appuser |
| CMD | Default command when container starts | CMD ["python", "app.py"] |
| ENTRYPOINT | Fixed command, CMD becomes args | ENTRYPOINT ["./entrypoint.sh"] |
| HEALTHCHECK | Container health check | HEALTHCHECK CMD curl -f http://localhost/ |
| VOLUME | Declare volume mount point | VOLUME /data |
Example: Well-Structured Python Dockerfile:
```dockerfile
# Use specific version for reproducibility
FROM python:3.11-slim-bookworm

# Set build-time variables
ARG APP_VERSION=1.0.0

# Prevent Python from writing pyc files and buffering stdout/stderr
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    APP_VERSION=${APP_VERSION}

# Set working directory
WORKDIR /app

# Install curl for the health check below (not included in slim images)
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# Create non-root user for security
RUN groupadd --gid 1000 appgroup && \
    useradd --uid 1000 --gid 1000 --shell /bin/bash appuser

# Install dependencies first (for better layer caching)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY --chown=appuser:appgroup . .

# Switch to non-root user
USER appuser

# Document the port (doesn't publish it)
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Default command
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "app:app"]
```
ENTRYPOINT defines the executable that always runs. CMD provides default arguments that can be overridden. Use ENTRYPOINT when your container should always run a specific command (like a web server). Use CMD alone for development containers that might need different commands. Combine them when you have a fixed command with configurable arguments.
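The ENTRYPOINT/CMD interaction described above can be sketched in a few lines. This is an illustrative fragment, not from a real project—the image and script names (`myimage`, `manage.py`) are hypothetical:

```dockerfile
FROM python:3.11-slim

# Fixed executable: always runs (hypothetical script name)
ENTRYPOINT ["python", "manage.py"]

# Default arguments: replaced by anything given after the image name
CMD ["runserver", "0.0.0.0:8000"]

# docker run myimage                         -> python manage.py runserver 0.0.0.0:8000
# docker run myimage migrate                 -> python manage.py migrate
# docker run --entrypoint bash -it myimage   -> overrides ENTRYPOINT entirely
```

The key asymmetry: arguments after the image name replace CMD, while overriding ENTRYPOINT requires the explicit --entrypoint flag.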
Docker's layer caching is the key to fast builds. Each Dockerfile instruction creates a layer that can be cached. Understanding caching behavior helps you write Dockerfiles that build in seconds instead of minutes.
How Layer Caching Works: For each instruction, Docker checks whether it has already built a layer from the same instruction with the same inputs. For COPY and ADD, the inputs include a checksum of the copied files; for RUN, only the command string is compared. On a match, the cached layer is reused; otherwise the layer is rebuilt.
The critical rule: A cache miss invalidates all subsequent layers.
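That rule is why instruction order matters so much. Here is the same Python image written two ways—a sketch with generic file names, shown as two alternative Dockerfiles in one listing:

```dockerfile
# SLOW: any source change invalidates the COPY layer, and therefore
# the pip install layer after it—dependencies reinstall on every build
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt

# FAST: dependencies depend only on requirements.txt; editing
# application code rebuilds just the final COPY layer
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
```

The second form turns a minutes-long rebuild into seconds whenever only application code changed.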
Build Optimization Strategies:
- Order instructions from least to most frequently changing, so edits invalidate as few layers as possible.
- Copy dependency manifests (requirements.txt, package.json) and install dependencies before copying the rest of the source.
- Keep the build context small with a .dockerignore file, which also prevents secrets and local junk from leaking into images.
Example .dockerignore:
```
# Version control
.git
.gitignore

# Dependencies (will be installed in container)
node_modules
__pycache__
*.pyc
venv
.venv

# Build artifacts
dist
build
*.egg-info

# IDE and editor files
.idea
.vscode
*.swp

# Tests and docs (unless needed in production)
tests
docs
*.md
!README.md

# Local environment
.env
.env.local
*.log

# Docker files
Dockerfile*
docker-compose*
.dockerignore
```
Multi-stage builds are one of Docker's most powerful features for creating lean, secure production images. They solve the problem of needing build tools during image creation but not in the final image.
The problem multi-stage builds solve: compiling an application requires a full toolchain (the golang base image alone approaches 1 GB), but the finished artifact needs almost none of it. Without multi-stage builds you either ship compilers and source code in your production image or maintain awkward external build scripts. With them, you compile in one stage and copy only the artifacts into a minimal final stage:
```dockerfile
# ===== Stage 1: Build =====
FROM golang:1.21-alpine AS builder

# Install build dependencies
RUN apk add --no-cache git ca-certificates

WORKDIR /app

# Download dependencies (cached layer)
COPY go.mod go.sum ./
RUN go mod download

# Build the application
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

# ===== Stage 2: Runtime =====
FROM alpine:3.18 AS runtime

# Install runtime dependencies only
RUN apk --no-cache add ca-certificates

# Create non-root user
RUN adduser -D -g '' appuser

WORKDIR /app

# Copy ONLY the binary from builder stage
COPY --from=builder /app/main .

# Use non-root user
USER appuser

EXPOSE 8080
CMD ["./main"]

# Final image: ~20 MB instead of ~1 GB
# No compilers, no source code, minimal attack surface
```
Multi-stage for Node.js with dependency separation:
```dockerfile
# ===== Stage 1: Dependencies =====
FROM node:20-slim AS deps

WORKDIR /app

# Install dependencies only
COPY package.json package-lock.json ./
RUN npm ci --only=production

# ===== Stage 2: Build =====
FROM node:20-slim AS builder

WORKDIR /app

COPY package.json package-lock.json ./
RUN npm ci

COPY . .
RUN npm run build

# ===== Stage 3: Runtime =====
FROM node:20-slim AS runner

WORKDIR /app

ENV NODE_ENV=production

# Create non-root user
RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 nextjs

# Copy production dependencies from deps stage
COPY --from=deps /app/node_modules ./node_modules

# Copy built application from builder stage
COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./

USER nextjs

EXPOSE 3000
CMD ["npm", "start"]
```
You can name stages with 'AS name' and reference them with 'COPY --from=name'. You can also copy from external images: 'COPY --from=nginx:alpine /etc/nginx/nginx.conf /nginx.conf'. This lets you selectively extract artifacts from any image without including it in your build.
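Two related tricks are worth knowing. You can stop a build at a named stage with the --target flag, which is handy for producing a debug image that still contains the toolchain, and external-image copies work exactly like stage copies. A hedged sketch (the tag names are illustrative):

```dockerfile
# Copy a file from an unrelated public image without basing the build on it
FROM alpine:3.18
COPY --from=nginx:alpine /etc/nginx/nginx.conf /nginx.conf.example

# Build only up to a named stage from the examples above:
#   docker build --target builder -t myapp:build .
# The resulting image keeps compilers and source, useful for debugging.
```

Because --target reuses the same Dockerfile, your debug and production images can never drift apart.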
Real applications rarely run in a single container. A typical web application might include the application server, database, cache, and message queue. Docker Compose defines and runs multi-container applications using a YAML configuration file.
Docker Compose Key Concepts: a single docker-compose.yml declares your services, networks, and volumes; Compose creates a private network with DNS entries for each service; and one command starts or tears down the whole stack. Here is a complete example for a web app with Postgres and Redis:
```yaml
version: '3.8'   # ignored by Compose v2, kept for backward compatibility

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    volumes:
      - ./src:/app/src   # Development hot reload
    networks:
      - app-network

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - app-network

  cache:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    networks:
      - app-network

volumes:
  postgres_data:
  redis_data:

networks:
  app-network:
    driver: bridge
```
Essential Docker Compose Commands:
```shell
# Start all services (detached mode)
docker compose up -d

# Start and rebuild images
docker compose up -d --build

# View logs for all services
docker compose logs -f

# View logs for specific service
docker compose logs -f app

# Stop all services
docker compose stop

# Stop and remove containers, networks
docker compose down

# Stop and remove everything including volumes
docker compose down -v

# Scale a service
docker compose up -d --scale worker=3

# Execute command in running service
docker compose exec app sh

# View running services
docker compose ps
```
Containers in the same Compose network can reach each other by service name. In the example above, the app container connects to 'db:5432' and 'cache:6379'. Docker's embedded DNS resolves these names to container IP addresses. No IP addresses are hardcoded.
Let's consolidate our Docker knowledge:
- Docker is client-server: the docker CLI talks to dockerd, which delegates to containerd and runc.
- The core CLI workflow is pull, run, ps, logs, exec, stop, and rm, plus build, tag, and push for images.
- docker run flags control ports, environment, volumes, restart policy, resource limits, and security posture.
- Dockerfiles build images layer by layer; ordering instructions for cache reuse keeps builds fast.
- Multi-stage builds separate build tooling from the runtime image, shrinking size and attack surface.
- Docker Compose describes multi-container applications declaratively and wires them together with service-name DNS.
What's next:
With Docker basics mastered, we'll explore Container Images in depth—understanding how images are structured, versioning strategies, and how to create reproducible, scannable images that form the foundation of your deployment pipeline.
You now have the practical skills to build, run, and manage Docker containers. These fundamentals will serve you whether you're developing locally, deploying to production, or working with container orchestration platforms like Kubernetes.