Kubernetes fundamentally changes how we think about infrastructure management. Instead of writing scripts that tell the system how to reach a state (imperative), you declare what state you want and let Kubernetes figure out how to achieve it (declarative).
This shift may seem subtle, but it's transformative. Declarative configuration enables self-healing, GitOps workflows, reliable rollbacks, and operations at scale that would be impossible with imperative approaches. Understanding and embracing this paradigm is essential for effective Kubernetes usage.
By the end of this page, you will understand the declarative model deeply—how manifests are structured, how the reconciliation loop works, why declarative beats imperative at scale, and how to structure your configurations for maintainability and reliability.
Before Kubernetes, infrastructure management was typically imperative: you wrote scripts that executed step by step. This approach has fundamental limitations; the script only handles the situations you anticipated, and once it finishes it has no way to detect or correct drift.
Kubernetes is declarative: you state the desired end state, and controllers work continuously to achieve and maintain it.
Example: Imperative Script vs. Declarative Manifest
Imperative (shell script):
#!/bin/bash
if ! kubectl get deployment nginx &>/dev/null; then
  kubectl create deployment nginx --image=nginx:1.25
fi
kubectl scale deployment nginx --replicas=3
kubectl set image deployment/nginx nginx=nginx:1.25
# What if a pod crashes? What if the node dies? Script doesn't know.
Declarative (YAML manifest):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels: { app: nginx }
  template:
    metadata:
      labels: { app: nginx }
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
# Apply once. Kubernetes maintains this state forever.
# Pod crashes? Replaced. Node dies? Rescheduled.
With declarative configuration, Kubernetes controllers continuously compare desired state (your manifest) with actual state (what's running). Deviations are automatically corrected. A Pod dies? Controller creates a new one. Someone manually deletes a replica? Controller restores it. This is self-healing at infrastructure level.
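A quick way to watch this self-healing, assuming the nginx Deployment above has been applied (the Pod name below is a placeholder for whatever your cluster returns):

kubectl get pods -l app=nginx                # note one of the Pod names
kubectl delete pod <one-of-the-nginx-pods>   # simulate a crash
kubectl get pods -l app=nginx --watch        # a replacement Pod appears almost immediately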
Every Kubernetes resource follows a consistent structure. Understanding this structure helps you read, write, and debug manifests effectively.
Required Fields:
| Field | Purpose |
|---|---|
| apiVersion | API group and version (e.g., apps/v1, v1) |
| kind | Resource type (e.g., Deployment, Service) |
| metadata | Name, namespace, labels, annotations |
| spec | Desired state specification |
Anatomy of a Complete Manifest:
# API version determines available fields and behaviors
apiVersion: apps/v1

# Resource type
kind: Deployment

# Metadata: identity and organization
metadata:
  # Required: resource name (unique within namespace for this kind)
  name: web-app
  # Optional: namespace (defaults to 'default')
  namespace: production
  # Optional: key-value pairs for organization and selection
  labels:
    app: web
    environment: production
    team: platform
    version: v2.1.0
  # Optional: non-identifying metadata (for tools, documentation)
  annotations:
    kubernetes.io/change-cause: "Deployed v2.1.0 with performance fixes"
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"

# Spec: the desired state (structure varies by resource type)
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  # For Deployments, spec contains a Pod template
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myapp:v2.1.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"

# Status: current state (managed by Kubernetes, not in manifests)
# status:
#   replicas: 3
#   readyReplicas: 3
#   availableReplicas: 3
#   conditions:
#   - type: Available
#     status: "True"

Labels vs. Annotations:
| Aspect | Labels | Annotations |
|---|---|---|
| Purpose | Identify and select objects | Attach non-identifying metadata |
| Selection | Used in selectors and queries | Cannot be used for selection |
| Constraints | Limited to 63 chars, alphanumeric | Up to 256KB, any string |
| Examples | app: web, env: prod | description: "Main API", last-deployed: "2024-01-15" |
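To make the distinction concrete, here is a small sketch of how labels and annotations are used from kubectl, assuming the web-app Deployment from the anatomy example above exists:

kubectl get pods -l app=web,environment=production   # select objects by label
kubectl get pods -L app,version                      # show label values as extra columns

# Annotations cannot be used in selectors; read them off the object instead
kubectl describe deployment web-app | grep -A2 Annotations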
Common Label Conventions:
| Label | Purpose |
|---|---|
| app.kubernetes.io/name | Application name |
| app.kubernetes.io/version | Application version |
| app.kubernetes.io/component | Component within application |
| app.kubernetes.io/managed-by | Tool managing this resource (Helm, ArgoCD) |
The 'status' section is managed by Kubernetes controllers, reflecting current state. Never include it in your manifests—it will be ignored or cause errors. Your manifests define 'spec' (desire); Kubernetes updates 'status' (reality).
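You can see the spec/status split on a live object. A minimal sketch, assuming the web-app Deployment exists in the cluster:

kubectl get deployment web-app -o jsonpath='{.spec.replicas}'          # desired state (from your manifest)
kubectl get deployment web-app -o jsonpath='{.status.readyReplicas}'   # current state (written by the controller)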
The magic of Kubernetes lies in its reconciliation loops. Controllers continuously watch resources and work to make reality match your declared intent.
The Controller Pattern:
Every controller follows the same pattern: observe the resources it owns, compare the current state with the desired state, act to close the gap, and repeat.
This is sometimes called a "level-triggered" system—controllers react to the current level (state) rather than edges (individual changes).
Reconciliation Loop Illustrated:

1. OBSERVE: Watch the API server for Deployment changes. A user applies a manifest (kubectl apply) and the controller receives the event "Deployment web-app updated".
2. READ CURRENT STATE: Query the API: "What Pods exist with label app=web?" Result: 2 Pods running.
3. READ DESIRED STATE: Query the API: "What's the Deployment spec?" Result: replicas=3.
4. COMPARE: Current: 2 Pods. Desired: 3 Pods. Difference: 1 more Pod is needed.
5. ACT: Create 1 new Pod from the template spec and update the Deployment status.
6. REPEAT: Return to step 1 and wait for the next change. The loop runs until the controller shuts down.

Why Level-Triggered Beats Edge-Triggered:
In an edge-triggered system, the controller would react to individual events: "Pod created", "Pod deleted". This creates problems: a missed or dropped event leaves the system permanently out of sync, duplicate or out-of-order events require careful bookkeeping, and a controller restart loses any in-flight history.
Level-triggered systems don't care about event history—they only care about "what is current state vs. desired state?" This makes them resilient to missed events, safe to restart at any time, and naturally idempotent.
Declarative systems are eventually consistent—there's always some lag between declaring state and achieving it. This is a feature, not a bug. It allows graceful handling of failures, retries, and concurrent operations. Design your applications to tolerate this lag.
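In practice that means waiting explicitly rather than assuming apply is synchronous. A minimal sketch, assuming the web-app Deployment manifest from earlier:

kubectl apply -f deployment.yaml
kubectl rollout status deployment/web-app --timeout=120s                  # block until the rollout finishes or times out
kubectl wait --for=condition=Available deployment/web-app --timeout=120s  # or wait on a condition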
Understanding the different kubectl commands for managing resources is crucial for declarative workflows.
kubectl create (Imperative): creates the resource and errors if it already exists.
kubectl apply (Declarative): creates the resource if it is missing, otherwise merges your changes into the live object.
kubectl replace (Imperative): overwrites the entire live object with your manifest and errors if the resource does not exist.
Comparison:
| Command | Resource Exists | Resource Doesn't Exist | Use Case |
|---|---|---|---|
| kubectl create | Error | Creates | Initial creation scripts |
| kubectl apply | Updates (merge) | Creates | GitOps, declarative workflows |
| kubectl replace | Replaces | Error | Full manifest replacement |
| kubectl patch | Patches specific fields | Error | Targeted updates |
| kubectl delete | Deletes | Error (no-op with --ignore-not-found) | Resource removal |
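A quick sketch of the create-vs-apply difference in a shell, assuming deployment.yaml is the manifest from earlier:

kubectl create -f deployment.yaml   # first run: creates the Deployment
kubectl create -f deployment.yaml   # second run: fails with an AlreadyExists error
kubectl apply -f deployment.yaml    # same file: succeeds, merging changes (no-op if nothing changed)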
How kubectl apply Works (Three-Way Merge):
kubectl apply compares three versions of the resource: the last-applied configuration (stored in an annotation on the live object), the live state on the API server, and your new local manifest.
This three-way merge allows your changes to be applied, fields removed from your manifest to be deleted, and changes made by other controllers (such as an HPA adjusting replicas) to be preserved.
Three-Way Merge Example:

Last Applied:
  replicas: 3
  image: myapp:v1

Live Server (after HPA scaled up):
  replicas: 5      ← Changed by HPA
  image: myapp:v1

New Local Manifest:
  replicas: 3
  image: myapp:v2  ← Your change

Result of kubectl apply:
  replicas: 5      ← Preserved (not in the delta between last-applied and new)
  image: myapp:v2  ← Updated (you changed it)

This is why 'apply' is smart: it respects changes made by other actors.

In GitOps workflows, always use 'kubectl apply -f'. The last-applied-configuration annotation is essential for proper merge behavior. Using 'create' or 'replace' breaks this tracking and can cause unexpected behavior.
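If you want to inspect the first of those three versions, kubectl can show what it recorded on the last apply; a small sketch, assuming the web-app Deployment was created with kubectl apply:

kubectl apply view-last-applied deployment/web-app
# The same data is stored on the live object in the
# kubectl.kubernetes.io/last-applied-configuration annotation.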
Real applications have dozens of resources: Deployments, Services, ConfigMaps, Secrets, Ingresses, PVCs, ServiceAccounts, Roles, and more. Organizing these manifests well is critical for maintainability.
Directory Structure Patterns:
Pattern 1: By Resource Type (Simple Projects)

k8s/
├── deployments/
│   ├── web.yaml
│   └── api.yaml
├── services/
│   ├── web.yaml
│   └── api.yaml
├── configmaps/
│   └── app-config.yaml
└── ingresses/
    └── main-ingress.yaml

Pattern 2: By Component (Microservices)

k8s/
├── web/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── configmap.yaml
├── api/
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── configmap.yaml
│   └── secret.yaml   (reference only; the actual Secret is managed separately)
├── database/
│   ├── statefulset.yaml
│   ├── service.yaml
│   └── pvc.yaml
└── common/
    ├── namespace.yaml
    └── network-policies.yaml

Pattern 3: By Environment with Kustomize

k8s/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── configmap.yaml
└── overlays/
    ├── development/
    │   ├── kustomization.yaml
    │   └── replicas-patch.yaml
    ├── staging/
    │   ├── kustomization.yaml
    │   └── replicas-patch.yaml
    └── production/
        ├── kustomization.yaml
        ├── replicas-patch.yaml
        └── resources-patch.yaml

Multi-Document YAML Files:
You can combine related resources in a single file using the --- separator:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: { app: web }
  ports: [{ port: 80 }]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector: { matchLabels: { app: web } }
  template:
    metadata: { labels: { app: web } }
    spec:
      containers:
      - name: web
        image: nginx
File Naming Conventions:
- web-deployment.yaml
- api-service.yaml
- monitoring.yaml (Prometheus + Grafana)

Don't duplicate entire manifests for different environments. Use Kustomize overlays or Helm values to manage environment-specific differences. DRY (Don't Repeat Yourself) applies to infrastructure code too.
Declarative configuration extends to application configuration itself. Instead of baking config into images, externalize it using ConfigMaps and Secrets.
ConfigMaps: Store non-sensitive configuration data.
Secrets: Store sensitive data (passwords, tokens, certificates).
Creating ConfigMaps:
# ConfigMap with key-value pairs
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "postgres.database.svc.cluster.local"
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"

---
# ConfigMap with file content
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    server {
      listen 80;
      server_name localhost;
      location / {
        proxy_pass http://backend:8080;
      }
    }

---
# Using ConfigMaps in Pods
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myapp:v1
    # Method 1: Environment variables from ConfigMap
    envFrom:
    - configMapRef:
        name: app-config
    # Method 2: Individual env vars
    env:
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: DATABASE_HOST
    # Method 3: Mount as files
    volumeMounts:
    - name: config-volume
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: config-volume
    configMap:
      name: nginx-config

Secrets:
Secrets work similarly to ConfigMaps but with a focus on sensitive data:
# Secret with encoded values (base64)
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=           # base64 encoded 'admin'
  password: cGFzc3dvcmQxMjM=   # base64 encoded 'password123'

---
# Secret with plaintext (stringData is auto-encoded)
apiVersion: v1
kind: Secret
metadata:
  name: api-keys
type: Opaque
stringData:
  api-key: "sk-live-abc123xyz789"
  webhook-secret: "whsec_somelongsecret"

---
# Using Secrets in Pods
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myapp:v1
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password

By default, Kubernetes Secrets are only base64-encoded, not encrypted. Anyone with API access can read them. For real security: enable encryption at rest in etcd, use external secret managers (Vault, AWS Secrets Manager), or use tools like Sealed Secrets for GitOps workflows.
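Two commonly useful commands around Secrets, assuming the db-credentials Secret above: let kubectl do the base64 encoding for you, and remember that anyone with read access can decode the values.

# Generate a Secret manifest without hand-encoding base64
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=password123 \
  --dry-run=client -o yaml > db-credentials.yaml

# base64 is encoding, not encryption
kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d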
Kubernetes resources often depend on each other. A Pod references a ConfigMap; a Deployment references a ServiceAccount; an Ingress references a Service. How does kubectl apply handle these dependencies?
The Short Answer: It Doesn't (Directly)
Kubernetes is eventually consistent. You can apply a Deployment that references a non-existent ConfigMap—the Pods will fail to start, but Kubernetes will retry. Once the ConfigMap is created, Pods will start successfully.
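A sketch of that retry behavior, assuming a Deployment whose Pods read from a ConfigMap defined in configmap.yaml:

kubectl apply -f deployment.yaml   # the referenced ConfigMap doesn't exist yet
kubectl get pods -l app=web        # containers typically report CreateContainerConfigError
kubectl apply -f configmap.yaml    # create the dependency; the kubelet retries and the Pods start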
Dependency Ordering Considerations:
| Resource | Typical Dependencies | What Happens If Missing |
|---|---|---|
| Pod | ConfigMaps, Secrets, PVCs, ServiceAccount | Pod stays in Pending or container fails to start |
| Deployment | ConfigMaps, Secrets, ServiceAccount | Pods fail to start |
| Service | None (Pods selected by labels) | Service works but has no endpoints |
| Ingress | Service | Ingress backend unhealthy |
| PVC | StorageClass | PVC stays Pending |
Strategies for Handling Dependencies:
1. Apply Everything Together
kubectl apply -f k8s/ # Apply entire directory
Kubernetes reconciliation will eventually resolve dependencies. This works well for most cases.
2. Apply in Order (Critical Dependencies)
kubectl apply -f k8s/namespaces.yaml
kubectl apply -f k8s/crds/ # Custom Resource Definitions first
kubectl apply -f k8s/configmaps/
kubectl apply -f k8s/secrets/
kubectl apply -f k8s/deployments/
kubectl apply -f k8s/services/
kubectl apply -f k8s/ingresses/
3. Use Init Containers for Runtime Dependencies
initContainers:
- name: wait-for-db
  image: busybox
  command: ['sh', '-c', 'until nc -z db-service 5432; do sleep 1; done']
Custom Resource Definitions (CRDs) must be applied and reach the Established condition before you can create Custom Resources of that type. Unlike other dependencies, this isn't eventually consistent—the API rejects unknown resource types. Always apply CRDs first.
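A minimal ordering sketch; the CRD name and directories here are hypothetical:

kubectl apply -f k8s/crds/
kubectl wait --for=condition=Established crd/backups.example.com --timeout=60s
kubectl apply -f k8s/backups/   # Custom Resources of the new type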
Before applying changes to production, you want to see what will change. kubectl provides tools for this.
Dry Run Modes:
# Client-side dry run (no API validation)
kubectl apply -f deployment.yaml --dry-run=client
# Output: deployment.apps/web-app created (dry run)
# Only checks YAML syntax, not server-side validation

# Server-side dry run (full validation, no persist)
kubectl apply -f deployment.yaml --dry-run=server
# API server validates request, runs admission webhooks
# But doesn't persist the change

# Output the resulting manifest
kubectl apply -f deployment.yaml --dry-run=client -o yaml
# Shows what would be sent to API server

# Diff: See what would change
kubectl diff -f deployment.yaml
# Shows diff between local manifest and server state
# Like 'git diff' for Kubernetes resources

# Example diff output:
# -  replicas: 3
# +  replicas: 5
# -  image: myapp:v1
# +  image: myapp:v2

Best Practices for Safe Deployments:
- Run kubectl diff -f manifest.yaml before you apply

GitOps Integration:
Tools like ArgoCD and Flux automate this workflow: they continuously compare the manifests in Git with the live cluster state and apply (or report) any difference.
In PRs, run 'kubectl diff' and post the output as a comment. Reviewers can see exactly what Kubernetes changes the PR introduces. This is like 'terraform plan' for Kubernetes—review before apply.
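A hypothetical CI step that captures the diff for a PR comment (kubectl diff exits non-zero when differences exist, so don't let that fail the job):

kubectl diff -f k8s/ > k8s-diff.txt || true
cat k8s-diff.txt   # post this file as the PR comment using your CI tooling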
Declarative configuration creates and updates resources—but what about deletion? If you remove a resource from your manifests, by default it stays in the cluster.
The Problem:
# Day 1: Apply all resources
kubectl apply -f k8s/
# Creates: web-deployment, api-deployment, old-deployment
# Day 2: Remove old-deployment.yaml from k8s/
kubectl apply -f k8s/
# Creates/updates: web-deployment, api-deployment
# old-deployment still exists! (orphaned)
The Solution: --prune
# Prune: delete resources not in manifests (use carefully!)
kubectl apply -f k8s/ --prune -l app=myapp

# How it works:
# 1. Apply all manifests in k8s/
# 2. Find all resources with label app=myapp
# 3. Delete resources with that label not in manifests

# Safer: dry run first
kubectl apply -f k8s/ --prune -l app=myapp --dry-run=server

# Modern approach: --prune with --applyset (Kubernetes 1.27+)
kubectl apply -f k8s/ --prune --applyset=myapp-resources
# Tracks applied resources in a ConfigMap, prunes based on that

# Example workflow:
# 1. Always use consistent labels on all resources
# 2. Always use the same apply command with --prune -l
# 3. When you remove a manifest file, the resource is deleted

GitOps Pruning:
GitOps tools handle this more elegantly: they track what was applied from Git and automatically delete orphaned resources when the corresponding manifests are removed.
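For illustration, this is roughly how ArgoCD expresses pruning and drift correction on an Application; the repository URL and names here are hypothetical:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-manifests   # hypothetical repository
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # delete resources whose manifests were removed from Git
      selfHeal: true   # revert manual changes made directly in the cluster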
Deletion Strategies:
| Strategy | Behavior | Use Case |
|---|---|---|
| No prune | Orphaned resources remain | Legacy migration, careful control |
| --prune with labels | Delete matching orphans | Traditional kubectl GitOps |
| ApplySet pruning | Track and delete orphans | Modern kubectl (1.27+) |
| GitOps auto-prune | Tool tracks and deletes | ArgoCD, Flux |
Misconfigured prune labels can delete resources you didn't intend. If your label selector is too broad, you might delete unrelated resources. Always use specific labels and test with --dry-run=server first.
Kustomize is built into kubectl and provides a way to customize manifests without templates. It follows a declarative, overlay-based approach.
Core Concepts: a base contains the manifests shared by every environment, overlays reference the base and add environment-specific changes, and patches describe targeted modifications to individual resources.
Kustomize Project Structure:

app/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── configmap.yaml
└── overlays/
    ├── development/
    │   ├── kustomization.yaml
    │   └── patches/
    │       └── replicas.yaml
    ├── staging/
    │   ├── kustomization.yaml
    │   ├── patches/
    │   │   └── replicas.yaml
    │   └── resources.yaml
    └── production/
        ├── kustomization.yaml
        ├── patches/
        │   ├── replicas.yaml
        │   └── resources.yaml
        └── hpa.yaml
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployment.yaml
- service.yaml

commonLabels:
  app: myapp

---
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base
- hpa.yaml

namespace: production

# Add name suffix
nameSuffix: -prod

# Override image tag
images:
- name: myapp
  newTag: v2.1.0

# Apply patches
patches:
- path: patches/replicas.yaml
- path: patches/resources.yaml

# Generate ConfigMap from files
configMapGenerator:
- name: app-config
  files:
  - config.properties
  options:
    labels:
      env: production

---
# overlays/production/patches/replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp   # matches base name
spec:
  replicas: 5   # override base value

Using Kustomize:
# Preview rendered manifests
kubectl kustomize overlays/production/
# Apply directly
kubectl apply -k overlays/production/
# Or with --dry-run
kubectl apply -k overlays/production/ --dry-run=server
Kustomize: Patch-based, no templates, built into kubectl. Best for customizing your own applications.
Helm: Template-based, package manager, huge ecosystem. Best for third-party charts and complex parameterization.
You can even combine them: use Helm to install the base, Kustomize to customize.
We've covered the declarative paradigm that makes Kubernetes powerful: desired state expressed in manifests, reconciliation loops that continuously enforce it, kubectl apply's three-way merge, externalized configuration with ConfigMaps and Secrets, pruning for deletions, and Kustomize for environment-specific customization.
What's Next:
You now understand Kubernetes' declarative model deeply: how manifests are structured, how reconciliation maintains state, and how to organize configurations for real projects. The final page explores the broader Kubernetes ecosystem: the tools, extensions, and resources that make Kubernetes a complete platform for production workloads.