Infrastructure as code brings tremendous benefits: version control, reproducibility, peer review, automation. But it creates a fundamental tension when it comes to secrets. Your database password, API keys, TLS private keys, and other sensitive credentials are as much a part of configuration as nginx worker counts or log retention policies.
Yet the cardinal rule of security is clear: never commit secrets to version control. How do we reconcile infrastructure-as-code with secret security? How do we achieve reproducibility without storing plaintext credentials in Git?
This challenge has spawned an entire category of specialized tools and practices. Secrets management is no longer an afterthought—it's a first-class concern in modern infrastructure.
Secrets leaked to Git remain in history forever. Attackers actively scan public repositories for accidentally committed credentials. A single exposed AWS key can result in cryptocurrency mining bills in the hundreds of thousands of dollars. Database credentials can lead to complete data breaches. This is not theoretical—it happens to organizations of all sizes, regularly.
This page provides a comprehensive exploration of secrets management: the types of secrets in configuration, the risks of improper handling, the major tools and approaches (HashiCorp Vault, AWS Secrets Manager, SOPS, External Secrets Operator), integration patterns, and best practices for secure, auditable, and operationally practical secrets management.
Secrets are sensitive data that, if exposed, could lead to unauthorized access, data breaches, or system compromise. In configuration management, secrets fall into several categories:
Types of Configuration Secrets
| Category | Examples | Sensitivity | Typical Lifespan |
|---|---|---|---|
| Database Credentials | PostgreSQL password, MongoDB connection string | Critical | Long (months) |
| API Keys | AWS access keys, Stripe API keys, SendGrid keys | High-Critical | Medium (weeks-months) |
| TLS/SSL Certificates | Private keys, certificate files | Critical | Long (1-2 years) |
| Service Account Tokens | Kubernetes SA tokens, GCP service accounts | High | Variable |
| Encryption Keys | AES keys, KMS key IDs, JWT signing keys | Critical | Long (may be permanent) |
| OAuth/SSO Secrets | Client secrets, SAML certificates | High | Medium-Long |
| Internal Tokens | Inter-service authentication, API gateways | Medium-High | Short-Medium |
| Webhook Secrets | GitHub webhook secrets, Slack signing secrets | Medium | Long |
The Secrets Lifecycle
Secrets have a lifecycle that must be managed:
Generation — Creating secrets with appropriate entropy and format requirements.
Storage — Safely storing secrets with encryption at rest and access controls.
Distribution — Getting secrets to applications and systems that need them.
Rotation — Regularly changing secrets to limit exposure window.
Revocation — Immediately disabling compromised or no-longer-needed secrets.
Auditing — Tracking who accessed which secrets when.
Each phase presents security challenges. A comprehensive secrets management strategy addresses all phases.
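The generation phase is particularly easy to get wrong: ad-hoc passwords tend to be short and guessable. As a minimal sketch (function names are illustrative), Python's standard `secrets` module provides a CSPRNG suitable for generating credentials with adequate entropy:

```python
import secrets
import string

def generate_api_key(prefix: str = "app", n_bytes: int = 32) -> str:
    """Generate a URL-safe API key with roughly 256 bits of entropy."""
    return f"{prefix}_{secrets.token_urlsafe(n_bytes)}"

def generate_password(length: int = 32) -> str:
    """Generate a random password from a mixed alphabet using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return ''.join(secrets.choice(alphabet) for _ in range(length))

key = generate_api_key()
pw = generate_password()
```

Never use `random` for this purpose — it is not cryptographically secure. The `secrets` module exists precisely for credential generation.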
Removing a secret from Git doesn't remove it from history. Even after removal, the secret remains in every clone's history until a history rewrite (git filter-branch or BFG Repo-Cleaner) is performed—and even then, cached copies may exist in CI systems, developer machines, or forks. Prevention is massively easier than remediation.
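The scanners attackers (and defenders) use work largely by regular-expression matching over file contents and history. A toy illustration of the idea — real tools such as gitleaks add entropy analysis and hundreds of rules:

```python
import re

# A few high-signal patterns (illustrative subset of what real scanners use)
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[A-Z0-9]{16}"),
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (rule_name, matched_string) pairs for anything that looks like a secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits

diff = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\nlog_level = "info"\n'
print(scan_text(diff))  # [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE')]
```

Running such a scan in a pre-commit hook (shown later in this page) catches secrets before they ever reach history, which is the only cheap point of intervention.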
HashiCorp Vault has emerged as the industry standard for enterprise secrets management. It provides a unified approach to secrets across on-premises and cloud environments.
Vault Architecture
Vault operates as a centralized secrets management service with several key architectural components:
Storage Backend — Where encrypted secrets are persisted (Consul, DynamoDB, S3, integrated storage).
Secrets Engines — Pluggable components that store, generate, or transform secrets (KV, database, AWS, PKI).
Auth Methods — How clients authenticate (tokens, AppRole, Kubernetes, AWS IAM, LDAP).
Policies — Fine-grained access control defining what authenticated identities can do.
Audit Devices — Logging of all access to secrets for compliance and forensics.
```hcl
# Vault Server Configuration
# Production-ready configuration with HA and auto-unseal

# Storage backend: Integrated Raft storage for HA
storage "raft" {
  path    = "/vault/data"
  node_id = "vault-1"

  # Cluster configuration for HA
  retry_join {
    leader_api_addr = "https://vault-1.internal:8200"
  }
  retry_join {
    leader_api_addr = "https://vault-2.internal:8200"
  }
  retry_join {
    leader_api_addr = "https://vault-3.internal:8200"
  }
}

# Listener configuration
listener "tcp" {
  address         = "0.0.0.0:8200"
  cluster_address = "0.0.0.0:8201"

  # TLS configuration (required in production)
  tls_cert_file = "/vault/certs/vault.crt"
  tls_key_file  = "/vault/certs/vault.key"

  # mTLS for cluster communication
  tls_client_ca_file = "/vault/certs/ca.crt"
}

# Auto-unseal using AWS KMS
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal"
}

# API and cluster addresses
api_addr     = "https://vault.internal:8200"
cluster_addr = "https://vault-1.internal:8201"

# UI access (optional)
ui = true

# Audit devices are enabled at runtime, not in this file:
#   vault audit enable file file_path=/vault/logs/audit.log

# Telemetry for monitoring
telemetry {
  prometheus_retention_time = "30s"
  disable_hostname          = true
}
```

Key Vault Features
```hcl
# Vault Policy: Web Application Access
# Defines exactly what the web application can access

# Read-only access to application configuration
path "secret/data/apps/webapp/config" {
  capabilities = ["read"]
}

# Read-only access to database credentials
path "database/creds/webapp-role" {
  capabilities = ["read"]
}

# Ability to generate TLS certificates
path "pki/issue/webapp" {
  capabilities = ["create", "update"]
}

# No access to other applications' secrets
# (implicit deny - Vault is deny-by-default)
```

```hcl
# Vault Policy: Database Administrator
# Broader access for database management

# Manage database connections
path "database/config/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# Manage database roles
path "database/roles/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# Rotate root credentials
path "database/rotate-root/*" {
  capabilities = ["update"]
}

# Read (but not modify) secrets for backup purposes
path "secret/data/apps/+/database" {
  capabilities = ["read"]
}
```

```hcl
# Vault Policy: Operator (break-glass access)
# Emergency access with heavy auditing

path "secret/*" {
  capabilities = ["read", "list"]

  # Require MFA parameters for this policy
  required_parameters = ["mfa_method", "mfa_passcode"]
}

# Cannot delete or modify - read-only emergency access
path "auth/*" {
  capabilities = ["read", "list"]
}
```
"""Vault Integration: Application Secret RetrievalDemonstrates secure secret consumption in applications""" import hvacimport osfrom functools import lru_cachefrom typing import Optional, Dictimport logging logging.basicConfig(level=logging.INFO)logger = logging.getLogger(__name__) class VaultClient: """ Production-ready Vault client with caching and error handling. """ def __init__(self): self.vault_addr = os.environ.get('VAULT_ADDR', 'https://vault.internal:8200') self.client = self._authenticate() def _authenticate(self) -> hvac.Client: """ Authenticate using the appropriate method for the environment. Supports Kubernetes auth, AppRole, and token-based auth. """ client = hvac.Client(url=self.vault_addr) # Kubernetes Service Account Authentication (preferred in K8s) if os.path.exists('/var/run/secrets/kubernetes.io/serviceaccount/token'): with open('/var/run/secrets/kubernetes.io/serviceaccount/token') as f: jwt = f.read() vault_role = os.environ.get('VAULT_ROLE', 'webapp') client.auth.kubernetes.login( role=vault_role, jwt=jwt, mount_point='kubernetes' ) logger.info("Authenticated via Kubernetes service account") # AppRole Authentication (for non-K8s environments) elif os.environ.get('VAULT_ROLE_ID'): role_id = os.environ['VAULT_ROLE_ID'] secret_id = os.environ.get('VAULT_SECRET_ID', '') client.auth.approle.login( role_id=role_id, secret_id=secret_id, mount_point='approle' ) logger.info("Authenticated via AppRole") # Token-based (development only) elif os.environ.get('VAULT_TOKEN'): client.token = os.environ['VAULT_TOKEN'] logger.warning("Using token auth - not recommended for production") else: raise RuntimeError("No valid Vault authentication method available") if not client.is_authenticated(): raise RuntimeError("Vault authentication failed") return client def get_secret(self, path: str, key: Optional[str] = None) -> Dict: """ Retrieve a secret from Vault's KV v2 engine. 
Args: path: The path to the secret (e.g., 'apps/webapp/config') key: Optional specific key within the secret Returns: The secret data as a dictionary """ try: secret = self.client.secrets.kv.v2.read_secret_version( path=path, mount_point='secret' ) data = secret['data']['data'] if key: return {key: data.get(key)} return data except hvac.exceptions.InvalidPath: logger.error(f"Secret not found: {path}") raise except hvac.exceptions.Forbidden: logger.error(f"Access denied to secret: {path}") raise def get_database_credentials(self, role: str) -> Dict[str, str]: """ Generate dynamic database credentials. These credentials are short-lived and automatically revoked. Args: role: The database role to use Returns: Dictionary with 'username' and 'password' """ creds = self.client.secrets.database.generate_credentials( name=role, mount_point='database' ) # Log the lease for management lease_id = creds['lease_id'] lease_duration = creds['lease_duration'] logger.info(f"Generated DB credentials (lease: {lease_id}, TTL: {lease_duration}s)") return { 'username': creds['data']['username'], 'password': creds['data']['password'], 'lease_id': lease_id, 'lease_duration': lease_duration } def get_aws_credentials(self, role: str) -> Dict[str, str]: """ Generate dynamic AWS credentials via the AWS secrets engine. Args: role: The AWS role to assume Returns: AWS credentials dictionary """ creds = self.client.secrets.aws.generate_credentials( name=role, mount_point='aws' ) return { 'access_key': creds['data']['access_key'], 'secret_key': creds['data']['secret_key'], 'security_token': creds['data'].get('security_token'), 'lease_id': creds['lease_id'] } def renew_lease(self, lease_id: str) -> bool: """ Renew a lease to extend credential lifetime. Call this periodically for long-running processes. 
""" try: self.client.sys.renew_lease(lease_id=lease_id) logger.info(f"Renewed lease: {lease_id}") return True except hvac.exceptions.InvalidRequest: logger.warning(f"Could not renew lease: {lease_id}") return False # Application usage exampleif __name__ == "__main__": vault = VaultClient() # Get application configuration config = vault.get_secret('apps/webapp/config') print(f"App config loaded: {list(config.keys())}") # Get dynamic database credentials db_creds = vault.get_database_credentials('webapp-role') print(f"DB user: {db_creds['username']} (TTL: {db_creds['lease_duration']}s)") # Use credentials to connect to database # connection_string = f"postgresql://{db_creds['username']}:{db_creds['password']}@..."Dynamic secrets fundamentally change the security model. Instead of long-lived credentials that can be stolen and used indefinitely, each application instance gets unique, short-lived credentials. If compromised, they expire automatically. Rotation becomes a non-issue—every credential is 'rotated' on every request.
Cloud providers offer managed secrets services that integrate deeply with their ecosystems. For organizations already invested in a cloud platform, these can provide excellent secrets management with minimal operational overhead.
AWS Secrets Manager
AWS Secrets Manager provides centralized secrets management with automatic rotation for supported services (RDS, Redshift, DocumentDB).
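On the consumption side, applications typically fetch the secret at startup via the AWS SDK. A sketch using boto3 — the secret name and JSON field layout here mirror the Terraform example that follows and are assumptions, not fixed API contracts:

```python
import json

def build_database_url(secret_string: str) -> str:
    """Turn the JSON secret payload into a PostgreSQL connection URL."""
    s = json.loads(secret_string)
    return f"postgresql://{s['username']}:{s['password']}@{s['host']}:{s['port']}/{s['dbname']}"

def fetch_database_url(secret_id: str = "production/database/credentials") -> str:
    """Retrieve the secret from AWS Secrets Manager (requires an IAM role/credentials at runtime)."""
    import boto3
    client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId=secret_id)
    return build_database_url(resp["SecretString"])

# Parsing logic demonstrated locally with a sample payload:
sample = json.dumps({"username": "app_user", "password": "s3cret",
                     "host": "db.internal", "port": 5432, "dbname": "production"})
print(build_database_url(sample))
# postgresql://app_user:s3cret@db.internal:5432/production
```

In production, cache the result and rely on the SDK's built-in retry behavior; fetching on every request multiplies both latency and API-call costs.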
```hcl
# Terraform: AWS Secrets Manager Configuration
# Demonstrates secret creation with automatic rotation

# Create a secret for RDS credentials
resource "aws_secretsmanager_secret" "db_credentials" {
  name        = "production/database/credentials"
  description = "PostgreSQL database credentials for production"

  # KMS key for encryption (use customer-managed for production)
  kms_key_id = aws_kms_key.secrets.id

  # Recovery window for accidental deletion protection
  recovery_window_in_days = 30

  tags = {
    Environment = "production"
    Application = "webapp"
    ManagedBy   = "terraform"
  }
}

# Store the secret value
resource "aws_secretsmanager_secret_version" "db_credentials" {
  secret_id = aws_secretsmanager_secret.db_credentials.id
  secret_string = jsonencode({
    username = "app_user"
    password = random_password.db_password.result
    engine   = "postgresql"
    host     = aws_db_instance.main.address
    port     = 5432
    dbname   = "production"
  })
}

# Generate random password
resource "random_password" "db_password" {
  length  = 32
  special = true

  # Exclude characters that may cause issues in connection strings
  override_special = "!#$%&*()-_=+[]{}|:,.<>?"
}

# Configure automatic rotation
resource "aws_secretsmanager_secret_rotation" "db_credentials" {
  secret_id           = aws_secretsmanager_secret.db_credentials.id
  rotation_lambda_arn = aws_lambda_function.secret_rotation.arn

  rotation_rules {
    automatically_after_days = 30
  }
}

# IAM policy for application access
resource "aws_iam_policy" "secret_access" {
  name        = "webapp-secret-access"
  description = "Allow webapp to read database credentials"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "secretsmanager:GetSecretValue",
          "secretsmanager:DescribeSecret"
        ]
        Resource = aws_secretsmanager_secret.db_credentials.arn
      },
      {
        Effect = "Allow"
        Action = [
          "kms:Decrypt"
        ]
        Resource = aws_kms_key.secrets.arn
        Condition = {
          StringEquals = {
            "kms:ViaService" = "secretsmanager.us-east-1.amazonaws.com"
          }
        }
      }
    ]
  })
}

# Output for reference (don't output secret values!)
output "secret_arn" {
  value       = aws_secretsmanager_secret.db_credentials.arn
  description = "ARN of the database credentials secret"
}
```

Kubernetes External Secrets Operator
For Kubernetes environments, the External Secrets Operator synchronizes secrets from external sources (AWS Secrets Manager, Vault, GCP Secret Manager) into Kubernetes Secrets.
```yaml
# External Secrets Operator: Sync from AWS to Kubernetes
# The secret lives in AWS Secrets Manager; ESO creates the K8s Secret

# SecretStore: Credentials and connection config for AWS
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
            namespace: external-secrets
---
# ExternalSecret: Define which secret to sync
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  # How often to refresh from source
  refreshInterval: 1h

  # Which SecretStore to use
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore

  # Target Kubernetes Secret configuration
  target:
    name: db-credentials
    creationPolicy: Owner
    template:
      type: Opaque
      data:
        # Connection string for applications
        DATABASE_URL: "postgresql://{{ .username }}:{{ .password }}@{{ .host }}:{{ .port }}/{{ .dbname }}"

  # Mapping from external secret to K8s Secret
  data:
    - secretKey: username
      remoteRef:
        key: production/database/credentials
        property: username
    - secretKey: password
      remoteRef:
        key: production/database/credentials
        property: password
    - secretKey: host
      remoteRef:
        key: production/database/credentials
        property: host
    - secretKey: port
      remoteRef:
        key: production/database/credentials
        property: port
    - secretKey: dbname
      remoteRef:
        key: production/database/credentials
        property: dbname
---
# Application consuming the synced secret
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: production
spec:
  template:
    spec:
      containers:
        - name: webapp
          image: webapp:v2.3.1
          env:
            # Use the synced secret
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: DATABASE_URL
          # Or mount as file
          volumeMounts:
            - name: db-credentials
              mountPath: /secrets/database
              readOnly: true
      volumes:
        - name: db-credentials
          secret:
            secretName: db-credentials
```

| Feature | AWS Secrets Manager | GCP Secret Manager | Azure Key Vault |
|---|---|---|---|
| Auto Rotation | Lambda-based for RDS, Redshift | Cloud Functions based | Event Grid + Functions |
| Versioning | Yes, with staging labels | Yes, version numbers | Yes, with versions |
| Replication | Cross-region replication | Automatic replication | Geo-replication |
| IAM Integration | IAM policies + resource policies | IAM + Secret Manager roles | RBAC + access policies |
| Audit Logging | CloudTrail | Cloud Audit Logs | Azure Monitor + logs |
| Pricing | $0.40/secret/month + API calls | $0.06/secret version/month | Based on operations |
| K8s Integration | External Secrets Operator | Workload Identity + ESO | Azure Key Vault Provider |
For single-cloud shops, managed secrets services are often sufficient and operationally simpler. HashiCorp Vault shines in multi-cloud, hybrid, or on-premises environments where a single, consistent secrets management layer is valuable across all infrastructure. Vault also offers more advanced features like dynamic secrets for a wider range of systems.
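The pricing differences in the table above compound at scale, and fetch frequency matters as much as secret count. A back-of-envelope estimate for AWS Secrets Manager, assuming its published rates of $0.40 per secret per month and $0.05 per 10,000 API calls (verify current pricing before budgeting):

```python
def aws_sm_monthly_cost(num_secrets: int, api_calls: int) -> float:
    """Estimate AWS Secrets Manager monthly cost in USD (assumed published rates)."""
    secret_cost = num_secrets * 0.40          # $0.40 per secret-month
    api_cost = (api_calls / 10_000) * 0.05    # $0.05 per 10,000 API calls
    return round(secret_cost + api_cost, 2)

# 500 secrets, each fetched once a minute by 10 instances (216M calls/month)
calls = 500 * 10 * 60 * 24 * 30
print(aws_sm_monthly_cost(500, calls))  # 1280.0
```

Note that the API calls dominate the bill in this scenario — one reason client-side caching (or a sync layer like the External Secrets Operator) is standard practice.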
Mozilla's SOPS (Secrets OPerationS) takes a different approach: encrypt secrets and store the encrypted files directly in Git. This maintains the GitOps workflow—secrets are version-controlled, peer-reviewed, and deployed like any other configuration—while keeping plaintext secrets out of the repository.
How SOPS Works
Encryption rules—which keys encrypt which files—are declared in a repository-level configuration file (`.sops.yaml`). The key insight is that only values are encrypted—the file structure remains visible. This enables code review, diff viewing, and merge conflict resolution without exposing secrets.
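To make the "encrypt values, keep structure" idea concrete, here is a toy sketch that walks a nested mapping and transforms only the leaf values. Base64 stands in for the AES256-GCM encryption real SOPS performs with a KMS-wrapped data key—this is an illustration of the shape of the output, not of SOPS's cryptography:

```python
import base64

def encrypt_values(node):
    """Recursively replace leaf values with an ENC[...] placeholder, keeping keys visible."""
    if isinstance(node, dict):
        return {k: encrypt_values(v) for k, v in node.items()}
    # Leaf value: "encrypt" it (base64 as a stand-in for AES256-GCM)
    encoded = base64.b64encode(str(node).encode()).decode()
    return f"ENC[DEMO,data:{encoded}]"

doc = {"database": {"host": "db.internal", "credentials": {"password": "hunter2"}}}
print(encrypt_values(doc))
```

Because the keys survive, a reviewer can see *that* `database.credentials.password` changed in a diff without ever seeing *what* it changed to.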
```yaml
# SOPS Configuration: Define encryption keys and rules
# This file tells SOPS how to encrypt files in this repository

creation_rules:
  # Production secrets: encrypted with AWS KMS + PGP backup
  - path_regex: secrets/production/.*\.yaml$
    kms: 'arn:aws:kms:us-east-1:123456789:alias/sops-production'
    gcp_kms: 'projects/myproject/locations/global/keyRings/sops/cryptoKeys/prod'
    # PGP fingerprints for offline recovery
    pgp: '1234567890ABCDEF1234567890ABCDEF12345678'

  # Staging secrets: development team can decrypt
  - path_regex: secrets/staging/.*\.yaml$
    kms: 'arn:aws:kms:us-east-1:123456789:alias/sops-staging'

  # Development secrets: broader access
  - path_regex: secrets/development/.*\.yaml$
    age: 'age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p'

  # Default: fail if no explicit rule matches
  - path_regex: .*
    kms: ''  # Empty = no encryption, will fail if secrets present
```
```yaml
# SOPS Encrypted File: Database Credentials
# Notice: keys are visible, only values are encrypted

database:
  host: ENC[AES256_GCM,data:Pj2...,iv:...,tag:...,type:str]
  port: 5432  # Non-sensitive values can remain unencrypted
  name: production
  credentials:
    username: ENC[AES256_GCM,data:C8O...,iv:...,tag:...,type:str]
    password: ENC[AES256_GCM,data:mHz...,iv:...,tag:...,type:str]
  ssl:
    enabled: true
    ca_cert: ENC[AES256_GCM,data:very-long-encrypted-cert...,type:str]

# SOPS metadata (added automatically)
sops:
  kms:
    - arn: arn:aws:kms:us-east-1:123456789:alias/sops-production
      created_at: "2024-01-15T10:30:00Z"
      enc: AQEDAHhq...
      aws_profile: ""
  gcp_kms:
    - resource_id: projects/myproject/locations/global/keyRings/sops/cryptoKeys/prod
      created_at: "2024-01-15T10:30:00Z"
      enc: CiQA...
  lastmodified: "2024-01-15T10:30:00Z"
  mac: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
  version: 3.8.1
```
```bash
#!/bin/bash
# SOPS Workflow: Common Operations

# Create a new encrypted secrets file
sops secrets/production/new-service.yaml
# SOPS opens your editor with a template
# When you save, values are encrypted automatically

# Edit an existing encrypted file
sops secrets/production/database.yaml
# SOPS decrypts to your editor, re-encrypts on save

# Decrypt to stdout (for piping to applications)
sops --decrypt secrets/production/database.yaml

# Extract a specific value
sops --decrypt --extract '["database"]["credentials"]["password"]' \
  secrets/production/database.yaml

# Rotate the data key (re-encrypt with new data key)
sops --rotate secrets/production/database.yaml

# Update which KMS keys can decrypt (add new key)
sops --rotate \
  --add-kms 'arn:aws:kms:eu-west-1:123456789:alias/sops-dr' \
  secrets/production/database.yaml

# Use in CI/CD (assumes IAM role or service account has KMS access)
export DATABASE_PASSWORD=$(sops --decrypt --extract '["database"]["credentials"]["password"]' secrets/production/database.yaml)

# Kubernetes integration with Kustomize + SOPS
# Uses kustomize-sops or KSOPS plugin
kustomize build --enable-alpha-plugins . | kubectl apply -f -
```

SOPS and Vault aren't mutually exclusive. A common pattern: use SOPS for infrastructure secrets (IaC configuration, deployment secrets) and Vault for application secrets (database credentials, API keys). SOPS handles the 'bootstrap' secrets that configure Vault, avoiding a chicken-and-egg problem.
CI/CD pipelines present unique secrets challenges. Pipelines need credentials to deploy infrastructure, push images, and configure applications—but pipelines are also attack vectors. A compromised pipeline with access to production secrets is a catastrophic breach.
CI/CD Secrets Best Practices
```yaml
# GitHub Actions: Secure Secrets Handling
# Demonstrates OIDC federation and minimal credential exposure

name: Production Deploy

on:
  push:
    branches: [main]
  workflow_dispatch:

# Define required permissions
permissions:
  id-token: write   # Required for OIDC
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production  # Requires environment approval

    steps:
      - uses: actions/checkout@v4

      # OIDC Authentication to AWS (no stored credentials!)
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789:role/github-actions-deploy
          role-session-name: github-deploy-${{ github.run_id }}
          aws-region: us-east-1

      # Retrieve secrets from Secrets Manager (not stored in GitHub)
      - name: Get deployment secrets
        id: secrets
        run: |
          # Retrieve secrets via AWS CLI (role has permission)
          DB_URL=$(aws secretsmanager get-secret-value \
            --secret-id production/database/credentials \
            --query SecretString --output text | jq -r '.connection_url')

          # Set as masked output (won't appear in logs)
          echo "::add-mask::$DB_URL"
          echo "database_url=$DB_URL" >> $GITHUB_OUTPUT

      # Decrypt SOPS files for deployment
      - name: Decrypt configuration
        run: |
          # AWS assumed role has KMS decrypt permission
          sops --decrypt secrets/production/config.yaml > config.yaml

      # Deploy with secrets available only for this step
      - name: Deploy application
        env:
          DATABASE_URL: ${{ steps.secrets.outputs.database_url }}
        run: |
          ./scripts/deploy.sh
          # Secrets automatically expire when workflow ends
          # (STS credentials, GitHub token, etc.)

  # Separate job for audit logging
  audit:
    needs: deploy
    runs-on: ubuntu-latest
    permissions:
      id-token: write
    steps:
      - name: Log deployment
        run: |
          # Log to audit system (not shown in detail)
          echo "Deployed by ${{ github.actor }} at $(date)"
```

OIDC federation is a game-changer for CI/CD secrets. Instead of storing long-lived cloud credentials in your CI system, the workflow presents a short-lived identity token issued by the CI provider; the cloud provider validates it against a configured trust policy and returns temporary credentials that expire when the job ends.
This eliminates an entire class of credential exposure risks.
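Conceptually, the cloud side of OIDC federation is a claims check: the (signature-verified) token carries claims such as the repository and branch, and the trust policy only admits tokens matching expected values. A simplified sketch — the claim names follow GitHub's OIDC token format, but the policy values (`myorg/myapp`, the branch) are illustrative:

```python
def is_trusted(claims: dict) -> bool:
    """Admit only tokens from the expected issuer, repository, and branch."""
    return (
        claims.get("iss") == "https://token.actions.githubusercontent.com"
        and claims.get("repository") == "myorg/myapp"
        and claims.get("ref") == "refs/heads/main"
    )

# Claims from a token issued for a push to main:
claims = {
    "iss": "https://token.actions.githubusercontent.com",
    "repository": "myorg/myapp",
    "ref": "refs/heads/main",
}
print(is_trusted(claims))                                  # True
print(is_trusted({**claims, "ref": "refs/heads/feature"})) # False
```

A real implementation first verifies the token's signature against the issuer's published JWKS; the claims check above is what the cloud provider's trust policy (e.g., an AWS IAM role's condition keys) then enforces.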
Assume CI environments are compromised. Third-party actions can be malicious. Environment variables can be exfiltrated. Treat pipelines as untrusted code running with powerful credentials. Defense in depth: short-lived credentials, minimal permissions, extensive logging, and clear blast radius limits.
Effective secrets management requires a comprehensive approach spanning technology, process, and culture. These best practices synthesize industry experience into actionable guidance.
```bash
#!/bin/bash
# Pre-commit hook: Prevent secrets in commits
# Save as .git/hooks/pre-commit

set -e

echo "Checking for secrets in staged files..."

# Pattern matching for common secret formats
PATTERNS=(
  'AKIA[A-Z0-9]{16}'                                  # AWS Access Key ID
  '[a-zA-Z0-9/+]{40}'                                 # AWS Secret Key (40 chars base64)
  'sk-[a-zA-Z0-9]{32}'                                # Stripe/OpenAI API keys
  'ghp_[a-zA-Z0-9]{36}'                               # GitHub Personal Access Token
  'glpat-[a-zA-Z0-9_-]{20}'                           # GitLab Personal Access Token
  'xoxb-[0-9]{10,13}-[0-9]{10,13}-[a-zA-Z0-9]{24}'    # Slack Bot Token
  "password\s*[=:]\s*[\"'][^\"']+[\"']"               # password = "..."
  "secret\s*[=:]\s*[\"'][^\"']+[\"']"                 # secret = "..."
  '-----BEGIN (RSA |EC )?PRIVATE KEY-----'            # Private keys
  'mongodb\+srv://[^:]+:[^@]+@'                       # MongoDB connection strings
  'postgres://[^:]+:[^@]+@'                           # PostgreSQL connection strings
)

FOUND_SECRETS=false

for file in $(git diff --cached --name-only --diff-filter=ACM); do
  # Skip binary files
  if file "$file" | grep -q "text"; then
    for pattern in "${PATTERNS[@]}"; do
      if grep -qE "$pattern" "$file" 2>/dev/null; then
        echo "ERROR: Potential secret found in $file"
        grep -nE "$pattern" "$file" | head -5
        FOUND_SECRETS=true
      fi
    done
  fi
done

if $FOUND_SECRETS; then
  echo ""
  echo "=== COMMIT BLOCKED ==="
  echo "Secrets detected in staged files."
  echo "If these are false positives, add them to .secretsignore"
  echo "Or use 'git commit --no-verify' to bypass (not recommended)"
  exit 1
fi

# Also run gitleaks for comprehensive scanning
if command -v gitleaks &> /dev/null; then
  gitleaks protect --staged --verbose
fi

echo "No secrets detected. Proceeding with commit."
```

| Level | Characteristics | Risk Level |
|---|---|---|
| Level 0: Chaos | Secrets in code, environment vars, shared files. No rotation. No inventory. | Critical |
| Level 1: Basic | Centralized secrets (Vault, cloud SM). Manual rotation. Basic access controls. | High |
| Level 2: Managed | Automated rotation. Comprehensive auditing. Dynamic secrets for some systems. | Medium |
| Level 3: Advanced | Full dynamic secrets. OIDC everywhere. Zero-knowledge principles. Real-time anomaly detection. | Low |
| Level 4: Zero-Trust | Secrets accessed only at point of use. Ephemeral credentials. Continuous verification. | Minimal |
Before implementing new tools, inventory existing secrets: Where do they live? Who has access? When were they last rotated? Who owns them? This audit often reveals forgotten credentials, over-permissioned access, and secrets that should have been revoked long ago. You can't secure what you don't know about.
We've conducted a comprehensive exploration of secrets management in configuration: the types of secrets and their lifecycle, the risks of committing them to version control, the major tools (Vault, cloud secrets managers, SOPS, the External Secrets Operator), and the practices that keep secrets secure, auditable, and operationally practical.
What's Next:
With comprehensive coverage of secrets management, we'll conclude this module with configuration management best practices. We'll synthesize the patterns and anti-patterns across all aspects of configuration management into actionable guidance for building reliable, secure, and maintainable infrastructure.
You now possess comprehensive knowledge of secrets management in configuration: the challenges, tools, integration patterns, and best practices. This foundation enables you to design and implement secure, auditable, and operationally practical secrets management for any infrastructure environment.