Running terraform apply from your laptop works fine for learning, but it's a recipe for disaster in production. Consider the failure modes: no peer review before changes hit real infrastructure, no audit trail of who changed what and when, concurrent applies from different machines contending for the same state, long-lived cloud credentials scattered across laptops, and Terraform version differences producing inconsistent plans.
Professional Terraform usage requires a workflow—a defined process that takes infrastructure changes from initial proposal through review, approval, and controlled deployment. This workflow typically involves version control (Git), pull requests for code review, CI/CD pipelines for execution, and operational practices for safety.
This page covers the complete professional Terraform workflow used by mature engineering organizations.
By the end of this page, you will understand the Git-based Terraform workflow, CI/CD integration patterns for GitHub Actions and GitLab CI, code review best practices for infrastructure changes, environment promotion strategies, and operational safety practices for production infrastructure.
The foundation of professional Terraform usage is version control. All Terraform configuration should live in Git repositories with proper branching strategies, pull request workflows, and commit history.
The typical workflow follows this pattern:
```text
GIT-BASED TERRAFORM WORKFLOW

1. CREATE BRANCH
   $ git checkout main
   $ git pull origin main
   $ git checkout -b feature/add-redis-cluster

2. MAKE CHANGES
   • Edit .tf files for desired infrastructure changes
   • Run terraform fmt to format code
   • Run terraform validate to check syntax
   • Run terraform plan to preview changes locally

3. COMMIT AND PUSH
   $ git add .
   $ git commit -m "Add Redis ElastiCache cluster for session caching"
   $ git push origin feature/add-redis-cluster

4. CREATE PULL REQUEST
   • Open PR in GitHub/GitLab
   • CI automatically runs: fmt check, validate, plan
   • Plan output is posted as PR comment
   • Reviewers examine both code AND plan output

5. CODE REVIEW
   • Reviewer checks configuration for correctness
   • Reviewer examines plan for unintended changes
   • Address feedback, push additional commits
   • Reviewer approves PR

6. MERGE AND APPLY
   • Merge PR to main branch
   • CI/CD pipeline triggers terraform apply
   • Plan re-generated and applied (with optional manual approval)
   • State updated, infrastructure changes complete

7. VERIFY
   • Verify infrastructure via monitoring/tests
   • Update documentation if needed
   • Close related tickets
```
```text
# RECOMMENDED REPOSITORY STRUCTURE FOR MULTI-ENVIRONMENT TERRAFORM

infrastructure/
├── .github/
│   └── workflows/
│       ├── terraform-plan.yml    # Run on PR
│       └── terraform-apply.yml   # Run on merge to main
│
├── modules/                      # Shared modules
│   ├── vpc/
│   ├── eks/
│   ├── rds/
│   └── redis/
│
├── environments/                 # Environment-specific configurations
│   ├── development/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   ├── providers.tf
│   │   └── backend.tf            # Dev state backend config
│   │
│   ├── staging/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   ├── providers.tf
│   │   └── backend.tf            # Staging state backend config
│   │
│   └── production/
│       ├── main.tf
│       ├── variables.tf
│       ├── outputs.tf
│       ├── providers.tf
│       └── backend.tf            # Prod state backend config
│
├── scripts/                      # Helper scripts
│   ├── plan.sh
│   └── apply.sh
│
├── .terraform-version            # Pin Terraform version (for tfenv)
├── .gitignore
└── README.md


# ALTERNATIVE: TERRAGRUNT STRUCTURE (For DRY Configuration)

infrastructure/
├── terragrunt.hcl                # Root config with common settings
├── modules/                      # Same as above
│
└── live/
    ├── development/
    │   ├── region.hcl            # Regional settings
    │   ├── vpc/
    │   │   └── terragrunt.hcl    # References ../../../modules/vpc
    │   └── eks/
    │       └── terragrunt.hcl
    │
    ├── staging/
    │   └── ...
    │
    └── production/
        └── ...
```

Each environment should have its own backend configuration pointing to a separate state file. This prevents any possibility of development changes affecting production state. Use different S3 keys, different DynamoDB tables, and consider different AWS accounts entirely.
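A minimal sketch of that separation, with the bucket, lock table, and key names assumed purely for illustration:

```hcl
# environments/production/backend.tf -- hypothetical values for illustration
terraform {
  backend "s3" {
    bucket         = "mycompany-terraform-state-production" # separate bucket (or at least key) per environment
    key            = "production/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks-production"           # separate lock table per environment
    encrypt        = true
  }
}

# environments/development/backend.tf would point at its own bucket, key, and
# lock table -- ideally in a different AWS account entirely.
```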
CI/CD pipelines automate the validation and application of Terraform changes. A well-designed pipeline provides a consistent execution environment, automated format and validation checks, plan output that reviewers can see before merging, a controlled apply step with approval gates, and an audit trail of every change.
```yaml
# ==============================================================================
# GITHUB ACTIONS: Terraform Plan on Pull Request
# ==============================================================================

name: Terraform Plan

on:
  pull_request:
    branches: [main]
    paths:
      - 'environments/**'
      - 'modules/**'
      - '.github/workflows/terraform-*.yml'

permissions:
  id-token: write       # Required for OIDC authentication
  contents: read
  pull-requests: write  # To post plan as comment

env:
  TF_VERSION: "1.6.0"
  AWS_REGION: "us-west-2"

jobs:
  # ===========================================================================
  # Format Check (fast, catch obvious issues)
  # ===========================================================================
  format:
    name: Format Check
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Check Format
        run: terraform fmt -check -recursive -diff

  # ===========================================================================
  # Validate and Plan for each environment
  # ===========================================================================
  plan:
    name: Plan ${{ matrix.environment }}
    runs-on: ubuntu-latest
    needs: format
    strategy:
      fail-fast: false  # Continue other envs if one fails
      matrix:
        environment: [development, staging, production]

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      # OIDC authentication - no stored credentials!
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsTerraform
          aws-region: ${{ env.AWS_REGION }}

      - name: Terraform Init
        working-directory: environments/${{ matrix.environment }}
        run: terraform init -input=false

      - name: Terraform Validate
        working-directory: environments/${{ matrix.environment }}
        run: terraform validate

      - name: Terraform Plan
        id: plan
        working-directory: environments/${{ matrix.environment }}
        run: |
          terraform plan -input=false -no-color -out=tfplan 2>&1 | tee plan.txt
          echo "plan<<EOF" >> $GITHUB_OUTPUT
          cat plan.txt >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT
        continue-on-error: true

      # Post plan as PR comment
      - name: Post Plan Comment
        uses: actions/github-script@v7
        with:
          script: |
            const output = `### Terraform Plan - ${{ matrix.environment }}
            #### Terraform Format 🖌: \`${{ needs.format.result }}\`
            #### Terraform Init ⚙️: \`Success\`
            #### Terraform Plan 📖: \`${{ steps.plan.outcome }}\`

            <details><summary>Show Plan</summary>

            \`\`\`terraform
            ${{ steps.plan.outputs.plan }}
            \`\`\`

            </details>

            *Pushed by: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`;

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            });

      - name: Plan Status
        if: steps.plan.outcome == 'failure'
        run: exit 1

      # Store plan for apply job
      - name: Upload Plan
        uses: actions/upload-artifact@v4
        with:
          name: tfplan-${{ matrix.environment }}
          path: environments/${{ matrix.environment }}/tfplan
```
```yaml
# ==============================================================================
# GITHUB ACTIONS: Terraform Apply on Merge to Main
# ==============================================================================

name: Terraform Apply

on:
  push:
    branches: [main]
    paths:
      - 'environments/**'
      - 'modules/**'

permissions:
  id-token: write
  contents: read

env:
  TF_VERSION: "1.6.0"
  AWS_REGION: "us-west-2"

jobs:
  # ===========================================================================
  # Apply changes in order: dev -> staging -> production
  # ===========================================================================
  apply-development:
    name: Apply Development
    runs-on: ubuntu-latest
    environment: development  # GitHub Environment for protection rules
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111111111111:role/GitHubActionsTerraform
          aws-region: ${{ env.AWS_REGION }}

      - name: Terraform Init
        working-directory: environments/development
        run: terraform init -input=false

      - name: Terraform Apply
        working-directory: environments/development
        run: terraform apply -input=false -auto-approve

  apply-staging:
    name: Apply Staging
    runs-on: ubuntu-latest
    needs: apply-development
    environment: staging  # May have required reviewers
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::222222222222:role/GitHubActionsTerraform
          aws-region: ${{ env.AWS_REGION }}

      - name: Terraform Init
        working-directory: environments/staging
        run: terraform init -input=false

      - name: Terraform Apply
        working-directory: environments/staging
        run: terraform apply -input=false -auto-approve

  apply-production:
    name: Apply Production
    runs-on: ubuntu-latest
    needs: apply-staging
    environment: production  # Required reviewers + wait timer
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::333333333333:role/GitHubActionsTerraform
          aws-region: ${{ env.AWS_REGION }}

      - name: Terraform Init
        working-directory: environments/production
        run: terraform init -input=false

      - name: Terraform Plan
        working-directory: environments/production
        run: terraform plan -input=false -no-color

      - name: Terraform Apply
        working-directory: environments/production
        run: terraform apply -input=false -auto-approve
```

Modern CI/CD should use OpenID Connect (OIDC) for cloud authentication, not stored access keys. OIDC provides short-lived credentials, eliminates secret rotation burden, and reduces blast radius if credentials are exposed. AWS, Azure, and GCP all support OIDC with GitHub Actions, GitLab CI, and other platforms.
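The AWS side of this trust relationship can itself be managed with Terraform. The following is a minimal sketch, not part of the pipelines above; the organization and repository names, role name, and account wiring are illustrative assumptions:

```hcl
# Hypothetical example: GitHub Actions OIDC provider and deployment role in AWS
data "tls_certificate" "github" {
  url = "https://token.actions.githubusercontent.com/.well-known/openid-configuration"
}

resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.github.certificates[0].sha1_fingerprint]
}

resource "aws_iam_role" "github_actions_terraform" {
  name = "GitHubActionsTerraform"

  # Only workflows from one repository (and here, one branch) may assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          # Assumed org/repo -- adjust to your repository and branch strategy
          "token.actions.githubusercontent.com:sub" = "repo:my-org/infrastructure:ref:refs/heads/main"
        }
      }
    }]
  })
}
```

The role would then be granted whatever permissions the pipeline needs (and no more), and its ARN used in the `role-to-assume` field shown in the workflows above.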
Infrastructure code review differs from application code review. A bug in application code might cause a feature to malfunction; a bug in infrastructure code might delete your production database or expose your network to the internet.
What to Review:

Reviewers should examine the configuration diff and the plan output together: check that the code is correct and follows team conventions, that security-sensitive settings (IAM, networking, encryption) have not been loosened, and that the plan contains no unintended changes. In particular, any resource carrying a destroy or destroy-and-recreate indicator in the plan is a red flag requiring explanation.
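For example, a forced replacement shows up in plan output roughly like this (hypothetical resource; the `-/+` marker and the trailing summary line are what reviewers should look for):

```text
  # aws_db_instance.main must be replaced
-/+ resource "aws_db_instance" "main" {
      ~ identifier = "app-db" -> "application-db" # forces replacement
        ...
    }

Plan: 1 to add, 0 to change, 1 to destroy.
```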
```hcl
# ==============================================================================
# DANGEROUS PATTERNS TO WATCH FOR IN CODE REVIEW
# ==============================================================================

# 🚨 DANGER: Security group open to the world
resource "aws_security_group_rule" "dangerous" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]  # <-- SSH open to internet!
  security_group_id = aws_security_group.web.id
}

# 🚨 DANGER: Overly permissive IAM policy
resource "aws_iam_policy" "dangerous" {
  name = "admin-everything"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "*"  # <-- Full admin access!
      Resource = "*"
    }]
  })
}

# 🚨 DANGER: Public S3 bucket
resource "aws_s3_bucket_public_access_block" "dangerous" {
  bucket = aws_s3_bucket.data.id

  block_public_acls       = false  # <-- Allows public ACLs!
  block_public_policy     = false  # <-- Allows public policies!
  ignore_public_acls      = false
  restrict_public_buckets = false
}

# 🚨 DANGER: Unencrypted RDS
resource "aws_db_instance" "dangerous" {
  identifier        = "production-db"
  engine            = "postgres"
  instance_class    = "db.r5.large"
  storage_encrypted = false  # <-- No encryption at rest!
  # kms_key_id      = null   # <-- No KMS key specified
}

# 🚨 DANGER: Disabled deletion protection
resource "aws_db_instance" "dangerous" {
  identifier          = "production-db"
  engine              = "postgres"
  instance_class      = "db.r5.large"
  deletion_protection = false  # <-- Can be accidentally deleted!
}

# 🚨 DANGER: Hardcoded secrets
resource "aws_db_instance" "dangerous" {
  identifier     = "production-db"
  engine         = "postgres"
  instance_class = "db.r5.large"
  username       = "admin"
  password       = "supersecretpassword123"  # <-- Secret in code!
}

# ✅ SAFE: Use secrets manager or variables
resource "aws_db_instance" "safe" {
  identifier                  = "production-db"
  engine                      = "postgres"
  instance_class              = "db.r5.large"
  username                    = var.db_username
  manage_master_user_password = true  # AWS manages password in Secrets Manager
}
```

Configure your CI/CD to require additional approvers for production changes. GitHub Environments, GitLab Protected Environments, and Terraform Cloud all support required reviewers. For high-risk changes (IAM, networking, databases), consider requiring security team approval.
Infrastructure changes should flow through environments just like application changes: development → staging → production. This catches issues before they affect production users.
```text
ENVIRONMENT PROMOTION PATTERNS

PATTERN 1: SEQUENTIAL PIPELINE (Most Common)

  ┌──────────────┐     ┌──────────────┐     ┌──────────────┐
  │ DEVELOPMENT  │────▶│   STAGING    │────▶│  PRODUCTION  │
  │ (auto-apply) │     │ (auto-apply) │     │(manual gate) │
  └──────────────┘     └──────────────┘     └──────────────┘

  • Same code flows through all environments
  • Problems caught early in dev/staging
  • Production requires manual approval

PATTERN 2: PARALLEL WITH DIFFERENT TIMING

  PR Created ───▶ Dev Plan + Dev Apply

  PR Merged ────▶ Staging Plan + Staging Apply
                        │
                        ▼
                  Wait 24 hours
                        │
                        ▼
                  Prod Plan + Prod Apply (with approval)

PATTERN 3: CANARY/RING DEPLOYMENT

  ┌──────────────┐     ┌──────────────┐     ┌──────────────┐
  │  PRODUCTION  │────▶│  PRODUCTION  │────▶│  PRODUCTION  │
  │    RING 0    │     │    RING 1    │     │    RING 2    │
  │(internal/1%) │     │ (early/10%)  │     │(general/90%) │
  └──────────────┘     └──────────────┘     └──────────────┘

  • Used for large-scale infrastructure
  • Gradual rollout with monitoring between stages
  • Allow rollback between rings
```
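The "same code flows through all environments" property in these patterns comes from each environment directory instantiating the same modules with different inputs. A minimal sketch, with module paths and variable names assumed for illustration:

```hcl
# environments/development/main.tf
module "vpc" {
  source = "../../modules/vpc"

  name       = "dev"
  cidr_block = "10.10.0.0/16"
  az_count   = 2 # small footprint for development
}

# environments/production/main.tf
module "vpc" {
  source = "../../modules/vpc"

  name       = "prod"
  cidr_block = "10.0.0.0/16"
  az_count   = 3 # full HA footprint for production
}
```

Because the module source is identical, a change tested in development exercises the same code path that will later run against staging and production.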
```yaml
# ==============================================================================
# DRIFT DETECTION BEFORE PROMOTION
# ==============================================================================

# It's critical to check for drift before promoting changes.
# If production has drifted from its last known state, your carefully tested
# changes might interact unexpectedly with that drift.

name: Drift Detection

on:
  schedule:
    - cron: '0 6 * * *'  # Daily at 6 AM
  workflow_dispatch:      # Manual trigger

env:
  TF_VERSION: "1.6.0"

jobs:
  drift-detection:
    name: Detect Drift - ${{ matrix.environment }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: [development, staging, production]

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ vars.AWS_ROLE_ARN }}
          aws-region: us-west-2

      - name: Terraform Init
        working-directory: environments/${{ matrix.environment }}
        run: terraform init -input=false

      - name: Detect Drift
        id: drift
        working-directory: environments/${{ matrix.environment }}
        run: |
          terraform plan -input=false -detailed-exitcode -out=tfplan 2>&1 | tee plan.txt
          # Capture terraform's exit code (not tee's); -detailed-exitcode returns 2 when changes exist
          echo "exit_code=${PIPESTATUS[0]}" >> "$GITHUB_OUTPUT"
        continue-on-error: true

      # Exit code 2 = changes detected (drift)
      - name: Alert on Drift
        if: steps.drift.outputs.exit_code == '2'
        uses: slackapi/slack-github-action@v1
        env:
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}  # bot token required by this action (assumed secret name)
        with:
          channel-id: 'C012345'
          slack-message: |
            :warning: *Drift Detected in ${{ matrix.environment }}*
            Infrastructure has drifted from Terraform state.
            Review: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
```

Drift occurs when infrastructure is modified outside Terraform (console changes, other tools, manual interventions). Regular drift detection prevents surprises. If you find drift, decide whether to re-apply the configuration so reality matches the code again, or to accept the change by updating the configuration (or running terraform apply -refresh-only) so code and state reflect reality. Don't ignore it.
Production infrastructure changes carry real risk. A misconfigured change can take down your service, expose sensitive data, or cause irreversible data loss. These operational safety practices minimize risk.
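Alongside these process-level controls, Terraform offers a code-level guardrail worth knowing: the lifecycle block. A small sketch, using an illustrative database resource, of how teams commonly protect resources that must never be destroyed by a routine apply:

```hcl
resource "aws_db_instance" "primary" {
  identifier          = "production-db"
  engine              = "postgres"
  instance_class      = "db.r5.large"
  deletion_protection = true # cloud-side guard against deletion

  lifecycle {
    prevent_destroy = true # Terraform refuses to plan a destroy of this resource
  }
}
```

With prevent_destroy set, any plan that would remove the resource fails with an error, forcing a deliberate configuration change before destruction is even possible.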
```bash
# ==============================================================================
# OPERATIONAL COMMANDS REFERENCE
# ==============================================================================

# VIEW CURRENT STATE
terraform state list                      # List all resources
terraform state show aws_instance.web    # Show specific resource
terraform output                          # Show all outputs
terraform providers                       # Show configured providers

# PLANNING
terraform plan                            # Standard plan
terraform plan -out=tfplan                # Save plan to file
terraform plan -target=aws_instance.web  # Plan single resource (use carefully!)
terraform plan -refresh=false             # Skip state refresh (faster, but stale)
terraform plan -parallelism=5             # Reduce parallelism (for rate limiting)

# APPLYING
terraform apply                           # Apply with interactive approval
terraform apply tfplan                    # Apply saved plan (no re-plan)
terraform apply -auto-approve             # Skip approval (CI/CD only!)
terraform apply -parallelism=5            # Reduce parallelism

# STATE OPERATIONS (backup first!)
terraform state pull > backup.tfstate     # Download state to local file
terraform state mv OLD NEW                # Rename resource in state
terraform state rm RESOURCE               # Remove from state (doesn't delete!)
terraform import TYPE.NAME ID             # Import existing resource

# REFRESHING
terraform refresh                         # Update state from real resources
terraform apply -refresh-only             # Newer: refresh with plan/apply workflow

# DESTROYING (extreme caution!)
terraform destroy                         # Destroy all managed resources
terraform destroy -target=RESOURCE        # Destroy specific resource
terraform plan -destroy                   # Preview destruction without destroying

# DEBUGGING
TF_LOG=DEBUG terraform plan               # Very verbose logging
TF_LOG=TRACE terraform apply              # Extremely verbose (includes API calls)
terraform console                         # Interactive expression evaluator
terraform graph | dot -Tpng > graph.png   # Visualize resource graph

# VERSION MANAGEMENT
terraform version                         # Show Terraform version
terraform --version                       # Same
terraform providers lock                  # Update .terraform.lock.hcl
```

Using -target regularly indicates a design problem. It's intended for exceptional circumstances like recovering from errors. Regular use leads to state drift, broken dependencies, and configurations that can only be applied in specific orders. If you need -target often, split your configuration.
For organizations seeking a managed solution, Terraform Cloud (SaaS) and Terraform Enterprise (self-hosted) provide a complete workflow platform. These platforms handle state management, execution, policy enforcement, and collaboration without custom CI/CD pipelines.
| Feature | Description | Benefit |
|---|---|---|
| Remote State | Managed state storage with encryption, locking, versioning | No S3/DynamoDB setup required |
| Remote Execution | Runs execute in Terraform Cloud, not locally | Consistent execution environment, no local dependencies |
| VCS Integration | Automatic plans on PR, applies on merge | No custom CI/CD pipelines needed |
| Policy as Code (Sentinel) | Define policies that run before apply | Enforce compliance programmatically |
| Private Registry | Host internal modules with versioning and discovery | Easy organizational module sharing |
| SSO/SAML | Enterprise authentication integration | Centralized access management |
| Audit Logging | Complete record of all operations | Compliance and troubleshooting |
| Cost Estimation | Estimate infrastructure costs before apply | Budget awareness during development |
```hcl
# ==============================================================================
# TERRAFORM CLOUD CONFIGURATION
# ==============================================================================

terraform {
  cloud {
    organization = "my-company"

    workspaces {
      name = "infrastructure-production"
    }
  }

  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# With Terraform Cloud:
# - Remote state is automatic (no backend config needed)
# - terraform plan/apply execute in the cloud
# - VCS integration triggers runs automatically
# - Variables and secrets managed in the UI/API
```
```
# ==============================================================================
# SENTINEL POLICY: Enforce Encryption on All S3 Buckets
# File: policies/s3-encryption.sentinel
# ==============================================================================

import "tfplan/v2" as tfplan

# Get all S3 bucket resources
s3_buckets = filter tfplan.resource_changes as _, rc {
  rc.type is "aws_s3_bucket" and
  rc.mode is "managed" and
  (rc.change.actions contains "create" or rc.change.actions contains "update")
}

# Check that all buckets reference a server-side encryption configuration
# (This checks for the presence of related resources)
main = rule {
  all s3_buckets as _, bucket {
    # Must have encryption configuration resource
    any tfplan.resource_changes as _, rc {
      rc.type is "aws_s3_bucket_server_side_encryption_configuration" and
      rc.change.after.bucket is bucket.change.after.bucket
    }
  }
}

# ==============================================================================
# SENTINEL POLICY: Require Tags on All Resources
# File: policies/required-tags.sentinel
# ==============================================================================

import "tfplan/v2" as tfplan

# Required tags that must exist on all taggable resources
required_tags = ["Environment", "CostCenter", "Owner"]

# Taggable resource types
taggable_types = [
  "aws_instance",
  "aws_s3_bucket",
  "aws_db_instance",
  "aws_vpc",
  "aws_subnet",
]

# Get all taggable resources being created or updated
taggable_resources = filter tfplan.resource_changes as _, rc {
  rc.type in taggable_types and
  (rc.change.actions contains "create" or rc.change.actions contains "update")
}

# Check that all required tags are present
main = rule {
  all taggable_resources as _, resource {
    all required_tags as tag {
      resource.change.after.tags contains tag
    }
  }
}
```

Consider Terraform Cloud when: (1) You don't want to manage state infrastructure (S3, DynamoDB), (2) You need policy enforcement (Sentinel), (3) You want turnkey VCS integration, (4) You lack CI/CD expertise to build custom pipelines. The free tier supports small teams; paid tiers add enterprise features.
Even with the best practices, you'll encounter issues. Here are solutions to the most common Terraform problems.
| Problem | Cause | Solution |
|---|---|---|
| State lock error | Previous run crashed or another user is running | Wait for other run; if stuck, terraform force-unlock LOCK_ID |
| Provider not found | terraform init not run, or version mismatch | Run terraform init -upgrade |
| Resource already exists | Resource created outside Terraform | Import: terraform import ADDR ID |
| Cycle error | Circular dependency between resources | Restructure to remove the circular reference (e.g., split inline security group rules into separate rule resources) |
| Timeout errors | Resource taking too long to create | Increase timeouts in resource block |
| Invalid credentials | Expired or missing cloud credentials | Re-authenticate, check environment variables |
| Plan shows destroy-create | Changed an immutable attribute | Confirm change is intended, or adjust config |
| Module not found | Wrong source path or version | Check source URL, run terraform init |
| State contains secrets I need | Need to read a value from state | terraform state show RESOURCE or terraform output |
```bash
# ==============================================================================
# DEBUGGING TECHNIQUES
# ==============================================================================

# Enable debug logging
$ TF_LOG=DEBUG terraform plan 2>&1 | tee debug.log

# Even more verbose (shows API requests)
$ TF_LOG=TRACE terraform apply 2>&1 | tee trace.log

# Log to file only
$ TF_LOG=DEBUG TF_LOG_PATH="./terraform.log" terraform plan

# Test expressions interactively
$ terraform console
> length(var.subnets)
3
> cidrsubnet("10.0.0.0/16", 8, 1)
"10.0.1.0/24"
> aws_instance.web.public_ip
"52.10.20.30"

# Visualize resource graph
$ terraform graph | dot -Tpng -o graph.png

# Show providers in use
$ terraform providers

# Validate configuration syntax
$ terraform validate

# Check formatting
$ terraform fmt -check -diff -recursive

# ==============================================================================
# RECOVERING FROM STATE ISSUES
# ==============================================================================

# State corrupted or lost?

# 1. If you have versioning enabled on the state bucket:
#    - Restore a previous version of terraform.tfstate from S3/Azure/GCS

# 2. If no backup, re-import everything:
$ terraform import aws_vpc.main vpc-12345abc
$ terraform import aws_subnet.public[0] subnet-67890def
# ... for each resource

# 3. Use terraform state rm to remove problem resources:
$ terraform state rm aws_instance.problematic
# Then re-import or recreate

# State out of sync with reality?
$ terraform apply -refresh-only   # Update state to match reality
```

Professional Terraform usage is about process as much as syntax. Let's consolidate the key points:

- Keep all Terraform configuration in Git and drive every change through a pull request.
- Let CI/CD run fmt, validate, and plan on every PR, and apply only after review and merge.
- Authenticate pipelines with OIDC rather than long-lived cloud credentials.
- Review plan output as carefully as the code; destroys and replacements need explicit justification.
- Promote changes development → staging → production, with manual gates and regular drift detection.
- Treat -target, force-unlock, and state surgery as exceptional operations, and back up state first.
Module Complete:
You've now mastered Terraform—from the fundamentals of HCL and providers, through state management and modules, to professional workflows. This knowledge equips you to manage infrastructure as code at any scale, from personal projects to enterprise platforms.
The key to mastery is practice. Start with simple configurations, build reusable modules, establish team workflows, and continuously refine your approach based on operational experience. Infrastructure as Code is a journey, and you now have the map.
You now understand Terraform comprehensively—fundamentals, providers, resources, state, modules, and professional workflows. This knowledge positions you to design, implement, and operate infrastructure as code for any organization, following the practices used by the world's most sophisticated engineering teams.