How does a single tool manage infrastructure across AWS, Azure, Google Cloud, Kubernetes, GitHub, Datadog, and thousands of other platforms—each with completely different APIs, authentication mechanisms, and resource models?
The answer lies in Terraform's provider architecture—a plugin-based system that separates Terraform's core logic from the specifics of any individual platform. Providers are the bridges between your declarative configuration and the imperative API calls needed to create actual infrastructure.
This architectural decision is what makes Terraform truly universal. You write the same style of configuration, follow the same workflow, and use the same state management patterns whether you're creating an AWS EC2 instance, an Azure Virtual Network, a Kubernetes Deployment, or a GitHub repository. The provider handles the translation.
By the end of this page, you will understand how providers work internally, how to configure them for various scenarios (multiple accounts, multiple regions, authentication), how resources are defined and managed, the full resource lifecycle, and patterns for working with complex resource attributes. You'll be equipped to work with any provider confidently.
A provider is a plugin that teaches Terraform how to interact with a specific API or platform. Providers are responsible for:

- Defining the resource types and data sources Terraform can manage on that platform
- Authenticating with the platform's API
- Translating planned changes into create, read, update, and delete API calls
- Mapping API responses back into the attribute schema Terraform stores in state
```
TERRAFORM PROVIDER ECOSYSTEM

  TERRAFORM CORE
    • Configuration parsing
    • Dependency graph construction
    • State management
    • Plan generation
    • Uses gRPC to communicate with providers
            │
            │  gRPC calls via plugin protocol
            ▼
  PROVIDER LAYER (plugins: separate binaries)
    Cloud:      hashicorp/aws, hashicorp/azurerm, hashicorp/google, hashicorp/oci
    Platforms:  hashicorp/kubernetes, hashicorp/helm, hashicorp/nomad, hashicorp/consul
    Services:   integrations/github, integrations/gitlab, cloudflare/cloudflare, digitalocean/digital..
    SaaS/Ops:   integrations/datadog, integrations/pagerduty, mongodb/mongodbatlas, snowflake/snowflake
    Utilities:  hashicorp/random, hashicorp/null, hashicorp/local, hashicorp/tls

  PROVIDER REGISTRY: registry.terraform.io
    • 3,500+ providers available
    • HashiCorp official, verified partners, and community providers
    • Versioned releases with changelogs
    • Documentation for every resource and data source
```

Provider Installation and Versioning:
When you run terraform init, Terraform reads the required_providers block in your configuration and downloads the specified providers from the Terraform Registry (or alternative sources). Understanding version constraints is crucial for reproducible infrastructure.
```hcl
terraform {
  # Minimum Terraform version required
  required_version = ">= 1.5.0"

  # Provider requirements
  required_providers {
    # AWS provider from HashiCorp
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # >= 5.0.0 AND < 6.0.0 (pessimistic constraint)
    }

    # Azure provider
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.0.0, < 4.0.0" # Explicit range
    }

    # Kubernetes provider
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.23.0" # Exact version pin
    }

    # Third-party provider (different namespace)
    cloudflare = {
      source  = "cloudflare/cloudflare" # Note: cloudflare namespace, not hashicorp
      version = "~> 4.0"
    }

    # Provider from alternative registry
    custom = {
      source  = "example.com/custom/provider"
      version = "1.0.0"
    }
  }
}

# ==============================================================================
# VERSION CONSTRAINT SYNTAX
# ==============================================================================
#
# "= 1.2.3"      - Exact version (rarely used, inflexible)
# ">= 1.2.3"     - Minimum version
# "<= 1.2.3"     - Maximum version
# "~> 1.2.3"     - Pessimistic: >= 1.2.3 AND < 1.3.0 (patch updates only)
# "~> 1.2"       - Pessimistic: >= 1.2.0 AND < 2.0.0 (minor updates only)
# ">= 1.0, < 2"  - Multiple constraints (AND logic)
#
# RECOMMENDATION: Use "~> X.Y" for production (allows patches, not breaking changes)
# ==============================================================================
```

Terraform creates a .terraform.lock.hcl file that records the exact provider versions and checksums used. Commit this file to version control! It ensures everyone on your team (and CI/CD) uses identical provider versions, preventing "works on my machine" issues.
Provider configuration goes beyond just specifying which provider to use—it controls authentication, default settings, and behavioral options. Different providers have vastly different configuration requirements.
```hcl
# ==============================================================================
# AWS PROVIDER - Comprehensive Configuration
# ==============================================================================

provider "aws" {
  # Region is required
  region = "us-west-2"

  # Authentication (multiple methods available)
  # Option 1: Explicit credentials (NOT RECOMMENDED for production)
  # access_key = var.aws_access_key
  # secret_key = var.aws_secret_key

  # Option 2: Assume an IAM role
  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/TerraformRole"
    session_name = "TerraformSession"
    external_id  = "unique-id-for-security"

    # Optional: additional tags for session
    tags = {
      Purpose = "infrastructure-management"
    }
  }

  # Default tags applied to ALL resources created by this provider
  default_tags {
    tags = {
      Environment = var.environment
      ManagedBy   = "terraform"
      Project     = var.project_name
      CostCenter  = var.cost_center
    }
  }

  # Ignore specific tag changes (useful for tags managed externally)
  ignore_tags {
    keys         = ["LastModifiedBy", "AutoUpdated"]
    key_prefixes = ["aws:"] # Ignore AWS-managed tags
  }

  # Custom endpoints (useful for LocalStack, testing, or private regions)
  endpoints {
    s3  = "http://localhost:4566"
    ec2 = "http://localhost:4566"
  }

  # Retry configuration
  retry_mode  = "standard"
  max_retries = 3
}

# ==============================================================================
# AZURE PROVIDER - Comprehensive Configuration
# ==============================================================================

provider "azurerm" {
  features {
    # Resource group behavior
    resource_group {
      prevent_deletion_if_contains_resources = false
    }

    # Key Vault behavior
    key_vault {
      purge_soft_delete_on_destroy    = true
      recover_soft_deleted_key_vaults = true
    }

    # Virtual Machine behavior
    virtual_machine {
      delete_os_disk_on_deletion     = true
      graceful_shutdown              = false
      skip_shutdown_and_force_delete = false
    }
  }

  # Subscription configuration
  subscription_id = var.azure_subscription_id
  tenant_id       = var.azure_tenant_id

  # Service Principal authentication
  # client_id     = var.azure_client_id
  # client_secret = var.azure_client_secret

  # Use Azure CLI authentication (recommended for local development)
  # No additional config needed - uses: az login
}

# ==============================================================================
# GOOGLE CLOUD PROVIDER
# ==============================================================================

provider "google" {
  project = var.gcp_project_id
  region  = var.gcp_region
  zone    = var.gcp_zone

  # Credentials file path (for service accounts)
  # credentials = file("path/to/service-account.json")

  # Or use impersonation
  impersonate_service_account = "terraform@project-id.iam.gserviceaccount.com"

  # Batching configuration for efficiency
  batching {
    enable_batching = true
    send_after      = "10s"
  }

  # Request timeout
  request_timeout = "60s"
}

# ==============================================================================
# KUBERNETES PROVIDER
# ==============================================================================

provider "kubernetes" {
  # Option 1: Use kubeconfig file
  config_path    = "~/.kube/config"
  config_context = "my-cluster-context"

  # Option 2: Explicit cluster configuration
  # host                   = "https://cluster-endpoint.example.com"
  # cluster_ca_certificate = base64decode(var.cluster_ca_cert)
  # token                  = var.cluster_token

  # Option 3: EKS cluster (using AWS provider data)
  # host                   = data.aws_eks_cluster.cluster.endpoint
  # cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  # token                  = data.aws_eks_cluster_auth.cluster.token
}
```

Never put credentials directly in Terraform files. Use environment variables (AWS_ACCESS_KEY_ID, AZURE_CLIENT_SECRET), credential files (~/.aws/credentials), instance profiles/managed identities, or secrets managers. Terraform files are often committed to version control, and exposed credentials are a critical security vulnerability.
Authentication Methods by Provider:
| Provider | Local Development | CI/CD / Production | Best Practice |
|---|---|---|---|
| AWS | aws configure / SSO | IAM Role (OIDC or assume_role) | Use OIDC federation with GitHub/GitLab Actions |
| Azure | az login | Service Principal or Managed Identity | Use Managed Identity when running in Azure |
| GCP | gcloud auth login | Service Account Key or Workload Identity | Use Workload Identity Federation |
| Kubernetes | kubectl config | Service Account Token | Use short-lived tokens, not long-lived certs |
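The recommended patterns in the table generally mean provider blocks that contain no secrets at all. A minimal sketch of what that looks like in practice (the environment-variable names are the standard ones each provider reads; everything else here is illustrative):

```hcl
# Credentials come from outside the configuration, never from .tf files.

provider "aws" {
  region = "us-west-2"
  # Credentials are resolved automatically from, in order:
  #   1. AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables
  #   2. Shared config/credentials files (~/.aws/credentials, AWS_PROFILE)
  #   3. Instance profile / container credentials when running inside AWS
}

provider "azurerm" {
  features {}
  # Local development: reuses the token from `az login`.
  # CI/CD: set ARM_CLIENT_ID, ARM_TENANT_ID, ARM_SUBSCRIPTION_ID and either
  # ARM_CLIENT_SECRET or OIDC-based federation as environment variables.
}
```

Because the blocks carry no secrets, the same configuration works unchanged on a laptop, in CI, and in production; only the ambient credentials differ.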
Real-world infrastructure often spans multiple regions, accounts, or environments. Terraform supports multiple configurations of the same provider using aliases. This is essential for:

- Multi-region deployments (primary, secondary, and disaster-recovery regions)
- Cross-account patterns (shared services, production, and development accounts)
- Cross-region features such as S3 replication or centrally managed DNS
- Passing distinct provider configurations into reusable modules
```hcl
# ==============================================================================
# MULTI-REGION DEPLOYMENT PATTERN
# ==============================================================================

# Primary region provider
# NOTE: once a provider block has an alias it is no longer the default;
# resources must select it explicitly with provider = aws.primary
provider "aws" {
  alias  = "primary"
  region = "us-east-1"

  default_tags {
    tags = { Region = "primary" }
  }
}

# Secondary region provider
provider "aws" {
  alias  = "secondary"
  region = "us-west-2"

  default_tags {
    tags = { Region = "secondary" }
  }
}

# Disaster recovery region
provider "aws" {
  alias  = "dr"
  region = "eu-west-1"

  default_tags {
    tags = { Region = "disaster-recovery" }
  }
}

# ==============================================================================
# USING ALIASED PROVIDERS IN RESOURCES
# ==============================================================================

# Resource in primary region
resource "aws_vpc" "primary" {
  provider   = aws.primary
  cidr_block = "10.0.0.0/16"

  tags = { Name = "primary-vpc" }
}

# Resource in secondary region
resource "aws_vpc" "secondary" {
  provider   = aws.secondary
  cidr_block = "10.1.0.0/16"

  tags = { Name = "secondary-vpc" }
}

# Resource in DR region
resource "aws_vpc" "dr" {
  provider   = aws.dr
  cidr_block = "10.2.0.0/16"

  tags = { Name = "dr-vpc" }
}

# ==============================================================================
# CROSS-ACCOUNT PATTERN
# ==============================================================================

# Shared services account
provider "aws" {
  alias  = "shared"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/TerraformRole"
  }
}

# Production account
provider "aws" {
  alias  = "prod"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/TerraformRole"
  }
}

# Development account
provider "aws" {
  alias  = "dev"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::333333333333:role/TerraformRole"
  }
}

# Create shared resource
resource "aws_route53_zone" "main" {
  provider = aws.shared
  name     = "example.com"
}

# Create prod resources in prod account
resource "aws_route53_record" "prod" {
  provider = aws.shared # DNS in shared account
  zone_id  = aws_route53_zone.main.zone_id
  name     = "app.example.com"
  type     = "A"

  alias {
    name                   = aws_lb.prod.dns_name
    zone_id                = aws_lb.prod.zone_id
    evaluate_target_health = true
  }
}

resource "aws_lb" "prod" {
  provider           = aws.prod # ALB in prod account
  name               = "prod-alb"
  load_balancer_type = "application"
  subnets            = aws_subnet.prod[*].id
}

# ==============================================================================
# PASSING PROVIDERS TO MODULES
# ==============================================================================

module "vpc_primary" {
  source = "./modules/vpc"

  # Explicitly pass provider to module
  providers = {
    aws = aws.primary
  }

  vpc_cidr = "10.0.0.0/16"
  name     = "primary"
}

module "vpc_secondary" {
  source = "./modules/vpc"

  providers = {
    aws = aws.secondary
  }

  vpc_cidr = "10.1.0.0/16"
  name     = "secondary"
}

# Module that needs multiple providers
module "cross_region_replication" {
  source = "./modules/s3-replication"

  providers = {
    aws.source      = aws.primary
    aws.destination = aws.secondary
  }

  bucket_name = "my-replicated-bucket"
}
```

By default, child modules inherit the default (non-aliased) provider from their parent. To use a specific aliased provider in a module, you must explicitly pass it via the providers argument; this is how modules participate in multi-region or multi-account configurations.
Resources are the fundamental unit of infrastructure in Terraform. Each resource block describes one or more infrastructure objects—a virtual network, a compute instance, a DNS record, an IAM policy. Understanding how resources work is essential for effective Terraform usage.
```hcl
# ==============================================================================
# RESOURCE SYNTAX ANATOMY
# ==============================================================================

# resource "TYPE" "LOCAL_NAME" {
#   ARGUMENT = VALUE
#   ...
# }
#
# TYPE       = "<provider>_<resource>" (e.g., aws_instance, google_compute_instance)
# LOCAL_NAME = Your local identifier (used for references within Terraform)
# ARGUMENTS  = Configuration for the resource

resource "aws_instance" "web_server" {
  # Required arguments (must be provided)
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"

  # Optional arguments (have defaults or are truly optional)
  associate_public_ip_address = true
  monitoring                  = true

  # Nested blocks (sub-configurations)
  root_block_device {
    volume_size           = 20
    volume_type           = "gp3"
    encrypted             = true
    delete_on_termination = true
  }

  # Additional EBS volume (a repeatable nested block)
  ebs_block_device {
    device_name = "/dev/sdf"
    volume_size = 100
    volume_type = "gp3"
    encrypted   = true
  }

  # Tags
  tags = {
    Name        = "web-server"
    Environment = var.environment
  }

  # Lifecycle customization
  lifecycle {
    create_before_destroy = true
    prevent_destroy       = false
    ignore_changes        = [tags["LastModified"]]
  }
}

# ==============================================================================
# RESOURCE ATTRIBUTES
# ==============================================================================

# Resources have three types of attributes:
#
# 1. ARGUMENTS - Values you set in configuration
#    Example: instance_type = "t3.micro"
#
# 2. COMPUTED ATTRIBUTES - Values determined after creation
#    Example: id, arn, public_ip (known after apply)
#
# 3. COMPUTED/ARGUMENT HYBRID - Can be set or computed
#    Example: private_ip (set it, or let AWS assign)

# Reference computed attributes from other resources
resource "aws_eip" "web" {
  # Reference the instance ID (computed after instance creation)
  instance = aws_instance.web_server.id
  domain   = "vpc"

  tags = {
    Name = "web-server-eip"
  }
}

output "instance_details" {
  value = {
    # Arguments (what you set)
    instance_type = aws_instance.web_server.instance_type
    ami           = aws_instance.web_server.ami

    # Computed attributes (what AWS assigned)
    id                = aws_instance.web_server.id
    arn               = aws_instance.web_server.arn
    public_ip         = aws_instance.web_server.public_ip
    private_ip        = aws_instance.web_server.private_ip
    availability_zone = aws_instance.web_server.availability_zone

    # Nested computed attributes
    root_volume_id = aws_instance.web_server.root_block_device[0].volume_id
  }
}
```

Resource Addressing:
Every resource has a unique address within Terraform configuration. Understanding addressing is crucial for referencing resources, importing existing infrastructure, and using terraform state commands.
```hcl
# RESOURCE ADDRESS FORMAT
#
# Basic:         <resource_type>.<local_name>
# With count:    <resource_type>.<local_name>[<index>]
# With for_each: <resource_type>.<local_name>["<key>"]
# In module:     module.<module_name>.<resource_address>
# Nested:        module.<parent>.module.<child>.<resource_address>

# Examples:

# Simple resource
resource "aws_vpc" "main" { ... }
# Address: aws_vpc.main

# Resource with count
resource "aws_subnet" "public" {
  count = 3
  ...
}
# Addresses: aws_subnet.public[0], aws_subnet.public[1], aws_subnet.public[2]

# Resource with for_each (map)
resource "aws_subnet" "private" {
  for_each = {
    "a" = "10.0.10.0/24"
    "b" = "10.0.11.0/24"
    "c" = "10.0.12.0/24"
  }

  cidr_block        = each.value
  availability_zone = "us-west-2${each.key}"
}
# Addresses: aws_subnet.private["a"], aws_subnet.private["b"], aws_subnet.private["c"]

# In a module
module "networking" {
  source = "./modules/networking"
}
# Resource address: module.networking.aws_vpc.main

# Nested modules
module "application" {
  source = "./modules/app"
}
# Where the app module uses a database submodule:
# Address: module.application.module.database.aws_db_instance.main
```

Every resource goes through a lifecycle managed by Terraform. Understanding this lifecycle, and how to customize it, is essential for handling real-world scenarios like zero-downtime deployments, protected resources, and externally-managed attributes.
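These same addresses are what you target when bringing existing infrastructure under Terraform's management. As a hedged sketch using import blocks (available in Terraform 1.5+; the resource IDs below are made up for illustration):

```hcl
# Import an existing VPC into the address aws_vpc.main
import {
  to = aws_vpc.main
  id = "vpc-0abc123def456"   # hypothetical AWS VPC ID
}

# for_each instances are addressed by key, count instances by index
import {
  to = aws_subnet.private["a"]
  id = "subnet-0aaa111"      # hypothetical subnet ID
}

import {
  to = aws_subnet.public[0]
  id = "subnet-0bbb222"      # hypothetical subnet ID
}
```

Running terraform plan after adding these blocks shows the import as part of the plan, so you can review the mapping before applying it.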
```
TERRAFORM RESOURCE LIFECYCLE

          ┌─────────────┐
          │   CREATE    │ ◀── Resource in config, not in state
          │ (terraform  │     terraform plan: + create
          │   apply)    │
          └──────┬──────┘
                 │
                 ▼
          ┌─────────────┐
          │   EXISTS    │ ◀── Resource in config AND in state
          │  (managed)  │     terraform plan: (no changes) or ~ update
          └──────┬──────┘
                 │
        ┌────────┴────────────────┐
        ▼                         ▼
 ┌─────────────┐           ┌─────────────┐
 │   UPDATE    │           │   REPLACE   │
 │ (in-place)  │           │ (destroy +  │
 └─────────────┘           │   create)   │
   Config changed,         └─────────────┘
   resource in state
   terraform plan: ~ update in-place

 ┌─────────────┐
 │   DELETE    │ ◀── Resource removed from config, or terraform destroy
 │  (destroy)  │     terraform plan: - destroy
 └─────────────┘

 REPLACE triggers:
   • Changing immutable attributes (e.g., AMI on EC2)
   • Using replace_triggered_by
   • Running terraform apply -replace=<address>
```
```hcl
# ==============================================================================
# LIFECYCLE BLOCK - Customizing Resource Behavior
# ==============================================================================

resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = var.instance_type

  lifecycle {
    # ==================================================================
    # create_before_destroy
    # ==================================================================
    # When a resource must be replaced, create the new one BEFORE
    # destroying the old one. Essential for zero-downtime deployments.
    #
    # Default:   destroy, then create
    # With this: create new, update dependencies, destroy old
    create_before_destroy = true

    # ==================================================================
    # prevent_destroy
    # ==================================================================
    # Terraform will error if any plan would destroy this resource.
    # Use for critical resources that should never be accidentally deleted.
    #
    # NOTE: This doesn't prevent deletion via the AWS console or other
    # tools. For real protection, also enable platform-level safeguards
    # (e.g., EC2 termination protection or RDS deletion protection).
    prevent_destroy = true

    # ==================================================================
    # ignore_changes
    # ==================================================================
    # Ignore changes to specific attributes during plan/apply.
    # Useful when external processes modify resources.
    #
    # Common use cases:
    #   - Auto-scaling groups that change size externally
    #   - Tags managed by other systems
    #   - Launch configurations updated by CI/CD
    ignore_changes = [
      tags["LastUpdated"], # Ignore specific tag
      user_data,           # Ignore user data changes
      # tags,              # Ignore all tags (be careful!)
    ]

    # ==================================================================
    # replace_triggered_by
    # ==================================================================
    # Force resource replacement when referenced resources change.
    # Useful for rolling updates or breaking circular dependencies.
    replace_triggered_by = [
      aws_launch_template.web.latest_version, # Replace when template changes
      null_resource.force_replacement,        # Manual trigger
    ]

    # ==================================================================
    # precondition / postcondition (Terraform 1.2+)
    # ==================================================================
    # Validate assumptions about the resource before/after apply
    precondition {
      condition     = var.instance_type != "t2.micro" || var.environment != "production"
      error_message = "t2.micro is not allowed in production environment."
    }

    postcondition {
      condition     = self.public_ip != null
      error_message = "Instance must have a public IP address."
    }
  }

  tags = {
    Name = "web-server"
  }
}

# ==============================================================================
# PRACTICAL EXAMPLE: Zero-Downtime Database Update
# ==============================================================================

resource "aws_db_instance" "main" {
  identifier     = "production-db"
  engine         = "postgres"
  engine_version = "15.3"
  instance_class = var.db_instance_class

  lifecycle {
    # Create new DB before destroying old one
    # Critical for maintaining database availability during upgrades
    create_before_destroy = true

    # Prevent accidental deletion of production database
    prevent_destroy = true

    # Ignore storage autoscaling changes
    ignore_changes = [
      allocated_storage, # Managed by autoscaling
    ]
  }

  tags = {
    Name        = "production-db"
    Environment = "production"
  }
}

# ==============================================================================
# Using null_resource for Manual Triggers
# ==============================================================================

resource "null_resource" "force_replacement" {
  # Change this value to force replacement of dependent resources
  triggers = {
    version = "v2" # Change to "v3" to force recreation
  }
}

resource "aws_ecs_service" "app" {
  name            = "application"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 3

  lifecycle {
    # Force redeployment when null_resource trigger changes
    replace_triggered_by = [
      null_resource.force_replacement.id
    ]
  }
}
```

When using create_before_destroy, ensure the new and old resources can coexist temporarily. If a resource has an attribute that must be unique (for example, a security group's name), the replacement can't be created while the original still exists. Plan for this with generated names (such as name_prefix) or handle the transition explicitly.
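One practical way to satisfy the coexistence requirement is to avoid fixed names entirely. Many AWS resources accept name_prefix instead of name, so each replacement gets a unique generated suffix; a minimal sketch (attribute values are illustrative):

```hcl
resource "aws_security_group" "web" {
  # name_prefix instead of name: Terraform appends a unique suffix,
  # so the replacement SG can be created while the old one still exists.
  name_prefix = "web-sg-"
  vpc_id      = var.vpc_id

  lifecycle {
    create_before_destroy = true
  }
}
```

During a replacement, Terraform creates the new group (with a fresh suffix), repoints dependents to it, and only then destroys the old one.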
Terraform provides two meta-arguments for creating multiple similar resources: count and for_each. Understanding when to use each—and the implications for state management—is crucial for maintainable configurations.
```hcl
# ==============================================================================
# COUNT - Create N identical (or nearly identical) resources
# ==============================================================================

variable "subnet_count" {
  type    = number
  default = 3
}

resource "aws_subnet" "public" {
  count = var.subnet_count # Create 3 subnets

  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name = "public-subnet-${count.index + 1}"
    Type = "public"
  }
}

# Reference count resources by index
resource "aws_route_table_association" "public" {
  count = var.subnet_count

  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

# Access all subnets as a list
output "public_subnet_ids" {
  value = aws_subnet.public[*].id # Splat expression
}

# ==============================================================================
# CONDITIONAL RESOURCE CREATION with count
# ==============================================================================

variable "create_nat_gateway" {
  type    = bool
  default = true
}

resource "aws_nat_gateway" "main" {
  count = var.create_nat_gateway ? 1 : 0 # 1 if true, 0 if false

  allocation_id = aws_eip.nat[0].id
  subnet_id     = aws_subnet.public[0].id

  tags = {
    Name = "main-nat"
  }
}

resource "aws_eip" "nat" {
  count  = var.create_nat_gateway ? 1 : 0
  domain = "vpc"
}

# Reference conditional resources carefully
output "nat_gateway_ip" {
  value = var.create_nat_gateway ? aws_nat_gateway.main[0].public_ip : null
}

# ==============================================================================
# FOR_EACH - Create resources from a set or map
# ==============================================================================

variable "subnets" {
  type = map(object({
    cidr_block        = string
    availability_zone = string
    public            = bool
  }))

  default = {
    "public-1" = {
      cidr_block        = "10.0.1.0/24"
      availability_zone = "us-west-2a"
      public            = true
    }
    "public-2" = {
      cidr_block        = "10.0.2.0/24"
      availability_zone = "us-west-2b"
      public            = true
    }
    "private-1" = {
      cidr_block        = "10.0.10.0/24"
      availability_zone = "us-west-2a"
      public            = false
    }
    "private-2" = {
      cidr_block        = "10.0.11.0/24"
      availability_zone = "us-west-2b"
      public            = false
    }
  }
}

resource "aws_subnet" "all" {
  for_each = var.subnets # Create one subnet per map entry

  vpc_id                  = aws_vpc.main.id
  cidr_block              = each.value.cidr_block
  availability_zone       = each.value.availability_zone
  map_public_ip_on_launch = each.value.public

  tags = {
    Name = each.key # The map key becomes the name
    Type = each.value.public ? "public" : "private"
  }
}

# Reference for_each resources by key
resource "aws_route_table_association" "subnets" {
  for_each = aws_subnet.all # Iterate over the created subnets

  subnet_id      = each.value.id
  route_table_id = each.value.tags["Type"] == "public" ? aws_route_table.public.id : aws_route_table.private.id
}

# Access all subnets
output "subnet_details" {
  value = {
    for k, v in aws_subnet.all : k => {
      id   = v.id
      cidr = v.cidr_block
      az   = v.availability_zone
    }
  }
}

# ==============================================================================
# FOR_EACH with toset() - When you have a list of strings
# ==============================================================================

variable "iam_users" {
  type    = list(string)
  default = ["alice", "bob", "charlie"]
}

resource "aws_iam_user" "users" {
  for_each = toset(var.iam_users) # Convert list to set

  name = each.value # For sets, each.key == each.value

  tags = {
    ManagedBy = "terraform"
  }
}

output "iam_user_arns" {
  value = { for k, v in aws_iam_user.users : k => v.arn }
}
```

| Aspect | count | for_each |
|---|---|---|
| Resource Identity | Numeric index [0], [1], [2] | String key ["name"] |
| Removing Middle Item | ALL subsequent items renumbered/recreated | Only that item removed |
| Best For | Identical resources, conditional creation | Named resources with distinct config |
| Input Type | Number | Set or Map |
| Reordering | Causes recreation | No effect (keys unchanged) |
| Conditional | count = condition ? 1 : 0 | for_each = condition ? toset(["x"]) : toset([]) |
With count, if you have subnets [0], [1], [2] and remove the middle element from the input list, Terraform doesn't simply delete [1]: the configuration that was at index 2 shifts down into [1], so Terraform replaces subnet [1] and destroys [2]. A subnet you intended to keep gets recreated! Use for_each with meaningful keys for resources that may be added or removed individually.
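If you later migrate an existing count-based resource to for_each, Terraform would by default plan to destroy the indexed instances and create keyed ones. moved blocks (Terraform 1.1+) record the renames so state follows the refactor instead; the keys below are illustrative, assuming the subnet examples above:

```hcl
# Refactoring aws_subnet.public from count to for_each without recreation:
# each moved block tells Terraform the old address now lives at the new one.
moved {
  from = aws_subnet.public[0]
  to   = aws_subnet.public["a"]
}

moved {
  from = aws_subnet.public[1]
  to   = aws_subnet.public["b"]
}

moved {
  from = aws_subnet.public[2]
  to   = aws_subnet.public["c"]
}
```

After the refactor is applied everywhere, the moved blocks can be deleted; they are instructions to the state upgrade, not part of the desired configuration.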
Some resources have nested blocks that need to be repeated (like multiple ingress rules in a security group). Dynamic blocks allow you to programmatically generate these nested structures.
```hcl
# ==============================================================================
# DYNAMIC BLOCKS - Generating Nested Configurations
# ==============================================================================

variable "ingress_rules" {
  type = list(object({
    port        = number
    protocol    = string
    cidr_blocks = list(string)
    description = string
  }))

  default = [
    {
      port        = 80
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "HTTP from anywhere"
    },
    {
      port        = 443
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "HTTPS from anywhere"
    },
    {
      port        = 22
      protocol    = "tcp"
      cidr_blocks = ["10.0.0.0/8"]
      description = "SSH from internal network"
    }
  ]
}

resource "aws_security_group" "web" {
  name        = "web-sg"
  description = "Security group for web servers"
  vpc_id      = aws_vpc.main.id

  # Dynamic block generates multiple ingress blocks
  dynamic "ingress" {
    for_each = var.ingress_rules

    content {
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
      description = ingress.value.description
    }
  }

  # Static egress (allow all outbound)
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "web-sg"
  }
}

# ==============================================================================
# DYNAMIC BLOCK ANATOMY
# ==============================================================================
#
# dynamic "BLOCK_TYPE" {        # The nested block type to generate
#   for_each = COLLECTION       # What to iterate over
#   iterator = custom_name      # Optional: rename from BLOCK_TYPE
#
#   content {
#     # Use custom_name.key and custom_name.value (or BLOCK_TYPE by default)
#     argument = custom_name.value.attribute
#   }
# }

# ==============================================================================
# COMPLEX EXAMPLE: ECS Task Definition with Multiple Containers
# ==============================================================================

variable "containers" {
  type = map(object({
    image     = string
    cpu       = number
    memory    = number
    essential = bool
    ports     = list(number)
    env_vars  = map(string)
  }))

  default = {
    "web" = {
      image     = "nginx:latest"
      cpu       = 256
      memory    = 512
      essential = true
      ports     = [80, 443]
      env_vars  = { "ENV" = "production" }
    }
    "sidecar" = {
      image     = "datadog/agent:latest"
      cpu       = 128
      memory    = 256
      essential = false
      ports     = []
      env_vars  = { "DD_API_KEY" = "xxx" }
    }
  }
}

resource "aws_ecs_task_definition" "app" {
  family                   = "application"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 512
  memory                   = 1024

  container_definitions = jsonencode([
    for name, config in var.containers : {
      name      = name
      image     = config.image
      cpu       = config.cpu
      memory    = config.memory
      essential = config.essential

      portMappings = [
        for port in config.ports : {
          containerPort = port
          protocol      = "tcp"
        }
      ]

      environment = [
        for k, v in config.env_vars : {
          name  = k
          value = v
        }
      ]
    }
  ])
}

# ==============================================================================
# CONDITIONAL DYNAMIC BLOCKS
# ==============================================================================

variable "enable_logging" {
  type    = bool
  default = true
}

variable "log_bucket" {
  type    = string
  default = ""
}

resource "aws_lb" "main" {
  name               = "main-lb"
  load_balancer_type = "application"
  subnets            = aws_subnet.public[*].id

  # Only create the access_logs block if logging is enabled
  dynamic "access_logs" {
    for_each = var.enable_logging && var.log_bucket != "" ? [1] : []

    content {
      bucket  = var.log_bucket
      prefix  = "lb-logs"
      enabled = true
    }
  }

  tags = {
    Name = "main-lb"
  }
}
```

Dynamic blocks can make configurations harder to read. They're powerful for variable-length nested structures, but if you're generating most blocks statically, write them explicitly. Terraform's style guide recommends dynamic blocks only when the number of blocks is truly variable.
We've covered the essential knowledge for working with providers and resources. Let's consolidate:

- Providers are plugins that translate your declarative configuration into platform API calls; Terraform core talks to them over gRPC.
- Pin provider versions with pessimistic constraints ("~> X.Y") and commit .terraform.lock.hcl.
- Never put credentials in Terraform files; use environment variables, roles, managed identities, or OIDC federation.
- Use provider aliases (and the providers argument on modules) for multi-region and multi-account setups.
- Prefer for_each over count when resources may be added or removed individually.
- Customize resource behavior with lifecycle settings: create_before_destroy, prevent_destroy, ignore_changes, and replace_triggered_by.
- Reserve dynamic blocks for genuinely variable nested structures.

What's Next:
With providers and resources mastered, the next page covers State Management—how Terraform tracks the mapping between your configuration and real infrastructure. You'll learn about state backends, locking, workspaces, and the critical commands for state manipulation.
You now understand how Terraform interfaces with cloud platforms (providers) and how to declare, configure, and manage infrastructure objects (resources). These concepts form the foundation for everything you'll build with Terraform.