In 2011, Mitchell Hashimoto faced a problem that would eventually reshape how the entire industry manages infrastructure. Deploying applications across different environments—development, staging, production—required manually configuring servers, networking, and storage. Each deployment was a unique snowflake, prone to configuration drift, human error, and the notorious "works on my machine" syndrome.
His solution was Vagrant, a tool for creating reproducible development environments. But Vagrant was just the beginning. By 2014, Hashimoto recognized that the same declarative, reproducible approach could transform how organizations manage their entire cloud infrastructure. Terraform was born—and with it, a new paradigm for infrastructure management that would become the industry standard.
Today, Terraform is used by millions of practitioners at companies ranging from startups to Fortune 500 enterprises. It manages infrastructure worth billions of dollars across every major cloud provider. Understanding Terraform isn't just a useful skill—it's becoming as fundamental as knowing how to use version control.
By the end of this page, you will understand Terraform's core philosophy, its declarative approach to infrastructure, the architecture that makes it extensible across any provider, and why it has become the lingua franca of cloud infrastructure management. You'll be equipped to think in Terraform—a mental model that will serve you whether you're managing a single cloud account or architecting multi-cloud enterprises.
Terraform is an open-source Infrastructure as Code (IaC) tool created by HashiCorp that enables you to define, provision, and manage infrastructure using a high-level, declarative configuration language called HashiCorp Configuration Language (HCL).
At its core, Terraform answers a deceptively simple question: "What if you could describe your entire infrastructure in text files, version-control those files, and have a tool automatically create, update, or destroy the actual resources to match your description?"
This question represents a fundamental shift from imperative infrastructure management ("click this button, then that button, then run this script") to declarative infrastructure management ("this is what I want; figure out how to make it happen").
| Aspect | Imperative (Scripts/Manual) | Declarative (Terraform) |
|---|---|---|
| Description | Step-by-step instructions | Desired end state |
| "How" vs "What" | Specifies HOW to create | Specifies WHAT to create |
| Idempotency | Requires careful scripting | Built-in by design |
| State Awareness | None (runs blindly) | Tracks current state |
| Change Detection | Manual comparison | Automatic diff calculation |
| Rollback | Manual reverse scripts | Apply previous state file |
| Documentation | Scripts may be unclear | Config IS the documentation |
| Collaboration | Error-prone merging | Git-friendly, reviewable |
The declarative model means you describe WHAT you want, not HOW to achieve it. Terraform handles the orchestration: determining resource dependencies, ordering operations correctly, parallelizing where possible, and managing the complexity of API interactions across potentially hundreds of resources.
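As a minimal sketch (the resource names and CIDR values are illustrative), the declarative style means a network and its subnet are just two blocks; the reference between them is all Terraform needs to infer the ordering:

```hcl
# Declarative: state WHAT should exist, not the steps to create it
resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16" # desired end state
}

resource "aws_subnet" "app" {
  # Referencing aws_vpc.app.id implicitly tells Terraform the HOW:
  # create the VPC first, then this subnet
  vpc_id     = aws_vpc.app.id
  cidr_block = "10.0.1.0/24"
}
```

Applying this configuration twice creates nothing the second time: the declared state already matches reality, so Terraform plans no changes.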
Why "Terraform"?
The name comes from the science fiction concept of terraforming—transforming a hostile planet into one suitable for life. HashiCorp's Terraform transforms the hostile complexity of cloud APIs into manageable, reproducible infrastructure. Just as terraforming a planet requires understanding complex interrelated systems (atmosphere, temperature, water), managing cloud infrastructure requires understanding interdependent resources (networks, compute, storage, security).
Key Insight: Terraform doesn't execute infrastructure operations directly. Instead, it:

1. Parses your configuration into an internal dependency graph
2. Compares the desired state (your configuration) against the recorded state
3. Produces an execution plan describing the changes required
4. Delegates the actual API calls to provider plugins
This workflow—Write → Plan → Apply—is the heartbeat of Terraform usage and forms the foundation for safe, predictable infrastructure changes.
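At the command line, one pass through that heartbeat looks like the following (the plan-file name is illustrative):

```shell
terraform init              # once per working directory: install providers, set up backend
terraform plan -out=tfplan  # preview the diff; nothing in the cloud changes
terraform apply tfplan      # execute exactly the plan you reviewed
```

Saving the plan to a file and applying that file guarantees that what runs is precisely what was reviewed.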
To truly master Terraform, you must understand its internal architecture. Terraform isn't a monolithic application—it's a carefully designed system with distinct components that work together to translate your configuration into real infrastructure.
```
                        TERRAFORM ARCHITECTURE

 ┌─────────────────┐         ┌───────────────────────────────────┐
 │ Configuration   │────────▶│           TERRAFORM CORE          │
 │ Files (.tf)     │         │                                   │
 └─────────────────┘         │  ┌─────────────────────────────┐  │
                             │  │ Configuration Parser        │  │
 ┌─────────────────┐         │  │ (HCL/JSON → internal graph) │  │
 │ State File      │◀───────▶│  └──────────────┬──────────────┘  │
 │ (terraform.     │         │                 ▼                 │
 │  tfstate)       │         │  ┌─────────────────────────────┐  │
 └─────────────────┘         │  │ Dependency Graph            │  │
                             │  │ (ordering & parallelism)    │  │
                             │  └──────────────┬──────────────┘  │
                             │                 ▼                 │
                             │  ┌─────────────────────────────┐  │
                             │  │ Plan Engine                 │  │
                             │  │ (diff calculation/ordering) │  │
                             │  └─────────────────────────────┘  │
                             └─────────────────┬─────────────────┘
                                               ▼
 ┌──────────────────────────────────────────────────────────────────┐
 │                          PROVIDER LAYER                          │
 ├───────────────┬────────────────┬───────────────┬─────────────────┤
 │ AWS Provider  │ Azure Provider │ GCP Provider  │ Kubernetes      │
 │  - ec2        │  - vm          │  - compute    │ Provider        │
 │  - s3         │  - storage     │  - storage    │  - pod          │
 │  - vpc        │  - vnet        │  - network    │  - deploy, svc  │
 └───────────────┴────────────────┴───────────────┴─────────────────┘
                                               │
                                               ▼
 ┌──────────────────────────────────────────────────────────────────┐
 │                   ACTUAL CLOUD INFRASTRUCTURE                    │
 │       (AWS, Azure, GCP, Kubernetes, Datadog, GitHub, etc.)       │
 └──────────────────────────────────────────────────────────────────┘
```

Core Components Explained:
Configuration Parser: Reads your .tf files written in HCL (or JSON) and transforms them into an internal representation. This parser handles variable interpolation, function evaluation, and syntax validation.

Terraform Core is intentionally "infrastructure-agnostic." It doesn't know how to create an AWS EC2 instance or an Azure VM. All cloud-specific logic lives in providers—downloadable plugins that implement the actual API calls. This separation is why Terraform can manage thousands of different resource types across hundreds of platforms.
HCL (HashiCorp Configuration Language) is a declarative language designed specifically for defining infrastructure. It strikes a careful balance between human readability, machine parseability, and expressiveness. Unlike general-purpose programming languages, HCL is constrained to configuration—you can't write arbitrary programs, which makes configurations more predictable and reviewable.
Why not JSON or YAML?
HCL was created because JSON is verbose and lacks comments, while YAML's whitespace sensitivity and complex edge cases create subtle bugs. HCL provides:
- Native comments (# and // style)
- String interpolation and expressions without escaping gymnastics
- Far less verbosity than equivalent JSON
- No whitespace-sensitivity pitfalls like YAML's
```hcl
# HCL Syntax Fundamentals
# This file demonstrates core HCL concepts

# ==============================================================================
# BLOCKS: The primary structural element
# ==============================================================================

# A block has a TYPE, optional LABELS, and a BODY containing ARGUMENTS
# Format: <block_type> "<label1>" "<label2>" { ... }

terraform {
  # The terraform block configures Terraform itself
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Backend configuration for remote state storage
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}

# Provider block: Configures the provider plugin
provider "aws" {
  region = "us-west-2"

  default_tags {
    tags = {
      Environment = "production"
      ManagedBy   = "terraform"
    }
  }
}

# ==============================================================================
# RESOURCES: Concrete infrastructure objects
# ==============================================================================

# Resource block: <resource_type> "<local_name>"
resource "aws_vpc" "main" {
  # ARGUMENTS: key-value pairs
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "main-vpc"
    Tier = "networking"
  }
}

# Resource with reference to another resource
resource "aws_subnet" "public" {
  # Reference another resource using <resource_type>.<local_name>.<attribute>
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-west-2a"
  map_public_ip_on_launch = true

  tags = {
    Name = "public-subnet"
    Type = "public"
  }
}

# ==============================================================================
# VARIABLES: Input parameters for configuration
# ==============================================================================

variable "environment" {
  description = "Deployment environment (dev, staging, production)"
  type        = string
  default     = "development"

  validation {
    condition     = contains(["development", "staging", "production"], var.environment)
    error_message = "Environment must be development, staging, or production."
  }
}

variable "instance_count" {
  description = "Number of EC2 instances to create"
  type        = number
  default     = 2

  validation {
    condition     = var.instance_count > 0 && var.instance_count <= 10
    error_message = "Instance count must be between 1 and 10."
  }
}

# Complex variable types
variable "availability_zones" {
  description = "List of availability zones"
  type        = list(string)
  default     = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

variable "instance_config" {
  description = "Configuration for EC2 instances"
  type = object({
    instance_type = string
    volume_size   = number
    encrypted     = bool
  })
  default = {
    instance_type = "t3.micro"
    volume_size   = 20
    encrypted     = true
  }
}

# ==============================================================================
# LOCALS: Computed values for use within the module
# ==============================================================================

locals {
  # Simple computed values
  name_prefix = "${var.environment}-app"

  # Complex expressions
  common_tags = {
    Environment = var.environment
    Project     = "infrastructure"
    ManagedBy   = "terraform"
    UpdatedAt   = timestamp()
  }

  # Conditional logic
  is_production = var.environment == "production"

  # Computed from other resources
  vpc_cidr_blocks = [aws_vpc.main.cidr_block]
}

# ==============================================================================
# OUTPUTS: Values exposed after apply
# ==============================================================================

output "vpc_id" {
  description = "The ID of the created VPC"
  value       = aws_vpc.main.id
}

output "vpc_arn" {
  description = "The ARN of the created VPC"
  value       = aws_vpc.main.arn
  sensitive   = false
}

output "subnet_ids" {
  description = "List of subnet IDs"
  value       = [aws_subnet.public.id]
}
```

HCL describes WHAT exists, not the steps to create it. There's no 'if VPC doesn't exist, create it'—you simply declare that a VPC exists. Terraform determines whether creation, update, or no action is needed based on the current state.
Every Terraform operation follows a predictable workflow. This workflow is designed for safety and predictability—you always know what changes will be made before they happen. Understanding this workflow deeply is essential for confident infrastructure management.
```
                          TERRAFORM WORKFLOW

 PHASE 1: WRITE
 ──────────────
   • Author .tf configuration files
   • Define providers, resources, variables, outputs
   • Store in version control (Git)
   • Peer review changes via pull requests
        │
        ▼
 PHASE 2: INIT
 ──────────────
   $ terraform init
   • Download required provider plugins
   • Initialize backend for state storage
   • Create .terraform directory with cached plugins
   • Generate .terraform.lock.hcl for reproducible versions
        │
        ▼
 PHASE 3: PLAN
 ──────────────
   $ terraform plan
   • Read current state from state file/backend
   • Query providers for actual resource state
   • Compare desired (config) vs actual (current) state
   • Generate execution plan showing all changes
   • Display: + create, ~ update, - destroy, -/+ replace
   • NOTHING IS CHANGED - this is read-only
        │
        ▼
 PHASE 4: APPLY
 ──────────────
   $ terraform apply
   • Review plan output (same as terraform plan)
   • Prompt for approval (yes/no) unless -auto-approve
   • Execute changes in dependency order
   • Update state file with new resource attributes
   • Show outputs after successful completion
        │
        ▼
 PHASE 5: DESTROY (when needed)
 ──────────────
   $ terraform destroy
   • Generate destruction plan for ALL resources
   • Prompt for approval
   • Delete resources in reverse dependency order
   • Clear state file
```

Understanding a Terraform Plan Output:
The plan output is the most important safety mechanism in Terraform. Learning to read it fluently prevents accidental infrastructure damage.
```
$ terraform plan

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create
  ~ update in-place
  - destroy
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # aws_instance.web will be created
  + resource "aws_instance" "web" {
      + ami                         = "ami-0c55b159cbfafe1f0"
      + arn                         = (known after apply)
      + associate_public_ip_address = true
      + availability_zone           = (known after apply)
      + cpu_core_count              = (known after apply)
      + id                          = (known after apply)
      + instance_state              = (known after apply)
      + instance_type               = "t3.micro"
      + private_ip                  = (known after apply)
      + public_ip                   = (known after apply)
      + subnet_id                   = "subnet-abc123"
      + tags                        = {
          + "Environment" = "production"
          + "Name"        = "web-server"
        }
      + vpc_security_group_ids      = (known after apply)

      + root_block_device {
          + delete_on_termination = true
          + encrypted             = true
          + volume_size           = 20
          + volume_type           = "gp3"
        }
    }

  # aws_security_group.web will be updated in-place
  ~ resource "aws_security_group" "web" {
        id   = "sg-xyz789"
        name = "web-sg"
        tags = {
            "Name" = "web-security-group"
        }

      ~ ingress {
          ~ cidr_blocks = [
              - "0.0.0.0/0",
              + "10.0.0.0/8",
            ]
            from_port = 443
            protocol  = "tcp"
            to_port   = 443
        }
    }

  # aws_instance.old will be destroyed
  # (because aws_instance.old is not in configuration)
  - resource "aws_instance" "old" {
      - ami           = "ami-old-version" -> null
      - id            = "i-abc123def456" -> null
      - instance_type = "t2.micro" -> null
      - tags          = {
          - "Name" = "old-server"
        } -> null
    }

Plan: 1 to add, 1 to change, 1 to destroy.

Changes to Outputs:
  + new_instance_ip = (known after apply)
  - old_instance_ip = "10.0.1.50" -> null
```

The -/+ (replace) indicator is particularly dangerous. Some changes that seem minor (like changing an EC2 instance's AMI) require destroying the old resource and creating a new one. In production, this means downtime. Always review plans carefully before applying.
Terraform's power comes from a small set of core concepts that compose together to describe complex infrastructure. Mastering these concepts is essential for effective Terraform usage.
| Concept | Purpose | Example |
|---|---|---|
| Provider | Plugin that interfaces with an API (AWS, Azure, GCP, etc.) | provider "aws" { region = "us-west-2" } |
| Resource | Infrastructure object to create/manage | resource "aws_instance" "web" { ... } |
| Data Source | Read-only query to existing infrastructure | data "aws_ami" "ubuntu" { ... } |
| Variable | Input parameter for configuration | variable "region" { type = string } |
| Output | Value exposed after apply (for other configs) | output "vpc_id" { value = aws_vpc.main.id } |
| Local | Named expression computed within config | locals { name = "app-${var.env}" } |
| Module | Reusable container for related resources | module "vpc" { source = "./modules/vpc" } |
| State | Mapping between config and real resources | terraform.tfstate file (or remote backend) |
Understanding Data Sources vs Resources:
A common point of confusion is the difference between resources and data sources. The distinction is crucial:

- Resources are objects Terraform creates, updates, and destroys; Terraform owns their entire lifecycle.
- Data sources are read-only queries against infrastructure that already exists; Terraform fetches their attributes but never modifies them.
Data sources are essential for referencing existing infrastructure—perhaps an AWS VPC created by another team, or an AMI that your organization publishes.
```hcl
# Data Source: Read existing, externally-managed resources
# Terraform will QUERY this, not CREATE it

# Find the latest Ubuntu AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Read an existing VPC by tag
data "aws_vpc" "existing" {
  filter {
    name   = "tag:Name"
    values = ["production-vpc"]
  }
}

# Read AWS account details
data "aws_caller_identity" "current" {}

data "aws_region" "current" {}

# Use data sources in resources
resource "aws_instance" "web" {
  # Use the data source to get the AMI ID dynamically
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
    # Use data sources for metadata
    # (including the existing VPC's ID, read via the data source above)
    VPC     = data.aws_vpc.existing.id
    Account = data.aws_caller_identity.current.account_id
    Region  = data.aws_region.current.name
    AMI     = data.aws_ami.ubuntu.name
  }
}

output "ami_details" {
  value = {
    id           = data.aws_ami.ubuntu.id
    name         = data.aws_ami.ubuntu.name
    owner        = data.aws_ami.ubuntu.owner_id
    created_date = data.aws_ami.ubuntu.creation_date
  }
}
```

Use data sources when you need to: (1) Reference resources created outside Terraform, (2) Look up dynamic values like latest AMIs, (3) Query cloud-provider metadata, (4) Read secrets from external systems like HashiCorp Vault. Data sources make your configuration dynamic without managing external resources' lifecycles.
One of Terraform's most powerful features is automatic dependency detection. When you reference one resource from another, Terraform builds a directed acyclic graph (DAG) that determines the order of operations.
This graph ensures:

- Resources are created in the correct order (a subnet is never created before its VPC)
- Independent resources are created in parallel, speeding up operations
- Destruction happens in reverse dependency order, so nothing is deleted while something still depends on it
```hcl
# Implicit Dependencies (Recommended)
# Terraform automatically detects dependencies from references

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = "main-vpc" }
}

# This DEPENDS ON aws_vpc.main because it references aws_vpc.main.id
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id # <-- Creates implicit dependency
  tags   = { Name = "main-igw" }
}

# This DEPENDS ON aws_vpc.main
resource "aws_subnet" "public" {
  vpc_id            = aws_vpc.main.id # <-- Creates implicit dependency
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-west-2a"
  tags              = { Name = "public-subnet" }
}

# This DEPENDS ON both aws_vpc.main AND aws_internet_gateway.igw
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id # <-- Creates dependency
  }

  tags = { Name = "public-rt" }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id      # <-- Depends on subnet
  route_table_id = aws_route_table.public.id # <-- Depends on route table
}

# ==============================================================================
# Explicit Dependencies (Use sparingly)
# When there's a dependency Terraform can't automatically detect
# ==============================================================================

resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_lambda_function" "example" {
  filename      = "lambda.zip"
  function_name = "example-function"
  role          = aws_iam_role.lambda.arn
  handler       = "index.handler"
  runtime       = "nodejs18.x"

  # Explicit dependency: ensure policy is attached before function creation.
  # This is necessary because aws_lambda_function doesn't reference
  # aws_iam_role_policy_attachment directly, but needs the policy attached.
  depends_on = [aws_iam_role_policy_attachment.lambda_logs]
}

# ==============================================================================
# Visualizing the Dependency Graph
# ==============================================================================
# Run: terraform graph | dot -Tpng > graph.png
#
# This generates a visual representation of all dependencies
```

```
# The dependency graph for the above configuration:

                      ┌─────────────────┐
                      │   aws_vpc.main  │
                      └────────┬────────┘
                               │
               ┌───────────────┴──────────────┐
               ▼                              ▼
 ┌──────────────────────────┐      ┌───────────────────┐
 │ aws_internet_gateway.igw │      │ aws_subnet.public │
 └────────────┬─────────────┘      └─────────┬─────────┘
              │                              │
              ▼                              │
 ┌──────────────────────────┐                │
 │  aws_route_table.public  │                │
 └────────────┬─────────────┘                │
              │                              │
              ▼                              ▼
 ┌───────────────────────────────────────────────┐
 │     aws_route_table_association.public        │
 └───────────────────────────────────────────────┘

# Terraform processes this graph:
# 1. Creates aws_vpc.main first (no dependencies)
# 2. Creates aws_internet_gateway.igw AND aws_subnet.public IN PARALLEL
# 3. Creates aws_route_table.public (waits for IGW and VPC)
# 4. Creates aws_route_table_association.public (waits for subnet and RT)
```

Use depends_on only when Terraform cannot detect the dependency automatically. Overusing depends_on reduces parallelism (slowing down operations) and can create confusing, hard-to-maintain configurations. Let Terraform's automatic dependency detection work—it's almost always sufficient.
HCL includes a rich expression language with operators, conditionals, and over 100 built-in functions. These enable dynamic, DRY configurations without resorting to external scripting.
```hcl
# ==============================================================================
# OPERATORS AND EXPRESSIONS
# ==============================================================================

locals {
  # Arithmetic operators
  total_cpu    = var.cpu_per_node * var.node_count
  memory_ratio = var.memory_gb / var.cpu_count

  # Comparison operators (return boolean)
  is_large_cluster        = var.node_count > 10
  needs_high_availability = var.environment == "production"

  # Logical operators
  enable_monitoring = var.environment == "production" || var.enable_debug
  is_critical       = local.is_large_cluster && local.needs_high_availability
  disable_feature   = !var.feature_enabled

  # Conditional expressions (ternary)
  instance_type = var.environment == "production" ? "m5.xlarge" : "t3.micro"

  # Nested conditionals (use sparingly)
  storage_size = (
    var.environment == "production" ? 500 :
    var.environment == "staging" ? 100 :
    20 # default for development
  )
}

# ==============================================================================
# STRING FUNCTIONS
# ==============================================================================

locals {
  # String manipulation
  upper_env  = upper(var.environment) # "PRODUCTION"
  lower_name = lower(var.name)        # "my-app"
  title_case = title(var.name)        # "My App"
  trimmed    = trimspace("  hello  ") # "hello"

  # String formatting
  formatted_name = format("%s-%s-%03d", var.project, var.env, var.index)
  # e.g. "app-prod-001"

  # String interpolation (preferred over format for simple cases)
  resource_name = "${var.project}-${var.environment}-api"

  # Regular expressions
  is_valid_email = can(regex("^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$", var.email))

  # Join and split
  az_string = join(",", var.availability_zones)              # "us-west-2a,us-west-2b"
  az_list   = split(",", "us-west-2a,us-west-2b,us-west-2c") # ["us-west-2a", ...]

  # Replace
  sanitized_name = replace(var.name, "/[^a-zA-Z0-9]/", "-") # Replace special chars
}

# ==============================================================================
# COLLECTION FUNCTIONS
# ==============================================================================

variable "server_configs" {
  default = [
    { name = "web-1", role = "web", size = "small" },
    { name = "web-2", role = "web", size = "medium" },
    { name = "db-1", role = "db", size = "large" },
  ]
}

locals {
  # Length
  server_count = length(var.server_configs) # 3

  # Element access
  first_server = var.server_configs[0] # { name = "web-1", ... }
  last_server  = element(var.server_configs, length(var.server_configs) - 1)

  # Lookup (for maps)
  regions_map = {
    "us" = "us-west-2"
    "eu" = "eu-west-1"
    "ap" = "ap-northeast-1"
  }
  selected_region = lookup(local.regions_map, var.region_code, "us-west-2")

  # Keys and values
  region_codes = keys(local.regions_map)   # ["us", "eu", "ap"]
  region_names = values(local.regions_map) # ["us-west-2", "eu-west-1", ...]

  # Merge maps
  base_tags = { Environment = var.environment, ManagedBy = "terraform" }
  app_tags  = { Application = var.app_name, Team = var.team }
  all_tags  = merge(local.base_tags, local.app_tags)

  # Flatten nested lists
  nested_list = [["a", "b"], ["c", "d"], ["e"]]
  flat_list   = flatten(local.nested_list) # ["a", "b", "c", "d", "e"]

  # Distinct (remove duplicates)
  all_zones    = ["us-west-2a", "us-west-2b", "us-west-2a"]
  unique_zones = distinct(local.all_zones) # ["us-west-2a", "us-west-2b"]

  # Coalesce (first non-null/empty value)
  actual_name = coalesce(var.custom_name, var.default_name, "unnamed")

  # Compact (remove null/empty from list)
  valid_items = compact([var.item1, var.item2, null, "", var.item3])
}

# ==============================================================================
# FOR EXPRESSIONS (List/Map Comprehensions)
# ==============================================================================

locals {
  # Transform list -> list
  server_names = [for s in var.server_configs : s.name] # ["web-1", "web-2", "db-1"]

  # Transform with index
  indexed_names = [for i, s in var.server_configs : "${i}: ${s.name}"]

  # Filter with condition
  web_servers = [for s in var.server_configs : s if s.role == "web"]

  # Transform list -> map
  servers_by_name = { for s in var.server_configs : s.name => s }
  # Result: { "web-1" = {...}, "web-2" = {...}, "db-1" = {...} }

  # Map transformation
  size_lookup = { for s in var.server_configs : s.name => s.size }
  # Result: { "web-1" = "small", "web-2" = "medium", "db-1" = "large" }

  # Uppercase all keys
  upper_regions = { for k, v in local.regions_map : upper(k) => v }
}

# ==============================================================================
# TYPE CONVERSION FUNCTIONS
# ==============================================================================

locals {
  # Convert to different types
  count_string = tostring(var.instance_count) # "3"
  count_number = tonumber(var.count_string)   # 3
  enabled_bool = tobool(var.enabled_string)   # true

  # Convert collections
  # (named az_set/az_roundtrip to avoid clashing with az_list above)
  az_set       = toset(var.availability_zones) # Set (unique values)
  az_roundtrip = tolist(local.az_set)          # List
  config_map   = tomap(var.config_object)      # Map

  # JSON encoding/decoding
  json_string = jsonencode({ name = "app", version = "1.0" })
  json_object = jsondecode(file("config.json"))

  # YAML encoding/decoding
  yaml_string = yamlencode({ name = "app", version = "1.0" })
  yaml_object = yamldecode(file("config.yaml"))
}

# ==============================================================================
# FILE FUNCTIONS
# ==============================================================================

locals {
  # Read file contents
  user_data_script = file("${path.module}/scripts/user-data.sh")

  # Read and base64 encode (common for user data)
  encoded_script = base64encode(file("${path.module}/scripts/bootstrap.sh"))

  # Template rendering with variable substitution
  # (the older template_file data source is deprecated; use templatefile())
  rendered_config = templatefile("${path.module}/templates/config.tpl", {
    db_host     = aws_db_instance.main.endpoint
    db_port     = aws_db_instance.main.port
    environment = var.environment
  })

  # Path references
  module_path = path.module # Directory containing current .tf file
  root_path   = path.root   # Root module directory
  cwd_path    = path.cwd    # Current working directory
}
```

Use terraform console to interactively test expressions and functions. It loads your configuration and lets you experiment: > length([1,2,3]) → 3. This is invaluable for debugging complex expressions before committing them.
We've established the foundation of Terraform knowledge. Let's consolidate the key takeaways:
- Terraform is declarative: you describe the desired end state, and Terraform computes the changes needed to reach it.
- The Write → Plan → Apply workflow makes every change reviewable before it happens.
- Terraform Core is provider-agnostic; all cloud-specific logic lives in downloadable provider plugins.
- State is the mapping between your configuration and real infrastructure.
- HCL provides variables, locals, outputs, and a rich expression language for dynamic, DRY configurations.
- Terraform builds a dependency graph automatically from resource references; use depends_on only when necessary.

What's Next:
With the fundamentals established, the next page dives deep into Providers and Resources—the building blocks that actually create infrastructure. You'll learn how to configure providers for different clouds, understand the resource lifecycle, work with resource attributes and references, and master the patterns that make Terraform configurations maintainable across large organizations.
You now understand Terraform's core philosophy, architecture, and workflow. These fundamentals will serve as the foundation for everything else you learn about Terraform—from simple single-resource configurations to complex multi-environment, multi-team enterprise deployments.