Application developers have decades of testing wisdom: unit tests, integration tests, end-to-end tests, property-based tests. But how do you test infrastructure? You can't exactly spin up a production-identical environment for every pull request—that would be prohibitively expensive and slow.
Yet infrastructure failures are among the most impactful. A misconfigured security group can expose sensitive data. An incorrect IAM policy can grant admin access to the wrong principals. A missing tag can make resources impossible to track for billing. Automated testing for Infrastructure as Code provides the safety net that catches these issues before they reach production.
By the end of this page, you will understand the testing pyramid for infrastructure, how to implement static analysis and policy testing, integration testing patterns for IaC, contract testing for infrastructure modules, and compliance-as-code frameworks. You will be able to design a comprehensive testing strategy that catches issues at each level of the infrastructure lifecycle.
Like application testing, infrastructure testing follows a pyramid structure. Tests at the bottom are fast and cheap; tests at the top are slow and expensive. The goal is to catch as many issues as possible at lower levels, reserving expensive higher-level tests for validations that can't be done any other way.
The Infrastructure Testing Pyramid:
| Level | Speed | Cost | Catches | When to Run |
|---|---|---|---|---|
| Static Analysis | Seconds | Near zero | Syntax errors, security misconfigs, policy violations | Every commit, pre-commit hooks |
| Contract Tests | Seconds | Low | Module interface issues, input validation failures | Every PR, module changes |
| Integration Tests | Minutes | Moderate | Cross-resource issues, actual behavior validation | Before production deploy |
| End-to-End Tests | Hours | High | Full environment issues, real workload validation | Release gates, scheduled |
The Key Insight: Test What You Can't Observe in Plans
The Terraform plan shows you what will be created, but it doesn't show you how those resources will actually behave once deployed—whether routes pass traffic, whether IAM policies grant the intended access, or whether services can reach each other.
Testing fills these gaps—verifying not just that resources will be created, but that they will behave correctly.
The shift-left philosophy applies to infrastructure testing: catch issues as early as possible in the development lifecycle. A security misconfiguration caught by static analysis costs minutes to fix. The same issue caught in production costs days of incident response. Invest in lower pyramid levels for maximum return.
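One common shift-left setup uses the community pre-commit-terraform hooks to run formatting, validation, linting, and security scanning before a commit ever lands. A minimal sketch (the `rev` pin is illustrative; pin whichever release you actually use):

```yaml
# .pre-commit-config.yaml — runs IaC checks on every local commit
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.88.0   # illustrative version pin
    hooks:
      - id: terraform_fmt        # canonical formatting
      - id: terraform_validate   # syntax and internal consistency
      - id: terraform_tflint     # linting (see TFLint config below)
      - id: terraform_tfsec      # security scanning
```

Developers get feedback in seconds, and CI re-runs the same checks as a backstop for anyone who skips the hooks.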
Static analysis examines infrastructure code without executing it. This is the fastest, cheapest form of testing and should catch the majority of issues. A robust static analysis pipeline includes multiple specialized tools.
Categories of Static Analysis:
```hcl
# TFLint configuration for comprehensive Terraform linting
config {
  # Enable module inspection
  module = true
  # Force all variables to be typed
  force = false
}

# AWS-specific rules
plugin "aws" {
  enabled = true
  version = "0.27.0"
  source  = "github.com/terraform-linters/tflint-ruleset-aws"
}

# Terraform best practices
plugin "terraform" {
  enabled = true
  preset  = "recommended"
}

# Naming conventions
rule "terraform_naming_convention" {
  enabled = true
  format  = "snake_case"
}

# Require descriptions on variables
rule "terraform_documented_variables" {
  enabled = true
}

# Require descriptions on outputs
rule "terraform_documented_outputs" {
  enabled = true
}

# Prevent deprecated syntax
rule "terraform_deprecated_interpolation" {
  enabled = true
}

# AWS-specific: Check for valid instance types
rule "aws_instance_invalid_type" {
  enabled = true
}

# AWS-specific: RDS instance type validation
rule "aws_db_instance_invalid_type" {
  enabled = true
}
```
```yaml
name: Security Scan

on:
  pull_request:
    paths:
      - '**/*.tf'
      - '**/*.yaml'
      - '**/*.yml'

jobs:
  tfsec:
    name: Terraform Security Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: tfsec
        uses: aquasecurity/tfsec-action@v1.0.0
        with:
          soft_fail: false
          additional_args: --config-file .tfsec.yaml

  checkov:
    name: Checkov Security Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Checkov
        uses: bridgecrewio/checkov-action@v12
        with:
          directory: .
          framework: terraform
          skip_check: CKV_AWS_999 # Skip specific check if needed
          output_format: cli,sarif
          output_file_path: console,results.sarif
          soft_fail: false
      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: results.sarif

  trivy:
    name: Trivy IaC Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Trivy
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'config'
          scan-ref: '.'
          exit-code: '1'
          severity: 'CRITICAL,HIGH'
          format: 'sarif'
          output: 'trivy-results.sarif'
```

| Finding | Risk | Example Fix |
|---|---|---|
| S3 bucket without encryption | Data exposure | Add server_side_encryption_configuration |
| Security group allows 0.0.0.0/0 on SSH | Unauthorized access | Restrict to specific CIDR ranges |
| RDS without encryption at rest | Data exposure | Enable storage_encrypted = true |
| IAM policy with wildcard (`*:*`) permissions | Privilege escalation | Apply least-privilege principles |
| CloudTrail not enabled | No audit trail | Enable CloudTrail for all regions |
Security scanners will produce false positives. It's better to review and suppress false positives than to miss real issues. Most tools support inline suppressions with comments documenting why a finding is acceptable.
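For example, both tfsec and Checkov accept inline suppression comments inside the resource block. A hedged sketch (the rule IDs shown are illustrative, not necessarily the ones your scan reports):

```hcl
resource "aws_s3_bucket" "public_site" {
  # tfsec:ignore:aws-s3-enable-bucket-encryption
  #checkov:skip=CKV_AWS_145:Static website assets; public and unencrypted by design
  bucket = "example-public-site"
}
```

Keeping the justification in the comment means the suppression is reviewed alongside the code it excuses.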
While security scanners check for common misconfigurations, policy as code allows organizations to define and test their own rules. This is essential for enforcing organizational standards that generic scanners don't cover.
What Policy as Code Validates:
```rego
# OPA Rego Policy: Require Tags on AWS Resources
package terraform.policies.tags

import future.keywords.in
import future.keywords.if

# Define required tags
required_tags := ["Environment", "Owner", "CostCenter", "Application"]

# Resources that must have tags
taggable_resources := [
  "aws_instance",
  "aws_s3_bucket",
  "aws_rds_instance",
  "aws_vpc",
  "aws_subnet",
  "aws_security_group",
  "aws_lambda_function",
]

# Violation: Missing required tags
deny[msg] if {
  resource := input.planned_values.root_module.resources[_]
  resource.type in taggable_resources
  some tag in required_tags
  not tag_exists(resource, tag)
  msg := sprintf(
    "Resource %s (%s) is missing required tag: %s",
    [resource.name, resource.type, tag]
  )
}

# Helper: Check if tag exists
tag_exists(resource, tag) if {
  resource.values.tags[tag]
}

# Violation: Empty tag values
deny[msg] if {
  resource := input.planned_values.root_module.resources[_]
  resource.type in taggable_resources
  some tag in required_tags
  resource.values.tags[tag] == ""
  msg := sprintf(
    "Resource %s has empty value for required tag: %s",
    [resource.name, tag]
  )
}

# Violation: Non-standard environment value
valid_environments := ["dev", "staging", "production"]

deny[msg] if {
  resource := input.planned_values.root_module.resources[_]
  resource.type in taggable_resources
  env_tag := resource.values.tags.Environment
  not env_tag in valid_environments
  msg := sprintf(
    "Resource %s has invalid Environment tag '%s'. Must be one of: %v",
    [resource.name, env_tag, valid_environments]
  )
}
```
```rego
# OPA Rego Policy: Security Requirements
package terraform.policies.security

import future.keywords.if
import future.keywords.in

# Deny public S3 buckets
deny[msg] if {
  resource := input.planned_values.root_module.resources[_]
  resource.type == "aws_s3_bucket"
  acl := resource.values.acl
  acl in ["public-read", "public-read-write"]
  msg := sprintf(
    "S3 bucket %s has public ACL '%s'. Public buckets are not allowed.",
    [resource.name, acl]
  )
}

# Deny unencrypted EBS volumes
deny[msg] if {
  resource := input.planned_values.root_module.resources[_]
  resource.type == "aws_ebs_volume"
  not resource.values.encrypted
  msg := sprintf(
    "EBS volume %s is not encrypted. All volumes must be encrypted.",
    [resource.name]
  )
}

# Deny security groups with unrestricted SSH
deny[msg] if {
  resource := input.planned_values.root_module.resources[_]
  resource.type == "aws_security_group"
  ingress := resource.values.ingress[_]
  ingress.from_port <= 22
  ingress.to_port >= 22
  "0.0.0.0/0" in ingress.cidr_blocks
  msg := sprintf(
    "Security group %s allows SSH from 0.0.0.0/0. Restrict SSH access.",
    [resource.name]
  )
}

# Deny unapproved AWS regions
approved_regions := ["us-west-2", "us-east-1", "eu-west-1"]

deny[msg] if {
  resource := input.configuration.provider_config.aws
  region := resource.expressions.region.constant_value
  not region in approved_regions
  msg := sprintf(
    "AWS provider uses unapproved region '%s'. Approved: %v",
    [region, approved_regions]
  )
}
```

Running Policy Tests:
Policy tests integrate into the CI pipeline by evaluating policies against the Terraform plan:
```yaml
name: Policy Check

on:
  pull_request:
    paths:
      - 'infrastructure/**'

jobs:
  policy-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_wrapper: false # Need raw output for JSON

      - name: Setup Conftest
        run: |
          wget -q https://github.com/open-policy-agent/conftest/releases/download/v0.45.0/conftest_0.45.0_Linux_x86_64.tar.gz
          tar xzf conftest_*.tar.gz
          sudo mv conftest /usr/local/bin/

      - name: Terraform Init
        run: terraform init -backend=false
        working-directory: infrastructure

      - name: Generate Plan JSON
        run: |
          terraform plan -out=tfplan
          terraform show -json tfplan > tfplan.json
        working-directory: infrastructure

      - name: Run Policy Tests
        run: |
          conftest test infrastructure/tfplan.json \
            --policy policies/ \
            --output table \
            --all-namespaces

      - name: Post Results
        if: failure()
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: '❌ **Policy Check Failed**\n\nThis PR violates organizational policies. See workflow output for details.'
            });
```

If using Terraform Cloud or Enterprise, HashiCorp Sentinel provides native policy-as-code integration with a more Terraform-aware policy language. OPA/Conftest works well for open-source Terraform and has broader ecosystem support.
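To make the plan-evaluation model concrete, here is a simplified Python analogue of the required-tags Rego policy, walking the same structure that `terraform show -json tfplan` produces. This is a sketch for understanding, not a replacement for OPA; the tag list and resource types are examples.

```python
REQUIRED_TAGS = ["Environment", "Owner", "CostCenter", "Application"]
TAGGABLE_TYPES = {"aws_instance", "aws_s3_bucket", "aws_vpc"}

def check_required_tags(plan: dict) -> list:
    """Return violation messages for missing or empty required tags."""
    violations = []
    resources = (
        plan.get("planned_values", {})
            .get("root_module", {})
            .get("resources", [])
    )
    for resource in resources:
        if resource.get("type") not in TAGGABLE_TYPES:
            continue
        tags = resource.get("values", {}).get("tags") or {}
        for tag in REQUIRED_TAGS:
            if tag not in tags:
                violations.append(
                    f"{resource['type']}.{resource['name']} missing tag: {tag}"
                )
            elif tags[tag] == "":
                violations.append(
                    f"{resource['type']}.{resource['name']} has empty tag: {tag}"
                )
    return violations

# Minimal fake plan JSON for demonstration
plan = {
    "planned_values": {
        "root_module": {
            "resources": [
                {"type": "aws_s3_bucket", "name": "logs",
                 "values": {"tags": {"Environment": "dev", "Owner": ""}}},
            ]
        }
    }
}
print(check_required_tags(plan))
```

The Rego version expresses the same checks declaratively; each `deny` rule corresponds to one branch of this loop.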
Terraform modules are shared components used across multiple configurations. Like APIs, modules have contracts—expected inputs, outputs, and behaviors. Contract testing validates that modules fulfill these contracts and that consumers use modules correctly.
What Contract Testing Validates:
```hcl
# Contract tests for the VPC module (tests/vpc_test.tftest.hcl)

# Test: Module creates VPC with correct CIDR
run "verify_vpc_creation" {
  command = plan

  variables {
    vpc_cidr        = "10.0.0.0/16"
    environment     = "test"
    azs             = ["us-west-2a", "us-west-2b"]
    private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
    public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
  }

  # Assert VPC is created with correct CIDR
  assert {
    condition     = aws_vpc.main.cidr_block == "10.0.0.0/16"
    error_message = "VPC CIDR does not match input"
  }

  # Assert correct number of subnets
  assert {
    condition     = length(aws_subnet.private) == 2
    error_message = "Expected 2 private subnets"
  }

  assert {
    condition     = length(aws_subnet.public) == 2
    error_message = "Expected 2 public subnets"
  }
}

# Test: Outputs are defined
run "verify_outputs" {
  command = plan

  variables {
    vpc_cidr        = "10.0.0.0/16"
    environment     = "test"
    azs             = ["us-west-2a"]
    private_subnets = ["10.0.1.0/24"]
    public_subnets  = ["10.0.101.0/24"]
  }

  assert {
    condition     = can(output.vpc_id)
    error_message = "vpc_id output must be defined"
  }

  assert {
    condition     = can(output.private_subnet_ids)
    error_message = "private_subnet_ids output must be defined"
  }

  assert {
    condition     = can(output.public_subnet_ids)
    error_message = "public_subnet_ids output must be defined"
  }
}

# Test: Invalid CIDR is rejected
run "reject_invalid_cidr" {
  command = plan

  expect_failures = [var.vpc_cidr]

  variables {
    vpc_cidr        = "invalid-cidr" # This should fail validation
    environment     = "test"
    azs             = ["us-west-2a"]
    private_subnets = ["10.0.1.0/24"]
    public_subnets  = ["10.0.101.0/24"]
  }
}
```

Variable Validation in Modules:
Modules should validate their inputs to fail fast with clear errors:
```hcl
# Module input validation
variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string

  validation {
    condition     = can(cidrhost(var.vpc_cidr, 0))
    error_message = "vpc_cidr must be a valid CIDR block"
  }

  validation {
    condition     = tonumber(split("/", var.vpc_cidr)[1]) >= 16 && tonumber(split("/", var.vpc_cidr)[1]) <= 24
    error_message = "VPC CIDR must be between /16 and /24"
  }
}

variable "environment" {
  description = "Environment name"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "Environment must be dev, staging, or production"
  }
}

variable "private_subnets" {
  description = "List of private subnet CIDRs"
  type        = list(string)

  validation {
    condition     = length(var.private_subnets) >= 1
    error_message = "At least one private subnet is required"
  }

  validation {
    condition     = alltrue([for cidr in var.private_subnets : can(cidrhost(cidr, 0))])
    error_message = "All private subnet CIDRs must be valid"
  }
}
```

Terraform 1.6+ includes native testing via `terraform test`. Earlier versions require external tools like Terratest (Go) or kitchen-terraform (Ruby). The native framework is recommended for new projects.
Integration tests go beyond static analysis by actually deploying infrastructure and validating its behavior. This is more expensive but catches issues that only manifest in real cloud environments.
When Integration Tests Are Essential:
Terratest: The Go-Based Testing Framework:
```go
package test

import (
	"testing"
	"time"

	"github.com/gruntwork-io/terratest/modules/aws"
	"github.com/gruntwork-io/terratest/modules/random"
	"github.com/gruntwork-io/terratest/modules/retry"
	"github.com/gruntwork-io/terratest/modules/ssh"
	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestVpcModule(t *testing.T) {
	t.Parallel()

	// Unique ID for test resources
	uniqueId := random.UniqueId()

	opts := &terraform.Options{
		TerraformDir: "../modules/vpc",
		Vars: map[string]interface{}{
			"vpc_cidr":        "10.99.0.0/16", // Test-specific CIDR
			"environment":     "test-" + uniqueId,
			"azs":             []string{"us-west-2a", "us-west-2b"},
			"private_subnets": []string{"10.99.1.0/24", "10.99.2.0/24"},
			"public_subnets":  []string{"10.99.101.0/24", "10.99.102.0/24"},
		},
		EnvVars: map[string]string{
			"AWS_DEFAULT_REGION": "us-west-2",
		},
	}

	// Ensure cleanup happens
	defer terraform.Destroy(t, opts)

	// Deploy infrastructure
	terraform.InitAndApply(t, opts)

	// Get outputs
	vpcId := terraform.Output(t, opts, "vpc_id")
	privateSubnetIds := terraform.OutputList(t, opts, "private_subnet_ids")
	publicSubnetIds := terraform.OutputList(t, opts, "public_subnet_ids")

	// Verify VPC exists and has correct CIDR
	vpc := aws.GetVpcById(t, vpcId, "us-west-2")
	assert.Equal(t, "10.99.0.0/16", vpc.CidrBlock)

	// Verify subnet count
	assert.Equal(t, 2, len(privateSubnetIds))
	assert.Equal(t, 2, len(publicSubnetIds))

	// Verify public subnets have internet gateway route
	for _, subnetId := range publicSubnetIds {
		routeTable := aws.GetRouteTableForSubnet(t, subnetId, "us-west-2")
		hasInternetRoute := false
		for _, route := range routeTable.Routes {
			if route.DestinationCidrBlock == "0.0.0.0/0" {
				hasInternetRoute = true
				break
			}
		}
		assert.True(t, hasInternetRoute,
			"Public subnet %s should have internet gateway route", subnetId)
	}

	// Verify private subnets do NOT have direct internet route
	for _, subnetId := range privateSubnetIds {
		routeTable := aws.GetRouteTableForSubnet(t, subnetId, "us-west-2")
		hasDirectInternetRoute := false
		for _, route := range routeTable.Routes {
			if route.DestinationCidrBlock == "0.0.0.0/0" && route.GatewayId != "" {
				hasDirectInternetRoute = true
				break
			}
		}
		assert.False(t, hasDirectInternetRoute,
			"Private subnet %s should not have direct internet route", subnetId)
	}
}

func TestVpcConnectivity(t *testing.T) {
	t.Parallel()

	// This test validates actual network connectivity
	// by deploying EC2 instances and verifying they can communicate
	opts := &terraform.Options{
		TerraformDir: "../test/fixtures/vpc_connectivity",
		// ... configuration ...
	}

	defer terraform.Destroy(t, opts)
	terraform.InitAndApply(t, opts)

	// Deploy test instances (done by fixture)
	publicInstanceIp := terraform.Output(t, opts, "public_instance_ip")
	privateInstanceIp := terraform.Output(t, opts, "private_instance_ip")

	// Verify public instance is reachable over SSH, retrying while it boots
	_, err := retry.DoWithRetryE(
		t, "Check public instance SSH", 10, 5*time.Second,
		func() (string, error) {
			host := ssh.Host{Hostname: publicInstanceIp, SshUserName: "ec2-user"}
			return "", ssh.CheckSshConnectionE(t, host)
		},
	)
	require.NoError(t, err)

	// Verify private instance reachable via bastion (public instance)
	// ... additional connectivity tests ...
	_ = privateInstanceIp
}
```

Cost and Time Considerations:
Integration tests consume real cloud resources: always register cleanup with `defer terraform.Destroy()` so resources are removed even when assertions fail.

Failed integration tests can still leave orphaned resources. Implement scheduled cleanup jobs that destroy resources tagged as test resources older than a threshold (e.g., 24 hours). This prevents cost leaks from abandoned test infrastructure.
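The stale-resource filter at the heart of such a cleanup job can be sketched in Python. The `Purpose = integration-test` tag convention is an assumption; in practice you would fetch the resource list with your cloud SDK (e.g., boto3) and delete whatever this returns.

```python
from datetime import datetime, timedelta, timezone

def find_stale_test_resources(resources, max_age_hours=24, now=None):
    """Return IDs of test-tagged resources older than the threshold.

    `resources` is a list of dicts: {"id", "tags", "created_at"}.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=max_age_hours)
    stale = []
    for r in resources:
        # Only touch resources explicitly tagged as test infrastructure
        if r.get("tags", {}).get("Purpose") != "integration-test":
            continue
        if r["created_at"] < cutoff:
            stale.append(r["id"])
    return stale

now = datetime(2024, 1, 2, tzinfo=timezone.utc)
resources = [
    {"id": "i-old", "tags": {"Purpose": "integration-test"},
     "created_at": datetime(2023, 12, 30, tzinfo=timezone.utc)},
    {"id": "i-new", "tags": {"Purpose": "integration-test"},
     "created_at": datetime(2024, 1, 1, 12, tzinfo=timezone.utc)},
    {"id": "i-prod", "tags": {"Purpose": "production"},
     "created_at": datetime(2023, 1, 1, tzinfo=timezone.utc)},
]
print(find_stale_test_resources(resources, now=now))  # ['i-old']
```

Filtering strictly on the test tag is what makes the job safe to run on a schedule: production resources are never candidates for deletion.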
Compliance testing ensures infrastructure meets regulatory and organizational requirements. This is especially critical in regulated industries (finance, healthcare, government) but relevant for any organization with security standards.
Compliance Frameworks and Their IaC Implications:
| Framework | Key Requirements | IaC Testing Focus |
|---|---|---|
| SOC 2 | Security, availability, confidentiality | Encryption, access controls, logging |
| PCI DSS | Payment card data protection | Network segmentation, encryption, audit trails |
| HIPAA | Protected health information | Encryption, access controls, audit logging |
| GDPR | Personal data protection | Data residency, encryption, access controls |
| CIS Benchmarks | Cloud security best practices | Configuration hardening across all resources |
```rego
# CIS AWS Foundations Benchmark Checks
package terraform.compliance.cis_aws

import future.keywords.if
import future.keywords.in

# CIS 2.1.1: Ensure S3 bucket has server-side encryption enabled
deny[msg] if {
  resource := input.planned_values.root_module.resources[_]
  resource.type == "aws_s3_bucket"
  not has_encryption(resource)
  msg := sprintf(
    "[CIS 2.1.1] S3 bucket %s does not have server-side encryption enabled",
    [resource.name]
  )
}

has_encryption(bucket) if {
  # Check for default encryption configuration
  bucket.values.server_side_encryption_configuration[_]
}

# CIS 2.1.2: Ensure S3 bucket has logging enabled
deny[msg] if {
  resource := input.planned_values.root_module.resources[_]
  resource.type == "aws_s3_bucket"
  not has_logging(resource)
  msg := sprintf(
    "[CIS 2.1.2] S3 bucket %s does not have logging enabled",
    [resource.name]
  )
}

has_logging(bucket) if {
  bucket.values.logging[_]
}

# CIS 2.2.1: Ensure EBS volume encryption is enabled
deny[msg] if {
  resource := input.planned_values.root_module.resources[_]
  resource.type == "aws_ebs_volume"
  not resource.values.encrypted
  msg := sprintf(
    "[CIS 2.2.1] EBS volume %s is not encrypted",
    [resource.name]
  )
}

# CIS 3.10: Ensure VPC Flow Logs are enabled
deny[msg] if {
  vpc := input.planned_values.root_module.resources[_]
  vpc.type == "aws_vpc"
  not has_flow_log(vpc.values.id)
  msg := sprintf(
    "[CIS 3.10] VPC %s does not have flow logs enabled",
    [vpc.name]
  )
}

has_flow_log(vpc_id) if {
  flow_log := input.planned_values.root_module.resources[_]
  flow_log.type == "aws_flow_log"
  flow_log.values.vpc_id == vpc_id
}

# CIS 4.1: Ensure no security groups allow ingress from 0.0.0.0/0 to port 22
deny[msg] if {
  resource := input.planned_values.root_module.resources[_]
  resource.type == "aws_security_group"
  ingress := resource.values.ingress[_]
  ingress.from_port <= 22
  ingress.to_port >= 22
  "0.0.0.0/0" in ingress.cidr_blocks
  msg := sprintf(
    "[CIS 4.1] Security group %s allows SSH from 0.0.0.0/0",
    [resource.name]
  )
}

# CIS 4.2: Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389
deny[msg] if {
  resource := input.planned_values.root_module.resources[_]
  resource.type == "aws_security_group"
  ingress := resource.values.ingress[_]
  ingress.from_port <= 3389
  ingress.to_port >= 3389
  "0.0.0.0/0" in ingress.cidr_blocks
  msg := sprintf(
    "[CIS 4.2] Security group %s allows RDP from 0.0.0.0/0",
    [resource.name]
  )
}
```

Pre-Built Compliance Packs:
Rather than writing all compliance rules from scratch, leverage pre-built policy libraries:
Compliance isn't just point-in-time validation. Run compliance scans continuously: in PRs, post-deployment, and on schedules. Drift from compliant state should trigger alerts, allowing teams to address issues before audits.
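A scheduled scan can be sketched as a GitHub Actions workflow; the names, paths, and cron cadence here are assumptions to adapt to your repository:

```yaml
# Sketch of a scheduled compliance scan
name: Compliance Scan

on:
  schedule:
    - cron: '0 6 * * *'   # daily at 06:00 UTC
  workflow_dispatch: {}    # allow manual runs

jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Checkov compliance scan
        uses: bridgecrewio/checkov-action@v12
        with:
          directory: infrastructure/
          framework: terraform
          soft_fail: false   # fail the run so alerts fire on drift
```

Wiring the failure into your alerting channel turns the scan from a report into an actionable signal.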
Effective testing requires well-crafted test fixtures—minimal configurations that test specific behaviors—and sometimes mocking to avoid cloud costs for certain test types.
Test Fixture Best Practices:
```hcl
# Test Fixture: S3 Bucket with Encryption
# Purpose: Verify S3 module correctly enables encryption

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

variable "test_id" {
  description = "Unique identifier for this test run"
  type        = string
}

module "s3_bucket" {
  source = "../../../modules/s3"

  bucket_name = "test-encryption-${var.test_id}"

  # Configuration under test
  enable_encryption    = true
  encryption_algorithm = "aws:kms"

  tags = {
    Environment = "test"
    TestId      = var.test_id
    Purpose     = "integration-test"
  }
}

output "bucket_arn" {
  value = module.s3_bucket.bucket_arn
}

output "encryption_configuration" {
  value = module.s3_bucket.encryption_configuration
}
```

Mocking for Cost-Free Testing:
For tests that don't need real cloud resources, mocking can dramatically reduce costs:
```go
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

// Test using LocalStack for AWS mocking
func TestWithLocalStack(t *testing.T) {
	// Start LocalStack container (or connect to running instance)
	// LocalStack mocks AWS APIs locally at no cost
	opts := &terraform.Options{
		TerraformDir: "../modules/s3",
		Vars: map[string]interface{}{
			"bucket_name": "test-bucket",
		},
		EnvVars: map[string]string{
			// Point Terraform at LocalStack
			"AWS_ACCESS_KEY_ID":     "test",
			"AWS_SECRET_ACCESS_KEY": "test",
			"AWS_DEFAULT_REGION":    "us-east-1",
		},
		// Override endpoint URLs for LocalStack
		BackendConfig: map[string]interface{}{
			"endpoint": "http://localhost:4566",
		},
	}

	defer terraform.Destroy(t, opts)
	terraform.InitAndApply(t, opts)

	// Validate outputs
	bucketArn := terraform.Output(t, opts, "bucket_arn")
	assert.Contains(t, bucketArn, "test-bucket")
}

// Test plan output without any cloud calls
func TestPlanOnly(t *testing.T) {
	// Tests that only run 'plan' don't need cloud credentials
	// (if using -backend=false and mocked providers)
	opts := &terraform.Options{
		TerraformDir: "../modules/vpc",
		Vars: map[string]interface{}{
			"vpc_cidr":    "10.0.0.0/16",
			"environment": "test",
		},
	}

	// InitAndPlan doesn't apply - no resources created
	plan := terraform.InitAndPlan(t, opts)

	// Validate plan contains expected resources
	assert.Contains(t, plan, "aws_vpc.main")
	assert.Contains(t, plan, "will be created")
}
```

LocalStack and similar tools mock AWS APIs but don't perfectly replicate behavior. Use them for basic validation but run critical tests against real cloud resources. Some services and features have limited LocalStack support.
A well-organized test suite integrates smoothly into CI/CD pipelines, providing fast feedback on PRs while running comprehensive tests before production deployments.
Test Suite Organization:
```text
infrastructure/
├── modules/
│   ├── vpc/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   └── tests/
│   │       ├── vpc_test.tftest.hcl   # Native Terraform tests
│   │       └── contract_test.tf      # Contract tests
│   └── rds/
│       └── tests/
├── environments/
│   ├── production/
│   └── staging/
├── test/
│   ├── fixtures/                     # Test-only configurations
│   │   ├── vpc_connectivity/
│   │   └── s3_encryption/
│   ├── integration/                  # Terratest integration tests
│   │   ├── vpc_test.go
│   │   ├── rds_test.go
│   │   └── e2e_test.go
│   └── go.mod
├── policies/                         # OPA/Conftest policies
│   ├── security.rego
│   ├── compliance/
│   │   └── cis_aws.rego
│   └── tags.rego
├── .tflint.hcl
├── .tfsec.yaml
└── .github/
    └── workflows/
        ├── pr-validation.yaml        # Fast checks on every PR
        ├── integration-tests.yaml    # Integration tests before merge
        └── compliance-scan.yaml      # Scheduled compliance scanning
```

Tiered CI Pipeline:
Structure CI to run appropriate tests at appropriate times:
| Trigger | Tests Run | Duration | Purpose |
|---|---|---|---|
| Every commit | Linting, formatting, syntax validation | < 1 minute | Immediate feedback on basic errors |
| Every PR | Security scans, policy checks, plan generation | 2-5 minutes | Catch security/policy issues before review |
| Before merge | Contract tests, smoke tests | 5-10 minutes | Validate modules work correctly |
| Post-merge (staging) | Integration tests against staging | 15-30 minutes | Validate actual infrastructure behavior |
| Pre-production gate | Full integration + compliance suite | 30-60 minutes | Complete validation before production |
| Scheduled (daily/weekly) | Compliance scans, drift detection | Varies | Continuous compliance monitoring |
Integration tests can often run in parallel if they use isolated resources (different VPC CIDRs, unique naming). This dramatically reduces total test duration. Use test frameworks' parallel capabilities and ensure test isolation.
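One way to guarantee that isolation is to derive CIDRs and name suffixes deterministically from the test name. A sketch in Python (the 10.100.x to 10.199.x carving scheme is an assumption, and with only 100 slots two test names can collide, so verify uniqueness across your suite):

```python
import hashlib

def isolation_params(test_name: str) -> dict:
    """Derive a stable, test-specific VPC CIDR and name suffix."""
    digest_hex = hashlib.sha256(test_name.encode()).hexdigest()
    digest = int(digest_hex, 16)
    # Carve each test into its own /16 within 10.100.0.0 - 10.199.0.0
    second_octet = 100 + (digest % 100)
    return {
        "vpc_cidr": f"10.{second_octet}.0.0/16",
        "name_suffix": digest_hex[:8],  # stable suffix for resource names
    }

print(isolation_params("TestVpcModule"))
print(isolation_params("TestVpcConnectivity"))
```

Because the derivation is deterministic, a re-run of the same test reuses (and can clean up) the same address space, while different tests land in different /16 blocks.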
Automated testing transforms infrastructure development from hope-based deployment to evidence-based confidence. The key principles: catch issues at the lowest, cheapest level of the pyramid; shift checks left into commits and pull requests; encode organizational and compliance rules as policy; and reserve expensive integration and end-to-end tests for behavior that cannot be validated any other way.
What's Next:
With comprehensive testing in place, the final page covers Deployment Strategies—the patterns for safely rolling out infrastructure changes across environments, including progressive rollouts, canary deployments, and rollback procedures.
You now understand the infrastructure testing pyramid, from static analysis through integration testing to compliance validation. You're equipped to design testing strategies that catch issues before they reach production.