In the previous page, we established what TLS does and how it works. But a critical architectural question remains: Where in your infrastructure should TLS connections terminate?
This seemingly simple question has profound implications for security, performance, operational complexity, and compliance. The point where encrypted traffic becomes unencrypted—the TLS termination point—is one of the most consequential decisions in system architecture.
Get it wrong, and you might expose sensitive data within your network, create performance bottlenecks, complicate debugging, or violate compliance requirements. Get it right, and you build a system that's secure, performant, and operationally manageable.
This page examines TLS termination strategies in depth: where termination can happen, what each choice implies, and how to make the right decision for your specific context.
By the end of this page, you will understand the different TLS termination strategies, their security implications, performance characteristics, and operational considerations. You'll be able to choose the appropriate termination point for different architectures—from simple web applications to complex microservices deployments.
TLS termination is the point in your infrastructure where an encrypted TLS connection ends and the underlying plaintext protocol (HTTP, gRPC, database wire protocol) becomes accessible.
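As a concrete sketch using Python's standard `ssl` module (file names are placeholders), the termination point is simply where a listening socket gets wrapped with the server's private key; everything read after the handshake is plaintext:

```python
import ssl

def termination_context() -> ssl.SSLContext:
    """Server-side context for the TLS termination point.

    The caller loads the private key with load_cert_chain(); whoever
    runs this code can read every request in plaintext afterwards.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    return ctx

# Hypothetical usage at the termination point:
#   ctx = termination_context()
#   ctx.load_cert_chain("server.crt", "server.key")  # the security-critical key
#   tls_sock = ctx.wrap_socket(listener, server_side=True)
#   request = tls_sock.recv(4096)  # plaintext from here on
```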
The termination point must hold the server's private key, perform the TLS handshake, and decrypt traffic before handing it to the next hop. This makes the termination point security-critical. Compromise of this component means an attacker can read all traffic in plaintext and, with the stolen private key, impersonate your service to clients.
The fundamental trade-off:
Terminating TLS earlier (closer to the client) simplifies internal architecture but exposes plaintext data to more internal components. Terminating later (closer to the application) keeps data encrypted longer but increases complexity and may impact performance.
The question marks in the diagram represent the key decision points: Should traffic between each component be encrypted or unencrypted? Each choice creates a different security posture.
Common termination patterns:
| Pattern | Termination Point | Encryption Scope |
|---|---|---|
| Edge Termination | CDN/Load Balancer | Internet only |
| Gateway Termination | API Gateway | Edge to gateway |
| Service Termination | Application Service | Edge to service |
| End-to-End | Application + Database | Edge to storage |
Edge termination is the most common pattern: TLS terminates at the edge of your network—typically a CDN, WAF, or load balancer—and internal traffic flows unencrypted or with separate encryption.
How it works: the client completes its TLS handshake with the edge component (CDN, WAF, or load balancer), which holds the certificate and private key, decrypts the traffic, and forwards plaintext requests to internal services over the private network.
Traditional perimeter security assumed internal networks were safe. Modern attacks (supply chain compromise, stolen credentials, insider threats) and cloud environments (shared infrastructure, ephemeral workloads) have proven this assumption dangerous. Zero trust architecture rejects implicit network trust—which means edge termination alone is often insufficient for sensitive systems.
When edge termination is appropriate: low-sensitivity workloads, tightly isolated private networks, legacy backends that cannot terminate TLS themselves, and teams whose operational maturity does not yet support internal certificate management.
Compensating controls for edge termination:
If you must use edge termination, implement additional protections: strict network segmentation around the plaintext segment (private subnets, security groups, network ACLs), least-privilege access to hosts that see decrypted traffic, flow logging and intrusion detection on internal links, and application-layer encryption of especially sensitive fields.
Gateway termination pushes the termination point deeper into the architecture—to an API gateway or service mesh sidecar. This pattern maintains encryption beyond the edge while still centralizing TLS management.
Re-encryption (TLS Bridging):
A hybrid approach where the edge terminates the client connection but establishes a new TLS connection to internal services:
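In code, bridging means maintaining two independent TLS contexts: one that terminates the client's session, and one that opens a fresh, fully validated session to the internal service. A minimal sketch with Python's `ssl` module (certificate paths and the internal-CA file are placeholders):

```python
import ssl
from typing import Optional

def downstream_context(certfile: str, keyfile: str) -> ssl.SSLContext:
    """Terminates the client's TLS session; the proxy holds this key."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)
    return ctx

def upstream_context(internal_ca: Optional[str] = None) -> ssl.SSLContext:
    """Opens a brand-new TLS session to the internal service.

    The backend certificate is validated (hostname included), typically
    against an internal CA rather than the public trust store.
    """
    return ssl.create_default_context(cafile=internal_ca)
```

Note that the two sessions share nothing: keys, cipher suites, and even TLS versions can differ between the client-facing and backend-facing legs.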
TLS passthrough (Layer 4 routing):
Alternatively, the edge can route TLS traffic without terminating it, passing the encrypted connection directly to the destination:
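At its core, Layer 4 passthrough is just a byte relay: the proxy copies encrypted records verbatim and never holds a key, so routing must rely on unencrypted metadata such as SNI. A minimal Python sketch of the relay loop:

```python
import socket

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Forward bytes verbatim until EOF.

    Because the proxy never terminates TLS, these bytes stay opaque to it;
    it cannot inspect, modify, or log the application payload.
    """
    while True:
        chunk = src.recv(4096)
        if not chunk:
            break
        dst.sendall(chunk)
```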
```nginx
# NGINX configuration for TLS passthrough (Layer 4)
stream {
    # Upstream backend servers (TLS will pass through)
    upstream backend_servers {
        server backend1.internal:443;
        server backend2.internal:443;
    }

    # Map SNI to determine routing
    map $ssl_preread_server_name $backend {
        api.example.com api_backend;
        web.example.com web_backend;
        default         backend_servers;
    }

    server {
        listen 443;

        # Enable SNI reading without terminating TLS
        ssl_preread on;

        # Pass through to backend (no decryption)
        proxy_pass $backend;

        # Preserve client IP (requires PROXY protocol support on backend)
        proxy_protocol on;
    }
}
```

When using TLS bridging (terminate and re-encrypt), you lose the original client certificate information. If your backend needs client certificate data (for mTLS authentication), you must extract and forward it via headers (e.g., `X-Client-Cert`, `X-Client-DN`). Ensure backends validate that these headers come only from trusted proxies.
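That proxy-trust check matters in code, not just in policy. A hypothetical backend-side guard (the subnet and header name are assumptions for illustration) that accepts a forwarded client-certificate DN only when the direct peer is a known proxy:

```python
from ipaddress import ip_address, ip_network
from typing import Optional

# Hypothetical subnet where the edge/gateway proxies live.
TRUSTED_PROXIES = [ip_network("10.0.0.0/24")]

def forwarded_client_dn(headers: dict, peer_ip: str) -> Optional[str]:
    """Trust X-Client-DN only when the direct peer is a trusted proxy.

    Any other peer could have set the header itself to spoof an identity.
    """
    if not any(ip_address(peer_ip) in net for net in TRUSTED_PROXIES):
        return None
    return headers.get("X-Client-DN")
```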
Service termination keeps TLS encryption all the way to the application service itself. Every service handles its own TLS termination, maintaining encryption throughout the internal network.
End-to-end encryption to services:
Making service termination manageable:
The operational challenges of service termination are real but solvable with proper infrastructure:
1. Service Meshes (Istio, Linkerd, Consul Connect):
Service meshes inject TLS handling into sidecar proxies, removing it from application code entirely. The application communicates over localhost HTTP; the sidecar handles all TLS operations.
┌─────────────────────────────────────────────┐
│ Pod │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Application │ │ Sidecar Proxy │ │
│ │ (HTTP only) │◄──►│ (mTLS handler) │ │
│ └─────────────────┘ └─────────────────┘ │
│ │ │ │
│ localhost:8080 :15001 (mTLS) │
└─────────────────────────────────────────────┘
▲
│ mTLS to other services
▼
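The sidecar's "mTLS handler" role boils down to one extra requirement beyond plain TLS: it must also demand a certificate from the peer. A sketch with Python's `ssl` module (certificate paths are placeholders; a real mesh provisions and rotates these automatically):

```python
import ssl
from typing import Optional

def mtls_server_context(cafile: Optional[str] = None,
                        certfile: Optional[str] = None,
                        keyfile: Optional[str] = None) -> ssl.SSLContext:
    """Mutual TLS: present our workload certificate AND require the
    peer to present one signed by the mesh CA."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)    # our workload identity
    if cafile:
        ctx.load_verify_locations(cafile=cafile)  # trust only the mesh CA
    ctx.verify_mode = ssl.CERT_REQUIRED           # the "mutual" part
    return ctx
```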
2. Automated Certificate Issuance:
Tools like cert-manager (Kubernetes) or HashiCorp Vault automatically issue short-lived certificates to services, eliminating manual certificate management.
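Short-lived certificates only work if renewal is automatic and early. A common policy, similar in spirit to cert-manager's default of renewing when one third of the lifetime remains, can be sketched as:

```python
from datetime import datetime, timedelta, timezone

def should_renew(not_before: datetime, not_after: datetime,
                 now: datetime, renew_fraction: float = 2 / 3) -> bool:
    """Renew once `renew_fraction` of the certificate lifetime has elapsed,
    leaving the remaining window as a buffer for retries and rollout."""
    lifetime = not_after - not_before
    return now >= not_before + lifetime * renew_fraction
```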
3. Application Libraries:
Libraries like gRPC provide built-in TLS support, letting the application load certificates at startup with minimal code.
Service meshes solve mTLS complexity but introduce their own: additional sidecars consume resources, add latency (though minimal), and create new failure modes. For smaller deployments, the overhead may not be justified. For large microservices architectures, meshes are often the most practical path to universal encryption.
Cloud providers offer managed load balancers and services with built-in TLS termination. Understanding how these work—and their security implications—is essential for cloud architects.
| Provider | Service | Termination Type | Key Storage |
|---|---|---|---|
| AWS | Application Load Balancer (ALB) | Edge termination | AWS Certificate Manager (ACM) |
| AWS | Network Load Balancer (NLB) | Passthrough or TLS | ACM or upload custom |
| AWS | CloudFront | Edge (CDN) termination | ACM (free certs for CF) |
| GCP | Global HTTP(S) Load Balancer | Edge termination | Google-managed or custom |
| GCP | SSL Proxy Load Balancer | TLS termination (L4) | Google-managed or custom |
| Azure | Application Gateway | Edge termination | Key Vault or upload |
| Azure | Front Door | Edge (CDN) termination | Managed or custom |
AWS-specific considerations:
ALB to target encryption: ALB supports re-encrypting traffic to EC2/ECS targets using separate certificates. Configure target groups for HTTPS. ALB validates target certificates unless you disable verification (not recommended).
NLB TLS: Network Load Balancer can terminate TLS (since 2019) or pass through TCP. For passthrough, use TCP listeners; targets see the original TLS connection.
ACM private certificates: ACM can issue private CA certificates for internal services, enabling internal mTLS without managing your own PKI infrastructure.
GCP-specific considerations:
Google-managed certificates: Free, automatically renewed certificates for domains. Use external HTTP(S) load balancer for automatic provisioning.
Cloud Armor integration: WAF inspection happens at the load balancer after TLS termination. Cannot inspect encrypted passthrough traffic.
Internal load balancers: Support TLS termination for service-to-service traffic within VPC.
```hcl
# AWS ALB with TLS termination and re-encryption to targets
resource "aws_lb" "main" {
  name               = "secure-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = var.public_subnets
}

# HTTPS listener with ACM certificate
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.main.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06" # TLS 1.3 preferred
  certificate_arn   = aws_acm_certificate.main.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}

# Target group with HTTPS to instances (re-encryption)
resource "aws_lb_target_group" "app" {
  name        = "app-targets"
  port        = 443
  protocol    = "HTTPS" # Re-encrypt to targets
  vpc_id      = var.vpc_id
  target_type = "instance"

  health_check {
    port     = 443
    protocol = "HTTPS"
    path     = "/health"
  }
}

# ACM certificate (auto-validated via DNS)
resource "aws_acm_certificate" "main" {
  domain_name       = "api.example.com"
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}
```

When using cloud-managed load balancers, your private key resides in the cloud provider's infrastructure. For ACM and Google-managed certificates, you never have access to the private key. For custom certificates, you upload the key to the provider. Assess whether this meets your security requirements, especially for highly regulated workloads.
Kubernetes adds layers of abstraction that affect where TLS terminates. Understanding these patterns is essential for securing Kubernetes workloads.
`type: LoadBalancer` services with cloud-specific annotations for TLS. Termination happens outside the cluster.
```yaml
# Kubernetes Ingress with TLS termination
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    # Force HTTPS redirect
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # HSTS header
    nginx.ingress.kubernetes.io/configuration-snippet: |
      add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls-secret # Contains tls.crt and tls.key
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80 # Backend is HTTP (ingress handles TLS)
---
# Secret containing TLS certificate (created by cert-manager or manually)
apiVersion: v1
kind: Secret
metadata:
  name: api-tls-secret
  namespace: production
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded-certificate>
  tls.key: <base64-encoded-private-key>
```

cert-manager is the de facto standard for automated certificate management in Kubernetes. It can automatically provision certificates from Let's Encrypt, Vault, or internal CAs, and stores them as Kubernetes Secrets. Combined with Ingress or Gateway API, you get fully automated TLS with automatic renewal.
There's no universal best practice—the right termination strategy depends on your security requirements, architecture, compliance needs, and operational maturity.
| Factor | Edge Only | Gateway/Re-encrypt | End-to-End (mTLS) |
|---|---|---|---|
| Security Posture | Basic | Good | Excellent |
| Compliance Readiness | Limited | Most requirements | All requirements |
| Operational Complexity | Low | Medium | High (without mesh) |
| Performance Overhead | Minimal | Low | Moderate |
| Debugging Ease | Easy | Moderate | Difficult |
| Certificate Management | Simple | Moderate | Complex |
| Suitable For | Simple apps, legacy | API gateways, most apps | Microservices, high security |
Start with what you can operate reliably. Many organizations begin with edge termination and progressively add internal encryption as they mature. A working, well-understood security model beats a theoretically superior model that's misconfigured.
We've explored the critical architectural decision of where to terminate TLS in distributed systems. Let's consolidate the key takeaways:
What's next:
With termination strategies understood, the next page examines End-to-End Encryption—going beyond TLS to scenarios where even intermediate systems (like application servers) should not see plaintext data. We'll explore client-side encryption, envelope encryption, and architectures that minimize trust.
You now understand TLS termination strategies, from edge-only to full mTLS. You can evaluate where termination should happen based on security requirements, operational constraints, and architecture patterns. Next, we'll explore end-to-end encryption for scenarios requiring the highest levels of data protection.