Caching enables DNS to scale, but caching introduces a fundamental problem: how long should cached data remain valid?
DNS records change. IP addresses get reassigned. Load balancers shift traffic. Servers fail and replacements come online. Organizations migrate to new infrastructure. Every cached record can become stale—pointing to the wrong server, or worse, to no server at all.
Yet if caches are invalidated too quickly, the benefits of caching evaporate. Short-lived caches increase query volume, latency, and infrastructure load. The Internet's DNS infrastructure cannot function if every record expires in seconds.
Time-to-Live (TTL) is the mechanism that balances these competing demands.
TTL is a field in every DNS record that specifies how long resolvers and clients may cache the record before it must be refreshed from an authoritative source. TTL is the lever that domain administrators use to tune the trade-off between freshness (how quickly changes propagate) and efficiency (how much caching reduces load and latency).
By the end of this page, you will understand how TTL works at the protocol level, how resolvers enforce TTL countdown, the trade-offs involved in TTL selection, common TTL strategies for different scenarios, and the operational implications of TTL decisions.
Time-to-Live is a 32-bit unsigned integer field present in every DNS resource record. It specifies the maximum number of seconds that the record may be cached by resolvers and clients before requiring revalidation.
TTL in the DNS Protocol:
Every Resource Record (RR) in a DNS response includes five components:
- NAME — the domain name the record describes
- TYPE — the record type (A, AAAA, NS, MX, and so on)
- CLASS — almost always IN (Internet)
- TTL — the caching lifetime in seconds
- RDATA — the record data itself (preceded on the wire by its length, RDLENGTH)
The TTL is a 32-bit unsigned integer that immediately follows the TYPE and CLASS fields in the standard RR wire format defined in RFC 1035. In principle it could range from 0 to 4,294,967,295 seconds (~136 years), although RFC 2181 caps usable values at 2,147,483,647 seconds, and practical values are far smaller.
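As a sketch of where the TTL sits on the wire, the following Python fragment (hypothetical bytes; it assumes the record's owner name is a 2-byte compression pointer, the common case in answer records) unpacks TYPE, CLASS, TTL, and RDLENGTH from a raw resource record:

```python
import struct

def parse_rr_fixed_fields(message: bytes, offset: int):
    """Return (TYPE, CLASS, TTL, RDLENGTH) for the record at `offset`.

    Assumes the owner name is a 2-byte compression pointer, so the
    fixed fields start 2 bytes past `offset`.
    """
    # !HHIH = 16-bit TYPE, 16-bit CLASS, 32-bit unsigned TTL, 16-bit RDLENGTH
    return struct.unpack_from("!HHIH", message, offset + 2)

# Hypothetical wire bytes: name pointer, TYPE=A(1), CLASS=IN(1), TTL=3600,
# RDLENGTH=4, RDATA=93.184.216.34
rr = bytes.fromhex("c00c") + struct.pack("!HHIH", 1, 1, 3600, 4) + bytes([93, 184, 216, 34])
print(parse_rr_fixed_fields(rr, 0))   # -> (1, 1, 3600, 4)
```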
TTL Countdown Mechanism:
When a resolver receives a DNS response, it stores the record along with its TTL value. The resolver then decrements the TTL over time as the record ages in cache:
```
Received: www.example.com → 93.184.216.34, TTL=3600

Time    0:  TTL remaining = 3600 seconds
Time  600:  TTL remaining = 3000 seconds
Time 1800:  TTL remaining = 1800 seconds
Time 3600:  TTL remaining = 0 → record expires, must be refreshed
```
```
; DNS Response for www.example.com
; Displayed in zone file format

;; QUESTION SECTION:
; www.example.com.              IN  A

;; ANSWER SECTION:
www.example.com.        3600    IN  A   93.184.216.34
                        ^^^^
                        TTL: 3600 seconds (1 hour)

;; AUTHORITY SECTION:
example.com.            86400   IN  NS  ns1.example.com.
example.com.            86400   IN  NS  ns2.example.com.
                        ^^^^^
                        TTL: 86400 seconds (24 hours)

;; ADDITIONAL SECTION:
ns1.example.com.        86400   IN  A   199.43.135.53
ns2.example.com.        86400   IN  A   199.43.133.53
```

Each DNS record in a response has its own independent TTL. A single response may contain records with different TTLs: the A record might have TTL=300, while the associated NS records have TTL=86400. Resolvers track each record's expiration independently.
As DNS responses flow through multiple caching layers, TTL values are progressively decremented to reflect time spent in transit and in each cache. This ensures that downstream caches never hold records longer than the original TTL permits.
The TTL Countdown Chain:
Consider a query path: Client → Local Resolver → ISP Resolver → Authoritative Server
The key principle: TTL always reflects remaining validity, not original validity.
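A minimal sketch of this principle (Python; class and field names are illustrative): the cache stores an absolute expiry time and always advertises the remaining TTL downstream, never the original.

```python
import time

class CachedRecord:
    """Illustrative resolver cache entry that enforces TTL countdown."""

    def __init__(self, name: str, rdata: str, ttl: int):
        self.name = name
        self.rdata = rdata
        self.expires_at = time.monotonic() + ttl

    def remaining_ttl(self) -> int:
        """TTL to advertise downstream: remaining validity, not original."""
        return max(0, int(self.expires_at - time.monotonic()))

    def is_expired(self) -> bool:
        return self.remaining_ttl() == 0

# Received from upstream with TTL=3600; 600 seconds later this cache would
# answer downstream queries with TTL ≈ 3000, never 3600.
entry = CachedRecord("www.example.com", "93.184.216.34", 3600)
print(entry.remaining_ttl())   # ≈ 3600 immediately after caching
```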
Why TTL Countdown Matters:
If resolvers returned the original TTL without countdown, downstream caches could hold records far longer than intended. For example, with an original TTL of 3600: the ISP resolver caches the record for 3600 seconds, then hands it to the local resolver still marked TTL=3600, which caches it for another 3600 seconds before passing it to the client marked TTL=3600 once more. The record could be served for nearly 10,800 seconds, three times longer than authorized. TTL countdown prevents this runaway caching.
Minimum TTL Policies:
Some resolvers enforce minimum TTL floors, refusing to cache records below a threshold (e.g., 30 seconds). This protects the resolver against the query volume and cache churn that very low (or zero) TTLs would otherwise generate.
However, enforcing TTL floors delays propagation of legitimate changes, creating operational trade-offs.
When organizations set very low TTLs for failover (e.g., TTL=30), resolvers with minimum TTL floors (e.g., 300 seconds) may override this, causing delayed failover. Network engineers must understand their resolver stack's TTL behavior when designing high-availability DNS strategies.
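The effect of such a floor can be sketched in a single function (the floor and cap values below are illustrative, not taken from any particular resolver):

```python
def effective_ttl(received_ttl: int, floor: int = 300, cap: int = 86400) -> int:
    """Clamp the TTL a caching resolver actually uses.

    Illustrative policy: never cache for less than `floor` seconds
    (overriding very low TTLs) nor for longer than `cap` seconds.
    """
    return min(max(received_ttl, floor), cap)

print(effective_ttl(30))      # -> 300: a TTL=30 failover record is still held for 5 minutes
print(effective_ttl(604800))  # -> 86400: a week-long TTL is capped at one day
```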
TTL selection requires balancing two competing priorities that cannot be simultaneously optimized:
Freshness Priority (Low TTL): changes and failover propagate quickly, but query volume, latency, and load on authoritative infrastructure all increase.
Efficiency Priority (High TTL): high cache hit rates keep latency and infrastructure load low, but changes and failover propagate slowly.
The trade-off is inherent and inescapable. Every TTL selection represents a specific position on this spectrum.
| TTL Duration | Change Propagation | Cache Hit Rate | Failover Speed | Infrastructure Load | Best For |
|---|---|---|---|---|---|
| 30-60 seconds | Very fast | Low | Near-instant | Very high | Active failover, traffic steering |
| 300 seconds (5 min) | Fast | Medium | Fast | High | Moderate change frequency |
| 1800 seconds (30 min) | Moderate | Good | Moderate | Moderate | Stable services, reasonable agility |
| 3600 seconds (1 hour) | Slow | High | Slow | Low | Stable infrastructure, cost-sensitive |
| 86400 seconds (24 hours) | Very slow | Very high | Very slow | Very low | Rarely-changing records (NS, MX) |
Real-World TTL Selection Considerations:
1. How often does this record change?
2. How critical is rapid failover?
3. What's the cost of stale data?
4. What's the query volume?
5. What's the resolver behavior?
Before making DNS changes, temporarily lower TTL several days in advance. If normal TTL is 3600, reduce to 300 a few days before the change. After the change propagates (observed in monitoring), restore higher TTL. This minimizes exposure to stale records during transitions.
Experienced DNS administrators apply established TTL patterns based on record purpose and operational requirements. Understanding these patterns helps make informed configuration decisions.
Pattern 1: Differentiated TTL by Record Type
Different record types typically have different change frequencies, warranting different TTLs:
| Record Type | Typical TTL | Rationale |
|---|---|---|
| NS records | 86400 (24h) - 172800 (48h) | Rarely change; high caching value |
| MX records | 3600 (1h) - 86400 (24h) | Infrequent mail server changes |
| A/AAAA records | 300 (5min) - 3600 (1h) | Variable; depends on infrastructure |
| CNAME records | Match target record | Should align with target's TTL |
| TXT records | 300 (5min) - 3600 (1h) | SPF/DKIM may need updates |
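As a rough sketch of Pattern 1 in code (Python; the defaults simply mirror the table above and are assumptions, not universal recommendations), a zone generator might apply per-type default TTLs like this:

```python
# Per-record-type default TTLs mirroring the table above (illustrative values).
DEFAULT_TTLS = {
    "NS": 86400,    # rarely change; high caching value
    "MX": 3600,     # infrequent mail server changes
    "A": 300,       # depends on infrastructure volatility
    "AAAA": 300,
    "TXT": 300,     # SPF/DKIM may need updates
}

def zone_line(name, rtype, rdata, ttl=None):
    """Render one zone-file line, applying the per-type default TTL."""
    if ttl is None:
        ttl = DEFAULT_TTLS.get(rtype, 3600)
    return f"{name} {ttl} IN {rtype} {rdata}"

print(zone_line("example.com.", "NS", "ns1.example.com."))
print(zone_line("www.example.com.", "A", "93.184.216.34"))
```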
Pattern 2: TTL Based on Infrastructure Stability
Pattern 3: TTL Reduction Before Changes
The most common operational TTL pattern is the pre-change reduction sequence:
1. Well before the change (at least one full original-TTL period in advance), lower the TTL (e.g., from 3600 to 300).
2. Wait for the original TTL to expire everywhere, so all caches now hold the record with the lowered TTL.
3. Make the DNS change.
4. Verify propagation from multiple resolvers.
5. Restore the normal, higher TTL.
Why wait longer than the original TTL?
Resolvers may have cached the record at any point during the previous TTL period. If the original TTL was 24 hours and you lowered it 12 hours ago, resolvers who cached 12 hours ago still have 12 hours remaining on their cached copy with the old TTL value.
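The timing arithmetic can be sketched directly (Python; the timestamps are illustrative): the change is only safe once one full original-TTL period has elapsed since the TTL was lowered.

```python
from datetime import datetime, timedelta

old_ttl = timedelta(hours=24)   # original TTL on the record
new_ttl = timedelta(minutes=5)  # lowered TTL for the migration window

ttl_lowered_at = datetime(2024, 1, 15, 9, 0)   # when the lowered TTL was published
# Resolvers that cached just before the lowering still hold the old 24h TTL,
# so the earliest point at which all caches honor the 5-minute TTL is:
safe_to_change_at = ttl_lowered_at + old_ttl

change_made_at = datetime(2024, 1, 16, 10, 0)
# After the change, stale answers can persist for at most one new-TTL period.
fully_propagated_by = change_made_at + new_ttl

print(safe_to_change_at)      # 2024-01-16 09:00 — earlier changes risk up to 24h of staleness
print(fully_propagated_by)    # 2024-01-16 10:05
```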
Pattern 4: Anycast and CDN TTLs
CDN providers often recommend or require very low TTLs (30-60 seconds) because they perform geographic traffic steering and health-based routing. The CDN's edge servers change constantly, and client connections need to follow the steering.
However, this creates high query volumes that CDN providers absorb as part of their service infrastructure.
Many registrars automatically set high TTLs on apex domain records (example.com itself, as opposed to www.example.com). When migrating apex domains, verify and lower these TTLs in advance—they're often overlooked during change planning.
DNS caching doesn't just apply to successful responses—it also applies to negative responses indicating that a domain or record type doesn't exist. Negative caching prevents repeated queries for non-existent resources and is governed by the SOA MINIMUM field.
Types of Negative Responses:
NXDOMAIN — The queried domain name does not exist
Example: nonexistent.example.com returns NXDOMAIN

NODATA — The domain exists but has no records of the requested type
Example: example.com exists but has no AAAA record

How Negative TTL Works (RFC 2308):
Negative responses include the SOA record for the zone in the authority section. The negative cache TTL is the minimum of the SOA record's own TTL and the SOA MINIMUM field.
This ensures negative responses don't persist longer than the zone administrator intends.
```
; SOA Record Structure
; example.com. IN SOA ns1.example.com. admin.example.com. (
;   serial    ; Zone serial number
;   refresh   ; Refresh interval for secondaries
;   retry     ; Retry interval after refresh failure
;   expire    ; Time until zone is no longer authoritative
;   minimum   ; Negative cache TTL (RFC 2308)
; )

example.com. 86400 IN SOA ns1.example.com. admin.example.com. (
    2024011501 ; Serial: YYYYMMDDNN
    7200       ; Refresh: 2 hours
    3600       ; Retry: 1 hour
    1209600    ; Expire: 2 weeks
    300        ; Minimum: 5 min (negative TTL)
)

; When a resolver queries for nonexistent.example.com:
; - Receives NXDOMAIN response
; - Authority section contains this SOA record
; - Negative TTL = min(86400, 300) = 300 seconds
; - Resolver caches "nonexistent.example.com does not exist" for 300 seconds
```

Why Negative Caching Matters:
Typo protection — Users mistype domains constantly. Without negative caching, each typo generates repeated queries to authoritative servers.
Attack mitigation — Attackers may flood queries for random, non-existent subdomains (NXDOMAIN attacks). Negative caching limits the damage.
Behavioral optimization — Some applications check multiple domain patterns (e.g., _dmarc.example.com, _domainkey.example.com). Negative caching prevents repeated misses.
Historical Note on MINIMUM Field:
Before RFC 2308 (1998), the SOA MINIMUM field specified the default TTL for all records in the zone. RFC 2308 redefined it specifically as the negative cache TTL. Some legacy documentation and configurations may reflect the older interpretation—modern DNS universally follows the RFC 2308 semantic.
When creating a new subdomain, remember that resolvers may have cached an NXDOMAIN response for that name. If your SOA MINIMUM is 3600 seconds, resolvers who recently queried the (previously) non-existent name will continue returning NXDOMAIN for up to an hour after you create the record. Consider a lower SOA MINIMUM if you frequently create new subdomains.
A TTL of zero has special meaning in DNS: the record should not be cached and must be re-queried for every use. While technically valid, TTL=0 is controversial and its behavior varies across implementations.
What TTL=0 Means: per RFC 1035, a zero TTL indicates the record may be used only for the transaction in progress and should not be cached for subsequent queries.
When TTL=0 Might Be Used: scenarios where every lookup should reach the authoritative source, such as per-request traffic steering, records that change continuously, or short-lived testing.
The Problem with TTL=0 in Practice: implementations disagree on how to handle it. Some resolvers honor it, others cache the record briefly anyway or apply their minimum TTL floor, and query load on authoritative servers rises sharply, so actual behavior is unpredictable.
Alternative Approaches to TTL=0:
Rather than using TTL=0, consider these alternatives:
| Goal | Instead of TTL=0 | Recommended Approach |
|---|---|---|
| Fast failover | TTL=0 | TTL=30-60 with health monitoring |
| Per-request steering | TTL=0 | Anycast + edge logic (CDN/load balancer) |
| Always-fresh data | TTL=0 | TTL=60 + accept 60s staleness window |
| Testing/development | TTL=0 | Flush local cache between tests |
Browser and OS TTL Floors:
Even if your authoritative server returns TTL=0, clients may not honor it: browsers typically cache lookups internally for a fixed period regardless of TTL, operating system stub resolvers may apply their own minimums, and upstream recursive resolvers may enforce TTL floors.
The only truly reliable way to force per-query resolution is at the application level, bypassing system resolver caches entirely.
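For example, an application can issue its own query for every lookup rather than relying on the OS stub resolver. A minimal sketch, assuming the third-party dnspython package (the resolver address and hostname are illustrative):

```python
# Per-request resolution at the application layer (pip install dnspython).
import dns.resolver

def resolve_fresh(hostname: str, nameserver: str = "1.1.1.1") -> str:
    """Query the given resolver directly; no local cache is consulted."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    answer = resolver.resolve(hostname, "A")
    # answer.rrset.ttl holds the remaining TTL reported by that resolver.
    return answer[0].address

print(resolve_fresh("www.example.com"))
```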
In practice, 30 seconds is the lowest TTL that reliably works across resolver implementations while still providing rapid propagation. Lower values increasingly face override behavior from caching layers. Design failover and steering systems around 30-60 second propagation, not instantaneous updates.
Understanding TTL behavior in practice requires observing actual DNS responses across different resolvers and time periods. Several tools and techniques help network engineers verify TTL configuration and propagation.
Command-Line TTL Observation:
```
# Using dig to observe TTL (Linux/macOS)
# The +noall +answer options show only the answer section

$ dig +noall +answer www.example.com
www.example.com.    3476    IN  A   93.184.216.34
                    ^^^^
                    Remaining TTL: 3476 seconds

# Query again after 60 seconds
$ dig +noall +answer www.example.com
www.example.com.    3416    IN  A   93.184.216.34
                    ^^^^
                    TTL decreased by ~60 seconds

# Query different resolvers to compare cache state
$ dig @8.8.8.8 +noall +answer www.example.com    # Google
www.example.com.    2891    IN  A   93.184.216.34

$ dig @1.1.1.1 +noall +answer www.example.com    # Cloudflare
www.example.com.    3124    IN  A   93.184.216.34

# Note: different resolvers show different remaining TTLs
# because they cached the record at different times

# Query the authoritative server directly (shows original TTL)
$ dig @ns1.example.com +noall +answer www.example.com
www.example.com.    3600    IN  A   93.184.216.34
                    ^^^^
                    Original TTL: 3600 seconds
```

Windows PowerShell TTL Observation:
```
# Using Resolve-DnsName to observe TTL
Resolve-DnsName -Name www.example.com -Type A

# Output includes TTL field
Name                 Type  TTL   Section  IPAddress
----                 ----  ---   -------  ---------
www.example.com      A     3241  Answer   93.184.216.34

# Query a specific server
Resolve-DnsName -Name www.example.com -Type A -Server 8.8.8.8
```
Multi-Location TTL Propagation Testing:
Online propagation-checking tools query your records from resolver vantage points around the world, letting you compare answers and remaining TTL values across regions.
These tools are essential for verifying that DNS changes have propagated globally after TTL expiration.
When planning DNS changes: (1) Document current TTL values before any changes, (2) Query from multiple resolver vantage points to observe cache state diversity, (3) After changes, continue monitoring until observed TTL values from all locations drop below the pre-change TTL—indicating fresh resolution.
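Steps (2) and (3) can be automated with a small script; the sketch below again assumes dnspython, and the resolver list is illustrative:

```python
import dns.resolver

PUBLIC_RESOLVERS = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1", "Quad9": "9.9.9.9"}

def report_ttls(hostname: str) -> None:
    """Print the remaining TTL and answers seen by each public resolver."""
    for label, ip in PUBLIC_RESOLVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]
        answer = resolver.resolve(hostname, "A")
        addresses = ", ".join(rdata.address for rdata in answer)
        print(f"{label:<10} TTL={answer.rrset.ttl:<6} {addresses}")

report_ttls("www.example.com")
```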
Time-to-Live is the control mechanism that governs the trade-off between DNS freshness and efficiency. Understanding TTL behavior is essential for DNS operations, change management, and troubleshooting.
Key takeaways from this page:
- TTL is a per-record field, in seconds, that caps how long resolvers and clients may cache a record.
- Resolvers count TTL down and pass only the remaining validity to downstream caches.
- TTL selection is a trade-off between freshness (low TTL) and efficiency (high TTL); no value optimizes both.
- Lower TTLs well in advance of planned changes, then restore them afterward.
- Negative responses are cached too, governed by min(SOA TTL, SOA MINIMUM) per RFC 2308.
- TTL=0 is unreliable in practice; design for 30-60 second propagation rather than instantaneous updates.
What's next:
We've covered individual record TTL behavior. Next, we'll examine DNS Caching Levels—the complete hierarchy of caches from browser to ISP to enterprise, understanding how each layer operates and how they interact to create the overall caching behavior that clients experience.
You now understand how TTL governs DNS cache behavior, the trade-offs involved in TTL selection, and operational patterns for managing TTL during changes.