Imagine synchronizing the clocks of every computer on Earth—billions of devices spanning continents, time zones, and network topologies. A naive approach might suggest having every device query a single authoritative clock, but this immediately fails: no single server could handle the load, network latency would vary wildly, and a single point of failure would catastrophically affect global time distribution.
NTP's solution is elegant: a hierarchical tree structure where time flows from a small number of highly authoritative sources through progressively larger tiers of servers, eventually reaching the billions of client devices that depend on accurate time. This architecture provides scalability, redundancy, and controlled accuracy degradation at each level.
By the end of this page, you will understand the stratum-based hierarchy that organizes NTP servers worldwide, how reference clocks at the top of the hierarchy provide authoritative time, the role of primary and secondary servers in time distribution, and how the hierarchical model achieves both scalability and reliability.
NTP organizes time servers into strata (singular: stratum)—numbered levels that indicate how many 'hops' a server is from an authoritative reference clock. The stratum number serves as a rough proxy for accuracy and authority: lower numbers indicate closer proximity to the authoritative time source.
Key principles of the stratum system:
- **Stratum indicates distance, not quality** — A stratum 2 server isn't necessarily more accurate than a stratum 3 server in absolute terms, but it is one hop closer to the reference.
- **Stratum increases with each hop** — If a server synchronizes to stratum N peers, its own stratum becomes N+1 (see the sketch after this list).
- **Stratum 16 means 'unsynchronized'** — The maximum valid stratum is 15; stratum 16 indicates the server cannot provide reliable time.
- **Stratum 0 is special** — Reserved for reference clocks themselves (GPS receivers, atomic standards), which aren't NTP servers but hardware time sources.
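To make the hop-count rule concrete, here is a minimal sketch of how a server derives the stratum it advertises. It simplifies RFC 5905 (which selects a single system peer rather than taking a minimum), and the function name is ours:

```python
def advertised_stratum(source_strata: list[int]) -> int:
    """Stratum a server advertises, given the strata of its sources.

    Simplified from RFC 5905: a server synchronized to a stratum-N
    source advertises N+1; with no valid source it is unsynchronized.
    """
    usable = [s for s in source_strata if 1 <= s <= 15]   # 16 is never usable
    if not usable:
        return 16                        # unsynchronized
    return min(usable) + 1               # one hop below the best source

assert advertised_stratum([2, 3, 3]) == 3   # follows its best (lowest) source
assert advertised_stratum([]) == 16         # no sources: unsynchronized
assert advertised_stratum([16, 16]) == 16   # unsynchronized peers don't help
```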
```
NTP Stratum Hierarchy
════════════════════════════════════════════════════════════════

 STRATUM 0 — Reference Clocks (hardware, not NTP servers)
 ┌─────────────┐     ┌─────────────┐     ┌─────────────┐
 │     GPS     │     │   Atomic    │     │    Radio    │
 │  Receiver   │     │    Clock    │     │    Clock    │
 │ (1-100 ns)  │     │   (<1 ns)   │     │   (~1 ms)   │
 └──────┬──────┘     └──────┬──────┘     └──────┬──────┘
        │ PPS/serial        │ direct            │ demodulated
        │ interface         │ connection        │ signal
        ▼                   ▼                   ▼
 STRATUM 1 — Primary Time Servers
   (directly connected to reference clocks)
   Examples: NIST, USNO, PTB, NPL, national labs
   Count:    ~500-1,000 public stratum 1 servers worldwide
   Accuracy: typically <10 microseconds from UTC
        │  NTP protocol
        ▼
 STRATUM 2 — Secondary Time Servers
   (synchronized to stratum 1 servers)
   Examples: ISP servers, pool.ntp.org, cloud providers
   Count:    ~10,000-50,000 public stratum 2 servers
   Accuracy: typically <100 microseconds from UTC
        │
        ▼
 STRATUM 3 — Tertiary Time Servers
   (enterprise, campus, regional servers)
   Examples: corporate data centers, university networks
   Accuracy: typically <1 millisecond from UTC
        │
        ▼
 STRATUM 4 AND BEYOND
   Most end-user devices: desktops, laptops, smartphones
   Accuracy degrades slightly at each level
   Maximum useful stratum: 15 (stratum 16 = unsynchronized)
```

Numbers at a glance:

| Stratum | Server Count (est.) | Typical Accuracy | Primary Users |
|---|---|---|---|
| 1 | 500-1,000 | < 10 μs | National labs, ISPs |
| 2 | 10,000-50,000 | < 100 μs | Enterprises, public |
| 3 | 100,000+ | < 1 ms | End users, small biz |
| 4+ | Billions | < 10 ms | Consumer devices |

A common misconception is that a stratum 1 server is always 'better' than a stratum 2 server. In reality, a well-connected stratum 2 server in your data center might give you better accuracy than a stratum 1 server across the globe, due to lower network delay and jitter. NTP's selection algorithms consider many factors beyond stratum, including round-trip delay and measured dispersion.
At the pinnacle of the NTP hierarchy are reference clocks—precision timekeeping devices that provide the authoritative time source for the entire network. These aren't NTP servers themselves but hardware devices that stratum 1 servers connect to directly.
Reference clocks are designated stratum 0 to indicate they are the ultimate source, not NTP participants. They interface with computers through various means: serial ports, PPS (Pulse Per Second) signals, or specialized timing cards.
| Type | Accuracy | Cost Range | Characteristics |
|---|---|---|---|
| GPS Receiver | 1-100 ns | $50-$10,000+ | Most common, requires antenna with sky view, 24/7 availability |
| Cesium Beam | <1 ns | $50,000-$100,000 | Primary SI standard, expensive, laboratory grade |
| Rubidium | 1-100 ns | $1,000-$10,000 | More affordable atomic standard, excellent short-term stability |
| WWVB/DCF77/MSF | 1-10 ms | $20-$200 | Radio broadcast from national labs, low cost, regional availability |
| CDMA/LTE | 1-10 μs | Varies | Cell tower timing, good urban coverage |
| PTP/IEEE 1588 | <1 μs | $500-$5,000 | Network-based, requires hardware support, excellent for LANs |
| IRIG-B | 1 μs | $500-$5,000 | Time code format common in military/industrial applications |
GPS as the dominant reference:
Global Positioning System (GPS) satellites each carry multiple atomic clocks and continuously broadcast precise time signals. A GPS receiver on the ground can achieve timing accuracy of approximately 40 nanoseconds relative to UTC, with top-end timing receivers achieving single-digit nanoseconds.
GPS timing works because:

- Each satellite carries multiple atomic clocks that are continuously steered to GPS system time, which is itself tied to UTC.
- Every broadcast signal embeds a precise timestamp of when it left the satellite.
- A receiver tracking at least four satellites can solve for its position and its own clock offset simultaneously, so even an initially unsynchronized receiver recovers accurate time.
The PPS signal is particularly valuable because it bypasses all software latency—the hardware directly provides a signal edge that can be timestamped by the operating system's interrupt handler.
```
Reference Clock Interface to NTP Server
════════════════════════════════════════════════════════════════

 TYPICAL GPS TIMING RECEIVER SETUP

 ┌─────────────────┐
 │   GPS Antenna   │  ← mounted with clear sky view
 │    (outdoor)    │
 └────────┬────────┘
          │ coax cable (low loss, <50 m typical)
 ┌────────▼────────┐
 │  GPS Receiver   │  ← timing-grade GPS module
 │    (indoor)     │    examples: Trimble, u-blox, Garmin
 │                 │
 │  Outputs:       │
 │  • Serial port ─┼──► NMEA sentences (time of day)
 │  • PPS signal  ─┼──► rising edge on each second boundary
 └────────┬────────┘
          ▼
 NTP SERVER (stratum 1)
   Serial port ──► NMEA parser (gpsd)
   (timestamp       provides: year, month, day,
   less accurate)   hour, minute, second
   PPS input   ──► combined with NMEA for precise time
   (GPIO or serial DCD, ~100 ns jitter)

 NTP configuration (chrony):
   refclock SHM 0 refid GPS precision 1e-1
   refclock PPS /dev/pps0 lock GPS precision 1e-7
```

NMEA and PPS are complementary:

| | Serial (NMEA) | PPS Signal |
|---|---|---|
| Provides | Date/time/fix data | Precise second edge |
| Accuracy | ~1 second | ~100 nanoseconds |
| Latency | Variable (serial delay) | Hardware interrupt |
| Purpose | Coarse time/date | Fine time alignment |

Combined: NMEA tells us WHICH second, PPS tells us WHEN it occurs.

GPS receivers typically provide both NMEA sentences (a text format with the current time) and a PPS signal. NMEA is accurate to about a second due to serial port latency, but it tells you the complete date and time. PPS is accurate to nanoseconds but only marks the second boundary—it doesn't tell you WHICH second. Combining both gives you the best of both: coarse time from NMEA, fine time from PPS.
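That division of labor can be captured in a few lines. An illustrative sketch, assuming we already hold a decoded NMEA second and a kernel-captured PPS edge timestamp (function and variable names are ours):

```python
def clock_offset(nmea_second: int, pps_sys_timestamp: float) -> float:
    """Combine coarse NMEA time with a precise PPS edge capture.

    nmea_second       -- UTC second (as Unix time) decoded from the NMEA
                         sentence following the edge: tells us WHICH second
    pps_sys_timestamp -- system-clock reading captured in the PPS interrupt
                         handler: tells us precisely WHEN, but not which

    Returns true_time - system_time; positive means the clock is behind.
    """
    # The PPS edge occurred exactly at nmea_second.000000 UTC, so any
    # difference from the captured system timestamp is pure clock error.
    return float(nmea_second) - pps_sys_timestamp

# System clock read 130 microseconds early at the edge of second 1700000000:
print(clock_offset(1700000000, 1699999999.999870))   # ~ +0.00013 s
```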
Stratum 1 servers are the first layer of NTP servers in the hierarchy. They have a direct connection to reference clocks (stratum 0 devices) and serve as the primary distribution points for authoritative time across the network.
Characteristics of stratum 1 servers:

- A direct hardware connection to a stratum 0 reference clock (GPS, atomic, or radio) via serial, PPS, or a timing card.
- Typically operated by national metrology laboratories, large network operators, and major internet companies.
- Roughly 500-1,000 public stratum 1 servers exist worldwide, typically holding within 10 microseconds of UTC.
- They advertise stratum 1 in their responses and anchor the accuracy of every stratum below them.

Some notable public stratum 1 operators:
| Operator | Server(s) | Reference Clock | Notes |
|---|---|---|---|
| NIST (USA) | time.nist.gov | Cesium/Hydrogen maser | US national time standard, heavily loaded |
| PTB (Germany) | ptbtime1/2/3.ptb.de | Cesium fountain | German national standard, UTC(PTB) |
| NPL (UK) | ntp1/2.npl.co.uk | Cesium | UK national standard |
| USNO (USA) | tick.usno.navy.mil | Cesium/Hydrogen maser | US Naval Observatory, GPS master clock |
| Meta (Facebook) | time.facebook.com | GPS + atomic | Globally distributed, high capacity |
| Microsoft | time.windows.com | GPS | Default Windows NTP server |
| NTP Pool Project | Various | GPS/atomic | Volunteer-operated, globally distributed |
The role of multiple references:
Well-designed stratum 1 servers often connect to multiple reference clocks of different types. This provides several benefits:

- Redundancy: if GPS reception fails (antenna damage, jamming, interference), another reference keeps the server synchronized.
- Cross-checking: disagreement between references exposes a faulty clock before it pollutes downstream time.
- Holdover: a local atomic oscillator can carry accurate time through an extended outage of the primary reference.
For example, a major cloud provider's stratum 1 deployment might include:

- GPS timing receivers at each data center site,
- rubidium or cesium oscillators for holdover, and
- continuous cross-site comparison between servers to detect outliers.
While public stratum 1 servers exist, best practice is not to sync directly to them unless you have a legitimate need (such as operating a stratum 2 server). Most stratum 1 servers are resource-constrained and heavily loaded. Use pool.ntp.org or your ISP's or cloud provider's time servers instead—they're designed for client access and provide excellent accuracy.
Stratum 2 and higher servers form the bulk of the NTP infrastructure. They synchronize to servers in the stratum above them and serve clients below. This creates the scalable tree structure that enables NTP to serve billions of devices.
The stratum 2+ layer is where most NTP interaction occurs:

- ISPs, cloud providers, and enterprises run stratum 2 servers synchronized to stratum 1 sources.
- These servers absorb the client load that the small population of stratum 1 servers could never handle directly.
- Most public time services people actually use, including the pool and cloud provider endpoints, operate at stratum 2 or 3.
The pool.ntp.org system:
The NTP Pool Project is a volunteer-operated virtual cluster of time servers that handles the vast majority of consumer NTP traffic worldwide. As of 2024, the pool consists of approximately 4,000 active servers serving over 100 million systems.
How the pool works:

- DNS round-robin: each query for pool.ntp.org returns a rotating subset of volunteer servers, spreading client load across the pool (a small sketch below shows this rotation).
- Geographic zones (e.g. europe.pool.ntp.org, us.pool.ntp.org) steer clients toward nearby servers.
- A monitoring system continuously scores each volunteer server's accuracy and reachability, removing misbehaving servers from DNS until they recover.
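You can observe the rotation with nothing but the standard library. A minimal sketch (a caching resolver between you and the pool may repeat answers until the pool's short DNS TTL expires):

```python
import socket
import time

def resolve(host: str) -> set[str]:
    """Return the set of IPv4 addresses the resolver currently hands out."""
    infos = socket.getaddrinfo(host, 123, socket.AF_INET, socket.SOCK_DGRAM)
    return {info[4][0] for info in infos}

# Repeated queries typically return rotating subsets of volunteer servers;
# that DNS rotation IS the pool's load balancing.
seen: set[str] = set()
for _ in range(5):
    ips = resolve("pool.ntp.org")
    print(sorted(ips))
    seen |= ips
    time.sleep(1)

print(f"distinct servers observed: {len(seen)}")
```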
```
Using pool.ntp.org - Configuration Examples
════════════════════════════════════════════════════════════════

 BASIC CONFIGURATION (most users)
 ────────────────────────────────────────────────────────────────
 # /etc/chrony/chrony.conf or /etc/ntp.conf

 # Use the pool - 'iburst' for fast initial sync
 pool pool.ntp.org iburst maxsources 4

 # For better accuracy, use regional pools
 pool 0.north-america.pool.ntp.org iburst
 pool 1.north-america.pool.ntp.org iburst
 pool 2.north-america.pool.ntp.org iburst

 REGIONAL POOL ZONES (select closest)
 ────────────────────────────────────────────────────────────────
 Zone                          Coverage
 ────────────────────────────────────────────────────────────────
 pool.ntp.org                  Worldwide (any server)
 asia.pool.ntp.org             Asia
 europe.pool.ntp.org           Europe
 north-america.pool.ntp.org    North America
 south-america.pool.ntp.org    South America
 oceania.pool.ntp.org          Australia, Oceania
 africa.pool.ntp.org           Africa

 Country-specific zones:
   us.pool.ntp.org             United States
   uk.pool.ntp.org             United Kingdom
   de.pool.ntp.org             Germany
   jp.pool.ntp.org             Japan
   (most countries have zones)

 POOL DNS IN ACTION
 ────────────────────────────────────────────────────────────────
 $ dig +short pool.ntp.org
 162.159.200.1
 162.159.200.123
 185.255.55.20
 139.162.220.21

 # Each query returns different IPs (load balancing)
 $ dig +short pool.ntp.org
 194.177.222.6
 193.182.111.142
 162.159.200.123
 ...

 $ chronyc sources -v
 MS Name/IP address      Stratum Poll Reach LastRx Last sample
 ═══════════════════════════════════════════════════════════════
 ^* time.cloudflare.com     3     6   377    50   -234us[-567us] +/- 15ms
 ^+ 45.33.84.208            2     6   377    49   +123us[ +12us] +/- 22ms
 ^+ 162.159.200.1           3     6   377    50   +445us[+678us] +/- 18ms
 ^+ 185.243.112.2           2     6   377    50   -567us[-234us] +/- 25ms

 Note: ^* = current sync source (system peer)
       ^+ = candidate (acceptable alternative)
       ^- = outlier (rejected)
       ^? = unknown/unreachable
```

Major cloud providers offer optimized NTP services for their platforms: AWS (169.254.169.123), Google Cloud (metadata.google.internal or time.google.com), Azure (time.windows.com). These are stratum 1-2 and optimized for low latency within their networks. Always prefer your cloud provider's time service when running in cloud environments.
The NTP hierarchy embodies a trust model where time 'flows' downward from authoritative sources. Understanding how trust is established and maintained is crucial for both configuring NTP correctly and understanding its security properties.
Trust flows with time:
```
NTP Trust Model
════════════════════════════════════════════════════════════════

 TRUST DECREASES WITH STRATUM (generally)
 ────────────────────────────────────────────────────────────────
  Stratum 1       Stratum 2       Stratum 3       Stratum 4
 ┌────────────┐  ┌────────────┐  ┌────────────┐  ┌──────────┐
 │ GPS/Atomic │─►│ ISP/Cloud  │─►│ Enterprise │─►│ Client   │
 │ connected  │  │ servers    │  │ servers    │  │ devices  │
 └────────────┘  └────────────┘  └────────────┘  └──────────┘
 High trust  ──────────────────────────────────► Lower trust
 Low latency ──────────────────────────────────► Higher latency
 Few servers ──────────────────────────────────► Many servers

 TRUST METRICS (computed for each peer)
 ────────────────────────────────────────────────────────────────
 ROOT DELAY (δ)
   Total round-trip delay from the root (stratum 0) to this peer
   Calculated as: Local_RTT + Peer's_Root_Delay
   Lower is better - indicates shorter path to authoritative source

 ROOT DISPERSION (ε)
   Accumulated error bound from root to this peer
   Increases with each hop and over time between polls
   Lower is better - indicates tighter error bounds

 SYNCHRONIZATION DISTANCE (Λ)
   Combined metric: Λ = (δ / 2) + ε
   Represents the maximum error from the root
   NTP MAXDIST default: 1.5 seconds
   Peers exceeding MAXDIST are rejected

 SELECTION: CHOOSING THE SYSTEM PEER
 ────────────────────────────────────────────────────────────────
 1. SANITY CHECKS (reject obviously wrong peers)
    ├── Stratum 0 or > 15: reject (invalid)
    ├── Root delay > MAXDIST: reject (too distant)
    ├── Root dispersion > MAXDIST: reject (too uncertain)
    └── Reference ID = self: reject (loop detection)

 2. FALSETICKER DETECTION (Byzantine fault tolerance)
    ├── Each peer provides a time estimate with error bounds
    ├── Find the largest subset of peers that agree
    ├── Peers outside this consensus are 'falsetickers'
    └── Reject falsetickers from further consideration

 3. CLUSTERING (select best candidates)
    ├── From 'truechimers' (non-falsetickers)
    ├── Sort by synchronization distance
    ├── Select top N candidates as survivors
    └── These form the candidate pool

 4. SYSTEM PEER SELECTION
    ├── From survivors, prefer lowest stratum
    ├── Within same stratum, prefer lowest distance
    ├── The selected peer becomes the 'system peer'
    └── Its time is used to discipline the local clock
```
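The distance check is small enough to state directly. A minimal sketch of the Λ = δ/2 + ε metric and the MAXDIST cutoff from the diagram above (function names are ours):

```python
MAXDIST = 1.5   # seconds -- NTP's default maximum acceptable distance

def sync_distance(root_delay: float, root_dispersion: float) -> float:
    """Synchronization distance: Lambda = delta/2 + epsilon.

    Half the accumulated round-trip delay plus the accumulated
    dispersion bounds the worst-case error relative to stratum 0.
    """
    return root_delay / 2.0 + root_dispersion

def acceptable(root_delay: float, root_dispersion: float) -> bool:
    """A peer whose distance exceeds MAXDIST is rejected outright."""
    return sync_distance(root_delay, root_dispersion) < MAXDIST

# A nearby peer: 20 ms round trip to root, 5 ms accumulated dispersion
assert acceptable(0.020, 0.005)        # Lambda = 15 ms, fine
# A hopeless peer: 2 s delay, 600 ms dispersion
assert not acceptable(2.0, 0.6)        # Lambda = 1.6 s > MAXDIST
```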
Why multiple sources matter:

NTP's trust model fundamentally depends on having multiple sources. With only a single source, a client must blindly trust that source; it has no way to detect whether the source is faulty or malicious. With multiple sources, NTP can:

- compare the time estimates and error bounds each source provides,
- identify the largest agreeing subset (the 'truechimers') and reject outliers (the 'falsetickers'), and
- combine the survivors into a single disciplined estimate of true time.
Minimum recommendation: 4 servers — With 4 servers, NTP can tolerate 1 faulty server and still achieve consensus. With 3 or fewer, a single Byzantine failure can cause problems.
For Byzantine fault tolerance, NTP needs 3f+1 servers to tolerate f faulty servers. With 4 servers, you can tolerate 1 faulty server. Many default configurations use 3 servers (or even 1!), which is insufficient for Byzantine fault tolerance—a single compromised server can manipulate your time without detection.
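The falseticker-detection step above is essentially an interval-intersection vote: find the largest set of sources whose error intervals overlap. A simplified, illustrative sketch of that idea (the real RFC 5905 selection algorithm is more elaborate; function and variable names here are ours):

```python
def intersect(intervals):
    """Find the region covered by the most intervals (Marzullo-style).

    Each source contributes [offset - error, offset + error]. Sources
    whose interval misses the best intersection are 'falsetickers'.
    Returns (count, lo, hi) for the best intersection region.
    """
    events = []
    for lo, hi in intervals:
        events.append((lo, -1))   # entering an interval
        events.append((hi, +1))   # leaving an interval
    events.sort()                 # at ties -1 sorts first: touching = overlap

    best = count = 0
    best_lo = best_hi = 0.0
    for i, (edge, kind) in enumerate(events):
        count -= kind             # +1 on entry, -1 on exit
        if count > best:
            best = count
            best_lo = edge
            best_hi = events[i + 1][0]   # region ends at the next edge
    return best, best_lo, best_hi

# Four sources, offsets in seconds with +/- error bounds:
sources = {
    "a": (0.010, 0.005),    # interval [0.005, 0.015]
    "b": (0.012, 0.004),    # interval [0.008, 0.016]
    "c": (0.009, 0.006),    # interval [0.003, 0.015]
    "d": (0.250, 0.002),    # interval [0.248, 0.252] -- the falseticker
}
n, lo, hi = intersect([(o - e, o + e) for o, e in sources.values()])
print(f"{n} of {len(sources)} sources agree on [{lo:.3f}, {hi:.3f}]")
falsetickers = [name for name, (o, e) in sources.items()
                if o + e < lo or o - e > hi]
print("falsetickers:", falsetickers)   # -> ['d']
```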
NTP supports several operating modes that accommodate different network architectures and requirements. Understanding these modes helps in designing efficient time synchronization for various scenarios; the table below summarizes them, and a short sketch of how the mode is carried on the wire follows the table.
| Mode | Direction | Use Case | Stratum Relationship |
|---|---|---|---|
| Client/Server | Client polls server | Most common; clients sync to servers | Client: server_stratum + 1 |
| Symmetric Active | Bidirectional | Peer synchronization; both contribute | Both can adjust stratum dynamically |
| Broadcast | Server → all clients | High-latency tolerance; LAN optimization | Clients: server_stratum + 1 |
| Multicast | Server → multicast group | Similar to broadcast; IP multicast | Clients: server_stratum + 1 |
| Manycast | Client discovers servers | Automatic server discovery | Client selects best discovered server |
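On the wire, the mode is carried in the low three bits of the first byte of every NTP packet, next to the leap indicator and version number. A small sketch of that encoding, with mode numbers as assigned by RFC 5905:

```python
# NTP packet modes (RFC 5905, low 3 bits of the first header byte)
MODE_SYMMETRIC_ACTIVE  = 1
MODE_SYMMETRIC_PASSIVE = 2
MODE_CLIENT            = 3
MODE_SERVER            = 4
MODE_BROADCAST         = 5

def first_byte(leap: int, version: int, mode: int) -> int:
    """Pack LI (2 bits), VN (3 bits), and mode (3 bits) into one byte."""
    return (leap << 6) | (version << 3) | mode

# An ordinary NTPv4 client request starts with 0x23:
assert first_byte(0, 4, MODE_CLIENT) == 0x23
```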
Client/Server mode (most common):
In this mode, the client periodically polls the server with an NTP request. The server responds with its timestamps. The client uses these to calculate offset and delay, then adjusts its clock accordingly.
```
 Client                            Server
    │                                 │
    │───────  NTP Request  ─────────►│  server records T2 on arrival,
    │         (T1 embedded)          │  stamps T3 when it replies
    │                                 │
    │◄──────  NTP Response  ─────────│
    │    (carries T1, T2, T3;        │
    │     client records T4          │
    │     on arrival)                │
```
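To make the exchange concrete, here is a minimal SNTP-style client sketch. It is illustrative rather than production code: a robust client would also verify that the reply echoes its transmit timestamp and check the stratum and leap bits. The offset and delay formulas are the standard NTP ones, offset = ((T2-T1)+(T3-T4))/2 and delay = (T4-T1)-(T3-T2):

```python
import socket
import struct
import time

NTP_UNIX_DELTA = 2208988800   # seconds from 1900 (NTP era) to 1970 (Unix)

def to_ntp(unix_ts: float) -> int:
    """Unix float seconds -> 64-bit NTP timestamp (32.32 fixed point)."""
    sec = int(unix_ts) + NTP_UNIX_DELTA
    frac = int((unix_ts % 1.0) * (1 << 32))
    return (sec << 32) | frac

def from_ntp(raw: int) -> float:
    """64-bit NTP timestamp -> Unix float seconds."""
    return (raw >> 32) - NTP_UNIX_DELTA + (raw & 0xFFFFFFFF) / (1 << 32)

def sntp_query(server: str = "pool.ntp.org") -> tuple[float, float]:
    packet = bytearray(48)
    packet[0] = 0x23                                   # LI=0, VN=4, Mode=3
    t1 = time.time()                                   # T1: client transmit
    struct.pack_into("!Q", packet, 40, to_ntp(t1))     # into transmit field
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5.0)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    t4 = time.time()                                   # T4: client receive
    t2 = from_ntp(struct.unpack_from("!Q", data, 32)[0])   # server receive
    t3 = from_ntp(struct.unpack_from("!Q", data, 40)[0])   # server transmit
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated clock offset
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

if __name__ == "__main__":
    offset, delay = sntp_query()
    print(f"offset {offset * 1e3:+.3f} ms, delay {delay * 1e3:.3f} ms")
```

Both formulas assume the outbound and return paths are symmetric; any asymmetry shows up directly as offset error, bounded by half the measured delay.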
Symmetric mode (peer-to-peer):
In symmetric mode, two servers mutually synchronize. Each peer can provide time to the other, and the peer with the better (lower) stratum becomes the time source for the other. This is useful for redundant stratum 2 servers that back each other up.
Broadcast/Multicast mode:
For environments with many clients (hundreds or thousands on a LAN), broadcast mode reduces load on the server. The server periodically broadcasts its time, and clients listen. However, broadcast mode cannot measure round-trip delay accurately, so it's less precise than client/server mode.
Broadcast and multicast modes are particularly vulnerable to spoofing attacks since any device on the network can send broadcast packets claiming to be the time server. These modes should only be used on secured networks with additional authentication (symmetric key or NTS).
When deploying NTP in an enterprise or data center environment, careful architecture design ensures reliability, accuracy, and scalability. Here's a reference architecture for a well-designed NTP deployment:
```
Enterprise NTP Architecture Example
════════════════════════════════════════════════════════════════

 TIER 1: EXTERNAL REFERENCES (stratum 1-2)
 ────────────────────────────────────────────────────────────────
              INTERNET / CLOUD
   pool.ntp.org        time.google.com
   time.apple.com      time.cloudflare.com
          │                   │
 ═════════╪═══════════════════╪═════════════════════════════════
 TIER 2: INTERNAL STRATUM 2-3 (your infrastructure)
 ═════════╪═══════════════════╪═════════════════════════════════
          ▼                   ▼
   DATA CENTER A                DATA CENTER B
   ┌─────────────────────┐      ┌─────────────────────┐
   │ NTP Server Cluster  │      │ NTP Server Cluster  │
   │   (stratum 2-3)     │      │   (stratum 2-3)     │
   │  ┌─────┐  ┌─────┐   │      │  ┌─────┐  ┌─────┐   │
   │  │ntp01│  │ntp02│◄──┼─peer─┼─►│ntp03│  │ntp04│   │
   │  └──┬──┘  └──┬──┘   │      │  └──┬──┘  └──┬──┘   │
   └─────┼────────┼──────┘      └─────┼────────┼──────┘
         │        │                   │        │
 ════════════════════════════════════════════════════════════════
 TIER 3: CLIENTS
 ════════════════════════════════════════════════════════════════
         ▼        ▼                   ▼        ▼
   Application servers (stratum 4): web, app, db, svc, ...
```

Recommended configuration for the internal NTP servers:

```
# /etc/chrony/chrony.conf for ntp01/ntp02

# External sources (diverse providers for resilience)
server time.google.com iburst prefer
server time.cloudflare.com iburst
pool pool.ntp.org iburst maxsources 3

# Peer with other internal servers (example for ntp01)
peer ntp02.internal.company.com iburst
peer ntp03.internal.company.com iburst
peer ntp04.internal.company.com iburst

# Serve time to internal clients
allow 10.0.0.0/8
allow 172.16.0.0/12
allow 192.168.0.0/16

# If disconnected from external sources, still serve time
# (but clients will see higher stratum)
local stratum 10 orphan

# Log for monitoring
logdir /var/log/chrony
log measurements statistics tracking
```

Virtual machines have notoriously bad timekeeping because the virtual CPU is not continuously scheduled: scheduling pauses cause the VM's clock to fall behind. VMs should always run NTP clients, and you should disable hypervisor time-sync features (like VMware Tools time sync) to avoid conflicts. Container hosts should run NTP; containers typically inherit the host's clock.
The NTP hierarchy is a carefully designed architecture that balances accuracy, scalability, redundancy, and trust. Let's consolidate the key concepts:

- Stratum measures hops from a reference clock, not absolute quality; it runs from 0 (hardware clocks) through 15, with 16 meaning unsynchronized.
- Stratum 0 reference clocks (GPS, atomic, radio) feed stratum 1 primary servers, which feed stratum 2+ secondary servers, which in turn serve billions of clients.
- NTP's selection algorithms weigh root delay, root dispersion, and synchronization distance, not just stratum.
- Multiple diverse sources (at least four) let a client detect and reject falsetickers.
- Operating modes (client/server, symmetric, broadcast/multicast, manycast) fit different network architectures; client/server is by far the most common.
What's next:
Now that we understand NTP's hierarchical structure, we'll dive deeper into the stratum levels themselves. The next page explores what each stratum level means, how stratum is calculated and propagated, and the practical implications of stratum for NTP clients and servers.
You now understand how NTP organizes time distribution through a hierarchical stratum system, from atomic clocks at the top to billions of clients at the bottom. This architecture has enabled accurate time synchronization across the internet for nearly four decades.