Knowing the time from a remote server is only half the battle. The real challenge is applying that knowledge to correct the local clock without breaking applications, creating discontinuities, or oscillating unstably. NTP's clock discipline algorithms represent decades of refinement in control systems theory applied to the unique challenges of distributed timekeeping.
This page takes you inside the NTP engine—the measurement process that captures timestamps, the filtering algorithms that reject noise, the selection logic that chooses the best source, and the clock discipline loop that smoothly steers your system clock toward accurate time.
By the end of this page, you will understand:

- The four-timestamp measurement process and how NTP calculates offset and delay
- The clock filter algorithm that selects the best samples and rejects outliers
- The selection and clustering algorithms that choose among multiple sources
- The phase-locked and frequency-locked loops that discipline the clock
- The critical difference between stepping and slewing the clock
NTP's fundamental measurement is elegantly simple: exchange timestamps with a server and compute the clock offset and network delay. This requires four timestamps—two from the client and two from the server.
```
NTP Timestamp Exchange
══════════════════════════════════════════════════════════════════════════

        CLIENT                                              SERVER
       (clock A)                                           (clock B)
           │                                                   │
       T1  ●────────────────────────────────────────────────►  T2
           │               NTP Request Packet                  │
           │             (T1 embedded as Origin)               │
           │                                                   │
       T4  ◄────────────────────────────────────────────────●  T3
           │               NTP Response Packet                 │
           │       (T1, T2, T3 embedded; T4 on arrival)        │

 TIMESTAMPS:
   T1 = Client transmit time (client clock)  [Origin Timestamp]
   T2 = Server receive time  (server clock)  [Receive Timestamp]
   T3 = Server transmit time (server clock)  [Transmit Timestamp]
   T4 = Client receive time  (client clock)  [Destination Timestamp]

══════════════════════════════════════════════════════════════════════════
 CALCULATING OFFSET AND DELAY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

 Given: T1, T2, T3, T4 (all in same time units)

 Round-Trip Delay (δ):
 ┌───────────────────────────────────────────────────────────────────────┐
 │ δ = (T4 - T1) - (T3 - T2)                                             │
 │                                                                       │
 │ In words: Total elapsed time on client clock, minus server processing │
 │ time. This gives the network transit time (both directions).          │
 └───────────────────────────────────────────────────────────────────────┘

 Clock Offset (θ):
 ┌───────────────────────────────────────────────────────────────────────┐
 │ θ = ((T2 - T1) + (T3 - T4)) / 2                                       │
 │                                                                       │
 │ In words: Average of the apparent offset in each direction.           │
 │ Assumption: Network delay is symmetric (same in both directions).     │
 │ If θ > 0: Client clock is BEHIND server (needs to advance)            │
 │ If θ < 0: Client clock is AHEAD of server (needs to slow down)        │
 └───────────────────────────────────────────────────────────────────────┘

 DERIVATION (for the mathematically inclined):
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Let:  θ = true offset (server clock - client clock)
       d = one-way network delay (assumed symmetric)

 Then: T2 = T1 + θ + d    (T1 on client clock, T2 on server clock)
       T4 = T3 - θ + d    (T3 on server clock, T4 on client clock)

 Solving for θ:
   From first equation:  θ = T2 - T1 - d
   From second equation: θ = T3 - T4 + d
   Adding these:         2θ = (T2 - T1) + (T3 - T4)
                          θ = ((T2 - T1) + (T3 - T4)) / 2

 The delay cancels out! This is the key insight of NTP's measurement.

══════════════════════════════════════════════════════════════════════════
 NUMERICAL EXAMPLE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Timestamps (in seconds since epoch + fractional):
   T1 = 1705500000.100000   (client transmits)
   T2 = 1705500000.150025   (server receives)
   T3 = 1705500000.150030   (server transmits)
   T4 = 1705500000.200010   (client receives)

 Round-Trip Delay:
   δ = (T4 - T1) - (T3 - T2)
   δ = (0.100010) - (0.000005)
   δ = 0.100005 seconds = 100.005 ms

 Clock Offset:
   θ = ((T2 - T1) + (T3 - T4)) / 2
   θ = ((0.050025) + (-0.049980)) / 2
   θ = (0.000045) / 2
   θ = 0.0000225 seconds = 22.5 μs

 Interpretation:
   - Network round-trip: ~100 ms (one-way ~50 ms each direction)
   - Client clock is 22.5 μs behind server (needs to advance)
   - Very good synchronization!
```

NTP's offset calculation assumes symmetric network delays. If the path from client to server takes 30 ms and the return path takes 70 ms, NTP will compute an incorrect offset (off by 20 ms in this case). This asymmetry is the fundamental limit of NTP accuracy over asymmetric paths like ADSL or satellite links.
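To make the arithmetic concrete, here is a minimal C sketch that reproduces the numerical example above. It is not taken from any NTP implementation—real NTP code uses 64-bit fixed-point timestamps, while this uses plain doubles for readability:

```c
#include <stdio.h>

int main(void) {
    /* The four timestamps from the numerical example (seconds) */
    double t1 = 1705500000.100000;  /* client transmit  (Origin)      */
    double t2 = 1705500000.150025;  /* server receive   (Receive)     */
    double t3 = 1705500000.150030;  /* server transmit  (Transmit)    */
    double t4 = 1705500000.200010;  /* client receive   (Destination) */

    /* Round-trip delay: client elapsed time minus server processing time */
    double delay = (t4 - t1) - (t3 - t2);

    /* Offset: average of the apparent offset in each direction */
    double offset = ((t2 - t1) + (t3 - t4)) / 2.0;

    /* double rounding near 1.7e9 may perturb the last printed digit */
    printf("delay  = %.6f s  (~100.005 ms)\n", delay);
    printf("offset = %+.7f s (~+22.5 us)\n", offset);
    return 0;
}
```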
Individual NTP measurements are noisy. Network queuing delays vary, OS scheduling introduces jitter, and transient conditions cause outliers. The clock filter algorithm maintains a sliding window of recent measurements for each peer and selects the best sample based on delay—because lower delay usually means less queuing jitter and thus more accurate offset.
```
Clock Filter Algorithm
══════════════════════════════════════════════════════════════════════════

 FILTER REGISTER: Last 8 (offset, delay, dispersion) samples per peer
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Index │ Offset (θ)   │ Delay (δ)   │ Dispersion (ε)   │ Age
 ──────┼──────────────┼─────────────┼──────────────────┼────────────
   0   │  +0.000023   │  0.01523    │  0.00001         │  64 sec
   1   │  +0.000018   │  0.01498    │  0.00002         │ 128 sec
   2   │  +0.000045   │  0.02534    │  0.00001         │ 192 sec  ← outlier
   3   │  +0.000021   │  0.01502    │  0.00002         │ 256 sec
   4   │  +0.000019   │  0.01489    │  0.00001         │ 320 sec  ← BEST
   5   │  +0.000024   │  0.01510    │  0.00002         │ 384 sec
   6   │  +0.000022   │  0.01505    │  0.00001         │ 448 sec
   7   │  +0.000025   │  0.01520    │  0.00002         │ 512 sec

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 STEP 1: SELECT SAMPLE WITH MINIMUM DELAY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Why minimum delay?
   - Lower delay = less time spent in router queues
   - Less queuing = more deterministic timing
   - Sample with minimum delay is likely most accurate

 In the above example:
   Sample 4 has minimum delay (0.01489)
   → Use offset θ = +0.000019 s from sample 4

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 STEP 2: SORT BY DELAY, COMPUTE DISPERSION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Sort register by delay (ascending):

 Order │ Sample │ Delay      │ Dispersion contribution
 ──────┼────────┼────────────┼───────────────────────────────────
   0   │   4    │  0.01489   │  ε₄ / 2¹   (best: largest weight)
   1   │   1    │  0.01498   │  ε₁ / 2²
   2   │   3    │  0.01502   │  ε₃ / 2³
   3   │   6    │  0.01505   │  ε₆ / 2⁴
   4   │   5    │  0.01510   │  ε₅ / 2⁵
   5   │   7    │  0.01520   │  ε₇ / 2⁶
   6   │   0    │  0.01523   │  ε₀ / 2⁷
   7   │   2    │  0.02534   │  ε₂ / 2⁸   (worst: highest delay)

 Filter dispersion (peer jitter estimate):
   ε_peer = Σ (εᵢ / 2^(i+1))   for i = 0 to 7, in delay-sorted order

 This weighted sum emphasizes low-delay samples.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 DISPERSION GROWTH
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Dispersion represents uncertainty. It grows over time:

   ε(t) = ε₀ + φ × (t - t₀)

 Where:
   ε₀ = initial dispersion from measurement
   φ  = 15 ppm (NTP's assumed maximum frequency error)
   t  = current time
   t₀ = time of measurement

 This ensures that old samples are progressively devalued. After roughly
 a day (1.5 s ÷ 15 ppm ≈ 100,000 seconds ≈ 28 hours), dispersion reaches
 MAXDIST (1.5 seconds) and the sample is considered stale.
```

Network delays consist of fixed components (propagation, transmission) and variable components (queuing). Samples with lower delay have spent less time in queues, meaning they've experienced less random jitter. The fixed components average out in offset calculation, so low-delay samples provide the cleanest view of the true offset.
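The filter's core rules are easy to sketch in code. The structure below is illustrative, not ntpd's internal representation; it replays the register from the table above:

```c
#include <stdio.h>

struct sample { double offset, delay, disp; };

int main(void) {
    /* Filter register from the table above */
    struct sample reg[8] = {
        {0.000023, 0.01523, 0.00001}, {0.000018, 0.01498, 0.00002},
        {0.000045, 0.02534, 0.00001}, {0.000021, 0.01502, 0.00002},
        {0.000019, 0.01489, 0.00001}, {0.000024, 0.01510, 0.00002},
        {0.000022, 0.01505, 0.00001}, {0.000025, 0.01520, 0.00002},
    };

    /* Step 1: the minimum-delay sample has seen the least queuing */
    int best = 0;
    for (int i = 1; i < 8; i++)
        if (reg[i].delay < reg[best].delay) best = i;

    /* Step 2: sort indices by delay, then form the weighted sum
       eps_peer = sum(disp_i / 2^(i+1)) over the sorted order */
    int idx[8];
    for (int i = 0; i < 8; i++) idx[i] = i;
    for (int i = 0; i < 8; i++)            /* simple selection sort */
        for (int j = i + 1; j < 8; j++)
            if (reg[idx[j]].delay < reg[idx[i]].delay) {
                int t = idx[i]; idx[i] = idx[j]; idx[j] = t;
            }
    double eps = 0, w = 0.5;               /* w = 1/2^(i+1), halved each step */
    for (int i = 0; i < 8; i++, w /= 2)
        eps += reg[idx[i]].disp * w;

    printf("use sample %d: offset %+f (delay %f)\n",
           best, reg[best].offset, reg[best].delay);   /* sample 4 */
    printf("filter dispersion: %g\n", eps);
    return 0;
}
```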
When a client has multiple configured peers, the selection algorithm determines which ones are trustworthy ('truechimers') and which are faulty ('falsetickers'). This is NTP's Byzantine fault tolerance mechanism.
```
NTP Selection Algorithm: Finding Truechimers
══════════════════════════════════════════════════════════════════════════

 CONCEPT: INTERSECTION ALGORITHM
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Each peer provides an offset estimate with an error bound.
 We can represent this as an interval: [offset - error, offset + error]

 Peer A:        ├──────────────────────────────────────────┤
             θ_A - ε_A                                  θ_A + ε_A

 Peer B:              ├───────────────────────────────┤
                   θ_B - ε_B                       θ_B + ε_B

 Peer C:           ├─────────────────────────────┤
                θ_C - ε_C                     θ_C + ε_C

 Falseticker D:                                        ├────────────┤
 (no overlap!)                                      θ_D - ε_D    θ_D + ε_D

 The intersection region is where the "true" offset must lie:

                      ├───────────────────┤
                      │    TRUECHIMER     │
                      │   INTERSECTION    │
                      └───────────────────┘

 Peers whose intervals don't overlap with the intersection are FALSETICKERS.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 ALGORITHM STEPS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 1. CALCULATE CORRECTNESS INTERVAL FOR EACH PEER
      low_i  = offset_i - root_distance_i
      high_i = offset_i + root_distance_i
    Where root_distance = (root_delay / 2) + root_dispersion

 2. FIND THE LARGEST INTERSECTION
    - Find the interval [low, high] that contains the most peers
    - Use Marzullo's algorithm or NTP's variant
    - Require at least majority consensus: n/2 + 1 peers

 3. CLASSIFY PEERS
      TRUECHIMER:  Peer's interval overlaps with the intersection
      FALSETICKER: Peer's interval does not overlap

 4. SANITY CHECKS
    - Stratum must be valid (1-15)
    - Root distance must be < MAXDIST (1.5 seconds)
    - Must have received valid response recently (reachability)
    - Must pass additional protocol checks

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 VISUAL EXAMPLE WITH 5 PEERS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Offset (ms) →
   -30   -20   -10     0    +10   +20   +30   +40   +50
    │     │     │      │     │     │     │     │     │

 Peer 1:   ├─────────────────────┤
          -25                   +5

 Peer 2:         ├──────────────────────┤
                -15                    +15

 Peer 3:            ├───────────────────┤
                   -10                 +15

 Peer 4:               ├──────────────────┤
                      -5                 +20

 Peer 5 (falseticker):                          ├──────────┤
                                               +35        +50

 Intersection of Peers 1-4:   ├─────┤
                             -5    +5

 Peer 5 does NOT overlap with intersection → FALSETICKER
 Peers 1-4 all overlap with intersection   → TRUECHIMERS
```
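The intersection step can be approximated with a classic sweep over interval endpoints, in the spirit of Marzullo's algorithm. This sketch uses the five peers from the visual example; RFC 5905's production variant differs in detail (it also tracks interval midpoints and relaxes the consensus requirement stepwise):

```c
#include <stdio.h>
#include <stdlib.h>

struct edge { double x; int delta; };   /* +1 = interval opens, -1 = closes */

static int cmp(const void *a, const void *b) {
    double d = ((const struct edge *)a)->x - ((const struct edge *)b)->x;
    return (d > 0) - (d < 0);
}

int main(void) {
    /* [offset - root_distance, offset + root_distance] per peer, in ms */
    double lo[] = {-25, -15, -10,  -5, 35};
    double hi[] = {  5,  15,  15,  20, 50};
    int n = 5;

    struct edge e[10];
    for (int i = 0; i < n; i++) {
        e[2*i]     = (struct edge){lo[i], +1};
        e[2*i + 1] = (struct edge){hi[i], -1};
    }
    qsort(e, 2*n, sizeof e[0], cmp);

    /* Sweep left to right, tracking how many intervals cover each region;
       keep the region covered by the most peers. */
    int depth = 0, best = 0;
    double ilo = 0, ihi = 0;
    for (int i = 0; i < 2*n; i++) {
        depth += e[i].delta;
        if (depth > best && i + 1 < 2*n) {
            best = depth;
            ilo = e[i].x;
            ihi = e[i + 1].x;
        }
    }
    printf("intersection [%g, %g] ms covers %d peers\n", ilo, ihi, best);
    /* Prints [-5, 5] with 4 peers; peer 5's interval misses it entirely,
       so peer 5 is the falseticker, matching the diagram above. */
    return 0;
}
```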
After identifying truechimers, NTP further refines the selection through clustering. The goal is to select a small number of the best candidates (typically 3) from among the truechimers.
Clustering works by:

1. Computing a selection jitter for each survivor—the root-mean-square difference between its offset and the offsets of the other survivors
2. Discarding the survivor with the largest selection jitter, as long as more than MINCLOCK (3) candidates remain
3. Repeating until only the best MINCLOCK candidates remain (or until further discarding would no longer improve the estimate)

The final candidates are the survivors—the best time sources available. A sketch of this pruning loop follows below.
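A simplified rendering of that pruning loop, with invented offsets for five truechimers (the real algorithm in RFC 5905 §11.2.2 also stops early once pruning no longer reduces the overall jitter):

```c
#include <math.h>
#include <stdio.h>

#define MINCLOCK 3

int main(void) {
    /* Offsets (seconds) for 5 hypothetical truechimers; peers 3 and 4
       are noticeably noisier than the rest. */
    double off[] = {22.5e-6, 18.2e-6, 24.1e-6, 1.8e-3, -0.9e-3};
    int alive[]  = {1, 1, 1, 1, 1};
    int n = 5, count = n;

    while (count > MINCLOCK) {
        int worst = -1;
        double worst_sj = -1;
        for (int i = 0; i < n; i++) {
            if (!alive[i]) continue;
            /* Selection jitter of peer i: RMS distance of its offset
               from the offsets of the other remaining survivors. */
            double sum = 0;
            for (int j = 0; j < n; j++)
                if (alive[j] && j != i)
                    sum += (off[i] - off[j]) * (off[i] - off[j]);
            double sj = sqrt(sum / (count - 1));
            if (sj > worst_sj) { worst_sj = sj; worst = i; }
        }
        alive[worst] = 0;   /* prune the noisiest survivor */
        count--;
        printf("pruned peer %d (selection jitter %.3g s)\n", worst, worst_sj);
    }
    return 0;   /* peers 0, 1, 2 survive as the final candidates */
}
```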
With 4 servers, NTP can tolerate 1 faulty server: if 3 agree and 1 disagrees, the 1 is detected as a falseticker. With only 3 servers, if 1 is faulty, you have a 2-vs-1 situation that NTP can barely resolve. With 2 servers, there's no way to determine which is wrong. Always configure at least 4 diverse NTP sources.
From the surviving candidates, NTP selects one as the system peer—the source that will actually discipline the local clock. The selection considers multiple factors:
| Priority | Criterion | Preference |
|---|---|---|
| 1 | Stratum | Lower is better (closer to reference) |
| 2 | Selection type | Prefer server/peer over broadcast |
| 3 | Root distance | Lower is better (tighter error bound) |
| 4 | Origin timestamp | More recent is better |
Root distance is the key metric for comparing peers at the same stratum:
root_distance = (root_delay / 2) + root_dispersion + (peer_jitter / 2) + PHI × (current_time - last_update)
Where:

- root_delay = the accumulated round-trip delay along the whole path back to the stratum-1 reference clock
- root_dispersion = the accumulated dispersion (error bound) along that same path
- peer_jitter = the jitter estimate from this peer's clock filter
- PHI = 15 ppm, NTP's assumed maximum frequency error
- (current_time - last_update) = the age of the peer's last valid sample
The selected system peer determines your stratum:
my_stratum = system_peer_stratum + 1
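As a quick sketch, the metric translates directly into code. PHI is NTP's 15 ppm frequency tolerance from above; the argument values in main are invented for illustration:

```c
#include <stdio.h>

#define PHI 15e-6   /* s/s: assumed maximum frequency error */

/* Root distance: half the round-trip delay plus accumulated dispersion,
 * half the peer jitter, and an aging term that grows at PHI. */
double root_distance(double root_delay, double root_disp,
                     double peer_jitter, double age_s) {
    return root_delay / 2 + root_disp + peer_jitter / 2 + PHI * age_s;
}

int main(void) {
    /* e.g. 30 ms round trip, 5 ms dispersion, 1 ms jitter, 64 s old */
    printf("root distance: %.4f s\n",
           root_distance(0.030, 0.005, 0.001, 64.0));
    return 0;
}
```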
```
System Peer Selection Example
══════════════════════════════════════════════════════════════════════════

 After selection and clustering, we have 3 survivors:

 Peer         │ Stratum │ Root Distance │ Offset     │ Status
 ─────────────┼─────────┼───────────────┼────────────┼─────────────────
 time.google  │    1    │   0.015 sec   │  +22.5 μs  │  SURVIVOR
 ntp.ubuntu   │    2    │   0.025 sec   │  +18.2 μs  │  SURVIVOR
 pool-server  │    2    │   0.032 sec   │  +24.1 μs  │  SURVIVOR

 SELECTION PROCESS:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Step 1: Compare by stratum
   time.google has the lowest stratum (1)
   → time.google is preferred!

 If all had the same stratum, we would then compare by root distance:
   time.google:  0.015 sec (lowest)
   ntp.ubuntu:   0.025 sec
   pool-server:  0.032 sec
   → time.google would still win

 RESULT:
   System Peer:  time.google.com (stratum 1)
   My Stratum:   2 (1 + 1)
   Reference ID: time.google.com's IP address

══════════════════════════════════════════════════════════════════════════
 CHRONY OUTPUT SHOWING SELECTION:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 $ chronyc sources
 MS Name/IP address       Stratum Poll Reach LastRx Last sample
 ===============================================================================
 ^* time.google.com             1    6   377    23   +22us[ +22us] +/-  15ms
 ^+ ntp.ubuntu.com              2    6   377    24   +18us[ +18us] +/-  25ms
 ^+ pool.ntp.org                2    6   377    25   +24us[ +24us] +/-  32ms

 Legend:
   ^* = Selected as system peer (clock source)
   ^+ = Acceptable candidate (survivor)
   ^- = Outlier (rejected by selection)
   ^x = Designated falseticker
   ^? = Unreachable / pending
```

The system peer can change over time as network conditions evolve. If the current system peer becomes unreachable or its quality degrades, NTP will switch to the next-best survivor. This provides resilience—losing one server doesn't mean losing synchronization.
Once NTP has selected a system peer and computed an offset, it must adjust the local clock. This is the job of the clock discipline loop—a sophisticated feedback control system that adjusts both the clock's time (phase) and its rate (frequency).
Why not just set the clock directly?
Simply setting clock = clock + offset has severe problems:

- Time can jump backward, breaking applications that assume monotonic time (logs, databases, build tools)
- Timestamps become discontinuous, so any interval measured across the jump is wrong
- It corrects the symptom (phase) but not the cause (frequency error), so the offset immediately begins accumulating again
- Repeated blunt corrections can overshoot and oscillate rather than converge
Instead, NTP uses control theory principles to gradually steer the clock.
```
Clock Discipline Loop Architecture
══════════════════════════════════════════════════════════════════════════
                         NTP CLOCK DISCIPLINE

  OFFSET                                            ┌──────────────┐
  INPUT ──────────────┬────────────────────────────►│ LOCAL CLOCK  │
   (θ)                │                             │   Phase +    │
                      │                             │  Frequency   │
                      ▼                             └──────────────┘
  ┌─────────────────────────────┐                          ▲
  │         LOOP FILTER         │                          │
  │                             │                          │
  │  ┌────────────────────┐     │   Phase correction       │
  │  │   PLL COMPONENT    │─────┼──────────────────────────┤
  │  │   (Phase-Locked    │     │                          │
  │  │       Loop)        │     │                          │
  │  └────────────────────┘     │                          │
  │                             │                          │
  │  ┌────────────────────┐     │   Frequency correction   │
  │  │   FLL COMPONENT    │─────┼──────────────────────────┘
  │  │   (Frequency-      │     │
  │  │    Locked Loop)    │     │
  │  └────────────────────┘     │
  └─────────────────────────────┘
                ▲
                │  Feedback (next offset measurement)
                └─────────────────────────────────────

══════════════════════════════════════════════════════════════════════════
 PHASE-LOCKED LOOP (PLL) vs FREQUENCY-LOCKED LOOP (FLL)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 PLL (Phase-Locked Loop):
   - Corrects the clock OFFSET (phase)
   - Dominant when poll intervals are long and measurements stable
   - Produces smooth corrections
   - Response: adjustment proportional to offset
   - Best for: steady-state operation

 FLL (Frequency-Locked Loop):
   - Corrects the clock RATE (frequency)
   - Dominant when poll intervals are short or conditions noisy
   - Responds faster to frequency changes
   - Tracks frequency drift directly
   - Best for: initial sync, unstable conditions

 NTP uses a HYBRID approach:
   - Both loops contribute simultaneously
   - Weighting depends on poll interval and stability
   - Long poll (≥128s): mostly PLL
   - Short poll (≤8s):  more FLL

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 POLL INTERVAL DYNAMICS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 NTP dynamically adjusts poll interval based on conditions:

 Condition                  │ Action           │ Poll Interval
 ───────────────────────────┼──────────────────┼─────────────────
 Initial sync               │ Start aggressive │ ~16 seconds
 Offset high, unstable      │ Decrease poll    │ Down to 8 sec
 Offset low, stable         │ Increase poll    │ Up to 1024 sec
 Sudden instability         │ Decrease poll    │ Adaptive

 Benefits of longer poll intervals (when stable):
   ✓ Less network traffic
   ✓ Less load on servers
   ✓ Better filtering (more time to average noise)
   ✓ More stable clock discipline

 Benefits of shorter poll intervals (when unstable):
   ✓ Faster response to changes
   ✓ Quicker initial sync
   ✓ Better tracking of frequency drift
```

The optimal poll interval is related to the oscillator's Allan deviation minimum. Quartz crystals have an Allan deviation minimum at around τ = 1000-10000 seconds, which is why NTP's default poll range (16-1024 seconds) was chosen. Polling too fast adds noise; polling too slow misses frequency drift.
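The following toy simulation illustrates the hybrid idea: at each poll, part of the measured phase error is slewed out directly (PLL-like) while the apparent frequency error accumulates into a running rate correction (FLL-like). The gains and structure are illustrative only and far simpler than ntpd's actual loop filter:

```c
#include <stdio.h>

int main(void) {
    double true_freq = 50e-6;   /* s/s: oscillator actually runs 50 ppm fast */
    double corr_freq = 0.0;     /* s/s: accumulated frequency correction     */
    double offset    = 0.005;   /* s: initial phase error                    */
    double poll      = 64.0;    /* s: interval between measurements          */

    for (int i = 0; i < 12; i++) {
        /* Between polls the clock drifts at the residual (uncorrected) rate */
        offset += (true_freq - corr_freq) * poll;

        /* FLL-like term: treat offset/poll as a frequency-error estimate */
        corr_freq += 0.5 * offset / poll;

        /* PLL-like term: slew out half the remaining phase error */
        offset *= 0.5;

        printf("poll %2d: offset %+9.6f s   freq corr %+6.1f ppm\n",
               i, offset, corr_freq * 1e6);
    }
    return 0;
}
```

Running it shows the offset decaying toward zero while the frequency correction converges on the oscillator's true 50 ppm error, with the damped oscillation characteristic of a feedback loop.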
NTP has two fundamentally different ways to adjust the clock time: stepping (instantaneous jump) and slewing (gradual adjustment). Understanding when each is used is crucial for operations.
```
Step vs Slew Decision Logic
══════════════════════════════════════════════════════════════════════════

 DEFAULT THRESHOLDS (ntpd/chrony)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Offset < 128 ms (0.128 seconds):
   → SLEW the clock
   → Adjust rate by up to 500 ppm
   → Time to correct 128 ms at 500 ppm: ~256 seconds (4+ minutes)

 128 ms ≤ Offset < 1000 seconds (panic threshold):
   → STEP the clock (with safeguards)
   → Instant adjustment
   → May be limited to first N steps only

 Offset ≥ 1000 seconds:
   → PANIC - refuse to adjust
   → Requires manual intervention or explicit config
   → Protects against corrupted time sources

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 SLEWING VISUALIZATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Clock is 50 ms behind. Slewing at 500 ppm = 500 μs/second = 0.5 ms/s

 Time:     0s     50s    100s    150s    200s    250s
           │      │      │       │       │       │
 Offset:  50ms   25ms    0ms    (slew complete, rate normalized)
           └──────┴──────┘
            Clock running 500 ppm fast

 After ~100 seconds: offset corrected
 Then: slew rate returns to nominal
 Total: smooth adjustment, no discontinuity

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 STEPPING VISUALIZATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Clock is 5 seconds behind. Step adjustment:

 Time (as seen by applications):

 Before step:  10:30:00   10:30:01   10:30:02   10:30:03
                  │          │          │          │
                  └──────────┴──────────┴──────────┤
                                                   │ STEP
 After step:                               10:30:08   10:30:09
                                              │          │
                                              └──────────┘

 Discontinuity! Time jumped from 10:30:03 to 10:30:08.
   - 5-second gap in timestamps
   - Applications may see time jump forward
   - Cron jobs scheduled for 10:30:04-10:30:07 may be skipped

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 CHRONY CONFIGURATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 # Allow stepping during first 3 updates if offset > 1 second
 makestep 1.0 3

 # Interpretation:
 #   1.0 = step threshold (seconds) - step if offset > 1 second
 #   3   = limit - only step during first 3 clock updates after start

 # After the first 3 updates, chrony will only slew, even for large offsets.
 # This prevents unexpected steps during normal operation.

 # Alternative: never step (slew only)
 makestep 0 0
```

When the clock steps backward, bizarre things happen: file modification times move into the 'future' relative to new files, make thinks sources are newer than binaries, databases may lose writes, and time-based unique IDs may collide. Most NTP configurations avoid backward steps entirely, preferring to slew even if it takes hours.
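A sketch of the decision logic using the default thresholds quoted above (the constants mirror the ntpd/chrony defaults; the function itself is illustrative):

```c
#include <math.h>
#include <stdio.h>

#define STEP_THRESHOLD   0.128    /* s: below this, slew     */
#define PANIC_THRESHOLD  1000.0   /* s: above this, give up  */
#define MAX_SLEW_PPM     500.0    /* maximum slew rate       */

void plan_correction(double offset) {
    double mag = fabs(offset);
    if (mag >= PANIC_THRESHOLD) {
        printf("%10.3f s: PANIC - refuse to adjust, manual fix needed\n",
               offset);
    } else if (mag >= STEP_THRESHOLD) {
        printf("%10.3f s: STEP (instant jump, possible discontinuity)\n",
               offset);
    } else {
        /* time to slew it out at the maximum rate */
        double secs = mag / (MAX_SLEW_PPM * 1e-6);
        printf("%10.3f s: SLEW, ~%.0f s at %.0f ppm\n",
               offset, secs, MAX_SLEW_PPM);
    }
}

int main(void) {
    plan_correction(0.050);     /* 50 ms  -> slew, ~100 s */
    plan_correction(5.0);       /* 5 s    -> step         */
    plan_correction(3600.0);    /* 1 hour -> panic        */
    return 0;
}
```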
Modern NTP implementations can discipline the clock in two ways: purely in userspace (the daemon adjusts time via system calls) or by offloading some work to the kernel. The kernel discipline mode provides better precision by handling frequency adjustments at the kernel level.
| Aspect | Daemon-Only | Kernel Discipline |
|---|---|---|
| Clock rate adjustment | adjtimex() calls from userspace | Kernel PLL, continuous adjustment |
| Precision | Limited by scheduler latency | Hardware interrupt precision |
| Overhead | Higher (syscall per adjustment) | Lower (kernel loop runs autonomously) |
| Typical accuracy | ~1 ms | ~10 μs |
| PPS support | Limited | Full hardware timestamping |
| Leap second handling | Daemon applies step | Kernel handles automatically |
How kernel discipline works:
When enabled, the NTP daemon communicates with the kernel via the ntp_adjtime() system call, setting:

- The phase (offset) correction the kernel should amortize
- The frequency correction the kernel PLL should apply
- The PLL time constant (tied to the current poll interval)
- The maximum and estimated error bounds
- Status flags such as STA_PLL that enable kernel discipline
The kernel then runs its own PLL continuously, providing smooth frequency adjustment that isn't subject to the daemon's scheduling delays. The result is significantly better accuracy, especially when combined with hardware PPS signals.
Checking kernel discipline status:
```
Kernel Discipline Status Commands
══════════════════════════════════════════════════════════════════════════

 LINUX: Check kernel time status
 $ adjtimex --print
   mode:          8193
   offset:        234        # Current offset in microseconds
   frequency:     1234567    # Frequency offset (scaled ppm)
   maxerror:      500000     # Maximum error bound (μs)
   esterror:      50         # Estimated error (μs)
   status:        8193       # Status flags
   time_constant: 7          # PLL time constant
   precision:     1          # Clock precision (μs)
   tolerance:     32768000   # Max freq tolerance (scaled ppm)

 Status flags (8193 = binary 10000000000001):
   STA_PLL  (0x0001) = PLL updates enabled
   STA_NANO (0x2000) = Nanosecond resolution mode

 Looking for:
   ✓ STA_PLL set:      kernel discipline active
   ✓ Low offset:       good synchronization
   ✓ Stable frequency: clock is disciplined

 CHRONY: Check kernel sync
 $ chronyc tracking
   ...
   Leap status : Normal
   ...
   # "Normal" indicates kernel is synchronized
   # "Not synchronised" means kernel discipline not active

 $ chronyc ntpdata
   ...
   Interleaved mode : No
   Authenticated    : No
   TX timestamping  : Kernel
   RX timestamping  : Kernel
   Total TX         : 1234
   ...
   # "Kernel" timestamping = using kernel for better precision
```

Kernel discipline works poorly in VMs because the virtual CPU isn't continuously scheduled. When the VM is paused, the kernel's clock discipline loop doesn't run, causing significant errors when the VM resumes. Most VM-aware NTP configurations disable kernel discipline and rely on daemon-only mode with more aggressive polling.
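On Linux, the same state can be read programmatically. This minimal sketch uses the glibc ntp_adjtime(2) wrapper with modes = 0, which queries the kernel without changing anything; the fields correspond to the adjtimex output above:

```c
#include <stdio.h>
#include <sys/timex.h>

int main(void) {
    struct timex tx = { .modes = 0 };        /* read-only query */
    int state = ntp_adjtime(&tx);
    if (state == -1) {
        perror("ntp_adjtime");
        return 1;
    }

    /* Return value encodes clock state; TIME_ERROR means unsynchronized */
    printf("state    : %s\n", state == TIME_ERROR ? "unsynchronized" : "ok");
    printf("offset   : %ld (us, or ns if STA_NANO)\n", tx.offset);
    printf("freq     : %.3f ppm\n", tx.freq / 65536.0);  /* 2^16-scaled ppm */
    printf("esterror : %ld us\n", tx.esterror);
    printf("PLL on   : %s\n", (tx.status & STA_PLL)  ? "yes" : "no");
    printf("nano mode: %s\n", (tx.status & STA_NANO) ? "yes" : "no");
    return 0;
}
```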
Clock synchronization is where NTP's theoretical foundations meet practical engineering. The algorithms we've explored represent decades of refinement in distributed systems and control theory. Let's consolidate the key concepts:

- Four timestamps (T1-T4) yield both round-trip delay and clock offset; with symmetric paths, the delay cancels out of the offset calculation
- The clock filter keeps the last 8 samples per peer and trusts the minimum-delay sample, since low delay means little queuing jitter
- The selection (intersection) algorithm separates truechimers from falsetickers; clustering then prunes the survivors to the best few
- A hybrid PLL/FLL discipline loop steers both phase and frequency, with the poll interval adapting to conditions
- Small offsets are slewed gradually; large offsets are stepped; absurd offsets trigger a panic stop
What's next:
With clock synchronization algorithms understood, we turn to a critical concern: NTP security. The final page explores the threats facing NTP, authentication mechanisms, and modern protocols like NTS (Network Time Security) that protect the temporal infrastructure of the internet.
You now understand how NTP measures, filters, selects, and disciplines clocks—the complete data path from network packets to system time. This knowledge is essential for understanding NTP's behavior, diagnosing synchronization problems, and tuning for optimal accuracy.