The School for Sysadmins Who Can’t Timesync Good and Wanna Learn To Do Other Stuff Good Too, part 4 – monitoring & troubleshooting

(See the sidebar for other posts in this series.)

Am I in sync?

So now that we’ve configured NTP, how do we know it’s working?  As Limoncelli et al. have said, “If you aren’t monitoring it, you aren’t managing it.”  There are several tools you can use to monitor and troubleshoot your NTP service.


ntpq is part of the NTP distribution, and is the most important monitoring and troubleshooting tool you’ll use; you run it on the NTP server to query various parameters about the operation of the local ntpd.  (It can also be used to query a remote NTP server, but this is prevented by the default configuration in order to limit NTP’s usefulness in reflective DDoS attacks; ntpq can even adjust the configuration of a running ntpd, but this is rarely done in practice.)

The command you’ll most frequently use to determine NTP’s health is ntpq -pn.  The -p tells ntpq to print its list of peers, and the -n flag tells it to use numeric IP addresses rather than looking up DNS names.  (You can leave off the -n if you like waiting for DNS lookups and complaining to people about their broken reverse lookup domains.  Personally, I’m not a fan of either.)  This can be run as a normal user on your NTP server; here’s what the output looks like:

$ ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
+    2 u  255 1024  177    0.527    0.082   2.488
*   .NMEA.           1 u   37   64  376    0.598    0.150   2.196
-    2 u  338 1024  377   45.129   -1.657  18.318
+    2 u  576 1024  377   32.610   -0.345   4.734
+     2 u  158 1024  377   54.957   -0.281   3.400
-2001:4478:fe00:  2 u  509 1024  377   36.336    7.210   6.654
-    2 u  384 1024  377   36.832   -1.825   7.134
-2001:67c:1560:8   2 u  846 1024  377  370.902   -1.583   3.784
-   2 u  772 1024  377  328.477   -1.623  51.695

Let’s run through the fields in order:

The IPv4 or IPv6 address of the peer
The IPv4 or IPv6 address of the peer’s currently selected peer, or a textual identifier referring to the stratum 0 source in the case of stratum 1 peers.
The NTP stratum of the peer.  You’ll recall from previous parts of this series that the stratum of an NTP server is determined by the stratum of its time source, so in the example above we’re synced to a stratum 1 server, therefore the local server is stratum 2.
The type of peer association; in the example above, all of the peers are of type unicast.  Other possible types are broadcast and multicast; we’ll focus exclusively on unicast peers in this series; see [Mills] for more information on the other types.
The elapsed time, in seconds, since the last poll of this peer.
The interval, in seconds, between polls of this peer.  So if you run ntpq -pn multiple times, you’ll see the “when” field for each peer counting upwards until it reaches the “poll” field’s value.  NTP will automatically adjust the poll interval based on the reliability of the peer; you can place limits on it with the minpoll and maxpoll directives in ntp.conf, but usually there’s no need to do this.  The number is always a power of 2, and the default range is 64 (2^6) to 1024 (2^10) seconds (so, a bit over 1 minute to a bit over 17 minutes).
The reachability of the peer over the last 8 polls, represented as an octal (base 8) number.  Each bit in the reach field represents one poll: if a reply was received, the bit is 1; if the peer failed to reply or the reply was lost, it is 0.  So if the peer was 100% reachable over the last 8 polls, you’ll see the value 377 (binary 11111111) here.  If 7 polls succeeded, then one failed, you’ll see 376 (binary 11111110).  If one failed, then 5 succeeded, then one failed, then another succeeded, you’ll see 175 (binary 01111101).  If they all failed, you’ll see 0.  (I’m not sure why this is displayed in octal; hexadecimal would save a column and is more familiar to most programmers & sysadmins.)
The round-trip transit time, in milliseconds, that the poll took to be sent to and received from the peer.  Low values mean that the peer is nearby (network-wise); high values mean the peer is further away.
The estimated offset, in milliseconds, of the peer’s clock relative to the local clock [RFC5905 s4], which ntpd calculates from the poll timestamps.  Obviously, lower is better, since that’s the whole point of NTP.
The weighted RMS average of the differences in offsets in recent polls.  Lower is better; this figure represents the estimated error in calculating the offset of that peer.
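To make the reach field concrete, here’s a short Python sketch (my own illustration, not part of ntpq; the function name is invented) that decodes the octal bitmap into the success/failure history of the last 8 polls:

```python
def decode_reach(reach_octal):
    """Decode ntpq's reach field (an octal bitmap of the last 8 polls)
    into a list of booleans: oldest poll first, newest poll last."""
    bits = int(reach_octal, 8)
    return [bool(bits & (1 << i)) for i in range(7, -1, -1)]

decode_reach("377")  # all 8 polls succeeded
decode_reach("376")  # only the newest poll failed
decode_reach("175")  # one fail, 5 successes, a fail, a success
```

Note that the newest poll occupies the least significant bit; each new poll shifts the bitmap left by one.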
There’s actually an unlabelled field right at the beginning of each row, before all the other information. It’s a one-character column called the tally code.  It represents the current state of the peer from the perspective of the various NTP algorithms.  The values you’re likely to see are:

  • * system – this is the best of the candidates which survived the filtering, intersection, and clustering algorithms
  • o PPS – this peer is preferred for pulse-per-second signalling
  • # backup – more than enough sources were supplied and ntpd doesn’t need them all, so this peer was excluded from further calculations
  • + candidate – this peer survived all of the testing algorithms and was used in calculating the correct time
  • - outlier – this peer’s interval includes the true time, but it was discarded during the cluster algorithm
  • x falseticker – this peer was outside the possible range and was discarded during the selection (intersection) algorithm
  • [space] – invalid peer; might cause a synchronisation loop, have an incorrect stratum, or might be unreachable or too far away from the root servers

Aside: the anatomy of a poll, and the selection (intersection) algorithm

Before we dig into applying the above knowledge of the peer fields to our example, we need to take a quick side trip into two more bits of theory.  Firstly, how NTP polls work.  You can find more detail on this process in RFC5905, but in a nutshell, each poll uses 4 timestamps:

t1 – the time the poll request leaves the local system
t2 – the time the poll request arrives at the remote peer
t3 – the time the poll reply leaves the remote peer
t4 – the time the poll reply arrives at the local system

t1 & t4 are recorded by the local system and are relative to its clock, t2 & t3 are recorded by the peer, and are relative to its clock.  Here’s a graphical representation, adapted from [Mills]:

[Diagram: the four poll timestamps, adapted from Mills]

The total delay (the time taken for the request to get to and from the peer) is the overall time minus the processing time on the peer, i.e. (t4 – t1) – (t3 – t2).  Because it can’t know the network topology or utilisation between the local system and the remote peer, NTP assumes that the delay in both directions is equal, i.e. that the peer’s reported times are in the middle of the round trip.
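As a worked example with made-up timestamps (the function name is my own), the delay and the implied clock offset can be computed like this; the offset formula embodies NTP’s assumption of symmetric network delay:

```python
def poll_stats(t1, t2, t3, t4):
    """t1/t4 are read from the local clock, t2/t3 from the peer's clock."""
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    offset = ((t2 - t1) + (t3 - t4)) / 2   # peer's clock minus ours
    return delay, offset

# Peer's clock is ~10 s ahead of ours; the round trip takes ~100 ms:
poll_stats(0.0, 10.05, 10.06, 0.11)  # delay ≈ 0.10 s, offset ≈ 10.0 s
```

If the two network legs are in fact asymmetric, the error in the offset estimate can be up to half the total delay, which is one reason low-delay (nearby) sources are preferred.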

NTP performs the above calculation for every poll of every peer.  When the results from peers are available, NTP runs the selection (or intersection) algorithm.  The intersection algorithm is a modified version of an algorithm first devised by Keith Marzullo, and is used to determine which of the peers are producing possible reports of the real time, and which are not.

The intersection algorithm attempts to find the largest possible agreement about the true time represented by its remote peers.  It does this by finding the interval which includes the highest low point and the lowest high point of the greatest number of peers.  (Read that again a couple of times to make sure it makes sense.)  This agreement must include at least half of the total number of peers for NTP to consider it valid.
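Here’s a simplified Python sketch of the core idea — Marzullo’s sweep over interval endpoints, not ntpd’s actual implementation, which adds further refinements.  Given each peer’s [lowest, highest] possible time, it finds the interval covered by the greatest number of peers:

```python
def marzullo(intervals):
    """Return (count, (lo, hi)): the interval covered by the most
    source intervals, where each source is a (low, high) pair."""
    events = []
    for lo, hi in intervals:
        events.append((lo, -1))  # -1: an interval opens here
        events.append((hi, +1))  # +1: an interval closes here
    events.sort()                # opens sort before closes at equal points
    best = count = 0
    best_range = None
    for i, (point, kind) in enumerate(events):
        count -= kind            # open increments, close decrements
        if count > best:
            best = count
            best_range = (point, events[i + 1][0])
    return best, best_range

# Two peers agree on [11, 12]; the third is off on its own:
marzullo([(8, 12), (11, 13), (14, 15)])  # -> (2, (11, 12))
```

In ntpd’s terms, peers whose intervals fall inside the winning overlap are truechimers; the rest are falsetickers.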

If you forget everything else about NTP, try to remember the intersection algorithm, because it helps to make sense of NTP’s best practices, which might otherwise seem pointless.  There are various diagrammatic representations of the intersection algorithm around, including Wikipedia:

Marzullo's algorithm example from Wikipedia

Or this one from Mills:

DTSS algorithm example from Mills

But what started to make NTP click into place for me was this one from Deeths & Brunette in Sun’s blueprints series:

Intersection algorithm example from David Deeths & Glenn Brunette – Sun Blueprints

The intersection algorithm currently in use in NTPv4 is not perfectly represented by any of the above diagrams (since the current version requires that the midpoint of the round trip for all truechimers is included in the intersection), but they are useful nonetheless in helping to visualise the intersection algorithm.

Interpreting ntpq -pn

So let’s look back at the example above and make a few observations about our ntpq -pn output:

  • There are a couple of peers at the start of the list with RFC1918 addresses and very low delay (less than 1 ms).  These are peers on my local network.  The latter of these is a stratum 1 server using the NMEA driver, a reference clock which uses GPS for timing, but also includes a PPS signal for additional accuracy.  (More on this time server in a later post.)  Both of the LAN peers have missed a poll recently, but they’re still reliable enough and accurate enough that they are included in the calculations, and the local stratum 1 server is the selected sync peer.
  • There are four other peers with delays in the 30-60 ms range; these are public servers in Australia.
  • Then there are two other peers with delays in the 300-400 ms range; these are servers in Canonical’s network which I monitor; they live in our European data centres.

Note that all of these are still possible sources of accurate time – not one of them is excluded as an invalid peer (tally code space) or a falseticker (tally code x).  We’ve also got pretty low jitter on most of them, so overall our NTP server is in good shape.
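When reading this output programmatically (for example, from a monitoring script), a rough first-pass health check — my own heuristic, not an official NTP rule — can be made from the tally codes alone:

```python
def looks_synced(tally_codes):
    """Rough heuristic: healthy if ntpd has selected a sync peer
    (tally '*' or 'o') and at most half the peers are falsetickers."""
    has_sync = any(c in "*o" for c in tally_codes)
    falsetickers = sum(c == "x" for c in tally_codes)
    return has_sync and falsetickers * 2 <= len(tally_codes)

looks_synced("+*-++----")  # the example above: synced, no falsetickers
looks_synced("xxxx")       # no sync peer and all falsetickers: unhealthy
```

A real check (like the ntpmon tool discussed below) would also look at offset, reachability, and stratum rather than tally codes alone.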

Other ntpq metrics

There are a couple more metrics of interest which we can get from ntpq:

  1. root delay – the total round-trip delay, in milliseconds, along the path to our stratum 0 time sources
  2. root dispersion – the maximum possible offset, in milliseconds, that our local clock could be from the stratum 0 time sources, given the characteristics of our peers.
  3. system offset – the local clock’s offset, in milliseconds, from NTP’s best estimate of the correct time, given our peers
  4. system jitter – as for peer jitter, this is the overall error in estimating the system offset
  5. system frequency (or drift) – the estimated error, in parts per million, of the local clock

Here’s an example of retrieving these metrics:

$ ntpq -nc readvar 0
associd=0 status=0615 leap_none, sync_ntp, 1 event, clock_sync,
version="ntpd 4.2.6p5@1.2349-o Fri Jul 22 17:30:51 UTC 2016 (1)",
processor="x86_64", system="Linux/3.16.0-4-amd64", leap=00, stratum=2,
precision=-20, rootdelay=0.579, rootdisp=5.813, refid=,
reftime=dbbeb981.38507158 Sat, Oct 29 2016 16:00:33.219,
clock=dbbeba0c.ae22daf2 Sat, Oct 29 2016 16:02:52.680, peer=514, tc=10,
mintc=3, offset=-0.102, frequency=6.245, sys_jitter=0.037,
clk_jitter=0.061, clk_wander=0.000

This tells ntpq to read the variables of association 0, which refers to the local system itself rather than any individual peer.  (You can get similar figures for each active peer association; see the ntpq man page for details.)


It should probably go without saying, but if ntpq doesn’t produce the kind of output you were expecting, check the system logs (/var/log/syslog on Ubuntu & other Debian derivatives, or /var/log/messages on Red Hat-based systems).  If ntpd didn’t start for some reason, you’ll probably find the answer there.  If you’re experimenting with changes to your NTP configuration, you might want to have tail -F /var/log/syslog | grep ntp running in another window while you restart ntpd.

Other monitoring tools

  • ntptrace – we mentioned this in the previous post.  It’s rarely used nowadays since the default ntpd configuration prevents ntptrace from remote hosts, but can be helpful if you run a local reference clock which you’ve configured for remote query from authorised sources.
  • ntpdate – set the time on a system not running ntpd using one or more NTP servers. This tool is deprecated (use ntpd -g instead), but it has one really helpful flag: -q (query only) – this does a dry run which goes through the process of contacting the NTP server(s), calculating the correct time, and comparing with the local clock, without actually changing the local time.
  • /var/log/ntpstats/clockstats – this log file, if enabled, has some interesting data from your local reference clock.  We’ll cover it in more detail in a later post.

So those are the basic tools for interactive monitoring and troubleshooting of NTP.  Hopefully you’ll only have to use them when investigating an anomaly or fixing things if something goes wrong.  So how do you know if that’s needed?


At work we use Nagios for alerting, so when I wanted to improve our NTP alerting, I went looking for Nagios plugins.  I was disappointed with what I found, so I ended up writing my own check, ntpmon, which you can find on GitHub and Launchpad.  The goal of ntpmon is to cover the most common use cases with reasonably comprehensive checks at the host level (as opposed to the individual peer level), and to have sensible, but reasonably stringent, defaults.  Alerts should be actionable, so my aim is to produce a check which points people in the right direction to fix their NTP server.

Here’s a brief overview of the alternative Nagios checks:

Some NTP checks are provided with Nagios (you can find them in the monitoring-plugins-basic package in Ubuntu).  check_ntp_peer has some good basic checks, but doesn’t cover a wide enough variety of metrics, and is rather liberal in what it considers acceptable time synchronisation.  check_ntp_time is rather strange: it checks the clock offset between the local host and a given remote NTP server, rather than interrogating the local NTP server for its offset.  Use check_ntp_peer if you are limited to the built-in checks; it gets enough right to be better than nothing.

check_ntpd was the best of the checks I found before writing ntpmon.  Use it if you prefer Perl over Python.  Most of the remaining checks in the Nagios exchange category for NTP are either token gestures to say that NTP is monitored, or niche solutions.


For historical measurement and trending, there are a number of popular solutions, all with rather patchy NTP coverage:

collectd has an NTP plugin, which reports the frequency, system offset, and something else called “error”, the meaning of which is rather unclear to me, even after reading the source code and comparing the graphed values with known quantities from ntpmon.  It also reports the offset, delay, and dispersion for each peer.

The prometheus node_exporter includes NTP, but similar to check_ntp_time, it only reports the offset of the local clock from a configured peer, and that peer’s stratum.  This seems of such minimal usefulness as not to be worth storing or graphing.

Telegraf has an ntpq input plugin, which offers a reasonably straightforward interface to the data for individual peers in ntpq’s results.  It’s fairly young, and has at least a couple of glaring bugs, like getting the number of seconds in an hour wrong, and not converting reachability from an octal bitmap to a decimal counter.

Given the limitations of the above solutions, and because I’m trying to strike a balance between minimalism and overwhelming & unactionable data, I extended ntpmon to support telemetry.  This is available via the Nagios plugin through the standard reporting mechanism, and as a collectd exec plugin.  I intend to add telegraf and/or prometheus support in the near future.

Here’s an example from the Nagios check:

$ /opt/ntpmon/ 
OK: offset is -0.000870 | frequency=12.288000 offset=-0.000870 peers=10 reach=100.000000 result=0 rootdelay=0.001850 rootdisp=0.032274 runtime=120529 stratum=2 sync=1.000000 sysjitter=0.001121488 sysoffset=-0.000451404 tracehosts= traceloops=

And here’s a glimpse of the collectd plugin in debug mode:

PUTVAL "localhost/ntpmon-frequency/frequency_offset" interval=60 N:12.288000000
PUTVAL "localhost/ntpmon-offset/time_offset" interval=60 N:-0.000915111
PUTVAL "localhost/ntpmon-peers/count" interval=60 N:10.000000000
PUTVAL "localhost/ntpmon-reachability/percent" interval=60 N:100.000000000
PUTVAL "localhost/ntpmon-rootdelay/time_offset" interval=60 N:0.001850000
PUTVAL "localhost/ntpmon-rootdisp/time_offset" interval=60 N:0.036504000
PUTVAL "localhost/ntpmon-runtime/duration" interval=60 N:120810.662998199
PUTVAL "localhost/ntpmon-stratum/count" interval=60 N:2.000000000
PUTVAL "localhost/ntpmon-syncpeers/count" interval=60 N:1.000000000
PUTVAL "localhost/ntpmon-sysjitter/time_offset" interval=60 N:0.001096107
PUTVAL "localhost/ntpmon-sysoffset/time_offset" interval=60 N:-0.000451404
PUTVAL "localhost/ntpmon-tracehosts/count" interval=60 N:2.000000000
PUTVAL "localhost/ntpmon-traceloops/count" interval=60 N:0.000000000

This post ended up being pretty long and detailed; hope it all makes sense.  As always, contact me if you have questions or feedback.

Read on in part 5 – myths, misconceptions, and best practices.

The School for Sysadmins Who Can’t Timesync Good and Wanna Learn To Do Other Stuff Good Too, part 3 – NTP installation and configuration

(Part 1 one of this series gave the background and rationale, and part 2 covered the basics of how NTP works.)

Getting NTP

Depending on how you install Linux, you might already have a pretty workable NTP client, but for the tools we’ll be working with, we’ll need the NTP reference implementation server.  On Ubuntu (these examples use 16.04 “xenial xerus”, but the instructions should work on all current Debian-derived distributions), you can install this with the usual packaging tools:

ubuntu@machine-5:~$ sudo apt-get install ntp
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
Suggested packages:
The following NEW packages will be installed:
  libopts25 ntp
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 577 kB of archives.
After this operation, 1,791 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 xenial/main amd64 libopts25 amd64 1:5.18.7-3 [57.8 kB]
Get:2 xenial-updates/main amd64 ntp amd64 1:4.2.8p4+dfsg-3ubuntu5.3 [520 kB]
Fetched 577 kB in 0s (9,198 kB/s)
Selecting previously unselected package libopts25:amd64.
(Reading database ... 87077 files and directories currently installed.)
Preparing to unpack .../libopts25_1%3a5.18.7-3_amd64.deb ...
Unpacking libopts25:amd64 (1:5.18.7-3) ...
Selecting previously unselected package ntp.
Preparing to unpack .../ntp_1%3a4.2.8p4+dfsg-3ubuntu5.3_amd64.deb ...
Unpacking ntp (1:4.2.8p4+dfsg-3ubuntu5.3) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for systemd (229-4ubuntu10) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up libopts25:amd64 (1:5.18.7-3) ...
Setting up ntp (1:4.2.8p4+dfsg-3ubuntu5.3) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for systemd (229-4ubuntu10) ...
Processing triggers for ureadahead (0.100.0-19) ...

Here are some of the components of the ntp package with which you might want to be familiar (use dpkg -L ntp on Ubuntu & other Debian-based distributions to see all the contents of the package; on Red Hat-based distributions, use rpm -ql ntp):

  • /etc/ntp.conf – the main configuration file; if you make changes to this file, you’ll need to restart ntpd with sudo service ntp restart to activate them
  • /var/log/ntpstats/ – statistics logging, if enabled, writes its logs here
  • ntpd – the main NTP server process; you won’t usually need to run this directly
  • a utility which prints the exact time according to NTP
  • ntpq – NTP query; this is the most important tool for troubleshooting
  • ntptrace – traces from your local server to stratum 1; sometimes helpful if you run a local reference clock, but if you’re using publicly available NTP servers, you’ll probably end up with a rather dull result like this:

ubuntu@machine-5:~$ ntptrace
localhost: stratum 3, offset -0.000445, synch distance 0.011506
timed out, nothing received
***Request timed out

This is because of the default security restrictions on querying NTP – more on this later.

On Ubuntu, NTP ships with a default NTP AppArmor profile which limits its capabilities; distributions which use SELinux by default probably ship with similar restrictions.
  • a hook which allows clients to receive their NTP server list via DHCP; we won’t be using this, but be aware that in the default Ubuntu configuration, if your network supplies NTP servers via DHCP, those settings will override the settings in /etc/ntp.conf.

Initial configuration

Straight after installation you should have a pretty usable NTP server, assuming you have Internet access which allows outbound traffic to UDP port 123 (some ISPs block this).  Here’s the default configuration on Ubuntu 16.04 LTS (with comments removed):

ubuntu@machine-5:~$ grep '^[^#]' /etc/ntp.conf
driftfile /var/lib/ntp/ntp.drift
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
pool iburst
pool iburst
pool iburst
pool iburst
restrict -4 default kod notrap nomodify nopeer noquery limited
restrict -6 default kod notrap nomodify nopeer noquery limited
restrict ::1
restrict source notrap nomodify noquery

Let’s break down the configuration:

  1. driftfile – The drift (a.k.a. frequency) is the estimated error rate (in parts-per-million) of the local clock. The NTP daemon saves its estimate every hour to the file named here so that it doesn’t have to recalculate this error rate on startup.  If the file doesn’t exist, ntpd will assume that the error rate is zero and recalculate it from scratch, so it’s harmless to delete the file.  But leaving it there helps get your clock headed the right way faster on startup.
  2. statistics & filegen – these direct ntpd to create various types of statistics, if the statsdir directive is enabled (which it’s not here).  You probably don’t need the stats files unless you are running a stratum 1 server, or are especially interested in the quality of your individual time sources.
  3. pool – this defines the sources of time for your NTP server; traditionally, “server” was used here, but the pool directive introduced in recent versions offers much improved behaviour, and should nearly always be used in preference to server.  The source may be a hostname or IP address, but hostnames are preferred, since recent versions of ntpd will retry DNS lookups and switch to a new pool server if connectivity is lost.  The “iburst” option at the end of the line causes ntpd to send a burst of eight packets (as opposed to just one) when initially contacting the source; using it is nearly always recommended, as it helps get your clock synced sooner.  The time sources used here are the public servers available in the NTP pool, plus the public Ubuntu NTP servers run by Canonical (my employer).
  4. restrict – the default set of restrictions allows use of pool servers, and permits querying of associations (the records about time sources) from the local host, but does not allow querying from elsewhere.  Clients can still use this server as a time source (firewall rules permitting).  Querying of the association list is something which was historically used in reflective DDoS attacks, so the default restrictions should be kept in place unless you have a good reason to do otherwise.
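To put the driftfile’s parts-per-million figure in perspective, ppm converts to seconds of error per day like this (a quick illustrative calculation of my own, not an ntpd feature):

```python
def drift_seconds_per_day(ppm):
    """Seconds gained or lost per day by a clock drifting at `ppm`
    parts per million, if left uncorrected."""
    return ppm * 86400 / 1_000_000

drift_seconds_per_day(6.245)   # ≈ 0.54 s/day: a fairly typical figure
drift_seconds_per_day(100.0)   # ≈ 8.64 s/day: noticeable within hours
```

This is why a missing drift file is harmless but mildly annoying: until ntpd re-learns the frequency error, the clock wanders at its natural rate.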

If you have average time synchronisation needs, the default configuration will probably work just fine and you won’t have to change anything, with the possible exception of the time sources. A common configuration change you might make is changing [0-3] to the pool servers for your country or region.  For example, my home network’s NTP servers use [0-3]  For a full list of the available pools, check the global list, then drill down by continent and country.
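For example, an Australian server’s time sources might look like the snippet below (an illustrative ntp.conf fragment — substitute your own region’s zone names, which you can confirm on the pool project’s website):

```conf
# Hypothetical example: four regional zones from the public NTP pool
pool 0.au.pool.ntp.org iburst
pool 1.au.pool.ntp.org iburst
pool 2.au.pool.ntp.org iburst
pool 3.au.pool.ntp.org iburst
```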

Don’t necessarily just assume that your region/country servers are close, though.  Recently I was working on a customer’s system located in South America and found that the US pool servers were much closer in terms of network latency than the South American ones! You should watch carefully which sources are supplied to you from the NTP pool and change them if the pool returns servers which are not physically nearby. (Where “nearby” generally means within a few thousand km; a rough rule of thumb for “average” use is: time sources should be less than 100 ms away.)

Read on in part 4 of this series, where we’ll look at monitoring and troubleshooting your NTP service.

The School for Sysadmins Who Can’t Timesync Good and Wanna Learn To Do Other Stuff Good Too, part 2 – how NTP works

(Part 1 covered the background and rationale.  Part 3 is about installation and configuration.)

What is NTP?

NTP (Network Time Protocol) is an Internet standard for time synchronisation covered by multiple RFCs.  “NTP is [arguably] the longest running, continuously operating, ubiquitously available protocol in the Internet” [Mills].  It has been operating since 1985, which is several years before Tim Berners-Lee invented the WWW.  The current version is NTPv4, described in RFC5905, which also covers SNTP (Simple NTP), a more limited version designed mostly for clients.

Whilst there are multiple different implementations of NTP, I’ll be focusing on the reference implementation, from the Network Time Foundation, because that’s what I’m most familiar with, and because it has the most online reference material available.

How Linux keeps time

Linux and other Unix-like kernels maintain a system clock which is set at system boot time from a hardware real time clock (RTC), and is maintained by regular interrupts from a timing circuit, usually a crystal oscillator.

The kernel clock is maintained in UTC; time is represented as the number of seconds since midnight, 1 January 1970 UTC (the Unix epoch).  Applications can read the system clock via time(2), gettimeofday(2), and clock_gettime(2), the last two of which offer micro- and nanosecond resolution.

System calls are available to set the time if it needs to change (called “stepping” the clock), but the more commonly-used technique is to ask the kernel to adjust the system clock gradually via the adjtime(3) library function or adjtimex(2) system call (called “slewing” the clock).  Slewing ensures that the clock counter continues to increase rather than jumping suddenly (even if the clock needs to be adjusted backwards), by making slight changes in the length of seconds on the system clock.  If the clock needs to go forwards, the seconds are shortened (sped up) slightly until true time is reached; if the clock needs to go backwards, the seconds are lengthened (slowed down) slightly until true time catches up.  (There are other interesting timing functions supported by the Linux kernel; see the documentation for more.)
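Slewing is deliberately gentle: the conventional maximum slew rate is around 500 ppm (i.e. about 0.5 ms of correction per second of real time), so large offsets take a long time to slew out.  A quick back-of-the-envelope calculation (my own sketch; the constant is the commonly cited limit, not something read from the kernel):

```python
MAX_SLEW_PPM = 500  # commonly cited maximum slew rate (assumption)

def slew_duration(offset_seconds):
    """Roughly how long slewing out an offset takes at the maximum rate."""
    return abs(offset_seconds) / (MAX_SLEW_PPM / 1_000_000)

slew_duration(1.0)    # ≈ 2000 s: over half an hour for a 1 s offset
slew_duration(0.128)  # ≈ 256 s
```

This is why ntpd will step rather than slew when the offset is large (by default, more than 128 ms): slewing out a big offset would take impractically long.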

Because oscillators are imperfect, system time is always out from UTC by some amount.  Better quality hardware is accurate to within very small variance from the true time (unnoticeable by humans), while cheap hardware can be out by quite significant amounts.  Clock accuracy is also affected by other factors such as temperature, humidity, and even system load.  NTP is designed to receive timing information from external sources and use clock slewing (or stepping, where necessary) to keep the system clock as close as possible to true UTC time.

How NTP works

The notion of one true time is central to how NTP operates, and it has numerous checks and balances in it which are designed to keep your system zeroing in on the one true time. (For a more detailed and authoritative explanation of this, see Mills’ “Notes on setting up a NTP subnet“.)


The primary means which NTP uses for determining the correct time is just to ask for it!  An NTP server simply polls other NTP servers (on UDP port 123) or other time sources (more on this below) for their current time, measures how long it takes the request to get there and back, and analyses the results to determine which sources represent the true time.  The polling process is very efficient and can support huge numbers of clients with a minimum of bandwidth.

An NTP poll happens at intervals ranging from 8 seconds to 36 hours (going up in powers of two), with 64 seconds to 1024 seconds being the default range.  The NTP daemon will automatically adjust its polling interval for each source based on the previous responses it has received.  On most systems with a reliable clock and reliable time sources, poll times will settle on the maximum within a few hours of the NTP daemon being started.  Here’s an example from one of my systems:

$ ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
+    2 u  255 1024  177    0.527    0.082   2.488
*   .NMEA.           1 u   37   64  376    0.598    0.150   2.196
-    2 u 1067 1024  377   44.964   -1.948   0.764
+    2 u  101 1024  377   32.703   -1.666   8.223
+    2 u  953 1024  377   55.609   -0.120   6.276
-2001:4478:fe00:  2 u   76 1024  377   35.971    4.814   1.848
-2001:67c:1560:8    2 u 1017 1024  377  376.041   -3.303   4.412
+    2 u 1004 1024  377  325.680    1.469  38.157

The 6th column is the poll time, which is 1024 seconds for all but one of its peers.  (More on how to interpret the output of ntpq will come in a later post.)


So if your system gets time from another system on the network, from where does that system get its time?  NTP time is ultimately sourced from accurate external sources like atomic clocks, some of which use the ultimate source of the standard second, the Caesium atom, as their reference.  Such time sources are expensive, so other sources are used as well, such as radio clocks, stable oscillators, or (perhaps most commonly) the GPS satellite system (which itself uses atomic clocks).  These sources are collectively referred to as reference clocks.

In the NTP network, a reference clock is stratum 0 – that is, an authoritative source of time.  An NTP server which uses a stratum 0 clock as its time source is stratum 1.  Stratum 2 servers get their time from stratum 1 servers; stratum 3 servers get their time from stratum 2 servers, and so on.  In practice it’s rare to see servers higher than stratum 4 or 5 on the Internet [Mills] [Minar].

Stratum 1 servers are connected to their stratum 0 sources via local hardware such as a serial port or expansion card slot.  The reason we have additional strata after stratum 1 is to ensure that there are enough servers to cope with the load from all the clients.  As much as it is possible, network delay (latency) between strata should be kept to a minimum.


NTP uses a number of different algorithms to ensure that the time it receives is accurate. [Mills]  Knowing how these algorithms work at a basic level can help us avoid configuration mistakes later, so we’ll look at them here briefly:

  1. filtering – The poll results from each time source are filtered in order to produce the most accurate results. [Mills]
  2. selection (a.k.a. intersection) – The results from all sources are compared to determine which ones can potentially represent the true time, and those which cannot (called falsetickers or falsechimers) are discarded from further calculations. [Mills]
  3. clustering – The surviving time sources from the selection algorithm are combined using statistical techniques. [Mills]

Read on in part 3 – installation and configuration, where we’ll explore how to install and configure NTP on an Ubuntu Linux 16.04 system.

The School for Sysadmins Who Can’t Timesync Good and Wanna Learn To Do Other Stuff Good Too, part 1 – the problem with NTP

(With apologies to Derek Zoolander and Justin Steven.  And to whoever had to touch the HP-UX NTP setup at Queensland Police after I left. And to anyone who prefers the American spelling “synchronization”.)

(This is the first of a series on NTP.  Part 2 is an overview of how NTP works.)

The problem with NTP

In my experience, Network Time Protocol (NTP) is one of the least well-understood of the fundamental Internet application-layer protocols, and very few IT professionals operate it effectively.  Part of the reason for this is that the documentation for NTP is highly technical and assumes a certain level of background knowledge.

I first encountered NTP more than 20 years ago, and my first efforts with it were an unmitigated disaster due to my ignorance of how the protocol was designed to function.  Since then virtually every IT environment I’ve encountered has had a less-than-optimal NTP setup.

I am still far from an expert on NTP, but I’ve learned quite a lot about operating it since my early days.  I hope this series of posts will help you develop a working knowledge of NTP faster and get the basics of NTP configuration right in your environment.

Why learn NTP?

Why bother learning this rather obscure corner of Internet lore?  I mean, the Internet mostly works, despite this alleged widespread lack of expertise in time sync, right?

Here are some of the reasons you might want to learn more about NTP:

  1. You run Ceph, MongoDB, Kerberos, or a similar distributed system, and you want it to actually work.
  2. You want your logs to match up across multiple systems, potentially on multiple continents.
  3. You like learning about new things and tinkering with embedded systems.
  4. You think bandwidth-efficient, high-precision time synchronisation is just a fun, nerdy problem.
  5. You think this is cool:

    A scenario where the latter behavior [the PPS driver disciplining the local clock in the absence of external sources] can be most useful is a planetary orbiter fleet, for instance in the vicinity of Mars, where contact between orbiters and Earth [is possible] only one or two times per Sol (Mars day). These orbiters have a precise timing reference based on an Ultra Stable Oscillator (USO) with accuracy in the order of a Cesium oscillator. A PPS signal is derived from the USO and can be disciplined from Earth on rare occasion or from another orbiter via NTP. In the above scenario the PPS signal disciplines the spacecraft clock between NTP updates.

    (Personally, they had me at “planetary orbiter fleet”. 🙂 )


In this series, I’ll describe a few best practices for setting up NTP in a standard 64-bit Ubuntu Linux 16.04 LTS environment.  Bear in mind that this scope is quite limited; the advice will not apply in all circumstances, and it intentionally ignores the less common use cases.  Further caveats:

    1. I have no looks.
    2. I am not an expert.  My descriptions of the algorithms are based on the documentation and operational experience.  I’m not a member of the NTP project; I’ve never submitted a patch; I’ve never compiled ntpd from source (I hate reading & writing C/C++).
    3. I’ve only worked with the reference implementation of NTP, and only on Linux, with only one reference clock driver (NMEA) and a limited range of configuration options.
    4. I will be glossing over a lot of detail.  Sometimes that’s because I don’t think it’s necessary for working with NTP successfully; sometimes it’s because I haven’t looked into that particular corner and so don’t understand it; sometimes it’s because I have looked into that particular corner and still don’t understand it. 🙂  But mostly it’s because I’m trying to keep this series accessible to newcomers.  If you’re an experienced NTP operator, you probably won’t find much of interest (if anything) until later in the series.
    5. We won’t cover much history or theory of time sync in this series.  If you’d like to know a little more about that, check out Julien Goodwin’s previous LCA & SLUG talks:

Read on in part 2 – how NTP works.

ntpq: write to ::1 failed: Operation not permitted


The other day I got a bug report about check_ntpmon, which was reporting UNKNOWN status back to Nagios even though everything seemed to be working fine. A bit of debugging revealed that it was receiving this message from ntpq on standard error:

ntpq: write to ::1 failed: Operation not permitted

This was a bit strange, because various links I found indicated that this message is usually due to firewalls:

But this host was not blocking anything, and in any case check_ntpmon’s use of ntpq only ever touches the loopback interface, which firewalls rarely restrict. A bit of further digging showed that the culprit was not the firewall but a full conntrack table, with dmesg showing:

Aug 4 03:04:19 hostname kernel: [5226949.016837] nf_conntrack: table full, dropping packet

Increasing the conntrack limit fixed the problem.
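
For the record, here’s roughly how to check and raise the limit.  The right value is workload-dependent; 262144 below is just an illustrative figure, and the sysctl name assumes the nf_conntrack module is loaded:

```shell
# How close are we to the ceiling?
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# Raise the limit on the running system:
sysctl -w net.netfilter.nf_conntrack_max=262144

# And persist it across reboots:
echo 'net.netfilter.nf_conntrack_max = 262144' > /etc/sysctl.d/60-conntrack.conf
```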

(Just thought I’d document this here for posterity, since none of the links I found suggested this issue.)