Message-ID: <2bae6a59-b6da-40dc-99bb-46a098cdd6cb@kenogo.org>
Date: Wed, 14 May 2025 12:01:58 +0200
From: Keno Goertz <contact@...ogo.org>
To: Miroslav Lichvar <mlichvar@...hat.com>, John Stultz <jstultz@...gle.com>
Cc: tglx@...utronix.de, zippel@...ux-m68k.org, mingo@...e.hu,
linux-kernel@...r.kernel.org
Subject: Re: ntp: Adjustment of time_maxerror with 500ppm instead of 15ppm
Hey,
On 5/12/25 10:57, Miroslav Lichvar wrote:
> This 500 ppm increment goes all way back to the original nanokernel
> implementation by David Mills, on which IIRC was based the Linux and
> other systems' timekeeping code:
> https://www.eecis.udel.edu/~mills/ntp/html/kern.html
>
> I think the idea to use MAXFREQ (reported as tolerance in timex) was
> to cover the case when the clock is not synchronized at all with the
> frequency offset set to any value in the +/- 500 ppm range. The Linux
> adjtimex also allows setting the tick length, which gives it a much
> wider range of +/-10% adjustment, so that is not fully covered.
>
> Changing the hardcoded rate to 15 ppm to match RFC5905 doesn't seem
> like a good idea to me. The kernel doesn't know how well the clock is
> synchronized and I'm sure in some cases it would be too small.
Thank you for these insights!
The page you linked references RFC 1589, which describes the kernel model
for precision timekeeping that the Linux kernel implements:
https://www.rfc-editor.org/rfc/rfc1589.html
Just skimming this document really helped my understanding of what's
going on. It also includes a more accurate description of time_maxerror:
> This variable establishes the maximum error of the indicated
> time relative to the primary synchronization source in
> microseconds. For NTP, the value is initialized by a
> ntp_adjtime() call to the synchronization distance, which is
> equal to the root dispersion plus one-half the root delay. It
> is increased by a small amount (time_tolerance) each second to
> reflect the clock frequency tolerance. This variable is
> computed by the synchronization daemon and the kernel, but is
> otherwise not used by the kernel.
In RFC 1589, time_tolerance defaults to MAXFREQ and can be changed by the
kernel. The Linux kernel instead increments time_maxerror by the
hard-coded MAXFREQ directly.
A quick fix would be to change the misleading docstring of time_maxerror:
> Maximum error in microseconds holding the NTP sync distance
> (NTP dispersion + delay / 2)
I think something like this is clearer:
Maximum error in microseconds. The NTP daemon sets this to the root
synchronization distance (root dispersion + delay / 2). It is then
incremented by MAXFREQ each second to reflect the clock frequency tolerance.
Best regards
Keno