Message-ID: <308f260b-83c6-402e-9756-017be125bb44@paulmck-laptop>
Date: Fri, 19 Dec 2025 16:18:50 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
Daniel J Blueman <daniel@...ra.org>,
John Stultz <jstultz@...gle.com>, Waiman Long <longman@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Tony Luck <tony.luck@...el.com>, Borislav Petkov <bp@...en8.de>,
Stephen Boyd <sboyd@...nel.org>,
Scott Hamilton <scott.hamilton@...den.com>
Subject: Re: clocksource: Reduce watchdog readout delay limit to prevent
false positives

On Fri, Dec 19, 2025 at 11:13:05AM +0100, Thomas Gleixner wrote:
> On Wed, Dec 17 2025 at 16:48, Paul E. McKenney wrote:
> > On Wed, Dec 17, 2025 at 06:21:05PM +0100, Thomas Gleixner wrote:
> >> The "valid" readout delay between the two reads of the watchdog is larger
> >> than the valid delta between the resulting watchdog and clocksource
> >> intervals, which results in false positive watchdog results.
OK, first, I have no objection to your reimplementing the clocksource
watchdog. Especially given that when making my changes, I felt the
need to remain within the current design, which was at times quite
constraining. I do feel the need to defend the current code, given
that limitation. But you knew that already.
In fact, you might well have been counting on it. ;-)
The desiderata from this end include:
o Avoid false positives from delays during measurement, including
delays from NMIs, SMIs, and vCPU preemption.
o Detect TSC drift due to bugs in firmware and hardware.
Others have in the past added:
o Avoid false positives due to extreme memory-system overload.
There are probably others that do not come immediately to mind.
> >> Assume TSC is the clocksource and HPET is the watchdog and both have an
> >> uncertainty margin of 250us (default). The watchdog readout does:
> >>
> >> 1) wdnow = read(HPET);
> >> 2) csnow = read(TSC);
> >> 3) wdend = read(HPET);
> > 4) wd_end2 = read(HPET);
>
> That's completely irrelevant for the problem at hand.
It is exactly one of the problems at hand. If we didn't have HPET readout
performance issues, we could dispense with that piece of the puzzle.
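
For completeness, here is roughly what that fourth read buys us, from
memory and ignoring the retry loop, so treat it as a sketch rather than
a verbatim copy of cs_watchdog_read():

        wdnow   = watchdog->read(watchdog);             /* #1 */
        csnow   = cs->read(cs);                         /* #2 */
        wd_end  = watchdog->read(watchdog);             /* #3 */
        wd_end2 = watchdog->read(watchdog);             /* #4 */

        wd_delay = cycles_to_nsec_safe(watchdog, wdnow, wd_end);
        if (wd_delay <= md + cs->uncertainty_margin)
                return WD_READ_SUCCESS;

        /*
         * Reads #1-#3 took too long.  If even the back-to-back
         * watchdog reads #3 and #4 were slow, blame the watchdog
         * (for example, a congested HPET) rather than the clocksource,
         * and skip this measurement interval instead of complaining.
         */
        wd_seq_delay = cycles_to_nsec_safe(watchdog, wd_end, wd_end2);
        if (wd_seq_delay > md)
                return WD_READ_SKIP;
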
By the way, did this issue really occur on real hardware? Or are we
having a strictly theoretical discussion?
> >> The valid window for the delta between #1 and #3 is calculated by the
> >> uncertainty margins of the watchdog and the clocksource:
> >>
> >> m = 2 * watchdog.uncertainty_margin + cs.uncertainty_margin;
> >>
> >> which results in 750us for the TSC/HPET case.
> >
> > Yes, because this interval includes two watchdog reads (#1 and #3 above)
> > and one clocksource read (#2 above). We therefore need to allow two
> > watchdog uncertainties and one clocksource uncertainty.
>
> That's a made up and broken theory based on ill defined heuristics.
If we want better heuristics, we either restrict ourselves to timer
hardware that has reasonable access latencies or we add something like
an ->access_time field to struct clocksource.
Otherwise, you will just be cutting off one end of the heuristic "blanket"
and sewing it onto the other.
Accounting more directly for vCPU preemption would also help a lot, as
that was the root cause of much (but by no means all) of the problem on
Meta's fleet.
> The uncertainty of a clocksource is defined by:
>
> 1) The frequency it runs with, which affects the accuracy of the
> time readout.
>
> 2) The access time
>
> Your implementation defines that the uncertainty is at least 50us,
> with a default of 125us and an upper limit of 1ms. That's just made up
> numbers pulled out of thin air which have nothing to do with reality.
These were set using test data from real hardware.
> Declaring that a TSC access time of up to 1ms and at least 50us is
> hilarious.
Yes, in theory we could instead have an ->access_time that is used to
reject bad reads, so that ->uncertainty_margin would only be used to
detect timer skew. The discussion of such an addition didn't go far last
time around, but maybe now is a better time. Though I suspect that the
issues around determining what ->access_time should be set to are still
with us. But I would love to be proven wrong.
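
To make that concrete, here is the sort of thing I am imagining.  This
is purely hypothetical: neither the ->access_time field nor this check
exists today, and the "slack" term is a placeholder:

        struct clocksource {
                ...
                u32     uncertainty_margin;     /* existing: bounds the skew check */
                u32     access_time;            /* hypothetical: typical read latency */
                ...
        };

        /*
         * Hypothetical cs_watchdog_read() check: reject delayed readouts
         * based on access times, leaving ->uncertainty_margin purely for
         * skew detection.
         */
        limit = 2 * watchdog->access_time + cs->access_time + slack;
        if (wd_delay > limit)
                return WD_READ_SKIP;
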
> Adding up these made up margins and then double the watchdog margin to
> validate the readout is just a hack to make it "work for me" and thereby
> breaking the whole machinery for existing problematic systems. See below.
Nope.
The readout #4 above into wd_end2 wasn't something my employer needed.
For our use cases, #1, #2, and #3 sufficed. It was instead needed by
other users running memory-bandwidth-heavy workloads on multi-socket
systems. This resulted in HPET access times that were astoundingly large.
One might naively hope that the HPETs in a given system could synchronize
themselves so that each CPU could read the HPET nearest it instead of
everyone reading CPU 0's HPET. Or that some other means would allow
reasonable access times. Hey, I can dream, can't I?
> >> The actual interval comparison uses a smaller margin:
> >>
> >> m = watchdog.uncertainty_margin + cs.uncertainty_margin;
> >>
> >> which results in 500us for the TSC/HPET case.
> >
> > This is the (wd_seq_delay > md) comparison, right? If so, the reason
> > for this is because it is measuring only a pair of watchdog reads (#3
> > and #4). There is no clocksource read on the latency recheck, so we do
> > not include the cs->uncertainty_margin value, only the pair of watchdog
> > uncertainty values.
>
> No. This is the check which does:
>
> int64_t md = 2 * watchdog->uncertainty_margin;
> ...
>
> *wdnow = watchdog->read(watchdog);
> *csnow = cs->read(cs);
> wd_end = watchdog->read(watchdog);
> ...
>
> wd_delay = cycles_to_nsec_safe(watchdog, *wdnow, wd_end);
> if (wd_delay <= md + cs->uncertainty_margin) {
>         ...
>         return WD_READ_SUCCESS;
> }
>
> It has nothing to do with the wd_seq_delay check.
Eh? Here wd_delay is compared against twice the watchdog uncertainty plus
the clocksource uncertainty.
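
Plugging in the default 250us uncertainty margins from your example:

        md         = 2 * watchdog->uncertainty_margin = 500us;
        read limit = md + cs->uncertainty_margin      = 750us;

which is exactly the 750us read-valid window discussed above.
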
> > If this check fails, that indicates that the watchdog clocksource is much
> > slower than expected (for example, due to memory-system overload affecting
> > HPET on multicore systems), so we skip this measurement interval.
> >
> >> That means the following scenario will trigger the watchdog:
> >>
> >> Watchdog cycle N:
> >>
> >> 1) wdnow[N] = read(HPET);
> >> 2) csnow[N] = read(TSC);
> >> 3) wdend[N] = read(HPET);
> >>
> >> Assume the delay between #1 and #2 is 100us and the delay between #1 and
> >> #3 is within the 750us margin, i.e. the readout is considered valid.
> >
> > Yes. We expect at most 250us for #1, another 250us for #2, and yet
> > another 250us for #3.
> >
> >> Watchdog cycle N + 1:
> >>
> >> 4) wdnow[N + 1] = read(HPET);
> >> 5) csnow[N + 1] = read(TSC);
> >> 6) wdend[N + 1] = read(HPET);
> >>
> >> If the delay between #4 and #6 is within the 750us margin then any delay
> >> between #4 and #5 which is larger than 600us will fail the interval check
> >> and mark the TSC unstable because the intervals are calculated against the
> >> previous value:
> >>
> >> wd_int = wdnow[N + 1] - wdnow[N];
> >> cs_int = csnow[N + 1] - csnow[N];
> >
> > Except that getting 600us latency between #4 and #5 is not consistent
> > with a 250us uncertainty. If that is happening, the uncertainty should
> > instead be at least 300us.
>
> That's utter nonsense.
Or alternatively, you simply do not yet understand it. ;-)
> >> Putting the above delays in place this results in:
> >>
> >> cs_int = (wdnow[N + 1] + 610us) - (wdnow[N] + 100us);
> >> -> cs_int = wd_int + 510us;
> >>
> >> which is obviously larger than the allowed 500us margin and results in
> >> marking TSC unstable.
> >
> > Agreed, but due to the ->uncertainty_margin values being too small.
>
> You seriously fail to understand basic math.
Well, if that is the best you can do... ;-)
> Let me draw you a picture:
>
> H = HPET read
> T = TSC read
> RW = read valid window
> IW = interval valid window
>
>    RW----------------------------------------RW
>    IW------------------------IW
>
>    HT                                        H      <- Read 1 valid
>    H                                        TH      <- Read 2 valid
>     |---------------------------------------|       <- Interval too large
>
> Q: How is increasing the uncertainty values fixing the underlying math
> problem of RW > IW?
>
> A: Not at all. It just papers over it and makes the false positive case
> more unlikely by further weakening the accuracy.
Yep. Given the current lack of something like ->access_time to go along
with the current ->uncertainty_margin, there are limits to what we can do.
Again, was the report from a real workload running on real hardware, or
are we simply having a theoretical discussion of what might happen?
> >> Fix this by using the same margin as the interval comparison. If the delay
> >> between two watchdog reads is larger than that, then the readout was either
> >> disturbed by interconnect congestion, NMIs or SMIs.
> >>
> >> Fixes: 4ac1dd3245b9 ("clocksource: Set cs_watchdog_read() checks based on .uncertainty_margin")
> >> Reported-by: Daniel J Blueman <daniel@...ra.org>
> >
> > If this is happening in real life, we have a couple of choices:
> >
> > 1. Increase the ->uncertainty_margin values to match the objective
> > universe.
>
> Which universe? The universe of made up math? See above.
The universe of the real hardware Daniel was using, should it turn
out that this is a real problem encountered running a real workload on
real hardware.
> > 2. In clocksource_watchdog(), replace "(abs(cs_nsec - wd_nsec) > md)"
> > with "(abs(cs_nsec - wd_nsec) > 2 * md)".
> >
> > The rationale here is that the ->uncertainty_margin values are
> > two-tailed, as in the clocksource might report a value that is
> > ->uncertainty_margin too early and ->uncertainty_margin too late. When I
> > was coding this, I instead assumed that ->uncertainty_margin
> > covered the full range, centered on the correct time value.
>
> So you propose the opposite of what I'm doing, which weakens the
> watchdog even further.
>
> > You would know better than would I.
> >
> > My concern is that the patch below would force needless cs_watchdog_read()
> > retries.
>
> That's not the end of the world and way better than degrading the
> watchdog further.
But what you proposed is just a further tweak of the heuristics you so
energetically decry above.
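
For concreteness, option 2 from my list above would amount to something
like this in clocksource_watchdog(), with md computed as for the interval
comparison; just a sketch, of course:

        md = cs->uncertainty_margin + watchdog->uncertainty_margin;
        if (abs(cs_nsec - wd_nsec) > 2 * md) {
                /* ... mark the clocksource unstable ... */
        }
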
> It's already useless for the original purpose of detecting even slow skew
> of TSC between CPUs because it is relative and the uncertainty margins
> are insanely big.
Really? The uncertainty margins are way smaller than they were before
read #3 was introduced. Don't get me wrong, it would be great to make
them even smaller.
> I just booted an old machine where the BIOS "hides" SMI time by saving
> the TSC value on entry and restoring it on exit. The original watchdog
> implementation caught that. Now it happily continues and I can observe
> time going backwards between CPUs in user space. After an hour the
> difference between the two CPUs is > 1sec and the system still claims
> that everything is fine. And no, TSC_ADJUST does not catch that on this
> machine because the CPU does not support it.
>
> IOW, this whole big machine scalability hackery broke basic
> functionality for existing systems.
OK, that is at least a real problem, even if on antique hardware.
A problem that surprises me, but I don't have that sort of hardware,
so I will take your word for it. Though you would need what, a 62.5
millisecond SMI (the old WATCHDOG_THRESHOLD of NSEC_PER_SEC >> 4, if I
remember correctly) for the old watchdog to catch it, right?
(Yes, yes, I have seen far longer SMIs. Don't get me started...)
> We really need to go back to the drawing board and rethink this
> machinery from scratch.
>
> There are two main aspects to the watchdog:
>
> 1) Detect general frequency skew by comparing it against a known
> (assumed to be) stable clocksource
>
> This addresses the problems on older machines where the BIOS
> fiddles with the CPU frequency behind the kernel's back.
Fair enough.
> That's a non-issue on modern CPUs because the TSC is constant
> frequency.
Give or take the occasional messed-up motherboard or firmware, where
the frequency is constant---but wrong. :-/
> 2) Detect inter CPU skew
>
> This addresses the problems where
>
> A) the BIOS fiddles with the TSC behind the kernel's back (see
> above) and the modification cannot be detected by the TSC_ADJUST
> MSR due to its non-existence
>
> Not an issue on modern CPUs
>
> B) the sockets are not synchronized and the TSC frequency on them
> drifts apart
>
> C) hardware failure (something you mentioned back then when you
> hacked this uncertainty magic up)
3) Deal with insanely slow readout times for some clocksources.
Which your smp_call_function() approach described below should handle.
> As I just validated on that old machine the whole "scalability" hackery
> broke #2A and #2B unless the modifications or drifts are massive. But if
> they are not it leaves both user space and kernel with inconsistent
> time.
Agreed. Imperfect though the current setup might be, there were reasons
why we changed it.
> That means we really have to look at this in two ways:
>
> I) On old hardware where TSC is not constant frequency, the validation
> against HPET is required.
>
> II) On new hardware with constant frequency TSC and TSC_ADJUST_MSR the
> HPET comparison is not really useful anymore as it does not detect
> low drift issues between unsynchronized sockets.
>
> What can be done instead?
>
> - Make CPU0 the watchdog supervisor, which runs the timer and
> orchestrates the machinery.
Fine on x86, but last I knew some systems can run without CPU 0.
Maybe the boot CPU? Though that can be offlined on some systems.
Handoff? For NO_HZ_FULL people, confine to the housekeeping CPUs?
Or is x86 the only system with more than one timer?
> - Limit the HPET comparison to CPUs which lack constant frequency,
> which makes the whole legacy I/O issue on large systems go away.
I thought that was already the default, and that you need to use the
tsc=watchdog kernel boot parameter to override that default in order to
use a known-stable TSC as the watchdog for HPET and/or ACPI PM timer.
> Run this comparison only on CPU0
>
> - Instead of moving the timer around CPUs, utilize a SMP function call
> similar to what the TSC synchronization mechanism does, i.e.
>
> CPU0 CPUN
> sync() sync()
> t1 = rdtsc()
> handover()
> t2 = rdtsc()
> handover()
> t3 = rdtsc()
> ....
>
> Each step validates that tN <= tN+1, which detects all issues of #2
> above for both old and contemporary machines.
>
> Of course this mechanism is also affected by interconnect
> congestion, but the only side effect of interconnect congestion is
> that the handover becomes slow, which means that the accuracy of
> detecting time going backwards suffers temporarily.
Is this going to be OK for real-time workloads? I guess that you have at
least as good a chance of convincing them as I would. The more aggressive
NO_HZ_FULL people might have concerns. Especially these guys:
https://arxiv.org/abs/2509.03855
I have similar code in the current clocksource watchdog, but it doesn't
run until clocksource skew is detected, in other words, only after
something has likely gone sideways.
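
For reference, that code (what is now clocksource_verify_percpu(), if
memory serves) does roughly the following.  From memory and with
illustrative names, so please don't hold me to the details:

        static u64 csnow_mid;

        static void clocksource_verify_one_cpu(void *csin)
        {
                struct clocksource *cs = (struct clocksource *)csin;

                csnow_mid = cs->read(cs);       /* runs on the CPU being checked */
        }

        /* On the CPU doing the checking, for each CPU to be verified: */
        csnow_begin = cs->read(cs);
        smp_call_function_single(cpu, clocksource_verify_one_cpu, cs, 1);
        csnow_end = cs->read(cs);

        /* Time must not appear to go backwards across the handover. */
        if ((s64)(csnow_mid - csnow_begin) < 0 ||
            (s64)(csnow_end - csnow_mid) < 0)
                pr_warn("clocksource %s went backwards on CPU %d\n",
                        cs->name, cpu);
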
> With that all these made up uncertainty heuristics go completely away or
> can be turned into something which actually reflects the reality of the
> universe, i.e. the accuracy of the clocks and their access time.
>
> I doubt it's needed because the original implementation worked just fine
> and it's only relevant for the actual HPET/TSC comparison case. That is
> then limited to museum pieces which are not really affected by the
> scalability issues of todays machines. Especially not as the HPET/TSC
> check is limited to CPU0, which is the one closest to the legacy
> peripherals which contain the HPET or in real old machines the ACPI_PM
> timer.
Give or take firmware/hardware bugs that cause the TSC to run at a constant
but incorrect rate. Maybe interact with userspace facilities such as NTP?
> I'll go and utilize my copious spare time to implement this, but don't
> expect it to materialize before 2026 :)
The current implementation is working for us, so your schedule is our
schedule. ;-)
Thanx, Paul