Message-ID: <Y3zxB6r1kin8pSH1@feng-clx>
Date: Tue, 22 Nov 2022 23:55:51 +0800
From: Feng Tang <feng.tang@...el.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
CC: Thomas Gleixner <tglx@...utronix.de>,
<linux-kernel@...r.kernel.org>, <john.stultz@...aro.org>,
<sboyd@...nel.org>, <corbet@....net>, <Mark.Rutland@....com>,
<maz@...nel.org>, <kernel-team@...a.com>, <neeraju@...eaurora.org>,
<ak@...ux.intel.com>, <zhengjun.xing@...el.com>,
Chris Mason <clm@...a.com>, John Stultz <jstultz@...gle.com>,
Waiman Long <longman@...hat.com>
Subject: Re: [PATCH clocksource 1/3] clocksource: Reject bogus watchdog
clocksource measurements
On Mon, Nov 21, 2022 at 10:14:49AM -0800, Paul E. McKenney wrote:
> > > I have absolutely no argument with this statement, and going back a
> > > long time. ;-)
> > >
> > > But the set of systems that caused me to send this turned out to have
> > > real divergence between HPET and TSC, and 40 milliseconds per second of
> > > divergence at that. So not only do you hate this series, but it is also
> > > the case that this series doesn't help with the problem at hand.
> >
> > The drift is about 4%, which is quite big. It seems this is
> > either a problem with the HPET/TSC hardware/firmware, or with
> > the frequency calibration of HPET/TSC. TSC calibration is
> > complex, as it can be done by different methods depending on
> > hardware and firmware. Could you share the kernel boot log
> > related to tsc/hpet and clocksource?
> >
> > Also, if your platform has an ACPI PM_TIMER, you may try
> > "nohpet" to use PM_TIMER instead of HPET and check whether
> > there is also a big drift between TSC and PM_TIMER.
>
> The kernel is built with CONFIG_X86_PM_TIMER=y, so I was guessing
> that there is an ACPI PM_TIMER. Except that when I booted
> without your "Disable clocksource watchdog for TSC on qualified
> platforms" patch, I get the following:
>
> [ 44.303035] clocksource: timekeeping watchdog on CPU3: Marking clocksource 'tsc-early' as unstable because the skew is too large:
> [ 44.347034] clocksource: 'refined-jiffies' wd_nsec: 503923392 wd_now: fffb73f8 wd_last: fffb7200 mask: ffffffff
> [ 44.374034] clocksource: 'tsc-early' cs_nsec: 588042081 cs_now: 66c486d157 cs_last: 6682125e5e mask: ffffffffffffffff
> [ 44.403034] clocksource: No current clocksource.
> [ 44.418034] tsc: Marking TSC unstable due to clocksource watchdog
Aha, we've run into a similar error (TSC being judged 'unstable' by
'refined-jiffies') before, and the root cause is discussed in [1].
In our case we had the early serial console enabled, which made the
problem easier to reproduce.

That prompted us to propose several solutions, before Thomas
suggested disabling the TSC watchdog on all qualified platforms.
[1]. https://lore.kernel.org/lkml/20201126012421.GA92582@shbuild999.sh.intel.com/
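To put numbers on the log above: the watchdog interval was
wd_nsec = 503923392 ns (~504 ms), while 'tsc-early' measured
cs_nsec = 588042081 ns over the same interval, i.e. a delta of
~84 ms (~16%). A real frequency-calibration error that large is
very unlikely; it is more consistent with the watchdog
('refined-jiffies') losing ticks while interrupts were held off,
which is exactly the failure mode discussed in [1].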
> If PM_TIMER was involved, I would expect 'acpi_pm' instead of
> refined-jiffies. Or am I misinterpreting the output and/or code?
It's about timing. On a typical server platform, the clocksource
init order could be:

    refined-jiffies --> hpet --> tsc-early --> acpi_pm --> tsc

From your log, TSC ('tsc-early') was disabled before 'acpi_pm' got
initialized, so the 'acpi_pm' timer (if it exists) had no chance to
watchdog the TSC.
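For context, only clocksources registered with
CLOCK_SOURCE_MUST_VERIFY are put on the watchdog list; everything
else is a candidate watchdog. A condensed sketch of
clocksource_enqueue_watchdog() in kernel/time/clocksource.c
(paraphrased, not verbatim; details vary by kernel version):

	/* Sketch of kernel/time/clocksource.c logic, not verbatim. */
	static void clocksource_enqueue_watchdog(struct clocksource *cs)
	{
		INIT_LIST_HEAD(&cs->wd_list);

		if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) {
			/* cs wants to be watched: put it on the list. */
			list_add(&cs->wd_list, &watchdog_list);
			cs->flags &= ~CLOCK_SOURCE_WATCHDOG;
		} else {
			/* cs may serve as a watchdog itself. */
			if (cs->flags & CLOCK_SOURCE_IS_CONTINUOUS)
				cs->flags |= CLOCK_SOURCE_VALID_FOR_HRES;
		}
	}

So once 'tsc-early' is marked unstable, 'tsc' never registers, and
the later-arriving 'acpi_pm' has nothing left to verify.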
> Either way, would it make sense to add CLOCK_SOURCE_MUST_VERIFY to
> clocksource_hpet.flags?
Maybe try the patch below, which skips the watchdog for 'tsc-early'
while still giving the 'acpi_pm' timer a chance to watchdog 'tsc'.
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index cafacb2e58cc..9840f0131764 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -1131,8 +1131,7 @@ static struct clocksource clocksource_tsc_early = {
.uncertainty_margin = 32 * NSEC_PER_MSEC,
.read = read_tsc,
.mask = CLOCKSOURCE_MASK(64),
- .flags = CLOCK_SOURCE_IS_CONTINUOUS |
- CLOCK_SOURCE_MUST_VERIFY,
+ .flags = CLOCK_SOURCE_IS_CONTINUOUS,
.vdso_clock_mode = VDSO_CLOCKMODE_TSC,
.enable = tsc_cs_enable,
.resume = tsc_resume,
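If you give that a try, a quick sanity check after boot is
/sys/devices/system/clocksource/clocksource0/current_clocksource:
it should still read 'tsc' if the 'acpi_pm' watchdog is satisfied,
and any skew complaint in dmesg should now name 'acpi_pm' as the
watchdog rather than 'refined-jiffies'.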
> I am sending the full console output off-list. Hey, you asked for it! ;-)
Thanks for sharing!
Thanks,
Feng
>
> Thanx, Paul