Message-ID: <Y4QZzzk+FdGj4AXm@feng-clx>
Date: Mon, 28 Nov 2022 10:15:43 +0800
From: Feng Tang <feng.tang@...el.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
CC: Thomas Gleixner <tglx@...utronix.de>,
<linux-kernel@...r.kernel.org>, <john.stultz@...aro.org>,
<sboyd@...nel.org>, <corbet@....net>, <Mark.Rutland@....com>,
<maz@...nel.org>, <kernel-team@...a.com>, <neeraju@...eaurora.org>,
<ak@...ux.intel.com>, <zhengjun.xing@...el.com>,
Chris Mason <clm@...a.com>, John Stultz <jstultz@...gle.com>,
Waiman Long <longman@...hat.com>
Subject: Re: [PATCH clocksource 1/3] clocksource: Reject bogus watchdog
clocksource measurements
On Wed, Nov 23, 2022 at 01:23:48PM -0800, Paul E. McKenney wrote:
> On Wed, Nov 23, 2022 at 10:36:04AM +0800, Feng Tang wrote:
> > On Tue, Nov 22, 2022 at 02:07:12PM -0800, Paul E. McKenney wrote:
> > [...]
> > > > > If PM_TIMER was involved, I would expect 'acpi_pm' instead of
> > > > > refined-jiffies. Or am I misinterpreting the output and/or code?
> > > >
> > > > It's about timing. On a typical server platform, the clocksource
> > > > init order could be:
> > > > refined-jiffies --> hpet --> tsc-early --> acpi_pm --> tsc
> > > >
> > > > From your log, TSC ('tsc-early') is disabled before 'acpi_pm' gets
> > > > initialized, so the 'acpi_pm' timer (if it exists) had no chance to
> > > > watchdog the TSC.
> > > >
> > > > > Either way, would it make sense to add CLOCK_SOURCE_MUST_VERIFY to
> > > > > clocksource_hpet.flags?
> > > >
> > > > Maybe try the patch below, which skips the watchdog for 'tsc-early'
> > > > while giving the 'acpi_pm' timer a chance to watchdog 'tsc'.
> > > >
> > > > diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
> > > > index cafacb2e58cc..9840f0131764 100644
> > > > --- a/arch/x86/kernel/tsc.c
> > > > +++ b/arch/x86/kernel/tsc.c
> > > > @@ -1131,8 +1131,7 @@ static struct clocksource clocksource_tsc_early = {
> > > > .uncertainty_margin = 32 * NSEC_PER_MSEC,
> > > > .read = read_tsc,
> > > > .mask = CLOCKSOURCE_MASK(64),
> > > > - .flags = CLOCK_SOURCE_IS_CONTINUOUS |
> > > > - CLOCK_SOURCE_MUST_VERIFY,
> > > > + .flags = CLOCK_SOURCE_IS_CONTINUOUS,
> > > > .vdso_clock_mode = VDSO_CLOCKMODE_TSC,
> > > > .enable = tsc_cs_enable,
> > > > .resume = tsc_resume,
> > >
> > > Your mainline patch b50db7095fe0 ("x86/tsc: Disable clocksource watchdog
> > > for TSC on qualified platorms") mitigates the issue, so we are good for
> > > the immediate future, at least assuming a reliable TSC.
> > >
> > > But it also disables checking against HPET, hence my question about
> > > marking clocksource_hpet.flags with CLOCK_SOURCE_MUST_VERIFY at boot time
> > > on systems whose CPUs have constant_tsc, nonstop_tsc, and tsc_adjust.
> >
> > IIUC, this will make the TSC watchdog the HPET every 500 ms. We have
> > received reports that the 500 ms watchdog timer had a big impact on some
> > parallel workloads on big servers, which was another factor in our
> > seeking to stop the timer.
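(For reference, the 500 ms comes from WATCHDOG_INTERVAL, defined as
(HZ >> 1) in kernel/time/clocksource.c, and the change Paul suggested
would look roughly like the sketch below -- illustrative and untested,
not a submitted patch:

--- a/arch/x86/kernel/hpet.c
+++ b/arch/x86/kernel/hpet.c
 static struct clocksource clocksource_hpet = {
 	...
-	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
+	.flags		= CLOCK_SOURCE_IS_CONTINUOUS |
+			  CLOCK_SOURCE_MUST_VERIFY,

i.e. keep HPET marked continuous but additionally have the watchdog
verify it against the TSC.)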
>
> Another approach would be to slow it down. Given the tighter bounds
> on skew, it could be done every (say) 10 seconds while allowing
> 2 milliseconds skew instead of the current 100 microseconds.
Yes, this would reduce the OS noise a lot. One problem is that if we make
it a general interface, there are some clocksources whose wrap time is
less than 10 seconds, like the ACPI PM_TIMER (3-4 seconds), and I don't
know whether other architectures have similar cases.
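For concreteness, the ACPI PM timer is a 24-bit counter running at
3.579545 MHz (PMTMR_TICKS_PER_SEC in include/linux/acpi_pmtmr.h), so it
wraps every 2^24 / 3579545 ~= 4.69 seconds, and the watchdog has to
sample well within one wrap period to keep the delta unambiguous, which
is where the 3-4 second figure comes from. A minimal userspace sketch of
the arithmetic (illustrative only, not kernel code; ACPI_PM_BITS is just
a local name for the sketch, and the 10 s / 2 ms numbers are the
hypothetical ones from your suggestion):

#include <stdio.h>
#include <stdint.h>

#define PMTMR_TICKS_PER_SEC	3579545ULL	/* ACPI PM timer frequency */
#define ACPI_PM_BITS		24		/* counter width Linux uses */

int main(void)
{
	uint64_t ticks_per_wrap = 1ULL << ACPI_PM_BITS;
	double wrap_sec = (double)ticks_per_wrap / PMTMR_TICKS_PER_SEC;

	printf("acpi_pm wraps every %.2f s\n", wrap_sec);

	/*
	 * A watchdog that only samples every 10 s cannot tell how many
	 * times a ~4.7 s counter wrapped in between, so a longer interval
	 * would need a per-clocksource cap, or to skip short-wrap
	 * clocksources like acpi_pm entirely.
	 */
	printf("proposed: 10 s interval, 2 ms allowed skew\n");
	return 0;
}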
>
> > Is this about the concern of a possible TSC frequency calibration
> > issue, such as the 40 ms per second drift between HPET and TSC? With
> > b50db7095fe0 backported, we also have another patch that forces TSC
> > calibration for those platforms which get the TSC freq directly
> > from CPUID or MSR and thus don't have this line in dmesg:
> > "tsc: Refined TSC clocksource calibration: 2693.509 MHz"
> >
> > https://lore.kernel.org/lkml/20220509144110.9242-1-feng.tang@intel.com/
> >
> > We did meet a TSC calibration issue caused by a firmware problem, and
> > this patch can help to catch it. You can try it if you think it's relevant.
>
> I am giving this a go, thank you!
Thanks for spending time testing it!
Thanks,
Feng