Message-ID: <20221020140944.GK5600@paulmck-ThinkPad-P17-Gen-1>
Date: Thu, 20 Oct 2022 07:09:44 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Feng Tang <feng.tang@...el.com>
Cc: linux-kernel@...r.kernel.org, clm@...a.com, jstultz@...gle.com,
tglx@...utronix.de, sboyd@...nel.org, longman@...hat.com
Subject: Re: [PATCH clocksource] Reject bogus watchdog clocksource
measurements
On Thu, Oct 20, 2022 at 04:09:01PM +0800, Feng Tang wrote:
> On Wed, Oct 19, 2022 at 04:09:04PM -0700, Paul E. McKenney wrote:
> > One remaining clocksource-skew issue involves extreme CPU overcommit,
> > which can cause the clocksource watchdog measurements to be delayed by
> > tens of seconds. This in turn means that a clock-skew criterion that
> > is appropriate for a 500-millisecond interval will instead give lots of
> > false positives.
>
> I remember seeing logs where the watchdog was delayed by dozens or
> even hundreds of seconds.
>
> Thanks for the fix, which makes sense to me! Some nits below.
>
> > Therefore, check for the watchdog clocksource reporting an interval
> > much larger or much smaller than that specified by WATCHDOG_INTERVAL.
> > In these cases,
> > print a pr_warn() warning and refrain from marking the clocksource under
> > test as being unstable.
> >
> > Reported-by: Chris Mason <clm@...a.com>
> > Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> > Cc: John Stultz <jstultz@...gle.com>
> > Cc: Thomas Gleixner <tglx@...utronix.de>
> > Cc: Stephen Boyd <sboyd@...nel.org>
> > Cc: Feng Tang <feng.tang@...el.com>
> > Cc: Waiman Long <longman@...hat.com>
> >
> > diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
> > index 8058bec87acee..dcaf38c062161 100644
> > --- a/kernel/time/clocksource.c
> > +++ b/kernel/time/clocksource.c
> > @@ -386,7 +386,7 @@ EXPORT_SYMBOL_GPL(clocksource_verify_percpu);
> >
> > static void clocksource_watchdog(struct timer_list *unused)
> > {
> > - u64 csnow, wdnow, cslast, wdlast, delta;
> > + u64 csnow, wdnow, cslast, wdlast, delta, wdi;
> > int next_cpu, reset_pending;
> > int64_t wd_nsec, cs_nsec;
> > struct clocksource *cs;
> > @@ -440,6 +440,17 @@ static void clocksource_watchdog(struct timer_list *unused)
> > if (atomic_read(&watchdog_reset_pending))
> > continue;
> >
> > + /* Check for bogus measurements. */
> > + wdi = jiffies_to_nsecs(WATCHDOG_INTERVAL);
> > + if (wd_nsec < (wdi >> 2)) {
> > + pr_warn("timekeeping watchdog on CPU%d: Watchdog clocksource '%s' advanced only %lld ns during %d-jiffy time interval, skipping watchdog check.\n", smp_processor_id(), watchdog->name, wd_nsec, WATCHDOG_INTERVAL);
> > + continue;
> > + }
>
> If this happens (the nominally 500 ms timer fires after the watchdog
> clocksource has advanced by less than 125 ms), there is some severe
> problem with the timer/interrupt system.
Should I add ", suspect timer/interrupt bug" just after "jiffy time
interval"? Or would a comment before that pr_warn() work better for you?
> > + if (wd_nsec > (wdi << 2)) {
> > + pr_warn("timekeeping watchdog on CPU%d: Watchdog clocksource '%s' advanced an excessive %lld ns during %d-jiffy time interval, probable CPU overutilization, skipping watchdog check.\n", smp_processor_id(), watchdog->name, wd_nsec, WATCHDOG_INTERVAL);
> > + continue;
> > + }
>
> I agree with Waiman that some rate limiting may be needed. As there
> were reports of delays of hundreds of seconds, a 2-second delay could
> easily happen on a system busy or misbehaving enough to trigger this
> problem.

Good points, thank you both!

Left to myself, I would use a capped power-of-two backoff that was reset
any time that the interval was within bounds. Maybe a cap of 10 minutes?
Or is there a better way to do this?

							Thanx, Paul