Message-Id: <20240802154618.4149953-4-paulmck@kernel.org>
Date: Fri, 2 Aug 2024 08:46:17 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>
Cc: "H. Peter Anvin" <hpa@...or.com>,
John Stultz <jstultz@...gle.com>,
Stephen Boyd <sboyd@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Feng Tang <feng.tang@...el.com>,
Waiman Long <longman@...hat.com>,
Neeraj Upadhyay <Neeraj.Upadhyay@....com>,
x86@...nel.org,
kernel-team@...a.com,
linux-kernel@...r.kernel.org,
"Paul E. McKenney" <paulmck@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH v2 TSC and clocksource-watchdog updates for v6.12 4/5] clocksource: Set cs_watchdog_read() checks based on .uncertainty_margin

Right now, cs_watchdog_read() does clocksource sanity checks based
on WATCHDOG_MAX_SKEW, which sets a floor on any clocksource's
.uncertainty_margin. These sanity checks can therefore act
inappropriately for clocksources with large uncertainty margins.

One reason for a clocksource to have a large .uncertainty_margin is when
that clocksource has long read-out latency, given that it does not make
sense for the .uncertainty_margin to be smaller than the read-out latency.
With the current checks, cs_watchdog_read() could reject all normal
reads from a clocksource with long read-out latencies, such as those
from legacy clocksources that are no longer implemented in hardware.

Therefore, recast the cs_watchdog_read() checks in terms of the
.uncertainty_margin values of the clocksources involved in the timespan
in question. The first check covers two watchdog reads and one cs read,
so use twice the watchdog .uncertainty_margin plus that of the cs.
The second check covers only a pair of watchdog reads, so use twice the
watchdog .uncertainty_margin.
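
Not part of the patch, but as a rough stand-alone illustration of the
arithmetic (the margin values below are made-up examples, not kernel
defaults), the two limits derived from the .uncertainty_margin fields
work out as follows:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical example margins, in nanoseconds. */
	int64_t wd_uncertainty_margin = 100 * 1000;	/* watchdog */
	int64_t cs_uncertainty_margin = 500 * 1000;	/* clocksource */

	/* "md" covers the pair of watchdog reads. */
	int64_t md = 2 * wd_uncertainty_margin;

	/*
	 * First check: the wdnow..wd_end span contains two watchdog
	 * reads plus one clocksource read, so allow md plus the
	 * clocksource's own margin.
	 */
	int64_t max_wd_delay = md + cs_uncertainty_margin;

	/*
	 * Second check: the wd_end..wd_end2 span contains only the
	 * back-to-back watchdog reads, so allow just md.
	 */
	int64_t max_wd_seq_delay = md;

	printf("max wd_delay:     %lld ns\n", (long long)max_wd_delay);
	printf("max wd_seq_delay: %lld ns\n", (long long)max_wd_seq_delay);
	return 0;
}
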
Reported-by: Borislav Petkov <bp@...en8.de>
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
Cc: John Stultz <jstultz@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Feng Tang <feng.tang@...el.com>
Cc: Waiman Long <longman@...hat.com>
Cc: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
Cc: <x86@...nel.org>
---
 kernel/time/clocksource.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index ee0ad5e4d5170..23336eecb4f43 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -244,6 +244,7 @@ enum wd_read_status {
 
 static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
 {
+	int64_t md = 2 * watchdog->uncertainty_margin;
 	unsigned int nretries, max_retries;
 	int64_t wd_delay, wd_seq_delay;
 	u64 wd_end, wd_end2;
@@ -258,7 +259,7 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
 		local_irq_enable();
 
 		wd_delay = cycles_to_nsec_safe(watchdog, *wdnow, wd_end);
-		if (wd_delay <= WATCHDOG_MAX_SKEW) {
+		if (wd_delay <= md + cs->uncertainty_margin) {
 			if (nretries > 1 && nretries >= max_retries) {
 				pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
 					smp_processor_id(), watchdog->name, nretries);
@@ -271,12 +272,12 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
 		 * there is too much external interferences that cause
 		 * significant delay in reading both clocksource and watchdog.
 		 *
-		 * If consecutive WD read-back delay > WATCHDOG_MAX_SKEW/2,
-		 * report system busy, reinit the watchdog and skip the current
+		 * If consecutive WD read-back delay > md, report
+		 * system busy, reinit the watchdog and skip the current
 		 * watchdog test.
 		 */
 		wd_seq_delay = cycles_to_nsec_safe(watchdog, wd_end, wd_end2);
-		if (wd_seq_delay > WATCHDOG_MAX_SKEW/2)
+		if (wd_seq_delay > md)
 			goto skip_test;
 	}
 
--
2.40.1