Message-ID: <87msmnu3es.ffs@tglx>
Date: Thu, 11 Jul 2024 21:36:27 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Andrew Morton <akpm@...uxfoundation.org>, Peter Zijlstra
<peterz@...radead.org>, x86@...nel.org
Subject: [patch] watchdog/perf: Properly initialize the turbo mode timestamp and rearm counter
For systems on which the performance counter can expire early due to turbo
modes, the watchdog handler has a safety net in place which validates that
at least 4/5th of the watchdog period has elapsed since the last watchdog
event.
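
For context, that safety net in kernel/watchdog_perf.c looks roughly like
the sketch below (simplified, not verbatim kernel code; the threshold
variable name is an assumption):

	/* Sketch only, simplified from kernel/watchdog_perf.c */
	static DEFINE_PER_CPU(ktime_t, last_timestamp);
	static DEFINE_PER_CPU(unsigned int, nmi_rearmed);
	/* assumed name, roughly 4/5 of the watchdog period in ns */
	static ktime_t watchdog_hrtimer_sample_threshold __read_mostly;

	static bool watchdog_check_timestamp(void)
	{
		ktime_t delta, now = ktime_get_mono_fast_ns();

		delta = now - __this_cpu_read(last_timestamp);
		if (delta < watchdog_hrtimer_sample_threshold) {
			/* Too early: rearm instead of reporting a lockup */
			if (__this_cpu_inc_return(nmi_rearmed) < 10)
				return false;
		}
		__this_cpu_write(nmi_rearmed, 0);
		__this_cpu_write(last_timestamp, now);
		return true;
	}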
This only works reliably after the first watchdog event because the per-CPU
variable which holds the timestamp of the last event is never initialized.

So a first spurious event will validate against a timestamp of 0, which
results in a delta that is likely way over the 4/5 threshold of the period.
As this might happen before the first watchdog hrtimer event increments the
watchdog counter, this can lead to false positives.
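
To illustrate with hypothetical numbers (a standalone user-space sketch, not
kernel code): with a 10 second watchdog period the threshold is about 8
seconds, but with last_timestamp still 0 the delta is the full monotonic
clock value, so the check passes even for an NMI that fired far too early:

	#include <stdio.h>
	#include <stdint.h>

	/* Hypothetical: 10s watchdog period, 4/5 threshold = 8s in ns */
	#define THRESHOLD_NS	(8ULL * 1000000000ULL)

	static uint64_t last_timestamp;	/* per-CPU in the kernel; 0 until written */

	static int check_timestamp(uint64_t now)
	{
		uint64_t delta = now - last_timestamp;

		last_timestamp = now;
		/* Treat the event as a valid lockup check only after the threshold */
		return delta >= THRESHOLD_NS;
	}

	int main(void)
	{
		/* First NMI arrives 1s after arming, but the system has been
		 * up for 300s, so delta = 300s - 0 and the check passes. */
		uint64_t now = 300ULL * 1000000000ULL;

		printf("early event accepted as lockup check: %d\n",
		       check_timestamp(now));
		return 0;
	}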
Fix this by initializing the timestamp before enabling the hardware event.
Reset the rearm counter as well, as that might be non-zero after the
watchdog was disabled and re-enabled.
Fixes: 7edaeb6841df ("kernel/watchdog: Prevent false positives with turbo modes")
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>
---
kernel/watchdog_perf.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
--- a/kernel/watchdog_perf.c
+++ b/kernel/watchdog_perf.c
@@ -75,11 +75,15 @@ static bool watchdog_check_timestamp(voi
__this_cpu_write(last_timestamp, now);
return true;
}
-#else
-static inline bool watchdog_check_timestamp(void)
+
+static void watchdog_init_timestamp(void)
{
- return true;
+ __this_cpu_write(nmi_rearmed, 0);
+ __this_cpu_write(last_timestamp, ktime_get_mono_fast_ns());
}
+#else
+static inline bool watchdog_check_timestamp(void) { return true; }
+static inline void watchdog_init_timestamp(void) { }
#endif
static struct perf_event_attr wd_hw_attr = {
@@ -161,6 +165,7 @@ void watchdog_hardlockup_enable(unsigned
if (!atomic_fetch_inc(&watchdog_cpus))
pr_info("Enabled. Permanently consumes one hw-PMU counter.\n");
+ watchdog_init_timestamp();
perf_event_enable(this_cpu_read(watchdog_ev));
}