Message-ID: <ZbO3bN3WEUvejRBp@feng-clx.sh.intel.com>
Date: Fri, 26 Jan 2024 21:45:16 +0800
From: Feng Tang <feng.tang@...el.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
CC: John Stultz <jstultz@...gle.com>, Thomas Gleixner <tglx@...utronix.de>,
	Stephen Boyd <sboyd@...nel.org>, <linux-kernel@...r.kernel.org>,
	"Peter Zijlstra" <peterz@...radead.org>, Waiman Long <longman@...hat.com>,
	Jin Wang <jin1.wang@...el.com>
Subject: Re: [PATCH] clocksource: Scale the max retry number of watchdog read
 according to CPU numbers

Hi Paul,

On Fri, Jan 26, 2024 at 04:27:19AM -0800, Paul E. McKenney wrote:
> On Fri, Jan 26, 2024 at 05:12:50PM +0800, Feng Tang wrote:
> > There was a bug on one 8-socket server where the TSC was wrongly marked
> > 'unstable' and disabled during boot (it reproduces roughly once per 120
> > rounds of reboot tests), with the log:
> > 
> >     clocksource: timekeeping watchdog on CPU227: wd-tsc-wd excessive read-back delay of 153560ns vs. limit of 125000ns,
> >     wd-wd read-back delay only 11440ns, attempt 3, marking tsc unstable
> >     tsc: Marking TSC unstable due to clocksource watchdog
> >     TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
> >     sched_clock: Marking unstable (119294969739, 159204297)<-(125446229205, -5992055152)
> >     clocksource: Checking clocksource tsc synchronization from CPU 319 to CPUs 0,99,136,180,210,542,601,896.
> >     clocksource: Switched to clocksource hpet
> > 
> > The reason is that on platforms with many CPUs there is sporadic large or
> > huge latency when reading the watchdog/clocksource during boot or when the
> > system is under a stress workload, and both the frequency and the maximum
> > value of that latency grow with the number of CPUs. The current code
> > already has logic to detect and filter such high-latency cases by reading
> > the watchdog three times and checking the two deltas. Due to the randomness
> > of the latency, there is a small chance that the first delta (latency) is
> > big while the second delta is small and looks valid, which lets it escape
> > the check. The 'max_cswd_read_retries' retry limit covers this case, but
> > its default value is only 2, which may not be enough for machines with a
> > huge number of CPUs.
> > 
> > So scale up the maximum retry number according to the CPU count to better
> > filter out such latency noise on large systems. This has been verified
> > over 4 days and 670 rounds of reboot tests on the 8-socket machine.
> > 
> > Tested-by: Jin Wang <jin1.wang@...el.com>
> > Signed-off-by: Feng Tang <feng.tang@...el.com>
> > ---
> >  kernel/time/clocksource.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> > 
> > diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
> > index c108ed8a9804..f15283101906 100644
> > --- a/kernel/time/clocksource.c
> > +++ b/kernel/time/clocksource.c
> > @@ -226,6 +226,15 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
> >  	u64 wd_end, wd_end2, wd_delta;
> >  	int64_t wd_delay, wd_seq_delay;
> >  
> > +	/*
> > +	 * If no user changes the default value, scale the retry threshold
> > +	 * according to the number of CPUs. Testing shows that the more CPUs a
> > +	 * platform has, the larger the read latency seen during boot or under
> > +	 * a stress workload. Increase the retry number to reduce false alarms.
> > +	 */
> > +	if (max_cswd_read_retries == 2)
> 
> I like the self-adjusting behavior!

Thanks!
 
> But why not make max_cswd_read_retries a long (instead of a ulong)
> defaulting to -1?  Then when someone sets it to a specific positive
> value, they get exactly that value, while leaving it at -1 gets the
> default CPU-scaling behavior.  Zero and other negative values should
> get a warning in order to reserve them for possible future use.

Good suggestions!

> I also suggest doing the adjustment at boot time, for example, using
> an early_initcall().  That way the test code also sees the scaled value.

I also thought about doing the adjustment once in the early boot phase
using num_possible_cpus(), but gave up on that because the parameter can
be changed at runtime through the sysfs module parameter interface, and
the online CPU count can change with CPU hotplug.
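
Just for reference, such a one-time adjustment could look roughly like the
sketch below (clocksource_scale_max_retries is only a made-up name for
illustration), but it would not follow a later runtime write to the module
parameter, nor CPU hotplug:

static int __init clocksource_scale_max_retries(void)
{
	/* Only scale when the user kept the -1 default. */
	if (max_cswd_read_retries == -1)
		max_cswd_read_retries = ilog2(num_possible_cpus()) + 1;
	return 0;
}
early_initcall(clocksource_scale_max_retries);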

Since the watchdog timer only fires (if not skipped) every 500 ms, how
about doing the ilog2 math every time, like below:

diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
index 1d42d4b17327..9104bdecf34e 100644
--- a/include/linux/clocksource.h
+++ b/include/linux/clocksource.h
@@ -291,7 +291,7 @@ static inline void timer_probe(void) {}
 #define TIMER_ACPI_DECLARE(name, table_id, fn)		\
 	ACPI_DECLARE_PROBE_ENTRY(timer, name, table_id, 0, NULL, 0, fn)
 
-extern ulong max_cswd_read_retries;
+extern long max_cswd_read_retries;
 void clocksource_verify_percpu(struct clocksource *cs);
 
 #endif /* _LINUX_CLOCKSOURCE_H */
diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index c108ed8a9804..867bb36e6dad 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -208,8 +208,8 @@ void clocksource_mark_unstable(struct clocksource *cs)
 	spin_unlock_irqrestore(&watchdog_lock, flags);
 }
 
-ulong max_cswd_read_retries = 2;
-module_param(max_cswd_read_retries, ulong, 0644);
+long max_cswd_read_retries = -1;
+module_param(max_cswd_read_retries, long, 0644);
 EXPORT_SYMBOL_GPL(max_cswd_read_retries);
 static int verify_n_cpus = 8;
 module_param(verify_n_cpus, int, 0644);
@@ -225,8 +225,17 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
 	unsigned int nretries;
 	u64 wd_end, wd_end2, wd_delta;
 	int64_t wd_delay, wd_seq_delay;
+	long max_retries = max_cswd_read_retries;
+
+	if (max_cswd_read_retries <= 0) {
+		if (max_cswd_read_retries != -1)
+			pr_warn_once("max_cswd_read_retries set to an invalid value: %ld\n",
+				     max_cswd_read_retries);
 
-	for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) {
+		max_retries = ilog2(num_online_cpus()) + 1;
+	}
+
+	for (nretries = 0; nretries <= max_retries; nretries++) {
 		local_irq_disable();
 		*wdnow = watchdog->read(watchdog);
 		*csnow = cs->read(cs);
@@ -238,7 +247,7 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
 		wd_delay = clocksource_cyc2ns(wd_delta, watchdog->mult,
 					      watchdog->shift);
 		if (wd_delay <= WATCHDOG_MAX_SKEW) {
-			if (nretries > 1 || nretries >= max_cswd_read_retries) {
+			if (nretries > 1 || nretries >= max_retries) {
 				pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
 					smp_processor_id(), watchdog->name, nretries);
 			}
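
Just to illustrate how the ilog2 scaling behaves (a quick userspace
approximation, not kernel code, and the CPU counts below are arbitrary
examples), the max_retries value grows only logarithmically with the
online CPU count:

/* Userspace illustration of the proposed scaling. */
#include <stdio.h>

/* Rough stand-in for the kernel's ilog2(): floor(log2(n)). */
static int ilog2_approx(unsigned int n)
{
	int log = -1;

	while (n) {
		n >>= 1;
		log++;
	}
	return log;
}

int main(void)
{
	unsigned int cpus[] = { 8, 64, 224, 896 };
	unsigned int i;

	for (i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++)
		printf("%4u CPUs -> max_retries = %d\n",
		       cpus[i], ilog2_approx(cpus[i]) + 1);
	return 0;
}

which gives 4, 7, 8 and 10 for those CPU counts, versus the current fixed
default of 2.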

Thanks,
Feng
