Message-ID: <20250129202909.GQNNqNoH@linutronix.de>
Date: Wed, 29 Jan 2025 21:29:09 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Waiman Long <llong@...hat.com>
Cc: John Stultz <jstultz@...gle.com>, Thomas Gleixner <tglx@...utronix.de>,
	Stephen Boyd <sboyd@...nel.org>, Feng Tang <feng.tang@...el.com>,
	"Paul E. McKenney" <paulmck@...nel.org>,
	Clark Williams <clrkwllms@...nel.org>,
	Steven Rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org,
	linux-rt-devel@...ts.linux.dev
Subject: Re: [PATCH v2 2/2] clocksource: Use get_random_bytes() in
 clocksource_verify_choose_cpus()

On 2025-01-29 12:03:34 [-0500], Waiman Long wrote:
> I guess we will have to break clocksource_verify_choose_cpus() into two
> separate parts, one without preemption disabled and the other with
> preemption disabled. I don't think it is a good idea to just use
> migrate_disable(), as we may incur too much latency, which could affect
> the test result.

Something like

diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index 7304d7cf47f2d..bb7c845d7248c 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -373,10 +373,10 @@ void clocksource_verify_percpu(struct clocksource *cs)
 	cpumask_clear(&cpus_ahead);
 	cpumask_clear(&cpus_behind);
 	cpus_read_lock();
-	preempt_disable();
+	migrate_disable();
 	clocksource_verify_choose_cpus();
 	if (cpumask_empty(&cpus_chosen)) {
-		preempt_enable();
+		migrate_enable();
 		cpus_read_unlock();
 		pr_warn("Not enough CPUs to check clocksource '%s'.\n", cs->name);
 		return;
@@ -386,6 +386,7 @@ void clocksource_verify_percpu(struct clocksource *cs)
 	for_each_cpu(cpu, &cpus_chosen) {
 		if (cpu == testcpu)
 			continue;
+		preempt_disable();
 		csnow_begin = cs->read(cs);
 		smp_call_function_single(cpu, clocksource_verify_one_cpu, cs, 1);
 		csnow_end = cs->read(cs);
@@ -400,8 +401,9 @@ void clocksource_verify_percpu(struct clocksource *cs)
 			cs_nsec_max = cs_nsec;
 		if (cs_nsec < cs_nsec_min)
 			cs_nsec_min = cs_nsec;
+		preempt_enable();
 	}
-	preempt_enable();
+	migrate_enable();
 	cpus_read_unlock();
 	if (!cpumask_empty(&cpus_ahead))
 		pr_warn("        CPUs %*pbl ahead of CPU %d for clocksource %s.\n",

> I will send out a v3 patch to fix that.

should do the job. It is untested…
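
In other words, the resulting structure is (annotation mine; unchanged
code elided with ...):

	cpus_read_lock();
	/*
	 * migrate_disable() keeps the task pinned to the current CPU
	 * while leaving it preemptible, so the CPU selection (which per
	 * the Subject now uses get_random_bytes()) can run without
	 * preemption disabled:
	 */
	migrate_disable();
	clocksource_verify_choose_cpus();
	...
	for_each_cpu(cpu, &cpus_chosen) {
		if (cpu == testcpu)
			continue;
		/*
		 * Only the timed window has to be non-preemptible; the
		 * cs->read() pair brackets the cross-CPU call tightly
		 * so the measured delta is not inflated by preemption:
		 */
		preempt_disable();
		csnow_begin = cs->read(cs);
		smp_call_function_single(cpu, clocksource_verify_one_cpu, cs, 1);
		csnow_end = cs->read(cs);
		... /* compute and record the per-CPU delta */
		preempt_enable();
	}
	migrate_enable();
	cpus_read_unlock();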

> Thanks,
> Longman

Sebastian
