Message-ID: <20241009180708.GU17263@noisy.programming.kicks-ass.net>
Date: Wed, 9 Oct 2024 20:07:08 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: linux-kernel@...r.kernel.org, neeraj.upadhyay@...nel.org,
	riel@...riel.com, leobras@...hat.com, tglx@...utronix.de,
	qiyuzhu2@....com
Subject: Re: locking/csd-lock: Switch from sched_clock() to
 ktime_get_mono_fast_ns()

On Wed, Oct 09, 2024 at 10:57:24AM -0700, Paul E. McKenney wrote:
> Currently, the CONFIG_CSD_LOCK_WAIT_DEBUG code uses sched_clock()
> to check for excessive CSD-lock wait times.  This works, but does not
> guarantee monotonic timestamps. 

It does if you provide a sane TSC.

> Therefore, switch from sched_clock()
> to ktime_get_mono_fast_ns(), which does guarantee monotonic timestamps,
> at least in the absence of calls from NMI handlers, which are not involved
> in this code path.

That can end up using HPET in the case of non-sane TSC.

In the good case they're equal; in the bad case you're switching from
slightly dodgy time to super expensive time. Is that what you want?

> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> Cc: Neeraj Upadhyay <neeraj.upadhyay@...nel.org>
> Cc: Rik van Riel <riel@...riel.com>
> Cc: Leonardo Bras <leobras@...hat.com>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>
> 
> diff --git a/kernel/smp.c b/kernel/smp.c
> index f25e20617b7eb..27dc31a146a35 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -246,7 +246,7 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
>  		return true;
>  	}
>  
> -	ts2 = sched_clock();
> +	ts2 = ktime_get_mono_fast_ns();
>  	/* How long since we last checked for a stuck CSD lock.*/
>  	ts_delta = ts2 - *ts1;
>  	if (likely(ts_delta <= csd_lock_timeout_ns * (*nmessages + 1) *
> @@ -321,7 +321,7 @@ static void __csd_lock_wait(call_single_data_t *csd)
>  	int bug_id = 0;
>  	u64 ts0, ts1;
>  
> -	ts1 = ts0 = sched_clock();
> +	ts1 = ts0 = ktime_get_mono_fast_ns();
>  	for (;;) {
>  		if (csd_lock_wait_toolong(csd, ts0, &ts1, &bug_id, &nmessages))
>  			break;
