Message-ID: <7e5ab11f-f8f9-4e91-becf-09f3cb279c2f@paulmck-laptop>
Date: Fri, 11 Oct 2024 10:40:59 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Rik van Riel <riel@...riel.com>
Cc: linux-kernel@...r.kernel.org, neeraj.upadhyay@...nel.org,
	leobras@...hat.com, tglx@...utronix.de, peterz@...radead.org,
	qiyuzhu2@....com
Subject: Re: locking/csd-lock: Switch from sched_clock() to
 ktime_get_mono_fast_ns()

On Thu, Oct 10, 2024 at 11:31:08AM -0700, Paul E. McKenney wrote:
> On Thu, Oct 10, 2024 at 11:13:35AM -0400, Rik van Riel wrote:
> > On Wed, 2024-10-09 at 10:57 -0700, Paul E. McKenney wrote:
> > > Currently, the CONFIG_CSD_LOCK_WAIT_DEBUG code uses sched_clock()
> > > to check for excessive CSD-lock wait times.  This works, but does not
> > > guarantee monotonic timestamps.  Therefore, switch from sched_clock()
> > > to ktime_get_mono_fast_ns(), which does guarantee monotonic
> > > timestamps, at least in the absence of calls from NMI handlers,
> > > which are not involved in this code path.
> > > 
> > > Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> > > Cc: Neeraj Upadhyay <neeraj.upadhyay@...nel.org>
> > > Cc: Rik van Riel <riel@...riel.com>
> > > Cc: Leonardo Bras <leobras@...hat.com>
> > > Cc: Thomas Gleixner <tglx@...utronix.de>
> > > Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>
> > 
> > The functional change here is that sched_clock() uses rdtsc(),
> > while ktime_get_mono_fast_ns() uses rdtsc_ordered().
> > 
> > I don't know if sched_clock() should also be using rdtsc_ordered().
> > 
> > Reviewed-by: Rik van Riel <riel@...riel.com>
> 
> Thank you!  I will apply this on my next rebase.

And please see below for the updated version.

							Thanx, Paul

------------------------------------------------------------------------

locking/csd-lock: Switch from sched_clock() to ktime_get_mono_fast_ns()

Currently, the CONFIG_CSD_LOCK_WAIT_DEBUG code uses sched_clock() to check
for excessive CSD-lock wait times.  This works, but does not guarantee
monotonic timestamps on x86 due to the sched_clock() function's use of
the rdtsc instruction, which does not guarantee ordering.  This means
that, given successive calls to sched_clock(), the second might return
an earlier time than the first, that is, time might seem to go backwards.
This can (and does!) result in false-positive CSD-lock wait complaints
claiming almost 2^64 nanoseconds of delay.
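
For concreteness, a minimal userspace sketch of the failure mode
(illustration only, not part of the patch): with u64 timestamps, a
reading that goes backwards by even a few nanoseconds makes the
computed delta wrap to just under 2^64.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	uint64_t ts1 = 1000000;		/* first sample, in ns */
	uint64_t ts2 = ts1 - 5;		/* second sample appears 5 ns earlier */
	uint64_t delta = ts2 - ts1;	/* unsigned subtraction wraps around */

	/* Prints 18446744073709551611, i.e. almost 2^64 ns of "delay". */
	printf("apparent delay: %" PRIu64 " ns\n", delta);
	return 0;
}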

Therefore, switch from sched_clock() to ktime_get_mono_fast_ns(), which
guarantees monotonic timestamps because it is based on the
rdtsc_ordered() function, which, as its name implies, provides ordered
timestamps, at least in the absence of calls from NMI handlers, which
are not involved in this code path.
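
For reference, a rough userspace analogue of the ordering difference
(illustration only; the kernel's rdtsc_ordered() uses an
alternatives-patched barrier, approximated here with LFENCE):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <x86intrin.h>	/* __rdtsc(), _mm_lfence() */

/* Like sched_clock()'s plain RDTSC: the CPU may execute it early. */
static inline uint64_t tsc_unordered(void)
{
	return __rdtsc();
}

/* Like rdtsc_ordered(): a fence keeps RDTSC from running ahead of
 * earlier instructions. */
static inline uint64_t tsc_ordered(void)
{
	_mm_lfence();
	return __rdtsc();
}

int main(void)
{
	uint64_t u0 = tsc_unordered(), u1 = tsc_unordered();
	uint64_t o0 = tsc_ordered(),  o1 = tsc_ordered();

	printf("unordered delta: %" PRIu64 " cycles\n", u1 - u0);
	printf("ordered delta:   %" PRIu64 " cycles\n", o1 - o0);
	return 0;
}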

Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
Reviewed-by: Rik van Riel <riel@...riel.com>
Cc: Neeraj Upadhyay <neeraj.upadhyay@...nel.org>
Cc: Leonardo Bras <leobras@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>

diff --git a/kernel/smp.c b/kernel/smp.c
index f25e20617b7eb..27dc31a146a35 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -246,7 +246,7 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
 		return true;
 	}
 
-	ts2 = sched_clock();
+	ts2 = ktime_get_mono_fast_ns();
 	/* How long since we last checked for a stuck CSD lock.*/
 	ts_delta = ts2 - *ts1;
 	if (likely(ts_delta <= csd_lock_timeout_ns * (*nmessages + 1) *
@@ -321,7 +321,7 @@ static void __csd_lock_wait(call_single_data_t *csd)
 	int bug_id = 0;
 	u64 ts0, ts1;
 
-	ts1 = ts0 = sched_clock();
+	ts1 = ts0 = ktime_get_mono_fast_ns();
 	for (;;) {
 		if (csd_lock_wait_toolong(csd, ts0, &ts1, &bug_id, &nmessages))
 			break;
