Message-ID: <8975d74f5807f76478ed08206dd5eda133a55bb0.camel@surriel.com>
Date: Sat, 13 Jul 2024 13:14:27 -0400
From: Rik van Riel <riel@...riel.com>
To: neeraj.upadhyay@...nel.org, linux-kernel@...r.kernel.org
Cc: rcu@...r.kernel.org, kernel-team@...a.com, rostedt@...dmis.org, 
 mingo@...nel.org, peterz@...radead.org, paulmck@...nel.org,
 leobras@...hat.com,  imran.f.khan@...cle.com
Subject: Re: [PATCH  1/3] locking/csd_lock: Print large numbers as negatives

On Sat, 2024-07-13 at 22:28 +0530, neeraj.upadhyay@...nel.org wrote:
> From: "Paul E. McKenney" <paulmck@...nel.org>
> 
> The CSD-lock-hold diagnostics from CONFIG_CSD_LOCK_WAIT_DEBUG are
> printed in nanoseconds as unsigned long longs, which is a bit obtuse
> for human readers when timing bugs result in negative CSD-lock hold
> times. Yes, there are some people to whom it is immediately obvious
> that 18446744073709551615 is really -1, but for the rest of us...
> 
To clarify the report a little bit: it appears that, on some CPU
models, sched_clock() values occasionally jump backward on the
same CPU.
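
That backward jump is what produces the 18446744073709551615 the
patch mentions. As a minimal userspace sketch (not the kernel patch
itself): an unsigned hold-time delta computed across such a jump
wraps around, and printing it as a signed value makes the -1 visible.

#include <stdio.h>

int main(void)
{
        /* Hypothetical nanosecond timestamps: the clock jumps back 1 ns. */
        unsigned long long ts_start = 1000;
        unsigned long long ts_now = 999;
        unsigned long long delta = ts_now - ts_start;

        printf("unsigned: %llu\n", delta);            /* 18446744073709551615 */
        printf("signed:   %lld\n", (long long)delta); /* -1 */
        return 0;
}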

Looking at the number of systems where this happened over time,
leaving out the exact numbers, the distribution looks something
like this:
- 1 day:     N systems
- 3 days:   3N systems
- 1 week:   7N systems
- 1 month: 26N systems
- 90 days: 72N systems

This does not appear to be a case of a few systems with bad
hardware, where it happens constantly on the same systems, but
rather something that many systems experience occasionally, and
then not again for months.

The systems in question advertise CONSTANT_TSC, NONSTOP_TSC,
and generally seem to have stable, nonstop, monotonic TSC
values, but sometimes the values go back in time a little bit.
The cycles_2_ns data does not appear to change during the
episodes of sched_clock() going backward.
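
For reference, the cycles_2_ns data boils down to a per-CPU
mul/shift/offset triple, so if it really does stay constant across
these episodes, a backwards sched_clock() value implies the TSC
reading itself went backward. Roughly, as a simplified sketch (the
field names mirror struct cyc2ns_data in arch/x86/kernel/tsc.c, but
this is not the exact kernel code):

#include <linux/types.h>
#include <linux/math64.h>

struct cyc2ns_snapshot {
        u32 mul;
        u32 shift;
        u64 offset;
};

static u64 cycles_to_ns(const struct cyc2ns_snapshot *d, u64 cycles)
{
        /* ns = offset + (cycles * mul) >> shift, without 64-bit overflow */
        return d->offset + mul_u64_u32_shr(cycles, d->mul, d->shift);
}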

The csd_lock code is not the only thing that breaks when the
sched_clock() values go backward, but it seems to be the best
detection mechanism we have for it right now.

I don't know whether adding more detection of this issue would
increase the number of systems where backwards sched_clock is
observed.
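
To make "more detection" concrete, one hypothetical form it could
take (a sketch only, not code that exists in the tree) is a per-CPU
check that sched_clock() never goes backward on the same CPU:

#include <linux/percpu.h>
#include <linux/printk.h>
#include <linux/sched/clock.h>
#include <linux/smp.h>

/*
 * Hypothetical check: remember the last sched_clock() value seen on
 * this CPU and warn if a later reading is smaller. Would need to run
 * with preemption disabled so both readings come from the same CPU.
 */
static DEFINE_PER_CPU(u64, last_sched_clock);

static void check_sched_clock_monotonic(void)
{
        u64 now = sched_clock();
        u64 *last = this_cpu_ptr(&last_sched_clock);

        if (now < *last)
                pr_warn_once("sched_clock() went back %llu ns on CPU %d\n",
                             (unsigned long long)(*last - now),
                             smp_processor_id());
        *last = now;
}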

Many of the systems with backwards-going TSC values seem to
encounter a burst of them over some period of time, end up getting
rebooted, and then behave well for months afterward.

> Reported-by: Rik van Riel <riel@...riel.com>
> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> Cc: Imran Khan <imran.f.khan@...cle.com>
> Cc: Ingo Molnar <mingo@...nel.org>
> Cc: Leonardo Bras <leobras@...hat.com>
> Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>
> Cc: Rik van Riel <riel@...riel.com>
> Signed-off-by: Neeraj Upadhyay <neeraj.upadhyay@...nel.org>
> 
Reviewed-by: Rik van Riel <riel@...riel.com>

-- 
All Rights Reversed.
