Open Source and information security mailing list archives
 
Message-ID: <43d513c5-7620-481b-ab7e-30e76babbc80@paulmck-laptop>
Date: Mon, 30 Sep 2024 12:09:07 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Valentin Schneider <vschneid@...hat.com>
Cc: Chen Yu <yu.c.chen@...el.com>, Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, sfr@...b.auug.org.au,
	linux-next@...r.kernel.org, kernel-team@...a.com
Subject: Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error

On Fri, Sep 13, 2024 at 11:00:39AM -0700, Paul E. McKenney wrote:
> On Fri, Sep 13, 2024 at 06:55:34PM +0200, Valentin Schneider wrote:
> > On 13/09/24 07:08, Paul E. McKenney wrote:
> > > On Sun, Sep 08, 2024 at 09:32:18AM -0700, Paul E. McKenney wrote:
> > >>
> > >> Just following up...
> > >>
> > >> For whatever it is worth, on last night's run of next-20240906, I got
> > >> nine failures out of 100 6-hour runs of rcutorture’s TREE03 scenario.
> > >> These failures were often, but not always, shortly followed by a hard hang.
> > >>
> > >> The warning at line 1995 is the WARN_ON_ONCE(on_dl_rq(dl_se))
> > >> in enqueue_dl_entity() and the warning at line 1971 is the
> > >> WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node)) in __enqueue_dl_entity().
> > >>
> > >> The pair of splats is shown below, in case it helps.
> > >
> > > Again following up...
> > >
> > > I am still seeing this on next-20240912, with six failures out of 100
> > > 6-hour runs of rcutorture’s TREE03 scenario.  Statistics suggests that
> > > we not read much into the change in frequency.
> > >
> > > Please let me know if there are any diagnostic patches or options that
> > > I should apply.
> > 
> > Hey, sorry, I haven't forgotten about this. I've just spread myself a
> > bit too thin, and apparently I'm also supposed to prepare some slides
> > for next week. I'll get back to this soonish.
> 
> I know that feeling!  Just didn't want it to get lost.

And Peter asked that I send along a reproducer, which I am finally getting
around to doing:

	tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 12h --configs "100*TREE03" --trust-make

Note that this run will consume 19,200 CPU hours, or almost two CPU
years.  Therefore, this is best done across a largish number of systems.
The kvm-remote.sh script can be helpful for this sort of thing; it takes
a quoted list of systems before the rest of the arguments shown above.
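As a sketch of that kvm-remote.sh invocation (the hostnames here are
placeholders, not real systems; the remaining arguments match the kvm.sh
command above):

```shell
# Distribute the same TREE03 reproducer across several remote systems.
# "sys1 sys2 sys3" is a hypothetical quoted list of hostnames.
tools/testing/selftests/rcutorture/bin/kvm-remote.sh "sys1 sys2 sys3" \
	--allcpus --duration 12h --configs "100*TREE03" --trust-make
```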

Doing this on a -next from last week got me 15 failures similar to the
following:

[41212.683966] WARNING: CPU: 14 PID: 126 at kernel/sched/deadline.c:1995 enqueue_dl_entity+0x511/0x5d0
[41212.712453] WARNING: CPU: 14 PID: 126 at kernel/sched/deadline.c:1971 enqueue_dl_entity+0x54f/0x5d0

							Thanx, Paul
