Message-ID: <d6033378-d716-4848-b7a5-dcf1a6b14669@paulmck-laptop>
Date: Tue, 1 Oct 2024 03:10:48 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Valentin Schneider <vschneid@...hat.com>
Cc: Chen Yu <yu.c.chen@...el.com>, Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, sfr@...b.auug.org.au,
	linux-next@...r.kernel.org, kernel-team@...a.com,
	Tomas Glozar <tglozar@...hat.com>
Subject: Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error

On Mon, Sep 30, 2024 at 10:44:24PM +0200, Valentin Schneider wrote:
> On 30/09/24 12:09, Paul E. McKenney wrote:
> > On Fri, Sep 13, 2024 at 11:00:39AM -0700, Paul E. McKenney wrote:
> >> On Fri, Sep 13, 2024 at 06:55:34PM +0200, Valentin Schneider wrote:
> >> > On 13/09/24 07:08, Paul E. McKenney wrote:
> >> > > On Sun, Sep 08, 2024 at 09:32:18AM -0700, Paul E. McKenney wrote:
> >> > >>
> >> > >> Just following up...
> >> > >>
> >> > >> For whatever it is worth, on last night's run of next-20240906, I got
> >> > >> nine failures out of 100 6-hour runs of rcutorture’s TREE03 scenario.
> >> > >> These failures were often, but not always, shortly followed by a hard hang.
> >> > >>
> >> > >> The warning at line 1995 is the WARN_ON_ONCE(on_dl_rq(dl_se))
> >> > >> in enqueue_dl_entity() and the warning at line 1971 is the
> >> > >> WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node)) in __enqueue_dl_entity().
> >> > >>
> >> > >> The pair of splats is shown below, in case it helps.
> >> > >
> >> > > Again following up...
> >> > >
> >> > > I am still seeing this on next-20240912, with six failures out of 100
> >> > > 6-hour runs of rcutorture’s TREE03 scenario.  Statistics suggest that
> >> > > we should not read much into the change in frequency.
> >> > >
> >> > > Please let me know if there are any diagnostic patches or options that
> >> > > I should apply.
> >> >
> >> > Hey, sorry, I haven't forgotten about this; I've just spread myself a
> >> > bit too thin, and apparently I'm supposed to prepare some slides for
> >> > next week. I'll get back to this soonish.
> >>
> >> I know that feeling!  Just didn't want it to get lost.
> >
> > And Peter asked that I send along a reproducer, which I am finally getting
> > around to doing:
> >
> >       tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 12h --configs "100*TREE03" --trust-make
> >
> 
> FYI Tomas (on Cc) has been working on getting pretty much this running on
> our infra, with no hits so far.
> 
> How much of a pain would it be to record an ftrace trace while this runs?
> I'm thinking sched_switch, sched_wakeup and function-tracing
> dl_server_start() and dl_server_stop() would be a start.
> 
> AIUI this is running under QEMU so we'd need to record the trace within
> that, I'm guessing we can (ab)use --bootargs to feed it tracing arguments,
> but how do we get the trace out?
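[One way to wire that up, as a sketch: trace_event=, ftrace=, ftrace_filter=,
and ftrace_dump_on_oops are all documented kernel boot parameters, and
--bootargs is kvm.sh's mechanism for passing them through.  Whether
dl_server_start()/dl_server_stop() are visible to the function tracer
(not inlined or notrace) is an assumption.]

```shell
# Sketch: boot parameters enabling the trace events asked for above,
# function-tracing only dl_server_start()/dl_server_stop(), and dumping
# the trace buffer to the console if the kernel oopses.
BOOTARGS="trace_event=sched:sched_switch,sched:sched_wakeup"
BOOTARGS="$BOOTARGS ftrace=function ftrace_filter=dl_server_start,dl_server_stop"
BOOTARGS="$BOOTARGS ftrace_dump_on_oops"

# The rcutorture invocation would then look like this (echoed, not run):
echo tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus \
    --duration 12h --configs "100*TREE03" \
    --bootargs "$BOOTARGS" --trust-make
```

[Since ftrace_dump_on_oops writes the trace to the console, the trace
would presumably land in the console log that kvm.sh already captures,
which answers the "how do we get the trace out" part.]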

Me, I would change those warnings to dump the trace buffer to the
console when triggered.  Let me see if I can come up with something
better over breakfast.  And yes, there is the concern that adding tracing
will suppress this issue.
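[A rough, untested sketch of that idea: ftrace_dump() with DUMP_ALL is
the same in-kernel interface that ftrace_dump_on_oops uses, and hanging
it off this particular WARN_ON_ONCE() in enqueue_dl_entity() is just one
choice of trigger.]

```c
/* Sketch, not a tested patch: in enqueue_dl_entity(), dump the ftrace
 * ring buffer to the console when the warning condition is hit.
 */
if (WARN_ON_ONCE(on_dl_rq(dl_se)))
	ftrace_dump(DUMP_ALL);
```

[Note that WARN_ON_ONCE() warns only once but returns the condition on
every call, so a separate once-only guard would be needed if repeated
dumps are unwanted.]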

So is there some state that I could manually dump upon triggering either
of these two warnings?  That approach would minimize the probability of
suppressing the problem.

							Thanx, Paul
