Message-ID: <20201120185757.GL1437@paulmck-ThinkPad-P72>
Date: Fri, 20 Nov 2020 10:57:57 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Mark Rutland <mark.rutland@....com>
Cc: Marco Elver <elver@...gle.com>,
Steven Rostedt <rostedt@...dmis.org>,
Anders Roxell <anders.roxell@...aro.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Jann Horn <jannh@...gle.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
kasan-dev <kasan-dev@...glegroups.com>, rcu@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Tejun Heo <tj@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
linux-arm-kernel@...ts.infradead.org
Subject: Re: linux-next: stall warnings and deadlock on Arm64 (was: [PATCH]
kfence: Avoid stalling...)
On Fri, Nov 20, 2020 at 06:02:06PM +0000, Mark Rutland wrote:
> On Fri, Nov 20, 2020 at 09:38:24AM -0800, Paul E. McKenney wrote:
> > On Fri, Nov 20, 2020 at 03:22:00PM +0000, Mark Rutland wrote:
> > > On Fri, Nov 20, 2020 at 06:39:28AM -0800, Paul E. McKenney wrote:
> > > > On Fri, Nov 20, 2020 at 03:19:28PM +0100, Marco Elver wrote:
> > > > > I found that disabling ftrace for some of kernel/rcu (see below) solved
> > > > > the stalls (and, I assume, any mention of deadlocks as a side-effect),
> > > > > resulting in successful boot.
> > > > >
> > > > > Does that provide any additional clues? I tried to narrow it down to 1-2
> > > > > files, but that doesn't seem to work.
> > > >
> > > > There were similar issues during the x86/entry work. Are the ARM guys
> > > > doing arm64/entry work now?
> > >
> > > I'm currently looking at it. I had been trying to shift things to C for
> > > a while, and right now I'm trying to fix the lockdep state tracking,
> > > which is requiring untangling lockdep/rcu/tracing.
> > >
> > > The main issue I see remaining atm is that we don't save/restore the
> > > lockdep state over exceptions taken from kernel to kernel. That could
> > > result in lockdep thinking IRQs are disabled when they're actually
> > > enabled (because code in the nested context might do a save/restore
> > > while IRQs are disabled, then return to a context where IRQs are
> > > enabled), but AFAICT shouldn't result in the inverse in most cases since
> > > the non-NMI handlers all call lockdep_hardirqs_disabled().
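For concreteness, one way to picture that save/restore, as a rough sketch only
(this is not the actual arm64 or x86 entry code; the entry/exit hooks and the
state struct are made-up names, while lockdep_hardirqs_enabled()/_on()/_off()
are existing lockdep hooks):

	#include <linux/irqflags.h>
	#include <linux/ftrace.h>	/* CALLER_ADDR0 */

	/* Made-up container for state saved on kernel-to-kernel entry. */
	struct nested_entry_state {
		bool lockdep_hardirqs;	/* lockdep's view in the interrupted context */
	};

	static void sketch_kernel_entry(struct nested_entry_state *state)
	{
		/* Remember what lockdep believed before this exception. */
		state->lockdep_hardirqs = lockdep_hardirqs_enabled();
		/* The exception itself runs with hardirqs masked. */
		lockdep_hardirqs_off(CALLER_ADDR0);
	}

	static void sketch_kernel_exit(struct nested_entry_state *state)
	{
		/* Restore lockdep's view to match the interrupted context. */
		if (state->lockdep_hardirqs)
			lockdep_hardirqs_on(CALLER_ADDR0);
	}

Without something along these lines, a nested handler that leaves lockdep
believing IRQs are off leaks that belief back into an outer context where
IRQs are in fact enabled.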
> > >
> > > I'm at a loss to explain the rcu vs ftrace bits, so if you have any
> > > pointers to the issues seen with the x86 rework that'd be quite handy.
> >
> > There were several over a number of months. I especially recall issues
> > with the direct-from-idle execution of smp_call_function*() handlers,
> > and also with some of the special cases in the entry code, for example,
> > reentering the kernel from the kernel. This latter could cause RCU to
> > not be watching when it should have been or vice versa.
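For the archives, the sort of sequencing those reentry paths need, sketched
loosely (this is not code from the x86 series; sketch_handle_nested_exception()
is a made-up name, and rcu_is_watching()/rcu_irq_enter()/rcu_irq_exit() are
the current names of the RCU hooks involved):

	#include <linux/rcupdate.h>

	static void sketch_handle_nested_exception(void)
	{
		bool was_watching = rcu_is_watching();

		/*
		 * If we interrupted a context where RCU was not watching
		 * (idle, or partway through entry/exit), tell RCU about this
		 * exception before any tracing or RCU read-side code can run.
		 */
		if (!was_watching)
			rcu_irq_enter();

		/* ... handler body; instrumentation is safe from here on ... */

		if (!was_watching)
			rcu_irq_exit();
	}

Get either half wrong and you see exactly those two failure modes: RCU not
watching when it should be, or still watching when the interrupted context
expected otherwise.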
>
> Ah; those are precisely the cases I'm currently fixing, so if we're
> lucky this is an indirect result of one of those rather than a novel
> source of pain...
Here is hoping!
> > I would of course be most aware of the issues that impinged on RCU
> > and that were located by rcutorture. This is actually not hard to run,
> > especially if the ARM bits in the scripting have managed to avoid bitrot.
> > The "modprobe rcutorture" approach has fewer dependencies. Either way:
> > https://paulmck.livejournal.com/57769.html and later posts.
>
> That is a very good idea. I'd been relying on Syzkaller to tickle the
> issue, but the torture infrastructure is a much better fit for this
> problem. I hadn't realised how comprehensive the scripting was, thanks
> for this!
But why not both rcutorture and Syzkaller? ;-)
> I'll see about giving that a go once I have the irq-from-idle cases
> sorted, as those are very obviously broken if you hack
> trace_hardirqs_{on,off}() to check that RCU is watching.
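For anyone wanting to reproduce that check, a minimal sketch of what I take
the hack to be (the helper name is made up; rcu_is_watching() is the real
API), called first thing from the trace_hardirqs_{on,off}() implementations:

	#include <linux/bug.h>
	#include <linux/rcupdate.h>

	/* Hypothetical helper: call before any tracing work is done. */
	static inline void check_rcu_watching_for_tracing(void)
	{
		/* The tracing machinery may use RCU, so RCU must be watching. */
		WARN_ON_ONCE(!rcu_is_watching());
	}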
Sounds good!
Thanx, Paul