Message-ID: <20200331153734.GA6979@lenoir>
Date: Tue, 31 Mar 2020 17:37:35 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Geert Uytterhoeven <geert@...ux-m68k.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>, Will Deacon <will@...nel.org>,
"Paul E . McKenney" <paulmck@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Steven Rostedt <rostedt@...dmis.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Russell King <rmk+kernel@...linux.org.uk>,
Catalin Marinas <catalin.marinas@....com>,
Frederic Weisbecker <fweisbec@...il.com>
Subject: Re: [PATCH 6/9] lockdep: Introduce wait-type checks
On Tue, Mar 31, 2020 at 05:28:50PM +0200, Peter Zijlstra wrote:
> On Tue, Mar 31, 2020 at 04:55:15PM +0200, Peter Zijlstra wrote:
> > On Tue, Mar 31, 2020 at 03:25:21PM +0200, Geert Uytterhoeven wrote:
> > > On arm64 (e.g. R-Car H3 ES2.0):
> > >
> > > +=============================
> > > +[ BUG: Invalid wait context ]
> > > +5.6.0-salvator-x-09423-gb29514ba13a9c459-dirty #679 Not tainted
> > > +-----------------------------
> > > +swapper/5/0 is trying to lock:
> > > +ffffff86ff76f398 (&pool->lock){..-.}-{3:3}, at: __queue_work+0x134/0x430
> > > +other info that might help us debug this:
> > > +1 lock held by swapper/5/0:
> > >  #0: ffffffc01103a4a0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire.constprop.59+0x0/0x38
> > > +stack backtrace:
> > > +CPU: 5 PID: 0 Comm: swapper/5 Not tainted 5.6.0-salvator-x-09423-gb29514ba13a9c459-dirty #679
> > > +Hardware name: Renesas Salvator-X 2nd version board based on r8a77951 (DT)
> > > +Call trace:
> > > + dump_backtrace+0x0/0x180
> > > + show_stack+0x14/0x1c
> > > + dump_stack+0xdc/0x12c
> > > + __lock_acquire+0x37c/0xf9c
> > > + lock_acquire+0x258/0x288
> > > + _raw_spin_lock+0x34/0x48
> > > + __queue_work+0x134/0x430
> > > + queue_work_on+0x48/0x8c
> > > + timers_update_nohz+0x24/0x2c
> > > + tick_nohz_activate.isra.15.part.16+0x5c/0x80
> > > + tick_setup_sched_timer+0xe0/0xf0
> > > + hrtimer_run_queues+0x88/0xf8
> >
> > So this is complaining that it cannot take pool->lock, which is
> > WAIT_CONFIG, while holding RCU, which itself presents a WAIT_CONFIG context.
> >
> > This seems to indicate something is amiss, because that should be
> > allowed. The thing the splat doesn't print is the context, which in the
> > above case is a (hrtimer) interrupt.
> >
> > I suspect this really is a hardirq context and the next patch won't cure
> > things. It looks nohz (full?) related.
> >
> > Frederic, can you untangle this?
>
> Sebastian is right; I completely forgot the workqueue thing was still
> pending.
>
Ok good, because I had no better answer :)