Date:   Wed, 16 Dec 2020 15:55:14 +0530
From:   Naresh Kamboju <naresh.kamboju@...aro.org>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     "Paul E. McKenney" <paulmck@...nel.org>,
        open list <linux-kernel@...r.kernel.org>,
        linux-stable <stable@...r.kernel.org>, rcu@...r.kernel.org,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        lkft-triage@...ts.linaro.org, Netdev <netdev@...r.kernel.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Sasha Levin <sashal@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Matthew Wilcox <willy@...radead.org>
Subject: Re: [stable-rc 5.9 ] sched: core.c:7270 Illegal context switch in
 RCU-bh read-side critical section!

On Tue, 15 Dec 2020 at 23:52, Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Tue, 15 Dec 2020 06:45:31 -0800 Paul E. McKenney wrote:
> > > Crash log:
> > > --------------
> > > # selftests: bpf: test_tc_edt.sh
> > > [  503.796362]
> > > [  503.797960] =============================
> > > [  503.802131] WARNING: suspicious RCU usage
> > > [  503.806232] 5.9.15-rc1 #1 Tainted: G        W
> > > [  503.811358] -----------------------------
> > > [  503.815444] /usr/src/kernel/kernel/sched/core.c:7270 Illegal
> > > context switch in RCU-bh read-side critical section!
> > > [  503.825858]
> > > [  503.825858] other info that might help us debug this:
> > > [  503.825858]
> > > [  503.833998]
> > > [  503.833998] rcu_scheduler_active = 2, debug_locks = 1
> > > [  503.840981] 3 locks held by kworker/u12:1/157:
> > > [  503.845514]  #0: ffff0009754ed538
> > > ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x208/0x768
> > > [  503.855048]  #1: ffff800013e63df0 (net_cleanup_work){+.+.}-{0:0},
> > > at: process_one_work+0x208/0x768
> > > [  503.864201]  #2: ffff8000129fe3f0 (pernet_ops_rwsem){++++}-{3:3},
> > > at: cleanup_net+0x64/0x3b8
> > > [  503.872786]
> > > [  503.872786] stack backtrace:
> > > [  503.877229] CPU: 1 PID: 157 Comm: kworker/u12:1 Tainted: G        W
> > >         5.9.15-rc1 #1
> > > [  503.885433] Hardware name: ARM Juno development board (r2) (DT)
> > > [  503.891382] Workqueue: netns cleanup_net
> > > [  503.895324] Call trace:
> > > [  503.897786]  dump_backtrace+0x0/0x1f8
> > > [  503.901464]  show_stack+0x2c/0x38
> > > [  503.904796]  dump_stack+0xec/0x158
> > > [  503.908215]  lockdep_rcu_suspicious+0xd4/0xf8
> > > [  503.912591]  ___might_sleep+0x1e4/0x208
> >
> > You really are forbidden to invoke ___might_sleep() while in a BH-disable
> > region of code, whether due to rcu_read_lock_bh(), local_bh_disable(),
> > or whatever else.
> >
> > I do see the cond_resched() in inet_twsk_purge(), but I don't immediately
> > see a BH-disable region of code.  Maybe someone more familiar with this
> > code would have some ideas.
> >
> > Or you could place checks for being in a BH-disable region further up
> > in the code.  Or build with CONFIG_DEBUG_INFO=y to allow more precise
> > interpretation of this stack trace.

I will try to reproduce this warning with a CONFIG_DEBUG_INFO=y kernel and
get back to you with a better crash log.
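
For anyone following along, here is a minimal sketch (my illustration, not
code from this trace; the function names are made up) of the rule Paul
describes: cond_resched() may schedule, so calling it with bottom halves
disabled trips the ___might_sleep() check, and in the rcu_read_lock_bh()
case that is exactly this RCU-bh splat:

    #include <linux/bottom_half.h>
    #include <linux/rcupdate.h>
    #include <linux/sched.h>

    static void bad_bh_pattern(void)
    {
            rcu_read_lock_bh();     /* BH now disabled */
            cond_resched();         /* may sleep -> ___might_sleep() fires */
            rcu_read_unlock_bh();
    }

    static void also_bad(void)
    {
            local_bh_disable();     /* same rule, no RCU read lock needed */
            cond_resched();         /* still illegal while BH is off */
            local_bh_enable();
    }

One cheap way to "place checks further up", as Paul suggests, is a
WARN_ON_ONCE(in_softirq()) at the suspect call sites, since in_softirq()
is also true when BH is merely disabled.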

>
> My money would be on the option that whatever ran on this workqueue
> before forgot to re-enable BH, but we already have a check for that...
> Naresh, do you have the full log? Is there nothing like "BUG: workqueue
> leaked lock" above the splat?

Yes, [1] is the full test log link,
but I do not see "BUG: workqueue leaked lock" anywhere in the log.

Full log link:
[1] https://lkft.validation.linaro.org/scheduler/job/2049484#L5979
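
If I understand Jakub's theory correctly, the bug he is betting on would
look roughly like this hypothetical handler (leaky_work_fn and
try_something are made-up names, not taken from the log):

    #include <linux/bottom_half.h>
    #include <linux/workqueue.h>

    /* Illustration only: an early return leaves BH disabled, so every
     * later work item running on this worker runs with BH off. */
    static void leaky_work_fn(struct work_struct *work)
    {
            local_bh_disable();
            if (!try_something())
                    return;         /* bug: missing local_bh_enable() */
            /* ... real work ... */
            local_bh_enable();
    }

process_one_work() does check for exactly this and prints "BUG: workqueue
leaked lock or atomic", which is why its absence from [1] is surprising.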

- Naresh
