Message-ID: <CANn89iJRA-1z60cvGnbqYa=Ua-ysR9uHufkrFmQGRmN-4Dod2Q@mail.gmail.com>
Date: Tue, 30 Apr 2024 20:43:22 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: Davide Caratti <dcaratti@...hat.com>
Cc: Jamal Hadi Salim <jhs@...atatu.com>, Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>, "David S. Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Naresh Kamboju <naresh.kamboju@...aro.org>, netdev@...r.kernel.org
Subject: Re: [PATCH net-next] net/sched: unregister lockdep keys in
qdisc_create/qdisc_alloc error path
On Tue, Apr 30, 2024 at 8:35 PM Davide Caratti <dcaratti@...hat.com> wrote:
>
> hi Eric, thanks for looking at this!
>
> On Tue, Apr 30, 2024 at 07:58:14PM +0200, Eric Dumazet wrote:
> > On Tue, Apr 30, 2024 at 7:11 PM Davide Caratti <dcaratti@...hat.com> wrote:
> > >
>
> [...]
>
> > > @@ -1389,6 +1389,7 @@ static struct Qdisc *qdisc_create(struct net_device *dev,
> > > ops->destroy(sch);
> > > qdisc_put_stab(rtnl_dereference(sch->stab));
> > > err_out3:
> > > + lockdep_unregister_key(&sch->root_lock_key);
> > > netdev_put(dev, &sch->dev_tracker);
> > > qdisc_free(sch);
> > > err_out2:
> >
> > For consistency with the other path, what about this instead?
> >
> > This would also allow a qdisc gotten from an RCU lookup to have its
> > spinlock acquired.
> > (I am not saying this can happen, but who knows...)
> >
> > I.e., defer the lockdep_unregister_key() to right before the kfree().
>
> the problem is, qdisc_free() is also called in an RCU callback. So, if we move
> lockdep_unregister_key() inside that function, the non-error path is
> going to splat like this
Got it - and the splat makes sense: lockdep_unregister_key() has to wait
for an RCU grace period itself, so it may sleep, while the non-error path
frees the qdisc from the qdisc_free_cb() RCU callback in softirq context.
Roughly (simplified from net/sched/sch_generic.c):
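static void qdisc_free_cb(struct rcu_head *head)
{
	struct Qdisc *q = container_of(head, struct Qdisc, rcu);

	/* Softirq context: a lockdep_unregister_key() buried inside
	 * qdisc_free() would sleep in invalid context here.
	 */
	qdisc_free(q);
}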
Let's use your patch, but I suspect we could have other issues. That said,
we do have ways of running work after an RCU grace period -
queue_rcu_work(), for instance. A rough sketch (untested; the free_rwork
field in struct Qdisc is hypothetical):
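/* Hypothetical: add "struct rcu_work free_rwork;" to struct Qdisc,
 * then do the final free from process context after the grace period.
 */
static void qdisc_free_work(struct work_struct *work)
{
	struct Qdisc *qdisc = container_of(to_rcu_work(work),
					   struct Qdisc, free_rwork);

	/* Process context: safe for lockdep_unregister_key() to sleep. */
	lockdep_unregister_key(&qdisc->root_lock_key);
	qdisc_free(qdisc);
}

/* ... and at the existing call_rcu() site: */
	INIT_RCU_WORK(&qdisc->free_rwork, qdisc_free_work);
	queue_rcu_work(system_unbound_wq, &qdisc->free_rwork);

That would keep the key removal right before the kfree(), as in the other
path, without ever running it from softirq context.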
Full disclosure, I have the following syzbot report:
WARNING: bad unlock balance detected!
6.9.0-rc5-syzkaller-01413-gdd1941f801bc #0 Not tainted
-------------------------------------
kworker/u8:6/2474 is trying to release lock (&sch->root_lock_key) at:
[<ffffffff897300c5>] spin_unlock_bh include/linux/spinlock.h:396 [inline]
[<ffffffff897300c5>] dev_reset_queue+0x145/0x1b0 net/sched/sch_generic.c:1304
but there are no more locks to release!
other info that might help us debug this:
5 locks held by kworker/u8:6/2474:
#0: ffff888015ecd948 ((wq_completion)netns){+.+.}-{0:0}, at:
process_one_work kernel/workqueue.c:3229 [inline]
#0: ffff888015ecd948 ((wq_completion)netns){+.+.}-{0:0}, at:
process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3335
#1: ffffc9000a3a7d00 (net_cleanup_work){+.+.}-{0:0}, at:
process_one_work kernel/workqueue.c:3230 [inline]
#1: ffffc9000a3a7d00 (net_cleanup_work){+.+.}-{0:0}, at:
process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3335
#2: ffffffff8f59bd50 (pernet_ops_rwsem){++++}-{3:3}, at:
cleanup_net+0x16a/0xcc0 net/core/net_namespace.c:591
#3: ffffffff8f5a8648 (rtnl_mutex){+.+.}-{3:3}, at:
cleanup_net+0x6af/0xcc0 net/core/net_namespace.c:627
#4: ffff88802cbce258 (dev->qdisc_tx_busylock ?:
&qdisc_tx_busylock#2){+...}-{2:2}, at: spin_lock_bh
include/linux/spinlock.h:356 [inline]
#4: ffff88802cbce258 (dev->qdisc_tx_busylock ?:
&qdisc_tx_busylock#2){+...}-{2:2}, at: dev_reset_queue+0x126/0x1b0
net/sched/sch_generic.c:1299
stack backtrace:
CPU: 1 PID: 2474 Comm: kworker/u8:6 Not tainted
6.9.0-rc5-syzkaller-01413-gdd1941f801bc #0
Hardware name: Google Google Compute Engine/Google Compute Engine,
BIOS Google 03/27/2024
Workqueue: netns cleanup_net
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
print_unlock_imbalance_bug+0x256/0x2c0 kernel/locking/lockdep.c:5194
__lock_release kernel/locking/lockdep.c:5431 [inline]
lock_release+0x599/0x9f0 kernel/locking/lockdep.c:5774
__raw_spin_unlock_bh include/linux/spinlock_api_smp.h:165 [inline]
_raw_spin_unlock_bh+0x1b/0x40 kernel/locking/spinlock.c:210
spin_unlock_bh include/linux/spinlock.h:396 [inline]
dev_reset_queue+0x145/0x1b0 net/sched/sch_generic.c:1304
netdev_for_each_tx_queue include/linux/netdevice.h:2503 [inline]
dev_deactivate_many+0x54a/0xb10 net/sched/sch_generic.c:1368
__dev_close_many+0x1a4/0x300 net/core/dev.c:1529
dev_close_many+0x24e/0x4c0 net/core/dev.c:1567
unregister_netdevice_many_notify+0x544/0x16e0 net/core/dev.c:11181
cleanup_net+0x75d/0xcc0 net/core/net_namespace.c:632
process_one_work kernel/workqueue.c:3254 [inline]
process_scheduled_works+0xa10/0x17c0 kernel/workqueue.c:3335
worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>