Message-ID: <20250104073740.597af5c0@kernel.org>
Date: Sat, 4 Jan 2025 07:37:40 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Kuniyuki Iwashima <kuniyu@...zon.com>
Cc: "David S. Miller" <davem@...emloft.net>, Eric Dumazet
<edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, Simon Horman
<horms@...nel.org>, Kuniyuki Iwashima <kuni1840@...il.com>,
<netdev@...r.kernel.org>
Subject: Re: [PATCH v1 net-next 0/4] net: Hold per-netns RTNL during netdev
notifier registration.

On Sat, 4 Jan 2025 15:37:31 +0900 Kuniyuki Iwashima wrote:
> Patch 1 converts the global netdev notifier to a blocking_notifier,
> which will be called under the per-netns RTNL without holding the
> global RTNL; we then need to protect ongoing netdev_chain users from
> unregistration.
>
> Patches 2 ~ 4 add per-netns RTNL for registration of the global
> and per-netns netdev notifiers.
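
For context, a blocking notifier chain guards its callback list with an
internal rwsem: blocking_notifier_call_chain() takes it for reading around
the callbacks, and (un)registration takes it for writing, so in-flight
calls block unregistration without the caller holding RTNL. A minimal
sketch of that generic pattern (the chain and callback names below are
hypothetical, not the actual patch; a per-netns notifier would instead be
registered via register_netdevice_notifier_net()):

	#include <linux/module.h>
	#include <linux/notifier.h>

	static BLOCKING_NOTIFIER_HEAD(example_chain);

	static int example_event(struct notifier_block *nb,
				 unsigned long event, void *ptr)
	{
		return NOTIFY_DONE;
	}

	static struct notifier_block example_nb = {
		.notifier_call = example_event,
	};

	static int __init example_init(void)
	{
		/* Takes example_chain's rwsem for writing. */
		blocking_notifier_chain_register(&example_chain, &example_nb);
		/* Takes example_chain's rwsem for reading around callbacks. */
		blocking_notifier_call_chain(&example_chain, 0, NULL);
		return 0;
	}
	module_init(example_init);

	MODULE_LICENSE("GPL");
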
Lockdep is not happy:
[ 249.261403][ T11] ============================================
[ 249.261592][ T11] WARNING: possible recursive locking detected
[ 249.261769][ T11] 6.13.0-rc5-virtme #1 Not tainted
[ 249.261920][ T11] --------------------------------------------
[ 249.262094][ T11] kworker/u16:0/11 is trying to acquire lock:
[ 249.262293][ T11] ffffffff8a7f6a70 ((netdev_chain).rwsem){++++}-{4:4}, at: blocking_notifier_call_chain+0x50/0x90
[ 249.262591][ T11]
[ 249.262591][ T11] but task is already holding lock:
[ 249.262810][ T11] ffffffff8a7f6a70 ((netdev_chain).rwsem){++++}-{4:4}, at: blocking_notifier_call_chain+0x50/0x90
[ 249.263100][ T11]
[ 249.263100][ T11] other info that might help us debug this:
[ 249.263310][ T11] Possible unsafe locking scenario:
[ 249.263310][ T11]
[ 249.263522][ T11] CPU0
[ 249.263624][ T11] ----
[ 249.263728][ T11] lock((netdev_chain).rwsem);
[ 249.263875][ T11] lock((netdev_chain).rwsem);
[ 249.264020][ T11]
[ 249.264020][ T11] *** DEADLOCK ***
[ 249.264020][ T11]
[ 249.264223][ T11] May be due to missing lock nesting notation
[ 249.264223][ T11]
[ 249.264440][ T11] 5 locks held by kworker/u16:0/11:
[ 249.264582][ T11] #0: ffff8880010b5948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7ec/0x16d0
[ 249.264867][ T11] #1: ffffc900000b7da0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0xe0b/0x16d0
[ 249.265118][ T11] #2: ffffffff8a7ec4d0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xbc/0xba0
[ 249.265381][ T11] #3: ffffffff8a807e88 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0x81/0x2e0
[ 249.265668][ T11] #4: ffffffff8a7f6a70 ((netdev_chain).rwsem){++++}-{4:4}, at: blocking_notifier_call_chain+0x50/0x90
[ 249.265954][ T11]
[ 249.265954][ T11] stack backtrace:
[ 249.266126][ T11] CPU: 2 UID: 0 PID: 11 Comm: kworker/u16:0 Not tainted 6.13.0-rc5-virtme #1
[ 249.266389][ T11] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 249.266572][ T11] Workqueue: netns cleanup_net
[ 249.266722][ T11] Call Trace:
[ 249.266826][ T11] <TASK>
[ 249.266907][ T11] dump_stack_lvl+0x82/0xd0
[ 249.267056][ T11] print_deadlock_bug+0x40a/0x650
[ 249.267206][ T11] validate_chain+0x5bf/0xae0
[ 249.267352][ T11] ? __pfx_validate_chain+0x10/0x10
[ 249.267503][ T11] ? hlock_class+0x4e/0x130
[ 249.267642][ T11] ? mark_lock+0x38/0x3e0
[ 249.267751][ T11] __lock_acquire+0xb9a/0x1680
[ 249.267897][ T11] ? spin_bug+0x191/0x1d0
[ 249.268007][ T11] ? debug_object_assert_init+0x2a9/0x370
[ 249.268164][ T11] lock_acquire.part.0+0xeb/0x330
[ 249.268313][ T11] ? blocking_notifier_call_chain+0x50/0x90
[ 249.268497][ T11] ? __pfx_lock_acquire.part.0+0x10/0x10
[ 249.268651][ T11] ? trace_lock_acquire+0x14c/0x1f0
[ 249.268803][ T11] ? lock_acquire+0x32/0xc0
[ 249.268944][ T11] ? blocking_notifier_call_chain+0x50/0x90
[ 249.269132][ T11] down_read+0x9f/0x340
[ 249.269247][ T11] ? blocking_notifier_call_chain+0x50/0x90
[ 249.269436][ T11] ? __pfx_down_read+0x10/0x10
[ 249.269586][ T11] blocking_notifier_call_chain+0x50/0x90
[ 249.269739][ T11] __dev_close_many+0xdf/0x2d0
[ 249.269881][ T11] ? __pfx___dev_close_many+0x10/0x10
[ 249.270031][ T11] dev_close_many+0x202/0x650
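
The trace is truncated above, but the recursion looks like this: a
callback already running under netdev_chain's rwsem (held lock #4)
reaches dev_close_many() -> __dev_close_many(), which calls
blocking_notifier_call_chain() on the same chain, i.e. down_read() twice
on one rwsem in one task. With a writer (a notifier (un)registration)
queued between the two readers, that can deadlock, which is what lockdep
flags. A minimal sketch of the same pattern (hypothetical names, not the
real re-entry path):

	#include <linux/module.h>
	#include <linux/notifier.h>

	static BLOCKING_NOTIFIER_HEAD(demo_chain);

	static int demo_event(struct notifier_block *nb,
			      unsigned long event, void *ptr)
	{
		/*
		 * Re-enters down_read() on demo_chain's rwsem while the
		 * outer call_chain still holds it: the recursive read
		 * acquisition lockdep warns about.
		 */
		if (!event)
			blocking_notifier_call_chain(&demo_chain, 1, NULL);
		return NOTIFY_DONE;
	}

	static struct notifier_block demo_nb = {
		.notifier_call = demo_event,
	};

	static int __init demo_init(void)
	{
		blocking_notifier_chain_register(&demo_chain, &demo_nb);
		/* The nested call in demo_event() triggers the splat. */
		blocking_notifier_call_chain(&demo_chain, 0, NULL);
		return 0;
	}
	module_init(demo_init);

	MODULE_LICENSE("GPL");
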
--
pw-bot: cr