Message-ID: <aDiPFiLrhUI0M2MI@mini-arch>
Date: Thu, 29 May 2025 09:45:10 -0700
From: Stanislav Fomichev <stfomichev@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: syzbot <syzbot+846bb38dc67fe62cc733@...kaller.appspotmail.com>,
davem@...emloft.net, edumazet@...gle.com, horms@...nel.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
pabeni@...hat.com, syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [net?] possible deadlock in rtnl_newlink
On 05/29, Jakub Kicinski wrote:
> On Thu, 29 May 2025 08:59:43 -0700 Stanislav Fomichev wrote:
> > So this is the internal WQ entry lock that is being reordered with the
> > rtnl lock. But looking at process_one_work, I don't see actual locks,
> > mostly lock_map_acquire/lock_map_release calls that enforce some
> > internal WQ invariants. Not sure what to do with it; will try to read
> > more.
>
> Basically a flush_work() happens while holding rtnl_lock,
> but the work itself takes that lock. It's a driver bug.
Could this be e400c7444d84 ("e1000: Hold RTNL when e1000_down can be
called")?
I think something similar (but with the netdev instance lock) is
happening in iavf: iavf_remove calls cancel_work_sync while holding the
instance lock, and the work callbacks grab the instance lock as well :-/