Message-ID: <aDoKyVE7_hVENi4O@LQ3V64L9R2>
Date: Fri, 30 May 2025 12:45:13 -0700
From: Joe Damato <jdamato@...tly.com>
To: Stanislav Fomichev <stfomichev@...il.com>
Cc: netdev@...r.kernel.org, kuba@...nel.org, john.cs.hey@...il.com,
	jacob.e.keller@...el.com,
	syzbot+846bb38dc67fe62cc733@...kaller.appspotmail.com,
	Tony Nguyen <anthony.l.nguyen@...el.com>,
	Przemek Kitszel <przemyslaw.kitszel@...el.com>,
	Andrew Lunn <andrew+netdev@...n.ch>,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
	"moderated list:INTEL ETHERNET DRIVERS" <intel-wired-lan@...ts.osuosl.org>,
	open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH iwl-net] e1000: Move cancel_work_sync to avoid deadlock

On Fri, May 30, 2025 at 08:07:29AM -0700, Stanislav Fomichev wrote:
> On 05/30, Joe Damato wrote:
> > Previously, e1000_down called cancel_work_sync for the e1000 reset task
> > (via e1000_down_and_stop), which takes RTNL.
> > 
> > As reported by users and syzbot, a deadlock is possible due to lock
> > inversion in the following scenario:
> > 
> > CPU 0:
> >   - RTNL is held
> >   - e1000_close
> >   - e1000_down
> >   - cancel_work_sync (takes the work queue mutex)
> >   - e1000_reset_task
> > 
> > CPU 1:
> >   - process_one_work (takes the work queue mutex)
> >   - e1000_reset_task (takes RTNL)
> 
> nit: as Jakub mentioned in another thread, it seems more about the
> flush_work waiting for the reset_task to complete rather than
> wq mutexes (which are fake)?

Hm, I probably misunderstood something. Also, not sure what you
meant by the wq mutexes being fake?

My understanding (which is probably wrong) from the syzbot and user
reports was that the order of the wq mutex and rtnl is inverted in
the two paths, which can cause a deadlock if both paths run.

In the case you describe below, wouldn't cpu0's __flush_work
eventually finish, releasing RTNL, and allowing CPU 1 to proceed? It
seemed to me that the only way for deadlock to happen was with the
inversion described above -- but I'm probably missing something.
 
> CPU 0:
>   - RTNL is held
>   - e1000_close
>   - e1000_down
>   - cancel_work_sync
>   - __flush_work
>   - <wait here for the reset_task to finish>
> 
> CPU 1:
>   - process_one_work
>   - e1000_reset_task (takes RTNL)
>   - <but cpu 0 already holds rtnl>
> 
> The fix looks good!

Thanks for taking a look.

> Acked-by: Stanislav Fomichev <sdf@...ichev.me>
