Message-ID: <2b6a844619892ecaa11031705808667e0886d8b2.camel@linux.ibm.com>
Date: Mon, 09 Feb 2026 11:12:36 +0100
From: Niklas Schnelle <schnelle@...ux.ibm.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        "Ionut Nechita (Wind River)" <ionut.nechita@...driver.com>,
        Benjamin Block <bblock@...ux.ibm.com>
Cc: Bjorn Helgaas <bhelgaas@...gle.com>, linux-pci@...r.kernel.org,
        Clark Williams <clrkwllms@...nel.org>,
        Steven Rostedt <rostedt@...dmis.org>, linux-rt-devel@...ts.linux.dev,
        linux-kernel@...r.kernel.org, Ionut Nechita <ionut_n2001@...oo.com>,
        Farhan Ali <alifm@...ux.ibm.com>,
        Julian Ruess <julianr@...ux.ibm.com>
Subject: Re: [PATCH] PCI/IOV: Fix recursive locking deadlock on pci_rescan_remove_lock

On Mon, 2026-02-09 at 09:25 +0100, Sebastian Andrzej Siewior wrote:
> On 2026-02-09 09:57:07 [+0200], Ionut Nechita (Wind River) wrote:
> > From: Ionut Nechita <ionut.nechita@...driver.com>
> > 
> > When a PCI device is hot-removed via sysfs (e.g., echo 1 > /sys/.../remove),
> > pci_stop_and_remove_bus_device_locked() acquires pci_rescan_remove_lock and
> > then recursively walks the bus hierarchy calling driver .remove() callbacks.
> > 
> > If the removed device is a PF with SR-IOV enabled (e.g., i40e, ice), the
> > driver's .remove() calls pci_disable_sriov() -> sriov_disable() ->
> > sriov_del_vfs() which also tries to acquire pci_rescan_remove_lock.
> > Since this is a non-recursive mutex and the same thread already holds it,
> > this results in a deadlock.
> > 
> > On PREEMPT_RT kernels, where mutexes are backed by rtmutex with deadlock
> > detection, this immediately triggers:
> > 
> >   WARNING: CPU: 15 PID: 11730 at kernel/locking/rtmutex.c:1663
> >   Call Trace:
> >    mutex_lock+0x47/0x60
> >    sriov_disable+0x2a/0x100
> >    i40e_free_vfs+0x415/0x470 [i40e]
> >    i40e_remove+0x38d/0x3e0 [i40e]
> >    pci_device_remove+0x3b/0xb0
> >    device_release_driver_internal+0x193/0x200
> >    pci_stop_bus_device+0x81/0xb0
> >    pci_stop_and_remove_bus_device_locked+0x16/0x30
> >    remove_store+0x79/0x90
> > 
> > On non-RT kernels the same recursive acquisition silently hangs the calling
> > process, eventually causing netdev watchdog TX timeout splats.
> > 
> > This affects all drivers that call pci_disable_sriov() from their .remove()
> > callback (i40e, ice, and others).
> > 
> > Fix this by tracking the owner of pci_rescan_remove_lock and skipping the
> > redundant acquisition in sriov_del_vfs() when the current thread already
> > holds it.  The VF removal is still serialized correctly because the caller
> > already holds the lock.
> 
> This looks like the result of commit 05703271c3cdc ("PCI/IOV: Add PCI
> rescan-remove locking when enabling/disabling SR-IOV").
> 
> > Signed-off-by: Ionut Nechita <ionut.nechita@...driver.com>
> 
> Sebastian

Agree, this looks related to the deadlock I later found with that
commit and that led to the revert + new fix that has now been queued
for v6.20/v7.00 here:

https://lore.kernel.org/linux-pci/20251216-revert_sriov_lock-v3-0-dac4925a7621@linux.ibm.com/

That said, I do find this approach interesting. Benjamin and I are
actually still looking into a related problem where the rescan/remove
lock is not taken as part of vfio-pci teardown, and there this
approach could work better than just moving the locking up into the
sysfs handler. So far we haven't found a good place to take the lock
in that path that doesn't suffer from the recursive locking in other
paths. On the other hand, conditionally taking a mutex is always a
little ugly in my opinion.

Thanks,
Niklas
