Message-ID: <20240912125406.GA671060@bhelgaas>
Date: Thu, 12 Sep 2024 07:54:06 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: Philipp Stanner <pstanner@...hat.com>
Cc: Bjorn Helgaas <bhelgaas@...gle.com>,
	Krzysztof Wilczyński <kwilczynski@...nel.org>,
	Damien Le Moal <dlemoal@...nel.org>, linux-pci@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	Alex Williamson <alex.williamson@...hat.com>
Subject: Re: [PATCH v2] PCI: Fix potential deadlock in pcim_intx()

On Thu, Sep 12, 2024 at 09:18:17AM +0200, Philipp Stanner wrote:
> On Wed, 2024-09-11 at 09:27 -0500, Bjorn Helgaas wrote:
> > On Thu, Sep 05, 2024 at 09:25:57AM +0200, Philipp Stanner wrote:
> > > commit 25216afc9db5 ("PCI: Add managed pcim_intx()") moved the
> > > allocation step for pci_intx()'s device resource from
> > > pcim_enable_device() to pcim_intx(). As before, pcim_enable_device()
> > > sets pci_dev.is_managed to true, and it is never set to false again.
> > > 
> > > Due to the lifecycle of a struct pci_dev, it can happen that a second
> > > driver obtains the same pci_dev after a first driver ran. If one
> > > driver uses pcim_enable_device() and the other doesn't, the other
> > > driver runs into the managed pcim_intx(), which will try to allocate
> > > when called for the first time.
> > > 
> > > Allocations might sleep, so calling pci_intx() while holding spinlocks
> > > then becomes invalid, which causes lockdep warnings and could cause
> > > deadlocks:
> > > 
> > > ========================================================
> > > WARNING: possible irq lock inversion dependency detected
> > > 6.11.0-rc6+ #59 Tainted: G        W
> > > --------------------------------------------------------
> > > CPU 0/KVM/1537 just changed the state of lock:
> > > ffffa0f0cff965f0 (&vdev->irqlock){-...}-{2:2}, at:
> > > vfio_intx_handler+0x21/0xd0 [vfio_pci_core]
> > > but this lock took another, HARDIRQ-unsafe lock in the past:
> > >  (fs_reclaim){+.+.}-{0:0}
> > > 
> > > and interrupts could create inverse lock ordering between them.
> > > 
> > > other info that might help us debug this:
> > >  Possible interrupt unsafe locking scenario:
> > > 
> > >        CPU0                    CPU1
> > >        ----                    ----
> > >   lock(fs_reclaim);
> > >                                local_irq_disable();
> > >                                lock(&vdev->irqlock);
> > >                                lock(fs_reclaim);
> > >   <Interrupt>
> > >     lock(&vdev->irqlock);
> > > 
> > >  *** DEADLOCK ***
> > > 
> > > Have pcim_enable_device()'s release function, pcim_disable_device(),
> > > set pci_dev.is_managed to false so that subsequent drivers using the
> > > same struct pci_dev do implicitly run into managed code.
> 
> Oops, that should obviously be "do *not* run into managed code."
> 
> Mea culpa. Maybe you can amend that, Bjorn?

Fixed, thanks for the pointer.
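
For readers following along, here is a minimal sketch of the failure mode
the commit message describes. It is not the actual vfio-pci code: the
driver names, the struct driver_b type, and its lock are hypothetical,
while pcim_enable_device(), pci_intx(), and pci_dev.is_managed are the
real interfaces under discussion.

	#include <linux/interrupt.h>
	#include <linux/pci.h>
	#include <linux/spinlock.h>

	/* Hypothetical private data for the second, unmanaged driver. */
	struct driver_b {
		struct pci_dev *pdev;
		spinlock_t lock;
	};

	/*
	 * Driver A uses the managed API. pcim_enable_device() sets
	 * pdev->is_managed = true, and (before this fix) nothing clears
	 * it again when driver A unbinds.
	 */
	static int driver_a_probe(struct pci_dev *pdev)
	{
		return pcim_enable_device(pdev);
	}

	/*
	 * Driver B later binds to the same struct pci_dev without the
	 * managed API and masks INTx from atomic context, much like
	 * vfio-pci does in its interrupt handler:
	 */
	static irqreturn_t driver_b_handler(int irq, void *data)
	{
		struct driver_b *priv = data;

		spin_lock(&priv->lock);
		/*
		 * pdev->is_managed is still true, so pci_intx() is routed
		 * into pcim_intx(), whose first call allocates the device
		 * resource. A sleeping allocation under a spinlock is what
		 * lockdep flags in the splat above.
		 */
		pci_intx(priv->pdev, 0);
		spin_unlock(&priv->lock);

		return IRQ_HANDLED;
	}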
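And a sketch of the shape of the fix itself. The surrounding body of
pcim_disable_device() in drivers/pci/devres.c is abridged and may not
match the merged patch exactly; only the final assignment is the change
under discussion.

	/*
	 * pcim_disable_device() is the devres release callback registered
	 * by pcim_enable_device(). Clearing is_managed on release means a
	 * subsequent driver binding to the same pci_dev starts out
	 * unmanaged again.
	 */
	static void pcim_disable_device(void *pdev_raw)
	{
		struct pci_dev *pdev = pdev_raw;

		if (!pdev->pinned)
			pci_disable_device(pdev);

		/*
		 * Undo what pcim_enable_device() set, so later drivers are
		 * not routed into the managed pcim_intx() path.
		 */
		pdev->is_managed = false;
	}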
