Message-ID: <c979a2e5-3d81-49e3-bc58-78d8d9db2296@kernel.org>
Date: Fri, 6 Sep 2024 09:38:04 +0900
From: Damien Le Moal <dlemoal@...nel.org>
To: Philipp Stanner <pstanner@...hat.com>, Bjorn Helgaas
 <bhelgaas@...gle.com>, Krzysztof Wilczyński
 <kwilczynski@...nel.org>
Cc: linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
 Alex Williamson <alex.williamson@...hat.com>
Subject: Re: [PATCH v2] PCI: Fix potential deadlock in pcim_intx()

On 9/5/24 16:25, Philipp Stanner wrote:
> commit 25216afc9db5 ("PCI: Add managed pcim_intx()") moved the
> allocation step for pci_intx()'s device resource from
> pcim_enable_device() to pcim_intx(). As before, pcim_enable_device()
> sets pci_dev.is_managed to true, and it is never set to false again.
> 
> Due to the lifecycle of a struct pci_dev, it can happen that a second
> driver obtains the same pci_dev after a first driver has run.
> If the first driver used pcim_enable_device() and the second doesn't,
> the second driver still ends up in the managed pcim_intx(), which will
> try to allocate when called for the first time.
> 
> Allocations might sleep, so calling pci_intx() while holding spinlocks
> then becomes invalid, which causes lockdep warnings and could cause
> deadlocks:
> 
> ========================================================
> WARNING: possible irq lock inversion dependency detected
> 6.11.0-rc6+ #59 Tainted: G        W
> --------------------------------------------------------
> CPU 0/KVM/1537 just changed the state of lock:
> ffffa0f0cff965f0 (&vdev->irqlock){-...}-{2:2}, at:
>   vfio_intx_handler+0x21/0xd0 [vfio_pci_core]
> but this lock took another, HARDIRQ-unsafe lock in the past:
>  (fs_reclaim){+.+.}-{0:0}
> 
> and interrupts could create inverse lock ordering between them.
> 
> other info that might help us debug this:
>  Possible interrupt unsafe locking scenario:
> 
>        CPU0                    CPU1
>        ----                    ----
>   lock(fs_reclaim);
>                                local_irq_disable();
>                                lock(&vdev->irqlock);
>                                lock(fs_reclaim);
>   <Interrupt>
>     lock(&vdev->irqlock);
> 
>  *** DEADLOCK ***
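
To make the call path concrete, here is a minimal sketch of the kind of
handler that hits this (example_intx_handler / example_dev are made-up
names for illustration, not the actual vfio-pci code):

	/*
	 * Illustrative sketch only. pci_intx() is called with a spinlock
	 * held; because a previous driver left pci_dev.is_managed set to
	 * true, the call takes the pcim_intx() path, whose first
	 * invocation allocates a devres entry with GFP_KERNEL and may
	 * therefore sleep.
	 */
	static irqreturn_t example_intx_handler(int irq, void *data)
	{
		struct example_dev *edev = data;	/* hypothetical driver data */
		unsigned long flags;

		spin_lock_irqsave(&edev->irqlock, flags);
		/* may end up in pcim_intx() -> first-call allocation -> sleep */
		pci_intx(edev->pdev, 0);
		spin_unlock_irqrestore(&edev->irqlock, flags);

		return IRQ_HANDLED;
	}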
> 
> Have pcim_enable_device()'s release function, pcim_disable_device(), set
> pci_dev.is_managed to false so that subsequent drivers using the same
> struct pci_dev don't implicitly run into managed code.
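
For reference, this amounts to something like the following in the devres
release callback (assuming pcim_disable_device() keeps its current shape as
the release function registered by pcim_enable_device(); the exact hunk may
differ from the patch):

	static void pcim_disable_device(void *pdev_raw)
	{
		struct pci_dev *pdev = pdev_raw;

		if (!pdev->pinned)
			pci_disable_device(pdev);

		/*
		 * Reset is_managed so that a later driver binding to the
		 * same pci_dev without pcim_enable_device() does not fall
		 * into the managed pcim_intx() path.
		 */
		pdev->is_managed = false;
	}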
> 
> Fixes: 25216afc9db5 ("PCI: Add managed pcim_intx()")
> Reported-by: Alex Williamson <alex.williamson@...hat.com>
> Closes: https://lore.kernel.org/all/20240903094431.63551744.alex.williamson@redhat.com/
> Suggested-by: Alex Williamson <alex.williamson@...hat.com>
> Signed-off-by: Philipp Stanner <pstanner@...hat.com>
> Tested-by: Alex Williamson <alex.williamson@...hat.com>

Looks OK to me.

Reviewed-by: Damien Le Moal <dlemoal@...nel.org>

-- 
Damien Le Moal
Western Digital Research

