Message-ID: <20250305230959.GA318387@bhelgaas>
Date: Wed, 5 Mar 2025 17:09:59 -0600
From: Bjorn Helgaas <helgaas@...nel.org>
To: "Chia-Lin Kao (AceLan)" <acelan.kao@...onical.com>
Cc: Bjorn Helgaas <bhelgaas@...gle.com>,
	Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>,
	Lukas Wunner <lukas@...ner.de>, linux-pci@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] PCI: pciehp: Fix system hang during resume with
 daisy-chained hotplug controllers

Sorry for the delayed response.

On Tue, Oct 22, 2024 at 09:02:43PM +0800, Chia-Lin Kao (AceLan) wrote:
> A system hang occurs when multiple PCIe hotplug controllers in a daisy-chained
> setup (like a Thunderbolt dock with NVMe storage) resume from system sleep.
> This happens when both the dock and its downstream devices try to process PDC
> events at the same time through pciehp_request().
> 
> This patch replaces the pciehp_request() call with atomic_or(), which adds the
> PDC event to ctrl->pending_events atomically. This prevents the race condition
> by making the event handling atomic across multiple hotplug controllers during
> resume.

Can you explain what the race is, how it leads to a system hang, and
how this change avoids it?
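
For reference, as far as I can tell pciehp_request() is just this atomic_or()
plus a wakeup of the IRQ thread (quoting from memory, please double-check
against your tree):

  void pciehp_request(struct controller *ctrl, int action)
  {
  	atomic_or(action, &ctrl->pending_events);
  	if (!pciehp_poll_mode)
  		irq_wake_thread(ctrl->pcie->irq, ctrl);
  }

If that's right, the only functional difference your patch makes is that the
IRQ thread is no longer woken from .resume_noirq(); the PDC event just sits in
ctrl->pending_events until the thread runs for some other reason.  Spelling
out why that wakeup leads to the hang would make the commit log much easier
to follow.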

I assume that .resume_noirq() for two devices in the same PCIe path,
e.g., a dock and a device downstream from it, would be serialized at a
higher level, because we would want to resume the upstream device
before trying to resume the downstream one.  But you're seeing
something different?

> The bug was found with an Intel Thunderbolt 4 Bridge (8086:0b26) dock and a
> Phison NVMe controller (1987:5012), where the system would hang if both devices
> tried to handle presence detect changes during resume.

The code change is in the pciehp_device_replaced() path.  When you
reproduce the problem, do you actually replace a device?  Or is
something wrong with the pciehp_device_replaced() checks, and we
mistakenly *think* a device was replaced?
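
For my own understanding, here is roughly what I believe the replacement
check boils down to -- a simplified sketch from memory, not the actual
pciehp_device_replaced() code (names and details here are illustrative):

  /*
   * Simplified sketch: compare what currently answers in the slot
   * against the IDs cached in the existing struct pci_dev.
   */
  static bool device_looks_replaced(struct controller *ctrl)
  {
  	struct pci_dev *pdev;
  	bool replaced = false;
  	u32 reg;

  	pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0));
  	if (!pdev)
  		return true;			/* slot is empty now */

  	if (pci_read_config_dword(pdev, PCI_VENDOR_ID, &reg) ||
  	    reg != (pdev->vendor | (pdev->device << 16)))
  		replaced = true;		/* vendor/device ID changed */
  	else if (pci_read_config_dword(pdev, PCI_CLASS_REVISION, &reg) ||
  		 reg != (pdev->revision | (pdev->class << 8)))
  		replaced = true;		/* class/revision changed */

  	pci_dev_put(pdev);
  	return replaced;
  }

If the device behind the dock isn't yet able to answer config reads at that
point in resume, a check along these lines could report a replacement even
though nothing was actually swapped.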

> Changes:
>   v2:
>     * Replace pciehp_request() with atomic_or() to fix race condition
> 
>   v1:
>     * https://lore.kernel.org/lkml/Zvf7xYEA32VgLRJ6@wunner.de/T/
>     * Remove pci_walk_bus() call
>     * Fix appeared to work due to lower reproduction rate

Thanks for including the changelog.  You can put it after "---",
because we don't include it in the commit anyway.

You can wrap the commit log to 75 columns so it fits in 80 even after
git log indents it.

> Fixes: 9d573d19547b ("PCI: pciehp: Detect device replacement during system sleep")
> Signed-off-by: Chia-Lin Kao (AceLan) <acelan.kao@...onical.com>
> ---
>  drivers/pci/hotplug/pciehp_core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
> index ff458e692fed..56bf23d55c41 100644
> --- a/drivers/pci/hotplug/pciehp_core.c
> +++ b/drivers/pci/hotplug/pciehp_core.c
> @@ -332,7 +332,7 @@ static int pciehp_resume_noirq(struct pcie_device *dev)
>  			ctrl_dbg(ctrl, "device replaced during system sleep\n");
>  			pci_walk_bus(ctrl->pcie->port->subordinate,
>  				     pci_dev_set_disconnected, NULL);
> -			pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC);
> +			atomic_or(PCI_EXP_SLTSTA_PDC, &ctrl->pending_events);
>  		}
>  	}
>  
> -- 
> 2.43.0
> 
