Message-ID: <20250530113404.GA138859@bhelgaas>
Date: Fri, 30 May 2025 06:34:04 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
Cc: Mahesh J Salgaonkar <mahesh@...ux.ibm.com>,
	Oliver O'Halloran <oohall@...il.com>,
	Bjorn Helgaas <bhelgaas@...gle.com>,
	Lorenzo Pieralisi <lpieralisi@...nel.org>,
	Krzysztof Wilczyński <kw@...ux.com>,
	Rob Herring <robh@...nel.org>, Zhou Wang <wangzhou1@...ilicon.com>,
	Will Deacon <will@...nel.org>, Robert Richter <rric@...nel.org>,
	Alyssa Rosenzweig <alyssa@...enzweig.io>,
	Marc Zyngier <maz@...nel.org>,
	Conor Dooley <conor.dooley@...rochip.com>,
	Daire McNamara <daire.mcnamara@...rochip.com>, dingwei@...vell.com,
	cassel@...nel.org, Lukas Wunner <lukas@...ner.de>,
	Krishna Chaitanya Chundru <krishna.chundru@....qualcomm.com>,
	linuxppc-dev@...ts.ozlabs.org, linux-pci@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-arm-msm@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org,
	linux-riscv@...ts.infradead.org
Subject: Re: [PATCH v4 4/5] PCI: host-common: Add link down handling for host
 bridges

On Fri, May 30, 2025 at 09:16:59AM +0530, Manivannan Sadhasivam wrote:
> On Wed, May 28, 2025 at 05:35:00PM -0500, Bjorn Helgaas wrote:
> > On Thu, May 08, 2025 at 12:40:33PM +0530, Manivannan Sadhasivam wrote:
> > > When the PCI link goes down, it needs to be recovered to bring it
> > > back up. But that cannot be done in a generic way, since the link
> > > recovery procedure is specific to each host bridge. So add a new API,
> > > pci_host_handle_link_down(), that can be called by host bridge
> > > drivers when the link goes down.
> > > 
> > > The API will iterate through all the slots and call the
> > > pcie_do_recovery() function with 'pci_channel_io_frozen' as the
> > > state. This results in the execution of the AER Fatal error handling
> > > code. Since link down recovery is pretty much the same as AER Fatal
> > > error handling, the pcie_do_recovery() helper is reused here. First,
> > > the AER error_detected callback is triggered for the bridge and the
> > > downstream devices. Then, pci_host_reset_slot() is called for the
> > > slot, which resets the slot using the 'reset_slot' callback to
> > > recover the link. Once that's done, a resume message is broadcast to
> > > the bridge and the downstream devices, indicating successful link
> > > recovery.
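
For reference, a rough sketch of the flow described above (only
pcie_do_recovery(), pci_channel_io_frozen, and pci_host_reset_slot() are
named in the description; the loop body here is illustrative, not the
actual patch code):

  void pci_host_handle_link_down(struct pci_host_bridge *bridge)
  {
          struct pci_dev *dev;

          /*
           * Walk the devices on the host bridge's root bus and run the
           * fatal-error recovery path for each Root Port.  With
           * pci_channel_io_frozen, pcie_do_recovery() triggers the AER
           * error_detected callbacks, resets the slot via the
           * 'reset_slot' callback, and broadcasts the resume message on
           * success.
           */
          list_for_each_entry(dev, &bridge->bus->devices, bus_list)
                  if (pci_is_bridge(dev))
                          pcie_do_recovery(dev, pci_channel_io_frozen,
                                           pci_host_reset_slot);
  }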
> > 
> > Link down is an event for a single Root Port.  Why would we iterate
> > through all the Root Ports if the link went down for one of them?
> 
> Because on the reference platform (Qcom), the link down notification
> is not per-port but per-controller. That's why we iterate through all
> the ports. The callback is supposed to identify the ports that
> triggered the link down event and recover them.

Maybe I'm missing something.  Which callback identifies the port(s)
that triggered the link down event?  I see that
pci_host_handle_link_down() is called by
rockchip_pcie_rc_sys_irq_thread() and qcom_pcie_global_irq_thread(),
but I don't see the logic that identifies a particular Root Port.

Per-controller notification of per-port events is a controller
deficiency, not something inherent to PCIe.  I don't think we should
build common infrastructure that resets all the Root Ports just
because one of them had an issue.

I think pci_host_handle_link_down() should take a Root Port, not a
host bridge, and the controller driver should figure out which port
needs to be recovered.  If the driver can't figure out which one needs
it, it can have its own loop to recover all of them.
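
Something like this (a sketch of the suggested shape, not tested):

  /* Common code recovers exactly one Root Port: */
  void pci_host_handle_link_down(struct pci_dev *port)
  {
          pcie_do_recovery(port, pci_channel_io_frozen,
                           pci_host_reset_slot);
  }

and a controller driver that cannot tell which port went down keeps
its own loop over the Root Ports and calls this for each of them.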

Bjorn
