Message-ID: <aFu3Mg9SqlaDTw9z@wunner.de>
Date: Wed, 25 Jun 2025 10:45:38 +0200
From: Lukas Wunner <lukas@...ner.de>
To: Timothy Pearson <tpearson@...torengineering.com>
Cc: Bjorn Helgaas <helgaas@...nel.org>,
	linuxppc-dev <linuxppc-dev@...ts.ozlabs.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	linux-pci <linux-pci@...r.kernel.org>,
	Madhavan Srinivasan <maddy@...ux.ibm.com>,
	Michael Ellerman <mpe@...erman.id.au>,
	christophe leroy <christophe.leroy@...roup.eu>,
	Naveen N Rao <naveen@...nel.org>,
	Bjorn Helgaas <bhelgaas@...gle.com>,
	Shawn Anastasio <sanastasio@...torengineering.com>
Subject: Re: [PATCH v2 2/6] pci/hotplug/pnv_php: Work around switches with
 broken

On Fri, Jun 20, 2025 at 11:45:45AM -0500, Timothy Pearson wrote:
> From: "Lukas Wunner" <lukas@...ner.de>
> > I don't know how much PCIe hotplug on PowerNV differs from native,
> > spec-compliant PCIe hotplug.  If the differences are vast (and I
> > get the feeling they might be if I read terms like "PHB" and
> > "EEH unfreeze", which sound completely foreign to me), it might
> > be easier to refactor pnv_php.c and copy patterns or code from
> > pciehp, than to integrate the functionality from pnv_php.c into
> > pciehp.
> 
> The differences at the root level (PHB) are quite significant -- the
> controller is more advanced in many ways than standard PCIe root
> complexes [1] -- and those differences need very special handling.
> Once we are looking at devices downstream of the root complex,
> standard PCIe hotplug and AER specifications apply, so we're in a
> strange spot of really wanting to use pciehp (to handle all nested
> downstream bridges), but pciehp still needs to understand how to deal
> with our unique root complex.
> 
> One idea I had was to use the existing modularity of pciehp's source
> and add a new pciehp_powernv.c file with all of our root complex
> handling methods.  We could then #ifdef swap the associated root complex
> calls to that external file when compiled in PowerNV mode.

We've traditionally dealt with such issues by inserting pcibios_*()
hooks in generic code, with a __weak implementation (which is usually
an empty stub) and a custom implementation in arch/powerpc/.

The linker then chooses the custom implementation over the __weak one.

You can find the existing hooks with:

git grep "__weak .*pcibios" -- drivers/pci
git grep pcibios -- arch/powerpc
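
For illustration, the pattern looks roughly like this (the hook name
pcibios_hotplug_fixup() is made up for this sketch, not an existing
interface):

	/* include/linux/pci.h */
	void pcibios_hotplug_fixup(struct pci_dev *dev);

	/* drivers/pci/... (generic code): empty __weak stub */
	void __weak pcibios_hotplug_fixup(struct pci_dev *dev)
	{
	}

	/* arch/powerpc/... : non-weak override with the PowerNV handling */
	void pcibios_hotplug_fixup(struct pci_dev *dev)
	{
		/* e.g. EEH unfreeze, PHB-specific setup */
	}

Generic code then calls pcibios_hotplug_fixup() unconditionally; on
arches without an override the empty stub is linked in.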

An alternative method is to add a callback to struct pci_host_bridge.
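
Roughly (the ->enable_hotplug_slot callback below is hypothetical, just
to show the shape; struct pci_host_bridge already carries similar
function pointers such as ->map_irq that platform code fills in):

	/* include/linux/pci.h */
	struct pci_host_bridge {
		/* ... existing members ... */
		int (*enable_hotplug_slot)(struct pci_host_bridge *bridge,
					   struct pci_dev *port);
	};

	/* pciehp: invoke the platform hook if one was registered */
	struct pci_host_bridge *bridge = pci_find_host_bridge(port->bus);

	if (bridge->enable_hotplug_slot)
		ret = bridge->enable_hotplug_slot(bridge, port);

The PowerNV host bridge driver would set ->enable_hotplug_slot when it
registers the bridge; other platforms would be unaffected.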

> > One thing I don't quite understand is, it sounds like you've
> > attached a PCIe switch to a Root Port and the hotplug ports
> > are on the PCIe switch.  Aren't those hotplug ports just
> > bog-standard ones that can be driven by pciehp?  My expectation
> > would have been that a PowerNV-specific hotplug driver would
> > only be necessary for hotplug-capable Root Ports.
> 
> That's the problem -- the pciehp driver assumes x86 root ports,

Nah, there's nothing x86-specific about it.  pciehp is used on all
kinds of arches, it's just an implementation of the PCIe Base Spec.

> and the powernv driver ends up duplicating (badly) parts of the pciehp
> functionality for downstream bridges.  That's one reason I'd like to
> abstract the root port handling in pciehp and eventually move the
> PowerNV root port handling into that module.

Sounds like you're currently describing the Switch Downstream Ports with
an "ibm,ioda-phb2" compatible string in the devicetree?  That would feel
wrong since they're not host bridges.

> [1] Among other interesting differences, it is capable of blocking and
> reporting all invalid accesses from a PCIe device to system memory, and
> has been tested to do so properly.  This breaks assumptions in many PCIe
> device drivers, but is also a significant security advantage.
> EEH freeze is effectively this security mechanism kicking in -- on
> detecting an invalid access, the PHB itself will block the access and
> put the PE into a frozen state where no PCIe traffic is permitted.

That sounds somewhat similar to what the PCIe Access Control Services
Capability provides.  I think at least some IOMMUs support raising an
exception on illegal accesses, so the EEH functionality is probably not
*that* special.

Thanks,

Lukas
