Message-ID: <20220525215739.GA275957@bhelgaas>
Date: Wed, 25 May 2022 16:57:39 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: Jim Quinlan <jim2101024@...il.com>
Cc: linux-pci <linux-pci@...r.kernel.org>,
Nicolas Saenz Julienne <nsaenz@...nel.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
James Dutton <james.dutton@...il.com>,
Cyril Brulebois <kibi@...ian.org>,
bcm-kernel-feedback-list <bcm-kernel-feedback-list@...adcom.com>,
Jim Quinlan <james.quinlan@...adcom.com>,
Florian Fainelli <f.fainelli@...il.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Rob Herring <robh@...nel.org>,
Krzysztof Wilczyński <kw@...ux.com>,
"moderated list:BROADCOM BCM2711/BCM2835 ARM ARCHITECTURE"
<linux-rpi-kernel@...ts.infradead.org>,
"moderated list:BROADCOM BCM2711/BCM2835 ARM ARCHITECTURE"
<linux-arm-kernel@...ts.infradead.org>,
open list <linux-kernel@...r.kernel.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>, linux-pm@...r.kernel.org
Subject: Re: [PATCH v1] PCI: brcmstb: Fix regression regarding missing PCIe
linkup
On Tue, May 24, 2022 at 12:54:48PM -0400, Jim Quinlan wrote:
> On Mon, May 23, 2022 at 6:10 PM Bjorn Helgaas <helgaas@...nel.org> wrote:
> > On Sat, May 21, 2022 at 02:51:42PM -0400, Jim Quinlan wrote:
> > > On Sat, May 21, 2022 at 12:43 PM Bjorn Helgaas <helgaas@...nel.org> wrote:
> > > > On Wed, May 18, 2022 at 03:42:11PM -0400, Jim Quinlan wrote:
> > > > > commit 93e41f3fca3d ("PCI: brcmstb: Add control of subdevice
> > > > > voltage regulators")
> > > > >
> > > > > introduced a regression on the PCIe RPi4 Compute Module. If the
> > > > > PCIe endpoint node described in [2] was missing, no linkup would
> > > > > be attempted, and subsequent accesses would cause a panic
> > > > > because this particular PCIe HW causes a CPU abort on illegal
> > > > > accesses (instead of returning 0xffffffff).
> > > > >
> > > > > We fix this by allowing the DT endpoint subnode to be missing.
> > > > > This is important for platforms like the CM4 which have a
> > > > > standard PCIe socket and the endpoint device is unknown.
> > But above you say it's the *endpoint* node that doesn't exist. The
> > existing code looks like it's checking for the *bridge* node
> > (bus->dev->of_node). We haven't even enumerated the devices on the
> > child bus, so we don't know about them at this point.
>
> You are absolutely correct and I must change the commit message
> to say the "root port DT node". I'm sorry; this mistake likely did not
> help you understand the fix. :-(
Great, that will help me out! I think including the relevant DT
snippet would also make it more concrete and might conceivably be
helpful to somebody working around it on a kernel without the fix.
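Something along these lines, perhaps (my reading of the brcmstb DT
binding; node names, addresses, and the regulator phandle are
illustrative, not taken from an actual CM4 DT):

```dts
pcie@7d500000 {
	compatible = "brcm,bcm2711-pcie";
	...

	/* Root Port subnode -- the node the driver looks for; the
	 * supply properties live here, not on the host bridge node */
	pci@0,0 {
		device_type = "pci";
		reg = <0x0 0x0 0x0 0x0 0x0>;
		vpcie3v3-supply = <&vreg_pcie_3v3>;
	};
};
```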
> > What happens if there is a DT node for the bridge, but it doesn't
> > describe any regulators? I assume regulator_bulk_get() will fail, and
> > it looks like that might still keep us from bringing the link up?
>
> The regulator_bulk_get() func does not fail if the regulators are
> not present. Instead it "gets" a dummy regulator and issues a warning
> per missing supply. An earlier version of my pull request included
> code to prescan the DT node and call regulator_bulk_get() with only
> the names of the regulators actually present, but IIRC this was
> NAKed. Hopefully I will not be swamped with emails from RPi
> developers who think these warnings indicate a problem.
Ah, I see, this is the IS_ERR (but not -ENODEV) NORMAL_GET case in
_regulator_get(). You might get some emails, but I guess it must be a
fairly common situation :)
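For reference, I think the pattern in question boils down to roughly
this (a sketch, not the exact driver code; supply names taken from the
binding discussion above):

```c
struct regulator_bulk_data supplies[] = {
	{ .supply = "vpcie3v3" },
	{ .supply = "vpcie12v" },
};

/*
 * Supplies absent from the DT resolve to dummy regulators with a
 * warning rather than an error, so the bulk get succeeds even when
 * the node describes no regulators at all.
 */
err = regulator_bulk_get(dev, ARRAY_SIZE(supplies), supplies);
```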
> > > > What happens if we turn on the power but don't find any
> > > > downstream devices?
> > >
> > > They are turned off to conserve power.
> ...
> When brcm_pcie_add_bus() is invoked, we will "get" and enable any
> regulators that are present in the DT node. If busno == 1, we will
> also attempt PCIe link-up. If link-up fails, which can happen for
> multiple reasons but most often because of a missing device, we turn
> on "refusal" mode to prevent our unforgiving PCIe HW from causing an
> abort on any subsequent PCIe config-space accesses.
> Further, a failed link-up causes brcm_pcie_probe() to stop and
> remove the root bus, which in turn invokes brcm_pcie_remove_bus()
> (actually named pci_subdev_regulators_remove_bus(), as it may someday
> find its way into bus.c), which invokes regulator_bulk_disable() on
> any regulators that were enabled by the probe.
Ah, thanks! This is the detail I missed. If pci_host_probe()
succeeds and the link is down, we call brcm_pcie_remove() (the
driver's .remove() method). That's unusual and possibly unique among
native host bridge drivers. I'm not sure that's the best pattern
here. Most drivers can't do that because they expect multiple devices
on the root bus. And the Root Port is still a functional device on
its own, even if its link is down. Users likely expect to see it in
lspci and manipulate it via setpci. It may have AER logs with clues
about why the link didn't come up.
Again something for future discussion, not for this regression.
> Unless you object, I plan on sending you a v2 of my regression fix
> which will correct the commit message, change the "if (busno == 1)"
> conditional to only guard the pcie linkup call, and add further
> comments.
I don't really *like* comparing "busno == 1" because the root bus
number is programmable on most devices. It would be more obvious if
we could test for a Root Port directly. But maybe it's the best we
can do for now.
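If we do want a more direct test someday, something along these lines
might work (untested sketch; pci_is_root_bus() is in
include/linux/pci.h):

```c
/* In brcm_pcie_add_bus(): attempt link-up only for the bus directly
 * below the root bus, instead of hard-coding bus number 1. */
if (bus->parent && pci_is_root_bus(bus->parent))
	ret = brcm_pcie_linkup(pcie);
```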
Someday it seems like we should figure out how to make the PCI core
smart enough to turn on any regulators for devices below the Root Port
when we put the Root Port in D0. But I don't know how to do that or
even whether it's feasible.
Bjorn