Message-ID: <20220526205355.GA344519@bhelgaas>
Date: Thu, 26 May 2022 15:53:55 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: Rob Herring <robh@...nel.org>
Cc: Jim Quinlan <jim2101024@...il.com>,
linux-pci <linux-pci@...r.kernel.org>,
Nicolas Saenz Julienne <nsaenz@...nel.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
James Dutton <james.dutton@...il.com>,
Cyril Brulebois <kibi@...ian.org>,
bcm-kernel-feedback-list <bcm-kernel-feedback-list@...adcom.com>,
Jim Quinlan <james.quinlan@...adcom.com>,
Florian Fainelli <f.fainelli@...il.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Krzysztof Wilczyński <kw@...ux.com>,
"moderated list:BROADCOM BCM2711/BCM2835 ARM ARCHITECTURE"
<linux-rpi-kernel@...ts.infradead.org>,
"moderated list:BROADCOM BCM2711/BCM2835 ARM ARCHITECTURE"
<linux-arm-kernel@...ts.infradead.org>,
open list <linux-kernel@...r.kernel.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>, linux-pm@...r.kernel.org
Subject: Re: [PATCH v1] PCI: brcmstb: Fix regression regarding missing PCIe
linkup
On Thu, May 26, 2022 at 02:25:12PM -0500, Rob Herring wrote:
> On Mon, May 23, 2022 at 05:10:36PM -0500, Bjorn Helgaas wrote:
> > On Sat, May 21, 2022 at 02:51:42PM -0400, Jim Quinlan wrote:
> > > On Sat, May 21, 2022 at 12:43 PM Bjorn Helgaas <helgaas@...nel.org> wrote:
> > > > On Wed, May 18, 2022 at 03:42:11PM -0400, Jim Quinlan wrote:
> > > > I added Rafael because this seems vaguely similar to runtime power
> > > > management, and if we can integrate with that somehow, I'd sure like
> > > > to avoid building a parallel infrastructure for it.
> > > >
> > > > The current path we're on is to move some of this code that's
> > > > currently in pcie-brcmstb.c to the PCIe portdrv [0]. I'm a little
> > > > hesitant about that because ACPI does just fine without it. If we're
> > > > adding new DT functionality that could not be implemented via ACPI,
> > > > that's one thing. But I'm not convinced this is that new.
> > >
> > > AFAICT, Broadcom STB and Cable Modem products do not have/use/want
> > > ACPI. We are fine with keeping this "PCIe regulator" feature
> > > private to our driver and giving you speedy and full support in
> > > maintaining it.
> >
> > I don't mean that you should use ACPI, only that ACPI platforms can do
> > this sort of power control using the existing PCI core infrastructure,
> > and maybe there's a way for OF/DT platforms to hook into that same
> > infrastructure to minimize the driver-specific work. E.g., maybe
> > there's a way to extend platform_pci_set_power_state() and similar to
> > manage these regulators.
>
> The big difference is ACPI abstracts how to control power for a device.
> The OS just knows D0, D3, etc. states. For DT, there is no such
> abstraction. You need device specific code to do device specific power
> management.
I'm thinking about the PCI side of the host controller, which should
live by the PCI rules. There are device-specific ways to control
power, clocks, resets, etc. on the PCI side, but drivers for PCI
devices (as opposed to drivers for the host controllers) can't really
call that code directly.
There are some exceptions, but generally speaking I don't think PCI
drivers that use generic power management need to use PCI_D0,
PCI_D3hot, etc. directly. Generic PM uses interfaces like
pci_pm_suspend() that keep most of the PCI details in the PCI core
instead of the endpoint driver, e.g., [3].
The PCI core has a bunch of interfaces:
platform_pci_power_manageable()
platform_pci_set_power_state()
platform_pci_get_power_state()
platform_pci_choose_state()
that currently mostly use ACPI. So I'm wondering whether there's some
way to extend those platform_*() interfaces to call the native host
controller device-specific power control code via an ops structure.
Otherwise it feels like the native host controller drivers are in a
different world than the generic PM world, and we'll end up with every
host controller driver reimplementing things.
For example, how would we runtime suspend a Root Port and turn off
power for PCI devices below it? Obviously that requires
device-specific code to control the power. Do we have some common
interface to it, or do we have to trap config writes to PCI_PM_CTRL or
something?
[3] https://git.kernel.org/linus/cd97b7e0d780
> > > > [0] https://lore.kernel.org/r/20211110221456.11977-6-jim2101024@gmail.com
> > > > IIUC, this path:
> > > >
> > > > pci_alloc_child_bus
> > > > brcm_pcie_add_bus # .add_bus method
> > > > pci_subdev_regulators_add_bus # in pcie-brcmstb.c for now
> > > > alloc_subdev_regulators # in pcie-brcmstb.c for now
> > > > regulator_bulk_get
> > > > regulator_bulk_enable
> > > > brcm_pcie_linkup # bring link up
> > > >
> > > > is basically so we can leave power to downstream devices off, then
> > > > turn it on when we're ready to enumerate those downstream devices.
> > >
> > > Yes -- it is the "chicken-and-egg" problem. Ideally, we would like
> > > for the endpoint driver to turn on its own regulators, but even to
> > > know which endpoint driver to probe we must turn on the regulator to
> > > establish linkup.
> >
> > I don't think having an endpoint driver turn on power to its device is
> > the right goal.
>
> DT requires device specific code to control a specific device. That
> belongs in the driver for that device.
I must be talking about something different than you are. I see that
brcmstb has device-specific code to control the brcmstb device as well
as power for PCI devices downstream from that device.
When I read "endpoint driver" I think of a PCIe Endpoint device like a
NIC. That's just a random PCI device, and I read "endpoint driver to
turn on its own regulators" as suggesting that the NIC driver (e1000,
etc) would turn on power to the NIC. Is that the intent?
Bjorn