Message-ID: <20250903230450.GA1236832@bhelgaas>
Date: Wed, 3 Sep 2025 18:04:50 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: David Box <david.e.box@...ux.intel.com>
Cc: rafael@...nel.org, bhelgaas@...gle.com, vicamo.yang@...onical.com,
kenny@...ix.com, ilpo.jarvinen@...ux.intel.com,
nirmal.patel@...ux.intel.com, mani@...nel.org,
linux-pm@...r.kernel.org, linux-pci@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH V3 1/2] PCI/ASPM: Add host-bridge API to override default
ASPM/CLKPM link state
On Fri, Aug 29, 2025 at 12:54:20PM -0700, David Box wrote:
> On Thu, Aug 28, 2025 at 03:43:45PM -0500, Bjorn Helgaas wrote:
> > On Mon, Aug 25, 2025 at 01:35:22PM -0700, David E. Box wrote:
> > > Synthetic PCIe hierarchies, such as those created by Intel VMD, are not
> > > enumerated by firmware and do not receive BIOS-provided ASPM or CLKPM
> > > defaults. Devices in such domains may therefore run without the intended
> > > power management.
> > >
> > > Add a host-bridge mechanism that lets controller drivers supply their own
> > > defaults. A new aspm_default_link_state field in struct pci_host_bridge is
> > > set via pci_host_set_default_pcie_link_state(). During link initialization,
> > > if this field is non-zero, ASPM and CLKPM defaults come from it instead of
> > > BIOS.
> > >
> > > This enables drivers like VMD to align link power management with platform
> > > expectations and avoids embedding controller-specific quirks in ASPM core
> > > logic.
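> > >
> > > For illustration only (the exact prototypes and call sites are in
> > > the patch itself, not spelled out here), the shape of the interface
> > > is roughly:
> > >
> > >     /* controller driver, e.g. vmd.c: supply ASPM/CLKPM defaults */
> > >     pci_host_set_default_pcie_link_state(bridge, PCIE_LINK_STATE_ALL);
> > >
> > >     /* ASPM core, during link initialization */
> > >     if (host_bridge->aspm_default_link_state)
> > >             link->aspm_default = host_bridge->aspm_default_link_state;
> > >     /* otherwise keep the existing BIOS-derived default */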
> >
> > I think this kind of sidesteps the real issue. Drivers for host
> > controllers or PCI devices should tell us about *broken* things, but
> > not about things advertised by the hardware and available for use.
>
> I agree with the principle. The intent isn’t for VMD (or any controller) to
> override valid platform policy. It’s to handle synthetic domains where the
> platform doesn’t provide any policy path (no effective _OSC/FADT for the child
> hierarchy). In those cases, the controller is the only agent that knows the
> topology and can supply sane defaults.
>
> I’m happy to tighten the patch to explicitly cover synthetic domains only.
> Instead of an API, we could have a boolean flag 'aspm_synthetic_domain'. When
> set by the controller, we can do:
>
>     if (host_bridge->aspm_synthetic_domain)
>             link->aspm_default = PCIE_LINK_STATE_ALL;
>
> This at least addresses your concern about the policy decision: the
> core decides how these domains are handled, rather than exposing an
> ABI that lets individual domains set policy.
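>
> For completeness, the controller side would then be a one-liner in
> vmd.c, something like the following (using pci_find_host_bridge() on
> the VMD root bus; the field name is just the strawman above):
>
>     pci_find_host_bridge(vmd->bus)->aspm_synthetic_domain = true;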
>
> > The only documented policy controls I'm aware of for ASPM are:
> >
> > - FADT "PCIe ASPM Controls" bit ("if set, OS must not enable ASPM
> > control on this platform")
> >
> > - _OSC negotiation for control of the PCIe Capability (OS is only
> > allowed to write PCI_EXP_LNKCTL if platform has granted control to
> > the OS)
> >
> > I think what we *should* be doing is enabling ASPM when it's
> > advertised, subject to those platform policy controls and user choices
> > like CONFIG_PCIEASPM_PERFORMANCE/POWERSAVE/etc and sysfs attributes.
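> >
> > Concretely (identifiers quoted from memory, so double-check them
> > against pci_root.c and aspm.c), those two gates boil down to:
> >
> >     /* FADT: "PCIe ASPM Controls" set => OS must not enable ASPM */
> >     if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM)
> >             pcie_no_aspm();
> >
> >     /*
> >      * _OSC: only keep ASPM control if the platform granted the OS
> >      * control of the PCIe Capability (i.e. PCI_EXP_LNKCTL writes).
> >      */
> >     if (!(root->osc_control_set & OSC_PCI_EXPRESS_CAPABILITY_CONTROL))
> >             pcie_no_aspm();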
> >
> > So basically I think link->aspm_default should be PCIE_LINK_STATE_ALL
> > without drivers doing anything at all. Maybe we have to carve out
> > exceptions, e.g., "VMD hierarchies are exempt from _OSC," or "devices
> > on x86 systems before 2026 can't enable more ASPM than BIOS did," or
> > whatever. Is there any baby step we can make in that direction?
> >
> > This feels a little scary, so feel free to convince me it can't be
> > done :)
>
> I understand your direction of enabling all advertised states by
> default (subject to FADT/_OSC and user settings). To explore that,
> I’ll send an RFC in parallel with this patch that proposes a baby
> step: add instrumentation so we can see where BIOS left advertised
> capabilities unused, and make it opt-in via a boot param so we can
> evaluate the impact safely.
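> The instrumentation could start as simple as something like this
> (aspm.c field names recalled from memory, not verified):
>
>     if (link->aspm_capable & ~link->aspm_enabled)
>             pci_info(link->pdev,
>                      "ASPM: states 0x%x advertised but left disabled by BIOS\n",
>                      link->aspm_capable & ~link->aspm_enabled);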

The instrumentation, absolutely. We need something that shows what was
already enabled and when we change things.
> So this series will handle the VMD gap directly, and the RFC can
> kick off the wider discussion about defaults on ACPI-managed hosts.
> Does that sound like a reasonable approach and split?

I don't really want a parallel approach because I don't think it would
ever converge again. BUT I think you're still OK for VMD, because I
think the default should be PCIE_LINK_STATE_ALL; when we carve out the
exceptions, they would not live in vmd.c, and it's easy to say that
there's no exception for VMD.