Message-ID: <20211206231218.GA3843138@dhcp-10-100-145-180.wdc.com>
Date: Mon, 6 Dec 2021 15:12:18 -0800
From: Keith Busch <kbusch@...nel.org>
To: Kai-Heng Feng <kai.heng.feng@...onical.com>
Cc: bhelgaas@...gle.com, linux-pm@...r.kernel.org,
"Rafael J . Wysocki" <rafael@...nel.org>,
Nirmal Patel <nirmal.patel@...ux.intel.com>,
Jonathan Derrick <jonathan.derrick@...ux.dev>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Rob Herring <robh@...nel.org>,
Krzysztof Wilczyński <kw@...ux.com>,
linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] PCI: vmd: Honor ACPI _OSC on PCIe features
On Fri, Dec 03, 2021 at 11:15:41AM +0800, Kai-Heng Feng wrote:
> When a Samsung PCIe Gen4 NVMe drive is connected to an Intel ADL VMD,
> the combination causes an AER message flood and drags system
> performance down.
>
> The issue doesn't happen when VMD mode is disabled in BIOS, since AER
> isn't enabled by acpi_pci_root_create(). When VMD mode is enabled, AER
> is enabled regardless of _OSC:
> [ 0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> ...
> [ 1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
>
> Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC
> and disable PCIe features accordingly to resolve the issue.
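
For context, the path the commit message refers to is the _OSC handling
in acpi_pci_root_create(), which clears the host bridge's native_* flags
for any feature the platform did not grant to the OS. Roughly (a sketch
from memory, not a verbatim copy of drivers/acpi/pci_root.c):

	if (!(root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL))
		host_bridge->native_pcie_hotplug = 0;
	if (!(root->osc_control_set & OSC_PCI_EXPRESS_AER_CONTROL))
		host_bridge->native_aer = 0;
	if (!(root->osc_control_set & OSC_PCI_EXPRESS_PME_CONTROL))
		host_bridge->native_pme = 0;

The VMD-created host bridge never goes through that path, which is why
this patch copies the flags over from the parent bridge instead.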
At least for some versions of this hardware, I recall ACPI is unaware
of any devices in the VMD domain; the platform cannot see past the VMD
endpoint, so I thought the driver was supposed to always let the VMD
domain use OS native support regardless of the parent's ACPI _OSC.
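
If that is still true for some VMD generations, one way to keep both
behaviours would be to gate the copy on a device feature bit, something
like the sketch below (VMD_FEAT_HONOR_OSC is made up here, not an
existing flag):

	/*
	 * Hypothetical: only inherit the parent's _OSC results on parts
	 * where the platform is actually aware of the VMD domain;
	 * otherwise keep the default OS-native settings.
	 */
	if (features & VMD_FEAT_HONOR_OSC)
		vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
					   to_pci_host_bridge(vmd->bus->bridge));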
> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215027
> Suggested-by: Rafael J. Wysocki <rafael@...nel.org>
> Signed-off-by: Kai-Heng Feng <kai.heng.feng@...onical.com>
> ---
> v3:
> - Use a new helper function.
>
> v2:
> - Use pci_find_host_bridge() instead of open coding.
>
> drivers/pci/controller/vmd.c | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index a45e8e59d3d48..691765e6c12aa 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -661,6 +661,21 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd)
> return 0;
> }
>
> +/*
> + * Since VMD is an aperture to regular PCIe root ports, only allow it to
> + * control features that the OS is allowed to control on the physical PCI bus.
> + */
> +static void vmd_copy_host_bridge_flags(struct pci_host_bridge *root_bridge,
> + struct pci_host_bridge *vmd_bridge)
> +{
> + vmd_bridge->native_pcie_hotplug = root_bridge->native_pcie_hotplug;
> + vmd_bridge->native_shpc_hotplug = root_bridge->native_shpc_hotplug;
> + vmd_bridge->native_aer = root_bridge->native_aer;
> + vmd_bridge->native_pme = root_bridge->native_pme;
> + vmd_bridge->native_ltr = root_bridge->native_ltr;
> + vmd_bridge->native_dpc = root_bridge->native_dpc;
> +}
> +
> static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> {
> struct pci_sysdata *sd = &vmd->sysdata;
> @@ -798,6 +813,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> return -ENODEV;
> }
>
> + vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
> + to_pci_host_bridge(vmd->bus->bridge));
> +
> vmd_attach_resources(vmd);
> if (vmd->irq_domain)
> dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
> --
> 2.32.0
>