Message-ID: <CAJZ5v0hpEy46Vh83dQ_orG=jW+a1b2+kipRLQOVOnvhjN0j03g@mail.gmail.com>
Date: Thu, 2 Dec 2021 17:17:25 +0100
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Kai-Heng Feng <kai.heng.feng@...onical.com>
Cc: Bjorn Helgaas <bhelgaas@...gle.com>,
Linux PM <linux-pm@...r.kernel.org>,
Nirmal Patel <nirmal.patel@...ux.intel.com>,
Jonathan Derrick <jonathan.derrick@...ux.dev>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Rob Herring <robh@...nel.org>,
Krzysztof Wilczyński <kw@...ux.com>,
Linux PCI <linux-pci@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] PCI: vmd: Honor ACPI _OSC on PCIe features
On Thu, Dec 2, 2021 at 4:05 AM Kai-Heng Feng
<kai.heng.feng@...onical.com> wrote:
>
> When a Samsung PCIe Gen4 NVMe drive is connected to Intel ADL VMD,
> the combination causes an AER message flood and drags system
> performance down.
>
> The issue doesn't happen when VMD mode is disabled in the BIOS, since
> AER isn't enabled by acpi_pci_root_create(). When VMD mode is enabled,
> AER is enabled regardless of _OSC:
> [ 0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> ...
> [ 1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
>
> Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC
> and disable the PCIe features accordingly to resolve the issue.
>
> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215027
> Signed-off-by: Kai-Heng Feng <kai.heng.feng@...onical.com>
> ---
> v2:
> - Use pci_find_host_bridge() instead of open coding.
>
> drivers/pci/controller/vmd.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index a45e8e59d3d48..acf847cb825c0 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -671,6 +671,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>  	resource_size_t offset[2] = {0};
>  	resource_size_t membar2_offset = 0x2000;
>  	struct pci_bus *child;
> +	struct pci_host_bridge *root_bridge, *vmd_bridge;
>  	int ret;
>
>  	/*
> @@ -798,6 +799,17 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>  		return -ENODEV;
>  	}
>
> +	vmd_bridge = to_pci_host_bridge(vmd->bus->bridge);
> +
> +	root_bridge = pci_find_host_bridge(vmd->dev->bus);
> +
> +	vmd_bridge->native_pcie_hotplug = root_bridge->native_pcie_hotplug;
> +	vmd_bridge->native_shpc_hotplug = root_bridge->native_shpc_hotplug;
> +	vmd_bridge->native_aer = root_bridge->native_aer;
> +	vmd_bridge->native_pme = root_bridge->native_pme;
> +	vmd_bridge->native_ltr = root_bridge->native_ltr;
> +	vmd_bridge->native_dpc = root_bridge->native_dpc;
One more, arguably minor, thing: I would put the above copying into a
separate function, call it here, and add a comment next to it explaining
why it is done, like
/*
 * Since VMD is an aperture to regular PCIe root ports, only allow it to control
 * features that the OS is allowed to control on the physical PCI bus.
 */
vmd_copy_host_bridge_flags(to_pci_host_bridge(vmd->bus->bridge),
			   pci_find_host_bridge(vmd->dev->bus));
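
For illustration, a minimal sketch of such a helper (the name
vmd_copy_host_bridge_flags and the argument order are assumed from the
call above; the assignments are just the ones from the patch, moved out
of vmd_enable_domain()):

static void vmd_copy_host_bridge_flags(struct pci_host_bridge *vmd_bridge,
				       struct pci_host_bridge *root_bridge)
{
	/*
	 * Mirror the _OSC-negotiated feature ownership from the physical
	 * host bridge onto the synthetic VMD host bridge, so the VMD
	 * domain doesn't enable features the platform firmware retained.
	 */
	vmd_bridge->native_pcie_hotplug = root_bridge->native_pcie_hotplug;
	vmd_bridge->native_shpc_hotplug = root_bridge->native_shpc_hotplug;
	vmd_bridge->native_aer = root_bridge->native_aer;
	vmd_bridge->native_pme = root_bridge->native_pme;
	vmd_bridge->native_ltr = root_bridge->native_ltr;
	vmd_bridge->native_dpc = root_bridge->native_dpc;
}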
> +
>  	vmd_attach_resources(vmd);
>  	if (vmd->irq_domain)
>  		dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
> --
> 2.32.0
>