Message-ID: <CAJZ5v0jQUc8NyNYiGpx0ayEPXJR-TS4fy832+2fBGgKLmdWjtg@mail.gmail.com>
Date:   Wed, 1 Dec 2021 15:40:49 +0100
From:   "Rafael J. Wysocki" <rafael@...nel.org>
To:     Kai-Heng Feng <kai.heng.feng@...onical.com>
Cc:     Bjorn Helgaas <bhelgaas@...gle.com>,
        Linux PM <linux-pm@...r.kernel.org>,
        Nirmal Patel <nirmal.patel@...ux.intel.com>,
        Jonathan Derrick <jonathan.derrick@...ux.dev>,
        Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
        Rob Herring <robh@...nel.org>,
        Krzysztof Wilczyński <kw@...ux.com>,
        Linux PCI <linux-pci@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] PCI: vmd: Honor ACPI _OSC on PCIe features

On Wed, Dec 1, 2021 at 7:25 AM Kai-Heng Feng
<kai.heng.feng@...onical.com> wrote:
>
> When a Samsung PCIe Gen4 NVMe drive is connected to an Intel ADL VMD, the
> combination causes an AER message flood and drags system performance
> down.
>
> The issue doesn't happen when VMD mode is disabled in the BIOS, since AER
> isn't enabled by acpi_pci_root_create(). When VMD mode is enabled, AER
> is enabled regardless of _OSC:
> [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> ...
> [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
>
> Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC and
> disable PCIe features accordingly to resolve the issue.
>
> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215027
> Signed-off-by: Kai-Heng Feng <kai.heng.feng@...onical.com>
> ---
>  drivers/pci/controller/vmd.c | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index a45e8e59d3d48..8298862417e84 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -670,7 +670,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>         LIST_HEAD(resources);
>         resource_size_t offset[2] = {0};
>         resource_size_t membar2_offset = 0x2000;
> -       struct pci_bus *child;
> +       struct pci_bus *child, *bus;
> +       struct pci_host_bridge *root_bridge, *vmd_bridge;
>         int ret;
>
>         /*
> @@ -798,6 +799,21 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>                 return -ENODEV;
>         }
>
> +       vmd_bridge = to_pci_host_bridge(vmd->bus->bridge);
> +
> +       bus = vmd->dev->bus;
> +       while (bus->parent)
> +               bus = bus->parent;

What about using pci_find_host_bridge() here?

LGTM otherwise.
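
For illustration, a minimal sketch of that suggestion (assuming the existing
pci_find_host_bridge() helper from drivers/pci/host-bridge.c, which walks
bus->parent up to the root bus and returns its host bridge) could replace
the open-coded loop and cast above:

	struct pci_host_bridge *root_bridge;

	/*
	 * pci_find_host_bridge() already performs the walk up to the root
	 * bus and the to_pci_host_bridge() conversion, so the loop over
	 * bus->parent and the explicit cast collapse into one call.
	 */
	root_bridge = pci_find_host_bridge(vmd->dev->bus);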

> +
> +       root_bridge = to_pci_host_bridge(bus->bridge);
> +
> +       vmd_bridge->native_pcie_hotplug = root_bridge->native_pcie_hotplug;
> +       vmd_bridge->native_shpc_hotplug = root_bridge->native_shpc_hotplug;
> +       vmd_bridge->native_aer = root_bridge->native_aer;
> +       vmd_bridge->native_pme = root_bridge->native_pme;
> +       vmd_bridge->native_ltr = root_bridge->native_ltr;
> +       vmd_bridge->native_dpc = root_bridge->native_dpc;
> +
>         vmd_attach_resources(vmd);
>         if (vmd->irq_domain)
>                 dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
> --
> 2.32.0
>
