Message-ID: <9cfc65c594deef33f24b60a66b7c78c742da7203.camel@linux.intel.com>
Date: Tue, 06 Feb 2024 13:25:29 -0800
From: "David E. Box" <david.e.box@...ux.intel.com>
To: Bjorn Helgaas <helgaas@...nel.org>, puranjay12@...il.com
Cc: Jian-Hong Pan <jhp@...lessos.org>, Johan Hovold <johan@...nel.org>, Mika
Westerberg <mika.westerberg@...ux.intel.com>, Damien Le Moal
<dlemoal@...nel.org>, Niklas Cassel <cassel@...nel.org>, Nirmal Patel
<nirmal.patel@...ux.intel.com>, Jonathan Derrick
<jonathan.derrick@...ux.dev>, linux-pci@...r.kernel.org,
linux-kernel@...r.kernel.org, linux@...lessos.org
Subject: Re: [PATCH v2] PCI: vmd: Enable PCI PM's L1 substates of remapped
PCIe Root Port and NVMe
Adding Puranjay
On Mon, 2024-02-05 at 15:05 -0800, David E. Box wrote:
> On Mon, 2024-02-05 at 16:42 -0600, Bjorn Helgaas wrote:
> > On Mon, Feb 05, 2024 at 11:37:16AM -0800, David E. Box wrote:
> > > On Fri, 2024-02-02 at 18:05 -0600, Bjorn Helgaas wrote:
> > > > On Fri, Feb 02, 2024 at 03:11:12PM +0800, Jian-Hong Pan wrote:
> > > ...
> >
> > > > > @@ -775,6 +773,14 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
> > > > > 		pci_write_config_dword(pdev, pos + PCI_LTR_MAX_SNOOP_LAT, ltr_reg);
> > > > > 		pci_info(pdev, "VMD: Default LTR value set by driver\n");
> > > >
> > > > You're not changing this part, and I don't understand exactly how LTR
> > > > works, but it makes me a little bit queasy to read "set the LTR value
> > > > to the maximum required to allow the deepest power management
> > > > savings" and then we set the max snoop values to a fixed constant.
> > > >
> > > > I don't think the goal is to "allow the deepest power savings"; I
> > > > think it's to enable L1.2 *when the device has enough buffering to
> > > > absorb L1.2 entry/exit latencies*.
> > > >
> > > > The spec (PCIe r6.0, sec 7.8.2.2) says "Software should set this to
> > > > the platform's maximum supported latency or less," so it seems like
> > > > that value must be platform-dependent, not fixed.
> > > >
> > > > And I assume the "_DSM for Latency Tolerance Reporting" is part of the
> > > > way to get those platform-dependent values, but Linux doesn't actually
> > > > use that yet.
> > >
> > > This may indeed be the best way but we need to double check with our
> > > BIOS folks. AFAIK BIOS writes the LTR values directly so there
> > > hasn't been a need to use this _DSM. But under VMD the ports are
> > > hidden from BIOS which is why we added it here. I've brought up the
> > > question internally to find out how Windows handles the DSM and to
> > > get a recommendation from our firmware leads.
> >
> > We want Linux to be able to program LTR itself, don't we? We
> > shouldn't have to rely on firmware to do it. If Linux can't do
> > it, hot-added devices aren't going to be able to use L1.2, right?
>
> Agreed. We just want to make sure we are not conflicting with what BIOS may be
> doing.
So the feedback is to run the _DSM and just overwrite any BIOS values. Looking
up the _DSM, I saw there was an attempt to upstream this 4 years ago [1]. I'm
not sure why the effort stalled, but we can pick up this work again.
[1] https://patchwork.kernel.org/project/linux-pci/patch/20201015080311.7811-1-puranjay12@gmail.com/