Date: Sat, 30 Dec 2023 20:30:00 +0100
From: Lukas Wunner <lukas@...ner.de>
To: Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>
Cc: linux-pci@...r.kernel.org, Bjorn Helgaas <helgaas@...nel.org>,
	Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
	Rob Herring <robh@...nel.org>, Krzysztof Wilczyński <kw@...ux.com>,
	Alexandru Gagniuc <mr.nuke.me@...il.com>,
	Krishna chaitanya chundru <quic_krichai@...cinc.com>,
	Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
	"Rafael J . Wysocki" <rafael@...nel.org>, linux-pm@...r.kernel.org,
	Bjorn Helgaas <bhelgaas@...gle.com>, linux-kernel@...r.kernel.org,
	Alex Deucher <alexdeucher@...il.com>,
	Daniel Lezcano <daniel.lezcano@...aro.org>,
	Amit Kucheria <amitk@...nel.org>, Zhang Rui <rui.zhang@...el.com>
Subject: Re: [PATCH v3 05/10] PCI: Store all PCIe Supported Link Speeds

On Sat, Dec 30, 2023 at 12:45:49PM +0100, Lukas Wunner wrote:
> On Fri, Sep 29, 2023 at 02:57:18PM +0300, Ilpo Järvinen wrote:
> > struct pci_bus stores max_bus_speed. Implementation Note in PCIe r6.0.1
> > sec 7.5.3.18, however, recommends determining supported Link Speeds
> > using the Supported Link Speeds Vector in the Link Capabilities 2
> > Register (when available).
> > 
> > Add pcie_bus_speeds into struct pci_bus which caches the Supported Link
> > Speeds. The value is taken directly from the Supported Link Speeds
> > Vector or synthesized from the Max Link Speed in the Link Capabilities
> > Register when the Link Capabilities 2 Register is not available.
> 
> Remind me, what's the reason again to cache this and why is
> max_bus_speed not sufficient?  Is the point that there may be
> "gaps" in the supported link speeds, i.e. not every bit below
> the maximum supported speed may be set?  And you need to skip
> over those gaps when throttling to a lower speed?

FWIW I went and re-read the internal review I provided on May 18.
Turns out I already mentioned back then that gaps aren't permitted:

 "Per PCIe r6.0.1 sec 8.2.1, the bitfield in the Link Capabilities 2
  register is not permitted to contain gaps between maximum supported
  speed and lowest possible speed (2.5 GT/s Gen1)."
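That no-gaps rule is what makes synthesizing the vector from LnkCap safe when LnkCap2 is absent: every speed from 2.5 GT/s up to the maximum can be assumed supported, so the vector is just a contiguous run of set bits. A minimal sketch (synthesize_link_speeds() is a hypothetical helper for illustration, not the patch's actual code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical helper: derive a Supported Link Speeds Vector from the
 * Max Link Speed field (LnkCap[3:0]) when LnkCap2 is not implemented.
 * Per PCIe r6.0.1 sec 8.2.1 the vector may not contain gaps, so all
 * bits from bit 0 (2.5 GT/s) up to the maximum are set.
 *
 * Encoding: max_link_speed 1 => 2.5 GT/s, 2 => 5 GT/s, 3 => 8 GT/s, ...
 */
static uint8_t synthesize_link_speeds(uint8_t max_link_speed)
{
	return (uint8_t)((1u << max_link_speed) - 1);
}
```

E.g. a device whose Max Link Speed is 3 (8 GT/s) yields 0x7, i.e. 2.5, 5 and 8 GT/s all marked supported.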


> Also, I note that pci_set_bus_speed() doesn't use LNKCAP2.

About that, I wrote in May:

 "Actually, scratch that.  pci_set_bus_speed() is fine.  Since it's only
  interested in the *maximum* link speed, reading just LnkCap is correct.
  LnkCap2 only needs to be read to determine if a certain speed is
  *supported*.  E.g., even though 32 GT/s are supported, perhaps 16 GT/s
  are not.

  It's rather pcie_get_speed_cap() which should be changed.  There's
  no need for it to read LnkCap2.  The commit which introduced this,
  6cf57be0f78e, was misguided and had to be fixed up with f1f90e254e46.
  It could be simplified to just read LnkCap and return
  pcie_link_speed[linkcap & PCI_EXP_LNKCAP_SLS].  If the device is a
  Root Port or Downstream Port, it doesn't even have to do that but
  could return the cached value in subordinate->max_bus_speed.
  If you add another attribute to struct pci_bus for the downstream
  device's maximum speed, the maximum speed for Endpoints and Upstream
  Ports could be returned directly as well from that attribute."
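For illustration, the simplification sketched in that quote might look roughly like the following. This is a stand-alone approximation, not kernel code: the enum values and pcie_link_speed[] table are reproduced from memory of include/linux/pci.h, get_speed_cap() is a hypothetical stand-in, and the real function would obtain LnkCap via pcie_capability_read_dword() rather than take it as an argument.

```c
#include <stdint.h>

#define PCI_EXP_LNKCAP_SLS	0x0000000f	/* Max Link Speed field */

/* Subset of the kernel's enum pci_bus_speed, as assumed here */
enum pci_bus_speed {
	PCIE_SPEED_2_5GT = 0x14,
	PCIE_SPEED_5_0GT,
	PCIE_SPEED_8_0GT,
	PCIE_SPEED_16_0GT,
	PCIE_SPEED_32_0GT,
	PCIE_SPEED_64_0GT,
	PCI_SPEED_UNKNOWN = 0xff,
};

/* Maps the LnkCap/LnkSta speed encoding to enum pci_bus_speed */
static const enum pci_bus_speed pcie_link_speed[] = {
	PCI_SPEED_UNKNOWN,	/* 0 */
	PCIE_SPEED_2_5GT,	/* 1 */
	PCIE_SPEED_5_0GT,	/* 2 */
	PCIE_SPEED_8_0GT,	/* 3 */
	PCIE_SPEED_16_0GT,	/* 4 */
	PCIE_SPEED_32_0GT,	/* 5 */
	PCIE_SPEED_64_0GT,	/* 6 */
};

/*
 * Simplified pcie_get_speed_cap() as proposed: only LnkCap is consulted,
 * because only the *maximum* speed is wanted here; LnkCap2 would matter
 * only when asking whether a specific intermediate speed is supported.
 */
static enum pci_bus_speed get_speed_cap(uint32_t lnkcap)
{
	return pcie_link_speed[lnkcap & PCI_EXP_LNKCAP_SLS];
}
```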

Thanks,

Lukas
