Message-ID: <20200731135523.GA3717@bjorn-Precision-5520>
Date: Fri, 31 Jul 2020 08:55:23 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: "Saheed O. Bolarinwa" <refactormyself@...il.com>
Cc: Mike Marciniszyn <mike.marciniszyn@...el.com>,
Dennis Dalessandro <dennis.dalessandro@...el.com>,
Doug Ledford <dledford@...hat.com>,
Jason Gunthorpe <jgg@...pe.ca>, bjorn@...gaas.com,
skhan@...uxfoundation.org,
linux-kernel-mentees@...ts.linuxfoundation.org,
linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-rdma@...r.kernel.org,
"Michael J. Ruhl" <michael.j.ruhl@...el.com>,
Ashutosh Dixit <ashutosh.dixit@...el.com>,
Ian Kumlien <ian.kumlien@...il.com>,
Puranjay Mohan <puranjay12@...il.com>
Subject: Re: [PATCH v4 01/12] IB/hfi1: Check if pcie_capability_read_*()
reads ~0

[+cc Michael, Ashutosh, Ian, Puranjay]

On Fri, Jul 31, 2020 at 01:02:29PM +0200, Saheed O. Bolarinwa wrote:
> On failure pcie_capability_read_dword() sets its last parameter,
> val, to 0. In this case dn and up will be 0, so aspm_hw_l1_supported()
> will return false.
>
> However, with Patch 12/12, it is possible that val is set to ~0 on
> failure. This would introduce a bug, because (~0 & x) == x for any
> mask x. So with dn and up being 0x02, a true value is returned when
> the read has actually failed.
>
> Since the value ~0 is invalid here, reset dn and up to 0 when a value
> of ~0 is read into them; this ensures false is returned on failure in
> this case.
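
(As a standalone illustration of the failure mode described above; the
mask below is a simplified stand-in, not the driver's actual
ASPM_L1_SUPPORTED() definition:)

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for a mask-style "L1 supported" check. */
#define L1_SUPPORTED(reg)       (((reg) & 0x2) != 0)

int main(void)
{
        uint32_t good = 0x2;    /* device genuinely reports L1 support */
        uint32_t failed = ~0u;  /* read failed, buffer filled with ~0  */

        /* Both print 1: a failed read looks exactly like a device that
         * supports L1 unless ~0 is filtered out first, which is what
         * the hunk below does.
         */
        printf("good:   %d\n", L1_SUPPORTED(good));
        printf("failed: %d\n", L1_SUPPORTED(failed));
        return 0;
}
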
>
> Suggested-by: Bjorn Helgaas <bjorn@...gaas.com>
> Signed-off-by: Saheed O. Bolarinwa <refactormyself@...il.com>
> ---
>
> drivers/infiniband/hw/hfi1/aspm.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/infiniband/hw/hfi1/aspm.c b/drivers/infiniband/hw/hfi1/aspm.c
> index a3c53be4072c..9605b2145d19 100644
> --- a/drivers/infiniband/hw/hfi1/aspm.c
> +++ b/drivers/infiniband/hw/hfi1/aspm.c
> @@ -33,13 +33,13 @@ static bool aspm_hw_l1_supported(struct hfi1_devdata *dd)
> return false;
>
> pcie_capability_read_dword(dd->pcidev, PCI_EXP_LNKCAP, &dn);
> - dn = ASPM_L1_SUPPORTED(dn);
> + dn = (dn == (u32)~0) ? 0 : ASPM_L1_SUPPORTED(dn);
>
> pcie_capability_read_dword(parent, PCI_EXP_LNKCAP, &up);
> - up = ASPM_L1_SUPPORTED(up);
> + up = (up == (u32)~0) ? 0 : ASPM_L1_SUPPORTED(up);

I don't want to change this.  The driver shouldn't be mucking with
ASPM at all.  The PCI core should take care of this automatically.  If
it doesn't, we need to fix the core.

If the driver needs to disable ASPM to work around device errata or
something, the core has an interface for that.  But the driver should
not override the system-wide policy for managing ASPM.
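
For illustration, a minimal sketch of going through the core instead,
i.e. asking it to keep L1 disabled for this device (the helper name is
made up; pci_disable_link_state()'s return value is ignored here):

#include <linux/pci.h>

/* Sketch: opt this device's link out of ASPM L1 via the PCI core,
 * e.g. from the driver's probe path, and leave every other ASPM
 * decision to the core's policy.
 */
static void hfi1_example_no_aspm_l1(struct pci_dev *pdev)
{
        pci_disable_link_state(pdev, PCIE_LINK_STATE_L1);
}
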
Ah, some archaeology finds affa48de8417 ("staging/rdma/hfi1: Add
support for enabling/disabling PCIe ASPM"), which says:

  hfi1 HW has a high PCIe ASPM L1 exit latency and also advertises an
  acceptable latency less than actual ASPM latencies.

That suggests that either there is a device defect, e.g., advertising
incorrect ASPM latencies, or a PCI core defect, e.g., incorrectly
enabling ASPM when the path exit latency exceeds what hfi1 can
tolerate.

Coincidentally, Ian recently debugged a problem in how the PCI core
computes exit latencies over a path [1].
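
To make that concrete, here is a rough standalone sketch (not the
kernel's aspm.c; the numbers are made up) of the kind of path-wide
check involved: the endpoint's acceptable L1 latency has to cover the
worst exit latency along the whole path, and each intermediate switch
can add on the order of 1 usec on top of the per-link values:

#include <stdbool.h>
#include <stdio.h>

/* All values in nanoseconds. */
static bool l1_latency_ok(const unsigned int *link_exit_ns, int nlinks,
                          unsigned int acceptable_ns)
{
        unsigned int worst = 0, switch_penalty = 0;
        int i;

        /* link_exit_ns[0] is the link at the endpoint; higher indices
         * walk up toward the root port.
         */
        for (i = 0; i < nlinks; i++) {
                /* Every link above the first is separated from the
                 * endpoint by one more switch, each adding roughly
                 * 1 usec before the whole path is out of L1.
                 */
                if (i > 0)
                        switch_penalty += 1000;
                if (link_exit_ns[i] + switch_penalty > worst)
                        worst = link_exit_ns[i] + switch_penalty;
        }
        return worst <= acceptable_ns;
}

int main(void)
{
        /* Endpoint <-> switch <-> root port: two links on the path. */
        unsigned int exits[] = { 8000, 32000 };

        printf("%d\n", l1_latency_ok(exits, 2, 64000));  /* prints 1 */
        printf("%d\n", l1_latency_ok(exits, 2, 16000));  /* prints 0 */
        return 0;
}
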

Can anybody supply details about the hfi1 ASPM parameters, e.g., the
output of "sudo lspci -vv"? Any details about the configuration where
the problem occurs? Is there a switch in the path?

[1] https://lore.kernel.org/r/20200727213045.2117855-1-ian.kumlien@gmail.com
> /* ASPM works on A-step but is reported as not supported */
> - return (!!dn || is_ax(dd)) && !!up;
> + return (dn || is_ax(dd)) && up;
> }
>
> /* Set L1 entrance latency for slower entry to L1 */
> --
> 2.18.4
>