Message-ID: <20210917220942.GA1748301@bjorn-Precision-5520>
Date: Fri, 17 Sep 2021 17:09:42 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: Kai-Heng Feng <kai.heng.feng@...onical.com>
Cc: hkallweit1@...il.com, nic_swsd@...ltek.com, bhelgaas@...gle.com,
davem@...emloft.net, kuba@...nel.org, anthony.wong@...onical.com,
netdev@...r.kernel.org, linux-pci@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC] [PATCH net-next v5 0/3] r8169: Implement dynamic ASPM
mechanism for recent 1.0/2.5Gbps Realtek NICs

On Thu, Sep 16, 2021 at 11:44:14PM +0800, Kai-Heng Feng wrote:
> The purpose of the series is to get comments and reviews so we can
> merge and test the series in a downstream kernel.
>
> The latest Realtek vendor driver and its Windows driver implement a
> feature called "dynamic ASPM" which can improve performance on its
> Ethernet NICs.
>
> Heiner Kallweit pointed out that the potential root cause may be that
> the buffer is too small for its ASPM exit latency.

I looked at the lspci data in your bugzilla
(https://bugzilla.kernel.org/show_bug.cgi?id=214307).

L1.2 is enabled, which requires the Latency Tolerance Reporting (LTR)
capability; the latency values programmed there help determine when
the Link will be put in L1.2.  IIUC, these are analogous to the DevCap
"Acceptable Latency" values: latency values of zero indicate the
device will be impacted by any delay (PCIe r5.0, sec 6.18).
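
For reference, the programmed values are visible in the LTR extended
capability in "lspci -vvv" output; the BDF, the capability offset, and
the values shown below are only illustrative:

  $ sudo lspci -vvv -s 02:00.0 | grep -A2 'Latency Tolerance'
        Capabilities: [170 v1] Latency Tolerance Reporting
                Max snoop latency: 3145728ns
                Max no snoop latency: 3145728ns
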
Linux does not currently program those values, so the values there
must have been set by the BIOS. On the working AMD system, they're
set to 1048576ns, while on the broken Intel system, they're set to
3145728ns.
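
(Decoding note: in the LTR registers, bits 9:0 hold the latency value
and bits 12:10 the scale, where scale n means units of 32^n ns.  So
1048576ns is 1 x 32^4 ns, encoded as 0x1001, and 3145728ns is
3 x 32^4 ns, encoded as 0x1003; that's where the 0x1001 at the end of
this mail comes from.)
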
I don't really understand how these values should be computed, and I
think they depend on some electrical characteristics of the Link, so
I'm not sure it's *necessarily* a problem that they are different.
But a 3X difference does seem pretty large.

So I'm curious whether this is related to the problem.  Here are some
things we could try on the broken Intel system (a sketch of the actual
commands follows the list):

- What happens if you disable ASPM L1.2 using
  /sys/devices/pci*/.../link/l1_2_aspm?

- If that doesn't work, what happens if you also disable PCI-PM L1.2
  using /sys/devices/pci*/.../link/l1_2_pcipm?

- If either of the above makes things work, then at least we know
  the problem is sensitive to L1.2.

- Then what happens if you use setpci to set the LTR Latency
  registers to 0, then re-enable ASPM L1.2 and PCI-PM L1.2?  This
  should mean the Realtek device wants the best possible service and
  the Link probably won't spend much time in L1.2.

- What happens if you set the LTR Latency registers to 0x1001
  (should be the same as on the AMD system)?
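
Concretely, a sketch of the commands (assumptions: the BDF
0000:02:00.0 is made up, the raw LTR capability offset (0x170 here)
must be taken from your lspci output, and the ECAP_LTR name needs a
reasonably recent pciutils):

  # Disable ASPM L1.2; if that's not enough, also PCI-PM L1.2:
  echo 0 | sudo tee /sys/bus/pci/devices/0000:02:00.0/link/l1_2_aspm
  echo 0 | sudo tee /sys/bus/pci/devices/0000:02:00.0/link/l1_2_pcipm

  # Zero the LTR Max Snoop (cap + 4) and Max No-Snoop (cap + 6)
  # Latency registers; setpci values are hex:
  sudo setpci -s 02:00.0 ECAP_LTR+4.w=0000
  sudo setpci -s 02:00.0 ECAP_LTR+6.w=0000
  # ... or with the raw offset: "170.w=0000" and "172.w=0000"

  # Re-enable L1.2 and retest:
  echo 1 | sudo tee /sys/bus/pci/devices/0000:02:00.0/link/l1_2_aspm
  echo 1 | sudo tee /sys/bus/pci/devices/0000:02:00.0/link/l1_2_pcipm

  # Alternatively, program 1048576ns (0x1001), as on the AMD system:
  sudo setpci -s 02:00.0 ECAP_LTR+4.w=1001
  sudo setpci -s 02:00.0 ECAP_LTR+6.w=1001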