Message-Id: <20240726080446.12375-1-mattc@purestorage.com>
Date: Fri, 26 Jul 2024 02:04:46 -0600
From: Matthew W Carlis <mattc@...estorage.com>
To: macro@...am.me.uk
Cc: alex.williamson@...hat.com,
bhelgaas@...gle.com,
christophe.leroy@...roup.eu,
davem@...emloft.net,
david.abdurachmanov@...il.com,
edumazet@...gle.com,
ilpo.jarvinen@...ux.intel.com,
kuba@...nel.org,
leon@...nel.org,
linux-kernel@...r.kernel.org,
linux-pci@...r.kernel.org,
linux-rdma@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org,
lukas@...ner.de,
mahesh@...ux.ibm.com,
mattc@...estorage.com,
mika.westerberg@...ux.intel.com,
mpe@...erman.id.au,
netdev@...r.kernel.org,
npiggin@...il.com,
oohall@...il.com,
pabeni@...hat.com,
pali@...nel.org,
saeedm@...dia.com,
sr@...x.de,
wilson@...iptree.org
Subject: PCI: Work around PCIe link training failures
On Mon, 22 Jul 2024, Maciej W. Rozycki wrote:
> The main reason is it is believed that it is the downstream device
> causing the issue, and obviously you can't fetch its ID if you can't
> negotiate link so as to talk to it in the first place.
I've had some more time to look into this issue. I think the problem
with this change is that it is quite strict in its assumptions about what
it means when a device fails to train, but in an environment where hot-plug
is exercised frequently you are essentially bound to have something interrupt
the link training. In the first case where we caught this problem, our test
automation was doing power-cycle torture on our endpoints. If you catch
the right timing, the link is forced down to Gen1 forever: unless your device
happens to be the one device in the allowlist with the original hardware bug,
nothing will ever lift the clamp short of some other automation coming along
to recover you.
I wonder if we can come up with some kind of alternative.
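Something along the lines of the sketch below is roughly what I have in
mind: on a later link-up (say, from the hot-plug path), notice the 2.5GT/s
Target Speed clamp left behind by a failed retrain and lift it, so that a
transient interruption of link training doesn't pin the link at Gen1
forever. Untested sketch only; the function name and the call site are
made up for illustration, just the config-space accessors and register
fields are the kernel's real ones.

	#include <linux/pci.h>

	/*
	 * Untested sketch: if an earlier failed retrain left the Target
	 * Speed clamped at 2.5GT/s and the link has since come back up,
	 * restore the full supported speed range and retrain once.  The
	 * function name and call site (a hot-plug link-up event) are
	 * hypothetical.
	 */
	static void try_unclamp_link_speed(struct pci_dev *bridge)
	{
		u16 lnksta, lnkctl2;
		u32 lnkcap;

		pcie_capability_read_word(bridge, PCI_EXP_LNKCTL2, &lnkctl2);
		if ((lnkctl2 & PCI_EXP_LNKCTL2_TLS) !=
		    PCI_EXP_LNKCTL2_TLS_2_5GT)
			return;			/* no clamp in place */

		pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &lnksta);
		if (!(lnksta & PCI_EXP_LNKSTA_DLLLA))
			return;			/* link still down, leave it */

		/* Raise Target Speed back to the port's maximum speed. */
		pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &lnkcap);
		pcie_capability_clear_and_set_word(bridge, PCI_EXP_LNKCTL2,
						   PCI_EXP_LNKCTL2_TLS,
						   lnkcap & PCI_EXP_LNKCAP_SLS);

		/* Kick off one retrain; normal link management takes over. */
		pcie_capability_set_word(bridge, PCI_EXP_LNKCTL,
					 PCI_EXP_LNKCTL_RL);
	}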
- Matt