Message-ID: <b0219543-dec3-dfa2-8bba-93e4bc32eee2@opensource.wdc.com>
Date: Wed, 15 Feb 2023 19:41:58 +0900
From: Damien Le Moal <damien.lemoal@...nsource.wdc.com>
To: Rick Wertenbroek <rick.wertenbroek@...il.com>
Cc: alberto.dassatti@...g-vd.ch, xxm@...k-chips.com,
rick.wertenbroek@...g-vd.ch, Rob Herring <robh+dt@...nel.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
Heiko Stuebner <heiko@...ech.de>,
Shawn Lin <shawn.lin@...k-chips.com>,
Lorenzo Pieralisi <lpieralisi@...nel.org>,
Krzysztof Wilczyński <kw@...ux.com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Jani Nikula <jani.nikula@...el.com>,
Rodrigo Vivi <rodrigo.vivi@...el.com>,
Mikko Kovanen <mikko.kovanen@...amobile.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
devicetree@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-rockchip@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-pci@...r.kernel.org
Subject: Re: [PATCH v2 0/9] PCI: rockchip: Fix RK3399 PCIe endpoint controller
driver
On 2/15/23 19:28, Rick Wertenbroek wrote:
> On Wed, Feb 15, 2023 at 2:51 AM Damien Le Moal
> <damien.lemoal@...nsource.wdc.com> wrote:
>>
>> Note about that: with your series applied, nothing was working for me on
>> my pine Rockpro64 board (AMD Ryzen host). I got weird/unstable behavior
>> and the host IOMMU screaming about IO page faults due to the endpoint
>> doing weird pci accesses. Running the host with IOMMU on really helps in
>> debugging this stuff :)
>
> Thank you for testing. I have also tested with a Ryzen host, with the IOMMU
> enabled as well.
>
>>
>> With the few fixes to your series I commented about, things started to
>> work better, but still very unstable. More debugging and I found out that
>> the pci-epf-test drivers, both host and endpoint sides, have nasty
>> problems that lead to failures being reported when things are actually
>> working, or to outright bogus operations that trigger errors (e.g. bad DMA
>> synchronization triggering IOMMU page fault reports). I have a dozen fix
>> patches for these drivers. Will clean them up and post ASAP.
>>
>> With the test drivers fixed + the fixes to your series, I have the
>> pci_test.sh tests passing 100% of the time, repeatedly (in a loop). All solid.
>>
>
> Good to hear that it now works; I'll try them as well.
>
>> However, I am still seeing issues with my ongoing work on an NVMe endpoint
>> function driver: I see everything working when the host BIOS pokes at the
>> NVMe "drive" it sees (all good, that is normal), but once the Linux nvme
>> driver probe kicks in, IRQs are essentially dead: the nvme driver does not
>> see anything strange and allocates IRQs (1 first, which ends up being INTX,
>> then multiple MSI vectors, one for each completion queue). But on the
>> endpoint side, attempting to raise MSI or INTX IRQs results in errors, as
>> the rockchip-ep driver sees both INTX and MSI as disabled. No clue what
>> is going on. I suspect that a pci reset may have happened and corrupted
>> the core configuration. However, the EPC/EPF infrastructure does not
>> catch/process PCI resets as far as I can tell. That may be the issue.
>> I do not see this issue with the epf test driver, I suspect because the
>> host BIOS, not knowing anything about that device, does not touch it.
>> This all may depend on the host & BIOS. Not sure. Need to try with
>> different hosts. Just FYI :)
>>
>
> Interesting that you are working on this. I started to patch the RK3399 PCIe
> endpoint controller driver for a similar project: I want to run our NVMe
> firmware in a Linux PCIe endpoint function.
>
> For the IRQs there are two things that come to mind:
> 1) The host driver could actually disable them and work in polling mode.
> I have seen that, with different versions of the Linux kernel NVMe driver,
> it would sometimes choose to use polling instead of IRQs for the queues.
> So maybe it's just the host driver polling instead of using IRQs.
> 2) The RK3399 PCIe endpoint controller is said to be able only to generate
> one type of interrupt at a given time. "It is capable of generating MSI or
> Legacy interrupt if the PCIe is configured as EP. Notes that one PCIe
> component can't generate both types of interrupts. It is either one or the
> other." (see TRM 17.5.9 Interrupt Support).
> I don't know exactly what the TRM means by the controller not being able to
> use both interrupt types at the same time, but this might be a path to explore.
The host sees INTX enabled and MSI disabled when the nvme driver starts
probing. That driver starts its probe with a single vector, to enable the
device first and use the admin SQ/CQ for identify etc. Then that IRQ is freed
and multiple MSI vectors are allocated, one for each admin + IO queue pair.
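In code terms, the host side of that sequence is roughly the following
(heavily simplified, this is not the actual nvme driver code):

#include <linux/pci.h>

/* Heavily simplified view of the nvme probe IRQ sequence: one vector of
 * any type first for the admin queue, then free it and re-allocate one
 * vector per admin + IO queue.
 */
static int host_probe_irq_sequence(struct pci_dev *pdev,
                                   unsigned int nr_io_queues)
{
        int ret;

        /* Step 1: a single vector of any type; this is the one that ends
         * up as INTX (or a single MSI vector) on my setup.
         */
        ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
        if (ret < 0)
                return ret;

        /* ... device enable, admin SQ/CQ setup, identify, etc. ... */

        /* Step 2: drop the single vector and get one per queue pair. */
        pci_free_irq_vectors(pdev);
        return pci_alloc_irq_vectors(pdev, 1, nr_io_queues + 1,
                                     PCI_IRQ_MSI | PCI_IRQ_MSIX);
}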
The problem is that on the endpoint, the driver says that both INTX and MSI
are disabled, while the host at least sees INTX enabled, and the first IRQ
allocated for the probe enables MSI and gets one vector. But that MSI enable
is not seen by the EP, and the EP also reports INTX as disabled, contrary to
what the host says.
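For reference, on the EP side both interrupt types go through the same
generic EPC calls, so the function driver ends up doing roughly this (a
minimal sketch using only the pci_epc_* API, not my actual nvme function
code):

#include <linux/pci-epc.h>
#include <linux/pci-epf.h>

/* Minimal sketch: raise a completion IRQ from an endpoint function,
 * preferring MSI and falling back to INTX. vector is 1-based.
 */
static int raise_completion_irq(struct pci_epf *epf, u16 vector)
{
        struct pci_epc *epc = epf->epc;
        int nr_msi;

        /* Number of MSI vectors enabled by the host, 0 if MSI is disabled
         * (this is what returns 0 for me once the nvme driver probes).
         */
        nr_msi = pci_epc_get_msi(epc, epf->func_no, epf->vfunc_no);
        if (nr_msi > 0)
                return pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
                                         PCI_EPC_IRQ_MSI,
                                         vector <= nr_msi ? vector : nr_msi);

        /* Fall back to INTX: this is the call that currently errors out,
         * as rockchip-ep also sees INTX disabled.
         */
        return pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
                                 PCI_EPC_IRQ_LEGACY, 0);
}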
When the BIOS probes the drive, both INTX and MSI are OK. Only one IRQ is used
by the BIOS, and I tried both with MSI enabled and disabled. What I think
happens is that there may be a PCI reset/FLR or something similar, and that
screws up the core config... I do not have a PCI bus analyzer, so this is hard
to debug :)
I did hack both the host nvme driver and the EP driver to print the PCI link
status etc, but I do not see anything strange there. Furthermore, the BAR
accesses and the admin SQ/CQ command and cqe exchange are working: I get the
identify commands from the host and the host sees the cqes, but only after a
timeout, since it never receives any IRQ... I would like to try testing
without the BIOS touching the EP nvme controller, but I am not sure how to do
that. Probably I should ignore the first CC.EN enable event I see, which is
from the BIOS.
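Something like this in the BAR register poll path, maybe (all the nvme_epf
names below are hypothetical, only there to illustrate the idea):

#include <linux/io.h>
#include <linux/nvme.h>
#include <linux/types.h>

/* Hypothetical illustration only: nvme_epf, reg_bar and
 * nvme_epf_start_ctrl() do not exist anywhere, they just sketch the
 * "skip the first CC.EN" idea.
 */
struct nvme_epf {
        void __iomem    *reg_bar;       /* local memory backing the register BAR */
        bool            prev_cc_en;
        bool            skipped_bios_enable;
};

static void nvme_epf_start_ctrl(struct nvme_epf *nvme_epf)
{
        /* ... set CSTS.RDY, start servicing the admin SQ, etc. ... */
}

static void nvme_epf_poll_cc(struct nvme_epf *nvme_epf)
{
        bool cc_en = readl(nvme_epf->reg_bar + NVME_REG_CC) & NVME_CC_ENABLE;
        bool enable_edge = cc_en && !nvme_epf->prev_cc_en;

        nvme_epf->prev_cc_en = cc_en;
        if (!enable_edge)
                return;

        if (!nvme_epf->skipped_bios_enable) {
                /* Assume the first CC.EN 0 -> 1 transition comes from the
                 * BIOS and ignore it, so that only the Linux nvme driver
                 * probe is acted upon.
                 */
                nvme_epf->skipped_bios_enable = true;
                return;
        }

        nvme_epf_start_ctrl(nvme_epf);
}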
--
Damien Le Moal
Western Digital Research