Message-ID: <1779233.rKVo5UmMmJ@jona>
Date: Thu, 31 Oct 2019 15:35:46 +0100
From: Patrick Brunner <brunner@...ttbacher.ch>
To: linux-kernel@...r.kernel.org
Subject: x86: Is Multi-MSI support for PCIe working?
Dear all,
As the subject suggests, I'm wondering whether PCIe Multi-MSI support is working on x86 with the IOMMU enabled.
The reason I'm asking is that I couldn't find any device driver using multiple MSIs: there are examples using either a single MSI or multiple MSI-X vectors, but none using multiple MSIs.
I'm trying to get an x1 PCIe card with a Lattice ECP5 FPGA working, which utilises 2 MSIs. With the IOMMU enabled I can allocate both desired MSIs; with the IOMMU disabled I can
only allocate one. I could happily live with the IOMMU disabled, but then I can't allocate the second MSI, which is required for the device to function: the first MSI is used to
signal that a DMA transfer from the FPGA to the CPU has finished, and the associated IRQ handler basically just wakes up a user-mode task.
The second MSI is used as a shared IRQ for a number of 16750-compatible UARTs in the FPGA, mapped through a Wishbone bus to the PCIe BAR.
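For reference, here is roughly how the two vectors are set up in my (out-of-tree) driver. This is only a simplified sketch; the names my_fpga_probe, my_dma_isr and my_uart_isr are placeholders and not taken from any in-tree driver:

#include <linux/pci.h>
#include <linux/interrupt.h>

static irqreturn_t my_dma_isr(int irq, void *data)
{
	/* DMA transfer finished: wake up the waiting user-mode task */
	return IRQ_HANDLED;
}

static irqreturn_t my_uart_isr(int irq, void *data)
{
	/* shared handler for the 16750-compatible UARTs behind the Wishbone bus */
	return IRQ_HANDLED;
}

static int my_fpga_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int nvec, ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	/* ask for up to two MSI vectors (the FPGA core has no MSI-X) */
	nvec = pci_alloc_irq_vectors(pdev, 1, 2, PCI_IRQ_MSI);
	if (nvec < 0)
		return nvec;
	if (nvec < 2) {
		/* with the IOMMU disabled I only ever get one vector here */
		pci_free_irq_vectors(pdev);
		return -ENOSPC;
	}

	/* vector 0: DMA done, vector 1: shared UART interrupt */
	ret = request_irq(pci_irq_vector(pdev, 0), my_dma_isr, 0,
			  "fpga-dma", pdev);
	if (ret)
		return ret;

	ret = request_irq(pci_irq_vector(pdev, 1), my_uart_isr, 0,
			  "fpga-uart", pdev);
	if (ret)
		free_irq(pci_irq_vector(pdev, 0), pdev);

	return ret;
}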
The first MSI works perfectly, but the second one causes one IOMMU page fault per UART during probing of the UARTs, and one IOMMU page fault for every byte received through
any UART. I ran out of ideas a while ago, as the IOMMU subsystem is too complex for me to understand as a non-kernel developer, but I'd highly appreciate any hints from
more experienced developers. Could anybody provide me with some pointers, please?
Best regards,
Patrick
--
Patrick Brunner
Stettbacher Signal Processing
Neugutstrasse 54
CH-8600 Duebendorf
Switzerland
Phone: +41 43 299 57 23
Mail: brunner@...ttbacher.ch