Message-ID: <cover.1731130093.git.nicolinc@nvidia.com>
Date: Fri, 8 Nov 2024 21:48:45 -0800
From: Nicolin Chen <nicolinc@...dia.com>
To: <maz@...nel.org>, <tglx@...utronix.de>, <bhelgaas@...gle.com>,
<alex.williamson@...hat.com>
CC: <jgg@...dia.com>, <leonro@...dia.com>,
<shameerali.kolothum.thodi@...wei.com>, <robin.murphy@....com>,
<dlemoal@...nel.org>, <kevin.tian@...el.com>, <smostafa@...gle.com>,
<andriy.shevchenko@...ux.intel.com>, <reinette.chatre@...el.com>,
<eric.auger@...hat.com>, <ddutile@...hat.com>, <yebin10@...wei.com>,
<brauner@...nel.org>, <apatel@...tanamicro.com>,
<shivamurthy.shastri@...utronix.de>, <anna-maria@...utronix.de>,
<nipun.gupta@....com>, <marek.vasut+renesas@...lbox.org>,
<linux-arm-kernel@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
<linux-pci@...r.kernel.org>, <kvm@...r.kernel.org>
Subject: [PATCH RFCv1 0/7] vfio: Allow userspace to specify the address for each MSI vector
On ARM GIC systems and others, the target address of the MSI is translated
by the IOMMU. For GIC, the MSI address page is called the "ITS" page. When
the IOMMU is disabled, the MSI address is programmed to the physical location
of the GIC ITS page (e.g. 0x20200000). When the IOMMU is enabled, the ITS
page is behind the IOMMU, so the MSI address is programmed to an allocated
IO virtual address (a.k.a. IOVA), e.g. 0xFFFF0000, which must be mapped to
the physical ITS page: IOVA (0xFFFF0000) ===> PA (0x20200000).
When 2-stage translation is enabled, an IOVA is still used to program the
MSI address, though the mapping now happens in two stages:
IOVA (0xFFFF0000) ===> IPA (e.g. 0x80900000) ===> PA (0x20200000)
(IPA stands for Intermediate Physical Address).
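For reference, when the host kernel owns the translation it handles this
doorbell IOVA itself: the irqchip driver asks the DMA-IOMMU layer to map the
physical doorbell and later lets it rewrite the composed MSI message with the
resulting IOVA. Below is a minimal sketch of that flow using the in-tree
iommu_dma_prepare_msi()/iommu_dma_compose_msi_msg() helpers; the example_
function names and the doorbell constant are placeholders, not code from this
series, and the header location of those helpers varies by kernel version.

#include <linux/msi.h>	/* struct msi_desc, struct msi_msg */
/* iommu_dma_prepare_msi()/iommu_dma_compose_msi_msg() prototypes live in
 * <linux/iommu-dma.h> on recent kernels; adjust the include for older ones. */

#define EXAMPLE_ITS_DOORBELL_PA	0x20200000UL	/* physical ITS page from above */

static int example_prepare_doorbell(struct msi_desc *desc)
{
	/* Map the doorbell PA into the device's MSI IOVA window, if any */
	return iommu_dma_prepare_msi(desc, EXAMPLE_ITS_DOORBELL_PA);
}

static void example_compose_msg(struct msi_desc *desc, struct msi_msg *msg)
{
	msg->address_hi = 0;
	msg->address_lo = EXAMPLE_ITS_DOORBELL_PA;	/* no-IOMMU case */
	msg->data = 0;

	/* Rewrite msg->address_{hi,lo} with the mapped IOVA when one exists */
	iommu_dma_compose_msi_msg(desc, msg);
}
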
If the device that generates the MSI is attached to an IOMMU_DOMAIN_DMA, the
IOVA is dynamically allocated from the top of the IOVA space. If it is
attached to an IOMMU_DOMAIN_UNMANAGED (e.g. a VFIO passthrough device), the
IOVA is fixed to an MSI window reported by the IOMMU driver via
IOMMU_RESV_SW_MSI, which is hardwired to MSI_IOVA_BASE (IOVA==0x8000000) for
ARM IOMMUs.
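For context on where that fixed window comes from, the sketch below is
modeled on what the ARM SMMU drivers do in their get_resv_regions() callbacks:
they register a software-managed MSI region starting at MSI_IOVA_BASE. The
constants and the iommu_alloc_resv_region() helper are as found in mainline
to the best of my knowledge; the callback name here is a placeholder and the
gfp argument is a version-dependent detail.

#include <linux/iommu.h>
#include <linux/list.h>

/* Values used by the ARM SMMU drivers for the software-managed MSI window */
#define MSI_IOVA_BASE	0x8000000
#define MSI_IOVA_LENGTH	0x100000

/* Sketch of a get_resv_regions() callback reporting the SW_MSI window */
static void example_get_resv_regions(struct device *dev, struct list_head *head)
{
	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
	struct iommu_resv_region *region;

	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
					 prot, IOMMU_RESV_SW_MSI, GFP_KERNEL);
	if (!region)
		return;

	list_add_tail(&region->list, head);
}
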
So far, this IOMMU_RESV_SW_MSI scheme works well because the kernel is
entirely in charge of the IOMMU translation (1-stage translation), so the
IOVA for the ITS page is fixed and known by the kernel. However, when a
virtual machine enables nested IOMMU translation (2-stage), the guest kernel
directly controls the stage-1 translation with an IOMMU_DOMAIN_DMA, mapping
a vITS page (at an IPA, e.g. 0x80900000) into its own IOVA space (e.g.
0xEEEE0000). The host kernel then has no way of knowing that guest-level
IOVA when programming the MSI address.
To solve this problem, the VMM should capture the MSI IOVA allocated by the
guest kernel and relay it to the GIC driver in the host kernel, so that the
correct MSI doorbell IOVA gets programmed. This requires a new ioctl path
via VFIO.
Extend the VFIO path to allow an MSI target IOVA to be forwarded into the
kernel and pushed down to the GIC driver.
Add VFIO ioctl VFIO_IRQ_SET_ACTION_PREPARE with VFIO_IRQ_SET_DATA_MSI_IOVA
to carry the data.
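To make the proposed uAPI concrete, here is a hedged sketch of how a VMM
might hand the guest-allocated MSI IOVAs to the kernel via
VFIO_DEVICE_SET_IRQS. The VFIO_IRQ_SET_ACTION_PREPARE and
VFIO_IRQ_SET_DATA_MSI_IOVA flags come from this series (PATCH-7); the
assumption that the payload is a per-vector array of 64-bit IOVAs is mine
and should be checked against the actual uapi change.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Sketch: tell the kernel the guest-chosen MSI doorbell IOVA for each vector
 * before enabling MSI/MSI-X. Assumes the payload is an array of 64-bit IOVAs,
 * one per vector -- verify against PATCH-7's uAPI definition.
 */
static int vfio_prepare_msi_iovas(int device_fd, uint32_t index,
				  const uint64_t *iovas, uint32_t count)
{
	size_t argsz = sizeof(struct vfio_irq_set) + count * sizeof(uint64_t);
	struct vfio_irq_set *set = calloc(1, argsz);
	int ret;

	if (!set)
		return -1;

	set->argsz = argsz;
	set->flags = VFIO_IRQ_SET_DATA_MSI_IOVA | VFIO_IRQ_SET_ACTION_PREPARE;
	set->index = index;			/* e.g. VFIO_PCI_MSI_IRQ_INDEX */
	set->start = 0;
	set->count = count;
	memcpy(set->data, iovas, count * sizeof(uint64_t));

	ret = ioctl(device_fd, VFIO_DEVICE_SET_IRQS, set);
	free(set);
	return ret;
}

A VMM would call something like this with the IOVA(s) it trapped from the
guest (0xEEEE0000 in the example above), before the usual
VFIO_IRQ_SET_ACTION_TRIGGER setup that actually enables the vectors.
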
The downstream call chain from VFIO to the ITS driver is quite long, so in
order to carry the MSI IOVA from the top down to its_irq_domain_alloc(), the
patches are added in leaf-to-root order (a conceptual sketch of the leaf-end
logic follows the calltrace below):
vfio_pci_core_ioctl:
  vfio_pci_set_irqs_ioctl:
    vfio_pci_set_msi_prepare:                        // PATCH-7
      pci_alloc_irq_vectors_iovas:                   // PATCH-6
        __pci_alloc_irq_vectors:                     // PATCH-5
          __pci_enable_msi/msix_range:               // PATCH-4
            msi/msix_capability_init:                // PATCH-3
              msi/msix_setup_msi_descs:
                msi_insert_msi_desc();               // PATCH-1
              pci_msi_setup_msi_irqs:
                msi_domain_alloc_irqs_all_locked:
                  __msi_domain_alloc_locked:
                    __msi_domain_alloc_irqs:
                      __irq_domain_alloc_irqs:
                        irq_domain_alloc_irqs_locked:
                          irq_domain_alloc_irqs_hierarchy:
                            msi_domain_alloc:
                              irq_domain_alloc_irqs_parent:
                                its_irq_domain_alloc();  // PATCH-2
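At the leaf end of that chain, the idea (going by the PATCH-1/PATCH-2 titles)
is that the msi_desc carries a preset doorbell IOVA which the ITS driver
prefers over the address normally produced through the IOMMU DMA cookie. The
snippet below is only a conceptual sketch of that selection logic under my
reading of the series; the msi_iova field name is taken from the patch
titles, while the helper name, the zero-means-unset convention and the exact
hook point are made up for illustration.

#include <linux/msi.h>

/*
 * Conceptual sketch: pick the MSI doorbell address for a descriptor.
 * If a guest-level IOVA was preset via VFIO, use it; otherwise fall back to
 * the host-managed address, e.g. the one the IOMMU DMA cookie path composes.
 */
static u64 example_msi_doorbell_addr(struct msi_desc *desc, u64 host_addr)
{
	/* desc->msi_iova is the field PATCH-1 is said to introduce;
	 * assuming 0 means "not preset". */
	if (desc->msi_iova)
		return desc->msi_iova;

	return host_addr;
}
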
Note that this series solves only half the problem: it allows the kernel to
program the physical PCI MSI/MSI-X on the device with the correct head IOVA
of a 2-stage translation, where the guest kernel provides the stage-1 mapping
from that MSI IOVA (0xEEEE0000) to its own vITS page (0x80900000), while the
stage-2 mapping from that IPA to the physical ITS page is still missing:
0xEEEE0000 ===> 0x80900000 =x=> 0x20200000
A follow-up series should fill that gap by establishing the stage-2 mapping
from the vITS page (0x80900000) to the physical ITS page (0x20200000), likely
via a new IOMMUFD ioctl. Once the VMM sets up this stage-2 mapping, the VM
will behave the same as bare metal, relying on its running kernel to handle
the stage-1 mapping:
0xEEEE0000 ===> 0x80900000 ===> 0x20200000
This series (prototype) is on Github:
https://github.com/nicolinc/iommufd/commits/vfio_msi_giova-rfcv1/
It's tested by hacking the host kernel to hard-code a stage-2 mapping.
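For anyone reproducing that test setup, the hard-coded stage-2 mapping could
look roughly like the sketch below, which maps the example vITS IPA onto the
physical ITS page in the device's stage-2 (parent) domain with iommu_map().
The addresses are the example values from this cover letter, the mapping size
and the hook point in the host kernel are my assumptions, and this is an
illustration of the hack rather than the prototype's actual code.

#include <linux/iommu.h>
#include <linux/sizes.h>

#define EXAMPLE_VITS_IPA	0x80900000UL	/* stage-2 input address (IPA) */
#define EXAMPLE_ITS_PHYS	0x20200000UL	/* physical ITS page */

/* Hack: hard-code the missing stage-2 mapping IPA -> PA for the ITS page */
static int example_hardcode_s2_its_mapping(struct iommu_domain *s2_domain)
{
	return iommu_map(s2_domain, EXAMPLE_VITS_IPA, EXAMPLE_ITS_PHYS, SZ_64K,
			 IOMMU_WRITE | IOMMU_MMIO, GFP_KERNEL);
}
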
Thanks!
Nicolin
Nicolin Chen (7):
genirq/msi: Allow preset IOVA in struct msi_desc for MSI doorbell
address
irqchip/gic-v3-its: Bypass iommu_cookie if desc->msi_iova is preset
PCI/MSI: Pass in msi_iova to msi_domain_insert_msi_desc
PCI/MSI: Allow __pci_enable_msi_range to pass in iova
PCI/MSI: Extract a common __pci_alloc_irq_vectors function
PCI/MSI: Add pci_alloc_irq_vectors_iovas helper
vfio/pci: Allow preset MSI IOVAs via VFIO_IRQ_SET_ACTION_PREPARE
 drivers/pci/msi/msi.h             |   3 +-
 include/linux/msi.h               |  11 +++
 include/linux/pci.h               |  18 ++++
 include/linux/vfio_pci_core.h     |   1 +
 include/uapi/linux/vfio.h         |   8 +-
 drivers/irqchip/irq-gic-v3-its.c  |  21 ++++-
 drivers/pci/msi/api.c             | 136 ++++++++++++++++++++----------
 drivers/pci/msi/msi.c             |  20 +++--
 drivers/vfio/pci/vfio_pci_intrs.c |  41 ++++++++-
 drivers/vfio/vfio_main.c          |   3 +
 kernel/irq/msi.c                  |   6 ++
 11 files changed, 212 insertions(+), 56 deletions(-)
--
2.43.0