Message-ID: <cover.1740014950.git.nicolinc@nvidia.com>
Date: Wed, 19 Feb 2025 17:31:35 -0800
From: Nicolin Chen <nicolinc@...dia.com>
To: <jgg@...dia.com>, <kevin.tian@...el.com>, <tglx@...utronix.de>,
	<maz@...nel.org>
CC: <joro@...tes.org>, <will@...nel.org>, <robin.murphy@....com>,
	<shuah@...nel.org>, <iommu@...ts.linux.dev>, <linux-kernel@...r.kernel.org>,
	<linux-arm-kernel@...ts.infradead.org>, <linux-kselftest@...r.kernel.org>,
	<eric.auger@...hat.com>, <baolu.lu@...ux.intel.com>, <yi.l.liu@...el.com>,
	<yury.norov@...il.com>, <jacob.pan@...ux.microsoft.com>,
	<patches@...ts.linux.dev>
Subject: [PATCH v2 0/7] iommu: Add MSI mapping support with nested SMMU (Part-1 core)

[ Background ]
On ARM GIC systems and others, the target address of the MSI is translated
by the IOMMU. For GIC, the MSI address page is called the "ITS" page. When
the IOMMU is disabled, the MSI address is programmed to the physical
location of the GIC ITS page (e.g. 0x20200000). When the IOMMU is enabled,
the ITS page is behind the IOMMU, so the MSI address is programmed to an
allocated IO virtual address (a.k.a. IOVA), e.g. 0xFFFF0000, which must be
mapped to the physical ITS page: IOVA (0xFFFF0000) ===> PA (0x20200000).
When 2-stage translation is enabled, an IOVA is still used to program the
MSI address, though the mapping now goes through two stages:
  IOVA (0xFFFF0000) ===> IPA (e.g. 0x80900000) ===> PA (0x20200000)
(IPA stands for Intermediate Physical Address).
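
For reference, this is roughly how an irqchip driver uses the existing
prepare/compose pair from dma-iommu today (see the Execution section
below). This is a minimal sketch, not code from this series;
ITS_DOORBELL_PA and the example_* names are made up for illustration:

#include <linux/iommu.h>
#include <linux/msi.h>

#define ITS_DOORBELL_PA 0x20200000UL    /* example physical ITS page */

static int example_msi_prepare(struct msi_desc *desc)
{
        /* Map the doorbell PA to an IOVA in the device's domain, cache it */
        return iommu_dma_prepare_msi(desc, ITS_DOORBELL_PA);
}

static void example_compose_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
{
        msg->address_hi = upper_32_bits(ITS_DOORBELL_PA);
        msg->address_lo = lower_32_bits(ITS_DOORBELL_PA);
        /* If the device is behind an IOMMU, rewrite the PA with the cached IOVA */
        iommu_dma_compose_msi_msg(desc, msg);
}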

If the device that generates the MSI is attached to an IOMMU_DOMAIN_DMA,
the IOVA is dynamically allocated from the top of the IOVA space. If it is
attached to an IOMMU_DOMAIN_UNMANAGED (e.g. a VFIO passthrough device), the
IOVA is fixed to an MSI window reported by the IOMMU driver via
IOMMU_RESV_SW_MSI, which is hardwired to MSI_IOVA_BASE (IOVA==0x8000000)
for ARM IOMMUs.
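
For context, this fixed SW_MSI window is what the ARM SMMU drivers already
report through their get_resv_regions() callback; the snippet below is a
sketch closely modeled on that existing in-tree code (the constants match
the drivers, the function name here is illustrative):

#include <linux/iommu.h>

#define MSI_IOVA_BASE   0x8000000
#define MSI_IOVA_LENGTH 0x100000

static void example_get_resv_regions(struct device *dev, struct list_head *head)
{
        struct iommu_resv_region *region;
        int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;

        /* Reserve the software-managed MSI window at the fixed IOVA */
        region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
                                         prot, IOMMU_RESV_SW_MSI, GFP_KERNEL);
        if (!region)
                return;

        list_add_tail(&region->list, head);
}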

So far, this IOMMU_RESV_SW_MSI works well, as the kernel is entirely in
charge of the IOMMU translation (1-stage translation) and the IOVA for the
ITS page is fixed and known by the kernel. However, with a virtual machine
enabling nested IOMMU translation (2-stage), the guest kernel directly
controls the stage-1 translation with an IOMMU_DOMAIN_DMA, mapping a vITS
page (at an IPA, e.g. 0x80900000) onto its own IOVA space (e.g. 0xEEEE0000).
The host kernel then cannot know that guest-level IOVA to program the MSI
address.

There have been two approaches to solve this problem:
1. Create an identity mapping in the stage-1. The VMM could insert a few RMRs
   (Reserved Memory Regions) in the guest's IORT. Then the guest kernel would
   fetch these RMR entries from the IORT and create an IOMMU_RESV_DIRECT
   region per iommu group for a direct mapping. Eventually, the mappings
   would look like: IOVA (0x8000000) === IPA (0x8000000) ===> PA (0x20200000)
   This requires an IOMMUFD ioctl for the kernel and VMM to agree on the IPA.
2. Forward the guest-level MSI IOVA captured by the VMM to the host-level GIC
   driver, to program the correct MSI IOVA. Forward the VMM-defined vITS
   page location (IPA) to the kernel for the stage-2 mapping. Eventually:
   IOVA (0xFFFF0000) ===> IPA (0x80900000) ===> PA (0x20200000)
   This requires a VFIO ioctl (for the IOVA) and an IOMMUFD ioctl (for the IPA).

It is worth mentioning that when Eric Auger was working on the same topic
with the VFIO iommu uAPI, he had a solution for approach (2) first, and
then switched to approach (1) as suggested by Jean-Philippe to reduce
complexity.

Approach (1) basically behaves like the existing VFIO passthrough case that
has a 1-stage mapping for the unmanaged domain, only shifting the MSI
mapping from stage 1 (no-viommu case) to stage 2 (has-viommu case). So it
can reuse the existing IOMMU_RESV_SW_MSI piece, following the same idea of
"the VMM leaving everything to the kernel".

Approach (2) is the ideal solution, yet it requires additional effort for
the kernel to be aware of the stage-1 gIOVAs and the stage-2 IPAs for the
vITS page(s), which demands close cooperation from the VMM.
 * It also brings some complicated use cases to the table, where the host
   and/or guest system(s) have multiple ITS pages.

[ Execution ]
Though these two approaches feel very different on the surface, they can
share some underlying common infrastructure. Currently, only one pair of
sw_msi functions (prepare/compose) is provided by dma-iommu for irqchip
drivers to use directly. There could be different versions of these
functions for different domain owners: the existing VFIO passthrough cases
and in-kernel DMA domain cases reuse dma-iommu's version of the sw_msi
functions, while nested translation use cases get another version of the
sw_msi functions that handles the mapping and msi_msg(s) differently.
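
A rough illustration of what this dispatch could look like; the sw_msi op
and the exact signature below are my sketch of the idea, not a verbatim
copy of the patches:

#include <linux/iommu.h>
#include <linux/msi.h>

int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr)
{
        struct device *dev = msi_desc_to_dev(desc);
        struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

        if (!domain)
                return 0;       /* no translation: the PA is used as-is */

        /*
         * 'sw_msi' stands for the per-owner callback described above
         * (sketch only): dma-iommu keeps its current mapping logic,
         * iommufd installs iommufd_sw_msi for the non-nested and the
         * nested (approach 1) cases.
         */
        return domain->sw_msi ? domain->sw_msi(domain, desc, msi_addr) : 0;
}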

As a part-1 series, this refactors the core infrastructure:
 - Get rid of the duplication in the "compose" function
 - Introduce a function pointer for what was previously the "prepare" function
 - Allow different domain owners to set their own "sw_msi" implementations
 - Implement an iommufd_sw_msi function to additionally support non-nested
   use cases and also to prepare for a nested translation use case using
   approach (1)

[ Future Plan ]
Part-2 will add support for approach (1), i.e. the RMR solution:
 - Add a pair of IOMMUFD options for a SW_MSI window for kernel and VMM to
   agree on (for approach 1)
Part-3 and beyond will continue the effort of supporting approach (2), i.e.
a complete vITS-to-pITS mapping:
 - Map the physical ITS page (potentially via IOMMUFD_CMD_IOAS_MAP_MSI)
 - Convey the IOVAs per-irq (potentially via VFIO_IRQ_SET_ACTION_PREPARE)

---

This is a joint effort: it includes Jason's rework at the irq/iommu/iommufd
base level and my additional patches on top of that for the new uAPIs.

This series is on github:
https://github.com/nicolinc/iommufd/commits/iommufd_msi_p1-v2

For testing with nested SMMU (approach 1):
https://github.com/nicolinc/iommufd/commits/wip/iommufd_msi_p2-v2
Pairing QEMU branch for testing (approach 1):
https://github.com/nicolinc/qemu/commits/wip/for_iommufd_msi_p2-v2-rmr

Changelog
v2
 * Split the iommufd ioctl for approach (1) out of this part-1
 * Rebase on Jason's for-next tree (6.14-rc2) for two iommufd patches
 * Update commit logs in two irqchip patches to make narrative clearer
 * Keep iommu_dma_compose_msi_msg() in PATCH-1 as a small cleanup step
 * Improve with some coding style changes: kdoc and 100-char wrapping
v1
 https://lore.kernel.org/kvm/cover.1739005085.git.nicolinc@nvidia.com/
 * Rebase on v6.14-rc1 and iommufd_attach_handle-v1 series
   https://lore.kernel.org/all/cover.1738645017.git.nicolinc@nvidia.com/
 * Correct typos
 * Replace set_bit with __set_bit
 * Use a common helper to get iommufd_handle
 * Add kdoc for iommu_msi_iova/iommu_msi_page_shift
 * Rename msi_msg_set_msi_addr() to msi_msg_set_addr()
 * Update selftest for better coverage of the new options
 * Change IOMMU_OPTION_SW_MSI_START/SIZE to be per-idev and properly
   check against device's reserved region list
RFCv2
 https://lore.kernel.org/kvm/cover.1736550979.git.nicolinc@nvidia.com/
 * Rebase on v6.13-rc6
 * Drop all the irq/pci patches and rework the compose function instead
 * Add a new sw_msi op to iommu_domain for a per-type implementation and
   let the iommufd core have its own implementation to support both approaches
 * Add RMR-solution (approach 1) support since it is straightforward and
   has been widely used in some out-of-tree projects
RFCv1
 https://lore.kernel.org/kvm/cover.1731130093.git.nicolinc@nvidia.com/

Thanks!
Nicolin

Jason Gunthorpe (5):
  genirq/msi: Store the IOMMU IOVA directly in msi_desc instead of
    iommu_cookie
  genirq/msi: Refactor iommu_dma_compose_msi_msg()
  iommu: Make iommu_dma_prepare_msi() into a generic operation
  irqchip: Have CONFIG_IRQ_MSI_IOMMU be selected by irqchips that need
    it
  iommufd: Implement sw_msi support natively

Nicolin Chen (2):
  iommu: Turn fault_data to iommufd private pointer
  iommu: Turn iova_cookie to dma-iommu private pointer

 drivers/iommu/Kconfig                   |   1 -
 drivers/irqchip/Kconfig                 |   4 +
 kernel/irq/Kconfig                      |   1 +
 drivers/iommu/iommufd/iommufd_private.h |  23 +++-
 include/linux/iommu.h                   |  58 +++++----
 include/linux/msi.h                     |  55 +++++---
 drivers/iommu/dma-iommu.c               |  63 +++-------
 drivers/iommu/iommu.c                   |  29 +++++
 drivers/iommu/iommufd/device.c          | 160 ++++++++++++++++++++----
 drivers/iommu/iommufd/fault.c           |   2 +-
 drivers/iommu/iommufd/hw_pagetable.c    |   5 +-
 drivers/iommu/iommufd/main.c            |   9 ++
 drivers/irqchip/irq-gic-v2m.c           |   5 +-
 drivers/irqchip/irq-gic-v3-its.c        |  13 +-
 drivers/irqchip/irq-gic-v3-mbi.c        |  12 +-
 drivers/irqchip/irq-ls-scfg-msi.c       |   5 +-
 16 files changed, 309 insertions(+), 136 deletions(-)


base-commit: dc10ba25d43f433ad5d9e8e6be4f4d2bb3cd9ddb
prerequisite-patch-id: 0000000000000000000000000000000000000000
-- 
2.43.0

