Message-Id: <1592988927-48009-14-git-send-email-yi.l.liu@intel.com>
Date: Wed, 24 Jun 2020 01:55:26 -0700
From: Liu Yi L <yi.l.liu@...el.com>
To: alex.williamson@...hat.com, eric.auger@...hat.com,
baolu.lu@...ux.intel.com, joro@...tes.org
Cc: kevin.tian@...el.com, jacob.jun.pan@...ux.intel.com,
ashok.raj@...el.com, yi.l.liu@...el.com, jun.j.tian@...el.com,
yi.y.sun@...el.com, jean-philippe@...aro.org, peterx@...hat.com,
hao.wu@...el.com, iommu@...ts.linux-foundation.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH v3 13/14] vfio: Document dual stage control
From: Eric Auger <eric.auger@...hat.com>
The VFIO API was enhanced to support nested stage control: a bunch of
new ioctls and their usage guidelines.
Let's document the process to follow to set up nested mode.
Cc: Kevin Tian <kevin.tian@...el.com>
CC: Jacob Pan <jacob.jun.pan@...ux.intel.com>
Cc: Alex Williamson <alex.williamson@...hat.com>
Cc: Eric Auger <eric.auger@...hat.com>
Cc: Jean-Philippe Brucker <jean-philippe@...aro.org>
Cc: Joerg Roedel <joro@...tes.org>
Cc: Lu Baolu <baolu.lu@...ux.intel.com>
Signed-off-by: Eric Auger <eric.auger@...hat.com>
Signed-off-by: Liu Yi L <yi.l.liu@...el.com>
---
v2 -> v3:
*) address comments from Stefan Hajnoczi
v1 -> v2:
*) new in v2; compared with Eric's original version, pasid table bind
and fault reporting are removed as this series doesn't cover them.
Original version from Eric.
https://lkml.org/lkml/2020/3/20/700
Documentation/driver-api/vfio.rst | 67 +++++++++++++++++++++++++++++++++++++++
1 file changed, 67 insertions(+)
diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
index f1a4d3c..639890f 100644
--- a/Documentation/driver-api/vfio.rst
+++ b/Documentation/driver-api/vfio.rst
@@ -239,6 +239,73 @@ group and can access them as follows::
/* Gratuitous device reset and go... */
ioctl(device, VFIO_DEVICE_RESET);
+IOMMU Dual Stage Control
+------------------------
+
+Some IOMMUs support 2 stages/levels of translation. "Stage" corresponds to
+the ARM terminology while "level" corresponds to Intel's VT-d terminology.
+In the following text we use either term without distinction.
+
+This is useful when a virtual IOMMU is exposed to the guest and some
+devices are assigned to the guest through VFIO. The guest OS can then use
+stage 1 (GIOVA -> GPA or GVA -> GPA), while the hypervisor uses stage 2 for
+VM isolation (GPA -> HPA).
+
+Under dual stage translation, the guest gets ownership of the stage 1 page
+tables and also owns the stage 1 configuration structures. The hypervisor
+owns the root configuration structure (for security reasons), including the
+stage 2 configuration. This works as long as the configuration structures
+and page table formats are compatible between the virtual IOMMU and the
+physical IOMMU.
+
+Assuming the HW supports it, this nested mode is selected by choosing the
+VFIO_TYPE1_NESTING_IOMMU type through::
+
+ ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
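+
+As with the other IOMMU types, user space can probe whether the host
+supports nesting before selecting it. A minimal sketch (following the
+container setup shown earlier in this document, error handling omitted)::
+
+	int container = open("/dev/vfio/vfio", O_RDWR);
+
+	if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
+		/* Unknown API version */
+
+	if (!ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1_NESTING_IOMMU))
+		/* Nesting is not supported by the host IOMMU driver */
+
+	/* ... open the group and VFIO_GROUP_SET_CONTAINER as shown earlier ... */
+
+	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);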
+
+This forces the hypervisor to use stage 2, leaving stage 1 available for
+guest usage. The guest stage 1 page table format depends on the IOMMU
+vendor, and so does the method used to configure nesting. User space should
+check the format and the configuration method after setting the nesting
+type by using::
+
+	ioctl(container, VFIO_IOMMU_GET_INFO, &nesting_info);
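+
+In this series the nesting info is reported through the capability chain of
+VFIO_IOMMU_GET_INFO. A sketch of how user space may locate it (the
+capability id, assumed here to be named VFIO_IOMMU_TYPE1_INFO_CAP_NESTING,
+and the exact layout come from this series' uapi additions)::
+
+	struct vfio_iommu_type1_info *info;
+	struct vfio_info_cap_header *hdr;
+
+	/* argsz was learned from a first call with argsz = sizeof(*info) */
+	info = calloc(1, argsz);
+	info->argsz = argsz;
+	ioctl(container, VFIO_IOMMU_GET_INFO, info);
+
+	if (info->flags & VFIO_IOMMU_INFO_CAPS) {
+		hdr = (void *)info + info->cap_offset;
+		while (hdr->id != VFIO_IOMMU_TYPE1_INFO_CAP_NESTING && hdr->next)
+			hdr = (void *)info + hdr->next;
+		/* the nesting format and feature bits follow the header */
+	}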
+
+Details can be found in Documentation/userspace-api/iommu.rst. For Intel
+VT-d, each stage 1 page table is bound to the host by::
+
+ nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
+ memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
+	ioctl(container, VFIO_IOMMU_NESTING_OP, nesting_op);
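+
+The nesting_op argument is a variable-size structure: a small header
+(argsz, flags) is followed by the vendor specific bind data described in
+Documentation/userspace-api/iommu.rst. A sketch of its construction (the
+struct layout is assumed from this series' uapi additions)::
+
+	struct vfio_iommu_type1_nesting_op *nesting_op;
+	size_t argsz = sizeof(*nesting_op) + sizeof(bind_data);
+
+	nesting_op = calloc(1, argsz);
+	nesting_op->argsz = argsz;
+	nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
+	memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
+	ioctl(container, VFIO_IOMMU_NESTING_OP, nesting_op);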
+
+As mentioned above, the guest OS may use stage 1 for GIOVA->GPA or GVA->GPA.
+GVA->GPA page tables are available when PASID (Process Address Space ID)
+is exposed to the guest, e.g. when PASID-capable devices are assigned to
+the guest. For such page table binding, the bind_data should include PASID
+info, and the PASID is allocated either by the guest itself or by the host.
+Which side allocates it depends on the hardware vendor; e.g. Intel VT-d
+requires the PASID to be allocated from the host. This requirement is
+defined by the Virtual Command Support in the VT-d 3.0 spec: guest software
+running on VT-d should allocate PASIDs from the host kernel. To find out
+whether PASIDs must be allocated from the host, user space should check the
+IOMMU_NESTING_FEAT_SYSWIDE_PASID bit of the nesting info reported by the
+host kernel (VFIO reports the nesting info via VFIO_IOMMU_GET_INFO). User
+space can then allocate a PASID from the host by::
+
+ req.flags = VFIO_IOMMU_ALLOC_PASID;
+ ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
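+
+Putting the feature check and the allocation together (a sketch; the
+request structure's field names are illustrative, the exact layout is
+defined by this series' uapi header)::
+
+	if (nesting_info.features & IOMMU_NESTING_FEAT_SYSWIDE_PASID) {
+		struct vfio_iommu_type1_pasid_request req = {
+			.argsz = sizeof(req),
+			.flags = VFIO_IOMMU_ALLOC_PASID,
+		};
+
+		/* the allocated PASID is returned as defined by the uapi */
+		ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
+	}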
+
+With the first stage/level page table bound to the host, the hardware can
+combine the guest stage 1 translation with the hypervisor stage 2
+translation to get the final address.
+
+When the guest invalidates stage 1 related caches, the invalidations must
+be forwarded to the host through::
+
+ nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
+ memcpy(&nesting_op->data, &inv_data, sizeof(inv_data));
+	ioctl(container, VFIO_IOMMU_NESTING_OP, nesting_op);
+
+Those invalidations can happen at various granularity levels: page,
+context, etc.
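+
+The invalidation path reuses the same variable-size nesting_op structure as
+the page table bind above; only the flags value and the vendor specific
+payload (inv_data) differ. A sketch mirroring the bind example::
+
+	nesting_op->argsz = sizeof(*nesting_op) + sizeof(inv_data);
+	nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
+	memcpy(&nesting_op->data, &inv_data, sizeof(inv_data));
+	ioctl(container, VFIO_IOMMU_NESTING_OP, nesting_op);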
+
VFIO User API
-------------------------------------------------------------------------------
--
2.7.4