Message-ID: <a0c248f0-71ff-4477-98ec-1bbd52eda566@amd.com>
Date: Wed, 23 Apr 2025 12:58:19 +0530
From: Vasant Hegde <vasant.hegde@....com>
To: Nicolin Chen <nicolinc@...dia.com>, jgg@...dia.com, kevin.tian@...el.com,
corbet@....net, will@...nel.org
Cc: robin.murphy@....com, joro@...tes.org, thierry.reding@...il.com,
vdumpa@...dia.com, jonathanh@...dia.com, shuah@...nel.org, praan@...gle.com,
nathan@...nel.org, peterz@...radead.org, yi.l.liu@...el.com,
jsnitsel@...hat.com, mshavit@...gle.com, zhangzekun11@...wei.com,
iommu@...ts.linux.dev, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-tegra@...r.kernel.org, linux-kselftest@...r.kernel.org,
patches@...ts.linux.dev
Subject: Re: [PATCH v1 00/16] iommufd: Add vIOMMU infrastructure (Part-4
vCMDQ)
Hi Nicolin,
On 4/11/2025 12:07 PM, Nicolin Chen wrote:
> The vIOMMU object is designed to represent a slice of an IOMMU HW for its
> virtualization features shared with or passed to user space (mostly a VM)
> by way of HW acceleration. This extends the HWPT-based design to more
> advanced virtualization features.
>
> A vCMDQ, introduced by this series as part of the vIOMMU infrastructure,
> represents a HW-supported queue/buffer for a VM to use exclusively, e.g.:
> - NVIDIA's virtual command queue
> - AMD vIOMMU's command buffer
I assume we can pass multiple buffer details (like GPA, size) from the guest
to the hypervisor. Is that a correct understanding?
> either of which is an IOMMU HW feature to directly load and execute cache
> invalidation commands issued by a guest kernel, to shoot down TLB entries
> that HW cached for guest-owned stage-1 page table entries. This is a big
> improvement since there is no VM Exit during an invalidation, compared to
> the traditional invalidation pathway of trapping a guest-owned invalidation
> queue and forwarding those commands/requests to the host kernel that will
> eventually fill a HW-owned queue to execute those commands.
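
If I follow the contrast correctly, the trapped path being replaced looks
conceptually like the sketch below. All names here are hypothetical (it is
not code from any real VMM); the forwarding step is where the VMM today
hands the commands to the host kernel, which finally fills the HW-owned
queue:

/* Conceptual sketch of the trap-and-forward path (all names hypothetical).
 * Each guest write to its invalidation-queue producer index exits to the
 * VMM, which copies the new commands out of guest memory and forwards them
 * to the host kernel; the host kernel eventually fills the HW-owned queue. */
#include <stddef.h>
#include <stdint.h>

#define CMD_BYTES 16                    /* size of one command (illustrative) */

struct guest_cmdq {
        uint8_t  *base;                 /* VMM mapping of the guest's queue ring */
        uint32_t  num_entries;          /* power-of-two number of entries */
        uint32_t  cons;                 /* consumer index maintained by the VMM */
};

/* Stand-in for "forward one command to the host kernel". */
extern void vmm_forward_invalidation_to_host(const void *cmd, size_t len);

static void vmm_handle_prod_write(struct guest_cmdq *q, uint32_t new_prod)
{
        while (q->cons != new_prod) {
                const void *cmd = q->base + (size_t)q->cons * CMD_BYTES;

                vmm_forward_invalidation_to_host(cmd, CMD_BYTES);
                q->cons = (q->cons + 1) & (q->num_entries - 1);
        }
        /* With a vCMDQ, none of this trapping/copying happens: the HW
         * executes the guest-owned queue directly, with no VM Exit. */
}
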
>
> Thus, a vCMDQ object, as an initial use case, is all about a guest-owned
> HW command queue that a VMM can allocate/configure based on the request
> from a guest kernel. Introduce a new IOMMUFD_OBJ_VCMDQ and its allocator
> IOMMUFD_CMD_VCMDQ_ALLOC, allowing the VMM to forward the IOMMU-specific
> queue info, such as the queue base address and size.
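
To make sure I'm picturing the uAPI correctly, a minimal sketch of the VMM
side is below. The struct layout, field names, and the helper are my own
guesses based on this description and the usual iommufd conventions, not
taken from the patches:

/* Sketch only: layout/field names are guesses, not the real uAPI from this
 * series; the request would go through the new IOMMUFD_CMD_VCMDQ_ALLOC. */
#include <err.h>
#include <stdint.h>
#include <sys/ioctl.h>

struct iommu_vcmdq_alloc {              /* hypothetical uAPI struct */
        uint32_t size;                  /* sizeof(struct iommu_vcmdq_alloc) */
        uint32_t flags;
        uint32_t viommu_id;             /* parent vIOMMU object */
        uint32_t type;                  /* vendor queue type (NVIDIA vCMDQ, AMD command buffer, ...) */
        uint64_t addr;                  /* queue base address from the guest */
        uint64_t length;                /* queue size in bytes */
        uint32_t out_vcmdq_id;          /* returned object ID */
        uint32_t __reserved;
};

/* 'vcmdq_alloc_ioctl' stands in for whatever ioctl number the series defines. */
static uint32_t vcmdq_alloc(int iommufd, unsigned long vcmdq_alloc_ioctl,
                            uint32_t viommu_id, uint64_t base, uint64_t len)
{
        struct iommu_vcmdq_alloc cmd = {
                .size      = sizeof(cmd),
                .viommu_id = viommu_id,
                .addr      = base,
                .length    = len,
        };

        if (ioctl(iommufd, vcmdq_alloc_ioctl, &cmd))
                err(1, "vCMDQ allocation failed");
        return cmd.out_vcmdq_id;
}
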
>
> Meanwhile, a guest-owned command queue needs the kernel (a command queue
> driver) to control the queue by reading/writing its consumer and producer
> indexes, which means the command queue HW must allow the guest kernel
> direct R/W access to those registers. Introduce an mmap infrastructure in
> the iommufd core to support passing a piece of MMIO region through from
> the host physical address space to the guest physical address space. The
> VMA info (vm_pgoff/size) used by an mmap must be pre-allocated during
> IOMMUFD_CMD_VCMDQ_ALLOC and returned to user space as output driver data
> by IOMMUFD_CMD_VCMDQ_ALLOC. So, this requires driver-specific user data
> support by a vIOMMU object.
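
And for the mmap side, I take it the VMM flow is roughly the sketch below,
where mmap_offset/mmap_length are placeholder names for the VMA info
(vm_pgoff/size) returned in the allocator's output driver data:

/* Sketch only: mmap_offset/mmap_length are placeholders for whatever the
 * output driver data of IOMMUFD_CMD_VCMDQ_ALLOC ends up reporting. */
#include <err.h>
#include <sys/mman.h>

static void *map_vcmdq_mmio(int iommufd, off_t mmap_offset, size_t mmap_length)
{
        void *mmio = mmap(NULL, mmap_length, PROT_READ | PROT_WRITE,
                          MAP_SHARED, iommufd, mmap_offset);

        if (mmio == MAP_FAILED)
                err(1, "mmap of vCMDQ MMIO window failed");

        /* The VMM then exposes this mapping to the guest physical address
         * space (e.g. as a device memory region / memslot), so the guest
         * can read/write the consumer and producer indexes without a
         * VM Exit. */
        return mmio;
}
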
Nice! Thanks.
-Vasant