Message-ID: <b3c85550-a7f6-a0d9-74a4-f98c8251b80e@amd.com>
Date:   Fri, 23 Jun 2023 15:05:06 -0700
From:   "Suthikulpanit, Suravee" <suravee.suthikulpanit@....com>
To:     Jason Gunthorpe <jgg@...dia.com>
Cc:     linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
        kvm@...r.kernel.org, joro@...tes.org, robin.murphy@....com,
        yi.l.liu@...el.com, alex.williamson@...hat.com,
        nicolinc@...dia.com, baolu.lu@...ux.intel.com,
        eric.auger@...hat.com, pandoh@...gle.com, kumaranand@...gle.com,
        jon.grimm@....com, santosh.shukla@....com, vasant.hegde@....com,
        jay.chen@....com, joseph.chung@....com
Subject: Re: [RFC PATCH 00/21] iommu/amd: Introduce support for HW accelerated
 vIOMMU w/ nested page table

Jason,

On 6/23/2023 4:45 AM, Jason Gunthorpe wrote:
> On Thu, Jun 22, 2023 at 06:15:17PM -0700, Suthikulpanit, Suravee wrote:
>> Jason,
>>
>> On 6/22/2023 6:46 AM, Jason Gunthorpe wrote:
>>> On Wed, Jun 21, 2023 at 06:54:47PM -0500, Suravee Suthikulpanit wrote:
>>>
>>>> Since the IOMMU hardware virtualizes the guest command buffer, IOMMU
>>>> operations such as invalidation of guest pages (i.e. stage 1) can be
>>>> accelerated when the command is issued by the guest kernel, without
>>>> intervention from the hypervisor.
>>>
>>> This is similar to what we are doing on ARM as well.
>>
>> Ok
>>
>>>> This series is implemented on top of the IOMMUFD framework. It leverages
>>>> the existing APIs and ioctls for providing guest iommu information
>>>> (i.e. struct iommu_hw_info_amd), and allowing the guest to provide guest
>>>> page table information (i.e. struct iommu_hwpt_amd_v2) for setting up the
>>>> user domain.
>>>>
>>>> Please see [4], [5], and [6] for more detail on the AMD HW-vIOMMU.
>>>>
>>>> NOTES
>>>> -----
>>>> This series is organized into two parts:
>>>>     * Part1: Preparing IOMMU driver for HW-vIOMMU support (Patch 1-8).
>>>>
>>>>     * Part2: Introducing HW-vIOMMU support (Patch 9-21).
>>>>
>>>>     * Patches 12 and 21 extend the existing IOMMUFD ioctls to support
>>>>       additional operations, which can be categorized into:
>>>>       - Ioctls to init/destroy AMD HW-vIOMMU instance
>>>>       - Ioctls to attach/detach guest devices to the AMD HW-vIOMMU instance.
>>>>       - Ioctls to attach/detach guest domains to the AMD HW-vIOMMU instance.

I'm looking into these three again and will get back to you.

>>>>       - Ioctls to trap certain AMD HW-vIOMMU MMIO register accesses.
To describe the need for this ioctl: the AMD IOMMU has two sets of MMIO
registers:
   1. Control MMIO
   2. Data MMIO

For the AMD HW-vIOMMU, the hardware defines a private memory address
space (PAS) containing the VF Control MMIO and VF MMIO registers for
each guest IOMMU instance, which represents the guest's view of the AMD
IOMMU MMIO registers. This memory is also accessed by the IOMMU
hardware to virtualize the guest MMIO registers.

When the guest IOMMU driver writes to a guest control MMIO register of
the QEMU AMD HW-vIOMMU device model, it traps into QEMU. QEMU reads the
value and calls VIOMMU_MMIO_ACCESS to tell the AMD IOMMU driver in the
host to program the VFCtrlMMIO or VFMMIO register for this guest.

Similarly, for a read of a guest control MMIO register, QEMU calls the
ioctl to get the value from the AMD IOMMU driver, which reads the guest
VFCtrlMMIO or VFMMIO register and provides it back to the guest.
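
To make the flow concrete, here is a rough sketch of the VMM side. The
ioctl name is the one from this series, but the structure layout, field
names, and handler below are illustrative guesses, not the actual uAPI:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/types.h>

/* Illustrative only: layout and field names are guesses. */
struct viommu_mmio_access {
    __u32 size;       /* sizeof(struct viommu_mmio_access) */
    __u32 viommu_id;  /* handle for the guest IOMMU instance */
    __u64 offset;     /* offset of the trapped guest MMIO register */
    __u64 value;      /* value written by the guest, or read back */
    __u32 is_write;   /* 1 = program VFCtrlMMIO/VFMMIO, 0 = read */
    __u32 __reserved;
};

/* QEMU trap handler for a guest write to a control MMIO register. */
static void amd_viommu_mmio_write(int iommufd, __u32 viommu_id,
                                  __u64 offset, __u64 value)
{
    struct viommu_mmio_access access = {
        .size = sizeof(access),
        .viommu_id = viommu_id,
        .offset = offset,
        .value = value,
        .is_write = 1,
    };

    /* The host AMD IOMMU driver programs the VFCtrlMMIO/VFMMIO
     * register for this guest. */
    ioctl(iommufd, VIOMMU_MMIO_ACCESS, &access);
}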

>>>>       - Ioctls to trap AMD HW-vIOMMU command buffer initialization.

For this ioctl, the IOMMU hardware defines an IOMMU PAS containing a
command buffer for each guest IOMMU instance. This memory is also
accessed by the IOMMU hardware to virtualize the guest command buffer.

When the guest IOMMU driver writes to the guest Command Buffer Base
Address MMIO register of the QEMU AMD HW-vIOMMU device model, it traps
into QEMU. QEMU reads the value, parses the GPA, and translates it to
an HVA. Then it calls VIOMMU_CMDBUF_UPDATE to communicate the HVA to
the IOMMU driver, which maps it into the IOMMU PAS so that the hardware
uses this memory to virtualize the guest command buffer.
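
Sketched from the VMM side, with the same caveat as above (the ioctl
name is from this series; the structure, masks, and gpa_to_hva() helper
are illustrative):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/types.h>

/* Illustrative only: layout, masks, and helpers are guesses. */
struct viommu_cmdbuf_update {
    __u32 size;
    __u32 viommu_id;
    __u64 hva;    /* host VA backing the guest command buffer */
    __u32 order;  /* buffer length encoding from the register */
    __u32 __reserved;
};

/* Trap handler for a guest write to the Command Buffer Base
 * Address MMIO register. */
static void amd_viommu_cmdbuf_write(int iommufd, __u32 viommu_id,
                                    __u64 reg_val)
{
    __u64 gpa = reg_val & CMD_BUF_BASE_MASK;  /* parse the GPA */
    void *hva = gpa_to_hva(gpa);              /* VMM memory-map lookup */

    struct viommu_cmdbuf_update update = {
        .size = sizeof(update),
        .viommu_id = viommu_id,
        .hva = (__u64)(uintptr_t)hva,
        .order = reg_val >> CMD_BUF_LEN_SHIFT,
    };

    /* The driver maps this memory into the IOMMU PAS so the hardware
     * can virtualize the guest command buffer in place. */
    ioctl(iommufd, VIOMMU_CMDBUF_UPDATE, &update);
}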

>>>
>>> No one else seems to need this kind of stuff, why is AMD different?
>>>
>>> Emulation and mediation to create the vIOMMU is supposed to be in the
>>> VMM side, not in the kernel. I don't want to see different models by
>>> vendor.
>>
>> These ioctls are not necessary for emulation, which I would agree
>> should be done on the VMM side (e.g. QEMU). These ioctls provide the
>> necessary information for programming the AMD IOMMU hardware to
>> provide a hardware-assisted virtualized IOMMU.
> 
> You have one called 'trap'; it shouldn't be like this. It seems like
> this is trying to parse the command buffer in the kernel; that should
> be done in the VMM.

Please see the more detailed description above. Basically, all parsing
is done in the VMM, and it uses the ioctl to tell the IOMMU driver to
program the VFCtrlMMIO/VFMMIO registers or the IOMMU PAS for the
hardware to access.

>> In this series, the AMD IOMMU GCR3 table is actually set up when
>> IOMMUFD_CMD_HWPT_ALLOC is called, for which the driver provides a
>> hook via struct iommu_ops.domain_alloc_user().
> 
> That isn't entirely right either, the GCR3 should be programmed into
> HW during iommu_domain attach.
>
>> The AMD-specific information is communicated from QEMU via
>> iommu_domain_user_data.iommu_hwpt_amd_v2. This is similar to INTEL
>> and ARM.
> 
> This is only for requesting the iommu_domain and supplying the gcr3 VA
> for later use.

Ah, ok. Lemme look into this again and get back to you.
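
For context, the current RFC wires this up roughly as below on the QEMU
side. This is a sketch only: the iommu_hwpt_amd_v2 fields and the
IOMMU_HWPT_DATA_AMD_V2 constant are placeholders, and the
iommu_hwpt_alloc shape follows the in-flight iommufd uAPI proposals:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/types.h>

/* Sketch only: field names and IOMMU_HWPT_DATA_AMD_V2 are
 * placeholders, not the exact uAPI from this RFC. */
struct iommu_hwpt_amd_v2 {
    __u64 gcr3;   /* GPA of the guest GCR3 table */
    __u32 gid;    /* guest domain ID (gDomID) */
    __u32 flags;
};

static __u32 alloc_amd_v2_hwpt(int iommufd, __u32 dev_id,
                               __u32 parent_hwpt_id,
                               struct iommu_hwpt_amd_v2 *data)
{
    struct iommu_hwpt_alloc cmd = {
        .size = sizeof(cmd),
        .dev_id = dev_id,
        .pt_id = parent_hwpt_id,  /* stage-2 parent domain */
        .data_type = IOMMU_HWPT_DATA_AMD_V2,
        .data_len = sizeof(*data),
        .data_uptr = (__u64)(uintptr_t)data,
    };

    /* Reaches the driver through iommu_ops.domain_alloc_user(). */
    if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &cmd))
        return 0;
    return cmd.out_hwpt_id;
}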

>.... 
>
>> There is still work to be done in this to fully support PASID. I'll
>> take a look at this next.
> 
> I would expect PASID work is only about invalidation?

Actually, I am referring to supporting non-zero PASIDs, which requires
walking the guest IOMMU gCR3 table and communicating this to the
hypervisor.
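
As a simplified illustration of what that walk involves (assuming a
one-level GCR3 table; real tables can be 1-3 levels depending on the
DTE's GLX setting, and the masks here are abbreviated):

/* Kernel-side sketch. The GCR3 table is indexed by PASID, and each
 * valid entry yields the guest page-table root for that PASID. */
static u64 guest_gcr3_for_pasid(u64 *gcr3_table, u32 pasid)
{
    u64 entry = gcr3_table[pasid & 0x1ff];  /* 512 entries per level */

    if (!(entry & 1))  /* valid bit */
        return 0;

    /* Guest CR3 for this PASID, to be handed to the hypervisor for
     * the nested (v2) translation setup. Mask abbreviated; see the
     * AMD IOMMU spec for the full entry format. */
    return entry & GENMASK_ULL(51, 12);
}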

>>> To start focus only on user space page tables and kernel mediated
>>> invalidation and fit into the same model as everyone else. This is
>>> approx the same patches and uAPI you see for ARM and Intel. AFAICT
>>> AMD's HW is very similar to ARM's, so you should be aligning to the
>>> ARM design.
>>
>> I think the user space page table is covered as described above.
> 
> I'm not sure, it doesn't look like it is what I would expect.

Lemme clean up this part and get back in the next RFC.

>> It seems that user space is supposed to call the ioctl
>> IOMMUFD_CMD_HWPT_INVALIDATE for both INTEL and ARM to issue
>> invalidation for the stage 1 page table. Please lemme know if I
>> misunderstand the purpose of this ioctl.
> 
> Yes, the VMM traps the invalidation and issues it like this.
>   
>> However, for AMD, since the HW-vIOMMU virtualizes the guest command
>> buffer, when it sees a page table invalidation command in the guest
>> command buffer, it takes care of the invalidation using information
>> in the DomIDMap, which maps the guest domain ID (gDomID) of a
>> particular guest to the corresponding host domain ID (hDomID) of the
>> device, and invalidates the nested translation according to the
>> specified PASID, DomID, and GVA.
> 
> The VMM should do all of this stuff. The VMM parses the command buffer
> and the VMM converts the commands to invalidation ioctls.
>
> I'm unclear if AMD supports a mode where the HW can directly operate
> a command/invalidation queue in the VM without virtualization. Eg DMA
> from guest memory and deliver directly to the guest completion
> interrupts.

Correct, the VMM does not need to parse the command buffer. The
hardware takes care of virtualizing the invalidation commands in the
guest command buffer directly, without the VMM's help, so no
invalidation needs to be issued from the host side.

For the AMD IOMMU, an invalidation command is normally followed by a
COMPLETION_WAIT command on a memory semaphore, which the hardware
updates after all prior commands have completed.

For Linux, we are not using the Completion Wait interrupt. The IOMMU
driver polls on the memory semaphore in a loop.
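
Roughly like this (lightly simplified from wait_on_sem() in the
existing amd_iommu driver; not a verbatim excerpt):

/* The hardware sets the semaphore to the expected value once all
 * commands queued before the COMPLETION_WAIT have completed. */
static int wait_on_sem(volatile u64 *sem, u64 data)
{
    int i = 0;

    while (READ_ONCE(*sem) != data && i < LOOP_TIMEOUT) {
        udelay(1);
        i += 1;
    }

    if (i == LOOP_TIMEOUT) {
        pr_alert("Completion-Wait loop timed out\n");
        return -EIO;
    }

    return 0;
}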

> If it always needs SW then the SW part should be in the VMM, not the
> kernel. Then you don't need to load all these tables into the kernel.
> 

As described, the IOMMU driver needs to program the IOMMU PAS, and the
IOMMU hardware uses its own IOMMU page table to access the PAS.

For example, an AMD IOMMU instance is normally listed as a PCI device
(e.g. PCI ID 00:00.2). To set up the IOMMU PAS for this IOMMU instance,
the IOMMU driver allocates an IOMMU v1 page table for this device,
which contains the PAS mappings.

The IOMMU hardware uses the PAS for storing guest IOMMU information
such as the guest MMIOs, DevID Mapping Table, DomID Mapping Table, and
Guest Command/Event/PPR logs.
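
Conceptually, then, populating the PAS is just ordinary v1 map
operations on a domain attached to the IOMMU's own PCI device, along
these lines (the helper and its caller are illustrative, not the RFC's
actual code):

/* Kernel-side sketch: map one PAS region into the v1 page table
 * allocated for the IOMMU's own PCI device (e.g. 00:00.2), so the
 * hardware can reach the guest MMIO state, DevID/DomID mapping
 * tables, and guest Command/Event/PPR logs. */
static int amd_viommu_map_pas(struct iommu_domain *pas_domain,
                              unsigned long pas_iova,
                              phys_addr_t paddr, size_t size)
{
    return iommu_map(pas_domain, pas_iova, paddr, size,
                     IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
}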

Thanks,
Suravee
