Message-ID: <PH0PR12MB5481DA4780B0EAE420B3ABF4DC379@PH0PR12MB5481.namprd12.prod.outlook.com>
Date: Tue, 8 Jun 2021 06:30:30 +0000
From: Parav Pandit <parav@...dia.com>
To: Jacob Pan <jacob.jun.pan@...ux.intel.com>
CC: "Tian, Kevin" <kevin.tian@...el.com>,
LKML <linux-kernel@...r.kernel.org>,
Joerg Roedel <joro@...tes.org>,
Jason Gunthorpe <jgg@...dia.com>,
Lu Baolu <baolu.lu@...ux.intel.com>,
David Woodhouse <dwmw2@...radead.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Alex Williamson (alex.williamson@...hat.com)"
<alex.williamson@...hat.com>, Jason Wang <jasowang@...hat.com>,
Eric Auger <eric.auger@...hat.com>,
Jonathan Corbet <corbet@....net>,
"Raj, Ashok" <ashok.raj@...el.com>,
"Liu, Yi L" <yi.l.liu@...el.com>, "Wu, Hao" <hao.wu@...el.com>,
"Jiang, Dave" <dave.jiang@...el.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
David Gibson <david@...son.dropbear.id.au>,
Kirti Wankhede <kwankhede@...dia.com>,
Robin Murphy <robin.murphy@....com>
Subject: RE: [RFC] /dev/ioasid uAPI proposal
Hi Jacob,
Sorry for the late response; I was on PTO on Friday last week.
Please see comments below.
> From: Jacob Pan <jacob.jun.pan@...ux.intel.com>
> Sent: Friday, June 4, 2021 2:28 AM
>
> Hi Parav,
>
> On Tue, 1 Jun 2021 17:30:51 +0000, Parav Pandit <parav@...dia.com> wrote:
>
> > > From: Tian, Kevin <kevin.tian@...el.com>
> > > Sent: Thursday, May 27, 2021 1:28 PM
> >
> > > 5.6. I/O page fault
> > > +++++++++++++++
> > >
> > > (uAPI is TBD. Here is just about the high-level flow from host IOMMU
> > > driver to guest IOMMU driver and backwards).
> > >
> > > - Host IOMMU driver receives a page request with raw fault_data {rid,
> > > pasid, addr};
> > >
> > > - Host IOMMU driver identifies the faulting I/O page table according
> > > to information registered by IOASID fault handler;
> > >
> > > - IOASID fault handler is called with raw fault_data (rid, pasid,
> > > addr), which is saved in ioasid_data->fault_data (used for
> > > response);
> > >
> > > - IOASID fault handler generates an user fault_data (ioasid, addr),
> > > links it to the shared ring buffer and triggers eventfd to
> > > userspace;
> > >
> > > - Upon received event, Qemu needs to find the virtual routing
> > > information (v_rid + v_pasid) of the device attached to the faulting
> > > ioasid. If there are multiple, pick a random one. This should be
> > > fine since the purpose is to fix the I/O page table on the guest;
> > >
> > > - Qemu generates a virtual I/O page fault through vIOMMU into guest,
> > > carrying the virtual fault data (v_rid, v_pasid, addr);
> > >
> > Why does it have to be through vIOMMU?
> I think this flow is for fully emulated IOMMU, the same IOMMU and device
> drivers run in the host and guest. Page request interrupt is reported by the
> IOMMU, thus reporting to vIOMMU in the guest.
In the non-emulated case, how will the guest's page fault be handled?
If I take the Intel example, I thought the first-level (FL) page table entries still need to be handled by the guest, which in turn fills up the second-level page table entries.
No?
>
> > For a VFIO PCI device, have you considered to reuse the same PRI
> > interface to inject page fault in the guest? This eliminates any new
> > v_rid. It will also route the page fault request and response through
> > the right vfio device.
> >
> I am curious how would PCI PRI can be used to inject fault. Are you talking
> about PCI config PRI extended capability structure?
The PCI PRI capability only exposes page fault support.
Page fault injection/response cannot happen through the PCI capability anyway.
That requires a side channel.
I was suggesting to emulate the hypervisor's pci_endpoint->rc->iommu->iommu_irq path as
vmm->guest_emulated_pri_device->pri_req/rsp queue(s).
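To make the suggestion concrete, here is a rough sketch of what a request/response queue entry for such an emulated PRI device could look like, loosely mirroring the fields of a PCIe Page Request Message and PRG Response Message. All struct and function names are illustrative, not an existing interface:

```c
/* Illustrative only: guest-visible queue entries for a hypothetical
 * emulated PRI device, mirroring PCIe page request/response fields. */
#include <stdint.h>

struct vpri_request {
	uint16_t rid;	/* requester ID as seen by the guest (v_rid) */
	uint32_t pasid;	/* 20-bit PASID */
	uint64_t addr;	/* page-aligned faulting address */
	uint16_t prgi;	/* page request group index */
	uint8_t  flags;	/* read/write/last-in-group, etc. */
};

enum vpri_response_code {	/* matches PCIe PRG response codes */
	VPRI_RESP_SUCCESS = 0x0,
	VPRI_RESP_INVALID = 0x1,
	VPRI_RESP_FAILURE = 0xf,
};

struct vpri_response {
	uint16_t rid;
	uint32_t pasid;
	uint16_t prgi;	/* echoes the request's group index */
	uint8_t  code;	/* one of vpri_response_code */
};

/* Build the response for a served request group. */
static struct vpri_response vpri_respond(const struct vpri_request *req,
					 enum vpri_response_code code)
{
	struct vpri_response rsp = {
		.rid = req->rid, .pasid = req->pasid,
		.prgi = req->prgi, .code = (uint8_t)code,
	};
	return rsp;
}
```

The point is that the guest driver for such a device would only touch these queues; no vIOMMU emulation would be involved in the fault path.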
> The control is very
> limited, only enable and reset. Can you explain how would page fault
> handled in generic PCI cap?
Not via the PCI cap.
Through a more generic interface, without attaching to the vIOMMU.
> Some devices may have device specific way to handle page faults, but I guess
> this is not the PCI PRI method you are referring to?
This was my next question: if the page fault reporting and response interface is generic, it will be more scalable, given that PCI PRI is limited to single-page requests.
Additionally, VT-d seems to funnel all the page fault interrupts through a single IRQ.
And thirdly, it always has to come through the hypervisor intermediary.
A generic mechanism will help overcome the above limitations; as Jean already pointed out, page fault handling is a hot path.
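The proposal's shared-ring-plus-eventfd delivery could look roughly like the sketch below from the VMM side: the producer fills (ioasid, addr) records and signals the eventfd, and the consumer drains the ring. Names, record layout, and ring size are my assumptions, not part of any proposed uAPI:

```c
/* Hypothetical userspace consumer of the fault ring described in the
 * proposal: kernel produces (ioasid, addr) records and signals an
 * eventfd; the VMM drains them. Layout is illustrative only. */
#include <poll.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

#define FAULT_RING_SIZE 64		/* must be a power of two */

struct iofault_rec {			/* hypothetical user fault_data */
	uint32_t ioasid;
	uint64_t addr;
};

struct iofault_ring {
	uint32_t head;			/* advanced by the producer */
	uint32_t tail;			/* advanced by the consumer */
	struct iofault_rec recs[FAULT_RING_SIZE];
};

/* Pop one record; returns 0 if the ring was empty. */
static int iofault_pop(struct iofault_ring *r, struct iofault_rec *out)
{
	if (r->tail == r->head)
		return 0;
	*out = r->recs[r->tail & (FAULT_RING_SIZE - 1)];
	r->tail++;
	return 1;
}

/* Block on the eventfd, then drain every pending fault record. */
static int iofault_drain(int efd, struct iofault_ring *r,
			 struct iofault_rec *out, int max)
{
	struct pollfd pfd = { .fd = efd, .events = POLLIN };
	uint64_t cnt;
	int n = 0;

	poll(&pfd, 1, -1);
	read(efd, &cnt, sizeof(cnt));	/* consume the event count */
	while (n < max && iofault_pop(r, &out[n]))
		n++;
	return n;
}

/* Self-contained demo: produce two faults, signal, drain. */
static int iofault_demo(void)
{
	struct iofault_ring ring = { 0 };
	struct iofault_rec out[4];
	int efd = eventfd(0, 0);
	uint64_t one = 1;
	int n;

	ring.recs[ring.head++ & (FAULT_RING_SIZE - 1)] =
		(struct iofault_rec){ .ioasid = 5, .addr = 0x1000 };
	ring.recs[ring.head++ & (FAULT_RING_SIZE - 1)] =
		(struct iofault_rec){ .ioasid = 5, .addr = 0x2000 };
	write(efd, &one, sizeof(one));

	n = iofault_drain(efd, &ring, out, 4);
	close(efd);
	return n;
}
```

Nothing here is tied to a vIOMMU model or to PRI's single-page granularity, which is what makes the generic interface attractive for a hot path.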
>
> > > - Guest IOMMU driver fixes up the fault, updates the I/O page table,
> > > and then sends a page response with virtual completion data (v_rid,
> > > v_pasid, response_code) to vIOMMU;
> > >
> > What about fixing up the fault for mmu page table as well in guest?
> > Or you meant both when above you said "updates the I/O page table"?
> >
> > It is unclear to me that if there is single nested page table
> > maintained or two (one for cr3 references and other for iommu). Can
> > you please clarify?
> >
> I think it is just one, at least for VT-d, guest cr3 in GPA is stored in the host
> iommu. Guest iommu driver calls handle_mm_fault to fix the mmu page
> tables which is shared by the iommu.
>
So if the guest has touched the page data, the FL and SL entries of the MMU should be populated, and the IOMMU side should not even reach the point of raising a PRI request (ATS should be enough),
because the IOMMU shares the same FL and SL table entries referenced by the scalable-mode PASID-table entry format described in Section 9.6.
Is that correct?
> > > - Qemu finds the pending fault event, converts virtual completion data
> > > into (ioasid, response_code), and then calls a /dev/ioasid ioctl to
> > > complete the pending fault;
> > >
> > For VFIO PCI device a virtual PRI request response interface is done,
> > it can be generic interface among multiple vIOMMUs.
> >
> same question above, not sure how this works in terms of interrupts and
> response queuing etc.
>
Citing "VFIO PCI device" was wrong on my part.
I was considering a generic page fault device to expose in the guest, with request/response queues.
This way it is not attached to a specific vIOMMU driver, and it has the other benefits explained above.
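For the completion step quoted above (converting the guest's (v_rid, v_pasid, response_code) back into (ioasid, response_code) for the /dev/ioasid ioctl), the VMM would need to remember the routing it used at injection time. A minimal sketch of that pending-fault bookkeeping, with all names hypothetical:

```c
/* Illustrative VMM-side bookkeeping: record (v_rid, v_pasid) -> ioasid
 * at fault injection time, recover the ioasid on guest response. */
#include <stdint.h>

#define MAX_PENDING 32

struct pending_fault {
	int      in_use;
	uint16_t v_rid;		/* virtual routing info used at injection */
	uint32_t v_pasid;
	uint32_t ioasid;	/* what the completion ioctl needs */
};

static struct pending_fault pending[MAX_PENDING];

static void pending_record(uint16_t v_rid, uint32_t v_pasid, uint32_t ioasid)
{
	int i;

	for (i = 0; i < MAX_PENDING; i++) {
		if (!pending[i].in_use) {
			pending[i] = (struct pending_fault){
				1, v_rid, v_pasid, ioasid };
			return;
		}
	}
}

/* Returns 1 and fills *ioasid when a matching pending fault is found. */
static int pending_complete(uint16_t v_rid, uint32_t v_pasid,
			    uint32_t *ioasid)
{
	int i;

	for (i = 0; i < MAX_PENDING; i++) {
		if (pending[i].in_use &&
		    pending[i].v_rid == v_rid &&
		    pending[i].v_pasid == v_pasid) {
			*ioasid = pending[i].ioasid;
			pending[i].in_use = 0;
			return 1;
		}
	}
	return 0;
}
```

A generic page fault device would carry the same bookkeeping, just keyed by its own queue's request identifiers instead of vIOMMU routing.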