Message-ID: <78ee2638-1a03-fcc8-50a5-81040f677e69@redhat.com>
Date: Tue, 1 Jun 2021 10:36:36 +0800
From: Jason Wang <jasowang@...hat.com>
To: Liu Yi L <yi.l.liu@...ux.intel.com>
Cc: yi.l.liu@...el.com, "Tian, Kevin" <kevin.tian@...el.com>,
LKML <linux-kernel@...r.kernel.org>,
Joerg Roedel <joro@...tes.org>,
Jason Gunthorpe <jgg@...dia.com>,
Lu Baolu <baolu.lu@...ux.intel.com>,
David Woodhouse <dwmw2@...radead.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Alex Williamson" <alex.williamson@...hat.com>, Eric Auger <eric.auger@...hat.com>,
Jonathan Corbet <corbet@....net>
Subject: Re: [RFC] /dev/ioasid uAPI proposal
On 2021/5/31 4:41 PM, Liu Yi L wrote:
>> I guess VFIO_ATTACH_IOASID will fail if the underlying layer doesn't
>> support hardware nesting. Or is there a way to detect the capability
>> beforehand?
> I think it could fail in IOASID_CREATE_NESTING. If the gpa_ioasid
> cannot support nesting, the call should fail.
>
>> I think GET_INFO only works after the ATTACH.
> Yes. After attaching to gpa_ioasid, userspace could call GET_INFO on
> the gpa_ioasid and check whether nesting is supported. Right?
Some more questions:
1) Is the handle returned by IOASID_ALLOC an fd?
2) If yes, what's the reason for not simply using the fd opened from
/dev/ioasid? (This is the question that was not answered.) And what
happens if we call GET_INFO on the ioasid_fd?
3) If not, how does GET_INFO work?
>
>>> /* Bind guest I/O page table */
>>> bind_data = {
>>>     .ioasid = giova_ioasid,
>>>     .addr   = giova_pgtable,
>>>     // and format information
>>> };
>>> ioctl(ioasid_fd, IOASID_BIND_PGTABLE, &bind_data);
>>>
>>> /* Invalidate IOTLB when required */
>>> inv_data = {
>>>     .ioasid = giova_ioasid,
>>>     // granularity information
>>> };
>>> ioctl(ioasid_fd, IOASID_INVALIDATE_CACHE, &inv_data);
>>>
>>> /* See 5.6 for I/O page fault handling */
>>>
>>> 5.5. Guest SVA (vSVA)
>>> ++++++++++++++++++
>>>
>>> After boot, the guest further creates a GVA address space (gpasid1)
>>> on dev1. Dev2 is not affected (still attached to giova_ioasid).
>>>
>>> As explained in section 4, the user should avoid exposing ENQCMD on
>>> both pdev and mdev.
>>>
>>> The sequence applies to all device types (pdev or mdev), except for
>>> one additional step of calling into KVM for an ENQCMD-capable mdev:
>> My understanding is that ENQCMD is Intel-specific and not a
>> requirement for having vSVA.
> ENQCMD is not really Intel-specific, although only Intel supports it
> today. The PCIe DMWr capability is how software enumerates ENQCMD
> support on the device side. Yes, it is not a requirement for vSVA;
> they are orthogonal.
Right, then it's better to mention DMWr rather than a vendor-specific
instruction in a general framework like ioasid.
Thanks
>