Message-ID: <48391b05-ecbb-4053-bed5-2740806ff06e@intel.com>
Date: Wed, 22 Nov 2023 11:52:37 +0800
From: Yi Liu <yi.l.liu@...el.com>
To: Baolu Lu <baolu.lu@...ux.intel.com>,
Jason Gunthorpe <jgg@...dia.com>,
"Tian, Kevin" <kevin.tian@...el.com>
CC: "joro@...tes.org" <joro@...tes.org>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"robin.murphy@....com" <robin.murphy@....com>,
"cohuck@...hat.com" <cohuck@...hat.com>,
"eric.auger@...hat.com" <eric.auger@...hat.com>,
"nicolinc@...dia.com" <nicolinc@...dia.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"mjrosato@...ux.ibm.com" <mjrosato@...ux.ibm.com>,
"chao.p.peng@...ux.intel.com" <chao.p.peng@...ux.intel.com>,
"yi.y.sun@...ux.intel.com" <yi.y.sun@...ux.intel.com>,
"peterx@...hat.com" <peterx@...hat.com>,
"jasowang@...hat.com" <jasowang@...hat.com>,
"shameerali.kolothum.thodi@...wei.com"
<shameerali.kolothum.thodi@...wei.com>,
"lulu@...hat.com" <lulu@...hat.com>,
"suravee.suthikulpanit@....com" <suravee.suthikulpanit@....com>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
"Duan, Zhenzhong" <zhenzhong.duan@...el.com>,
"joao.m.martins@...cle.com" <joao.m.martins@...cle.com>,
"Zeng, Xin" <xin.zeng@...el.com>,
"Zhao, Yan Y" <yan.y.zhao@...el.com>
Subject: Re: [PATCH v7 1/3] iommufd: Add data structure for Intel VT-d stage-1
cache invalidation
On 2023/11/22 10:32, Baolu Lu wrote:
> On 11/21/23 8:17 PM, Jason Gunthorpe wrote:
>> On Tue, Nov 21, 2023 at 02:54:15AM +0000, Tian, Kevin wrote:
>>>> From: Jason Gunthorpe <jgg@...dia.com>
>>>> Sent: Tuesday, November 21, 2023 7:05 AM
>>>>
>>>> On Mon, Nov 20, 2023 at 08:26:31AM +0000, Tian, Kevin wrote:
>>>>>> From: Liu, Yi L <yi.l.liu@...el.com>
>>>>>> Sent: Friday, November 17, 2023 9:18 PM
>>>>>>
>>>>>> This adds the data structure for flushing iotlb for the nested domain
>>>>>> allocated with IOMMU_HWPT_DATA_VTD_S1 type.
>>>>>>
>>>>>> This only supports invalidating IOTLB, but no for device-TLB as
>>>>>> device-TLB invalidation will be covered automatically in the IOTLB
>>>>>> invalidation if the underlying IOMMU driver has enabled ATS for the
>>>>>> affected device.
>>>>>
>>>>> "no for device-TLB" is misleading. Here just say that cache invalidation
>>>>> request applies to both IOTLB and device TLB (if ATS is enabled ...)
>>>>
>>>> I think we should forward the ATS invalidation from the guest too?
>>>> That is what ARM and AMD will have to do, can we keep them all
>>>> consistent?
>>>>
>>>> I understand Intel keeps track of enough stuff to know what the RIDs
>>>> are, but is it necessary to make it different?
>>>
>>> probably ask the other way. Now intel-iommu driver always flushes
>>> iotlb and device tlb together then is it necessary to separate them
>>> in uAPI for no good (except doubled syscalls)? :)
>>
>> I wish I knew more about Intel CC design to be able to answer that :|
>>
>> Doesn't the VM issue the ATC flush command regardless? How does it
>> know it has a working ATC but does not need to flush it?
>>
>
> The Intel VT-d spec doesn't require the driver to flush iotlb and device
> tlb together.
The spec has the below description. While it does not say the iotlb and
device tlb must be flushed together, it does require that both be flushed
when a page is unmapped, and in a particular order.
Chapter 6.5.2.5:
"Since translation requests-without-PASID from a device may be serviced
by hardware from the IOTLB, software must always request IOTLB
invalidation (iotlb_inv_dsc) before requesting corresponding Device-TLB
(dev_tlb_inv_dsc) invalidation."
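
To illustrate the ordering requirement, below is a rough sketch of how a
host-side flush after unmap might honor it. The helper names are modeled
on the intel-iommu driver's qi_flush_iotlb()/qi_flush_dev_iotlb(), but
the exact signatures and the device_has_ats_enabled() check are
illustrative assumptions, not the actual driver API:

/* Sketch: per spec 6.5.2.5, IOTLB invalidation must be requested
 * before the corresponding Device-TLB invalidation. */
static void flush_after_unmap(struct intel_iommu *iommu, u16 did,
                              u16 sid, u64 addr, unsigned int order)
{
        /* 1) iotlb_inv_dsc: invalidate the IOTLB for the range */
        qi_flush_iotlb(iommu, did, addr, order, DMA_TLB_PSI_FLUSH);

        /* 2) dev_tlb_inv_dsc: only afterwards, and only when ATS is
         * enabled for the device (hypothetical helper) */
        if (device_has_ats_enabled(sid))
                qi_flush_dev_iotlb(iommu, sid, /*pfsid*/sid,
                                   /*qdep*/0, addr, order);
}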
> Therefore, the current approach of relying on caching mode
> to determine whether device TLB invalidation is necessary appears to be
> a performance optimization rather than an architectural requirement.
>
> The vIOMMU driver assumes that it is running within a VM guest when
> caching mode is enabled. This assumption leads to an omission of device
> TLB invalidation, relying on the hypervisor to perform a combined flush
> of the IOTLB and device TLB.
Yes, this is what the current intel iommu driver does. However, whether it
relies on caching mode or not is orthogonal to whether we need two uAPIs
here. I think the guest iommu driver could submit both iotlb and device
tlb invalidation requests, and QEMU could then choose whether to forward
the device tlb invalidation request to the kernel, e.g. skip it when the
kernel iommu driver already covers device tlb invalidation while handling
the request to invalidate the iotlb.
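
As a concrete (purely hypothetical) sketch of that userspace selection,
the vIOMMU emulation could trap both descriptor types but only issue a
syscall for the iotlb one. None of the names below are real QEMU or
iommufd symbols; they just show the shape of the decision:

/* Hypothetical trap handler for guest invalidation descriptors. */
static void viommu_handle_inv_desc(struct vtd_inv_desc *desc)
{
        switch (desc->type) {
        case VTD_INV_DESC_IOTLB:
                /* Host intel-iommu flushes the device TLB together
                 * with the IOTLB when ATS is on, so one syscall
                 * covers both. */
                hwpt_invalidate_ioctl(s1_hwpt_id, desc);
                break;
        case VTD_INV_DESC_DEV_TLB:
                /* Already covered by the IOTLB path above; emulate
                 * the completion without forwarding to the kernel. */
                break;
        }
}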
> While this optimization aims to reduce VMEXIT overhead, it introduces
> potential issues:
>
> - When a Linux guest is running on a hypervisor other than KVM/QEMU, the
>   assumption of combined IOTLB and device TLB flushing by the hypervisor
> may be incorrect, potentially leading to missed device TLB
> invalidation.
Hmmm, this appears to be an intel iommu driver bug; the driver should
submit both iotlb and device tlb invalidation requests. But as above, I
think this is orthogonal to the uAPI here. The uAPI used with KVM/QEMU
can be defined based on the implementation to best suit it.
>
> - The caching mode doesn't apply to first-stage translation. Therefore,
> if the driver uses first-stage translation and still relies on caching
> mode to determine device TLB invalidation, the optimization fails.
Yes, caching mode does not apply to the first-stage translation table. But
with nested translation, the guest does not need to notify the hypervisor
when a page is unmapped, does it? So whether caching mode applies to the
first-stage translation table does not matter. TBH, I don't see a problem
due to this. But I agree that the linux guest intel iommu driver needs to
submit both iotlb and device tlb invalidation requests to guarantee that
it works on other hypervisors, and the performance optimization should be
done some other way.
>
> A more reasonable optimization would be to allocate a bit in the iommu
> capability registers. The vIOMMU driver could then leverage this bit to
> determine whether it could eliminate a device invalidation request.
This may be something the spec can be enhanced with. But again, that is
just to let the guest intel iommu driver gain the performance optimization
while still working on other hypervisors. For this uAPI design,
considering it within the linux ecosystem is enough.
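
For completeness, this is the kind of check such a capability bit would
enable in the guest driver. cap_combined_inv() does not exist in the VT-d
spec or the driver today, and flush_iotlb()/flush_dev_iotlb() are
stand-ins; everything here is a hypothetical sketch of the enhancement
being discussed:

/* Guest-side flush after unmap, with a hypothetical capability bit
 * saying "IOTLB invalidation also flushes the device TLB". */
static void guest_flush_after_unmap(struct intel_iommu *iommu, u16 did,
                                    u16 sid, u64 addr, unsigned int order)
{
        flush_iotlb(iommu, did, addr, order);   /* always required */

        /* Skip the extra device TLB flush only when the (hypothetical)
         * capability guarantees the hypervisor combines both flushes. */
        if (!cap_combined_inv(iommu->cap))
                flush_dev_iotlb(iommu, sid, addr, order);
}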
--
Regards,
Yi Liu