Message-ID: <d6ff346e-6b6b-d9cd-c7c8-0e54614c1b37@linux.intel.com>
Date: Wed, 4 Dec 2019 08:32:17 +0800
From: Lu Baolu <baolu.lu@...ux.intel.com>
To: Jacob Pan <jacob.jun.pan@...ux.intel.com>
Cc: baolu.lu@...ux.intel.com, Joerg Roedel <joro@...tes.org>,
David Woodhouse <dwmw2@...radead.org>, ashok.raj@...el.com,
kevin.tian@...el.com, Eric Auger <eric.auger@...hat.com>,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops
Hi Jacob,
On 12/4/19 12:50 AM, Jacob Pan wrote:
> On Tue, 3 Dec 2019 10:44:45 +0800
> Lu Baolu <baolu.lu@...ux.intel.com> wrote:
>
>> Hi Jacob,
>>
>> On 12/3/19 4:02 AM, Jacob Pan wrote:
>>> On Fri, 22 Nov 2019 11:04:44 +0800
>>>> Lu Baolu <baolu.lu@...ux.intel.com> wrote:
>>>
>>>> Intel VT-d 3.0 introduces more caches and interfaces for software
>>>> to flush when it runs in scalable mode. Currently, the various
>>>> cache flush helpers are scattered around the driver. This series
>>>> consolidates them by putting them in the existing iommu_flush
>>>> structure.
>>>>
>>>> /* struct iommu_flush - Intel IOMMU cache invalidation ops
>>>>  *
>>>>  * @cc_inv: invalidate context cache
>>>>  * @iotlb_inv: invalidate IOTLB and paging structure caches when
>>>>  *             software has changed second-level tables
>>>>  * @p_iotlb_inv: invalidate IOTLB and paging structure caches when
>>>>  *               software has changed first-level tables
>>>>  * @pc_inv: invalidate pasid cache
>>>>  * @dev_tlb_inv: invalidate cached mappings used by
>>>>  *               requests-without-PASID from the Device-TLB on an
>>>>  *               endpoint device
>>>>  * @p_dev_tlb_inv: invalidate cached mappings used by
>>>>  *                 requests-with-PASID from the Device-TLB on an
>>>>  *                 endpoint device
>>>>  */
>>>> struct iommu_flush {
>>>>         void (*cc_inv)(struct intel_iommu *iommu, u16 did,
>>>>                        u16 sid, u8 fm, u64 type);
>>>>         void (*iotlb_inv)(struct intel_iommu *iommu, u16 did,
>>>>                           u64 addr, unsigned int size_order,
>>>>                           u64 type);
>>>>         void (*p_iotlb_inv)(struct intel_iommu *iommu, u16 did,
>>>>                             u32 pasid, u64 addr,
>>>>                             unsigned long npages, bool ih);
>>>>         void (*pc_inv)(struct intel_iommu *iommu, u16 did,
>>>>                        u32 pasid, u64 granu);
>>>>         void (*dev_tlb_inv)(struct intel_iommu *iommu, u16 sid,
>>>>                             u16 pfsid, u16 qdep, u64 addr,
>>>>                             unsigned int mask);
>>>>         void (*p_dev_tlb_inv)(struct intel_iommu *iommu, u16 sid,
>>>>                               u16 pfsid, u32 pasid, u16 qdep,
>>>>                               u64 addr, unsigned long npages);
>>>> };
>>>>
>>>> The name of each cache flush op is taken from spec section 6.5,
>>>> so that it is easy to look up in the spec.
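>>>>
>>>> As a usage sketch (hypothetical caller, not a patch in this
>>>> series; iommu->flush is the existing ops member embedded in
>>>> struct intel_iommu), a path that has just changed a first-level
>>>> table for a PASID would dispatch like this:
>>>>
>>>> /* Hypothetical example: flush first-level IOTLB and paging
>>>>  * structure caches for a PASID after updating its first-level
>>>>  * page tables.
>>>>  */
>>>> static void example_flush_p_iotlb(struct intel_iommu *iommu,
>>>>                                   u16 did, u32 pasid, u64 addr,
>>>>                                   unsigned long npages, bool ih)
>>>> {
>>>>         iommu->flush.p_iotlb_inv(iommu, did, pasid, addr,
>>>>                                  npages, ih);
>>>> }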
>>>>
>>> Nice consolidation. For nested SVM, I also introduced cache flush
>>> helpers as needed:
>>> https://lkml.org/lkml/2019/10/24/857
>>>
>>> Should I wait for yours to be merged, or do you want to extend this
>>> consolidation after the SVA/SVM cache flush work? I expect to send
>>> my v8 shortly.
>>
>> Please base your v8 patch on this series, so it gets more chances to
>> be tested.
>>
> Sounds good.
I am sorry, but I need to spend more time on this patch series. Please
go ahead without it.
Best regards,
baolu
>
>> I will queue this patch series for internal testing after 5.5-rc1,
>> and if everything goes well, I will forward it to Joerg around rc4
>> for linux-next.
>>
>> Best regards,
>> baolu
>
> [Jacob Pan]
>