Message-ID: <afec1d30-4bb3-4d39-9ff1-eb8ecb26bed3@linux.intel.com>
Date: Sat, 17 Aug 2024 11:28:21 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: Jacob Pan <jacob.pan@...ux.microsoft.com>,
Tina Zhang <tina.zhang@...el.com>
Cc: baolu.lu@...ux.intel.com, Kevin Tian <kevin.tian@...el.com>,
iommu@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 4/4] iommu/vt-d: Introduce batched cache invalidation
On 2024/8/17 0:38, Jacob Pan wrote:
> On Thu, 15 Aug 2024 14:52:21 +0800
> Tina Zhang <tina.zhang@...el.com> wrote:
>
>> @@ -270,7 +343,8 @@ static void cache_tag_flush_iotlb(struct dmar_domain *domain, struct cache_tag *
>>  	u64 type = DMA_TLB_PSI_FLUSH;
>>  
>>  	if (domain->use_first_level) {
>> -		qi_flush_piotlb(iommu, tag->domain_id, tag->pasid, addr, pages, ih);
>> +		qi_batch_add_piotlb(iommu, tag->domain_id, tag->pasid, addr,
>> +				    pages, ih, domain->qi_batch);
>>  		return;
>>  	}
>>  
>> @@ -287,7 +361,8 @@ static void cache_tag_flush_iotlb(struct dmar_domain *domain, struct cache_tag *
>>  	}
>>  
>>  	if (ecap_qis(iommu->ecap))
>> -		qi_flush_iotlb(iommu, tag->domain_id, addr | ih, mask, type);
>> +		qi_batch_add_iotlb(iommu, tag->domain_id, addr | ih, mask, type,
>> +				   domain->qi_batch);
>>
> If I understand this correctly, the IOTLB flush may be deferred until
> the batch array is full, right? If so, is there a security gap where
> callers think the mapping is gone after the call returns?
No. All related caches are flushed before the function returns. A domain
can have multiple cache tags. Previously, we sent individual cache
invalidation requests to hardware. This change combines all necessary
invalidation requests into a single batch and raises them to hardware
together, which is more efficient.
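
To illustrate the idea, here is a minimal standalone sketch of the
batching pattern described above. It is not the qi_batch code from the
patch; the struct layout, BATCH_MAX size, and helper names (inv_desc,
inv_batch, batch_add, batch_flush, submit_to_hw) are all hypothetical.
Requests are queued into a fixed-size array, the array is submitted
whenever it fills up, and an explicit flush at the end guarantees that
nothing stays deferred past the point where the flush routine returns.

#include <stdio.h>

/* Hypothetical invalidation descriptor, for illustration only. */
struct inv_desc {
	unsigned long long qw0;
	unsigned long long qw1;
};

#define BATCH_MAX 16	/* hypothetical batch capacity */

struct inv_batch {
	struct inv_desc descs[BATCH_MAX];
	unsigned int index;
};

/* Stand-in for handing descriptors to hardware (e.g. writing them to
 * the invalidation queue); here it only reports what would be sent. */
static void submit_to_hw(struct inv_desc *descs, unsigned int count)
{
	(void)descs;
	printf("submitting %u invalidation descriptor(s) to hardware\n", count);
}

/* Queue one descriptor; submit the whole batch when it becomes full. */
static void batch_add(struct inv_batch *batch, struct inv_desc *desc)
{
	batch->descs[batch->index++] = *desc;
	if (batch->index == BATCH_MAX) {
		submit_to_hw(batch->descs, batch->index);
		batch->index = 0;
	}
}

/* Submit whatever is still pending.  Done before the flush routine
 * returns, so callers still observe that all caches are invalidated
 * by the time the call completes. */
static void batch_flush(struct inv_batch *batch)
{
	if (batch->index) {
		submit_to_hw(batch->descs, batch->index);
		batch->index = 0;
	}
}

int main(void)
{
	struct inv_batch batch = { .index = 0 };
	struct inv_desc desc = { 0, 0 };
	int i;

	/* One request per cache tag of the domain, e.g. three tags. */
	for (i = 0; i < 3; i++)
		batch_add(&batch, &desc);

	/* Final flush before "return": nothing is left deferred. */
	batch_flush(&batch);
	return 0;
}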
Thanks,
baolu