Message-ID: <1a2f66a2-a867-4203-8a76-dbced80bfeff@linux.intel.com>
Date: Wed, 3 Jul 2024 10:49:19 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: Jacob Pan <jacob.jun.pan@...ux.intel.com>
Cc: baolu.lu@...ux.intel.com, Joerg Roedel <joro@...tes.org>,
 Will Deacon <will@...nel.org>, Robin Murphy <robin.murphy@....com>,
 Jason Gunthorpe <jgg@...pe.ca>, Kevin Tian <kevin.tian@...el.com>,
 iommu@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/2] iommu/vt-d: Add helper to flush caches for context
 change

On 7/2/24 11:57 PM, Jacob Pan wrote:
> On Tue, 2 Jul 2024 12:43:41 +0800, Baolu Lu <baolu.lu@...ux.intel.com>
> wrote:
> 
>> On 2024/7/2 12:41, Jacob Pan wrote:
>>> On Mon,  1 Jul 2024 19:23:16 +0800, Lu Baolu <baolu.lu@...ux.intel.com>
>>> wrote:
>>>    
>>>> +	if (flush_domains) {
>>>> +		/*
>>>> +		 * If the IOMMU is running in scalable mode and there might
>>>> +		 * be potential PASID translations, the caller should hold
>>>> +		 * the lock to ensure that context changes and cache flushes
>>>> +		 * are atomic.
>>>> +		 */
>>>> +		assert_spin_locked(&iommu->lock);
>>>> +		for (i = 0; i < info->pasid_table->max_pasid; i++) {
>>>> +			pte = intel_pasid_get_entry(info->dev, i);
>>>> +			if (!pte || !pasid_pte_is_present(pte))
>>>> +				continue;
>>> Is it worth going through 1M PASIDs just to skip the PASID cache
>>> invalidation? Or just do the flush on all used DIDs unconditionally.
>> Currently we don't track all domains attached to a device. If such
>> optimization is necessary, perhaps we can add it later.
> I think it is necessary, because without tracking domain IDs, the code
> above would have duplicated invalidations.
> For example: a device PASID table has the following entries
> 	PASID	DomainID
> -------------------------
> 	100	1
> 	200	1
> 	300	2
> -------------------------
> When a present context entry changes, we need to do:
> qi_flush_pasid_cache(iommu, 1, QI_PC_ALL_PASIDS, 0);
> qi_flush_pasid_cache(iommu, 2, QI_PC_ALL_PASIDS, 0);
> 
> With this code, we do
> qi_flush_pasid_cache(iommu, 1, QI_PC_ALL_PASIDS, 0);
> qi_flush_pasid_cache(iommu, 1, QI_PC_ALL_PASIDS, 0);//duplicated!
> qi_flush_pasid_cache(iommu, 2, QI_PC_ALL_PASIDS, 0);

Yes, this is likely. But currently, enabling and disabling PRI happens in
the driver's probe and release paths, so such duplicated invalidations are
not critical.
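
If the duplication ever does become a concern, one stop-gap would be to
remember which DIDs have already been flushed while walking the PASID
table. Just a rough sketch, not part of this patch; it assumes the same
helpers used in the hunk above plus a pasid_get_domain_id()-style
accessor being visible at this point:

static void flush_present_dids(struct intel_iommu *iommu,
			       struct device_domain_info *info)
{
	struct pasid_entry *pte;
	unsigned long *flushed;
	u16 did;
	u32 i;

	assert_spin_locked(&iommu->lock);

	/* One bit per domain ID supported by this IOMMU. */
	flushed = bitmap_zalloc(cap_ndoms(iommu->cap), GFP_ATOMIC);
	if (!flushed)
		return;	/* could fall back to the unconditional loop */

	for (i = 0; i < info->pasid_table->max_pasid; i++) {
		pte = intel_pasid_get_entry(info->dev, i);
		if (!pte || !pasid_pte_is_present(pte))
			continue;

		did = pasid_get_domain_id(pte);
		if (test_and_set_bit(did, flushed))
			continue;	/* this DID was already invalidated */

		qi_flush_pasid_cache(iommu, did, QI_PC_ALL_PASIDS, 0);
	}

	bitmap_free(flushed);
}

The allocation would have to be GFP_ATOMIC because the caller holds
iommu->lock, which is also why a per-iommu scratch bitmap might be nicer.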

In the long term, I plan to abstract the domain ID into an object so that
domains attached to different PASIDs of a device can share a domain ID.
With that in place, we could improve this code by iterating over the
domain ID objects of a device and performing the cache invalidations
directly.
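
Very roughly, and with hypothetical names (struct did_object and the
per-device did_list do not exist today), the end state could look
something like:

/* One refcounted object per domain ID in use by a device. */
struct did_object {
	struct list_head link;	/* on device_domain_info's did_list */
	refcount_t	 users;	/* PASID entries referencing this DID */
	u16		 did;
};

static void flush_dids_for_device(struct intel_iommu *iommu,
				  struct device_domain_info *info)
{
	struct did_object *obj;

	assert_spin_locked(&iommu->lock);

	/* Walk the short per-device DID list instead of the PASID table. */
	list_for_each_entry(obj, &info->did_list, link)
		qi_flush_pasid_cache(iommu, obj->did, QI_PC_ALL_PASIDS, 0);
}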

Thanks,
baolu
