Message-ID: <MW5PR11MB5881E894D3452E372C78F90189BA2@MW5PR11MB5881.namprd11.prod.outlook.com>
Date: Fri, 9 Aug 2024 09:24:02 +0000
From: "Zhang, Tina" <tina.zhang@...el.com>
To: Baolu Lu <baolu.lu@...ux.intel.com>, "Tian, Kevin" <kevin.tian@...el.com>
CC: "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v2 5/5] vt-d/iommu: Enable batching of IOTLB/Dev-IOTLB
 invalidations

Hi Baolu,

> -----Original Message-----
> From: Baolu Lu <baolu.lu@...ux.intel.com>
> Sent: Friday, August 9, 2024 4:22 PM
> To: Zhang, Tina <tina.zhang@...el.com>; Tian, Kevin <kevin.tian@...el.com>
> Cc: baolu.lu@...ux.intel.com; iommu@...ts.linux.dev; linux-
> kernel@...r.kernel.org
> Subject: Re: [PATCH v2 5/5] vt-d/iommu: Enable batching of IOTLB/Dev-IOTLB
> invalidations
> 
> On 2024/8/9 10:54, Tina Zhang wrote:
> > +static inline void handle_batched_iotlb_descs(struct dmar_domain *domain,
> > +					 struct cache_tag *tag,
> > +					 unsigned long addr,
> > +					 unsigned long pages,
> > +					 unsigned long mask,
> > +					 int ih)
> > +{
> > +	struct intel_iommu *iommu = tag->iommu;
> > +
> > +	if (domain->use_first_level) {
> > +		qi_batch_add_piotlb_desc(iommu, tag->domain_id,
> > +					 tag->pasid, addr, pages,
> > +					 ih, domain->qi_batch);
> > +	} else {
> > +		/*
> > +		 * Fallback to domain selective flush if no
> > +		 * PSI support or the size is too big.
> > +		 */
> > +		if (!cap_pgsel_inv(iommu->cap) ||
> > +		    mask > cap_max_amask_val(iommu->cap) ||
> > +		    pages == -1)
> > +			qi_batch_add_iotlb_desc(iommu, tag->domain_id,
> > +						0, 0, DMA_TLB_DSI_FLUSH,
> > +						domain->qi_batch);
> > +		else
> > +			qi_batch_add_iotlb_desc(iommu, tag->domain_id,
> > +						addr | ih, mask,
> > +						DMA_TLB_PSI_FLUSH,
> > +						domain->qi_batch);
> > +	}
> > +
> > +}
> 
> What if the iommu driver is running on an early or emulated hardware where
> the queued invalidation is not supported?
Yes, this case has also been taken into consideration.

In this patch, domain->qi_batch will be NULL if the IOMMU doesn't support QI-based invalidations (i.e. iommu->qi is NULL), see:

-       if (type == CACHE_TAG_DEVTLB || type == CACHE_TAG_NESTING_DEVTLB)
+       if (type == CACHE_TAG_DEVTLB || type == CACHE_TAG_NESTING_DEVTLB) {
                tag->dev = dev;
-       else
+
+               if (!domain->qi_batch && iommu->qi)
+                       /*
+                        * It doesn't matter if domain->qi_batch is NULL, as in
+                        * this case the commands will be submitted individually.
+                        */
+                       domain->qi_batch = kzalloc(sizeof(struct qi_batch),
+                                                  GFP_KERNEL);
+       } else {
                tag->dev = iommu->iommu.dev;
+       }

Then, when the handle_batched_xxx() helpers are invoked, the logic introduced in this patch checks whether domain->qi_batch is valid before proceeding with batch processing; if it is NULL, the commands are submitted individually.
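
For illustration only (a rough sketch, not the actual patch code), a batched helper guarded this way could look roughly like the following, assuming a struct qi_batch with a descs[]/index pair and hypothetical qi_desc_piotlb()/qi_batch_increment_index() helpers; the non-batched path reuses the existing qi_flush_piotlb():

/*
 * Illustrative sketch only, not the actual patch code: the batched
 * helper checks whether a batch buffer was ever allocated.  If the
 * batch pointer (domain->qi_batch at the call site) is NULL, e.g.
 * because the IOMMU has no queued invalidation support, the flush is
 * submitted individually via qi_flush_piotlb(); otherwise the
 * descriptor is appended to the batch and flushed later.
 */
static void qi_batch_add_piotlb_desc(struct intel_iommu *iommu, u16 did,
				     u32 pasid, u64 addr,
				     unsigned long npages, bool ih,
				     struct qi_batch *batch)
{
	if (!batch) {
		/* No batch buffer: fall back to an individual flush. */
		qi_flush_piotlb(iommu, did, pasid, addr, npages, ih);
		return;
	}

	/* Batch buffer available: queue the descriptor for a later flush. */
	qi_desc_piotlb(did, pasid, addr, npages, ih,
		       &batch->descs[batch->index]);
	qi_batch_increment_index(iommu, batch);
}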

Regards,
-Tina
> 
> Thanks,
> baolu
