Message-ID: <MW5PR11MB58816BFEBC3C4F8F668F330089E92@MW5PR11MB5881.namprd11.prod.outlook.com>
Date: Mon, 20 May 2024 04:34:52 +0000
From: "Zhang, Tina" <tina.zhang@...el.com>
To: Baolu Lu <baolu.lu@...ux.intel.com>, "Tian, Kevin" <kevin.tian@...el.com>
CC: "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH 1/2] iommu/vt-d: Support batching IOTLB/dev-IOTLB
invalidation commands
> -----Original Message-----
> From: Baolu Lu <baolu.lu@...ux.intel.com>
> Sent: Sunday, May 19, 2024 5:43 PM
> To: Zhang, Tina <tina.zhang@...el.com>; Tian, Kevin <kevin.tian@...el.com>
> Cc: baolu.lu@...ux.intel.com; iommu@...ts.linux.dev; linux-kernel@...r.kernel.org
> Subject: Re: [PATCH 1/2] iommu/vt-d: Support batching IOTLB/dev-IOTLB
> invalidation commands
>
> On 5/17/24 8:37 AM, Tina Zhang wrote:
> > Introduce a new parameter batch_desc to the QI based IOTLB/dev-IOTLB
> > invalidation operations to support batching invalidation descriptors.
> > This batch_desc is a pointer to the descriptor entry in a batch cmds
> > buffer. If the batch_desc is NULL, it indicates that batch submission
> > is not being used, and descriptors will be submitted individually.
> >
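> > For illustration, the PASID-based IOTLB flush helper ends up with a
> > prototype roughly like the below (illustrative only; see the iommu.h
> > changes for the exact signatures):
> >
> > void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid,
> > 		     u64 addr, unsigned long npages, bool ih,
> > 		     struct qi_desc *batch_desc);
> >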
> > Also fix an issue reported by checkpatch about "unsigned mask":
> > "Prefer 'unsigned int' to bare use of 'unsigned'"
> >
> > Signed-off-by: Tina Zhang <tina.zhang@...el.com>
> > ---
> > drivers/iommu/intel/cache.c | 33 +++++++++++-------
> > drivers/iommu/intel/dmar.c | 67 ++++++++++++++++++++-----------------
> > drivers/iommu/intel/iommu.c | 27 +++++++++------
> > drivers/iommu/intel/iommu.h | 21 ++++++++----
> > drivers/iommu/intel/pasid.c | 20 ++++++-----
> > 5 files changed, 100 insertions(+), 68 deletions(-)
> >
> > diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
> > index e8418cdd8331..dcf5e0e6af17 100644
> > --- a/drivers/iommu/intel/cache.c
> > +++ b/drivers/iommu/intel/cache.c
> > @@ -278,7 +278,7 @@ void cache_tag_flush_range(struct dmar_domain *domain, unsigned long start,
> >  		case CACHE_TAG_NESTING_IOTLB:
> >  			if (domain->use_first_level) {
> >  				qi_flush_piotlb(iommu, tag->domain_id,
> > -						tag->pasid, addr, pages, ih);
> > +						tag->pasid, addr, pages, ih, NULL);
> >  			} else {
>
> I'd like to have all the batched descriptor code inside this file to
> make it easier to maintain. Perhaps we can add the below infrastructure
> in the dmar_domain structure, together with the cache tag:
>
> #define MAX_BATCHED_DESC_COUNT	8
>
> struct batched_desc {
> 	struct qi_desc desc[MAX_BATCHED_DESC_COUNT];
> 	unsigned int index;
> };
>
> diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
> index eaf015b4353b..dd458d6ad7ec 100644
> --- a/drivers/iommu/intel/iommu.h
> +++ b/drivers/iommu/intel/iommu.h
> @@ -610,6 +610,7 @@ struct dmar_domain {
>
>  	spinlock_t cache_lock;		/* Protect the cache tag list */
>  	struct list_head cache_tags;	/* Cache tag list */
> +	struct batched_desc *descs;	/* Batched qi descriptors */
>
>  	int iommu_superpage;	/* Level of superpages supported:
>  				   0 == 4KiB (no superpages), 1 == 2MiB,
>
> The batched descriptor structure is allocated when the first cache tag is
> assigned to this domain and freed when the last tag leaves.
>
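> An untested sketch of that lifecycle (the helper names here are made
> up for illustration):
>
> static int domain_alloc_batched_desc(struct dmar_domain *domain)
> {
> 	/* Called when the first cache tag is assigned to the domain. */
> 	domain->descs = kzalloc(sizeof(*domain->descs), GFP_KERNEL);
>
> 	return domain->descs ? 0 : -ENOMEM;
> }
>
> static void domain_free_batched_desc(struct dmar_domain *domain)
> {
> 	/* Called when the last cache tag leaves the domain. */
> 	kfree(domain->descs);
> 	domain->descs = NULL;
> }
>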
> Then, we have the below helpers to replace the current qi_flush_xxx() calls.
>
> static void batched_desc_iotlb_enqueue(...) {
> ... ...
> }
>
> and another one to push all queued commands to the hardware,
>
> static void batched_desc_flush(...)
> {
> ... ...
> }
>
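> Untested sketch of what those two could look like (DR/DW/IH handling
> and any locking are elided; qi_submit_sync() already takes a
> descriptor count):
>
> static void batched_desc_flush(struct dmar_domain *domain,
> 			       struct intel_iommu *iommu)
> {
> 	struct batched_desc *batch = domain->descs;
>
> 	if (!batch->index)
> 		return;
>
> 	/* Submit all queued descriptors to the hardware at once. */
> 	qi_submit_sync(iommu, batch->desc, batch->index, 0);
> 	batch->index = 0;
> }
>
> static void batched_desc_iotlb_enqueue(struct dmar_domain *domain,
> 				       struct intel_iommu *iommu,
> 				       u16 did, u64 addr,
> 				       unsigned int mask, u64 type)
> {
> 	struct batched_desc *batch = domain->descs;
> 	struct qi_desc *desc = &batch->desc[batch->index];
>
> 	desc->qw0 = QI_IOTLB_DID(did) | QI_IOTLB_GRAN(type) | QI_IOTLB_TYPE;
> 	desc->qw1 = QI_IOTLB_ADDR(addr) | QI_IOTLB_AM(mask);
> 	desc->qw2 = 0;
> 	desc->qw3 = 0;
>
> 	/* Push everything to hardware once the local buffer is full. */
> 	if (++batch->index == MAX_BATCHED_DESC_COUNT)
> 		batched_desc_flush(domain, iommu);
> }
>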
> >  				/*
> >  				 * Fallback to domain selective flush if no
> > @@ -287,11 +287,13 @@ void cache_tag_flush_range(struct dmar_domain *domain, unsigned long start,
> >  			if (!cap_pgsel_inv(iommu->cap) ||
> >  			    mask > cap_max_amask_val(iommu->cap))
> >  				iommu->flush.flush_iotlb(iommu, tag->domain_id,
> > -							 0, 0, DMA_TLB_DSI_FLUSH);
> > +							 0, 0, DMA_TLB_DSI_FLUSH,
> > +							 NULL);
> >  			else
> >  				iommu->flush.flush_iotlb(iommu, tag->domain_id,
> >  							 addr | ih, mask,
> > -							 DMA_TLB_PSI_FLUSH);
> > +							 DMA_TLB_PSI_FLUSH,
> > +							 NULL);
>
> Perhaps we could refactor the indirect call here:
>
> 	if (qi is supported by the iommu)
> 		...
> 	else	/* register-based iotlb invalidation */
> 		...
>
> and batched descriptors are only supported in the former case.
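> Roughly, and untested (with __iommu_flush_iotlb() standing in for the
> register-based path):
>
> 	if (iommu->qi) {
> 		/* Queued invalidation: descriptors can be batched. */
> 		qi_flush_iotlb(iommu, tag->domain_id, addr | ih, mask,
> 			       DMA_TLB_PSI_FLUSH, batch_desc);
> 	} else {
> 		/* Register-based invalidation is always synchronous. */
> 		__iommu_flush_iotlb(iommu, tag->domain_id, addr | ih, mask,
> 				    DMA_TLB_PSI_FLUSH);
> 	}
>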
Sure. Thanks.
Regards,
-Tina
>
> Best regards,
> baolu