Message-ID: <20250716120817.GY2067380@nvidia.com>
Date: Wed, 16 Jul 2025 09:08:17 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Baolu Lu <baolu.lu@...ux.intel.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>, Robin Murphy <robin.murphy@....com>,
Kevin Tian <kevin.tian@...el.com>, Jann Horn <jannh@...gle.com>,
Vasant Hegde <vasant.hegde@....com>,
Dave Hansen <dave.hansen@...el.com>,
Alistair Popple <apopple@...dia.com>,
Uladzislau Rezki <urezki@...il.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Andy Lutomirski <luto@...nel.org>,
"Tested-by : Yi Lai" <yi1.lai@...el.com>, iommu@...ts.linux.dev,
security@...nel.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH v2 1/1] iommu/sva: Invalidate KVA range on kernel TLB
flush
On Wed, Jul 16, 2025 at 02:34:04PM +0800, Baolu Lu wrote:
> > > @@ -654,6 +656,9 @@ struct iommu_ops {
> > >
> > > int (*def_domain_type)(struct device *dev);
> > >
> > > + void (*paging_cache_invalidate)(struct iommu_device *dev,
> > > + unsigned long start, unsigned long end);
> >
> > How would you even implement this in a driver?
> >
> > You either flush the whole iommu, in which case who needs a range, or
> > the driver has to iterate over the PASID list, in which case it
> > doesn't really improve the situation.
>
> The Intel iommu driver supports flushing all SVA PASIDs with a single
> request in the invalidation queue.

How? All PASIDs != 0? The HW has no notion of an SVA PASID vs anything
else, so this is just flushing almost everything.

> > If this is a concern I think the better answer is to do a deferred free
> > like the mm can sometimes do, where we thread the page tables onto a
> > linked list, flush the CPU cache, and push it all into a work which
> > will do the iommu flush before actually freeing the memory.
>
> Is it a workable solution to use schedule_work() to queue the KVA cache
> invalidation as a work item in the system workqueue? By doing so, we
> wouldn't need the spinlock to protect the list anymore.

Maybe.

The MM is also more careful to pull the invalidation out of some of the
locks; I don't know what the KVA side is like.

Jason
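
For reference, the deferred-free approach discussed above could take
roughly the shape sketched below. This is only an illustration of the
llist + schedule_work() idea, and every name in it (kva_defer_free,
kva_defer_free_page(), iommu_sva_invalidate_kva_range(), ...) is an
assumption for the sake of the example, not the actual patch:

/* Rough sketch only -- names and helpers are hypothetical. */
#include <linux/llist.h>
#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/mm.h>

/* One entry per kernel page-table page whose free is deferred. */
struct kva_defer_free {
	struct llist_node node;
	unsigned long start;
	unsigned long end;
	struct page *pt_page;
};

static LLIST_HEAD(kva_defer_list);

static void kva_defer_work_fn(struct work_struct *work)
{
	struct llist_node *list = llist_del_all(&kva_defer_list);
	struct kva_defer_free *item, *next;

	llist_for_each_entry_safe(item, next, list, node) {
		/* Flush the IOTLB before the page-table page is reused. */
		iommu_sva_invalidate_kva_range(item->start, item->end);
		__free_page(item->pt_page);
		kfree(item);
	}
}
static DECLARE_WORK(kva_defer_work, kva_defer_work_fn);

/* Called instead of freeing a kernel page-table page directly. */
static void kva_defer_free_page(struct page *pt_page,
				unsigned long start, unsigned long end)
{
	struct kva_defer_free *item = kmalloc(sizeof(*item), GFP_ATOMIC);

	if (!item) {
		/* Allocation failed: fall back to a synchronous flush. */
		iommu_sva_invalidate_kva_range(start, end);
		__free_page(pt_page);
		return;
	}

	item->start = start;
	item->end = end;
	item->pt_page = pt_page;
	llist_add(&item->node, &kva_defer_list);
	schedule_work(&kva_defer_work);
}

The llist keeps the producer side lock-free, which is what would make the
spinlock unnecessary; the work item then performs the IOTLB flushes outside
any critical section before the pages go back to the allocator.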