Message-ID: <20240510132928.GS4650@nvidia.com>
Date: Fri, 10 May 2024 10:29:28 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Yan Zhao <yan.y.zhao@...el.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org, x86@...nel.org,
alex.williamson@...hat.com, kevin.tian@...el.com,
iommu@...ts.linux.dev, pbonzini@...hat.com, seanjc@...gle.com,
dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, hpa@...or.com,
corbet@....net, joro@...tes.org, will@...nel.org,
robin.murphy@....com, baolu.lu@...ux.intel.com, yi.l.liu@...el.com
Subject: Re: [PATCH 5/5] iommufd: Flush CPU caches on DMA pages in
non-coherent domains

On Fri, May 10, 2024 at 04:03:04PM +0800, Yan Zhao wrote:
> > > @@ -1358,10 +1377,17 @@ int iopt_area_fill_domain(struct iopt_area *area, struct iommu_domain *domain)
> > > {
> > > unsigned long done_end_index;
> > > struct pfn_reader pfns;
> > > + bool cache_flush_required;
> > > int rc;
> > >
> > > lockdep_assert_held(&area->pages->mutex);
> > >
> > > + cache_flush_required = area->iopt->noncoherent_domain_cnt &&
> > > + !area->pages->cache_flush_required;
> > > +
> > > + if (cache_flush_required)
> > > + area->pages->cache_flush_required = true;
> > > +
> > > rc = pfn_reader_first(&pfns, area->pages, iopt_area_index(area),
> > > iopt_area_last_index(area));
> > > if (rc)
> > > @@ -1369,6 +1395,9 @@ int iopt_area_fill_domain(struct iopt_area *area, struct iommu_domain *domain)
> > >
> > > while (!pfn_reader_done(&pfns)) {
> > > done_end_index = pfns.batch_start_index;
> > > + if (cache_flush_required)
> > > + iopt_cache_flush_pfn_batch(&pfns.batch);
> > > +
> >
> > This is a bit unfortunate; it means we are going to flush for every
> > domain, even though it is not required. I don't see any easy way out
> > of that :(

> Yes. Do you think it's possible to add an op get_cache_coherency_enforced
> to iommu_domain_ops?
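
For reference, I take the suggestion to be something like the sketch
below -- the callback name and signature are hypothetical, nothing
like this exists in iommu_domain_ops today:

struct iommu_domain;

/*
 * Hypothetical sketch only: an op reporting whether the domain
 * forces coherent DMA (e.g. by setting the snoop bit in its page
 * table entries), so iommufd could skip CPU cache flushes for
 * pages mapped into it.
 */
struct iommu_domain_ops {
	/* ... existing callbacks ... */

	bool (*get_cache_coherency_enforced)(struct iommu_domain *domain);
};

(If I'm reading the existing code right, the closest thing today is
the enforce_cache_coherency() op, which enables enforcement rather
than reporting it.)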

Do we need that? The hwpt already keeps track of that; the enforced
flag could be copied into the area alongside storage_domain.

Then I guess you could avoid flushing in the case where the page came
from the storage_domain...

You'd want the storage_domain to preferentially point to any
non-enforced domain.
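
Sketched out, with invented names (storage_domain_enforced and
pfns_from_storage_domain don't exist, this is just to illustrate
the idea):

/*
 * Skip the flush when the PFNs were read back out of a non-enforced
 * storage domain, since they were already flushed when they were
 * first mapped into it.
 */
static bool area_needs_cache_flush(struct iopt_area *area,
				   bool pfns_from_storage_domain)
{
	/* No non-coherent domain attached: nothing to flush. */
	if (!area->iopt->noncoherent_domain_cnt)
		return false;

	/*
	 * PFNs sourced from a non-enforced storage domain were
	 * flushed before that domain was filled; no need to flush
	 * them again for another non-coherent domain.
	 */
	if (pfns_from_storage_domain && !area->storage_domain_enforced)
		return false;

	return true;
}

Preferring a non-enforced domain for storage_domain maximizes how
often that second test fires.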

Is it worth it? How slow is this stuff?

Jason