Message-ID: <20250429060918.GK5848@unreal>
Date: Tue, 29 Apr 2025 09:09:18 +0300
From: Leon Romanovsky <leon@...nel.org>
To: Baolu Lu <baolu.lu@...ux.intel.com>
Cc: Marek Szyprowski <m.szyprowski@...sung.com>,
Jens Axboe <axboe@...nel.dk>, Christoph Hellwig <hch@....de>,
Keith Busch <kbusch@...nel.org>, Jake Edge <jake@....net>,
Jonathan Corbet <corbet@....net>, Jason Gunthorpe <jgg@...pe.ca>,
Zhu Yanjun <zyjzyj2000@...il.com>,
Robin Murphy <robin.murphy@....com>, Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>, Sagi Grimberg <sagi@...mberg.me>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Logan Gunthorpe <logang@...tatee.com>,
Yishai Hadas <yishaih@...dia.com>,
Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>,
Kevin Tian <kevin.tian@...el.com>,
Alex Williamson <alex.williamson@...hat.com>,
Jérôme Glisse <jglisse@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-block@...r.kernel.org, linux-rdma@...r.kernel.org,
iommu@...ts.linux.dev, linux-nvme@...ts.infradead.org,
linux-pci@...r.kernel.org, kvm@...r.kernel.org, linux-mm@...ck.org,
Niklas Schnelle <schnelle@...ux.ibm.com>,
Chuck Lever <chuck.lever@...cle.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Dan Williams <dan.j.williams@...el.com>,
Kanchan Joshi <joshi.k@...sung.com>,
Chaitanya Kulkarni <kch@...dia.com>,
Jason Gunthorpe <jgg@...dia.com>
Subject: Re: [PATCH v10 03/24] iommu: generalize the batched sync after map interface

On Tue, Apr 29, 2025 at 10:19:46AM +0800, Baolu Lu wrote:
> On 4/28/25 17:22, Leon Romanovsky wrote:
> > From: Christoph Hellwig <hch@....de>
> >
> > For the upcoming IOVA-based DMA API we want to batch the
> > ops->iotlb_sync_map() call after mapping multiple IOVAs from
> > dma-iommu without having a scatterlist. Improve the API.
> >
> > Add iommu_sync_map() as a wrapper for the iotlb_sync_map op so that
> > callers don't need to poke into the methods directly.
> >
> > Formalize __iommu_map() into iommu_map_nosync() which requires the
> > caller to call iommu_sync_map() after all maps are completed.
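
[ In other words, the intended usage pattern becomes something like the
  sketch below; map_batch() and its parameters are made up for
  illustration, only iommu_map_nosync() and iommu_sync_map() come from
  this patch, and error unwinding is omitted:

	static int map_batch(struct iommu_domain *domain, unsigned long iova,
			     phys_addr_t *pas, unsigned int count, size_t len)
	{
		unsigned int i;
		int ret;

		for (i = 0; i < count; i++) {
			/* Map each chunk without flushing the IOTLB yet. */
			ret = iommu_map_nosync(domain, iova + i * len, pas[i],
					       len, IOMMU_READ | IOMMU_WRITE,
					       GFP_KERNEL);
			if (ret)
				return ret;
		}

		/* One batched sync for the whole range, not one per map. */
		return iommu_sync_map(domain, iova, count * len);
	}
]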
> >
> > Refactor the existing sanity checks from all the different layers
> > into iommu_map_nosync().
> >
> > Signed-off-by: Christoph Hellwig <hch@....de>
> > Acked-by: Will Deacon <will@...nel.org>
> > Tested-by: Jens Axboe <axboe@...nel.dk>
> > Reviewed-by: Jason Gunthorpe <jgg@...dia.com>
> > Reviewed-by: Luis Chamberlain <mcgrof@...nel.org>
> > Signed-off-by: Leon Romanovsky <leonro@...dia.com>
> > ---
> > drivers/iommu/iommu.c | 65 +++++++++++++++++++------------------------
> > include/linux/iommu.h | 4 +++
> > 2 files changed, 33 insertions(+), 36 deletions(-)
> >
> > diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> > index 4f91a740c15f..02960585b8d4 100644
> > --- a/drivers/iommu/iommu.c
> > +++ b/drivers/iommu/iommu.c
> > @@ -2443,8 +2443,8 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
> > return pgsize;
> > }
> > -static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> > - phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> > +int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> > + phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> > {
> > const struct iommu_domain_ops *ops = domain->ops;
> > unsigned long orig_iova = iova;
> > @@ -2453,12 +2453,19 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> > phys_addr_t orig_paddr = paddr;
> > int ret = 0;
> > + might_sleep_if(gfpflags_allow_blocking(gfp));
> > +
> > if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
> > return -EINVAL;
> > if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL))
> > return -ENODEV;
> > + /* Discourage passing strange GFP flags */
> > + if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
> > + __GFP_HIGHMEM)))
> > + return -EINVAL;
> > +
> > /* find out the minimum page size supported */
> > min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
> > @@ -2506,31 +2513,27 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> > return ret;
> > }
> > -int iommu_map(struct iommu_domain *domain, unsigned long iova,
> > - phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> > +int iommu_sync_map(struct iommu_domain *domain, unsigned long iova, size_t size)
> > {
> > const struct iommu_domain_ops *ops = domain->ops;
> > - int ret;
> > -
> > - might_sleep_if(gfpflags_allow_blocking(gfp));
> > - /* Discourage passing strange GFP flags */
> > - if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
> > - __GFP_HIGHMEM)))
> > - return -EINVAL;
> > + if (!ops->iotlb_sync_map)
> > + return 0;
> > + return ops->iotlb_sync_map(domain, iova, size);
> > +}
>
> I am wondering whether iommu_sync_map() needs a return value. The
> purpose of this callback is just to sync the TLB cache after new
> mappings are created, which should effectively be a no-fail operation.
>
> The int return type in the definition of iotlb_sync_map in struct
> iommu_domain_ops seems unnecessary:
>
> struct iommu_domain_ops {
> 	...
> 	int (*iotlb_sync_map)(struct iommu_domain *domain,
> 			      unsigned long iova, size_t size);
> 	...
> };
>
> Furthermore, currently no iommu driver implements this callback in a way
> that returns a failure. We could clean up the iommu definition in a
> subsequent patch series, but for this driver-facing interface, it's
> better to get it right from the beginning.
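
[ Presumably the cleanup being suggested would look something like this
  sketch (not part of this series), with iommu_sync_map() becoming void
  as well:

	struct iommu_domain_ops {
		...
		void (*iotlb_sync_map)(struct iommu_domain *domain,
				       unsigned long iova, size_t size);
		...
	};
]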
I see that the s390 driver relies on the return value; even its
zpci_refresh_all() fallback path can fail, and that error has to
reach the caller:
static int s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
				     unsigned long iova, size_t size)
{
<...>
	ret = zpci_refresh_trans((u64)zdev->fh << 32,
				 iova, size);
	/*
	 * let the hypervisor discover invalidated entries
	 * allowing it to free IOVAs and unpin pages
	 */
	if (ret == -ENOMEM) {
		ret = zpci_refresh_all(zdev);
		if (ret)
			break;
	}
<...>
	return ret;
}
>
> Thanks,
> baolu