Message-ID: <aSb8yH6fSlwk1oZZ@google.com>
Date: Wed, 26 Nov 2025 13:12:40 +0000
From: Pranjal Shrivastava <praan@...gle.com>
To: Alex Mastro <amastro@...com>
Cc: Leon Romanovsky <leon@...nel.org>, Bjorn Helgaas <bhelgaas@...gle.com>,
Logan Gunthorpe <logang@...tatee.com>, Jens Axboe <axboe@...nel.dk>,
Robin Murphy <robin.murphy@....com>, Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Jason Gunthorpe <jgg@...pe.ca>,
Andrew Morton <akpm@...ux-foundation.org>,
Jonathan Corbet <corbet@....net>,
Sumit Semwal <sumit.semwal@...aro.org>,
Christian König <christian.koenig@....com>,
Kees Cook <kees@...nel.org>,
"Gustavo A. R. Silva" <gustavoars@...nel.org>,
Ankit Agrawal <ankita@...dia.com>,
Yishai Hadas <yishaih@...dia.com>,
Shameer Kolothum <skolothumtho@...dia.com>,
Kevin Tian <kevin.tian@...el.com>,
Alex Williamson <alex@...zbot.org>,
Krishnakant Jaju <kjaju@...dia.com>, Matt Ochs <mochs@...dia.com>,
linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-block@...r.kernel.org, iommu@...ts.linux.dev,
linux-mm@...ck.org, linux-doc@...r.kernel.org,
linux-media@...r.kernel.org, dri-devel@...ts.freedesktop.org,
linaro-mm-sig@...ts.linaro.org, kvm@...r.kernel.org,
linux-hardening@...r.kernel.org, Nicolin Chen <nicolinc@...dia.com>,
Jason Gunthorpe <jgg@...dia.com>
Subject: Re: [PATCH v9 06/11] dma-buf: provide phys_vec to scatter-gather
mapping routine
On Tue, Nov 25, 2025 at 04:18:03PM -0800, Alex Mastro wrote:
> On Thu, Nov 20, 2025 at 11:28:25AM +0200, Leon Romanovsky wrote:
> > +static struct scatterlist *fill_sg_entry(struct scatterlist *sgl, size_t length,
> > +					 dma_addr_t addr)
> > +{
> > +	unsigned int len, nents;
> > +	int i;
> > +
> > +	nents = DIV_ROUND_UP(length, UINT_MAX);
> > +	for (i = 0; i < nents; i++) {
> > +		len = min_t(size_t, length, UINT_MAX);
> > +		length -= len;
> > +		/*
> > +		 * DMABUF abuses scatterlist to create a scatterlist
> > +		 * that does not have any CPU list, only the DMA list.
> > +		 * Always set the page related values to NULL to ensure
> > +		 * importers can't use it. The phys_addr based DMA API
> > +		 * does not require the CPU list for mapping or unmapping.
> > +		 */
> > +		sg_set_page(sgl, NULL, 0, 0);
> > +		sg_dma_address(sgl) = addr + i * UINT_MAX;
>
> (i * UINT_MAX) is evaluated in 32-bit unsigned arithmetic before being
> promoted to dma_addr_t for the addition with addr, so it overflows for
> i >= 2, i.e. once length >= 8 GiB. Needs a cast:
>
>	sg_dma_address(sgl) = addr + (dma_addr_t)i * UINT_MAX;
>
> Discovered this while debugging why dma-buf import was failing for
> an 8 GiB dma-buf using my earlier toy program [1]. It surfaced as
> ib_umem_find_best_pgsz() returning 0 due to the malformed scatterlist,
> which bubbles up as EINVAL.
>
Thanks a lot for testing & reporting this!
However, I believe the casting approach is a little fragile (and
potentially prone to issues depending on how dma_addr_t is sized on
different platforms). Approaching this with accumulation seems better,
as it avoids the multiplication entirely; maybe something like the
following (untested) diff?
--- a/drivers/dma-buf/dma-buf-mapping.c
+++ b/drivers/dma-buf/dma-buf-mapping.c
@@ -252,14 +252,14 @@ static struct scatterlist *fill_sg_entry(struct scatterlist *sgl, size_t length,
 	nents = DIV_ROUND_UP(length, UINT_MAX);
 	for (i = 0; i < nents; i++) {
 		len = min_t(size_t, length, UINT_MAX);
-		length -= len;
 		/*
 		 * DMABUF abuses scatterlist to create a scatterlist
 		 * that does not have any CPU list, only the DMA list.
 		 * Always set the page related values to NULL to ensure
 		 * importers can't use it. The phys_addr based DMA API
 		 * does not require the CPU list for mapping or unmapping.
 		 */
 		sg_set_page(sgl, NULL, 0, 0);
-		sg_dma_address(sgl) = addr + i * UINT_MAX;
+		sg_dma_address(sgl) = addr;
 		sg_dma_len(sgl) = len;
+
+		addr += len;
+		length -= len;
 		sgl = sg_next(sgl);
 	}
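
For reference, here's a minimal userspace sketch (not kernel code; the
64-bit dma_addr_t typedef and the concrete values are just assumptions
for the demo) showing how the unpromoted i * UINT_MAX wraps at 32 bits,
while both the cast and the per-iteration accumulation keep the
arithmetic in 64 bits:

	#include <limits.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in for a 64-bit dma_addr_t configuration (assumption). */
	typedef uint64_t dma_addr_t;

	int main(void)
	{
		dma_addr_t addr = 0x100000000ULL; /* arbitrary base */
		int i = 2; /* third UINT_MAX-sized chunk, i.e. length >= 8 GiB */

		/* i is converted to unsigned int, so the product wraps mod 2^32. */
		dma_addr_t wrapped = addr + i * UINT_MAX;

		/* Casting first widens the multiplication to 64 bits... */
		dma_addr_t with_cast = addr + (dma_addr_t)i * UINT_MAX;

		/* ...and accumulating per chunk needs no multiplication at all. */
		dma_addr_t accumulated = addr;
		for (int k = 0; k < i; k++)
			accumulated += UINT_MAX;

		printf("wrapped:     %#llx\n", (unsigned long long)wrapped);
		printf("with_cast:   %#llx\n", (unsigned long long)with_cast);
		printf("accumulated: %#llx\n", (unsigned long long)accumulated);
		return 0;
	}

Either form fixes the >= 8 GiB case; the accumulation just removes the
need to reason about the integer promotion at all.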
Thanks,
Praan