Message-ID: <20250902232457.GC470103@nvidia.com>
Date: Tue, 2 Sep 2025 20:24:57 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Keith Busch <kbusch@...nel.org>
Cc: Marek Szyprowski <m.szyprowski@...sung.com>,
Leon Romanovsky <leon@...nel.org>,
Leon Romanovsky <leonro@...dia.com>,
Abdiel Janulgue <abdiel.janulgue@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Alex Gaynor <alex.gaynor@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Hellwig <hch@....de>, Danilo Krummrich <dakr@...nel.org>,
iommu@...ts.linux.dev, Jason Wang <jasowang@...hat.com>,
Jens Axboe <axboe@...nel.dk>, Joerg Roedel <joro@...tes.org>,
Jonathan Corbet <corbet@....net>, Juergen Gross <jgross@...e.com>,
kasan-dev@...glegroups.com, linux-block@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-nvme@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, linux-trace-kernel@...r.kernel.org,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Michael Ellerman <mpe@...erman.id.au>,
"Michael S. Tsirkin" <mst@...hat.com>,
Miguel Ojeda <ojeda@...nel.org>,
Robin Murphy <robin.murphy@....com>, rust-for-linux@...r.kernel.org,
Sagi Grimberg <sagi@...mberg.me>,
Stefano Stabellini <sstabellini@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
virtualization@...ts.linux.dev, Will Deacon <will@...nel.org>,
xen-devel@...ts.xenproject.org
Subject: Re: [PATCH v4 14/16] block-dma: migrate to dma_map_phys instead of
map_page
On Tue, Sep 02, 2025 at 03:59:37PM -0600, Keith Busch wrote:
> On Tue, Sep 02, 2025 at 10:49:48PM +0200, Marek Szyprowski wrote:
> > On 19.08.2025 19:36, Leon Romanovsky wrote:
> > > @@ -87,8 +87,8 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
> > >  static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
> > >  		struct blk_dma_iter *iter, struct phys_vec *vec)
> > >  {
> > > -	iter->addr = dma_map_page(dma_dev, phys_to_page(vec->paddr),
> > > -			offset_in_page(vec->paddr), vec->len, rq_dma_dir(req));
> > > +	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
> > > +			rq_dma_dir(req), 0);
> > >  	if (dma_mapping_error(dma_dev, iter->addr)) {
> > >  		iter->status = BLK_STS_RESOURCE;
> > >  		return false;
> >
> > I wonder where the corresponding dma_unmap_page() call is, and where
> > its change to dma_unmap_phys() happens...
>
> You can't do that in the generic layer, so it's up to the caller. The
> dma addrs that blk_dma_iter yields are used in a caller-specific
> structure. For example, for NVMe, they go into NVMe PRPs. The generic
> layer doesn't know what that is, so the driver has to provide the
> unmapping.
To be specific, I think it is this hunk in another patch that matches
the above:
@@ -682,11 +682,15 @@ static void nvme_free_prps(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+	unsigned int attrs = 0;
 	unsigned int i;
 
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
+
 	for (i = 0; i < iod->nr_dma_vecs; i++)
-		dma_unmap_page(nvmeq->dev->dev, iod->dma_vecs[i].addr,
-			       iod->dma_vecs[i].len, rq_dma_dir(req));
+		dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
+			       iod->dma_vecs[i].len, rq_dma_dir(req), attrs);
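
In sketch form, the split of responsibilities looks like this (an
illustrative sketch only, assuming the blk_rq_dma_map_iter_* helpers
from this series, with the NVMe-style dma_vecs[] bookkeeping standing
in for any driver-private structure, and ignoring the IOVA-coalesced
and P2P bus-address paths for brevity):

	/* generic layer: each iteration maps one phys_vec via dma_map_phys() */
	if (blk_rq_dma_map_iter_start(req, dma_dev, &state, &iter)) {
		do {
			iod->dma_vecs[i].addr = iter.addr;
			iod->dma_vecs[i].len = iter.len;
			i++;
		} while (blk_rq_dma_map_iter_next(req, dma_dev, &state, &iter));
	}

	/* driver: only it knows where the addresses were recorded, so it
	 * walks its own structure to unmap */
	for (i = 0; i < iod->nr_dma_vecs; i++)
		dma_unmap_phys(dma_dev, iod->dma_vecs[i].addr,
			       iod->dma_vecs[i].len, rq_dma_dir(req), 0);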
And it is functionally fine to split the series like this, because
dma_unmap_page() is a thin pass-through to dma_unmap_phys():
void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
		enum dma_data_direction dir, unsigned long attrs)
{
	if (unlikely(attrs & DMA_ATTR_MMIO))
		return;
	dma_unmap_phys(dev, addr, size, dir, attrs);
}
EXPORT_SYMBOL(dma_unmap_page_attrs);
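
So during the transition a caller that maps with the new API but still
unmaps with the old helper behaves identically; schematically (a
minimal sketch with placeholder variables, not code from the series):

	/* map side already converted (block-dma hunk above) */
	dma_addr_t addr = dma_map_phys(dma_dev, paddr, len, dir, 0);

	/* unmap side not converted yet: dma_unmap_page() expands to
	 * dma_unmap_page_attrs(..., attrs = 0), which forwards straight
	 * to dma_unmap_phys() because DMA_ATTR_MMIO is not set */
	dma_unmap_page(dma_dev, addr, len, dir);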
Jason