Message-ID: <aLDCC4rXcIKF8sRg@kbusch-mbp>
Date: Thu, 28 Aug 2025 14:54:35 -0600
From: Keith Busch <kbusch@...nel.org>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Leon Romanovsky <leon@...nel.org>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Abdiel Janulgue <abdiel.janulgue@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Alex Gaynor <alex.gaynor@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Hellwig <hch@....de>, Danilo Krummrich <dakr@...nel.org>,
iommu@...ts.linux.dev, Jason Wang <jasowang@...hat.com>,
Jens Axboe <axboe@...nel.dk>, Joerg Roedel <joro@...tes.org>,
Jonathan Corbet <corbet@....net>, Juergen Gross <jgross@...e.com>,
kasan-dev@...glegroups.com, linux-block@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-nvme@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, linux-trace-kernel@...r.kernel.org,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Michael Ellerman <mpe@...erman.id.au>,
"Michael S. Tsirkin" <mst@...hat.com>,
Miguel Ojeda <ojeda@...nel.org>,
Robin Murphy <robin.murphy@....com>, rust-for-linux@...r.kernel.org,
Sagi Grimberg <sagi@...mberg.me>,
Stefano Stabellini <sstabellini@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
virtualization@...ts.linux.dev, Will Deacon <will@...nel.org>,
xen-devel@...ts.xenproject.org
Subject: Re: [PATCH v4 15/16] block-dma: properly take MMIO path
On Thu, Aug 28, 2025 at 04:18:20PM -0300, Jason Gunthorpe wrote:
> On Thu, Aug 28, 2025 at 01:10:32PM -0600, Keith Busch wrote:
> >
> > Data and metadata are mapped as separate operations. They're just
> > different parts of one blk-mq request.
>
> In that case the new bit leon proposes should only be used for the
> unmap of the data pages and the metadata unmap should always be
> unmapped as CPU?
The common path uses host allocated memory to attach integrity metadata,
but that isn't the only path. A user can attach their own metadata with
nvme passthrough or the recent io_uring application metadata, and that
could have been allocated from anywhere.
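To illustrate the point about caller-supplied metadata, here is a small
userspace sketch. The struct is a local stand-in that only mirrors the
shape of the NVMe passthrough UAPI (it is not the real
<linux/nvme_ioctl.h> layout): the caller hands the kernel independent
data and metadata pointers, and either one may point at host RAM or at
p2p (device) memory, which is why the two are mapped as separate DMA
operations.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the passthrough command layout. */
struct pt_cmd_sketch {
	uint8_t  opcode;
	uint32_t nsid;
	uint64_t addr;          /* data buffer: host or p2p memory */
	uint32_t data_len;
	uint64_t metadata;      /* integrity buffer: host or p2p memory */
	uint32_t metadata_len;
};

/*
 * Fill a read command whose data and metadata buffers were allocated
 * independently; the kernel maps each buffer with its own DMA
 * operation, so their memory types need not match.
 */
static void pt_cmd_fill(struct pt_cmd_sketch *cmd,
			void *data, uint32_t data_len,
			void *meta, uint32_t meta_len)
{
	memset(cmd, 0, sizeof(*cmd));
	cmd->opcode = 0x02;     /* NVMe read */
	cmd->nsid = 1;
	cmd->addr = (uint64_t)(uintptr_t)data;
	cmd->data_len = data_len;
	cmd->metadata = (uint64_t)(uintptr_t)meta;
	cmd->metadata_len = meta_len;
}
```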
In truth though, I hadn't tried p2p metadata before today, and it looks
like bio_integrity_map_user() is missing the P2P extraction flag needed
to make that work. I've added the patch below; now I can set p2p or host
memory independently for the data and integrity payloads:
---
diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index 6b077ca937f6b..cf45603e378d5 100644
--- a/block/bio-integrity.c
+++ b/block/bio-integrity.c
@@ -265,6 +265,7 @@ int bio_integrity_map_user(struct bio *bio, struct iov_iter *iter)
unsigned int align = blk_lim_dma_alignment_and_pad(&q->limits);
struct page *stack_pages[UIO_FASTIOV], **pages = stack_pages;
struct bio_vec stack_vec[UIO_FASTIOV], *bvec = stack_vec;
+ iov_iter_extraction_t extraction_flags = 0;
size_t offset, bytes = iter->count;
unsigned int nr_bvecs;
int ret, nr_vecs;
@@ -286,7 +287,12 @@ int bio_integrity_map_user(struct bio *bio, struct iov_iter *iter)
}
copy = !iov_iter_is_aligned(iter, align, align);
- ret = iov_iter_extract_pages(iter, &pages, bytes, nr_vecs, 0, &offset);
+
+ if (blk_queue_pci_p2pdma(q))
+ extraction_flags |= ITER_ALLOW_P2PDMA;
+
+ ret = iov_iter_extract_pages(iter, &pages, bytes, nr_vecs,
+ extraction_flags, &offset);
if (unlikely(ret < 0))
goto free_bvec;
--
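For reference, the hunk's flag selection can be modeled in a few lines
of userspace C. The constant below is a stand-in (the real
ITER_ALLOW_P2PDMA lives in include/linux/uio.h): extraction of p2p
pages is only permitted when the queue advertises the capability, so
integrity bios on queues without it keep extracting plain host pages.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for ITER_ALLOW_P2PDMA; value chosen for illustration only. */
#define SKETCH_ALLOW_P2PDMA (1u << 0)

/* Mirrors the patched bio_integrity_map_user() logic: enable p2p page
 * extraction only when the queue supports it (blk_queue_pci_p2pdma). */
static unsigned int integrity_extraction_flags(bool queue_has_p2pdma)
{
	unsigned int flags = 0;

	if (queue_has_p2pdma)
		flags |= SKETCH_ALLOW_P2PDMA;
	return flags;
}
```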