Message-ID: <20250422072606.GC48485@unreal>
Date: Tue, 22 Apr 2025 10:26:06 +0300
From: Leon Romanovsky <leon@...nel.org>
To: Christoph Hellwig <hch@....de>
Cc: Marek Szyprowski <m.szyprowski@...sung.com>,
Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>,
Jake Edge <jake@....net>, Jonathan Corbet <corbet@....net>,
Jason Gunthorpe <jgg@...pe.ca>, Zhu Yanjun <zyjzyj2000@...il.com>,
Robin Murphy <robin.murphy@....com>, Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>, Sagi Grimberg <sagi@...mberg.me>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Logan Gunthorpe <logang@...tatee.com>,
Yishai Hadas <yishaih@...dia.com>,
Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>,
Kevin Tian <kevin.tian@...el.com>,
Alex Williamson <alex.williamson@...hat.com>,
Jérôme Glisse <jglisse@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-block@...r.kernel.org, linux-rdma@...r.kernel.org,
iommu@...ts.linux.dev, linux-nvme@...ts.infradead.org,
linux-pci@...r.kernel.org, kvm@...r.kernel.org, linux-mm@...ck.org,
Niklas Schnelle <schnelle@...ux.ibm.com>,
Chuck Lever <chuck.lever@...cle.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Dan Williams <dan.j.williams@...el.com>,
Kanchan Joshi <joshi.k@...sung.com>,
Chaitanya Kulkarni <kch@...dia.com>
Subject: Re: [PATCH v8 23/24] nvme-pci: convert to blk_rq_dma_map
On Tue, Apr 22, 2025 at 07:00:50AM +0200, Christoph Hellwig wrote:
> > + dma_len = min_t(u32, length, NVME_CTRL_PAGE_SIZE - (dma_addr & (NVME_CTRL_PAGE_SIZE - 1)));
>
> An overly long line slipped in here during one of the rebases.
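For what it's worth, factoring the boundary clamp into a small helper would also keep that line under 80 columns. A userspace sketch of what the long line computes (the helper name is made up, and a 4 KiB NVME_CTRL_PAGE_SIZE is assumed here, not taken from the driver):

```c
#include <stdint.h>

#define NVME_CTRL_PAGE_SIZE 4096u	/* assumed controller page size */

/*
 * Clamp a DMA segment so it never crosses an NVME_CTRL_PAGE_SIZE
 * boundary: take at most the distance from dma_addr to the next
 * controller-page boundary.
 */
static uint32_t page_chunk_len(uint64_t dma_addr, uint32_t length)
{
	uint32_t to_boundary = NVME_CTRL_PAGE_SIZE -
			(uint32_t)(dma_addr & (NVME_CTRL_PAGE_SIZE - 1));

	return length < to_boundary ? length : to_boundary;
}
```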
>
> > + /*
> > + * We are in this mode as IOVA path wasn't taken and DMA length
> > + * is morethan two sectors. In such case, mapping was perfoormed
> > + * per-NVME_CTRL_PAGE_SIZE, so unmap accordingly.
> > + */
>
> Where does this comment come from? Lots of spelling errors, and I
> also don't understand what it is talking about, as sectors are entirely
> irrelevant here.
I was trying to explain when this do {} while loop is taken, and "sector"
is the wrong word to describe NVME_CTRL_PAGE_SIZE. Let's remove this comment.
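For readers following along: since the non-IOVA path maps in NVME_CTRL_PAGE_SIZE units, the unmap side has to walk the buffer in the same units. A hypothetical userspace sketch of that walk (the function name and the 4 KiB controller page size are assumptions; the actual dma_unmap_page() call is only indicated by a comment):

```c
#include <stdint.h>

#define NVME_CTRL_PAGE_SIZE 4096u	/* assumed controller page size */

/*
 * Walk [dma_addr, dma_addr + total_len) in NVME_CTRL_PAGE_SIZE-aligned
 * chunks, the way the do {} while loop unmaps a per-page mapping.
 * Returns the number of chunks, i.e. how many unmap calls the driver
 * would issue.
 */
static unsigned int count_page_unmaps(uint64_t dma_addr, uint32_t total_len)
{
	unsigned int n = 0;

	while (total_len) {
		uint32_t chunk = NVME_CTRL_PAGE_SIZE -
				(uint32_t)(dma_addr & (NVME_CTRL_PAGE_SIZE - 1));

		if (chunk > total_len)
			chunk = total_len;
		/* the driver would call dma_unmap_page() for this chunk */
		n++;
		dma_addr += chunk;
		total_len -= chunk;
	}
	return n;
}
```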
>
> > + if (!blk_rq_dma_unmap(req, dev->dev, &iod->dma_state, iod->total_len)) {
> > + if (iod->cmd.common.flags & NVME_CMD_SGL_METABUF)
> > + nvme_free_sgls(dev, req);
>
> With the addition of metadata SGL support this also needs to check
> NVME_CMD_SGL_METASEG.
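For reference, a userspace sketch of the suggested check (flag values copied from the PSDT encodings in include/linux/nvme.h; the helper name is made up):

```c
#include <stdint.h>
#include <stdbool.h>

/* PSDT encodings of the command flags byte, as in include/linux/nvme.h */
#define NVME_CMD_SGL_METABUF	(1 << 6)
#define NVME_CMD_SGL_METASEG	(1 << 7)

/*
 * The transfer used SGLs if either PSDT encoding is set, so the unmap
 * path should free the SGL list for METASEG as well, not only METABUF.
 */
static bool cmd_used_sgls(uint8_t flags)
{
	return flags & (NVME_CMD_SGL_METABUF | NVME_CMD_SGL_METASEG);
}
```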
>
> The commit message should also really mention that someone
> significantly altered the patch for merging with the latest upstream,
> as I, the nominal author, can't recognize some of that code.
Someone :), I thought that adding my SOB was enough.
>
> > + unsigned int entries = req->nr_integrity_segments;
> > struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> >
> > + if (!blk_rq_dma_unmap(req, dev->dev, &iod->dma_meta_state,
> > + iod->total_meta_len)) {
> > + if (entries == 1) {
> > + dma_unmap_page(dev->dev, iod->meta_dma,
> > + rq_integrity_vec(req).bv_len,
> > + rq_dma_dir(req));
> > + return;
> > }
> > }
> >
> > + dma_pool_free(dev->prp_small_pool, iod->meta_list, iod->meta_dma);
>
> This now doesn't unmap for entries > 1 in the non-IOVA case.
I forgot to unmap the SGL metadata. In the non-SGL case, the metadata is
the size of a single entry, so there is no "entries > 1" for the non-SGL
path. WDYT?
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index a9c298a45bf1..73dbedd7daf6 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -839,13 +839,21 @@ static __always_inline void nvme_unmap_metadata(struct nvme_dev *dev,
 {
 	unsigned int entries = req->nr_integrity_segments;
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_sgl_desc *sg_list = iod->meta_list;
+	enum dma_data_direction dir = rq_dma_dir(req);
 
 	if (!blk_rq_dma_unmap(req, dev->dev, &iod->dma_meta_state,
 			      iod->total_meta_len)) {
-		if (entries == 1) {
+		if (iod->cmd.common.flags & NVME_CMD_SGL_METASEG) {
+			unsigned int i;
+
+			for (i = 0; i < entries; i++)
+				dma_unmap_page(dev->dev,
+					       le64_to_cpu(sg_list[i].addr),
+					       le32_to_cpu(sg_list[i].length), dir);
+		} else {
 			dma_unmap_page(dev->dev, iod->meta_dma,
-				       rq_integrity_vec(req).bv_len,
-				       rq_dma_dir(req));
+				       rq_integrity_vec(req).bv_len, dir);
 			return;
 		}
 	}