Message-ID: <20251020145330.GO6199@unreal>
Date: Mon, 20 Oct 2025 17:53:30 +0300
From: Leon Romanovsky <leon@...nel.org>
To: Christoph Hellwig <hch@....de>
Cc: Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>,
	Sagi Grimberg <sagi@...mberg.me>, linux-block@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org
Subject: Re: [PATCH 3/3] block-dma: properly take MMIO path

On Mon, Oct 20, 2025 at 02:30:27PM +0200, Christoph Hellwig wrote:
> On Mon, Oct 20, 2025 at 11:52:31AM +0300, Leon Romanovsky wrote:
> > What about this commit message?
> 
> Much better.  Btw, what is the plan for getting rid of the
> "automatic" p2p handling, which would be the logical conclusion from
> this?

I continued with the "automatic" p2p code and think that it is
structured pretty well. Why do you want to remove it?

The code in v2 looks like this:

@@ -184,6 +184,8 @@ static bool blk_dma_map_iter_start(struct request *req, struct device *dma_dev,
                 * P2P transfers through the host bridge are treated the
                 * same as non-P2P transfers below and during unmap.
                 */
+               iter->attrs |= DMA_ATTR_MMIO;
+               fallthrough;
        case PCI_P2PDMA_MAP_NONE:
                break;
        default:

...

@@ -1038,6 +1051,9 @@ static blk_status_t nvme_map_data(struct request *req)
        if (!blk_rq_dma_map_iter_start(req, dev->dev, &iod->dma_state, &iter))
                return iter.status;
 
+       if (iter.attrs & DMA_ATTR_MMIO)
+               iod->flags |= IOD_DATA_MMIO;
+
        if (use_sgl == SGL_FORCED ||
            (use_sgl == SGL_SUPPORTED &&
             (sgl_threshold && nvme_pci_avg_seg_size(req) >= sgl_threshold)))
@@ -1060,6 +1076,9 @@ static blk_status_t nvme_pci_setup_meta_sgls(struct request *req)
                                                &iod->meta_dma_state, &iter))
                return iter.status;
 
+       if (iter.attrs & DMA_ATTR_MMIO)
+               iod->flags |= IOD_META_MMIO;
+
        if (blk_rq_dma_map_coalesce(&iod->meta_dma_state))
                entries = 1;

...

@@ -733,8 +739,11 @@ static void nvme_unmap_metadata(struct request *req)
                return;
        }

+       if (iod->flags & IOD_META_MMIO)
+               attrs |= DMA_ATTR_MMIO;
+
        if (!blk_rq_integrity_dma_unmap(req, dma_dev, &iod->meta_dma_state,
-                                       iod->meta_total_len)) {
+                                       iod->meta_total_len, attrs)) {
                if (nvme_pci_cmd_use_meta_sgl(&iod->cmd))
                        nvme_free_sgls(req, sge, &sge[1], attrs);
                else

The code is here (waiting for kbuild results)  https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=block-with-mmio-v2

Thanks
