Message-ID: <20251022062135.GD4317@lst.de>
Date: Wed, 22 Oct 2025 08:21:35 +0200
From: Christoph Hellwig <hch@....de>
To: Leon Romanovsky <leon@...nel.org>
Cc: Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>,
Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-nvme@...ts.infradead.org
Subject: Re: [PATCH v2 2/2] block-dma: properly take MMIO path
On Mon, Oct 20, 2025 at 08:00:21PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@...dia.com>
>
> In commit eadaa8b255f3 ("dma-mapping: introduce new DMA attribute to
> indicate MMIO memory"), the DMA_ATTR_MMIO attribute was added to
> describe MMIO addresses, which must not be subject to CPU cache
> flushing, as an outcome of the discussion referenced in the Link tag
> below.
>
> In case of a PCI_P2PDMA_MAP_THRU_HOST_BRIDGE transfer, the blk-mq-dma
> logic treated the address as a regular page and relied on the
> "struct page" DMA flow. That flow performs CPU cache flushing, which
> should not be done here, and does not set the IOMMU_MMIO flag in the
> DMA-IOMMU case.
>
> Link: https://lore.kernel.org/all/f912c446-1ae9-4390-9c11-00dce7bf0fd3@arm.com/
> Signed-off-by: Leon Romanovsky <leonro@...dia.com>
> ---
> block/blk-mq-dma.c | 6 ++++--
> drivers/nvme/host/pci.c | 23 +++++++++++++++++++++--
> include/linux/blk-integrity.h | 7 ++++---
> include/linux/blk-mq-dma.h | 11 +++++++----
> 4 files changed, 36 insertions(+), 11 deletions(-)
>
> diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
> index 4ba7b0323da4..3ede8022b41c 100644
> --- a/block/blk-mq-dma.c
> +++ b/block/blk-mq-dma.c
> @@ -94,7 +94,7 @@ static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
> struct blk_dma_iter *iter, struct phys_vec *vec)
> {
> iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
> - rq_dma_dir(req), 0);
> + rq_dma_dir(req), iter->attrs);
> if (dma_mapping_error(dma_dev, iter->addr)) {
> iter->status = BLK_STS_RESOURCE;
> return false;
> @@ -116,7 +116,7 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
>
> do {
> error = dma_iova_link(dma_dev, state, vec->paddr, mapped,
> - vec->len, dir, 0);
> + vec->len, dir, iter->attrs);
> if (error)
> break;
> mapped += vec->len;
> @@ -184,6 +184,8 @@ static bool blk_dma_map_iter_start(struct request *req, struct device *dma_dev,
> * P2P transfers through the host bridge are treated the
> * same as non-P2P transfers below and during unmap.
> */
> + iter->attrs |= DMA_ATTR_MMIO;
DMA_ATTR_MMIO is the only flag ever set in iter->attrs, and I can't see
any other DMA mapping attribute that would fit here.  So I'd rather store
the enum pci_p2pdma_map_type here, which also removes the need for REQ_P2PDMA
and BIP_P2P_DMA when propagating that to nvme.
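
Something along these lines (completely untested sketch; the field and
helper names below are made up for illustration only):

	/* in include/linux/blk-mq-dma.h */
	struct blk_dma_iter {
		/* ... existing fields (addr, len, status, iter, p2pdma) ... */
		enum pci_p2pdma_map_type map_type;
	};

	static inline unsigned long blk_dma_map_attrs(const struct blk_dma_iter *iter)
	{
		/* only P2P mapped through the host bridge needs MMIO treatment */
		if (iter->map_type == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE)
			return DMA_ATTR_MMIO;
		return 0;
	}

	/* call sites in blk_dma_map_direct() / blk_rq_dma_map_iova() */
	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
			rq_dma_dir(req), blk_dma_map_attrs(iter));

and nvme could then look at the map type from the iterator directly
instead of growing new REQ_/BIP_ flags.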