Message-Id: <20201106170036.18713-1-logang@deltatee.com>
Date: Fri, 6 Nov 2020 10:00:21 -0700
From: Logan Gunthorpe <logang@...tatee.com>
To: linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
linux-block@...r.kernel.org, linux-pci@...r.kernel.org,
linux-mm@...ck.org, iommu@...ts.linux-foundation.org
Cc: Stephen Bates <sbates@...thlin.com>,
Christoph Hellwig <hch@....de>,
Dan Williams <dan.j.williams@...el.com>,
Jason Gunthorpe <jgg@...pe.ca>,
Christian König <christian.koenig@....com>,
Ira Weiny <iweiny@...el.com>,
John Hubbard <jhubbard@...dia.com>,
Don Dutile <ddutile@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Logan Gunthorpe <logang@...tatee.com>
Subject: [RFC PATCH 00/15] Userspace P2PDMA with O_DIRECT NVMe devices
This RFC enables P2PDMA transfers in userspace between NVMe drives using
existing O_DIRECT operations or the NVMe passthrough IOCTL.
This is accomplished by allowing userspace to allocate chunks of any CMB
by mmapping the NVMe ctrl device (Patches 14 and 15). The resulting
memory will be backed by P2P pages and can be passed only to O_DIRECT
operations. Patch 10 adds a flag to GUP(), and Patches 11 through 13
wire that flag up based on whether the block queue indicates P2PDMA
support.
The above is pretty straightforward and (I hope) largely
uncontroversial. However, the one significant problem in all this is
that, presently, pci_p2pdma_map_sg() requires a homogeneous SGL with
all P2PDMA pages or none. Enhancing GUP to enforce this rule would
require a huge hack that I don't expect would be all that palatable.
So this RFC takes the approach of removing the requirement of a
homogeneous SGL.
With the new common dma-iommu infrastructure, this patchset adds
support for P2PDMA pages to dma_map_sg(), covering the AMD, Intel
(soon) and dma-direct implementations. (Other IOMMU implementations,
notably ARM and PowerPC, would remain unsupported.)
The other major blocker is that in order to implement support for
P2PDMA pages in dma_map_sg(), a flag is necessary to determine if a
given dma_addr_t points to P2PDMA memory or to an IOVA so that it can
be unmapped appropriately in dma_unmap_sg(). The (ugly) approach this
RFC takes is to use the top bit in the dma_length field and ensure
callers are prepared for it using a new DMA_ATTR_P2PDMA flag.
I suspect the ultimate solution to this blocker will be to implement
some kind of new dma_op that doesn't use the SGL. Ideas have been
thrown around in the past for one that maps some kind of novel dma_vec
directly from a bio_vec. This will become a lot easier to implement if
more dma_ops providers get converted to the new dma-iommu
implementation, but this will take time.
Alternative ideas or other feedback welcome.
This series is based on v5.10-rc2 with Lu Baolu's (and Tom Murphy's)
v4 patchset for converting the Intel IOMMU to dma-iommu[1]. A git
branch is available here:
https://github.com/sbates130272/linux-p2pmem/ p2pdma_user_cmb_rfc
Thanks,
Logan
[1] https://lkml.kernel.org/lkml/20200927063437.13988-1-baolu.lu@linux.intel.com/T/#u
Logan Gunthorpe (15):
PCI/P2PDMA: Don't sleep in upstream_bridge_distance_warn()
PCI/P2PDMA: Attempt to set map_type if it has not been set
PCI/P2PDMA: Introduce pci_p2pdma_should_map_bus() and
pci_p2pdma_bus_offset()
lib/scatterlist: Add flag for indicating P2PDMA segments in an SGL
dma-direct: Support PCI P2PDMA pages in dma-direct map_sg
dma-mapping: Add flags to dma_map_ops to indicate PCI P2PDMA support
iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg
nvme-pci: Check DMA ops when indicating support for PCI P2PDMA
nvme-pci: Convert to using dma_map_sg for p2pdma pages
mm: Introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages
iov_iter: Introduce iov_iter_get_pages_[alloc_]flags()
block: Set FOLL_PCI_P2PDMA in __bio_iov_iter_get_pages()
block: Set FOLL_PCI_P2PDMA in bio_map_user_iov()
PCI/P2PDMA: Introduce pci_mmap_p2pmem()
nvme-pci: Allow mmaping the CMB in userspace
block/bio.c | 7 +-
block/blk-map.c | 7 +-
drivers/dax/super.c | 7 +-
drivers/iommu/dma-iommu.c | 63 +++++++++++--
drivers/nvme/host/core.c | 14 ++-
drivers/nvme/host/nvme.h | 3 +-
drivers/nvme/host/pci.c | 50 ++++++----
drivers/pci/p2pdma.c | 178 +++++++++++++++++++++++++++++++++---
include/linux/dma-map-ops.h | 3 +
include/linux/dma-mapping.h | 16 ++++
include/linux/memremap.h | 4 +-
include/linux/mm.h | 1 +
include/linux/pci-p2pdma.h | 17 ++++
include/linux/scatterlist.h | 4 +
include/linux/uio.h | 21 ++++-
kernel/dma/direct.c | 33 ++++++-
kernel/dma/mapping.c | 8 ++
lib/iov_iter.c | 25 ++---
mm/gup.c | 28 +++---
mm/huge_memory.c | 8 +-
mm/memory-failure.c | 4 +-
mm/memremap.c | 14 ++-
22 files changed, 427 insertions(+), 88 deletions(-)
base-commit: 5ba8a2512e8c5f5cf9b7309dc895612f0a77a399
--
2.20.1