[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <b0234176-e86b-6e7c-686d-b4fef719aad7@deltatee.com>
Date: Wed, 19 Apr 2017 12:01:39 -0600
From: Logan Gunthorpe <logang@...tatee.com>
To: Jason Gunthorpe <jgunthorpe@...idianresearch.com>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Dan Williams <dan.j.williams@...el.com>,
Bjorn Helgaas <helgaas@...nel.org>,
Christoph Hellwig <hch@....de>,
Sagi Grimberg <sagi@...mberg.me>,
"James E.J. Bottomley" <jejb@...ux.vnet.ibm.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
Jens Axboe <axboe@...nel.dk>,
Steve Wise <swise@...ngridcomputing.com>,
Stephen Bates <sbates@...thlin.com>,
Max Gurtovoy <maxg@...lanox.com>,
Keith Busch <keith.busch@...el.com>, linux-pci@...r.kernel.org,
linux-scsi <linux-scsi@...r.kernel.org>,
linux-nvme@...ts.infradead.org, linux-rdma@...r.kernel.org,
linux-nvdimm <linux-nvdimm@...1.01.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jerome Glisse <jglisse@...hat.com>
Subject: Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory
On 19/04/17 11:14 AM, Jason Gunthorpe wrote:
> I don't see a use for the dma_map function pointer at this point..
Yes, it is kind of like designing for the future. I just find it a
little odd calling the pci functions in the iommu.
> It doesn't make alot of sense for the completor of the DMA to provide
> a mapping op, the mapping process is *path* specific, not specific to
> a completer/initiator.
I'm just spit balling here but if HMM wanted to use unaddressable memory
as a DMA target, it could set that function to create a window ine gpu
memory, then call the pci_p2p_same_segment and return the result as the
dma address.
> dma_addr_t pci_p2p_same_segment(struct device *initator,
> struct device *completer,
> struct page *page)
I'm not sure I like the name pci_p2p_same_segment. It reads as though
it's only checking if the devices are not the same segment. It also may
be that, in the future, it supports devices on different segments. I'd
call it more like pci_p2p_dma_map.
> for (each sgl) {
Thanks, this code fleshes things out nicely
> To me it looks like the code duplication across the iommu stuff comes
> from just duplicating the basic iommu algorithm in every driver.
Yes, this is true.
> To clean that up I think someone would need to hoist the overall sgl
> loop and use more ops callbacks eg allocate_iommu_range,
> assign_page_to_rage, dealloc_range, etc. This is a problem p2p makes
> worse, but isn't directly causing :\
Yup.
Logan
Powered by blists - more mailing lists