Message-ID: <CAPcyv4goxaeOFJjg3ior4zMRZmxuz=OZqd8Rb8zRmf1V2nDhAg@mail.gmail.com>
Date: Tue, 18 Apr 2017 16:02:40 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: Logan Gunthorpe <logang@...tatee.com>
Cc: Jason Gunthorpe <jgunthorpe@...idianresearch.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Bjorn Helgaas <helgaas@...nel.org>,
Christoph Hellwig <hch@....de>,
Sagi Grimberg <sagi@...mberg.me>,
"James E.J. Bottomley" <jejb@...ux.vnet.ibm.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
Jens Axboe <axboe@...nel.dk>,
Steve Wise <swise@...ngridcomputing.com>,
Stephen Bates <sbates@...thlin.com>,
Max Gurtovoy <maxg@...lanox.com>,
Keith Busch <keith.busch@...el.com>, linux-pci@...r.kernel.org,
linux-scsi <linux-scsi@...r.kernel.org>,
linux-nvme@...ts.infradead.org, linux-rdma@...r.kernel.org,
linux-nvdimm <linux-nvdimm@...1.01.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jerome Glisse <jglisse@...hat.com>
Subject: Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory
On Tue, Apr 18, 2017 at 3:56 PM, Logan Gunthorpe <logang@...tatee.com> wrote:
>
>
> On 18/04/17 04:50 PM, Dan Williams wrote:
>> On Tue, Apr 18, 2017 at 3:48 PM, Logan Gunthorpe <logang@...tatee.com> wrote:
>>>
>>>
>>> On 18/04/17 04:28 PM, Dan Williams wrote:
>>>> Unlike the pci bus address offset case, which I think is fundamental to
>>>> support since shipping archs do this today, I think it is ok to say that
>>>> p2p is restricted to a single sgl that gets to talk to either host
>>>> memory or a single device. That said, what's wrong with a p2p-aware
>>>> map_sg implementation calling up to the host memory map_sg
>>>> implementation on a per-sgl basis?
>>>
>>> I think Ben said they need mixed sgls, and that is where this gets messy.
>>> I think I'd prefer this too, since trying to enforce that all sgs in a
>>> list be one type or the other could be quite difficult given the state
>>> of the scatterlist code.
>>>
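(As a rough illustration of the per-sgl delegation idea above -- this is
only a sketch, and the get_host_dma_ops(), sg_is_p2p() and p2p_bus_addr()
helpers are hypothetical, nothing like them exists today:)

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int p2p_aware_map_sg(struct device *dev, struct scatterlist *sgl,
                            int nents, enum dma_data_direction dir,
                            unsigned long attrs)
{
        /* hypothetical: the dma_map_ops the platform would normally use */
        const struct dma_map_ops *host_ops = get_host_dma_ops(dev);
        struct scatterlist *sg;
        int i;

        for_each_sg(sgl, sg, nents, i) {
                if (sg_is_p2p(sg)) {
                        /* peer BAR memory: translate to a pci bus address */
                        sg->dma_address = p2p_bus_addr(sg);
                        sg_dma_len(sg) = sg->length;
                } else {
                        /* regular host memory: punt to the existing path */
                        if (host_ops->map_sg(dev, sg, 1, dir, attrs) != 1)
                                return 0;
                }
        }

        return nents;
}

The point being that the p2p-aware ops would only need to special-case the
peer entries and could hand everything else back to the existing
host-memory implementation.
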
>>>>> Also, what happens if p2p pages end up getting passed to a device that
>>>>> doesn't have the injected dma_ops?
>>>>
>>>> This goes back to limiting p2p to a single pci host bridge. If the p2p
>>>> capability is coordinated with the bridge rather than between the
>>>> individual devices then we have a central point to catch this case.
>>>
>>> Not really relevant. If these pages get to userspace (as people seem
>>> keen on doing) or to a less-than-careful kernel driver, they could
>>> easily end up in the dma_map calls of devices that aren't even pci
>>> related (via an O_DIRECT operation on the wrong file or something). The
>>> common code must reject these and can't rely on an injected dma op.
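
(For reference, the host-bridge restriction being debated above, in its
simplest "both endpoints sit under the same root bus" form, might look
something like the sketch below -- which, as noted, does nothing for
pages that escape to non-PCI devices:)

#include <linux/pci.h>

/* Sketch only: may two PCI endpoints exchange p2p traffic under a single
 * host bridge?  Walk each device up to its root bus and compare. */
static bool p2p_same_host_bridge(struct pci_dev *a, struct pci_dev *b)
{
        struct pci_bus *bus_a = a->bus;
        struct pci_bus *bus_b = b->bus;

        while (!pci_is_root_bus(bus_a))
                bus_a = bus_a->parent;
        while (!pci_is_root_bus(bus_b))
                bus_b = bus_b->parent;

        return bus_a == bus_b;
}
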
>>
>> No, we can't do that at get_user_pages() time; it will always need to
>> be up to the device driver to fail any dma that it can't perform.
>
> I'm not sure I follow -- are you agreeing with me? dma_map_* needs to
> fail for any dma it cannot perform, which means either all dma_ops
> providers need to be p2p-aware or this logic has to live in dma_map_*
> itself. My point is that you can't rely on a dma_op injected into only
> some devices to handle the fail case globally.
Ah, I see what you're saying now. Yes, we do need something that
guarantees that any dma mapping implementation which gets a struct page
it does not know how to translate will properly fail the request.
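
Something along those lines might look like the sketch below -- purely
illustrative, and the is_p2p_page() helper is hypothetical; finding a
cheap, generic way to make that test is exactly the open question:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_sg(struct device *dev, struct scatterlist *sgl,
                          int nents, enum dma_data_direction dir,
                          unsigned long attrs)
{
        struct scatterlist *sg;
        int i;

        for_each_sg(sgl, sg, nents, i) {
                /* Refuse anything this implementation does not know how
                 * to translate, e.g. a peer device's BAR behind a struct
                 * page.  Returning 0 is how ->map_sg reports failure. */
                if (is_p2p_page(sg_page(sg)))
                        return 0;
        }

        /* ... normal host-memory mapping would follow here ... */
        return nents;
}

Since dma_map_sg() reports failure by returning 0, a check like this would
at least let callers that honour the return value fail gracefully instead
of handing the device an address it can't reach.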