Message-ID: <2a148b6e-86bc-4c4d-2f22-d733e2cc94cc@deltatee.com>
Date: Fri, 6 Jan 2017 15:10:32 -0700
From: Logan Gunthorpe <logang@...tatee.com>
To: Jason Gunthorpe <jgunthorpe@...idianresearch.com>,
Jerome Glisse <jglisse@...hat.com>
Cc: david1.zhou@....com, qiang.yu@....com,
"'linux-rdma@...r.kernel.org'" <linux-rdma@...r.kernel.org>,
"'linux-nvdimm@...ts.01.org'" <linux-nvdimm@...1.01.org>,
"Kuehling, Felix" <Felix.Kuehling@....com>,
Serguei Sagalovitch <serguei.sagalovitch@....com>,
"'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>,
"'dri-devel@...ts.freedesktop.org'" <dri-devel@...ts.freedesktop.org>,
"Koenig, Christian" <Christian.Koenig@....com>, hch@...radead.org,
"Deucher, Alexander" <Alexander.Deucher@....com>,
"Sander, Ben" <ben.sander@....com>,
"Suthikulpanit, Suravee" <Suravee.Suthikulpanit@....com>,
"'linux-pci@...r.kernel.org'" <linux-pci@...r.kernel.org>,
Jerome Glisse <j.glisse@...il.com>,
"Blinzer, Paul" <Paul.Blinzer@....com>,
"'Linux-media@...r.kernel.org'" <Linux-media@...r.kernel.org>
Subject: Re: Enabling peer to peer device transactions for PCIe devices
On 06/01/17 11:26 AM, Jason Gunthorpe wrote:
> Make a generic API for all of this and you'd have my vote..
>
> IMHO, you must support basic pinning semantics - that is necessary to
> support generic short lived DMA (eg filesystem, etc). That hardware
> can clearly do that if it can support ODP.
I agree completely.
What we want is for RDMA, O_DIRECT, etc. to just work with special VMAs
(i.e. at least those backed with ZONE_DEVICE memory). Then
GPU/NVME/DAX/whatever drivers can just hand these VMAs to userspace
(using whatever interface is most appropriate) and userspace can do what
it pleases with them. This makes _so_ much sense and actually largely
already works today (as demonstrated by iopmem).
Though, of course, there are many aspects that could still be improved,
like denying CPU access to special VMAs, having get_user_pages avoid
pinning device memory, etc, etc. But all this would just be enhancements
to how VMAs work and would not affect the basic design described above.
We experimented with GPU Direct and the peer memory patchset for IB and
they were broken by design. They were just a very specific hack into the
IB core and thus didn't help to support O_DIRECT or any other possible
DMA user. And the invalidation thing was completely nuts. We had to pray
an invalidation would never occur because, if it did, our application
would just break.
Logan