Message-ID: <20170106003034.GB4670@obsidianresearch.com>
Date: Thu, 5 Jan 2017 17:30:34 -0700
From: Jason Gunthorpe <jgunthorpe@...idianresearch.com>
To: Jerome Glisse <jglisse@...hat.com>
Cc: Jerome Glisse <j.glisse@...il.com>,
"Deucher, Alexander" <Alexander.Deucher@....com>,
"'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>,
"'linux-rdma@...r.kernel.org'" <linux-rdma@...r.kernel.org>,
"'linux-nvdimm@...ts.01.org'" <linux-nvdimm@...1.01.org>,
"'Linux-media@...r.kernel.org'" <Linux-media@...r.kernel.org>,
"'dri-devel@...ts.freedesktop.org'" <dri-devel@...ts.freedesktop.org>,
"'linux-pci@...r.kernel.org'" <linux-pci@...r.kernel.org>,
"Kuehling, Felix" <Felix.Kuehling@....com>,
"Sagalovitch, Serguei" <Serguei.Sagalovitch@....com>,
"Blinzer, Paul" <Paul.Blinzer@....com>,
"Koenig, Christian" <Christian.Koenig@....com>,
"Suthikulpanit, Suravee" <Suravee.Suthikulpanit@....com>,
"Sander, Ben" <ben.sander@....com>, hch@...radead.org,
david1.zhou@....com, qiang.yu@....com
Subject: Re: Enabling peer to peer device transactions for PCIe devices
On Thu, Jan 05, 2017 at 06:23:52PM -0500, Jerome Glisse wrote:
> > I still don't understand what you driving at - you've said in both
> > cases a user VMA exists.
>
> In the former case no, there is no VMA directly, but if you want one
> then a device can provide one. But such a VMA is useless as CPU
> access is not expected.
I disagree that it is useless: the VMA is going to be necessary to
support upcoming things like CAPI, and you need it to support O_DIRECT
from the filesystem, DPDK, etc. This is why I am opposed to any model
that is not VMA based for setting up RDMA - that is short-sighted and
does not seem to reflect where the industry is going.
So focus on having a VMA backed by actual physical memory that covers
your GPU objects, and ask how we wire up the '__user *' to the DMA API
in the best way so the DMA API still has enough information to set up
IOMMUs and whatnot.
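To be concrete, from userspace this should look no different than any
ordinary registration. Roughly something like this - the mmap offset
plumbing is whatever the GPU driver's own ABI provides, and the names
here are made up:

  #include <infiniband/verbs.h>
  #include <sys/mman.h>
  #include <sys/types.h>

  /*
   * Sketch only: gpu_fd and mmap_offset come from the GPU driver's own
   * ABI (an ioctl or similar), which is not spelled out here.
   */
  static struct ibv_mr *register_gpu_object(struct ibv_pd *pd, int gpu_fd,
					    off_t mmap_offset, size_t len)
  {
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			 gpu_fd, mmap_offset);

	if (buf == MAP_FAILED)
		return NULL;

	/* Ordinary verbs registration on an ordinary '__user *' pointer;
	 * nothing GPU specific is visible on the RDMA side. */
	return ibv_reg_mr(pd, buf, len,
			  IBV_ACCESS_LOCAL_WRITE |
			  IBV_ACCESS_REMOTE_READ |
			  IBV_ACCESS_REMOTE_WRITE);
  }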
> What I was trying to get across is that no matter what level you
> consider, in the end you still need something at the DMA API level.
> And the 2 different use cases (device vma or regular vma) mean
> 2 different APIs for the device driver.
I agree we need new stuff at the DMA API level, but I am opposed to
the idea we need two API paths that the *driver* has to figure out.
That is fundamentally not what I want as a driver developer.
Give me a common API to convert '__user *' to a scatter list and pin
the pages. This needs to figure out your two cases. And Huge
Pages. And ZONE_DEVICE... (a better get_user_pages)
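Something with roughly this shape - the name and arguments are invented
here, just to illustrate what I mean:

  /*
   * Invented name and signature, sketch only: one call that handles
   * regular pages, huge pages and ZONE_DEVICE alike, pins them, and
   * hands back a scatter list describing the user range.
   */
  int get_user_range_sg(unsigned long start, unsigned long nr_pages,
			unsigned int gup_flags, struct sg_table *sgt);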
Give me an API to take the scatter list and DMA map it, handling all
the stuff associated with peer-peer. (a better dma_map_sg)
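Again only a sketch, with an invented name:

  /*
   * Invented, sketch only: like dma_map_sg() but aware that some
   * entries may describe peer device memory (BARs, ZONE_DEVICE) and
   * able to program the IOMMU / PCIe routing accordingly.
   */
  int dma_map_sg_any(struct device *dev, struct sg_table *sgt,
		     enum dma_data_direction dir, unsigned long attrs);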
Give me a notifier scheme to rework my scatter list when physical
pages need to change (mmu notifiers)
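The existing mmu_notifier machinery already has the right shape for
this. A rough sketch of how a driver would hook it up - exact callback
signatures vary by kernel version, and my_rebuild_sg() stands in for
whatever rebuild helper the common API ends up providing:

  #include <linux/kernel.h>
  #include <linux/mmu_notifier.h>

  struct my_region {
	struct mmu_notifier mn;
	struct sg_table *sgt;
	unsigned long start, end;
  };

  static void my_rebuild_sg(struct my_region *r);	/* placeholder */

  /* Tear down and rebuild the scatter list when the covered part of
   * the address space is about to change under us. */
  static void my_invalidate_range_start(struct mmu_notifier *mn,
					struct mm_struct *mm,
					unsigned long start,
					unsigned long end)
  {
	struct my_region *r = container_of(mn, struct my_region, mn);

	if (end <= r->start || start >= r->end)
		return;
	my_rebuild_sg(r);
  }

  static const struct mmu_notifier_ops my_mn_ops = {
	.invalidate_range_start = my_invalidate_range_start,
  };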
Use the scatter list memory to convey needed information from the
first step to the second.
Do not bother the driver with distinctions on what kind of memory is
behind that VMA. Don't ask me to use get_user_pages or
gpu_get_user_pages, do not ask me to use dma_map_sg or
dma_map_sg_peer_direct. The Driver Doesn't Need To Know.
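Put together, the driver side collapses to one path no matter what is
behind the VMA. This uses the invented helpers from the sketches above;
put_user_range_sg() is the equally invented unpin counterpart:

  /* One path in the driver, no GPU/not-GPU branch anywhere. */
  static int my_map_user_range(struct device *dev, unsigned long start,
			       unsigned long nr_pages, struct sg_table *sgt)
  {
	int ret;

	ret = get_user_range_sg(start, nr_pages, FOLL_WRITE, sgt);
	if (ret)
		return ret;

	/* The sg_table itself carries whatever the first step learned
	 * about the memory (peer BAR, ZONE_DEVICE, regular pages). */
	ret = dma_map_sg_any(dev, sgt, DMA_BIDIRECTIONAL, 0);
	if (ret < 0) {
		put_user_range_sg(sgt);
		return ret;
	}
	return 0;
  }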
IMHO this is why GPU direct is not mergeable - it creates a crazy
parallel mini-mm subsystem inside RDMA and uses that to connect to a
GPU driver; everything is expected to have parallel paths for GPU
direct and the normal MM. No good at all.
> > So, how do you identify these GPU objects? How do you expect RDMA
> > to convert them to scatter lists? How will ODP work?
>
> No ODP on those. If you want vma, the GPU device driver can provide
You said you needed invalidate; that has to be done via ODP.
Jason