Message-ID: <20161123232503.GA13965@obsidianresearch.com>
Date: Wed, 23 Nov 2016 16:25:03 -0700
From: Jason Gunthorpe <jgunthorpe@...idianresearch.com>
To: Dan Williams <dan.j.williams@...el.com>
Cc: Logan Gunthorpe <logang@...tatee.com>,
Serguei Sagalovitch <serguei.sagalovitch@....com>,
"Deucher, Alexander" <Alexander.Deucher@....com>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...1.01.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"Kuehling, Felix" <Felix.Kuehling@....com>,
"Bridgman, John" <John.Bridgman@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"Koenig, Christian" <Christian.Koenig@....com>,
"Sander, Ben" <ben.sander@....com>,
"Suthikulpanit, Suravee" <Suravee.Suthikulpanit@....com>,
"Blinzer, Paul" <Paul.Blinzer@....com>,
"Linux-media@...r.kernel.org" <Linux-media@...r.kernel.org>,
Haggai Eran <haggaie@...lanox.com>
Subject: Re: Enabling peer to peer device transactions for PCIe devices
On Wed, Nov 23, 2016 at 02:42:12PM -0800, Dan Williams wrote:
> > The crucial part for this discussion is the ability to fence and block
> > DMA for a specific range. This is the hardware capability that lets
> > page migration happen: fence&block DMA, migrate page, update page
> > table in HCA, unblock DMA.
>
> Wait, ODP requires migratable pages, ZONE_DEVICE pages are not
> migratable.
Does it? I didn't think so... Does ZONE_DEVICE break MMU notifiers/etc
or something? There is certainly nothing in the hardware that cares
about ZONE_DEVICE vs. system memory.
I used 'migration' in the broader sense of doing any transformation to
the page such that the DMA address changes - not the specific kernel
MM process...
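Concretely, the fence&block sequence above would look something like the
pseudocode below - none of these helpers exist as such, they just stand
in for whatever the HCA driver and the owner of the memory provide:

  /* Pseudocode only: relocating a page that is currently under RDMA.
   * All four helpers are hypothetical placeholders.
   */
  hca_fence_and_block_dma(mr, addr, len);     /* stop new DMA, drain in-flight */
  new_addr = owner_migrate_page(addr);        /* filesystem/GPU driver moves data */
  hca_update_translation(mr, addr, new_addr); /* repoint the HCA page table */
  hca_unblock_dma(mr, addr, len);             /* DMA resumes at the new location */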
> You can't replace a PCIe mapping with just any other System RAM
> physical address, right?
I thought that was exactly what HMM was trying to do? Migrate pages
between CPU and GPU memory as needed. As Serguei has said, this process
needs to be driven by the GPU driver.
The peer-to-peer issue is how to do that while RDMA to those pages is
still possible, because when a page migrates to GPU memory you want the
RDMA to follow it seamlessly.
This is why page table mirroring is the best solution - use the
existing mm machinery to link the DMA driver and whatever is
controlling the VMA.
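Very roughly, the mirroring side is just another mmu_notifier consumer.
A much simplified sketch (the notifier signatures are the ones from
kernels of this era; struct hca_mirror and hca_invalidate_range() are
made-up placeholders for driver internals):

  #include <linux/mmu_notifier.h>

  struct hca_mirror {
          struct mmu_notifier mn;
          /* ... device page table state ... */
  };

  /* Placeholder: fence/block DMA to the range and drop the stale device
   * page table entries; the next device access faults and re-resolves
   * through the current CPU mapping. */
  static void hca_invalidate_range(struct hca_mirror *m,
                                   unsigned long start, unsigned long end);

  static void hca_invalidate_range_start(struct mmu_notifier *mn,
                                         struct mm_struct *mm,
                                         unsigned long start,
                                         unsigned long end)
  {
          hca_invalidate_range(container_of(mn, struct hca_mirror, mn),
                               start, end);
  }

  static const struct mmu_notifier_ops hca_mirror_ops = {
          .invalidate_range_start = hca_invalidate_range_start,
  };

  /* Tying the mirror to the mm that owns the VMA:
   *     mirror->mn.ops = &hca_mirror_ops;
   *     mmu_notifier_register(&mirror->mn, current->mm);
   */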
> At least not without a filesystem recording where things went, but
> at that point we're no longer talking about the base P2P-DMA mapping
In the filesystem/DAX case, it would be the filesystem that initiates
any change in the page physical address.
ODP *follows* changes in the VMA; it does not cause any change in the
address mapping. That has to be done by whoever is in charge of the
VMA.
> something like pnfs-rdma to a DAX filesystem.
Something in the kernel (i.e. nfs-rdma) would be entirely different. We
generally don't do long-lived mappings in the kernel for RDMA
(certainly not for NFS), so it is much more like your basic everyday
DMA operation: map, execute, unmap. We probably don't need to use page
table mirroring for this.
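For that the flow is just the ordinary DMA API, roughly (error handling
trimmed; post_rdma_and_wait() is a stand-in for the ULP's work
submission path):

  nents = ib_dma_map_sg(ibdev, sgl, sg_count, DMA_TO_DEVICE);
  if (!nents)
          return -ENOMEM;

  ret = post_rdma_and_wait(qp, sgl, nents);   /* placeholder */

  ib_dma_unmap_sg(ibdev, sgl, sg_count, DMA_TO_DEVICE);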
ODP comes in when userspace mmaps a DAX file and then tries to use it
for RDMA. Page table mirroring lets the DAX filesystem decide to move
the backing pages at any time. When it wants to do that it interacts
with the MM in the usual way which links to ODP and makes sure the
migration is seamless.
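From userspace nothing special is needed, roughly (assuming an
ODP-capable HCA and kernel; pd and len are set up elsewhere and the DAX
file path is only an example):

  #include <fcntl.h>
  #include <sys/mman.h>
  #include <infiniband/verbs.h>

  /* mmap a file on a DAX filesystem and register it as an ODP MR.  The
   * filesystem is then free to relocate the backing pages; the
   * mmu_notifier/ODP path keeps the HCA translation in sync.
   */
  int fd = open("/mnt/dax/file", O_RDWR);     /* example path */
  void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

  struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                 IBV_ACCESS_LOCAL_WRITE |
                                 IBV_ACCESS_REMOTE_READ |
                                 IBV_ACCESS_REMOTE_WRITE |
                                 IBV_ACCESS_ON_DEMAND);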
Jason