Message-ID: <0087fba9-7bcb-8bb3-c26e-4ef3e4970c34@amd.com>
Date: Sun, 27 Nov 2016 15:07:22 +0100
From: Christian König <christian.koenig@....com>
To: Haggai Eran <haggaie@...lanox.com>,
Jason Gunthorpe <jgunthorpe@...idianresearch.com>
CC: Logan Gunthorpe <logang@...tatee.com>,
Serguei Sagalovitch <serguei.sagalovitch@....com>,
Dan Williams <dan.j.williams@...el.com>,
"Deucher, Alexander" <Alexander.Deucher@....com>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...1.01.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"Kuehling, Felix" <Felix.Kuehling@....com>,
"Bridgman, John" <John.Bridgman@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"Sander, Ben" <ben.sander@....com>,
"Suthikulpanit, Suravee" <Suravee.Suthikulpanit@....com>,
"Blinzer, Paul" <Paul.Blinzer@....com>,
"Linux-media@...r.kernel.org" <Linux-media@...r.kernel.org>,
Max Gurtovoy <maxg@...lanox.com>
Subject: Re: Enabling peer to peer device transactions for PCIe devices
On 27.11.2016 at 15:02, Haggai Eran wrote:
> On 11/25/2016 9:32 PM, Jason Gunthorpe wrote:
>> On Fri, Nov 25, 2016 at 02:22:17PM +0100, Christian König wrote:
>>
>>>> Like you say below we have to handle the short-lived case in the usual
>>>> way, and that covers basically every device except IB MRs, including
>>>> the command queue on an NVMe drive.
>>> Well, a problem which wasn't mentioned so far is that while GPUs do
>>> have a page table to mirror the CPU page table, they usually can't
>>> recover from page faults.
>>> So what we do is make sure that all memory accessed by the GPU jobs
>>> stays in place while those jobs run (pretty much the same pinning you
>>> do for DMA).
>> Yes, it is DMA, so this is a valid approach.
>>
>> But you don't need page faults from the GPU to do proper coherent
>> page table mirroring. Basically, when the driver submits work to the
>> GPU, it 'faults' the pages into the CPU page table and the mirror
>> translation table (instead of pinning).
>>
>> Like in ODP, MMU notifiers/HMM are used to monitor for translation
>> changes. If a change comes in, the GPU driver checks whether an
>> executing command is touching those pages and blocks the MMU notifier
>> until the command flushes, then unfaults the pages (blocking future
>> commands) and unblocks the MMU notifier.
> I think blocking MMU notifiers against something that is basically
> controlled by user-space can be problematic. This can block things like
> memory reclaim. If user-space has access to the device's queues, it can
> block the MMU notifier forever.
Really good point.
I think this means that, as long as we don't have recoverable page
faults, the bare minimum is to have preemption support like Felix
described in his answer as well (rough sketch below).
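
To make that concrete, here is a minimal sketch of how such a notifier
callback could look when the driver preempts instead of blocking. All
my_gpu_* names are invented for illustration (this is not actual amdgpu
code), and the invalidate_range_start signature is the one from current
(~4.9) kernels:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

struct my_gpu_mirror {
	struct mmu_notifier mn;
	struct my_gpu_device *gdev;	/* hypothetical device struct */
};

static void my_gpu_invalidate_range_start(struct mmu_notifier *mn,
					  struct mm_struct *mm,
					  unsigned long start,
					  unsigned long end)
{
	struct my_gpu_mirror *mirror =
		container_of(mn, struct my_gpu_mirror, mn);

	/*
	 * Preempt any job that may touch [start, end) instead of waiting
	 * for it to finish, so reclaim is only held up for the bounded
	 * time a preemption takes.
	 */
	my_gpu_preempt_jobs_in_range(mirror->gdev, start, end);

	/*
	 * Drop the stale GPU page table entries so a resumed job has to
	 * re-fault (re-mirror) the pages before touching them again.
	 */
	my_gpu_unmap_range(mirror->gdev, start, end);
}

static const struct mmu_notifier_ops my_gpu_mn_ops = {
	.invalidate_range_start	= my_gpu_invalidate_range_start,
};

The key property is that the notifier never waits on work that
user-space controls; the jobs are preempted and simply re-fault the
pages when they are resumed.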
Going to keep that in mind,
Christian.