Message-ID: <3781848f-d4a0-69d3-678d-c5623bac3c10@mellanox.com>
Date:   Sun, 4 Dec 2016 09:53:26 +0200
From:   Haggai Eran <haggaie@...lanox.com>
To:     Jason Gunthorpe <jgunthorpe@...idianresearch.com>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "linux-nvdimm@...1.01.org" <linux-nvdimm@...1.01.org>,
        "christian.koenig@....com" <christian.koenig@....com>,
        "Suravee.Suthikulpanit@....com" <Suravee.Suthikulpanit@....com>,
        "John.Bridgman@....com" <John.Bridgman@....com>,
        "Alexander.Deucher@....com" <Alexander.Deucher@....com>,
        "Linux-media@...r.kernel.org" <Linux-media@...r.kernel.org>,
        "dan.j.williams@...el.com" <dan.j.williams@...el.com>,
        "logang@...tatee.com" <logang@...tatee.com>,
        "dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
        "Max Gurtovoy" <maxg@...lanox.com>,
        "linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
        "serguei.sagalovitch@....com" <serguei.sagalovitch@....com>,
        "Paul.Blinzer@....com" <Paul.Blinzer@....com>,
        "Felix.Kuehling@....com" <Felix.Kuehling@....com>,
        "ben.sander@....com" <ben.sander@....com>
Subject: Re: Enabling peer to peer device transactions for PCIe devices

On 11/30/2016 6:23 PM, Jason Gunthorpe wrote:
>> and O_DIRECT operations that access GPU memory.
> This goes through user space so there is still a VMA..
> 
>> Also, HMM's migration between two GPUs could use peer to peer in the
>> kernel, although that is intended to be handled by the GPU driver if
>> I understand correctly.
> Hum, presumably these migrations are VMA backed as well...
I guess so.

>>> Presumably in-kernel could use a vmap or something and the same basic
>>> flow?
>> I think we can achieve the kernel's needs with ZONE_DEVICE and DMA-API support
>> for peer to peer. I'm not sure we need vmap. We need a way to have a scatterlist
>> of MMIO pfns, and ZONE_DEVICE allows that.
> Well, if there is no virtual map then we are back to how do you do
> migrations and other things people seem to want to do on these
> pages. Maybe the loose 'struct page' flow is not for those users.
I was thinking that kernel use cases would disallow migration, similar to how 
non-ODP MRs would work. Either they are short-lived (like an O_DIRECT transfer)
or they can be long-lived but non-migratable (like perhaps a CMB staging buffer).
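
To make that a bit more concrete, here is a rough, untested sketch of the
non-migratable kernel case I have in mind: assuming a driver has already
registered its BAR (say an NVMe CMB) as ZONE_DEVICE via devm_memremap_pages(),
it can build an sg_table straight from the resulting struct pages and hand it
to dma_map_sg() for the peer device. The map_cmb_for_peer() helper and the
cmb_pfn_start/cmb_nr_pages parameters are made up for illustration; whether
dma_map_sg() does the right thing with peer MMIO addresses is of course
exactly the DMA-API gap being discussed here.

#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Illustrative only: scatterlist over a ZONE_DEVICE-backed BAR. */
static int map_cmb_for_peer(struct device *peer_dev,
			    unsigned long cmb_pfn_start,
			    unsigned int cmb_nr_pages,
			    struct sg_table *sgt)
{
	struct scatterlist *sg;
	unsigned int i;
	int ret;

	ret = sg_alloc_table(sgt, cmb_nr_pages, GFP_KERNEL);
	if (ret)
		return ret;

	/* Each MMIO pfn has a struct page because the BAR is ZONE_DEVICE. */
	for_each_sg(sgt->sgl, sg, cmb_nr_pages, i)
		sg_set_page(sg, pfn_to_page(cmb_pfn_start + i), PAGE_SIZE, 0);

	/* The open question: teaching the DMA API to map peer MMIO. */
	ret = dma_map_sg(peer_dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
	if (ret == 0) {
		sg_free_table(sgt);
		return -EIO;
	}

	return 0;
}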

> But I think if you want kGPU or similar then you probably need vmaps
> or something similar to represent the GPU pages in kernel memory.
Right, although sometimes the GPU pages are simply inaccessible to the CPU.
In any case, I haven't thought about kGPU as a use-case.
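Just to spell out what I'd expect the vmap side to look like when the pages
*are* CPU-visible: something like the (again hypothetical, untested) helper
below, where gpu_pages[] holds the ZONE_DEVICE struct pages for the region
and the caller vunmap()s when done. None of this helps for GPU memory the
CPU can't reach at all, as you say.

#include <linux/vmalloc.h>
#include <linux/mm.h>

/* Illustrative only: linear kernel view of CPU-visible GPU/BAR pages. */
static void *vmap_gpu_pages(struct page **gpu_pages, unsigned int nr_pages)
{
	/* Uncached is the conservative choice for MMIO-backed pages. */
	return vmap(gpu_pages, nr_pages, VM_MAP, pgprot_noncached(PAGE_KERNEL));
}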
