Message-ID: <20170719022537.GA6911@redhat.com>
Date:   Tue, 18 Jul 2017 22:25:38 -0400
From:   Jerome Glisse <jglisse@...hat.com>
To:     Bob Liu <liubo95@...wei.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        John Hubbard <jhubbard@...dia.com>,
        David Nellans <dnellans@...dia.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Balbir Singh <bsingharora@...il.com>,
        Michal Hocko <mhocko@...nel.org>
Subject: Re: [PATCH 0/6] Cache coherent device memory (CDM) with HMM v5

On Wed, Jul 19, 2017 at 09:46:10AM +0800, Bob Liu wrote:
> On 2017/7/18 23:38, Jerome Glisse wrote:
> > On Tue, Jul 18, 2017 at 11:26:51AM +0800, Bob Liu wrote:
> >> On 2017/7/14 5:15, Jérôme Glisse wrote:
> >>> Sorry, I made a horrible mistake on names in v4; I completely
> >>> misunderstood the suggestion. So here I repost with proper naming.
> >>> This is the only change since v3. Again, sorry about the noise
> >>> with v4.
> >>>
> >>> Changes since v4:
> >>>   - s/DEVICE_HOST/DEVICE_PUBLIC
> >>>
> >>> Git tree:
> >>> https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-cdm-v5
> >>>
> >>>
> >>> Cache coherent device memory applies to architectures with a system
> >>> bus like CAPI or CCIX. Devices connected to such a system bus can
> >>> expose their memory to the system and allow cache coherent access
> >>> to it from the CPU.
> >>>
> >>> Even if, for all intents and purposes, device memory behaves like
> >>> regular memory, we still want to manage it in isolation from regular
> >>> memory. There are several reasons for that. First and foremost, this
> >>> memory is less reliable than regular memory: if the device hangs
> >>> because of invalid commands, we can lose access to device memory.
> >>> Second, CPU access to this memory is expected to be slower than to
> >>> regular memory. Third, having random memory on the device means that
> >>> some of the bus bandwidth would not be available to the device but
> >>> would instead be consumed by CPU access.
> >>>
> >>> This is why we want to manage such memory in isolation from regular
> >>> memory. The kernel should not try to use this memory even as a last
> >>> resort when running out of memory, at least for now.
> >>>
> >>
> >> I think setting a very large node distance for "Cache Coherent Device
> >> Memory" may be an easier way to address these concerns.
> > 
> > Such an approach was discussed at length in the past, see links below.
> > Outcome of that discussion:
> >   - CPU-less nodes are bad
> >   - device memory can be unreliable (device hang) with no way for the
> >     application to find out
> 
> Device memory can also be more reliable if high-quality (and expensive) memory is used.

Even ECC memory does not compensate for a device hang. When your GPU locks
up you might need to re-init the GPU from scratch, after which the content
of the device memory is unreliable. During init the device memory might not
get a proper clock or proper refresh cycles and is thus susceptible to
corruption.

> 
> >   - application and driver NUMA madvise/mbind/mempolicy ... can conflict
> >     with each other, and there is no way for the kernel to figure out
> >     which should apply
> >   - NUMA as it is now would not work, as we need more isolation than
> >     what a large node distance would provide
> > 
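To make the mempolicy-conflict point above concrete, here is a rough
userspace sketch. It is not from the patch set; it assumes, purely for the
sake of argument, that the device memory showed up as a plain NUMA node 1.
The application binds a buffer to that node with mbind(); nothing ties this
policy to whatever placement the device driver wants for the same range,
and the kernel has no way to arbitrate between the two:

/*
 * Hypothetical example only: "cdm_node = 1" and the idea that CDM memory
 * is a regular NUMA node are assumptions, not what this series does.
 * Build with: gcc -o cdm-bind cdm-bind.c -lnuma
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <numa.h>       /* numa_available(), numa_distance() */
#include <numaif.h>     /* mbind(), MPOL_BIND */

int main(void)
{
        if (numa_available() < 0)
                return 1;

        int cdm_node = 1;       /* assumption: device memory is node 1 */
        printf("distance(0, %d) = %d\n", cdm_node,
               numa_distance(0, cdm_node));

        size_t len = 1 << 20;
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
                return 1;

        /* Application policy: keep this range on the device node.  A
         * driver that also wants to decide placement for the same range
         * has no way to know about, or override, this policy. */
        unsigned long nodemask = 1UL << cdm_node;
        if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0))
                perror("mbind");

        memset(buf, 0, len);    /* faults the pages in on the bound node */
        munmap(buf, len);
        return 0;
}
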
> 
> Agree, that's where we need to spend time.
> 
> One drawback of HMM-CDM I'm worried about is one more extra copy.
> In the cache coherent case, the CPU can write data to device memory
> directly and then start the FPGA/GPU/other accelerators.

There is not necessarily an extra copy. The device driver can pre-populate
a virtual address range of a process with device memory. A device page
fault can directly allocate device memory. Once allocated, CPU accesses
will use the device memory.
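
Purely as an illustration (the series defines no userspace API, so the
device node, ioctl numbers and struct below are made-up stand-ins for
whatever a given driver would expose), the no-extra-copy flow could look
like this from the application side:

/*
 * Hypothetical sketch: /dev/fake-accel0, FAKE_DEV_ALLOC and
 * FAKE_DEV_LAUNCH do not exist anywhere; they stand in for a
 * driver-specific interface.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Made-up ioctl asking the driver to back a virtual address range of
 * this process with device memory. */
struct fake_dev_alloc {
        uint64_t addr;
        uint64_t size;
};
#define FAKE_DEV_ALLOC  _IOW('F', 1, struct fake_dev_alloc)
#define FAKE_DEV_LAUNCH _IO('F', 2)

int main(void)
{
        int fd = open("/dev/fake-accel0", O_RDWR);      /* hypothetical */
        if (fd < 0)
                return 1;

        size_t size = 4 << 20;
        void *buf;
        if (posix_memalign(&buf, 4096, size))
                return 1;

        /* Ask the driver to back this range with device memory (or let
         * a device page fault allocate it lazily on first access). */
        struct fake_dev_alloc req = { (uintptr_t)buf, size };
        if (ioctl(fd, FAKE_DEV_ALLOC, &req) < 0)
                return 1;

        /* Cache coherent: the CPU writes the input straight into device
         * memory, no staging buffer and no extra copy ... */
        memset(buf, 0x42, size);

        /* ... and the accelerator then works on the very same pages. */
        ioctl(fd, FAKE_DEV_LAUNCH, 0);

        free(buf);
        close(fd);
        return 0;
}

Whether the backing happens eagerly through such an ioctl or lazily through
a device page fault, the CPU store and the accelerator touch the same
physical pages, which is why no staging copy is needed.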

There is a plan to allow other allocations (CPU page fault, file cache, ...)
to also use device memory directly. We just don't know what kind of
userspace API will fit best for that, so at first it might be hidden behind
a device-driver-specific ioctl.

Jérôme
