Date:   Fri, 16 Jun 2017 07:22:05 +0000
From:   "Bridgman, John" <John.Bridgman@....com>
To:     Jérôme Glisse <jglisse@...hat.com>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>
CC:     Dan Williams <dan.j.williams@...el.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        John Hubbard <jhubbard@...dia.com>,
        "Sander, Ben" <ben.sander@....com>,
        "Kuehling, Felix" <Felix.Kuehling@....com>
Subject: RE: [HMM 00/15] HMM (Heterogeneous Memory Management) v23

Hi Jerome, 

I'm just getting back to this; sorry for the late responses. 

Your description of HMM talks about blocking CPU accesses when a page has been migrated to device memory, and you treat that as a "given" in the HMM design. Other than BAR limits, coherency between CPU and device caches, and performance on read-intensive CPU accesses to device memory, are there any other reasons for this?
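
For context on how I'm reading the design: when a page migrates to device memory, its CPU page-table entry is replaced with a special swap entry, so any CPU touch faults into a driver callback that migrates the page back. A minimal sketch of that fault path as I understand it - my_devmem_fault() and my_migrate_to_system_ram() are hypothetical names, not the actual HMM callbacks:

#include <linux/mm.h>
#include <linux/gfp.h>

/*
 * Sketch only: CPU fault on a page currently resident in VRAM.
 * The driver migrates it back to system memory before the access
 * can proceed.
 */
static int my_devmem_fault(struct vm_area_struct *vma, unsigned long addr,
                           struct page *device_page)
{
        struct page *sys_page;

        /* Allocate a system-memory page to migrate the data into. */
        sys_page = alloc_page(GFP_HIGHUSER_MOVABLE);
        if (!sys_page)
                return VM_FAULT_OOM;

        /*
         * DMA the contents back from VRAM, then replace the special
         * swap entry in the CPU page table with a normal PTE for
         * sys_page (hypothetical helper).
         */
        if (my_migrate_to_system_ram(vma, addr, device_page, sys_page)) {
                __free_page(sys_page);
                return VM_FAULT_SIGBUS;
        }

        return 0; /* access retries and now hits system memory */
}

Everything below is really asking whether that callback *must* migrate, or whether it could sometimes just map the page where it is.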

The reason I'm asking is that we make fairly heavy use of large-BAR support, which allows the CPU to directly access all of the device memory on each of the GPUs (albeit without cache coherency), and there are some cases where allowing CPU access to a page in device memory appears more efficient than constantly migrating it back and forth.

Migrating the page back and forth between device and system memory appears at first glance to provide three benefits (albeit at a cost):

1. BAR limit - this is kind of a no-brainer, in the sense that if the CPU cannot access the VRAM then you have no choice but to migrate the page

2. coherency - having the CPU fault when a page is in device memory (or vice versa) gives you an event that can be used to flush caches on one device before handing ownership (from a cache perspective) to the other - but at first glance you don't actually have to move the page to get that benefit

3. performance - CPU writes to device memory can be pretty fast, since the transfers can be "fire and forget", but reads are always going to be slow because of their round-trip nature... that said, the tradeoff between access performance and migration overhead is more of a heuristic than a black-and-white decision (see the sketch after this list)
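
To make point 3 concrete, here is the shape of heuristic I have in mind - purely illustrative, nothing like this exists in the patchset: track CPU read vs. write faults on a device-resident page and only migrate it back once reads dominate, since posted writes over the BAR are cheap but every read stalls for the round trip.

#include <linux/types.h>

/*
 * Hypothetical policy: leave the page in VRAM while CPU accesses are
 * write-mostly, migrate back to system RAM once reads dominate. The
 * counters and threshold are made up for illustration.
 */
struct my_page_stats {
        unsigned int cpu_reads;         /* CPU read faults on this page */
        unsigned int cpu_writes;        /* CPU write faults on this page */
};

#define MY_READ_MIGRATE_THRESHOLD 4

static bool my_should_migrate_to_ram(const struct my_page_stats *stats)
{
        /*
         * CPU writes to VRAM can be posted ("fire and forget"), so
         * they are tolerable over the BAR; CPU reads stall for the
         * full round trip, so a read-heavy pattern is what pays for
         * the cost of a migration.
         */
        return stats->cpu_reads >= MY_READ_MIGRATE_THRESHOLD &&
               stats->cpu_reads > stats->cpu_writes;
}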

Do you see any HMM-related problems, in principle, with optionally leaving a page in device memory while the CPU is accessing it, assuming that only one CPU/device "owns" the page from a cache POV at any given time?
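
To spell out the alternative I'm proposing: the CPU fault would only hand off cache ownership and then map the page's BAR address directly, rather than migrating. Rough sketch - my_gpu_flush_and_invalidate() and my_page_to_bar_pfn() are hypothetical driver helpers, while vm_insert_pfn_prot() and pgprot_writecombine() are the existing kernel interfaces I'd expect to use:

#include <linux/mm.h>

/*
 * Sketch only: handle a CPU fault by mapping the page where it sits
 * in VRAM instead of migrating it, after taking cache ownership away
 * from the GPU.
 */
static int my_handoff_fault(struct vm_area_struct *vma, unsigned long addr,
                            struct page *device_page)
{
        int ret;

        /* Make the GPU give up cache ownership of this page first. */
        my_gpu_flush_and_invalidate(device_page);

        /*
         * Map the page's location in the large BAR directly into the
         * CPU page tables; write-combined, since CPU<->VRAM has no
         * cache coherency.
         */
        ret = vm_insert_pfn_prot(vma, addr, my_page_to_bar_pfn(device_page),
                                 pgprot_writecombine(vma->vm_page_prot));
        return ret ? VM_FAULT_SIGBUS : VM_FAULT_NOPAGE;
}

The page never moves; only the notion of which side's caches may hold it changes hands.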

Thanks,
John

(btw apologies for what looks like top-posting - I tried inserting the questions in a few different places in your patches, but it ended up messy each time)

>-----Original Message-----
>From: owner-linux-mm@...ck.org [mailto:owner-linux-mm@...ck.org] On
>Behalf Of Jérôme Glisse
>Sent: Wednesday, May 24, 2017 1:20 PM
>To: akpm@...ux-foundation.org; linux-kernel@...r.kernel.org; linux-
>mm@...ck.org
>Cc: Dan Williams; Kirill A . Shutemov; John Hubbard; Jérôme Glisse
>Subject: [HMM 00/15] HMM (Heterogeneous Memory Management) v23
>
>Patchset is on top of git://git.cmpxchg.org/linux-mmotm.git so I test the
>same kernel as the kbuild system; git branch:
>
>https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-v23
>
>The change since v22 is the use of a static key for the special ZONE_DEVICE
>case in put_page(), plus a build fix for architectures with no MMU.
>
>Everything else is the same. Below is the long description of what HMM is
>about and why. At the end of this email I briefly describe each patch and
>suggest reviewers for each of them.
