Message-ID: <87o8y5h57d.fsf@yhuang-dev.intel.com>
Date:   Fri, 25 Oct 2019 11:30:46 +0800
From:   "Huang\, Ying" <ying.huang@...el.com>
To:     Dave Hansen <dave.hansen@...el.com>
Cc:     Jonathan Adams <jwadams@...gle.com>, Linux-MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        "Williams\, Dan J" <dan.j.williams@...el.com>,
        "Verma\, Vishal L" <vishal.l.verma@...el.com>,
        Wu Fengguang <fengguang.wu@...el.com>
Subject: Re: [RFC] Memory Tiering

Dave Hansen <dave.hansen@...el.com> writes:

> On 10/23/19 4:11 PM, Jonathan Adams wrote:
>> we would have a bidirectional attachment:
>> 
>> A is marked "move cold pages to" B
>> B is marked "move hot pages to" A
>> C is marked "move cold pages to" D
>> D is marked "move hot pages to" C
>> 
>> By using autonuma for moving PMEM pages back to DRAM, you avoid
>> needing the B->A  & D->C links, at the cost of migrating the pages
>> back synchronously at pagefault time (assuming my understanding of how
>> autonuma works is accurate).
>> 
>> Our approach still lets you have multiple levels of hierarchy for a
>> given socket (you could imagine an "E" node with the same relation to
>> "B" as "B" has to "A"), but doesn't make it easy to represent (say) an
>> "E" which was equally close to all sockets (which I could imagine for
>> something like remote memory on GenZ or what-have-you), since there
>> wouldn't be a single back link; there would need to be something like
>> your autonuma support to achieve that.
>> 
>> Does that make sense?
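
A minimal userspace sketch of the attachment scheme above, assuming a
per-node link table (all names here are hypothetical, not actual kernel
interfaces):

#include <stdio.h>

#define NR_NODES 4
#define NO_NODE  (-1)

/* Per-node tiering links, as in the scheme above: where a node's
 * cold pages are demoted to, and where its hot pages are promoted
 * to.  Field and function names are made up for illustration. */
struct tier_links {
	int demote_to;
	int promote_to;
};

static struct tier_links tier[NR_NODES];

/* Attach a fast (e.g. DRAM) node to a slow (e.g. PMEM) node. */
static void tier_link(int fast, int slow)
{
	tier[fast].demote_to  = slow;	/* A is marked "move cold pages to" B */
	tier[slow].promote_to = fast;	/* B is marked "move hot pages to" A */
}

int main(void)
{
	enum { A, B, C, D };
	int i;

	for (i = 0; i < NR_NODES; i++)
		tier[i].demote_to = tier[i].promote_to = NO_NODE;

	tier_link(A, B);
	tier_link(C, D);

	/* If autonuma handles promotion at fault time instead, the
	 * promote_to back links (B->A, D->C) are not needed. */
	printf("A demotes to %d; B promotes to %d\n",
	       tier[A].demote_to, tier[B].promote_to);
	return 0;
}
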
>
> Yes, it does.  We've actually tried a few other approaches separate from
> autonuma-based ones for promotion.  For some of those, we have a
> promotion path which is separate from the demotion path.
>
> That said, I took a quick look to see what the autonuma behavior was and
> couldn't find anything obvious.  Ying, when moving a slow page due to
> autonuma, do we move it close to the CPU that did the access, or do we
> promote it to the DRAM close to the slow memory where it is now?

Currently in autonuma, the slow page is moved to the node of the CPU that
did the access.  So I think Jonathan's requirement is already covered.
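
In other words, the promotion target is derived from the node of the
faulting CPU.  A toy model of that decision (the cpu_to_node mapping and
function names are made-up stand-ins, not the real kernel code path):

#include <stdio.h>

/* Toy model of the NUMA hinting fault decision described above:
 * the migration target is the node of the CPU that touched the
 * page, not the DRAM node nearest the page's current (slow) node.
 * The 2-CPUs-per-node mapping is invented for the example. */
static int cpu_to_node(int cpu)
{
	return cpu / 2;
}

static int hinting_fault_target(int faulting_cpu)
{
	/* Promote toward the accessor, wherever it is. */
	return cpu_to_node(faulting_cpu);
}

int main(void)
{
	int page_nid = 3;	/* a PMEM page currently on node 3 */
	int cpu = 0;		/* accessed by CPU 0, which sits on node 0 */
	int target = hinting_fault_target(cpu);

	if (target != page_nid)
		printf("migrate page from node %d to node %d\n",
		       page_nid, target);
	return 0;
}
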

Best Regards,
Huang, Ying
