Message-ID: <20190108145302.GY31793@dhcp22.suse.cz>
Date:   Tue, 8 Jan 2019 15:53:02 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Dave Hansen <dave.hansen@...el.com>
Cc:     Fengguang Wu <fengguang.wu@...el.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        kvm@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
        Fan Du <fan.du@...el.com>, Yao Yuan <yuan.yao@...el.com>,
        Peng Dong <dongx.peng@...el.com>,
        Huang Ying <ying.huang@...el.com>,
        Liu Jingqi <jingqi.liu@...el.com>,
        Dong Eddie <eddie.dong@...el.com>,
        Zhang Yi <yi.z.zhang@...ux.intel.com>,
        Dan Williams <dan.j.williams@...el.com>
Subject: Re: [RFC][PATCH v2 00/21] PMEM NUMA node and hotness
 accounting/migration

On Wed 02-01-19 10:12:04, Dave Hansen wrote:
> On 12/28/18 12:41 AM, Michal Hocko wrote:
> >>
> >> It can be done in kernel page reclaim path, near the anonymous page
> >> swap out point. Instead of swapping out, we now have the option to
> >> migrate cold pages to PMEM NUMA nodes.
> > OK, this makes sense to me except I am not sure this is something that
> > should be pmem specific. Is there any reason why we shouldn't migrate
> > pages on memory pressure to other nodes in general? In other words,
> > rather than paging out we would migrate over to the next node that is
> > not under memory pressure. Swapout would be the next level when the
> > memory is (almost) fully utilized. That wouldn't be pmem specific.
> 
> Yeah, we don't want to make this specific to any particular kind of
> memory.  For instance, with lots of pressure on expensive, small
> high-bandwidth memory (HBM), we might want to migrate some HBM contents
> to DRAM.
> 
> We need to decide on whether we want to cause pressure on the
> destination nodes or not, though.  I think you're suggesting that we try
> to look for things under some pressure and totally avoid them.  That
> sounds sane, but I also like the idea of this being somewhat ordered.
> 
> Suppose we have three nodes, A, B, C.  A is fast, B is medium, C is
> slow.  If A and B are "full" and we want to reclaim some of A, do we:
> 
> 1. Migrate A->B, and put pressure on a later B->C migration, or
> 2. Migrate A->C directly
> 
> ?
> 
> Doing A->C is less resource intensive because there's only one migration
> involved.  But, doing A->B/B->C probably makes the app behave better
> because the "A data" is presumably more valuable and is more
> appropriately placed in B rather than being demoted all the way to C.

This is a good question and I do not have a good answer because I lack
experience with such "many levels" systems. If we followed the CPU
cache model then you are right that the fallback should be gradual.
That is more complex implementation-wise, of course. Anyway, I believe
there is a lot of room for experimentation. If this stays an internal
implementation detail without a user API then there is also no promise
on future behavior, so nothing gets carved in stone from day one while
our experience is still limited.
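
To make the two options above concrete, here is a minimal userspace
sketch (not kernel code; the node layout, the per-node pressure flags
and both helpers are made up purely for illustration) of gradual versus
direct demotion from the fast node:

/*
 * Illustrative userspace sketch of the two demotion policies discussed
 * above (gradual A->B->C versus direct A->C). The node list, the
 * pressure model and both helpers are hypothetical, not kernel API.
 */
#include <stdio.h>
#include <stdbool.h>

enum node_id { NODE_A, NODE_B, NODE_C, NR_NODES };	/* fast -> slow */

static const char *node_name[NR_NODES] = {
	"A (fast)", "B (medium)", "C (slow)"
};

/* Pretend per-node state: true means the node is under memory pressure. */
static bool under_pressure[NR_NODES] = { true, true, false };

/*
 * Gradual policy: demote to the next slower node even if it is under
 * pressure; that node in turn demotes its own cold pages further down.
 */
static int demote_gradual(int node)
{
	return node + 1 < NR_NODES ? node + 1 : -1;	/* -1: swap out */
}

/* Direct policy: skip ahead to the first slower node not under pressure. */
static int demote_direct(int node)
{
	for (int t = node + 1; t < NR_NODES; t++)
		if (!under_pressure[t])
			return t;
	return -1;	/* nothing free below us: swap out */
}

int main(void)
{
	int g = demote_gradual(NODE_A);
	int d = demote_direct(NODE_A);

	printf("gradual: A -> %s\n", g >= 0 ? node_name[g] : "swap");
	printf("direct : A -> %s\n", d >= 0 ? node_name[d] : "swap");
	return 0;
}

With both A and B marked as under pressure, the gradual policy still
picks B (pushing the pressure one tier down), while the direct policy
skips straight to C.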

-- 
Michal Hocko
SUSE Labs
