Message-ID: <20181228084105.GQ16738@dhcp22.suse.cz>
Date: Fri, 28 Dec 2018 09:41:05 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Fengguang Wu <fengguang.wu@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Linux Memory Management List <linux-mm@...ck.org>,
kvm@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Fan Du <fan.du@...el.com>, Yao Yuan <yuan.yao@...el.com>,
Peng Dong <dongx.peng@...el.com>,
Huang Ying <ying.huang@...el.com>,
Liu Jingqi <jingqi.liu@...el.com>,
Dong Eddie <eddie.dong@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Zhang Yi <yi.z.zhang@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>
Subject: Re: [RFC][PATCH v2 00/21] PMEM NUMA node and hotness accounting/migration

On Fri 28-12-18 13:08:06, Wu Fengguang wrote:
[...]
> Optimization: do hot/cold page tracking and migration
> =====================================================
>
> Since PMEM is slower than DRAM, we need to make sure hot pages go to
> DRAM and cold pages stay in PMEM, to get the best out of PMEM and DRAM.
>
> - DRAM=>PMEM cold page migration
>
> It can be done in the kernel page reclaim path, near the anonymous page
> swap-out point. Instead of swapping out, we now have the option to
> migrate cold pages to PMEM NUMA nodes.

OK, this makes sense to me, except that I am not sure this is something
that should be pmem specific. Is there any reason why we shouldn't
migrate pages on memory pressure to other nodes in general? In other
words, rather than paging out, we would migrate over to the next node
that is not under memory pressure. Swapout would be the next level when
the memory is (almost) fully utilized. That wouldn't be pmem specific.
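
Just to illustrate the idea (a completely untested sketch; demote_page_list(),
alloc_demote_page() and the choice of target node are made up here,
migrate_pages() and alloc_pages_node() are the existing interfaces used, and
the reason code is simply borrowed from NUMA balancing for the sketch):

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/migrate.h>
#include <linux/numa.h>

/* Allocate the replacement page on the target node only, no fallback. */
static struct page *alloc_demote_page(struct page *page, unsigned long private)
{
        int nid = (int)private;

        return alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE | __GFP_THISNODE, 0);
}

/*
 * Hand a list of cold pages isolated by reclaim over to migration instead
 * of swapping them out; reclaim would fall back to swap only when this
 * fails or when there is no suitable target node left.
 */
static int demote_page_list(struct list_head *demote_pages, int target_nid)
{
        if (list_empty(demote_pages) || target_nid == NUMA_NO_NODE)
                return 0;

        return migrate_pages(demote_pages, alloc_demote_page, NULL,
                             (unsigned long)target_nid, MIGRATE_ASYNC,
                             MR_NUMA_MISPLACED);
}
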
> User space may also do it, however it cannot act on demand when there is
> memory pressure in DRAM nodes.
>
> - PMEM=>DRAM hot page migration
>
> While LRU can be good enough for identifying cold pages, frequency-based
> accounting can be more suitable for identifying hot pages.
>
> Our design choice is to create a flexible user space daemon to drive the
> accounting and migration, with the necessary kernel support provided by
> this patchset.

We do have NUMA balancing, why can't we rely on it? This, along with the
above, would allow having pmem NUMA nodes (cpuless nodes in fact) without
any special casing, as a natural part of the MM. It would then only be a
matter of configuration to set the appropriate distances to allow a
reasonable allocation fallback strategy.

I haven't looked at the implementation yet, but if you are proposing
special-cased zonelists then this is something CDM (Coherent Device
Memory) was trying to do two years ago, and there was quite some
skepticism about the approach.
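
FWIW, the promotion part of such a user space daemon can already be
expressed with move_pages(2); a rough sketch below (identifying the hot
pages and picking the DRAM target node is of course the hard part and is
only a placeholder here; build with -lnuma):

#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* Move a batch of already identified hot pages of @pid to @dram_node. */
static int promote_pages(pid_t pid, void **pages, unsigned long count,
                         int dram_node)
{
        int *nodes, *status;
        unsigned long i;
        long ret;

        nodes = malloc(count * sizeof(*nodes));
        status = malloc(count * sizeof(*status));
        if (!nodes || !status) {
                free(nodes);
                free(status);
                return -1;
        }

        /* Request every page to be placed on the DRAM node. */
        for (i = 0; i < count; i++)
                nodes[i] = dram_node;

        ret = move_pages(pid, count, pages, nodes, status, MPOL_MF_MOVE);
        if (ret < 0)
                perror("move_pages");

        free(nodes);
        free(status);
        return ret < 0 ? -1 : 0;
}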
--
Michal Hocko
SUSE Labs