Message-ID: <240c5997-ab7e-8045-dacc-1afdb7c49a0d@linux.alibaba.com>
Date: Sun, 7 Nov 2021 17:33:39 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Dave Hansen <dave.hansen@...el.com>,
"Huang, Ying" <ying.huang@...el.com>
Cc: akpm@...ux-foundation.org, dave.hansen@...ux.intel.com,
ziy@...dia.com, osalvador@...e.de, shy828301@...il.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] mm: migrate: Add new node demotion strategy
On 2021/11/5 23:47, Dave Hansen wrote:
> On 11/4/21 7:51 PM, Huang, Ying wrote:
>>> Let's also try to do it with the existing node_demotion[] data
>>> structure before we go adding more.
>> To avoid cache ping-pong, I guess some kind of per-CPU data structure
>> may be more suitable for interleaving among multiple nodes.
>
> It would probably be better to just find something that's more
> read-heavy. Like, instead of keeping a strict round-robin, just
> randomly select one of the nodes to which you can round-robin.
>
> That will scale naturally without having to worry about caching or fancy
> per-cpu data structures.
>
Thanks for your suggestion. After some thinking, can we change the
node_demotion[] structure as below? That means one source node can be
demoted to multiple target nodes, and we can set up the target node mask
according to the node distance. What do you think? Thanks.
static nodemask_t node_demotion[MAX_NUMNODES] __read_mostly =
{[0 ... MAX_NUMNODES - 1] = NODE_MASK_NONE};
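
Then when selecting a demotion target we could just pick one node at
random from the mask, along the lines of Dave's suggestion, instead of
keeping any round-robin state. Just a rough sketch (not tested), assuming
node_random() is suitable here:

static int next_demotion_node(int node)
{
	const nodemask_t *mask = &node_demotion[node];

	/* No demotion target configured for this node. */
	if (nodes_empty(*mask))
		return NUMA_NO_NODE;

	/* Pick a random node from the allowed target mask. */
	return node_random(mask);
}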