[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YfQIMmbY7nHusQRK@hirez.programming.kicks-ass.net>
Date: Fri, 28 Jan 2022 16:13:54 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Mel Gorman <mgorman@...e.de>, linux-kernel@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>, Rik van Riel <riel@...riel.com>
Subject: Re: [RFC PATCH 1/2] NUMA balancing: fix NUMA topology type for
memory tiering system
On Fri, Jan 28, 2022 at 03:30:50PM +0800, Huang, Ying wrote:
> Srikar Dronamraju <srikar@...ux.vnet.ibm.com> writes:
>
> > * Huang Ying <ying.huang@...el.com> [2022-01-28 10:38:41]:
> >
> >>
> >> One possible fix is to ignore CPU-less nodes when detecting NUMA
> >> topology type in init_numa_topology_type(). That works well for the
> >> example system. Is it good in general for any system with CPU-less
> >> nodes?
> >>
> >
> > A CPUless node at the time online doesn't necessarily mean a CPUless node
> > for the entire boot. For example: On PowerVM Lpars, aka powerpc systems,
> > some of the nodes may start as CPUless nodes and then CPUS may get
> > populated/hotplugged on them.
>
> Got it!
>
> > Hence I am not sure if adding a check for CPUless nodes at node online may
> > work for such systems.
>
> How about something as below?
I'm thinking that might not be enough in that scenario; if we're going
to consistently skip CPU-less nodes (as I really think we should) then
__sched_domains_numa_masks_set() is not sufficient for the hotplug case
since sched_domains_numa_levels and sched_max_numa_distance can also
change.
This means we need to re-do more of sched_init_numa() and possibly
re-alloc some of those arrays etc..
Same for offline ofc.
Powered by blists - more mailing lists