Message-ID: <87sft6rpyy.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Sun, 30 Jan 2022 09:07:17 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Mel Gorman <mgorman@...e.de>, linux-kernel@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>, Rik van Riel <riel@...riel.com>
Subject: Re: [RFC PATCH 1/2] NUMA balancing: fix NUMA topology type for
memory tiering system
Peter Zijlstra <peterz@...radead.org> writes:
> On Fri, Jan 28, 2022 at 03:30:50PM +0800, Huang, Ying wrote:
>> Srikar Dronamraju <srikar@...ux.vnet.ibm.com> writes:
>>
>> > * Huang Ying <ying.huang@...el.com> [2022-01-28 10:38:41]:
>> >
>> >>
>> >> One possible fix is to ignore CPU-less nodes when detecting NUMA
>> >> topology type in init_numa_topology_type(). That works well for the
>> >> example system. Is it good in general for any system with CPU-less
>> >> nodes?
>> >>
>> >
>> > A CPUless node at the time online doesn't necessarily mean a CPUless node
>> > for the entire boot. For example: On PowerVM Lpars, aka powerpc systems,
>> > some of the nodes may start as CPUless nodes and then CPUS may get
>> > populated/hotplugged on them.
>>
>> Got it!
>>
>> > Hence I am not sure if adding a check for CPUless nodes at node online may
>> > work for such systems.
>>
>> How about something as below?
>
> I'm thinking that might not be enough in that scenario; if we're going
> to consistently skip CPU-less nodes (as I really think we should) then
> __sched_domains_numa_masks_set() is not sufficient for the hotplug case
> since sched_domains_numa_levels and sched_max_numa_distance can also
> change.
>
> This means we need to re-do more of sched_init_numa() and possibly
> re-alloc some of those arrays etc..
>
> Same for offline ofc.
Got it! It doesn't make sense to create scheduling domains for CPU-less
nodes. I can work on this after the Chinese New Year holiday week (the
whole of next week). But if anyone wants to work on this, feel free to
do so.
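For the init_numa_topology_type() part, I am thinking of something
along the following lines. This is only an untested sketch to show the
direction (restricting the topology-type detection to nodes that have
CPUs); as you pointed out, the hotplug path would also need to
recompute sched_domains_numa_levels / sched_max_numa_distance and
re-run more of sched_init_numa(), which is not covered here:

static void init_numa_topology_type(void)
{
	int a, b, c, n;

	n = sched_max_numa_distance;

	if (sched_domains_numa_levels <= 2) {
		sched_numa_topology_type = NUMA_DIRECT;
		return;
	}

	/* Only consider nodes with CPUs when classifying the topology. */
	for_each_node_state(a, N_CPU) {
		for_each_node_state(b, N_CPU) {
			/* Find two nodes furthest removed from each other. */
			if (node_distance(a, b) < n)
				continue;

			/* Is there an intermediary node between a and b? */
			for_each_node_state(c, N_CPU) {
				if (node_distance(a, c) < n &&
				    node_distance(b, c) < n) {
					sched_numa_topology_type =
							NUMA_GLUELESS_MESH;
					return;
				}
			}

			sched_numa_topology_type = NUMA_BACKPLANE;
			return;
		}
	}
}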
Best Regards,
Huang, Ying