Message-Id: <20220310183951.cb713c6ae926ea6ea8489a71@linux-foundation.org>
Date: Thu, 10 Mar 2022 18:39:51 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Oscar Salvador <osalvador@...e.de>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>,
"Huang, Ying" <ying.huang@...el.com>,
Abhishek Goel <huntbag@...ux.vnet.ibm.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm: Only re-generate demotion targets when a numa
node changes its N_CPU state

On Thu, 10 Mar 2022 13:07:49 +0100 Oscar Salvador <osalvador@...e.de> wrote:

> Abhishek reported that after patch [1], hotplug operations are
> taking ~double the expected time. [2]
>
> The reason behind this is that the CPU callbacks that migrate_on_reclaim_init()
> sets up always call set_migration_target_nodes() whenever a CPU is brought
> up or down.
> But we only care about numa nodes going from having cpus to becoming
> cpuless, and vice versa, as that influences the demotion_target order.
>
> We do already have two CPU callbacks (vmstat_cpu_online() and vmstat_cpu_dead())
> that check exactly that, so get rid of the CPU callbacks in
> migrate_on_reclaim_init() and only call set_migration_target_nodes() from
> vmstat_cpu_{dead,online}() whenever a numa node changes its N_CPU state.
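
(Noting for anyone skimming the thread: the change boils down to something
like the sketch below. This is illustrative only, not the committed diff;
the function names follow mm/vmstat.c and mm/migrate.c of that era, and the
exact guards in the real patch may differ. The point is that the vmstat
callbacks already detect the node's N_CPU transition, so the demotion-target
rebuild only runs on that edge rather than on every CPU up/down.)

static int vmstat_cpu_online(unsigned int cpu)
{
	refresh_zone_stat_thresholds();

	if (!node_state(cpu_to_node(cpu), N_CPU)) {
		/* First CPU of this node: N_CPU flips, so the
		 * demotion order may change. */
		node_set_state(cpu_to_node(cpu), N_CPU);
		set_migration_target_nodes();
	}
	return 0;
}

static int vmstat_cpu_dead(unsigned int cpu)
{
	int node = cpu_to_node(cpu);

	refresh_zone_stat_thresholds();
	if (!cpumask_empty(cpumask_of_node(node)))
		return 0;

	/* Last CPU of this node went away: node is now cpuless. */
	node_clear_state(node, N_CPU);
	set_migration_target_nodes();
	return 0;
}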

What I'm not getting here (as so often happens) is a sense of how badly
this affects our users. Does anyone actually hotplug frequently enough
to care?

If "yes" then I'm inclined to merge this up for 5.18 with a cc:stable.
Not for 5.17 because it's late and things are looking rather creaky
already.

And I'll add a

Fixes: 884a6e5d1f93b ("mm/migrate: update node demotion order on hotplug events")

which is that patch's fourth such bouquet.