Message-ID: <ZUPomvqNwZgDbo51@memverge.com>
Date: Thu, 2 Nov 2023 14:21:14 -0400
From: Gregory Price <gregory.price@...verge.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Gregory Price <gourry.memverge@...il.com>,
linux-kernel@...r.kernel.org, linux-cxl@...r.kernel.org,
linux-mm@...ck.org, ying.huang@...el.com,
akpm@...ux-foundation.org, aneesh.kumar@...ux.ibm.com,
weixugc@...gle.com, apopple@...dia.com, tim.c.chen@...el.com,
dave.hansen@...el.com, shy828301@...il.com,
gregkh@...uxfoundation.org, rafael@...nel.org
Subject: Re: [RFC PATCH v3 0/4] Node Weights and Weighted Interleave
On Fri, Nov 03, 2023 at 10:56:01AM +0100, Michal Hocko wrote:
> On Wed 01-11-23 23:18:59, Gregory Price wrote:
> > On Thu, Nov 02, 2023 at 10:47:33AM +0100, Michal Hocko wrote:
> > > On Wed 01-11-23 12:58:55, Gregory Price wrote:
> > > > Basically consider: `numactl --interleave=all ...`
> > > >
> > > > If `--weights=...`: when a node hotplug event occurs, there is no
> > > > recourse for adding a weight for the new node (it will default to 1).
> > >
> > > Correct and this is what I was asking about in an earlier email. How
> > > much do we really need to consider this setup. Is this something nice to
> > > have or does the nature of the technology requires to be fully dynamic
> > > and expect new nodes coming up at any moment?
> > >
> >
> > Dynamic Capacity is expected to cause a numa node to change size (in
> > number of memory blocks) rather than cause numa nodes to come and go, so
> > maybe handling the full node hotplug is a bit of an overreach.
> >
> > Good call, I'll stop considering this problem for now.
> >
> > > > If the node is removed from the system, I believe (need to validate
> > > > this, but IIRC) the node will be removed from any registered cpusets.
> > > > As a result, that falls down to mempolicy, and the node is removed.
> > >
> > > I do not think we do anything like that. Userspace might decide to
> > > change the numa mask when a node is offlined but I do not think we do
> > > anything like that automagically.
> > >
> >
> > mpol_rebind_policy called by update_tasks_nodemask
> > https://elixir.bootlin.com/linux/latest/source/mm/mempolicy.c#L319
> > https://elixir.bootlin.com/linux/latest/source/kernel/cgroup/cpuset.c#L2016
> >
> > falls down from cpuset_hotplug_workfn:
> > https://elixir.bootlin.com/linux/latest/source/kernel/cgroup/cpuset.c#L3771
>
> Ohh, have missed that. Thanks for the reference. Quite honestly I am not
> sure this code is really a) necessary and b) ever exercised. For the
> former I would argue that offline node could be treated as completely
> depleted one. From the correctness POV it shouldn't make any difference
> and I am rather skeptical it would have performance improvements.

The only thing I'm not sure of is what happens if mempolicy is allowed
to select a node that doesn't exist.  I could hack up a contrived test,
but I don't think that state is reachable at the moment.
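
(For reference, the contrived test I'd hack up is roughly the below --
purely a sketch using libnuma's set_mempolicy() wrapper, and the whole
point of running it would be to see what, if anything, comes back for
the non-existent node bit; bit 63 is just an assumption for a machine
that doesn't have such a node:)

    /* interleave over node 0 plus a node that does not exist here */
    #include <numaif.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned long mask = (1UL << 0) | (1UL << 63);

        /* build with -lnuma; maxnode of 64 covers one unsigned long */
        if (set_mempolicy(MPOL_INTERLEAVE, &mask, 64))
            printf("rejected: %s\n", strerror(errno));
        else
            printf("accepted a mask with a non-existent node\n");

        return 0;
    }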

More importantly, the rebind code is needed for task migration and for
allowing cpusets to be changeable.  From the perspective of mempolicy,
a node being hotplugged and the nodemask being changed by a cgroup
cpuset update look very similar, and both come with the same question:
what do I do about weights when the effective nodemask changes?

This is why I'm leaning toward "cgroups seem about right": we can make
mempolicy ask cgroups for the weight, and also allow mempolicy to carry
its own explicit weight array - which allows for flexibility.

I think this may end up generalizing to a cgroup-wide mempolicy
interface a la cgroup/mempolicy/[policy, nodemask, weights, ...], but
one thing at a time :]
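
To make that concrete, the precedence I'm picturing is roughly the
below.  It's only a sketch with made-up names (pol->weights,
cgroup_has_weights(), cgroup_node_weight()) to show the lookup order,
not proposed code:

    /* weight to use when interleaving lands on @nid */
    static unsigned char mpol_node_weight(struct mempolicy *pol, int nid)
    {
        /* an explicit per-mempolicy weight always wins */
        if (pol->weights && pol->weights[nid])
            return pol->weights[nid];

        /* otherwise defer to the task's cgroup, if it configured one */
        if (cgroup_has_weights(current))
            return cgroup_node_weight(current, nid);

        /* nothing configured anywhere: behave like plain interleave */
        return 1;
    }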
> For the latter, full node offlines are really rare from experience. I
> would be interested in actual real-life use cases which do that.

Yeah, I'm just going to drop this from my requirements list and go OBO.
For areas where I see it may cause an issue (potential for 0-weights) I
will do something simple (initialize weights to 1), but otherwise I
think it's too much to expect from the kernel.
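
(Concretely, "something simple" is just a pass like the below whenever
the effective nodemask changes - weights[] being the same hypothetical
per-policy array as above:)

    /* never let a node that just appeared in the mask carry weight 0 */
    for_each_node_mask(nid, new_nodes)
        if (!weights[nid])
            weights[nid] = 1;
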
> >
> > As a user I would assume it would operate much the same way as other
> > nested cgroups, which is inherit by default (with subsets) or an
> > explicit overwrite that can't exceed the higher level settings.
>
> This would make it rather impractical because a default (everything set
> to 1) would be cast in stone. As mentioned above this is not an
> enforcement limit. So I _think_ that a simple hierarchical rule like
>     cgroup_interleaving_mask(cgroup)
>         interleaving_mask = (cgroup->interleaving_mask) ? : cgroup_interleaving_mask(parent_cgroup(cgroup))
>
> So child cgroups could overwrite parent as they wish. If there is any
> enforcement (like a cpuset) that would filter useable nodes and the
> allocation policy would simply apply weights on those.
>
Sorry yes, this is what I intended, I'm just bad at words.
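
i.e. the rule I was (badly) describing is exactly your lookup; in code
terms something like the below, where cgroup_parent() is real but the
interleave_weights field and the helper name are invented:

    /* walk up until an ancestor has explicitly configured weights */
    static struct cgroup *cgroup_weight_source(struct cgroup *cgrp)
    {
        for (; cgrp; cgrp = cgroup_parent(cgrp))
            if (cgrp->interleave_weights)
                return cgrp;

        return NULL;    /* nothing set anywhere: every weight is 1 */
    }
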
~Gregory