Message-ID: <875y1c2lyo.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Wed, 06 Dec 2023 08:50:23 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Gregory Price <gregory.price@...verge.com>
Cc: Michal Hocko <mhocko@...e.com>, "tj@...nel.org" <tj@...nel.org>,
"John Groves" <john@...alactic.com>,
Gregory Price <gourry.memverge@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-cxl@...r.kernel.org" <linux-cxl@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"lizefan.x@...edance.com" <lizefan.x@...edance.com>,
"hannes@...xchg.org" <hannes@...xchg.org>,
"corbet@....net" <corbet@....net>,
"roman.gushchin@...ux.dev" <roman.gushchin@...ux.dev>,
"shakeelb@...gle.com" <shakeelb@...gle.com>,
"muchun.song@...ux.dev" <muchun.song@...ux.dev>,
"jgroves@...ron.com" <jgroves@...ron.com>
Subject: Re: [RFC PATCH v4 0/3] memcg weighted interleave mempolicy control
Gregory Price <gregory.price@...verge.com> writes:
> On Tue, Dec 05, 2023 at 05:01:51PM +0800, Huang, Ying wrote:
>> Gregory Price <gregory.price@...verge.com> writes:
>>
>> > On Mon, Dec 04, 2023 at 04:19:02PM +0800, Huang, Ying wrote:
>> >> Gregory Price <gregory.price@...verge.com> writes:
>> >>
>> >> > If the structure is built as a matrix of (cpu_node,mem_nodes),
>> >> > the you can also optimize based on the node the task is running on.
>> >>
>> >> The matrix stuff makes the situation complex.  If people do need
>> >> something like that, they can just use set_mempolicy2() with
>> >> user-specified weights.  I still believe in "make simple stuff
>> >> simple, and complex stuff possible".
>> >>
>> >
>> > I don't think it's particularly complex, since we already have a
>> > distance matrix for numa nodes:
>> >
>> > available: 2 nodes (0-1)
>> > ... snip ...
>> > node distances:
>> > node 0 1
>> > 0: 10 21
>> > 1: 21 10
>> >
>> > This would follow the same thing, just adjustable for bandwidth.
>>
>> We add complexity based on real requirements, not because there's
>> something similar already.
>>
>> > I personally find the (src,dst) matrix very important for flexibility.
>>
>> With set_mempolicy2(), I think we have the needed flexibility for
>> users who need the complexity.
>>
>> > But if there is particular pushback against it, having a one dimensional
>> > array is better than not having it, so I will take what I can get.
>>
>> TBH, I don't think that we really need that, especially given we will
>> have set_mempolicy2().
>>
>
> From a complexity standpoint, it is exactly as complex as the hardware
> configuration itself: each socket has a different view of the memory
> topology. If you have a non-homogeneous memory configuration (e.g. a
> different number of CXL expanders on one socket than the other), a flat
> array of weights has no way of capturing this hardware configuration.
One important task of the software is to hide the complexity of the
hardware from the users, or at least to provide that option.  We should
only add complexity based on real requirements.
> That makes the feature significantly less useful. In fact, it makes the
> feature equivalent to set_mempolicy2 - except that weights could be
> changed at runtime from outside a process.
>
>
> A matrix resolves one very specific use case: task migration
>
>
> set_mempolicy2 is not sufficient to solve this. There is presently no
> way for an external task to change the mempolicy of an existing task.
> That means a task must become "migration aware" to use weighting in the
> context of containers where migrations are likely.
>
> Two things to consider: A task...
> a) has no way of knowing a migration occurred
> b) may not have visibility of numa nodes outside its cpusets prior to
> a migration - making it unlikely/not possible for them to set
> weights correctly in the event a migration occurs.
>
> If a server with 2 sockets is set up non-homogeneously (different amount
> of CXL memory expanders on each socket), then the effective bandwidth
> distribution between sockets will be different.
>
> If a container is migrated between sockets in this situation, then tasks
> with manually set weights, or if global weights are a single array, will
> have poor memory distributions in relation to the new view of the system.
>
> Requiring the global settings to be an array basically requires global
> weights to be sub-optimal for any use case that is not explicitly a
> single workload that consumes all the cores on the system.
>
> If the system provides a matrix, then the global settings can be optimal
> and re-weighting in response to migration happens cleanly and transparently.
For these complex requirements, we will have process_set_mempolicy2().
I think that it's even more flexible than the global matrix.
--
Best Regards,
Huang, Ying