Message-ID: <YijrVmzG8/yT9a0f@slm.duckdns.org>
Date: Wed, 9 Mar 2022 08:00:54 -1000
From: Tejun Heo <tj@...nel.org>
To: Tianchen Ding <dtcccc@...ux.alibaba.com>
Cc: Zefan Li <lizefan.x@...edance.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Michael Wang <yun.wang@...ux.alibaba.com>,
Cruz Zhao <cruzzhao@...ux.alibaba.com>,
Masahiro Yamada <masahiroy@...nel.org>,
Nathan Chancellor <nathan@...nel.org>,
Kees Cook <keescook@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
"Gustavo A. R. Silva" <gustavoars@...nel.org>,
Arnd Bergmann <arnd@...db.de>, Miguel Ojeda <ojeda@...nel.org>,
Chris Down <chris@...isdown.name>,
Vipin Sharma <vipinsh@...gle.com>,
Daniel Borkmann <daniel@...earbox.net>,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org
Subject: Re: [RFC PATCH v2 0/4] Introduce group balancer

Hello,

On Wed, Mar 09, 2022 at 04:30:51PM +0800, Tianchen Ding wrote:
> "the sched domains and the load balancer" you mentioned are the ways to
> "balance" tasks on each domains. However, this patchset aims to "group" them
> together to win hot cache and less competition, which is different from load
> balancer. See commit log of the patch 3/4 and this link:
> https://lore.kernel.org/all/11d4c86a-40ef-6ce5-6d08-e9d0bc9b512a@linux.alibaba.com/
I read that but it doesn't make whole lot of sense to me. As Peter noted, we
already have issues with cross NUMA node balancing interacting with in-node
balancing, which likely indicates that it needs more unified solution rather
than more fragmented. I have a hard time seeing how adding yet another layer
on top helps the situation.
> > * If, for some reason, you need more customizable behavior in terms of cpu
> > allocation, which is what cpuset is for, maybe it'd be better to build the
> > load balancer in userspace. That'd fit way better with how cgroup is used
> > in general, and with threaded cgroups it should fit nicely with everything
> > else.
> >
>
> We put the group balancer in kernel space because this new policy does not
> depend on userspace apps. It's a "general" feature.

Well, it's general only for use cases that are happy with the two knobs you
defined for your use case.

> Doing "dynamic cpuset" in userspace may also introduce performance issues,
> since it may need to bind and unbind different cpusets several times, and it
> is too strict (compared with our "soft bind").

My bet is that you're gonna be able to get just about the same benchmark
results with userspace diddling with thread cgroup membership. Why not try
that first? The interface is already there. I have a hard time seeing the
justification for hard-coding this into the kernel at this stage.
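For illustration, the userspace side of that is tiny. Here's a minimal sketch
(the /sys/fs/cgroup mount point and the "grp" threaded cgroup are just
placeholder assumptions for the example): moving a thread is a matter of
writing its TID to the target cgroup's cgroup.threads file.

  #include <stdio.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/syscall.h>

  /* Write @tid into @cgrp/cgroup.threads. @cgrp is expected to be a
   * threaded cgroup (cgroup v2). Returns 0 on success, -1 on failure;
   * write errors surface at fclose() time. */
  static int move_thread_to_cgroup(pid_t tid, const char *cgrp)
  {
          char path[256];
          FILE *f;

          snprintf(path, sizeof(path), "%s/cgroup.threads", cgrp);
          f = fopen(path, "w");
          if (!f)
                  return -1;
          fprintf(f, "%d\n", (int)tid);
          return fclose(f) ? -1 : 0;
  }

  int main(void)
  {
          /* Move the calling thread itself; the path is an assumption. */
          return move_thread_to_cgroup(syscall(SYS_gettid),
                                       "/sys/fs/cgroup/grp") ? 1 : 0;
  }

A small daemon that rewrites these memberships as load shifts would give you
roughly the dynamic, soft grouping you're describing without any new kernel
knobs.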
Thanks.

--
tejun