Message-ID: <CAEXW_YSS0ex8xK7t2R7c1jiE4eNbwxdwP2uyGPDK78YAaYQr5A@mail.gmail.com>
Date: Mon, 5 Apr 2021 14:46:09 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: Tejun Heo <tj@...nel.org>, Hao Luo <haoluo@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
"Hyser,Chris" <chris.hyser@...cle.com>,
Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Valentin Schneider <valentin.schneider@....com>,
Mel Gorman <mgorman@...e.de>,
LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Michal Koutný <mkoutny@...e.com>,
Christian Brauner <christian.brauner@...ntu.com>,
Zefan Li <lizefan.x@...edance.com>
Subject: Re: [PATCH 0/9] sched: Core scheduling interfaces
Hi TJ, Peter,
On Sun, Apr 4, 2021 at 7:39 PM Tejun Heo <tj@...nel.org> wrote:
>
> cc'ing Michal and Christian who've been spending some time on cgroup
> interface issues recently and Li Zefan for cpuset.
>
> On Thu, Apr 01, 2021 at 03:10:12PM +0200, Peter Zijlstra wrote:
> > The cgroup interface now uses a 'core_sched' file, which still takes 0,1. It is
> > however changed such that you can have nested tags. Then, for any given task, the
> > first parent with a cookie is the effective one. The rationale is that this way
> > you can delegate subtrees and still allow them some control over grouping.
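Just to make sure I'm reading the nesting rule right, here is a minimal,
purely illustrative sketch of the "first tagged ancestor wins" lookup
described above (the struct and function names are made up for
illustration, not the actual code in this series):

struct tg {
	struct tg		*parent;
	unsigned long		core_cookie;	/* 0 means "no tag" */
};

/* Walk towards the root; the nearest tagged ancestor decides. */
static unsigned long effective_cookie(struct tg *tg)
{
	for (; tg; tg = tg->parent)
		if (tg->core_cookie)
			return tg->core_cookie;

	return 0;	/* untagged: may share a core with anything */
}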
>
> I find it difficult to like the proposed interface from the name (the term
> "core" is really confusing given how the word tends to be used internally)
> to the semantics (it isn't like anything else) and even the functionality
> (we're gonna have fixed processors at some point, right?).
>
> Here are some preliminary thoughts:
>
> * Are both prctl and cgroup based interfaces really necessary? I could be
> being naive but given that we're (hopefully) working around hardware
> deficiencies which will go away in time, I think there's a strong case for
> minimizing at least the interface to the bare minimum.
I don't think these issues are going away, as new SMT-related exploits
keep coming out. Further, core scheduling is not only about SMT
vulnerabilities - there are other usecases as well (such as improving VM
performance by preventing vCPU threads from sharing a core with
unrelated tasks).
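To make the VM usecase concrete: the VMM tags all of a guest's vCPU
threads with one cookie so they only ever share an SMT core with each
other. A rough userspace sketch with a per-process interface could look
like the below (the PR_SCHED_CORE* names and values are placeholders for
whatever this series ends up exposing, not a final ABI):

#include <sys/prctl.h>
#include <pthread.h>
#include <stdio.h>

/* Placeholder definitions, for illustration only. */
#ifndef PR_SCHED_CORE
#define PR_SCHED_CORE			62
#define PR_SCHED_CORE_CREATE		1	/* create a new cookie */
#endif
#define SCOPE_THREAD_GROUP		1	/* apply to the whole process */

static void *vcpu_thread(void *arg)
{
	/* ... KVM_RUN loop for one vCPU ... */
	return NULL;
}

int main(void)
{
	pthread_t vcpu[4];
	int i;

	/*
	 * Tag the whole VMM process with a fresh cookie; the vCPU threads
	 * created below inherit it, so this guest's vCPUs only ever share
	 * an SMT core with each other.
	 */
	if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0, SCOPE_THREAD_GROUP, 0))
		perror("prctl(PR_SCHED_CORE)");

	for (i = 0; i < 4; i++)
		pthread_create(&vcpu[i], NULL, vcpu_thread, NULL);
	for (i = 0; i < 4; i++)
		pthread_join(vcpu[i], NULL);

	return 0;
}

With the cgroup interface, the equivalent would be a single write to the
VM's cgroup instead of a per-process call like this.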
>
> Given how cgroups are set up (membership operations happening only for
> seeding, especially with the new clone interface), it isn't too difficult
> to synchronize process tree and cgroup hierarchy where it matters - ie.
> given the right per-process level interface, restricting configuration for
> a cgroup sub-hierarchy may not need any cgroup involvement at all. This
> also nicely gets rid of the interaction between prctl and cgroup bits.
>
> * If we *have* to have a cgroup interface, I wonder whether this would fit a
> lot better as a part of cpuset. If you squint just right, this can be
> viewed as some dynamic form of cpuset. Implementation-wise, it probably
> won't integrate with the rest but I think the feature will be less jarring
> as a part of cpuset, which already is a bit of a kitchen sink anyway.
I think both interfaces are important for different reasons. Could you
take a look at the initial thread I started a few months ago? I tried to
elaborate on the usecases in detail:
http://lore.kernel.org/r/20200822030155.GA414063@google.com
Also, in ChromeOS we can't use cgroups for this purpose: the cgroup
hierarchy does not fit well with the threads we are tagging. We also use
cgroup v1, and since cgroups cannot overlap, using them for this is
cumbersome if not impossible. That said, a cgroup core-scheduling
interface is still useful for people running containers who want to
core-schedule each container separately (+Hao Luo can elaborate more
on that, but I did describe it in the link above).
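For the container case, the cgroup knob makes the tagging a single write
per container, something like the below (the 'core_sched' file name is
from this series; the exact controller path is my assumption):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Tag every task in one container's cgroup with a shared cookie. */
static int tag_container(const char *cgrp)
{
	char path[256];
	int fd, ret;

	snprintf(path, sizeof(path), "%s/core_sched", cgrp);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;

	/* "1" => all tasks in this subtree core-schedule together. */
	ret = (write(fd, "1", 1) == 1) ? 0 : -1;
	close(fd);
	return ret;
}

int main(void)
{
	/* Hypothetical container cgroup paths. */
	tag_container("/sys/fs/cgroup/containerA");
	tag_container("/sys/fs/cgroup/containerB");
	return 0;
}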
> > The cgroup thing also '(ab)uses' cgroup_mutex for serialization because it
> > needs to ensure continuity between ss->can_attach() and ss->attach() for the
> > memory allocation. If the prctl() were allowed to interleave it might steal the
> > memory.
> >
> > Using cgroup_mutex feels icky, but is not without precedent;
> > kernel/bpf/cgroup.c does the same thing afaict.
> >
> > TJ, can you please have a look at this?
>
> Yeah, using cgroup_mutex for stabilizing cgroup hierarchy for consecutive
> operations is fine. It might be worthwhile to break that out into a proper
> interface but that's the least of concerns here.
>
> Can someone point me to a realistic and concrete usage scenario for this
> feature?
Yeah, it's at http://lore.kernel.org/r/20200822030155.GA414063@google.com
as mentioned above. Let me know if you need any more details about the
usecases.
About the file name, how about kernel/sched/smt.c? That definitely
provides more information than 'core_sched.c'.
Thanks,
- Joel