Message-ID: <20160829223526.GI28713@mtj.duckdns.org>
Date: Mon, 29 Aug 2016 18:35:26 -0400
From: Tejun Heo <tj@...nel.org>
To: James Bottomley <James.Bottomley@...senPartnership.com>
Cc: Andy Lutomirski <luto@...capital.net>,
Ingo Molnar <mingo@...hat.com>,
Mike Galbraith <umgwanakikbuti@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
kernel-team@...com,
"open list:CONTROL GROUP (CGROUP)" <cgroups@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Paul Turner <pjt@...gle.com>, Li Zefan <lizefan@...wei.com>,
Linux API <linux-api@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [Documentation] State of CPU controller in cgroup v2

Hello, James.

On Sat, Aug 20, 2016 at 10:34:14PM -0700, James Bottomley wrote:
> I can see that process based is conceptually easier in v2 because you
> begin with a process tree, but it would really be a pity to lose the
> thread based controls we have now and permanently lose the ability to
> create more as we find uses for them. I can't really see how improving
> "common resource domain" is a good tradeoff for this.

Thread-based control inside a namespace is not a different problem
from thread-based control for individual applications, right? And the
problems with using cgroupfs directly for in-process control still
apply the same whether it's system-wide or inside a namespace.
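
To make the path headache concrete, here's a minimal sketch of what
direct cgroupfs thread placement looks like with the v1 "tasks"
interface.  The mount point and hierarchy names below are made-up
examples, which is exactly the problem: the application has no
reliable way to know how it will actually be mounted or scoped.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Move the calling thread into @cgrp by writing its TID to the
     * cgroup v1 "tasks" file.  Error handling trimmed for brevity. */
    static int move_self_to(const char *cgrp)
    {
            char path[256];
            FILE *f;

            snprintf(path, sizeof(path), "%s/tasks", cgrp);
            f = fopen(path, "w");
            if (!f)
                    return -1;
            fprintf(f, "%d\n", (int)syscall(SYS_gettid));
            fclose(f);
            return 0;
    }

    int main(void)
    {
            /* hard-coded path: breaks as soon as the mount point,
             * hierarchy layout or namespace scoping changes */
            return move_self_to("/sys/fs/cgroup/cpu/myapp/hiprio");
    }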

One argument could be that inside a namespace, as the cgroupfs is
already scoped, cgroup path headaches are less of an issue, which is
true; however, that isn't applicable to applications which aren't
scoped in their own namespaces, and we can't scope every binary on
the system. More importantly, a given application can't rely on being
scoped in a certain way. You can craft a custom config for a specific
setup, but that's a horrible way to solve the problem of
in-application hierarchical resource distribution, and that's what
rgroup was all about.

Thanks.

--
tejun