Message-ID: <Yv0aMqMIafD7cOQX@slm.duckdns.org>
Date: Wed, 17 Aug 2022 06:41:22 -1000
From: Tejun Heo <tj@...nel.org>
To: Michal Koutný <mkoutny@...e.com>
Cc: Vasily Averin <vvs@...nvz.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
gregkh@...uxfoundation.org, hannes@...xchg.org, kernel@...nvz.org,
linux-kernel@...r.kernel.org, mhocko@...e.com, shakeelb@...gle.com,
songmuchun@...edance.com, viro@...iv.linux.org.uk
Subject: Re: [RFC PATCH] memcg: adjust memcg for new cgroup allocations
Hello,
On Wed, Aug 17, 2022 at 11:17:28AM +0200, Michal Koutný wrote:
> On Wed, Aug 17, 2022 at 10:42:40AM +0300, Vasily Averin <vvs@...nvz.org> wrote:
> > However, now we want to enable accounting for some other cgroup-related
> > resources allocated from cgroup_mkdir. We would like to guarantee that
> > all new accounted allocations will be charged to the same memory cgroup.
>
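
For concreteness, the pattern being described is roughly the following.
This is only a sketch: mem_cgroup_from_cgroup() is the helper from the
RFC, set_active_memcg() is the existing scoping interface that
GFP_KERNEL_ACCOUNT allocations are charged against, and
do_mkdir_allocations() is a made-up stand-in for the allocations done
during cgroup_mkdir:

	static int cgroup_mkdir_charged(struct cgroup *parent, const char *name)
	{
		struct mem_cgroup *memcg, *old_memcg;
		int ret;

		/* charge everything below to the parent cgroup's memcg */
		memcg = mem_cgroup_from_cgroup(parent);
		old_memcg = set_active_memcg(memcg);

		/* all GFP_KERNEL_ACCOUNT allocations here hit @memcg */
		ret = do_mkdir_allocations(parent, name);

		set_active_memcg(old_memcg);
		mem_cgroup_put(memcg);
		return ret;
	}
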
> Here's my point -- the change in the referenced patch applied only to
> memory controller hierarchies. This extension applies to any hierarchy
> that can create groups, i.e., also to a hierarchy without the memory
> controller. There, mem_cgroup_from_cgroup falls back to the root memcg
> (on a different hierarchy).
>
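
That fallback comes from the effective-css lookup. A minimal sketch of
the helper along the lines of the RFC (not the final code):
cgroup_get_e_css() walks up the hierarchy looking for a memory css and,
when the memory controller isn't attached to that hierarchy at all,
returns the root (init) css -- hence the root memcg on a different
hierarchy:

	static struct mem_cgroup *mem_cgroup_from_cgroup(struct cgroup *cgrp)
	{
		struct cgroup_subsys_state *css;

		/*
		 * Takes a reference. When no memory css exists on
		 * @cgrp's hierarchy, this returns init_css_set's
		 * memory css, i.e. the root memcg.
		 */
		css = cgroup_get_e_css(cgrp, &memory_cgrp_subsys);
		return mem_cgroup_from_css(css);
	}
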
> If the purpose is to prevent unlimited creation of cgroup objects, the
> root memcg is in principle unlimited, so there it's just for accounting.
>
> But I understand the purpose is to have everything under one roof,
> unless the object's lifetime is not bound to that owning memcg. Should
> memory-less hierarchies be treated specially?
At least from my POV, as long as cgroup1 is not being regressed, we want to
make decisions which make the best long-term sense. We can surely
accommodate cgroup1 as long as the added complexity is minimal, but the bar
is pretty high there. cgroup1 has been in maintenance mode for years now,
and even the basic delegation model isn't well established there, so if we
end up accounting everything in the root cgroup for most cgroup1
hierarchies, that sounds fine to me.
Thanks.
--
tejun