Message-ID: <YpdkHrbT/xkdx+Qb@dhcp22.suse.cz>
Date: Wed, 1 Jun 2022 15:05:34 +0200
From: Michal Hocko <mhocko@...e.com>
To: Michal Koutný <mkoutny@...e.com>,
Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Vasily Averin <vvs@...nvz.org>,
Andrew Morton <akpm@...ux-foundation.org>, kernel@...nvz.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Shakeel Butt <shakeelb@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Muchun Song <songmuchun@...edance.com>, cgroups@...r.kernel.org
Subject: Re: [PATCH mm v3 0/9] memcg: accounting for objects allocated by
mkdir cgroup
On Wed 01-06-22 11:32:26, Michal Hocko wrote:
> On Wed 01-06-22 11:15:43, Michal Koutny wrote:
> > On Wed, Jun 01, 2022 at 06:43:27AM +0300, Vasily Averin <vvs@...nvz.org> wrote:
> > > CT-901 /# cat /sys/fs/cgroup/memory/cgroup.subgroups_limit
> > > 512
> > > CT-901 /# echo 3333 > /sys/fs/cgroup/memory/cgroup.subgroups_limit
> > > -bash: echo: write error: Operation not permitted
> > > CT-901 /# echo 333 > /sys/fs/cgroup/memory/cgroup.subgroups_limit
> > > -bash: echo: write error: Operation not permitted
> > >
> > > I doubt this approach can be accepted upstream; however, for OpenVZ
> > > something like this is mandatory because it is much better
> > > than nothing.
> >
> > Is this customization of yours something like cgroup.max.descendants on
> > the unified (v2) hierarchy? (Just curious.)
> >
> > (It can be made inaccessible from within the subtree either with cgroup
> > ns or good old FS permissions.)
>
> So we already do have a limit to prevent somebody from creating a runaway
> number of cgroups. Nice! I was not aware of that and it looks like the
> right thing to do. So do we need more control and accounting than this?
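
For reference, the v2 knob works roughly as follows (a quick sketch against
a cgroup2 mount at /sys/fs/cgroup, directory names made up; per
Documentation/admin-guide/cgroup-v2.rst a mkdir over the limit fails with
EAGAIN):

  # cat /sys/fs/cgroup/parent/cgroup.max.descendants
  max
  # echo 2 > /sys/fs/cgroup/parent/cgroup.max.descendants
  # mkdir /sys/fs/cgroup/parent/c1 /sys/fs/cgroup/parent/c2
  # mkdir /sys/fs/cgroup/parent/c3
  mkdir: cannot create directory '/sys/fs/cgroup/parent/c3': Resource temporarily unavailable
  # chmod 600 /sys/fs/cgroup/parent/cgroup.max.descendants

The chmod (or a cgroup namespace) is what keeps the subtree from simply
raising its own limit, as Michal notes above.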
I have checked the actual implementation and noticed that cgroups are
uncharged against the limit when they are offlined (rmdir-ed), which means
that an adversary could still get around the limit and run away with the
cgroup count while the dying cgroups keep consuming resources.

Roman, I guess the reason for this implementation was to avoid the limit
triggering on setups with dying memcgs, which can take quite some time to
go away?

Would it make sense to make the implementation stricter so that it really
acts as a gate against a potential runaway number of cgroups?
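
To illustrate the concern: once a cgroup is rmdir-ed it only moves from
nr_descendants to nr_dying_descendants in cgroup.stat, so the limit no
longer sees it while its memory can still be pinned. Something along these
lines (numbers made up):

  # cat /sys/fs/cgroup/parent/cgroup.stat
  nr_descendants 2
  nr_dying_descendants 4096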
--
Michal Hocko
SUSE Labs