Message-ID: <Ypcw3xhumvH3KDkD@dhcp22.suse.cz>
Date:   Wed, 1 Jun 2022 11:26:55 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Vasily Averin <vvs@...nvz.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>, kernel@...nvz.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Shakeel Butt <shakeelb@...gle.com>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Michal Koutný <mkoutny@...e.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Muchun Song <songmuchun@...edance.com>, cgroups@...r.kernel.org
Subject: Re: [PATCH mm v3 0/9] memcg: accounting for objects allocated by
 mkdir cgroup

On Wed 01-06-22 06:43:27, Vasily Averin wrote:
[...]
> However, it isn't critical for OpenVz. Our kernel does not allow
> changing cgroup.subgroups_limit from inside containers.

What are the semantics of this limit?

> CT-901 /# cat /sys/fs/cgroup/memory/cgroup.subgroups_limit 
> 512
> CT-901 /# echo 3333 > /sys/fs/cgroup/memory/cgroup.subgroups_limit 
> -bash: echo: write error: Operation not permitted
> CT-901 /# echo 333 > /sys/fs/cgroup/memory/cgroup.subgroups_limit 
> -bash: echo: write error: Operation not permitted
> 
> I doubt this approach can be accepted upstream, however for OpenVz
> something like this is mandatory because it is much better
> than nothing.
> 
> The number can be adjusted by the host admin. The current default limit
> looks too small to me; however, it is not difficult to increase it
> to a reasonable 10,000.
> 
> My experiments show that ~10000 cgroups consume 0.5 GB of memory on a
> 4-CPU VM. On "big iron" it can easily grow to several GB. This is too
> much memory to leave unaccounted.

Too many cgroups can certainly have a high memory footprint; I guess
this is quite clear. The question is whether trying to limit them by
their memory footprint is really the right way to go. I would be
especially worried about smaller machines, where the smaller per-cgroup
footprint would allow the id space to be depleted faster.
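
Just to put a rough number on it (back-of-the-envelope only): 0.5 GB for
~10000 cgroups is on the order of 50 kB per cgroup, and memcg ids come
out of a 16-bit space (MEM_CGROUP_ID_MAX is USHRT_MAX, if I remember the
code correctly), so burning through all ~64k ids would pin only a few GB.
In other words a purely memory based limit sized for today's machines may
never fire before the id space is gone, and the smaller the per-cgroup
footprint, the earlier that point is reached.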

Maybe we need some sort of limit on the number of cgroups in a subtree
so that any potential runaway can be prevented regardless of the cgroups'
memory footprint. One potentially big problem with that is that cgroups
can live for quite a long time after being offlined (e.g. memcg), so I
can imagine such a limit could easily trigger.
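
FWIW, cgroup v2 already has a knob along those lines,
cgroup.max.descendants (together with cgroup.max.depth), which caps how
many cgroups can be created under a subtree. A minimal sketch, assuming
a v2 mount and a made-up container path (the path, prompt and value are
only illustrative, not a proposal):

  host /# echo 1000 > /sys/fs/cgroup/machine.slice/CT-901/cgroup.max.descendants
  host /# mkdir /sys/fs/cgroup/machine.slice/CT-901/one-too-many

Once the subtree already holds 1000 descendants the mkdir fails (with
EAGAIN, if I remember correctly), regardless of how much memory those
cgroups actually pin. Whether offlined-but-not-yet-freed cgroups should
count against such a limit is exactly the open question above.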
-- 
Michal Hocko
SUSE Labs
