Date: Fri, 24 May 2024 10:36:45 -0700
From: "T.J. Mercier" <tjmercier@...gle.com>
To: Michal Koutný <mkoutny@...e.com>
Cc: Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>, 
	Johannes Weiner <hannes@...xchg.org>, shakeel.butt@...ux.dev, cgroups@...r.kernel.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [RFC] cgroup: Fix /proc/cgroups count for v2

On Fri, May 24, 2024 at 7:23 AM Michal Koutný <mkoutny@...e.com> wrote:
>
> Hello.

Hi Michal, thanks for taking a look.

> On Sun, May 19, 2024 at 05:46:48PM GMT, "T.J. Mercier" <tjmercier@...gle.com> wrote:
> > The number of cgroups using a controller is an important metric since
> > kernel memory is used for each cgroup, and some kernel operations scale
> > with the number of cgroups for some controllers (memory, io). So users
> > have an interest in tracking and minimizing their number.
>
> I agree this is good for debugging or quick checks of unified hierarchy
> enablement status.
>
> > To deal with num_cgroups being reported as 1 for those utility
> > controllers regardless of the number of cgroups that exist and support
> > their use,
>
> But '1' is the correct number, no? Those utility controllers are v1-only
> and their single group exists only on the (default) root.

Sometimes? Take freezer as an example. If you don't mount it on v1,
then /proc/cgroups currently advertises the total number of v2
cgroups. I thought that was reasonable since there is a cgroup.freeze
file in every v2 cgroup, but does freezer really count as a controller
in this case? There's no freezer css for each cgroup, so I guess the
better answer is just to report 1, as you suggest.
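
(For illustration, a minimal sketch of that special case;
is_v2_utility_controller() is the helper from the RFC, and "count" is
the local from the proc_cgroupstats_show() hunk quoted below:)

	/* Freezer on v2 allocates no per-cgroup css, so treat the
	 * controller as having only its single implicit root group. */
	if (is_v2_utility_controller(i))
		count = 1;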

>
> > @@ -675,11 +699,19 @@ int proc_cgroupstats_show(struct seq_file *m, void *v)
> >        * cgroup_mutex contention.
> >        */
> >
> > -     for_each_subsys(ss, i)
> > +     for_each_subsys(ss, i) {
> > +             int count;
> > +
> > +             if (!cgroup_on_dfl(&ss->root->cgrp) || is_v2_utility_controller(i))
> > +                     count = atomic_read(&ss->root->nr_cgrps);
>
> I think is_v2_utility_controller(ssid) implies
> !cgroup_on_dfl(&ss->root->cgrp). I'd decide based only on the
> cgroup_on_dfl() predicate.
>
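
(A sketch of that simplification against the RFC hunk, assuming the
redundancy Michal describes holds; not a tested change:)

	-	if (!cgroup_on_dfl(&ss->root->cgrp) || is_v2_utility_controller(i))
	+	if (!cgroup_on_dfl(&ss->root->cgrp))
			count = atomic_read(&ss->root->nr_cgrps);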
> > --- a/kernel/cgroup/cgroup.c
> > +++ b/kernel/cgroup/cgroup.c
> > @@ -2047,6 +2047,8 @@ void init_cgroup_root(struct cgroup_fs_context *ctx)
> >
> >       INIT_LIST_HEAD_RCU(&root->root_list);
> >       atomic_set(&root->nr_cgrps, 1);
> > +     for (int i = 0; i < CGROUP_SUBSYS_COUNT; ++i)
> > +             atomic_set(&root->nr_css[i], 0);
>
> Strictly not needed: non-dfl roots are kzalloc'd and the dfl root is a
> global variable (zeroed).
>
> HTH,
> Michal

Thanks, removed. I'll resend this with these changes as a PATCH with my SoB.
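
(Illustrative sketch of the resulting init_cgroup_root() body with the
explicit zeroing dropped; an editor's sketch, not the actual resend:)

	INIT_LIST_HEAD_RCU(&root->root_list);
	atomic_set(&root->nr_cgrps, 1);
	/* root->nr_css[] already starts at zero: non-dfl roots come from
	 * kzalloc() and the dfl root is a zero-initialized global. */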

Best,
T.J.
