Message-ID: <48802fec-0563-429b-95b2-571862ffff18@redhat.com>
Date: Fri, 12 Jul 2024 13:10:36 -0400
From: Waiman Long <longman@...hat.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
 Jonathan Corbet <corbet@....net>, cgroups@...r.kernel.org,
 linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
 Kamalesh Babulal <kamalesh.babulal@...cle.com>,
 Roman Gushchin <roman.gushchin@...ux.dev>
Subject: Re: [PATCH v3 1/2] cgroup: Show # of subsystem CSSes in cgroup.stat

On 7/12/24 12:29, Johannes Weiner wrote:
> On Thu, Jul 11, 2024 at 05:00:41PM -0400, Waiman Long wrote:
>> On 7/11/24 15:59, Johannes Weiner wrote:
>>> On Thu, Jul 11, 2024 at 03:13:12PM -0400, Waiman Long wrote:
>>>> On 7/11/24 14:59, Tejun Heo wrote:
>>>>> On Thu, Jul 11, 2024 at 02:51:38PM -0400, Waiman Long wrote:
>>>>>> On 7/11/24 14:44, Tejun Heo wrote:
>>>>>>> Hello,
>>>>>>>
>>>>>>> On Thu, Jul 11, 2024 at 01:39:38PM -0400, Waiman Long wrote:
>>>>>>>> On 7/11/24 13:18, Tejun Heo wrote:
>>>>>>> ...
>>>>>>>> Currently, I use the for_each_css() macro for iteration. If you mean
>>>>>>>> displaying all the possible cgroup subsystems even if they are not enabled
>>>>>>>> for the current cgroup, I will have to manually do the iteration.
>>>>>>> Just wrapping it with for_each_subsys() should do, no? for_each_css() won't
>>>>>>> iterate anything if css doesn't exist for the cgroup.
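(For reference, a minimal sketch of the difference Tejun describes; this is
not the actual patch, and the seq_printf() output format and the per-css
counter are hypothetical:)

	struct cgroup_subsys *ss;
	int ssid;

	/*
	 * Walk all registered subsystems so every one appears in the
	 * output, even if it has no css on this cgroup.  for_each_css()
	 * would skip subsystems whose css doesn't exist here.
	 */
	for_each_subsys(ss, ssid) {
		struct cgroup_subsys_state *css = cgroup_css(cgrp, ss);

		/* hypothetical counter; zero when no css exists here */
		seq_printf(seq, "nr_subsys_%s %d\n", ss->name,
			   css ? css->nr_descendants : 0);
	}
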
>>>>>> OK, I wasn't sure if you were asking to list all the possible cgroup v2
>>>>>> subsystems even if they weren't enabled in the current cgroup.
>>>>>> Apparently, that is the case. I prefer it that way too.
>>>>> Yeah, I think listing all is better. If the list corresponded directly to
>>>>> cgroup.controllers, it might make sense to only show the enabled ones, but
>>>>> we can have dying ones and implicitly enabled memory and so on, so I think
>>>>> it'd be cleaner to just list them all.
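(As an illustration of "list them all": for a cgroup with only memory and
pids enabled, the cgroup.stat output might then read something like the
following. The nr_subsys_* field names match the hypothetical sketch above
and are not confirmed output of the patch:)

	nr_descendants 2
	nr_dying_descendants 0
	nr_subsys_cpuset 0
	nr_subsys_cpu 0
	nr_subsys_io 0
	nr_subsys_memory 3
	nr_subsys_hugetlb 0
	nr_subsys_pids 3
	nr_subsys_rdma 0
	nr_subsys_misc 0
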
>>>> That will mean cgroup subsystems that are seldom used, like rdma, misc,
>>>> or even hugetlb, will always be shown in every cgroup.stat output. I
>>>> actually prefer showing just those that are enabled. As for dying memory
>>>> cgroups, they will only be shown in their online ancestors. We currently
>>>> don't know how many levels down each of the dying ones is.
>>> It seems odd to me not to show dead ones after a cgroup has disabled
>>> the controller again. They still consume memory, after all, and so
>>> continue to be the property of that cgroup afterwards.
>>>
>>> Instead of doing for_each_css(), would it make more sense to have
>>>
>>> 	struct cgroup {
>>> 		...
>>> 		int nr_dying_subsys[CGROUP_SUBSYS_COUNT];
>> What exactly is this new array for?
> For keeping the counts, instead of inside the css.
>
> AFAICS, with your current patch, if somebody were to disable the
> controller in cgroup.subtree_control, it would offline the css at that
> level, the css would become unreachable from cgroup->subsys[], and you'd
> lose the remaining counts of dead csses that are still associated with
> that cgroup. Re-enabling the controller would create a new css with new
> descendant counts, and now the reported numbers are actively misleading.
>
> That seems undesirable.
>
> If you track the counts in the cgroup itself, then cgroup.stat would
> reliably show the total counts of dead controllers that are associated
> with the subtree, even after disabling or toggling controllers.
>
> The hooks in online, offline, release should be the same, just update
> css->cgroup->nr_dying_subsys[id] instead of css->nr_dying_descendants.
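(A minimal sketch of this suggestion, assuming the nr_dying_subsys[] field
quoted above; the helper names are hypothetical, and the offline/release
hook sites are the ones named in the discussion, not actual patch code:)

	/*
	 * Sketch: when a css goes offline, account it as dying in its
	 * own cgroup and every ancestor, so cgroup.stat keeps the count
	 * even if the controller is later disabled or re-enabled.
	 */
	static void css_account_dying(struct cgroup_subsys_state *css)
	{
		struct cgroup *cgrp;

		for (cgrp = css->cgroup; cgrp; cgrp = cgroup_parent(cgrp))
			cgrp->nr_dying_subsys[css->ss->id]++;
	}

	/* when the css is finally released, drop the counts again */
	static void css_unaccount_dying(struct cgroup_subsys_state *css)
	{
		struct cgroup *cgrp;

		for (cgrp = css->cgroup; cgrp; cgrp = cgroup_parent(cgrp))
			cgrp->nr_dying_subsys[css->ss->id]--;
	}
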

That does make sense. Thanks for the suggestion. I will update the patch
accordingly.

Cheers,
Longman

