Message-ID: <YsxOwEI7HOqdkRpz@dhcp22.suse.cz>
Date: Mon, 11 Jul 2022 18:24:32 +0200
From: Michal Hocko <mhocko@...e.com>
To: Vasily Averin <vvs@...nvz.org>
Cc: Shakeel Butt <shakeelb@...gle.com>, kernel@...nvz.org,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Michal Koutný <mkoutny@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Muchun Song <songmuchun@...edance.com>,
Cgroups <cgroups@...r.kernel.org>
Subject: Re: [PATCH mm v5 0/9] memcg: accounting for objects allocated by mkdir, cgroup

On Sun 10-07-22 21:53:34, Vasily Averin wrote:
> On 7/1/22 14:03, Michal Hocko wrote:
> > On Mon 27-06-22 09:37:14, Shakeel Butt wrote:
> >> On Fri, Jun 24, 2022 at 6:59 AM Michal Hocko <mhocko@...e.com> wrote:
> > [...]
> >>> Is it even possible to prevent id depletion through memory
> >>> consumption limits? Any medium-sized memcg can easily consume all the
> >>> ids AFAICS.
> >>
> >> Though the patch series is pitched as protection against OOMs, I think
> >> it is beneficial regardless. Protection against an adversarial actor
> >> should not be the aim here. IMO this patch series improves the
> >> attribution of memory to its actual user, which is better than leaving
> >> it unattributed and treated as system overhead.
> >
> > Considering the amount of memory and "normal" cgroup usage (I guess we
> > can agree that delegated subtrees do not count their cgroups in
> > thousands), is this really something worth bothering with?
> >
> > I mean, these patches are small and not really disruptive, so I do not
> > see any problem with them. Except that they clearly add a maintenance
> > overhead. Not directly with the memory they track, but any future
> > cgroup/memcg metadata related objects would need to be tracked as well,
> > and I am worried this will quickly get out of sync. So we will have a
> > half-assed solution in place that doesn't really help any containment,
> > nor does it provide good and robust consumption tracking.
> >
> > All that being said, I find these changes to be of little value or use.
>
> Dear Michal,
> I still have 2 questions:
> 1) If you do not want to account any memory allocated for cgroup objects,
> should you perhaps revert commit 3e38e0aaca9e ("mm: memcg: charge memcg
> percpu memory to the parent cgroup")? Or is it an exception?
> (In fact I hope you will not revert this patch; I just would like to know
> your reasoning about this accounting.)
Well, I have to say I was not a great fan of this patch when it was
proposed, but I didn't really have strong arguments against it to nack
it. It was simple enough and rather self-contained, touching only a few
places. Just to give you an insight into my thinking here: your patch
series is also not something I would nack (nor have I done that), but I
am not a great fan of it either. I spoke against it because it hit my
internal threshold for how many different places get patched without
any systematic approach. If we consider that it doesn't really help
with the initial intention of protecting against adversaries, then what
is the point of all the churn? Others might think differently, and if
you can get acks from other maintainers then I won't stand in the way.
I have voiced my concerns and I hope my thinking is clear now.
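
For reference, that commit works roughly like the following sketch
(paraphrased from memory, not the verbatim upstream diff; identifiers
such as set_active_memcg() and memcg_vmstats_percpu are recalled from
the tree of that era): the parent is made the active memcg around the
allocation, so the GFP_KERNEL_ACCOUNT percpu charge lands on the parent
rather than being left as unattributed system overhead.

	struct mem_cgroup *old_memcg;

	/* charge the new memcg's own percpu stats to its parent */
	old_memcg = set_active_memcg(parent);
	memcg->vmstats_percpu = alloc_percpu_gfp(struct memcg_vmstats_percpu,
						 GFP_KERNEL_ACCOUNT);
	set_active_memcg(old_memcg);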
> 2) My patch set includes the kernfs accounting required for proper netdevice accounting:
>
> Allocs   Alloc size   Allocation site
> number   (bytes)
> ---------------------------------------------------------------------
>      1        +128    (__kernfs_new_node+0x4d)   kernfs node
>      1         +88    (__kernfs_iattrs+0x57)     kernfs iattrs
>      1         +96    (simple_xattr_alloc+0x28)  simple_xattr, can grow over 4Kb
>      1          32    (simple_xattr_set+0x59)
>      1           8    (__kernfs_new_node+0x30)
>
> [2/9] memcg: enable accounting for kernfs nodes
> [3/9] memcg: enable accounting for kernfs iattrs
> [4/9] memcg: enable accounting for struct simple_xattr
>
> What do you think about them? Should I resend them as a new separate patch set?
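
FWIW, judging by the allocation sites in your table, I would expect
those three patches to boil down to the usual memcg opt-in pattern,
something like the sketch below (hypothetical, based on the standard
SLAB_ACCOUNT/GFP_KERNEL_ACCOUNT mechanism rather than your actual
series):

	/* slab-backed objects: add SLAB_ACCOUNT when the cache is created */
	kernfs_node_cache = kmem_cache_create("kernfs_node_cache",
					      sizeof(struct kernfs_node), 0,
					      SLAB_PANIC | SLAB_ACCOUNT, NULL);

	/* plain heap allocations: switch GFP_KERNEL to GFP_KERNEL_ACCOUNT */
	new_xattr = kvmalloc(len, GFP_KERNEL_ACCOUNT);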
kernfs is not really my area so I cannot really comment on those.
--
Michal Hocko
SUSE Labs