Message-ID: <20200827223728.GB2490802@carbon.dhcp.thefacebook.com>
Date:   Thu, 27 Aug 2020 15:37:28 -0700
From:   Roman Gushchin <guro@...com>
To:     Shakeel Butt <shakeelb@...gle.com>
CC:     Linux MM <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Kernel Team <kernel-team@...com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RFC 3/4] mm: kmem: prepare remote memcg charging infra
 for interrupt contexts

On Thu, Aug 27, 2020 at 02:58:50PM -0700, Shakeel Butt wrote:
> On Thu, Aug 27, 2020 at 10:52 AM Roman Gushchin <guro@...com> wrote:
> >
> > Remote memcg charging API uses current->active_memcg to store the
> > currently active memory cgroup, which overwrites the memory cgroup
> > of the current process. It works well for normal contexts, but doesn't
> > work for interrupt contexts: indeed, if an interrupt occurs during
> > the execution of a section with an active memcg set, all allocations
> > inside the interrupt will be charged to the active memcg set (given
> > that we'll enable accounting for allocations from an interrupt
> > context). But because the interrupt might have no relation to the
> > active memcg set outside, it's obviously wrong from the accounting
> > perspective.
> >
> > To resolve this problem, let's add a global percpu int_active_memcg
> > variable, which will be used to store an active memory cgroup which
> > will be sued from interrupt contexts. set_active_memcg() will
> 
> *used
> 
> > transparently use current->active_memcg or int_active_memcg depending
> > on the context.
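
A minimal sketch of the write side described above (an approximation, not
the patch itself; it assumes in_interrupt() as the context check and that
set_active_memcg() returns the previous value so callers can restore it):

        /*
         * Sketch only: requires <linux/percpu.h>, <linux/preempt.h>,
         * <linux/sched.h> and CONFIG_MEMCG for current->active_memcg.
         */
        DEFINE_PER_CPU(struct mem_cgroup *, int_active_memcg);

        static inline struct mem_cgroup *set_active_memcg(struct mem_cgroup *memcg)
        {
                struct mem_cgroup *old;

                if (in_interrupt()) {
                        /* interrupt context: use the global per-cpu slot */
                        old = this_cpu_read(int_active_memcg);
                        this_cpu_write(int_active_memcg, memcg);
                } else {
                        /* task context: use the per-task field as before */
                        old = current->active_memcg;
                        current->active_memcg = memcg;
                }

                return old;
        }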
> >
> > To make the read part simple and transparent for the caller, let's
> > introduce two new functions:
> >   - struct mem_cgroup *active_memcg(void),
> >   - struct mem_cgroup *get_active_memcg(void).
> >
> > They return the active memcg if it's set, hiding the implementation
> > detail of where to get it depending on the current context.
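
And a minimal sketch of the read side (again an approximation of the
description; the css_tryget() failure handling in particular is assumed,
not taken from the patch):

        static __always_inline struct mem_cgroup *active_memcg(void)
        {
                if (in_interrupt())
                        return this_cpu_read(int_active_memcg);
                return current->active_memcg;
        }

        /* Like active_memcg(), but takes a reference on the returned memcg. */
        static __always_inline struct mem_cgroup *get_active_memcg(void)
        {
                struct mem_cgroup *memcg;

                rcu_read_lock();
                memcg = active_memcg();
                if (memcg && !css_tryget(&memcg->css))
                        memcg = NULL;   /* assumed fallback; the patch may differ */
                rcu_read_unlock();

                return memcg;
        }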
> >
> > Signed-off-by: Roman Gushchin <guro@...com>
> 
> I like this patch. Internally we have a similar patch which, instead of
> the per-cpu int_active_memcg, has current->active_memcg_irq. Our
> use-case was radix tree node allocations, where we use the root node's
> memcg to charge all the nodes of the tree. The reason behind it was
> that we observed a lot of zombie memcgs stuck due to radix tree node
> charges, while the actual pages pointed to by those nodes/entries were
> in use by active jobs (shared file system, and the kernel predates
> kmem reparenting).
> 
> Reviewed-by: Shakeel Butt <shakeelb@...gle.com>
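
For reference, a hypothetical caller along the lines of the use-case
above, charging allocations (e.g. radix tree nodes) to a remote memcg via
set_active_memcg(); the helper name and the kmalloc() call are purely
illustrative, and the restore-old-value pairing is assumed as sketched
earlier:

        /*
         * Hypothetical example: charge an allocation to 'memcg' instead of
         * the current task's memcg.  Assumes set_active_memcg() returns the
         * previous value so it can be restored afterwards.
         */
        static void *alloc_node_charged_to(struct mem_cgroup *memcg, gfp_t gfp)
        {
                struct mem_cgroup *old;
                void *node;

                old = set_active_memcg(memcg);           /* begin remote charging */
                node = kmalloc(64, gfp | __GFP_ACCOUNT); /* charged to 'memcg' */
                set_active_memcg(old);                   /* restore previous state */

                return node;
        }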

Thank you for the reviews, Shakeel!

I'll fix the typo, add your acks, and resend it as v1.

Thanks!
