Message-ID: <u54tfpwnirzpthvvynkw2dpn7rqtv6nwlllizf4yhadltupv34@3466il3qbfib>
Date: Mon, 9 Jun 2025 16:51:59 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Tejun Heo <tj@...nel.org>, Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, Vlastimil Babka <vbabka@...e.cz>,
Alexei Starovoitov <ast@...nel.org>, Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Michal Koutný <mkoutny@...e.com>, Harry Yoo <harry.yoo@...cle.com>,
Yosry Ahmed <yosry.ahmed@...ux.dev>, bpf@...r.kernel.org, linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, Meta kernel team <kernel-team@...a.com>
Subject: Re: [PATCH 0/3] cgroup: nmi safe css_rstat_updated
On Mon, Jun 09, 2025 at 04:44:10PM -0700, Andrew Morton wrote:
> On Mon, 9 Jun 2025 15:56:08 -0700 Shakeel Butt <shakeel.butt@...ux.dev> wrote:
>
> > BPF programs can run in nmi context and may trigger memcg-charged
> > memory allocation in such context. Linux recently gained support for
> > nmi-safe page allocation along with memcg charging of such
> > allocations. However, the kmalloc/slab support and the corresponding
> > memcg charging are still lacking.
> >
> > To provide nmi-safe memcg charging for kmalloc/slab allocations, we
> > need nmi-safe memcg stats, and for that we need an nmi-safe
> > css_rstat_updated(), which inserts the given cgroup subsystem state
> > whose stats were updated into the per-cpu per-subsystem update tree.
> > This series aims to make css_rstat_updated() nmi safe.
> >
> > This series makes css_rstat_updated() nmi safe by using per-cpu
> > lockless lists, whose nodes are embedded in the individual struct
> > cgroup_subsys_state and whose per-cpu heads are placed in struct
> > cgroup_subsys. For rstat users without a cgroup_subsys, a global
> > per-cpu lockless list head is created. The main challenge in using
> > lockless lists here is that multiple inserters may race on the same
> > lockless node of a cgroup_subsys_state, which differs from the
> > traditional usage of lockless lists.
> >
> > The race between multiple inserters on the same lockless node is
> > resolved by letting exactly one of them succeed in resetting the
> > node; that winner then inserts the node into the corresponding
> > lockless list.
>
> And what happens with the losers?
Losers can continue their normal work without worrying about this
specific insertion; we only need one successful insertion. Moreover,
this is contention between process context, softirq, hardirq and nmi
on the same cpu for the same cgroup, which should be very unlikely.