Message-Id: <20250609164410.568fd70e6a1deb6556e25af7@linux-foundation.org>
Date: Mon, 9 Jun 2025 16:44:10 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Tejun Heo <tj@...nel.org>, Johannes Weiner <hannes@...xchg.org>, Michal
Hocko <mhocko@...nel.org>, Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, Vlastimil Babka <vbabka@...e.cz>,
Alexei Starovoitov <ast@...nel.org>, Sebastian Andrzej Siewior
<bigeasy@...utronix.de>, Michal Koutný
<mkoutny@...e.com>, Harry Yoo <harry.yoo@...cle.com>, Yosry Ahmed
<yosry.ahmed@...ux.dev>, bpf@...r.kernel.org, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org, Meta kernel team
<kernel-team@...a.com>
Subject: Re: [PATCH 0/3] cgroup: nmi safe css_rstat_updated
On Mon, 9 Jun 2025 15:56:08 -0700 Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> BPF programs can run in nmi context and may trigger memcg charged memory
> allocations in such context. Linux recently added support for nmi safe
> page allocation along with memcg charging of such allocations. However,
> the kmalloc/slab support and the corresponding memcg charging are still
> lacking.
>
> To provide nmi safe memcg charging for kmalloc/slab allocations, we need
> nmi safe memcg stats, and for that we need an nmi safe
> css_rstat_updated(), which adds the given cgroup subsystem state whose
> stats were updated to the per-cpu per-ss update tree. This series aims
> to make css_rstat_updated() nmi safe.
>
> This series makes css_rstat_updated() nmi safe by using per-cpu lockless
> lists, whose node is embedded in each struct cgroup_subsys_state and
> whose per-cpu head is placed in struct cgroup_subsys. For rstat users
> without a cgroup_subsys, a global per-cpu lockless list head is created.
> The main challenge in using lockless lists in this scenario is that
> multiple inserters may race on the same lockless node of a
> cgroup_subsys_state, which differs from traditional lockless list usage.
>
> The race between multiple inserters on the same lockless node is
> resolved by letting exactly one of them succeed in resetting the
> lockless node; that winner then gets to insert the node into the
> corresponding lockless list.
And what happens with the losers?