Message-ID: <218e8b26-6b83-46a4-a57c-2346130a1597@gmail.com>
Date: Mon, 16 Jun 2025 13:08:49 -0700
From: JP Kobryn <inwardvessel@...il.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>, Tejun Heo <tj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, Vlastimil Babka <vbabka@...e.cz>,
Alexei Starovoitov <ast@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Michal Koutný <mkoutny@...e.com>,
Harry Yoo <harry.yoo@...cle.com>, Yosry Ahmed <yosry.ahmed@...ux.dev>,
bpf@...r.kernel.org, linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, Meta kernel team <kernel-team@...a.com>
Subject: Re: [PATCH v2 0/4] cgroup: nmi safe css_rstat_updated
On 6/11/25 3:15 PM, Shakeel Butt wrote:
> BPF programs can run in nmi context and may trigger memcg charged
> memory allocations in such a context. Recently Linux added support
> for nmi-safe page allocation along with memcg charging of such
> allocations. However, the kmalloc/slab support and the corresponding
> memcg charging are still lacking.
>
> To provide nmi-safe memcg charging for kmalloc/slab allocations, we
> need nmi-safe memcg stats, because for kernel memory, charging and
> stats updates happen together. At the moment, memcg charging and
> memcg stats are nmi safe; the only remaining piece that is not nmi
> safe is adding the cgroup to the per-cpu rstat update tree, i.e.
> css_rstat_updated(), which is what this series addresses.
>
> This series makes css_rstat_updated() nmi safe by using per-cpu
> lockless lists whose nodes are embedded in the individual struct
> cgroup_subsys_state and whose per-cpu heads are placed in struct
> cgroup_subsys. For rstat users without a cgroup_subsys, a global
> per-cpu lockless list head is created. The main challenge in using
> lockless lists in this scenario is that multiple inserters from
> stacked contexts, i.e. process, softirq, hardirq & nmi, may end up
> using the same per-cpu lockless node of a given cgroup_subsys_state,
> and a normal lockless list does not protect against such a scenario.
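
To make the layout concrete, here is a rough declaration-level sketch.
The type and field names below are my approximations of what the cover
letter describes, not the exact definitions in the patches:

  /* One lockless node per (css, cpu), kept in the css's per-cpu
   * rstat bookkeeping; it links the css into the update list. */
  struct css_rstat_cpu {
          struct llist_node lnode;
  };

  /* One lockless list head per cpu, hung off the subsystem. rstat
   * users without a cgroup_subsys share a global per-cpu head. */
  struct cgroup_subsys {
          /* ... */
          struct llist_head __percpu *lhead;
  };
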
>
> The problem of multiple stacked inserters potentially using the same
> lockless node is resolved by having them race to reset the lockless
> node: exactly one of them succeeds, and the winner gets to insert
> the node into the corresponding lockless list. The losers can assume
> the insertion will eventually succeed and simply continue their
> operation.
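
The winner/loser scheme can be modeled in plain userspace C. The sketch
below is illustrative only, using C11 atomics rather than the kernel's
llist and this_cpu_cmpxchg() primitives: an off-list node points to
itself, stacked contexts race with a compare-exchange to claim it, and
only the winner performs the lockless push onto the head.

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  struct llnode {
          _Atomic(struct llnode *) next;
  };

  /* Off-list convention: a node points to itself, mirroring the
   * self-pointing trick the series relies on. */
  static void node_init(struct llnode *n)
  {
          atomic_store(&n->next, n);
  }

  /* Returns true for the single winner that performs the insertion;
   * losers return false and rely on the winner's pending push. */
  static bool update(struct llnode *n, _Atomic(struct llnode *) *head)
  {
          struct llnode *self = n;

          /* Claim the node: only one stacked context can flip
           * next from "points to self" to NULL. */
          if (!atomic_compare_exchange_strong(&n->next, &self, NULL))
                  return false;

          /* Winner: standard lock-free push onto the list head. */
          struct llnode *first = atomic_load(head);
          do {
                  atomic_store(&n->next, first);
          } while (!atomic_compare_exchange_weak(head, &first, n));
          return true;
  }

  int main(void)
  {
          _Atomic(struct llnode *) head = NULL;
          struct llnode node;

          node_init(&node);
          printf("first caller wins: %d\n", update(&node, &head));
          printf("re-entrant caller: %d\n", update(&node, &head));
          return 0;
  }

Compiled as C11, this prints 1 then 0: the first caller claims and
inserts the node, while a nested re-entry on the same node backs off.
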
>
> Changelog since v2:
> - Add a clearer explanation in the cover letter and in the comment,
> as suggested by Andrew, Michal & Tejun.
> - Use this_cpu_cmpxchg() instead of try_cmpxchg() as suggested by Tejun.
> - Remove the per-cpu ss locks as they are not needed anymore.
>
> Changelog since v1:
> - Based on Yosry's suggestion, always use llist on the update side
> and create the update tree on the flush side.
>
> [v1] https://lore.kernel.org/cgroups/20250429061211.1295443-1-shakeel.butt@linux.dev/
>
> Shakeel Butt (4):
> cgroup: support to enable nmi-safe css_rstat_updated
> cgroup: make css_rstat_updated nmi safe
> cgroup: remove per-cpu per-subsystem locks
> memcg: cgroup: call css_rstat_updated irrespective of in_nmi()
>
> include/linux/cgroup-defs.h | 11 +--
> include/trace/events/cgroup.h | 47 ----------
> kernel/cgroup/rstat.c | 169 +++++++++++++---------------------
> mm/memcontrol.c | 10 +-
> 4 files changed, 74 insertions(+), 163 deletions(-)
>
I tested this series by exercising updates and flushes on a cgroup
hierarchy with four levels. This tag can be added to the patches in
this series:

Tested-by: JP Kobryn <inwardvessel@...il.com>