Message-ID: <20250519063142.111219-2-shakeel.butt@linux.dev>
Date: Sun, 18 May 2025 23:31:38 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>,
Vlastimil Babka <vbabka@...e.cz>,
Alexei Starovoitov <ast@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Harry Yoo <harry.yoo@...cle.com>,
Yosry Ahmed <yosry.ahmed@...ux.dev>,
Peter Zijlstra <peterz@...radead.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Tejun Heo <tj@...nel.org>,
bpf@...r.kernel.org,
linux-mm@...ck.org,
cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org,
Meta kernel team <kernel-team@...a.com>
Subject: [PATCH v4 1/5] memcg: disable kmem charging in nmi for unsupported arch
The memcg accounting and stats use this_cpu* and atomic* ops. There are
archs which define CONFIG_HAVE_NMI but do not define
CONFIG_ARCH_HAS_NMI_SAFE_THIS_CPU_OPS and ARCH_HAVE_NMI_SAFE_CMPXCHG, so
memcg accounting in nmi context cannot be supported on such archs. Let's
just disable memcg accounting in nmi context for them.
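
For illustration only (a rough sketch, not part of this patch), a kmem
charging path on such an arch now simply sees a NULL objcg when called
from nmi context and lets the allocation proceed uncharged:

	/*
	 * Hypothetical caller, assuming the usual objcg-based charging
	 * flow; only current_obj_cgroup(), in_nmi() and obj_cgroup_charge()
	 * are real interfaces here, the wrapper itself is made up.
	 */
	static bool sketch_charge_kmem(gfp_t gfp, size_t size)
	{
		struct obj_cgroup *objcg = current_obj_cgroup();

		/* NULL in nmi context on MEMCG_NMI_UNSAFE archs */
		if (!objcg)
			return true;	/* allocation succeeds, just unaccounted */

		return !obj_cgroup_charge(objcg, gfp, size);
	}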
Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
---
init/Kconfig | 7 +++++++
mm/memcontrol.c | 3 +++
2 files changed, 10 insertions(+)
diff --git a/init/Kconfig b/init/Kconfig
index 4cdd1049283c..a2aa49cfb8bd 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1006,6 +1006,13 @@ config MEMCG
 	help
 	  Provides control over the memory footprint of tasks in a cgroup.
 
+config MEMCG_NMI_UNSAFE
+	bool
+	depends on MEMCG
+	depends on HAVE_NMI
+	depends on !ARCH_HAS_NMI_SAFE_THIS_CPU_OPS && !ARCH_HAVE_NMI_SAFE_CMPXCHG
+	default y
+
 config MEMCG_V1
 	bool "Legacy cgroup v1 memory controller"
 	depends on MEMCG
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e17b698f6243..532e2c06ea60 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2652,6 +2652,9 @@ __always_inline struct obj_cgroup *current_obj_cgroup(void)
 	struct mem_cgroup *memcg;
 	struct obj_cgroup *objcg;
 
+	if (IS_ENABLED(CONFIG_MEMCG_NMI_UNSAFE) && in_nmi())
+		return NULL;
+
 	if (in_task()) {
 		memcg = current->active_memcg;
 		if (unlikely(memcg))
--
2.47.1