Message-ID: <20250513031316.2147548-1-shakeel.butt@linux.dev>
Date: Mon, 12 May 2025 20:13:09 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>,
Vlastimil Babka <vbabka@...e.cz>,
Alexei Starovoitov <ast@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Harry Yoo <harry.yoo@...cle.com>,
Yosry Ahmed <yosry.ahmed@...ux.dev>,
bpf@...r.kernel.org,
linux-mm@...ck.org,
cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org,
Meta kernel team <kernel-team@...a.com>
Subject: [RFC PATCH 0/7] memcg: make memcg stats irq safe

This series converts memcg stats to be irq safe, i.e. memcg stats can be
updated in any context (task, softirq or hardirq) without disabling
irqs.
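
For illustration only, below is a minimal userspace sketch (plain C, not
kernel code and not part of this series) of the hazard being addressed:
a SIGALRM handler stands in for a hardirq that also bumps a stat. A
plain read-modify-write increment can lose an update if it is
interrupted between the load and the store, while a single atomic RMW
cannot; in the kernel the same guarantee has to come either from
disabling irqs around the update or from re-entrant-safe per-CPU/atomic
primitives, which is what this series switches to.

/*
 * Userspace analogy only -- not kernel code, not part of this series.
 * A SIGALRM handler stands in for a hardirq that also updates a stat.
 */
#include <signal.h>
#include <stdatomic.h>
#include <stdio.h>
#include <sys/time.h>

static volatile unsigned long plain_counter;	/* racy vs. the "interrupt" */
static atomic_ulong reentrant_counter;		/* safe against interruption */

static void on_tick(int sig)
{
	(void)sig;
	/* The "interrupt" bumps both stats, just like a hardirq would. */
	plain_counter++;
	atomic_fetch_add_explicit(&reentrant_counter, 1, memory_order_relaxed);
}

int main(void)
{
	struct sigaction sa = { .sa_handler = on_tick };
	struct itimerval it = {
		.it_interval = { .tv_usec = 1000 },	/* ~1ms "interrupt" rate */
		.it_value    = { .tv_usec = 1000 },
	};

	sigemptyset(&sa.sa_mask);
	sigaction(SIGALRM, &sa, NULL);
	setitimer(ITIMER_REAL, &it, NULL);

	for (unsigned long i = 0; i < 200000000UL; i++) {
		/*
		 * Plain read-modify-write: if the handler runs between the
		 * load and the store, one of the two increments is lost.
		 * The kernel equivalent is why such stat updates have
		 * needed irqs disabled around them.
		 */
		plain_counter++;
		/* Single atomic RMW: safe no matter where it is interrupted. */
		atomic_fetch_add_explicit(&reentrant_counter, 1, memory_order_relaxed);
	}

	printf("plain:     %lu\nreentrant: %lu\n",
	       plain_counter, atomic_load(&reentrant_counter));
	return 0;
}

Whether the plain counter visibly loses updates depends on how the
compiler and CPU happen to implement the increment, which is exactly why
the kernel relies on primitives with guaranteed semantics rather than on
the code generator.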

This is still an RFC: I am not yet satisfied with the usage of atomic_*
ops in memcg_rstat_updated(), and I still need to run performance
benchmarks (any suggestions/recommendations would be appreciated). I am
sending this out early to get feedback.

This is based on the latest mm-everything branch along with the nmi-safe
memcg series [1].

Link: http://lore.kernel.org/20250509232859.657525-1-shakeel.butt@linux.dev

Shakeel Butt (7):
memcg: memcg_rstat_updated re-entrant safe against irqs
memcg: move preempt disable to callers of memcg_rstat_updated
memcg: make mod_memcg_state re-entrant safe against irqs
memcg: make count_memcg_events re-entrant safe against irqs
memcg: make __mod_memcg_lruvec_state re-entrant safe against irqs
memcg: objcg stock trylock without irq disabling
memcg: no stock lock for cpu hot-unplug
include/linux/memcontrol.h | 41 +--------
mm/memcontrol-v1.c | 6 +-
mm/memcontrol.c | 167 +++++++++++++++----------------------
mm/swap.c | 8 +-
mm/vmscan.c | 14 ++--
5 files changed, 85 insertions(+), 151 deletions(-)
--
2.47.1