Message-ID: <20251110101948.19277-1-leon.huangfu@shopee.com>
Date: Mon, 10 Nov 2025 18:19:48 +0800
From: Leon Huang Fu <leon.huangfu@...pee.com>
To: linux-mm@...ck.org
Cc: tj@...nel.org,
mkoutny@...e.com,
hannes@...xchg.org,
mhocko@...nel.org,
roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev,
muchun.song@...ux.dev,
akpm@...ux-foundation.org,
joel.granados@...nel.org,
jack@...e.cz,
laoar.shao@...il.com,
mclapinski@...gle.com,
kyle.meyer@....com,
corbet@....net,
lance.yang@...ux.dev,
leon.huangfu@...pee.com,
linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org
Subject: [PATCH mm-new v3] mm/memcontrol: Add memory.stat_refresh for on-demand stats flushing
Memory cgroup statistics are updated asynchronously with periodic
flushing to reduce overhead. The current implementation uses a flush
threshold calculated as MEMCG_CHARGE_BATCH * num_online_cpus() for
determining when to aggregate per-CPU memory cgroup statistics. On
systems with high core counts, this threshold can become very large
(e.g., 64 * 256 = 16,384 on a 256-core system), leading to stale
statistics when userspace reads memory.stat files.
This is particularly problematic for monitoring and management tools
that rely on reasonably fresh statistics, as they may observe data
that is thousands of updates out of date.
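
The staleness comes from a check of roughly this shape; the standalone
sketch below mirrors the threshold arithmetic described above (the
constant and the function name are illustrative, not the kernel's
implementation):

  /*
   * Standalone sketch of the flush-threshold arithmetic; illustrative
   * userspace code, not the in-kernel implementation.
   */
  #include <stdbool.h>
  #include <stdio.h>

  #define MEMCG_CHARGE_BATCH 64	/* batch size used by the kernel */

  static bool needs_flush(long pending_updates, unsigned int online_cpus)
  {
  	/* Stats are only aggregated once pending updates cross the threshold. */
  	return pending_updates > (long)MEMCG_CHARGE_BATCH * online_cpus;
  }

  int main(void)
  {
  	/* On a 256-core system the threshold is 64 * 256 = 16,384 updates. */
  	printf("16000 pending, 256 CPUs -> flush? %s\n",
  	       needs_flush(16000, 256) ? "yes" : "no");	/* prints "no" */
  	return 0;
  }
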
Introduce a new write-only file, memory.stat_refresh, that allows
userspace to explicitly trigger an immediate flush of memory statistics.
Writing any value to this file forces a synchronous flush via
__mem_cgroup_flush_stats(memcg, true) for the cgroup and all its
descendants, ensuring that subsequent reads of memory.stat and
memory.numa_stat reflect current data.
This approach follows the pattern established by /proc/sys/vm/stat_refresh
and memory.peak, where the written value is ignored, keeping the
interface simple and consistent with existing kernel APIs.
Usage example:
echo 1 > /sys/fs/cgroup/mygroup/memory.stat_refresh
cat /sys/fs/cgroup/mygroup/memory.stat
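
For a monitoring tool doing the same from C rather than the shell, the
refresh-then-read sequence looks roughly like the snippet below
(illustrative sketch; the cgroup path is an example and error handling
is minimal):

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
  	char buf[8192];
  	ssize_t n;
  	int fd;

  	/* The written value is ignored; the write itself triggers the flush. */
  	fd = open("/sys/fs/cgroup/mygroup/memory.stat_refresh", O_WRONLY);
  	if (fd < 0)
  		return 1;
  	if (write(fd, "1", 1) < 0) {
  		close(fd);
  		return 1;
  	}
  	close(fd);

  	/* memory.stat now reflects the freshly flushed counters. */
  	fd = open("/sys/fs/cgroup/mygroup/memory.stat", O_RDONLY);
  	if (fd < 0)
  		return 1;
  	while ((n = read(fd, buf, sizeof(buf))) > 0)
  		fwrite(buf, 1, (size_t)n, stdout);
  	close(fd);
  	return 0;
  }
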
The feature is available in both cgroup v1 and v2 for consistency.
Signed-off-by: Leon Huang Fu <leon.huangfu@...pee.com>
---
v2 -> v3:
- Flush stats via memory.stat_refresh (per Michal)
- https://lore.kernel.org/linux-mm/20251105074917.94531-1-leon.huangfu@shopee.com/
v1 -> v2:
- Flush stats when writing to the file (per Michal).
- https://lore.kernel.org/linux-mm/20251104031908.77313-1-leon.huangfu@shopee.com/
Documentation/admin-guide/cgroup-v2.rst | 21 +++++++++++++++++--
mm/memcontrol-v1.c | 4 ++++
mm/memcontrol-v1.h | 2 ++
mm/memcontrol.c | 27 ++++++++++++++++++-------
4 files changed, 45 insertions(+), 9 deletions(-)
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 3345961c30ac..ca079932f957 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1337,7 +1337,7 @@ PAGE_SIZE multiple when read back.
cgroup is within its effective low boundary, the cgroup's
memory won't be reclaimed unless there is no reclaimable
memory available in unprotected cgroups.
- Above the effective low boundary (or
+ Above the effective low boundary (or
effective min boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
@@ -1785,6 +1785,23 @@ The following nested keys are defined.
up if hugetlb usage is accounted for in memory.current (i.e.
cgroup is mounted with the memory_hugetlb_accounting option).
+ memory.stat_refresh
+ A write-only file which exists on non-root cgroups.
+
+ Writing any value to this file forces an immediate flush of
+ memory statistics for this cgroup and its descendants. This
+ ensures subsequent reads of memory.stat and memory.numa_stat
+ reflect the most current data.
+
+ This is useful on high-core-count systems where per-CPU caching
+ can lead to stale statistics, or when precise memory usage
+ information is needed for monitoring or debugging purposes.
+
+ Example::
+
+ echo 1 > memory.stat_refresh
+ cat memory.stat
+
memory.numa_stat
A read-only nested-keyed file which exists on non-root cgroups.
@@ -2173,7 +2190,7 @@ of the two is enforced.
cgroup writeback requires explicit support from the underlying
filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
-btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
+btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
attributed to the root cgroup.
There are inherent differences in memory and writeback management
diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c
index 6eed14bff742..c3eac9b1f1be 100644
--- a/mm/memcontrol-v1.c
+++ b/mm/memcontrol-v1.c
@@ -2041,6 +2041,10 @@ struct cftype mem_cgroup_legacy_files[] = {
.name = "stat",
.seq_show = memory_stat_show,
},
+ {
+ .name = "stat_refresh",
+ .write = memory_stat_refresh_write,
+ },
{
.name = "force_empty",
.write = mem_cgroup_force_empty_write,
diff --git a/mm/memcontrol-v1.h b/mm/memcontrol-v1.h
index 6358464bb416..a14d4d74c9aa 100644
--- a/mm/memcontrol-v1.h
+++ b/mm/memcontrol-v1.h
@@ -29,6 +29,8 @@ void drain_all_stock(struct mem_cgroup *root_memcg);
unsigned long memcg_events(struct mem_cgroup *memcg, int event);
unsigned long memcg_page_state_output(struct mem_cgroup *memcg, int item);
int memory_stat_show(struct seq_file *m, void *v);
+ssize_t memory_stat_refresh_write(struct kernfs_open_file *of, char *buf,
+ size_t nbytes, loff_t off);
void mem_cgroup_id_get_many(struct mem_cgroup *memcg, unsigned int n);
struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index bfc986da3289..19ef4b971d8d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -610,6 +610,15 @@ static void __mem_cgroup_flush_stats(struct mem_cgroup *memcg, bool force)
css_rstat_flush(&memcg->css);
}
+static void memcg_flush_stats(struct mem_cgroup *memcg, bool force)
+{
+ if (mem_cgroup_disabled())
+ return;
+
+ memcg = memcg ?: root_mem_cgroup;
+ __mem_cgroup_flush_stats(memcg, force);
+}
+
/*
* mem_cgroup_flush_stats - flush the stats of a memory cgroup subtree
* @memcg: root of the subtree to flush
@@ -621,13 +630,7 @@ static void __mem_cgroup_flush_stats(struct mem_cgroup *memcg, bool force)
*/
void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
{
- if (mem_cgroup_disabled())
- return;
-
- if (!memcg)
- memcg = root_mem_cgroup;
-
- __mem_cgroup_flush_stats(memcg, false);
+ memcg_flush_stats(memcg, false);
}
void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
@@ -4530,6 +4533,12 @@ int memory_stat_show(struct seq_file *m, void *v)
return 0;
}
+ssize_t memory_stat_refresh_write(struct kernfs_open_file *of, char *buf, size_t nbytes, loff_t off)
+{
+ memcg_flush_stats(mem_cgroup_from_css(of_css(of)), true);
+ return nbytes;
+}
+
#ifdef CONFIG_NUMA
static inline unsigned long lruvec_page_state_output(struct lruvec *lruvec,
int item)
@@ -4666,6 +4675,10 @@ static struct cftype memory_files[] = {
.name = "stat",
.seq_show = memory_stat_show,
},
+ {
+ .name = "stat_refresh",
+ .write = memory_stat_refresh_write,
+ },
#ifdef CONFIG_NUMA
{
.name = "numa_stat",
--
2.51.2