Message-Id: <20260108111027.172f19a9a86667e8e0142042@linux-foundation.org>
Date: Thu, 8 Jan 2026 11:10:27 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Jianyue Wu <wujianyue000@...il.com>
Cc: jianyuew@...dia.com, hannes@...xchg.org, mhocko@...nel.org,
roman.gushchin@...ux.dev, shakeel.butt@...ux.dev, muchun.song@...ux.dev,
linux-mm@...ck.org, cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: optimize stat output to cut sys time by 11%
On Thu, 8 Jan 2026 17:37:29 +0800 Jianyue Wu <wujianyue000@...il.com> wrote:
> From: Jianyue Wu <wujianyue000@...il.com>
>
> Replace seq_printf/seq_buf_printf with lightweight helpers to avoid
> printf parsing in memcg stats output.
>
> Key changes:
> - Add memcg_seq_put_name_val() for seq_file "name value\n" formatting
> - Add memcg_seq_buf_put_name_val() for seq_buf "name value\n" formatting
> - Update __memory_events_show(), swap_events_show(),
> memory_stat_format(), memory_numa_stat_show(), and related helpers
>
> Performance:
> - 1M reads of memory.stat+memory.numa_stat
> - Before: real 0m9.663s, user 0m4.840s, sys 0m4.823s
> - After: real 0m9.051s, user 0m4.775s, sys 0m4.275s (~11.4% sys drop)
>
> Tests:
> - Script:
> for ((i=1; i<=1000000; i++)); do
> : > /dev/null < /sys/fs/cgroup/memory.stat
> : > /dev/null < /sys/fs/cgroup/memory.numa_stat
> done
>
I suspect there are workloads which read these files frequently.
I'd be interested in learning "how frequently". Perhaps
ascii-through-sysfs simply isn't an appropriate API for this data?
> @@ -1795,25 +1795,33 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
> mem_cgroup_flush_stats(memcg);
>
> for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
> - seq_printf(m, "%s=%lu", stat->name,
> - mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
> - false));
> - for_each_node_state(nid, N_MEMORY)
> - seq_printf(m, " N%d=%lu", nid,
> - mem_cgroup_node_nr_lru_pages(memcg, nid,
> - stat->lru_mask, false));
> + seq_puts(m, stat->name);
> + seq_put_decimal_ull(m, "=",
> + (u64)mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
> + false));
> + for_each_node_state(nid, N_MEMORY) {
> + seq_put_decimal_ull(m, " N", nid);
> + seq_put_decimal_ull(m, "=",
> + (u64)mem_cgroup_node_nr_lru_pages(memcg, nid,
> + stat->lru_mask, false));
The indenting went wrong here.
The patch does do a lot of ugly tricks to constrain the number of
columns used. Perhaps introduce some new local variables to clean this
up?
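[Editorial note: one way to act on that suggestion, sketched against the quoted
hunk only. This is untested illustrative kernel-style code, not a proposed
replacement; the local variable names are invented here.]

```c
	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
		unsigned long total;

		total = mem_cgroup_nr_lru_pages(memcg, stat->lru_mask, false);
		seq_puts(m, stat->name);
		seq_put_decimal_ull(m, "=", total);

		for_each_node_state(nid, N_MEMORY) {
			unsigned long nr;

			nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
							  stat->lru_mask,
							  false);
			seq_put_decimal_ull(m, " N", nid);
			seq_put_decimal_ull(m, "=", nr);
		}
```

Hoisting each counter into a local keeps every seq_put_decimal_ull() call on
one or two lines, avoiding the deep continuation indents the review points out.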