Message-ID: <ydnm423ogjcs5bb4d7b34hrz75spau4tehdhv6s4qdhyftwjot@moug4iitzgzw>
Date: Fri, 19 Dec 2025 21:22:27 -0800
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: bpf@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
JP Kobryn <inwardvessel@...il.com>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, Michal Hocko <mhocko@...nel.org>,
Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH bpf-next v2 4/7] mm: introduce BPF kfuncs to access memcg
statistics and events
On Fri, Dec 19, 2025 at 08:12:47PM -0800, Roman Gushchin wrote:
> Introduce BPF kfuncs to conveniently access memcg data:
> - bpf_mem_cgroup_vm_events(),
> - bpf_mem_cgroup_usage(),
> - bpf_mem_cgroup_page_state(),
> - bpf_mem_cgroup_flush_stats().
>
> These functions are useful for implementing BPF OOM policies, but
> can also be used to accelerate access to memcg data. Reading it
> through cgroupfs is much more expensive, roughly 5x, mostly because
> of the need to convert the data to text and back.
>
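For context, here is a pseudocode sketch of how an OOM policy might consume these kfuncs. The exact signatures, the way a memcg reference is obtained, and the enum names are all assumptions inferred from the function names above, not taken from the patch:

```
/* pseudocode -- signatures and helpers below are assumptions */
memcg = get_mem_cgroup_ref(...);           /* obtain a memcg reference somehow */
bpf_mem_cgroup_flush_stats(memcg);         /* make the cached counters current */
usage = bpf_mem_cgroup_usage(memcg);
anon  = bpf_mem_cgroup_page_state(memcg, NR_ANON_MAPPED);
kills = bpf_mem_cgroup_vm_events(memcg, OOM_KILL);
/* ... pick a victim based on these raw numeric values, no text parsing ... */
```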
> JP Kobryn:
> An experiment was set up to compare the performance of a program that
> uses the traditional method of reading memory.stat vs a program using
> the new kfuncs. The control program opens the root memory.stat file
> and, for 1M iterations, reads it, converts the string values to
> numeric data, and seeks back to the beginning. The experimental
> program sets up the requisite libbpf objects and for 1M iterations
> invokes a BPF program that uses the kfuncs to fetch all available
> stats for the node_stat_item, memcg_stat_item, and vm_event_item types.
>
> The results showed a significant perf benefit on the experimental
> side, which outperformed the control side by a margin of 93% in
> elapsed time. In kernel mode, elapsed time was reduced by 80%, while
> in user mode, over 99% of the time was saved.
>
> control: elapsed time
> real 0m38.318s
> user 0m25.131s
> sys 0m13.070s
>
> experiment: elapsed time
> real 0m2.789s
> user 0m0.187s
> sys 0m2.512s
>
> control: perf data
> 33.43% a.out libc.so.6 [.] __vfscanf_internal
> 6.88% a.out [kernel.kallsyms] [k] vsnprintf
> 6.33% a.out libc.so.6 [.] _IO_fgets
> 5.51% a.out [kernel.kallsyms] [k] format_decode
> 4.31% a.out libc.so.6 [.] __GI_____strtoull_l_internal
> 3.78% a.out [kernel.kallsyms] [k] string
> 3.53% a.out [kernel.kallsyms] [k] number
> 2.71% a.out libc.so.6 [.] _IO_sputbackc
> 2.41% a.out [kernel.kallsyms] [k] strlen
> 1.98% a.out a.out [.] main
> 1.70% a.out libc.so.6 [.] _IO_getline_info
> 1.51% a.out libc.so.6 [.] __isoc99_sscanf
> 1.47% a.out [kernel.kallsyms] [k] memory_stat_format
> 1.47% a.out [kernel.kallsyms] [k] memcpy_orig
> 1.41% a.out [kernel.kallsyms] [k] seq_buf_printf
>
> experiment: perf data
> 10.55% memcgstat bpf_prog_..._query [k] bpf_prog_16aab2f19fa982a7_query
> 6.90% memcgstat [kernel.kallsyms] [k] memcg_page_state_output
> 3.55% memcgstat [kernel.kallsyms] [k] _raw_spin_lock
> 3.12% memcgstat [kernel.kallsyms] [k] memcg_events
> 2.87% memcgstat [kernel.kallsyms] [k] __memcg_slab_post_alloc_hook
> 2.73% memcgstat [kernel.kallsyms] [k] kmem_cache_free
> 2.70% memcgstat [kernel.kallsyms] [k] entry_SYSRETQ_unsafe_stack
> 2.25% memcgstat [kernel.kallsyms] [k] __memcg_slab_free_hook
> 2.06% memcgstat [kernel.kallsyms] [k] get_page_from_freelist
>
> Signed-off-by: Roman Gushchin <roman.gushchin@...ux.dev>
> Co-developed-by: JP Kobryn <inwardvessel@...il.com>
> Signed-off-by: JP Kobryn <inwardvessel@...il.com>
> Acked-by: Michal Hocko <mhocko@...e.com>
Acked-by: Shakeel Butt <shakeel.butt@...ux.dev>