Message-ID: <a935563217affe85b2a6d0689914d7aba2ce127f@linux.dev>
Date: Sun, 04 Jan 2026 09:30:46 +0000
From: hui.zhu@...ux.dev
To: "Michal Koutný" <mkoutny@...e.com>,
chenridong@...weicloud.com
Cc: "Andrew Morton" <akpm@...ux-foundation.org>, "Johannes Weiner"
<hannes@...xchg.org>, "Michal Hocko" <mhocko@...nel.org>, "Roman
Gushchin" <roman.gushchin@...ux.dev>, "Shakeel Butt"
<shakeel.butt@...ux.dev>, "Muchun Song" <muchun.song@...ux.dev>, "Alexei
Starovoitov" <ast@...nel.org>, "Daniel Borkmann" <daniel@...earbox.net>,
"Andrii Nakryiko" <andrii@...nel.org>, "Martin KaFai Lau"
<martin.lau@...ux.dev>, "Eduard Zingerman" <eddyz87@...il.com>, "Song
Liu" <song@...nel.org>, "Yonghong Song" <yonghong.song@...ux.dev>, "John
Fastabend" <john.fastabend@...il.com>, "KP Singh" <kpsingh@...nel.org>,
"Stanislav Fomichev" <sdf@...ichev.me>, "Hao Luo" <haoluo@...gle.com>,
"Jiri Olsa" <jolsa@...nel.org>, "Shuah Khan" <shuah@...nel.org>, "Peter
Zijlstra" <peterz@...radead.org>, "Miguel Ojeda" <ojeda@...nel.org>,
"Nathan Chancellor" <nathan@...nel.org>, "Kees Cook" <kees@...nel.org>,
"Tejun Heo" <tj@...nel.org>, "Jeff Xu" <jeffxu@...omium.org>, "Jan
Hendrik Farr" <kernel@...rr.cc>, "Christian Brauner"
<brauner@...nel.org>, "Randy Dunlap" <rdunlap@...radead.org>, "Brian
Gerst" <brgerst@...il.com>, "Masahiro Yamada" <masahiroy@...nel.org>,
davem@...emloft.net, "Jakub Kicinski" <kuba@...nel.org>, "Jesper Dangaard
Brouer" <hawk@...nel.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, cgroups@...r.kernel.org, bpf@...r.kernel.org,
linux-kselftest@...r.kernel.org, "Hui Zhu" <zhuhui@...inos.cn>
Subject: Re: [RFC PATCH v2 0/3] Memory Controller eBPF support
Hi Michal and Ridong,

On Tue, Dec 30, 2025 at 17:49, "Michal Koutný" <mkoutny@...e.com> wrote:
>
> Hi Hui.
>
> On Tue, Dec 30, 2025 at 11:01:58AM +0800, Hui Zhu <hui.zhu@...ux.dev> wrote:
>
> >
> > This allows administrators to suppress low-priority cgroups' memory
> > usage based on custom policies implemented in BPF programs.
> >
> BTW memory.low was conceived as a work-conserving mechanism for
> prioritization of different workloads. Have you tried that? No need to
> go directly to (high) limits. (<- Main question, below are some
> secondary implementation questions/remarks.)
>
> ...
>
memory.low is a helpful feature, but it struggles to effectively
throttle low-priority processes that continuously access their memory.
For instance, consider the following example I ran:
root@...ntu:~# echo $((4 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/high/memory.low
root@...ntu:~# cgexec -g memory:low stress-ng --vm 4 --vm-keep --vm-bytes 80% \
--vm-method all --seed 2025 --metrics -t 60 \
& cgexec -g memory:high stress-ng --vm 4 --vm-keep --vm-bytes 80% \
--vm-method all --seed 2025 --metrics -t 60
[1] 2011
stress-ng: info: [2011] setting to a 1 min, 0 secs run per stressor
stress-ng: info: [2012] setting to a 1 min, 0 secs run per stressor
stress-ng: info: [2011] dispatching hogs: 4 vm
stress-ng: info: [2012] dispatching hogs: 4 vm
stress-ng: metrc: [2012] stressor       bogo ops real time  usr time  sys time   bogo ops/s     bogo ops/s CPU used per       RSS Max
stress-ng: metrc: [2012]                           (secs)    (secs)    (secs)  (real time) (usr+sys time) instance (%)          (KB)
stress-ng: metrc: [2012] vm                23584     60.21      2.75     15.94       391.73        1262.07         7.76        649988
stress-ng: info: [2012] skipped: 0
stress-ng: info: [2012] passed: 4: vm (4)
stress-ng: info: [2012] failed: 0
stress-ng: info: [2012] metrics untrustworthy: 0
stress-ng: info: [2012] successful run completed in 1 min, 0.22 secs
stress-ng: metrc: [2011] stressor       bogo ops real time  usr time  sys time   bogo ops/s     bogo ops/s CPU used per       RSS Max
stress-ng: metrc: [2011]                           (secs)    (secs)    (secs)  (real time) (usr+sys time) instance (%)          (KB)
stress-ng: metrc: [2011] vm                23584     60.22      3.06     16.19       391.63        1224.97         7.99        688836
stress-ng: info: [2011] skipped: 0
stress-ng: info: [2011] passed: 4: vm (4)
stress-ng: info: [2011] failed: 0
stress-ng: info: [2011] metrics untrustworthy: 0
stress-ng: info: [2011] successful run completed in 1 min, 0.23 secs
As the results show, setting memory.low on the cgroup with the
high-priority workload did not improve its memory performance:
both cgroups completed roughly the same number of bogo ops.
That said, memory.low is still beneficial in many other scenarios,
and perhaps extending it with eBPF support could help address a
wider range of issues.
> >
> > This series introduces a BPF hook that allows reporting
> > additional "pages over high" for specific cgroups, effectively
> > increasing memory pressure and throttling for lower-priority
> > workloads when higher-priority cgroups need resources.
> >
> Have you considered hooking into calculate_high_delay() instead? (That
> function has undergone some evolution so it'd seem like the candidate
> for BPFication.)
>
It seems the charge path will never reach
__mem_cgroup_handle_over_high() if we only hook calculate_high_delay()
without setting memory.high.
What do you think about hooking try_charge_memcg() as well, so that
__mem_cgroup_handle_over_high() is guaranteed to be called?
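
To make this more concrete, here is a rough sketch of what I mean.
bpf_memcg_pages_over_high() is only a placeholder name for the BPF
hook, not something from this series, and the helper below is not the
actual patch:

/*
 * Rough sketch only.  bpf_memcg_pages_over_high() is a placeholder
 * for the BPF hook; the real name and attach mechanism are TBD.
 * This would be called from try_charge_memcg() after a successful
 * charge.
 */
static void memcg_bpf_note_over_high(struct mem_cgroup *memcg,
				     unsigned int nr_pages)
{
	unsigned long extra;

	/* Let the attached BPF program report extra "pages over high". */
	extra = bpf_memcg_pages_over_high(memcg, nr_pages);
	if (!extra)
		return;

	/*
	 * Mirror the memory.high path: accumulate the excess on the
	 * current task and request a resume-to-userspace callback, so
	 * that mem_cgroup_handle_over_high() and calculate_high_delay()
	 * run even when memory.high itself is not configured.
	 */
	current->memcg_nr_pages_over_high += extra;
	set_notify_resume(current);
}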
> ...
>
> >
> > 3. Cgroup hierarchy management (inheritance during online/offline)
> >
> I see you're copying the program upon memcg creation.
> Configuration copies aren't such a good way to properly handle
> hierarchical behavior.
> I wonder if this could follow the more generic pattern of how BPF progs
> are evaluated in hierarchies, see BPF_F_ALLOW_OVERRIDE and
> BPF_F_ALLOW_MULTI.
I will add support for them in the next version.
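
For reference, from userspace the generic pattern looks roughly like
this sketch (BPF_CGROUP_INET_INGRESS is only a stand-in attach type
here, since the memcg hook's attach type does not exist yet):

#include <fcntl.h>
#include <unistd.h>
#include <bpf/bpf.h>

/*
 * Sketch of the existing cgroup-BPF attach pattern.  The attach type
 * is a stand-in; the memcg hook would define its own.
 */
static int attach_allow_multi(const char *cgroup_path, int prog_fd)
{
	int cg_fd, ret;

	cg_fd = open(cgroup_path, O_RDONLY | O_DIRECTORY);
	if (cg_fd < 0)
		return -1;

	/*
	 * With BPF_F_ALLOW_MULTI, descendant cgroups can attach their
	 * own programs and the kernel builds the effective program list
	 * by walking the hierarchy, instead of the configuration being
	 * copied at cgroup creation time.
	 */
	ret = bpf_prog_attach(prog_fd, cg_fd, BPF_CGROUP_INET_INGRESS,
			      BPF_F_ALLOW_MULTI);
	close(cg_fd);
	return ret;
}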
>
> >
> > Example Results
> >
> ...
>
> >
> > Results show the low-priority cgroup (/sys/fs/cgroup/low) was
> > significantly throttled:
> > - High-priority cgroup: 21,033,377 bogo ops at 347,825 ops/s
> > - Low-priority cgroup: 11,568 bogo ops at 177 ops/s
> >
> > The stress-ng process in the low-priority cgroup experienced a
> > ~99.9% slowdown in memory operations compared to the
> > high-priority cgroup, demonstrating effective priority
> > enforcement through BPF-controlled memory pressure.
> >
> As a demonstrator, it'd be good to compare this with a baseline without
> any extra progs, e.g. show that high-prio performed better and low-prio
> wasn't throttled for nothing.
Thanks for the reminder.
Here is a test log from the same environment without any extra progs:
root@...ntu:~# cgexec -g memory:low stress-ng --vm 4 --vm-keep --vm-bytes 80% \
--vm-method all --seed 2025 --metrics -t 60 \
& cgexec -g memory:high stress-ng --vm 4 --vm-keep --vm-bytes 80% \
--vm-method all --seed 2025 --metrics -t 60
[1] 982
stress-ng: info: [982] setting to a 1 min, 0 secs run per stressor
stress-ng: info: [983] setting to a 1 min, 0 secs run per stressor
stress-ng: info: [982] dispatching hogs: 4 vm
stress-ng: info: [983] dispatching hogs: 4 vm
stress-ng: metrc: [982] stressor       bogo ops real time  usr time  sys time   bogo ops/s     bogo ops/s CPU used per       RSS Max
stress-ng: metrc: [982]                           (secs)    (secs)    (secs)  (real time) (usr+sys time) instance (%)          (KB)
stress-ng: metrc: [982] vm                23544     60.08      2.90     15.74       391.85        1263.43         7.75        524708
stress-ng: info: [982] skipped: 0
stress-ng: info: [982] passed: 4: vm (4)
stress-ng: info: [982] failed: 0
stress-ng: info: [982] metrics untrustworthy: 0
stress-ng: info: [982] successful run completed in 1 min, 0.09 secs
stress-ng: metrc: [983] stressor       bogo ops real time  usr time  sys time   bogo ops/s     bogo ops/s CPU used per       RSS Max
stress-ng: metrc: [983]                           (secs)    (secs)    (secs)  (real time) (usr+sys time) instance (%)          (KB)
stress-ng: metrc: [983] vm                23544     60.09      3.12     15.91       391.81        1237.10         7.92        705076
stress-ng: info: [983] skipped: 0
stress-ng: info: [983] passed: 4: vm (4)
stress-ng: info: [983] failed: 0
stress-ng: info: [983] metrics untrustworthy: 0
stress-ng: info: [983] successful run completed in 1 min, 0.09 secs
Best,
Hui
>
> Thanks,
> Michal
>