Message-ID: <877bweswvo.fsf@linux.dev>
Date: Tue, 28 Oct 2025 11:29:31 -0700
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Tejun Heo <tj@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Alexei Starovoitov <ast@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>, Michal Hocko
<mhocko@...nel.org>, Shakeel Butt <shakeel.butt@...ux.dev>, Johannes
Weiner <hannes@...xchg.org>, Andrii Nakryiko <andrii@...nel.org>, JP
Kobryn <inwardvessel@...il.com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, bpf@...r.kernel.org, Martin KaFai Lau
<martin.lau@...nel.org>, Song Liu <song@...nel.org>, Kumar Kartikeya
Dwivedi <memxor@...il.com>
Subject: Re: [PATCH v2 20/23] sched: psi: implement bpf_psi struct ops

Tejun Heo <tj@...nel.org> writes:
> Hello,
>
> On Mon, Oct 27, 2025 at 04:22:03PM -0700, Roman Gushchin wrote:
>> This patch implements a BPF struct ops-based mechanism to create
>> PSI triggers, attach them to cgroups or system-wide, and handle
>> PSI events in BPF.
>>
>> The struct ops provides 4 callbacks:
>> - init() called once at load, handy for creating PSI triggers
>> - handle_psi_event() called every time a PSI trigger fires
>> - handle_cgroup_online() called when a new cgroup is created
>> - handle_cgroup_offline() called if a cgroup with an attached
>> trigger is deleted
>>
>> A single struct ops can create a number of PSI triggers, both
>> cgroup-scoped and system-wide.
>>
>> All 4 struct ops callbacks can be sleepable. handle_psi_event()
>> handlers are executed on a separate workqueue, so they won't
>> affect the latency of other PSI triggers.
>
> Here, too, I wonder whether it's necessary to build a hard-coded
> infrastructure to hook into PSI's triggers. psi_avgs_work() is what triggers
> these events and it's not that hot. Wouldn't a fexit attachment to that
> function that reads the updated values be enough? We can also easily add a
> TP there if a more structured access is desirable.

Idk, it would require re-implementing parts of the kernel PSI trigger
code in BPF, without clear benefits.
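
To illustrate: a bare fexit attachment would look roughly like this
(a minimal sketch only; the attach point follows your suggestion,
everything else is hypothetical):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

/* Fires after every averaging pass, whether or not anything stalled... */
SEC("fexit/psi_avgs_work")
int BPF_PROG(on_psi_avgs, struct work_struct *work)
{
	/* ...but to get trigger semantics we'd have to recover the
	 * psi_group from the work struct and re-implement the window
	 * accounting and threshold comparison that the in-kernel
	 * trigger code already does. */
	return 0;
}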

Handling PSI in BPF might be quite useful outside of OOM handling:
e.g. it can drive scheduling decisions, network throttling, memory
tiering, etc. So maybe I'm biased (and I obviously am here), but I'm
not too concerned that we're adding infrastructure which won't be used.

But I understand your point. I personally feel that the added
complexity of the infrastructure is worth it because it makes writing
and maintaining BPF PSI programs simpler, but I'm open to other
opinions here.
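
E.g. with the proposed infrastructure a complete program can be about
this small (illustrative only: the struct ops name and the callback
come from the description above, the argument type is my assumption):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

/* Sleepable callback, invoked from the dedicated workqueue
 * every time one of our triggers fires. */
SEC("struct_ops.s/handle_psi_event")
int BPF_PROG(handle_psi_event, struct psi_trigger *t)
{
	/* React to the stall: log it, wake a reclaimer, kill a task... */
	return 0;
}

SEC(".struct_ops.link")
struct bpf_psi psi_ops = {
	.handle_psi_event = (void *)handle_psi_event,
};
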
Thanks