Message-ID: <CAADnVQKWskY1ijJtSX=N0QczW_-gtg-X_SpK_GuiYBYQodn5wQ@mail.gmail.com>
Date: Tue, 28 Oct 2025 10:45:47 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>, LKML <linux-kernel@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>, Suren Baghdasaryan <surenb@...gle.com>, Michal Hocko <mhocko@...nel.org>,
Shakeel Butt <shakeel.butt@...ux.dev>, Johannes Weiner <hannes@...xchg.org>,
Andrii Nakryiko <andrii@...nel.org>, JP Kobryn <inwardvessel@...il.com>,
linux-mm <linux-mm@...ck.org>,
"open list:CONTROL GROUP (CGROUP)" <cgroups@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
Martin KaFai Lau <martin.lau@...nel.org>, Song Liu <song@...nel.org>,
Kumar Kartikeya Dwivedi <memxor@...il.com>, Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH v2 06/23] mm: introduce BPF struct ops for OOM handling
On Mon, Oct 27, 2025 at 4:18 PM Roman Gushchin <roman.gushchin@...ux.dev> wrote:
>
> +bool bpf_handle_oom(struct oom_control *oc)
> +{
> + struct bpf_oom_ops *bpf_oom_ops = NULL;
> + struct mem_cgroup __maybe_unused *memcg;
> + int idx, ret = 0;
> +
> + /* All bpf_oom_ops structures are protected using bpf_oom_srcu */
> + idx = srcu_read_lock(&bpf_oom_srcu);
> +
> +#ifdef CONFIG_MEMCG
> + /* Find the nearest bpf_oom_ops traversing the cgroup tree upwards */
> + for (memcg = oc->memcg; memcg; memcg = parent_mem_cgroup(memcg)) {
> + bpf_oom_ops = READ_ONCE(memcg->bpf_oom);
> + if (!bpf_oom_ops)
> + continue;
> +
> + /* Call BPF OOM handler */
> + ret = bpf_ops_handle_oom(bpf_oom_ops, memcg, oc);
> + if (ret && oc->bpf_memory_freed)
> + goto exit;
> + }
> +#endif /* CONFIG_MEMCG */
> +
> + /*
> + * System-wide OOM or per-memcg BPF OOM handler wasn't successful?
> + * Try system_bpf_oom.
> + */
> + bpf_oom_ops = READ_ONCE(system_bpf_oom);
> + if (!bpf_oom_ops)
> + goto exit;
> +
> + /* Call BPF OOM handler */
> + ret = bpf_ops_handle_oom(bpf_oom_ops, NULL, oc);
> +exit:
> + srcu_read_unlock(&bpf_oom_srcu, idx);
> + return ret && oc->bpf_memory_freed;
> +}
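
For context, a minimal BPF-side handler for this hook might look
roughly like the sketch below. The callback name
handle_out_of_memory and the struct layout are my assumptions (the
hunk above only shows the kernel-side dispatch), so treat this as
illustrative rather than the patch's actual interface:

/* Hypothetical sketch; the callback name is assumed, not taken
 * from the quoted patch.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("struct_ops/handle_out_of_memory")
int BPF_PROG(handle_out_of_memory, struct oom_control *oc)
{
	/*
	 * Apply a custom policy here: pick a victim, free memory, etc.
	 * A nonzero return claims the OOM was handled, but as the
	 * quoted code shows, bpf_handle_oom() also insists on
	 * oc->bpf_memory_freed before trusting that claim.
	 */
	return 0;
}

SEC(".struct_ops.link")
struct bpf_oom_ops oom_policy_ops = {
	.handle_out_of_memory = (void *)handle_out_of_memory,
};
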
...
> +static int bpf_oom_ops_reg(void *kdata, struct bpf_link *link)
> +{
> + struct bpf_struct_ops_link *ops_link = container_of(link, struct bpf_struct_ops_link, link);
> + struct bpf_oom_ops **bpf_oom_ops_ptr = NULL;
> + struct bpf_oom_ops *bpf_oom_ops = kdata;
> + struct mem_cgroup *memcg = NULL;
> + int err = 0;
> +
> + if (IS_ENABLED(CONFIG_MEMCG) && ops_link->cgroup_id) {
> + /* Attach to a memory cgroup? */
> + memcg = mem_cgroup_get_from_ino(ops_link->cgroup_id);
> + if (IS_ERR_OR_NULL(memcg))
> + return PTR_ERR(memcg);
> + bpf_oom_ops_ptr = bpf_oom_memcg_ops_ptr(memcg);
> + } else {
> + /* System-wide OOM handler */
> + bpf_oom_ops_ptr = &system_bpf_oom;
> + }
I don't like the fallback and the special case of cgroup_id == 0.
imo it would be cleaner to require CONFIG_MEMCG for this feature
and only allow attaching to a cgroup.
There is always a root cgroup that can be attached to, and that
handler will act as the "system wide" oom handler.
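
To make that concrete, a rough sketch of the attach path without the
special case could look like this (refcount and error handling are
simplified, and the bpf_oom_memcg_ops_ptr() usage is inferred from
the hunk above, so this is the idea, not a drop-in replacement):

static int bpf_oom_ops_reg(void *kdata, struct bpf_link *link)
{
	struct bpf_struct_ops_link *ops_link =
		container_of(link, struct bpf_struct_ops_link, link);
	struct bpf_oom_ops **bpf_oom_ops_ptr;
	struct bpf_oom_ops *bpf_oom_ops = kdata;
	struct mem_cgroup *memcg;
	int err = 0;

	/* No cgroup_id == 0 fallback: a target cgroup is mandatory. */
	if (!ops_link->cgroup_id)
		return -EINVAL;

	/*
	 * Attaching to the root cgroup covers the whole system,
	 * because bpf_handle_oom() walks the tree up to the root.
	 */
	memcg = mem_cgroup_get_from_ino(ops_link->cgroup_id);
	if (IS_ERR_OR_NULL(memcg))
		return memcg ? PTR_ERR(memcg) : -ENOENT;

	bpf_oom_ops_ptr = bpf_oom_memcg_ops_ptr(memcg);

	/* One handler per memcg (assumed policy, mirroring the patch). */
	if (READ_ONCE(*bpf_oom_ops_ptr))
		err = -EBUSY;
	else
		WRITE_ONCE(*bpf_oom_ops_ptr, bpf_oom_ops);

	if (err)
		mem_cgroup_put(memcg);
	/* On success the memcg reference is assumed to be kept until unreg. */
	return err;
}

Userspace then gets the "system wide" behavior simply by attaching
to the root cgroup; no second code path is needed.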