Message-ID: <aXpv4r_3L0UWzAQn@google.com>
Date: Wed, 28 Jan 2026 20:21:54 +0000
From: Matt Bobrowski <mattbobrowski@...gle.com>
To: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: bpf@...r.kernel.org, Michal Hocko <mhocko@...e.com>,
Alexei Starovoitov <ast@...nel.org>,
Shakeel Butt <shakeel.butt@...ux.dev>,
JP Kobryn <inwardvessel@...il.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Suren Baghdasaryan <surenb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH bpf-next v3 09/17] mm: introduce bpf_out_of_memory() BPF
kfunc
On Mon, Jan 26, 2026 at 06:44:12PM -0800, Roman Gushchin wrote:
> Introduce the bpf_out_of_memory() BPF kfunc, which allows declaring
> an out-of-memory event and triggering the corresponding kernel OOM
> handling mechanism.
>
> It takes a trusted memcg pointer (or NULL for system-wide OOMs)
> as an argument, as well as the page order.
>
> If the BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK flag is not set, only one OOM
> can be declared and handled in the system at once, so if the function
> is called while another OOM is being handled, it bails out with -EBUSY.
> This mode is suited for global OOMs: any concurrent OOM will likely
> do the job and release some memory. In blocking mode (which is
> suited for memcg OOMs) the execution will wait on the oom_lock mutex.
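Just to confirm I'm reading the intended usage model correctly, I'm
picturing a caller (from some program type where this kfunc isn't
filtered out) doing roughly the following. This is only a sketch on my
end: the helper names are made up, and I'm assuming struct mem_cgroup
comes from vmlinux.h and that BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK is made
visible to BPF programs via some shared header, which this patch
doesn't spell out.

/* Sketch only; see the assumptions above. */
extern int bpf_out_of_memory(struct mem_cgroup *memcg__nullable,
			     int order, u64 flags) __ksym;

/* Global OOM: don't wait on oom_lock; if it's contended, a concurrent
 * OOM handler will likely free some memory anyway, so -EBUSY is fine.
 */
static int declare_global_oom(void)
{
	return bpf_out_of_memory(NULL, 0, 0);
}

/* Memcg OOM: wait on oom_lock so the event isn't silently dropped. */
static int declare_memcg_oom(struct mem_cgroup *memcg)
{
	return bpf_out_of_memory(memcg, 0, BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK);
}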
>
> The function is declared as sleepable, which guarantees that it won't
> be called from an atomic context. This is required by the OOM handling
> code, which shouldn't be called from a non-blocking context.
>
> Handling a memcg OOM almost always requires taking the css_set_lock
> spinlock. The fact that bpf_out_of_memory() is sleepable also
> guarantees that it can't be called with css_set_lock held, so the
> kernel can't deadlock on it.
>
> To avoid deadlocks on the oom lock, the function is filtered out for
> bpf oom struct ops programs and all tracing programs.
>
> Signed-off-by: Roman Gushchin <roman.gushchin@...ux.dev>
> ---
> include/linux/oom.h | 5 +++
> mm/oom_kill.c | 85 +++++++++++++++++++++++++++++++++++++++++++--
> 2 files changed, 88 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/oom.h b/include/linux/oom.h
> index c2dce336bcb4..851dba9287b5 100644
> --- a/include/linux/oom.h
> +++ b/include/linux/oom.h
> @@ -21,6 +21,11 @@ enum oom_constraint {
> CONSTRAINT_MEMCG,
> };
>
> +enum bpf_oom_flags {
> + BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK = 1 << 0,
> + BPF_OOM_FLAGS_LAST = 1 << 1,
> +};
> +
> /*
> * Details of the page allocation that triggered the oom killer that are used to
> * determine what should be killed.
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 09897597907f..8f63a370b8f5 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -1334,6 +1334,53 @@ __bpf_kfunc int bpf_oom_kill_process(struct oom_control *oc,
> return 0;
> }
>
> +/**
> + * bpf_out_of_memory - declare Out Of Memory state and invoke OOM killer
> + * @memcg__nullable: memcg or NULL for system-wide OOMs
> + * @order: order of page which wasn't allocated
> + * @flags: flags
> + *
> + * Declares the Out Of Memory state and invokes the OOM killer.
> + *
> + * OOM handlers are synchronized using the oom_lock mutex. If wait_on_oom_lock
> + * is true, the function will wait on it. Otherwise it bails out with -EBUSY
> + * if oom_lock is contended.
> + *
> + * Generally it's advised to pass wait_on_oom_lock=false for global OOMs
> + * and wait_on_oom_lock=true for memcg-scoped OOMs.
> + *
> + * Returns 1 if the forward progress was achieved and some memory was freed.
> + * Returns a negative value if an error occurred.
> + */
> +__bpf_kfunc int bpf_out_of_memory(struct mem_cgroup *memcg__nullable,
> + int order, u64 flags)
> +{
> + struct oom_control oc = {
> + .memcg = memcg__nullable,
> + .gfp_mask = GFP_KERNEL,
> + .order = order,
> + };
> + int ret;
> +
> + if (flags & ~(BPF_OOM_FLAGS_LAST - 1))
> + return -EINVAL;
> +
> + if (oc.order < 0 || oc.order > MAX_PAGE_ORDER)
> + return -EINVAL;
> +
> + if (flags & BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK) {
> + ret = mutex_lock_killable(&oom_lock);
If the lock is contended and we end up waiting here, enough forward
progress could have been made in the interim that the pending OOM event
initiated by the call into bpf_out_of_memory() may no longer even be
warranted. What do you think about adding an escape hatch here, which
could simply take the form of a user-defined callback?
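Roughly what I had in mind, where "recheck" is a completely made-up,
caller-supplied callback used purely to illustrate the shape of such an
escape hatch rather than a concrete API proposal:

	if (flags & BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK) {
		ret = mutex_lock_killable(&oom_lock);
		if (ret)
			return ret;
		/*
		 * We may have slept here for a while and memory may
		 * have been freed in the meantime. "recheck" is a
		 * hypothetical caller-supplied hook letting the caller
		 * back off before anything gets killed.
		 */
		if (recheck && !recheck(oc.memcg, oc.order)) {
			mutex_unlock(&oom_lock);
			return 0;
		}
	} else if (!mutex_trylock(&oom_lock))
		return -EBUSY;

No strong feelings on the exact mechanism, though.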
> + if (ret)
> + return ret;
> + } else if (!mutex_trylock(&oom_lock))
> + return -EBUSY;
> +
> + ret = out_of_memory(&oc);
> +
> + mutex_unlock(&oom_lock);
> + return ret;
> +}
> +
> __bpf_kfunc_end_defs();
>
> BTF_KFUNCS_START(bpf_oom_kfuncs)
> @@ -1356,14 +1403,48 @@ static const struct btf_kfunc_id_set bpf_oom_kfunc_set = {
> .filter = bpf_oom_kfunc_filter,
> };
>
> +BTF_KFUNCS_START(bpf_declare_oom_kfuncs)
> +BTF_ID_FLAGS(func, bpf_out_of_memory, KF_SLEEPABLE)
> +BTF_KFUNCS_END(bpf_declare_oom_kfuncs)
> +
> +static int bpf_declare_oom_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id)
> +{
> + if (!btf_id_set8_contains(&bpf_declare_oom_kfuncs, kfunc_id))
> + return 0;
> +
> + if (prog->type == BPF_PROG_TYPE_STRUCT_OPS &&
> + prog->aux->attach_btf_id == bpf_oom_ops_ids[0])
> + return -EACCES;
> +
> + if (prog->type == BPF_PROG_TYPE_TRACING)
> + return -EACCES;
> +
> + return 0;
> +}
> +
> +static const struct btf_kfunc_id_set bpf_declare_oom_kfunc_set = {
> + .owner = THIS_MODULE,
> + .set = &bpf_declare_oom_kfuncs,
> + .filter = bpf_declare_oom_kfunc_filter,
> +};
> +
> static int __init bpf_oom_init(void)
> {
> int err;
>
> err = register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS,
> &bpf_oom_kfunc_set);
> - if (err)
> - pr_warn("error while registering bpf oom kfuncs: %d", err);
> + if (err) {
> + pr_warn("error while registering struct_ops bpf oom kfuncs: %d", err);
> + return err;
> + }
> +
> + err = register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC,
> + &bpf_declare_oom_kfunc_set);
> + if (err) {
> + pr_warn("error while registering unspec bpf oom kfuncs: %d", err);
> + return err;
> + }
>
> return err;
> }
> --
> 2.52.0
>