Message-ID: <20250428033617.3797686-11-roman.gushchin@linux.dev>
Date: Mon, 28 Apr 2025 03:36:15 +0000
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: linux-kernel@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Alexei Starovoitov <ast@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Suren Baghdasaryan <surenb@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Josh Don <joshdon@...gle.com>,
Chuyi Zhou <zhouchuyi@...edance.com>,
cgroups@...r.kernel.org,
linux-mm@...ck.org,
bpf@...r.kernel.org,
Roman Gushchin <roman.gushchin@...ux.dev>
Subject: [PATCH rfc 10/12] mm: introduce bpf_out_of_memory() bpf kfunc
Introduce the bpf_out_of_memory() bpf kfunc, which allows a bpf program
to declare an out of memory event and trigger the corresponding kernel
OOM handling mechanism.
It takes a trusted memcg pointer (or NULL for system-wide OOMs)
as an argument, as well as the page order.
Only one OOM can be declared and handled in the system at once, so if
the function is called while another OOM is already being handled, it
bails out with -EBUSY.
The function is declared sleepable, which guarantees that it won't be
called from an atomic context. This is required by the OOM handling
code, which is not guaranteed to work in a non-blocking context.
Handling a memcg OOM almost always requires taking the css_set_lock
spinlock. The fact that bpf_out_of_memory() is sleepable also
guarantees that it can't be called with css_set_lock already held,
so the kernel can't deadlock on it.
Signed-off-by: Roman Gushchin <roman.gushchin@...ux.dev>
---
mm/oom_kill.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 2e922e75a9df..246510572e34 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -1333,6 +1333,27 @@ __bpf_kfunc int bpf_oom_kill_process(struct oom_control *oc,
return 0;
}
+__bpf_kfunc int bpf_out_of_memory(struct mem_cgroup *memcg__nullable, int order)
+{
+ struct oom_control oc = {
+ .memcg = memcg__nullable,
+ .order = order,
+ };
+ int ret = -EINVAL;
+
+ if (oc.order < 0 || oc.order > MAX_PAGE_ORDER)
+ goto out;
+
+ ret = -EBUSY;
+ if (mutex_trylock(&oom_lock)) {
+ ret = out_of_memory(&oc);
+ mutex_unlock(&oom_lock);
+ }
+
+out:
+ return ret;
+}
+
__bpf_kfunc_end_defs();
__bpf_hook_start();
@@ -1358,6 +1379,7 @@ static const struct btf_kfunc_id_set bpf_oom_hook_set = {
BTF_KFUNCS_START(bpf_oom_kfuncs)
BTF_ID_FLAGS(func, bpf_oom_kill_process, KF_SLEEPABLE | KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_out_of_memory, KF_SLEEPABLE | KF_TRUSTED_ARGS)
BTF_KFUNCS_END(bpf_oom_kfuncs)
static const struct btf_kfunc_id_set bpf_oom_kfunc_set = {
--
2.49.0.901.g37484f566f-goog