Message-ID: <875xbsglra.fsf@linux.dev>
Date: Sun, 02 Nov 2025 13:36:25 -0800
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
  linux-kernel@...r.kernel.org,  Alexei Starovoitov <ast@...nel.org>,
  Suren Baghdasaryan <surenb@...gle.com>,  Shakeel Butt
 <shakeel.butt@...ux.dev>,  Johannes Weiner <hannes@...xchg.org>,  Andrii
 Nakryiko <andrii@...nel.org>,  JP Kobryn <inwardvessel@...il.com>,
  linux-mm@...ck.org,  cgroups@...r.kernel.org,  bpf@...r.kernel.org,
  Martin KaFai Lau <martin.lau@...nel.org>,  Song Liu <song@...nel.org>,
  Kumar Kartikeya Dwivedi <memxor@...il.com>,  Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH v2 06/23] mm: introduce BPF struct ops for OOM handling

Michal Hocko <mhocko@...e.com> writes:

> On Mon 27-10-25 16:17:09, Roman Gushchin wrote:
>> Introduce a bpf struct ops for implementing custom OOM handling
>> policies.
>> 
>> It's possible to load one bpf_oom_ops for the system and one
>> bpf_oom_ops for every memory cgroup. In case of a memcg OOM, the
>> cgroup tree is traversed from the OOM'ing memcg up to the root and
>> corresponding BPF OOM handlers are executed until some memory is
>> freed. If no memory is freed, the kernel OOM killer is invoked.
>
> Do you have any usecase in mind where parent memcg oom handler decides
> to not kill or cannot kill anything and hand over upwards in the
> hierarchy?

I believe that in most cases bpf handlers will handle OOMs themselves,
but since, strictly speaking, I have no control over what bpf programs
do or don't do, the kernel should provide a fallback mechanism. This is
common practice with bpf, e.g. sched_ext falls back to CFS/EEVDF if
something goes wrong.

Specifically for the OOM case, I believe someone might want to use bpf
programs just for monitoring/collecting information, without actually
trying to free any memory.
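
For illustration, a monitoring-only handler could look roughly like
this (a sketch, not code from this series; the struct_ops section
name, the ctx layout and the field names are my assumptions based on
the patch description):

  // Hypothetical monitoring-only BPF OOM handler: it only records
  // the event and returns 0 ("not handled"), so the kernel falls
  // through to the next handler in the chain and eventually to the
  // in-kernel OOM killer.
  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  SEC("struct_ops.s/handle_out_of_memory")
  int BPF_PROG(monitor_oom, struct oom_control *oc)
  {
          /* Collect information only; don't try to free memory. */
          bpf_printk("oom observed, order=%d", oc->order);
          return 0;
  }

  SEC(".struct_ops.link")
  struct bpf_oom_ops monitor_ops = {
          .handle_out_of_memory = (void *)monitor_oom,
          .name = "oom_monitor",
  };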

>> The struct ops provides the bpf_handle_out_of_memory() callback,
>> which is expected to return 1 if it was able to free some memory and 0
>> otherwise. If 1 is returned, the kernel also checks the bpf_memory_freed
>> field of the oom_control structure, which is expected to be set by
>> kfuncs suitable for releasing memory. If both are set, OOM is
>> considered handled, otherwise the next OOM handler in the chain
>> (e.g. BPF OOM attached to the parent cgroup or the in-kernel OOM
>> killer) is executed.
>
> Could you explain why do we need both? Why is not bpf_memory_freed
> return value sufficient?

Strictly speaking, bpf_memory_freed should be enough, but because bpf
programs have to return an int and there is no additional cost to
adding this option (pass on to the next or the in-kernel oom handler),
I thought it wasn't a bad idea. If you feel strongly otherwise, I can
ignore the return value and rely on bpf_memory_freed only.
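
In other words, something like this on the kernel side (a sketch; only
the int return value and oc->bpf_memory_freed come from the patch
description, the rest of the names are hypothetical):

  /*
   * OOM is considered handled only if the program both returned 1 and
   * some memory was actually freed by a suitable kfunc; otherwise the
   * next handler in the chain (the parent memcg's bpf_oom_ops or the
   * in-kernel OOM killer) is tried.
   */
  static bool bpf_oom_handled(struct bpf_oom_ops *ops,
                              struct oom_control *oc)
  {
          int ret;

          oc->bpf_memory_freed = 0;
          ret = ops->handle_out_of_memory(oc);
          return ret == 1 && oc->bpf_memory_freed;
  }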

>
>> The bpf_handle_out_of_memory() callback program is sleepable to enable
>> using iterators, e.g. cgroup iterators. The callback receives struct
>> oom_control as an argument, so it can determine the scope of the OOM
>> event: if this is a memcg-wide or system-wide OOM.
>
> This could be tricky because it might introduce a subtle and hard to
> debug lock dependency chain. lock(a); allocation() -> oom -> lock(a).
> Sleepable locks should be only allowed in trylock mode.

Agreed, but this is handled by controlling the context where the oom
can be declared (e.g. in the bpf_psi case it's done from a work
context).
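
To illustrate (a sketch with hypothetical names, loosely modeled on
the bpf_psi part of the series): the handler is never invoked from the
allocation path itself, so no allocation-path locks can be held when
the sleepable program runs:

  /* PSI wakeup path: don't run the handler here, just punt to a work. */
  static void bpf_psi_trigger(struct bpf_psi_event *ev)
  {
          schedule_work(&ev->work);
  }

  /*
   * Workqueue context: safe to sleep; no lock(a) -> allocation ->
   * oom -> lock(a) chain is possible because lock(a) isn't held here.
   */
  static void bpf_psi_oom_work(struct work_struct *work)
  {
          struct bpf_psi_event *ev =
                  container_of(work, struct bpf_psi_event, work);

          bpf_psi_handle_oom(ev);
  }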

>
>> The callback is executed just before the kernel victim task selection
>> algorithm, so all heuristics and sysctls like panic on oom,
>> sysctl_oom_kill_allocating_task and sysctl_oom_kill_allocating_task
>> are respected.
>
> I guess you meant to say and sysctl_panic_on_oom.

Yep, fixed.
>
>> BPF OOM struct ops provides the handle_cgroup_offline() callback
>> which is good for releasing struct ops if the corresponding cgroup
>> is gone.
>
> What kind of synchronization is expected between handle_cgroup_offline
> and bpf_handle_out_of_memory?

You mean from a user's perspective? E.g. can these two callbacks run
in parallel? Currently yes, but it's a good question; I haven't thought
about it, and maybe it would be better to synchronize them.
Internally both rely on srcu to pin bpf_oom_ops in memory.
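
FWIW the pinning itself is the usual SRCU pattern, roughly (a sketch;
bpf_oom_srcu and the per-memcg pointer are hypothetical names):

  DEFINE_STATIC_SRCU(bpf_oom_srcu);

  static int bpf_oom_invoke(struct mem_cgroup *memcg,
                            struct oom_control *oc)
  {
          struct bpf_oom_ops *ops;
          int idx, ret = 0;

          /*
           * A reader section keeps the ops from being freed while the
           * callback runs; unregistering waits via synchronize_srcu().
           */
          idx = srcu_read_lock(&bpf_oom_srcu);
          ops = srcu_dereference(memcg->bpf_oom_ops, &bpf_oom_srcu);
          if (ops)
                  ret = ops->handle_out_of_memory(oc);
          srcu_read_unlock(&bpf_oom_srcu, idx);
          return ret;
  }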

>  
>> The struct ops also has the name field, which allows defining a
>> custom name for the implemented policy. It's printed in the OOM report
>> in the oom_policy=<policy> format. "default" is printed if bpf is not
>> used or policy name is not specified.
>
> oom_handler seems like a better fit but nothing I would insist on. Also
> I would just print it if there is an actual handler so that existing
> users who do not use bpf oom killers do not need to change their
> parsers.

Sure, works for me too.
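
I.e. roughly (a sketch, hypothetical field names):

  /*
   * Emit oom_policy= only when a BPF OOM handler is attached, so
   * existing OOM-report parsers don't see a new line otherwise.
   */
  if (oc->bpf_oom_ops && oc->bpf_oom_ops->name[0])
          pr_info("oom_policy=%s\n", oc->bpf_oom_ops->name);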

>
> Other than that this looks reasonable to me.

Sounds great, thank you for taking a look!
