Message-ID: <CAP01T76Wv+swbT9xuQ-YhQ=-qOFggw6u1RziJNGjJBiNO233OQ@mail.gmail.com>
Date: Tue, 29 Apr 2025 03:56:54 +0200
From: Kumar Kartikeya Dwivedi <memxor@...il.com>
To: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Matt Bobrowski <mattbobrowski@...gle.com>, linux-kernel@...r.kernel.org, 
	Andrew Morton <akpm@...ux-foundation.org>, Alexei Starovoitov <ast@...nel.org>, 
	Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>, 
	Shakeel Butt <shakeel.butt@...ux.dev>, Suren Baghdasaryan <surenb@...gle.com>, 
	David Rientjes <rientjes@...gle.com>, Josh Don <joshdon@...gle.com>, 
	Chuyi Zhou <zhouchuyi@...edance.com>, cgroups@...r.kernel.org, linux-mm@...ck.org, 
	bpf@...r.kernel.org
Subject: Re: [PATCH rfc 00/12] mm: BPF OOM

On Mon, 28 Apr 2025 at 19:24, Roman Gushchin <roman.gushchin@...ux.dev> wrote:
>
> On Mon, Apr 28, 2025 at 10:43:07AM +0000, Matt Bobrowski wrote:
> > On Mon, Apr 28, 2025 at 03:36:05AM +0000, Roman Gushchin wrote:
> > > This patchset adds the ability to customize out-of-memory
> > > handling using bpf.
> > >
> > > It focuses on two parts:
> > > 1) OOM handling policy,
> > > 2) PSI-based OOM invocation.
> > >
> > > The idea of using bpf to customize OOM handling is not new, but
> > > unlike the previous proposal [1], which augmented the existing
> > > task-ranking-based policy, this one tries to be as generic as possible
> > > and leverage the full power of modern bpf.
> > >
> > > It provides a generic hook which is called before the existing OOM
> > > killer code and allows implementing any policy, e.g. picking a victim
> > > task or memory cgroup, or potentially even releasing memory in other
> > > ways, such as deleting tmpfs files (the latter might require some
> > > additional but relatively simple changes).
> > >
> > > The past attempt to implement a memory-cgroup-aware policy [2] showed
> > > that there are multiple opinions on what the best policy is.  As it's
> > > highly workload-dependent and specific to a concrete way of organizing
> > > workloads, the structure of the cgroup tree, etc., a customizable
> > > bpf-based implementation is preferable over an in-kernel implementation
> > > with a dozen sysctls.
> > >
> > > The second part is related to the fundamental question of when to
> > > declare the OOM event. It's a trade-off between the risk of
> > > unnecessary OOM kills and the associated work losses, and the risk of
> > > infinite thrashing and effective soft lockups.  In the last few years
> > > several PSI-based userspace solutions were developed (e.g. OOMd [3] or
> > > systemd-oomd [4]). The common idea was to use userspace daemons to
> > > implement custom OOM logic as well as rely on PSI monitoring to avoid
> > > stalls. In this scenario the userspace daemon was supposed to handle
> > > the majority of OOMs, while the in-kernel OOM killer worked as a
> > > last-resort measure to guarantee that the system would never deadlock
> > > on memory. But this approach creates additional infrastructure
> > > churn: a userspace OOM daemon is a separate entity which needs to be
> > > deployed, updated, and monitored. A completely different pipeline needs
> > > to be built to monitor both types of OOM events and collect associated
> > > logs. A userspace daemon is more restricted in terms of what data is
> > > available to it. Implementing a daemon which can work reliably under
> > > heavy memory pressure is also tricky.
> > >
> > > [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/
> > > [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/
> > > [3]: https://github.com/facebookincubator/oomd
> > > [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html
> > >
> > > ----
> > >
> > > This is an RFC version, which is not intended to be merged in its current form.
> > > Open questions/TODOs:
> > > 1) Program type/attachment type for the bpf_handle_out_of_memory() hook.
> > >    It has to be able to return a value, to be sleepable (to use cgroup iterators),
> > >    and to have trusted arguments to pass oom_control down to bpf_oom_kill_process().
> > >    The current patchset has a workaround (patch "bpf: treat fmodret tracing program's
> > >    arguments as trusted"), which is not safe. One option is to fake acquire/release
> > >    semantics for the oom_control pointer. The other option is to introduce a completely
> > >    new attachment or program type, similar to lsm hooks.
> >
> > Thinking out loud now, but rather than introducing a single
> > BPF-specific function/interface, and a BPF program for that matter,
> > which can effectively be used to short-circuit steps from within
> > out_of_memory(), why not introduce a
> > tcp_congestion_ops/sched_ext_ops-like interface which essentially
> > provides a multifaceted way of controlling OOM killing
> > (->select_bad_process, ->oom_kill_process, etc.), optionally also from
> > the context of a BPF program (BPF_PROG_TYPE_STRUCT_OPS)?
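
For concreteness, a rough sketch of what such an ops table could look
like (none of these names exist in the patchset or in the kernel today;
they simply mirror the tcp_congestion_ops/sched_ext_ops pattern):

	struct bpf_oom_ops {
		/* pick a victim; returning NULL falls back to the default
		 * in-kernel selection */
		struct task_struct *(*select_bad_process)(struct oom_control *oc);

		/* perform the kill, or release memory in some other way */
		void (*oom_kill_process)(struct oom_control *oc,
					 const char *message);

		/* policy name, for dmesg/reporting */
		char name[16];
	};
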
>
> It's certainly an option and I thought about it. I don't think we need a bunch
> of hooks though. This patchset adds two, and they belong to completely different
> subsystems (mm and sched/psi), so I don't know how well they can be gathered
> into a single struct_ops. But maybe it's fine.
>
> The only potentially new hook I can envision now is one to customize
> the oom reporting.
>

If you're considering scoping it down to a particular cgroup (as you
allude to in the TODO), or building a hierarchical interface, using
struct_ops will be much better than fmod_ret etc., which are global in
nature, even if you don't support that now. I don't think a struct_ops
is warranted only when you have more than a few callbacks. As an
illustration, sched_ext started out without supporting hierarchical
attachment, but will piggy-back on the struct_ops interface to do so
in the near future.
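
To make that concrete, the BPF side could look roughly like the sketch
below. This is only an illustration under the assumption that some
bpf_oom_ops is exported as a struct_ops (whether it carries the
multi-callback table sketched earlier, or just the single
bpf_handle_out_of_memory() hook from this RFC); the section names and
return-value convention here are hypothetical, not what the current
patches implement:

	#include <vmlinux.h>
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	char _license[] SEC("license") = "GPL";

	/* sleepable variant, so cgroup iterators can be used inside */
	SEC("struct_ops.s/handle_out_of_memory")
	int BPF_PROG(handle_oom, struct oom_control *oc)
	{
		/* walk the cgroup tree / consult PSI here and pick a victim;
		 * return non-zero if the OOM was handled, 0 to fall back to
		 * the in-kernel OOM killer */
		return 0;
	}

	SEC(".struct_ops.link")
	struct bpf_oom_ops oom_policy = {
		.handle_out_of_memory = (void *)handle_oom,
	};

Scoping such a policy to a cgroup subtree could then reuse whatever
hierarchical/scoped attachment struct_ops grows (as sched_ext plans
to), instead of inventing something specific to fmod_ret.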

> Thanks for the suggestion!
>
>
