Message-ID: <87a53v2iou.fsf@linux.dev>
Date: Tue, 19 Aug 2025 12:52:01 -0700
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: linux-mm@...ck.org,  bpf@...r.kernel.org,  Johannes Weiner
 <hannes@...xchg.org>,  Michal Hocko <mhocko@...e.com>,  David Rientjes
 <rientjes@...gle.com>,  Matt Bobrowski <mattbobrowski@...gle.com>,  Song
 Liu <song@...nel.org>,  Kumar Kartikeya Dwivedi <memxor@...il.com>,
  Alexei Starovoitov <ast@...nel.org>,  Andrew Morton
 <akpm@...ux-foundation.org>,  linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 00/14] mm: BPF OOM

Suren Baghdasaryan <surenb@...gle.com> writes:

> On Mon, Aug 18, 2025 at 10:01 AM Roman Gushchin
> <roman.gushchin@...ux.dev> wrote:
>>
>> This patchset adds an ability to customize the out of memory
>> handling using bpf.
>>
>> It focuses on two parts:
>> 1) OOM handling policy,
>> 2) PSI-based OOM invocation.
>>
>> The idea to use bpf for customizing the OOM handling is not new, but
>> unlike the previous proposal [1], which augmented the existing task
>> ranking policy, this one tries to be as generic as possible and
>> leverage the full power of the modern bpf.
>>
>> It provides a generic interface which is called before the existing OOM
>> killer code and allows implementing any policy, e.g. picking a victim
>> task or memory cgroup or potentially even releasing memory in other
>> ways, e.g. deleting tmpfs files (the last one might require some
>> additional but relatively simple changes).
>>
>> The past attempt to implement memory-cgroup aware policy [2] showed
>> that there are multiple opinions on what the best policy is.  As it's
>> highly workload-dependent and specific to a concrete way of organizing
>> workloads, the structure of the cgroup tree etc, a customizable
>> bpf-based implementation is preferable over a in-kernel implementation
>> with a dozen on sysctls.
>
> s/on/of ?

Fixed, thanks.
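For context on the "PSI-based OOM invocation" part of the cover letter: the patchset hooks this in-kernel via BPF, but the underlying pressure signal is the same one the kernel exports at /proc/pressure/memory (Documentation/accounting/psi.rst). A minimal userspace sketch of threshold-based triggering on that signal, in the spirit of daemons like oomd; the 40.0 threshold and both helper names are illustrative, not from the patchset:

```python
def parse_psi(text):
    """Parse /proc/pressure/memory-style output into {kind: {metric: value}}.

    Each line looks like:
      some avg10=0.00 avg60=0.00 avg300=0.00 total=0
    """
    stats = {}
    for line in text.strip().splitlines():
        kind, *fields = line.split()
        stats[kind] = {k: float(v) for k, v in (f.split("=") for f in fields)}
    return stats


def should_trigger_oom(stats, threshold=40.0):
    """Fire when the 10-second full-stall average crosses the threshold."""
    return stats.get("full", {}).get("avg10", 0.0) >= threshold


# Sample output in the format of /proc/pressure/memory:
sample = (
    "some avg10=12.50 avg60=3.10 avg300=0.80 total=123456\n"
    "full avg10=45.00 avg60=9.70 avg300=2.10 total=65432\n"
)
print(should_trigger_oom(parse_psi(sample)))
```

A production consumer would instead register a PSI trigger (writing e.g. "full 150000 1000000" to the pressure file and polling the fd) rather than sampling averages; the in-kernel BPF approach proposed here avoids that userspace round trip entirely.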
