Message-ID: <20250821194515.ohw7rhgo4peepw63@offworld>
Date: Thu, 21 Aug 2025 12:45:15 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Michal Hocko <mhocko@...e.com>
Cc: zhongjinji <zhongjinji@...or.com>, akpm@...ux-foundation.org,
	andrealmeid@...lia.com, dvhart@...radead.org, feng.han@...or.com,
	liam.howlett@...cle.com, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, liulu.liu@...or.com, mingo@...hat.com,
	npache@...hat.com, peterz@...radead.org, rientjes@...gle.com,
	shakeel.butt@...ux.dev, tglx@...utronix.de
Subject: Re: [PATCH v4 2/3] mm/oom_kill: Only delay OOM reaper for processes
 using robust futexes

On Thu, 21 Aug 2025, Michal Hocko wrote:

>On Tue 19-08-25 19:53:08, Davidlohr Bueso wrote:
>> Yeah, relying on time as a fix is never a good idea. I was going to suggest
>> skipping the reaping for tasks with a robust list,
>
>Let me reiterate that the purpose of the oom reaper is not an
>optimization of the oom killing process. It is crucial to guarantee
>forward progress in the OOM situation by a) async memory reclaim of the
>oom victim and b) unblocking oom selection of a different process after
>a) is done. That means that the victim cannot block the oom situation
>forever. Therefore we cannot really skip tasks with robust futexes, or
>any other user processes, unless b) is still achieved at the same time.

Yes, which is why I indicated that skipping it was less practical.
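
For reference, a minimal sketch of what such a task looks like from
userspace: a process-shared pthread mutex marked PTHREAD_MUTEX_ROBUST,
whose robust list lives in the owner's memory and is walked by the
kernel when the owner exits. Purely illustrative, not taken from the
series under discussion (build with -pthread):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/* Place the mutex in shared memory so both processes see it. */
	pthread_mutex_t *m = mmap(NULL, sizeof(*m), PROT_READ | PROT_WRITE,
				  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (m == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pthread_mutexattr_t attr;
	pthread_mutexattr_init(&attr);
	pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
	pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
	pthread_mutex_init(m, &attr);

	pid_t pid = fork();
	if (pid == 0) {
		/* Child: take the lock and die without unlocking,
		 * standing in for a killed owner. */
		pthread_mutex_lock(m);
		_exit(0);
	}
	waitpid(pid, NULL, 0);

	/* The dead owner is detected via its robust list; the waiter
	 * gets EOWNERDEAD and can make the mutex consistent again. */
	if (pthread_mutex_lock(m) == EOWNERDEAD) {
		printf("owner died, recovering mutex\n");
		pthread_mutex_consistent(m);
	}
	pthread_mutex_unlock(m);
	return 0;
}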

In the real world, users who care enough to use robust futexes should
make sure their applications keep the OOM killer away altogether.
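
One way to do that (just a sketch, not something this series mandates)
is to pin the process to OOM_SCORE_ADJ_MIN so it is exempt from OOM
selection; lowering the value requires CAP_SYS_RESOURCE:

#include <stdio.h>

int main(void)
{
	/* Illustrative only: exempt this process from OOM selection by
	 * writing OOM_SCORE_ADJ_MIN (-1000) to its oom_score_adj. */
	FILE *f = fopen("/proc/self/oom_score_adj", "w");
	if (!f) {
		perror("oom_score_adj");
		return 1;
	}
	fputs("-1000", f);
	fclose(f);

	/* ... run the robust-futex-holding workload here ... */
	return 0;
}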

Thanks,
Davidlohr
