Message-ID: <20240903124416.GE424729@mit.edu>
Date: Tue, 3 Sep 2024 08:44:16 -0400
From: "Theodore Ts'o" <tytso@....edu>
To: Yafang Shao <laoar.shao@...il.com>
Cc: Michal Hocko <mhocko@...e.com>, Dave Chinner <david@...morbit.com>,
        Kent Overstreet <kent.overstreet@...ux.dev>,
        Matthew Wilcox <willy@...radead.org>, linux-fsdevel@...r.kernel.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Dave Chinner <dchinner@...hat.com>
Subject: Re: [PATCH] bcachefs: Switch to memalloc_flags_do() for vmalloc
 allocations

On Tue, Sep 03, 2024 at 02:34:05PM +0800, Yafang Shao wrote:
>
> When setting GFP_NOFAIL, it's important to enable not only direct
> reclaim but also the OOM killer. In scenarios where swap is off and
> there is minimal page cache, setting GFP_NOFAIL without __GFP_FS can
> result in an infinite loop. In other words, GFP_NOFAIL should not be
> used with GFP_NOFS. Unfortunately, many call sites do combine them.
> For example:
> 
> XFS:
> 
> fs/xfs/libxfs/xfs_exchmaps.c: GFP_NOFS | __GFP_NOFAIL
> fs/xfs/xfs_attr_item.c: GFP_NOFS | __GFP_NOFAIL
> 
> EXT4:
> 
> fs/ext4/mballoc.c: GFP_NOFS | __GFP_NOFAIL
> fs/ext4/extents.c: GFP_NOFS | __GFP_NOFAIL
> 
> This seems problematic, but I'm not an FS expert. Perhaps Dave or Ted
> could provide further insight.
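
To make the combination concrete: as I understand it, those call sites
boil down to something of this shape (an illustrative sketch, not code
copied from either tree):

    ptr = kzalloc(size, GFP_NOFS | __GFP_NOFAIL);

i.e., "don't recurse into the file system, but also never fail", which
is exactly the pairing being questioned above.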

GFP_NOFS is needed because we need to signal to the mm layer to avoid
recursing into the file system layer --- for example, to clean a page by
writing it back to the FS.  Since we may have taken various file
system locks, recursing could lead to deadlock, which would make the
system (and the user) sad.
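
For what it's worth, the usual way to express this constraint these
days is the scoped API from <linux/sched/mm.h> rather than passing
GFP_NOFS at every call site.  A minimal sketch (the helper name below
is made up for illustration; memalloc_nofs_save(),
memalloc_nofs_restore(), and kmalloc() are the real interfaces):

    #include <linux/sched/mm.h>
    #include <linux/slab.h>

    static void *fs_alloc_under_lock(size_t size)
    {
            unsigned int nofs_flags;
            void *p;

            /*
             * We hold file system locks here; tell the allocator not to
             * recurse back into the file system to clean pages in order
             * to satisfy this allocation.
             */
            nofs_flags = memalloc_nofs_save();
            p = kmalloc(size, GFP_KERNEL);  /* treated as GFP_NOFS here */
            memalloc_nofs_restore(nofs_flags);

            return p;
    }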

If the mm layer wants to OOM kill a process, that should be fine as
far as the file system is concerned --- this could reclaim anonymous
pages that don't need to be written back, for example.  And we don't
need to write back dirty pages before the process is killed.  So I'm
a bit puzzled as to why (as you imply; I haven't dug into the mm code
in question) GFP_NOFS implies disabling the OOM killer.
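
If I'm reading the report correctly, the failure mode being described
is a retry loop of roughly this shape (a simplified sketch of the
claimed behavior; the helper names are made up and this is not the
actual page allocator code):

    for (;;) {
            /*
             * Swap is off, there is almost no clean page cache, and
             * dirty file system pages are off limits without __GFP_FS,
             * so reclaim makes no progress.
             */
            page = try_direct_reclaim(gfp_mask, order);
            if (page)
                    return page;

            /* Reportedly the OOM killer is skipped when __GFP_FS is clear... */
            if (!(gfp_mask & __GFP_FS))
                    continue;       /* ...so __GFP_NOFAIL just spins here */

            invoke_oom_killer();    /* never reached in this scenario */
    }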

Regards,

					- Ted

P.S.  Note that this is a fairly simplistic, very conservative set of
constraints.  If you have several dozen file systems mounted, and
we're deep in the guts of file system A, it might be *fine* to clean
pages associated with file system B or file system C.  Unless of
course, file system A is a loop-back mount onto a file located in file
system B, in which case writing into file system A might require
taking locks related to file system B.  But that aside, in theory we
could allow certain types of page reclaim if we were willing to track
which file systems are busy.

On the other hand, if the system is allowed to get that busy,
performance is going to be *terrible*, and so perhaps the better thing
to do is to teach the container manager not to schedule so many jobs
on the server in the first place, to have the mobile OS kill off
applications that aren't in the foreground, or to give the OOM killer
license to kill off jobs much earlier, etc.  By the time we get to the
point where we are trying to use these last dozen or so pages, the
system is going to be thrashing super-badly, and the user is going to
be *quite* unhappy.  So arguably these problems should be solved much
higher up the software stack, by not letting the system get into such
a condition in the first place.
