Message-ID: <CALOAHbCAN8KwgxoSw4Rg2Uuwp0=LcGY8WRMqLbpEP5MkW4H_XQ@mail.gmail.com>
Date: Tue, 3 Sep 2024 21:15:59 +0800
From: Yafang Shao <laoar.shao@...il.com>
To: "Theodore Ts'o" <tytso@....edu>
Cc: Michal Hocko <mhocko@...e.com>, Dave Chinner <david@...morbit.com>, 
	Kent Overstreet <kent.overstreet@...ux.dev>, Matthew Wilcox <willy@...radead.org>, 
	linux-fsdevel@...r.kernel.org, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, Dave Chinner <dchinner@...hat.com>
Subject: Re: [PATCH] bcachefs: Switch to memalloc_flags_do() for vmalloc allocations

On Tue, Sep 3, 2024 at 8:44 PM Theodore Ts'o <tytso@....edu> wrote:
>
> On Tue, Sep 03, 2024 at 02:34:05PM +0800, Yafang Shao wrote:
> >
> > When setting GFP_NOFAIL, it's important to not only enable direct
> > reclaim but also the OOM killer. In scenarios where swap is off and
> > there is minimal page cache, setting GFP_NOFAIL without __GFP_FS can
> > result in an infinite loop. In other words, GFP_NOFAIL should not be
> > used with GFP_NOFS. Unfortunately, many call sites do combine them.
> > For example:
> >
> > XFS:
> >
> > fs/xfs/libxfs/xfs_exchmaps.c: GFP_NOFS | __GFP_NOFAIL
> > fs/xfs/xfs_attr_item.c: GFP_NOFS | __GFP_NOFAIL
> >
> > EXT4:
> >
> > fs/ext4/mballoc.c: GFP_NOFS | __GFP_NOFAIL
> > fs/ext4/extents.c: GFP_NOFS | __GFP_NOFAIL
> >
> > This seems problematic, but I'm not an FS expert. Perhaps Dave or Ted
> > could provide further insight.
>
> GFP_NOFS is needed because we need to signal to the mm layer to avoid
> recursing into the file system layer --- for example, to clean a page
> by writing it back to the FS.  Since we may have taken various file
> system locks, recursing could lead to deadlock, which would make the
> system (and the user) sad.
>
> If the mm layer wants to OOM kill a process, that should be fine as
> far as the file system is concerned --- this could reclaim anonymous
> pages that don't need to be written back, for example.  And we don't
> need to write back dirty pages before the process is killed.  So I'm
> a bit puzzled why (as you imply; I haven't dug into the mm code in
> question) GFP_NOFS implies disabling the OOM killer?

Refer to the out_of_memory() function [0]:

    if (!(oc->gfp_mask & __GFP_FS) && !is_memcg_oom(oc))
        return true;

[0]. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/oom_kill.c#n1137

Is it possible that this check can be removed?

>
> Regards,
>
>                                         - Ted
>
> P.S.  Note that this is a fairly simplistic, very conservative set of
> constraints.  If you have several dozen file systems mounted, and
> we're deep in the guts of file system A, it might be *fine* to clean
> pages associated with file system B or file system C.  Unless of
> course, file system A is a loop-back mount onto a file located in file
> system B, in which case writing into file system A might require
> taking locks related to file system B.  But that aside, in theory we
> could allow certain types of page reclaim if we were willing to track
> which file systems are busy.
>
> On the other hand, if the system is allowed to get that busy,
> performance is going to be *terrible*, and so perhaps the better thing
> to do is to teach the container manager not to schedule so many jobs
> on the server in the first place, to have the mobile OS kill off
> applications that aren't in the foreground, or to give the OOM killer
> license to kill off jobs much earlier, etc.  By the time we get to the
> point where we are trying to use these last dozen or so pages, the
> system is going to be thrashing super-badly, and the user is going to
> be *quite* unhappy.  So arguably these problems should be solved much
> higher up the software stack, by not letting the system get into such
> a condition in the first place.

I completely agree with your point. However, in the real world, things
don't always go as planned, which is why it's crucial to ensure the
OOM killer remains effective during system thrashing. Unfortunately,
the kernel's OOM killer doesn't always behave as expected under heavy
thrashing, which is one reason user-space OOM killers like oomd exist.


--
Regards
Yafang
