Message-ID: <28e35a8b-400e-9320-5a97-accfccf4b9a8@suse.cz>
Date:   Tue, 28 Apr 2020 11:38:19 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        David Rientjes <rientjes@...gle.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Mel Gorman <mgorman@...hsingularity.net>
Subject: Re: [patch] mm, oom: stop reclaiming if GFP_ATOMIC will start failing soon

On 4/27/20 10:30 PM, Andrew Morton wrote:
> On Sun, 26 Apr 2020 20:12:58 -0700 (PDT) David Rientjes <rientjes@...gle.com> wrote:
> 
>> 
>> GFP_ATOMIC allocators can access below these per-zone watermarks.  So the 
>> issue is that per-zone free pages stay between the ALLOC_HIGH watermark 
>> (the one that GFP_ATOMIC allocators can allocate down to) and the min 
>> watermark.  We never reclaim enough memory to get back to the min watermark 
>> because reclaim cannot keep up with the amount of GFP_ATOMIC allocations.
> 
> But there should be an upper bound upon the total amount of in-flight
> GFP_ATOMIC memory at any point in time?  These aren't like pagecache

If it's a network receive path, then this is effectively bounded by link speed 
versus the ability to process the packets quickly and free the buffers. And the 
bursts of incoming packets might be out of the admin's control. With my 
"enterprise kernel support" hat on, it's annoying enough to explain the 
occasional GFP_ATOMIC failures (usually high-order) in dmesg (the usual 
suggestion is to bump min_free_kbytes, and to stress that unless the failures 
are frequent, there's no actual harm, as networking can defer the allocation to 
non-atomic context). If such failures additionally resulted in an OOM kill that 
could not be disabled, I can well imagine we would have to revert such a patch 
in our kernel due to the DoS (intentional or not) potential.
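
For illustration, the deferral pattern looks roughly like the sketch below; 
struct my_ring and its refill_work member are made-up names for this sketch, 
not actual driver code:

struct my_ring {
        struct net_device *netdev;
        struct work_struct refill_work;         /* hypothetical work item */
        /* ... */
};

struct sk_buff *rx_refill_skb(struct my_ring *ring, unsigned int len)
{
        /* Opportunistic attempt from the atomic RX path; __GFP_NOWARN
         * suppresses the dmesg warning that users tend to report. */
        struct sk_buff *skb = __netdev_alloc_skb(ring->netdev, len,
                                                 GFP_ATOMIC | __GFP_NOWARN);

        if (skb)
                return skb;

        /* Defer the refill to process context, where GFP_KERNEL can
         * sleep and reclaim. */
        schedule_work(&ring->refill_work);
        return NULL;
}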

> which will take more if we give it more.  Setting the various
> thresholds appropriately should ensure that blockable allocations don't
> get their memory stolen by GFP_ATOMIC allocations?

I agree with the view that GFP_ATOMIC is only a (perhaps more visible) part of 
the broader problem that there's no fairness guarantee in reclaim, and that 
allocators can steal reclaimed pages from each other. GFP_ATOMIC allocations 
just have it easier thanks to their lower watermark thresholds.
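
Roughly, simplified from the __zone_watermark_ok() logic in mm/page_alloc.c 
(lowmem reserves, the highatomic reserve and the per-order checks elided), the 
relaxation is just a lower effective minimum for atomic callers:

static bool watermark_ok_sketch(unsigned long free_pages,
                                unsigned long min_wmark,
                                unsigned int alloc_flags)
{
        long min = min_wmark;

        if (alloc_flags & ALLOC_HIGH)           /* __GFP_HIGH, e.g. GFP_ATOMIC */
                min -= min / 2;
        if (alloc_flags & ALLOC_HARDER)         /* atomic or rt-task callers */
                min -= min / 4;

        /* Blockable allocations must keep free pages above the full
         * watermark; atomic ones may dip to 1/2 (or 3/8) of it. */
        return free_pages > min;
}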

> I took a look at doing a quick-fix for the
> direct-reclaimers-get-their-stuff-stolen issue about a million years
> ago.  I don't recall where it ended up.  It's pretty trivial for the
> direct reclaimer to free pages into current->reclaimed_pages and to
> take a look in there on the allocation path, etc.  But it's only
> practical for order-0 pages.

FWIW, such an approach was already added to compaction by Mel some time ago, so 
order>0 allocations are covered to some extent. But in this case I imagine that 
compaction won't even start, because the order-0 watermarks are too low.
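
Heavily simplified from __alloc_pages_direct_compact(), the mainline mechanism 
is that the compacting task publishes a capture slot via 
current->capture_control, and the buddy free path can hand a page of the right 
order directly to it instead of merging it back:

static struct page *compact_and_capture_sketch(struct compact_control *cc)
{
        struct capture_control capc = {
                .cc = cc,
                .page = NULL,           /* filled by the free path */
        };

        current->capture_control = &capc;
        compact_zone(cc, &capc);
        current->capture_control = NULL;

        /* If set, this page never hit the free lists, so no other
         * allocator had a chance to steal it. */
        return capc.page;
}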

The order-0 reclaim capture might work though - as a result, the GFP_ATOMIC 
allocations would be more likely to fail and defer to their fallback context.
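
A sketch of what that could look like - note that current->reclaimed_page and 
both helpers below are hypothetical, nothing like this exists in mainline:

/* hypothetical new field in struct task_struct:
 *      struct page *reclaimed_page;
 */

/* Direct-reclaim free path: keep one order-0 page for ourselves
 * instead of releasing it to the buddy lists. */
static bool capture_reclaimed_page(struct page *page)
{
        if ((current->flags & PF_MEMALLOC) && !current->reclaimed_page) {
                current->reclaimed_page = page;
                return true;            /* caller skips the normal free */
        }
        return false;
}

/* Allocation slow path, checked after direct reclaim returns. */
static struct page *take_reclaimed_page(void)
{
        struct page *page = current->reclaimed_page;

        current->reclaimed_page = NULL;
        return page;                    /* may be NULL */
}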
