Message-ID: <20230524155811.GA14306@cmpxchg.org>
Date: Wed, 24 May 2023 11:58:11 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Michal Hocko <mhocko@...e.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH] mm: compaction: avoid GFP_NOFS ABBA deadlock
On Wed, May 24, 2023 at 11:21:43AM +0200, Vlastimil Babka wrote:
> On 5/19/23 13:13, Johannes Weiner wrote:
> > During stress testing with higher-order allocations, a deadlock
> > scenario was observed in compaction: One GFP_NOFS allocation was
> > sleeping on mm/compaction.c::too_many_isolated(), while all CPUs in
> > the system were busy with compactors spinning on buffer locks held by
> > the sleeping GFP_NOFS allocation.
> >
> > Reclaim is susceptible to this same deadlock; we fixed it there by
> > granting GFP_NOFS allocations additional LRU isolation headroom, to
> > ensure they make forward progress while holding fs locks that other
> > reclaimers might acquire. Do the same here.
> >
> > This code has been like this since compaction was initially merged,
> > and I only managed to trigger this with out-of-tree patches that
> > dramatically increase the contexts that do GFP_NOFS compaction. While
> > the issue is real, it seems theoretical in nature given existing
> > allocation sites. Worth fixing now, but no Fixes tag or stable CC.
>
> > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
>
> So IIUC the change is done not by giving GFP_NOFS extra headroom, but
> by restricting the headroom of __GFP_FS allocations. But the original
> limit was probably too generous anyway, so this should be fine?
Yes, the original limit is generally half the LRU, which is quite high.
The new limit is 1/16th of the LRU for regular compactors, and half for
GFP_NOFS ones. Note that I didn't make these up; they're stolen from
too_many_isolated() in vmscan.c. I figured those are proven values, and
there's no sense in deviating from them until we have a reason to.
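
For reference, the check ends up shaped roughly like this -- a sketch
of mm/compaction.c::too_many_isolated() reconstructed from the numbers
above, not necessarily the exact diff; the node_page_state() counters
are the ones the existing code already reads:

    static bool too_many_isolated(struct compact_control *cc)
    {
            pg_data_t *pgdat = cc->zone->zone_pgdat;
            unsigned long active, inactive, isolated;

            inactive = node_page_state(pgdat, NR_INACTIVE_FILE) +
                       node_page_state(pgdat, NR_INACTIVE_ANON);
            active = node_page_state(pgdat, NR_ACTIVE_FILE) +
                     node_page_state(pgdat, NR_ACTIVE_ANON);
            isolated = node_page_state(pgdat, NR_ISOLATED_FILE) +
                       node_page_state(pgdat, NR_ISOLATED_ANON);

            /*
             * A GFP_NOFS compactor may hold fs locks that other
             * compactors spin on. Leave it the full headroom of
             * half the LRU so it cannot be blocked behind them;
             * regular __GFP_FS compactors get 1/16th of the LRU.
             */
            if (cc->gfp_mask & __GFP_FS) {
                    inactive >>= 3;
                    active >>= 3;
            }

            return isolated > (inactive + active) / 2;
    }

The >>= 3 shifts for __GFP_FS callers mirror what vmscan.c's
too_many_isolated() already does for its isolation limit.
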
> Acked-by: Vlastimil Babka <vbabka@...e.cz>
Thanks!