Message-ID: <8fd1a56d-5a22-4bde-59a5-169a4696219e@suse.cz>
Date: Wed, 24 May 2023 11:21:43 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
Michal Hocko <mhocko@...e.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH] mm: compaction: avoid GFP_NOFS ABBA deadlock

On 5/19/23 13:13, Johannes Weiner wrote:
> During stress testing with higher-order allocations, a deadlock
> scenario was observed in compaction: One GFP_NOFS allocation was
> sleeping on mm/compaction.c::too_many_isolated(), while all CPUs in
> the system were busy with compactors spinning on buffer locks held by
> the sleeping GFP_NOFS allocation.
>
> Reclaim is susceptible to this same deadlock; we fixed it by granting
> GFP_NOFS allocations additional LRU isolation headroom to ensure they
> make forward progress while holding fs locks that other reclaimers
> might acquire. Do the same here.
>
> This code has been like this since compaction was initially merged,
> and I only managed to trigger this with out-of-tree patches that
> dramatically increase the contexts that do GFP_NOFS compaction. While
> the issue is real, it seems theoretical in nature given existing
> allocation sites. Worth fixing now, but no Fixes tag or stable CC.
>
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
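
To spell out the ABBA ordering as I read it (roughly, using lock_buffer()
as the stand-in for whatever buffer lock the NOFS side holds):

	CPU0 (GFP_NOFS allocation)	CPU1..N (other compactors)
	lock_buffer(bh)
					isolate pages up to the limit
	compact_zone()
	  too_many_isolated()
	    -> sleeps, waiting for	migrate_pages()
	       the isolated count	  lock_buffer(bh)
	       to drop			    -> blocks

The isolated count can't drop while the compactors holding those pages
wait on the buffer lock, and the buffer lock can't be released while its
holder sleeps in too_many_isolated().
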
So IIUC the change is done not by giving GFP_NOFS extra headroom, but by
restricting the headroom of __GFP_FS allocations. The original limit was
probably too generous anyway, so this should be fine?
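
IOW, for __GFP_FS callers the threshold effectively drops from

	isolated > (inactive + active) / 2

to

	isolated > (inactive + active) / 16

e.g. with inactive + active == 1024, the back-off point moves from 512
down to 64 isolated pages, while GFP_NOFS keeps the old 1/2 limit and
can therefore always out-isolate the __GFP_FS compactors.
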
Acked-by: Vlastimil Babka <vbabka@...e.cz>

> ---
> mm/compaction.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> v2:
> - clarify too_many_isolated() comment (Mel)
> - split isolation deadlock from no-contiguous-anon lockups as that's
> a different scenario and deserves its own patch
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index c8bcdea15f5f..c9a4b6dffcf2 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -745,8 +745,9 @@ isolate_freepages_range(struct compact_control *cc,
> }
>
> /* Similar to reclaim, but different enough that they don't share logic */
> -static bool too_many_isolated(pg_data_t *pgdat)
> +static bool too_many_isolated(struct compact_control *cc)
> {
> + pg_data_t *pgdat = cc->zone->zone_pgdat;
> bool too_many;
>
> unsigned long active, inactive, isolated;
> @@ -758,6 +759,17 @@ static bool too_many_isolated(pg_data_t *pgdat)
> isolated = node_page_state(pgdat, NR_ISOLATED_FILE) +
> node_page_state(pgdat, NR_ISOLATED_ANON);
>
> + /*
> + * Allow GFP_NOFS to isolate past the limit set for regular
> + * compaction runs. This prevents an ABBA deadlock when other
> + * compactors have already isolated to the limit, but are
> + * blocked on filesystem locks held by the GFP_NOFS thread.
> + */
> + if (cc->gfp_mask & __GFP_FS) {
> + inactive >>= 3;
> + active >>= 3;
> + }
> +
> too_many = isolated > (inactive + active) / 2;
> if (!too_many)
> wake_throttle_isolated(pgdat);
> @@ -806,7 +818,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> * list by either parallel reclaimers or compaction. If there are,
> * delay for some time until fewer pages are isolated
> */
> - while (unlikely(too_many_isolated(pgdat))) {
> + while (unlikely(too_many_isolated(cc))) {
> /* stop isolation if there are still pages not migrated */
> if (cc->nr_migratepages)
> return -EAGAIN;
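
For reference, the reclaim-side counterpart of this lives in
mm/vmscan.c::too_many_isolated(); quoting roughly from memory, so the
exact form may differ between kernel versions:

	/*
	 * GFP_NOIO/GFP_NOFS callers are allowed to isolate more pages,
	 * so they won't get blocked by normal direct reclaimers,
	 * forming a circular deadlock.
	 */
	if ((sc->gfp_mask & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS))
		inactive >>= 3;

	return isolated > inactive;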