Message-ID: <68ac1af7-988b-42c6-8249-8949eb7fd986@suse.cz>
Date: Tue, 25 Feb 2025 11:50:02 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Brendan Jackman <jackmanb@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] mm: page_alloc: don't steal single pages from biggest
buddy
On 2/25/25 01:08, Johannes Weiner wrote:
> The fallback code searches for the biggest buddy first in an attempt
> to steal the whole block and encourage type grouping down the line.
>
> The approach used to be this (sketched below):
>
> - Non-movable requests will split the largest buddy and steal the
> remainder. This splits up contiguity, but it allows subsequent
> requests of this type to fall back into adjacent space.
>
> - Movable requests go and look for the smallest buddy instead. The
> thinking is that movable requests can be compacted, so grouping is
> less important than retaining contiguity.
>
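For illustration, that old heuristic boils down to roughly the
following. This is a minimal standalone sketch, not the kernel's
actual code; NR_ORDERS, struct free_area and the function name are
simplified stand-ins:

    #include <stdbool.h>

    #define NR_ORDERS 11    /* stand-in for the kernel's order range */

    struct free_area {
            unsigned long nr_free;
    };

    /* Old heuristic: non-movable requests scan from the biggest buddy
     * down; movable requests scan from the requested order up. */
    static int old_fallback_order(struct free_area area[NR_ORDERS],
                                  int request_order, bool movable)
    {
            int order;

            if (!movable) {
                    /* Split the biggest buddy and steal the remainder,
                     * so later requests of this type can fall back
                     * into the adjacent space. */
                    for (order = NR_ORDERS - 1; order >= request_order; order--)
                            if (area[order].nr_free)
                                    return order;
            } else {
                    /* Take the smallest fit; movable pages can be
                     * compacted later, so preserve large contiguous
                     * blocks for other requests. */
                    for (order = request_order; order < NR_ORDERS; order++)
                            if (area[order].nr_free)
                                    return order;
            }
            return -1;      /* no suitable fallback buddy */
    }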
> c0cd6f557b90 ("mm: page_alloc: fix freelist movement during block
> conversion") enforces freelist type hygiene, which restricts stealing
> to either claiming the whole block or just taking the requested chunk;
> no additional pages or buddy remainders can be stolen any more.
>
> That commit mishandled when to switch to finding the smallest buddy
> in the new reality. As a result, it may steal the exact request
> size, but from the biggest buddy. This causes fracturing for no good
> reason.
>
> Fix this by committing to the new behavior: either steal the whole
> block, or fall back to the smallest buddy.
>
> Remove single-page stealing from steal_suitable_fallback(). Rename it
> to try_to_steal_block() to make the intentions clear. If this fails,
> always fall back to the smallest buddy.
>
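In other words, the fixed fallback path reduces to the control flow
below. This is a sketch with simplified stand-in types;
try_to_steal_block() is the name introduced by the patch, while
take_smallest_buddy() is a hypothetical placeholder for the
smallest-buddy path:

    struct page;
    struct zone;

    struct page *try_to_steal_block(struct zone *zone, int order, int mt);
    /* hypothetical placeholder for the smallest-buddy fallback */
    struct page *take_smallest_buddy(struct zone *zone, int order, int mt);

    static struct page *fallback_sketch(struct zone *zone, int order, int mt)
    {
            struct page *page;

            /* Either claim a whole block for our migratetype... */
            page = try_to_steal_block(zone, order, mt);
            if (page)
                    return page;

            /* ...or take only the requested chunk from the smallest
             * suitable buddy, never fracturing the biggest one. */
            return take_smallest_buddy(zone, order, mt);
    }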
> The following is from 4 runs of the mmtests thpchallenge benchmark.
> "Pollute" is a single-page fallback; "steal" is the conversion of a
> partially used block. The numbers for free block conversions
> (omitted) are comparable.
>
>                                          vanilla   patched
>
> @pollute[unmovable from reclaimable]:         27       106
> @pollute[unmovable from movable]:             82        46
> @pollute[reclaimable from unmovable]:        256        83
> @pollute[reclaimable from movable]:           46         8
> @pollute[movable from unmovable]:           4841       868
> @pollute[movable from reclaimable]:         5278     12568
>
> @steal[unmovable from reclaimable]:           11        12
> @steal[unmovable from movable]:              113        49
> @steal[reclaimable from unmovable]:           19        34
> @steal[reclaimable from movable]:             47        21
> @steal[movable from unmovable]:              250       183
> @steal[movable from reclaimable]:             81        93
>
> The allocator appears to do a better job at keeping stealing and
> polluting to the first fallback preference. As a result, the numbers
> for "from movable" - the least preferred fallback option, and most
> detrimental to compactability - are down across the board.
>
> Fixes: c0cd6f557b90 ("mm: page_alloc: fix freelist movement during block conversion")
> Suggested-by: Vlastimil Babka <vbabka@...e.cz>
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
Thanks!