Message-ID: <dae2179c-1562-447c-a4fc-d415b4a9ebfc@suse.cz>
Date: Wed, 26 Feb 2025 16:08:09 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Brendan Jackman <jackmanb@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Mel Gorman <mgorman@...hsingularity.net>, Michal Hocko
<mhocko@...nel.org>, Johannes Weiner <hannes@...xchg.org>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Yosry Ahmed <yosry.ahmed@...ux.dev>
Subject: Re: [PATCH v3 1/2] mm/page_alloc: Clarify terminology in migratetype
fallback code
On 2/25/25 4:29 PM, Brendan Jackman wrote:
> This code is rather confusing because:
>
> 1. "Steal" is sometimes used to refer to the general concept of
> allocating from a block of a fallback migratetype
> (steal_suitable_fallback()) but sometimes it refers specifically to
> converting a whole block's migratetype (can_steal_fallback()).
>
> 2. can_steal_fallback() sounds as though it's answering the question "am
> I functionally permitted to allocate from that other type" but in
> fact it is encoding a heuristic preference.
>
> 3. The same piece of data has different names in different places:
> can_steal vs whole_block. This reinforces point 2 because it looks
> like the different names reflect a shift in intent from "am I
> allowed to steal" to "do I want to steal", but no such shift exists.
>
> Fix 1. by avoiding the term "steal" in ambiguous contexts. Start using
> the term "claim" to refer to the special case of stealing the entire
> block.
>
> Fix 2. by using "should" instead of "can", and also renaming its
> parameters and add some commentary to make it more explicit what they
> mean.
>
> Fix 3. by adopting the new "claim" terminology universally for this
> set of variables.
>
> Signed-off-by: Brendan Jackman <jackmanb@...gle.com>
Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
Some nits:
> ---
> mm/compaction.c | 4 ++--
> mm/internal.h | 2 +-
> mm/page_alloc.c | 72 ++++++++++++++++++++++++++++-----------------------------
> 3 files changed, 39 insertions(+), 39 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 0992106d4ea751f7f1f8ebf7c75cd433d676cbe0..550ce50218075509ccb5f9485fd84f5d1f3d23a7 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -2333,7 +2333,7 @@ static enum compact_result __compact_finished(struct compact_control *cc)
> ret = COMPACT_NO_SUITABLE_PAGE;
> for (order = cc->order; order < NR_PAGE_ORDERS; order++) {
> struct free_area *area = &cc->zone->free_area[order];
> - bool can_steal;
> + bool claim_block;
>
> /* Job done if page is free of the right migratetype */
> if (!free_area_empty(area, migratetype))
> @@ -2350,7 +2350,7 @@ static enum compact_result __compact_finished(struct compact_control *cc)
> * other migratetype buddy lists.
> */
> if (find_suitable_fallback(area, order, migratetype,
> - true, &can_steal) != -1)
> + true, &claim_block) != -1)
> /*
> * Movable pages are OK in any pageblock. If we are
> * stealing for a non-movable allocation, make sure
> diff --git a/mm/internal.h b/mm/internal.h
> index b07550db2bfd1d152fa90f91b3687b0fa1a9f653..aa30282a774ae26349944a75da854ae6a3da2a98 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -863,7 +863,7 @@ static inline void init_cma_pageblock(struct page *page)
>
>
> int find_suitable_fallback(struct free_area *area, unsigned int order,
> - int migratetype, bool only_stealable, bool *can_steal);
> + int migratetype, bool claim_only, bool *claim_block);
>
> static inline bool free_area_empty(struct free_area *area, int migratetype)
> {
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 5d8e274c8b1d500d263a17ef36fe190f60b88196..5e694046ef92965b34d4831e96d92f02681a8b45 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1942,22 +1942,22 @@ static inline bool boost_watermark(struct zone *zone)
>
> /*
> * When we are falling back to another migratetype during allocation, try to
> - * steal extra free pages from the same pageblocks to satisfy further
> - * allocations, instead of polluting multiple pageblocks.
> + * claim entire blocks to satisfy further allocations, instead of polluting
> + * multiple pageblocks.
> *
> - * If we are stealing a relatively large buddy page, it is likely there will
> - * be more free pages in the pageblock, so try to steal them all. For
> - * reclaimable and unmovable allocations, we steal regardless of page size,
> - * as fragmentation caused by those allocations polluting movable pageblocks
> - * is worse than movable allocations stealing from unmovable and reclaimable
> - * pageblocks.
> + * If we are stealing a relatively large buddy page, it is likely there will be
> + * more free pages in the pageblock, so try to claim the whole block. For
> + * reclaimable and unmovable allocations, we claim the whole block regardless of
It's also "try to claim" here, as it may still fail due to not enough
free/compatible pages, even for those migratetypes. Maybe the question
(out of scope of this patch) is whether they should get a lower
threshold than half. Before the migratetype hygiene series, "we steal
regardless" meant that we really would steal all free pages even when
not claiming the pageblock.
> + * page size, as fragmentation caused by those allocations polluting movable
> + * pageblocks is worse than movable allocations stealing from unmovable and
> + * reclaimable pageblocks.
> */
> -static bool can_steal_fallback(unsigned int order, int start_mt)
> +static bool should_claim_block(unsigned int order, int start_mt)
So technically it's should_try_claim_block() if we want to be precise
(but longer).
> {
> /*
> * Leaving this order check is intended, although there is
> * relaxed order check in next check. The reason is that
> - * we can actually steal whole pageblock if this condition met,
> + * we can actually claim the whole pageblock if this condition met,
try claiming
> * but, below check doesn't guarantee it and that is just heuristic
> * so could be changed anytime.
> */
> @@ -1970,7 +1970,7 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
> * reclaimable pages that are closest to the request size. After a
> * while, memory compaction may occur to form large contiguous pages,
> * and the next movable allocation may not need to steal. Unmovable and
> - * reclaimable allocations need to actually steal pages.
> + * reclaimable allocations need to actually claim the whole block.
also "try to claim" here
> */
> if (order >= pageblock_order / 2 ||
> start_mt == MIGRATE_RECLAIMABLE ||
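The quote above cuts off inside the condition; for completeness,
reconstructed from the pre-patch can_steal_fallback() (so a sketch, not
the literal hunk), the renamed heuristic would read roughly:

	static bool should_claim_block(unsigned int order, int start_mt)
	{
		/* A request spanning a whole block can always claim it. */
		if (order >= pageblock_order)
			return true;

		/*
		 * Large-enough requests, and any unmovable/reclaimable
		 * request, prefer claiming the whole block -- the
		 * heuristic discussed above.
		 */
		if (order >= pageblock_order / 2 ||
				start_mt == MIGRATE_RECLAIMABLE ||
				start_mt == MIGRATE_UNMOVABLE ||
				page_group_by_mobility_disabled)
			return true;

		return false;
	}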