Message-ID: <b5a925c5-523e-41e1-a3ce-0bb51ce0e995@kernel.org>
Date: Wed, 3 Dec 2025 20:43:29 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: Frank van der Linden <fvdl@...gle.com>, Gregory Price <gourry@...rry.net>
Cc: Johannes Weiner <hannes@...xchg.org>, linux-mm@...ck.org,
kernel-team@...a.com, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, vbabka@...e.cz, surenb@...gle.com,
mhocko@...e.com, jackmanb@...gle.com, ziy@...dia.com, kas@...nel.org,
dave.hansen@...ux.intel.com, rick.p.edgecombe@...el.com,
muchun.song@...ux.dev, osalvador@...e.de, x86@...nel.org,
linux-coco@...ts.linux.dev, kvm@...r.kernel.org,
Wei Yang <richard.weiyang@...il.com>, David Rientjes <rientjes@...gle.com>,
Joshua Hahn <joshua.hahnjy@...il.com>
Subject: Re: [PATCH v4] page_alloc: allow migration of smaller hugepages
during contig_alloc
On 12/3/25 19:01, Frank van der Linden wrote:
> On Wed, Dec 3, 2025 at 9:53 AM Gregory Price <gourry@...rry.net> wrote:
>>
>> On Wed, Dec 03, 2025 at 12:32:09PM -0500, Johannes Weiner wrote:
>>> On Wed, Dec 03, 2025 at 01:30:04AM -0500, Gregory Price wrote:
>>>> - if (PageHuge(page))
>>>> - return false;
>>>> + /*
>>>> + * Only consider ranges containing hugepages if those pages are
>>>> + * smaller than the requested contiguous region. e.g.:
>>>> + * Move 2MB pages to free up a 1GB range.
>>>
>>> This one makes sense to me.
>>>
>>>> + * Don't move 1GB pages to free up a 2MB range.
>>>
>>> This one I might be missing something. We don't use cma for 2M pages,
>>> so I don't see how we can end up in this path for 2M allocations.
>>>
>>
>> I used 2MB as an example, but the other users (listed in the changelog)
>> would run into these as well. The contiguous order size seemed
>> different between each of the 4 users (memtrace, tx, kfence, thp debug).
>>
>>> The reason I'm bringing this up is because this function overall looks
>>> kind of unnecessary. Page isolation checks all of these conditions
>>> already, and arbitrates huge pages on hugepage_migration_supported() -
>>> which seems to be the semantics you also desire here.
>>>
>>> Would it make sense to just remove pfn_range_valid_contig()?
>>
>> This seems like a pretty clear optimization that was added at some point
>> to prevent incurring the cost of starting to isolate 512MB of pages and
>> then having to go undo it because it ran into a single huge page.
>>
>> for_each_zone_zonelist_nodemask(zone, z, zonelist,
>> 			gfp_zone(gfp_mask), nodemask) {
>> 	spin_lock_irqsave(&zone->lock, flags);
>> 	pfn = ALIGN(zone->zone_start_pfn, nr_pages);
>> 	while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
>> 		if (pfn_range_valid_contig(zone, pfn, nr_pages)) {
>> 			spin_unlock_irqrestore(&zone->lock, flags);
>> 			ret = __alloc_contig_pages(pfn, nr_pages,
>> 						   gfp_mask);
>> 			spin_lock_irqsave(&zone->lock, flags);
>> 		}
>> 		pfn += nr_pages;
>> 	}
>> 	spin_unlock_irqrestore(&zone->lock, flags);
>> }
>>
>> and then
>>
>> __alloc_contig_pages()
>> 	ret = start_isolate_page_range(start, end, mode);
>>
>> This is called without pre-checking the range for unmovable pages.
>>
>> Seems dangerous to remove without significant data.
>>
>> ~Gregory
>
> Yeah, the function itself makes sense: "check if this is actually a
> contiguous range available within this zone, so no holes and/or
> reserved pages".
>
> The PageHuge() check seems a bit out of place there; if you just
> removed it altogether you'd get the same results, right? The isolation
> code will deal with it. But sure, it does potentially avoid doing some
> unnecessary work.

commit 4d73ba5fa710fe7d432e0b271e6fecd252aef66e
Author: Mel Gorman <mgorman@...hsingularity.net>
Date:   Fri Apr 14 15:14:29 2023 +0100

    mm: page_alloc: skip regions with hugetlbfs pages when allocating 1G pages

A bug was reported by Yuanxi Liu where allocating 1G pages at runtime is
taking an excessive amount of time for large amounts of memory. Further
testing showed that the cost of allocating huge pages is linear: if
allocating 1G pages in batches of 10, the time to allocate nr_hugepages
from 10->20->30->etc increases linearly even though 10 pages are allocated at
each step. Profiles indicated that much of the time is spent checking the
validity within already existing huge pages and then attempting a
migration that fails after isolating the range, draining pages and a whole
lot of other useless work.
Commit eb14d4eefdc4 ("mm,page_alloc: drop unnecessary checks from
pfn_range_valid_contig") removed two checks, one which ignored huge pages
for contiguous allocations as huge pages can sometimes migrate. While
there may be value in migrating a 2M page to satisfy a 1G allocation, it's
potentially expensive if the 1G allocation fails and it's pointless to try
moving a 1G page for a new 1G allocation or scan the tail pages for valid
PFNs.
Reintroduce the PageHuge check and assume any contiguous region with
hugetlbfs pages is unsuitable for a new 1G allocation.
...
--
Cheers
David