Message-ID: <aTB5CJ0oFfPjavGx@gourry-fedora-PF4VCD3F>
Date: Wed, 3 Dec 2025 12:53:12 -0500
From: Gregory Price <gourry@...rry.net>
To: Johannes Weiner <hannes@...xchg.org>
Cc: linux-mm@...ck.org, kernel-team@...a.com, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, vbabka@...e.cz, surenb@...gle.com,
mhocko@...e.com, jackmanb@...gle.com, ziy@...dia.com,
kas@...nel.org, dave.hansen@...ux.intel.com,
rick.p.edgecombe@...el.com, muchun.song@...ux.dev,
osalvador@...e.de, david@...hat.com, x86@...nel.org,
linux-coco@...ts.linux.dev, kvm@...r.kernel.org,
Wei Yang <richard.weiyang@...il.com>,
David Rientjes <rientjes@...gle.com>,
Joshua Hahn <joshua.hahnjy@...il.com>
Subject: Re: [PATCH v4] page_alloc: allow migration of smaller hugepages
during contig_alloc
On Wed, Dec 03, 2025 at 12:32:09PM -0500, Johannes Weiner wrote:
> On Wed, Dec 03, 2025 at 01:30:04AM -0500, Gregory Price wrote:
> > - if (PageHuge(page))
> > - return false;
> > + /*
> > + * Only consider ranges containing hugepages if those pages are
> > + * smaller than the requested contiguous region. e.g.:
> > + * Move 2MB pages to free up a 1GB range.
>
> This one makes sense to me.
>
> > + * Don't move 1GB pages to free up a 2MB range.
>
> This one I might be missing something. We don't use cma for 2M pages,
> so I don't see how we can end up in this path for 2M allocations.
>
I used 2MB as an example, but the other users (listed in the changelog)
would run into this path as well. The requested contiguous order seemed
to differ between each of the 4 users (memtrace, tdx, kfence, thp debug).
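For illustration, the semantics in that hunk boil down to a size
comparison along these lines (a minimal sketch against
pfn_range_valid_contig()'s parameters, not the actual patch hunk):

        if (PageHuge(page)) {
                /*
                 * Only migrate a huge page that is strictly smaller
                 * than the range being assembled: moving 2MB pages to
                 * free a 1GB range is fine; disturbing a 1GB page for
                 * a smaller request is not.
                 */
                if (compound_nr(page) >= nr_pages)
                        return false;
        }

So whatever order a given caller requests, huge pages of equal or
larger size keep the range from being considered.
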
> The reason I'm bringing this up is because this function overall looks
> kind of unnecessary. Page isolation checks all of these conditions
> already, and arbitrates huge pages on hugepage_migration_supported() -
> which seems to be the semantics you also desire here.
>
> Would it make sense to just remove pfn_range_valid_contig()?
This seems like a pretty clear optimization that was added at some point
to avoid the cost of starting to isolate 512MB worth of pages and then
having to undo it all because the range contained a single huge page.
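For reference, the pre-check is just a linear scan over the candidate
range, roughly (quoting from memory, not verbatim):

        static bool pfn_range_valid_contig(struct zone *z,
                                           unsigned long start_pfn,
                                           unsigned long nr_pages)
        {
                unsigned long i, end_pfn = start_pfn + nr_pages;
                struct page *page;

                for (i = start_pfn; i < end_pfn; i++) {
                        page = pfn_to_online_page(i);
                        if (!page || page_zone(page) != z)
                                return false;
                        if (PageReserved(page))
                                return false;
                        if (PageHuge(page))
                                return false;
                }
                return true;
        }

The caller skips any candidate range that fails it:
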
        for_each_zone_zonelist_nodemask(zone, z, zonelist,
                                        gfp_zone(gfp_mask), nodemask) {
                spin_lock_irqsave(&zone->lock, flags);
                pfn = ALIGN(zone->zone_start_pfn, nr_pages);
                while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
                        if (pfn_range_valid_contig(zone, pfn, nr_pages)) {
                                spin_unlock_irqrestore(&zone->lock, flags);
                                ret = __alloc_contig_pages(pfn, nr_pages,
                                                           gfp_mask);
                                spin_lock_irqsave(&zone->lock, flags);
                        }
                        pfn += nr_pages;
                }
                spin_unlock_irqrestore(&zone->lock, flags);
        }
and then
__alloc_contig_pages()
        ret = start_isolate_page_range(start, end, mode);
This is called without pre-checking the range for unmovable pages, so
removing pfn_range_valid_contig() seems dangerous without significant
data.
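To make the cost concrete, a simplified sketch of the flow inside
alloc_contig_range() (from memory; exact signatures differ by tree):

        /* flip every pageblock in [start, end) to MIGRATE_ISOLATE */
        ret = start_isolate_page_range(start, end, mode);
        if (ret)
                return ret;

        /* migrate everything movable out of the range */
        ret = __alloc_contig_migrate_range(&cc, start, end, ...);

        ...

        /* success or failure, walk the whole range again */
        undo_isolate_page_range(start, end);

Hitting one huge page partway through means every pageblock already
isolated has to be flipped back, for nothing.
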
~Gregory