Message-ID: <20251203063004.185182-1-gourry@gourry.net>
Date: Wed, 3 Dec 2025 01:30:04 -0500
From: Gregory Price <gourry@...rry.net>
To: linux-mm@...ck.org
Cc: kernel-team@...a.com,
linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org,
vbabka@...e.cz,
surenb@...gle.com,
mhocko@...e.com,
jackmanb@...gle.com,
hannes@...xchg.org,
ziy@...dia.com,
kas@...nel.org,
dave.hansen@...ux.intel.com,
rick.p.edgecombe@...el.com,
muchun.song@...ux.dev,
osalvador@...e.de,
david@...hat.com,
x86@...nel.org,
linux-coco@...ts.linux.dev,
kvm@...r.kernel.org,
Wei Yang <richard.weiyang@...il.com>,
David Rientjes <rientjes@...gle.com>,
Joshua Hahn <joshua.hahnjy@...il.com>
Subject: [PATCH v4] page_alloc: allow migration of smaller hugepages during contig_alloc
We presently skip regions containing hugepages entirely when trying to
do contiguous page allocation. This causes otherwise-movable 2MB
HugeTLB pages to be considered unmovable, and makes 1GB hugepages more
difficult to allocate on systems utilizing both sizes.

Instead, if hugepage migration is enabled, consider regions containing
hugepages smaller than the target contiguous allocation request as
valid targets for allocation.

isolate_migratepages_block() has similar logic, and the hugetlb code
does a migratability check in folio_isolate_hugetlb() during isolation,
so the code servicing the subsequent allocation and migration already
supports this exact use case (it is just unreachable).
To test, allocate a number of 2MB HugeTLB pages (in this case 48GB
worth) and then attempt to allocate some 1GB HugeTLB pages (in this
case 4GB worth); scale to your machine's memory capacity:

echo 24576 > .../hugepages-2048kB/nr_hugepages
echo 4 > .../hugepages-1048576kB/nr_hugepages

Prior to this patch, the 1GB page allocation can fail if no free
contiguous 1GB regions remain. After this patch, the kernel will try to
migrate the 2MB pages and successfully allocate the 1GB pages (assuming
sufficient memory is available overall).
folio_alloc_gigantic() is the primary user of alloc_contig_pages();
the other users are debug or init-time allocations and are largely
unaffected:
- ppc/memtrace is a debugfs interface
- x86/tdx memory allocation occurs once at module init
- kfence/core happens once at (late) module init
- THP uses it in debug_vm_pgtable_alloc_huge_page() at __init time
Suggested-by: David Hildenbrand <david@...hat.com>
Link: https://lore.kernel.org/linux-mm/6fe3562d-49b2-4975-aa86-e139c535ad00@redhat.com/
Signed-off-by: Gregory Price <gourry@...rry.net>
Reviewed-by: Zi Yan <ziy@...dia.com>
Reviewed-by: Wei Yang <richard.weiyang@...il.com>
Reviewed-by: Oscar Salvador <osalvador@...e.de>
Acked-by: David Rientjes <rientjes@...gle.com>
Acked-by: David Hildenbrand <david@...hat.com>
Tested-by: Joshua Hahn <joshua.hahnjy@...il.com>
---
mm/page_alloc.c | 23 +++++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 95d8b812efd0..8ca3273f734a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7069,8 +7069,27 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
 		if (PageReserved(page))
 			return false;
 
-		if (PageHuge(page))
-			return false;
+		/*
+		 * Only consider ranges containing hugepages if those pages are
+		 * smaller than the requested contiguous region. e.g.:
+		 *     Move 2MB pages to free up a 1GB range.
+		 *     Don't move 1GB pages to free up a 2MB range.
+		 *
+		 * This makes contiguous allocation more reliable if multiple
+		 * hugepage sizes are used without causing needless movement.
+		 */
+		if (PageHuge(page)) {
+			unsigned int order;
+
+			if (!IS_ENABLED(CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION))
+				return false;
+
+			page = compound_head(page);
+			order = compound_order(page);
+			if ((order >= MAX_FOLIO_ORDER) ||
+			    (nr_pages <= (1 << order)))
+				return false;
+		}
 	}
 	return true;
 }
--
2.52.0