Message-ID: <20250919092353.41671-1-lizhe.67@bytedance.com>
Date: Fri, 19 Sep 2025 17:23:53 +0800
From: lizhe.67@...edance.com
To: muchun.song@...ux.dev,
osalvador@...e.de,
david@...hat.com,
akpm@...ux-foundation.org
Cc: linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
lizhe.67@...edance.com
Subject: [PATCH] hugetlb: increase the number of hugepages reservable via cmdline
From: Li Zhe <lizhe.67@...edance.com>

Commit 79359d6d24df ("hugetlb: perform vmemmap optimization on a list of
pages") batches the submission of HugeTLB vmemmap optimization (HVO)
during hugepage reservation. With HVO enabled, hugepages obtained from
the buddy allocator are not submitted for optimization, and their
struct-page memory is therefore not released, until the entire
reservation request has been satisfied. As a result, the struct-page
memory that HVO would otherwise free is unavailable for the remainder of
the reservation, artificially limiting the number of huge pages that can
ultimately be provided.
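
Concretely, before this patch the boot-time allocation loop looks
roughly like the following simplified sketch (function and variable
names as in mm/hugetlb.c; initialization, cond_resched() and NUMA
details elided):

	/*
	 * Simplified pre-patch flow in hugetlb_pages_alloc_boot_node().
	 * Every allocated folio keeps its full struct-page metadata
	 * resident until the loop finishes.
	 */
	for (i = 0; i < num; ++i) {
		folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
					      &node_alloc_noretry, &next_node);
		if (!folio)
			break;	/* buddy allocator has run dry */
		list_move(&folio->lru, &folio_list);
	}
	/* HVO, and the vmemmap freeing it brings, happens only here. */
	prep_and_add_allocated_folios(h, &folio_list);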

Commit b1222550fbf7 ("mm/hugetlb: do pre-HVO for bootmem allocated
pages") already applies early HVO to bootmem-allocated huge pages. This
patch extends the same benefit to non-bootmem pages: whenever
si_mem_available() reports that no memory is left during the allocation
loop, the folios accumulated so far are submitted for HVO via
prep_and_add_allocated_folios(), which returns their struct-page memory
to the buddy allocator while the reservation is still in progress. On a
384 GB x86 VM, this raises the maximum 2 MiB hugepage reservation from
just under 376 GB to more than 381 GB.
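
For reference, a reservation in this range is requested with the
standard HugeTLB boot parameters along the following lines (the page
count shown is illustrative, 195072 pages being roughly 381 GB of 2 MiB
pages, and is not the exact value from the measurement above):

	hugepagesz=2M hugepages=195072
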
Signed-off-by: Li Zhe <lizhe.67@...edance.com>
---
mm/hugetlb.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index eed59cfb5d21..fd44690878a1 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3554,7 +3554,14 @@ static void __init hugetlb_pages_alloc_boot_node(unsigned long start, unsigned l
 	nodes_clear(node_alloc_noretry);
 
 	for (i = 0; i < num; ++i) {
-		struct folio *folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
+		struct folio *folio;
+
+		if (hugetlb_vmemmap_optimizable_size(h) &&
+		    (si_mem_available() == 0) && !list_empty(&folio_list)) {
+			prep_and_add_allocated_folios(h, &folio_list);
+			INIT_LIST_HEAD(&folio_list);
+		}
+		folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
 						&node_alloc_noretry, &next_node);
 		if (!folio)
 			break;
--
2.20.1