Message-ID: <20251116014721.1561456-2-jiaqiyan@google.com>
Date: Sun, 16 Nov 2025 01:47:20 +0000
From: Jiaqi Yan <jiaqiyan@...gle.com>
To: nao.horiguchi@...il.com, linmiaohe@...wei.com, ziy@...dia.com
Cc: david@...hat.com, lorenzo.stoakes@...cle.com, william.roche@...cle.com, 
	harry.yoo@...cle.com, tony.luck@...el.com, wangkefeng.wang@...wei.com, 
	willy@...radead.org, jane.chu@...cle.com, akpm@...ux-foundation.org, 
	osalvador@...e.de, muchun.song@...ux.dev, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org, 
	Jiaqi Yan <jiaqiyan@...gle.com>
Subject: [PATCH v1 1/2] mm/huge_memory: introduce uniform_split_unmapped_folio_to_zero_order

When freeing a high-order folio that contains HWPoison pages, we can
keep those HWPoison pages out of the buddy allocator by first uniformly
splitting the free and unmapped high-order folio into 0-order folios,
then adding only the non-HWPoison folios to the buddy allocator and
excluding the HWPoison ones.

Introduce uniform_split_unmapped_folio_to_zero_order, a wrapper
around the existing __split_unmapped_folio. Callers can use it to
uniformly split an unmapped high-order folio into 0-order folios.

No functional change intended. The new helper will be used in a
subsequent commit.
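
For illustration only, a caller could pair the new helper with a
per-page HWPoison check along these lines. This is a minimal sketch,
not the user added later in this series: the function name and the
freeing path are assumptions, and it presumes the caller has already
satisfied the isolation requirements of __split_unmapped_folio (folio
free, unmapped, and not reachable by others):

	/* Hypothetical sketch, not part of this series. */
	static void sketch_free_excluding_hwpoison(struct folio *folio)
	{
		struct page *page = folio_page(folio, 0);
		/* record the size before the folio is split */
		long i, nr = folio_nr_pages(folio);

		if (uniform_split_unmapped_folio_to_zero_order(folio))
			return;	/* split failed; free nothing */

		for (i = 0; i < nr; i++) {
			if (PageHWPoison(page + i))
				continue;	/* keep out of buddy */
			__free_page(page + i);
		}
	}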

Signed-off-by: Jiaqi Yan <jiaqiyan@...gle.com>
---
 include/linux/huge_mm.h | 6 ++++++
 mm/huge_memory.c        | 8 ++++++++
 2 files changed, 14 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 71ac78b9f834f..ef6a84973e157 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -365,6 +365,7 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
 		vm_flags_t vm_flags);
 
 bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
+int uniform_split_unmapped_folio_to_zero_order(struct folio *folio);
 int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		unsigned int new_order);
 int min_order_for_split(struct folio *folio);
@@ -569,6 +570,11 @@ can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 {
 	return false;
 }
+static inline int uniform_split_unmapped_folio_to_zero_order(struct folio *folio)
+{
+	VM_WARN_ON_ONCE_FOLIO(1, folio);
+	return -EINVAL;
+}
 static inline int
 split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		unsigned int new_order)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 323654fb4f8cf..c7b6c1c75a18e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3515,6 +3515,14 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	return ret;
 }
 
+int uniform_split_unmapped_folio_to_zero_order(struct folio *folio)
+{
+	return __split_unmapped_folio(folio, /*new_order=*/0,
+				      /*split_at=*/&folio->page,
+				      /*xas=*/NULL, /*mapping=*/NULL,
+				      /*uniform_split=*/true);
+}
+
 bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
 		bool warns)
 {
-- 
2.52.0.rc1.455.g30608eb744-goog

