Message-ID: <20251022033531.389351-2-ziy@nvidia.com>
Date: Tue, 21 Oct 2025 23:35:27 -0400
From: Zi Yan <ziy@...dia.com>
To: linmiaohe@...wei.com,
	david@...hat.com,
	jane.chu@...cle.com
Cc: kernel@...kajraghav.com,
	ziy@...dia.com,
	akpm@...ux-foundation.org,
	mcgrof@...nel.org,
	nao.horiguchi@...il.com,
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
	Baolin Wang <baolin.wang@...ux.alibaba.com>,
	"Liam R. Howlett" <Liam.Howlett@...cle.com>,
	Nico Pache <npache@...hat.com>,
	Ryan Roberts <ryan.roberts@....com>,
	Dev Jain <dev.jain@....com>,
	Barry Song <baohua@...nel.org>,
	Lance Yang <lance.yang@...ux.dev>,
	"Matthew Wilcox (Oracle)" <willy@...radead.org>,
	Wei Yang <richard.weiyang@...il.com>,
	Yang Shi <shy828301@...il.com>,
	linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: [PATCH v3 1/4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order

A folio split clears PG_has_hwpoisoned, but the flag should be preserved
in after-split folios that contain pages with the PG_hwpoisoned flag when
the folio is split to >0 order. Scan all pages in a to-be-split folio to
determine which after-split folios need the flag.
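
As an illustration (a sketch, not part of the patch): splitting an
order-4 folio whose page at index 5 is hwpoisoned into order-2 folios
yields four folios covering pages [0..3], [4..7], [8..11], and [12..15];
only the folio covering [4..7] should keep the flag. With the helper
added below, the check for that one after-split folio amounts to:

	/* illustrative only: 1 << new_order == 4 for an order-2 folio */
	if (page_range_has_hwpoisoned(folio_page(folio, 4), 4))
		folio_set_has_hwpoisoned(new_folio);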

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned,
avoiding the scan and setting the flag on all after-split folios, but the
resulting false positives would have an undesirable negative impact. To
remove the false positives, callers of folio_test_has_hwpoisoned() and
folio_contain_hwpoisoned_page() would need to do the scan themselves,
which would be a hassle for current and future callers and more costly
than doing the scan in the split code. More details are discussed in [1].

It is OK that the current implementation does not do this, because the
memory failure code always tries to split to order-0 folios, and if a
folio cannot be split to order-0, the memory failure code either gives a
warning or does not perform the split.

Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
Signed-off-by: Zi Yan <ziy@...dia.com>
---
 mm/huge_memory.c | 28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fc65ec3393d2..f3896c1f130f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3455,6 +3455,17 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 					caller_pins;
 }
 
+static bool page_range_has_hwpoisoned(struct page *first_page, long nr_pages)
+{
+	long i;
+
+	for (i = 0; i < nr_pages; i++)
+		if (PageHWPoison(first_page + i))
+			return true;
+
+	return false;
+}
+
 /*
  * It splits @folio into @new_order folios and copies the @folio metadata to
  * all the resulting folios.
@@ -3462,22 +3473,32 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 static void __split_folio_to_order(struct folio *folio, int old_order,
 		int new_order)
 {
+	/* Scan for poisoned pages when splitting a poisoned folio to large folios */
+	bool check_poisoned_pages = folio_test_has_hwpoisoned(folio) &&
+				    new_order != 0;
 	long new_nr_pages = 1 << new_order;
 	long nr_pages = 1 << old_order;
 	long i;
 
+	folio_clear_has_hwpoisoned(folio);
+
+	/* Check first new_nr_pages since the loop below skips them */
+	if (check_poisoned_pages &&
+	    page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
+		folio_set_has_hwpoisoned(folio);
 	/*
 	 * Skip the first new_nr_pages, since the new folio from them have all
 	 * the flags from the original folio.
 	 */
 	for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
 		struct page *new_head = &folio->page + i;
-
 		/*
 		 * Careful: new_folio is not a "real" folio before we cleared PageTail.
 		 * Don't pass it around before clear_compound_head().
 		 */
 		struct folio *new_folio = (struct folio *)new_head;
+		bool poisoned_new_folio = check_poisoned_pages &&
+			page_range_has_hwpoisoned(new_head, new_nr_pages);
 
 		VM_BUG_ON_PAGE(atomic_read(&new_folio->_mapcount) != -1, new_head);
 
@@ -3514,6 +3535,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
 				 (1L << PG_dirty) |
 				 LRU_GEN_MASK | LRU_REFS_MASK));
 
+		if (poisoned_new_folio)
+			folio_set_has_hwpoisoned(new_folio);
+
 		new_folio->mapping = folio->mapping;
 		new_folio->index = folio->index + i;
 
@@ -3600,8 +3624,6 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	int start_order = uniform_split ? new_order : old_order - 1;
 	int split_order;
 
-	folio_clear_has_hwpoisoned(folio);
-
 	/*
 	 * split to new_order one order at a time. For uniform split,
 	 * folio is split to new_order directly.
-- 
2.51.0

