Message-ID: <20251219183346.3627510-3-jiaqiyan@google.com>
Date: Fri, 19 Dec 2025 18:33:45 +0000
From: Jiaqi Yan <jiaqiyan@...gle.com>
To: jackmanb@...gle.com, hannes@...xchg.org, linmiaohe@...wei.com,
ziy@...dia.com, harry.yoo@...cle.com, willy@...radead.org
Cc: nao.horiguchi@...il.com, david@...hat.com, lorenzo.stoakes@...cle.com,
william.roche@...cle.com, tony.luck@...el.com, wangkefeng.wang@...wei.com,
jane.chu@...cle.com, akpm@...ux-foundation.org, osalvador@...e.de,
muchun.song@...ux.dev, rientjes@...gle.com, duenwen@...gle.com,
jthoughton@...gle.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Liam.Howlett@...cle.com, vbabka@...e.cz, rppt@...nel.org, surenb@...gle.com,
mhocko@...e.com, Jiaqi Yan <jiaqiyan@...gle.com>
Subject: [PATCH v2 2/3] mm/page_alloc: only free healthy pages in high-order
HWPoison folio

At the end of dissolve_free_hugetlb_folio, when a free HugeTLB
folio becomes non-HugeTLB, it is released to the buddy allocator
as a high-order folio, e.g. a folio containing 262144 pages if it
was a 1G HugeTLB hugepage.

This is problematic if the HugeTLB hugepage contained HWPoison
subpages. Since the buddy allocator does not check HWPoison for
non-zero-order folios, the raw HWPoison page can be handed out
together with its buddy pages and be reused by either the kernel
or userspace.

Memory failure recovery (MFR) in the kernel does attempt to take
the raw HWPoison page off the buddy allocator after
dissolve_free_hugetlb_folio. However, there is always a time
window between the moment dissolve_free_hugetlb_folio frees a
HWPoison high-order folio to the buddy allocator and the moment
MFR takes the raw HWPoison page back off it.

One obvious way to avoid this problem is to add page sanity
checks to the page allocation or free path. However, that goes
against past efforts to reduce sanity-check overhead [1,2,3].

Instead, introduce free_has_hwpoison_pages to free only the
healthy pages in the high-order folio and exclude the HWPoison
ones. The idea is to iterate through the subpages of the folio to
identify contiguous ranges of healthy pages. Instead of freeing
pages one by one, decompose every healthy range into the largest
possible blocks; each block meets the alignment and size
requirements to be freed to the buddy allocator
(__free_frozen_pages). A userspace sketch of the decomposition
follows below.

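To illustrate (not part of the patch), here is a minimal userspace
sketch of the same decomposition, using toy pfn values, assuming a
64-bit unsigned long, and using GCC builtins in place of the
kernel's __ffs()/fls_long():

    #include <stdio.h>

    /*
     * Split [pfn, end_pfn) into the largest power-of-two blocks
     * allowed by both the alignment of pfn and the remaining size.
     */
    static void decompose(unsigned long pfn, unsigned long end_pfn)
    {
        while (pfn < end_pfn) {
            unsigned long remaining = end_pfn - pfn;
            /* Largest power of two that fits in remaining. */
            int size_order = 63 - __builtin_clzl(remaining);
            /* Largest power of two dividing pfn (any, if pfn == 0). */
            int align_order = pfn ? __builtin_ffsl(pfn) - 1 : size_order;
            int order = align_order < size_order ? align_order : size_order;

            printf("free pfn %lu, order %d (%lu pages)\n",
                   pfn, order, 1UL << order);
            pfn += 1UL << order;
        }
    }

    int main(void)
    {
        /*
         * An order-4 folio at pfn 0 with a HWPoison page at pfn 5
         * leaves the healthy ranges [0, 5) and [6, 16).
         */
        decompose(0, 5);    /* order-2 block + order-0 block */
        decompose(6, 16);   /* order-1 block + order-3 block */
        return 0;
    }
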
free_has_hwpoison_pages has linear time complexity, O(N) in the
number of pages in the folio. The power-of-two decomposition
ensures that the number of calls to the buddy allocator is
logarithmic for each contiguous healthy range, but the mandatory
linear scan for PageHWPoison determines the overall time
complexity.

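For example, if a single HWPoison page sits in the middle of a 1G
folio (262144 pages), each of the two healthy ranges spans roughly
2^17 pages and is freed with at most about 2 * log2(2^17) = 34
calls: the block order grows while the start pfn aligns up, then
shrinks toward the end of the range.
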
[1] https://lore.kernel.org/linux-mm/1460711275-1130-15-git-send-email-mgorman@techsingularity.net/
[2] https://lore.kernel.org/linux-mm/1460711275-1130-16-git-send-email-mgorman@techsingularity.net/
[3] https://lore.kernel.org/all/20230216095131.17336-1-vbabka@suse.cz

Signed-off-by: Jiaqi Yan <jiaqiyan@...gle.com>
---
 mm/page_alloc.c | 103 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 103 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 822e05f1a9646..20c8862ce594e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2976,8 +2976,111 @@ static void __free_frozen_pages(struct page *page, unsigned int order,
}
}
 
+static void prepare_compound_page_to_free(struct page *new_head,
+					  unsigned int order,
+					  unsigned long flags)
+{
+	new_head->flags.f = flags & (~PAGE_FLAGS_CHECK_AT_FREE);
+	new_head->mapping = NULL;
+	new_head->private = 0;
+
+	clear_compound_head(new_head);
+	if (order)
+		prep_compound_page(new_head, order);
+}
+
+/*
+ * Given a range of physically contiguous pages [curr, next), free them
+ * in blocks that meet __free_frozen_pages's requirements.
+ */
+static void free_contiguous_pages(struct page *curr, struct page *next,
+				  unsigned long flags)
+{
+	unsigned int order;
+	unsigned int align_order;
+	unsigned int size_order;
+	unsigned long pfn;
+	unsigned long end_pfn = page_to_pfn(next);
+	unsigned long remaining;
+
+	/*
+	 * Each iteration chooses the order to be the minimum of two
+	 * constraints:
+	 * - Alignment: the largest power-of-two dividing the current pfn.
+	 * - Size: the largest power-of-two that fits in the remaining
+	 *   number of pages.
+	 */
+	while (curr < next) {
+		pfn = page_to_pfn(curr);
+		remaining = end_pfn - pfn;
+
+		size_order = fls_long(remaining) - 1;
+		/* __ffs() is undefined at 0; pfn 0 is aligned to any order. */
+		align_order = pfn ? __ffs(pfn) : size_order;
+		order = min(align_order, size_order);
+
+		prepare_compound_page_to_free(curr, order, flags);
+		__free_frozen_pages(curr, order, FPI_NONE);
+		curr += 1UL << order;
+	}
+
+	VM_WARN_ON(curr != next);
+}
+
+/*
+ * Given a high-order compound page containing a certain number of
+ * HWPoison pages, free only the healthy ones to the buddy allocator.
+ *
+ * In the worst case this calls __free_frozen_pages O(2^order) times,
+ * so only use it when the compound page really contains HWPoison.
+ *
+ * This implementation doesn't work in the memdesc world.
+ */
+static void free_has_hwpoison_pages(struct page *page, unsigned int order)
+{
+	struct page *curr = page;
+	struct page *end = page + (1 << order);
+	struct page *next;
+	unsigned long flags = page->flags.f;
+	unsigned long total_freed = 0;
+	unsigned long total_hwp = 0;
+
+	VM_WARN_ON(flags & PAGE_FLAGS_CHECK_AT_FREE);
+
+	while (curr < end) {
+		next = curr;
+
+		/* Find the longest run of healthy pages starting at curr. */
+		while (next < end && !PageHWPoison(next))
+			++next;
+
+		free_contiguous_pages(curr, next, flags);
+		total_freed += next - curr;
+
+		if (next < end) {
+			/* next is a HWPoison page: skip over it. */
+			++total_hwp;
+			curr = next + 1;
+		} else {
+			curr = next;
+		}
+	}
+
+	pr_info("excluded %lu HWPoison pages, freed %lu healthy pages from folio\n",
+		total_hwp, total_freed);
+}
+
void free_frozen_pages(struct page *page, unsigned int order)
{
+	struct folio *folio = page_folio(page);
+
+	/* Free only the healthy blocks of a HWPoison high-order folio. */
+	if (order > 0 && unlikely(folio_test_has_hwpoisoned(folio))) {
+		folio_clear_has_hwpoisoned(folio);
+		free_has_hwpoison_pages(page, order);
+		return;
+	}
+
__free_frozen_pages(page, order, FPI_NONE);
}
--
2.52.0.322.g1dd061c0dc-goog