Message-ID: <CD886E34-9126-4B34-93B2-3DFBDAC4C454@nvidia.com>
Date: Sat, 15 Nov 2025 21:10:01 -0500
From: Zi Yan <ziy@...dia.com>
To: Jiaqi Yan <jiaqiyan@...gle.com>
Cc: nao.horiguchi@...il.com, linmiaohe@...wei.com, david@...hat.com,
 lorenzo.stoakes@...cle.com, william.roche@...cle.com, harry.yoo@...cle.com,
 tony.luck@...el.com, wangkefeng.wang@...wei.com, willy@...radead.org,
 jane.chu@...cle.com, akpm@...ux-foundation.org, osalvador@...e.de,
 muchun.song@...ux.dev, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v1 2/2] mm/memory-failure: avoid free HWPoison high-order
 folio

On 15 Nov 2025, at 20:47, Jiaqi Yan wrote:

> At the end of dissolve_free_hugetlb_folio, when a free HugeTLB
> folio becomes non-HugeTLB, it is released to the buddy allocator
> as a high-order folio, e.g. a folio containing 262144 pages if
> it was a 1G HugeTLB hugepage.
>
> This is problematic if the HugeTLB hugepage contained HWPoison
> subpages. In that case, since the buddy allocator does not check
> HWPoison for non-zero-order folios, the raw HWPoison page can be
> handed out along with its buddy pages and re-used by either the
> kernel or userspace.
>
> Memory failure recovery (MFR) in the kernel does attempt to take
> the raw HWPoison page off the buddy allocator after
> dissolve_free_hugetlb_folio. However, there is always a time
> window between the page being freed to the buddy allocator and
> being taken off it.
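>
> For illustration, the sequence is roughly (a sketch, not the
> exact call chain):
>
>   dissolve_free_hugetlb_folio(folio)
>     __update_and_free_hugetlb_folio()
>       hugetlb_free_folio(folio)   /* HWPoison page enters buddy */
>   /* window: an allocation can hand out the HWPoison page here */
>   take_page_off_buddy(page)       /* MFR pulls it back, if still free */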
>
> One obvious way to avoid this problem is to add page sanity
> checks in the page allocation and free paths. However, that goes
> against past efforts to reduce sanity-check overhead [1,2,3].
>
> Introduce hugetlb_free_hwpoison_folio to solve this problem.
> The idea is: when a HugeTLB folio is known to contain HWPoison
> page(s), first split the now non-HugeTLB high-order folio
> uniformly into 0-order folios, then let the healthy pages join
> the buddy allocator while rejecting the HWPoison ones.
>
> [1] https://lore.kernel.org/linux-mm/1460711275-1130-15-git-send-email-mgorman@techsingularity.net/
> [2] https://lore.kernel.org/linux-mm/1460711275-1130-16-git-send-email-mgorman@techsingularity.net/
> [3] https://lore.kernel.org/all/20230216095131.17336-1-vbabka@suse.cz
>
> Signed-off-by: Jiaqi Yan <jiaqiyan@...gle.com>
> ---
>  include/linux/hugetlb.h |  4 ++++
>  mm/hugetlb.c            |  8 ++++++--
>  mm/memory-failure.c     | 43 +++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 53 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 8e63e46b8e1f0..e1c334a7db2fe 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -870,8 +870,12 @@ int dissolve_free_hugetlb_folios(unsigned long start_pfn,
>  				    unsigned long end_pfn);
>
>  #ifdef CONFIG_MEMORY_FAILURE
> +extern void hugetlb_free_hwpoison_folio(struct folio *folio);
>  extern void folio_clear_hugetlb_hwpoison(struct folio *folio);
>  #else
> +static inline void hugetlb_free_hwpoison_folio(struct folio *folio)
> +{
> +}
>  static inline void folio_clear_hugetlb_hwpoison(struct folio *folio)
>  {
>  }
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 0455119716ec0..801ca1a14c0f0 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1596,6 +1596,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
>  						struct folio *folio)
>  {
>  	bool clear_flag = folio_test_hugetlb_vmemmap_optimized(folio);
> +	bool has_hwpoison = folio_test_hwpoison(folio);
>
>  	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
>  		return;
> @@ -1638,12 +1639,15 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
>  	 * Move PageHWPoison flag from head page to the raw error pages,
>  	 * which makes any healthy subpages reusable.
>  	 */
> -	if (unlikely(folio_test_hwpoison(folio)))
> +	if (unlikely(has_hwpoison))
>  		folio_clear_hugetlb_hwpoison(folio);
>
>  	folio_ref_unfreeze(folio, 1);
>
> -	hugetlb_free_folio(folio);
> +	if (unlikely(has_hwpoison))
> +		hugetlb_free_hwpoison_folio(folio);
> +	else
> +		hugetlb_free_folio(folio);
>  }
>
>  /*
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 3edebb0cda30b..e6a9deba6292a 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2002,6 +2002,49 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
>  	return ret;
>  }
>
> +void hugetlb_free_hwpoison_folio(struct folio *folio)
> +{
> +	struct folio *curr, *next;
> +	struct folio *end_folio = folio_next(folio);
> +	int ret;
> +
> +	VM_WARN_ON_FOLIO(folio_ref_count(folio) != 1, folio);
> +
> +	ret = uniform_split_unmapped_folio_to_zero_order(folio);

I realize that __split_unmapped_folio() is a misleading name and causes
confusion. It should be __split_frozen_folio(): looking at its current
call site, it is called after the folio is frozen. There should probably
be a check in __split_unmapped_folio() to make sure the folio is frozen.

Is it possible to change hugetlb_free_hwpoison_folio() so that it
can be called before folio_ref_unfreeze(folio, 1)? That way,
__split_unmapped_folio() is always called on frozen folios.
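
Something like this, as a sketch against the hunk above (note that
hugetlb_free_hwpoison_folio() would then see a frozen folio, i.e.
refcount 0 instead of 1, so its refcount expectations need adjusting):

	if (unlikely(has_hwpoison)) {
		folio_clear_hugetlb_hwpoison(folio);
		/* folio is still frozen (refcount 0) here */
		hugetlb_free_hwpoison_folio(folio);
		return;
	}

	folio_ref_unfreeze(folio, 1);
	hugetlb_free_folio(folio);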

You can add a preparation patch to rename __split_unmapped_folio() to
__split_frozen_folio() and add
VM_WARN_ON_ONCE_FOLIO(folio_ref_count(folio) != 0, folio) to the function.
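
For example (sketch only; parameter list abbreviated, the real
function takes more arguments):

static int __split_frozen_folio(struct folio *folio, int new_order)
{
	/* Callers must freeze the folio before splitting it. */
	VM_WARN_ON_ONCE_FOLIO(folio_ref_count(folio) != 0, folio);
	/* ... existing __split_unmapped_folio() body ... */
}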

Thanks.

> +	if (ret) {
> +		/*
> +		 * In case of split failure, none of the pages in folio
> +		 * will be freed to buddy allocator.
> +		 */
> +		pr_err("%#lx: failed to split free %d-order folio with HWPoison page(s): %d\n",
> +		       folio_pfn(folio), folio_order(folio), ret);
> +		return;
> +	}
> +
> +	/* Expect the 1st folio's refcount==1, and the others' refcount==0. */
> +	for (curr = folio; curr != end_folio; curr = next) {
> +		next = folio_next(curr);
> +
> +		VM_WARN_ON_FOLIO(folio_order(curr), curr);
> +
> +		if (PageHWPoison(&curr->page)) {
> +			if (curr != folio)
> +				folio_ref_inc(curr);
> +
> +			VM_WARN_ON_FOLIO(folio_ref_count(curr) != 1, curr);
> +			pr_warn("%#lx: prevented freeing HWPoison page\n",
> +				folio_pfn(curr));
> +			continue;
> +		}
> +
> +		if (curr == folio)
> +			folio_ref_dec(curr);
> +
> +		VM_WARN_ON_FOLIO(folio_ref_count(curr), curr);
> +		free_frozen_pages(&curr->page, folio_order(curr));
> +	}
> +}
> +
>  /*
>   * Taking refcount of hugetlb pages needs extra care about race conditions
>   * with basic operations like hugepage allocation/free/demotion.
> -- 
> 2.52.0.rc1.455.g30608eb744-goog


--
Best Regards,
Yan, Zi
