Message-ID: <3DD2EF5E-3E6A-40B0-AFCC-8FB38F0763DB@nvidia.com>
Date: Sun, 21 Sep 2025 22:36:31 -0400
From: Zi Yan <ziy@...dia.com>
To: Lance Yang <lance.yang@...ux.dev>
Cc: akpm@...ux-foundation.org, david@...hat.com, lorenzo.stoakes@...cle.com,
 usamaarif642@...il.com, yuzhao@...gle.com, baolin.wang@...ux.alibaba.com,
 baohua@...nel.org, voidice@...il.com, Liam.Howlett@...cle.com,
 catalin.marinas@....com, cerasuolodomenico@...il.com, hannes@...xchg.org,
 kaleshsingh@...gle.com, npache@...hat.com, riel@...riel.com,
 roman.gushchin@...ux.dev, rppt@...nel.org, ryan.roberts@....com,
 dev.jain@....com, ryncsn@...il.com, shakeel.butt@...ux.dev,
 surenb@...gle.com, hughd@...gle.com, willy@...radead.org,
 matthew.brost@...el.com, joshua.hahnjy@...il.com, rakie.kim@...com,
 byungchul@...com, gourry@...rry.net, ying.huang@...ux.alibaba.com,
 apopple@...dia.com, qun-wei.lin@...iatek.com, Andrew.Yang@...iatek.com,
 casper.li@...iatek.com, chinwen.chang@...iatek.com,
 linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
 linux-mediatek@...ts.infradead.org, linux-mm@...ck.org, ioworker0@...il.com,
 stable@...r.kernel.org
Subject: Re: [PATCH 1/1] mm/thp: fix MTE tag mismatch when replacing
 zero-filled subpages

On 21 Sep 2025, at 22:14, Lance Yang wrote:

> From: Lance Yang <lance.yang@...ux.dev>
>
> When both THP and MTE are enabled, splitting a THP and replacing its
> zero-filled subpages with the shared zeropage can cause MTE tag mismatch
> faults in userspace.
>
> Remapping zero-filled subpages to the shared zeropage is unsafe, as the
> zeropage has a fixed tag of zero, which may not match the tag expected by
> the userspace pointer.
>
> KSM already avoids this problem by using memcmp_pages(), which on arm64
> intentionally reports MTE-tagged pages as non-identical to prevent unsafe
> merging.
>
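
For readers following along: the arm64 behaviour referred to here is
memcmp_pages() in arch/arm64/kernel/mte.c. A simplified sketch of that
logic, from memory, so helper names such as page_mte_tagged() may differ
between kernel versions:

int memcmp_pages(struct page *page1, struct page *page2)
{
	char *addr1 = page_address(page1);
	char *addr2 = page_address(page2);
	int ret = memcmp(addr1, addr2, PAGE_SIZE);

	if (!system_supports_mte() || ret)
		return ret;

	/*
	 * The bytes match, but if either page carries MTE tags, report the
	 * pages as different so callers like KSM never merge a tagged page
	 * with an untagged one.
	 */
	if (page_mte_tagged(page1) || page_mte_tagged(page2))
		return addr1 != addr2;

	return ret;
}
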
> As suggested by David[1], this patch adopts the same pattern, replacing the
> memchr_inv() byte-level check with a call to pages_identical(). This
> leverages existing architecture-specific logic to determine if a page is
> truly identical to the shared zeropage.
>
> Having both the THP shrinker and KSM rely on pages_identical() makes the
> design more future-proof, IMO. Instead of handling quirks in generic code,
> we just let the architecture decide what makes two pages identical.
>
> [1] https://lore.kernel.org/all/ca2106a3-4bb2-4457-81af-301fd99fbef4@redhat.com
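
As I understand it, pages_identical() is just a thin wrapper around the
arch-overridable memcmp_pages(), roughly:

static inline bool pages_identical(struct page *page1, struct page *page2)
{
	return !memcmp_pages(page1, page2);
}

so whatever an architecture decides in memcmp_pages() automatically applies
to both KSM and this THP shrinker path.
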
>
> Cc: <stable@...r.kernel.org>
> Reported-by: Qun-wei Lin <Qun-wei.Lin@...iatek.com>
> Closes: https://lore.kernel.org/all/a7944523fcc3634607691c35311a5d59d1a3f8d4.camel@mediatek.com
> Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
> Suggested-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Lance Yang <lance.yang@...ux.dev>
> ---
> Tested on x86_64 and on QEMU for arm64 (with and without MTE support),
> and the fix works as expected.

From [1], I see you mentioned that RISC-V also has an address masking
feature. Is it affected by this issue? Also, memcmp_pages() is currently
only implemented by arm64, for MTE. Should any architecture with address
masking implement it as well, to avoid the same issue?
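
To illustrate the point: if I remember correctly, the generic fallback in
mm/util.c is a plain byte compare and knows nothing about any out-of-band
metadata, so an architecture that tags memory would have to override it to
get the safe behaviour (sketch only; the kmap flavour varies by kernel
version):

int __weak memcmp_pages(struct page *page1, struct page *page2)
{
	char *addr1, *addr2;
	int ret;

	addr1 = kmap_local_page(page1);
	addr2 = kmap_local_page(page2);
	ret = memcmp(addr1, addr2, PAGE_SIZE);
	kunmap_local(addr2);
	kunmap_local(addr1);
	return ret;
}
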

>
>  mm/huge_memory.c | 15 +++------------
>  mm/migrate.c     |  8 +-------
>  2 files changed, 4 insertions(+), 19 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 32e0ec2dde36..28d4b02a1aa5 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -4104,29 +4104,20 @@ static unsigned long deferred_split_count(struct shrinker *shrink,
>  static bool thp_underused(struct folio *folio)
>  {
>  	int num_zero_pages = 0, num_filled_pages = 0;
> -	void *kaddr;
>  	int i;
>
>  	for (i = 0; i < folio_nr_pages(folio); i++) {
> -		kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
> -		if (!memchr_inv(kaddr, 0, PAGE_SIZE)) {
> -			num_zero_pages++;
> -			if (num_zero_pages > khugepaged_max_ptes_none) {
> -				kunmap_local(kaddr);
> +		if (pages_identical(folio_page(folio, i), ZERO_PAGE(0))) {
> +			if (++num_zero_pages > khugepaged_max_ptes_none)
>  				return true;
> -			}
>  		} else {
>  			/*
>  			 * Another path for early exit once the number
>  			 * of non-zero filled pages exceeds threshold.
>  			 */
> -			num_filled_pages++;
> -			if (num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none) {
> -				kunmap_local(kaddr);
> +			if (++num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none)
>  				return false;
> -			}
>  		}
> -		kunmap_local(kaddr);
>  	}
>  	return false;
>  }
> diff --git a/mm/migrate.c b/mm/migrate.c
> index aee61a980374..ce83c2c3c287 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -300,9 +300,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
>  					  unsigned long idx)
>  {
>  	struct page *page = folio_page(folio, idx);
> -	bool contains_data;
>  	pte_t newpte;
> -	void *addr;
>
>  	if (PageCompound(page))
>  		return false;
> @@ -319,11 +317,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
>  	 * this subpage has been non present. If the subpage is only zero-filled
>  	 * then map it to the shared zeropage.
>  	 */
> -	addr = kmap_local_page(page);
> -	contains_data = memchr_inv(addr, 0, PAGE_SIZE);
> -	kunmap_local(addr);
> -
> -	if (contains_data)
> +	if (!pages_identical(page, ZERO_PAGE(0)))
>  		return false;
>
>  	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
> -- 
> 2.49.0

The changes look good to me. Thanks.

Acked-by: Zi Yan <ziy@...dia.com>

--
Best Regards,
Yan, Zi
