Message-Id: <20260116082721.275178-1-dev.jain@arm.com>
Date: Fri, 16 Jan 2026 13:57:21 +0530
From: Dev Jain <dev.jain@....com>
To: akpm@...ux-foundation.org
Cc: axelrasmussen@...gle.com,
	yuanchu@...gle.com,
	weixugc@...gle.com,
	david@...nel.org,
	lorenzo.stoakes@...cle.com,
	Liam.Howlett@...cle.com,
	vbabka@...e.cz,
	rppt@...nel.org,
	surenb@...gle.com,
	mhocko@...e.com,
	riel@...riel.com,
	harry.yoo@...cle.com,
	jannh@...gle.com,
	ryan.roberts@....com,
	baohua@...nel.org,
	baolin.wang@...ux.alibaba.com,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Dev Jain <dev.jain@....com>
Subject: [PATCH mm-unstable] mm: Fix uffd-wp bit loss when batching file folio unmapping

The recently added support for batched unmapping of file folios forgot to
update pte_install_uffd_wp_if_needed(), which still updates only a single pte.
Since page_vma_mapped_walk() then jumps to the end of the folio, the uffd-wp
marker gets set on only one pte of the batch. Fix this by passing nr_pages
into the function and setting the uffd-wp marker on all ptes.

Note that, since the nr_pages passed to this function is always derived from
some form of batching, the old ptevals of the batch are guaranteed to have the
uffd-wp bit set on either all ptes or none. It is therefore safe to derive the
value of the local variable "arm_uffd_pte" from only the single pteval passed
to this function, while applying the result to all ptes of the batch.

Use set_pte_at() in a loop to set the markers; we cannot use set_ptes(), as
that would increment the PFN, and a marker has no PFN to increment.

The userspace-visible effect of the bug is inaccuracy observed by workloads
relying on uffd-wp regions to install their own pages.

Fixes: 8798e255b5ec ("mm: rmap: support batched unmapping for file large folios")
Signed-off-by: Dev Jain <dev.jain@....com>
---
Patch applies on mm-unstable, commit f8ed52ac0cfb.

I observed this bug during code inspection, but it turns out that the uffd-wp-mremap
selftest will skip some tests with a bogus complaint that "MADV_PAGEOUT didn't work,
is swap enabled?" even when swap is enabled. The test first sets the region uffd-wp,
then swaps it out, then checks through pagemap whether it got swapped out. For
file folios this check makes no sense, since the ptes are simply cleared; but in
this particular case, because of uffd-wp preservation, we need to store PTE_MARKER_UFFD_WP,
which is stored as a swap entry, which is why the test passes on a non-buggy kernel.

 include/linux/mm_inline.h |  7 ++++---
 mm/memory.c               | 14 +-------------
 mm/rmap.c                 |  2 +-
 3 files changed, 6 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index fa2d6ba811b5..adec1dcb8793 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -568,7 +568,7 @@ static inline pte_marker copy_pte_marker(
  */
 static inline bool
 pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
-			      pte_t *pte, pte_t pteval)
+			      pte_t *pte, pte_t pteval, unsigned int nr_pages)
 {
 	bool arm_uffd_pte = false;
 
@@ -599,8 +599,9 @@ pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
 		arm_uffd_pte = true;
 
 	if (unlikely(arm_uffd_pte)) {
-		set_pte_at(vma->vm_mm, addr, pte,
-			   make_pte_marker(PTE_MARKER_UFFD_WP));
+		for (int i = 0; i < nr_pages; ++i, ++pte, addr += PAGE_SIZE)
+			set_pte_at(vma->vm_mm, addr, pte,
+				make_pte_marker(PTE_MARKER_UFFD_WP));
 		return true;
 	}
 
diff --git a/mm/memory.c b/mm/memory.c
index 6b22dd72ebc8..35ac86d29e77 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1588,8 +1588,6 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
 			      unsigned long addr, pte_t *pte, int nr,
 			      struct zap_details *details, pte_t pteval)
 {
-	bool was_installed = false;
-
 	if (!uffd_supports_wp_marker())
 		return false;
 
@@ -1600,17 +1598,7 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
 	if (zap_drop_markers(details))
 		return false;
 
-	for (;;) {
-		/* the PFN in the PTE is irrelevant. */
-		if (pte_install_uffd_wp_if_needed(vma, addr, pte, pteval))
-			was_installed = true;
-		if (--nr == 0)
-			break;
-		pte++;
-		addr += PAGE_SIZE;
-	}
-
-	return was_installed;
+	return pte_install_uffd_wp_if_needed(vma, addr, pte, pteval, nr);
 }
 
 static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
diff --git a/mm/rmap.c b/mm/rmap.c
index f13480cb9f2e..d6ca002bf79c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2171,7 +2171,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		 * we may want to replace a none pte with a marker pte if
 		 * it's file-backed, so we don't lose the tracking info.
 		 */
-		pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
+		pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval, nr_pages);
 
 		/* Update high watermark before we lower rss */
 		update_hiwater_rss(mm);
-- 
2.34.1