Message-Id: <20250718090244.21092-5-dev.jain@arm.com>
Date: Fri, 18 Jul 2025 14:32:41 +0530
From: Dev Jain <dev.jain@....com>
To: akpm@...ux-foundation.org
Cc: ryan.roberts@....com,
david@...hat.com,
willy@...radead.org,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
catalin.marinas@....com,
will@...nel.org,
Liam.Howlett@...cle.com,
lorenzo.stoakes@...cle.com,
vbabka@...e.cz,
jannh@...gle.com,
anshuman.khandual@....com,
peterx@...hat.com,
joey.gouly@....com,
ioworker0@...il.com,
baohua@...nel.org,
kevin.brodsky@....com,
quic_zhenhuah@...cinc.com,
christophe.leroy@...roup.eu,
yangyicong@...ilicon.com,
linux-arm-kernel@...ts.infradead.org,
hughd@...gle.com,
yang@...amperecomputing.com,
ziy@...dia.com,
Dev Jain <dev.jain@....com>
Subject: [PATCH v5 4/7] mm: Introduce FPB_RESPECT_WRITE for PTE batching infrastructure
Patch 6 optimizes mprotect() by batch-clearing the ptes, masking in the new
protections, and batch-setting the ptes. Suppose the first pte of the batch
is writable: with the current implementation of folio_pte_batch(), the other
ptes in the batch are not guaranteed to be writable too, so we may
incorrectly end up setting the writable bit on all ptes via
modify_prot_commit_ptes().

Therefore, introduce FPB_RESPECT_WRITE so that the ptes in a batch are
either all writable or all non-writable.
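
As an illustrative sketch (not part of this patch), a caller that needs the
whole batch to agree on the write bit would pass the new flag when batching;
the folio_pte_batch_flags() name, signature and surrounding variables below
are assumed for illustration only:

	/*
	 * Hypothetical caller: restrict the batch to ptes whose write bit
	 * matches that of the first pte, so the batch can be cleared, given
	 * the new protection and re-set without marking previously
	 * non-writable ptes writable.
	 */
	nr = folio_pte_batch_flags(folio, vma, ptep, &ptent, max_nr,
				   FPB_RESPECT_WRITE);

After such a call, pte_write() is uniform across all nr ptes, so the write
bit of the first pte can safely be applied to the whole batch.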
Signed-off-by: Dev Jain <dev.jain@....com>
---
mm/internal.h | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 5b0f71e5434b..28d2d5b051df 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -208,17 +208,20 @@ typedef int __bitwise fpb_t;
/* Compare PTEs respecting the soft-dirty bit. */
#define FPB_RESPECT_SOFT_DIRTY ((__force fpb_t)BIT(1))

+/* Compare PTEs respecting the writable bit. */
+#define FPB_RESPECT_WRITE ((__force fpb_t)BIT(2))
+
/*
* Merge PTE write bits: if any PTE in the batch is writable, modify the
* PTE at @ptentp to be writable.
*/
-#define FPB_MERGE_WRITE ((__force fpb_t)BIT(2))
+#define FPB_MERGE_WRITE ((__force fpb_t)BIT(3))

/*
* Merge PTE young and dirty bits: if any PTE in the batch is young or dirty,
* modify the PTE at @ptentp to be young or dirty, respectively.
*/
-#define FPB_MERGE_YOUNG_DIRTY ((__force fpb_t)BIT(3))
+#define FPB_MERGE_YOUNG_DIRTY ((__force fpb_t)BIT(4))

static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
{
@@ -226,7 +229,9 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
pte = pte_mkclean(pte);
if (likely(!(flags & FPB_RESPECT_SOFT_DIRTY)))
pte = pte_clear_soft_dirty(pte);
- return pte_wrprotect(pte_mkold(pte));
+ if (likely(!(flags & FPB_RESPECT_WRITE)))
+ pte = pte_wrprotect(pte);
+ return pte_mkold(pte);
}

/**
--
2.30.2