Message-ID: <20250928045207.78546-1-lance.yang@linux.dev>
Date: Sun, 28 Sep 2025 12:52:07 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: akpm@...ux-foundation.org,
david@...hat.com
Cc: xu.xin16@....com.cn,
chengming.zhou@...ux.dev,
ran.xiaokai@....com.cn,
yang.yang29@....com.cn,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
ioworker0@...il.com,
Lance Yang <lance.yang@...ux.dev>
Subject: [PATCH 1/1] mm/ksm: fix spurious soft-dirty bit on zero-filled page merging
From: Lance Yang <lance.yang@...ux.dev>

When KSM merges a zero-filled page with the shared zeropage, it uses
pte_mkdirty() to mark the new PTE for internal accounting. However,
pte_mkdirty() unconditionally sets both the hardware dirty bit and the
soft-dirty bit.

This behavior causes false positives in userspace tools like CRIU that
rely on the soft-dirty mechanism for tracking memory changes.

So, preserve the correct state by reading the old PTE under the page
table lock and explicitly clearing the soft-dirty bit from the new PTE
if the original was not soft-dirty.
Fixes: 79271476b336 ("ksm: support unsharing KSM-placed zero pages")
Signed-off-by: Lance Yang <lance.yang@...ux.dev>
---
mm/ksm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/ksm.c b/mm/ksm.c
index 04019a15b25d..e34516b8fbe4 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1403,6 +1403,9 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
* the dirty bit in zero page's PTE is set.
*/
newpte = pte_mkdirty(pte_mkspecial(pfn_pte(page_to_pfn(kpage), vma->vm_page_prot)));
+ if (!pte_soft_dirty(ptep_get(ptep)))
+ newpte = pte_clear_soft_dirty(newpte);
+
ksm_map_zero_page(mm);
/*
* We're replacing an anonymous page with a zero page, which is
--
2.49.0