Message-ID: <202602061522548871ohgXN8z0qq87sTSX-yZc@zte.com.cn>
Date: Fri, 6 Feb 2026 15:22:54 +0800 (CST)
From: <xu.xin16@....com.cn>
To: <david@...nel.org>, <akpm@...ux-foundation.org>
Cc: <chengming.zhou@...ux.dev>, <hughd@...gle.com>, <wang.yaxin@....com.cn>,
<yang.yang29@....com.cn>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: ksm: initialize rmap values directly and make them const

From: xu xin <xu.xin16@....com.cn>

Since commit 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing a
suitable addressrange") has already been merged, this patch addresses the
issue raised by David at:
https://lore.kernel.org/all/ba03780a-fd65-4a03-97de-bc0905106260@kernel.org/

Initialize the rmap values (addr, pgoff_start, pgoff_end) directly and make
them const to make the code more robust. Besides, since KSM folios are
always order-0, folio_nr_pages(folio) is always 1, so the line:
"pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;"
becomes simply:
"pgoff_end = pgoff_start;"

The test reproducer for rmap_walk_ksm can be found at:
https://lore.kernel.org/all/20260206151424734QIyWL_pA-1QeJPbJlUxsO@zte.com.cn/

Fixes: 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing a suitable addressrange")
Signed-off-by: xu xin <xu.xin16@....com.cn>
---
mm/ksm.c | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 031c17e4ada6..c7ca117024a4 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3171,8 +3171,11 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
struct anon_vma *anon_vma = rmap_item->anon_vma;
struct anon_vma_chain *vmac;
struct vm_area_struct *vma;
- unsigned long addr;
- pgoff_t pgoff_start, pgoff_end;
+ /* Ignore the stable/unstable/sqnr flags */
+ const unsigned long addr = rmap_item->address & PAGE_MASK;
+ const pgoff_t pgoff_start = rmap_item->address >> PAGE_SHIFT;
+ /* KSM folios are always order-0 normal pages */
+ const pgoff_t pgoff_end = pgoff_start;

cond_resched();
if (!anon_vma_trylock_read(anon_vma)) {
@@ -3183,12 +3186,6 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
anon_vma_lock_read(anon_vma);
}

- /* Ignore the stable/unstable/sqnr flags */
- addr = rmap_item->address & PAGE_MASK;
-
- pgoff_start = rmap_item->address >> PAGE_SHIFT;
- pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;
-
anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
pgoff_start, pgoff_end) {
--
2.25.1