Message-Id: <20250605083144.43046-1-21cnbao@gmail.com>
Date: Thu, 5 Jun 2025 20:31:44 +1200
From: Barry Song <21cnbao@...il.com>
To: linux-mm@...ck.org
Cc: akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org,
Barry Song <v-songbaohua@...o.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
David Hildenbrand <david@...hat.com>,
Oscar Salvador <osalvador@...e.de>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Jann Horn <jannh@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Lokesh Gidra <lokeshgidra@...gle.com>,
Dev Jain <dev.jain@....com>,
Tangquan Zheng <zhengtangquan@...o.com>
Subject: [PATCH v2] mm: madvise: use walk_page_range_vma() instead of walk_page_range()
From: Barry Song <v-songbaohua@...o.com>
We've already found the VMA within madvise_walk_vmas() before calling
the behavior-specific functions such as madvise_free_single_vma(), so
calling walk_page_range(), which does find_vma() all over again, is
unnecessary. The redundant lookup also prevents potential optimizations
in those madvise callbacks, particularly the use of dedicated per-VMA
locking.
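
To make the redundancy concrete: walk_page_range() re-resolves the VMA
internally before walking it, which is exactly the lookup that
madvise_walk_vmas() has already performed. A simplified sketch of the
two entry points follows (paraphrased from mm/pagewalk.c; locking,
error handling and the per-VMA iteration details are omitted, so this
is illustrative only, not the verbatim implementation):

    /* Sketch only, not the verbatim mm/pagewalk.c implementation. */
    int walk_page_range(struct mm_struct *mm, unsigned long start,
                        unsigned long end, const struct mm_walk_ops *ops,
                        void *private)
    {
            /* Redundant for madvise: the caller already holds the VMA. */
            struct vm_area_struct *vma = find_vma(mm, start);

            /* ... walk every VMA overlapping [start, end) ... */
    }

    int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
                            unsigned long end, const struct mm_walk_ops *ops,
                            void *private)
    {
            /* Walks only the given VMA: no lookup, so callers could in
             * future get away with taking just this VMA's lock. */
    }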
Reviewed-by: Anshuman Khandual <anshuman.khandual@....com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Acked-by: David Hildenbrand <david@...hat.com>
Reviewed-by: Oscar Salvador <osalvador@...e.de>
Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Jann Horn <jannh@...gle.com>
Cc: Suren Baghdasaryan <surenb@...gle.com>
Cc: Lokesh Gidra <lokeshgidra@...gle.com>
Cc: Dev Jain <dev.jain@....com>
Cc: Tangquan Zheng <zhengtangquan@...o.com>
Signed-off-by: Barry Song <v-songbaohua@...o.com>
---
-v2:
* Also extend the modification to callbacks beyond
madvise_free_single_vma() since the code flow is
the same - Dev, Lorenzo
-rfc:
https://lore.kernel.org/linux-mm/20250603013154.5905-1-21cnbao@gmail.com/
 mm/madvise.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index 5f7a66a1617e..56d9ca2557b9 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -282,7 +282,7 @@ static long madvise_willneed(struct vm_area_struct *vma,
 	*prev = vma;
 #ifdef CONFIG_SWAP
 	if (!file) {
-		walk_page_range(vma->vm_mm, start, end, &swapin_walk_ops, vma);
+		walk_page_range_vma(vma, start, end, &swapin_walk_ops, vma);
 		lru_add_drain(); /* Push any new pages onto the LRU now */
 		return 0;
 	}
@@ -581,7 +581,7 @@ static void madvise_cold_page_range(struct mmu_gather *tlb,
 	};
 
 	tlb_start_vma(tlb, vma);
-	walk_page_range(vma->vm_mm, addr, end, &cold_walk_ops, &walk_private);
+	walk_page_range_vma(vma, addr, end, &cold_walk_ops, &walk_private);
 	tlb_end_vma(tlb, vma);
 }
 
@@ -619,7 +619,7 @@ static void madvise_pageout_page_range(struct mmu_gather *tlb,
 	};
 
 	tlb_start_vma(tlb, vma);
-	walk_page_range(vma->vm_mm, addr, end, &cold_walk_ops, &walk_private);
+	walk_page_range_vma(vma, addr, end, &cold_walk_ops, &walk_private);
 	tlb_end_vma(tlb, vma);
 }
 
@@ -825,7 +825,7 @@ static int madvise_free_single_vma(struct madvise_behavior *madv_behavior,
 
 	mmu_notifier_invalidate_range_start(&range);
 	tlb_start_vma(tlb, vma);
-	walk_page_range(vma->vm_mm, range.start, range.end,
+	walk_page_range_vma(vma, range.start, range.end,
 			&madvise_free_walk_ops, tlb);
 	tlb_end_vma(tlb, vma);
 	mmu_notifier_invalidate_range_end(&range);
@@ -1160,7 +1160,7 @@ static long madvise_guard_install(struct vm_area_struct *vma,
 	unsigned long nr_pages = 0;
 
 	/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
-	err = walk_page_range_mm(vma->vm_mm, start, end,
+	err = walk_page_range_vma(vma, start, end,
 				 &guard_install_walk_ops, &nr_pages);
 	if (err < 0)
 		return err;
@@ -1244,7 +1244,7 @@ static long madvise_guard_remove(struct vm_area_struct *vma,
 	if (!is_valid_guard_vma(vma, /* allow_locked = */true))
 		return -EINVAL;
 
-	return walk_page_range(vma->vm_mm, start, end,
+	return walk_page_range_vma(vma, start, end,
			       &guard_remove_walk_ops, NULL);
 }
 
--
2.39.3 (Apple Git-146)