Message-ID: <9f75b988-9729-452b-beb5-deab0718faa8@kernel.org>
Date: Fri, 6 Feb 2026 12:01:08 +0100
From: "David Hildenbrand (Arm)" <david@...nel.org>
To: xu.xin16@....com.cn, akpm@...ux-foundation.org
Cc: chengming.zhou@...ux.dev, hughd@...gle.com, wang.yaxin@....com.cn,
yang.yang29@....com.cn, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] ksm: Optimize rmap_walk_ksm by passing a suitable
address range
On 2/6/26 11:01, xu.xin16@....com.cn wrote:
> From: xu xin <xu.xin16@....com.cn>
>
> Problem
> =======
> When available memory is extremely tight, causing KSM pages to be swapped
> out, or when there is significant memory fragmentation and THP triggers
> memory compaction, the system will invoke the rmap_walk_ksm function to
> perform reverse mapping. However, we observed that this function becomes
> particularly time-consuming when a large number of VMAs (e.g., 20,000)
> share the same anon_vma. Through debug trace analysis, we found that most
> of the latency occurs within anon_vma_interval_tree_foreach, leading to an
> excessively long hold time on the anon_vma lock (even reaching 500ms or
> more), which in turn causes upper-layer applications (waiting for the
> anon_vma lock) to be blocked for extended periods.
>
> Root Reaon
s/Reaon/Reason/ or better "Cause"
> ==========
> Further investigation revealed that 99.9% of iterations inside the
> anon_vma_interval_tree_foreach loop are skipped due to the first check
> "if (addr < vma->vm_start || addr >= vma->vm_end)), indicating that a large
> number of loop iterations are ineffective. This inefficiency arises because
> the pgoff_start and pgoff_end parameters passed to
> anon_vma_interval_tree_foreach span the entire address space from 0 to
> ULONG_MAX, resulting in very poor loop efficiency.
>
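For context, the loop in question currently looks roughly like this (heavily
simplified; the try_lock handling and the search_new_forks logic are omitted):

	hlist_for_each_entry(rmap_item, &stable_node->hlist, hlist) {
		/* Ignore the stable/unstable/sqnr flags */
		const unsigned long addr = rmap_item->address & PAGE_MASK;
		struct anon_vma *anon_vma = rmap_item->anon_vma;
		struct anon_vma_chain *vmac;
		struct vm_area_struct *vma;

		anon_vma_lock_read(anon_vma);
		/* Visits every VMA sharing the anon_vma: 0..ULONG_MAX */
		anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
					       0, ULONG_MAX) {
			vma = vmac->vma;

			/* First check: ~99.9% of iterations bail out here */
			if (addr < vma->vm_start || addr >= vma->vm_end)
				continue;
			/* Second check (simplified) */
			if (rmap_item->mm != vma->vm_mm)
				continue;
			/* ... the actual rmap work ... */
		}
		anon_vma_unlock_read(anon_vma);
	}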
> Solution
> ========
> In fact, we can significantly improve performance by passing a more precise
> range based on the given addr. Since the original pages merged by KSM
> correspond to anonymous VMAs, the page offset can be calculated as
> pgoff = address >> PAGE_SHIFT. Therefore, we can optimize the call by
> defining:
>
> pgoff_start = rmap_item->address >> PAGE_SHIFT;
>
> Since KSM folios are always order-0, folio_nr_pages(folio) is always 1,
> so the line:
>
> "pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;"
>
> becomes directly:
>
> "pgoff_end = pgoff_start;"
>
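If I read the patch below correctly, that means the interval tree walk
narrows from the full pgoff space to a single-page window, i.e.:

	anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
				       pgoff_start, pgoff_end) {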
> Performance
> ===========
> In our real embedded Linux environment, the measured metrics were as follows:
>
> 1) Time_ms: Max time the anon_vma lock is held in a single rmap_walk_ksm.
> 2) Nr_iteration_total: Max number of iterations of the anon_vma_interval_tree_foreach loop.
> 3) Skip_addr_out_of_range: Max number of iterations skipped due to the first check
> (vma->vm_start and vma->vm_end) in the anon_vma_interval_tree_foreach loop.
> 4) Skip_mm_mismatch: Max number of iterations skipped due to the second check
> (rmap_item->mm == vma->vm_mm) in the anon_vma_interval_tree_foreach loop.
>
> The result is as follows:
>
> Time_ms Nr_iteration_total Skip_addr_out_of_range Skip_mm_mismatch
> Before patched: 228.65 22169 22168 0
> After pacthed: 0.396 3 0 2
s/pacthed/patched/
But I would just call it "Before" and "After".
>
> The referenced reproducer of rmap_walk_ksm can be found at:
> https://lore.kernel.org/all/20260206151424734QIyWL_pA-1QeJPbJlUxsO@zte.com.cn/
>
> Signed-off-by: xu xin <xu.xin16@....com.cn>
Did you accidentally drop a
Co-developed-by: Wang Yaxin <wang.yaxin@....com.cn>
?
> ---
> mm/ksm.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 950e122bcbf4..54f72e92b7f3 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -3170,6 +3170,9 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
>  	hlist_for_each_entry(rmap_item, &stable_node->hlist, hlist) {
>  		/* Ignore the stable/unstable/sqnr flags */
>  		const unsigned long addr = rmap_item->address & PAGE_MASK;
> +		const pgoff_t pgoff_start = rmap_item->address >> PAGE_SHIFT;
> +		/* KSM folios are always order-0 normal pages */
> +		const pgoff_t pgoff_end = pgoff_start;
Maybe simply

	const pgoff_t pgoff = rmap_item->address >> PAGE_SHIFT;

and drop pgoff_end? Then you simply pass pgoff as start and end below.
You could add the KSM folio comment above the
anon_vma_interval_tree_foreach.
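I.e., something like this (untested):

	const pgoff_t pgoff = rmap_item->address >> PAGE_SHIFT;
	...
	/* KSM folios are always order-0 normal pages */
	anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
				       pgoff, pgoff) {
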
If the tools/testing/selftests/mm/rmap.c selftest keeps passing,
rmap_walk_ksm() should be working as expected. Did you run it to make sure?
--
Cheers,
David