Message-ID: <86ab6678-94a3-4150-8847-4fad00e09452@redhat.com>
Date: Wed, 15 Oct 2025 14:22:23 +0200
From: David Hildenbrand <david@...hat.com>
To: Pedro Demarchi Gomes <pedrodemargomes@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Xu Xin <xu.xin16@....com.cn>, Chengming Zhou <chengming.zhou@...ux.dev>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] ksm: use range-walk function to jump over holes in
scan_get_next_rmap_item
On 14.10.25 17:11, Pedro Demarchi Gomes wrote:
> Currently, scan_get_next_rmap_item() walks every page address in a VMA
> to locate mergeable pages. This becomes highly inefficient when scanning
> large virtual memory areas that contain mostly unmapped regions.
>
> This patch replaces the per-address lookup with a range walk using
> walk_page_range(). The range walker allows KSM to skip over entire
> unmapped holes in a VMA, avoiding unnecessary lookups.
> This problem was previously discussed in [1].
>
> Changes since v1 [2]:
> - Use pmd_entry to walk page range
> - Use cond_resched inside pmd_entry()
> - walk_page_range returns page+folio
>
> [1] https://lore.kernel.org/linux-mm/423de7a3-1c62-4e72-8e79-19a6413e420c@redhat.com/
> [2] https://lore.kernel.org/linux-mm/20251014055828.124522-1-pedrodemargomes@gmail.com/
>
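For readers following along: the walker pattern the patch describes would look roughly like the sketch below. This is illustrative only, not the actual patch; the names ksm_walk_private and ksm_pmd_entry are made up here, while walk_page_range(), struct mm_walk_ops and cond_resched() are the real kernel interfaces being discussed.

```c
/* Sketch only -- hypothetical names, not the patch itself. */
struct ksm_walk_private {
	struct page *page;	/* first mergeable page found, if any */
	struct folio *folio;
	unsigned long address;
};

static int ksm_pmd_entry(pmd_t *pmd, unsigned long addr,
			 unsigned long next, struct mm_walk *walk)
{
	struct ksm_walk_private *priv = walk->private;

	cond_resched();
	/* ... map the PTE page, look for a mergeable page in
	 * [addr, next), stash page+folio+address in priv and
	 * return non-zero to stop the walk ... */
	return 0;
}

static const struct mm_walk_ops ksm_walk_ops = {
	.pmd_entry = ksm_pmd_entry,
};

/* Caller: unmapped holes are skipped by the page-table walker
 * itself, instead of probing every page address individually. */
walk_page_range(mm, start_addr, vma->vm_end, &ksm_walk_ops, &priv);
```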
Can you also make sure to CC the reporter? So you might want to add

Reported-by: craftfever <craftfever@...mail.cc>
Closes: https://lkml.kernel.org/r/020cf8de6e773bb78ba7614ef250129f11a63781@murena.io

And if it was my suggestion:

Suggested-by: David Hildenbrand <david@...hat.com>

Not sure if we want a Fixes: tag ... we could have created gigantic
VMAs with an anon VMA for like ever, so it would date back quite a bit.
Please make sure to thoroughly compile- and runtime-test your changes.
--
Cheers
David / dhildenb