Message-ID: <be137610-65a7-4402-86d8-3d169e3ac064@gmail.com>
Date: Tue, 14 Oct 2025 18:57:54 -0300
From: Pedro Demarchi Gomes <pedrodemargomes@...il.com>
To: David Hildenbrand <david@...hat.com>,
 Andrew Morton <akpm@...ux-foundation.org>
Cc: Xu Xin <xu.xin16@....com.cn>, Chengming Zhou <chengming.zhou@...ux.dev>,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] ksm: use range-walk function to jump over holes in
 scan_get_next_rmap_item



On 10/14/25 12:59, David Hildenbrand wrote:
> On 14.10.25 17:11, Pedro Demarchi Gomes wrote:
>> Currently, scan_get_next_rmap_item() walks every page address in a VMA
>> to locate mergeable pages. This becomes highly inefficient when scanning
>> large virtual memory areas that contain mostly unmapped regions.
>>
>> This patch replaces the per-address lookup with a range walk using
>> walk_page_range(). The range walker allows KSM to skip over entire
>> unmapped holes in a VMA, avoiding unnecessary lookups.
>> This problem was previously discussed in [1].
>>
>> Changes since v1 [2]:
>> - Use pmd_entry to walk page range
>> - Use cond_resched inside pmd_entry()
>> - walk_page_range returns page+folio
>>
>> [1] https://lore.kernel.org/linux-mm/423de7a3-1c62-4e72-8e79-19a6413e420c@...hat.com/
>> [2] https://lore.kernel.org/linux-mm/20251014055828.124522-1-pedrodemargomes@...il.com/
>>
>> Signed-off-by: Pedro Demarchi Gomes <pedrodemargomes@...il.com>
>> ---
> 
> [...]
> 
>> +
>> +static int ksm_pmd_entry(pmd_t *pmd, unsigned long addr,
>> +                unsigned long end, struct mm_walk *walk)
>> +{
>> +    struct mm_struct *mm = walk->mm;
>> +    struct vm_area_struct *vma = walk->vma;
>> +    struct ksm_walk_private *private = (struct ksm_walk_private *) walk->private;
>> +    struct folio *folio;
>> +    pte_t *start_pte, *pte, ptent;
>> +    spinlock_t *ptl;
>> +    int ret = 0;
>> +
>> +    start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
>> +    if (!start_pte) {
>> +        ksm_scan.address = end;
>> +        return 0;
>> +    }
> 
> Please take more time to understand the details. If there is a THP there,
> you actually have to find the relevant page.
> 

Ok
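
I will handle the THP case before taking the PTE table lock, roughly
like this (untested sketch; pmd_trans_huge_lock() and
vm_normal_page_pmd() are the existing helpers, the rest follows the
patch's naming):

        ptl = pmd_trans_huge_lock(pmd, vma);
        if (ptl) {
                struct page *page = vm_normal_page_pmd(vma, addr, pmdp_get(pmd));

                if (page) {
                        /* move from the head page to the subpage mapping addr */
                        page += (addr & ~PMD_MASK) >> PAGE_SHIFT;
                        folio = page_folio(page);
                        if (!folio_is_zone_device(folio) && folio_test_anon(folio)) {
                                folio_get(folio);
                                private->page = page;
                                private->folio = folio;
                                private->vma = vma;
                                ret = 1;
                        }
                }
                spin_unlock(ptl);
                cond_resched();
                return ret;
        }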

>> +
>> +    for (; addr < end; pte++, addr += PAGE_SIZE) {
>> +        ptent = ptep_get(pte);
>> +        struct page *page = vm_normal_page(vma, addr, ptent);
>> +        ksm_scan.address = addr;
> 
> Updating that value from in here is a bit nasty. I wonder if you should
> rather make the function return the address of the found page as well.
> 
> In the caller, if we don't find any page, there is no need to update the
> address from this function, I guess; we iterated the complete MM space in
> that case.
> 

Ok
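
Makes sense. Something like this, maybe (untested sketch; the extra
"address" field is new, everything else follows the patch):

        struct ksm_walk_private {
                struct page *page;
                struct folio *folio;
                struct vm_area_struct *vma;
                unsigned long address;  /* filled by ksm_pmd_entry() on success */
        };

        /* in ksm_pmd_entry(), instead of writing ksm_scan.address: */
        private->address = addr;

        /* in the caller, only when a page was actually found: */
        if (walk_private.page)
                ksm_scan.address = walk_private.address;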

>> +
>> +        if (ksm_test_exit(mm)) {
>> +            ret = 1;
>> +            break;
>> +        }
>> +
>> +        if (!page)
>> +            continue;
>> +
>> +        folio = page_folio(page);
>> +        if (folio_is_zone_device(folio) || !folio_test_anon(folio))
>> +            continue;
>> +
>> +        ret = 1;
>> +        folio_get(folio);
>> +        private->page = page;
>> +        private->folio = folio;
>> +        private->vma = vma;
>> +        break;
>> +    }
>> +    pte_unmap_unlock(start_pte, ptl);
>> +
>> +    cond_resched();
>> +    return ret;
>> +}
>> +
>> +struct mm_walk_ops walk_ops = {
>> +    .pmd_entry = ksm_pmd_entry,
>> +    .test_walk = ksm_walk_test,
>> +    .walk_lock = PGWALK_RDLOCK,
>> +};
>> +
>>   static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>>   {
>>       struct mm_struct *mm;
>>       struct ksm_mm_slot *mm_slot;
>>       struct mm_slot *slot;
>> -    struct vm_area_struct *vma;
>>       struct ksm_rmap_item *rmap_item;
>> -    struct vma_iterator vmi;
>>       int nid;
>> 
>>       if (list_empty(&ksm_mm_head.slot.mm_node))
>> @@ -2527,64 +2595,40 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>>       slot = &mm_slot->slot;
>>       mm = slot->mm;
>> -    vma_iter_init(&vmi, mm, ksm_scan.address);
>>       mmap_read_lock(mm);
>>       if (ksm_test_exit(mm))
>>           goto no_vmas;
>> -    for_each_vma(vmi, vma) {
>> -        if (!(vma->vm_flags & VM_MERGEABLE))
>> -            continue;
>> -        if (ksm_scan.address < vma->vm_start)
>> -            ksm_scan.address = vma->vm_start;
>> -        if (!vma->anon_vma)
>> -            ksm_scan.address = vma->vm_end;
>> -
>> -        while (ksm_scan.address < vma->vm_end) {
>> -            struct page *tmp_page = NULL;
>> -            struct folio_walk fw;
>> -            struct folio *folio;
>> +get_page:
>> +    struct ksm_walk_private walk_private = {
>> +        .page = NULL,
>> +        .folio = NULL,
>> +        .vma = NULL
>> +    };
>> -            if (ksm_test_exit(mm))
>> -                break;
>> +    walk_page_range(mm, ksm_scan.address, -1, &walk_ops, (void *) &walk_private);
>> +    if (walk_private.page) {
>> +        flush_anon_page(walk_private.vma, walk_private.page, ksm_scan.address);
>> +        flush_dcache_page(walk_private.page);
> 
> Keep working on the folio please.
> 

Ok

>> +        rmap_item = get_next_rmap_item(mm_slot,
>> +            ksm_scan.rmap_list, ksm_scan.address);
>> +        if (rmap_item) {
>> +            ksm_scan.rmap_list =
>> +                    &rmap_item->rmap_list;
>> -            folio = folio_walk_start(&fw, vma, ksm_scan.address, 0);
>> -            if (folio) {
>> -                if (!folio_is_zone_device(folio) &&
>> -                     folio_test_anon(folio)) {
>> -                    folio_get(folio);
>> -                    tmp_page = fw.page;
>> -                }
>> -                folio_walk_end(&fw, vma);
>> +            ksm_scan.address += PAGE_SIZE;
>> +            if (should_skip_rmap_item(walk_private.folio, rmap_item)) {
>> +                folio_put(walk_private.folio);
>> +                goto get_page;
> 
> Can you make that a while() loop to avoid the label?
> 
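
Sure, roughly like this, I think (untested sketch; the error paths not
quoted above are abbreviated):

        while (true) {
                struct ksm_walk_private walk_private = {};

                walk_page_range(mm, ksm_scan.address, -1, &walk_ops,
                                (void *) &walk_private);
                if (!walk_private.page)
                        break;          /* whole mm walked, no candidate */

                flush_anon_page(walk_private.vma, walk_private.page,
                                ksm_scan.address);
                flush_dcache_page(walk_private.page);
                rmap_item = get_next_rmap_item(mm_slot, ksm_scan.rmap_list,
                                               ksm_scan.address);
                if (!rmap_item) {
                        folio_put(walk_private.folio);
                        break;
                }
                ksm_scan.rmap_list = &rmap_item->rmap_list;
                ksm_scan.address += PAGE_SIZE;
                if (!should_skip_rmap_item(walk_private.folio, rmap_item))
                        break;          /* found the next page to scan */
                folio_put(walk_private.folio);
        }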

I will make these corrections and send a v3. Thanks!


