Message-ID: <8f71e65c-a860-40ec-8570-5cf8f0f947d1@redhat.com>
Date: Tue, 21 Oct 2025 17:25:24 +0200
From: David Hildenbrand <david@...hat.com>
To: Pedro Demarchi Gomes <pedrodemargomes@...il.com>,
 Andrew Morton <akpm@...ux-foundation.org>
Cc: Xu Xin <xu.xin16@....com.cn>, craftfever <craftfever@...mail.cc>,
 Chengming Zhou <chengming.zhou@...ux.dev>, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] ksm: use range-walk function to jump over holes in
 scan_get_next_rmap_item

On 21.10.25 05:00, Pedro Demarchi Gomes wrote:
> 
> On 10/17/25 19:23, David Hildenbrand wrote:
> 
>> This patch does too much in a single patch, which makes it
>> rather hard to review.
>>
>> As a first step, we should focus on leaving most of
>> scan_get_next_rmap_item() alone and only focus on replacing
>> folio_walk by walk_page_range_vma().
>>
>> Follow-up cleanups could try cleaning up scan_get_next_rmap_item()
>> -- and boy oh boy, does that function scream for quite some cleanups.
>>
>> This is something minimal based on your v3. I applied plenty of more
>> cleanups and I wish we could further shrink the pmd_entry function,
>> but I have to give up for today (well, it's already tomorrow :) ).
> 
> Should I send a v4 to be applied on top of your minimal patch? This
> v4 would eliminate the need for the for_each_vma() by using the
> test_walk callback, like the previous versions did.

It would be good if you could test the rework I sent and see if you want 
to make any tweaks to it. It was a rather quick rework on my side.


Then resend that as v4, which is then minimal, and we can reasonably add 
Fixes: + Cc: stable.
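
(In the patch footer those are just the usual pair of trailers, e.g.

	Fixes: <offending commit> ("<its subject>")
	Cc: stable@vger.kernel.org

with the concrete Fixes line below.)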

Right from the start we used follow_page() on each individual address.

So likely best to add

	Fixes: 31dbd01f3143 ("ksm: Kernel SamePage Merging")
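
(For anyone following along: the gist of the fix is to let a page-table
walk skip whole empty PMDs at once instead of resolving every single
address the way follow_page()/folio_walk did. A rough sketch of that
shape, with made-up names and none of the real KSM bookkeeping:

#include <linux/pagewalk.h>
#include <linux/mm.h>

struct ksm_walk_private {
	struct page *page;	/* first anonymous page found */
	unsigned long addr;	/* address it was found at */
};

static int ksm_pmd_entry(pmd_t *pmd, unsigned long addr,
			 unsigned long end, struct mm_walk *walk)
{
	struct ksm_walk_private *priv = walk->private;
	pte_t *start_pte, *pte;
	spinlock_t *ptl;

	/* Empty or THP-mapped PMD: nothing for this sketch to do; the
	 * walker simply moves on by a whole PMD, which is the point. */
	if (pmd_none(*pmd) || pmd_trans_huge(*pmd))
		return 0;

	start_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	if (!pte)
		return 0;
	for (; addr < end; addr += PAGE_SIZE, pte++) {
		pte_t ptent = ptep_get(pte);
		struct page *page;

		if (!pte_present(ptent))
			continue;
		page = vm_normal_page(walk->vma, addr, ptent);
		if (!page || !PageAnon(page))
			continue;
		priv->page = page;
		priv->addr = addr;
		break;
	}
	pte_unmap_unlock(start_pte, ptl);

	/* A positive return value terminates the walk early. */
	return priv->page ? 1 : 0;
}

static const struct mm_walk_ops ksm_walk_ops = {
	.pmd_entry	= ksm_pmd_entry,
	.walk_lock	= PGWALK_RDLOCK,
};

/* Caller holds mmap_read_lock(mm); scan one VMA's remaining range. */
static struct page *ksm_next_page(struct vm_area_struct *vma,
				  unsigned long start, unsigned long end)
{
	struct ksm_walk_private priv = { .page = NULL };

	walk_page_range_vma(vma, start, end, &ksm_walk_ops, &priv);
	return priv.page;
}

The real patch still has to plug this into scan_get_next_rmap_item()
and handle the reference counting and locking that the sketch ignores,
but the skip-a-hole-at-PMD-granularity part above is the essence.)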

Once that fix is in, you can send further cleanups that are independent 
of the fix itself, like removing the for_each_vma() etc.
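
(Sketch of what that follow-up could look like: with a .test_walk
callback the walker itself rejects unsuitable VMAs, so a single
walk_page_range() over the mm can replace the explicit for_each_vma()
loop. Names are illustrative, not taken from any patch:

static int ksm_test_walk(unsigned long start, unsigned long end,
			 struct mm_walk *walk)
{
	struct vm_area_struct *vma = walk->vma;

	/* Returning 1 skips this VMA entirely; the VM_MERGEABLE check
	 * stands in for KSM's real eligibility test. */
	if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
		return 1;
	return 0;
}

static const struct mm_walk_ops ksm_mm_walk_ops = {
	.test_walk	= ksm_test_walk,
	.pmd_entry	= ksm_pmd_entry,	/* as in the earlier sketch */
	.walk_lock	= PGWALK_RDLOCK,
};

/* One call covers every remaining VMA in the mm; the caller holds
 * mmap_read_lock(mm), as for walk_page_range_vma(). */
static struct page *ksm_next_page_mm(struct mm_struct *mm,
				     unsigned long start)
{
	struct ksm_walk_private priv = { .page = NULL };

	walk_page_range(mm, start, TASK_SIZE, &ksm_mm_walk_ops, &priv);
	return priv.page;
}

Whether that cleanup is worth it is exactly what the follow-up series
can show.)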

-- 
Cheers

David / dhildenb

