lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <ff4jfxphz32fackvh2236an7575zhqnwntrx5ledudb4afu2ag@sk4vigyq5jif>
Date: Wed, 5 Nov 2025 10:32:23 -0300
From: Pedro Demarchi Gomes <pedrodemargomes@...il.com>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>
Cc: David Hildenbrand <david@...hat.com>, 
	Andrew Morton <akpm@...ux-foundation.org>, Xu Xin <xu.xin16@....com.cn>, 
	Chengming Zhou <chengming.zhou@...ux.dev>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/3] ksm: perform a range-walk in break_ksm

On Mon, Nov 03, 2025 at 06:06:26PM +0100, David Hildenbrand (Red Hat) wrote:
> On 31.10.25 18:46, Pedro Demarchi Gomes wrote:
> > Make break_ksm() receive an address range and change
> > break_ksm_pmd_entry() to perform a range-walk and return the address of
> > the first ksm page found.
> > 
> > This change allows break_ksm() to skip unmapped regions instead of
> > iterating every page address. When unmerging large sparse VMAs, this
> > significantly reduces runtime.
> > 
> > In a benchmark unmerging a 32 TiB sparse virtual address space where
> > only one page was populated, the runtime dropped from 9 minutes to less
> > than 5 seconds.
> > 
> > Suggested-by: David Hildenbrand <david@...hat.com>
> > Signed-off-by: Pedro Demarchi Gomes <pedrodemargomes@...il.com>
> > ---
> >   mm/ksm.c | 88 ++++++++++++++++++++++++++++++--------------------------
> >   1 file changed, 48 insertions(+), 40 deletions(-)
> > 
> > diff --git a/mm/ksm.c b/mm/ksm.c
> > index 922d2936e206..64d66699133d 100644
> > --- a/mm/ksm.c
> > +++ b/mm/ksm.c
> > @@ -607,35 +607,55 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
> >   	return atomic_read(&mm->mm_users) == 0;
> >   }
> > -static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
> > +struct break_ksm_arg {
> > +	unsigned long addr;
> > +};
> 
> Leftover? :)
> 

Yes, I am sorry.
I will remove it in v3.

> > +
> > +static int break_ksm_pmd_entry(pmd_t *pmdp, unsigned long addr, unsigned long end,
> >   			struct mm_walk *walk)
> >   {
> > -	struct folio *folio = NULL;
> > +	unsigned long *found_addr = (unsigned long *) walk->private;
> > +	struct mm_struct *mm = walk->mm;
> > +	pte_t *start_ptep, *ptep;
> >   	spinlock_t *ptl;
> > -	pte_t *pte;
> > -	pte_t ptent;
> > -	int ret;
> > +	int found = 0;
> 
> Best to perform the ret -> found rename already in patch #1.
>

Ok

> With both things
> 
> Acked-by: David Hildenbrand (Red Hat) <david@...nel.org>

Thanks!

> -- 
> Cheers
> 
> David
> 
