Message-ID: <CACZaFFOvDad09MUopairAoAjZG6X5gffMaQbnfy0sCHGz8xSfg@mail.gmail.com>
Date: Wed, 31 Dec 2025 18:57:28 +0800
From: Vernon Yang <vernon2gm@...il.com>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>
Cc: akpm@...ux-foundation.org, lorenzo.stoakes@...cle.com, ziy@...dia.com, 
	dev.jain@....com, baohua@...nel.org, lance.yang@...ux.dev, 
	richard.weiyang@...il.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	Vernon Yang <yanglincheng@...inos.cn>
Subject: Re: [PATCH v2 4/4] mm: khugepaged: set to next mm direct when mm has MMF_DISABLE_THP_COMPLETELY

On Wed, Dec 31, 2025 at 4:03 AM David Hildenbrand (Red Hat)
<david@...nel.org> wrote:
>
> On 12/29/25 06:51, Vernon Yang wrote:
> > When an mm with the MMF_DISABLE_THP_COMPLETELY flag is detected during
> > scanning, directly set khugepaged_scan.mm_slot to the next mm_slot to
> > reduce redundant operations.
> >
> > Signed-off-by: Vernon Yang <yanglincheng@...inos.cn>
> > ---
> >   mm/khugepaged.c | 9 +++++++--
> >   1 file changed, 7 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 2b3685b195f5..72be87ef384b 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -2439,6 +2439,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
> >
> >               cond_resched();
> >               if (unlikely(hpage_collapse_test_exit_or_disable(mm))) {
> > +                     vma = NULL;
> >                       progress++;
> >                       break;
> >               }
>
> I don't understand why we need changes at all.
>
> The code is
>
>         mm = slot->mm;
>         /*
>          * Don't wait for semaphore (to avoid long wait times).  Just move to
>          * the next mm on the list.
>          */
>         vma = NULL;
>         if (unlikely(!mmap_read_trylock(mm)))
>                 goto breakouterloop_mmap_lock;
>
>         progress++;
>         if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
>                 goto breakouterloop;
>
>         ...
>
> So we'll go straight to breakouterloop with vma=NULL.
>
> Do you want to optimize for skipping the MM if the flag gets toggled
> while we are scanning that MM?

Yes.

> Is that really something we should be worrying about?

It just reduces a redundant operation.

Before this optimization, the next entry into khugepaged_scan_mm_slot() sees
vma = NULL and only then advances khugepaged_scan.mm_slot to the next mm_slot.
After it, khugepaged_scan.mm_slot is advanced to the next mm_slot directly.

> Also, why can't we simply do a
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 97d1b2824386f..af8481d4b0f4e 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2516,7 +2516,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>           * Release the current mm_slot if this mm is about to die, or
>           * if we scanned all vmas of this mm.
>           */
> -       if (hpage_collapse_test_exit(mm) || !vma) {
> +       if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
>                  /*
>                   * Make sure that if mm_users is reaching zero while
>                   * khugepaged runs here, khugepaged_exit will find
>

Sounds good to me. Thank you for your review and suggestion; I will do it in
the next version.

--
Thanks,
Vernon
