Message-ID: <CAHbLzkpOQrshbBDfHjGVDbpcfdV7hgiwav6TyaZTzm_cq4PeDw@mail.gmail.com>
Date:   Fri, 10 Jun 2022 15:03:53 -0700
From:   Yang Shi <shy828301@...il.com>
To:     "Zach O'Keefe" <zokeefe@...gle.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Matthew Wilcox <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [v3 PATCH 4/7] mm: khugepaged: use transhuge_vma_suitable replace open-code

On Fri, Jun 10, 2022 at 9:59 AM Yang Shi <shy828301@...il.com> wrote:
>
> On Thu, Jun 9, 2022 at 6:52 PM Zach O'Keefe <zokeefe@...gle.com> wrote:
> >
> > On Mon, Jun 6, 2022 at 2:44 PM Yang Shi <shy828301@...il.com> wrote:
> > >
> > > hugepage_vma_revalidate() needs to check whether the address is still
> > > within the aligned HPAGE_PMD_SIZE area of the vma when reacquiring
> > > mmap_lock, but this was open-coded; use transhuge_vma_suitable() to do
> > > the job instead.  Also add proper comments for transhuge_vma_suitable().
> > >
> > > Signed-off-by: Yang Shi <shy828301@...il.com>
> > > ---
> > >  include/linux/huge_mm.h | 6 ++++++
> > >  mm/khugepaged.c         | 5 +----
> > >  2 files changed, 7 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > > index a8f61db47f2a..79d5919beb83 100644
> > > --- a/include/linux/huge_mm.h
> > > +++ b/include/linux/huge_mm.h
> > > @@ -128,6 +128,12 @@ static inline bool transhuge_vma_size_ok(struct vm_area_struct *vma)
> > >         return false;
> > >  }
> > >
> > > +/*
> > > + * Do the below checks:
> > > + *   - For non-anon vma, check if the vm_pgoff is HPAGE_PMD_NR aligned.
> > > + *   - For all vmas, check if the haddr is in an aligned HPAGE_PMD_SIZE
> > > + *     area.
> > > + */
> >
> > AFAIK we aren't checking if vm_pgoff is HPAGE_PMD_NR aligned, but
> > rather that linear_page_index(vma, round_up(vma->vm_start,
> > HPAGE_PMD_SIZE)) is HPAGE_PMD_NR aligned within vma->vm_file. I was
>
> Yeah, you are right.
>
> > pretty confused about this (hopefully I have it right now - if not -
> > case in point :) ), so it might be a good opportunity to add some
> > extra commentary to help future travelers understand why this
> > constraint exists.
>
> I'm not sure I understand this 100%. I think this is related to
> how page cache is structured. I will try to add more comments.

How's about "The underlying THP is always properly aligned in page
cache, but it may be across the boundary of VMA if the VMA is
misaligned, so the THP can't be PMD mapped for this case."
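
To make that constraint concrete, the non-anon check in
transhuge_vma_suitable() boils down to something like the below (just a
minimal sketch on my side, assuming the helper keeps roughly its current
shape; this is not a hunk from this series):

	/*
	 * A file THP is always HPAGE_PMD_NR aligned in the page cache, so
	 * it can only be PMD mapped if the VMA maps file offsets to
	 * virtual addresses without shifting them relative to their
	 * PMD-aligned positions, i.e. vm_start and vm_pgoff must be
	 * congruent modulo HPAGE_PMD_NR pages.
	 */
	if (!vma_is_anonymous(vma) &&
	    !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
			HPAGE_PMD_NR))
		return false;

For example, with HPAGE_PMD_NR = 512, mapping file offset 0 at
vm_start = 0x201000 puts page cache index 0 at a virtual address that is
not HPAGE_PMD_SIZE aligned, so no single PMD can cover that THP and the
check has to fail.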

>
> >
> > Also I wonder while we're at it if we can rename this to
> > transhuge_addr_aligned() or transhuge_addr_suitable() or something.
>
> I think it is still actually used to check the vma.
>
> >
> > Otherwise I think the change is a nice cleanup.
> >
> > >  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
> > >                 unsigned long addr)
> > >  {
> > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > index 7a5d1c1a1833..ca1754d3a827 100644
> > > --- a/mm/khugepaged.c
> > > +++ b/mm/khugepaged.c
> > > @@ -951,7 +951,6 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> > >                 struct vm_area_struct **vmap)
> > >  {
> > >         struct vm_area_struct *vma;
> > > -       unsigned long hstart, hend;
> > >
> > >         if (unlikely(khugepaged_test_exit(mm)))
> > >                 return SCAN_ANY_PROCESS;
> > > @@ -960,9 +959,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> > >         if (!vma)
> > >                 return SCAN_VMA_NULL;
> > >
> > > -       hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
> > > -       hend = vma->vm_end & HPAGE_PMD_MASK;
> > > -       if (address < hstart || address + HPAGE_PMD_SIZE > hend)
> > > +       if (!transhuge_vma_suitable(vma, address))
> > >                 return SCAN_ADDRESS_RANGE;
> > >         if (!hugepage_vma_check(vma, vma->vm_flags))
> > >                 return SCAN_VMA_CHECK;
> > > --
> > > 2.26.3
> > >
> > >
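For the PMD aligned addresses khugepaged passes in, the helper's range
test should agree with the open-coded hstart/hend check removed above.
A quick worked example (2M HPAGE_PMD_SIZE assumed, and assuming
transhuge_vma_suitable() keeps comparing the aligned address directly
against vm_start/vm_end):

	/* vm_start = 0x1ff000, vm_end = 0x600000, address = 0x200000 */
	hstart = (0x1ff000 + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;	/* 0x200000 */
	hend   =  0x600000 & HPAGE_PMD_MASK;			/* 0x600000 */
	/* old: address >= hstart && address + HPAGE_PMD_SIZE <= hend -> pass */
	/* new: address >= vm_start && address + HPAGE_PMD_SIZE <= vm_end -> pass */

Since address is a multiple of HPAGE_PMD_SIZE, address >= vm_start
implies address >= hstart, and address + HPAGE_PMD_SIZE <= vm_end
implies it is <= hend as well, so the two range tests can't disagree for
khugepaged's callers.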
