Message-ID: <CAK1f24=vNputsQDFuceaYLenQXYTLJDPzsoD9bhNC1ey=b-+Dw@mail.gmail.com>
Date: Thu, 9 May 2024 20:17:05 +0800
From: Lance Yang <ioworker0@...il.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: akpm@...ux-foundation.org, willy@...radead.org, sj@...nel.org, 
	maskray@...gle.com, ziy@...dia.com, ryan.roberts@....com, david@...hat.com, 
	21cnbao@...il.com, mhocko@...e.com, fengwei.yin@...el.com, zokeefe@...gle.com, 
	shy828301@...il.com, xiehuan09@...il.com, libang.li@...group.com, 
	wangkefeng.wang@...wei.com, songmuchun@...edance.com, peterx@...hat.com, 
	minchan@...nel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 3/3] mm/vmscan: avoid split lazyfree THP during shrink_folio_list()

On Thu, May 9, 2024 at 5:36 PM Baolin Wang
<baolin.wang@...ux.alibaba.com> wrote:
>
>
>
> On 2024/5/7 19:37, Lance Yang wrote:
> > On Tue, May 7, 2024 at 5:33 PM Baolin Wang
> > <baolin.wang@...ux.alibaba.com> wrote:
> >>
> >>
> >>
> >> On 2024/5/7 16:26, Lance Yang wrote:
> >>> On Tue, May 7, 2024 at 2:32 PM Lance Yang <ioworker0@...il.com> wrote:
> >>>>
> >>>> Hey Baolin,
> >>>>
> >>>> Thanks a lot for taking time to review!
> >>>>
> >>>> On Tue, May 7, 2024 at 12:01 PM Baolin Wang
> >>>> <baolin.wang@...ux.alibaba.com> wrote:
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 2024/5/1 12:27, Lance Yang wrote:
> >>>>>> When the user no longer requires the pages, they would use
> >>>>>> madvise(MADV_FREE) to mark the pages as lazy free. Subsequently, they
> >>>>>> typically would not re-write to that memory again.
> >>>>>>
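(For anyone unfamiliar with the lazyfree pattern described above, here is a
minimal userspace sketch; it elides the alignment and MADV_HUGEPAGE setup
needed to reliably get a PMD-mapped THP, so treat it as illustrative only:)

#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2UL << 20;	/* one PMD-sized (2MiB on x86-64) region */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	memset(buf, 1, len);	/* populate the memory */

	/*
	 * Mark the range as lazily freeable. If we never write to it
	 * again, reclaim may discard these pages instead of swapping
	 * them out.
	 */
	madvise(buf, len, MADV_FREE);
	return 0;
}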
> >>>>>> During memory reclaim, if we detect that the large folio and its PMD
> >>>>>> are both still marked as clean and there are no unexpected references
> >>>>>> (such as GUP), we can just discard the memory lazily, improving the
> >>>>>> efficiency of memory reclamation in this case.
> >>>>>>
> >>>>>> On an Intel i5 CPU, reclaiming 1GiB of lazyfree THPs using
> >>>>>> mem_cgroup_force_empty() results in the following runtimes in seconds
> >>>>>> (shorter is better):
> >>>>>>
> >>>>>> --------------------------------------------
> >>>>>> |     Old       |      New       |  Change  |
> >>>>>> --------------------------------------------
> >>>>>> |   0.683426    |    0.049197    |  -92.80% |
> >>>>>> --------------------------------------------
> >>>>>>
> >>>>>> Suggested-by: Zi Yan <ziy@...dia.com>
> >>>>>> Suggested-by: David Hildenbrand <david@...hat.com>
> >>>>>> Signed-off-by: Lance Yang <ioworker0@...il.com>
> >>>>>> ---
> >>>>>>     include/linux/huge_mm.h |  9 +++++
> >>>>>>     mm/huge_memory.c        | 73 +++++++++++++++++++++++++++++++++++++++++
> >>>>>>     mm/rmap.c               |  3 ++
> >>>>>>     3 files changed, 85 insertions(+)
> >>>>>>
> >>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >>>>>> index 38c4b5537715..017cee864080 100644
> >>>>>> --- a/include/linux/huge_mm.h
> >>>>>> +++ b/include/linux/huge_mm.h
> >>>>>> @@ -411,6 +411,8 @@ static inline bool thp_migration_supported(void)
> >>>>>>
> >>>>>>     void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
> >>>>>>                            pmd_t *pmd, bool freeze, struct folio *folio);
> >>>>>> +bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
> >>>>>> +                        pmd_t *pmdp, struct folio *folio);
> >>>>>>
> >>>>>>     static inline void align_huge_pmd_range(struct vm_area_struct *vma,
> >>>>>>                                         unsigned long *start,
> >>>>>> @@ -492,6 +494,13 @@ static inline void align_huge_pmd_range(struct vm_area_struct *vma,
> >>>>>>                                         unsigned long *start,
> >>>>>>                                         unsigned long *end) {}
> >>>>>>
> >>>>>> +static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
> >>>>>> +                                      unsigned long addr, pmd_t *pmdp,
> >>>>>> +                                      struct folio *folio)
> >>>>>> +{
> >>>>>> +     return false;
> >>>>>> +}
> >>>>>> +
> >>>>>>     #define split_huge_pud(__vma, __pmd, __address)     \
> >>>>>>         do { } while (0)
> >>>>>>
> >>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >>>>>> index 145505a1dd05..90fdef847a88 100644
> >>>>>> --- a/mm/huge_memory.c
> >>>>>> +++ b/mm/huge_memory.c
> >>>>>> @@ -2690,6 +2690,79 @@ static void unmap_folio(struct folio *folio)
> >>>>>>         try_to_unmap_flush();
> >>>>>>     }
> >>>>>>
> >>>>>> +static bool __discard_trans_pmd_locked(struct vm_area_struct *vma,
> >>>>>> +                                    unsigned long addr, pmd_t *pmdp,
> >>>>>> +                                    struct folio *folio)
> >>>>>> +{
> >>>>>> +     struct mm_struct *mm = vma->vm_mm;
> >>>>>> +     int ref_count, map_count;
> >>>>>> +     pmd_t orig_pmd = *pmdp;
> >>>>>> +     struct mmu_gather tlb;
> >>>>>> +     struct page *page;
> >>>>>> +
> >>>>>> +     if (pmd_dirty(orig_pmd) || folio_test_dirty(folio))
> >>>>>> +             return false;
> >>>>>> +     if (unlikely(!pmd_present(orig_pmd) || !pmd_trans_huge(orig_pmd)))
> >>>>>> +             return false;
> >>>>>> +
> >>>>>> +     page = pmd_page(orig_pmd);
> >>>>>> +     if (unlikely(page_folio(page) != folio))
> >>>>>> +             return false;
> >>>>>> +
> >>>>>> +     tlb_gather_mmu(&tlb, mm);
> >>>>>> +     orig_pmd = pmdp_huge_get_and_clear(mm, addr, pmdp);
> >>>>>> +     tlb_remove_pmd_tlb_entry(&tlb, pmdp, addr);
> >>>>>> +
> >>>>>> +     /*
> >>>>>> +      * Syncing against concurrent GUP-fast:
> >>>>>> +      * - clear PMD; barrier; read refcount
> >>>>>> +      * - inc refcount; barrier; read PMD
> >>>>>> +      */
> >>>>>> +     smp_mb();
> >>>>>> +
> >>>>>> +     ref_count = folio_ref_count(folio);
> >>>>>> +     map_count = folio_mapcount(folio);
> >>>>>> +
> >>>>>> +     /*
> >>>>>> +      * Order reads for folio refcount and dirty flag
> >>>>>> +      * (see comments in __remove_mapping()).
> >>>>>> +      */
> >>>>>> +     smp_rmb();
> >>>>>> +
> >>>>>> +     /*
> >>>>>> +      * If the PMD or folio is redirtied at this point, or if there are
> >>>>>> +      * unexpected references, we give up discarding this folio and
> >>>>>> +      * remap it instead.
> >>>>>> +      *
> >>>>>> +      * The only expected folio refs are one from isolation plus the rmap(s).
> >>>>>> +      */
> >>>>>> +     if (ref_count != map_count + 1 || folio_test_dirty(folio) ||
> >>>>>> +         pmd_dirty(orig_pmd)) {
> >>>>>> +             set_pmd_at(mm, addr, pmdp, orig_pmd);
> >>>>>> +             return false;
> >>>>>> +     }
> >>>>>> +
> >>>>>> +     folio_remove_rmap_pmd(folio, page, vma);
> >>>>>> +     zap_deposited_table(mm, pmdp);
> >>>>>> +     add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
> >>>>>> +     folio_put(folio);
> >>>>>
> >>>>> IIUC, you missed handling the mlock vma case; see mlock_drain_local()
> >>>>> in try_to_unmap_one().
> >>>>
> >>>> Good spot!
> >>>>
> >>>> I suddenly realized that I overlooked another thing: If we detect that a
> >>>> PMD-mapped THP is within the range of a VM_LOCKED VMA, we
> >>>> should check whether the TTU_IGNORE_MLOCK flag is set in
> >>>> try_to_unmap_one(). If the flag is set, we will remove the PMD mapping
> >>>> from the folio. Otherwise, the folio should be mlocked, which avoids
> >>>> splitting the folio and then mlocking each page again.
> >>>
> >>> My previous response above is flawed - sorry :(
> >>>
> >>> If we detect that a PMD-mapped THP is within the range of a
> >>> VM_LOCKED VMA:
> >>>
> >>> 1) If the TTU_IGNORE_MLOCK flag is set, we will try to remove the
> >>> PMD mapping from the folio, as this series has done.
> >>
> >> Right.
> >>
> >>> 2) If the flag is not set, the large folio should be mlocked to prevent it
> >>> from being picked during memory reclaim? Currently, we just leave it
> >>
> >> Yes. Since commit 1acbc3f93614 ("mm: handle large folio when large folio
> >> in VM_LOCKED VMA range"), large folios in a mlocked VMA are handled
> >> during the page reclaim phase.
> >>
> >>> as is and do not mlock it, IIUC.
> >>
> >> The original code already handles the mlock case after the PMD-mapped
> >> THP is split in try_to_unmap_one():
> >
> > Yep. But this series no longer performs the TTU_SPLIT_HUGE_PMD split immediately.
> >
> >>                   /*
> >>                    * If the folio is in an mlock()d vma, we must not swap
> >>                    * it out.
> >>                    */
> >>                   if (!(flags & TTU_IGNORE_MLOCK) &&
> >>                       (vma->vm_flags & VM_LOCKED)) {
> >>                           /* Restore the mlock which got missed */
> >
> > IIUC, we could detect a PMD-mapped THP here, so I'm wondering whether we
> > need to mlock it to prevent it from being picked again during memory
> > reclaim. The change is as follows:
>
> For the page reclaim path, folio_check_references() should be able to
> help restore the mlock of the PMD-mapped THP. However, for other paths

Understood, thanks for clarifying!

> that call try_to_unmap(), I believe it is still necessary to check
> whether the mlock of the PMD-mapped THP was missed.

Yeah, agreed!

The TTU_SPLIT_HUGE_PMD split will no longer be performed immediately, so we
might encounter a PMD-mapped THP that missed the mlock in a VM_LOCKED
range during the pagewalk. It's likely necessary to mlock this THP to prevent
it from being picked up during page reclaim.

Given this, I'll include the change below in the next version.

>
> The code below looks reasonable to me from a quick glance.

Thanks again for the review!
Lance

>
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index ed7f82036986..2a9d037ab23c 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1673,7 +1673,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >                  if (!(flags & TTU_IGNORE_MLOCK) &&
> >                      (vma->vm_flags & VM_LOCKED)) {
> >                          /* Restore the mlock which got missed */
> > -                       if (!folio_test_large(folio))
> > +                       if (!folio_test_large(folio) ||
> > +                           (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >                                  mlock_vma_folio(folio, vma);
> >                          goto walk_done_err;
> >                  }
> >
>
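(Side note on the diff above: if I read page_vma_mapped_walk() correctly,
!pvmw.pte means the walk stopped at a PMD-level mapping, so the added
condition restores the mlock for a PMD-mapped THP when the caller asked
for TTU_SPLIT_HUGE_PMD, rather than only handling small folios.)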
