Message-ID: <CAK1f24nT429ZdEcFUv0r0Sbihjum-Z8ghoYxML=tzaDpVWZk-w@mail.gmail.com>
Date: Fri, 7 Jun 2024 09:50:49 +0800
From: Lance Yang <ioworker0@...il.com>
To: David Hildenbrand <david@...hat.com>
Cc: akpm@...ux-foundation.org, willy@...radead.org, sj@...nel.org,
baolin.wang@...ux.alibaba.com, maskray@...gle.com, ziy@...dia.com,
ryan.roberts@....com, 21cnbao@...il.com, mhocko@...e.com,
fengwei.yin@...el.com, zokeefe@...gle.com, shy828301@...il.com,
xiehuan09@...il.com, libang.li@...group.com, wangkefeng.wang@...wei.com,
songmuchun@...edance.com, peterx@...hat.com, minchan@...nel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 2/3] mm/rmap: integrate PMD-mapped folio splitting into
pagewalk loop
On Thu, Jun 6, 2024 at 5:41 PM David Hildenbrand <david@...hat.com> wrote:
>
> On 06.06.24 11:38, Lance Yang wrote:
> > On Thu, Jun 6, 2024 at 4:06 PM David Hildenbrand <david@...hat.com> wrote:
> >>
> >> On 06.06.24 10:01, David Hildenbrand wrote:
> >>> On 06.06.24 05:55, Lance Yang wrote:
> >>>> On Wed, Jun 5, 2024 at 10:28 PM David Hildenbrand <david@...hat.com> wrote:
> >>>>>
> >>>>> On 05.06.24 16:20, Lance Yang wrote:
> >>>>>> Hi David,
> >>>>>>
> >>>>>> On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@...hat.com> wrote:
> >>>>>>>
> >>>>>>> On 21.05.24 06:02, Lance Yang wrote:
> >>>>>>>> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
> >>>>>>>> folios, start the pagewalk first, then call split_huge_pmd_address() to
> >>>>>>>> split the folio.
> >>>>>>>>
> >>>>>>>> Since TTU_SPLIT_HUGE_PMD will no longer be performed immediately, we
> >>>>>>>> might encounter a PMD-mapped THP that is missing the mlock in a
> >>>>>>>> VM_LOCKED range during the page walk. It’s probably necessary to mlock
> >>>>>>>> this THP to prevent it from being picked up during page reclaim.
> >>>>>>>>
> >>>>>>>> Suggested-by: David Hildenbrand <david@...hat.com>
> >>>>>>>> Suggested-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> >>>>>>>> Signed-off-by: Lance Yang <ioworker0@...il.com>
> >>>>>>>> ---
> >>>>>>>
> >>>>>>> [...] again, sorry for the late review.
> >>>>>>
> >>>>>> No worries at all, thanks for taking time to review!
> >>>>>>
> >>>>>>>
> >>>>>>>> diff --git a/mm/rmap.c b/mm/rmap.c
> >>>>>>>> index ddffa30c79fb..08a93347f283 100644
> >>>>>>>> --- a/mm/rmap.c
> >>>>>>>> +++ b/mm/rmap.c
> >>>>>>>> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>>>>> if (flags & TTU_SYNC)
> >>>>>>>> pvmw.flags = PVMW_SYNC;
> >>>>>>>>
> >>>>>>>> - if (flags & TTU_SPLIT_HUGE_PMD)
> >>>>>>>> - split_huge_pmd_address(vma, address, false, folio);
> >>>>>>>> -
> >>>>>>>> /*
> >>>>>>>> * For THP, we have to assume the worse case ie pmd for invalidation.
> >>>>>>>> * For hugetlb, it could be much worse if we need to do pud
> >>>>>>>> @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>>>>> mmu_notifier_invalidate_range_start(&range);
> >>>>>>>>
> >>>>>>>> while (page_vma_mapped_walk(&pvmw)) {
> >>>>>>>> - /* Unexpected PMD-mapped THP? */
> >>>>>>>> - VM_BUG_ON_FOLIO(!pvmw.pte, folio);
> >>>>>>>> -
> >>>>>>>> /*
> >>>>>>>> * If the folio is in an mlock()d vma, we must not swap it out.
> >>>>>>>> */
> >>>>>>>> if (!(flags & TTU_IGNORE_MLOCK) &&
> >>>>>>>> (vma->vm_flags & VM_LOCKED)) {
> >>>>>>>> /* Restore the mlock which got missed */
> >>>>>>>> - if (!folio_test_large(folio))
> >>>>>>>> + if (!folio_test_large(folio) ||
> >>>>>>>> + (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >>>>>>>> mlock_vma_folio(folio, vma);
> >
> > Should we still keep the '!pvmw.pte' here? Something like:
> >
> > if (!folio_test_large(folio) || !pvmw.pte)
> > mlock_vma_folio(folio, vma);
>
> I was wondering the same the whole time ...
>
> >
> > We can mlock the THP to prevent it from being picked up during page reclaim.
> >
> > David, I’d like to hear your thoughts on this ;)
>
> but I think there is no need to do that for now, in the context of your patchset. :)
Agreed. Let's drop it for now :)
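
For reference, here is the check I'll keep in the next version -- it is
essentially the hunk above, just pulled out for clarity rather than the
exact context of the tree:

	/* Restore the mlock which got missed */
	if (!folio_test_large(folio) ||
	    (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
		mlock_vma_folio(folio, vma);

That is, we only mlock a PMD-mapped THP when we are about to split it
(TTU_SPLIT_HUGE_PMD), and leave the plain '!pvmw.pte' variant for a
follow-up if it ever turns out to be needed.
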
Thanks a lot for your thoughts!
Lance
>
> --
> Cheers,
>
> David / dhildenb
>