Message-ID: <CAK1f24nn1Ypxi2vxOzHEje=YG71=REd-QXqxA51pJ+dSqqcwQg@mail.gmail.com>
Date: Thu, 7 Mar 2024 23:08:37 +0800
From: Lance Yang <ioworker0@...il.com>
To: David Hildenbrand <david@...hat.com>
Cc: Ryan Roberts <ryan.roberts@....com>, Barry Song <21cnbao@...il.com>,
Vishal Moola <vishal.moola@...il.com>, akpm@...ux-foundation.org, zokeefe@...gle.com,
shy828301@...il.com, mhocko@...e.com, fengwei.yin@...el.com,
xiehuan09@...il.com, wangkefeng.wang@...wei.com, songmuchun@...edance.com,
peterx@...hat.com, minchan@...nel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/1] mm/madvise: enhance lazyfreeing with mTHP in madvise_free
Thanks a lot, David!
Got it. I'll do my best.
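
Here's a rough sketch of the v3 flow I'm considering (completely
untested; mkold_clean_ptes() and the extended folio_pte_batch() below
are only the placeholders you suggest further down):

	if (folio_test_large(folio)) {
		bool any_young = false, any_dirty = false;

		/* Batch as loosely as possible to minimize splitting. */
		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
				     fpb_flags, NULL, &any_young, &any_dirty);

		/* Skip the whole batch if the folio may be shared. */
		if (folio_likely_mapped_shared(folio) || !folio_trylock(folio))
			goto skip;

		/* Proceed only if the batch covers every mapping. */
		if (nr != folio_mapcount(folio)) {
			folio_unlock(folio);
			goto skip;
		}
		folio_clear_dirty(folio);
		folio_unlock(folio);

		/* Batch-clear young/dirty without unfolding CONT-PTEs. */
		if (any_young || any_dirty)
			mkold_clean_ptes(mm, addr, pte, nr);
	}
skip:
	/* advance past the nr batched PTEs; TLB handling elided here */
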
Thanks,
Lance
On Thu, Mar 7, 2024 at 10:58 PM David Hildenbrand <david@...hat.com> wrote:
>
> On 07.03.24 15:41, Lance Yang wrote:
> > Hey Barry, Ryan, David,
> >
> > Thanks a lot for taking the time to explain and provide suggestions!
> > I really appreciate your time!
> >
> > IIUC, here's what we need to do for v3:
> >
> > If folio_likely_mapped_shared() is true, or if we cannot acquire
> > the folio lock, we simply skip the batched PTEs. Then, we compare
> > the number of batched PTEs against folio_mapcount(). Finally,
> > batch-update the access and dirty only.
> >
> > I'm not sure if I've understood correctly, could you please confirm?
> >
> > Thanks,
> > Lance
> >
> > On Thu, Mar 7, 2024 at 7:17 PM David Hildenbrand <david@...hat.com> wrote:
> >>
> >> On 07.03.24 12:13, Ryan Roberts wrote:
> >>> On 07/03/2024 10:54, David Hildenbrand wrote:
> >>>> On 07.03.24 11:54, David Hildenbrand wrote:
> >>>>> On 07.03.24 11:50, Ryan Roberts wrote:
> >>>>>> On 07/03/2024 09:33, Barry Song wrote:
> >>>>>>> On Thu, Mar 7, 2024 at 10:07 PM Ryan Roberts <ryan.roberts@....com> wrote:
> >>>>>>>>
> >>>>>>>> On 07/03/2024 08:10, Barry Song wrote:
> >>>>>>>>> On Thu, Mar 7, 2024 at 9:00 PM Lance Yang <ioworker0@...il.com> wrote:
> >>>>>>>>>>
> >>>>>>>>>> Hey Barry,
> >>>>>>>>>>
> >>>>>>>>>> Thanks for taking time to review!
> >>>>>>>>>>
> >>>>>>>>>> On Thu, Mar 7, 2024 at 3:00 PM Barry Song <21cnbao@...il.com> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> On Thu, Mar 7, 2024 at 7:15 PM Lance Yang <ioworker0@...il.com> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>> [...]
> >>>>>>>>>>>> +static inline bool can_mark_large_folio_lazyfree(unsigned long addr,
> >>>>>>>>>>>> +						 struct folio *folio,
> >>>>>>>>>>>> +						 pte_t *start_pte)
> >>>>>>>>>>>> +{
> >>>>>>>>>>>> +	int nr_pages = folio_nr_pages(folio);
> >>>>>>>>>>>> +	fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> >>>>>>>>>>>> +
> >>>>>>>>>>>> +	for (int i = 0; i < nr_pages; i++)
> >>>>>>>>>>>> +		if (page_mapcount(folio_page(folio, i)) != 1)
> >>>>>>>>>>>> +			return false;
> >>>>>>>>>>>
> >>>>>>>>>>> we have moved to folio_estimated_sharers; though it is not precise,
> >>>>>>>>>>> it means we don't do this check with lots of loops depending on
> >>>>>>>>>>> each subpage's mapcount.
> >>>>>>>>>>
> >>>>>>>>>> If we don't check the subpage’s mapcount, and there is a cow folio
> >>>>>>>>>> associated with this folio and the cow folio has a smaller size than
> >>>>>>>>>> this folio, should we still mark this folio as lazyfree?
> >>>>>>>>>
> >>>>>>>>> I agree, this is true. However, we've somehow accepted the fact that
> >>>>>>>>> folio_likely_mapped_shared can result in false negatives or false
> >>>>>>>>> positives to balance the overhead. So I really don't know :-)
> >>>>>>>>>
> >>>>>>>>> Maybe David and Vishal can give some comments here.
> >>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>> BTW, do we need to rebase our work against David's changes[1]?
> >>>>>>>>>>> [1] https://lore.kernel.org/linux-mm/20240227201548.857831-1-david@redhat.com/
> >>>>>>>>>>
> >>>>>>>>>> Yes, we should rebase our work against David’s changes.
> >>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>> +
> >>>>>>>>>>>> +	return nr_pages == folio_pte_batch(folio, addr, start_pte,
> >>>>>>>>>>>> +					   ptep_get(start_pte), nr_pages,
> >>>>>>>>>>>> +					   flags, NULL);
> >>>>>>>>>>>> +}
> >>>>>>>>>>>> +
> >>>>>>>>>>>>  static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >>>>>>>>>>>>  				  unsigned long end, struct mm_walk *walk)
> >>>>>>>>>>>>
> >>>>>>>>>>>> @@ -676,11 +690,45 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >>>>>>>>>>>>  		 */
> >>>>>>>>>>>>  		if (folio_test_large(folio)) {
> >>>>>>>>>>>>  			int err;
> >>>>>>>>>>>> +			unsigned long next_addr, align;
> >>>>>>>>>>>>
> >>>>>>>>>>>> -			if (folio_estimated_sharers(folio) != 1)
> >>>>>>>>>>>> -				break;
> >>>>>>>>>>>> -			if (!folio_trylock(folio))
> >>>>>>>>>>>> -				break;
> >>>>>>>>>>>> +			if (folio_estimated_sharers(folio) != 1 ||
> >>>>>>>>>>>> +			    !folio_trylock(folio))
> >>>>>>>>>>>> +				goto skip_large_folio;
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> I don't think we can skip all the PTEs for nr_pages, as some of them
> >>>>>>>>>>> might be pointing to other folios.
> >>>>>>>>>>>
> >>>>>>>>>>> For example, for a large folio with 16 PTEs, if you do MADV_DONTNEED(15-16)
> >>>>>>>>>>> and then write the memory of PTE15 and PTE16, you get page faults, so PTE15
> >>>>>>>>>>> and PTE16 will point to two different small folios. We can only skip
> >>>>>>>>>>> when we are sure that nr_pages == folio_pte_batch().
> >>>>>>>>>>
> >>>>>>>>>> Agreed. Thanks for pointing that out.
> >>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>> +
> >>>>>>>>>>>> +			align = folio_nr_pages(folio) * PAGE_SIZE;
> >>>>>>>>>>>> +			next_addr = ALIGN_DOWN(addr + align, align);
> >>>>>>>>>>>> +
> >>>>>>>>>>>> +			/*
> >>>>>>>>>>>> +			 * If we mark only the subpages as lazyfree, or
> >>>>>>>>>>>> +			 * cannot mark the entire large folio as lazyfree,
> >>>>>>>>>>>> +			 * then just split it.
> >>>>>>>>>>>> +			 */
> >>>>>>>>>>>> +			if (next_addr > end || next_addr - addr != align ||
> >>>>>>>>>>>> +			    !can_mark_large_folio_lazyfree(addr, folio, pte))
> >>>>>>>>>>>> +				goto split_large_folio;
> >>>>>>>>>>>> +
> >>>>>>>>>>>> +			/*
> >>>>>>>>>>>> +			 * Avoid unnecessary folio splitting if the large
> >>>>>>>>>>>> +			 * folio is entirely within the given range.
> >>>>>>>>>>>> +			 */
> >>>>>>>>>>>> +			folio_clear_dirty(folio);
> >>>>>>>>>>>> +			folio_unlock(folio);
> >>>>>>>>>>>> +			for (; addr != next_addr; pte++, addr += PAGE_SIZE) {
> >>>>>>>>>>>> +				ptent = ptep_get(pte);
> >>>>>>>>>>>> +				if (pte_young(ptent) || pte_dirty(ptent)) {
> >>>>>>>>>>>> +					ptent = ptep_get_and_clear_full(
> >>>>>>>>>>>> +						mm, addr, pte, tlb->fullmm);
> >>>>>>>>>>>> +					ptent = pte_mkold(ptent);
> >>>>>>>>>>>> +					ptent = pte_mkclean(ptent);
> >>>>>>>>>>>> +					set_pte_at(mm, addr, pte, ptent);
> >>>>>>>>>>>> +					tlb_remove_tlb_entry(tlb, pte, addr);
> >>>>>>>>>>>> +				}
> >>>>>>>>>>>
> >>>>>>>>>>> Can we do this in batches? For a CONT-PTE mapped large folio, you are
> >>>>>>>>>>> unfolding and folding again. It seems quite expensive.
> >>>>>>>>
> >>>>>>>> I'm not convinced we should be doing this in batches. We want the initial
> >>>>>>>> folio_pte_batch() to be as loose as possible regarding permissions so that
> >>>>>>>> we reduce our chances of splitting folios to the minimum (e.g. ignore SW
> >>>>>>>> bits like soft dirty, etc). I think it might be possible that some PTEs
> >>>>>>>> are RO and others RW too (e.g. due to cow - although with the current cow
> >>>>>>>> impl, probably not, but it's fragile to assume that). Anyway, if we do an
> >>>>>>>> initial batch that ignores all
> >>>>>>>
> >>>>>>> You are correct. I believe this scenario could indeed occur. For instance,
> >>>>>>> process A forks process B and then unmaps itself, leaving B as the
> >>>>>>> sole process owning the large folio. The current wp_page_reuse() function
> >>>>>>> will reuse PTEs one by one as each specific subpage is written.
> >>>>>>
> >>>>>> Hmm - I thought it would only reuse if the total mapcount for the folio was 1.
> >>>>>> And since it is a large folio with each page mapped once in proc B, I thought
> >>>>>> every subpage write would cause a copy except the last one? I haven't looked at
> >>>>>> the code for a while. But I had it in my head that this is an area we need to
> >>>>>> improve for mTHP.
> >>>>>
> >>>>> wp_page_reuse() will currently reuse a PTE part of a large folio only if
> >>>>> a single PTE remains mapped (refcount == 0).
> >>>>
> >>>> ^ == 1
> >>>
> >>> Ahh yes. That's what I meant. I got the behaviour vaguely right though.
> >>>
> >>> Anyway, regardless, I'm not sure we want to batch here. Or if we do, we
> >>> want a batch function that will only clear access and dirty.
> >>
> >> We likely want to detect a folio batch the "usual" way (as relaxed as
> >> possible), then do all the checks (#pte == folio_mapcount() under folio
> >> lock), and finally batch-update the access and dirty only.
>
> Something like:
>
> 1) We might want to factor out the existing single-pte case and extend
> it to handle multiple PTEs (nr_pages). For the existing code, we would
> pass in "nr_pages".
>
> For example, instead of "folio_mapcount(folio) != 1" you'd check
> "folio_mapcount(folio) != nr_pages" in there. And we'd need functions to
> abstract working on multiple ptes.
>
> 2) We'd add something like wrprotect_ptes() that does the mkold+clean
> on multiple PTEs.
>
> Naming suggestion for such a function requested :)
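>
> A generic variant could mirror wrprotect_ptes(); here is a rough,
> untested sketch (the name and signature are placeholders):
>
> static inline void mkold_clean_ptes(struct mm_struct *mm,
> 		unsigned long addr, pte_t *ptep, unsigned int nr, int full)
> {
> 	for (;;) {
> 		/* Clear the PTE, then write it back old+clean. */
> 		pte_t pte = ptep_get_and_clear_full(mm, addr, ptep, full);
>
> 		set_pte_at(mm, addr, ptep, pte_mkclean(pte_mkold(pte)));
> 		if (--nr == 0)
> 			break;
> 		ptep++;
> 		addr += PAGE_SIZE;
> 	}
> }
>
> An arch like arm64 could then override it to avoid repeatedly
> unfolding and refolding CONT-PTE mappings.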
>
> 3) Then, we might want to extend folio_pte_batch() by an *any_young and
> *any_dirty parameter that will get optimized out for other users. So you
> get that information right when scanning.
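>
> Roughly, building on the current signature (again, just a sketch):
>
> static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> 		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> 		bool *any_writable, bool *any_young, bool *any_dirty);
>
> Callers that don't care would pass NULL for the new parameters, and
> the compiler can optimize the extra tracking away.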
>
>
> Just a rough idea, the devil is in the detail. But trying to abstract
> the code to handle "multiple pages of the same folio" should likely come
> naturally, as we have done for fork() and munmap() so far.
>
> --
> Cheers,
>
> David / dhildenb
>