lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date: Thu, 7 Mar 2024 20:48:50 +0100
From: David Hildenbrand <david@...hat.com>
To: Barry Song <21cnbao@...il.com>, Ryan Roberts <ryan.roberts@....com>
Cc: Lance Yang <ioworker0@...il.com>, Vishal Moola <vishal.moola@...il.com>,
 akpm@...ux-foundation.org, zokeefe@...gle.com, shy828301@...il.com,
 mhocko@...e.com, fengwei.yin@...el.com, xiehuan09@...il.com,
 wangkefeng.wang@...wei.com, songmuchun@...edance.com, peterx@...hat.com,
 minchan@...nel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/1] mm/madvise: enhance lazyfreeing with mTHP in
 madvise_free

On 07.03.24 19:54, Barry Song wrote:
> On Fri, Mar 8, 2024 at 12:31 AM Ryan Roberts <ryan.roberts@....com> wrote:
>>
>> On 07/03/2024 12:01, Barry Song wrote:
>>> On Thu, Mar 7, 2024 at 7:45 PM David Hildenbrand <david@...hat.com> wrote:
>>>>
>>>> On 07.03.24 12:42, Ryan Roberts wrote:
>>>>> On 07/03/2024 11:31, David Hildenbrand wrote:
>>>>>> On 07.03.24 12:26, Barry Song wrote:
>>>>>>> On Thu, Mar 7, 2024 at 7:13 PM Ryan Roberts <ryan.roberts@....com> wrote:
>>>>>>>>
>>>>>>>> On 07/03/2024 10:54, David Hildenbrand wrote:
>>>>>>>>> On 07.03.24 11:54, David Hildenbrand wrote:
>>>>>>>>>> On 07.03.24 11:50, Ryan Roberts wrote:
>>>>>>>>>>> On 07/03/2024 09:33, Barry Song wrote:
>>>>>>>>>>>> On Thu, Mar 7, 2024 at 10:07 PM Ryan Roberts <ryan.roberts@....com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 07/03/2024 08:10, Barry Song wrote:
>>>>>>>>>>>>>> On Thu, Mar 7, 2024 at 9:00 PM Lance Yang <ioworker0@...il.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hey Barry,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks for taking time to review!
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Mar 7, 2024 at 3:00 PM Barry Song <21cnbao@...il.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Mar 7, 2024 at 7:15 PM Lance Yang <ioworker0@...il.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [...]
>>>>>>>>>>>>>>>>> +static inline bool can_mark_large_folio_lazyfree(unsigned long addr,
>>>>>>>>>>>>>>>>> +                                                struct folio *folio,
>>>>>>>>>>>>>>>>> pte_t *start_pte)
>>>>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>>>>> +       int nr_pages = folio_nr_pages(folio);
>>>>>>>>>>>>>>>>> +       fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>> +       for (int i = 0; i < nr_pages; i++)
>>>>>>>>>>>>>>>>> +               if (page_mapcount(folio_page(folio, i)) != 1)
>>>>>>>>>>>>>>>>> +                       return false;
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> we have moved to folio_estimated_sharers though it is not precise, so
>>>>>>>>>>>>>>>> we don't do
>>>>>>>>>>>>>>>> this check with lots of loops and depending on the subpage's mapcount.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If we don't check the subpage’s mapcount, and there is a cow folio
>>>>>>>>>>>>>>> associated
>>>>>>>>>>>>>>> with this folio and the cow folio has smaller size than this folio,
>>>>>>>>>>>>>>> should we still
>>>>>>>>>>>>>>> mark this folio as lazyfree?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I agree, this is true. However, we've somehow accepted the fact that
>>>>>>>>>>>>>> folio_likely_mapped_shared
>>>>>>>>>>>>>> can result in false negatives or false positives to balance the
>>>>>>>>>>>>>> overhead.  So I really don't know :-)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Maybe David and Vishal can give some comments here.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> BTW, do we need to rebase our work against David's changes[1]?
>>>>>>>>>>>>>>>> [1]
>>>>>>>>>>>>>>>> https://lore.kernel.org/linux-mm/20240227201548.857831-1-david@redhat.com/
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Yes, we should rebase our work against David’s changes.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>> +       return nr_pages == folio_pte_batch(folio, addr, start_pte,
>>>>>>>>>>>>>>>>> +                                        ptep_get(start_pte), nr_pages,
>>>>>>>>>>>>>>>>> flags, NULL);
>>>>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>>       static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>>>>>>>>>>>>>>>                                      unsigned long end, struct mm_walk
>>>>>>>>>>>>>>>>> *walk)
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> @@ -676,11 +690,45 @@ static int madvise_free_pte_range(pmd_t *pmd,
>>>>>>>>>>>>>>>>> unsigned long addr,
>>>>>>>>>>>>>>>>>                       */
>>>>>>>>>>>>>>>>>                      if (folio_test_large(folio)) {
>>>>>>>>>>>>>>>>>                              int err;
>>>>>>>>>>>>>>>>> +                       unsigned long next_addr, align;
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> -                       if (folio_estimated_sharers(folio) != 1)
>>>>>>>>>>>>>>>>> -                               break;
>>>>>>>>>>>>>>>>> -                       if (!folio_trylock(folio))
>>>>>>>>>>>>>>>>> -                               break;
>>>>>>>>>>>>>>>>> +                       if (folio_estimated_sharers(folio) != 1 ||
>>>>>>>>>>>>>>>>> +                           !folio_trylock(folio))
>>>>>>>>>>>>>>>>> +                               goto skip_large_folio;
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I don't think we can skip all the PTEs for nr_pages, as some of them
>>>>>>>>>>>>>>>> might be
>>>>>>>>>>>>>>>> pointing to other folios.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> for example, for a large folio with 16PTEs, you do MADV_DONTNEED(15-16),
>>>>>>>>>>>>>>>> and write the memory of PTE15 and PTE16, you get page faults, thus PTE15
>>>>>>>>>>>>>>>> and PTE16 will point to two different small folios. We can only skip
>>>>>>>>>>>>>>>> when we
>>>>>>>>>>>>>>>> are sure nr_pages == folio_pte_batch() is sure.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Agreed. Thanks for pointing that out.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>> +                       align = folio_nr_pages(folio) * PAGE_SIZE;
>>>>>>>>>>>>>>>>> +                       next_addr = ALIGN_DOWN(addr + align, align);
>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>> +                       /*
>>>>>>>>>>>>>>>>> +                        * If we mark only the subpages as lazyfree, or
>>>>>>>>>>>>>>>>> +                        * cannot mark the entire large folio as
>>>>>>>>>>>>>>>>> lazyfree,
>>>>>>>>>>>>>>>>> +                        * then just split it.
>>>>>>>>>>>>>>>>> +                        */
>>>>>>>>>>>>>>>>> +                       if (next_addr > end || next_addr - addr !=
>>>>>>>>>>>>>>>>> align ||
>>>>>>>>>>>>>>>>> +                           !can_mark_large_folio_lazyfree(addr, folio,
>>>>>>>>>>>>>>>>> pte))
>>>>>>>>>>>>>>>>> +                               goto split_large_folio;
>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>> +                       /*
>>>>>>>>>>>>>>>>> +                        * Avoid unnecessary folio splitting if the
>>>>>>>>>>>>>>>>> large
>>>>>>>>>>>>>>>>> +                        * folio is entirely within the given range.
>>>>>>>>>>>>>>>>> +                        */
>>>>>>>>>>>>>>>>> +                       folio_clear_dirty(folio);
>>>>>>>>>>>>>>>>> +                       folio_unlock(folio);
>>>>>>>>>>>>>>>>> +                       for (; addr != next_addr; pte++, addr +=
>>>>>>>>>>>>>>>>> PAGE_SIZE) {
>>>>>>>>>>>>>>>>> +                               ptent = ptep_get(pte);
>>>>>>>>>>>>>>>>> +                               if (pte_young(ptent) ||
>>>>>>>>>>>>>>>>> pte_dirty(ptent)) {
>>>>>>>>>>>>>>>>> +                                       ptent =
>>>>>>>>>>>>>>>>> ptep_get_and_clear_full(
>>>>>>>>>>>>>>>>> +                                               mm, addr, pte,
>>>>>>>>>>>>>>>>> tlb->fullmm);
>>>>>>>>>>>>>>>>> +                                       ptent = pte_mkold(ptent);
>>>>>>>>>>>>>>>>> +                                       ptent = pte_mkclean(ptent);
>>>>>>>>>>>>>>>>> +                                       set_pte_at(mm, addr, pte,
>>>>>>>>>>>>>>>>> ptent);
>>>>>>>>>>>>>>>>> +                                       tlb_remove_tlb_entry(tlb, pte,
>>>>>>>>>>>>>>>>> addr);
>>>>>>>>>>>>>>>>> +                               }
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Can we do this in batches? for a CONT-PTE mapped large folio, you are
>>>>>>>>>>>>>>>> unfolding
>>>>>>>>>>>>>>>> and folding again. It seems quite expensive.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I'm not convinced we should be doing this in batches. We want the initial
>>>>>>>>>>>>> folio_pte_batch() to be as loose as possible regarding permissions so
>>>>>>>>>>>>> that we
>>>>>>>>>>>>> reduce our chances of splitting folios to the min. (e.g. ignore SW bits
>>>>>>>>>>>>> like
>>>>>>>>>>>>> soft dirty, etc). I think it might be possible that some PTEs are RO and
>>>>>>>>>>>>> other
>>>>>>>>>>>>> RW too (e.g. due to cow - although with the current cow impl, probably not.
>>>>>>>>>>>>> But
>>>>>>>>>>>>> its fragile to assume that). Anyway, if we do an initial batch that ignores
>>>>>>>>>>>>> all
>>>>>>>>>>>>
>>>>>>>>>>>> You are correct. I believe this scenario could indeed occur. For instance,
>>>>>>>>>>>> if process A forks process B and then unmaps itself, leaving B as the
>>>>>>>>>>>> sole process owning the large folio.  The current wp_page_reuse() function
>>>>>>>>>>>> will reuse PTE one by one while the specific subpage is written.
>>>>>>>>>>>
>>>>>>>>>>> Hmm - I thought it would only reuse if the total mapcount for the folio
>>>>>>>>>>> was 1.
>>>>>>>>>>> And since it is a large folio with each page mapped once in proc B, I thought
>>>>>>>>>>> every subpage write would cause a copy except the last one? I haven't
>>>>>>>>>>> looked at
>>>>>>>>>>> the code for a while. But I had it in my head that this is an area we need to
>>>>>>>>>>> improve for mTHP.
>>>>>>>
>>>>>>> So sad I am wrong again 😢
>>>>>>>
>>>>>>>>>>
>>>>>>>>>> wp_page_reuse() will currently reuse a PTE part of a large folio only if
>>>>>>>>>> a single PTE remains mapped (refcount == 0).
>>>>>>>>>
>>>>>>>>> ^ == 1
>>>>>>>
>>>>>>> seems this needs improvement. it is a waste the last subpage can
>>>>>>
>>>>>> My take that is WIP:
>>>>>>
>>>>>> https://lore.kernel.org/all/20231124132626.235350-1-david@redhat.com/T/#u
>>>>>>
>>>>>>> reuse the whole large folio. i was doing it in a quite different way,
>>>>>>> if the large folio had only one subpage left, i would do copy and
>>>>>>> released the large folio[1]. and if i could reuse the whole large folio
>>>>>>> with CONT-PTE, i would reuse the whole large folio[2]. in mainline,
>>>>>>> we don't have this cont-pte luxury exposed to mm, so i guess we can
>>>>>>> not do [2] easily, but [1] seems to be an optimization.
>>>>>>
>>>>>> Yeah, I had essentially the same idea: just free up the large folio if most of
>>>>>> the stuff is unmapped. But that's rather a corner-case optimization, so I did
>>>>>> not proceed with that.
>>>>>>
>>>>>
>>>>> I'm not sure it's a corner case, really? - process forks, then both parent and
>>>>> child and write to all pages in what was previously a fully & contiguously
>>>>> mapped large folio?
>>>>
>>>> Well, with 2 MiB my assumption was that while it can happen, it's rather
>>>> rare. With smaller THP it might get more likely, agreed.
>>>>
>>>>>
>>>>> Reggardless, why is it an optimization to do the copy for the last subpage and
>>>>> syncrhonously free the large folio? It's already partially mapped so is on the
>>>>> deferred split list and can be split if memory is tight.
>>>
>>> we don't want reclamation overhead later. and we want memories immediately
>>> available to others.
>>
>> But by that logic, you also don't want to leave the large folio partially mapped
>> all the way until the last subpage is CoWed. Surely you would want to reclaim it
>> when you reach partial map status?
> 
> To some extent, I agree. But then we will have two many copies. The last
> subpage is small, and a safe place to copy instead.

Right, it's essentially a simplistic page migration at a point where you 
know you can safely replace the page (PAE not set, so it cannot be 
pinned using FOLL_PIN). No rmap walk, no migration entries, no worry 
about additional page references.

-- 
Cheers,

David / dhildenb


Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ