Message-ID: <74596ea2-e7e5-4161-9600-ad1a69b6a6fe@arm.com>
Date: Mon, 27 Nov 2023 14:52:11 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: David Hildenbrand <david@...hat.com>,
Barry Song <21cnbao@...il.com>,
Steven Price <steven.price@....com>
Cc: akpm@...ux-foundation.org, catalin.marinas@....com,
will@...nel.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
mhocko@...e.com, shy828301@...il.com, v-songbaohua@...o.com,
wangkefeng.wang@...wei.com, willy@...radead.org, xiang@...nel.org,
ying.huang@...el.com, yuzhao@...gle.com
Subject: Re: [RFC V3 PATCH] arm64: mm: swap: save and restore mte tags for
large folios
On 27/11/2023 14:16, David Hildenbrand wrote:
> On 27.11.23 13:14, Ryan Roberts wrote:
>> On 27/11/2023 12:01, David Hildenbrand wrote:
>>> On 27.11.23 12:56, Ryan Roberts wrote:
>>>> On 24/11/2023 18:14, Barry Song wrote:
>>>>> On Fri, Nov 24, 2023 at 10:55 PM Steven Price <steven.price@....com> wrote:
>>>>>>
>>>>>> On 24/11/2023 09:01, Ryan Roberts wrote:
>>>>>>> On 24/11/2023 08:55, David Hildenbrand wrote:
>>>>>>>> On 24.11.23 02:35, Barry Song wrote:
>>>>>>>>> On Mon, Nov 20, 2023 at 11:57 PM Ryan Roberts <ryan.roberts@....com>
>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> On 20/11/2023 09:11, David Hildenbrand wrote:
>>>>>>>>>>> On 17.11.23 19:41, Barry Song wrote:
>>>>>>>>>>>> On Fri, Nov 17, 2023 at 7:28 PM David Hildenbrand <david@...hat.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 17.11.23 01:15, Barry Song wrote:
>>>>>>>>>>>>>> On Fri, Nov 17, 2023 at 7:47 AM Barry Song <21cnbao@...il.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 16, 2023 at 5:36 PM David Hildenbrand
>>>>>>>>>>>>>>> <david@...hat.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 15.11.23 21:49, Barry Song wrote:
>>>>>>>>>>>>>>>>> On Wed, Nov 15, 2023 at 11:16 PM David Hildenbrand
>>>>>>>>>>>>>>>>> <david@...hat.com>
>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On 14.11.23 02:43, Barry Song wrote:
>>>>>>>>>>>>>>>>>>> This patch makes MTE tag saving and restoring support
>>>>>>>>>>>>>>>>>>> large folios, so we don't need to split them into base
>>>>>>>>>>>>>>>>>>> pages for swapping out on ARM64 SoCs with MTE.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> arch_prepare_to_swap() should take a folio rather than a
>>>>>>>>>>>>>>>>>>> page as its parameter because we support THP swap-out as a
>>>>>>>>>>>>>>>>>>> whole.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Meanwhile, arch_swap_restore() should use a page parameter
>>>>>>>>>>>>>>>>>>> rather than a folio, as swap-in always works at the
>>>>>>>>>>>>>>>>>>> granularity of base pages right now.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> ... but then we always have order-0 folios and can pass a folio,
>>>>>>>>>>>>>>>>>> or what
>>>>>>>>>>>>>>>>>> am I missing?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi David,
>>>>>>>>>>>>>>>>> you missed the discussion here:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> https://lore.kernel.org/lkml/CAGsJ_4yXjex8txgEGt7+WMKp4uDQTn-fR06ijv4Ac68MkhjMDw@mail.gmail.com/
>>>>>>>>>>>>>>>>> https://lore.kernel.org/lkml/CAGsJ_4xmBAcApyK8NgVQeX_Znp5e8D4fbbhGguOkNzmh1Veocg@mail.gmail.com/
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Okay, so you want to handle the refault-from-swapcache case
>>>>>>>>>>>>>>>> where you
>>>>>>>>>>>>>>>> get a
>>>>>>>>>>>>>>>> large folio.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I was misled by your "folio as swap-in always works at the
>>>>>>>>>>>>>>>> granularity of
>>>>>>>>>>>>>>>> base pages right now" comment.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What you actually wanted to say is "While we always swap in small
>>>>>>>>>>>>>>>> folios, we
>>>>>>>>>>>>>>>> might refault large folios from the swapcache, and we only want to
>>>>>>>>>>>>>>>> restore
>>>>>>>>>>>>>>>> the tags for the page of the large folio we are faulting on."
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> But I wonder if we can't simply restore the tags for the
>>>>>>>>>>>>>>>> whole thing at once and make the interface page-free?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Let me elaborate:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> IIRC, if we have a large folio in the swapcache, the swap
>>>>>>>>>>>>>>>> entries/offset are
>>>>>>>>>>>>>>>> contiguous. If you know you are faulting on page[1] of the folio
>>>>>>>>>>>>>>>> with a
>>>>>>>>>>>>>>>> given swap offset, you can calculate the swap offset for page[0]
>>>>>>>>>>>>>>>> simply by
>>>>>>>>>>>>>>>> subtracting from the offset.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> See page_swap_entry() for how we perform this calculation.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So you can simply pass the large folio and the swap entry
>>>>>>>>>>>>>>>> corresponding
>>>>>>>>>>>>>>>> to the first page of the large folio, and restore all tags at once.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So the interface would be
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> arch_prepare_to_swap(struct folio *folio);
>>>>>>>>>>>>>>>> void arch_swap_restore(struct folio *folio,
>>>>>>>>>>>>>>>>                        swp_entry_t start_entry);
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I'm sorry if that was also already discussed.
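
(As an aside, the subtraction David describes is straightforward; a minimal
sketch, where the helper name is hypothetical and we rely on the swap entries
of a large swapcache folio being contiguous:

        static inline swp_entry_t folio_start_swap_entry(struct folio *folio,
                                                         struct page *page,
                                                         swp_entry_t entry)
        {
                /* Derive the entry for page[0] from the faulting page's. */
                entry.val -= folio_page_idx(folio, page);
                return entry;
        }

page_swap_entry() performs the same calculation in the opposite direction.)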
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> This has been discussed. Steven, Ryan and I all think this is
>>>>>>>>>>>>>>> not a good option. If we have a large folio with 16 basepages,
>>>>>>>>>>>>>>> then since do_swap_page() can only map one basepage per page
>>>>>>>>>>>>>>> fault, we would restore the tags of all 16 pages on each of
>>>>>>>>>>>>>>> the 16 page faults, i.e. 16 * 16 restores for this large
>>>>>>>>>>>>>>> folio.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> And worse still, the page fault on the Nth PTE of the large
>>>>>>>>>>>>>>> folio might free the swap entry, since that entry has been
>>>>>>>>>>>>>>> swapped in:
>>>>>>>>>>>>>>> do_swap_page()
>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>         /*
>>>>>>>>>>>>>>>          * Remove the swap entry and conditionally try to free
>>>>>>>>>>>>>>>          * up the swapcache. We're already holding a reference
>>>>>>>>>>>>>>>          * on the page but haven't mapped it yet.
>>>>>>>>>>>>>>>          */
>>>>>>>>>>>>>>>         swap_free(entry);
>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> So in the page faults other than N, i.e. 0 to N-1 and N+1 to
>>>>>>>>>>>>>>> 15, you might access a freed tag.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> And David, one more piece of information: to keep the parameter
>>>>>>>>>>>>>> of arch_swap_restore() unchanged as a folio, I actually tried an
>>>>>>>>>>>>>> ugly approach in RFC v2:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> +void arch_swap_restore(swp_entry_t entry, struct folio *folio)
>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>> +        if (system_supports_mte()) {
>>>>>>>>>>>>>> +                /*
>>>>>>>>>>>>>> +                 * We don't support swapping in large folios as
>>>>>>>>>>>>>> +                 * a whole yet, but we can hit a large folio
>>>>>>>>>>>>>> +                 * which is still in the swapcache after the
>>>>>>>>>>>>>> +                 * related processes' PTEs have been unmapped
>>>>>>>>>>>>>> +                 * but before the swapcache folio is dropped.
>>>>>>>>>>>>>> +                 * In this case, we need to find the exact page
>>>>>>>>>>>>>> +                 * which "entry" maps to. If we are not hitting
>>>>>>>>>>>>>> +                 * the swapcache, this folio won't be large.
>>>>>>>>>>>>>> +                 */
>>>>>>>>>>>>>> +                struct page *page = folio_file_page(folio,
>>>>>>>>>>>>>> +                                                swp_offset(entry));
>>>>>>>>>>>>>> +                mte_restore_tags(entry, page);
>>>>>>>>>>>>>> +        }
>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> And obviously everybody in the discussion hated it :-)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> I can relate :D
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I feel the only way to keep the API unchanged, using a folio,
>>>>>>>>>>>>>> is to support restoring PTEs all together for the whole large
>>>>>>>>>>>>>> folio, and to support swap-in of large folios. This is on my
>>>>>>>>>>>>>> to-do list; I will send a patchset based on Ryan's large anon
>>>>>>>>>>>>>> folios series after a while. Until that is really done, it
>>>>>>>>>>>>>> seems using a page rather than a folio is a better choice.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I think just restoring all tags and remembering for a large folio that
>>>>>>>>>>>>> they have been restored might be the low hanging fruit. But as always,
>>>>>>>>>>>>> devil is in the detail :)
>>>>>>>>>>>>
>>>>>>>>>>>> Hi David,
>>>>>>>>>>>> thanks for all your suggestions, though my feeling is that this
>>>>>>>>>>>> is too complex and not worth it, for at least three reasons.
>>>>>>>>>>>
>>>>>>>>>>> Fair enough.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 1. With multiple threads, and particularly multiple processes, we
>>>>>>>>>>>> need some lock to know whether one process is the first to restore
>>>>>>>>>>>> the tags, and whether someone else is already restoring them when
>>>>>>>>>>>> a process wants to restore. There is no such fine-grained lock at
>>>>>>>>>>>> all.
>>>>>>>>>>>
>>>>>>>>>>> We surely always hold the folio lock on swapin/swapout, no? That
>>>>>>>>>>> is, when these functions are called.
>>>>>>>>>>>
>>>>>>>>>>> So that might just work already -- unless I am missing something
>>>>>>>>>>> important.
>>>>>>>>>>
>>>>>>>>>> We already have a page flag that we use to mark the page as having had
>>>>>>>>>> its mte
>>>>>>>>>> state associated; PG_mte_tagged. This is currently per-page (and IIUC,
>>>>>>>>>> Matthew
>>>>>>>>>> has been working to remove as many per-page flags as possible). Couldn't
>>>>>>>>>> we just
>>>>>>>>>> make arch_swap_restore() take a folio, restore the tags for *all* the
>>>>>>>>>> pages and
>>>>>>>>>> repurpose that flag to be per-folio (so head page only)? It looks
>>>>>>>>>> like the mte code already manages all the serialization
>>>>>>>>>> requirements too. Then
>>>>>>>>>> arch_swap_restore() can just exit early if it sees the flag is already
>>>>>>>>>> set on
>>>>>>>>>> the folio.
>>>>>>>>>>
>>>>>>>>>> One (probably nonsense) concern that just sprang to mind about
>>>>>>>>>> having MTE work with large folios in general: is it possible that
>>>>>>>>>> user space could cause a large
>>>>>>>>>> anon folio to be allocated (THP), then later mark *part* of it to be
>>>>>>>>>> tagged with
>>>>>>>>>> MTE? In this case you would need to apply tags to part of the folio only.
>>>>>>>>>> Although I have a vague recollection that any MTE areas have to be
>>>>>>>>>> marked at
>>>>>>>>>> mmap time and therefore this type of thing is impossible?
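
To make the folio-level idea above concrete, a minimal sketch of what I had
in mind; folio_test_set_mte_tagged() is hypothetical (today PG_mte_tagged and
its serialization are per-page):

        void arch_swap_restore(swp_entry_t entry, struct folio *folio)
        {
                long i, nr = folio_nr_pages(folio);

                if (!system_supports_mte())
                        return;

                /* Hypothetical per-folio flag; returns the old value. */
                if (folio_test_set_mte_tagged(folio))
                        return;

                /* Swap entries of a swapcache folio are contiguous. */
                for (i = 0; i < nr; i++) {
                        mte_restore_tags(entry, folio_page(folio, i));
                        entry.val++;
                }
        }

Here "entry" would be the swap entry for page[0] of the folio.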
>>>>>>>>>
>>>>>>>>> Right, we might need to consider that only a part of the folio needs
>>>>>>>>> to be mapped and have its MTE tags restored. do_swap_page() can hit
>>>>>>>>> a large folio but only needs to fault in one page.
>>>>>>>>>
>>>>>>>>> A quite simple case is as below:
>>>>>>>>>
>>>>>>>>> 1. an anon folio is shared by processes A and B;
>>>>>>>>> 2. add_to_swap() swaps it out as a large folio;
>>>>>>>>> 3. we try to unmap it from A and B;
>>>>>>>>> 4. after A is unmapped (its PTEs become swap entries), we do
>>>>>>>>>    MADV_DONTNEED on a part of the folio. This can happen very
>>>>>>>>>    easily, as userspace still works at the 4KB level; userspace heap
>>>>>>>>>    management can free a basepage area by
>>>>>>>>>    madvise(address, MADV_DONTNEED, 4KB);
>>>>>>>>> 5. A refaults on address + 8KB; we hit the large folio in
>>>>>>>>>    do_swap_page() but only need to map one basepage, and we will
>>>>>>>>>    never need the DONTNEEDed part in process A.
>>>>>>>>>
>>>>>>>>> A more complicated case can be mprotect() or munmap() on a part of
>>>>>>>>> a large folio. Since userspace has no idea of large folios, it can
>>>>>>>>> do all kinds of strange things. Are we sure that in all these cases
>>>>>>>>> large folios have been split into small folios?
>>>>>>>
>>>>>>> I don't think these examples you cite are problematic. Although user
>>>>>>> space thinks about things in 4K pages, the kernel does things in units
>>>>>>> of folios. So a folio is either fully swapped out or not swapped out
>>>>>>> at all. MTE tags can be saved/restored per folio, even if only part of
>>>>>>> that folio ends up being mapped back into user space.
>>>>>
>>>>> I am not so optimistic :-)
>>>>>
>>>>> But can't zap_pte_range(), due to DONTNEED on a part of a swapped-out
>>>>> folio, free a part of the swap entries, and thus free a part of the MTE
>>>>> tags of a folio? After a process's large folio is swapped out, all PTEs
>>>>> in the large folio become swap entries, but DONTNEED on a part of this
>>>>> area will only set that part of the swap entries to pte_none, thus
>>>>> decreasing the swap count of that part:
>>>>>
>>>>> zap_pte_range()
>>>>>   -> entry = pte_to_swp_entry()
>>>>>   -> free_swap_and_cache(entry)
>>>>>   -> mte tags invalidated
>>>>
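For reference, the invalidation hook Barry points at does exist on arm64
today; roughly this (paraphrased from memory, so treat it as a sketch):

        static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
        {
                if (system_supports_mte())
                        mte_invalidate_tags(type, offset);
        }

free_swap_and_cache() ends up here once the last reference to a swap entry
goes away, so the tags saved for one subpage really can be freed while the
rest of the folio's entries are still swapped out.
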
>>>> OK I see what you mean.
>>>>
>>>> Just trying to summarize this, I think there are 2 questions behind all this:
>>>>
>>>> 1) Can we save/restore MTE tags at the granularity of a folio?
>>>>
>>>> I think the answer is no; we can enable MTE on individual pages within a
>>>> folio with mprotect(), and we can throw away tags on individual pages as
>>>> you describe above. So we have to continue to handle tags per page.
>>>
>>> Can you enlighten me why the scheme proposed by Steven doesn't work?
>>
>> Are you referring to Steven's suggestion of reading the tags to see if
>> they are all zeros? I think that demonstrates my point that this has to be
>> done per-page and
>
> Yes.
OK, I'm obviously being thick, because checking a page's tag to see if it is
zero seems logically equivalent to checking the existing per-page flag, just
more expensive. Yes, we could make that change, but I don't see how it helps
solve the real problem at hand. Unless you are also deliberately trying to
remove the per-page flag at the same time, as per Matthew's master plan?
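
(Today the per-page check is just something like:

        if (page_mte_tagged(page))
                return; /* tags already restored for this page */

versus loading and scanning 128 bytes of tags per page.)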
>
>> not per-folio? I'm also not sure what it buys us - instead of reading a
>> per-page flag we now have to read 128 bytes of tags for each page and
>> check whether they are zero.
>
> My point is, if that is the corner case, we might not care about that.
>
>>
>>>
>>> I mean, having a mixture of tagged vs. untagged is assumed to be the corner
>>> case, right?
>>
>> Yes. But I'm not sure how we exploit that; I guess we could have a
>> per-folio flag: when set it means the whole folio is tagged, and when
>> clear it means we fall back to checking the per-page flag?
>
> Let me think this through. We have the following states:
>
> 1) Any subpage is possibly logically tagged and all relevant tags are
> applied/restored.
> 2) Any subpage is possibly logically tagged, and all relevant tags are
> not applied but stored in the datastructure.
> 3) No subpage is logically tagged.
>
> We can identify in 1) the subpages by reading the tag from HW,
I don't think this actually works; I'm pretty sure the optimization to clear
the tags at the same time as clearing the page only happens for small pages.
I don't think this is done when allocating a THP today. Obviously that could
change.
> and on 2) by
> checking the datastructure. For 3), there is nothing to check.
>
> On swapout of a large folio:
>
> * For 3) we don't do anything
> * For 2) we don't do anything
> * For 1) we store all tags that are non-zero (reading all tags) and
> transition to 2).
Given that a tag architecturally exists for every page even when unused, and
we think a folio being partially mte-tagged is the corner case, could you
simplify this further and just write out all the tags for the folio, without
caring whether some are unused?
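
i.e. something like this on the swapout side (a sketch only: it saves tags
for every subpage unconditionally and, for brevity, doesn't clean up
already-saved tags on failure):

        int arch_prepare_to_swap(struct folio *folio)
        {
                long i, nr = folio_nr_pages(folio);
                int err;

                if (!system_supports_mte())
                        return 0;

                /* Save tags for every subpage, in use or not. */
                for (i = 0; i < nr; i++) {
                        err = mte_save_tags(folio_page(folio, i));
                        if (err)
                                return err;
                }
                return 0;
        }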
>
> On swapin of a large folio
>
> A) Old folio (swapcache)
>
> If in 1) or 3) already, nothing to do.
>
> If in 2), restore all tags that are in the datastructure and move to 1).
>
> B) Fresh folio (we lost any MTE marker)
>
> Currently always order-0, so nothing to do. We'd have to check the
> datastructure for any tags belonging to the folio and set the state
> accordingly.
>
>
> Of course, that means that on swapout, you read all tags. But if the common case
> is that all subpages have tags, we don't really care.
>
> I'm sure I made a mistake somewhere, but where? :)
>
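To restate David's three states concretely, an illustrative enum (the names
are mine, not a proposal):

        enum folio_mte_state {
                MTE_STATE_APPLIED,      /* 1) tags applied/restored in HW */
                MTE_STATE_SAVED,        /* 2) tags only in the datastructure */
                MTE_STATE_UNTAGGED,     /* 3) no subpage logically tagged */
        };

The tricky part remains where to store this per-folio state, and how to keep
it in sync with per-page events like the partial DONTNEED free discussed
above.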