Message-ID: <8aa8f095-1840-4a2e-ad06-3f375282ab6a@arm.com>
Date:   Mon, 27 Nov 2023 12:14:57 +0000
From:   Ryan Roberts <ryan.roberts@....com>
To:     David Hildenbrand <david@...hat.com>,
        Barry Song <21cnbao@...il.com>,
        Steven Price <steven.price@....com>
Cc:     akpm@...ux-foundation.org, catalin.marinas@....com,
        will@...nel.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        mhocko@...e.com, shy828301@...il.com, v-songbaohua@...o.com,
        wangkefeng.wang@...wei.com, willy@...radead.org, xiang@...nel.org,
        ying.huang@...el.com, yuzhao@...gle.com
Subject: Re: [RFC V3 PATCH] arm64: mm: swap: save and restore mte tags for
 large folios

On 27/11/2023 12:01, David Hildenbrand wrote:
> On 27.11.23 12:56, Ryan Roberts wrote:
>> On 24/11/2023 18:14, Barry Song wrote:
>>> On Fri, Nov 24, 2023 at 10:55 PM Steven Price <steven.price@....com> wrote:
>>>>
>>>> On 24/11/2023 09:01, Ryan Roberts wrote:
>>>>> On 24/11/2023 08:55, David Hildenbrand wrote:
>>>>>> On 24.11.23 02:35, Barry Song wrote:
>>>>>>> On Mon, Nov 20, 2023 at 11:57 PM Ryan Roberts <ryan.roberts@....com> wrote:
>>>>>>>>
>>>>>>>> On 20/11/2023 09:11, David Hildenbrand wrote:
>>>>>>>>> On 17.11.23 19:41, Barry Song wrote:
>>>>>>>>>> On Fri, Nov 17, 2023 at 7:28 PM David Hildenbrand <david@...hat.com>
>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> On 17.11.23 01:15, Barry Song wrote:
>>>>>>>>>>>> On Fri, Nov 17, 2023 at 7:47 AM Barry Song <21cnbao@...il.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Nov 16, 2023 at 5:36 PM David Hildenbrand
>>>>>>>>>>>>> <david@...hat.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 15.11.23 21:49, Barry Song wrote:
>>>>>>>>>>>>>>> On Wed, Nov 15, 2023 at 11:16 PM David Hildenbrand
>>>>>>>>>>>>>>> <david@...hat.com>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 14.11.23 02:43, Barry Song wrote:
>>>>>>>>>>>>>>>>> This patch makes MTE tag saving and restoring support large
>>>>>>>>>>>>>>>>> folios, so we don't need to split them into base pages for
>>>>>>>>>>>>>>>>> swapping out on ARM64 SoCs with MTE.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> arch_prepare_to_swap() should take a folio rather than a page
>>>>>>>>>>>>>>>>> as its parameter, because we support THP swap-out as a whole.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Meanwhile, arch_swap_restore() should use a page parameter
>>>>>>>>>>>>>>>>> rather than a folio, as swap-in always works at the
>>>>>>>>>>>>>>>>> granularity of base pages right now.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ... but then we always have order-0 folios and can pass a folio,
>>>>>>>>>>>>>>>> or what
>>>>>>>>>>>>>>>> am I missing?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi David,
>>>>>>>>>>>>>>> you missed the discussion here:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> https://lore.kernel.org/lkml/CAGsJ_4yXjex8txgEGt7+WMKp4uDQTn-fR06ijv4Ac68MkhjMDw@mail.gmail.com/
>>>>>>>>>>>>>>> https://lore.kernel.org/lkml/CAGsJ_4xmBAcApyK8NgVQeX_Znp5e8D4fbbhGguOkNzmh1Veocg@mail.gmail.com/
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Okay, so you want to handle the refault-from-swapcache case where you
>>>>>>>>>>>>>> get a
>>>>>>>>>>>>>> large folio.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I was misled by your "folio as swap-in always works at the
>>>>>>>>>>>>>> granularity of base pages right now" comment.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> What you actually wanted to say is "While we always swap in small
>>>>>>>>>>>>>> folios, we
>>>>>>>>>>>>>> might refault large folios from the swapcache, and we only want to
>>>>>>>>>>>>>> restore
>>>>>>>>>>>>>> the tags for the page of the large folio we are faulting on."
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> But I wonder if we can't simply restore the tags for the whole
>>>>>>>>>>>>>> thing at once and make the interface page-free?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Let me elaborate:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> IIRC, if we have a large folio in the swapcache, the swap
>>>>>>>>>>>>>> entries/offsets are
>>>>>>>>>>>>>> contiguous. If you know you are faulting on page[1] of the folio
>>>>>>>>>>>>>> with a
>>>>>>>>>>>>>> given swap offset, you can calculate the swap offset for page[0]
>>>>>>>>>>>>>> simply by
>>>>>>>>>>>>>> subtracting from the offset.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> See page_swap_entry() on how we perform this calculation.
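>>>>>>>>>>>>>>
>>>>>>>>>>>>>> From memory, that helper is roughly the following (details may
>>>>>>>>>>>>>> differ in the current tree):
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> static inline swp_entry_t page_swap_entry(struct page *page)
>>>>>>>>>>>>>> {
>>>>>>>>>>>>>> 	struct folio *folio = page_folio(page);
>>>>>>>>>>>>>> 	swp_entry_t entry = folio->swap;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 	/* swap entries of a swapcache folio are contiguous */
>>>>>>>>>>>>>> 	entry.val += folio_page_idx(folio, page);
>>>>>>>>>>>>>> 	return entry;
>>>>>>>>>>>>>> }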
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So you can simply pass the large folio and the swap entry
>>>>>>>>>>>>>> corresponding
>>>>>>>>>>>>>> to the first page of the large folio, and restore all tags at once.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So the interface would be
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> arch_prepare_to_swap(struct folio *folio);
>>>>>>>>>>>>>> void arch_swap_restore(struct folio *folio, swp_entry_t start_entry);
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I'm sorry if that was also already discussed.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This has been discussed. Steven, Ryan and I all don't think this
>>>>>>>>>>>>> is a good option. In the case of a large folio with 16 basepages,
>>>>>>>>>>>>> since do_swap_page() can only map one base page per page fault,
>>>>>>>>>>>>> that means we have to restore 16 (pages of tags restored in each
>>>>>>>>>>>>> page fault) * 16 (the number of page faults) = 256 pages' worth
>>>>>>>>>>>>> of tags for this one large folio.
>>>>>>>>>>>>>
>>>>>>>>>>>>> And still the worst thing is that the page fault on the Nth PTE
>>>>>>>>>>>>> of the large folio might free the swap entry, as that entry has
>>>>>>>>>>>>> already been swapped in:
>>>>>>>>>>>>> do_swap_page()
>>>>>>>>>>>>> {
>>>>>>>>>>>>>         /*
>>>>>>>>>>>>>          * Remove the swap entry and conditionally try to free up
>>>>>>>>>>>>>          * the swapcache. We're already holding a reference on the
>>>>>>>>>>>>>          * page but haven't mapped it yet.
>>>>>>>>>>>>>          */
>>>>>>>>>>>>>         swap_free(entry);
>>>>>>>>>>>>> }
>>>>>>>>>>>>>
>>>>>>>>>>>>> So in the page faults other than N (i.e. 0 to N-1 and N+1 to 15),
>>>>>>>>>>>>> you might access a freed tag.
>>>>>>>>>>>>
>>>>>>>>>>>> And David, one more piece of information: to keep the parameter of
>>>>>>>>>>>> arch_swap_restore() unchanged as a folio, I actually tried an ugly
>>>>>>>>>>>> approach in RFC v2:
>>>>>>>>>>>>
>>>>>>>>>>>> +void arch_swap_restore(swp_entry_t entry, struct folio *folio)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	if (system_supports_mte()) {
>>>>>>>>>>>> +		/*
>>>>>>>>>>>> +		 * We don't support swapping large folios in as a whole
>>>>>>>>>>>> +		 * yet, but we can hit a large folio which is still in
>>>>>>>>>>>> +		 * the swapcache after the related processes' PTEs have
>>>>>>>>>>>> +		 * been unmapped but before the swapcache folio is
>>>>>>>>>>>> +		 * dropped. In this case, we need to find the exact page
>>>>>>>>>>>> +		 * which "entry" is mapping to. If we are not hitting
>>>>>>>>>>>> +		 * the swapcache, this folio won't be large.
>>>>>>>>>>>> +		 */
>>>>>>>>>>>> +		struct page *page = folio_file_page(folio, swp_offset(entry));
>>>>>>>>>>>> +
>>>>>>>>>>>> +		mte_restore_tags(entry, page);
>>>>>>>>>>>> +	}
>>>>>>>>>>>> +}
>>>>>>>>>>>>
>>>>>>>>>>>> And obviously everybody in the discussion hated it :-)
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I can relate :D
>>>>>>>>>>>
>>>>>>>>>>>> I feel the only way to keep the API unchanged using folio is to
>>>>>>>>>>>> support restoring PTEs all together for the whole large folio, and
>>>>>>>>>>>> to support swap-in of large folios. This is on my to-do list; I
>>>>>>>>>>>> will send a patchset based on Ryan's large anon folios series
>>>>>>>>>>>> after a while. Until that is really done, it seems using page
>>>>>>>>>>>> rather than folio is the better choice.
>>>>>>>>>>>
>>>>>>>>>>> I think just restoring all tags and remembering for a large folio that
>>>>>>>>>>> they have been restored might be the low hanging fruit. But as always,
>>>>>>>>>>> devil is in the detail :)
>>>>>>>>>>
>>>>>>>>>> Hi David,
>>>>>>>>>> thanks for all your suggestions, though my feeling is this is too
>>>>>>>>>> complex and not worth it, for at least three reasons.
>>>>>>>>>
>>>>>>>>> Fair enough.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 1. In multi-threaded, and particularly multi-process, scenarios we
>>>>>>>>>> would need some locks to protect things and to know whether one
>>>>>>>>>> process is the first to restore tags, and whether someone else is
>>>>>>>>>> already restoring tags when another process wants to restore them.
>>>>>>>>>> There is no such fine-grained lock at all.
>>>>>>>>>
>>>>>>>>> We surely always hold the folio lock on swapin/swapout, no? That is,
>>>>>>>>> when these functions are called.
>>>>>>>>>
>>>>>>>>> So that might just work already -- unless I am missing something
>>>>>>>>> important.
>>>>>>>>
>>>>>>>> We already have a page flag that we use to mark the page as having had
>>>>>>>> its mte
>>>>>>>> state associated; PG_mte_tagged. This is currently per-page (and IIUC,
>>>>>>>> Matthew
>>>>>>>> has been working to remove as many per-page flags as possible). Couldn't
>>>>>>>> we just
>>>>>>>> make arch_swap_restore() take a folio, restore the tags for *all* the
>>>>>>>> pages and
>>>>>>>> repurpose that flag to be per-folio (so head page only)? It looks like
>>>>>>>> the
>>>>>>>> mte code already manages all the serialization requirements too. Then
>>>>>>>> arch_swap_restore() can just exit early if it sees the flag is already
>>>>>>>> set on
>>>>>>>> the folio.
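>>>>>>>>
>>>>>>>> As a rough sketch only (ignoring the ordering requirements that
>>>>>>>> set_page_mte_tagged() handles, and assuming hypothetical folio-scope
>>>>>>>> helpers folio_test_mte_tagged()/folio_set_mte_tagged()):
>>>>>>>>
>>>>>>>> void arch_swap_restore(swp_entry_t entry, struct folio *folio)
>>>>>>>> {
>>>>>>>> 	long i;
>>>>>>>>
>>>>>>>> 	/* exit early if tags were already restored for this folio */
>>>>>>>> 	if (!system_supports_mte() || folio_test_mte_tagged(folio))
>>>>>>>> 		return;
>>>>>>>>
>>>>>>>> 	/* swap entries of a swapcache folio are contiguous */
>>>>>>>> 	for (i = 0; i < folio_nr_pages(folio); i++, entry.val++)
>>>>>>>> 		mte_restore_tags(entry, folio_page(folio, i));
>>>>>>>>
>>>>>>>> 	folio_set_mte_tagged(folio);
>>>>>>>> }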
>>>>>>>>
>>>>>>>> One (probably nonsense) concern that just sprang to mind about having
>>>>>>>> MTE work with large folios in general: is it possible that user space
>>>>>>>> could cause a large anon folio to be allocated (THP), then later mark
>>>>>>>> *part* of it to be tagged with MTE? In this case you would need to
>>>>>>>> apply tags to part of the folio only. Although I have a vague
>>>>>>>> recollection that any MTE areas have to be marked at mmap time and
>>>>>>>> therefore this type of thing is impossible?
>>>>>>>
>>>>>>> right, we might need to consider that only a part of the folio needs
>>>>>>> to be mapped and have its MTE tags restored. do_swap_page() can hit a
>>>>>>> large folio, but it only needs to fault in one page.
>>>>>>>
>>>>>>> A case can be quite simple, as below:
>>>>>>>
>>>>>>> 1. anon folio shared by process A and B;
>>>>>>> 2. add_to_swap() as a large folio;
>>>>>>> 3. try to unmap A and B;
>>>>>>> 4. after A is unmapped (its PTEs become swap entries), we do a
>>>>>>> MADV_DONTNEED on a part of the folio. This can happen very easily, as
>>>>>>> userspace is still working at the 4KB level; userspace heap management
>>>>>>> can free a basepage area by madvise(address, MADV_DONTNEED, 4KB);
>>>>>>> 5. A refaults on address + 8KB; we will hit the large folio in
>>>>>>> do_swap_page() but will only need to map one basepage, and we will
>>>>>>> never need the DONTNEEDed part in process A (see the sketch below).
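>>>>>>>
>>>>>>> A minimal userspace illustration of steps 4 and 5 (hypothetical
>>>>>>> addresses; PROT_MTE is arm64-only, from asm/mman.h):
>>>>>>>
>>>>>>> #include <sys/mman.h>
>>>>>>>
>>>>>>> #ifndef PROT_MTE
>>>>>>> #define PROT_MTE 0x20	/* arm64 asm/mman.h */
>>>>>>> #endif
>>>>>>>
>>>>>>> int main(void)
>>>>>>> {
>>>>>>> 	/* 64KB of MTE-enabled anon memory; may become one large folio */
>>>>>>> 	char *buf = mmap(NULL, 64 * 1024,
>>>>>>> 			 PROT_READ | PROT_WRITE | PROT_MTE,
>>>>>>> 			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>>>>>>
>>>>>>> 	if (buf == MAP_FAILED)
>>>>>>> 		return 1;
>>>>>>>
>>>>>>> 	/* ... imagine the range is swapped out as one large folio ... */
>>>>>>>
>>>>>>> 	madvise(buf, 4096, MADV_DONTNEED);	/* step 4: drop basepage 0 */
>>>>>>> 	buf[8192] = 1;				/* step 5: refault basepage 2 */
>>>>>>> 	return 0;
>>>>>>> }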
>>>>>>>
>>>>>>> Another, more complicated case can be mprotect() or munmap() on a part
>>>>>>> of a large folio. Since userspace has no idea of large folios in its
>>>>>>> mind, it can do all sorts of strange things. Are we sure that in all
>>>>>>> cases large folios have been split into small folios?
>>>>>
>>>>> I don't think the examples you cite are problematic. Although user space
>>>>> thinks about things in 4K pages, the kernel does things in units of
>>>>> folios. So a folio is either fully swapped out or not swapped out at
>>>>> all. MTE tags can be saved/restored per folio, even if only part of that
>>>>> folio ends up being mapped back into user space.
>>>
>>> I am not so optimistic :-)
>>>
>>> But zap_pte_range(), due to a DONTNEED on a part of a swapped-out folio,
>>> can free a part of the swap entries, and thus free a part of the MTE tags
>>> in a folio? After a process's large folios are swapped out, all PTEs in a
>>> large folio become swap entries, but a DONTNEED on a part of this area
>>> will only set a part of the swap entries to PTE_NONE, and thus decrease
>>> the swapcount of only that part:
>>>
>>> zap_pte_range()
>>>     -> entry = pte_to_swp_entry()
>>>         -> free_swap_and_cache(entry)
>>>             -> MTE tags invalidated
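>>>
>>> For reference, the arm64 invalidation hook looks roughly like this
>>> (paraphrased from arch/arm64 from memory; details may differ):
>>>
>>> #define __HAVE_ARCH_SWAP_INVALIDATE
>>> static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
>>> {
>>> 	if (system_supports_mte())
>>> 		mte_invalidate_tags(type, offset);
>>> }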
>>
>> OK I see what you mean.
>>
>> Just trying to summarize this, I think there are 2 questions behind all this:
>>
>> 1) Can we save/restore MTE tags at the granularity of a folio?
>>
>> I think the answer is no; we can enable MTE on individual pages within a
>> folio with mprotect(), and we can throw away tags on individual pages as
>> you describe above. So we have to continue to handle tags per-page.
> 
> Can you enlighten me why the scheme proposed by Steven doesn't work?

Are you referring to Steven's suggestion of reading the tags to see if they're
all zeros? I think that demonstrates my point that this has to be done per-page
and not per-folio? I'm also not sure what it buys us - instead of reading a
per-page flag we now have to read 128 bytes of tags for each page and check
whether they are zero.
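
(For the arithmetic, assuming 4KB pages: MTE keeps a 4-bit tag per 16-byte
granule, so 4096 / 16 = 256 granules * 4 bits = 1024 bits = 128 bytes of tag
data per page.)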

> 
> I mean, having a mixture of tagged vs. untagged is assumed to be the corner
> case, right?

Yes. But I'm not sure how we exploit that. I guess we could have a per-folio
flag: when set it means the whole folio is tagged, and when clear it means
fall back to checking the per-page flag?
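
Something like the below, purely as a sketch; folio_test_mte_tagged() is a
hypothetical folio-scope variant of the existing per-page check:

static bool page_needs_tag_restore(struct folio *folio, struct page *page)
{
	/* fast path: the whole folio is known to be tagged already */
	if (folio_test_mte_tagged(folio))	/* hypothetical helper */
		return false;

	/* slow path: fall back to the existing per-page flag */
	return !page_mte_tagged(page);
}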

