lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:   Fri, 24 Nov 2023 09:55:10 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Barry Song <21cnbao@...il.com>, Ryan Roberts <ryan.roberts@....com>
Cc:     steven.price@....com, akpm@...ux-foundation.org,
        catalin.marinas@....com, will@...nel.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org, mhocko@...e.com,
        shy828301@...il.com, v-songbaohua@...o.com,
        wangkefeng.wang@...wei.com, willy@...radead.org, xiang@...nel.org,
        ying.huang@...el.com, yuzhao@...gle.com
Subject: Re: [RFC V3 PATCH] arm64: mm: swap: save and restore mte tags for
 large folios

On 24.11.23 02:35, Barry Song wrote:
> On Mon, Nov 20, 2023 at 11:57 PM Ryan Roberts <ryan.roberts@....com> wrote:
>>
>> On 20/11/2023 09:11, David Hildenbrand wrote:
>>> On 17.11.23 19:41, Barry Song wrote:
>>>> On Fri, Nov 17, 2023 at 7:28 PM David Hildenbrand <david@...hat.com> wrote:
>>>>>
>>>>> On 17.11.23 01:15, Barry Song wrote:
>>>>>> On Fri, Nov 17, 2023 at 7:47 AM Barry Song <21cnbao@...il.com> wrote:
>>>>>>>
>>>>>>> On Thu, Nov 16, 2023 at 5:36 PM David Hildenbrand <david@...hat.com> wrote:
>>>>>>>>
>>>>>>>> On 15.11.23 21:49, Barry Song wrote:
>>>>>>>>> On Wed, Nov 15, 2023 at 11:16 PM David Hildenbrand <david@...hat.com> wrote:
>>>>>>>>>>
>>>>>>>>>> On 14.11.23 02:43, Barry Song wrote:
>>>>>>>>>>> This patch makes MTE tags saving and restoring support large folios,
>>>>>>>>>>> then we don't need to split them into base pages for swapping out
>>>>>>>>>>> on ARM64 SoCs with MTE.
>>>>>>>>>>>
>>>>>>>>>>> arch_prepare_to_swap() should take folio rather than page as parameter
>>>>>>>>>>> because we support THP swap-out as a whole.
>>>>>>>>>>>
>>>>>>>>>>> Meanwhile, arch_swap_restore() should use page parameter rather than
>>>>>>>>>>> folio as swap-in always works at the granularity of base pages right
>>>>>>>>>>> now.
>>>>>>>>>>
>>>>>>>>>> ... but then we always have order-0 folios and can pass a folio, or what
>>>>>>>>>> am I missing?
>>>>>>>>>
>>>>>>>>> Hi David,
>>>>>>>>> you missed the discussion here:
>>>>>>>>>
>>>>>>>>> https://lore.kernel.org/lkml/CAGsJ_4yXjex8txgEGt7+WMKp4uDQTn-fR06ijv4Ac68MkhjMDw@mail.gmail.com/
>>>>>>>>> https://lore.kernel.org/lkml/CAGsJ_4xmBAcApyK8NgVQeX_Znp5e8D4fbbhGguOkNzmh1Veocg@mail.gmail.com/
>>>>>>>>
>>>>>>>> Okay, so you want to handle the refault-from-swapcache case where you get a
>>>>>>>> large folio.
>>>>>>>>
>>>>>>>> I was mislead by your "folio as swap-in always works at the granularity of
>>>>>>>> base pages right now" comment.
>>>>>>>>
>>>>>>>> What you actually wanted to say is "While we always swap in small folios, we
>>>>>>>> might refault large folios from the swapcache, and we only want to restore
>>>>>>>> the tags for the page of the large folio we are faulting on."
>>>>>>>>
>>>>>>>> But, I do if we can't simply restore the tags for the whole thing at once
>>>>>>>> at make the interface page-free?
>>>>>>>>
>>>>>>>> Let me elaborate:
>>>>>>>>
>>>>>>>> IIRC, if we have a large folio in the swapcache, the swap entries/offset are
>>>>>>>> contiguous. If you know you are faulting on page[1] of the folio with a
>>>>>>>> given swap offset, you can calculate the swap offset for page[0] simply by
>>>>>>>> subtracting from the offset.
>>>>>>>>
>>>>>>>> See page_swap_entry() on how we perform this calculation.
>>>>>>>>
>>>>>>>>
>>>>>>>> So you can simply pass the large folio and the swap entry corresponding
>>>>>>>> to the first page of the large folio, and restore all tags at once.
>>>>>>>>
>>>>>>>> So the interface would be
>>>>>>>>
>>>>>>>> arch_prepare_to_swap(struct folio *folio);
>>>>>>>> void arch_swap_restore(struct page *folio, swp_entry_t start_entry);
>>>>>>>>
>>>>>>>> I'm sorry if that was also already discussed.
>>>>>>>
>>>>>>> This has been discussed. Steven, Ryan and I all don't think this is a good
>>>>>>> option. in case we have a large folio with 16 basepages, as do_swap_page
>>>>>>> can only map one base page for each page fault, that means we have
>>>>>>> to restore 16(tags we restore in each page fault) * 16(the times of page
>>>>>>> faults)
>>>>>>> for this large folio.
>>>>>>>
>>>>>>> and still the worst thing is the page fault in the Nth PTE of large folio
>>>>>>> might free swap entry as that swap has been in.
>>>>>>> do_swap_page()
>>>>>>> {
>>>>>>>       /*
>>>>>>>        * Remove the swap entry and conditionally try to free up the swapcache.
>>>>>>>        * We're already holding a reference on the page but haven't mapped it
>>>>>>>        * yet.
>>>>>>>        */
>>>>>>>        swap_free(entry);
>>>>>>> }
>>>>>>>
>>>>>>> So in the page faults other than N, I mean 0~N-1 and N+1 to 15, you might
>>>>>>> access
>>>>>>> a freed tag.
>>>>>>
>>>>>> And David, one more information is that to keep the parameter of
>>>>>> arch_swap_restore() unchanged as folio,
>>>>>> i actually tried an ugly approach in rfc v2:
>>>>>>
>>>>>> +void arch_swap_restore(swp_entry_t entry, struct folio *folio)
>>>>>> +{
>>>>>> + if (system_supports_mte()) {
>>>>>> +      /*
>>>>>> +       * We don't support large folios swap in as whole yet, but
>>>>>> +       * we can hit a large folio which is still in swapcache
>>>>>> +       * after those related processes' PTEs have been unmapped
>>>>>> +       * but before the swapcache folio  is dropped, in this case,
>>>>>> +       * we need to find the exact page which "entry" is mapping
>>>>>> +       * to. If we are not hitting swapcache, this folio won't be
>>>>>> +       * large
>>>>>> +     */
>>>>>> + struct page *page = folio_file_page(folio, swp_offset(entry));
>>>>>> + mte_restore_tags(entry, page);
>>>>>> + }
>>>>>> +}
>>>>>>
>>>>>> And obviously everybody in the discussion hated it :-)
>>>>>>
>>>>>
>>>>> I can relate :D
>>>>>
>>>>>> i feel the only way to keep API unchanged using folio is that we
>>>>>> support restoring PTEs
>>>>>> all together for the whole large folio and we support the swap-in of
>>>>>> large folios. This is
>>>>>> in my list to do, I will send a patchset based on Ryan's large anon
>>>>>> folios series after a
>>>>>> while. till that is really done, it seems using page rather than folio
>>>>>> is a better choice.
>>>>>
>>>>> I think just restoring all tags and remembering for a large folio that
>>>>> they have been restored might be the low hanging fruit. But as always,
>>>>> devil is in the detail :)
>>>>
>>>> Hi David,
>>>> thanks for all your suggestions though my feeling is this is too complex and
>>>> is not worth it for at least  three reasons.
>>>
>>> Fair enough.
>>>
>>>>
>>>> 1. In multi-thread and particularly multi-processes, we need some locks to
>>>> protect and help know if one process is the first one to restore tags and if
>>>> someone else is restoring tags when one process wants to restore. there
>>>> is not this kind of fine-grained lock at all.
>>>
>>> We surely always hold the folio lock on swapin/swapout, no? So when these
>>> functions are called.
>>>
>>> So that might just work already -- unless I am missing something important.
>>
>> We already have a page flag that we use to mark the page as having had its mte
>> state associated; PG_mte_tagged. This is currently per-page (and IIUC, Matthew
>> has been working to remove as many per-page flags as possible). Couldn't we just
>> make arch_swap_restore() take a folio, restore the tags for *all* the pages and
>> repurpose that flag to be per-folio (so head page only)? It looks like the the
>> mte code already manages all the serialization requirements too. Then
>> arch_swap_restore() can just exit early if it sees the flag is already set on
>> the folio.
>>
>> One (probably nonsense) concern that just sprung to mind about having MTE work
>> with large folios in general; is it possible that user space could cause a large
>> anon folio to be allocated (THP), then later mark *part* of it to be tagged with
>> MTE? In this case you would need to apply tags to part of the folio only.
>> Although I have a vague recollection that any MTE areas have to be marked at
>> mmap time and therefore this type of thing is impossible?
> 
> right, we might need to consider only a part of folio needs to be
> mapped and restored MTE tags.
> do_swap_page() can have a chance to hit a large folio but it only
> needs to fault-in a page.
> 
> A case can be quite simple as below,
> 
> 1. anon folio shared by process A and B
> 2. add_to_swap() as a large folio;
> 3. try to unmap A and B;
> 4. after A is unmapped(ptes become swap entries), we do a
> MADV_DONTNEED on a part of the folio. this can
> happen very easily as userspace is still working in 4KB level;
> userspace heap management can free an
> basepage area by MADV_DONTNEED;
> madvise(address, MADV_DONTNEED, 4KB);
> 5. A refault on address + 8KB, we will hit large folio in
> do_swap_page() but we will only need to map
> one basepage, we will never need this DONTNEEDed in process A.
> 
> another more complicated case can be mprotect and munmap a part of
> large folios. since userspace
> has no idea of large folios in their mind, they can do all strange
> things. are we sure in all cases,
> large folios have been splitted into small folios?

To handle that, we'd have to identify

a) if a subpage has an mte tag to save during swapout
b) if a subpage has an mte tag to restore during swapin

I suspect b) can be had from whatever datastructure we're using to 
actually save the tags?

For a), is there some way to have that information from the HW?

-- 
Cheers,

David / dhildenb

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ