Message-ID: <8831b4c9-4d14-4fac-84e6-66629aa32388@redhat.com>
Date:   Fri, 17 Nov 2023 12:25:43 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Barry Song <21cnbao@...il.com>
Cc:     steven.price@....com, akpm@...ux-foundation.org,
        ryan.roberts@....com, catalin.marinas@....com, will@...nel.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org, mhocko@...e.com,
        shy828301@...il.com, v-songbaohua@...o.com,
        wangkefeng.wang@...wei.com, willy@...radead.org, xiang@...nel.org,
        ying.huang@...el.com, yuzhao@...gle.com
Subject: Re: [RFC V3 PATCH] arm64: mm: swap: save and restore mte tags for
 large folios

On 17.11.23 00:47, Barry Song wrote:
> On Thu, Nov 16, 2023 at 5:36 PM David Hildenbrand <david@...hat.com> wrote:
>>
>> On 15.11.23 21:49, Barry Song wrote:
>>> On Wed, Nov 15, 2023 at 11:16 PM David Hildenbrand <david@...hat.com> wrote:
>>>>
>>>> On 14.11.23 02:43, Barry Song wrote:
>>>>> This patch makes MTE tag saving and restoring support large folios,
>>>>> so we no longer need to split them into base pages for swapping out
>>>>> on ARM64 SoCs with MTE.
>>>>>
>>>>> arch_prepare_to_swap() should take a folio rather than a page as its
>>>>> parameter because we support swapping out a THP as a whole.
>>>>>
>>>>> Meanwhile, arch_swap_restore() should use page parameter rather than
>>>>> folio as swap-in always works at the granularity of base pages right
>>>>> now.
>>>>
>>>> ... but then we always have order-0 folios and can pass a folio, or what
>>>> am I missing?
>>>
>>> Hi David,
>>> you missed the discussion here:
>>>
>>> https://lore.kernel.org/lkml/CAGsJ_4yXjex8txgEGt7+WMKp4uDQTn-fR06ijv4Ac68MkhjMDw@mail.gmail.com/
>>> https://lore.kernel.org/lkml/CAGsJ_4xmBAcApyK8NgVQeX_Znp5e8D4fbbhGguOkNzmh1Veocg@mail.gmail.com/
>>
>> Okay, so you want to handle the refault-from-swapcache case where you get a
>> large folio.
>>
>> I was misled by your "folio as swap-in always works at the granularity of
>> base pages right now" comment.
>>
>> What you actually wanted to say is "While we always swap in small folios, we
>> might refault large folios from the swapcache, and we only want to restore
>> the tags for the page of the large folio we are faulting on."
>>
>> But I do wonder if we can't simply restore the tags for the whole thing at
>> once and make the interface page-free?
>>
>> Let me elaborate:
>>
>> IIRC, if we have a large folio in the swapcache, the swap entries/offset are
>> contiguous. If you know you are faulting on page[1] of the folio with a
>> given swap offset, you can calculate the swap offset for page[0] simply by
>> subtracting from the offset.
>>
>> See page_swap_entry() on how we perform this calculation.
>>
>>
>> So you can simply pass the large folio and the swap entry corresponding
>> to the first page of the large folio, and restore all tags at once.
>>
>> So the interface would be
>>
>> arch_prepare_to_swap(struct folio *folio);
>> void arch_swap_restore(struct folio *folio, swp_entry_t start_entry);
>>
>> I'm sorry if that was also already discussed.
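
To make that concrete, here is roughly what I have in mind -- a completely
untested sketch, not a proper implementation; the caller would derive
start_entry from the folio's first page, e.g. via page_swap_entry(&folio->page):

void arch_swap_restore(struct folio *folio, swp_entry_t start_entry)
{
	long i;

	if (!system_supports_mte())
		return;

	/*
	 * Swap entries of a folio in the swapcache are contiguous, so the
	 * entry of page i is simply start_entry + i.
	 */
	for (i = 0; i < folio_nr_pages(folio); i++) {
		swp_entry_t entry = { .val = start_entry.val + i };

		mte_restore_tags(entry, folio_page(folio, i));
	}
}
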
> 
> This has been discussed. Steven, Ryan and I all don't think this is a good
> option. In case we have a large folio with 16 base pages, do_swap_page()
> can only map one base page per page fault, which means we would have to
> restore 16 (tags restored in each page fault) * 16 (the number of page
> faults) = 256 page tag restores for this large folio, instead of just 16.

Can't you remember that it's already been restored? That seems like a 
reasonable thing to have.

For large folios, don't we have plenty of page flags available in the tail pages?
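
Something like this is what I mean, on top of the sketch above (again untested,
and folio_test_and_set_mte_restored() is a made-up name for however we would
actually track that, e.g. using a spare page flag in one of the tail pages):

void arch_swap_restore(struct folio *folio, swp_entry_t start_entry)
{
	if (!system_supports_mte())
		return;

	/*
	 * Only the first refault into the large folio restores the tags;
	 * later faults on other pages of the same folio find the flag
	 * already set and return immediately, so we don't redo the work
	 * 16 times.
	 */
	if (folio_test_and_set_mte_restored(folio))
		return;

	/* ... restore all tags for the whole folio as in the sketch above ... */
}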

-- 
Cheers,

David / dhildenb
