Message-ID: <CAGsJ_4w4VgpO02YUVEn4pbKThg+SszD_bDpBGbKC9d2G90MpGA@mail.gmail.com>
Date:   Fri, 17 Nov 2023 08:15:48 +0800
From:   Barry Song <21cnbao@...il.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     steven.price@....com, akpm@...ux-foundation.org,
        ryan.roberts@....com, catalin.marinas@....com, will@...nel.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org, mhocko@...e.com,
        shy828301@...il.com, v-songbaohua@...o.com,
        wangkefeng.wang@...wei.com, willy@...radead.org, xiang@...nel.org,
        ying.huang@...el.com, yuzhao@...gle.com
Subject: Re: [RFC V3 PATCH] arm64: mm: swap: save and restore mte tags for
 large folios

On Fri, Nov 17, 2023 at 7:47 AM Barry Song <21cnbao@...il.com> wrote:
>
> On Thu, Nov 16, 2023 at 5:36 PM David Hildenbrand <david@...hat.com> wrote:
> >
> > On 15.11.23 21:49, Barry Song wrote:
> > > On Wed, Nov 15, 2023 at 11:16 PM David Hildenbrand <david@...hat.com> wrote:
> > >>
> > >> On 14.11.23 02:43, Barry Song wrote:
> > >>> This patch makes MTE tags saving and restoring support large folios,
> > >>> then we don't need to split them into base pages for swapping out
> > >>> on ARM64 SoCs with MTE.
> > >>>
> > >>> arch_prepare_to_swap() should take folio rather than page as parameter
> > >>> because we support THP swap-out as a whole.
> > >>>
> > >>> Meanwhile, arch_swap_restore() should use page parameter rather than
> > >>> folio as swap-in always works at the granularity of base pages right
> > >>> now.
> > >>
> > >> ... but then we always have order-0 folios and can pass a folio, or what
> > >> am I missing?
> > >
> > > Hi David,
> > > you missed the discussion here:
> > >
> > > https://lore.kernel.org/lkml/CAGsJ_4yXjex8txgEGt7+WMKp4uDQTn-fR06ijv4Ac68MkhjMDw@mail.gmail.com/
> > > https://lore.kernel.org/lkml/CAGsJ_4xmBAcApyK8NgVQeX_Znp5e8D4fbbhGguOkNzmh1Veocg@mail.gmail.com/
> >
> > Okay, so you want to handle the refault-from-swapcache case where you get a
> > large folio.
> >
> > I was misled by your "folio as swap-in always works at the granularity of
> > base pages right now" comment.
> >
> > What you actually wanted to say is "While we always swap in small folios, we
> > might refault large folios from the swapcache, and we only want to restore
> > the tags for the page of the large folio we are faulting on."
> >
> > But, I wonder if we can't simply restore the tags for the whole thing at
> > once and make the interface page-free?
> >
> > Let me elaborate:
> >
> > IIRC, if we have a large folio in the swapcache, the swap entries/offset are
> > contiguous. If you know you are faulting on page[1] of the folio with a
> > given swap offset, you can calculate the swap offset for page[0] simply by
> > subtracting from the offset.
> >
> > See page_swap_entry() on how we perform this calculation.
> >
> >
> > So you can simply pass the large folio and the swap entry corresponding
> > to the first page of the large folio, and restore all tags at once.
> >
> > So the interface would be
> >
> > arch_prepare_to_swap(struct folio *folio);
> > void arch_swap_restore(struct folio *folio, swp_entry_t start_entry);
> >
> > I'm sorry if that was also already discussed.
>
> This has been discussed. Steven, Ryan and I all don't think this is a good
> option. In case we have a large folio with 16 base pages, since do_swap_page()
> can only map one base page per page fault, we would have to restore
> 16 (tags restored in each page fault) * 16 (the number of page faults)
> tags for this large folio.
>
> And the worst thing is that the page fault on the Nth PTE of the large
> folio might free the swap entry, as that swap slot has been swapped in:
> do_swap_page()
> {
>    /*
>     * Remove the swap entry and conditionally try to free up the swapcache.
>     * We're already holding a reference on the page but haven't mapped it
>     * yet.
>     */
>     swap_free(entry);
> }
>
> So in the page faults other than N, i.e. 0~N-1 and N+1~15, you might
> access a freed tag.
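
To make the two points above concrete, here is a tiny user-space sketch
(illustrative only, not kernel code; both function names are made up):

```c
#include <assert.h>

/*
 * 1) The offset arithmetic David describes: within a large folio in the
 *    swapcache the swap offsets are contiguous, so page[0]'s offset is
 *    any tail page's offset minus its index in the folio
 *    (cf. page_swap_entry()).
 */
static unsigned long folio_start_offset(unsigned long page_offset,
					unsigned long idx_in_folio)
{
	return page_offset - idx_in_folio;
}

/*
 * 2) The cost concern: restoring tags for the whole folio on every one
 *    of the nr_pages faults performs nr_pages * nr_pages restore
 *    operations, versus nr_pages total with a per-page interface.
 */
static unsigned long whole_folio_restores(unsigned long nr_pages)
{
	return nr_pages * nr_pages;
}
```

For a 16-page folio that is 256 restore operations instead of 16.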

And David, one more piece of information: to keep the parameter of
arch_swap_restore() unchanged as folio, I actually tried an ugly
approach in RFC v2:

+void arch_swap_restore(swp_entry_t entry, struct folio *folio)
+{
+	if (system_supports_mte()) {
+		/*
+		 * We don't support large folios swapped in as a whole yet,
+		 * but we can hit a large folio which is still in the
+		 * swapcache after the related processes' PTEs have been
+		 * unmapped but before the swapcache folio is dropped; in
+		 * this case, we need to find the exact page which "entry"
+		 * is mapping to. If we are not hitting the swapcache, this
+		 * folio won't be large.
+		 */
+		struct page *page = folio_file_page(folio, swp_offset(entry));
+
+		mte_restore_tags(entry, page);
+	}
+}

And obviously everybody in the discussion hated it :-)

I feel the only way to keep the API unchanged using folio is to support
restoring PTEs all together for the whole large folio and to support
swap-in of large folios. This is on my to-do list; I will send a patchset
based on Ryan's large anon folios series after a while. Till that is
really done, it seems using page rather than folio is a better choice.
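
For reference, a user-space sketch of the index arithmetic that
folio_file_page() boils down to (assuming the folio size is a power of
two; the names here are illustrative, not the kernel implementation):

```c
#include <assert.h>

/*
 * folio_file_page(folio, index) returns the page at
 * index & (folio_nr_pages(folio) - 1); modelled here in user space so
 * the masking is easy to see. swp_off stands in for swp_offset(entry).
 */
static unsigned long page_index_in_folio(unsigned long swp_off,
					 unsigned long nr_pages)
{
	/* nr_pages must be a power of two, as folio sizes are */
	return swp_off & (nr_pages - 1);
}
```

So for a 16-page folio, swap offset 0x1003 maps to page[3] of the folio.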

>
> >
> > BUT, IIRC in the context of
> >
> > commit cfeed8ffe55b37fa10286aaaa1369da00cb88440
> > Author: David Hildenbrand <david@...hat.com>
> > Date:   Mon Aug 21 18:08:46 2023 +0200
> >
> >      mm/swap: stop using page->private on tail pages for THP_SWAP
> >
> >      Patch series "mm/swap: stop using page->private on tail pages for THP_SWAP
> >      + cleanups".
> >
> >      This series stops using page->private on tail pages for THP_SWAP, replaces
> >      folio->private by folio->swap for swapcache folios, and starts using
> >      "new_folio" for tail pages that we are splitting to remove the usage of
> >      page->private for swapcache handling completely.
> >
> > As long as the folio is in the swapcache, we even do have the proper
> > swp_entry_t start_entry available as folio_swap_entry(folio).
> >
> > But now I am confused when we actually would have to pass
> > "swp_entry_t start_entry". We shouldn't if the folio is in the swapcache ...
> >
>
> Nope, hitting the swapcache doesn't necessarily mean tags have been restored.
> Say A forks B, C, D, E and F, and A, B, C, D, E, F share the swap slot.
> We have two chances to hit the swapcache:
> 1. swap-out: unmap has been done but the folios haven't been dropped;
> 2. swap-in: shared processes allocate folios and add them to the swapcache.
>
> For 2, if A gets the fault earlier than B, A will allocate the folio and
> add it to the swapcache, then B will hit the swapcache. But if B's CPU is
> faster than A, B might still map the PTE earlier than A, though A is the
> one which added the page to the swapcache. We have to make sure the MTE
> tags are there when the mapping is done.
>
> > --
> > Cheers,
> >
> > David / dhildenb

Thanks
Barry
