Message-ID: <CAF8kJuPtR86rhZ1-8Y96w36-+J2qgzJh=tWJCtCmVjaqYHJqqA@mail.gmail.com>
Date: Fri, 26 Jan 2024 15:14:02 -0800
From: Chris Li <chrisl@...nel.org>
To: Barry Song <21cnbao@...il.com>
Cc: ryan.roberts@....com, akpm@...ux-foundation.org, david@...hat.com, 
	linux-mm@...ck.org, linux-kernel@...r.kernel.org, mhocko@...e.com, 
	shy828301@...il.com, wangkefeng.wang@...wei.com, willy@...radead.org, 
	xiang@...nel.org, ying.huang@...el.com, yuzhao@...gle.com, surenb@...gle.com, 
	steven.price@....com, Barry Song <v-songbaohua@...o.com>
Subject: Re: [PATCH RFC 1/6] arm64: mm: swap: support THP_SWAP on hardware
 with MTE

On Thu, Jan 18, 2024 at 3:11 AM Barry Song <21cnbao@...il.com> wrote:
>
> From: Barry Song <v-songbaohua@...o.com>
>
> Commit d0637c505f8a1 ("arm64: enable THP_SWAP for arm64") brings up
> THP_SWAP on ARM64, but it doesn't enable THP_SWAP on hardware with
> MTE, as the MTE code assumes that tag save/restore always handles a
> folio with only one page.
>
> This limitation should be removed as more and more ARM64 SoCs have
> MTE; the co-existence of MTE and THP_SWAP is becoming increasingly
> important.
>
> This patch makes MTE tag saving support large folios, so we no longer
> need to split large folios into base pages for swapping out on ARM64
> SoCs with MTE.
>
> arch_prepare_to_swap() should take a folio rather than a page as its
> parameter because we support THP swap-out as a whole: it saves tags
> for all pages in a large folio.
>
> As we now restore tags based on the folio, arch_swap_restore() may add
> some extra loops and early exits while refaulting a large folio that
> is still in the swapcache in do_swap_page(). If a large folio has nr
> pages, do_swap_page() only sets the PTE of the particular page that
> caused the fault. Thus do_swap_page() runs nr times and, each time,
> arch_swap_restore() loops nr times over the subpages of the folio, so
> right now the algorithmic complexity becomes O(nr^2).
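
(To make that cost concrete: for a 16-page large folio repeatedly
refaulted from the swapcache, that is 16 calls into do_swap_page(),
each walking all 16 subpages in arch_swap_restore(), i.e. roughly
16 * 16 = 256 passes through the tag-restore path instead of 16.)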
>
> Once we support mapping large folios in do_swap_page(), the extra
> loops and early exits will decrease, but they cannot be removed
> completely, because a large folio might be only partially tagged in
> corner cases such as:
> 1. a large folio in the swapcache can be partially unmapped, so the
> MTE tags for the unmapped pages will be invalidated;
> 2. users might use mprotect() to enable MTE on only part of a large
> folio.
>
> arch_thp_swp_supported() is dropped since ARM64 MTE was the only one
> that needed it.
>
> Reviewed-by: Steven Price <steven.price@....com>
> Signed-off-by: Barry Song <v-songbaohua@...o.com>
> ---
>  arch/arm64/include/asm/pgtable.h | 21 +++-------------
>  arch/arm64/mm/mteswap.c          | 42 ++++++++++++++++++++++++++++++++
>  include/linux/huge_mm.h          | 12 ---------
>  include/linux/pgtable.h          |  2 +-
>  mm/page_io.c                     |  2 +-
>  mm/swap_slots.c                  |  2 +-
>  6 files changed, 49 insertions(+), 32 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 79ce70fbb751..9902395ca426 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -45,12 +45,6 @@
>         __flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> -static inline bool arch_thp_swp_supported(void)
> -{
> -       return !system_supports_mte();
> -}
> -#define arch_thp_swp_supported arch_thp_swp_supported
> -
>  /*
>   * Outside of a few very special situations (e.g. hibernation), we always
>   * use broadcast TLB invalidation instructions, therefore a spurious page
> @@ -1042,12 +1036,8 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  #ifdef CONFIG_ARM64_MTE
>
>  #define __HAVE_ARCH_PREPARE_TO_SWAP
> -static inline int arch_prepare_to_swap(struct page *page)
> -{
> -       if (system_supports_mte())
> -               return mte_save_tags(page);
> -       return 0;
> -}
> +#define arch_prepare_to_swap arch_prepare_to_swap

This seems like a no-op: it defines "arch_prepare_to_swap" back to itself.
What am I missing?

I see. Answering my own question: I guess you want to allow an
architecture to override arch_prepare_to_swap.
Wouldn't testing against __HAVE_ARCH_PREPARE_TO_SWAP be enough to support that?

Maybe I need to understand better how you want others to extend this
code before making suggestions.
As it is, this looks strange.
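
For reference, the two conventions I have in mind look roughly like
this in the generic header (a sketch, not the exact
include/linux/pgtable.h contents):

        /* Convention 1: a separate guard macro, which this series already defines. */
        #ifndef __HAVE_ARCH_PREPARE_TO_SWAP
        static inline int arch_prepare_to_swap(struct folio *folio)
        {
                return 0;
        }
        #endif

        /* Convention 2: the function name itself doubles as the guard,
         * as arch_thp_swp_supported() used to do. */
        #ifndef arch_prepare_to_swap
        static inline int arch_prepare_to_swap(struct folio *folio)
        {
                return 0;
        }
        #endif

With convention 1 already in place, the extra
"#define arch_prepare_to_swap arch_prepare_to_swap" looks redundant.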

> +extern int arch_prepare_to_swap(struct folio *folio);
>
>  #define __HAVE_ARCH_SWAP_INVALIDATE
>  static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
> @@ -1063,11 +1053,8 @@ static inline void arch_swap_invalidate_area(int type)
>  }
>
>  #define __HAVE_ARCH_SWAP_RESTORE
> -static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
> -{
> -       if (system_supports_mte())
> -               mte_restore_tags(entry, &folio->page);
> -}
> +#define arch_swap_restore arch_swap_restore

Same here.

> +extern void arch_swap_restore(swp_entry_t entry, struct folio *folio);
>
>  #endif /* CONFIG_ARM64_MTE */
>
> diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
> index a31833e3ddc5..b9ca1b35902f 100644
> --- a/arch/arm64/mm/mteswap.c
> +++ b/arch/arm64/mm/mteswap.c
> @@ -68,6 +68,13 @@ void mte_invalidate_tags(int type, pgoff_t offset)
>         mte_free_tag_storage(tags);
>  }
>
> +static inline void __mte_invalidate_tags(struct page *page)
> +{
> +       swp_entry_t entry = page_swap_entry(page);
> +
> +       mte_invalidate_tags(swp_type(entry), swp_offset(entry));
> +}
> +
>  void mte_invalidate_tags_area(int type)
>  {
>         swp_entry_t entry = swp_entry(type, 0);
> @@ -83,3 +90,38 @@ void mte_invalidate_tags_area(int type)
>         }
>         xa_unlock(&mte_pages);
>  }
> +
> +int arch_prepare_to_swap(struct folio *folio)
> +{
> +       int err;
> +       long i;
> +
> +       if (system_supports_mte()) {
Very minor nitpick.

You can do
if (!system_supports_mte())
    return 0;

here, and then the for loop would have one less level of indentation; the
function looks flatter.
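
Something like this (untested sketch, same logic with the early return):

        int arch_prepare_to_swap(struct folio *folio)
        {
                long i, nr;
                int err;

                if (!system_supports_mte())
                        return 0;

                nr = folio_nr_pages(folio);
                for (i = 0; i < nr; i++) {
                        err = mte_save_tags(folio_page(folio, i));
                        if (err)
                                goto out;
                }
                return 0;

        out:
                while (i--)
                        __mte_invalidate_tags(folio_page(folio, i));
                return err;
        }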

> +               long nr = folio_nr_pages(folio);
> +
> +               for (i = 0; i < nr; i++) {
> +                       err = mte_save_tags(folio_page(folio, i));
> +                       if (err)
> +                               goto out;
> +               }
> +       }
> +       return 0;
> +
> +out:
> +       while (i--)
> +               __mte_invalidate_tags(folio_page(folio, i));
> +       return err;
> +}
> +
> +void arch_swap_restore(swp_entry_t entry, struct folio *folio)
> +{
> +       if (system_supports_mte()) {

Same here.
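
I.e. something like (sketch):

        void arch_swap_restore(swp_entry_t entry, struct folio *folio)
        {
                long i, nr;

                if (!system_supports_mte())
                        return;

                nr = folio_nr_pages(folio);
                entry.val -= swp_offset(entry) & (nr - 1);
                for (i = 0; i < nr; i++) {
                        mte_restore_tags(entry, folio_page(folio, i));
                        entry.val++;
                }
        }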

Looks good otherwise. None of the nitpicks are deal breakers.

Acked-by: Chris Li <chrisl@...nel.org>


Chris

> +               long i, nr = folio_nr_pages(folio);
> +
> +               entry.val -= swp_offset(entry) & (nr - 1);
> +               for (i = 0; i < nr; i++) {
> +                       mte_restore_tags(entry, folio_page(folio, i));
> +                       entry.val++;
> +               }
> +       }
> +}
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 5adb86af35fc..67219d2309dd 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -530,16 +530,4 @@ static inline int split_folio(struct folio *folio)
>         return split_folio_to_list(folio, NULL);
>  }
>
> -/*
> - * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to
> - * limitations in the implementation like arm64 MTE can override this to
> - * false
> - */
> -#ifndef arch_thp_swp_supported
> -static inline bool arch_thp_swp_supported(void)
> -{
> -       return true;
> -}
> -#endif
> -
>  #endif /* _LINUX_HUGE_MM_H */
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index f6d0e3513948..37fe83b0c358 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -925,7 +925,7 @@ static inline int arch_unmap_one(struct mm_struct *mm,
>   * prototypes must be defined in the arch-specific asm/pgtable.h file.
>   */
>  #ifndef __HAVE_ARCH_PREPARE_TO_SWAP
> -static inline int arch_prepare_to_swap(struct page *page)
> +static inline int arch_prepare_to_swap(struct folio *folio)
>  {
>         return 0;
>  }
> diff --git a/mm/page_io.c b/mm/page_io.c
> index ae2b49055e43..a9a7c236aecc 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -189,7 +189,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
>          * Arch code may have to preserve more data than just the page
>          * contents, e.g. memory tags.
>          */
> -       ret = arch_prepare_to_swap(&folio->page);
> +       ret = arch_prepare_to_swap(folio);
>         if (ret) {
>                 folio_mark_dirty(folio);
>                 folio_unlock(folio);
> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> index 0bec1f705f8e..2325adbb1f19 100644
> --- a/mm/swap_slots.c
> +++ b/mm/swap_slots.c
> @@ -307,7 +307,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
>         entry.val = 0;
>
>         if (folio_test_large(folio)) {
> -               if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported())
> +               if (IS_ENABLED(CONFIG_THP_SWAP))
>                         get_swap_pages(1, &entry, folio_nr_pages(folio));
>                 goto out;
>         }
> --
> 2.34.1
>
>
