Message-Id: <20231102223643.7733-1-v-songbaohua@oppo.com>
Date: Fri, 3 Nov 2023 11:36:43 +1300
From: Barry Song <21cnbao@...il.com>
To: ryan.roberts@....com
Cc: 21cnbao@...il.com, Steven.Price@....com, akpm@...ux-foundation.org,
david@...hat.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
mhocko@...e.com, shy828301@...il.com, wangkefeng.wang@...wei.com,
willy@...radead.org, xiang@...nel.org, ying.huang@...el.com,
yuzhao@...gle.com
Subject: Re: [PATCH v3 4/4] mm: swap: Swap-out small-sized THP without splitting
> But, yes, it would be nice to fix that! And if I've understood the problem
> correctly, it doesn't sound like it should be too hard? Is this something you
> are volunteering for?? :)
Unfortunately I don't currently have real hardware with MTE that can run the latest
kernel, but I have written an RFC; it would be nice to get someone to test it. Let
me figure out if we can find someone :-) (a rough test sketch follows the patch below
for whoever picks it up.)
[RFC PATCH] arm64: mm: swap: save and restore mte tags for large folios
This patch makes MTE tag saving and restoring support large folios, so
we no longer need to split them into base pages for swapping on ARM64
SoCs with MTE.
---
arch/arm64/include/asm/pgtable.h | 21 ++++-----------------
 arch/arm64/mm/mteswap.c          | 30 ++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+), 17 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 7f7d9b1df4e5..b12783dca00a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -45,12 +45,6 @@
__flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-static inline bool arch_thp_swp_supported(void)
-{
-	return !system_supports_mte();
-}
-#define arch_thp_swp_supported arch_thp_swp_supported
-
/*
* Outside of a few very special situations (e.g. hibernation), we always
* use broadcast TLB invalidation instructions, therefore a spurious page
@@ -1028,12 +1022,8 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
#ifdef CONFIG_ARM64_MTE
#define __HAVE_ARCH_PREPARE_TO_SWAP
-static inline int arch_prepare_to_swap(struct page *page)
-{
-	if (system_supports_mte())
-		return mte_save_tags(page);
-	return 0;
-}
+#define arch_prepare_to_swap arch_prepare_to_swap
+extern int arch_prepare_to_swap(struct page *page);
#define __HAVE_ARCH_SWAP_INVALIDATE
static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
@@ -1049,11 +1039,8 @@ static inline void arch_swap_invalidate_area(int type)
}
#define __HAVE_ARCH_SWAP_RESTORE
-static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
-{
-	if (system_supports_mte())
-		mte_restore_tags(entry, &folio->page);
-}
+#define arch_swap_restore arch_swap_restore
+extern void arch_swap_restore(swp_entry_t entry, struct folio *folio);
#endif /* CONFIG_ARM64_MTE */
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index a31833e3ddc5..e5637e931e4f 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -83,3 +83,33 @@ void mte_invalidate_tags_area(int type)
}
xa_unlock(&mte_pages);
}
+
+int arch_prepare_to_swap(struct page *page)
+{
+	if (system_supports_mte()) {
+		struct folio *folio = page_folio(page);
+		long i, nr = folio_nr_pages(folio);
+		int err;
+
+		/* save tags for each subpage; bail out on the first failure */
+		for (i = 0; i < nr; i++) {
+			err = mte_save_tags(folio_page(folio, i));
+			if (err)
+				return err;
+		}
+	}
+	return 0;
+}
+
+void arch_swap_restore(swp_entry_t entry, struct folio *folio)
+{
+	if (system_supports_mte()) {
+		long i, nr = folio_nr_pages(folio);
+
+		/* each subpage's tags live in its own swap slot */
+		for (i = 0; i < nr; i++) {
+			mte_restore_tags(entry, folio_page(folio, i));
+			entry.val++;
+		}
+	}
+}
--
2.25.1
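
In case someone does get a chance to test this, below is a rough sketch
of the userspace check I have in mind. It is hypothetical and untested
(again, I have no MTE hardware at hand): it assumes an MTE-capable CPU,
a configured swap device, THP enabled, and a compiler accepting
-march=armv8.5-a+memtag. The set_tag()/get_tag() helpers are my own
inline-asm wrappers, not from any existing selftest.

/*
 * mte-swap-test.c - hypothetical sketch, untested (no MTE hardware here).
 * Build with: gcc -O2 -march=armv8.5-a+memtag mte-swap-test.c
 * Needs an MTE-capable CPU, a swap device and THP enabled.
 */
#include <stdio.h>
#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE	0x20	/* arm64-specific mmap() protection bit */
#endif
#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT	21
#endif

/* store allocation tag 'tag' for the 16-byte granule at ptr (STG) */
static inline void set_tag(char *ptr, unsigned long tag)
{
	char *t = (char *)((unsigned long)ptr | (tag << 56));

	asm volatile("stg %0, [%0]" : : "r"(t) : "memory");
}

/* read back the allocation tag for the granule at ptr (LDG) */
static inline unsigned long get_tag(char *ptr)
{
	unsigned long t = (unsigned long)ptr;

	asm volatile("ldg %0, [%1]" : "+r"(t) : "r"(ptr));
	return (t >> 56) & 0xf;
}

int main(void)
{
	size_t len = 2UL << 20;	/* big enough for a THP; alignment not forced */
	size_t i;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_MTE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	for (i = 0; i < len; i += 16) {
		p[i] = 1;			/* fault the pages in */
		set_tag(p + i, (i >> 4) & 0xf);
	}

	/* push the folio out to swap (no effect without a swap device) */
	if (madvise(p, len, MADV_PAGEOUT))
		perror("madvise(MADV_PAGEOUT)");

	for (i = 0; i < len; i += 16) {	/* first access swaps back in */
		if (get_tag(p + i) != ((i >> 4) & 0xf)) {
			printf("tag mismatch at offset %zu\n", i);
			return 1;
		}
	}
	printf("all tags survived the swap round trip\n");
	return 0;
}

The idea is that with this patch on top of your series, MADV_PAGEOUT
should push the whole large folio to swap without splitting, and the
LDG loop then verifies that every 16-byte granule's tag survived the
round trip.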
> Thanks,
> Ryan
Barry