Message-ID: <2fe5ce7e-9c5c-4df4-b4fc-9fd3d9b2dccb@arm.com>
Date: Fri, 3 Nov 2023 13:57:49 +0000
From: Steven Price <steven.price@....com>
To: Ryan Roberts <ryan.roberts@....com>, Barry Song <21cnbao@...il.com>
Cc: akpm@...ux-foundation.org, david@...hat.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, mhocko@...e.com,
shy828301@...il.com, wangkefeng.wang@...wei.com,
willy@...radead.org, xiang@...nel.org, ying.huang@...el.com,
yuzhao@...gle.com
Subject: Re: [PATCH v3 4/4] mm: swap: Swap-out small-sized THP without
splitting
On 03/11/2023 11:31, Ryan Roberts wrote:
> On 02/11/2023 22:36, Barry Song wrote:
>>> But, yes, it would be nice to fix that! And if I've understood the problem
>>> correctly, it doesn't sound like it should be too hard? Is this something you
>>> are volunteering for?? :)
>>
>> Unfortunately, right now I don't have real hardware with MTE that can run the
>> latest kernel. But I have written an RFC; it would be nice to get someone to
>> test it. Let me figure out if we can get someone :-)
>
> OK, let me know if you find someone. Otherwise I can have a hunt around to see
> if I can test it.
>
>>
>> [RFC PATCH] arm64: mm: swap: save and restore mte tags for large folios
>>
>> This patch makes MTE tag saving and restoring support large folios, so
>> that we don't need to split them into base pages for swapping on
>> ARM64 SoCs with MTE.
>>
>> ---
>> arch/arm64/include/asm/pgtable.h | 21 ++++-----------------
>> arch/arm64/mm/mteswap.c | 20 ++++++++++++++++++++
>> 2 files changed, 24 insertions(+), 17 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index 7f7d9b1df4e5..b12783dca00a 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -45,12 +45,6 @@
>> __flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
>> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>
>> -static inline bool arch_thp_swp_supported(void)
>> -{
>> - return !system_supports_mte();
>> -}
>> -#define arch_thp_swp_supported arch_thp_swp_supported
>
> IIRC, arm64 was the only arch implementing this, so perhaps it should be ripped
> out from the core code now?
>
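(For reference, the generic fallback that would go away lives in
include/linux/huge_mm.h - quoting from memory, so worth double-checking:

	#ifndef arch_thp_swp_supported
	static inline bool arch_thp_swp_supported(void)
	{
		return true;
	}
	#endif

With the arm64 override gone the fallback always returns true, so the
only caller, folio_alloc_swap() in mm/swap_slots.c, could drop the
check as well.)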
>> -
>> /*
>> * Outside of a few very special situations (e.g. hibernation), we always
>> * use broadcast TLB invalidation instructions, therefore a spurious page
>> @@ -1028,12 +1022,8 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>> #ifdef CONFIG_ARM64_MTE
>>
>> #define __HAVE_ARCH_PREPARE_TO_SWAP
>> -static inline int arch_prepare_to_swap(struct page *page)
>> -{
>> - if (system_supports_mte())
>> - return mte_save_tags(page);
>> - return 0;
>> -}
>> +#define arch_prepare_to_swap arch_prepare_to_swap
>> +extern int arch_prepare_to_swap(struct page *page);
>
> I think it would be better to modify this API to take a folio explicitly. The
> caller already has the folio.
>
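(i.e. roughly this - just a sketch of the prototype change, untested:

	#define __HAVE_ARCH_PREPARE_TO_SWAP
	#define arch_prepare_to_swap arch_prepare_to_swap
	extern int arch_prepare_to_swap(struct folio *folio);

so the caller passes the folio it already has, and the arm64
implementation doesn't need to call page_folio() at all.)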
>>
>> #define __HAVE_ARCH_SWAP_INVALIDATE
>> static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
>> @@ -1049,11 +1039,8 @@ static inline void arch_swap_invalidate_area(int type)
>> }
>>
>> #define __HAVE_ARCH_SWAP_RESTORE
>> -static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
>> -{
>> - if (system_supports_mte())
>> - mte_restore_tags(entry, &folio->page);
>> -}
>> +#define arch_swap_restore arch_swap_restore
>> +extern void arch_swap_restore(swp_entry_t entry, struct folio *folio);
>>
>> #endif /* CONFIG_ARM64_MTE */
>>
>> diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
>> index a31833e3ddc5..e5637e931e4f 100644
>> --- a/arch/arm64/mm/mteswap.c
>> +++ b/arch/arm64/mm/mteswap.c
>> @@ -83,3 +83,23 @@ void mte_invalidate_tags_area(int type)
>> }
>> xa_unlock(&mte_pages);
>> }
>> +
>> +int arch_prepare_to_swap(struct page *page)
>> +{
>> + if (system_supports_mte()) {
>> + struct folio *folio = page_folio(page);
>> + long i, nr = folio_nr_pages(folio);
>> + for (i = 0; i < nr; i++)
>> + return mte_save_tags(folio_page(folio, i));
>
> This will return after saving the first page of the folio! You will need to add
> each page in a loop, and if you get an error at any point, you will need to
> remove the pages that you already added successfully, by calling
> arch_swap_invalidate_page(), as far as I can see. Steven, can you confirm?
Yes, that's right. mte_save_tags() needs to allocate memory, so it can
fail; if it does fail, arch_prepare_to_swap() would need to put things
back how they were, with calls to mte_invalidate_tags() (although I
think you'd actually want to refactor that to create a function which
takes a struct page *). Something along these lines is what I'd expect,
as sketched below.
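An entirely untested sketch, which assumes a new __mte_invalidate_tags()
helper (taking a struct page *) split out of mte_invalidate_tags():

	int arch_prepare_to_swap(struct page *page)
	{
		struct folio *folio = page_folio(page);
		long i, nr = folio_nr_pages(folio);
		int err;

		if (!system_supports_mte())
			return 0;

		for (i = 0; i < nr; i++) {
			err = mte_save_tags(folio_page(folio, i));
			if (err) {
				/* unwind: drop the tags already saved for this folio */
				while (i--)
					__mte_invalidate_tags(folio_page(folio, i));
				return err;
			}
		}
		return 0;
	}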
Steve
>> + }
>> + return 0;
>> +}
>> +
>> +void arch_swap_restore(swp_entry_t entry, struct folio *folio)
>> +{
>> + if (system_supports_mte()) {
>> + long i, nr = folio_nr_pages(folio);
>> + for (i = 0; i < nr; i++)
>> + mte_restore_tags(entry, folio_page(folio, i));
>
> swap-in currently doesn't support large folios - everything is a
> single-page folio. So this isn't technically needed. But from the API
> POV, it seems reasonable to make this change - except your
> implementation is broken: you are currently setting every page in the
> folio to use the same tags as the first page. You need to increment
> the swap entry for each page.
>
> Thanks,
> Ryan
>
>
>> + }
>> +}
>
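For the restore side, incrementing the entry as Ryan describes would
look roughly like this (untested sketch; it relies on the swap offset
living in the low bits of swp_entry_t, so entry.val++ advances to the
next slot of the folio's contiguous swap allocation):

	void arch_swap_restore(swp_entry_t entry, struct folio *folio)
	{
		long i, nr = folio_nr_pages(folio);

		if (!system_supports_mte())
			return;

		for (i = 0; i < nr; i++) {
			/* restore this page's tags, then move to the next swap slot */
			mte_restore_tags(entry, folio_page(folio, i));
			entry.val++;
		}
	}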