Message-Id: <20220530151730.39596f41e284b5686acba04f@linux-foundation.org>
Date: Mon, 30 May 2022 15:17:30 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Barry Song <21cnbao@...il.com>
Cc: catalin.marinas@....com, will@...nel.org, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
zhangshiming@...o.com, guojian@...o.com, hanchuanhua@...o.com,
Barry Song <v-songbaohua@...o.com>,
"Huang, Ying" <ying.huang@...el.com>,
Minchan Kim <minchan@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Hugh Dickins <hughd@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Steven Price <steven.price@....com>,
Yang Shi <shy828301@...il.com>
Subject: Re: [PATCH v2] arm64: enable THP_SWAP for arm64
On Fri, 27 May 2022 22:06:44 +1200 Barry Song <21cnbao@...il.com> wrote:
> From: Barry Song <v-songbaohua@...o.com>
>
> THP_SWAP has been proven to improve swap throughput significantly
> on x86_64, according to commit bd4c82c22c367e ("mm, THP, swap: delay
> splitting THP after swapped out").
> As long as arm64 uses a 4K page size, it is quite similar to x86_64
> in having 2MB PMD THPs, so we should see a similar improvement.
> For other page sizes such as 16KB and 64KB, PMDs might be too large,
> and negative side effects such as IO latency might become a problem.
> Thus, we only enable THP_SWAP for the configuration that matches
> x86_64.
> A corner case is that MTE assumes only base pages can be swapped.
> We won't enable THP_SWAP for arm64 hardware with MTE support until
> MTE is re-architected.
>
> ...
>
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -45,6 +45,8 @@
> __flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> +#define arch_thp_swp_supported !system_supports_mte
Does that even work?

	if (arch_thp_swp_supported())

expands to

	if (!system_supports_mte())

so I guess it does work.  Is this ugly party trick required for some
reason?  If so, an apologetic comment describing why would be helpful.

Otherwise, can we use a static inline function here, as we do with the
stub function?
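
Something like the following (an untested sketch, reusing the
system_supports_mte() check from the patch) would avoid the macro
trick entirely:

	#define arch_thp_swp_supported arch_thp_swp_supported
	static inline bool arch_thp_swp_supported(void)
	{
		/* MTE only preserves tags for base-page swap for now */
		return !system_supports_mte();
	}

The #define alongside the static inline is what lets the generic
header's #ifndef detect the arch override.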
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -461,4 +461,16 @@ static inline int split_folio_to_list(struct folio *folio,
> return split_huge_page_to_list(&folio->page, list);
> }
>
> +/*
> + * Architectures that select ARCH_WANTS_THP_SWAP but cannot support
> + * THP_SWAP due to implementation limitations (e.g. arm64 MTE) can
> + * override this to return false.
> + */
> +#ifndef arch_thp_swp_supported
> +static inline bool arch_thp_swp_supported(void)
> +{
> + return true;
> +}
Missing a #define arch_thp_swp_supported arch_thp_swp_supported here;
see the corrected fallback sketched below the hunk.
> +#endif
> +
> #endif /* _LINUX_HUGE_MM_H */
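
With that, the fallback would read (again untested):

	#ifndef arch_thp_swp_supported
	static inline bool arch_thp_swp_supported(void)
	{
		return true;
	}
	#define arch_thp_swp_supported arch_thp_swp_supported
	#endif

which keeps the symbol defined as a macro whether or not an
architecture overrides it.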
Otherwise looks OK to me. Please include it in the arm64 tree if/when
it's considered ready.