Message-ID: <YoyTWaDmSiBUkaeg@arm.com>
Date: Tue, 24 May 2022 09:12:09 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Barry Song <21cnbao@...il.com>
Cc: akpm@...ux-foundation.org, will@...nel.org, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
hanchuanhua@...o.com, zhangshiming@...o.com, guojian@...o.com,
Barry Song <v-songbaohua@...o.com>,
"Huang, Ying" <ying.huang@...el.com>,
Minchan Kim <minchan@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Hugh Dickins <hughd@...gle.com>, Shaohua Li <shli@...nel.org>,
Rik van Riel <riel@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Steven Price <steven.price@....com>
Subject: Re: [PATCH] arm64: enable THP_SWAP for arm64
On Tue, May 24, 2022 at 07:14:03PM +1200, Barry Song wrote:
> From: Barry Song <v-songbaohua@...o.com>
>
> THP_SWAP has been proven to improve swap throughput significantly
> on x86_64 according to commit bd4c82c22c367e ("mm, THP, swap: delay
> splitting THP after swapped out").
> As long as arm64 uses a 4K page size, it is quite similar to x86_64
> in having 2MB PMD THPs. So we expect a similar improvement.
> For other page sizes such as 16KB and 64KB, the PMD might be too
> large, and negative side effects such as IO latency could become a
> problem. Thus, we only safely enable the counterpart of X86_64.
>
> Cc: "Huang, Ying" <ying.huang@...el.com>
> Cc: Minchan Kim <minchan@...nel.org>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: Shaohua Li <shli@...nel.org>
> Cc: Rik van Riel <riel@...hat.com>
> Cc: Andrea Arcangeli <aarcange@...hat.com>
> Signed-off-by: Barry Song <v-songbaohua@...o.com>
> ---
> arch/arm64/Kconfig | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index d550f5acfaf3..8e3771c56fbf 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -98,6 +98,7 @@ config ARM64
> select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
> select ARCH_WANT_LD_ORPHAN_WARN
> select ARCH_WANTS_NO_INSTR
> + select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES
I'm not opposed to this but I think it would break pages mapped with
PROT_MTE. We have an assumption in mte_sync_tags() that compound pages
are not swapped out (or in). With MTE, we store the tags in a slab
object (128-bytes per swapped page) and restore them when pages are
swapped in. At some point we may teach the core swap code about such
metadata but in the meantime that was the easiest way.
--
Catalin