Message-ID: <CAGsJ_4waQsxPKTcGvtpTAF4kVbYQXeH_iHaQX3aAYDNo-8oDLQ@mail.gmail.com>
Date: Fri, 9 Jan 2026 12:11:16 +1300
From: Barry Song <21cnbao@...il.com>
To: Will Deacon <will@...nel.org>
Cc: Weilin Tong <tongweilin@...ux.alibaba.com>, Catalin Marinas <catalin.marinas@....com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>, David Hildenbrand <david@...nel.org>, linux-mm@...ck.org,
baolin.wang@...ux.alibaba.com
Subject: Re: [RFC PATCH] arm64: Kconfig: enable ARCH_WANTS_THP_SWAP for all pagesizes

On Fri, Jan 9, 2026 at 7:29 AM Will Deacon <will@...nel.org> wrote:
>
> On Fri, Dec 26, 2025 at 07:52:44PM +1300, Barry Song wrote:
> > On Fri, Dec 26, 2025 at 7:39 PM Weilin Tong
> > <tongweilin@...ux.alibaba.com> wrote:
> > >
> > > Currently, ARCH_WANTS_THP_SWAP is limited to 4K page size ARM64 kernels, but
> > > large folios requiring swapping also exist in other page size configurations
> > > (e.g. 64K). Without this config, large folios in these kernels cannot be swapped
> > > out.
> > >
> > > Here we enable ARCH_WANTS_THP_SWAP for all ARM64 page sizes.
> >
> > I no longer recall why this was not enabled for sizes other than
> > 4 KB in commit d0637c505f8a ("arm64: enable THP_SWAP for arm64"), but
> > it appears to be fine, and the swap cluster size should also be
> > more friendly to PMD alignment.
>
> You seemed to be worried about I/O latency in your original post:
>
> https://lore.kernel.org/all/20220524071403.128644-1-21cnbao@gmail.com/

Will, thanks for pointing this out! With a 16KB page size, a PMD
covers 32MB; with 64KB pages, a PMD covers 512MB. So, Weilin, are
we ready to wait for 32MB or 512MB to be written out before
memory can be reclaimed? By splitting, we can reclaim memory
earlier, after only part of the folio has been swapped out.
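
To make the arithmetic above concrete, here is a throwaway
userspace sketch. It only assumes 8-byte page table entries, as on
arm64, so a PMD maps PAGE_SIZE / 8 PTEs; it is not derived from the
kernel headers:

/* Sketch: PMD span = (PAGE_SIZE / sizeof(pte)) * PAGE_SIZE,
 * assuming 8-byte page table entries as on arm64.
 */
#include <stdio.h>

int main(void)
{
	unsigned long page_sizes[] = { 4096, 16384, 65536 };
	int i;

	for (i = 0; i < 3; i++) {
		unsigned long ptrs_per_pmd = page_sizes[i] / 8;
		unsigned long pmd_span = ptrs_per_pmd * page_sizes[i];

		printf("%2luKB pages: PMD spans %3luMB\n",
		       page_sizes[i] / 1024, pmd_span >> 20);
	}
	return 0;
}

That gives 2MB with 4KB pages, 32MB with 16KB pages and 512MB with
64KB pages.
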
While splitting down to order-0 is not ideal, splitting to a
relatively larger order appears to strike a balance between I/O
latency and swap performance. Anyway, I don't know :-)
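
Just to illustrate the trade-off with made-up numbers (the split
orders and the 1GB/s device bandwidth below are purely
hypothetical, nothing measured):

/* Illustrative only: how long reclaim waits for the write-out of
 * one folio at various split orders, assuming 64KB base pages and
 * a hypothetical swap device sustaining 1GB/s.
 */
#include <stdio.h>

int main(void)
{
	const unsigned long page_size = 64 * 1024;	/* 64KB base pages */
	const double bw = 1024.0 * 1024 * 1024;		/* assumed 1GB/s device */
	int orders[] = { 0, 4, 8, 13 };			/* 13 == PMD order here */
	int i;

	for (i = 0; i < 4; i++) {
		unsigned long bytes = page_size << orders[i];
		double ms = bytes / bw * 1000.0;

		printf("order-%-2d folio: %8luKB, ~%.3f ms write-out\n",
		       orders[i], bytes >> 10, ms);
	}
	return 0;
}

With such a device, an order-13 (PMD) folio keeps 512MB pinned for
roughly half a second, while something like an order-4 split lets
reclaim make progress every millisecond or so.
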
Thanks
Barry