Message-Id: <20251112110807.69958-1-dev.jain@arm.com>
Date: Wed, 12 Nov 2025 16:38:05 +0530
From: Dev Jain <dev.jain@....com>
To: catalin.marinas@....com,
will@...nel.org,
urezki@...il.com,
akpm@...ux-foundation.org
Cc: ryan.roberts@....com,
anshuman.khandual@....com,
shijie@...amperecomputing.com,
yang@...amperecomputing.com,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
npiggin@...il.com,
willy@...radead.org,
david@...nel.org,
ziy@...dia.com,
Dev Jain <dev.jain@....com>
Subject: [RFC PATCH 0/2] Enable vmalloc block mappings by default on arm64

In the quest to reduce TLB pressure via block mappings, enable huge
vmalloc by default on arm64 for BBML2-noabort systems, which support
splitting of live kernel mappings.

This series is an RFC because I cannot demonstrate a performance
improvement on the usual benchmarks we have. Currently, vmalloc follows an
opt-in approach to block mappings - the users calling vmalloc_huge() are
the ones expected to benefit most from block mappings. Most users of
vmalloc(), kvmalloc() and kvzalloc() map a single page. After applying
this series, it is expected that a considerable number of users will
produce cont mappings, and probably none will produce PMD mappings.

I am asking the community for help with testing - I believe one good
testing method is xfstests, since a lot of the exercised code uses the
APIs mentioned above. I am hoping that someone can jump in and run at
least xfstests, and perhaps other tests which can take advantage of the
reduced TLB pressure from vmalloc cont mappings.

Dev Jain (2):
  mm/vmalloc: Do not align size to huge size
  arm64/mm: Enable vmalloc-huge by default

 arch/arm64/include/asm/vmalloc.h |  6 +++++
 arch/arm64/mm/pageattr.c         |  4 +--
 include/linux/vmalloc.h          |  7 +++++
 mm/vmalloc.c                     | 44 +++++++++++++++++++++++++-------
 4 files changed, 49 insertions(+), 12 deletions(-)

--
2.30.2