Message-Id: <20231110140721.114235-3-alexghiti@rivosinc.com>
Date: Fri, 10 Nov 2023 15:07:21 +0100
From: Alexandre Ghiti <alexghiti@...osinc.com>
To: Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Andrey Ryabinin <ryabinin.a.a@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
Arnd Bergmann <arnd@...db.de>, Dennis Zhou <dennis@...nel.org>,
Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
kasan-dev@...glegroups.com, linux-arch@...r.kernel.org,
linux-mm@...ck.org
Cc: Alexandre Ghiti <alexghiti@...osinc.com>
Subject: [PATCH 2/2] riscv: Enable pcpu page first chunk allocator
As explained in commit 6ea529a2037c ("percpu: make embedding first chunk
allocator check vmalloc space size"), the embedding first chunk allocator
needs the vmalloc space to be larger than the maximum distance between
percpu units, which are grouped by NUMA node.
On very sparse NUMA configurations with a small vmalloc area (for
example, it is only 64GB with sv39), the allocation of dynamic percpu
data in the vmalloc area can fail.
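For reference, the failure comes from a sanity check in
pcpu_embed_first_chunk() (mm/percpu.c); the sketch below is paraphrased
from memory, so check the source for the exact code:

	/*
	 * Paraphrased: if the span between the lowest and highest unit
	 * exceeds ~75% of the vmalloc area, bail out so that a fallback
	 * allocator can take over.
	 */
	if (max_distance > VMALLOC_TOTAL * 3 / 4) {
		pr_warn("max_distance=0x%lx too large for vmalloc space 0x%lx\n",
			max_distance, VMALLOC_TOTAL);
		if (IS_ENABLED(CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK))
			return -EINVAL;	/* let the fallback allocator try */
	}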
So provide the pcpu page first chunk allocator as a fallback in case we
run into such a sparse configuration (which happened on arm64, as shown
by commit 09cea6195073 ("arm64: support page mapping percpu first chunk
allocator")).
Signed-off-by: Alexandre Ghiti <alexghiti@...osinc.com>
---
arch/riscv/Kconfig | 2 ++
arch/riscv/mm/kasan_init.c | 8 ++++++++
2 files changed, 10 insertions(+)
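For context on the kasan_init.c hunk below: with CONFIG_KASAN_VMALLOC,
the page first chunk allocator registers its vm area early, and the
shadow for it must be populated by the architecture, because the generic
hook is only a weak no-op (paraphrased from memory):

	/* Generic stub: does nothing unless the architecture overrides it. */
	void __init __weak kasan_populate_early_vm_area_shadow(void *start,
								unsigned long size)
	{
	}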
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 5b1e61aca6cf..7b82d8301e42 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -416,7 +416,9 @@ config NUMA
depends on SMP && MMU
select ARCH_SUPPORTS_NUMA_BALANCING
select GENERIC_ARCH_NUMA
+ select HAVE_SETUP_PER_CPU_AREA
select NEED_PER_CPU_EMBED_FIRST_CHUNK
+ select NEED_PER_CPU_PAGE_FIRST_CHUNK
select OF_NUMA
select USE_PERCPU_NUMA_NODE_ID
help
diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 5e39dcf23fdb..4c9a2c527f08 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -438,6 +438,14 @@ static void __init kasan_shallow_populate(void *start, void *end)
kasan_shallow_populate_pgd(vaddr, vend);
}
+#ifdef CONFIG_KASAN_VMALLOC
+void __init kasan_populate_early_vm_area_shadow(void *start, unsigned long size)
+{
+ kasan_populate(kasan_mem_to_shadow(start),
+ kasan_mem_to_shadow(start + size));
+}
+#endif
+
static void __init create_tmp_mapping(void)
{
void *ptr;
--
2.39.2