Message-Id: <20250722040850.2017769-1-justin.he@arm.com>
Date: Tue, 22 Jul 2025 04:08:50 +0000
From: Jia He <justin.he@....com>
To: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Uladzislau Rezki <urezki@...il.com>
Cc: Anshuman Khandual <anshuman.khandual@....com>,
Ryan Roberts <ryan.roberts@....com>,
Peter Xu <peterx@...hat.com>,
Joey Gouly <joey.gouly@....com>,
Yicong Yang <yangyicong@...ilicon.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
Jia He <justin.he@....com>
Subject: [PATCH] mm: vmalloc: use VMALLOC_EARLY_START boundary for early vmap area
When VMALLOC_START is redefined to a new boundary, most subsystems
continue to function correctly. However, vm_area_register_early()
operates on the global vmlist before vmalloc_init() is invoked and
assumes that VMALLOC_START marks the bottom of the early vm areas it
places there. This assumption can lead to issues during early boot.
The call path is as follows:
start_kernel()
  setup_per_cpu_areas()
    pcpu_page_first_chunk()
      vm_area_register_early()
  mm_core_init()
    vmalloc_init()
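
For reference, a condensed sketch of the pre-patch first-fit scan in
vm_area_register_early() (simplified from mm/vmalloc.c; the VMALLOC_END
overflow check and KASAN shadow population are omitted here):

void __init vm_area_register_early(struct vm_struct *vm, size_t align)
{
	/* The scan starts at the (possibly redefined) VMALLOC_START. */
	unsigned long addr = ALIGN(VMALLOC_START, align);
	struct vm_struct *cur, **p;

	/* Walk vmlist and step over areas that were already registered. */
	for (p = &vmlist; (cur = *p) != NULL; p = &cur->next) {
		if ((unsigned long)cur->addr - addr >= vm->size)
			break;
		addr = ALIGN((unsigned long)cur->addr + cur->size, align);
	}

	vm->addr = (void *)addr;
	vm->next = *p;
	*p = vm;
}
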
The early vm areas are added to vmlist by
declare_kernel_vmas()->declare_vma():
ffff800080010000 T _stext
ffff800080da0000 D __start_rodata
ffff800081890000 T __inittext_begin
ffff800081980000 D __initdata_begin
ffff800081ee0000 D _data
The starting address of the early areas is tied to the *old* VMALLOC_START
(i.e. 0xffff800080000000 on an arm64 N2 server).
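
For context, a condensed sketch of how these kernel segments land on
vmlist (simplified from arch/arm64/mm/mmu.c; alignment checks and
guard-page sizing are omitted here):

static void __init declare_vma(struct vm_struct *vma,
			       void *va_start, void *va_end,
			       unsigned long vm_flags)
{
	/* The VA range is fixed by the kernel image layout, not by VMALLOC_START. */
	vma->addr	= va_start;
	vma->phys_addr	= __pa_symbol(va_start);
	vma->size	= va_end - va_start;
	vma->flags	= VM_MAP | vm_flags;
	vma->caller	= __builtin_return_address(0);

	/* Link the area into vmlist before vmalloc_init() has run. */
	vm_area_add_early(vma);
}
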
If VMALLOC_START is redefined, it can disrupt early VM area allocation,
particularly in paths like pcpu_page_first_chunk()->vm_area_register_early().
To address this potential risk on arm64, introduce a new boundary,
VMALLOC_EARLY_START, to avoid boot issues when VMALLOC_START is
occasionally redefined.
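
With the generic fallback in mm/vmalloc.c below, architectures that do not
define VMALLOC_EARLY_START keep the current behaviour. An architecture that
moves VMALLOC_START would only need something along these lines (a
hypothetical example, not part of this patch):

/* arch/<arch>/include/asm/pgtable.h: keep early vmap areas at the old base */
#define VMALLOC_EARLY_START	(OLD_VMALLOC_BASE)	/* hypothetical fixed boundary */
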
Signed-off-by: Jia He <justin.he@....com>
---
arch/arm64/include/asm/pgtable.h | 2 ++
mm/vmalloc.c | 6 +++++-
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 192d86e1cc76..91031912a906 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -18,9 +18,11 @@
* VMALLOC range.
*
* VMALLOC_START: beginning of the kernel vmalloc space
+ * VMALLOC_EARLY_START: early vm area before vmalloc_init()
* VMALLOC_END: extends to the available space below vmemmap
*/
#define VMALLOC_START (MODULES_END)
+#define VMALLOC_EARLY_START (MODULES_END)
#if VA_BITS == VA_BITS_MIN
#define VMALLOC_END (VMEMMAP_START - SZ_8M)
#else
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6dbcdceecae1..86ab1e99641a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -50,6 +50,10 @@
#include "internal.h"
#include "pgalloc-track.h"
+#ifndef VMALLOC_EARLY_START
+#define VMALLOC_EARLY_START VMALLOC_START
+#endif
+
#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
static unsigned int __ro_after_init ioremap_max_page_shift = BITS_PER_LONG - 1;
@@ -3126,7 +3130,7 @@ void __init vm_area_add_early(struct vm_struct *vm)
*/
void __init vm_area_register_early(struct vm_struct *vm, size_t align)
{
- unsigned long addr = ALIGN(VMALLOC_START, align);
+ unsigned long addr = ALIGN(VMALLOC_EARLY_START, align);
struct vm_struct *cur, **p;
BUG_ON(vmap_initialized);
--
2.34.1