Message-Id: <20230316131711.1284451-4-alexghiti@rivosinc.com>
Date: Thu, 16 Mar 2023 14:17:10 +0100
From: Alexandre Ghiti <alexghiti@...osinc.com>
To: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Rob Herring <robh+dt@...nel.org>,
Frank Rowand <frowand.list@...il.com>,
Mike Rapoport <rppt@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Anup Patel <anup@...infault.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-riscv@...ts.infradead.org, devicetree@...r.kernel.org,
linux-mm@...ck.org
Cc: Alexandre Ghiti <alexghiti@...osinc.com>
Subject: [PATCH v8 3/4] arm64: Make use of memblock_isolate_memory for the linear mapping

In order to isolate the kernel text mapping and the crash kernel
region in the linear mapping, we used a hack which consisted of
marking those ranges as not mappable with memblock_mark_nomap.

Simply use the newly introduced memblock_isolate_memory() function,
which does exactly the same thing but does not needlessly mark the
regions as not mappable.

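For context, here is a minimal userspace sketch of what "isolating" a
range means for the iteration changed below. It models memblock regions
with a plain array; the names (split_at, isolate) and the fixed-size
array are illustrative only, not the real mm/memblock.c implementation:

#include <stdio.h>

#define MAX_REGIONS 16

struct region {
	unsigned long base, size;
};

static struct region regions[MAX_REGIONS] = {
	{ 0x80000000UL, 0x40000000UL },	/* one 1 GiB memory bank */
};
static int nr_regions = 1;

/* Split the region containing @addr at @addr, roughly what happens
 * at each end of the range passed to the isolation helper. */
static void split_at(unsigned long addr)
{
	if (nr_regions == MAX_REGIONS)
		return;

	for (int i = 0; i < nr_regions; i++) {
		struct region *r = &regions[i];

		if (addr <= r->base || addr >= r->base + r->size)
			continue;	/* boundary already region-aligned */

		for (int j = nr_regions; j > i + 1; j--)	/* make room */
			regions[j] = regions[j - 1];
		regions[i + 1].base = addr;
		regions[i + 1].size = r->base + r->size - addr;
		r->size = addr - r->base;
		nr_regions++;
		return;
	}
}

/* After this, [base, base + size) is covered by its own region(s),
 * so a region walk can recognize it by its base address and skip it. */
static void isolate(unsigned long base, unsigned long size)
{
	split_at(base);
	split_at(base + size);
}

int main(void)
{
	unsigned long kernel_start = 0x80200000UL;
	unsigned long kernel_end = 0x81400000UL;

	isolate(kernel_start, kernel_end - kernel_start);

	for (int i = 0; i < nr_regions; i++) {
		if (regions[i].base == kernel_start)
			continue;	/* mapped separately, after the loop */
		printf("map [%#lx-%#lx)\n", regions[i].base,
		       regions[i].base + regions[i].size);
	}
	return 0;
}

Once the range sits in its own region(s), the for_each_mem_range() walk
can simply match on the base address and skip it, which is what the
second hunk below does; no NOMAP flag has to be set before the loop and
cleared again afterwards.
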
Signed-off-by: Alexandre Ghiti <alexghiti@...osinc.com>
---
arch/arm64/mm/mmu.c | 25 ++++++++++++++++---------
1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6f9d8898a025..387c2a065a09 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -548,19 +548,18 @@ static void __init map_mem(pgd_t *pgdp)
 
 	/*
 	 * Take care not to create a writable alias for the
-	 * read-only text and rodata sections of the kernel image.
-	 * So temporarily mark them as NOMAP to skip mappings in
-	 * the following for-loop
+	 * read-only text and rodata sections of the kernel image so isolate
+	 * those regions and map them after the for loop.
 	 */
-	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
+	memblock_isolate_memory(kernel_start, kernel_end - kernel_start);
 
 #ifdef CONFIG_KEXEC_CORE
 	if (crash_mem_map) {
 		if (defer_reserve_crashkernel())
 			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 		else if (crashk_res.end)
-			memblock_mark_nomap(crashk_res.start,
-					    resource_size(&crashk_res));
+			memblock_isolate_memory(crashk_res.start,
+						resource_size(&crashk_res));
 	}
 #endif
 
@@ -568,6 +567,17 @@ static void __init map_mem(pgd_t *pgdp)
 	for_each_mem_range(i, &start, &end) {
 		if (start >= end)
 			break;
+
+		if (start == kernel_start)
+			continue;
+
+#ifdef CONFIG_KEXEC_CORE
+		if (start == crashk_res.start &&
+		    crash_mem_map && !defer_reserve_crashkernel() &&
+		    crashk_res.end)
+			continue;
+#endif
+
 		/*
 		 * The linear map must allow allocation tags reading/writing
 		 * if MTE is present. Otherwise, it has the same attributes as
@@ -589,7 +599,6 @@ static void __init map_mem(pgd_t *pgdp)
 	 */
 	__map_memblock(pgdp, kernel_start, kernel_end,
 		       PAGE_KERNEL, NO_CONT_MAPPINGS);
-	memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
 
 	/*
 	 * Use page-level mappings here so that we can shrink the region
@@ -603,8 +612,6 @@ static void __init map_mem(pgd_t *pgdp)
 			       crashk_res.end + 1,
 			       PAGE_KERNEL,
 			       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
-			memblock_clear_nomap(crashk_res.start,
-					     resource_size(&crashk_res));
 		}
 	}
 #endif
--
2.37.2