Message-ID: <20210130071025.65258-8-chenzhou10@huawei.com>
Date: Sat, 30 Jan 2021 15:10:21 +0800
From: Chen Zhou <chenzhou10@...wei.com>
To: <mingo@...hat.com>, <tglx@...utronix.de>, <rppt@...nel.org>,
<dyoung@...hat.com>, <bhe@...hat.com>, <catalin.marinas@....com>,
<will@...nel.org>, <nsaenzjulienne@...e.de>, <corbet@....net>,
<John.P.donnelly@...cle.com>, <bhsharma@...hat.com>,
<prabhakar.pkin@...il.com>
CC: <horms@...ge.net.au>, <robh+dt@...nel.org>, <arnd@...db.de>,
<james.morse@....com>, <xiexiuqi@...wei.com>,
<guohanjun@...wei.com>, <huawei.libin@...wei.com>,
<wangkefeng.wang@...wei.com>, <chenzhou10@...wei.com>,
<linux-doc@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <kexec@...ts.infradead.org>,
John Donnelly <John.p.donnelly@...cle.com>
Subject: [PATCH v14 07/11] arm64: kdump: introduce some macros for crash kernel reservation

Introduce the macro CRASH_ALIGN for the crash kernel alignment,
CRASH_ADDR_LOW_MAX for the upper bound of low crash memory, and
CRASH_ADDR_HIGH_MAX for the upper bound of high crash memory, and use
these macros instead of open-coded values.

Besides, to keep consistent with x86, use CRASH_ALIGN as the lower
bound of the crash kernel reservation.
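
For illustration, the low-memory search in reserve_crashkernel() then
reads roughly as follows (a simplified sketch of the resulting logic,
not the complete function; see the diff below for the exact change):

	/* 2M-aligned search in [CRASH_ALIGN, CRASH_ADDR_LOW_MAX) */
	crash_base = memblock_find_in_range(CRASH_ALIGN, CRASH_ADDR_LOW_MAX,
					    crash_size, CRASH_ALIGN);
	if (crash_base == 0)
		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
			crash_size);
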
Signed-off-by: Chen Zhou <chenzhou10@...wei.com>
Tested-by: John Donnelly <John.p.donnelly@...cle.com>
---
arch/arm64/include/asm/kexec.h | 6 ++++++
arch/arm64/mm/init.c | 6 +++---
2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index d24b527e8c00..3f6ecae0bc68 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -25,6 +25,12 @@
#define KEXEC_ARCH KEXEC_ARCH_AARCH64
+/* 2M alignment for crash kernel regions */
+#define CRASH_ALIGN SZ_2M
+
+#define CRASH_ADDR_LOW_MAX arm64_dma_phys_limit
+#define CRASH_ADDR_HIGH_MAX MEMBLOCK_ALLOC_ACCESSIBLE
+
#ifndef __ASSEMBLY__
/**
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 709d98fea90c..912f64f505f7 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -84,8 +84,8 @@ static void __init reserve_crashkernel(void)
if (crash_base == 0) {
/* Current arm64 boot protocol requires 2MB alignment */
- crash_base = memblock_find_in_range(0, arm64_dma_phys_limit,
- crash_size, SZ_2M);
+ crash_base = memblock_find_in_range(CRASH_ALIGN, CRASH_ADDR_LOW_MAX,
+ crash_size, CRASH_ALIGN);
if (crash_base == 0) {
pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
crash_size);
@@ -103,7 +103,7 @@ static void __init reserve_crashkernel(void)
return;
}
- if (!IS_ALIGNED(crash_base, SZ_2M)) {
+ if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) {
pr_warn("cannot reserve crashkernel: base address is not 2MB aligned\n");
return;
}
--
2.20.1