Message-ID: <20230725202404.3470111-1-chris.zjh@huawei.com>
Date: Tue, 25 Jul 2023 20:24:04 +0000
From: Zhang Jianhua <chris.zjh@...wei.com>
To: <catalin.marinas@....com>, <will@...nel.org>,
<mark.rutland@....com>, <ryan.roberts@....com>,
<joey.gouly@....com>, <ardb@...nel.org>,
<anshuman.khandual@....com>, <bhe@...hat.com>,
<thunder.leizhen@...wei.com>, <broonie@...nel.org>
CC: <chris.zjh@...wei.com>, <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>
Subject: [PATCH -next v3] arm64: fix build warning for ARM64_MEMSTART_SHIFT

When building with W=1, the following warning occurs.

arch/arm64/include/asm/kernel-pgtable.h:129:41: error: "PUD_SHIFT" is not defined, evaluates to 0 [-Werror=undef]
  129 | #define ARM64_MEMSTART_SHIFT            PUD_SHIFT
      |                                         ^~~~~~~~~
arch/arm64/include/asm/kernel-pgtable.h:142:5: note: in expansion of macro ‘ARM64_MEMSTART_SHIFT’
  142 | #if ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
      |     ^~~~~~~~~~~~~~~~~~~~

The reason is that PUD_SHIFT is not defined when CONFIG_PGTABLE_LEVELS == 3
and CONFIG_ARM64_VA_BITS == 39. Move the macros ARM64_MEMSTART_SHIFT and
ARM64_MEMSTART_ALIGN to arch/arm64/mm/init.c, the only place where they are
used, to avoid this issue.

Signed-off-by: Zhang Jianhua <chris.zjh@...wei.com>
---
v3:
Move the definition of the macros ARM64_MEMSTART_SHIFT and ARM64_MEMSTART_ALIGN
from arch/arm64/include/asm/kernel-pgtable.h to arch/arm64/mm/init.c.

v2:
Add #ifdef guards for PUD_SHIFT/CONT_PMD_SHIFT/PMD_SHIFT before using
them, instead of defining PUD_SHIFT only.
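
For reviewers, a minimal standalone sketch of the failure mode (not part of
the patch; the file and macro names below are made up). An identifier that is
undefined at preprocessing time silently evaluates to 0 inside #if, which
-Wundef flags (escalated to an error here by -Werror=undef):

/* demo.c - hypothetical example; build with: gcc -Wundef -Werror=undef -c demo.c
 * PUD_SHIFT is deliberately left undefined, mimicking a 4K-page build
 * with CONFIG_ARM64_VA_BITS=39, i.e. three page-table levels.
 */
#define MEMSTART_SHIFT		PUD_SHIFT	/* stand-in for ARM64_MEMSTART_SHIFT */
#define SECTION_SIZE_BITS	27		/* arbitrary value for illustration */

/* gcc: error: "PUD_SHIFT" is not defined, evaluates to 0 [-Werror=undef] */
#if MEMSTART_SHIFT < SECTION_SIZE_BITS
#define MEMSTART_ALIGN	(1UL << SECTION_SIZE_BITS)
#else
#define MEMSTART_ALIGN	(1UL << MEMSTART_SHIFT)
#endif

unsigned long memstart_align = MEMSTART_ALIGN;	/* keeps the translation unit non-empty */

Moving the macros into arch/arm64/mm/init.c sidesteps this: the header no
longer evaluates PUD_SHIFT in a #if, and the definitions sit next to their
only user.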
---
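
Context for the moved comment block (illustrative only; dram_base is a
made-up value and round_down_ul is a local stand-in for the kernel's
round_down()): with 4K pages and CONFIG_ARM64_VA_BITS=39, PUD_SHIFT is 30
once the folded-PUD headers are in scope, so ARM64_MEMSTART_ALIGN works out
to 1 GiB and the base of physical memory is rounded down accordingly:

#include <stdio.h>

#define MEMSTART_ALIGN	(1UL << 30)	/* assumed PUD_SIZE for this example */

/* power-of-two round-down, same idea as the kernel's round_down() */
static unsigned long round_down_ul(unsigned long x, unsigned long align)
{
	return x & ~(align - 1UL);
}

int main(void)
{
	unsigned long dram_base = 0x48000000UL;	/* hypothetical DRAM start */

	/* prints: memstart_addr = 0x40000000 */
	printf("memstart_addr = %#lx\n", round_down_ul(dram_base, MEMSTART_ALIGN));
	return 0;
}
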
 arch/arm64/include/asm/kernel-pgtable.h | 27 -------------------------
 arch/arm64/mm/init.c                    | 27 +++++++++++++++++++++++++
 2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 577773870b66..85d26143faa5 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -118,31 +118,4 @@
 #define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PTE_RDONLY)
 #endif
 
-/*
- * To make optimal use of block mappings when laying out the linear
- * mapping, round down the base of physical memory to a size that can
- * be mapped efficiently, i.e., either PUD_SIZE (4k granule) or PMD_SIZE
- * (64k granule), or a multiple that can be mapped using contiguous bits
- * in the page tables: 32 * PMD_SIZE (16k granule)
- */
-#if defined(CONFIG_ARM64_4K_PAGES)
-#define ARM64_MEMSTART_SHIFT PUD_SHIFT
-#elif defined(CONFIG_ARM64_16K_PAGES)
-#define ARM64_MEMSTART_SHIFT CONT_PMD_SHIFT
-#else
-#define ARM64_MEMSTART_SHIFT PMD_SHIFT
-#endif
-
-/*
- * sparsemem vmemmap imposes an additional requirement on the alignment of
- * memstart_addr, due to the fact that the base of the vmemmap region
- * has a direct correspondence, and needs to appear sufficiently aligned
- * in the virtual address space.
- */
-#if ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
-#define ARM64_MEMSTART_ALIGN (1UL << SECTION_SIZE_BITS)
-#else
-#define ARM64_MEMSTART_ALIGN (1UL << ARM64_MEMSTART_SHIFT)
-#endif
-
 #endif /* __ASM_KERNEL_PGTABLE_H */
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d31c3a9290c5..4fcb88a445ef 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -73,6 +73,33 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
 
 #define DEFAULT_CRASH_KERNEL_LOW_SIZE	(128UL << 20)
 
+/*
+ * To make optimal use of block mappings when laying out the linear
+ * mapping, round down the base of physical memory to a size that can
+ * be mapped efficiently, i.e., either PUD_SIZE (4k granule) or PMD_SIZE
+ * (64k granule), or a multiple that can be mapped using contiguous bits
+ * in the page tables: 32 * PMD_SIZE (16k granule)
+ */
+#if defined(CONFIG_ARM64_4K_PAGES)
+#define ARM64_MEMSTART_SHIFT PUD_SHIFT
+#elif defined(CONFIG_ARM64_16K_PAGES)
+#define ARM64_MEMSTART_SHIFT CONT_PMD_SHIFT
+#else
+#define ARM64_MEMSTART_SHIFT PMD_SHIFT
+#endif
+
+/*
+ * sparsemem vmemmap imposes an additional requirement on the alignment of
+ * memstart_addr, due to the fact that the base of the vmemmap region
+ * has a direct correspondence, and needs to appear sufficiently aligned
+ * in the virtual address space.
+ */
+#if ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
+#define ARM64_MEMSTART_ALIGN (1UL << SECTION_SIZE_BITS)
+#else
+#define ARM64_MEMSTART_ALIGN (1UL << ARM64_MEMSTART_SHIFT)
+#endif
+
 static int __init reserve_crashkernel_low(unsigned long long low_size)
 {
 	unsigned long long low_base;
--
2.34.1