Message-ID: <20250225114638.2038006-1-ryan.roberts@arm.com>
Date: Tue, 25 Feb 2025 11:46:36 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will@...nel.org>,
	Mark Rutland <mark.rutland@....com>,
	Ard Biesheuvel <ardb@...nel.org>,
	Luiz Capitulino <luizcap@...hat.com>
Cc: Ryan Roberts <ryan.roberts@....com>,
	linux-arm-kernel@...ts.infradead.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH v1] arm64/mm: Fix boot panic on Ampere Altra

When the range of present physical memory is sufficiently small and the
reserved address space for the linear map is sufficiently large, the
linear map base address is randomized in arm64_memblock_init().

Prior to commit 62cffa496aac ("arm64/mm: Override PARange for !LPA2 and
use it consistently"), we decided whether the sizes were suitable using
the raw mmfr0.parange. That commit switched to the sanitized version
instead, but arm64_memblock_init() runs before the register has been
sanitized, so the read returns 0, which is interpreted as a parange of
32 bits.

Some fun wrapping occurs and the logic concludes that there is enough
room to randomize the linear map base address, when really there isn't.
So the top of the linear map ends up outside the reserved address
space.
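
To illustrate the arithmetic, here is a stand-alone user-space sketch
with made-up sizes (not the kernel code itself; parange_to_phys_shift()
mirrors a subset of the ID_AA64MMFR0_EL1.PARange encodings):

#include <stdint.h>
#include <stdio.h>

static int parange_to_phys_shift(int parange)
{
	switch (parange) {
	case 0: return 32;
	case 1: return 36;
	case 2: return 40;
	case 3: return 42;
	case 4: return 44;
	case 5: return 48;
	case 6: return 52;
	default: return 56;
	}
}

int main(void)
{
	uint64_t linear_region_size = 1ULL << 45;	/* example size only */
	int bogus_parange = 0;	/* unsanitised read decodes to 0 -> 32 bits */
	int real_parange = 5;	/* e.g. a 48-bit PA system */

	/* Mirrors: range = linear_region_size - BIT(phys_shift) */
	int64_t bogus_range = (int64_t)(linear_region_size -
			(1ULL << parange_to_phys_shift(bogus_parange)));
	int64_t real_range = (int64_t)(linear_region_size -
			(1ULL << parange_to_phys_shift(real_parange)));

	/*
	 * Prints a huge positive slack for the bogus value ("plenty of
	 * room to randomize") but a negative one for the real parange.
	 */
	printf("bogus range: %lld\n", (long long)bogus_range);
	printf("real range:  %lld\n", (long long)real_range);
	return 0;
}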

Fix this by introducing a helper, cpu_get_parange(), which reads the
raw parange value and overrides it with any early override (e.g. due to
arm64.nolva).

Reported-by: Luiz Capitulino <luizcap@...hat.com>
Closes: https://lore.kernel.org/all/a3d9acbe-07c2-43b6-9ba9-a7585f770e83@redhat.com/
Fixes: 62cffa496aac ("arm64/mm: Override PARange for !LPA2 and use it consistently")
Signed-off-by: Ryan Roberts <ryan.roberts@....com>
---
This applies on top of v6.14-rc4. I'm hoping this can be merged for v6.14 since
it's fixing a regression introduced in v6.14-rc1.

Luiz, are you able to test this to make sure it's definitely fixing
your original issue? The symptom I was seeing was slightly different.

I'm going to see if it's possible for read_sanitised_ftr_reg() to warn
about use before initialization. I'll send a follow-up patch for that.
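
For reference, a purely hypothetical sketch of that idea (the
ftr_regs_initialised flag and this exact shape are illustrative, not
existing kernel code):

static bool ftr_regs_initialised;	/* set once sanitisation completes */

u64 read_sanitised_ftr_reg(u32 id)
{
	struct arm64_ftr_reg *regp = get_arm64_ftr_reg(id);

	/* Complain about use before the sanitised values exist. */
	WARN_ON_ONCE(!ftr_regs_initialised);

	if (!regp)
		return 0;

	return regp->sys_val;
}

Something along those lines would have made this bug shout at boot time
rather than silently returning 0.
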
Thanks,
Ryan

 arch/arm64/include/asm/cpufeature.h | 9 +++++++++
 arch/arm64/mm/init.c                | 8 +-------
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e0e4478f5fb5..2335f44b9a4d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -1066,6 +1066,15 @@ static inline bool cpu_has_lpa2(void)
 #endif
 }
 
+static inline u64 cpu_get_parange(void)
+{
+	u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+
+	return arm64_apply_feature_override(mmfr0,
+					    ID_AA64MMFR0_EL1_PARANGE_SHIFT, 4,
+					    &id_aa64mmfr0_override);
+}
+
 #endif	/* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 9c0b8d9558fc..1b1a61191b9f 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -280,13 +280,7 @@ void __init arm64_memblock_init(void)
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		extern u16 memstart_offset_seed;
 
-		/*
-		 * Use the sanitised version of id_aa64mmfr0_el1 so that linear
-		 * map randomization can be enabled by shrinking the IPA space.
-		 */
-		u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
-		int parange = cpuid_feature_extract_unsigned_field(
-				mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
+		int parange = cpu_get_parange();
 		s64 range = linear_region_size -
 			BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
 
--
2.43.0