Message-ID: <1847fc09-a394-40ad-b66f-1afe1964a061@arm.com>
Date: Tue, 25 Feb 2025 17:13:43 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Luiz Capitulino <luizcap@...hat.com>,
Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>, Ard Biesheuvel <ardb@...nel.org>
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] arm64/mm: Fix Boot panic on Ampere Altra
On 25/02/2025 16:57, Luiz Capitulino wrote:
> On 2025-02-25 06:46, Ryan Roberts wrote:
>> When the range of present physical memory is sufficiently small and the
>> reserved address space for the linear map is sufficiently large, the
>> linear map base address is randomized in arm64_memblock_init().
>>
>> Prior to commit 62cffa496aac ("arm64/mm: Override PARange for !LPA2 and
>> use it consistently"), we decided whether the sizes were suitable using
>> the raw mmfr0.parange. That commit changed this to use the sanitized
>> version instead. But the function runs before the register has been
>> sanitized, so this returns 0, which is interpreted as a parange of 32
>> bits. Some fun wrapping occurs and the logic concludes that there is
>> enough room to randomize the linear map base address, when really there
>> isn't. So the top of the linear map ends up outside the reserved
>> address space.
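To spell out the failure mode, here's a minimal sketch of the arithmetic
with illustrative constants rather than the exact Altra values:

	/*
	 * Before init_cpu_features() has run, the sanitised copy of
	 * ID_AA64MMFR0_EL1 reads as 0, so the extracted parange field is
	 * 0, which decodes to a 32-bit physical address range.
	 */
	int parange = 0;			/* bogus; hw reports more */
	s64 range = linear_region_size - BIT(32);	/* large positive */

	/* range looks big enough, so randomization wrongly proceeds. */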
>>
>> Fix this by introducing a helper, cpu_get_parange(), which reads the
>> raw parange value and overrides it with any early override (e.g. due
>> to arm64.nolva).
>>
>> Reported-by: Luiz Capitulino <luizcap@...hat.com>
>> Closes: https://lore.kernel.org/all/a3d9acbe-07c2-43b6-9ba9-a7585f770e83@...hat.com/
>> Fixes: 62cffa496aac ("arm64/mm: Override PARange for !LPA2 and use it consistently")
>> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
>> ---
>>
>> This applies on top of v6.14-rc4. I'm hoping this can be merged for v6.14 since
>> it's fixing a regression introduced in v6.14-rc1.
>>
>> Luiz, are you able to test this to make sure it's definitely fixing
>> your original issue? The symptom I was seeing was slightly different.
>
> Yes, this fixes it for me!
Great!
>
> I was able to boot v6.14-rc4 one time without your patch; this is
> probably what messed up my bisection.
Yes, the operation also depends on the value of the kaslr seed (which is
why you don't see the issue when kaslr is disabled). So sometimes a random
kaslr seed will happen to have the right value to mask the issue. Another
benefit of running this in kvmtool is that I could pass the same seed in
every time.
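To make that concrete, the randomization step in arm64_memblock_init()
looks roughly like this (paraphrased from arch/arm64/mm/init.c; see the
hunk below for the surrounding context):

	if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
		range /= ARM64_MEMSTART_ALIGN;
		/* Shift the linear map base by a seed-scaled fraction of range. */
		memstart_addr -= ARM64_MEMSTART_ALIGN *
				 ((range * memstart_offset_seed) >> 16);
	}

With a bogus, too-large range, most seed values shift memstart_addr far
enough that the top of the linear map falls outside the reserved region,
but a small enough (or zero) seed leaves it inside, which is why the boot
occasionally succeeds.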
> But I booted v6.14-rc4 with this patch
> multiple times without an issue. I agree this needs to be in for
> v6.14 and huge thanks for jumping in and getting this fixed.
No worries!
>
> Tested-by: Luiz Capitulino <luizcap@...hat.com>
Thanks!
>
>>
>> I'm going to see if it's possible for read_sanitised_ftr_reg() to warn about use
>> before initialization. I'll send a follow-up patch for that.
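In case it's useful, the shape I have in mind is something like the below.
This is a rough sketch only; feature_regs_initialized is a hypothetical
flag, not an existing symbol, and the real change may well look different:

	/* Hypothetical: set at the end of init_cpu_features(). */
	static bool feature_regs_initialized;

	u64 read_sanitised_ftr_reg(u32 id)
	{
		struct arm64_ftr_reg *regp = get_arm64_ftr_reg(id);

		if (!regp)
			return 0;

		/* Catch reads before the sanitised values exist. */
		WARN_ON_ONCE(!feature_regs_initialized);

		return regp->sys_val;
	}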
>>
>> Thanks,
>> Ryan
>>
>>
>> arch/arm64/include/asm/cpufeature.h | 9 +++++++++
>> arch/arm64/mm/init.c                | 8 +-------
>> 2 files changed, 10 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index e0e4478f5fb5..2335f44b9a4d 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -1066,6 +1066,15 @@ static inline bool cpu_has_lpa2(void)
>>  #endif
>>  }
>>
>> +static inline u64 cpu_get_parange(void)
>> +{
>> +	u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
>> +
>> +	return arm64_apply_feature_override(mmfr0,
>> +					    ID_AA64MMFR0_EL1_PARANGE_SHIFT, 4,
>> +					    &id_aa64mmfr0_override);
>> +}
>> +
>>  #endif /* __ASSEMBLY__ */
>>
>>  #endif
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index 9c0b8d9558fc..1b1a61191b9f 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -280,13 +280,7 @@ void __init arm64_memblock_init(void)
>>  	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
>>  		extern u16 memstart_offset_seed;
>>
>> -		/*
>> -		 * Use the sanitised version of id_aa64mmfr0_el1 so that linear
>> -		 * map randomization can be enabled by shrinking the IPA space.
>> -		 */
>> -		u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
>> -		int parange = cpuid_feature_extract_unsigned_field(
>> -				mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
>> +		int parange = cpu_get_parange();
>>  		s64 range = linear_region_size -
>>  			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
>>
>> --
>> 2.43.0
>>
>