Message-ID: <b0578d21-95cd-4d8a-add1-87299f36b491@arm.com>
Date: Wed, 26 Feb 2025 08:07:14 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Ard Biesheuvel <ardb@...nel.org>, Will Deacon <will@...nel.org>
Cc: Catalin Marinas <catalin.marinas@....com>,
 Mark Rutland <mark.rutland@....com>, Luiz Capitulino <luizcap@...hat.com>,
 linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] arm64/mm: Fix Boot panic on Ampere Altra

On 26/02/2025 06:59, Ard Biesheuvel wrote:
> On Wed, 26 Feb 2025 at 01:10, Will Deacon <will@...nel.org> wrote:
>>
>> On Tue, Feb 25, 2025 at 07:05:35PM +0100, Ard Biesheuvel wrote:
>>> Apologies for the breakage, and thanks for the fix.
>>>
>>> I have to admit that I was a bit overzealous here: there is no point
>>> yet in using the sanitised value, given that we don't actually
>>> override the PA range in the first place. 

But unless I've misunderstood something, parange is overridden: commit
62cffa496aac (the same one we are fixing) adds an override that forces parange
to 48 bits when arm64.nolva is specified on LPA2 systems (see mmfr2_varange_filter()).

I thought it would be preferable to honour that override, hence my use of
arm64_apply_feature_override() in the fix. Are you saying we don't need to worry
about that case?
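
For illustration only, here is a minimal, self-contained sketch of what I mean
by honouring the override when reading the raw register. The struct and helper
names below are simplified stand-ins for this email, not the actual kernel
interfaces:

#include <stdint.h>

/* Simplified stand-in for a boot-time ID-register override record. */
struct ftr_override_sketch {
	uint64_t val;	/* override field values */
	uint64_t mask;	/* which bits of the register are overridden */
};

/* Roughly what cpuid_feature_extract_unsigned_field() does: pull out a
 * 4-bit unsigned field at the given shift. */
static inline unsigned int extract_field(uint64_t reg, unsigned int shift)
{
	return (reg >> shift) & 0xf;
}

/* Read the raw register value, then fold in any recorded override, so a
 * forced 48-bit parange (e.g. from arm64.nolva) is still respected even
 * though sanitisation hasn't run yet. */
static uint64_t apply_override_sketch(uint64_t raw,
				      const struct ftr_override_sketch *ovr)
{
	return (raw & ~ovr->mask) | (ovr->val & ovr->mask);
}

i.e. read the raw mmfr0, pass it through something like the above with the
parange override, and only then extract the parange field.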

Thanks,
Ryan

>>> This is something I've
>>> prototyped for Android use, so that linear map randomization can be
>>> force-enabled on CPUs with a wide PArange. But right now, mainline
>>> does not have that capability, so I'd be inclined to just revert the
>>> hunk that introduces the call to read_sanitised_ftr_reg() into
>>> arm64_memblock_init(), especially given that commit 62cffa496aac was
>>> tagged for stable and was already pulled into 6.13 and 6.12.
>>>
>>> In any case, it would be good if we could get a fix into Linus's tree asap.
>>
>> Makes sense. So the patch below?
>>
> 
> Yes, but please don't forget the cc:stable
> 
> To the patch below,
> 
> Acked-by: Ard Biesheuvel <ardb@...nel.org>
> 
> 
>> --->8
>>
>> From b76ddd40dd6fe350727a4b2ec50709fd919d8408 Mon Sep 17 00:00:00 2001
>> From: Ryan Roberts <ryan.roberts@....com>
>> Date: Tue, 25 Feb 2025 11:46:36 +0000
>> Subject: [PATCH] arm64/mm: Fix Boot panic on Ampere Altra
>>
>> When the range of present physical memory is sufficiently small and the
>> reserved address space for the linear map is sufficiently large, the
>> linear map base address is randomized in arm64_memblock_init().
>>
>> Prior to commit 62cffa496aac ("arm64/mm: Override PARange for !LPA2 and
>> use it consistently"), we decided whether the sizes were suitable based
>> on the raw mmfr0.parange. That commit changed this to use the sanitised
>> version instead. However, the function runs before the register has been
>> sanitised, so the read returns 0, which is interpreted as a parange of
>> 32 bits. Some fun wrapping occurs and the logic concludes that there is
>> enough room to randomize the linear map base address, when really there
>> isn't. So the top of the linear map ends up outside the reserved address
>> space.
>>
>> Since the PA range cannot be overridden in the first place, restore the
>> mmfr0 reading logic to its state prior to 62cffa496aac, where the raw
>> register value is used.
>>
>> Reported-by: Luiz Capitulino <luizcap@...hat.com>
>> Suggested-by: Ard Biesheuvel <ardb@...nel.org>
>> Closes: https://lore.kernel.org/all/a3d9acbe-07c2-43b6-9ba9-a7585f770e83@redhat.com/
>> Fixes: 62cffa496aac ("arm64/mm: Override PARange for !LPA2 and use it consistently")
>> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
>> Link: https://lore.kernel.org/r/20250225114638.2038006-1-ryan.roberts@arm.com
>> Signed-off-by: Will Deacon <will@...nel.org>
>> ---
>>  arch/arm64/mm/init.c | 7 +------
>>  1 file changed, 1 insertion(+), 6 deletions(-)
>>
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index 9c0b8d9558fc..ccdef53872a0 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -279,12 +279,7 @@ void __init arm64_memblock_init(void)
>>
>>         if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
>>                 extern u16 memstart_offset_seed;
>> -
>> -               /*
>> -                * Use the sanitised version of id_aa64mmfr0_el1 so that linear
>> -                * map randomization can be enabled by shrinking the IPA space.
>> -                */
>> -               u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
>> +               u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
>>                 int parange = cpuid_feature_extract_unsigned_field(
>>                                         mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
>>                 s64 range = linear_region_size -
>> --
>> 2.48.1.658.g4767266eb4-goog
>>

