Message-ID: <CAMj1kXHNO+iB4vNFz-4tR_9CPzv96hn+RW=eqyZXMGy_AySDpw@mail.gmail.com>
Date: Tue, 25 Feb 2025 19:05:35 +0100
From: Ard Biesheuvel <ardb@...nel.org>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>, 
	Mark Rutland <mark.rutland@....com>, Luiz Capitulino <luizcap@...hat.com>, 
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] arm64/mm: Fix Boot panic on Ampere Altra

On Tue, 25 Feb 2025 at 12:46, Ryan Roberts <ryan.roberts@....com> wrote:
>
> When the range of present physical memory is sufficiently small and the
> reserved address space for the linear map is sufficiently large, the
> linear map base address is randomized in arm64_memblock_init().
>
> Prior to commit 62cffa496aac ("arm64/mm: Override PARange for !LPA2 and
> use it consistently"), we decided whether the sizes were suitable based
> on the raw mmfr0.parange. That commit changed this to use the sanitized
> version instead, but the function runs before the register has been
> sanitized, so the read returns 0, which is interpreted as a parange of
> 32 bits. Some fun wrapping occurs and the logic concludes that there is
> enough room to randomize the linear map base address, when really there
> isn't. So the top of the linear map ends up outside the reserved address
> space.
>
> Fix this by introducing a helper, cpu_get_parange(), which reads the raw
> parange value and applies any early override (e.g. due to arm64.nolva).
>
> Reported-by: Luiz Capitulino <luizcap@...hat.com>
> Closes: https://lore.kernel.org/all/a3d9acbe-07c2-43b6-9ba9-a7585f770e83@redhat.com/
> Fixes: 62cffa496aac ("arm64/mm: Override PARange for !LPA2 and use it consistently")
> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
> ---
>
> This applies on top of v6.14-rc4. I'm hoping this can be merged for v6.14 since
> it's fixing a regression introduced in v6.14-rc1.
>
> Luiz, are you able to test this to make sure it definitely fixes your
> original issue? The symptom I was seeing was slightly different.
>
> I'm going to see if it's possible for read_sanitised_ftr_reg() to warn
> about use before initialization. I'll send a follow-up patch for that.
>

Apologies for the breakage, and thanks for the fix.

I have to admit that I was a bit overzealous here: there is no point
yet in using the sanitised value, given that we don't actually
override the PA range in the first place. This is something I've
prototyped for Android use, so that linear map randomization can be
force-enabled on CPUs with a wide PArange, but mainline does not have
that capability right now. So I'd be inclined to just revert the hunk
that introduces the call to read_sanitised_ftr_reg() into
arm64_memblock_init(), especially given that commit 62cffa496aac was
tagged for stable and has already been pulled into 6.13 and 6.12.

In any case, it would be good if we could get a fix into Linus's tree ASAP.


>
>  arch/arm64/include/asm/cpufeature.h | 9 +++++++++
>  arch/arm64/mm/init.c                | 8 +-------
>  2 files changed, 10 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index e0e4478f5fb5..2335f44b9a4d 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -1066,6 +1066,15 @@ static inline bool cpu_has_lpa2(void)
>  #endif
>  }
>
> +static inline u64 cpu_get_parange(void)
> +{
> +       u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
> +
> +       return arm64_apply_feature_override(mmfr0,
> +                                           ID_AA64MMFR0_EL1_PARANGE_SHIFT, 4,
> +                                           &id_aa64mmfr0_override);
> +}
> +
>  #endif /* __ASSEMBLY__ */
>
>  #endif
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 9c0b8d9558fc..1b1a61191b9f 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -280,13 +280,7 @@ void __init arm64_memblock_init(void)
>         if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
>                 extern u16 memstart_offset_seed;
>
> -               /*
> -                * Use the sanitised version of id_aa64mmfr0_el1 so that linear
> -                * map randomization can be enabled by shrinking the IPA space.
> -                */
> -               u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
> -               int parange = cpuid_feature_extract_unsigned_field(
> -                                       mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
> +               int parange = cpu_get_parange();
>                 s64 range = linear_region_size -
>                             BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
>
> --
> 2.43.0
>
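
For reference, here is a minimal user-space sketch of the headroom check
described in the quoted commit message. The constants and the standalone
parange_to_phys_shift() helper are illustrative assumptions only, not the
actual kernel code or the values seen on Ampere Altra.

/* sketch.c: why a parange of 0 makes the randomization check pass */
#include <stdint.h>
#include <stdio.h>

#define BIT(n) (1ULL << (n))

/* ID_AA64MMFR0_EL1.PARange encoding -> physical address bits */
static int parange_to_phys_shift(int parange)
{
        static const int bits[] = { 32, 36, 40, 42, 44, 48, 52, 56 };
        return bits[parange & 7];
}

int main(void)
{
        /* Assume a 48-bit VA kernel: the linear map covers half of it. */
        uint64_t linear_region_size = BIT(47);

        /* Raw register reports a 48-bit PA range -> no room to randomize. */
        int64_t range_real = (int64_t)(linear_region_size -
                                       BIT(parange_to_phys_shift(5)));

        /*
         * Bug: reading the not-yet-sanitised register yields 0, which is
         * taken as a 32-bit PA range, so the headroom looks huge and the
         * base gets randomized even though there is no room.
         */
        int64_t range_zero = (int64_t)(linear_region_size -
                                       BIT(parange_to_phys_shift(0)));

        printf("range with real parange : %lld\n", (long long)range_real);
        printf("range with parange == 0 : %lld\n", (long long)range_zero);
        return 0;
}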
