Message-ID: <2025011217-swizzle-unusual-dd7b@gregkh>
Date: Sun, 12 Jan 2025 12:53:39 +0100
From: Greg KH <gregkh@...uxfoundation.org>
To: Florian Fainelli <florian.fainelli@...adcom.com>
Cc: stable@...r.kernel.org, Ard Biesheuvel <ardb@...nel.org>,
	Anshuman Khandual <anshuman.khandual@....com>,
	Will Deacon <will@...nel.org>, Steven Price <steven.price@....com>,
	Robin Murphy <robin.murphy@....com>,
	Catalin Marinas <catalin.marinas@....com>,
	Baruch Siach <baruch@...s.co.il>, Petr Tesarik <ptesarik@...e.com>,
	Joey Gouly <joey.gouly@....com>,
	"Mike Rapoport (IBM)" <rppt@...nel.org>,
	Baoquan He <bhe@...hat.com>, Yang Shi <yang@...amperecomputing.com>,
	"moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)" <linux-arm-kernel@...ts.infradead.org>,
	open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH stable 5.4] arm64: mm: account for hotplug memory when
 randomizing the linear region

On Thu, Jan 09, 2025 at 08:54:16AM -0800, Florian Fainelli wrote:
> From: Ard Biesheuvel <ardb@...nel.org>
> 
> commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
> 
> As a hardening measure, we currently randomize the placement of
> physical memory inside the linear region when KASLR is in effect.
> Since the random offset at which to place the available physical
> memory inside the linear region is chosen early at boot, it is
> based on the memblock description of memory, which does not cover
> hotplug memory. The consequence of this is that the randomization
> offset may be chosen such that any hotplugged memory located above
> memblock_end_of_DRAM() that appears later is pushed off the end of
> the linear region, where it cannot be accessed.
> 
> So let's limit this randomization of the linear region to ensure
> that this can no longer happen, by using the CPU's addressable PA
> range instead. As it is guaranteed that no hotpluggable memory will
> appear that falls outside of that range, we can safely put this PA
> range sized window anywhere in the linear region.
> 
> Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
> Cc: Anshuman Khandual <anshuman.khandual@....com>
> Cc: Will Deacon <will@...nel.org>
> Cc: Steven Price <steven.price@....com>
> Cc: Robin Murphy <robin.murphy@....com>
> Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
> Signed-off-by: Catalin Marinas <catalin.marinas@....com>
> Signed-off-by: Florian Fainelli <florian.fainelli@...adcom.com>
> ---
>  arch/arm64/mm/init.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index cbcac03c0e0d..a6034645d6f7 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -392,15 +392,18 @@ void __init arm64_memblock_init(void)
>  
>  	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
>  		extern u16 memstart_offset_seed;
> -		u64 range = linear_region_size -
> -			    (memblock_end_of_DRAM() - memblock_start_of_DRAM());
> +		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
> +		int parange = cpuid_feature_extract_unsigned_field(
> +					mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
> +		s64 range = linear_region_size -
> +			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
>  
>  		/*
>  		 * If the size of the linear region exceeds, by a sufficient
> -		 * margin, the size of the region that the available physical
> -		 * memory spans, randomize the linear region as well.
> +		 * margin, the size of the region that the physical memory can
> +		 * span, randomize the linear region as well.
>  		 */
> -		if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
> +		if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
>  			range /= ARM64_MEMSTART_ALIGN;
>  			memstart_addr -= ARM64_MEMSTART_ALIGN *
>  					 ((range * memstart_offset_seed) >> 16);
> -- 
> 2.43.0
> 
> 

You are not providing any information as to WHY this is needed in stable
kernels at all.  It just looks like an unsolicited backport with no
changes from upstream, yet no hint as to any bug it fixes.

And you all really have hotpluggable memory on systems that are running
this old kernel?  Why are they not using newer kernels if they need
this?  Surely lots of other bugs they need are resolved there, right?

thanks,

greg k-h
