Message-ID: <06b7bfd1-99cd-a9be-e3cc-9fe13f2cf2a6@arm.com>
Date:   Mon, 15 Feb 2021 10:56:15 +0530
From:   Anshuman Khandual <anshuman.khandual@....com>
To:     Pavel Tatashin <pasha.tatashin@...een.com>,
        tyhicks@...ux.microsoft.com, jmorris@...ei.org,
        catalin.marinas@....com, will@...nel.org,
        akpm@...ux-foundation.org, rppt@...nel.org, logang@...tatee.com,
        ardb@...nel.org, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: mm: correct the start of physical address in
 linear map

Hello Pavel,

On 2/13/21 6:53 AM, Pavel Tatashin wrote:
> Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> linear map range is not checked correctly.
> 
> The start physical address that the linear map covers can actually be at
> the end of the range because of randomization. Check for that, and if so,
> reduce it to 0.

Looking at the code, this seems possible if memstart_addr, which is a signed
value, falls below 0 during arm64_memblock_init() and hence becomes a very
large value when treated as unsigned.

> 
> This can be verified on QEMU by setting kaslr-seed to ~0ul:
> 
> memstart_offset_seed = 0xffff
> START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
> END:   __pa(PAGE_END - 1) =  1000bfffffff
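
For what it's worth, these exact numbers can be reproduced with a quick
user space sketch, assuming __pa() of a linear map address reduces to
((addr & ~PAGE_OFFSET) + memstart_addr) on a 48-bit VA configuration.
The memstart_addr value below is just inferred from the START value in
the example:

#include <stdint.h>
#include <stdio.h>

#define PAGE_OFFSET	0xffff000000000000ULL	/* _PAGE_OFFSET(48) */
#define PAGE_END	0xffff800000000000ULL	/* _PAGE_END(48) */

/* Rough user space approximation of arm64 __pa() for linear map addresses. */
static uint64_t pa(uint64_t vaddr, int64_t memstart_addr)
{
	return (vaddr & ~PAGE_OFFSET) + (uint64_t)memstart_addr;
}

int main(void)
{
	/* memstart_addr gone negative after randomization. */
	int64_t memstart_addr = -0x6fff40000000LL;

	/* Prints START=0xffff9000c0000000 END=0x1000bfffffff, i.e. START > END. */
	printf("START=%#llx END=%#llx\n",
	       (unsigned long long)pa(PAGE_OFFSET, memstart_addr),
	       (unsigned long long)pa(PAGE_END - 1, memstart_addr));
	return 0;
}
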
> 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@...een.com>
> Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
> ---
>  arch/arm64/mm/mmu.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index ae0c3d023824..6057ecaea897 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1444,14 +1444,25 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
>  
>  static bool inside_linear_region(u64 start, u64 size)
>  {
> +	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
> +	u64 end_linear_pa = __pa(PAGE_END - 1);
> +
> +	/*
> +	 * Check for a wrap: because of the randomized linear mapping, the
> +	 * start physical address can actually be bigger than the end physical
> +	 * address. In this case set start to zero because the [0, end_linear_pa]
> +	 * range must still be able to cover all addressable physical addresses.
> +	 */

If this is possible only with randomized linear mapping, could you please
gate this behind IS_ENABLED(CONFIG_RANDOMIZE_BASE)? Wondering if a
WARN_ON(start_linear_pa > end_linear_pa) should be added otherwise, i.e.
when linear mapping randomization is not enabled.
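
Just to make that concrete, something along these lines (untested sketch
only, on top of this patch):

	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
		if (start_linear_pa > end_linear_pa)
			start_linear_pa = 0;
	} else {
		/* Must not wrap without linear map randomization. */
		WARN_ON(start_linear_pa > end_linear_pa);
	}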

> +	if (start_linear_pa > end_linear_pa)
> +		start_linear_pa = 0;

This looks okay, but I will double check and give it some more testing.

> +
>  	/*
>  	 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
>  	 * accommodating both its ends but excluding PAGE_END. Max physical
>  	 * range which can be mapped inside this linear mapping range, must
>  	 * also be derived from its end points.
>  	 */
> -	return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
> -	       (start + size - 1) <= __pa(PAGE_END - 1);
> +	return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
>  }
>  
>  int arch_add_memory(int nid, u64 start, u64 size,
> 

- Anshuman
