Date:   Tue, 6 Oct 2020 12:05:51 +0530
From:   Anshuman Khandual <anshuman.khandual@....com>
To:     Will Deacon <will@...nel.org>
Cc:     Mark Rutland <mark.rutland@....com>,
        David Hildenbrand <david@...hat.com>, catalin.marinas@....com,
        linux-kernel@...r.kernel.org, Steven Price <steven.price@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Robin Murphy <robin.murphy@....com>,
        Ard Biesheuvel <ardb@...nel.org>,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH] arm64/mm: Validate hotplug range before creating linear
 mapping



On 09/30/2020 01:32 PM, Anshuman Khandual wrote:
> But if __is_lm_address() checks against the effective linear range instead,
> i.e. [_PAGE_OFFSET(vabits_actual)..(PAGE_END - 1)], it can be used for the
> hotplug physical range check thereafter. Perhaps something like this, though
> not tested properly.
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index afa722504bfd..6da046b479d4 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -238,7 +238,10 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>   * space. Testing the top bit for the start of the region is a
>   * sufficient check and avoids having to worry about the tag.
>   */
> -#define __is_lm_address(addr)  (!(((u64)addr) & BIT(vabits_actual - 1)))
> +static inline bool __is_lm_address(unsigned long addr)
> +{
> +       return ((addr >= _PAGE_OFFSET(vabits_actual)) && (addr <= (PAGE_END - 1)));
> +}
>  
>  #define __lm_to_phys(addr)     (((addr) + physvirt_offset))
>  #define __kimg_to_phys(addr)   ((addr) - kimage_voffset)
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index d59ffabb9c84..5750370a7e8c 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1451,8 +1451,7 @@ static bool inside_linear_region(u64 start, u64 size)
>          * address range mapped by the linear map, the start address should
>          * be calculated using vabits_actual.
>          */
> -       return ((start >= __pa(_PAGE_OFFSET(vabits_actual)))
> -                       && ((start + size) <= __pa(PAGE_END - 1)));
> +       return __is_lm_address(__va(start)) && __is_lm_address(__va(start + size - 1));
>  }
>  
>  int arch_add_memory(int nid, u64 start, u64 size,
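
For context, inside_linear_region() gates the hotplug path before any linear
mapping is created. A minimal sketch of how arch_add_memory() would consume
it, based on the patch under discussion (the exact error message and return
value here are assumptions):

	int arch_add_memory(int nid, u64 start, u64 size,
			    struct mhp_params *params)
	{
		/* Reject physical ranges the linear map cannot cover. */
		if (!inside_linear_region(start, size)) {
			pr_err("[%llx %llx] is outside linear mapping region\n",
			       start, start + size);
			return -EINVAL;
		}
		/* ... proceed with __create_pgd_mapping() etc. ... */
	}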

Will/Ard,

Any thoughts on this? __is_lm_address() now checks for a range instead of
a single bit. This will remain compatible later on, even if the linear
mapping range changes from the current lower-half scheme.
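
To illustrate the difference, a stand-alone sketch of the two checks (the
VABITS_ACTUAL value and the lower-half LM_END derivation below are
illustrative assumptions, not the kernel's actual definitions):

	#include <stdbool.h>
	#include <stdint.h>

	#define VABITS_ACTUAL	48ULL
	/* Stand-in for _PAGE_OFFSET(vabits_actual): start of the linear map. */
	#define LM_START	(~0ULL << VABITS_ACTUAL)
	/* Stand-in for PAGE_END under the current lower-half scheme. */
	#define LM_END		(~0ULL << (VABITS_ACTUAL - 1))

	/* Old check: top VA bit clear implies a linear map address. */
	static bool is_lm_bit(uint64_t addr)
	{
		return !(addr & (1ULL << (VABITS_ACTUAL - 1)));
	}

	/* New check: explicit range test; keeps working even if the
	 * linear map no longer occupies exactly the lower half. */
	static bool is_lm_range(uint64_t addr)
	{
		return addr >= LM_START && addr <= LM_END - 1;
	}

With VABITS_ACTUAL = 48, both agree that 0xffff000000000000 is a linear map
address and 0xffff800000000000 is not, but only the range form stays correct
if the bounds move.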

- Anshuman
