Date:	Tue, 8 Mar 2016 17:40:14 +0700
From:	Ard Biesheuvel <ard.biesheuvel@...aro.org>
To:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"stable@...r.kernel.org" <stable@...r.kernel.org>,
	Will Deacon <will.deacon@....com>
Subject: Re: [PATCH 4.4 34/74] arm64: vmemmap: use virtual projection of
 linear region

On 8 March 2016 at 07:02, Greg Kroah-Hartman <gregkh@...uxfoundation.org> wrote:
> 4.4-stable review patch.  If anyone has any objections, please let me know.
>

Please hold off on this one. We are seeing some breakage on systems with 64k pages.

> ------------------
>
> From: Ard Biesheuvel <ard.biesheuvel@...aro.org>
>
> commit dfd55ad85e4a7fbaa82df12467515ac3c81e8a3e upstream.
>
> Commit dd006da21646 ("arm64: mm: increase VA range of identity map") made
> some changes to the memory mapping code to allow physical memory to reside
> at an offset that exceeds the size of the virtual mapping.
>
> However, since the size of the vmemmap area is proportional to the size of
> the VA area whereas it is populated relative to the physical space, we may
> end up with the struct page array being mapped outside of the vmemmap
> region. For instance, on my Seattle A0 box, I can see the following output
> in the dmesg log.
>
>    vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
>              0xffffffbfc0000000 - 0xffffffbfd0000000   (   256 MB actual)
>
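For reference, the failure mode above is plain pointer arithmetic. The
standalone sketch below (illustrative userspace code, not the kernel macros;
it assumes 4 KB pages, VA_BITS == 39 and a 64-byte struct page, which is what
the 8 GB figure corresponds to, plus a DRAM base of 0x8000000000, consistent
with the offsets in the dmesg output) shows the struct page for the first
byte of DRAM landing exactly at the end of the old window:

    #include <stdio.h>

    #define PAGE_SHIFT       12   /* 4 KB pages */
    #define VA_BITS          39
    #define STRUCT_PAGE_SIZE 64   /* assumed sizeof(struct page) */

    /* old scheme: window sized for the whole VA space, indexed by raw PFN */
    #define VMEMMAP_SIZE ((1UL << (VA_BITS - PAGE_SHIFT)) * STRUCT_PAGE_SIZE)

    int main(void)
    {
            unsigned long base     = 0xffffffbdc0000000UL; /* from dmesg above */
            unsigned long memstart = 0x8000000000UL;       /* assumed DRAM base */

            /* struct page for the first byte of DRAM: base + pfn * size */
            unsigned long first = base +
                    (memstart >> PAGE_SHIFT) * STRUCT_PAGE_SIZE;

            printf("window: 0x%lx - 0x%lx (%lu GB)\n",
                   base, base + VMEMMAP_SIZE, VMEMMAP_SIZE >> 30);
            printf("first struct page: 0x%lx (%s)\n", first,
                   first >= base + VMEMMAP_SIZE ? "outside the window"
                                                : "inside the window");
            return 0;
    }
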
> We can fix this by deciding that the vmemmap region is not a projection of
> the physical space, but of the virtual space above PAGE_OFFSET, i.e., the
> linear region. This way, we are guaranteed that the vmemmap region is of
> sufficient size, and we can even reduce the size by half.
>
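That is, the array is now effectively indexed by
(paddr - memstart_addr) >> PAGE_SHIFT, and since the linear region spans at
most 1UL << (VA_BITS - 1) bytes, half the old window is guaranteed to
suffice. A sketch of the new arithmetic under the same assumed constants as
above (again illustrative userspace code, not the kernel macros):

    #include <stdio.h>

    #define PAGE_SHIFT       12
    #define VA_BITS          39
    #define STRUCT_PAGE_SIZE 64

    /* new scheme: window covers only the linear region, half the VA space */
    #define VMEMMAP_SIZE ((1UL << (VA_BITS - PAGE_SHIFT - 1)) * STRUCT_PAGE_SIZE)

    int main(void)
    {
            unsigned long vmemmap_start = 0xffffffbdc0000000UL; /* illustrative */
            unsigned long memstart      = 0x8000000000UL;       /* assumed base */

            /* bias vmemmap down by the PFN of the start of DRAM ... */
            unsigned long vmemmap = vmemmap_start -
                    (memstart >> PAGE_SHIFT) * STRUCT_PAGE_SIZE;

            /* ... so indexing by raw PFN is equivalent to indexing by
             * (paddr - memstart) >> PAGE_SHIFT relative to VMEMMAP_START */
            unsigned long paddr = memstart + (1UL << 30); /* 1 GB into DRAM */
            unsigned long entry = vmemmap +
                    (paddr >> PAGE_SHIFT) * STRUCT_PAGE_SIZE;

            printf("entry 0x%lx, window 0x%lx - 0x%lx -> %s\n",
                   entry, vmemmap_start, vmemmap_start + VMEMMAP_SIZE,
                   entry >= vmemmap_start &&
                   entry <  vmemmap_start + VMEMMAP_SIZE ? "inside" : "outside");
            return 0;
    }
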
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@...aro.org>
> Signed-off-by: Will Deacon <will.deacon@....com>
> Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
>
> ---
>  arch/arm64/include/asm/pgtable.h |    7 ++++---
>  arch/arm64/mm/init.c             |    4 ++--
>  2 files changed, 6 insertions(+), 5 deletions(-)
>
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -34,13 +34,13 @@
>  /*
>   * VMALLOC and SPARSEMEM_VMEMMAP ranges.
>   *
> - * VMEMAP_SIZE: allows the whole VA space to be covered by a struct page array
> + * VMEMAP_SIZE: allows the whole linear region to be covered by a struct page array
>   *     (rounded up to PUD_SIZE).
>   * VMALLOC_START: beginning of the kernel VA space
>   * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
>   *     fixed mappings and modules
>   */
> -#define VMEMMAP_SIZE           ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
> +#define VMEMMAP_SIZE           ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
>
>  #ifndef CONFIG_KASAN
>  #define VMALLOC_START          (VA_START)
> @@ -51,7 +51,8 @@
>
>  #define VMALLOC_END            (PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
>
> -#define vmemmap                        ((struct page *)(VMALLOC_END + SZ_64K))
> +#define VMEMMAP_START          (VMALLOC_END + SZ_64K)
> +#define vmemmap                        ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
>
>  #define FIRST_USER_ADDRESS     0UL
>
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -319,8 +319,8 @@ void __init mem_init(void)
>  #endif
>                   MLG(VMALLOC_START, VMALLOC_END),
>  #ifdef CONFIG_SPARSEMEM_VMEMMAP
> -                 MLG((unsigned long)vmemmap,
> -                     (unsigned long)vmemmap + VMEMMAP_SIZE),
> +                 MLG(VMEMMAP_START,
> +                     VMEMMAP_START + VMEMMAP_SIZE),
>                   MLM((unsigned long)virt_to_page(PAGE_OFFSET),
>                       (unsigned long)virt_to_page(high_memory)),
>  #endif
>
>
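(For completeness: the mm/init.c hunk follows from the above. vmemmap is no
longer the start of the mapped window but a pointer biased
memstart_addr >> PAGE_SHIFT entries below it, so the old
MLG((unsigned long)vmemmap, ...) line would print a range shifted below the
region that is actually reserved; VMEMMAP_START and VMEMMAP_SIZE describe
that window directly.)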
