Message-ID: <1af39001-8d8f-b573-8159-666999d25543@arm.com>
Date:   Fri, 11 Sep 2020 12:35:42 +0530
From:   Anshuman Khandual <anshuman.khandual@....com>
To:     Gavin Shan <gshan@...hat.com>, linux-arm-kernel@...ts.infradead.org
Cc:     linux-kernel@...r.kernel.org, catalin.marinas@....com,
        will@...nel.org, shan.gavin@...il.com
Subject: Re: [PATCH v2 3/3] arm64/mm: Unify CONT_PMD_SHIFT



On 09/10/2020 03:29 PM, Gavin Shan wrote:
> Similar to how CONT_PTE_SHIFT is determined, this introduces a new
> kernel option (CONFIG_ARM64_CONT_PMD_SHIFT) to determine CONT_PMD_SHIFT.
> 
> Signed-off-by: Gavin Shan <gshan@...hat.com>
> ---
>  arch/arm64/Kconfig                     |  6 ++++++
>  arch/arm64/include/asm/pgtable-hwdef.h | 10 ++--------
>  2 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7ec30dd56300..d58e17fe9473 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -217,6 +217,12 @@ config ARM64_CONT_PTE_SHIFT
>  	default 7 if ARM64_16K_PAGES
>  	default 4
>  
> +config ARM64_CONT_PMD_SHIFT
> +	int
> +	default 5 if ARM64_64K_PAGES
> +	default 5 if ARM64_16K_PAGES
> +	default 4
> +
>  config ARCH_MMAP_RND_BITS_MIN
>         default 14 if ARM64_64K_PAGES
>         default 16 if ARM64_16K_PAGES
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index 6c9c67f62551..94b3f2ac2e9d 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -82,17 +82,11 @@
>   * Contiguous page definitions.
>   */
>  #define CONT_PTE_SHIFT		(CONFIG_ARM64_CONT_PTE_SHIFT + PAGE_SHIFT)
> -#ifdef CONFIG_ARM64_64K_PAGES
> -#define CONT_PMD_SHIFT		(5 + PMD_SHIFT)
> -#elif defined(CONFIG_ARM64_16K_PAGES)
> -#define CONT_PMD_SHIFT		(5 + PMD_SHIFT)
> -#else
> -#define CONT_PMD_SHIFT		(4 + PMD_SHIFT)
> -#endif
> -
>  #define CONT_PTES		(1 << (CONT_PTE_SHIFT - PAGE_SHIFT))
>  #define CONT_PTE_SIZE		(CONT_PTES * PAGE_SIZE)
>  #define CONT_PTE_MASK		(~(CONT_PTE_SIZE - 1))
> +
> +#define CONT_PMD_SHIFT		(CONFIG_ARM64_CONT_PMD_SHIFT + PMD_SHIFT)
>  #define CONT_PMDS		(1 << (CONT_PMD_SHIFT - PMD_SHIFT))
>  #define CONT_PMD_SIZE		(CONT_PMDS * PMD_SIZE)
>  #define CONT_PMD_MASK		(~(CONT_PMD_SIZE - 1))
> 

This is cleaner and more uniform. I did not see any problems while
running some quick hugetlb tests across multiple page size configs
after applying all the patches in this series.
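
In case it is useful, below is a minimal sketch (my own illustration,
not the exact tests referenced above) of the kind of quick check this
covers on a 4K page kernel: map one CONT_PMD sized (32MB) anonymous
hugetlb region, dirty it, and unmap. It assumes 32MB hugepages were
reserved beforehand, e.g. via
/sys/kernel/mm/hugepages/hugepages-32768kB/nr_hugepages.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_HUGETLB
#define MAP_HUGETLB	0x40000
#endif
#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT	26
#endif

int main(void)
{
	size_t len = 32UL << 20;			/* 32MB */
	int huge_32m = 25 << MAP_HUGE_SHIFT;		/* log2(32MB) = 25 */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | huge_32m,
		       -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 0xa5, len);	/* fault in and dirty every page */
	munmap(p, len);
	printf("32MB contiguous PMD hugetlb mapping OK\n");
	return 0;
}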

Adding the new config option ARM64_CONT_PMD_SHIFT makes sense, as it
eliminates the hard-coded constants that were previously used in an
ad hoc manner when computing the contiguous page table entry
properties.
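
To make the arithmetic concrete, here is a small userspace sketch of
what the unified definitions resolve to per page size. The PAGE_SHIFT
and PMD_SHIFT values and the Kconfig defaults are restated by hand for
illustration (struct granule is just a local helper for this sketch),
not taken from a kernel build:

#include <stdio.h>

struct granule {
	const char *name;
	unsigned int page_shift;	/* PAGE_SHIFT */
	unsigned int pmd_shift;		/* PMD_SHIFT */
	unsigned int cont_pte_shift;	/* CONFIG_ARM64_CONT_PTE_SHIFT */
	unsigned int cont_pmd_shift;	/* CONFIG_ARM64_CONT_PMD_SHIFT */
};

int main(void)
{
	const struct granule g[] = {
		{ "4K",  12, 21, 4, 4 },
		{ "16K", 14, 25, 7, 5 },
		{ "64K", 16, 29, 5, 5 },
	};

	for (unsigned int i = 0; i < 3; i++) {
		/* CONT_PTE_SIZE = (1 << (CONT_PTE_SHIFT - PAGE_SHIFT)) * PAGE_SIZE */
		unsigned long cont_pte_size =
			(1UL << g[i].cont_pte_shift) << g[i].page_shift;
		/* CONT_PMD_SIZE = (1 << (CONT_PMD_SHIFT - PMD_SHIFT)) * PMD_SIZE */
		unsigned long cont_pmd_size =
			(1UL << g[i].cont_pmd_shift) << g[i].pmd_shift;

		printf("%3s pages: CONT_PTE_SIZE = %4lu KB, CONT_PMD_SIZE = %5lu MB\n",
		       g[i].name, cont_pte_size >> 10, cont_pmd_size >> 20);
	}
	return 0;
}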

Reviewed-by: Anshuman Khandual <anshuman.khandual@....com>
