Message-ID: <20200916082819.GB27496@willie-the-truck>
Date:   Wed, 16 Sep 2020 09:28:20 +0100
From:   Will Deacon <will@...nel.org>
To:     Gavin Shan <gshan@...hat.com>
Cc:     linux-arm-kernel@...ts.infradead.org, mark.rutland@....com,
        anshuman.khandual@....com, catalin.marinas@....com,
        linux-kernel@...r.kernel.org, shan.gavin@...il.com
Subject: Re: [PATCH 2/2] arm64/mm: Enable color zero pages

On Wed, Sep 16, 2020 at 01:25:23PM +1000, Gavin Shan wrote:
> This enables color zero pages by allocating contiguous page frames
> for them. The number of pages is determined by the L1 dCache
> (or iCache) size, which is probed from the hardware.
> 
>    * Add cache_total_size() to return L1 dCache (or iCache) size
> 
>    * Implement setup_zero_pages(), which is called after the page
>      allocator begins to work, to allocate the contiguous pages
>      needed by the color zero pages.
> 
>    * Rework ZERO_PAGE() and define __HAVE_COLOR_ZERO_PAGE.
> 
> Signed-off-by: Gavin Shan <gshan@...hat.com>
> ---
>  arch/arm64/include/asm/cache.h   | 22 ++++++++++++++++++++
>  arch/arm64/include/asm/pgtable.h |  9 ++++++--
>  arch/arm64/kernel/cacheinfo.c    | 34 +++++++++++++++++++++++++++++++
>  arch/arm64/mm/init.c             | 35 ++++++++++++++++++++++++++++++++
>  arch/arm64/mm/mmu.c              |  7 -------
>  5 files changed, 98 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
> index a4d1b5f771f6..420e9dde2c51 100644
> --- a/arch/arm64/include/asm/cache.h
> +++ b/arch/arm64/include/asm/cache.h
> @@ -39,6 +39,27 @@
>  #define CLIDR_LOC(clidr)	(((clidr) >> CLIDR_LOC_SHIFT) & 0x7)
>  #define CLIDR_LOUIS(clidr)	(((clidr) >> CLIDR_LOUIS_SHIFT) & 0x7)
>  
> +#define CSSELR_TND_SHIFT	4
> +#define CSSELR_TND_MASK		(UL(1) << CSSELR_TND_SHIFT)
> +#define CSSELR_LEVEL_SHIFT	1
> +#define CSSELR_LEVEL_MASK	(UL(7) << CSSELR_LEVEL_SHIFT)
> +#define CSSELR_IND_SHIFT	0
> +#define CSSELR_IND_MASK		(UL(1) << CSSELR_IND_SHIFT)
> +
> +#define CCSIDR_64_LS_SHIFT	0
> +#define CCSIDR_64_LS_MASK	(UL(7) << CCSIDR_64_LS_SHIFT)
> +#define CCSIDR_64_ASSOC_SHIFT	3
> +#define CCSIDR_64_ASSOC_MASK	(UL(0x1FFFFF) << CCSIDR_64_ASSOC_SHIFT)
> +#define CCSIDR_64_SET_SHIFT	32
> +#define CCSIDR_64_SET_MASK	(UL(0xFFFFFF) << CCSIDR_64_SET_SHIFT)
> +
> +#define CCSIDR_32_LS_SHIFT	0
> +#define CCSIDR_32_LS_MASK	(UL(7) << CCSIDR_32_LS_SHIFT)
> +#define CCSIDR_32_ASSOC_SHIFT	3
> +#define CCSIDR_32_ASSOC_MASK	(UL(0x3FF) << CCSIDR_32_ASSOC_SHIFT)
> +#define CCSIDR_32_SET_SHIFT	13
> +#define CCSIDR_32_SET_MASK	(UL(0x7FFF) << CCSIDR_32_SET_SHIFT)
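
(Not part of the quoted patch: the pgtable.h and init.c hunks are not
included above, so purely as context, here is a rough sketch of how
architectures that already define __HAVE_COLOR_ZERO_PAGE (e.g. mips and
s390) wire this up. setup_zero_pages() grabs a small contiguous zeroed
allocation once the page allocator is up, and ZERO_PAGE(vaddr) indexes
into it by virtual address so read-only zero-page mappings land on
different cache colors. The names below follow those architectures, not
the unquoted arm64 hunks.)

	/* Illustrative sketch only, modelled on mips/s390. */
	extern unsigned long empty_zero_page;	/* start of the zeroed region */
	extern unsigned long zero_page_mask;	/* page-aligned (region size - 1) */

	/*
	 * In setup_zero_pages(), called after the page allocator works:
	 *
	 *	empty_zero_page = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
	 *	zero_page_mask  = ((PAGE_SIZE << order) - 1) & PAGE_MASK;
	 */

	#define ZERO_PAGE(vaddr)					\
		(virt_to_page((void *)(empty_zero_page +		\
			(((unsigned long)(vaddr)) & zero_page_mask))))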

I don't think we should be inferring cache structure from these register
values. The Arm ARM helpfully says:

  | You cannot make any inference about the actual sizes of caches based
  | on these parameters.

so we need to take the topology information from elsewhere.
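
(For illustration only, not something proposed in this message: one place
that topology already lives is the generic cacheinfo framework, which
arm64 populates from DT/ACPI. A hypothetical helper querying the L1
D-cache size with the existing <linux/cacheinfo.h> API could look roughly
like the sketch below; note that cacheinfo is only valid once the per-CPU
cache attributes have been detected.)

	#include <linux/cacheinfo.h>

	/*
	 * Walk this CPU's cacheinfo leaves (filled in from DT/ACPI by the
	 * generic code) and return the L1 data cache size in bytes.
	 */
	static unsigned int l1_dcache_size(unsigned int cpu)
	{
		struct cpu_cacheinfo *cci = get_cpu_cacheinfo(cpu);
		unsigned int i;

		for (i = 0; i < cci->num_leaves; i++) {
			struct cacheinfo *ci = &cci->info_list[i];

			if (ci->level == 1 && ci->type == CACHE_TYPE_DATA)
				return ci->size;
		}

		return 0;	/* unknown */
	}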

But before we get into that, can you justify why we need to do this at all,
please? Do you have data to show the benefit of adding this complexity?

Cheers,

Will
