Date:   Thu, 6 Aug 2020 13:01:09 +0100
From:   Catalin Marinas <catalin.marinas@....com>
To:     Sami Tolvanen <samitolvanen@...gle.com>
Cc:     Will Deacon <will@...nel.org>, Zhenyu Ye <yezhenyu2@...wei.com>,
        Mark Rutland <mark.rutland@....com>,
        Marc Zyngier <maz@...nel.org>,
        Nick Desaulniers <ndesaulniers@...gle.com>,
        Nathan Chancellor <natechancellor@...il.com>,
        Kees Cook <keescook@...omium.org>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        clang-built-linux@...glegroups.com
Subject: Re: [PATCH] arm64: tlb: fix ARM64_TLB_RANGE with LLVM's integrated
 assembler

On Wed, Aug 05, 2020 at 11:19:20AM -0700, Sami Tolvanen wrote:
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index d493174415db..66c2aab5e9cb 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -16,6 +16,16 @@
>  #include <asm/cputype.h>
>  #include <asm/mmu.h>
>  
> +/*
> + * Enable ARMv8.4-TLBI instructions with ARM64_TLB_RANGE. Note that binutils
> + * doesn't support .arch_extension tlb-rmi, so use .arch armv8.4-a instead.
> + */
> +#ifdef CONFIG_ARM64_TLB_RANGE
> +#define __TLBI_PREAMBLE	".arch armv8.4-a\n"
> +#else
> +#define __TLBI_PREAMBLE
> +#endif
> +
>  /*
>   * Raw TLBI operations.
>   *
> @@ -28,14 +38,16 @@
>   * not. The macros handles invoking the asm with or without the
>   * register argument as appropriate.
>   */
> -#define __TLBI_0(op, arg) asm ("tlbi " #op "\n"				       \
> +#define __TLBI_0(op, arg) asm (__TLBI_PREAMBLE				       \
> +			       "tlbi " #op "\n"				       \
>  		   ALTERNATIVE("nop\n			nop",		       \
>  			       "dsb ish\n		tlbi " #op,	       \
>  			       ARM64_WORKAROUND_REPEAT_TLBI,		       \
>  			       CONFIG_ARM64_WORKAROUND_REPEAT_TLBI)	       \
>  			    : : )
>  
> -#define __TLBI_1(op, arg) asm ("tlbi " #op ", %0\n"			       \
> +#define __TLBI_1(op, arg) asm (__TLBI_PREAMBLE				       \
> +			       "tlbi " #op ", %0\n"			       \
>  		   ALTERNATIVE("nop\n			nop",		       \
>  			       "dsb ish\n		tlbi " #op ", %0",     \
>  			       ARM64_WORKAROUND_REPEAT_TLBI,		       \

A potential problem here is that with gas (not sure about the integrated
assembler), a .arch directive overrides any previous .arch. So if we ever
end up with two such preambles in the same generated .S file, whichever
one comes last wins and we get some fairly random behaviour.
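
To illustrate (quick sketch, not something I've actually run through gas):

	.arch armv8.4-a			// from one preamble
	tlbi	rvae1is, x0		// range TLBI, fine under 8.4
	.arch armv8.2-a			// from a second, later preamble
	tlbi	rvae1is, x0		// now rejected, 8.2 has no tlb-rmi

i.e. the second .arch doesn't get combined with the first, it simply
replaces it.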

Does the LLVM integrated assembler behave the same way, i.e. does a
later .arch override a prior .arch there as well?

Maybe a better solution is for all inline asm on arm64 to share a
standard preamble that sets the maximum supported architecture version.
Individual .arch_extension directives can still be added on top of that,
since those only accumulate and don't override the selected .arch.
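
Roughly what I have in mind (untested sketch; the header and macro name
below are made up for illustration):

	/* e.g. in asm/compiler.h, define once for all arm64 inline asm */
	#define ARM64_ASM_PREAMBLE	".arch armv8.4-a\n"

	/* tlbflush.h then starts every asm() with the common preamble */
	#define __TLBI_0(op, arg) asm (ARM64_ASM_PREAMBLE		       \
			       "tlbi " #op "\n"				       \
		   ALTERNATIVE("nop\n			nop",		       \
			       "dsb ish\n		tlbi " #op,	       \
			       ARM64_WORKAROUND_REPEAT_TLBI,		       \
			       CONFIG_ARM64_WORKAROUND_REPEAT_TLBI)	       \
			    : : )

Anything that needs a feature outside the base architecture could then
append its own .arch_extension after the preamble.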

-- 
Catalin
