Message-ID: <20200508202835.GA550540@ubuntu-s3-xlarge-x86>
Date: Fri, 8 May 2020 13:28:35 -0700
From: Nathan Chancellor <natechancellor@...il.com>
To: Nick Desaulniers <ndesaulniers@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Jesse Brandeburg <jesse.brandeburg@...el.com>,
Sedat Dilek <sedat.dilek@...il.com>,
"kernelci . org bot" <bot@...nelci.org>,
Andy Shevchenko <andriy.shevchenko@...el.com>,
Brian Gerst <brgerst@...il.com>,
"H . Peter Anvin" <hpa@...or.com>,
Ilie Halip <ilie.halip@...il.com>, x86@...nel.org,
Marco Elver <elver@...gle.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Luc Van Oostenryck <luc.vanoostenryck@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Masahiro Yamada <yamada.masahiro@...ionext.com>,
Daniel Axtens <dja@...ens.net>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
linux-kernel@...r.kernel.org, clang-built-linux@...glegroups.com
Subject: Re: [PATCH v5] x86: bitops: fix build regression

On Fri, May 08, 2020 at 11:32:29AM -0700, Nick Desaulniers wrote:
> This is easily reproducible via CC=clang+CONFIG_STAGING=y+CONFIG_VT6656=m.
>
> It turns out that if your config tickles __builtin_constant_p via
> differences in inlining decisions, these statements produce invalid
> assembly:
>
> $ cat foo.c
> long a(long b, long c) {
> asm("orb\t%1, %0" : "+q"(c): "r"(b));
> return c;
> }
> $ gcc foo.c
> foo.c: Assembler messages:
> foo.c:2: Error: `%rax' not allowed with `orb'
>
> Instead, use the `%b` "x86 Operand Modifier" to force register
> allocation to select a lower-8-bit GPR operand.
>
> The "q" constraint only has meaning on -m32 otherwise is treated as
> "r". Not all GPRs have low-8-bit aliases for -m32.
>
> Cc: Jesse Brandeburg <jesse.brandeburg@...el.com>
> Link: https://github.com/ClangBuiltLinux/linux/issues/961
> Link: https://lore.kernel.org/lkml/20200504193524.GA221287@google.com/
> Link: https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#x86Operandmodifiers
> Fixes: 1651e700664b4 ("x86: Fix bitops.h warning with a moved cast")
> Reported-by: Sedat Dilek <sedat.dilek@...il.com>
> Reported-by: kernelci.org bot <bot@...nelci.org>
> Suggested-by: Andy Shevchenko <andriy.shevchenko@...el.com>
> Suggested-by: Brian Gerst <brgerst@...il.com>
> Suggested-by: H. Peter Anvin <hpa@...or.com>
> Suggested-by: Ilie Halip <ilie.halip@...il.com>
> Signed-off-by: Nick Desaulniers <ndesaulniers@...gle.com>

Reviewed-by: Nathan Chancellor <natechancellor@...il.com>
Tested-by: Nathan Chancellor <natechancellor@...il.com> # build, clang-11
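
As a quick illustration of the %b fix, here is a user-space sketch of
the toy reproducer from the commit message (my own example, not part of
the patch; the file name is made up). Both operands get the modifier
here because %0 is a register in the toy, whereas the kernel code only
needs it on the mask operand since %0 there is a byte-sized memory
reference:

  $ cat foo_fixed.c
  long a(long b, long c) {
          /* %b0 / %b1 select the low-8-bit register aliases, e.g. %al, %sil */
          asm("orb\t%b1, %b0" : "+q"(c) : "r"(b));
          return c;
  }
  $ gcc -c foo_fixed.c
  $ clang -c foo_fixed.c

Both should assemble cleanly for x86_64, in contrast to the `%rax'
error above.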
> ---
> Changes V4 -> V5:
> * actually use `%b` in arch_change_bit().
>
> Changes V3 -> V4:
> * drop (u8) cast from arch_change_bit() as well.
>
> Changes V2 -> V3:
> * use `%b` "x86 Operand Modifier" instead of bitwise op then cast.
> * reword commit message.
> * add Brian and HPA suggested by tags
> * drop Nathan & Sedat Tested by/reviewed by tags (new patch is different
> enough).
> * Take over authorship.
>
> Changes V1 -> V2:
> * change authorship/signed-off-by to Ilie
> * add Nathan's Tested by/reviewed by
> * update commit message slightly with info sent to HPA.
> 
>  arch/x86/include/asm/bitops.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> index b392571c1f1d..35460fef39b8 100644
> --- a/arch/x86/include/asm/bitops.h
> +++ b/arch/x86/include/asm/bitops.h
> @@ -52,9 +52,9 @@ static __always_inline void
>  arch_set_bit(long nr, volatile unsigned long *addr)
>  {
>  	if (__builtin_constant_p(nr)) {
> -		asm volatile(LOCK_PREFIX "orb %1,%0"
> +		asm volatile(LOCK_PREFIX "orb %b1,%0"
>  			: CONST_MASK_ADDR(nr, addr)
> -			: "iq" (CONST_MASK(nr) & 0xff)
> +			: "iq" (CONST_MASK(nr))
>  			: "memory");
>  	} else {
>  		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
> @@ -72,9 +72,9 @@ static __always_inline void
>  arch_clear_bit(long nr, volatile unsigned long *addr)
>  {
>  	if (__builtin_constant_p(nr)) {
> -		asm volatile(LOCK_PREFIX "andb %1,%0"
> +		asm volatile(LOCK_PREFIX "andb %b1,%0"
>  			: CONST_MASK_ADDR(nr, addr)
> -			: "iq" (CONST_MASK(nr) ^ 0xff));
> +			: "iq" (~CONST_MASK(nr)));
>  	} else {
>  		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
>  			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
> @@ -123,9 +123,9 @@ static __always_inline void
>  arch_change_bit(long nr, volatile unsigned long *addr)
>  {
>  	if (__builtin_constant_p(nr)) {
> -		asm volatile(LOCK_PREFIX "xorb %1,%0"
> +		asm volatile(LOCK_PREFIX "xorb %b1,%0"
>  			: CONST_MASK_ADDR(nr, addr)
> -			: "iq" ((u8)CONST_MASK(nr)));
> +			: "iq" (CONST_MASK(nr)));
>  	} else {
>  		asm volatile(LOCK_PREFIX __ASM_SIZE(btc) " %1,%0"
>  			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
> --
> 2.26.2.645.ge9eca65c58-goog
>
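
As an aside, here is a small user-space sketch (mine, not part of the
patch) of the arch_clear_bit() change above: for every bit position,
the low byte of ~CONST_MASK(nr) matches the old CONST_MASK(nr) ^ 0xff
encoding, and that low byte is all the byte-sized andb ever consumes.

  #include <assert.h>
  #include <stdio.h>

  /* Same shape as the macro in arch/x86/include/asm/bitops.h. */
  #define CONST_MASK(nr) (1 << ((nr) & 7))

  int main(void)
  {
          for (int nr = 0; nr < 64; nr++) {
                  unsigned char old_enc = CONST_MASK(nr) ^ 0xff; /* old andb operand */
                  unsigned char new_enc = ~CONST_MASK(nr);       /* new andb operand */

                  assert(old_enc == new_enc);
          }
          printf("low-byte encodings match for all bit positions\n");
          return 0;
  }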