Message-ID: <20180226180551.GM26147@arm.com>
Date: Mon, 26 Feb 2018 18:05:52 +0000
From: Will Deacon <will.deacon@....com>
To: mattst88@...il.com, rth@...ddle.net, tglx@...utronix.de,
hpa@...or.com, stern@...land.harvard.edu, parri.andrea@...il.com,
ink@...assic.park.msu.ru, akpm@...ux-foundation.org,
paulmck@...ux.vnet.ibm.com, torvalds@...ux-foundation.org,
linux-kernel@...r.kernel.org, peterz@...radead.org,
mingo@...nel.org
Cc: linux-tip-commits@...r.kernel.org
Subject: Re: [tip:locking/urgent] locking/xchg/alpha: Clean up barrier usage
by using smp_mb() in place of __ASM__MB

Hi Andrea,

I know this is in mainline now, but I think the way you've got the barriers
here:

On Fri, Feb 23, 2018 at 12:27:54AM -0800, tip-bot for Andrea Parri wrote:
> diff --git a/arch/alpha/include/asm/cmpxchg.h b/arch/alpha/include/asm/cmpxchg.h
> index 46ebf14aed4e..8a2b331e43fe 100644
> --- a/arch/alpha/include/asm/cmpxchg.h
> +++ b/arch/alpha/include/asm/cmpxchg.h
> @@ -6,7 +6,6 @@
> * Atomic exchange routines.
> */
>
> -#define __ASM__MB
> #define ____xchg(type, args...) __xchg ## type ## _local(args)
> #define ____cmpxchg(type, args...) __cmpxchg ## type ## _local(args)
> #include <asm/xchg.h>
> @@ -33,10 +32,6 @@
> cmpxchg_local((ptr), (o), (n)); \
> })
>
> -#ifdef CONFIG_SMP
> -#undef __ASM__MB
> -#define __ASM__MB "\tmb\n"
> -#endif
> #undef ____xchg
> #undef ____cmpxchg
> #define ____xchg(type, args...) __xchg ##type(args)
> @@ -64,7 +59,6 @@
> cmpxchg((ptr), (o), (n)); \
> })
>
> -#undef __ASM__MB
> #undef ____cmpxchg
>
> #endif /* _ALPHA_CMPXCHG_H */
> diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
> index e2660866ce97..e1facf6fc244 100644
> --- a/arch/alpha/include/asm/xchg.h
> +++ b/arch/alpha/include/asm/xchg.h
> @@ -28,12 +28,12 @@ ____xchg(_u8, volatile char *m, unsigned long val)
> " or %1,%2,%2\n"
> " stq_c %2,0(%3)\n"
> " beq %2,2f\n"
> - __ASM__MB
> ".subsection 2\n"
> "2: br 1b\n"
> ".previous"
> : "=&r" (ret), "=&r" (val), "=&r" (tmp), "=&r" (addr64)
> : "r" ((long)m), "1" (val) : "memory");
> + smp_mb();
>
> return ret;

ends up adding unnecessary barriers to the _local variants, which the
previous code took care to avoid. That's why I suggested adding the
smp_mb() into the cmpxchg macro rather than into the ____cmpxchg
variants.
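
Something like the sketch below is what I had in mind (completely
untested, and the __ret temporaries are purely illustrative): keep the
barrier out of xchg.h entirely so the _local variants never see it.
smp_mb() already compiles down to barrier() on !CONFIG_SMP, so the old
CONFIG_SMP conditional isn't needed either:

#define xchg(ptr, x)							\
({									\
	__typeof__(*(ptr)) __ret;					\
	__typeof__(*(ptr)) _x_ = (x);					\
	__ret = (__typeof__(*(ptr)))					\
		__xchg((ptr), (unsigned long)_x_, sizeof(*(ptr)));	\
	smp_mb();	/* xchg_local() stays barrier-free */		\
	__ret;								\
})

#define cmpxchg(ptr, o, n)						\
({									\
	__typeof__(*(ptr)) __ret;					\
	__typeof__(*(ptr)) _o_ = (o);					\
	__typeof__(*(ptr)) _n_ = (n);					\
	__ret = (__typeof__(*(ptr))) __cmpxchg((ptr),			\
		(unsigned long)_o_, (unsigned long)_n_,			\
		sizeof(*(ptr)));					\
	smp_mb();	/* likewise for cmpxchg_local() */		\
	__ret;								\
})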
I think it's worth spinning another patch to fix this properly.

Will