Message-ID: <ZV594z0bNQR-vo2b@ashyti-mobl2.lan>
Date: Wed, 22 Nov 2023 23:17:07 +0100
From: Andi Shyti <andi.shyti@...ux.intel.com>
To: "wuqiang.matt" <wuqiang.matt@...edance.com>
Cc: ubizjak@...il.com, mark.rutland@....com, vgupta@...nel.org,
bcain@...cinc.com, jonas@...thpole.se,
stefan.kristiansson@...nalahti.fi, shorne@...il.com,
chris@...kel.net, jcmvbkbc@...il.com, geert@...ux-m68k.org,
andi.shyti@...ux.intel.com, mingo@...nel.org, palmer@...osinc.com,
andrzej.hajda@...el.com, arnd@...db.de, peterz@...radead.org,
mhiramat@...nel.org, linux-arch@...r.kernel.org,
linux-snps-arc@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-hexagon@...r.kernel.org, linux-openrisc@...r.kernel.org,
linux-trace-kernel@...r.kernel.org, mattwu@....com,
linux@...ck-us.net
Subject: Re: [PATCH v3 1/5] arch,locking/atomic: arc: arch_cmpxchg should
check data size

Hi Wuqiang,

On Tue, Nov 21, 2023 at 10:23:43PM +0800, wuqiang.matt wrote:
> arch_cmpxchg() should check the data size rather than the pointer size in
> case CONFIG_ARC_HAS_LLSC is defined. So rename __cmpxchg to __cmpxchg_32 to
> emphasize its explicit support of 32-bit data size, with BUILD_BUG_ON()
> added to avoid any possible misuse with unsupported data types.
>
> In case CONFIG_ARC_HAS_LLSC is undefined, arch_cmpxchg() uses a spinlock
> to accomplish SMP safety, so the BUILD_BUG_ON check is unnecessary.
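
(A minimal sketch for illustration, not part of the patch: it contrasts
the two checks the commit message describes, assuming a 32-bit ARC build
where sizeof(void *) == 4; the function name here is made up.)

  #include <linux/build_bug.h>
  #include <linux/types.h>

  static void cmpxchg_size_check_sketch(void)
  {
          u64 val;
          u64 *ptr = &val;

          /* Checking the pointer size: always 4 bytes on 32-bit ARC,
           * so this can never fire, even for a 64-bit operand.
           */
          BUILD_BUG_ON(sizeof(ptr) != 4);

          /* Checking the data size: this triggers at build time for
           * the 64-bit operand, which is the behaviour the patch wants.
           */
          BUILD_BUG_ON(sizeof(*ptr) != 4);
  }

The added BUILD_BUG_ON(sizeof(*(ptr)) != 4) in __cmpxchg_32 provides the
same build-time rejection for unsupported sizes.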
>
> v2 -> v3:
> - Patches regrouped, with the improvement for xtensa included
> - Comments refined to address why these changes are needed
>
> v1 -> v2:
> - Try using native cmpxchg variants if available, as Arnd advised
>
> Signed-off-by: wuqiang.matt <wuqiang.matt@...edance.com>
> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>
> ---
> arch/arc/include/asm/cmpxchg.h | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
> index e138fde067de..bf46514f6f12 100644
> --- a/arch/arc/include/asm/cmpxchg.h
> +++ b/arch/arc/include/asm/cmpxchg.h
> @@ -18,14 +18,16 @@
> * if (*ptr == @old)
> * *ptr = @new
> */
> -#define __cmpxchg(ptr, old, new) \
> +#define __cmpxchg_32(ptr, old, new) \
> ({ \
> __typeof__(*(ptr)) _prev; \
> \
> + BUILD_BUG_ON(sizeof(*(ptr)) != 4); \
> + \
> __asm__ __volatile__( \
> - "1: llock %0, [%1] \n" \
> + "1: llock %0, [%1] \n" \
> " brne %0, %2, 2f \n" \
> - " scond %3, [%1] \n" \
> + " scond %3, [%1] \n" \
> " bnz 1b \n" \
> "2: \n" \
> : "=&r"(_prev) /* Early clobber prevent reg reuse */ \
> @@ -47,7 +49,7 @@
> \
> switch(sizeof((_p_))) { \
> case 4: \
> - _prev_ = __cmpxchg(_p_, _o_, _n_); \
> + _prev_ = __cmpxchg_32(_p_, _o_, _n_); \
> break; \
> default: \
> BUILD_BUG(); \
> @@ -65,8 +67,6 @@
> __typeof__(*(ptr)) _prev_; \
> unsigned long __flags; \
> \
> - BUILD_BUG_ON(sizeof(_p_) != 4); \
> - \

I think I made some comments here that have not been addressed or
replied to.

Thanks,
Andi

> /* \
> * spin lock/unlock provide the needed smp_mb() before/after \
> */ \
> --
> 2.40.1