Message-ID: <20210511091621.GA6152@C02TD0UTHF1T.local>
Date: Tue, 11 May 2021 10:16:21 +0100
From: Mark Rutland <mark.rutland@....com>
To: linux-kernel@...r.kernel.org, will@...nel.org,
boqun.feng@...il.com, peterz@...radead.org
Cc: aou@...s.berkeley.edu, arnd@...db.de, bcain@...eaurora.org,
benh@...nel.crashing.org, chris@...kel.net, dalias@...c.org,
davem@...emloft.net, deanbo422@...il.com, deller@....de,
geert@...ux-m68k.org, green.hu@...il.com, guoren@...nel.org,
ink@...assic.park.msu.ru, James.Bottomley@...senPartnership.com,
jcmvbkbc@...il.com, jonas@...thpole.se, ley.foon.tan@...el.com,
linux@...linux.org.uk, mattst88@...il.com, monstr@...str.eu,
mpe@...erman.id.au, nickhu@...estech.com, palmer@...belt.com,
paulus@...ba.org, paul.walmsley@...ive.com, rth@...ddle.net,
shorne@...il.com, stefan.kristiansson@...nalahti.fi,
tsbogend@...ha.franken.de, vgupta@...opsys.com,
ysato@...rs.sourceforge.jp
Subject: Re: [PATCH 27/33] locking/atomic: powerpc: move to ARCH_ATOMIC
On Mon, May 10, 2021 at 10:37:47AM +0100, Mark Rutland wrote:
> We'd like all architectures to convert to ARCH_ATOMIC, as once all
> architectures are converted it will be possible to make significant
> cleanups to the atomics headers, and this will make it much easier to
> generically enable atomic functionality (e.g. debug logic in the
> instrumented wrappers).
>
> As a step towards that, this patch migrates powerpc to ARCH_ATOMIC. The
> arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> code wraps these with optional instrumentation to provide the regular
> functions.
>
> Signed-off-by: Mark Rutland <mark.rutland@....com>
> Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
> Cc: Boqun Feng <boqun.feng@...il.com>
> Cc: Michael Ellerman <mpe@...erman.id.au>
> Cc: Paul Mackerras <paulus@...ba.org>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Will Deacon <will@...nel.org>
> ---
> arch/powerpc/Kconfig | 1 +
> arch/powerpc/include/asm/atomic.h | 140 +++++++++++++++++++------------------
> arch/powerpc/include/asm/cmpxchg.h | 30 ++++----
> 3 files changed, 89 insertions(+), 82 deletions(-)
The kbuild test robot spotted a couple of bits I'd got wrong; I've noted
those below (and both are now fixed in my kernel.org branch).
> static __always_inline bool
> -atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
> +arch_atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
Since this isn't part of the core atomic API, and is used directly by
powerpc's spinlock implementation, this should have stayed as-is (or we
should use the `arch_` prefix consistently and update the spinlock code).
I've dropped the `arch_` prefix for now.
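For context, the split the changelog describes — arch code providing
arch_atomic_*() and common code wrapping it to provide the regular API —
can be sketched roughly as below. This is a simplified illustration, not
the actual generated kernel headers; instrument_atomic_rmw() here is a
hypothetical stand-in for the real instrumentation hooks:

```c
#include <assert.h>
#include <stdatomic.h>

typedef struct { _Atomic int counter; } atomic_t;

/* Arch-provided implementation (what the patch renames from
 * atomic_add_return() to arch_atomic_add_return()). */
static inline int arch_atomic_add_return(int i, atomic_t *v)
{
	return atomic_fetch_add(&v->counter, i) + i;
}

/* Hypothetical stand-in for the debug/instrumentation hook that the
 * common wrappers can add once an arch selects ARCH_ATOMIC. */
static inline void instrument_atomic_rmw(atomic_t *v)
{
	(void)v;
}

/* Common-code wrapper providing the regular, instrumented API on top
 * of the arch_ implementation. */
static inline int wrapped_atomic_add_return(int i, atomic_t *v)
{
	instrument_atomic_rmw(v);
	return arch_atomic_add_return(i, v);
}
```

The point of the series is that once every architecture provides the
arch_ forms, this wrapping (and any debug logic in it) can be done once
in common code rather than per-arch.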
[...]
> /**
> * atomic64_fetch_add_unless - add unless the number is a given value
> @@ -518,7 +524,7 @@ static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
> * Atomically adds @a to @v, so long as it was not @u.
> * Returns the old value of @v.
> */
> -static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> +static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> {
> s64 t;
>
> @@ -539,7 +545,7 @@ static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
>
> return t;
> }
> -#define atomic64_fetch_add_unless atomic64_fetch_add_unless
> +#define arch_atomic64_fetch_add_unless atomic64_fetch_add_unless
Looks like I forgot the `arch_` prefix on the right hand side here; this
should have been:
#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
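The reason the right-hand side matters is that the generic headers test
the macro name to decide whether an arch provided its own implementation
or needs the fallback; the self-referential define is the marker. A
minimal sketch of that selection pattern (simplified, not the exact
kernel headers):

```c
#include <assert.h>

/* Arch-provided implementation: add @a to @v unless *v == @u, and
 * return the old value (non-atomic here for illustration only). */
static long long arch_atomic64_fetch_add_unless(long long *v,
						long long a, long long u)
{
	long long old = *v;

	if (old != u)
		*v += a;
	return old;
}

/* The marker: the macro must expand to the arch_-prefixed name itself.
 * With the typo'd right-hand side, uses of the arch_ name would expand
 * to the unprefixed name and fail to link. */
#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless

#ifndef arch_atomic64_fetch_add_unless
/* Generic fallback would be emitted here if the arch didn't provide
 * its own implementation. */
#error "fallback selected unexpectedly"
#endif
```

Since the macro expands to itself, calls through the arch_ name resolve
to the function above, and the #ifndef correctly suppresses the generic
fallback.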
Thanks,
Mark.