Message-ID: <mhng-bc7520a5-9be6-4f9f-b3f1-3a44d7da233e@palmer-si-x1c4>
Date: Mon, 04 Jun 2018 16:17:25 -0700 (PDT)
From: Palmer Dabbelt <palmer@...ive.com>
To: mark.rutland@....com
CC: linux-kernel@...r.kernel.org, mark.rutland@....com,
boqun.feng@...il.com, Will Deacon <will.deacon@....com>
Subject: Re: [PATCHv2 03/16] atomics/treewide: make atomic64_inc_not_zero() optional
On Tue, 29 May 2018 08:43:33 PDT (-0700), mark.rutland@....com wrote:
> We define a trivial fallback for atomic_inc_not_zero(), but don't do
> the same for atmic64_inc_not_zero(), leading most architectures to
> define the same boilerplate.
s/atmic64/atomic64/
> Let's add a fallback in <linux/atomic.h>, and remove the redundant
> implementations. Note that atomic64_add_unless() is always defined in
> <linux/atomic.h>, and promotes its arguments to the requisite types, so
> we need not do this explicitly.
>
> There should be no functional change as a result of this patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@....com>
> Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Cc: Boqun Feng <boqun.feng@...il.com>
> Cc: Will Deacon <will.deacon@....com>
> ---
> arch/alpha/include/asm/atomic.h | 2 --
> arch/arc/include/asm/atomic.h | 1 -
> arch/arm/include/asm/atomic.h | 1 -
> arch/arm64/include/asm/atomic.h | 2 --
> arch/ia64/include/asm/atomic.h | 2 --
> arch/mips/include/asm/atomic.h | 2 --
> arch/parisc/include/asm/atomic.h | 2 --
> arch/powerpc/include/asm/atomic.h | 1 +
> arch/riscv/include/asm/atomic.h | 7 -------
> arch/s390/include/asm/atomic.h | 1 -
> arch/sparc/include/asm/atomic_64.h | 2 --
> arch/x86/include/asm/atomic64_32.h | 2 +-
> arch/x86/include/asm/atomic64_64.h | 2 --
> include/asm-generic/atomic-instrumented.h | 3 +++
> include/asm-generic/atomic64.h | 1 -
> include/linux/atomic.h | 11 +++++++++++
> 16 files changed, 16 insertions(+), 26 deletions(-)
> [...]
> diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> index 0e27e050ba14..18259e90f57e 100644
> --- a/arch/riscv/include/asm/atomic.h
> +++ b/arch/riscv/include/asm/atomic.h
> @@ -375,13 +375,6 @@ static __always_inline int atomic64_add_unless(atomic64_t *v, long a, long u)
> }
> #endif
>
> -#ifndef CONFIG_GENERIC_ATOMIC64
> -static __always_inline long atomic64_inc_not_zero(atomic64_t *v)
> -{
> - return atomic64_add_unless(v, 1, 0);
> -}
> -#endif
> -
> /*
> * atomic_{cmp,}xchg is required to have exactly the same ordering semantics as
> * {cmp,}xchg and the operations that return, so they need a full barrier.
Acked-by: Palmer Dabbelt <palmer@...ive.com>
Thanks!
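[For readers following the thread: the generic fallback the patch moves into <linux/atomic.h> reduces to `atomic64_add_unless(v, 1, 0)`, exactly the boilerplate being deleted from arch/riscv above. The sketch below is illustrative only: it models the behavior with C11 `<stdatomic.h>` in userspace rather than the kernel's atomic primitives, and the helper names mirror the kernel API by assumption.]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace stand-in for the kernel's atomic64_t (illustrative only). */
typedef _Atomic long long atomic64_t;

static long long atomic64_read(atomic64_t *v)
{
	return atomic_load(v);
}

/*
 * Add @a to @v unless @v equals @u; return true if the add happened.
 * This mimics the kernel's atomic64_add_unless() with a CAS loop.
 */
static bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
{
	long long c = atomic_load(v);

	while (c != u) {
		if (atomic_compare_exchange_weak(v, &c, c + a))
			return true;
		/* c was reloaded by the failed CAS; retry. */
	}
	return false;
}

/*
 * The trivial fallback this patch makes generic: increment @v unless
 * it is zero, returning true if the increment happened.
 */
static bool atomic64_inc_not_zero(atomic64_t *v)
{
	return atomic64_add_unless(v, 1, 0);
}
```

Since the fallback is defined purely in terms of `atomic64_add_unless()`, which <linux/atomic.h> always provides, each architecture's copy of this function was indeed redundant.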