Message-ID: <mhng-91e430d3-6e37-41ea-974a-520587871f4e@palmer-si-x1c4>
Date:   Mon, 04 Jun 2018 16:17:21 -0700 (PDT)
From:   Palmer Dabbelt <palmer@...ive.com>
To:     mark.rutland@....com
CC:     linux-kernel@...r.kernel.org, mark.rutland@....com,
        boqun.feng@...il.com, Will Deacon <will.deacon@....com>,
        albert@...ive.com
Subject:     Re: [PATCHv2 11/16] atomics/riscv: define atomic64_fetch_add_unless()

On Tue, 29 May 2018 08:43:41 PDT (-0700), mark.rutland@....com wrote:
> As a step towards unifying the atomic/atomic64/atomic_long APIs, this
> patch converts the arch/riscv implementation of atomic64_add_unless()
> into an implementation of atomic64_fetch_add_unless().
>
> A wrapper in <linux/atomic.h> will build atomic_add_unless() atop this,
> provided it is given a preprocessor definition.
>
> No functional change is intended as a result of this patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@....com>
> Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Cc: Boqun Feng <boqun.feng@...il.com>
> Cc: Will Deacon <will.deacon@....com>
> Cc: Palmer Dabbelt <palmer@...ive.com>
> Cc: Albert Ou <albert@...ive.com>
> ---
>  arch/riscv/include/asm/atomic.h | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> index 5f161daefcd2..d959bbaaad41 100644
> --- a/arch/riscv/include/asm/atomic.h
> +++ b/arch/riscv/include/asm/atomic.h
> @@ -352,7 +352,7 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  #define atomic_fetch_add_unless atomic_fetch_add_unless
>
>  #ifndef CONFIG_GENERIC_ATOMIC64
> -static __always_inline long __atomic64_add_unless(atomic64_t *v, long a, long u)
> +static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
>  {
>         long prev, rc;
>
> @@ -369,11 +369,7 @@ static __always_inline long __atomic64_add_unless(atomic64_t *v, long a, long u)
>  		: "memory");
>  	return prev;
>  }
> -
> -static __always_inline int atomic64_add_unless(atomic64_t *v, long a, long u)
> -{
> -	return __atomic64_add_unless(v, a, u) != u;
> -}
> +#define atomic64_fetch_add_unless atomic64_fetch_add_unless
>  #endif
>
>  /*

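To spell out the mechanism for anyone following along: once the arch exports
the fetch variant (and defines the preprocessor symbol), the common code can
supply both the fallback and the boolean wrapper.  A simplified sketch of the
<linux/atomic.h> side, not the exact upstream code:

#ifndef atomic64_fetch_add_unless
/*
 * Generic fallback, only used when the architecture does not provide
 * its own atomic64_fetch_add_unless() -- i.e. does not define the
 * preprocessor symbol, as the RISC-V patch above now does.
 */
static inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
{
	long c = atomic64_read(v);

	do {
		if (c == u)
			break;
	} while (!atomic64_try_cmpxchg(v, &c, c + a));

	return c;	/* old value; caller compares against @u */
}
#endif

/*
 * atomic64_add_unless() is then built on top for every architecture:
 * it returns true iff the add actually happened, i.e. the old value
 * was not @u -- the same "prev != u" test the per-arch wrapper
 * removed by this patch used to do.
 */
static inline bool atomic64_add_unless(atomic64_t *v, long a, long u)
{
	return atomic64_fetch_add_unless(v, a, u) != u;
}

That makes it clear why the per-arch wrapper can go away: the boolean form
is derivable in common code once the fetch form is exported.
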
I vaguely remember there being a reason we did this in such an odd fashion,
but I can't remember what it was any more.  Assuming this still builds, feel
free to add an

Acked-by: Palmer Dabbelt <palmer@...ive.com>

Thanks!
