Date:   Tue, 14 Aug 2018 13:42:23 +0000
From:   Vineet Gupta <Vineet.Gupta1@...opsys.com>
To:     Eugeniy Paltsev <Eugeniy.Paltsev@...opsys.com>,
        "linux-snps-arc@...ts.infradead.org" 
        <linux-snps-arc@...ts.infradead.org>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Alexey Brodkin" <Alexey.Brodkin@...opsys.com>,
        Peter Zijlstra <peterz@...radead.org>,
        "Will Deacon" <will.deacon@....com>,
        Boqun Feng <boqun.feng@...il.com>
Subject: Re: [PATCH] ARC: atomic64: fix atomic64_add_unless function

On 08/11/2018 09:09 AM, Eugeniy Paltsev wrote:
> The current implementation of the 'atomic64_add_unless' function
> (and hence 'atomic64_inc_not_zero') returns an incorrect value
> if the lower 32 bits of the compared 64-bit numbers are equal
> but the higher 32 bits aren't.
>
> In the following example atomic64_add_unless must return '1',
> but it actually returns '0':
> --------->8---------
> atomic64_t val = ATOMIC64_INIT(0x4444000000000000LL);
> int ret = atomic64_add_unless(&val, 1LL, 0LL);
> --------->8---------
>
> This happens because the 'mov %1, 0' in the branch delay slot
> writes '0' to the return variable regardless of the result of
> the higher 32-bit comparison.
>
> So fix it.
>
> NOTE:
>  this change was tested with atomic64_test.
>
> Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@...opsys.com>

LGTM. Curious, was this found in code review, or did you actually run into it?

Thx,
-Vineet

> ---
>  arch/arc/include/asm/atomic.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
> index 11859287c52a..e840cb1763b2 100644
> --- a/arch/arc/include/asm/atomic.h
> +++ b/arch/arc/include/asm/atomic.h
> @@ -578,11 +578,11 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
>  
>  	__asm__ __volatile__(
>  	"1:	llockd  %0, [%2]	\n"
> -	"	mov	%1, 1		\n"
>  	"	brne	%L0, %L4, 2f	# continue to add since v != u \n"
>  	"	breq.d	%H0, %H4, 3f	# return since v == u \n"
>  	"	mov	%1, 0		\n"
>  	"2:				\n"
> +	"	mov	%1, 1		\n"
>  	"	add.f   %L0, %L0, %L3	\n"
>  	"	adc     %H0, %H0, %H3	\n"
>  	"	scondd  %0, [%2]	\n"
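
For anyone following along, here is a minimal plain-C sketch of the
semantics atomic64_add_unless() is supposed to have
(atomic64_add_unless_sketch is a hypothetical name for illustration;
the real routine is atomic and lives behind the llockd/scondd loop
shown in the patch). It shows why the example from the commit message
has to return '1':

#include <stdio.h>

typedef struct { long long counter; } atomic64_t;

/* atomic64_add_unless_sketch(): add 'a' to '*v' unless '*v' equals
 * 'u'; return 1 if the add was performed, 0 otherwise. The real
 * kernel routine does this atomically; this sketch ignores
 * concurrency entirely and only shows the comparison logic.
 */
static int atomic64_add_unless_sketch(atomic64_t *v, long long a,
				      long long u)
{
	/* The add may be skipped only when the FULL 64-bit value
	 * matches 'u' -- both 32-bit halves. This is the comparison
	 * the patch fixes in the assembly version. */
	if (v->counter == u)
		return 0;	/* v == u: skip the add */

	v->counter += a;	/* v != u: perform the add */
	return 1;		/* report that the add happened */
}

int main(void)
{
	/* The example from the commit message: the low 32 bits of
	 * val and u are both zero, but the high 32 bits differ, so
	 * the add must happen and the result must be 1. */
	atomic64_t val = { 0x4444000000000000LL };
	int ret = atomic64_add_unless_sketch(&val, 1LL, 0LL);

	printf("ret = %d (expected 1)\n", ret);
	return 0;
}

The assembly fix achieves the same thing: the success flag is now set
on the add path at label 2, after both 32-bit halves have been
compared, instead of being set up front and then unconditionally
clobbered by the 'mov %1, 0' in the delay slot.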
