lists.openwall.net — Open Source and information security mailing list archives
 
Date:	Thu, 21 Mar 2013 10:34:54 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	Ming Lei <tom.leiming@...il.com>, Shaohua Li <shli@...nel.org>,
	Al Viro <viro@...iv.linux.org.uk>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: + atomic-improve-atomic_inc_unless_negative-atomic_dec_unless_positive.patch added to -mm tree

On Thu, Mar 21, 2013 at 06:08:27PM +0100, Oleg Nesterov wrote:
> On 03/17, Paul E. McKenney wrote:
> >
> > On Sat, Mar 16, 2013 at 07:30:22PM +0100, Oleg Nesterov wrote:
> > >
> > > > The rule is that if an atomic primitive returns non-void, then there is
> > > > a full memory barrier before and after.
> > >
> > > This case is documented...
> > >
> > > > This applies to primitives
> > > > returning boolean as well, with atomic_dec_and_test() setting this
> > > > precedent from what I can see.
> > >
> > > I don't think that is a fair comparison. Unlike atomic_add_unless(),
> > > atomic_dec_and_test() always changes the memory even when it "fails".
> > >
> > > If atomic_add_unless() returns 0, nothing was changed and if we add
> > > the barrier it is not clear what it should be paired with.
> > >
> > > But OK. I have to agree that "keep the rules simple" makes sense, so
> > > we should change atomic_add_unless() as well.
> >
> > Agreed!
> 
> OK... since nobody volunteered to make a patch, what do you think about
> the change below?
> 
> It should "fix" atomic_add_unless() (only on x86) and optimize
> atomic_inc/dec_unless.
> 
> With this change atomic_*_unless() may execute an unnecessary mb() after
> cmpxchg() fails, but I think this case is very unlikely.
> 
> And, in the likely case atomic_inc/dec_unless avoids the 1st cmpxchg()
> which in most cases just reads the memory for the next cmpxchg().

Thank you, Oleg!

Reviewed-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>

> Oleg.
> 
> --- x/arch/x86/include/asm/atomic.h
> +++ x/arch/x86/include/asm/atomic.h
> @@ -212,15 +212,12 @@ static inline int atomic_xchg(atomic_t *
>  static inline int __atomic_add_unless(atomic_t *v, int a, int u)
>  {
>  	int c, old;
> -	c = atomic_read(v);
> -	for (;;) {
> -		if (unlikely(c == (u)))
> -			break;
> -		old = atomic_cmpxchg((v), c, c + (a));
> +	for (c = atomic_read(v); c != u; c = old) {
> +		old = atomic_cmpxchg(v, c, c + a);
>  		if (likely(old == c))
> -			break;
> -		c = old;
> +			return c;
>  	}
> +	smp_mb();
>  	return c;
>  }
> 
> --- x/include/linux/atomic.h
> +++ x/include/linux/atomic.h
> @@ -64,11 +64,12 @@ static inline int atomic_inc_not_zero_hi
>  static inline int atomic_inc_unless_negative(atomic_t *p)
>  {
>  	int v, v1;
> -	for (v = 0; v >= 0; v = v1) {
> +	for (v = atomic_read(p); v >= 0; v = v1) {
>  		v1 = atomic_cmpxchg(p, v, v + 1);
>  		if (likely(v1 == v))
>  			return 1;
>  	}
> +	smp_mb();
>  	return 0;
>  }
>  #endif
> @@ -77,11 +78,12 @@ static inline int atomic_inc_unless_nega
>  static inline int atomic_dec_unless_positive(atomic_t *p)
>  {
>  	int v, v1;
> -	for (v = 0; v <= 0; v = v1) {
> +	for (v = atomic_read(p); v <= 0; v = v1) {
>  		v1 = atomic_cmpxchg(p, v, v - 1);
>  		if (likely(v1 == v))
>  			return 1;
>  	}
> +	smp_mb();
>  	return 0;
>  }
>  #endif
> 

