Message-ID: <20161115123337.GD12110@tardis.cn.ibm.com>
Date:   Tue, 15 Nov 2016 20:33:37 +0800
From:   Boqun Feng <boqun.feng@...il.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     gregkh@...uxfoundation.org, keescook@...omium.org,
        will.deacon@....com, elena.reshetova@...el.com, arnd@...db.de,
        tglx@...utronix.de, mingo@...nel.org, hpa@...or.com,
        dave@...gbits.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 7/7] kref: Implement using refcount_t

Hi Peter,

On Mon, Nov 14, 2016 at 06:39:53PM +0100, Peter Zijlstra wrote:
[...]
> +/*
> + * Similar to atomic_dec_and_test(), it will BUG on underflow and fail to
> + * decrement when saturated at UINT_MAX.
> + *
> + * Provides release memory ordering, such that prior loads and stores are done
> + * before a subsequent free.

I'm not sure this is correct: the RELEASE semantics apply only to the
STORE part of the cmpxchg, so they guarantee that memory operations
before the cmpxchg won't be reordered after the store, but they do not
prevent memory operations after the cmpxchg from being reordered
upwards, before the store. For example, on ARM64, the following code:

	WRITE_ONCE(x, 1)
	
	atomic_cmpxchg_release(&a, 1, 2);
	  r1 = ll(&a)
	  if (r1 == 1) {
	    sc_release(&a, 2);
	  }
	
	free()

could, I think, be reordered like this:

	atomic_cmpxchg_release(&a, 1, 2);
	  r1 = ll(&a)
	  if (r1 == 1) {
	    free()
	    WRITE_ONCE(x, 1)
	    sc_release(&a, 2);
	  }

Of course, we need to wait for Will to confirm this. But if this
reordering can happen, the WRITE_ONCE() to the object ends up after the
free(), i.e. exactly the use-after-free that comment is meant to rule
out. In that case, we'd better use smp_mb() + atomic_cmpxchg_relaxed()
here and in the other refcount_dec_and_*() variants.
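
To be concrete, here is what I have in mind for refcount_dec_and_test()
(a completely untested sketch; whether a single smp_mb() before the
cmpxchg is sufficient is exactly the kind of thing that needs Will's
confirmation):

	static inline __must_check
	bool refcount_dec_and_test(refcount_t *r)
	{
		unsigned int old, new, val = atomic_read(&r->refs);

		for (;;) {
			if (val == UINT_MAX)
				return false;

			new = val - 1;
			if (new > val)
				BUG(); /* underflow */

			/*
			 * Full barrier instead of relying on the RELEASE
			 * of the cmpxchg: orders all prior loads and
			 * stores before the decrement, whether or not the
			 * cmpxchg succeeds.
			 */
			smp_mb();
			old = atomic_cmpxchg_relaxed(&r->refs, val, new);
			if (old == val)
				break;

			val = old;
		}

		return !new;
	}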

That said, I think the real ordering guarantee we need here is that the
cmpxchg()s in two racing refcount_dec_and_*() calls pair with each
other, to ensure nobody observes a freed object inside a refcount
critical section. In that case, we need a full atomic_cmpxchg() here,
for ordering on both sides. Or maybe replace the initial atomic_read()
with smp_load_acquire().
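
For the smp_load_acquire() variant, the change would be just this
(again untested; note that smp_load_acquire() would have to operate on
the underlying counter directly, since it takes a plain variable rather
than an atomic_t):

	-	unsigned int old, new, val = atomic_read(&r->refs);
	+	/*
	+	 * Pairs with the RELEASE of the cmpxchg in another CPU's
	+	 * refcount_dec_and_*(): the previous holder's accesses to
	+	 * the object are ordered before anything we do with it.
	+	 */
	+	unsigned int old, new, val = smp_load_acquire(&r->refs.counter);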

Regards,
Boqun


> + */
> +static inline __must_check
> +bool refcount_dec_and_test(refcount_t *r)
> +{
> +	unsigned int old, new, val = atomic_read(&r->refs);
> +
> +	for (;;) {
> +		if (val == UINT_MAX)
> +			return false;
> +
> +		new = val - 1;
> +		if (new > val)
> +			BUG(); /* underflow */
> +
> +		old = atomic_cmpxchg_release(&r->refs, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +
> +	return !new;
> +}
> +
> +/*
> + * Similar to atomic_dec_and_mutex_lock(), it will BUG on underflow and fail
> + * to decrement when saturated at UINT_MAX.
> + *
> + * Provides release memory ordering, such that prior loads and stores are done
> + * before a subsequent free. This allows free() while holding the mutex.
> + */
> +static inline __must_check
> +bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)
> +{
> +	unsigned int old, new, val = atomic_read(&r->refs);
> +	bool locked = false;
> +
> +	for (;;) {
> +		if (val == UINT_MAX)
> +			return false;
> +
> +		if (val == 1 && !locked) {
> +			locked = true;
> +			mutex_lock(lock);
> +		}
> +
> +		new = val - 1;
> +		if (new > val) {
> +			if (locked)
> +				mutex_unlock(lock);
> +			BUG(); /* underflow */
> +		}
> +
> +		old = atomic_cmpxchg_release(&r->refs, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +
> +	if (new && locked)
> +		mutex_unlock(lock);
> +
> +	return !new;
> +}
> +
> +/*
> + * Similar to atomic_dec_and_lock(), it will BUG on underflow and fail
> + * to decrement when saturated at UINT_MAX.
> + *
> + * Provides release memory ordering, such that prior loads and stores are done
> + * before a subsequent free. This allows free() while holding the lock.
> + */
> +static inline __must_check
> +bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
> +{
> +	unsigned int old, new, val = atomic_read(&r->refs);
> +	bool locked = false;
> +
> +	for (;;) {
> +		if (val == UINT_MAX)
> +			return false;
> +
> +		if (val == 1 && !locked) {
> +			locked = true;
> +			spin_lock(lock);
> +		}
> +
> +		new = val - 1;
> +		if (new > val) {
> +			if (locked)
> +				spin_unlock(lock);
> +			BUG(); /* underflow */
> +		}
> +
> +		old = atomic_cmpxchg_release(&r->refs, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +
> +	if (new && locked)
> +		spin_unlock(lock);
> +
> +	return !new;
> +}
> +
> +#endif /* _LINUX_REFCOUNT_H */
> 
> 
