Date:   Tue, 15 Nov 2016 11:46:08 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     gregkh@...uxfoundation.org, keescook@...omium.org,
        will.deacon@....com, elena.reshetova@...el.com, arnd@...db.de,
        tglx@...utronix.de, hpa@...or.com, dave@...gbits.org,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 7/7] kref: Implement using refcount_t

On Tue, Nov 15, 2016 at 11:03:59AM +0100, Ingo Molnar wrote:
> > Should I also make a CONFIG knob that implements refcount_t with the
> > 'normal' atomic_t primitives?
> 
> I'd suggest doing the saturation/safe-wrap semantics only for now (i.e. the 
> current patch, split into two perhaps), and reconsider if there's any complaints?
> 
> > And possibly another knob to toggle the BUG()s into WARN()s. With the
> > full saturation semantics WARN() is a lot safer and will not corrupt
> > kernel state as much.
> 
> I'd suggest changing it to a WARN() straight away, no extra knobs.

OK, a little like so then? Note that the overflow tests went away
because increments guarantee we saturate before we overflow.
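
For reference, a quick user-space model of that argument (illustrative only,
not part of the patch; refcount_model_inc() and the C11 atomics are stand-ins
for the kernel primitives): starting below UINT_MAX, a single increment can
at most land on UINT_MAX, and once saturated the loop bails out before
attempting another increment, so no separate overflow test is needed.

#include <limits.h>
#include <stdatomic.h>

/* Stand-in for the cmpxchg loop in refcount_inc(). */
static void refcount_model_inc(atomic_uint *refs)
{
	unsigned int val = atomic_load(refs);

	for (;;) {
		if (val == UINT_MAX)	/* saturated; never moves past this */
			return;

		/* val < UINT_MAX, so val + 1 cannot wrap */
		if (atomic_compare_exchange_strong(refs, &val, val + 1))
			return;
		/* cmpxchg failed; val was reloaded with the current value, retry */
	}
}

int main(void)
{
	atomic_uint refs = UINT_MAX - 1;

	refcount_model_inc(&refs);	/* UINT_MAX - 1 -> UINT_MAX */
	refcount_model_inc(&refs);	/* saturated: stays at UINT_MAX */
	return atomic_load(&refs) == UINT_MAX ? 0 : 1;
}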

---
Subject: refcount_t: A special purpose refcount type
From: Peter Zijlstra <peterz@...radead.org>
Date: Mon Nov 14 18:06:19 CET 2016

Provide refcount_t, an atomic_t-like primitive built just for
refcounting.

It provides saturation semantics such that overflow becomes impossible
and thereby 'spurious' use-after-free is avoided.

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 include/linux/refcount.h |  229 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 229 insertions(+)

--- /dev/null
+++ b/include/linux/refcount.h
@@ -0,0 +1,229 @@
+#ifndef _LINUX_REFCOUNT_H
+#define _LINUX_REFCOUNT_H
+
+/*
+ * Variant of atomic_t specialized for refcounting.
+ *
+ * The interface matches the atomic_t interface (to aid in porting) but only
+ * provides the few functions one should use for refcounting.
+ *
+ * It differs in that the counter saturates at UINT_MAX and will not move once
+ * there. This avoids wrapping the counter and causing 'spurious'
+ * use-after-free issues.
+ *
+ * Memory ordering rules are slightly relaxed wrt regular atomic_t functions
+ * and provide only what is strictly required for refcounts.
+ *
+ * The increments are fully relaxed; these will not provide ordering. The
+ * rationale is that whatever is used to obtain the object we're increasing
+ * the reference count on will provide the ordering. For locked data
+ * structures it's the lock acquire; for RCU/lockless data structures it's
+ * the dependent load.
+ *
+ * Do note that inc_not_zero() provides a control dependency which will order
+ * future stores against the increment; this ensures we'll never modify the
+ * object if we did not in fact acquire a reference.
+ *
+ * The decrements will provide release order, such that all the prior loads and
+ * stores will be issued before we proceed with freeing the object.
+ *
+ * Note: the implementation relies on single increments only; additions
+ *       bigger than 1 need explicit overflow -> saturation logic.
+ *
+ */
+
+#include <linux/atomic.h>
+#include <linux/bug.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
+
+typedef struct refcount_struct {
+	atomic_t refs;
+} refcount_t;
+
+#define REFCOUNT_INIT(n)	{ .refs = ATOMIC_INIT(n), }
+
+static inline void refcount_set(refcount_t *r, int n)
+{
+	atomic_set(&r->refs, n);
+}
+
+static inline unsigned int refcount_read(const refcount_t *r)
+{
+	return atomic_read(&r->refs);
+}
+
+/*
+ * Similar to atomic_inc(), will saturate at UINT_MAX and WARN.
+ *
+ * Provides no memory ordering; it is assumed the caller already has a
+ * reference on the object, and will WARN when this is not so.
+ */
+static inline void refcount_inc(refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		WARN(!val, "refcount_t: increment on 0; use-after-free.\n");
+
+		if (unlikely(val == UINT_MAX))
+			return;
+
+		new = val + 1;
+		old = atomic_cmpxchg_relaxed(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	WARN(new == UINT_MAX, "refcount_t: saturated; leaking memory.\n");
+}
+
+/*
+ * Similar to atomic_inc_not_zero(), will saturate at UINT_MAX and WARN.
+ *
+ * Provides no memory ordering; it is assumed the caller has guaranteed the
+ * object memory to be stable (RCU, etc.). It does provide a control dependency
+ * and thereby orders future stores.
+ */
+static inline __must_check
+bool refcount_inc_not_zero(refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		if (!val)
+			return false;
+
+		if (unlikely(val == UINT_MAX))
+			return true;
+
+		new = val + 1;
+		old = atomic_cmpxchg_relaxed(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	WARN(new == UINT_MAX, "refcount_t: saturated; leaking memory.\n");
+
+	return true;
+}
+
+/*
+ * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to
+ * decrement when saturated at UINT_MAX.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before a subsequent free.
+ */
+static inline __must_check
+bool refcount_dec_and_test(refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		if (val == UINT_MAX)
+			return false;
+
+		new = val - 1;
+		if (WARN(new > val, "refcount_t: underflow; use-after-free.\n"))
+			return false;
+
+		old = atomic_cmpxchg_release(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	return !new;
+}
+
+/*
+ * Similar to atomic_dec_and_mutex_lock(), it will WARN on underflow and fail
+ * to decrement when saturated at UINT_MAX.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before a subsequent free. This allows free() while holding the mutex.
+ */
+static inline __must_check
+bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+	bool locked = false;
+
+	for (;;) {
+		if (val == UINT_MAX)
+			return false;
+
+		if (val == 1 && !locked) {
+			locked = true;
+			mutex_lock(lock);
+		}
+
+		new = val - 1;
+		if (WARN(new > val, "refcount_t: underflow; use-after-free.\n")) {
+			if (locked)
+				mutex_unlock(lock);
+			return false;
+		}
+
+		old = atomic_cmpxchg_release(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	if (new && locked)
+		mutex_unlock(lock);
+
+	return !new;
+}
+
+/*
+ * Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to
+ * decrement when saturated at UINT_MAX.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before a subsequent free. This allows free() while holding the lock.
+ */
+static inline __must_check
+bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+	bool locked = false;
+
+	for (;;) {
+		if (val == UINT_MAX)
+			return false;
+
+		if (val == 1 && !locked) {
+			locked = true;
+			spin_lock(lock);
+		}
+
+		new = val - 1;
+		if (WARN(new > val, "refcount_t: underflow; use-after-free.\n")) {
+			if (locked)
+				spin_unlock(lock);
+			return false;
+		}
+
+		old = atomic_cmpxchg_release(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	if (new && locked)
+		spin_unlock(lock);
+
+	return !new;
+}
+
+#endif /* _LINUX_REFCOUNT_H */
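
For context, roughly how a refcounted object would sit on top of these
primitives (an illustrative sketch only, not part of this patch; struct foo
and the foo_*() helpers are invented names):

#include <linux/refcount.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	refcount_t refs;
	struct rcu_head rcu;
	/* payload */
};

/* Take an extra reference; the caller must already hold one. */
static void foo_get(struct foo *f)
{
	refcount_inc(&f->refs);
}

/* Drop a reference; release ordering makes prior stores visible
 * before the object is freed. */
static void foo_put(struct foo *f)
{
	if (refcount_dec_and_test(&f->refs))
		kfree_rcu(f, rcu);
}

/* For a lockless lookup, f comes from an RCU-protected structure and is
 * dereferenced under rcu_read_lock(); the object memory stays stable and
 * inc_not_zero() refuses objects already on their way to being freed. */
static struct foo *foo_tryget(struct foo *f)
{
	if (f && !refcount_inc_not_zero(&f->refs))
		return NULL;
	return f;
}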
