Message-Id: <20161215045810.3065-1-Jason@zx2c4.com>
Date: Thu, 15 Dec 2016 05:58:10 +0100
From: "Jason A. Donenfeld" <Jason@...c4.com>
To: Dave Airlie <airlied@...hat.com>, linux-kernel@...r.kernel.org,
Thomas Hellstrom <thellstrom@...are.com>
Cc: "Jason A. Donenfeld" <Jason@...c4.com>
Subject: [PATCH] kref: prefer atomic_inc_not_zero to atomic_add_unless
On most platforms, atomic_inc_not_zero is simply a fallback define:

#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)

so on those platforms this patch is a functional no-op. However, PPC
provides an explicit definition of atomic_inc_not_zero, with its own
assembly that is slightly better optimized than its atomic_add_unless.
This patch therefore changes kref to use atomic_inc_not_zero instead,
to the benefit of PPC and any future platform that provides an
explicit implementation.
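
(For comparison, the generic atomic_add_unless fallback is an
open-coded cmpxchg loop, roughly like the simplified sketch below;
a dedicated inc_not_zero implementation can avoid going through the
fully general add path. This is an illustration, not the exact code
of any particular arch.)

static inline int add_unless_sketch(atomic_t *v, int a, int u)
{
	int c = atomic_read(v);

	while (c != u) {
		int old = atomic_cmpxchg(v, c, c + a);

		if (old == c)
			return 1;	/* CAS succeeded: value was added */
		c = old;		/* lost a race: retry with new value */
	}
	return 0;			/* excluded value reached: no add */
}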
This also brings this use of kref in line with a verbatim reading of
the examples in section 2.4 of Paul McKenney's paper [1], "Atomic
Counting With Check and Release Memory Barrier", which use
atomic_inc_not_zero.
[1] http://open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2167.pdf
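
For reference, the pattern described there is roughly the following;
struct obj and its get/put functions are a hypothetical paraphrase,
not the paper's verbatim code:

struct obj {
	atomic_t refcount;
	/* ... payload ... */
};

/* Take a reference; fails iff the count already dropped to zero. */
struct obj *obj_get(struct obj *obj)
{
	if (!atomic_inc_not_zero(&obj->refcount))
		return NULL;	/* teardown has begun; do not touch */
	return obj;
}

/* Drop a reference; the final put frees the object. */
void obj_put(struct obj *obj)
{
	if (atomic_dec_and_test(&obj->refcount))
		kfree(obj);	/* atomic_dec_and_test is fully ordered */
}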
Signed-off-by: Jason A. Donenfeld <Jason@...c4.com>
Reviewed-by: Thomas Hellstrom <thellstrom@...are.com>
---
This was reviewed favorably 14 months ago but never picked up.
I'm resubmitting it now in hopes that you can finally queue it
up for 4.10.
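
For context (not part of the patch): the caller pattern that
kref_get_unless_zero exists for is a lookup structure where the last
reference may be dropped concurrently, along these lines; my_obj,
table_lock, and table_idr are hypothetical names:

struct my_obj *my_obj_lookup(int id)
{
	struct my_obj *obj;

	mutex_lock(&table_lock);
	obj = idr_find(&table_idr, id);
	/* Skip an object whose refcount already hit zero. */
	if (obj && !kref_get_unless_zero(&obj->kref))
		obj = NULL;
	mutex_unlock(&table_lock);
	return obj;
}
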
include/linux/kref.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/kref.h b/include/linux/kref.h
index e15828fd71f1..62f0a84ae94e 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -133,6 +133,6 @@ static inline int kref_put_mutex(struct kref *kref,
*/
static inline int __must_check kref_get_unless_zero(struct kref *kref)
{
- return atomic_add_unless(&kref->refcount, 1, 0);
+ return atomic_inc_not_zero(&kref->refcount);
}
#endif /* _KREF_H_ */
--
2.11.0