Message-Id: <20220630135934.1799248-3-aahringo@redhat.com>
Date: Thu, 30 Jun 2022 09:59:34 -0400
From: Alexander Aring <aahringo@...hat.com>
To: will@...nel.org
Cc: peterz@...radead.org, boqun.feng@...il.com, mark.rutland@....com,
thunder.leizhen@...wei.com, jacob.e.keller@...el.com,
akpm@...ux-foundation.org, linux-sparse@...r.kernel.org,
cluster-devel@...hat.com, luc.vanoostenryck@...il.com,
torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
aahringo@...hat.com
Subject: [RFC 2/2] kref: move kref_put_lock() callback to caller
This patch moves the release callback invocation from the kref_put_lock()
implementation to its caller. Since refcount_dec_and_lock() now uses
__cond_lock(), we get the following warning for e.g. net/sunrpc/svcauth.c:

warning: context imbalance in 'auth_domain_put' - wrong count at exit
The warning appears because, before the __cond_lock() change, sparse was
apparently able to detect the correct locking behaviour. Now it thinks
there is an additional lock acquire inside kref_put_lock() that is never
released there. However, the __cond_lock() instrumentation in
refcount_dec_and_lock() makes it possible to avoid sparse warnings at the
call site by evaluating the return value and unlocking the lock only when
it was conditionally taken.
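
For illustration, this is the call-site pattern that __cond_lock() lets
sparse follow when refcount_dec_and_lock() is used directly (hypothetical
'obj' type and fields, not taken from this patch):

	struct obj {
		refcount_t ref;
		spinlock_t lock;
		struct list_head list;
	};

	static void obj_put(struct obj *obj)
	{
		if (refcount_dec_and_lock(&obj->ref, &obj->lock)) {
			/* refcount hit zero; sparse knows obj->lock is held */
			list_del(&obj->list);
			spin_unlock(&obj->lock);
			kfree(obj);
		}
	}

Sparse pairs the conditional acquire signalled by __cond_lock() with the
spin_unlock() in the taken branch, so the context stays balanced.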
This patch solves the problem by invoking the passed release callback
based on the return value of kref_put_lock() at the call site, instead of
inside kref_put_lock(). Evaluating the return value of
refcount_dec_and_lock() this way is, surprisingly, something sparse can
recognize. It seems only one of the two approaches is possible, so this
patch changes kref_put_lock() to work the same way as
refcount_dec_and_lock() does with __cond_lock().
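
As a sketch of what a call site looks like after this change (hypothetical
'foo' type and foo_lock, not from this patch; the release callback keeps
the existing kref_put_lock() contract and drops the lock itself):

	static DEFINE_SPINLOCK(foo_lock);

	struct foo {
		struct kref ref;
	};

	static void foo_release(struct kref *kref)
		__releases(&foo_lock)
	{
		struct foo *foo = container_of(kref, struct foo, ref);

		spin_unlock(&foo_lock);	/* called with foo_lock held */
		kfree(foo);
	}

	static void foo_put(struct foo *foo)
	{
		kref_put_lock(&foo->ref, foo_release, &foo_lock);
	}

Because kref_put_lock() is now a macro, the conditional release(kref)
call expands directly at the call site, where sparse can match the
__cond_lock() acquire from refcount_dec_and_lock() against the unlock in
foo_release().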
Signed-off-by: Alexander Aring <aahringo@...hat.com>
---
include/linux/kref.h | 24 ++++++++----------------
1 file changed, 8 insertions(+), 16 deletions(-)
diff --git a/include/linux/kref.h b/include/linux/kref.h
index d32e21a2538c..a70d45940d55 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -68,27 +68,19 @@ static inline int kref_put(struct kref *kref, void (*release)(struct kref *kref)
return 0;
}
-static inline int kref_put_mutex(struct kref *kref,
- void (*release)(struct kref *kref),
- struct mutex *lock)
+static inline bool raw_kref_put_mutex(struct kref *kref, struct mutex *lock)
{
- if (refcount_dec_and_mutex_lock(&kref->refcount, lock)) {
- release(kref);
- return 1;
- }
- return 0;
+ return refcount_dec_and_mutex_lock(&kref->refcount, lock);
}
+#define kref_put_mutex(kref, release, lock) \
+ ((raw_kref_put_mutex(kref, lock)) ? ({ release(kref); 1; }) : 0)
-static inline int kref_put_lock(struct kref *kref,
- void (*release)(struct kref *kref),
- spinlock_t *lock)
+static inline bool raw_kref_put_lock(struct kref *kref, spinlock_t *lock)
{
- if (refcount_dec_and_lock(&kref->refcount, lock)) {
- release(kref);
- return 1;
- }
- return 0;
+ return refcount_dec_and_lock(&kref->refcount, lock);
}
+#define kref_put_lock(kref, release, lock) \
+ ((raw_kref_put_lock(kref, lock)) ? ({ release(kref); 1; }) : 0)
/**
* kref_get_unless_zero - Increment refcount for object unless it is zero.
--
2.31.1