Message-ID: <CAGXu5j++hLfOucZB__hxoYYhCTo_+a+PPYMLEtBwW=uYKiCwUw@mail.gmail.com>
Date: Mon, 14 Nov 2016 12:35:48 -0800
From: Kees Cook <keescook@...omium.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Greg KH <gregkh@...uxfoundation.org>,
Will Deacon <will.deacon@....com>,
"Reshetova, Elena" <elena.reshetova@...el.com>,
Arnd Bergmann <arnd@...db.de>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
David Windsor <dave@...gbits.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC][PATCH 5/7] kref: Implement kref_put_lock()
On Mon, Nov 14, 2016 at 9:39 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> Because home-rolling your own is _awesome_, stop doing it. Provide
> kref_put_lock(), just like kref_put_mutex() but for a spinlock.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> include/linux/kref.h | 21 +++++++++++++++------
> net/sunrpc/svcauth.c | 15 ++++++++++-----
> 2 files changed, 25 insertions(+), 11 deletions(-)
>
> --- a/include/linux/kref.h
> +++ b/include/linux/kref.h
> @@ -86,12 +86,21 @@ static inline int kref_put_mutex(struct
>  				 struct mutex *lock)
>  {
>  	WARN_ON(release == NULL);
This WARN_ON makes sense, yes, though it seems like it should be dealt
with differently. If it's NULL, we'll just Oops when we call release()
later... Seems like this should saturate the kref or do something else
similar.
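
Maybe something like this instead (totally untested sketch, and pinning
the count at INT_MAX is just a stand-in for whatever a real saturation
value would be):

	if (WARN_ON(release == NULL)) {
		/*
		 * Pin the count high so the object leaks instead of
		 * being freed or calling through a NULL pointer.
		 */
		atomic_set(&kref->refcount, INT_MAX);
		return 0;
	}
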
> -	if (unlikely(!atomic_add_unless(&kref->refcount, -1, 1))) {
> -		mutex_lock(lock);
> -		if (unlikely(!atomic_dec_and_test(&kref->refcount))) {
> -			mutex_unlock(lock);
> -			return 0;
> -		}
> +
> +	if (atomic_dec_and_mutex_lock(&kref->refcount, lock)) {
> +		release(kref);
> +		return 1;
> +	}
> +	return 0;
> +}
> +
> +static inline int kref_put_lock(struct kref *kref,
> +				void (*release)(struct kref *kref),
> +				spinlock_t *lock)
> +{
> +	WARN_ON(release == NULL);
> +
> +	if (atomic_dec_and_lock(&kref->refcount, lock)) {
>  		release(kref);
>  		return 1;
>  	}
> --- a/net/sunrpc/svcauth.c
> +++ b/net/sunrpc/svcauth.c
> @@ -127,13 +127,18 @@ static struct hlist_head auth_domain_tab
>  static spinlock_t auth_domain_lock =
>  	__SPIN_LOCK_UNLOCKED(auth_domain_lock);
>  
> +static void auth_domain_release(struct kref *kref)
> +{
> +	struct auth_domain *dom = container_of(kref, struct auth_domain, ref);
> +
> +	hlist_del(&dom->hash);
> +	dom->flavour->domain_release(dom);
> +	spin_unlock(&auth_domain_lock);
> +}
> +
>  void auth_domain_put(struct auth_domain *dom)
>  {
> -	if (atomic_dec_and_lock(&dom->ref.refcount, &auth_domain_lock)) {
> -		hlist_del(&dom->hash);
> -		dom->flavour->domain_release(dom);
> -		spin_unlock(&auth_domain_lock);
> -	}
> +	kref_put_lock(&dom->ref, auth_domain_release, &auth_domain_lock);
>  }
>  EXPORT_SYMBOL_GPL(auth_domain_put);
>
>
>
--
Kees Cook
Nexus Security