Message-ID: <20241219172040.GA25368@willie-the-truck>
Date: Thu, 19 Dec 2024 17:20:41 +0000
From: Will Deacon <will@...nel.org>
To: Antonio Quartulli <antonio@...nvpn.net>
Cc: netdev@...r.kernel.org, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Donald Hunter <donald.hunter@...il.com>,
Shuah Khan <shuah@...nel.org>, sd@...asysnail.net,
ryazanov.s.a@...il.com, Andrew Lunn <andrew+netdev@...n.ch>,
Simon Horman <horms@...nel.org>, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org, Xiao Liang <shaw.leon@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Boqun Feng <boqun.feng@...il.com>,
Mark Rutland <mark.rutland@....com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH net-next v16 06/26] kref/refcount: implement
kref_put_sock()
On Thu, Dec 19, 2024 at 02:42:00AM +0100, Antonio Quartulli wrote:
> Similarly to kref_put_lock(), decrease the refcount
> and call bh_lock_sock(sk) if it reached 0.
>
> This kref_put variant comes in handy when there is a
> need to atomically clean up any socket context while
> dropping the refcount to 0.
>
> Cc: Will Deacon <will@...nel.org> (maintainer:ATOMIC INFRASTRUCTURE)
> Cc: Peter Zijlstra <peterz@...radead.org> (maintainer:ATOMIC INFRASTRUCTURE)
> Cc: Boqun Feng <boqun.feng@...il.com> (reviewer:ATOMIC INFRASTRUCTURE)
> Cc: Mark Rutland <mark.rutland@....com> (reviewer:ATOMIC INFRASTRUCTURE)
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Signed-off-by: Antonio Quartulli <antonio@...nvpn.net>
> ---
> include/linux/kref.h | 11 +++++++++++
> include/linux/refcount.h | 3 +++
> lib/refcount.c | 32 ++++++++++++++++++++++++++++++++
[...]
> diff --git a/lib/refcount.c b/lib/refcount.c
> index a207a8f22b3ca35890671e51c480266d89e4d8d6..76a728581aa49a41ef13f5141f3f2e9816d72e75 100644
> --- a/lib/refcount.c
> +++ b/lib/refcount.c
> @@ -7,6 +7,7 @@
> #include <linux/refcount.h>
> #include <linux/spinlock.h>
> #include <linux/bug.h>
> +#include <net/sock.h>
>
> #define REFCOUNT_WARN(str) WARN_ONCE(1, "refcount_t: " str ".\n")
>
> @@ -156,6 +157,37 @@ bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
> }
> EXPORT_SYMBOL(refcount_dec_and_lock);
>
> +/**
> + * refcount_dec_and_lock_sock - return holding locked sock if able to decrement
> + * refcount to 0
> + * @r: the refcount
> + * @sock: the sock to be locked
> + *
> + * Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to
> + * decrement when saturated at REFCOUNT_SATURATED.
> + *
> + * Provides release memory ordering, such that prior loads and stores are done
> + * before, and provides a control dependency such that free() must come after.
> + * See the comment on top.
> + *
> + * Return: true, holding the sock lock, if able to decrement the refcount
> + * to 0; false otherwise
> + */
> +bool refcount_dec_and_lock_sock(refcount_t *r, struct sock *sock)
> +{
> + if (refcount_dec_not_one(r))
> + return false;
> +
> + bh_lock_sock(sock);
> + if (!refcount_dec_and_test(r)) {
> + bh_unlock_sock(sock);
> + return false;
> + }
> +
> + return true;
> +}
> +EXPORT_SYMBOL(refcount_dec_and_lock_sock);
It feels a little out-of-place to me having socket-specific functions in
lib/refcount.c. I'd suggest sticking this somewhere else _or_ maybe we
could generate this pattern of code:
#define REFCOUNT_DEC_AND_LOCKNAME(lockname, locktype, lock, unlock) \
static __always_inline \
bool refcount_dec_and_lock_##lockname(refcount_t *r, locktype *l) \
{ \
...
inside a generator macro in refcount.h, like we do for seqlocks in
linux/seqlock.h. The downside of that is the cost of inlining.
Will