Message-ID: <20201116222750.nmfyxnj6jvd3rww4@ltop.local>
Date: Mon, 16 Nov 2020 23:27:50 +0100
From: Luc Van Oostenryck <luc.vanoostenryck@...il.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
linux-sparse@...r.kernel.org
Subject: Re: [PATCH net-next] net: add annotation for sock_{lock,unlock}_fast
On Mon, Nov 16, 2020 at 11:36:39AM +0100, Paolo Abeni wrote:
> The static checker is fooled by the non-static locking scheme
> implemented by the mentioned helpers.
> Let's make its life easier by adding some unconditional annotations,
> so that sparse now interprets the helpers as a plain spinlock.
>
> Signed-off-by: Paolo Abeni <pabeni@...hat.com>
> ---
> include/net/sock.h | 9 ++++++---
> net/core/sock.c | 3 ++-
> 2 files changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/include/net/sock.h b/include/net/sock.h
> index 1d29aeae74fd..60d321c6b5a5 100644
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -1595,7 +1595,8 @@ void release_sock(struct sock *sk);
> SINGLE_DEPTH_NESTING)
> #define bh_unlock_sock(__sk) spin_unlock(&((__sk)->sk_lock.slock))
>
> -bool lock_sock_fast(struct sock *sk);
> +bool lock_sock_fast(struct sock *sk) __acquires(&sk->sk_lock.slock);
> +
Good.
> /**
> * unlock_sock_fast - complement of lock_sock_fast
> * @sk: socket
> @@ -1606,10 +1607,12 @@ bool lock_sock_fast(struct sock *sk);
> */
> static inline void unlock_sock_fast(struct sock *sk, bool slow)
> {
> - if (slow)
> + if (slow) {
> release_sock(sk);
> - else
> + __release(&sk->sk_lock.slock);
The correct solution would be to annotate the declaration of
release_sock() with '__releases(&sk->sk_lock.slock)'.
> /* Used by processes to "lock" a socket state, so that
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 727ea1cc633c..9badbe7bb4e4 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -3078,7 +3078,7 @@ EXPORT_SYMBOL(release_sock);
> *
> * sk_lock.slock unlocked, owned = 1, BH enabled
> */
> -bool lock_sock_fast(struct sock *sk)
> +bool lock_sock_fast(struct sock *sk) __acquires(&sk->sk_lock.slock)
> {
> might_sleep();
> spin_lock_bh(&sk->sk_lock.slock);
> @@ -3096,6 +3096,7 @@ bool lock_sock_fast(struct sock *sk)
> * The sk_lock has mutex_lock() semantics here:
> */
> mutex_acquire(&sk->sk_lock.dep_map, 0, 0, _RET_IP_);
> + __acquire(&sk->sk_lock.slock);
OK, given that the mutexes are not annotated.
-- Luc