Message-ID: <d73b3f08-a01e-95db-8beb-a11b33d834e8@gmail.com>
Date:   Tue, 4 Feb 2020 09:22:57 -0800
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Qian Cai <cai@....pw>, davem@...emloft.net
Cc:     kuba@...nel.org, elver@...gle.com, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] skbuff: fix a data race in skb_queue_len()

On 2/4/20 8:15 AM, Qian Cai wrote:
> sk_buff.qlen can be accessed concurrently, as noticed by KCSAN.
> 
> Since only the read side is lockless, load tearing could introduce a
> logic bug in unix_recvq_full(). Fix it by adding lockless variants of
> skb_queue_len() and unix_recvq_full(), pairing READ_ONCE() on the read
> with WRITE_ONCE() on the write, similar to commit d7d16a89350a ("net:
> add skb_queue_empty_lockless()").
> 
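For anyone following along: the pattern being applied is the usual
annotation for intentional lockless access. The writer still updates
qlen under the queue lock, but wraps the store in WRITE_ONCE() so that
a concurrent READ_ONCE() load cannot observe a torn value. A minimal
sketch, using a hypothetical struct rather than the real skbuff code:

/* Hypothetical illustration, not from include/linux/skbuff.h. */
struct toy_queue {
	spinlock_t	lock;
	__u32		qlen;
};

/* Writer side: caller holds q->lock, but the store is still
 * annotated because lockless readers exist.
 */
static void toy_queue_len_inc(struct toy_queue *q)
{
	WRITE_ONCE(q->qlen, q->qlen + 1);
}

/* Reader side: may run without q->lock held. */
static __u32 toy_queue_len_lockless(const struct toy_queue *q)
{
	return READ_ONCE(q->qlen);
}
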
> Signed-off-by: Qian Cai <cai@....pw>
> ---
> 
> v2: add lockless variant helpers and WRITE_ONCE().
> 
>  include/linux/skbuff.h | 14 +++++++++++++-
>  net/unix/af_unix.c     |  9 ++++++++-
>  2 files changed, 21 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 3d13a4b717e9..de5eade20e52 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -1822,6 +1822,18 @@ static inline __u32 skb_queue_len(const struct sk_buff_head *list_)
>  }
>  
>  /**
> + *	skb_queue_len	- get queue length

Please fix the kernel-doc to use the exact function name.
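
i.e. the kernel-doc line above should read:

 *	skb_queue_len_lockless	- get queue length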

> + *	@list_: list to measure
> + *
> + *	Return the length of an &sk_buff queue.
> + *	This variant can be used in lockless contexts.
> + */
> +static inline __u32 skb_queue_len_lockless(const struct sk_buff_head *list_)
> +{
> +	return READ_ONCE(list_->qlen);
> +}
> +
> +/**
>   *	__skb_queue_head_init - initialize non-spinlock portions of sk_buff_head
>   *	@list: queue to initialize
>   *
> @@ -2026,7 +2038,7 @@ static inline void __skb_unlink(struct sk_buff *skb, struct sk_buff_head *list)
>  {
>  	struct sk_buff *next, *prev;
>  
> -	list->qlen--;
> +	WRITE_ONCE(list->qlen, list->qlen - 1);
>  	next	   = skb->next;
>  	prev	   = skb->prev;
>  	skb->next  = skb->prev = NULL;
> diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
> index 321af97c7bbe..349e7fbfbc67 100644
> --- a/net/unix/af_unix.c
> +++ b/net/unix/af_unix.c
> @@ -194,6 +194,12 @@ static inline int unix_recvq_full(struct sock const *sk)
>  	return skb_queue_len(&sk->sk_receive_queue) > sk->sk_max_ack_backlog;
>  }
>  
> +static inline int unix_recvq_full_lockless(struct sock const *sk)

The const qualifier is misplaced (kernel style puts it before the type),
and the helper might as well return bool. It should be:

static inline bool unix_recvq_full_lockless(const struct sock *sk)

> +{
> +	return skb_queue_len_lockless(&sk->sk_receive_queue) >
> +		sk->sk_max_ack_backlog;

You probably also need a READ_ONCE() for sk->sk_max_ack_backlog.
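
Something like this (an untested sketch, folding in the const/bool fix
from above):

static inline bool unix_recvq_full_lockless(const struct sock *sk)
{
	return skb_queue_len_lockless(&sk->sk_receive_queue) >
	       READ_ONCE(sk->sk_max_ack_backlog);
}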

It is only a matter of time before syzbot finds out how to trigger this
race.

Since you added a nice unix_recvq_full_lockless() helper, let's make it
right.

Thanks.
