Open Source and information security mailing list archives
 
Date:   Thu, 14 Sep 2017 15:53:49 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Willem de Bruijn <willemb@...gle.com>
Cc:     netdev@...r.kernel.org, davem@...emloft.net, nixiaoming@...wei.com
Subject: Re: [PATCH net] packet: hold bind lock when rebinding to fanout hook

On Thu, 2017-09-14 at 17:14 -0400, Willem de Bruijn wrote:
> Packet socket bind operations must hold the po->bind_lock. This keeps
> po->running consistent with whether the socket is actually on a ptype
> list to receive packets.
> 
> fanout_add unbinds a socket and its packet_rcv/tpacket_rcv call, then
> binds the fanout object to receive through packet_rcv_fanout.
> 
> Make it hold the po->bind_lock when testing po->running and rebinding.
> Else, it can race with other rebind operations, such as that in
> packet_set_ring from packet_rcv to tpacket_rcv. Concurrent updates
> can result in a socket being added to a fanout group twice, causing
> use-after-free KASAN bug reports, among others.
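[A userspace sketch of the check-then-act race the patch closes. This is not kernel code: `sock_state`, `fanout_add_racy`, and `fanout_add_fixed` are invented names, pthread mutexes stand in for `po->bind_lock`, and a plain counter stands in for fanout group membership. The point is the shape of the fix: testing `running` and acting on the result must happen under the same lock.]

```c
#include <pthread.h>
#include <stdbool.h>
#include <assert.h>

/* Hypothetical analogue of struct packet_sock's fields relevant here:
 * 'running' plays the role of po->running, 'bind_lock' the role of
 * po->bind_lock, and 'fanout_members' counts group joins. */
struct sock_state {
	pthread_mutex_t bind_lock;
	bool running;        /* is the socket on a ptype list? */
	int fanout_members;  /* times this socket joined a fanout group */
};

/* Pre-patch shape: 'running' is tested without the lock. A concurrent
 * rebind (e.g. packet_set_ring switching packet_rcv to tpacket_rcv)
 * can run between the check and the update, so the socket can be
 * added to a group based on stale state. */
static int fanout_add_racy(struct sock_state *s)
{
	if (!s->running)
		return -1;
	/* window: another thread may rebind here */
	pthread_mutex_lock(&s->bind_lock);
	s->fanout_members++;
	pthread_mutex_unlock(&s->bind_lock);
	return 0;
}

/* Post-patch shape: the check and the update are one critical
 * section, so no rebind can interleave between them. */
static int fanout_add_fixed(struct sock_state *s)
{
	int err = -1;

	pthread_mutex_lock(&s->bind_lock);
	if (s->running) {
		s->fanout_members++;
		err = 0;
	}
	pthread_mutex_unlock(&s->bind_lock);
	return err;
}
```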
> 
> Reported independently by both trinity and syzkaller.
> Verified that the syzkaller reproducer passes after this patch.
> 
> Reported-by: nixiaoming <nixiaoming@...wei.com>
> Signed-off-by: Willem de Bruijn <willemb@...gle.com>
> ---
>  net/packet/af_packet.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
> 
> diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
> index c26172995511..d288f52c53f7 100644
> --- a/net/packet/af_packet.c
> +++ b/net/packet/af_packet.c
> @@ -1684,10 +1684,6 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
>  
>  	mutex_lock(&fanout_mutex);
>  
> -	err = -EINVAL;
> -	if (!po->running)
> -		goto out;
> -
>  	err = -EALREADY;
>  	if (po->fanout)
>  		goto out;
> @@ -1749,7 +1745,10 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
>  		list_add(&match->list, &fanout_list);
>  	}
>  	err = -EINVAL;
> -	if (match->type == type &&
> +
> +	spin_lock(&po->bind_lock);
> +	if (po->running &&
> +	    match->type == type &&
>  	    match->prot_hook.type == po->prot_hook.type &&
>  	    match->prot_hook.dev == po->prot_hook.dev) {
>  		err = -ENOSPC;
> @@ -1761,6 +1760,13 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
>  			err = 0;
>  		}
>  	}
> +	spin_unlock(&po->bind_lock);
> +
> +	if (err && !refcount_read(&match->sk_ref)) {

It seems sk_ref is always read/changed under
mutex_lock(&fanout_mutex) protection.

Not sure why we use a refcount_t (or an atomic_t in older kernels).

All these atomic/spinlock/mutexes are a maze.
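[A minimal userspace sketch of the point above, assuming (as the review suggests) that every access to the counter is serialized by one mutex. Names like `group_mutex` and `group_put` are invented; pthreads stand in for fanout_mutex. When all readers and writers already hold the same lock, a plain integer suffices; an atomic refcount_t mainly buys saturation/overflow checks and lockless readers, neither of which is needed under full serialization.]

```c
#include <pthread.h>
#include <assert.h>

static pthread_mutex_t group_mutex = PTHREAD_MUTEX_INITIALIZER;
static int group_refs; /* plain int: only ever touched under group_mutex */

/* Take a reference to the (hypothetical) fanout group. */
static void group_get(void)
{
	pthread_mutex_lock(&group_mutex);
	group_refs++;
	pthread_mutex_unlock(&group_mutex);
}

/* Drop a reference; returns 1 when the last reference is gone and the
 * caller should free the group (the list_del()/kfree() path above). */
static int group_put(void)
{
	int last;

	pthread_mutex_lock(&group_mutex);
	last = (--group_refs == 0);
	pthread_mutex_unlock(&group_mutex);
	return last;
}
```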

> +		list_del(&match->list);
> +		kfree(match);
> +	}
> +
>  out:
>  	if (err && rollover) {
>  		kfree(rollover);

