Message-ID: <1345475500.5158.321.camel@edumazet-glaptop>
Date:	Mon, 20 Aug 2012 17:11:40 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Pavel Emelyanov <xemul@...allels.com>
Cc:	David Miller <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next] packet: Protect packet sk list with mutex

On Mon, 2012-08-20 at 18:50 +0400, Pavel Emelyanov wrote:
> In patch eea68e2f (packet: Report socket mclist info via diag module) I
> introduced a "scheduling while atomic" problem in the packet diag module --
> the socket list is traversed under rcu_read_lock(), but accessing a
> socket's mclist from within that section requires taking the rtnl lock
> (i.e. a mutex). The same problem was then re-introduced by further packet
> diag patches (the fanout mutex and the pgvec mutex for the rings) :(
> 
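The problematic pattern is, roughly, the following (a minimal sketch of
the idea, not the actual diag code):

	struct sock *sk;

	rcu_read_lock();
	sk_for_each_rcu(sk, &net->packet.sklist) {
		rtnl_lock();	/* a mutex -> may sleep, but we are
				 * inside an RCU read-side section */
		/* ... report pkt_sk(sk)->mclist ... */
		rtnl_unlock();
	}
	rcu_read_unlock();
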
> Apart from being terribly sorry for the above, I propose to change the
> packet sk list protection from a spinlock to a mutex. This lock currently
> protects only the sklist modifications (which already happen in sleeping
> context) and nothing more.
> 
> Am I wrong again, and is fine-grained atomic locking instead required for
> everything that is reported by packet diag?
> 
> Signed-off-by: Pavel Emelyanov <xemul@...allels.com>
> 
> ---
> 
> diff --git a/include/net/netns/packet.h b/include/net/netns/packet.h
> index cb4e894..4780b08 100644
> --- a/include/net/netns/packet.h
> +++ b/include/net/netns/packet.h
> @@ -8,7 +8,7 @@
>  #include <linux/spinlock.h>
>  
>  struct netns_packet {
> -	spinlock_t		sklist_lock;
> +	struct mutex		sklist_lock;
>  	struct hlist_head	sklist;
>  };
>  
> diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
> index 226b2cd..5048672 100644
> --- a/net/packet/af_packet.c
> +++ b/net/packet/af_packet.c
> @@ -2308,10 +2308,10 @@ static int packet_release(struct socket *sock)
>  	net = sock_net(sk);
>  	po = pkt_sk(sk);
>  
> -	spin_lock_bh(&net->packet.sklist_lock);
> +	mutex_lock(&net->packet.sklist_lock);
>  	sk_del_node_init_rcu(sk);
>  	sock_prot_inuse_add(net, sk->sk_prot, -1);

Last time I checked, sock_prot_inuse_add() needed BH protection.

(This could be relaxed somewhat on x86 thanks to this_cpu_add() ... but
that's another point.)
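
For the release path above, one way to keep the mutex for the sklist
while still giving the per-cpu counter BH protection would be something
like this (an untested sketch):

	mutex_lock(&net->packet.sklist_lock);
	sk_del_node_init_rcu(sk);
	mutex_unlock(&net->packet.sklist_lock);

	/* sock_prot_inuse_add() does a plain read-modify-write on a
	 * per-cpu counter; keep BHs disabled so a softirq on this CPU
	 * cannot interleave with the update. */
	local_bh_disable();
	sock_prot_inuse_add(net, sk->sk_prot, -1);
	local_bh_enable();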

Could you please report the full stack trace?

