Date:	Thu, 31 Mar 2016 21:48:27 +0200
From:	Hannes Frederic Sowa <hannes@...essinduktion.org>
To:	David Miller <davem@...emloft.net>
Cc:	daniel@...earbox.net, eric.dumazet@...il.com,
	alexei.starovoitov@...il.com, mkubecek@...e.cz,
	sasha.levin@...cle.com, jslaby@...e.cz, mst@...hat.com,
	netdev@...r.kernel.org
Subject: Re: [PATCH net] tun, bpf: fix suspicious RCU usage in
 tun_{attach,detach}_filter

On 31.03.2016 21:36, David Miller wrote:
> From: Hannes Frederic Sowa <hannes@...essinduktion.org>
> Date: Thu, 31 Mar 2016 21:24:12 +0200
>
>> diff --git a/net/core/filter.c b/net/core/filter.c
>> index 4b81b71171b4ce..8ab270d5ce5507 100644
>> --- a/net/core/filter.c
>> +++ b/net/core/filter.c
>> @@ -1166,7 +1166,8 @@ static int __sk_attach_prog(struct bpf_prog *prog, struct sock *sk)
>>   	}
>>
>>   	old_fp = rcu_dereference_protected(sk->sk_filter,
>> -					   sock_owned_by_user(sk));
>> +					   lockdep_rtnl_is_held() ||
>> +					   lockdep_sock_is_held(sk));
>>   	rcu_assign_pointer(sk->sk_filter, fp);
>>
>>   	if (old_fp)
>
> I have the same objections Daniel did.
>
> Not all socket filter clients use RTNL as the synchronization
> mechanism.  The caller, or some descriptive element, should tell us
> what that synchronizing element is.
>
> Yes, I understand how these RTNL checks can pass "accidentally" but
> the opposite is true too.  A socket locking synchronizing user,
> who didn't lock the socket, might now pass because RTNL happens
> to be held elsewhere.

Actually, lockdep_rtnl_is_held() checks whether this specific code path 
(the current thread) holds the lock, not whether some other CPU or 
thread does. So it will not pass here just because another CPU happens 
to hold the lock.

lockdep stores the currently held locks in current->held_locks. If we 
are preempted, the current pointer switches along with us, and if we 
take a spin_lock we can't sleep and thus can't be preempted. So we 
always know that this specific code path holds the lock.

Using sock_owned_by_user() actually has this problem, which is why I am 
replacing it: it cannot tell us *who* has the socket locked.

The tightest solution would probably be to combine both patches:

bool called_by_tuntap;

old_fp = rcu_dereference_protected(sk->sk_filter,
				   called_by_tuntap ?
				   lockdep_rtnl_is_held() :
				   lockdep_sock_is_held(sk));

Bye,
Hannes
