Message-ID: <874kwm2e8a.fsf@cloudflare.com>
Date:   Thu, 23 Jan 2020 19:56:53 +0100
From:   Jakub Sitnicki <jakub@...udflare.com>
To:     Eric Dumazet <eric.dumazet@...il.com>,
        John Fastabend <john.fastabend@...il.com>
Cc:     bpf@...r.kernel.org, netdev@...r.kernel.org,
        kernel-team@...udflare.com,
        John Fastabend <john.fastabend@...il.com>,
        Lorenz Bauer <lmb@...udflare.com>, Martin Lau <kafai@...com>
Subject: Re: [PATCH bpf-next v4 02/12] net, sk_msg: Annotate lockless access to sk_prot on clone

On Thu, Jan 23, 2020 at 06:18 PM CET, Eric Dumazet wrote:
> On 1/23/20 7:55 AM, Jakub Sitnicki wrote:
>> The sk_msg and ULP frameworks override the protocol callbacks pointer in
>> sk->sk_prot, while tcp accesses it locklessly when cloning the listening
>> socket, that is, with neither sk_lock nor sk_callback_lock held.
>>
>> Once we enable the use of listening sockets with sockmap (and hence sk_msg),
>> there will be shared access to sk->sk_prot if the socket is getting cloned
>> while being inserted into or deleted from the sockmap on another CPU:
>>
>> Read side:
>>
>> tcp_v4_rcv
>>   sk = __inet_lookup_skb(...)
>>   tcp_check_req(sk)
>>     inet_csk(sk)->icsk_af_ops->syn_recv_sock
>>       tcp_v4_syn_recv_sock
>>         tcp_create_openreq_child
>>           inet_csk_clone_lock
>>             sk_clone_lock
>>               READ_ONCE(sk->sk_prot)
>>
>> Write side:
>>
>> sock_map_ops->map_update_elem
>>   sock_map_update_elem
>>     sock_map_update_common
>>       sock_map_link_no_progs
>>         tcp_bpf_init
>>           tcp_bpf_update_sk_prot
>>             sk_psock_update_proto
>>               WRITE_ONCE(sk->sk_prot, ops)
>>
>> sock_map_ops->map_delete_elem
>>   sock_map_delete_elem
>>     __sock_map_delete
>>       sock_map_unref
>>         sk_psock_put
>>           sk_psock_drop
>>             sk_psock_restore_proto
>>               tcp_update_ulp
>>                 WRITE_ONCE(sk->sk_prot, proto)
>>
>> Mark the shared access with READ_ONCE/WRITE_ONCE annotations.
>>
>> Acked-by: Martin KaFai Lau <kafai@...com>
>> Signed-off-by: Jakub Sitnicki <jakub@...udflare.com>
>> ---
>>  include/linux/skmsg.h | 3 ++-
>>  net/core/sock.c       | 5 +++--
>>  net/ipv4/tcp_bpf.c    | 4 +++-
>>  net/ipv4/tcp_ulp.c    | 3 ++-
>>  net/tls/tls_main.c    | 3 ++-
>>  5 files changed, 12 insertions(+), 6 deletions(-)
>>
>> diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
>> index 41ea1258d15e..55c834a5c25e 100644
>> --- a/include/linux/skmsg.h
>> +++ b/include/linux/skmsg.h
>> @@ -352,7 +352,8 @@ static inline void sk_psock_update_proto(struct sock *sk,
>>  	psock->saved_write_space = sk->sk_write_space;
>>
>>  	psock->sk_proto = sk->sk_prot;
>> -	sk->sk_prot = ops;
>> +	/* Pairs with lockless read in sk_clone_lock() */
>> +	WRITE_ONCE(sk->sk_prot, ops);
>
>
> Note there are dozens of calls like
>
> if (sk->sk_prot->handler)
>     sk->sk_prot->handler(...);
>
> Some of them are done locklessly.
>
> I know it is painful, but presumably we need
>
> const struct proto *ops = READ_ONCE(sk->sk_prot);
>
> if (ops->handler)
>     ops->handler(....);

Yikes! That will be quite an audit. Thank you for taking a look.
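
If I follow, every such lockless call site would then end up looking
roughly like this (using ->unhash() purely as an illustration, not
necessarily one of the affected call sites):

	/* Load sk_prot once so a concurrent WRITE_ONCE(sk->sk_prot, ...)
	 * from sockmap/ULP can't be observed between the NULL check and
	 * the indirect call.
	 */
	const struct proto *ops = READ_ONCE(sk->sk_prot);

	if (ops->unhash)
		ops->unhash(sk);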

Now I think I understand what John had in mind when he asked for these
annotations to be pushed to the bpf tree as well [0].

Considering these annotations are lacking today, can I do it as a follow-up?
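
For completeness, the clone-side read that the WRITE_ONCE above pairs
with is annotated along these lines in net/core/sock.c (a rough sketch
of the hunk, not a verbatim quote):

	/* sk_clone_lock() runs with neither sk_lock nor sk_callback_lock
	 * held, so load sk_prot once and use the local copy throughout.
	 */
	struct proto *prot = READ_ONCE(sk->sk_prot);
	struct sock *newsk;

	newsk = sk_prot_alloc(prot, priority, sk->sk_family);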

[0] https://lore.kernel.org/bpf/20200110105027.257877-1-jakub@cloudflare.com/T/#m6a4f84a922a393719a7ea7b33dafdb6c66b72827
