Message-ID: <20180615001856.dc2huljl7o554vzn@kafai-mbp.dhcp.thefacebook.com>
Date: Thu, 14 Jun 2018 17:18:56 -0700
From: Martin KaFai Lau <kafai@...com>
To: John Fastabend <john.fastabend@...il.com>
CC: <ast@...nel.org>, <daniel@...earbox.net>, <netdev@...r.kernel.org>
Subject: Re: [bpf PATCH v2 2/6] bpf: sockmap only allow ESTABLISHED sock state
On Thu, Jun 14, 2018 at 09:44:52AM -0700, John Fastabend wrote:
> Per the note in the TLS ULP (which is actually a generic statement
> regarding ULPs)
>
> /* The TLS ulp is currently supported only for TCP sockets
> * in ESTABLISHED state.
> * Supporting sockets in LISTEN state will require us
> * to modify the accept implementation to clone rather than
> * share the ulp context.
> */
Can you explain how that applies to the bpf_tcp ulp?
My understanding is that the "ulp context" referred to in the TLS ulp is
the tls_context stored in icsk_ulp_data, but I don't see the bpf_tcp
ulp using icsk_ulp_data.
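
For reference, this is the difference I am looking at (heavily simplified
sketch from my reading of net/tls/tls_main.c and kernel/bpf/sockmap.c,
not the exact code; error handling and setup trimmed):

/* TLS ulp: the "ulp context" is the tls_context hung off icsk_ulp_data,
 * which is what the LISTEN/accept comment is about -- an accept()ed
 * child would share it with the listener.
 */
static int tls_init(struct sock *sk)
{
	struct inet_connection_sock *icsk = inet_csk(sk);
	struct tls_context *ctx;

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;
	icsk->icsk_ulp_data = ctx;
	/* ... proto/crypto setup trimmed ... */
	return 0;
}

/* bpf_tcp ulp: the per-socket state (struct smap_psock) is kept in
 * sk_user_data instead, e.g. when the psock is attached:
 *
 *	rcu_assign_sk_user_data(sock, psock);
 *
 * so icsk_ulp_data is never used by bpf_tcp.
 */
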
Others LGTM.
>
> After this patch we only allow socks that are in ESTABLISHED state or
> are being added via a sock_ops event that is transitioning into an
> ESTABLISHED state. By allowing sock_ops events we allow users to
> manage sockmaps directly from sock ops programs. The two supported
> sock_ops ops are BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB and
> BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB.
>
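(For anyone following along: the sock_ops side of this looks roughly like
the existing sockmap sample, i.e. something along these lines. Untested
sketch; the "sock_map" name, key and map sizes are made up for
illustration.)

#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") sock_map = {
	.type		= BPF_MAP_TYPE_SOCKMAP,
	.key_size	= sizeof(int),
	.value_size	= sizeof(int),
	.max_entries	= 20,
};

SEC("sockops")
int bpf_sockmap(struct bpf_sock_ops *skops)
{
	int op = (int)skops->op;
	int key = 0;

	switch (op) {
	case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
	case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
		/* Only these two ops pass the new bpf_is_valid_sock()
		 * check, so the sock is transitioning into ESTABLISHED.
		 */
		bpf_sock_map_update(skops, &sock_map, &key, BPF_NOEXIST);
		break;
	default:
		break;
	}
	return 0;
}
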
> From the userspace BPF update API the sock lock is also taken now
> to ensure we don't race with state changes after the ESTABLISHED
> check. The BPF program sock ops hook already has the sock lock
> taken.
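
(The userspace path in question is just a plain map update with the socket
fd as the value, roughly as below. Hypothetical helper for illustration --
add_sock_to_map, map_fd and sock_fd are made up; map and socket setup
elided.)

#include <bpf/bpf.h>	/* tools/lib/bpf */

/* Add a connected TCP socket to a BPF_MAP_TYPE_SOCKMAP. With this patch
 * the kernel takes lock_sock() around the check and returns -EOPNOTSUPP
 * unless the socket is TCP and in TCP_ESTABLISHED state.
 */
static int add_sock_to_map(int map_fd, int sock_fd)
{
	int key = 0;

	return bpf_map_update_elem(map_fd, &key, &sock_fd, BPF_ANY);
}
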
>
> Also tested with 'netserver -6' and 'netperf -H [IPv6]' as well as
> 'netperf -H [IPv4]'.
>
> Reported-by: Eric Dumazet <edumazet@...gle.com>
> Signed-off-by: John Fastabend <john.fastabend@...il.com>
> ---
> 0 files changed
>
> diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
> index f6dd4cd..f1ab52d 100644
> --- a/kernel/bpf/sockmap.c
> +++ b/kernel/bpf/sockmap.c
> @@ -1976,13 +1976,20 @@ static int sock_map_update_elem(struct bpf_map *map,
>  		return -EINVAL;
>  	}
> 
> +	lock_sock(skops.sk);
> +	/* ULPs are currently supported only for TCP sockets in ESTABLISHED
> +	 * state.
> +	 */
>  	if (skops.sk->sk_type != SOCK_STREAM ||
> -	    skops.sk->sk_protocol != IPPROTO_TCP) {
> -		fput(socket->file);
> -		return -EOPNOTSUPP;
> +	    skops.sk->sk_protocol != IPPROTO_TCP ||
> +	    skops.sk->sk_state != TCP_ESTABLISHED) {
> +		err = -EOPNOTSUPP;
> +		goto out;
>  	}
> 
>  	err = sock_map_ctx_update_elem(&skops, map, key, flags);
> +out:
> +	release_sock(skops.sk);
>  	fput(socket->file);
>  	return err;
>  }
> @@ -2247,10 +2254,6 @@ static int sock_hash_ctx_update_elem(struct bpf_sock_ops_kern *skops,
> 
>  	sock = skops->sk;
> 
> -	if (sock->sk_type != SOCK_STREAM ||
> -	    sock->sk_protocol != IPPROTO_TCP)
> -		return -EOPNOTSUPP;
> -
>  	if (unlikely(map_flags > BPF_EXIST))
>  		return -EINVAL;
> 
> @@ -2338,7 +2341,20 @@ static int sock_hash_update_elem(struct bpf_map *map,
>  		return -EINVAL;
>  	}
> 
> +	lock_sock(skops.sk);
> +	/* ULPs are currently supported only for TCP sockets in ESTABLISHED
> +	 * state.
> +	 */
> +	if (skops.sk->sk_type != SOCK_STREAM ||
> +	    skops.sk->sk_protocol != IPPROTO_TCP ||
> +	    skops.sk->sk_state != TCP_ESTABLISHED) {
> +		err = -EOPNOTSUPP;
> +		goto out;
> +	}
> +
>  	err = sock_hash_ctx_update_elem(&skops, map, key, flags);
> +out:
> +	release_sock(skops.sk);
>  	fput(socket->file);
>  	return err;
>  }
> @@ -2423,10 +2439,19 @@ struct sock *__sock_hash_lookup_elem(struct bpf_map *map, void *key)
>  	.map_delete_elem = sock_hash_delete_elem,
>  };
> 
> +static bool bpf_is_valid_sock(struct bpf_sock_ops_kern *ops)
> +{
> +	return ops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB ||
> +	       ops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB;
> +}
> +
>  BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
>  	   struct bpf_map *, map, void *, key, u64, flags)
>  {
>  	WARN_ON_ONCE(!rcu_read_lock_held());
> +
> +	if (!bpf_is_valid_sock(bpf_sock))
> +		return -EOPNOTSUPP;
>  	return sock_map_ctx_update_elem(bpf_sock, map, key, flags);
>  }
> 
> @@ -2445,6 +2470,9 @@ struct sock *__sock_hash_lookup_elem(struct bpf_map *map, void *key)
>  	   struct bpf_map *, map, void *, key, u64, flags)
>  {
>  	WARN_ON_ONCE(!rcu_read_lock_held());
> +
> +	if (!bpf_is_valid_sock(bpf_sock))
> +		return -EOPNOTSUPP;
>  	return sock_hash_ctx_update_elem(bpf_sock, map, key, flags);
>  }
> 
>