Message-ID: <AEEC7BD8-7913-4D5C-A3D6-0AA384BF6FB4@fb.com>
Date: Wed, 21 Jun 2017 23:55:36 +0000
From: Lawrence Brakmo <brakmo@...com>
To: Craig Gallek <kraigatgoog@...il.com>
CC: netdev <netdev@...r.kernel.org>, Kernel Team <Kernel-team@...com>,
"Blake Matheny" <bmatheny@...com>, Alexei Starovoitov <ast@...com>,
Daniel Borkmann <daniel@...earbox.net>,
David Ahern <dsa@...ulusnetworks.com>
Subject: Re: [PATCH net-next v3 07/15] bpf: Add setsockopt helper function to
bpf
On 6/21/17, 10:13 AM, "Craig Gallek" <kraigatgoog@...il.com> wrote:
On Wed, Jun 21, 2017 at 12:51 PM, Lawrence Brakmo <brakmo@...com> wrote:
>
> On 6/20/17, 2:25 PM, "Craig Gallek" <kraigatgoog@...il.com> wrote:
>
> On Mon, Jun 19, 2017 at 11:00 PM, Lawrence Brakmo <brakmo@...com> wrote:
> > Added support for calling a subset of socket setsockopts from
> > BPF_PROG_TYPE_SOCK_OPS programs. The code was duplicated rather
> > than changed to call the socket setsockopt function because the
> > required changes would have been larger.
> >
> > @@ -2671,6 +2672,69 @@ static const struct bpf_func_proto bpf_get_socket_uid_proto = {
> > .arg1_type = ARG_PTR_TO_CTX,
> > };
> >
> > +BPF_CALL_5(bpf_setsockopt, struct bpf_sock_ops_kern *, bpf_sock,
> > + int, level, int, optname, char *, optval, int, optlen)
> > +{
> > + struct sock *sk = bpf_sock->sk;
> > + int ret = 0;
> > + int val;
> > +
> > + if (bpf_sock->is_req_sock)
> > + return -EINVAL;
> > +
> > + if (level == SOL_SOCKET) {
> > + /* Only some socketops are supported */
> > + val = *((int *)optval);
> > +
> > + switch (optname) {
> > + case SO_RCVBUF:
> > + sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
> > + sk->sk_rcvbuf = max_t(int, val * 2, SOCK_MIN_RCVBUF);
> > + break;
> > + case SO_SNDBUF:
> > + sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
> > + sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF);
> > + break;
> > + case SO_MAX_PACING_RATE:
> > + sk->sk_max_pacing_rate = val;
> > + sk->sk_pacing_rate = min(sk->sk_pacing_rate,
> > + sk->sk_max_pacing_rate);
> > + break;
> > + case SO_PRIORITY:
> > + sk->sk_priority = val;
> > + break;
> > + case SO_RCVLOWAT:
> > + if (val < 0)
> > + val = INT_MAX;
> > + sk->sk_rcvlowat = val ? : 1;
> > + break;
> > + case SO_MARK:
> > + sk->sk_mark = val;
> > + break;
>
> Isn't the socket lock required when manipulating these fields? It's
> not obvious that the lock is held from every bpf hook point that could
> trigger this function...
>
> The sock_ops BPF programs are being called from within the network
> stack and my understanding is that the lock has already been taken.
> Currently they are only called:
> (1) after a packet is received, where there is the call to
> bh_lock_sock_nested() in tcp_v4_rcv() before calling
> tcp_v4_do_rcv().
> (2) in tcp_connect(), where there should be no issue
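(For reference, a simplified sketch of hook point (1), not the literal kernel
code: tcp_v4_rcv() takes ownership of the socket before tcp_v4_do_rcv(), so a
sock_ops program run from the receive path already runs under the owner lock.
Socket lookup and error handling are omitted. Hook point (2) is similar:
inet_stream_connect() does lock_sock() before the connect path ever reaches
tcp_connect().)

#include <net/tcp.h>

static int rcv_path_sketch(struct sock *sk, struct sk_buff *skb)
{
	int ret = 0;

	bh_lock_sock_nested(sk);		/* owner lock, softirq context */
	if (!sock_owned_by_user(sk))
		ret = tcp_v4_do_rcv(sk, skb);	/* can reach tcp_call_bpf() */
	else if (tcp_add_backlog(sk, skb))	/* otherwise processed later under lock_sock() */
		ret = -1;
	bh_unlock_sock(sk);

	return ret;
}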
Someone who understands the TCP stack better than I should verify
this, but even if it's OK to do in these specific spots, it's not
unreasonable to believe that someone will add another socket-context
bpf hook in the future where it would not be safe. Without some
additional check to prevent this setsockopt function from being called
in those spots, we could run into trouble. The only other
socket-context point currently is the cgroup one, which happens during
socket creation and should also be safe.
The cgroup socket program type (BPF_PROG_TYPE_CGROUP_SOCK) is different from
BPF_PROG_TYPE_SOCK_OPS and it cannot call the bpf_setsockopt function.
Nor should it, because the context arguments are different (struct sock *
vs. struct bpf_sock_ops_kern *).
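For illustration, this is roughly what a SOCK_OPS program using the helper
looks like (a sketch along the lines of the sample programs later in this
series; the callback op names and the bpf_setsockopt() declaration in
bpf_helpers.h are assumed from those patches):

#include <uapi/linux/bpf.h>
#include <linux/socket.h>
#include "bpf_helpers.h"	/* assumed to declare bpf_setsockopt() for BPF_FUNC_setsockopt */

SEC("sockops")
int bpf_clamp_bufs(struct bpf_sock_ops *skops)
{
	int bufsize = 150000;
	int rv = 0;

	switch (skops->op) {
	case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
	case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
		/* Connection is established, so this is a full sock
		 * (is_req_sock is false) and the helper will not return -EINVAL.
		 */
		rv = bpf_setsockopt(skops, SOL_SOCKET, SO_RCVBUF,
				    &bufsize, sizeof(bufsize));
		rv |= bpf_setsockopt(skops, SOL_SOCKET, SO_SNDBUF,
				     &bufsize, sizeof(bufsize));
		break;
	default:
		rv = -1;
	}
	skops->reply = rv;
	return 1;
}
char _license[] SEC("license") = "GPL";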
> Just in case I added a check “sock_owned_by_me(sk)” in tcp_call_bpf()
> Do you think this is enough, or should I explicitly add a bh_lock_sock_nested
> in the bpf_setsockopt function?
Adding the check is certainly a way to test the behavior as
implemented, but this bpf function could be called by any
socket-context bpf (not just the tcp_call_bpf ones). I believe the
It can only be called by the tcp_call_bpf ones.
current bpf hook points only guarantee RCU read-side lock. Adding an
additional lock guarantee may have undesirable performance
implications. If this is just for socket creation or other rare
events it's probably not a big deal, but if it's for a hook in the
fast path it's probably a non-starter.
I agree
I guess the higher level question is what should the locking
guarantees for socket-context bpf programs be?
Since the whole point of this new bpf prog type is to be able to
modify sock and TCP parameters, we should ensure that
tcp_call_bpf is only called when it is safe to change tcp sock state
(either because the sock lock is held or because it is safe for
other reasons).
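One way to make that explicit (a minimal sketch, not the exact hunk in this
series; the is_req_sock argument and BPF_CGROUP_RUN_PROG_SOCK_OPS come from
earlier patches in the series):

/* include/net/tcp.h (sketch) */
static inline int tcp_call_bpf(struct sock *sk, bool is_req_sock, int op)
{
	struct bpf_sock_ops_kern sock_ops;
	int ret;

	if (!is_req_sock)
		sock_owned_by_me(sk);	/* lockdep warning if the caller does not own the socket */

	memset(&sock_ops, 0, sizeof(sock_ops));
	sock_ops.sk = sk;
	sock_ops.is_req_sock = is_req_sock;
	sock_ops.op = op;

	ret = BPF_CGROUP_RUN_PROG_SOCK_OPS(&sock_ops);
	if (ret == 0)
		ret = sock_ops.reply;
	else
		ret = -1;
	return ret;
}

Any future hook added without the socket lock held would then trip the
sock_owned_by_me() check under lockdep rather than silently racing.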