Message-ID: <20190226012959.4gttysrj3khqumc5@kafai-mbp.dhcp.thefacebook.com>
Date: Tue, 26 Feb 2019 01:30:02 +0000
From: Martin Lau <kafai@...com>
To: Stanislav Fomichev <sdf@...ichev.me>
CC: Lawrence Brakmo <brakmo@...com>, netdev <netdev@...r.kernel.org>,
"Alexei Starovoitov" <ast@...com>,
Daniel Borkmann <daniel@...earbox.net>,
"Eric Dumazet" <eric.dumazet@...il.com>,
Kernel Team <Kernel-team@...com>
Subject: Re: [PATCH v2 bpf-next 2/9] bpf: Add bpf helper bpf_tcp_enter_cwr
On Mon, Feb 25, 2019 at 03:14:38PM -0800, Stanislav Fomichev wrote:
[ ... ]
> >
> > To ensure it is only called from BPF_CGROUP_INET_EGRESS, the
> > attr->expected_attach_type must be specified as BPF_CGROUP_INET_EGRESS
> > during load time if the prog uses this new helper.
> > The newly added prog->enforce_expected_attach_type bit will also be set
> > if this new helper is used. This bit exists for backward compatibility
> > reasons, because prog->expected_attach_type is currently ignored for
> > BPF_PROG_TYPE_CGROUP_SKB. At attach time,
> > prog->expected_attach_type is only enforced if the
> > prog->enforce_expected_attach_type bit is set,
> > i.e. prog->expected_attach_type is only enforced if this new helper
> > is used by the prog.
> >
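(Purely for illustration, a minimal user-space sketch of what that load-time
requirement looks like with the raw bpf(2) syscall; none of this is in the
patch, and insns/insn_cnt are assumed to come from elsewhere:)

#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Hypothetical loader: a CGROUP_SKB prog that calls bpf_tcp_enter_cwr()
 * now has to announce where it will be attached.
 */
static int load_cgroup_skb_prog(const struct bpf_insn *insns, __u32 insn_cnt)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.prog_type = BPF_PROG_TYPE_CGROUP_SKB;
	/* Required at load time once the prog uses the new helper. */
	attr.expected_attach_type = BPF_CGROUP_INET_EGRESS;
	attr.insns = (__u64)(unsigned long)insns;
	attr.insn_cnt = insn_cnt;
	attr.license = (__u64)(unsigned long)"GPL";

	return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
}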
[ ... ]
> > @@ -1725,6 +1733,10 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
> > case BPF_PROG_TYPE_CGROUP_SOCK:
> > case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
> > return attach_type == prog->expected_attach_type ? 0 : -EINVAL;
> > + case BPF_PROG_TYPE_CGROUP_SKB:
> > + return prog->enforce_expected_attach_type &&
> > + prog->expected_attach_type != attach_type ?
> > + -EINVAL : 0;
> > default:
> > return 0;
> > }
[ ... ]
> > diff --git a/net/core/filter.c b/net/core/filter.c
> > index 97916eedfe69..ca57ef25279c 100644
> > --- a/net/core/filter.c
> > +++ b/net/core/filter.c
> > @@ -5426,6 +5426,24 @@ static const struct bpf_func_proto bpf_tcp_sock_proto = {
> > .arg1_type = ARG_PTR_TO_SOCK_COMMON,
> > };
> >
> > +BPF_CALL_1(bpf_tcp_enter_cwr, struct tcp_sock *, tp)
> > +{
> > + struct sock *sk = (struct sock *)tp;
> > +
> > + if (sk->sk_state == TCP_ESTABLISHED) {
> > + tcp_enter_cwr(sk);
> > + return 0;
> > + }
> > +
> > + return -EINVAL;
> > +}
> > +
> > +static const struct bpf_func_proto bpf_tcp_enter_cwr_proto = {
> > + .func = bpf_tcp_enter_cwr,
> > + .gpl_only = false,
> > + .ret_type = RET_INTEGER,
> > + .arg1_type = ARG_PTR_TO_TCP_SOCK,
> > +};
> > #endif /* CONFIG_INET */
> >
> > bool bpf_helper_changes_pkt_data(void *func)
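(As an aside, a rough, entirely hypothetical sketch of the BPF-prog side,
just to show the intended skb->sk -> bpf_tcp_sock() -> bpf_tcp_enter_cwr()
call chain.  The drop policy and names are made up, this is not the selftest
from this series, and it assumes a tree with this series applied so that
BPF_FUNC_tcp_enter_cwr exists in the uapi header:)

#include <linux/bpf.h>

#define SEC(name) __attribute__((section(name), used))

static struct bpf_tcp_sock *(*bpf_tcp_sock)(struct bpf_sock *sk) =
	(void *)BPF_FUNC_tcp_sock;
static int (*bpf_tcp_enter_cwr)(struct bpf_tcp_sock *tp) =
	(void *)BPF_FUNC_tcp_enter_cwr;

SEC("cgroup_skb/egress")
int egress_enter_cwr(struct __sk_buff *skb)
{
	struct bpf_sock *sk = skb->sk;
	struct bpf_tcp_sock *tp;

	if (!sk)
		return 1;		/* allow */
	tp = bpf_tcp_sock(sk);
	if (!tp)
		return 1;		/* not a TCP sock */

	/* Made-up policy: drop oversized skbs and push the flow into CWR.
	 * The helper itself only acts when sk_state == TCP_ESTABLISHED.
	 */
	if (skb->len > 1500) {
		bpf_tcp_enter_cwr(tp);
		return 0;		/* drop */
	}

	return 1;
}

char _license[] SEC("license") = "GPL";

Such a prog would then have to be loaded with
attr->expected_attach_type == BPF_CGROUP_INET_EGRESS, as described above.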
> > @@ -5585,6 +5603,13 @@ cg_skb_func_proto(enum bpf_func_id func_id, struct bpf_prog *prog)
> > #ifdef CONFIG_INET
> > case BPF_FUNC_tcp_sock:
> > return &bpf_tcp_sock_proto;
>
> [...]
> > + case BPF_FUNC_tcp_enter_cwr:
> > + if (prog->expected_attach_type == BPF_CGROUP_INET_EGRESS) {
> > + prog->enforce_expected_attach_type = 1;
> > + return &bpf_tcp_enter_cwr_proto;
> Instead of this back and forth with enforce_expected_attach_type, can we
> just do this here:
>
> if (prog->expected_attach_type == BPF_CGROUP_INET_EGRESS)
> 	return &bpf_tcp_enter_cwr_proto;
> else
> 	return NULL;
>
> Wouldn't it have the same effect?
attr->expected_attach_type is currently ignored (i.e. not checked) at
load time for BPF_PROG_TYPE_CGROUP_SKB.
How can we avoid breaking backward compatibility without selectively
enforcing prog->expected_attach_type at attach time?
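
To make the compat concern concrete, a hypothetical sketch of the legacy
attach path that must keep working (nothing below is from this series):

#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* prog_fd was loaded as BPF_PROG_TYPE_CGROUP_SKB with
 * expected_attach_type left at 0, which the kernel has so far ignored.
 */
static int legacy_attach_egress(int cgroup_fd, int prog_fd)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.target_fd = cgroup_fd;
	attr.attach_bpf_fd = prog_fd;
	/* 0 != BPF_CGROUP_INET_EGRESS, so enforcing the match for every
	 * CGROUP_SKB prog would turn this long-working attach into -EINVAL.
	 * The enforce_expected_attach_type bit restricts the check to progs
	 * that actually call bpf_tcp_enter_cwr().
	 */
	attr.attach_type = BPF_CGROUP_INET_EGRESS;
	return syscall(__NR_bpf, BPF_PROG_ATTACH, &attr, sizeof(attr));
}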