Message-ID: <CAL+tcoD6fAhqUABGL-ERn-AZZtm0kEq587a607vz3f7b6kTo5g@mail.gmail.com>
Date: Wed, 5 Feb 2025 23:50:19 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, dsahern@...nel.org, willemb@...gle.com, ast@...nel.org,
daniel@...earbox.net, andrii@...nel.org, martin.lau@...ux.dev,
eddyz87@...il.com, song@...nel.org, yonghong.song@...ux.dev,
john.fastabend@...il.com, kpsingh@...nel.org, sdf@...ichev.me,
haoluo@...gle.com, jolsa@...nel.org, horms@...nel.org, bpf@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [PATCH bpf-next v8 04/12] bpf: stop calling some sock_op BPF
CALLs in new timestamping callbacks
On Wed, Feb 5, 2025 at 11:26 PM Willem de Bruijn
<willemdebruijn.kernel@...il.com> wrote:
>
> Jason Xing wrote:
> > Simply disallow calling bpf_sock_ops_setsockopt/getsockopt,
> > bpf_sock_ops_cb_flags_set, and the bpf_sock_ops_load_hdr_opt for
> > the new timestamping callbacks for the safety consideration.
>
> Please reword this: Disallow .. unless this is operating on a locked
> TCP socket. Or something along those lines.
Will adjust it.
>
> > Besides, In the next round, the UDP proto for SO_TIMESTAMPING bpf
> > extension will be supported, so there should be no safety problem,
> > which is usually caused by UDP socket trying to access TCP fields.
>
> Besides is probably the wrong word here: this is not an aside, but
> the actual reason for this test, if I follow correctly.
Right, will fix it. Thanks.
>
> > Signed-off-by: Jason Xing <kerneljasonxing@...il.com>
> > ---
> > net/core/filter.c | 17 +++++++++++++++++
> > 1 file changed, 17 insertions(+)
> >
> > diff --git a/net/core/filter.c b/net/core/filter.c
> > index dc0e67c5776a..d3395ffe058e 100644
> > --- a/net/core/filter.c
> > +++ b/net/core/filter.c
> > @@ -5523,6 +5523,11 @@ static int __bpf_setsockopt(struct sock *sk, int level, int optname,
> > return -EINVAL;
> > }
> >
> > +static bool is_locked_tcp_sock_ops(struct bpf_sock_ops_kern *bpf_sock)
> > +{
> > + return bpf_sock->op <= BPF_SOCK_OPS_WRITE_HDR_OPT_CB;
> > +}
> > +
> > static int _bpf_setsockopt(struct sock *sk, int level, int optname,
> > char *optval, int optlen)
> > {
> > @@ -5673,6 +5678,9 @@ static const struct bpf_func_proto bpf_sock_addr_getsockopt_proto = {
> > BPF_CALL_5(bpf_sock_ops_setsockopt, struct bpf_sock_ops_kern *, bpf_sock,
> > int, level, int, optname, char *, optval, int, optlen)
> > {
> > + if (!is_locked_tcp_sock_ops(bpf_sock))
> > + return -EOPNOTSUPP;
> > +
> > return _bpf_setsockopt(bpf_sock->sk, level, optname, optval, optlen);
> > }
> >
> > @@ -5758,6 +5766,9 @@ static int bpf_sock_ops_get_syn(struct bpf_sock_ops_kern *bpf_sock,
> > BPF_CALL_5(bpf_sock_ops_getsockopt, struct bpf_sock_ops_kern *, bpf_sock,
> > int, level, int, optname, char *, optval, int, optlen)
> > {
> > + if (!is_locked_tcp_sock_ops(bpf_sock))
> > + return -EOPNOTSUPP;
> > +
> > if (IS_ENABLED(CONFIG_INET) && level == SOL_TCP &&
> > optname >= TCP_BPF_SYN && optname <= TCP_BPF_SYN_MAC) {
> > int ret, copy_len = 0;
> > @@ -5800,6 +5811,9 @@ BPF_CALL_2(bpf_sock_ops_cb_flags_set, struct bpf_sock_ops_kern *, bpf_sock,
> > struct sock *sk = bpf_sock->sk;
> > int val = argval & BPF_SOCK_OPS_ALL_CB_FLAGS;
> >
> > + if (!is_locked_tcp_sock_ops(bpf_sock))
> > + return -EOPNOTSUPP;
> > +
> > if (!IS_ENABLED(CONFIG_INET) || !sk_fullsock(sk))
> > return -EINVAL;
> >
> > @@ -7609,6 +7623,9 @@ BPF_CALL_4(bpf_sock_ops_load_hdr_opt, struct bpf_sock_ops_kern *, bpf_sock,
> > u8 search_kind, search_len, copy_len, magic_len;
> > int ret;
> >
> > + if (!is_locked_tcp_sock_ops(bpf_sock))
> > + return -EOPNOTSUPP;
> > +
> > /* 2 byte is the minimal option len except TCPOPT_NOP and
> > * TCPOPT_EOL which are useless for the bpf prog to learn
> > * and this helper disallow loading them also.
> > --
> > 2.43.5
> >
>
>