Message-ID: <67a3830cbe106_14e083294f9@willemb.c.googlers.com.notmuch>
Date: Wed, 05 Feb 2025 10:26:04 -0500
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Jason Xing <kerneljasonxing@...il.com>,
	davem@...emloft.net,
	edumazet@...gle.com,
	kuba@...nel.org,
	pabeni@...hat.com,
	dsahern@...nel.org,
	willemdebruijn.kernel@...il.com,
	willemb@...gle.com,
	ast@...nel.org,
	daniel@...earbox.net,
	andrii@...nel.org,
	martin.lau@...ux.dev,
	eddyz87@...il.com,
	song@...nel.org,
	yonghong.song@...ux.dev,
	john.fastabend@...il.com,
	kpsingh@...nel.org,
	sdf@...ichev.me,
	haoluo@...gle.com,
	jolsa@...nel.org,
	horms@...nel.org
Cc: bpf@...r.kernel.org,
	netdev@...r.kernel.org,
	Jason Xing <kerneljasonxing@...il.com>
Subject: Re: [PATCH bpf-next v8 04/12] bpf: stop calling some sock_op BPF
	CALLs in new timestamping callbacks

Jason Xing wrote:
> Simply disallow calling bpf_sock_ops_setsockopt/getsockopt,
> bpf_sock_ops_cb_flags_set, and the bpf_sock_ops_load_hdr_opt for
> the new timestamping callbacks for the safety consideration.

Please reword this: Disallow ... unless this is operating on a locked
TCP socket. Or something along those lines.

> Besides, In the next round, the UDP proto for SO_TIMESTAMPING bpf
> extension will be supported, so there should be no safety problem,
> which is usually caused by UDP socket trying to access TCP fields.

"Besides" is probably the wrong word here: this is not an aside, but
the actual reason for this test, if I follow correctly.

> Signed-off-by: Jason Xing <kerneljasonxing@...il.com>
> ---
> net/core/filter.c | 17 +++++++++++++++++
> 1 file changed, 17 insertions(+)
>
> diff --git a/net/core/filter.c b/net/core/filter.c
> index dc0e67c5776a..d3395ffe058e 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -5523,6 +5523,11 @@ static int __bpf_setsockopt(struct sock *sk, int level, int optname,
> return -EINVAL;
> }
>
> +static bool is_locked_tcp_sock_ops(struct bpf_sock_ops_kern *bpf_sock)
> +{
> + return bpf_sock->op <= BPF_SOCK_OPS_WRITE_HDR_OPT_CB;
> +}
> +
> static int _bpf_setsockopt(struct sock *sk, int level, int optname,
> char *optval, int optlen)
> {
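
For anyone following along: this check leans entirely on the
BPF_SOCK_OPS_* enum order in the UAPI header. Every op up to and
including BPF_SOCK_OPS_WRITE_HDR_OPT_CB runs on a locked TCP socket;
the timestamping callbacks this series appends come after it. A toy
standalone illustration of that dependency (enum abbreviated, and the
TS_* name is the one this series proposes, not yet in mainline):

#include <stdbool.h>
#include <stdio.h>

/* Abbreviated stand-in for the UAPI BPF_SOCK_OPS_* enum; only the
 * ordering matters.  Ops up to and including WRITE_HDR_OPT_CB are
 * invoked with the TCP socket locked; anything after may not be.
 */
enum {
	BPF_SOCK_OPS_VOID,
	/* ... existing TCP callbacks, all under the socket lock ... */
	BPF_SOCK_OPS_WRITE_HDR_OPT_CB,
	BPF_SOCK_OPS_TS_SCHED_OPT_CB,	/* appended by this series */
};

static bool is_locked_tcp_sock_ops(int op)
{
	return op <= BPF_SOCK_OPS_WRITE_HDR_OPT_CB;
}

int main(void)
{
	printf("%d\n", is_locked_tcp_sock_ops(BPF_SOCK_OPS_WRITE_HDR_OPT_CB)); /* 1 */
	printf("%d\n", is_locked_tcp_sock_ops(BPF_SOCK_OPS_TS_SCHED_OPT_CB));  /* 0 */
	return 0;
}

One consequence: any op added to the enum later has to land on the
correct side of BPF_SOCK_OPS_WRITE_HDR_OPT_CB, which may deserve a
comment in the header.
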
> @@ -5673,6 +5678,9 @@ static const struct bpf_func_proto bpf_sock_addr_getsockopt_proto = {
> BPF_CALL_5(bpf_sock_ops_setsockopt, struct bpf_sock_ops_kern *, bpf_sock,
> int, level, int, optname, char *, optval, int, optlen)
> {
> + if (!is_locked_tcp_sock_ops(bpf_sock))
> + return -EOPNOTSUPP;
> +
> return _bpf_setsockopt(bpf_sock->sk, level, optname, optval, optlen);
> }
>
> @@ -5758,6 +5766,9 @@ static int bpf_sock_ops_get_syn(struct bpf_sock_ops_kern *bpf_sock,
> BPF_CALL_5(bpf_sock_ops_getsockopt, struct bpf_sock_ops_kern *, bpf_sock,
> int, level, int, optname, char *, optval, int, optlen)
> {
> + if (!is_locked_tcp_sock_ops(bpf_sock))
> + return -EOPNOTSUPP;
> +
> if (IS_ENABLED(CONFIG_INET) && level == SOL_TCP &&
> optname >= TCP_BPF_SYN && optname <= TCP_BPF_SYN_MAC) {
> int ret, copy_len = 0;
> @@ -5800,6 +5811,9 @@ BPF_CALL_2(bpf_sock_ops_cb_flags_set, struct bpf_sock_ops_kern *, bpf_sock,
> struct sock *sk = bpf_sock->sk;
> int val = argval & BPF_SOCK_OPS_ALL_CB_FLAGS;
>
> + if (!is_locked_tcp_sock_ops(bpf_sock))
> + return -EOPNOTSUPP;
> +
> if (!IS_ENABLED(CONFIG_INET) || !sk_fullsock(sk))
> return -EINVAL;
>
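
From the program side, the upshot is that a sockops prog now has to
key off skops->op before using these helpers. A minimal, untested
sketch against libbpf (the helper and BPF_SOCK_OPS_RTT_CB_FLAG are
existing UAPI; the guard mirrors the kernel-side check above):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("sockops")
int guard_locked_helpers(struct bpf_sock_ops *skops)
{
	switch (skops->op) {
	case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
	case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
		/* Locked TCP socket: subscribing to RTT callbacks is fine. */
		bpf_sock_ops_cb_flags_set(skops, BPF_SOCK_OPS_RTT_CB_FLAG);
		break;
	default:
		/* The new timestamping ops fall through here; with this
		 * patch a stray bpf_sock_ops_cb_flags_set() would return
		 * -EOPNOTSUPP rather than poke at unlocked tcp_sock state.
		 */
		break;
	}
	return 1;
}

char _license[] SEC("license") = "GPL";
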
> @@ -7609,6 +7623,9 @@ BPF_CALL_4(bpf_sock_ops_load_hdr_opt, struct bpf_sock_ops_kern *, bpf_sock,
> u8 search_kind, search_len, copy_len, magic_len;
> int ret;
>
> + if (!is_locked_tcp_sock_ops(bpf_sock))
> + return -EOPNOTSUPP;
> +
> /* 2 byte is the minimal option len except TCPOPT_NOP and
> * TCPOPT_EOL which are useless for the bpf prog to learn
> * and this helper disallow loading them also.
> --
> 2.43.5
>