Message-ID: <80309f62-0900-4946-bb2c-d73a2b724739@linux.dev>
Date: Wed, 15 Jan 2025 13:22:10 -0800
From: Martin KaFai Lau <martin.lau@...ux.dev>
To: Jason Xing <kerneljasonxing@...il.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, dsahern@...nel.org, willemdebruijn.kernel@...il.com,
willemb@...gle.com, ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
eddyz87@...il.com, song@...nel.org, yonghong.song@...ux.dev,
john.fastabend@...il.com, kpsingh@...nel.org, sdf@...ichev.me,
haoluo@...gle.com, jolsa@...nel.org, horms@...nel.org, bpf@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [PATCH net-next v5 04/15] net-timestamp: support SK_BPF_CB_FLAGS
only in bpf_sock_ops_setsockopt
On 1/12/25 3:37 AM, Jason Xing wrote:
> We will allow both TCP and UDP sockets to use this helper to
> enable this feature, so let SK_BPF_CB_FLAGS pass the checks:
> 1. skip the is_fullsock check
> 2. skip the sock_owned_by_me check
>
> Signed-off-by: Jason Xing <kerneljasonxing@...il.com>
> ---
> net/core/filter.c | 27 +++++++++++++++++++++------
> 1 file changed, 21 insertions(+), 6 deletions(-)
>
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 1ac996ec5e0f..0e915268db5f 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -5507,12 +5507,9 @@ static int sol_ipv6_sockopt(struct sock *sk, int optname,
> KERNEL_SOCKPTR(optval), *optlen);
> }
>
> -static int __bpf_setsockopt(struct sock *sk, int level, int optname,
> - char *optval, int optlen)
> +static int ___bpf_setsockopt(struct sock *sk, int level, int optname,
> + char *optval, int optlen)
> {
> - if (!sk_fullsock(sk))
> - return -EINVAL;
> -
> if (level == SOL_SOCKET)
> return sol_socket_sockopt(sk, optname, optval, &optlen, false);
> else if (IS_ENABLED(CONFIG_INET) && level == SOL_IP)
> @@ -5525,6 +5522,15 @@ static int __bpf_setsockopt(struct sock *sk, int level, int optname,
> return -EINVAL;
> }
>
> +static int __bpf_setsockopt(struct sock *sk, int level, int optname,
> + char *optval, int optlen)
> +{
> + if (!sk_fullsock(sk))
> + return -EINVAL;
> +
> + return ___bpf_setsockopt(sk, level, optname, optval, optlen);
> +}
> +
> static int _bpf_setsockopt(struct sock *sk, int level, int optname,
> char *optval, int optlen)
> {
> @@ -5675,7 +5681,16 @@ static const struct bpf_func_proto bpf_sock_addr_getsockopt_proto = {
> BPF_CALL_5(bpf_sock_ops_setsockopt, struct bpf_sock_ops_kern *, bpf_sock,
> int, level, int, optname, char *, optval, int, optlen)
> {
> - return _bpf_setsockopt(bpf_sock->sk, level, optname, optval, optlen);
> + struct sock *sk = bpf_sock->sk;
> +
> + if (optname != SK_BPF_CB_FLAGS) {
> + if (sk_fullsock(sk))
> + sock_owned_by_me(sk);
> + else if (optname != SK_BPF_CB_FLAGS)
This is redundant; the outer "if" already does the same check.
Regardless, "optname != SK_BPF_CB_FLAGS" is not the right check. A program
attached to one of the new callbacks (e.g. BPF_SOCK_OPS_TS_SCHED_OPT_CB) can
still call bpf_setsockopt(TCP_*), which will be broken without the socket lock.
It needs to check bpf_sock->op instead. I saw patch 5 has a bpf_sock->op check,
but that check is also incorrect. I will comment on it there.
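For the direction I have in mind, something along these lines (completely
untested sketch; it assumes patch 5 groups the lockless BPF_SOCK_OPS_TS_*
callbacks together after the existing lock-held ops in the enum, so a single
bpf_sock->op range check works):

BPF_CALL_5(bpf_sock_ops_setsockopt, struct bpf_sock_ops_kern *, bpf_sock,
	   int, level, int, optname, char *, optval, int, optlen)
{
	struct sock *sk = bpf_sock->sk;

	/* Hypothetical range check: assumes all lockless timestamping
	 * callbacks come after BPF_SOCK_OPS_TS_SCHED_OPT_CB (or are
	 * otherwise distinguishable by bpf_sock->op).
	 */
	if (bpf_sock->op >= BPF_SOCK_OPS_TS_SCHED_OPT_CB) {
		/* Lockless callback: only allow options that are safe
		 * without the socket lock.
		 */
		if (optname != SK_BPF_CB_FLAGS)
			return -EINVAL;
		return ___bpf_setsockopt(sk, level, optname, optval, optlen);
	}

	/* Existing lock-held ops keep the fullsock + owned-by-me path,
	 * so bpf_setsockopt(TCP_*) from them continues to work.
	 */
	return _bpf_setsockopt(sk, level, optname, optval, optlen);
}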
> + return -EINVAL;
> + }
> +
> + return ___bpf_setsockopt(sk, level, optname, optval, optlen);
> }
>
> static const struct bpf_func_proto bpf_sock_ops_setsockopt_proto = {