Message-ID: <CAL+tcoAmtW=bGWXpNQBtNtzFA62CN4jEZNswxui-wd7wPQqnHQ@mail.gmail.com>
Date: Sat, 25 Jan 2025 09:32:12 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: Martin KaFai Lau <martin.lau@...ux.dev>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, dsahern@...nel.org, willemdebruijn.kernel@...il.com,
willemb@...gle.com, ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
eddyz87@...il.com, song@...nel.org, yonghong.song@...ux.dev,
john.fastabend@...il.com, kpsingh@...nel.org, sdf@...ichev.me,
haoluo@...gle.com, jolsa@...nel.org, horms@...nel.org, bpf@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [RFC PATCH net-next v6 04/13] bpf: stop UDP sock accessing TCP
fields in sock_op BPF CALLs
On Sat, Jan 25, 2025 at 9:15 AM Jason Xing <kerneljasonxing@...il.com> wrote:
>
> On Sat, Jan 25, 2025 at 8:28 AM Martin KaFai Lau <martin.lau@...ux.dev> wrote:
> >
> > On 1/20/25 5:28 PM, Jason Xing wrote:
> > > In the next round, we will support the UDP proto for SO_TIMESTAMPING
> > > bpf extension, so we need to ensure there are no safety problems,
> > > which are usually caused by a UDP socket trying to access TCP fields.
> > >
> > > These approaches can be categorized into two groups:
> > > 1. add TCP protocol check
> > > 2. add sock op check
> >
> > Same as patch 3. The commit message needs adjustment. I would combine patch 3
> > and patch 4 because ...
>
> I wonder if you refer to "squashing" patch 4 into patch 3?
>
> >
> > >
> > > Signed-off-by: Jason Xing <kerneljasonxing@...il.com>
> > > ---
> > > net/core/filter.c | 19 +++++++++++++++++--
> > > 1 file changed, 17 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/net/core/filter.c b/net/core/filter.c
> > > index fdd305b4cfbb..934431886876 100644
> > > --- a/net/core/filter.c
> > > +++ b/net/core/filter.c
> > > @@ -5523,6 +5523,11 @@ static int __bpf_setsockopt(struct sock *sk, int level, int optname,
> > > return -EINVAL;
> > > }
> > >
> > > +static bool is_locked_tcp_sock_ops(struct bpf_sock_ops_kern *bpf_sock)
> > > +{
> > > + return bpf_sock->op <= BPF_SOCK_OPS_WRITE_HDR_OPT_CB;
> >
> > More bike shedding...
> >
> > After sleeping on it again, I think it can just test the
> > bpf_sock->allow_tcp_access instead.
>
> Sorry, I don't think it can work for all the cases because:
> 1) please see BPF_SOCK_OPS_WRITE_HDR_OPT_CB/BPF_SOCK_OPS_HDR_OPT_LEN_CB:
> if req exists, allow_tcp_access is never initialized, so calling a
> function like bpf_sock_ops_setsockopt() will be rejected because
> allow_tcp_access is zero.
> 2) tcp_call_bpf() sets allow_tcp_access only when the socket is a
> fullsock. As far as I know, all the callers pass a fullsock for now,
> but in the future they might not.
>
> If we should use allow_tcp_access to test, then the following patch
> should be folded into patch 3, right?
> diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
> index 0e5b9a654254..9cd7d4446617 100644
> --- a/net/ipv4/tcp_output.c
> +++ b/net/ipv4/tcp_output.c
> @@ -525,6 +525,7 @@ static void bpf_skops_hdr_opt_len(struct sock *sk, struct sk_buff *skb,
>  		sock_ops.sk = sk;
>  	}
>
> +	sock_ops.allow_tcp_access = 1;
>  	sock_ops.args[0] = bpf_skops_write_hdr_opt_arg0(skb, synack_type);
>  	sock_ops.remaining_opt_len = *remaining;
>  	/* tcp_current_mss() does not pass a skb */
>
>
> >
> >
> > > +}
> > > +
> > > static int _bpf_setsockopt(struct sock *sk, int level, int optname,
> > > char *optval, int optlen)
> > > {
> > > @@ -5673,7 +5678,12 @@ static const struct bpf_func_proto bpf_sock_addr_getsockopt_proto = {
> > > BPF_CALL_5(bpf_sock_ops_setsockopt, struct bpf_sock_ops_kern *, bpf_sock,
> > > int, level, int, optname, char *, optval, int, optlen)
> > > {
> > > - return _bpf_setsockopt(bpf_sock->sk, level, optname, optval, optlen);
> > > + struct sock *sk = bpf_sock->sk;
> > > +
> > > + if (is_locked_tcp_sock_ops(bpf_sock) && sk_fullsock(sk))
> >
> > afaict, the new timestamping callbacks still can do setsockopt and it is
> > incorrect. It should be:
> >
> > if (!bpf_sock->allow_tcp_access)
> > return -EOPNOTSUPP;
> >
> > I recall I asked this in v5, but it may be buried in the long thread, so
> > I am asking here again. Please add test(s) to check that the new
> > timestamping callbacks cannot call setsockopt and cannot read/write some
> > of the tcp_sock fields through the bpf_sock_ops.
> >
> > > + sock_owned_by_me(sk);
> >
> > Not needed and instead...
>
> Sorry, I don't get you here. What I was doing was letting the
> non-timestamping callbacks be checked by sock_owned_by_me().
>
> If the callback belongs to timestamping, we skip the check.
>
> >
> > > +
> > > + return __bpf_setsockopt(sk, level, optname, optval, optlen);
> >
> > keep the original _bpf_setsockopt().
>
> Oh, I remember we've already assumed/agreed that the timestamping
> socket must be a fullsock. I will use it.

Oh, no. We cannot use it because it will WARN if the socket lock is not held:
static int _bpf_setsockopt(struct sock *sk, int level, int optname,
			   char *optval, int optlen)
{
	if (sk_fullsock(sk))
		sock_owned_by_me(sk);

	return __bpf_setsockopt(sk, level, optname, optval, optlen);
}
Let me rephrase what I know about the TCP and UDP cases:
1) the sockets are fullsocks.
2) the sockets are under the protection of the socket lock, but in the
future they might not be.

So we need to check that it's a fullsock, but without triggering any
warnings when the socket is not locked.

Am I right about those two?
Thanks,
Jason
>
> >
> > > }
> > >
> > > static const struct bpf_func_proto bpf_sock_ops_setsockopt_proto = {
> > > @@ -5759,6 +5769,7 @@ BPF_CALL_5(bpf_sock_ops_getsockopt, struct bpf_sock_ops_kern *, bpf_sock,
> > > int, level, int, optname, char *, optval, int, optlen)
> > > {
> > > if (IS_ENABLED(CONFIG_INET) && level == SOL_TCP &&
> > > + bpf_sock->sk->sk_protocol == IPPROTO_TCP &&
> > > optname >= TCP_BPF_SYN && optname <= TCP_BPF_SYN_MAC) {
> >
> > No need to allow getsockopt regardless of which SOL_* it is asking for.
> > To keep it simple, I would just disable both getsockopt and setsockopt for all SOL_* for
>
> Really? I'm shocked because the selftests in this series call
> bpf_sock_ops_getsockopt() and bpf_sock_ops_setsockopt() in patch
> [13/13]:
> ...
> if (bpf_setsockopt(ctx, level, opt, &new, sizeof(new)))
> ...
>
> > the new timestamping callbacks. Nothing is lost, the bpf prog can directly read
> > the sk.
> >
> > > int ret, copy_len = 0;
> > > const u8 *start;
> > > @@ -5800,7 +5811,8 @@ BPF_CALL_2(bpf_sock_ops_cb_flags_set, struct bpf_sock_ops_kern *, bpf_sock,
> > > struct sock *sk = bpf_sock->sk;
> > > int val = argval & BPF_SOCK_OPS_ALL_CB_FLAGS;
> > >
> > > - if (!IS_ENABLED(CONFIG_INET) || !sk_fullsock(sk))
> > > + if (!IS_ENABLED(CONFIG_INET) || !sk_fullsock(sk) ||
> > > + sk->sk_protocol != IPPROTO_TCP)
> >
> > Same here. It should disallow this "set" helper for the timestamping callbacks
> > which do not hold the lock.
> >
> > > return -EINVAL;
> > >
> > > tcp_sk(sk)->bpf_sock_ops_cb_flags = val;
> > > @@ -7609,6 +7621,9 @@ BPF_CALL_4(bpf_sock_ops_load_hdr_opt, struct bpf_sock_ops_kern *, bpf_sock,
> > > u8 search_kind, search_len, copy_len, magic_len;
> > > int ret;
> > >
> > > + if (!is_locked_tcp_sock_ops(bpf_sock))
> > > + return -EOPNOTSUPP;
> >
> > This is correct, just change it to "!bpf_sock->allow_tcp_access".
> >
> > All the above changed helpers should use the same test and the same return handling.
> >
> > > +
> > > /* 2 byte is the minimal option len except TCPOPT_NOP and
> > > * TCPOPT_EOL which are useless for the bpf prog to learn
> > > * and this helper disallow loading them also.
> >