Message-ID: <e529a40f-4c77-834e-3ac8-b58763b58993@linux.dev>
Date:   Wed, 28 Sep 2022 22:31:04 -0700
From:   Martin KaFai Lau <martin.lau@...ux.dev>
To:     Eric Dumazet <edumazet@...gle.com>
Cc:     bpf <bpf@...r.kernel.org>, netdev <netdev@...r.kernel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Andrii Nakryiko <andrii@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        David Miller <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        kernel-team <kernel-team@...com>,
        Paolo Abeni <pabeni@...hat.com>,
        Martin KaFai Lau <martin.lau@...nel.org>
Subject: Re: [PATCH v2 bpf-next 4/5] bpf: tcp: Stop
 bpf_setsockopt(TCP_CONGESTION) in init ops to recur itself

On 9/28/22 7:04 PM, Eric Dumazet wrote:
> On Fri, Sep 23, 2022 at 3:48 PM Martin KaFai Lau <kafai@...com> wrote:
>>
>> From: Martin KaFai Lau <martin.lau@...nel.org>
>>
>> When a bad bpf prog's '.init' calls
>> bpf_setsockopt(TCP_CONGESTION, "itself"), it triggers this loop:
>>
>> .init => bpf_setsockopt(tcp_cc) => .init => bpf_setsockopt(tcp_cc) ...
>> ... => .init => bpf_setsockopt(tcp_cc).
>>
>> This was previously prevented by the prog->active counter, but the
>> prog->active detection cannot be used in struct_ops, as explained in
>> an earlier patch of this set.
>>
>> In this patch, the second bpf_setsockopt(tcp_cc) is disallowed
>> in order to break the loop.  This is done by using one bit of
>> an existing 1-byte hole in tcp_sock to track whether a
>> bpf_setsockopt(TCP_CONGESTION) is in progress on this tcp_sock.
>>
>> Note that this essentially means only the first '.init' can call
>> bpf_setsockopt(TCP_CONGESTION) to pick a fallback cc (eg. the peer
>> does not support ECN), and the second '.init' cannot fall back to
>> another cc.  This applies even when the second
>> bpf_setsockopt(TCP_CONGESTION) would not cause a loop.
>>
>> Signed-off-by: Martin KaFai Lau <martin.lau@...nel.org>
>> ---
>>   include/linux/tcp.h |  6 ++++++
>>   net/core/filter.c   | 28 +++++++++++++++++++++++++++-
>>   2 files changed, 33 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/linux/tcp.h b/include/linux/tcp.h
>> index a9fbe22732c3..3bdf687e2fb3 100644
>> --- a/include/linux/tcp.h
>> +++ b/include/linux/tcp.h
>> @@ -388,6 +388,12 @@ struct tcp_sock {
>>          u8      bpf_sock_ops_cb_flags;  /* Control calling BPF programs
>>                                           * values defined in uapi/linux/tcp.h
>>                                           */
>> +       u8      bpf_chg_cc_inprogress:1; /* In the middle of
>> +                                         * bpf_setsockopt(TCP_CONGESTION);
>> +                                         * prevents bpf_tcp_cc->init()
>> +                                         * from recursing by calling
>> +                                         * bpf_setsockopt(TCP_CONGESTION, "itself").
>> +                                         */
>>   #define BPF_SOCK_OPS_TEST_FLAG(TP, ARG) (TP->bpf_sock_ops_cb_flags & ARG)
>>   #else
>>   #define BPF_SOCK_OPS_TEST_FLAG(TP, ARG) 0
>> diff --git a/net/core/filter.c b/net/core/filter.c
>> index 96f2f7a65e65..ac4c45c02da5 100644
>> --- a/net/core/filter.c
>> +++ b/net/core/filter.c
>> @@ -5105,6 +5105,9 @@ static int bpf_sol_tcp_setsockopt(struct sock *sk, int optname,
>>   static int sol_tcp_sockopt_congestion(struct sock *sk, char *optval,
>>                                        int *optlen, bool getopt)
>>   {
>> +       struct tcp_sock *tp;
>> +       int ret;
>> +
>>          if (*optlen < 2)
>>                  return -EINVAL;
>>
>> @@ -5125,8 +5128,31 @@ static int sol_tcp_sockopt_congestion(struct sock *sk, char *optval,
>>          if (*optlen >= sizeof("cdg") - 1 && !strncmp("cdg", optval, *optlen))
>>                  return -ENOTSUPP;
>>
>> -       return do_tcp_setsockopt(sk, SOL_TCP, TCP_CONGESTION,
>> +       /* Prevent this recursion:
>> +        *
>> +        * .init => bpf_setsockopt(tcp_cc) => .init =>
>> +        * bpf_setsockopt(tcp_cc) => .init => ....
>> +        *
>> +        * The second bpf_setsockopt(tcp_cc) is disallowed
>> +        * in order to break the loop when both .init
>> +        * are the same bpf prog.
>> +        *
>> +        * This applies even when the second bpf_setsockopt(tcp_cc)
>> +        * would not cause a loop.  It means only the first
>> +        * '.init' can call bpf_setsockopt(TCP_CONGESTION) to
>> +        * pick a fallback cc (eg. the peer does not support
>> +        * ECN), and the second '.init' cannot fall back to
>> +        * another cc.
>> +        */
>> +       tp = tcp_sk(sk);
>> +       if (tp->bpf_chg_cc_inprogress)
>> +               return -EBUSY;
>> +
> 
> Is the socket locked (and owned by current thread) at this point ?
> If not, changing bpf_chg_cc_inprogress would be racy.

Yes, the socket is locked and owned.  There is a sock_owned_by_me check earlier 
in _bpf_setsockopt().
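
For reference, a rough sketch of that check (paraphrased from
net/core/filter.c; the exact shape in the tree may differ):

/* All bpf_setsockopt() paths funnel through here, so the
 * bpf_chg_cc_inprogress flip in sol_tcp_sockopt_congestion()
 * runs with the socket lock owned by the current thread.
 */
static int _bpf_setsockopt(struct sock *sk, int level, int optname,
			   char *optval, int optlen)
{
	if (sk_fullsock(sk))
		sock_owned_by_me(sk); /* warns if the lock is not owned */

	return __bpf_setsockopt(sk, level, optname, optval, optlen);
}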

> 
> 
>> +       tp->bpf_chg_cc_inprogress = 1;
>> +       ret = do_tcp_setsockopt(sk, SOL_TCP, TCP_CONGESTION,
>>                                  KERNEL_SOCKPTR(optval), *optlen);
>> +       tp->bpf_chg_cc_inprogress = 0;
>> +       return ret;
>>   }
>>
>>   static int sol_tcp_sockopt(struct sock *sk, int optname,
>> --
>> 2.30.2
>>
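
To make the failure mode concrete, below is a hypothetical bpf_tcp_cc
sketch (the names and the stub ops are mine, not from this series)
whose '.init' picks itself.  Without the bpf_chg_cc_inprogress bit,
setting this cc on a socket would recurse as described above; with
this patch, the nested bpf_setsockopt() returns -EBUSY and only the
outer one proceeds.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* uapi values; vmlinux.h does not carry these macros */
#define SOL_TCP		6
#define TCP_CONGESTION	13

char _license[] SEC("license") = "GPL";

SEC("struct_ops/bad_cc_init")
void BPF_PROG(bad_cc_init, struct sock *sk)
{
	char cc[] = "bad_cc";	/* this cc's own name: recursion trigger */

	bpf_setsockopt(sk, SOL_TCP, TCP_CONGESTION, cc, sizeof(cc));
}

/* Minimal stubs so tcp_register_congestion_control() accepts the ops. */
SEC("struct_ops/bad_cc_ssthresh")
__u32 BPF_PROG(bad_cc_ssthresh, struct sock *sk)
{
	return ((struct tcp_sock *)sk)->snd_ssthresh;
}

SEC("struct_ops/bad_cc_undo_cwnd")
__u32 BPF_PROG(bad_cc_undo_cwnd, struct sock *sk)
{
	return ((struct tcp_sock *)sk)->snd_cwnd;
}

SEC("struct_ops/bad_cc_cong_avoid")
void BPF_PROG(bad_cc_cong_avoid, struct sock *sk, __u32 ack, __u32 acked)
{
}

SEC(".struct_ops")
struct tcp_congestion_ops bad_cc = {
	.init		= (void *)bad_cc_init,
	.ssthresh	= (void *)bad_cc_ssthresh,
	.undo_cwnd	= (void *)bad_cc_undo_cwnd,
	.cong_avoid	= (void *)bad_cc_cong_avoid,
	.name		= "bad_cc",
};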
