Message-ID: <CANn89i+KL0=p2mchoZCOsZ1YoF9xhoUoubkub6YyLOY2wpSJtg@mail.gmail.com>
Date: Thu, 31 Oct 2024 15:08:20 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Wang Liang <wangliang74@...wei.com>
Cc: davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com, horms@...nel.org, 
	dsahern@...nel.org, yuehaibing@...wei.com, zhangchangzhong@...wei.com, 
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH net] net: fix data-races around sk->sk_forward_alloc

On Thu, Oct 31, 2024 at 1:06 PM Wang Liang <wangliang74@...wei.com> wrote:
>
> Syzkaller reported this warning:

Was this a public report ?

> [   65.568203][    C0] ------------[ cut here ]------------
> [   65.569339][    C0] WARNING: CPU: 0 PID: 16 at net/ipv4/af_inet.c:156 inet_sock_destruct+0x1c5/0x1e0
> [   65.575017][    C0] Modules linked in:
> [   65.575699][    C0] CPU: 0 UID: 0 PID: 16 Comm: ksoftirqd/0 Not tainted 6.12.0-rc5 #26
> [   65.577086][    C0] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
> [   65.577094][    C0] RIP: 0010:inet_sock_destruct+0x1c5/0x1e0
> [   65.577100][    C0] Code: 24 12 4c 89 e2 5b 48 c7 c7 98 ec bb 82 41 5c e9 d1 18 17 ff 4c 89 e6 5b 48 c7 c7 d0 ec bb 82 41 5c e9 bf 18 17 ff 0f 0b eb 83 <0f> 0b eb 97 0f 0b eb 87 0f 0b e9 68 ff ff ff 66 66 2e 0f 1f 84 00
> [   65.577107][    C0] RSP: 0018:ffffc9000008bd90 EFLAGS: 00010206
> [   65.577113][    C0] RAX: 0000000000000300 RBX: ffff88810b172a90 RCX: 0000000000000007
> [   65.577117][    C0] RDX: 0000000000000002 RSI: 0000000000000300 RDI: ffff88810b172a00
> [   65.577120][    C0] RBP: ffff88810b172a00 R08: ffff888104273c00 R09: 0000000000100007
> [   65.577123][    C0] R10: 0000000000020000 R11: 0000000000000006 R12: ffff88810b172a00
> [   65.577125][    C0] R13: 0000000000000004 R14: 0000000000000000 R15: ffff888237c31f78
> [   65.577131][    C0] FS:  0000000000000000(0000) GS:ffff888237c00000(0000) knlGS:0000000000000000
> [   65.592485][    C0] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [   65.592489][    C0] CR2: 00007ffc63fecac8 CR3: 000000000342e000 CR4: 00000000000006f0
> [   65.592491][    C0] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [   65.592492][    C0] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> [   65.592495][    C0] Call Trace:
> [   65.596277][    C0]  <TASK>
> [   65.598171][    C0]  ? __warn+0x88/0x130
> [   65.598874][    C0]  ? inet_sock_destruct+0x1c5/0x1e0
> [   65.598879][    C0]  ? report_bug+0x18e/0x1a0
> [   65.598883][    C0]  ? handle_bug+0x53/0x90
> [   65.598886][    C0]  ? exc_invalid_op+0x18/0x70
> [   65.598888][    C0]  ? asm_exc_invalid_op+0x1a/0x20
> [   65.598893][    C0]  ? inet_sock_destruct+0x1c5/0x1e0
> [   65.598897][    C0]  __sk_destruct+0x2a/0x200
> [   65.604664][    C0]  rcu_do_batch+0x1aa/0x530
> [   65.605450][    C0]  ? rcu_do_batch+0x13b/0x530
> [   65.605456][    C0]  rcu_core+0x159/0x2f0
> [   65.605466][    C0]  handle_softirqs+0xd3/0x2b0
> [   65.607689][    C0]  ? __pfx_smpboot_thread_fn+0x10/0x10
> [   65.607695][    C0]  run_ksoftirqd+0x25/0x30
> [   65.607699][    C0]  smpboot_thread_fn+0xdd/0x1d0
> [   65.610152][    C0]  kthread+0xd3/0x100
> [   65.610158][    C0]  ? __pfx_kthread+0x10/0x10
> [   65.610160][    C0]  ret_from_fork+0x34/0x50
> [   65.610170][    C0]  ? __pfx_kthread+0x10/0x10
> [   65.610172][    C0]  ret_from_fork_asm+0x1a/0x30
> [   65.610181][    C0]  </TASK>
> [   65.610182][    C0] ---[ end trace 0000000000000000 ]---
>
> It's possible that two threads call tcp_v6_do_rcv()/sk_forward_alloc_add()
> concurrently when sk->sk_state == TCP_LISTEN with sk->sk_lock unlocked,
> which triggers a data-race on sk->sk_forward_alloc:
> tcp_v6_rcv
>     tcp_v6_do_rcv
>         skb_clone_and_charge_r
>             sk_rmem_schedule
>                 __sk_mem_schedule
>                     sk_forward_alloc_add()
>             skb_set_owner_r
>                 sk_mem_charge
>                     sk_forward_alloc_add()
>         __kfree_skb
>             skb_release_all
>                 skb_release_head_state
>                     sock_rfree
>                         sk_mem_uncharge
>                             sk_forward_alloc_add()
>                             sk_mem_reclaim
>                                 // set local var reclaimable
>                                 __sk_mem_reclaim
>                                     sk_forward_alloc_add()
>
> In this syzkaller testcase, two threads call tcp_v6_do_rcv() with
> skb->truesize=768, and sk_forward_alloc changes like this:
>  (cpu 1)             | (cpu 2)             | sk_forward_alloc
>  ...                 | ...                 | 0
>  __sk_mem_schedule() |                     | +4096 = 4096
>                      | __sk_mem_schedule() | +4096 = 8192
>  sk_mem_charge()     |                     | -768  = 7424
>                      | sk_mem_charge()     | -768  = 6656
>  ...                 |    ...              |
>  sk_mem_uncharge()   |                     | +768  = 7424
>  reclaimable=7424    |                     |
>                      | sk_mem_uncharge()   | +768  = 8192
>                      | reclaimable=8192    |
>  __sk_mem_reclaim()  |                     | -4096 = 4096
>                      | __sk_mem_reclaim()  | -8192 = -4096 != 0
>
> Adding a lock around tcp_v6_do_rcv() in tcp_v6_rcv() would have a
> performance impact, so only take the lock when the opt_skb clone occurs.
> In some paths, tcp_v6_do_rcv() is already called with sk->sk_lock held;
> add TCP_SKB_CB(skb)->sk_lock_capability to avoid re-locking.
>
> Fixes: e994b2f0fb92 ("tcp: do not lock listener to process SYN packets")
> Signed-off-by: Wang Liang <wangliang74@...wei.com>
> ---
>  include/net/tcp.h   |  3 ++-
>  net/ipv6/tcp_ipv6.c | 21 ++++++++++++++++-----
>  2 files changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index d1948d357dad..110a23dda1eb 100644
> --- a/include/net/tcp.h
> +++ b/include/net/tcp.h
> @@ -961,7 +961,8 @@ struct tcp_skb_cb {
>         __u8            txstamp_ack:1,  /* Record TX timestamp for ack? */
>                         eor:1,          /* Is skb MSG_EOR marked? */
>                         has_rxtstamp:1, /* SKB has a RX timestamp       */
> -                       unused:5;
> +                       sk_lock_capability:1, /* Avoid re-lock flag */
> +                       unused:4;
>         __u32           ack_seq;        /* Sequence number ACK'd        */
>         union {
>                 struct {

Oh the horror, this is completely wrong and unsafe anyway.

TCP listen path MUST be lockless, and stay lockless.

Ask yourself: why would a listener even hold pktoptions in the first place?

Normally, each request socket can hold an ireq->pktopts (see
tcp_v6_init_req()).

The skb_clone_and_charge_r() happens later, in tcp_v6_syn_recv_sock().

The correct fix is to _not_ call skb_clone_and_charge_r() for a
listener socket; of course, this never made _any_ sense.

The following patch should fix both TCP and DCCP, and as a bonus make
TCP SYN processing faster for listeners requesting these
IPV6_PKTOPTIONS things.

diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
index da5dba120bc9a55c5fd9d6feda791b0ffc887423..d6649246188d72b3df6c74750779b7aa5910dcb7 100644
--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -618,7 +618,7 @@ static int dccp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
           by tcp. Feel free to propose better solution.
                                               --ANK (980728)
         */
-       if (np->rxopt.all)
+       if (np->rxopt.all && sk->sk_state != DCCP_LISTEN)
                opt_skb = skb_clone_and_charge_r(skb, sk);

        if (sk->sk_state == DCCP_OPEN) { /* Fast path */
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index d71ab4e1efe1c6598cf3d3e4334adf0881064ce9..e643dbaec9ccc92eb2d9103baf185c957ad1dd2e 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1605,25 +1605,12 @@ int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
         *      is currently called with bh processing disabled.
         */

-       /* Do Stevens' IPV6_PKTOPTIONS.
-
-          Yes, guys, it is the only place in our code, where we
-          may make it not affecting IPv4.
-          The rest of code is protocol independent,
-          and I do not like idea to uglify IPv4.
-
-          Actually, all the idea behind IPV6_PKTOPTIONS
-          looks not very well thought. For now we latch
-          options, received in the last packet, enqueued
-          by tcp. Feel free to propose better solution.
-                                              --ANK (980728)
-        */
-       if (np->rxopt.all)
-               opt_skb = skb_clone_and_charge_r(skb, sk);

        if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */
                struct dst_entry *dst;

+               if (np->rxopt.all)
+                       opt_skb = skb_clone_and_charge_r(skb, sk);
                dst = rcu_dereference_protected(sk->sk_rx_dst,
                                                lockdep_sock_is_held(sk));

@@ -1656,13 +1643,13 @@ int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
                                if (reason)
                                        goto reset;
                        }
-                       if (opt_skb)
-                               __kfree_skb(opt_skb);
                        return 0;
                }
        } else
                sock_rps_save_rxhash(sk, skb);

+       if (np->rxopt.all)
+               opt_skb = skb_clone_and_charge_r(skb, sk);
        reason = tcp_rcv_state_process(sk, skb);
        if (reason)
                goto reset;
