Message-ID: <c21660c3-a8ac-4a8b-a312-f52ac781a353@huawei.com>
Date: Fri, 8 Nov 2024 09:34:01 +0800
From: Wang Liang <wangliang74@...wei.com>
To: Simon Horman <horms@...nel.org>, Eric Dumazet <edumazet@...gle.com>
CC: <davem@...emloft.net>, <kuba@...nel.org>, <pabeni@...hat.com>,
	<dsahern@...nel.org>, <kuniyu@...zon.com>, <luoxuanqiang@...inos.cn>,
	<kernelxing@...cent.com>, <kirjanov@...il.com>, <yuehaibing@...wei.com>,
	<zhangchangzhong@...wei.com>, <netdev@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH net v2] net: fix data-races around
 sk->sk_forward_alloc


On 2024/11/6 23:14, Simon Horman wrote:
> On Tue, Nov 05, 2024 at 10:52:34AM +0100, Eric Dumazet wrote:
>> On Tue, Nov 5, 2024 at 8:46 AM Wang Liang <wangliang74@...wei.com> wrote:
>>> Syzkaller reported this warning:
>>>   ------------[ cut here ]------------
>>>   WARNING: CPU: 0 PID: 16 at net/ipv4/af_inet.c:156 inet_sock_destruct+0x1c5/0x1e0
>>>   Modules linked in:
>>>   CPU: 0 UID: 0 PID: 16 Comm: ksoftirqd/0 Not tainted 6.12.0-rc5 #26
>>>   Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
>>>   RIP: 0010:inet_sock_destruct+0x1c5/0x1e0
>>>   Code: 24 12 4c 89 e2 5b 48 c7 c7 98 ec bb 82 41 5c e9 d1 18 17 ff 4c 89 e6 5b 48 c7 c7 d0 ec bb 82 41 5c e9 bf 18 17 ff 0f 0b eb 83 <0f> 0b eb 97 0f 0b eb 87 0f 0b e9 68 ff ff ff 66 66 2e 0f 1f 84 00
>>>   RSP: 0018:ffffc9000008bd90 EFLAGS: 00010206
>>>   RAX: 0000000000000300 RBX: ffff88810b172a90 RCX: 0000000000000007
>>>   RDX: 0000000000000002 RSI: 0000000000000300 RDI: ffff88810b172a00
>>>   RBP: ffff88810b172a00 R08: ffff888104273c00 R09: 0000000000100007
>>>   R10: 0000000000020000 R11: 0000000000000006 R12: ffff88810b172a00
>>>   R13: 0000000000000004 R14: 0000000000000000 R15: ffff888237c31f78
>>>   FS:  0000000000000000(0000) GS:ffff888237c00000(0000) knlGS:0000000000000000
>>>   CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>>   CR2: 00007ffc63fecac8 CR3: 000000000342e000 CR4: 00000000000006f0
>>>   DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>>>   DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>>>   Call Trace:
>>>    <TASK>
>>>    ? __warn+0x88/0x130
>>>    ? inet_sock_destruct+0x1c5/0x1e0
>>>    ? report_bug+0x18e/0x1a0
>>>    ? handle_bug+0x53/0x90
>>>    ? exc_invalid_op+0x18/0x70
>>>    ? asm_exc_invalid_op+0x1a/0x20
>>>    ? inet_sock_destruct+0x1c5/0x1e0
>>>    __sk_destruct+0x2a/0x200
>>>    rcu_do_batch+0x1aa/0x530
>>>    ? rcu_do_batch+0x13b/0x530
>>>    rcu_core+0x159/0x2f0
>>>    handle_softirqs+0xd3/0x2b0
>>>    ? __pfx_smpboot_thread_fn+0x10/0x10
>>>    run_ksoftirqd+0x25/0x30
>>>    smpboot_thread_fn+0xdd/0x1d0
>>>    kthread+0xd3/0x100
>>>    ? __pfx_kthread+0x10/0x10
>>>    ret_from_fork+0x34/0x50
>>>    ? __pfx_kthread+0x10/0x10
>>>    ret_from_fork_asm+0x1a/0x30
>>>    </TASK>
>>>   ---[ end trace 0000000000000000 ]---
>>>
>>> It's possible that two threads call tcp_v6_do_rcv()/sk_forward_alloc_add()
>>> concurrently when sk->sk_state == TCP_LISTEN with sk->sk_lock unlocked,
>>> which triggers a data-race around sk->sk_forward_alloc:
>>> tcp_v6_rcv
>>>      tcp_v6_do_rcv
>>>          skb_clone_and_charge_r
>>>              sk_rmem_schedule
>>>                  __sk_mem_schedule
>>>                      sk_forward_alloc_add()
>>>              skb_set_owner_r
>>>                  sk_mem_charge
>>>                      sk_forward_alloc_add()
>>>          __kfree_skb
>>>              skb_release_all
>>>                  skb_release_head_state
>>>                      sock_rfree
>>>                          sk_mem_uncharge
>>>                              sk_forward_alloc_add()
>>>                              sk_mem_reclaim
>>>                                  // set local var reclaimable
>>>                                  __sk_mem_reclaim
>>>                                      sk_forward_alloc_add()
>>>
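
Every leaf of the call chain above ends in sk_forward_alloc_add(), which
is a plain, non-atomic read-modify-write of sk->sk_forward_alloc and so
relies on the caller owning the socket lock. As a rough user-space
analogy (not kernel code; the file and symbol names below are made up),
two threads doing balanced but unlocked updates of this kind can easily
leave the counter non-zero:

/*
 * race_demo.c - user-space sketch of the lost-update pattern.
 * Two threads add and remove the same amount from a shared counter with
 * no locking; because each update is a separate load/modify/store,
 * updates get lost and the final value is often not 0.  This is a
 * deliberate data race, shown only to illustrate the failure mode.
 *
 * Build: gcc -O2 -pthread race_demo.c -o race_demo
 */
#include <pthread.h>
#include <stdio.h>

/* volatile keeps the loads/stores in the loop; it does NOT make them atomic */
static volatile int counter;

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++) {
		counter += 768;		/* plain read-modify-write */
		counter -= 768;		/* plain read-modify-write */
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, worker, NULL);
	pthread_create(&t2, NULL, worker, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	/* Balanced adds and subtracts, yet the result is often not 0. */
	printf("final counter = %d (expected 0)\n", counter);
	return 0;
}
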
>>> In this syzkaller testcase, two threads call tcp_v6_do_rcv() with
>>> skb->truesize=768, and sk_forward_alloc changes like this:
>>>   (cpu 1)             | (cpu 2)             | sk_forward_alloc
>>>   ...                 | ...                 | 0
>>>   __sk_mem_schedule() |                     | +4096 = 4096
>>>                       | __sk_mem_schedule() | +4096 = 8192
>>>   sk_mem_charge()     |                     | -768  = 7424
>>>                       | sk_mem_charge()     | -768  = 6656
>>>   ...                 |    ...              |
>>>   sk_mem_uncharge()   |                     | +768  = 7424
>>>   reclaimable=7424    |                     |
>>>                       | sk_mem_uncharge()   | +768  = 8192
>>>                       | reclaimable=8192    |
>>>   __sk_mem_reclaim()  |                     | -4096 = 4096
>>>                       | __sk_mem_reclaim()  | -8192 = -4096 != 0
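
The leftover -4096 is what trips the warning at the top of this report:
inet_sock_destruct() sanity-checks the memory accounting before the
socket is freed. Roughly (a paraphrase of the check, not the exact
net/ipv4/af_inet.c source):

	/* Paraphrased destruct-time check, illustrative only: all
	 * forward-allocated memory must have been returned by the time
	 * the socket is destroyed, so a non-zero sk_forward_alloc means
	 * the charge/uncharge accounting was corrupted.  This corresponds
	 * to the WARN_ON reported at af_inet.c:156 in the splat above.
	 */
	WARN_ON(sk_forward_alloc_get(sk));
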
>>>
>>> skb_clone_and_charge_r() should not be called in tcp_v6_do_rcv() when
>>> sk->sk_state is TCP_LISTEN; it happens later in tcp_v6_syn_recv_sock().
>>> Fix the same issue in dccp_v6_do_rcv().
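
For reference, the shape of the fix implied by the paragraph above is to
skip the clone-and-charge on the unlocked listener path, something along
these lines (illustrative only, not the actual diff; see the posted
non-RFC patch for the real change):

	/* Illustrative only.  A listening socket is processed without
	 * the socket lock, and its pktoptions are handled later for the
	 * child socket in tcp_v6_syn_recv_sock(), so do not charge the
	 * clone against the listener here.
	 */
	if (np->rxopt.all && sk->sk_state != TCP_LISTEN)
		opt_skb = skb_clone_and_charge_r(skb, sk);
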
>>>
>>> Suggested-by: Eric Dumazet <edumazet@...gle.com>
>>> Fixes: e994b2f0fb92 ("tcp: do not lock listener to process SYN packets")
>>> Signed-off-by: Wang Liang <wangliang74@...wei.com>
>> Reviewed-by: Eric Dumazet <edumazet@...gle.com>
> Hi Wang Liang,
>
> Please post a non-RFC variant of this patch so it can be considered for
> inclusion in net. And please include Eric's Reviewed-by tag.
>
> Thanks!


Thanks very much for your suggestion!

I have sent the patch ("[PATCH net] net: fix data-races around
sk->sk_forward_alloc") with Eric's Reviewed-by tag and dropped the RFC prefix.

Please check it.

