Message-ID: <tencent_E5D983AC5A7B056F07B32ED79BFBCA1E8005@qq.com>
Date: Thu, 20 Mar 2025 08:26:01 -0400
From: "ffhgfv" <xnxc22xnxc22@...com>
To: "davem" <davem@...emloft.net>, "edumazet" <edumazet@...gle.com>, "kuba" <kuba@...nel.org>, "pabeni" <pabeni@...hat.com>, "horms" <horms@...nel.org>, "kuniyu" <kuniyu@...zon.com>
Cc: "netdev" <netdev@...r.kernel.org>, "linux-kernel" <linux-kernel@...r.kernel.org>
Subject: Linux 6.14-rc5 Bug: INFO: task hung in rtnl_dumpit

Hello, I found a bug titled "INFO: task hung in rtnl_dumpit" with a modified syzkaller on Linux 6.14-rc5.
If you fix this issue, please add the following tags to the commit:
Reported-by: Jianzhou Zhao <xnxc22xnxc22@...com>
Reported-by: xingwei lee <xrivendell7@...il.com>
Reported-by: Zhizhuo Tang <strforexctzzchange@...mail.com>

I use the same kernel as the upstream syzbot instance, commit 7eb172143d5508b4da468ed59ee857c6e5e01da6
kernel config: https://syzkaller.appspot.com/text?tag=KernelConfig&x=da4b04ae798b7ef6
compiler: gcc version 11.4.0
------------[ cut here ]------------
TITLE: INFO: task hung in rtnl_dumpit
==================================================================
INFO: task ifquery:17951 blocked for more than 143 seconds.
      Not tainted 6.14.0-rc5-dirty #17
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:ifquery         state:D stack:25008 pid:17951 tgid:17951 ppid:17950  task_flags:0x400000 flags:0x00000000
Call Trace:
 <task>
 context_switch kernel/sched/core.c:5378 [inline]
 __schedule+0x1074/0x4d30 kernel/sched/core.c:6765
 __schedule_loop kernel/sched/core.c:6842 [inline]
 schedule+0xd4/0x210 kernel/sched/core.c:6857
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6914
 __mutex_lock_common kernel/locking/mutex.c:662 [inline]
 __mutex_lock+0x1042/0x2020 kernel/locking/mutex.c:730
 rtnl_lock net/core/rtnetlink.c:79 [inline]
 rtnl_dumpit+0x198/0x200 net/core/rtnetlink.c:6780
 netlink_dump+0x5c3/0xc80 net/netlink/af_netlink.c:2308
 netlink_recvmsg+0xaeb/0xf00 net/netlink/af_netlink.c:1964
 sock_recvmsg_nosec net/socket.c:1023 [inline]
 sock_recvmsg+0x218/0x290 net/socket.c:1045
 ____sys_recvmsg+0x210/0x6e0 net/socket.c:2793
 ___sys_recvmsg+0xff/0x190 net/socket.c:2835
 __sys_recvmsg+0x14e/0x200 net/socket.c:2868
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcf/0x250 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fcea582eda3
RSP: 002b:00007fffdea52c38 EFLAGS: 00000246 ORIG_RAX: 000000000000002f
RAX: ffffffffffffffda RBX: 00007fffdea53d90 RCX: 00007fcea582eda3
RDX: 0000000000000000 RSI: 00007fffdea53c90 RDI: 0000000000000003
RBP: 00007fffdea53d20 R08: 0000000000000000 R09: 00007fcea5900be0
R10: 000000000000006e R11: 0000000000000246 R12: 00007fffdea53c90
R13: 00007fffdea53c80 R14: 00007fffdea53c74 R15: 0000000000000f1c
 </task>

Showing all locks held in the system:
1 lock held by systemd/1:
 #0: ffff88802736c148 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_read include/linux/mm.h:717 [inline]
 #0: ffff88802736c148 (&vma->vm_lock->lock){++++}-{4:4}, at: lock_vma_under_rcu+0x141/0x990 mm/memory.c:6378
1 lock held by kthreadd/2:
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
2 locks held by kworker/u8:2/15:
5 locks held by kworker/u10:0/29:
1 lock held by khungtaskd/35:
 #0: ffffffff8e1bbae0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e1bbae0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e1bbae0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x75/0x340 kernel/locking/lockdep.c:6746
3 locks held by kworker/u9:2/56:
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3217 [inline]
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x1161/0x1af0 kernel/workqueue.c:3330
 #1: ffffc90000aefd28 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3218 [inline]
 #1: ffffc90000aefd28 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_scheduled_works+0x526/0x1af0 kernel/workqueue.c:3330
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
3 locks held by kworker/u10:3/90:
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3217 [inline]
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x1161/0x1af0 kernel/workqueue.c:3330
 #1: ffffc90000d8fd28 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3218 [inline]
 #1: ffffc90000d8fd28 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_scheduled_works+0x526/0x1af0 kernel/workqueue.c:3330
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
2 locks held by kswapd0/99:
1 lock held by kswapd1/100:
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xbe2/0x1740 mm/vmscan.c:7012
3 locks held by kworker/u10:5/1081:
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3217 [inline]
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x1161/0x1af0 kernel/workqueue.c:3330
 #1: ffffc90005697d28 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3218 [inline]
 #1: ffffc90005697d28 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_scheduled_works+0x526/0x1af0 kernel/workqueue.c:3330
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
3 locks held by kworker/u10:6/4514:
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3217 [inline]
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x1161/0x1af0 kernel/workqueue.c:3330
 #1: ffffc9000bed7d28 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3218 [inline]
 #1: ffffc9000bed7d28 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_scheduled_works+0x526/0x1af0 kernel/workqueue.c:3330
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
2 locks held by systemd-journal/5206:
2 locks held by systemd-udevd/5218:
 #0: ffff88804b4e8580 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_read include/linux/mm.h:717 [inline]
 #0: ffff88804b4e8580 (&vma->vm_lock->lock){++++}-{4:4}, at: lock_vma_under_rcu+0x141/0x990 mm/memory.c:6378
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
3 locks held by kworker/u10:7/7061:
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3217 [inline]
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x1161/0x1af0 kernel/workqueue.c:3330
 #1: ffffc900092afd28 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3218 [inline]
 #1: ffffc900092afd28 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_scheduled_works+0x526/0x1af0 kernel/workqueue.c:3330
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
3 locks held by in:imklog/8742:
 #0: ffff8880460e7cf8 (&f->f_pos_lock){+.+.}-{4:4}, at: fdget_pos+0x283/0x3a0 fs/file.c:1192
 #1: ffff8880589047e0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_trylock include/linux/mmap_lock.h:209 [inline]
 #1: ffff8880589047e0 (&mm->mmap_lock){++++}-{4:4}, at: get_mmap_lock_carefully mm/memory.c:6249 [inline]
 #1: ffff8880589047e0 (&mm->mmap_lock){++++}-{4:4}, at: lock_mm_and_find_vma+0x35/0x700 mm/memory.c:6309
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
2 locks held by rs:main Q:Reg/8743:
 #0: ffff888047be0580 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_read include/linux/mm.h:717 [inline]
 #0: ffff888047be0580 (&vma->vm_lock->lock){++++}-{4:4}, at: lock_vma_under_rcu+0x141/0x990 mm/memory.c:6378
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
2 locks held by sshd/9414:
 #0: ffff88804b64d1e0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_trylock include/linux/mmap_lock.h:209 [inline]
 #0: ffff88804b64d1e0 (&mm->mmap_lock){++++}-{4:4}, at: get_mmap_lock_carefully mm/memory.c:6249 [inline]
 #0: ffff88804b64d1e0 (&mm->mmap_lock){++++}-{4:4}, at: lock_mm_and_find_vma+0x35/0x700 mm/memory.c:6309
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
2 locks held by syz-executor/9420:
 #0: ffff8880209b2070 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_read include/linux/mm.h:717 [inline]
 #0: ffff8880209b2070 (&vma->vm_lock->lock){++++}-{4:4}, at: lock_vma_under_rcu+0x141/0x990 mm/memory.c:6378
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
4 locks held by kworker/u8:1/9433:
 #0: ffff88801beeb948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3217 [inline]
 #0: ffff88801beeb948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x1161/0x1af0 kernel/workqueue.c:3330
 #1: ffffc9000a457d28 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3218 [inline]
 #1: ffffc9000a457d28 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x526/0x1af0 kernel/workqueue.c:3330
 #2: ffffffff8fecfdd0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xca/0xb80 net/core/net_namespace.c:606
 #3: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: wg_destruct+0x29/0x3d0 drivers/net/wireguard/device.c:246
3 locks held by kworker/0:3/9445:
 #0: ffff88801b078d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3217 [inline]
 #0: ffff88801b078d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x1161/0x1af0 kernel/workqueue.c:3330
 #1: ffffc90002097d28 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3218 [inline]
 #1: ffffc90002097d28 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x526/0x1af0 kernel/workqueue.c:3330
 #2: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
2 locks held by systemd-udevd/12130:
 #0: ffff888058a6a580 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_read include/linux/mm.h:717 [inline]
 #0: ffff888058a6a580 (&vma->vm_lock->lock){++++}-{4:4}, at: lock_vma_under_rcu+0x141/0x990 mm/memory.c:6378
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
3 locks held by kworker/u8:4/12206:
 #0: ffff888043850948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3217 [inline]
 #0: ffff888043850948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x1161/0x1af0 kernel/workqueue.c:3330
 #1: ffffc900022d7d28 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3218 [inline]
 #1: ffffc900022d7d28 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x526/0x1af0 kernel/workqueue.c:3330
 #2: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #2: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x100/0x1530 net/ipv6/addrconf.c:4190
3 locks held by kworker/u9:8/12730:
3 locks held by kworker/u10:8/14664:
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3217 [inline]
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x1161/0x1af0 kernel/workqueue.c:3330
 #1: ffffc900059f7d28 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3218 [inline]
 #1: ffffc900059f7d28 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x526/0x1af0 kernel/workqueue.c:3330
 #2: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xf/0x70 net/core/link_watch.c:285
1 lock held by syz-executor/17647:
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x230 drivers/net/tun.c:3517
2 locks held by syz-executor/17656:
 #0: ffffffff9061a420 (&ops->srcu){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:164 [inline]
 #0: ffffffff9061a420 (&ops->srcu){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:256 [inline]
 #0: ffffffff9061a420 (&ops->srcu){.+.+}-{0:0}, at: rtnl_link_ops_get+0x111/0x2c0 net/core/rtnetlink.c:568
 #1: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:335 [inline]
 #1: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x775/0x1ca0 net/core/rtnetlink.c:4021
1 lock held by syz-executor/17710:
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnetlink_rcv_msg+0x376/0xfc0 net/core/rtnetlink.c:6918
1 lock held by syz-executor/17749:
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:335 [inline]
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x775/0x1ca0 net/core/rtnetlink.c:4021
2 locks held by syz-executor/17750:
 #0: ffffffff8fecfdd0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x2c3/0x640 net/core/net_namespace.c:512
 #1: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: wg_netns_pre_exit+0x1b/0x220 drivers/net/wireguard/device.c:415
1 lock held by syz-executor/17768:
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: __tun_chr_ioctl+0x20a/0x4790 drivers/net/tun.c:3121
3 locks held by syz-executor/17774:
 #0: ffffffff9061a120 (&ops->srcu){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:164 [inline]
 #0: ffffffff9061a120 (&ops->srcu){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:256 [inline]
 #0: ffffffff9061a120 (&ops->srcu){.+.+}-{0:0}, at: rtnl_link_ops_get+0x111/0x2c0 net/core/rtnetlink.c:568
 #1: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:335 [inline]
 #1: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x775/0x1ca0 net/core/rtnetlink.c:4021
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #2: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
2 locks held by syz-executor/17799:
 #0: ffffffff8fecfdd0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x2c3/0x640 net/core/net_namespace.c:512
 #1: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: wg_netns_pre_exit+0x1b/0x220 drivers/net/wireguard/device.c:415
2 locks held by ifquery/17951:
 #0: ffff88805b46d6c8 (nlk_cb_mutex-ROUTE){+.+.}-{4:4}, at: netlink_dump+0x6f3/0xc80 net/netlink/af_netlink.c:2254
 #1: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_dumpit+0x198/0x200 net/core/rtnetlink.c:6780
1 lock held by syz-executor/18083:
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x303/0x14d0 net/ipv4/devinet.c:987
1 lock held by syz-executor/18094:
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x303/0x14d0 net/ipv4/devinet.c:987
1 lock held by syz-executor/18101:
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x303/0x14d0 net/ipv4/devinet.c:987
1 lock held by syz-executor/18108:
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x303/0x14d0 net/ipv4/devinet.c:987
1 lock held by syz-executor/18116:
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
1 lock held by syz-executor/18117:
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #0: ffffffff8fee5d68 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x303/0x14d0 net/ipv4/devinet.c:987
3 locks held by syz-executor/18123:
 #0: ffff88804b64c7e0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_write_lock_killable include/linux/mmap_lock.h:152 [inline]
 #0: ffff88804b64c7e0 (&mm->mmap_lock){++++}-{4:4}, at: setup_arg_pages+0x2a8/0xcf0 fs/exec.c:762
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
 #2: ffff88807ee3ecd8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2f/0x130 kernel/sched/core.c:598
1 lock held by syz-executor/18135:
1 lock held by syz-executor/18136:
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
1 lock held by syz-executor/18138:
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #0: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
2 locks held by syz-executor/18139:
 #0: ffff88804ff949b8 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_read include/linux/mm.h:717 [inline]
 #0: ffff88804ff949b8 (&vma->vm_lock->lock){++++}-{4:4}, at: lock_vma_under_rcu+0x141/0x990 mm/memory.c:6378
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382
2 locks held by systemd-sysctl/18144:
 #0: ffff88805b82b4a8 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_read include/linux/mm.h:717 [inline]
 #0: ffff88805b82b4a8 (&vma->vm_lock->lock){++++}-{4:4}, at: lock_vma_under_rcu+0x141/0x990 mm/memory.c:6378
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #1: ffffffff8e351e20 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x70c/0x2290 mm/page_alloc.c:4382

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 35 Comm: khungtaskd Not tainted 6.14.0-rc5-dirty #17
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
Call Trace:
 <task>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1b0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x2a0/0x350 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29b/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0xf4c/0x1210 kernel/hung_task.c:399
 kthread+0x427/0x880 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </task>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 502 Comm: kworker/u10:4 Not tainted 6.14.0-rc5-dirty #17
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:lookup_object lib/debugobjects.c:423 [inline]
RIP: 0010:debug_object_deactivate+0x180/0x390 lib/debugobjects.c:875
Code: 80 56 ab 9a 45 31 f6 48 85 db 74 44 49 bf 00 00 00 00 00 fc ff df 48 8d 7b 18 41 83 c6 01 48 89 fa 48 c1 ea 03 42 80 3c 3a 00 <0f> 85 6c 01 00 00 4c 3b 6b 18 74 4b 48 89 d8 48 c1 e8 03 42 80 3c
RSP: 0000:ffffc900001f8d30 EFLAGS: 00000046
RAX: 1ffff1100d6b8f7a RBX: ffff88806e117a48 RCX: ffffffff81970fe3
RDX: 1ffff1100dc22f4c RSI: 0000000000000004 RDI: ffff88806e117a60
RBP: ffffc900001f8e10 R08: ffffffff9aaf6e78 R09: 0000000000000006
R10: fffff5200003f194 R11: 0000000000000003 R12: 1ffff9200003f1a8
R13: ffff8880665ef810 R14: 0000000000000002 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff88807ee00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f11e8cebc28 CR3: 00000000271f0000 CR4: 0000000000752ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 <nmi>
 </nmi>
 <irq>
 debug_timer_deactivate kernel/time/timer.c:840 [inline]
 debug_deactivate kernel/time/timer.c:884 [inline]
 detach_timer kernel/time/timer.c:931 [inline]
 expire_timers kernel/time/timer.c:1823 [inline]
 __run_timers+0x55f/0xb20 kernel/time/timer.c:2414
 __run_timer_base kernel/time/timer.c:2426 [inline]
 __run_timer_base kernel/time/timer.c:2418 [inline]
 run_timer_base+0xc5/0x120 kernel/time/timer.c:2435
 run_timer_softirq+0x1a/0x40 kernel/time/timer.c:2445
 handle_softirqs+0x1c1/0x8a0 kernel/softirq.c:561
 do_softirq.part.0+0x8f/0xd0 kernel/softirq.c:462
 </irq>
 <task>
 do_softirq kernel/softirq.c:454 [inline]
 __local_bh_enable_ip+0x10e/0x130 kernel/softirq.c:389
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:820 [inline]
 nsim_dev_trap_report_work+0x24e/0xcf0 drivers/net/netdevsim/dev.c:851
 process_one_work kernel/workqueue.c:3246 [inline]
 process_scheduled_works+0x61a/0x1af0 kernel/workqueue.c:3330
 worker_thread+0x59f/0xcf0 kernel/workqueue.c:3411
 kthread+0x427/0x880 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </task>

==================================================================
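
For reference, the blocked recvmsg() path in the first trace can be driven from plain userspace with an RTM_GETLINK dump. Below is a minimal sketch of my own (an illustration of that request path, not the actual syzkaller reproducer): each recv() advances the dump through netlink_dump() -> rtnl_dumpit(), which must take rtnl_mutex and so sits in D state whenever another task holds that mutex, as in the report above.

/* Minimal illustration, not the reproducer: issue an RTM_GETLINK dump
 * over NETLINK_ROUTE and read it back. The recv() calls enter the same
 * path as the hung ifquery task: netlink_recvmsg -> netlink_dump ->
 * rtnl_dumpit, blocking until rtnl_mutex is released. */
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct {
		struct nlmsghdr nlh;
		struct ifinfomsg ifm;
	} req;
	char buf[8192];
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

	if (fd < 0)
		return 1;
	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(req.ifm));
	req.nlh.nlmsg_type = RTM_GETLINK;
	req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
	req.ifm.ifi_family = AF_UNSPEC;
	send(fd, &req, req.nlh.nlmsg_len, 0);

	for (;;) {
		/* Each read fetches the next chunk of the dump; this is
		 * where a task hangs if rtnl_mutex is held elsewhere. */
		ssize_t n = recv(fd, buf, sizeof(buf), 0);

		if (n <= 0 || ((struct nlmsghdr *)buf)->nlmsg_type == NLMSG_DONE)
			break;
	}
	close(fd);
	return 0;
}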


I hope this helps.
Best regards,
Jianzhou Zhao
