Message-ID: <003e73f8-2be4-43ab-8028-d7e175a957ed.bugreport@ubisectech.com>
Date: Sat, 03 Feb 2024 14:56:27 +0800
From: "Ubisectech Sirius" <bugreport@...sectech.com>
To: "linux-trace-kernel" <linux-trace-kernel@...r.kernel.org>,
  "linux-kernel" <linux-kernel@...r.kernel.org>
Cc: "tj" <tj@...nel.org>
Subject: INFO: rcu detected stall in idle_cull_fn

Hello.
We are the Ubisectech Sirius Team, the vulnerability lab of China ValiantSec. Recently, our team discovered an issue in Linux kernel 6.8.0-rc1-gecb1b8288dc7. Attached to this email is a PoC file for the issue.

Stack dump:
rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 1-.... } 2642 jiffies s: 1157 root: 0x2/.
rcu: blocking rcu_node structures (internal RCU debug):
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 1046 Comm: kworker/u6:5 Not tainted 6.8.0-rc1-gecb1b8288dc7 #21
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
Workqueue: events_unbound idle_cull_fn
RIP: 0010:lockdep_enabled kernel/locking/lockdep.c:122 [inline]
RIP: 0010:lock_release+0x128/0x680 kernel/locking/lockdep.c:5767
Code: 85 e7 02 00 00 65 4c 8b 34 25 40 c2 03 00 49 8d be bc 0a 00 00 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 0f b6 14 02 <48> 89 f8 83 e0 07 83 c0 03 38 d0 7c 08 84 d2 0f 85 cc 04 00 00 41
RSP: 0018:ffffc900004b8c70 EFLAGS: 00000803
RAX: dffffc0000000000 RBX: ffffffff8ef50fb8 RCX: ffffffff81669395
RDX: 0000000000000000 RSI: 0000000000010004 RDI: ffff8880407d8abc
RBP: 1ffff92000097190 R08: 0000000000000001 R09: fffffbfff1de9b82
R10: ffffffff8ef4dc17 R11: 0000000000000000 R12: ffffffff927aad98
R13: ffff888044352340 R14: ffff8880407d8000 R15: 1ffff920000971ac
FS:  0000000000000000(0000) GS:ffff88807ec00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005555566fed68 CR3: 000000000cb78000 CR4: 0000000000750ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:149 [inline]
 _raw_spin_unlock_irqrestore+0x1a/0x70 kernel/locking/spinlock.c:194
 debug_object_deactivate+0x212/0x390 lib/debugobjects.c:778
 debug_hrtimer_deactivate kernel/time/hrtimer.c:427 [inline]
 debug_deactivate kernel/time/hrtimer.c:483 [inline]
 __run_hrtimer kernel/time/hrtimer.c:1656 [inline]
 __hrtimer_run_queues+0x3fd/0xc10 kernel/time/hrtimer.c:1752
 hrtimer_interrupt+0x320/0x7b0 kernel/time/hrtimer.c:1814
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1065 [inline]
 __sysvec_apic_timer_interrupt+0x105/0x400 arch/x86/kernel/apic/apic.c:1082
 sysvec_apic_timer_interrupt+0x94/0xb0 arch/x86/kernel/apic/apic.c:1076
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:649
RIP: 0010:__raw_spin_unlock_irq include/linux/spinlock_api_smp.h:160 [inline]
RIP: 0010:_raw_spin_unlock_irq+0x29/0x50 kernel/locking/spinlock.c:202
Code: 90 f3 0f 1e fa 55 48 8b 74 24 08 48 89 fd 48 83 c7 18 e8 da 4d 03 f7 48 89 ef e8 c2 bb 03 f7 e8 3d 74 29 f7 fb bf 01 00 00 00 <e8> b2 84 f5 f6 65 8b 05 13 7d a0 75 85 c0 74 02 5d c3 e8 90 3a 9d
RSP: 0018:ffffc9000517fc18 EFLAGS: 00000202
RAX: 0000000000040167 RBX: ffff8880133e00d8 RCX: 1ffffffff239be89
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000001
RBP: ffff8880133e0000 R08: 0000000000000001 R09: fffffbfff239abea
R10: 0000000000000001 R11: 0000000000000001 R12: ffffffffffffe8d3
R13: 0000000100002e5a R14: ffff88801d705330 R15: ffff8880133e0074
 idle_cull_fn+0x1ac/0x3d0 kernel/workqueue.c:2382
 process_one_work+0x878/0x15c0 kernel/workqueue.c:2633
 process_scheduled_works kernel/workqueue.c:2706 [inline]
 worker_thread+0x855/0x1200 kernel/workqueue.c:2787
 kthread+0x2cc/0x3b0 kernel/kthread.c:388
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
 </TASK>
INFO: NMI handler (nmi_cpu_backtrace_handler) took too long to run: 4.119 msecs

Thank you for taking the time to read this email; we look forward to working with you further.

Download attachment "poc.c" of type "application/octet-stream" (42069 bytes)
