Message-ID: <CAH2r5muJ4raSHGUh4Rk1jDuvBsEmkJ4aFd3BYGA9V5hUY9CxmA@mail.gmail.com>
Date: Fri, 1 Aug 2025 14:01:22 -0500
From: Steve French <smfrench@...il.com>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Peter Zijlstra <peterz@...radead.org>, mingo@...nel.org
Subject: [BUG] soft lockup in psi_avgs_work() / collect_percpu_times() on 6.17-rc

Running xfstests against current mainline (as of yesterday), I see the
following soft lockup during xfstest generic/211:

[Fri Aug 1 13:33:48 2025] watchdog: BUG: soft lockup - CPU#6 stuck for 22s! [kworker/6:5:58759]
[Fri Aug 1 13:33:48 2025] Modules linked in: cifs(E) cmac(E)
nls_utf8(E) cifs_arc4(E) nls_ucs2_utils(E) cifs_md4(E)
rpcsec_gss_krb5(E) auth_rpcgss(E) nfsv4(E) dns_resolver(E) nfs(E)
lockd(E) grace(E) netfs(E) nf_conntrack_netbios_ns(E)
nf_conntrack_broadcast(E) nft_fib_inet(E) nft_fib_ipv4(E)
nft_fib_ipv6(E) nft_fib(E) nft_reject_inet(E) nf_reject_ipv4(E)
nf_reject_ipv6(E) nft_reject(E) nft_ct(E) nft_chain_nat(E) nf_nat(E)
nf_conntrack(E) nf_defrag_ipv6(E) nf_defrag_ipv4(E) nf_tables(E)
ip_set(E) sunrpc(E) kvm_intel(E) kvm(E) irqbypass(E) virtio_net(E)
net_failover(E) failover(E) virtio_balloon(E) loop(E) fuse(E)
dm_multipath(E) nfnetlink(E) zram(E) xfs(E) bochs(E) drm_client_lib(E)
drm_shmem_helper(E) drm_kms_helper(E) virtio_blk(E)
ghash_clmulni_intel(E) drm(E) floppy(E) qemu_fw_cfg(E)
virtio_console(E) [last unloaded: cifs(E)]
[Fri Aug 1 13:33:48 2025] irq event stamp: 275924
[Fri Aug 1 13:33:48 2025] hardirqs last enabled at (275923): [<ffffffff950015ca>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[Fri Aug 1 13:33:48 2025] hardirqs last disabled at (275924): [<ffffffff969a7fef>] sysvec_apic_timer_interrupt+0xf/0x90
[Fri Aug 1 13:33:48 2025] softirqs last enabled at (275922): [<ffffffff95325785>] __irq_exit_rcu+0x135/0x160
[Fri Aug 1 13:33:48 2025] softirqs last disabled at (275905): [<ffffffff95325785>] __irq_exit_rcu+0x135/0x160
[Fri Aug 1 13:33:48 2025] CPU: 6 UID: 0 PID: 58759 Comm: kworker/6:5 Tainted: G E 6.16.0 #1 PREEMPT(voluntary)
[Fri Aug 1 13:33:48 2025] Tainted: [E]=UNSIGNED_MODULE
[Fri Aug 1 13:33:48 2025] Hardware name: Red Hat KVM, BIOS 1.16.3-4.el9 04/01/2014
[Fri Aug 1 13:33:48 2025] Workqueue: events psi_avgs_work
[Fri Aug 1 13:33:48 2025] RIP: 0010:collect_percpu_times+0x4c7/0x540
[Fri Aug 1 13:33:48 2025] Code: 00 00 00 c7 40 0c 00 00 00 00 48 c7 40 14 00 00 00 00 48 81 c4 68 01 00 00 5b 5d 41 5c 41 5d 41 5e 41 5f c3 cc cc cc cc f3 90 <4c> 89 ef e8 91 3b 49 00 48 8b 44 24 08 4c 8b 74 24 20 e9 15 fd ff
[Fri Aug 1 13:33:48 2025] RSP: 0018:ff110001370a7ae8 EFLAGS: 00000202
[Fri Aug 1 13:33:48 2025] RAX: 0000000000000000 RBX: ffd1ffffffd1f740 RCX: dffffc0000000000
[Fri Aug 1 13:33:48 2025] RDX: 0000000000000003 RSI: 0000000000000000 RDI: ff110004cb32cec0
[Fri Aug 1 13:33:48 2025] RBP: ffffffff98e09ec0 R08: ffffffff953e5db3 R09: 0000000000000001
[Fri Aug 1 13:33:48 2025] R10: ffffffff97fe80e7 R11: 0000000000000000 R12: 0000000000000006
[Fri Aug 1 13:33:48 2025] R13: ffffffff97479050 R14: 000000000001e72b R15: ff110001370a7bb0
[Fri Aug 1 13:33:48 2025] FS: 0000000000000000(0000) GS:ff11000532523000(0000) knlGS:0000000000000000
[Fri Aug 1 13:33:48 2025] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Fri Aug 1 13:33:48 2025] CR2: 00007fd174c8cc58 CR3: 0000000119668001 CR4: 0000000000373ef0
[Fri Aug 1 13:33:48 2025] Call Trace:
[Fri Aug 1 13:33:48 2025] <TASK>
[Fri Aug 1 13:33:48 2025] ? __pfx_collect_percpu_times+0x10/0x10
[Fri Aug 1 13:33:48 2025] psi_avgs_work+0x89/0x140
[Fri Aug 1 13:33:48 2025] ? __pfx_psi_avgs_work+0x10/0x10
[Fri Aug 1 13:33:48 2025] ? lock_release+0xc4/0x270
[Fri Aug 1 13:33:48 2025] ? rcu_is_watching+0x20/0x50
[Fri Aug 1 13:33:48 2025] process_one_work+0x4bf/0xb40
[Fri Aug 1 13:33:48 2025] ? lock_acquire+0x143/0x2d0
[Fri Aug 1 13:33:48 2025] ? __pfx_process_one_work+0x10/0x10
[Fri Aug 1 13:33:48 2025] ? assign_work+0xd6/0x110
[Fri Aug 1 13:33:48 2025] worker_thread+0x2c9/0x550
[Fri Aug 1 13:33:48 2025] ? __pfx_worker_thread+0x10/0x10
[Fri Aug 1 13:33:48 2025] kthread+0x216/0x3e0
[Fri Aug 1 13:33:48 2025] ? __pfx_kthread+0x10/0x10
[Fri Aug 1 13:33:48 2025] ? __pfx_kthread+0x10/0x10
[Fri Aug 1 13:33:48 2025] ? lock_release+0xc4/0x270
[Fri Aug 1 13:33:48 2025] ? rcu_is_watching+0x20/0x50
[Fri Aug 1 13:33:48 2025] ? __pfx_kthread+0x10/0x10
[Fri Aug 1 13:33:48 2025] ret_from_fork+0x23a/0x2e0
[Fri Aug 1 13:33:48 2025] ? __pfx_kthread+0x10/0x10
[Fri Aug 1 13:33:48 2025] ret_from_fork_asm+0x1a/0x30
[Fri Aug 1 13:33:48 2025] </TASK>
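
For anyone unfamiliar with this path: psi_avgs_work() periodically calls
collect_percpu_times(), which snapshots each CPU's PSI times under a
seqcount read loop (get_recent_times() in kernel/sched/psi.c) and retries
until it gets a consistent copy. The faulting RIP above sits just after a
pause instruction (the "f3 90" in the Code: line), which would be
consistent with spinning in such a retry loop. Below is a minimal
user-space model of that read-retry pattern, purely for illustration; the
names, types and values are invented for this sketch and this is not the
kernel source.

/*
 * Illustrative user-space model (not kernel code) of the seqcount
 * read-retry pattern used by get_recent_times()/collect_percpu_times().
 * Names and fields here are made up for the sketch.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct sample {
	atomic_uint seq;    /* even = stable, odd = writer in progress */
	uint64_t times[3];  /* stand-in for the per-CPU PSI time buckets */
};

static struct sample s;

static void write_sample(uint64_t a, uint64_t b, uint64_t c)
{
	atomic_fetch_add_explicit(&s.seq, 1, memory_order_release); /* -> odd */
	s.times[0] = a;
	s.times[1] = b;
	s.times[2] = c;
	atomic_fetch_add_explicit(&s.seq, 1, memory_order_release); /* -> even */
}

static void read_sample(uint64_t out[3])
{
	unsigned int seq;

	do {
		/* wait out any in-flight writer, then take a snapshot */
		do {
			seq = atomic_load_explicit(&s.seq, memory_order_acquire);
		} while (seq & 1);

		out[0] = s.times[0];
		out[1] = s.times[1];
		out[2] = s.times[2];

		/* retry if a writer slipped in while we were copying */
	} while (atomic_load_explicit(&s.seq, memory_order_acquire) != seq);
}

int main(void)
{
	uint64_t snap[3];

	write_sample(1, 2, 3);
	read_sample(snap);
	printf("%llu %llu %llu\n",
	       (unsigned long long)snap[0],
	       (unsigned long long)snap[1],
	       (unsigned long long)snap[2]);
	return 0;
}

In this model, a writer that never lets the count settle leaves
read_sample() spinning indefinitely; the kernel reader has essentially the
same structure, so that is one way a soft lockup could present at this spot.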

-- 
Thanks,

Steve
