Message-ID: <43a29995-9941-4890-b85f-d378e0689fc9@paulmck-laptop>
Date: Mon, 12 Jan 2026 13:17:55 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: 王志 <23009200614@....xidian.edu.cn>
Cc: linux-rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
syzkaller@...glegroups.com
Subject: Re: [BUG] rcu_preempt stall detected by syzkaller on Linux v6.18

On Sun, Jan 04, 2026 at 10:56:07AM +0800, 王志 wrote:
> Hello RCU maintainers,
>
> This is a kernel bug found by syzkaller while fuzzing the upstream Linux kernel v6.18.
>
> The kernel reports an RCU preempt stall, where the RCU grace-period kthread appears to be starved for a long time. After the stall, the system becomes largely unresponsive and may eventually hit OOM.
>
> Kernel version:
> Linux v6.18
> Source: https://github.com/torvalds/linux/tree/v6.18
> The issue was triggered by syz-executor. The relevant kernel log is shown below.
> ============================================================
>
> rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> rcu: (detected by 3, t=105004 jiffies, g=328445, q=426 ncpus=4)
> rcu: rcu_preempt kthread starved for 105017 jiffies!
> rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.

The solution is likely to adjust your testing so that the rcu_preempt
kthread gets CPU time.  Alternatively, set the rcutree.kthread_prio
kernel boot parameter to some real-time priority that ensures that it
gets the CPU time that it needs.
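
For reference, a minimal sketch of that second option, assuming a
bootloader that lets you append to the kernel command line (the
priority value 2 below is illustrative, not a recommendation):

        # Append to the kernel boot command line:
        rcutree.kthread_prio=2

This runs the RCU kthreads, including the rcu_preempt grace-period
kthread, at SCHED_FIFO priority 2, so ordinary SCHED_OTHER fuzzer
tasks should no longer be able to starve them.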

However, there are quite a few other kthreads that don't like being
starved of CPU, so more adjustment will likely be needed.

Another approach is to use affinity or cgroups to make sure that enough
CPU is reserved for such kthreads.
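
For the cgroup route, a minimal sketch using cgroup v2 cpusets on a
4-CPU machine like the one in the report (the cgroup name and the
$FUZZ_PID variable are illustrative):

        # Enable the cpuset controller for child cgroups.
        echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control

        # Confine the fuzzing workload to CPUs 1-3, leaving CPU 0
        # mostly free for kernel threads such as rcu_preempt.
        mkdir /sys/fs/cgroup/fuzz
        echo 1-3 > /sys/fs/cgroup/fuzz/cpuset.cpus
        echo $FUZZ_PID > /sys/fs/cgroup/fuzz/cgroup.procs

The affinity-only equivalent is to start the fuzzer under something
like "taskset -c 1-3", which reserves CPU 0 without any cgroup setup.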

							Thanx, Paul

> RCU grace-period kthread stack dump:
> Call Trace:
> rcu_gp_fqs_loop+0x195/0x780 kernel/rcu/tree.c:2083
> rcu_gp_kthread+0x1da/0x270 kernel/rcu/tree.c:2285
> kthread+0x27c/0x430 kernel/kthread.c:463
> ret_from_fork+0x2a5/0x370 arch/x86/kernel/process.c:158
>
> ============================================================
>
> Unfortunately, I do not yet have a minimal reproducer, but I can provide additional logs or testing if needed.
>
> Please let me know if more information is required.
>
> Best regards,
> Zhi Wang