Message-ID: <20251228065008.2396573-1-mawupeng1@huawei.com>
Date: Sun, 28 Dec 2025 14:50:07 +0800
From: Wupeng Ma <mawupeng1@...wei.com>
To: <rostedt@...dmis.org>, <mhiramat@...nel.org>,
<mathieu.desnoyers@...icios.com>
CC: <mawupeng1@...wei.com>, <linux-kernel@...r.kernel.org>,
<linux-trace-kernel@...r.kernel.org>
Subject: [PATCH] ring-buffer: Avoid softlockup in ring_buffer_resize() during memory free
When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates new buffer pages for
each CPU in a loop. If an allocation fails, the pages that were already
allocated are freed on the error path, again in a loop over every CPU.

If the kernel preemption model is PREEMPT_NONE, and there are many CPUs
and many buffer pages to be freed, this free loop may not give up the
CPU for a long time and eventually triggers a softlockup.

To avoid this, call cond_resched() after freeing each buffer page, just
as commit f6bd2c92488c ("ring-buffer: Avoid softlockup in
ring_buffer_resize()") does for the allocation path.
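For context, the cleanup being patched is the error path of
ring_buffer_resize(): when allocation fails part way through, every
per-CPU list of freshly allocated pages has to be torn down again. A
simplified sketch of that path, reconstructed from the hunk at the end
of this mail (only the diff itself is authoritative), looks like this:

	for_each_buffer_cpu(buffer, cpu) {
		struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
		struct buffer_page *bpage, *tmp;

		if (list_empty(&cpu_buffer->new_pages))
			continue;

		/* Free every page allocated before the failure. */
		list_for_each_entry_safe(bpage, tmp, &cpu_buffer->new_pages,
					 list) {
			list_del_init(&bpage->list);
			free_buffer_page(bpage);

			/*
			 * New in this patch: yield so that, under
			 * PREEMPT_NONE, a long free loop cannot
			 * monopolize the CPU and trip the soft-lockup
			 * detector.
			 */
			cond_resched();
		}
	}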
Detailed call trace as follows:
rcu: INFO: rcu_sched self-detected stall on CPU
rcu: 24-....: (14837 ticks this GP) idle=521c/1/0x4000000000000000 softirq=230597/230597 fqs=5329
rcu: (t=15004 jiffies g=26003221 q=211022 ncpus=96)
CPU: 24 UID: 0 PID: 11253 Comm: bash Kdump: loaded Tainted: G EL 6.18.2+ #278 NONE
pc : arch_local_irq_restore+0x8/0x20
arch_local_irq_restore+0x8/0x20 (P)
free_frozen_page_commit+0x28c/0x3b0
__free_frozen_pages+0x1c0/0x678
___free_pages+0xc0/0xe0
free_pages+0x3c/0x50
ring_buffer_resize.part.0+0x6a8/0x880
ring_buffer_resize+0x3c/0x58
__tracing_resize_ring_buffer.part.0+0x34/0xd8
tracing_resize_ring_buffer+0x8c/0xd0
tracing_entries_write+0x74/0xd8
vfs_write+0xcc/0x288
ksys_write+0x74/0x118
__arm64_sys_write+0x24/0x38
Signed-off-by: Wupeng Ma <mawupeng1@...wei.com>
---
kernel/trace/ring_buffer.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index afcd3747264d2..3ba08fc1b7d05 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3121,6 +3121,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
 					list) {
 			list_del_init(&bpage->list);
 			free_buffer_page(bpage);
+
+			cond_resched();
 		}
 	}
  out_err_unlock:
--
2.43.0