Message-ID: <20230906081930.3939106-1-zhengyejian1@huawei.com>
Date: Wed, 6 Sep 2023 16:19:30 +0800
From: Zheng Yejian <zhengyejian1@...wei.com>
To: <rostedt@...dmis.org>, <mhiramat@...nel.org>
CC: <linux-kernel@...r.kernel.org>,
<linux-trace-kernel@...r.kernel.org>, <yeweihua4@...wei.com>,
<zhengyejian1@...wei.com>
Subject: [PATCH] ring-buffer: Avoid softlockup in ring_buffer_resize()
When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for
each cpu in a loop.

If the kernel preemption model is PREEMPT_NONE, and there are many
cpus and many buffer pages to be allocated, the loop may not give up
the cpu for a long time and eventually trigger a softlockup.

To avoid this, call cond_resched() after each per-cpu buffer
allocation.
Signed-off-by: Zheng Yejian <zhengyejian1@...wei.com>
---
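[Note for reviewers, not part of the commit message.]

A minimal sketch of the pattern the patch applies, under the stated
assumptions: with PREEMPT_NONE a task running in kernel space is not
preempted, so a long per-cpu allocation loop can hold the cpu until
the softlockup watchdog fires, and cond_resched() inserts a voluntary
scheduling point once per iteration. alloc_pages_for_cpu() below is a
hypothetical helper standing in for the real allocation path; it is
not the actual ring buffer code.

	for_each_buffer_cpu(buffer, cpu) {
		/* may run long when many pages are requested */
		if (alloc_pages_for_cpu(buffer, cpu, nr_pages)) /* hypothetical */
			goto out_err;

		/* yield between per-cpu allocations to avoid a softlockup */
		cond_resched();
	}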
kernel/trace/ring_buffer.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 78502d4c7214..72ccf75defd0 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2198,6 +2198,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
 			err = -ENOMEM;
 			goto out_err;
 		}
+
+		cond_resched();
 	}
 
 	cpus_read_lock();
--
2.25.1