Message-Id: <20230924131857.1276330-8-sashal@kernel.org>
Date: Sun, 24 Sep 2023 09:18:45 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Zheng Yejian <zhengyejian1@...wei.com>, mhiramat@...nel.org,
Steven Rostedt <rostedt@...dmis.org>,
Sasha Levin <sashal@...nel.org>,
linux-trace-kernel@...r.kernel.org
Subject: [PATCH AUTOSEL 5.15 08/18] ring-buffer: Avoid softlockup in ring_buffer_resize()
From: Zheng Yejian <zhengyejian1@...wei.com>
[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]
When a user resizes all trace ring buffers through the file
'buffer_size_kb', the kernel allocates buffer pages for each CPU in a
loop in ring_buffer_resize().

If the kernel preemption model is PREEMPT_NONE and there are many CPUs
and many buffer pages to allocate, the loop may not give up the CPU for
a long time and eventually trigger a softlockup.

To avoid this, call cond_resched() after each per-CPU buffer allocation.
Link: https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyejian1@huawei.com
Cc: <mhiramat@...nel.org>
Signed-off-by: Zheng Yejian <zhengyejian1@...wei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@...dmis.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
kernel/trace/ring_buffer.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index db7cefd196cec..b15d72284c7f7 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2176,6 +2176,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
 			err = -ENOMEM;
 			goto out_err;
 		}
+
+		cond_resched();
 	}
 
 	cpus_read_lock();
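
For illustration, below is a minimal user-space C sketch of the pattern
the fix applies: a long per-CPU allocation loop that voluntarily yields
the CPU between iterations. This is not the kernel code above;
sched_yield() stands in for the kernel's cond_resched(), and
NR_CPUS_SIM, PAGES_PER_CPU, PAGE_SIZE_SIM and alloc_cpu_buffer() are
made-up names and sizes for the sketch.

/*
 * User-space sketch of the cond_resched() pattern: yield the CPU
 * after each per-CPU buffer allocation so a long, otherwise
 * non-preemptible loop cannot hog one CPU for too long.
 */
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS_SIM   64        /* hypothetical CPU count */
#define PAGES_PER_CPU (1 << 10) /* hypothetical pages per CPU buffer */
#define PAGE_SIZE_SIM 4096      /* hypothetical page size */

static void *alloc_cpu_buffer(int cpu)
{
	/* One large allocation standing in for the per-CPU page loop. */
	void *buf = calloc(PAGES_PER_CPU, PAGE_SIZE_SIM);

	if (!buf)
		fprintf(stderr, "allocation failed for cpu %d\n", cpu);
	return buf;
}

int main(void)
{
	void *bufs[NR_CPUS_SIM] = { NULL };
	int cpu;

	for (cpu = 0; cpu < NR_CPUS_SIM; cpu++) {
		bufs[cpu] = alloc_cpu_buffer(cpu);
		if (!bufs[cpu])
			break;
		/*
		 * Analogue of the added cond_resched(): yield after each
		 * per-CPU allocation so other tasks get to run and the
		 * loop's hold on the CPU stays bounded.
		 */
		sched_yield();
	}

	for (cpu = 0; cpu < NR_CPUS_SIM; cpu++)
		free(bufs[cpu]);
	return 0;
}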
--
2.40.1