Message-ID: <20230408052226.25268-1-Tze-nan.Wu@mediatek.com>
Date: Sat, 8 Apr 2023 13:22:26 +0800
From: Tze-nan Wu <Tze-nan.Wu@...iatek.com>
To: <rostedt@...dmis.org>, <mhiramat@...nel.org>
CC: <bobule.chang@...iatek.com>, <wsd_upstream@...iatek.com>,
<Tze-nan.Wu@...iatek.com>, <stable@...r.kernel.org>,
"AngeloGioacchino Del Regno"
<angelogioacchino.delregno@...labora.com>,
<linux-kernel@...r.kernel.org>,
<linux-trace-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-mediatek@...ts.infradead.org>
Subject: [PATCH] ring-buffer: Prevent inconsistent operation on cpu_buffer->resize_disabled
Writes to buffer_size_kb can fail permanently if cpu_online_mask changes
between the two for_each_online_buffer_cpu() loops in
ring_buffer_reset_online_cpus(). The number of increments and decrements
of cpu_buffer->resize_disabled can then become unbalanced, leaving
resize_disabled non-zero on some CPUs after
ring_buffer_reset_online_cpus() returns.
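
To illustrate the race (a simplified sketch of the pre-patch flow, with
the loop bodies abbreviated; not the exact code):

	/* first loop: CPU N is still online, so its counters are bumped */
	for_each_online_buffer_cpu(buffer, cpu) {
		atomic_inc(&buffer->buffers[cpu]->resize_disabled);
		atomic_inc(&buffer->buffers[cpu]->record_disabled);
	}

	synchronize_rcu();	/* <-- CPU N goes offline here */

	/* second loop: CPU N is skipped, its resize_disabled stays at 1 */
	for_each_online_buffer_cpu(buffer, cpu) {
		reset_disabled_cpu_buffer(buffer->buffers[cpu]);
		atomic_dec(&buffer->buffers[cpu]->record_disabled);
		atomic_dec(&buffer->buffers[cpu]->resize_disabled);
	}
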
This issue can be reproduced by running "echo 0 > trace" while
hotplugging CPUs at the same time. Once reproduced, every subsequent
attempt to write to the buffer_size_kb node fails.
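
The failure is permanent because ring_buffer_resize() refuses to touch
a per-cpu buffer whose resize_disabled count is non-zero, roughly along
these lines (paraphrased from the resize path, not the exact code):

	if (atomic_read(&cpu_buffer->resize_disabled)) {
		err = -EBUSY;
		goto out_err_unlock;
	}

Since nothing ever decrements the leaked count, every later resize
attempt keeps hitting -EBUSY.
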
Prevent the unbalanced increments and decrements of
cpu_buffer->resize_disabled by snapshotting cpu_online_mask at the
beginning of the function and iterating over the snapshot in both
loops.

I wonder whether this change has a side effect: if a CPU comes online
between the two loops, reset_disabled_cpu_buffer() is no longer run on
it, so the cpu_buffer of the newly onlined CPU will not be cleaned up
as it was before.
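
Note that a cpumask_var_t must be allocated before it can be copied
into when CONFIG_CPUMASK_OFFSTACK=y, which is why the snapshot below is
paired with alloc_cpumask_var()/free_cpumask_var(). A minimal sketch of
that pattern (the mask name here is only illustrative):

	cpumask_var_t snapshot;

	/* kmallocs the mask when CONFIG_CPUMASK_OFFSTACK=y,
	 * otherwise a no-op for the on-stack mask */
	if (!alloc_cpumask_var(&snapshot, GFP_KERNEL))
		return;

	cpumask_copy(snapshot, cpu_online_mask);
	/* ... iterate with for_each_cpu_and(cpu, buffer->cpumask, snapshot) ... */
	free_cpumask_var(snapshot);
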
Cc: stable@...r.kernel.org
Signed-off-by: Tze-nan Wu <Tze-nan.Wu@...iatek.com>
---
 kernel/trace/ring_buffer.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 76a2d91eecad..468f46bba71e 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -288,9 +288,6 @@ EXPORT_SYMBOL_GPL(ring_buffer_event_data);
 #define for_each_buffer_cpu(buffer, cpu) \
 	for_each_cpu(cpu, buffer->cpumask)
 
-#define for_each_online_buffer_cpu(buffer, cpu) \
-	for_each_cpu_and(cpu, buffer->cpumask, cpu_online_mask)
-
 #define TS_SHIFT	27
 #define TS_MASK		((1ULL << TS_SHIFT) - 1)
 #define TS_DELTA_TEST	(~TS_MASK)
@@ -5353,12 +5350,18 @@ EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);
 void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
 {
 	struct ring_buffer_per_cpu *cpu_buffer;
+	cpumask_var_t reset_online_mask;
 	int cpu;
 
+	if (!alloc_cpumask_var(&reset_online_mask, GFP_KERNEL))
+		return;
+
 	/* prevent another thread from changing buffer sizes */
 	mutex_lock(&buffer->mutex);
 
-	for_each_online_buffer_cpu(buffer, cpu) {
+	cpumask_copy(reset_online_mask, cpu_online_mask);
+
+	for_each_cpu_and(cpu, buffer->cpumask, reset_online_mask) {
 		cpu_buffer = buffer->buffers[cpu];
 
 		atomic_inc(&cpu_buffer->resize_disabled);
@@ -5368,13 +5371,15 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
 	/* Make sure all commits have finished */
 	synchronize_rcu();
 
-	for_each_online_buffer_cpu(buffer, cpu) {
+	for_each_cpu_and(cpu, buffer->cpumask, reset_online_mask) {
 		cpu_buffer = buffer->buffers[cpu];
 
 		reset_disabled_cpu_buffer(cpu_buffer);
 
 		atomic_dec(&cpu_buffer->record_disabled);
 		atomic_dec(&cpu_buffer->resize_disabled);
 	}
 
+	free_cpumask_var(reset_online_mask);
+
 	mutex_unlock(&buffer->mutex);
--
2.18.0