Message-Id: <20230813054702.22ce16d9191a1f6b84942a1e@kernel.org>
Date: Sun, 13 Aug 2023 05:47:02 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Zheng Yejian <zhengyejian1@...wei.com>
Cc: Steven Rostedt <rostedt@...dmis.org>, <mhiramat@...nel.org>,
<laijs@...fujitsu.com>, <linux-kernel@...r.kernel.org>,
<linux-trace-kernel@...r.kernel.org>
Subject: Re: [PATCH] tracing: Fix race when concurrently splice_read trace_pipe
On Sat, 12 Aug 2023 09:45:52 +0800
Zheng Yejian <zhengyejian1@...wei.com> wrote:
> On 2023/8/12 03:25, Steven Rostedt wrote:
> > On Thu, 10 Aug 2023 20:39:05 +0800
> > Zheng Yejian <zhengyejian1@...wei.com> wrote:
> >
> >> When concurrently splice_read()ing the file trace_pipe and per_cpu/cpu*/trace_pipe,
> >> more data is read out than expected.
>
> Sorry, I didn't make this clear. It's not just that more data is read;
> some data is also lost. My case is, for example:
> 1) Inject 3 events into the ring_buffer: event1, event2, event3;
> 2) Concurrently splice_read through the trace_pipes;
> 3) What is actually read out is: event1, event3, event3. There is no
> event2, but event3 appears twice.
>
> >
> > Honestly the real fix is to prevent that use case. We should probably have
> > access to trace_pipe lock all the per_cpu trace_pipes too.
>
> Yes, we could do that, but wouldn't it be less effective?
> A per_cpu trace_pipe only reads its own ring_buffer and does not race
> with the ring_buffers of other cpus.
I think Steve said that only one of the below is usable:
- Read trace_pipe
or
- Read per_cpu/cpu*/trace_pipe concurrently
And I think this makes sense, especially if you use splice (this *moves*
the page from the ring_buffer to the other pipe).
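
(For illustration only: a minimal user-space splice reader along the
lines below, run once against trace_pipe and once against a
per_cpu/cpu*/trace_pipe, is the kind of concurrent use being discussed.
This is just my sketch, not the reproducer from the report; the tracefs
path and the 4096-byte splice length are assumptions.)

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	/* Default path is an assumption; pass a per-cpu trace_pipe
	 * as argv[1] to run the other concurrent reader. */
	const char *path = argc > 1 ? argv[1]
			 : "/sys/kernel/tracing/trace_pipe";
	int fd = open(path, O_RDONLY);
	int pfd[2];

	if (fd < 0 || pipe(pfd) < 0) {
		perror("open/pipe");
		return 1;
	}
	for (;;) {
		/* splice() moves whole ring_buffer pages into the pipe... */
		ssize_t n = splice(fd, NULL, pfd[1], NULL, 4096, 0);

		if (n <= 0)
			break;
		/* ...and then they are forwarded to stdout. */
		splice(pfd[0], NULL, STDOUT_FILENO, NULL, n, 0);
	}
	close(fd);
	return 0;
}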
Thank you,
>
> >
> > -- Steve
> >
>
--
Masami Hiramatsu (Google) <mhiramat@...nel.org>