Message-Id: <20230822091540.99e581b579aa790a90e335bc@kernel.org>
Date: Tue, 22 Aug 2023 09:15:40 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: "Masami Hiramatsu (Google)" <mhiramat@...nel.org>
Cc: Steven Rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH] tracing: Fix to avoid wakeup loop in splice read of
per-cpu buffer
On Mon, 21 Aug 2023 23:19:18 +0900
"Masami Hiramatsu (Google)" <mhiramat@...nel.org> wrote:
> From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
>
> An ftrace user can set 0 or a small number in 'buffer_percent' to get a
> quick response from the ring buffer. In that case wait_on_pipe() can
> return before a full page of the ring buffer has been filled. That is
> too soon for splice() because ring_buffer_read_page() will fail again.
> This leads to an unnecessary loop in tracing_buffers_splice_read().
>
> To avoid this situation, pass wait_on_pipe() a minimum percentage of the
> buffer that is enough to fill at least one page.
>
> Fixes: 03329f993978 ("tracing: Add tracefs file buffer_percentage")
> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>
> ---
> kernel/trace/trace.c | 12 +++++++++++-
> 1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index b8870078ef58..88448e8d8214 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -8462,6 +8462,8 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
> /* did we read anything? */
> if (!spd.nr_pages) {
> long wait_index;
> + size_t nr_pages;
> + size_t full;
>
> if (ret)
> goto out;
> @@ -8472,7 +8474,15 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
>
> wait_index = READ_ONCE(iter->wait_index);
>
> - ret = wait_on_pipe(iter, iter->tr->buffer_percent);
> + /* For splice, we have to ensure at least 1 page is filled */
> + nr_pages = ring_buffer_nr_pages(iter->array_buffer->buffer, iter->cpu_file);
> + if (nr_pages * iter->tr->buffer_percent < 100) {
> + full = nr_pages + 99;
> + do_div(full, nr_pages);
> + } else
> + full = iter->tr->buffer_percent;
Ah, I must need a good night's sleep. What I actually need is to ensure that
full >= 1.
static __always_inline bool full_hit(struct trace_buffer *buffer, int cpu, int full)
{
	...
	return (dirty * 100) > (full * nr_pages);
}
If dirty == 0, this is always false.
But I think that if full == 0, this should return true.
If dirty == 1:
- nr_pages < 100: this is always true, and that is good.
- nr_pages > 100: even if full is 1 (the smallest value), it is not true. But
  that is OK because the number of dirty pages will increase later.
- nr_pages == 100: this is the corner case (quick check below). I think this
  should be

	return (dirty * 100) >= (full * nr_pages);
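A quick userspace check of that corner case (just the comparison, not the
kernel function itself):

	/*
	 * Corner case: nr_pages == 100, full == 1, dirty == 1.
	 * With '>'  : (1 * 100) >  (1 * 100) is false, so the full condition
	 *             is not considered met even though one page is dirty.
	 * With '>=' : (1 * 100) >= (1 * 100) is true.
	 */
	#include <stdio.h>

	int main(void)
	{
		unsigned long dirty = 1, nr_pages = 100;
		unsigned long full = 1;

		printf("'>'  : %d\n", (dirty * 100) >  full * nr_pages);
		printf("'>=' : %d\n", (dirty * 100) >= full * nr_pages);
		return 0;
	}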
Let me update the patch.
Thank you,
> +
> + ret = wait_on_pipe(iter, full);
> if (ret)
> goto out;
>
>
--
Masami Hiramatsu (Google) <mhiramat@...nel.org>