Message-Id: <20230714173824.6fd0302c6d1e60770af77ea4@kernel.org>
Date: Fri, 14 Jul 2023 17:38:24 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Linux Trace Kernel <linux-trace-kernel@...r.kernel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Zheng Yejian <zhengyejian1@...wei.com>
Subject: Re: [PATCH 1/2] tracing: Remove unnecessary copying of tr->current_trace

On Thu, 13 Jul 2023 11:46:26 -0400
Steven Rostedt <rostedt@...dmis.org> wrote:
> From: "Steven Rostedt (Google)" <rostedt@...dmis.org>
>
> I have no idea why the iterator had to allocate a descriptor to make a
> copy of "current_trace" (now "tr->current_trace"). The tracer that the
> pointer points to never changes, so simply copying the pointer is
> enough to keep reads consistent.
>
> This is more of a clean up than a fix.
Interesting bug. Anyway, this looks good to me.
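
Just to restate the pattern in the quoted explanation for anyone following
along: since the struct tracer that "tr->current_trace" points to is never
modified once registered, taking the pointer under the lock already gives the
reader a consistent view, and the old allocate-and-deep-copy (plus the matching
kfree()) is unnecessary. A minimal userspace sketch of the resulting pattern,
with a pthread mutex standing in for trace_types_lock and made-up names (this
is illustration only, not kernel code):

#include <pthread.h>
#include <stdio.h>

struct tracer { const char *name; };

static struct tracer nop_tracer = { .name = "nop" };
/* Only ever swapped to point at another registered tracer, never mutated. */
static struct tracer *current_tracer = &nop_tracer;
static pthread_mutex_t types_lock = PTHREAD_MUTEX_INITIALIZER;

struct iterator { struct tracer *trace; };

/* Pointer copy under the lock is enough; no per-iterator duplicate needed. */
static void iter_sync(struct iterator *iter)
{
	pthread_mutex_lock(&types_lock);
	if (iter->trace != current_tracer)
		iter->trace = current_tracer;
	pthread_mutex_unlock(&types_lock);
}

int main(void)
{
	struct iterator iter = { .trace = NULL };

	iter_sync(&iter);
	printf("iterator now uses tracer: %s\n", iter.trace->name);
	return 0;
}
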
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>
Thank you,
>
> Fixes: d7350c3f45694 ("tracing/core: make the read callbacks reentrants")
> Signed-off-by: Steven Rostedt (Google) <rostedt@...dmis.org>
> ---
> kernel/trace/trace.c | 22 +++-------------------
> 1 file changed, 3 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index be847d45d81c..1c370ffbe062 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -4205,15 +4205,9 @@ static void *s_start(struct seq_file *m, loff_t *pos)
> loff_t l = 0;
> int cpu;
>
> - /*
> - * copy the tracer to avoid using a global lock all around.
> - * iter->trace is a copy of current_trace, the pointer to the
> - * name may be used instead of a strcmp(), as iter->trace->name
> - * will point to the same string as current_trace->name.
> - */
> mutex_lock(&trace_types_lock);
> - if (unlikely(tr->current_trace && iter->trace->name != tr->current_trace->name))
> - *iter->trace = *tr->current_trace;
> + if (unlikely(tr->current_trace != iter->trace))
> + iter->trace = tr->current_trace;
> mutex_unlock(&trace_types_lock);
>
> #ifdef CONFIG_TRACER_MAX_TRACE
> @@ -4862,16 +4856,8 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
> iter->fmt = NULL;
> iter->fmt_size = 0;
>
> - /*
> - * We make a copy of the current tracer to avoid concurrent
> - * changes on it while we are reading.
> - */
> mutex_lock(&trace_types_lock);
> - iter->trace = kzalloc(sizeof(*iter->trace), GFP_KERNEL);
> - if (!iter->trace)
> - goto fail;
> -
> - *iter->trace = *tr->current_trace;
> + iter->trace = tr->current_trace;
>
> if (!zalloc_cpumask_var(&iter->started, GFP_KERNEL))
> goto fail;
> @@ -4936,7 +4922,6 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
>
> fail:
> mutex_unlock(&trace_types_lock);
> - kfree(iter->trace);
> kfree(iter->temp);
> kfree(iter->buffer_iter);
> release:
> @@ -5021,7 +5006,6 @@ static int tracing_release(struct inode *inode, struct file *file)
> free_cpumask_var(iter->started);
> kfree(iter->fmt);
> kfree(iter->temp);
> - kfree(iter->trace);
> kfree(iter->buffer_iter);
> seq_release_private(inode, file);
>
> --
> 2.40.1
>
--
Masami Hiramatsu (Google) <mhiramat@...nel.org>