Message-ID: <20210923093359.30da8ba6@gandalf.local.home>
Date: Thu, 23 Sep 2021 09:33:59 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: 王贇 <yun.wang@...ux.alibaba.com>
Cc: Ingo Molnar <mingo@...hat.com>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] trace: prevent preemption in
perf_ftrace_function_call()
On Thu, 23 Sep 2021 11:42:56 +0800
王贇 <yun.wang@...ux.alibaba.com> wrote:
> With CONFIG_DEBUG_PREEMPT we observed reports like:
>
> BUG: using smp_processor_id() in preemptible
> caller is perf_ftrace_function_call+0x6f/0x2e0
> CPU: 1 PID: 680 Comm: a.out Not tainted
> Call Trace:
> <TASK>
> dump_stack_lvl+0x8d/0xcf
> check_preemption_disabled+0x104/0x110
> ? optimize_nops.isra.7+0x230/0x230
> ? text_poke_bp_batch+0x9f/0x310
> perf_ftrace_function_call+0x6f/0x2e0
> ...
> __text_poke+0x5/0x620
> text_poke_bp_batch+0x9f/0x310
>
> This tells us the CPU can change after the task is preempted, so
> checking the CPU before preemption is disabled is useless.
>
> This patch turns off preemption in perf_ftrace_function_call()
> to prevent the CPU from changing.
>
> Signed-off-by: Michael Wang <yun.wang@...ux.alibaba.com>
> ---
> kernel/trace/trace_event_perf.c | 14 +++++++++-----
> 1 file changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
> index 6aed10e..5486b18 100644
> --- a/kernel/trace/trace_event_perf.c
> +++ b/kernel/trace/trace_event_perf.c
> @@ -438,15 +438,17 @@ void perf_trace_buf_update(void *record, u16 type)
> int rctx;
> int bit;
>
> + preempt_disable_notrace();
> +
> if (!rcu_is_watching())
> - return;
> + goto out;
You don't need preemption disabled for the above check. It's better to
disable it here and leave the above return untouched.
-- Steve
>
> if ((unsigned long)ops->private != smp_processor_id())
> - return;
> + goto out;
>
> bit = ftrace_test_recursion_trylock(ip, parent_ip);
> if (bit < 0)
> - return;
> + goto out;
>
> event = container_of(ops, struct perf_event, ftrace_ops);
>
> @@ -468,16 +470,18 @@ void perf_trace_buf_update(void *record, u16 type)
>
> entry = perf_trace_buf_alloc(ENTRY_SIZE, NULL, &rctx);
> if (!entry)
> - goto out;
> + goto out_unlock;
>
> entry->ip = ip;
> entry->parent_ip = parent_ip;
> perf_trace_buf_submit(entry, ENTRY_SIZE, rctx, TRACE_FN,
> 1, &regs, &head, NULL);
>
> -out:
> +out_unlock:
> ftrace_test_recursion_unlock(bit);
> #undef ENTRY_SIZE
> +out:
> + preempt_enable_notrace();
> }
>
> static int perf_ftrace_function_register(struct perf_event *event)