Message-ID: <20211008200328.5b88422d@oasis.local.home>
Date:   Fri, 8 Oct 2021 20:03:28 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     王贇 <yun.wang@...ux.alibaba.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        open list <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Jiri Olsa <jolsa@...hat.com>
Subject: Re: [RESEND PATCH v2] trace: prevent preemption in
 perf_ftrace_function_call()

On Tue, 28 Sep 2021 11:12:24 +0800
王贇 <yun.wang@...ux.alibaba.com> wrote:

> With CONFIG_DEBUG_PREEMPT we observed reports like:
> 
>   BUG: using smp_processor_id() in preemptible
>   caller is perf_ftrace_function_call+0x6f/0x2e0
>   CPU: 1 PID: 680 Comm: a.out Not tainted
>   Call Trace:
>    <TASK>
>    dump_stack_lvl+0x8d/0xcf
>    check_preemption_disabled+0x104/0x110
>    ? optimize_nops.isra.7+0x230/0x230
>    ? text_poke_bp_batch+0x9f/0x310
>    perf_ftrace_function_call+0x6f/0x2e0
>    ...

It would be useful to see the full backtrace, to know how this
happened. Please do not shorten backtraces.

-- Steve

>    __text_poke+0x5/0x620
>    text_poke_bp_batch+0x9f/0x310
> 
> This tells us that the CPU can change after the task is preempted, so
> a CPU check done before preemption is no longer valid.
> 
> This patch simply disables preemption in perf_ftrace_function_call()
> to prevent the CPU from changing.
> 
> Reported-by: Abaci <abaci@...ux.alibaba.com>
> Signed-off-by: Michael Wang <yun.wang@...ux.alibaba.com>
> ---
>  kernel/trace/trace_event_perf.c | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
> index 6aed10e..dcbefdf 100644
> --- a/kernel/trace/trace_event_perf.c
> +++ b/kernel/trace/trace_event_perf.c
> @@ -441,12 +441,19 @@ void perf_trace_buf_update(void *record, u16 type)
>  	if (!rcu_is_watching())
>  		return;
> 
> +	/*
> +	 * Prevent the CPU from changing from now on. RCU
> +	 * must still be watching if the task was migrated
> +	 * and rescheduled.
> +	 */
> +	preempt_disable_notrace();
> +
>  	if ((unsigned long)ops->private != smp_processor_id())
> -		return;
> +		goto out;
> 
>  	bit = ftrace_test_recursion_trylock(ip, parent_ip);
>  	if (bit < 0)
> -		return;
> +		goto out;
> 
>  	event = container_of(ops, struct perf_event, ftrace_ops);
> 
> @@ -468,16 +475,18 @@ void perf_trace_buf_update(void *record, u16 type)
> 
>  	entry = perf_trace_buf_alloc(ENTRY_SIZE, NULL, &rctx);
>  	if (!entry)
> -		goto out;
> +		goto unlock;
> 
>  	entry->ip = ip;
>  	entry->parent_ip = parent_ip;
>  	perf_trace_buf_submit(entry, ENTRY_SIZE, rctx, TRACE_FN,
>  			      1, &regs, &head, NULL);
> 
> -out:
> +unlock:
>  	ftrace_test_recursion_unlock(bit);
>  #undef ENTRY_SIZE
> +out:
> +	preempt_enable_notrace();
>  }
> 
>  static int perf_ftrace_function_register(struct perf_event *event)
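
For reference, the pattern the patch relies on is that smp_processor_id()
is only stable while preemption is disabled, and the _notrace variants are
used in ftrace callbacks so that the preempt-count updates are not
themselves traced. A minimal sketch of that pattern follows; the callback
name and the expected_cpu parameter are illustrative only, not taken from
the kernel source:

/*
 * Sketch only: shows the preempt_disable_notrace() pattern around
 * smp_processor_id(); not the actual kernel code.
 */
#include <linux/preempt.h>
#include <linux/smp.h>

static void example_trace_callback(int expected_cpu)
{
	/*
	 * With preemption enabled, the task could be migrated right
	 * after reading smp_processor_id(), so the check below could
	 * pass on one CPU while the work runs on another.
	 */
	preempt_disable_notrace();

	if (smp_processor_id() != expected_cpu)
		goto out;

	/* ... per-CPU work that must not migrate ... */

out:
	preempt_enable_notrace();
}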
