Message-Id: <20180806230536.ef217b2c2c008838606bedc3@kernel.org>
Date:   Mon, 6 Aug 2018 23:05:36 +0900
From:   Masami Hiramatsu <mhiramat@...nel.org>
To:     "Joel Fernandes (Google)" <joel@...lfernandes.org>
Cc:     linux-kernel@...r.kernel.org, kernel-team@...roid.com,
        Ingo Molnar <mingo@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Masami Hiramatsu <mhiramat@...nel.org>,
        paulmck@...ux.vnet.ibm.com, mathieu.desnoyers@...icios.com,
        namhyung@...nel.org, peterz@...radead.org
Subject: Re: [PATCH ftrace/core] tracing: irqsoff: Account for additional
 preempt_disable

On Sun,  5 Aug 2018 20:40:49 -0700
"Joel Fernandes (Google)" <joel@...lfernandes.org> wrote:

> Recently we tried to make the preemptirqsoff tracer use the irqsoff
> tracepoint probes. However, this causes issues, as reported by Masami:
> 
> [2.271078] Testing tracer preemptirqsoff: .. no entries found ..FAILED!
> [2.381015] WARNING: CPU: 0 PID: 1 at /home/mhiramat/ksrc/linux/kernel/
> trace/trace.c:1512 run_tracer_selftest+0xf3/0x154
> 
> This is due to the tracepoint code increasing the preempt nesting count
> by calling an additional preempt_disable before calling into the
> preemptoff tracer, which messes up the preempt_count() check in
> tracer_hardirqs_off.
> 
> To fix this, make the irqsoff tracer probes balance the additional outer
> preempt_disable with a preempt_enable_notrace.
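
(For context: the additional preempt_disable described above comes from the
tracepoint dispatch code itself. A simplified sketch of what __DO_TRACE() in
include/linux/tracepoint.h does around each probe call -- illustrative only;
the real macro also handles the rcuidle/SRCU variant and details vary by
kernel version:

	preempt_disable_notrace();	/* the extra nesting level */
	/* ... look up and call each registered probe,
	 *     e.g. tracer_hardirqs_off() ... */
	preempt_enable_notrace();

As a result, every probe runs with preempt_count() one higher than at the
tracepoint call site.)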

I've tested it and confirmed that it fixes the problem.

Tested-by: Masami Hiramatsu <mhiramat@...nel.org>

I also have a comment on this code.

> 
> The other way to fix this is to just use SRCU for all tracepoints.
> However we can't do that because SRCU can't be used from NMI context.
> 
> Fixes: c3bc8fd637a9 ("tracing: Centralize preemptirq tracepoints
>                       and unify their usage")
> Fixes: e6753f23d961 ("tracepoint: Make rcuidle tracepoint callers use SRCU")
> Reported-by: Masami Hiramatsu <mhiramat@...nel.org>
> Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> ---
>  kernel/trace/trace_irqsoff.c | 26 ++++++++++++++++++++++++++
>  1 file changed, 26 insertions(+)
> 
> diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
> index 770cd30cda40..ffbf1505d5bc 100644
> --- a/kernel/trace/trace_irqsoff.c
> +++ b/kernel/trace/trace_irqsoff.c
> @@ -603,14 +603,40 @@ static void irqsoff_tracer_stop(struct trace_array *tr)
>   */
>  static void tracer_hardirqs_on(void *none, unsigned long a0, unsigned long a1)
>  {

To ensure that this function cannot be preempted even while we adjust the
preempt count, I think you should make sure irqs_disabled() holds for this
whole process, by putting the following here:

	if (unlikely(!irqs_disabled()))
		return;

Since irqs_disabled() is checked in irq_trace() anyway, there is no problem
with returning early here when !irqs_disabled().
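
Applied to tracer_hardirqs_on(), the idea would look something like this
(just a rough sketch of the suggestion on top of this patch, not tested):

	static void tracer_hardirqs_on(void *none, unsigned long a0,
				       unsigned long a1)
	{
		/* If irqs are not disabled, we could be preempted right
		 * after the preempt_enable_notrace() below, so bail out.
		 * irq_trace() would reject this case anyway.
		 */
		if (unlikely(!irqs_disabled()))
			return;

		WARN_ON_ONCE(!preempt_count());

		/* Balance the outer preempt_disable of the tracepoint
		 * code; irqs being disabled keeps preemption off even
		 * while the count is dropped.
		 */
		preempt_enable_notrace();

		if (!preempt_trace() && irq_trace())
			stop_critical_timing(a0, a1);

		preempt_disable_notrace();
	}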

> +	/*
> +	 * Tracepoint probes are expected to be called with preempt disabled.
> +	 * We don't care about being called with preempt disabled, but we need
> +	 * to know in the future if that changes so we can remove the next
> +	 * preempt_enable.
> +	 */
> +	WARN_ON_ONCE(!preempt_count());
> +
> +	/* Tracepoint probes disable preemption at least once; account for that */

	 * Even though we do this here, we are sure that irqs are disabled
	 * in this context, which means preemption is kept disabled.

> +	preempt_enable_notrace();
> +
>  	if (!preempt_trace() && irq_trace())
>  		stop_critical_timing(a0, a1);
> +
> +	preempt_disable_notrace();
>  }
>  
>  static void tracer_hardirqs_off(void *none, unsigned long a0, unsigned long a1)
>  {

Same here.


Any thoughts? :)

Thank you,

> +	/*
> +	 * Tracepoint probes are expected to be called with preempt disabled.
> +	 * We don't care about being called with preempt disabled, but we need
> +	 * to know in the future if that changes so we can remove the next
> +	 * preempt_enable.
> +	 */
> +	WARN_ON_ONCE(!preempt_count());
> +
> +	/* Tracepoint probes disable preemption at least once; account for that */
> +	preempt_enable_notrace();
> +
>  	if (!preempt_trace() && irq_trace())
>  		start_critical_timing(a0, a1);
> +
> +	preempt_disable_notrace();
>  }
>  
>  static int irqsoff_tracer_init(struct trace_array *tr)
> -- 
> 2.18.0.597.ga71716f1ad-goog
> 


-- 
Masami Hiramatsu <mhiramat@...nel.org>
