Date:	Wed, 17 Aug 2016 09:49:32 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...nel.org>,
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC] ftrace / perf 'recursion'

On Wed, 17 Aug 2016 12:57:16 +0200
Peter Zijlstra <peterz@...radead.org> wrote:

> On Wed, Aug 17, 2016 at 12:33:06PM +0200, Peter Zijlstra wrote:
> 
> > So how to extend the same to function tracer .... we'd have to mark
> > exiting_irq() -> irq_exit() and everything from that as notrace, which
> > seems somewhat excessive, fragile and undesired because tracing those
> > functions is useful in other context :/  
> 
> Steve, would something like so work? It would completely kill function
> tracing for the irq_work exit path, but that seems fairly sane over-all.
> After all, all the common irq_work code is already notrace as well.
> 
>  arch/x86/kernel/irq_work.c | 23 ++++++++++++++++++++---
>  1 file changed, 20 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kernel/irq_work.c b/arch/x86/kernel/irq_work.c
> index 3512ba607361..24de793c35c6 100644
> --- a/arch/x86/kernel/irq_work.c
> +++ b/arch/x86/kernel/irq_work.c
> @@ -10,17 +10,34 @@
>  #include <asm/apic.h>
>  #include <asm/trace/irq_vectors.h>
>  
> -static inline void __smp_irq_work_interrupt(void)
> +static inline notrace void __smp_irq_work_interrupt(void)

FYI, anything marked "inline" is also marked "notrace". An inline function
only gets traced if gcc decides not to inline it, and because that
"randomness" caused issues in the past, we define all "inline"s to
include "notrace". That way a function marked inline will never be
traced, regardless of whether gcc actually inlines it.
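(For reference, this is done by redefining "inline" itself. A rough sketch of
the relevant macros, along the lines of what include/linux/compiler.h and
compiler-gcc.h do; the exact wording and location vary by kernel version:)

```c
/* notrace tells gcc not to emit mcount/fentry calls for this function,
 * so the function tracer never sees it. */
#define notrace __attribute__((no_instrument_function))

/* "inline" is redefined to force inlining AND suppress tracing, so the
 * result is the same whether or not gcc would have inlined it on its own. */
#define inline inline __attribute__((always_inline)) notrace
```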


>  {
>  	inc_irq_stat(apic_irq_work_irqs);
>  	irq_work_run();
>  }
>  
> +static inline notrace void exiting_irq_work(void)
> +{
> +#ifdef CONFIG_TRACING
> +	if (unlikely(1 /* function_tracing_enabled() */)) {
> +		unsigned long trace_recursion = current->trace_recursion;
> +
> +		current->trace_recursion |= 1 << 10; /* TRACE_INTERNAL_IRQ_BIT */
> +		barrier();
> +		exiting_irq();
> +		barrier();
> +		current->trace_recursion = trace_recursion;
> +		return;
> +	}
> +#endif

yuck. This looks very fragile. What happens if perf gets hooked into
function graph tracing? Then this won't help on function exit.

Also, it will prevent any tracing of NMIs that occur in there.

I would really like to keep this fix within perf if possible. If
anything, the flag should just tell the perf function handler not to
trace; it shouldn't stop all function handlers.

Maybe just have a flag that says that you are in an irq_work, and if
that is set, have the perf tracepoints and function handler not
trigger more events?

-- Steve


> +	exiting_irq();
> +}
> +
>  __visible void smp_irq_work_interrupt(struct pt_regs *regs)
>  {
>  	ipi_entering_ack_irq();
>  	__smp_irq_work_interrupt();
> -	exiting_irq();
> +	exiting_irq_work();
>  }
>  
>  __visible void smp_trace_irq_work_interrupt(struct pt_regs *regs)
> @@ -29,7 +46,7 @@ __visible void smp_trace_irq_work_interrupt(struct pt_regs *regs)
>  	trace_irq_work_entry(IRQ_WORK_VECTOR);
>  	__smp_irq_work_interrupt();
>  	trace_irq_work_exit(IRQ_WORK_VECTOR);
> -	exiting_irq();
> +	exiting_irq_work();
>  }
>  
>  void arch_irq_work_raise(void)
