Message-ID: <alpine.DEB.2.00.0903242203480.22830@gandalf.stny.rr.com>
Date:	Tue, 24 Mar 2009 22:09:07 -0400 (EDT)
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
cc:	Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>,
	Christoph Hellwig <hch@...radead.org>,
	Jason Baron <jbaron@...hat.com>, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, ltt-dev@...ts.casi.polymtl.ca,
	Frederic Weisbecker <fweisbec@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Russell King <rmk+lkml@....linux.org.uk>,
	Masami Hiramatsu <mhiramat@...hat.com>,
	"Frank Ch. Eigler" <fche@...hat.com>,
	Hideo AOKI <haoki@...hat.com>,
	Takashi Nishiie <t-nishiie@...css.fujitsu.com>,
	Eduard - Gabriel Munteanu <eduard.munteanu@...ux360.ro>
Subject: Re: [patch 2/9] LTTng instrumentation - irq


On Tue, 24 Mar 2009, Mathieu Desnoyers wrote:
> 
> This third type of tracepoint for IRQs you are talking about is actually
> what I had in LTTng. I decided to add the irq_next_handler event and an
> action field to irq_entry to include the irq handler information needed
> by Jason.
> 
> If we want to do this logically, without thinking about tracer
> performance impact, we could/should do:
> 
> trace_irq_entry(irqno, pt_regs)
>   for_each_handler() {
>     trace_irq_handler_entry(action)
>     action->handler()
>     trace_irq_handler_exit(ret)
>   }
> trace_irq_exit(retval)
> 
> And add the irq_entry/irq_exit events to the arch-specific reschedule,
> tlb flush, local timer irq, as I have in my lttng tree already.
> 
> But given that the trace_irq_handler_entry/trace_irq_handler_exit events
> can be combined, and that we can record action and ret in the
> irq_entry/exit events, I decided to remove 2 tracepoints (out of 4) from
> the single-handler fast path by adding this information to the irq
> entry/exit events, and to combine the irq_handler entry/exit
> into a single next_handler event, which records the previous ret value
> and the next action to execute.
> 
> On an interrupt-driven workload, it will have a significant impact.
> (2 events vs 4).

I thought tracepoints, while not active, have very low overhead.

If you only want to use 2 of the 4, would that be just as fast?

> 
> If we add interrupt threads to the kernel, then we can switch to the
> following scheme :
> 
> * instrumentation of the real interrupt handler :
> 
> trace_irq_entry(irqno, pt_regs)
> 
> trace_irq_exit(ret)
> 
> * instrumentation of the irq threads :
> 
> trace_irq_thread_entry()
> 
> trace_irq_thread_exit()
> 
> I don't see why we should mind trying to make the tracepoints "logical",
> especially if it hurts performance considerably. Doing these
> implementation-specific versions of irq tracepoints would provide the
> best performance we can get when tracing. It's up to the tracers to
> specialize their analysis based on the underlying IRQ mechanism
> (non-threaded vs threaded).

Perhaps we want to make them logical so that other things besides a tracer 
might hook into these tracepoints. I do not agree that the code should 
be modified just to make the tracepoints faster. Tracepoints are 
just hooks into code, and should have no effect when disabled. Once the 
code starts to change due to "better" placement of tracepoints for tracers, 
that's when those tracepoints should be NACKed.

-- Steve

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
