Message-ID: <20090326182859.GB6399@Krystal>
Date: Thu, 26 Mar 2009 14:28:59 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>,
Christoph Hellwig <hch@...radead.org>,
Jason Baron <jbaron@...hat.com>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, ltt-dev@...ts.casi.polymtl.ca,
Frederic Weisbecker <fweisbec@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Russell King <rmk+lkml@....linux.org.uk>,
Masami Hiramatsu <mhiramat@...hat.com>,
"Frank Ch. Eigler" <fche@...hat.com>,
Hideo AOKI <haoki@...hat.com>,
Takashi Nishiie <t-nishiie@...css.fujitsu.com>,
Eduard - Gabriel Munteanu <eduard.munteanu@...ux360.ro>
Subject: Re: [patch 2/9] LTTng instrumentation - irq
* Steven Rostedt (rostedt@...dmis.org) wrote:
>
> On Tue, 24 Mar 2009, Mathieu Desnoyers wrote:
> >
> > This third type of IRQ tracepoint you are talking about is actually
> > what I had in LTTng. I decided to add irq_next_handler, and to add an
> > action field to irq_entry to include the irq handler information
> > Jason needs.
> >
> > If we want to do this logically, without thinking about tracer
> > performance impact, we could (and arguably should) do:
> >
> > 	trace_irq_entry(irqno, pt_regs)
> > 	for_each_handler() {
> > 		trace_irq_handler_entry(action)
> > 		action->handler()
> > 		trace_irq_handler_exit(ret)
> > 	}
> > 	trace_irq_exit(retval)
> >
> > And add the irq_entry/irq_exit events to the arch-specific reschedule,
> > TLB flush and local timer IRQs, as I already have in my lttng tree.
> >
> > But given that the trace_irq_handler_entry/trace_irq_handler_exit
> > events could be combined, and given that we can record action and ret
> > in the irq_entry/exit events, I decided to remove 2 of the 4
> > tracepoints from the single-handler fast path: the handler information
> > moves into the irq entry/exit events, and the irq_handler entry/exit
> > pair becomes a single next_handler event, which records the previous
> > ret value and the next action to execute.
> >
> > On an interrupt-driven workload, this will have a significant impact
> > (2 events vs 4).
>
> I thought tracepoints have very low overhead while they are not active.
>
> If you only want to use 2 of the 4, would that be just as fast?
>
Probably, but I was talking about active tracing overhead here.
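
For reference, a rough sketch of why an inactive tracepoint stays
cheap: the call site reduces to a test on a per-tracepoint state flag,
in a branch hinted as not-taken. This is modeled on the general
pattern of include/linux/tracepoint.h (names and details simplified;
the real macro iterates a probe array under RCU):

	#define unlikely(x)	__builtin_expect(!!(x), 0)

	/* Simplified model of a tracepoint: one flag, one probe. */
	struct tracepoint {
		int state;		/* non-zero when a probe is attached */
		void (*probe)(unsigned int irq);
	};

	static struct tracepoint __tracepoint_irq_entry;

	/* Call site: when no probe is attached, only the flag test runs. */
	#define trace_irq_entry(irq)					\
		do {							\
			if (unlikely(__tracepoint_irq_entry.state))	\
				__tracepoint_irq_entry.probe(irq);	\
		} while (0)

So yes, a disabled tracepoint is nearly free. The 2-vs-4 events
difference only shows up once probes are attached and every event is
actually recorded.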
> >
> > If we add interrupt threads to the kernel, then we can switch to the
> > following scheme:
> >
> > * instrumentation of the real interrupt handler:
> >
> > 	trace_irq_entry(irqno, pt_regs)
> >
> > 	trace_irq_exit(ret)
> >
> > * instrumentation of the irq threads:
> >
> > 	trace_irq_thread_entry()
> >
> > 	trace_irq_thread_exit()
> >
> > I don't see why we should insist on making the tracepoints "logical",
> > especially if it hurts performance considerably. These
> > implementation-specific versions of the irq tracepoints provide the
> > best performance we can get when tracing. It's up to the tracers to
> > specialize their analysis based on the underlying IRQ mechanism
> > (non-threaded vs threaded).
>
> Perhaps we want to make them logical so that things besides a tracer
> might hook into these tracepoints. I do not agree that the code should
> be modified just to make the tracepoints faster. The tracepoints are
> just hooks into the code, and should have no effect when disabled. Once
> the code starts to change due to better placement of tracepoints for
> tracers, that is when those tracepoints should be NACKed.
>
> -- Steve
>
If it makes the code messy, then yes, I agree that those tracepoints
should not go in.
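
To make the proposed fast path concrete, here is a minimal sketch of
the combined 2-event scheme (illustrative only: the loop is modeled
loosely on handle_IRQ_event() in kernel/irq/handle.c, the tracepoint
declarations are omitted, and the argument lists are not the exact
ones from the patch):

	/* Sketch, assuming the usual <linux/interrupt.h> irqaction chain. */
	irqreturn_t handle_IRQ_event(unsigned int irq, struct irqaction *action)
	{
		irqreturn_t ret, retval = IRQ_NONE;

		/* 1st event: irq number plus the first action to run. */
		trace_irq_entry(irq, action);
		do {
			ret = action->handler(irq, action->dev_id);
			retval |= ret;
			action = action->next;
			/* Fires only for shared irqs with further handlers:
			 * records the previous ret and the next action. */
			if (action)
				trace_irq_next_handler(irq, action, ret);
		} while (action);
		/* 2nd event: combined return value. */
		trace_irq_exit(irq, retval);

		return retval;
	}

On the common single-handler path only irq_entry and irq_exit fire,
which is where the 2-events-vs-4 saving comes from.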
Mathieu
--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68