Message-ID: <20090324175750.GE3129@redhat.com>
Date: Tue, 24 Mar 2009 13:57:50 -0400
From: Jason Baron <jbaron@...hat.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
ltt-dev@...ts.casi.polymtl.ca,
Frederic Weisbecker <fweisbec@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Russell King <rmk+lkml@....linux.org.uk>,
Masami Hiramatsu <mhiramat@...hat.com>,
"Frank Ch. Eigler" <fche@...hat.com>,
Hideo AOKI <haoki@...hat.com>,
Takashi Nishiie <t-nishiie@...css.fujitsu.com>,
Steven Rostedt <rostedt@...dmis.org>,
Eduard - Gabriel Munteanu <eduard.munteanu@...ux360.ro>
Subject: Re: [patch 2/9] LTTng instrumentation - irq
On Tue, Mar 24, 2009 at 06:50:49PM +0100, Ingo Molnar wrote:
> * Jason Baron <jbaron@...hat.com> wrote:
>
> > On Tue, Mar 24, 2009 at 11:56:27AM -0400, Mathieu Desnoyers wrote:
> > > Instrumentation of IRQ-related events: irq_entry, irq_exit and
> > > irq_next_handler.
> > >
> > > It allows tracers to perform latency analysis on those various types of
> > > interrupts and to detect interrupts with max/min/avg duration. It helps
> > > detect driver or hardware problems which cause an ISR to take ages to
> > > execute, as has been seen with bogus hardware causing an mmio read to
> > > take a few milliseconds.
> > >
> > > Those tracepoints are used by LTTng.
> > >
> > > Regarding the performance impact of tracepoints (which is comparable to
> > > that of markers): even without the immediate values optimization, tests
> > > done by Hideo Aoki on ia64 showed no regression. His test case used
> > > hackbench on a kernel with scheduler instrumentation (about 5 events in
> > > scheduler code) added. See the "Tracepoints" patch header for detailed
> > > performance results.
> > >
> > > irq_entry and irq_exit are not declared static because they are referenced
> > > from x86 arch code.
> > >
> > > The idea behind logging irq/softirq/tasklet (and eventually syscall) entry and
> > > exit events is to be able to recreate the kernel execution state at a given
> > > point in time. Knowing which execution context is responsible for a given trace
> > > event is _very_ valuable in trace data analysis.
> > >
> > > This patch instruments IRQ handler entry and exit. Jason instrumented
> > > the irq notifier chain calls (irq_handler_entry/exit). His approach
> > > records which handler is being called, but does not capture the fact
> > > that _multiple_ handlers may be called within the same interrupt. From
> > > an interrupt latency analysis POV, this is incorrect.
> > >
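[For anyone skimming the thread: as I read Mathieu's patch, the hooks
land roughly like this in the generic handler loop. This is a sketch of
the idea, with abbreviated signatures, not the literal diff:

	irqreturn_t handle_IRQ_event(unsigned int irq, struct irqaction *action)
	{
		irqreturn_t ret, retval = IRQ_NONE;

		trace_irq_entry(irq);			/* once per interrupt */

		do {
			/* which handler runs next for this irq */
			trace_irq_next_handler(irq, action->handler);
			ret = action->handler(irq, action->dev_id);
			if (ret == IRQ_HANDLED)
				retval = IRQ_HANDLED;
			action = action->next;
		} while (action);

		trace_irq_exit(irq, retval);		/* once per interrupt */

		return retval;
	}
]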
> >
> > Since we are passing back the irq number, and we cannot be
> > interrupted by the same irq, I think it should be pretty clear we
> > are in the same handler. That said, the extra entry/exit
> > tracepoints could make the sequence of events simpler to decipher,
> > which is important. The code looks good, and provides at least as
> > much information as the patch that I proposed. So I'll be happy
> > either way :)
>
> We already have your patch merged up in the tracing tree and it
> gives entry+exit tracepoints.
>
> Ingo
Maybe I wasn't clear. Entry and exit, as I proposed them and as they are
in the tracing tree, mark entry into and exit from each handler, per
irq. Mathieu is proposing an entry/exit tracepoint pair per irq, plus a
third tracepoint to tell us which handler is being called and its return
code. Hope this is clear.
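Concretely, for one interrupt on a shared line with two handlers, the
two proposals would emit roughly the following event sequences (handler
names and field layout are made up for illustration, not actual trace
output):

	/* tracing tree (my patch): one entry/exit pair per handler call */
	irq_handler_entry: irq=19 handler=foo_isr
	irq_handler_exit:  irq=19 handler=foo_isr ret=unhandled
	irq_handler_entry: irq=19 handler=bar_isr
	irq_handler_exit:  irq=19 handler=bar_isr ret=handled

	/* Mathieu: one entry/exit pair per irq, plus a per-handler event */
	irq_entry:        irq=19
	irq_next_handler: irq=19 handler=foo_isr
	irq_next_handler: irq=19 handler=bar_isr
	irq_exit:         irq=19 retval=handled

If I understand Mathieu's latency argument, the per-irq pair gives you
total interrupt time directly, while with the per-handler pairs you
would have to sum the segments yourself.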
thanks,
-Jason