Message-ID: <20101117173124.GB13717@Krystal>
Date: Wed, 17 Nov 2010 12:31:24 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Andi Kleen <andi@...stfloor.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Arjan van de Ven <arjan@...ux.intel.com>,
Andre@...stfloor.org, ltt-dev@...ts.casi.polymtl.ca,
linux-kernel@...r.kernel.org, pierre-marc.fournier@...ymtl.ca
Subject: Re: [ANNOUNCE] New tools: lttngtrace and lttngreport
* Andi Kleen (andi@...stfloor.org) wrote:
> Mathieu Desnoyers <mathieu.desnoyers@...icios.com> writes:
> >
> > --> Blocked in RUNNING, SYSCALL 142 [sys_select+0x0/0xc0], ([...], dur: 0.029567)
> > | --> Blocked in RUNNING, SYSCALL 168 [sys_poll+0x0/0xc0], ([...], dur: 1.187935)
> > | --- Woken up by an IRQ: IRQ 0 [timer]
> > --- Woken up in context of 7401 [gnome-power-man] in high-level state RUNNING
>
> Very nice! Now how can we get that with an unpatched kernel tree?
Well, I'm afraid the collection approach "trace" currently takes won't allow
this kind of wakeup dependency chain tracking: it focuses on tracing
operations that happen on a thread and its children, but in reality the
wakeup chains often spread outside of that scope.
This is why lttngtrace gathers a system-wide trace even though we're mostly
interested in the wait/wakeups of a specific PID.
Wakeup dependency analysis depends on a few key events to track these chains.
It's all described in Pierre-Marc Fournier's master's thesis and implemented as
an LTTV plugin. Don't worry about the French cover page; all of chapter 2
(pp. 40-74) is in English:
http://publications.polymtl.ca/117/1/2009_Pierre-MarcFournier.pdf
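To make the idea concrete, here is a minimal, hypothetical sketch of how a wakeup
dependency chain can be walked backwards from system-wide scheduler wakeup events.
The event names and field layout below are illustrative assumptions, not the actual
LTTng event format or the LTTV plugin code:

```python
# Hypothetical sketch: reconstruct a wakeup dependency chain from a
# system-wide trace. Event names/fields ("sched_wakeup", "target_pid",
# "waker_pid") are assumptions for illustration only.

def build_wakeup_chain(events, pid, max_depth=8):
    """Starting from the last wakeup of `pid`, follow who woke whom
    across the whole system (which is why a per-thread trace is not
    enough). Returns a list of (waker_pid, woken_pid, timestamp)."""
    # Keep only the most recent wakeup seen for each PID.
    # (A real analyzer would match wakeups to blocking intervals.)
    last_wakeup = {}
    for ev in events:  # events are assumed to be in timestamp order
        if ev["name"] == "sched_wakeup":
            last_wakeup[ev["target_pid"]] = (ev["waker_pid"], ev["ts"])

    chain = []
    current = pid
    for _ in range(max_depth):
        if current not in last_wakeup:
            break
        waker, ts = last_wakeup[current]
        chain.append((waker, current, ts))
        if waker == current:  # e.g. wakeup attributed to IRQ context
            break
        current = waker
    return chain
```

A system-wide trace is what makes this walk possible: the waker of the
waker is usually a PID outside the traced process's own subtree.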
The instrumentation it needs, excerpted from the thesis:
"We instrumented scheduling changes (and whether they are caused by preemption
or blocking), process wakeups, IRQ entry/exit, softIRQ entry/exit, trap
entry/exit and process forking. Additionally, special state dump events are
generated at the beginning of the trace that indicate in what control flow state
(see section 2.2.3.1) each process is initially. All this instrumentation is
kernel-based, no application instrumentation is necessary.
In order to display detailed information in reports, the Delay Analyzer also
consumes events that give the following information.
– Mapping between IRQ number and name of hardware using it
– Mapping between softIRQ number and the function handling it
– Mapping between a process ID and its name
– Arguments of the open() system call, in order to save the mapping between file
descriptor and file name
– Arguments of the read() system call, in order to know what file caused the
system call to block
– What file descriptors caused a poll() system call to wake up
Work is under way to instrument yet more specific system calls."
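As a rough illustration of the fd-to-filename mapping mentioned above, here is a
hypothetical sketch of how open()/close() syscall arguments can be consumed so
that a later blocking read(fd) can be reported with a file name. The event shapes
are assumptions, not the real LTTng payload:

```python
# Hypothetical sketch of per-process fd -> filename tracking, in the
# spirit of what the Delay Analyzer does with open() arguments.
# Event names/fields ("syscall_open", "ret_fd", ...) are illustrative.

def track_fds(events):
    """Build {pid: {fd: filename}} from open()/close() syscall events,
    assumed to be in timestamp order."""
    fd_table = {}
    for ev in events:
        if ev["name"] == "syscall_open":
            fd_table.setdefault(ev["pid"], {})[ev["ret_fd"]] = ev["filename"]
        elif ev["name"] == "syscall_close":
            fd_table.get(ev["pid"], {}).pop(ev["fd"], None)
    return fd_table

def blocking_file(fd_table, pid, fd):
    """Name of the file a blocked read(fd) refers to, if known."""
    return fd_table.get(pid, {}).get(fd, "<unknown fd %d>" % fd)
```

This is also why the state dump events at the start of the trace matter:
fds opened before tracing began would otherwise stay unresolved.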
Thanks,
Mathieu
--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com