Message-ID: <20090219213810.GB5084@nowhere>
Date: Thu, 19 Feb 2009 22:38:11 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Jason Baron <jbaron@...hat.com>, Ingo Molnar <mingo@...e.hu>,
Steven Rostedt <srostedt@...hat.com>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: [rfd] function-graph augmentation
On Thu, Feb 19, 2009 at 10:28:08PM +0100, Frederic Weisbecker wrote:
> On Thu, Feb 19, 2009 at 10:01:44PM +0100, Peter Zijlstra wrote:
> > Hi,
> >
> > I was thinking how best to augment the function graph tracer with
> > various information. It seemed useful to add argument/return tracer
> > entries which, when found after a function entry, or function exit entry
> > would be rendered in the trace.
> >
> > So supposing something like;
> >
> > 3) | handle_mm_fault() {
> > 3) | count_vm_event() {
> > 3) 0.243 us | test_ti_thread_flag();
> > 3) 0.754 us | }
> > 3) 0.249 us | pud_alloc();
> > 3) 0.251 us | pmd_alloc();
> > 3) | __do_fault() {
> > 3) | filemap_fault() {
> > 3) | find_lock_page() {
> > 3) | find_get_page() {
> > 3) 0.248 us | test_ti_thread_flag();
> > 3) 0.844 us | }
> > 3) 1.341 us | }
> > 3) 1.837 us | }
> > 3) 0.275 us | _spin_lock();
> > 3) 0.257 us | page_add_file_rmap();
> > 3) 0.233 us | native_set_pte_at();
> > 3) | _spin_unlock() {
> > 3) 0.248 us | test_ti_thread_flag();
> > 3) 0.742 us | }
> > 3) | unlock_page() {
> > 3) 0.243 us | page_waitqueue();
> > 3) 0.237 us | __wake_up_bit();
> > 3) 1.209 us | }
> > 3) 6.274 us | }
> > 3) 8.806 us | }
> >
> > Say we found:
> >
> > trace_graph_entry -- handle_mm_fault()
> > trace_func_arg -- address:0xffffffff
> > trace_func_arg -- write_access:1
> >
> > We'd render:
> >
> > 3) | handle_mm_fault(.address=0xffffffff, .write_access=1) {
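For concreteness, the new record types could look something like this (a rough
sketch only -- every name below is made up, none of it exists in the tracer
today; only the trace_entry header is reused):

struct ftrace_func_arg_entry {
	struct trace_entry	ent;		/* common header: pid, flags, ... */
	unsigned long		func;		/* ip of the augmented function */
	char			name[16];	/* e.g. "address", "write_access" */
	unsigned long		value;		/* raw argument value */
};

struct ftrace_func_retval_entry {
	struct trace_entry	ent;
	unsigned long		func;
	unsigned long		value;		/* raw return value, rendered as "} = <value>" */
};

On output, the graph tracer would peek at the records following a graph
entry/exit and fold any matching arg/retval records into the "func(...)" and
"} = ..." lines.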
>
>
> Good solution, except that I wonder about preemption races.
> Imagine the following scenario:
>
> CPU#0
> trace_graph_entry -> commit to ring_buffer CPU#0
>
> //entering the function
> //task is scheduled out and rescheduled later on CPU#1
>
> CPU#1
> trace_func_arg -> commit to ring_buffer CPU#1
>
> Later, in the graph output process:
>
> print("func(")
> search_for_args on the buffer but doesn't find them because they are in
> another cpu's ring_buffer.
>
> Well I guess it should be rare, but it can happen.
> Another race would be interrupts: an interrupt can fill a lot of entries
> between a trace entry and its parameters.
>
> And yet another thing: the ring buffer does not yet allow peeking more than one
> entry ahead. But I guess that wouldn't require a lot of change.
>
> The other solution would be a hashtable, keyed by function, of the functions whose
> parameters we want to store; it would tell us the number of parameters to record
> and their types. This way we could submit those parameter entries atomically and
> be more confident that they will follow the current one.
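A rough sketch of what that per-function table could look like (entirely
hypothetical names and layout, using a plain kernel hlist just for
illustration):

#include <linux/hash.h>
#include <linux/list.h>

#define FUNC_ARGS_MAX		6
#define FUNC_ARG_HASH_BITS	10

enum func_arg_type {
	FUNC_ARG_HEX,		/* default: print as 0x%lx */
	FUNC_ARG_DEC,
	FUNC_ARG_PTR,
};

struct func_arg_desc {
	struct hlist_node	node;
	unsigned long		ip;			/* function address = hash key */
	int			nr_args;
	const char		*names[FUNC_ARGS_MAX];
	enum func_arg_type	types[FUNC_ARGS_MAX];
};

static struct hlist_head func_arg_hash[1 << FUNC_ARG_HASH_BITS];

/*
 * Called from the function entry path: tells us how many arg entries to
 * reserve, so they can be committed together with the graph entry on the
 * local CPU buffer.
 */
static struct func_arg_desc *func_arg_lookup(unsigned long ip)
{
	struct hlist_head *head = &func_arg_hash[hash_long(ip, FUNC_ARG_HASH_BITS)];
	struct func_arg_desc *desc;
	struct hlist_node *n;

	hlist_for_each_entry(desc, n, head, node) {
		if (desc->ip == ip)
			return desc;
	}
	return NULL;
}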
>
>
> > trace_graph_return -- handle_mm_fault()
> > trace_func_ret -- 2
> >
> > We'd render:
> >
> > 3) 8.806 us | } = 2
> >
> > Then we can register with tracepoints inside functions to add these
> > generic trace_func_arg/_ret entries to augment the graph (and or
> > function) tracer.
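Purely to illustrate how such a hand-placed hook could feed the generic
entries -- trace_func_arg_emit() is a made-up helper and the tracepoint
wiring itself is left out:

#include <linux/mm.h>	/* handle_mm_fault() */

/* Hypothetical helper that reserves and commits one arg entry on the
 * local CPU ring buffer. */
extern void trace_func_arg_emit(unsigned long func, const char *name,
				unsigned long value);

/* Probe attached to a tracepoint placed at the top of handle_mm_fault():
 * it just forwards name/value pairs, which the graph output later folds
 * into "handle_mm_fault(.address=..., .write_access=...)". */
static void probe_mm_fault_args(unsigned long address, int write_access)
{
	trace_func_arg_emit((unsigned long)handle_mm_fault, "address", address);
	trace_func_arg_emit((unsigned long)handle_mm_fault, "write_access",
			    (unsigned long)write_access);
}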
>
>
> We had some vague discussions about the return value.
Well, storing the return value as an unsigned long is generic.
It will probably match most of the return types...
Or we can follow the function-hashtable idea for return types: for most functions
we store an unsigned long that we output in hex, and for the others we can store
special values depending on the type given for the specific entry (or why not a
callback called at output time).
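Something along those lines, maybe (again hypothetical names; trace_seq is
just what the output path already uses to build a line):

/* Optional pretty-printer for a function's return value, called at
 * output time; NULL means "just print the raw unsigned long in hex". */
typedef void (*func_ret_print_t)(struct trace_seq *s, unsigned long val);

struct func_ret_desc {
	unsigned long		ip;		/* function address */
	func_ret_print_t	print;
};

static void print_func_retval(struct trace_seq *s,
			      struct func_ret_desc *desc, unsigned long val)
{
	if (desc && desc->print)
		desc->print(s, val);			/* type-aware output */
	else
		trace_seq_printf(s, " = 0x%lx", val);	/* generic fallback */
}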
> The simplest would be to pick it up from the return-value register and send it
> with the rest of the trace. By default we can just print it as a hex value of
> unsigned long size. That would be generic...
>
> >
> > Does that make sense?
> >
>