Message-ID: <571AED68.6070508@gmail.com>
Date: Fri, 22 Apr 2016 20:35:04 -0700
From: Chris Phlipot <cphlipot0@...il.com>
To: Adrian Hunter <adrian.hunter@...el.com>, acme@...nel.org,
mingo@...hat.com, peterz@...radead.org
Cc: linux-kernel@...r.kernel.org, Jiri Olsa <jolsa@...nel.org>
Subject: Re: [PATCH 1/5] perf tools: fix incorrect ordering of callchain
entries
Modifying perf script to print call trees in the opposite order, or
applying patch 2 from this series and comparing the output of
export-to-postgresql.py, are the easiest ways to see the bug; however,
it can also be seen in current builds using perf report.
Here is how I can reproduce the bug using perf report:
$ ./perf record --call-graph=dwarf stress -c 1 -t 5
When I run this command:
$ ./perf report --call-graph=flat,0,0,callee
I see this call tree in the output, which looks correct (callee
order):
gen8_irq_handler
handle_irq_event_percpu
handle_irq_event
handle_edge_irq
handle_irq
do_IRQ
ret_from_intr
__random
rand
0x558f2a04dded
0x558f2a04c774
__libc_start_main
0x558f2a04dcd9
Now I run this command:
$ ./perf report --call-graph=flat,0,0,caller
With caller order I would expect to see the exact reverse of the above
(with "0x558f2a04dcd9" at the top and "gen8_irq_handler" at the
bottom), but that call tree is nowhere to be found in the output.
Instead, you see this:
ret_from_intr
do_IRQ
handle_irq
handle_edge_irq
handle_irq_event
handle_irq_event_percpu
gen8_irq_handler
0x558f2a04dcd9
__libc_start_main
0x558f2a04c774
0x558f2a04dded
rand
__random
Notice how the kernel symbols are reversed within their segment and the
user-space symbols are reversed within theirs, but the kernel segment as
a whole still appears above the user-space segment.
If you apply this patch and re-run, you will see the expected output
(with "0x558f2a04dcd9" at the top and "gen8_irq_handler" at the bottom):
0x558f2a04dcd9
__libc_start_main
0x558f2a04c774
0x558f2a04dded
rand
__random
ret_from_intr
do_IRQ
handle_irq
handle_edge_irq
handle_irq_event
handle_irq_event_percpu
gen8_irq_handler
Thanks,
Chris
On 04/22/2016 12:59 AM, Adrian Hunter wrote:
> +Jiri since he wrote the original code
>
> On 22/04/16 10:55, Adrian Hunter wrote:
>> On 19/04/16 11:56, Chris Phlipot wrote:
>>> The existing implentation implementation of thread__resolve_callchain,
>> Remove 'implentation'
>>
>>> under certain circumstanes, can assemble callchain entries in the
>> 'circumstanes' -> 'circumstances'
>>
>>> incorrect order.
>>>
>>> The callchain entries are resolved incorrectly for a sample when all
>>> of the following conditions are met:
>>>
>>> 1. callchain_param.order is set to ORDER_CALLER
>>>
>>> 2. thread__resolve_callchain_sample is able to resolve callchain entries
>>> for the sample.
>>>
>>> 3. unwind__get_entries is also able to resolve callchain entries for the
>>> sample.
>>>
>>> The fix is accomplished by reversing the order in which
>>> thread__resolve_callchain_sample and unwind__get_entries are called
>>> when callchain_param.order is set to ORDER_CALLER.
>> Can you give an example of the commands you used and what the call chain
>> looked like before and after.
>>
>> Also please run ./scripts/checkpatch.pl
>>
>>> Unwind specific code from thread__resolve_callchain is also moved into a
>>> new static function to improve readability of the fix.
>>>
>>> Signed-off-by: Chris Phlipot <cphlipot0@...il.com>
>>> ---
>>> tools/perf/util/machine.c | 57 ++++++++++++++++++++++++++++++++++-------------
>>> 1 file changed, 41 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
>>> index 0c4dabc..dd086c8 100644
>>> --- a/tools/perf/util/machine.c
>>> +++ b/tools/perf/util/machine.c
>>> @@ -1806,8 +1806,6 @@ static int thread__resolve_callchain_sample(struct thread *thread,
>>> int skip_idx = -1;
>>> int first_call = 0;
>>>
>>> - callchain_cursor_reset(cursor);
>>> -
>>> if (has_branch_callstack(evsel)) {
>>> err = resolve_lbr_callchain_sample(thread, cursor, sample, parent,
>>> root_al, max_stack);
>>> @@ -1918,20 +1916,12 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
>>> entry->map, entry->sym);
>>> }
>>>
>>> -int thread__resolve_callchain(struct thread *thread,
>>> - struct callchain_cursor *cursor,
>>> - struct perf_evsel *evsel,
>>> - struct perf_sample *sample,
>>> - struct symbol **parent,
>>> - struct addr_location *root_al,
>>> - int max_stack)
>>> -{
>>> - int ret = thread__resolve_callchain_sample(thread, cursor, evsel,
>>> - sample, parent,
>>> - root_al, max_stack);
>>> - if (ret)
>>> - return ret;
>>> -
>>> +static int thread__resolve_callchain_unwind(struct thread *thread,
>>> + struct callchain_cursor *cursor,
>>> + struct perf_evsel *evsel,
>>> + struct perf_sample *sample,
>>> + int max_stack)
>>> +{
>>> /* Can we do dwarf post unwind? */
>>> if (!((evsel->attr.sample_type & PERF_SAMPLE_REGS_USER) &&
>>> (evsel->attr.sample_type & PERF_SAMPLE_STACK_USER)))
>>> @@ -1944,7 +1934,42 @@ int thread__resolve_callchain(struct thread *thread,
>>>
>>> return unwind__get_entries(unwind_entry, cursor,
>>> thread, sample, max_stack);
>>> +}
>>> +
>>> +int thread__resolve_callchain(struct thread *thread,
>>> + struct callchain_cursor *cursor,
>>> + struct perf_evsel *evsel,
>>> + struct perf_sample *sample,
>>> + struct symbol **parent,
>>> + struct addr_location *root_al,
>>> + int max_stack)
>>> +{
>>> + int ret = 0;
>>> + callchain_cursor_reset(&callchain_cursor);
>>> +
>>> + if (callchain_param.order == ORDER_CALLEE) {
>>> + ret = thread__resolve_callchain_sample(thread, cursor,
>>> + evsel, sample,
>>> + parent, root_al,
>>> + max_stack);
>>> + if (ret)
>>> + return ret;
>>> + ret = thread__resolve_callchain_unwind(thread, cursor,
>>> + evsel, sample,
>>> + max_stack);
>>> + } else {
>>> + ret = thread__resolve_callchain_unwind(thread, cursor,
>>> + evsel, sample,
>>> + max_stack);
>>> + if (ret)
>>> + return ret;
>>> + ret = thread__resolve_callchain_sample(thread, cursor,
>>> + evsel, sample,
>>> + parent, root_al,
>>> + max_stack);
>>> + }
>>>
>>> + return ret;
>>> }
>>>
>>> int machine__for_each_thread(struct machine *machine,
>>>
>>