Message-ID: <5719D909.9020303@intel.com>
Date: Fri, 22 Apr 2016 10:55:53 +0300
From: Adrian Hunter <adrian.hunter@...el.com>
To: Chris Phlipot <cphlipot0@...il.com>, acme@...nel.org,
mingo@...hat.com, peterz@...radead.org
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/5] perf tools: fix incorrect ordering of callchain
entries
On 19/04/16 11:56, Chris Phlipot wrote:
> The existing implentation implementation of thread__resolve_callchain,
Remove 'implentation'
> under certain circumstanes, can assemble callchain entries in the
'circumstanes' -> 'circumstances'
> incorrect order.
>
> The callchain entries are resolved incorrectly for a sample when all
> of the following conditions are met:
>
> 1. callchain_param.order is set to ORDER_CALLER
>
> 2. thread__resolve_callchain_sample is able to resolve callchain entries
> for the sample.
>
> 3. unwind__get_entries is also able to resolve callchain entries for the
> sample.
>
> The fix is accomplished by reversing the order in which
> thread__resolve_callchain_sample and unwind__get_entries are called
> when callchain_param.order is set to ORDER_CALLER.
Can you give an example of the commands you used and what the call chain
looked like before and after?
Also please run ./scripts/checkpatch.pl
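For reference, checkpatch can be run from the kernel source tree against the
patch file before sending, e.g. (the patch file name below is just a
placeholder):

	./scripts/checkpatch.pl 0001-perf-tools-fix-callchain-ordering.patch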
>
> Unwind specific code from thread__resolve_callchain is also moved into a
> new static function to improve readability of the fix.
>
> Signed-off-by: Chris Phlipot <cphlipot0@...il.com>
> ---
> tools/perf/util/machine.c | 57 ++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 41 insertions(+), 16 deletions(-)
>
> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> index 0c4dabc..dd086c8 100644
> --- a/tools/perf/util/machine.c
> +++ b/tools/perf/util/machine.c
> @@ -1806,8 +1806,6 @@ static int thread__resolve_callchain_sample(struct thread *thread,
> int skip_idx = -1;
> int first_call = 0;
>
> - callchain_cursor_reset(cursor);
> -
> if (has_branch_callstack(evsel)) {
> err = resolve_lbr_callchain_sample(thread, cursor, sample, parent,
> root_al, max_stack);
> @@ -1918,20 +1916,12 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
> entry->map, entry->sym);
> }
>
> -int thread__resolve_callchain(struct thread *thread,
> - struct callchain_cursor *cursor,
> - struct perf_evsel *evsel,
> - struct perf_sample *sample,
> - struct symbol **parent,
> - struct addr_location *root_al,
> - int max_stack)
> -{
> - int ret = thread__resolve_callchain_sample(thread, cursor, evsel,
> - sample, parent,
> - root_al, max_stack);
> - if (ret)
> - return ret;
> -
> +static int thread__resolve_callchain_unwind(struct thread *thread,
> + struct callchain_cursor *cursor,
> + struct perf_evsel *evsel,
> + struct perf_sample *sample,
> + int max_stack)
> +{
> /* Can we do dwarf post unwind? */
> if (!((evsel->attr.sample_type & PERF_SAMPLE_REGS_USER) &&
> (evsel->attr.sample_type & PERF_SAMPLE_STACK_USER)))
> @@ -1944,7 +1934,42 @@ int thread__resolve_callchain(struct thread *thread,
>
> return unwind__get_entries(unwind_entry, cursor,
> thread, sample, max_stack);
> +}
> +
> +int thread__resolve_callchain(struct thread *thread,
> + struct callchain_cursor *cursor,
> + struct perf_evsel *evsel,
> + struct perf_sample *sample,
> + struct symbol **parent,
> + struct addr_location *root_al,
> + int max_stack)
> +{
> + int ret = 0;
> + callchain_cursor_reset(&callchain_cursor);
> +
> + if (callchain_param.order == ORDER_CALLEE) {
> + ret = thread__resolve_callchain_sample(thread, cursor,
> + evsel, sample,
> + parent, root_al,
> + max_stack);
> + if (ret)
> + return ret;
> + ret = thread__resolve_callchain_unwind(thread, cursor,
> + evsel, sample,
> + max_stack);
> + } else {
> + ret = thread__resolve_callchain_unwind(thread, cursor,
> + evsel, sample,
> + max_stack);
> + if (ret)
> + return ret;
> + ret = thread__resolve_callchain_sample(thread, cursor,
> + evsel, sample,
> + parent, root_al,
> + max_stack);
> + }
>
> + return ret;
> }
>
> int machine__for_each_thread(struct machine *machine,
>
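To illustrate why the order of the two resolvers matters, here is a minimal
standalone sketch (this is not perf code - the resolver stubs and frame names
below are made-up stand-ins for thread__resolve_callchain_sample() and the new
thread__resolve_callchain_unwind()): each resolver only appends to the shared
cursor, so whichever one runs first contributes the first part of the chain,
and for ORDER_CALLER the unwound user-space (caller side) entries have to go
in before the sampled kernel (callee side) entries.

/*
 * Standalone illustration only - not perf code.  resolve_sample() and
 * resolve_unwind() stand in for the two real resolvers; the frame names
 * are hypothetical.
 */
#include <stdio.h>

enum order { ORDER_CALLEE, ORDER_CALLER };

struct cursor {
	const char *entry[16];
	int nr;
};

static void cursor_append(struct cursor *c, const char *sym)
{
	if (c->nr < 16)
		c->entry[c->nr++] = sym;
}

/* Kernel-side frames from the sampled callchain (hypothetical). */
static int resolve_sample(struct cursor *c, enum order order)
{
	if (order == ORDER_CALLEE) {
		cursor_append(c, "vfs_write");	/* callee (leaf) first */
		cursor_append(c, "sys_write");
	} else {
		cursor_append(c, "sys_write");	/* caller first */
		cursor_append(c, "vfs_write");
	}
	return 0;
}

/* User-side frames from dwarf post unwind (hypothetical). */
static int resolve_unwind(struct cursor *c, enum order order)
{
	if (order == ORDER_CALLEE) {
		cursor_append(c, "libc_write");
		cursor_append(c, "main");
	} else {
		cursor_append(c, "main");
		cursor_append(c, "libc_write");
	}
	return 0;
}

/*
 * Mirror of the patched logic: for ORDER_CALLER the unwind entries must
 * be appended before the sampled kernel entries, otherwise the two halves
 * of the chain end up swapped.
 */
static int resolve(struct cursor *c, enum order order)
{
	int ret;

	if (order == ORDER_CALLEE) {
		ret = resolve_sample(c, order);
		if (ret)
			return ret;
		return resolve_unwind(c, order);
	}

	ret = resolve_unwind(c, order);
	if (ret)
		return ret;
	return resolve_sample(c, order);
}

int main(void)
{
	struct cursor c = { .nr = 0 };
	int i;

	resolve(&c, ORDER_CALLER);
	for (i = 0; i < c.nr; i++)
		printf("%s\n", c.entry[i]);	/* main, libc_write, sys_write, vfs_write */
	return 0;
}

With the pre-patch ordering (sample resolver always first) the same
ORDER_CALLER run would print sys_write, vfs_write, main, libc_write, i.e. the
kernel and user halves of the chain in the wrong order.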