Message-ID: <87d292w41p.fsf@sejong.aot.lge.com>
Date: Wed, 05 Nov 2014 15:32:34 +0900
From: Namhyung Kim <namhyung@...nel.org>
To: "Liang\, Kan" <kan.liang@...el.com>
Cc: "acme\@kernel.org" <acme@...nel.org>,
"jolsa\@kernel.org" <jolsa@...nel.org>,
"linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>,
"andi\@firstfloor.org" <andi@...stfloor.org>
Subject: Re: [PATCH 1/1] perf tools: perf diff for different binaries

Hi Kan,

On Tue, 4 Nov 2014 17:07:43 +0000, Kan Liang wrote:
> Hi Namhyung,
>
>> > tchain_edit [.] f1
>> > 0.14% 3.913444 tchain_edit [.] f2
>> > 99.82% 1.005478 tchain_edit [.] f3
>>
>> Hmm.. I think it should be the default behavior for perf diff; otherwise
>> -s symbol is almost meaningless IMHO.
>
> I think we need both an instruction-level and a function-level diff.
> For debugging scaling issues, we need to do deeper analysis of cache
> or lock issues, and the function level is too coarse a granularity
> for that.
>
> The new option can be used to debug scaling regression issues.
> If the binary/kernel is updated, it doesn't make sense to compare
> symbol addresses, since they will have changed. So comparing by
> function name should be more useful.
>
>
>> What about setting the
>> sort_sym.se_collapse in data_process() so that hists__match() can use
>> symbol names?
>
> Yes, we can set it if we only do a function-level diff. But I'd like to
> keep both, so I defined two sort keys:
> "symbol" means "symbol address executed at the time of sample"
> "symbol_name" means "name of the function executed at the time of sample"
Hmm.. I don't think the symbol sort key provides the instruction-level
diff that you want. If it finds a symbol, it just uses the start address
of the symbol, not the exact address of the sample. Am I missing
something?
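
To make the "start address" point concrete, here is roughly what I mean,
again as a simplified sketch with made-up names rather than the real perf
internals: two samples that hit different instructions of the same
function end up with the same key if only the symbol is compared, so the
diff cannot tell them apart.

    /*
     * Hypothetical sketch: what a symbol-based sort key effectively keys
     * on, vs. what an instruction-level diff would need to key on.
     */
    #include <stdio.h>

    struct fake_sym {
            unsigned long long start;   /* first address of the function */
            unsigned long long end;
            const char *name;
    };

    struct fake_sample {
            unsigned long long ip;      /* exact sampled instruction address */
            const struct fake_sym *sym; /* symbol the ip falls into */
    };

    /* What a symbol-based sort key effectively compares. */
    static unsigned long long key_symbol(const struct fake_sample *s)
    {
            return s->sym->start;
    }

    /* What an instruction-level diff would have to compare instead. */
    static unsigned long long key_instruction(const struct fake_sample *s)
    {
            return s->ip;
    }

    int main(void)
    {
            struct fake_sym f3 = { 0x400700, 0x400780, "f3" };
            /* Two samples at different instructions inside the same function. */
            struct fake_sample a = { 0x400710, &f3 };
            struct fake_sample b = { 0x400754, &f3 };

            /* Both collapse into one bucket under the symbol key... */
            printf("symbol key:      %#llx vs %#llx\n", key_symbol(&a), key_symbol(&b));
            /* ...but stay distinct if the exact sample address were used. */
            printf("instruction key: %#llx vs %#llx\n", key_instruction(&a), key_instruction(&b));
            return 0;
    }

So to get a true instruction-level diff, the exact sample address (or an
offset within the symbol) would have to be part of the key.
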
Thanks,
Namhyung