Message-ID: <d2fc66e5-d33a-0ed2-e46b-6cfbc4a941a6@linux.intel.com>
Date: Tue, 28 Apr 2020 16:10:12 +0800
From: "Jin, Yao" <yao.jin@...ux.intel.com>
To: Jiri Olsa <jolsa@...hat.com>
Cc: acme@...nel.org, jolsa@...nel.org, peterz@...radead.org,
mingo@...hat.com, alexander.shishkin@...ux.intel.com,
Linux-kernel@...r.kernel.org, ak@...ux.intel.com,
kan.liang@...el.com, yao.jin@...el.com
Subject: Re: [PATCH v3 0/7] perf: Stream comparison

Hi Jiri,

On 4/27/2020 6:10 PM, Jiri Olsa wrote:
> On Mon, Apr 20, 2020 at 09:04:44AM +0800, Jin Yao wrote:
>
> SNIP
>
>> compute_flag div.c:25                compute_flag div.c:25
>> compute_flag div.c:22                compute_flag div.c:22
>> main div.c:40                        main div.c:40
>> main div.c:40                        main div.c:40
>> main div.c:39                        main div.c:39*
>>
>> [ Hot chains in old perf data only ]
>>
>> hot chain 1:
>> cycles: 2, hits: 4.08%
>> --------------------------
>> main div.c:42
>> compute_flag div.c:28
>>
>> [ Hot chains in new perf data only ]
>>
>> hot chain 1:
>> cycles: 36, hits: 3.36%
>> --------------------------
>> __random_r random_r.c:357
>> __random random.c:293
>> __random random.c:293
>> __random random.c:291
>> __random random.c:291
>> __random random.c:291
>> __random random.c:288
>> rand rand.c:27
>> rand rand.c:26
>> rand@plt
>> rand@plt
>> compute_flag div.c:25
>> compute_flag div.c:22
>> main div.c:40
>> main div.c:40
>>
>> Now we can see that the following stream pair is moved to another section,
>> "[ Hot chains in old perf data but source line changed (*) in new perf data ]":
>>
>> cycles: 1, hits: 26.80%              cycles: 1, hits: 27.30%
>> ---------------------------          --------------------------
>> main div.c:39                        main div.c:39*
>> main div.c:44                        main div.c:44
>>
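
To spell out the rule for that "(*)" section: the two chains still match
function by function, but at least one source line differs. A rough
stand-alone illustration of the comparison (made-up struct and function
names for illustration only, not the code from this series):

#include <stdbool.h>
#include <string.h>

/* Illustrative only -- not the structures used by perf diff --stream. */
struct chain_entry {
        const char *func;       /* e.g. "main"     */
        const char *srcline;    /* e.g. "div.c:39" */
};

enum chain_match {
        CHAIN_NO_MATCH,          /* different call paths                      */
        CHAIN_MATCH,             /* same functions, same source lines         */
        CHAIN_MATCH_SRC_CHANGED, /* same functions, source line changed ("*") */
};

/* Compare the entries of an old/new hot chain pair, top down. */
static enum chain_match compare_chains(const struct chain_entry *old_c,
                                       const struct chain_entry *new_c,
                                       int nr)
{
        bool src_changed = false;
        int i;

        for (i = 0; i < nr; i++) {
                if (strcmp(old_c[i].func, new_c[i].func))
                        return CHAIN_NO_MATCH;
                if (strcmp(old_c[i].srcline, new_c[i].srcline))
                        src_changed = true;
        }

        return src_changed ? CHAIN_MATCH_SRC_CHANGED : CHAIN_MATCH;
}
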
>
>
> so I tried the following:
>
> # ./perf record -e cycles:u -b ./perf bench sched pipe
> # ./perf record -e cycles:u -b ./perf bench sched pipe
> # ./perf diff -f --stream --before $PWD --after $PWD >out 2>&1
>
> and the out file looks like this:
>
> [ Matched hot chains between old perf data and new perf data ]
>
> [ Hot chains in old perf data but source line changed (*) in new perf data ]
>
> [ Hot chains in old perf data only ]
>
> hot chain 1:
> cycles: 0, hits: 4.20%
> --------------------------
> 0xffffffff89c00163
>
> hot chain 2:
> cycles: 0, hits: 4.11%
> --------------------------
> 0xffffffff89c00163
>
> hot chain 3:
> cycles: 0, hits: 8.22%
> --------------------------
> 0xffffffff89c00163
>
> hot chain 4:
> cycles: 0, hits: 5.54%
> --------------------------
> 0xffffffff89c00163
>
> hot chain 5:
> cycles: 0, hits: 6.10%
> --------------------------
> 0xffffffff89c00163
>
> [ Hot chains in new perf data only ]
>
> hot chain 1:
> cycles: 0, hits: 5.21%
> --------------------------
> 0xffffffff89c00163
>
> hot chain 2:
> cycles: 0, hits: 4.79%
> --------------------------
> 0xffffffff89c00163
>
> hot chain 3:
> cycles: 0, hits: 5.44%
> --------------------------
> 0xffffffff89c00163
>
> hot chain 4:
> cycles: 0, hits: 5.50%
> --------------------------
> 0xffffffff89c00163
>
> hot chain 5:
> cycles: 0, hits: 7.14%
> --------------------------
> 0xffffffff89c00163
>
>
> I'd expected more common paths, from what I can see from 'perf report --branch-history'
> on both perf.data and perf.data.old
>
> jirka
>

I used the same command lines, but I can see more callchain entries:

perf record -e cycles:u -b perf bench sched pipe
perf record -e cycles:u -b perf bench sched pipe
perf diff --stream

[ Matched hot chains between old perf data and new perf data ]

hot chain pair 1:
cycles: 0, hits: 7.95%               cycles: 0, hits: 6.61%
---------------------------          --------------------------
__libc_read read.c:27                __libc_read read.c:27
0xffffffffa9800163                   0xffffffffa9800163

[ Hot chains in old perf data but source line changed (*) in new perf data ]

[ Hot chains in old perf data only ]

hot chain 1:
cycles: 49, hits: 4.98%
--------------------------
worker_thread sched-pipe.c:64
worker_thread sched-pipe.c:63
__libc_read read.c:28
__libc_read read.c:27
0xffffffffa9800163

hot chain 2:
cycles: 0, hits: 6.68%
--------------------------
0xffffffffa9800163

hot chain 3:
cycles: 0, hits: 6.57%
--------------------------
0xffffffffa9800163

hot chain 4:
cycles: 60, hits: 5.20%
--------------------------
worker_thread sched-pipe.c:67
worker_thread sched-pipe.c:60
worker_thread sched-pipe.c:70
worker_thread sched-pipe.c:70
__libc_read read.c:28
__libc_read read.c:27
0xffffffffa9800163

[ Hot chains in new perf data only ]

hot chain 1:
cycles: 68, hits: 7.83%
--------------------------
worker_thread sched-pipe.c:68
__libc_write write.c:28
__libc_write write.c:27
0xffffffffa9800163
__libc_write write.c:27
write@plt
write@plt
worker_thread sched-pipe.c:67
worker_thread sched-pipe.c:60
worker_thread sched-pipe.c:70
worker_thread sched-pipe.c:70
__libc_read read.c:28

hot chain 2:
cycles: 70, hits: 4.34%
--------------------------
worker_thread unistd.h:44
worker_thread sched-pipe.c:61
worker_thread sched-pipe.c:65
__libc_write write.c:28
__libc_write write.c:27
0xffffffffa9800163
__libc_write write.c:27
write@plt
write@plt
worker_thread sched-pipe.c:64
worker_thread sched-pipe.c:63
__libc_read read.c:28

hot chain 3:
cycles: 0, hits: 5.67%
--------------------------
0xffffffffa9800163

hot chain 4:
cycles: 0, hits: 5.47%
--------------------------
0xffffffffa9800163

It's interesting that some leaked kernel addresses are displayed in the
callchains even though we use -e cycles:u. It should be the skid issue. I
have a patch for processing the leaked kernel samples, but I have not
posted it yet.

But I have no idea why only the leaked kernel address is displayed in your
example. :(
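
For illustration only (this is not that unposted patch, just a stand-alone
sketch of one way to handle it): when the event excluded kernel space, the
leaked entries could be dropped by checking each address against the kernel
start address. In perf itself the boundary would come from the machine's
kernel map rather than a value passed in like this.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Stand-alone sketch.  With skid, a cycles:u sample can carry a kernel
 * address (e.g. right after a user->kernel transition).  Drop sample or
 * branch-stack addresses that fall at or above the kernel start address.
 */
static bool is_kernel_addr(uint64_t ip, uint64_t kernel_start)
{
        return ip >= kernel_start;
}

/* Filter a list of addresses in place; returns the new count. */
static size_t drop_leaked_kernel_addrs(uint64_t *ips, size_t nr,
                                       uint64_t kernel_start)
{
        size_t out = 0, i;

        for (i = 0; i < nr; i++) {
                if (!is_kernel_addr(ips[i], kernel_start))
                        ips[out++] = ips[i];
        }

        return out;
}
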
Thanks
Jin Yao