Message-ID: <20180727070623.GA24770@krava>
Date: Fri, 27 Jul 2018 09:06:23 +0200
From: Jiri Olsa <jolsa@...hat.com>
To: rodia@...istici.org
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
Jiri Olsa <jolsa@...nel.org>,
lkml <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
David Ahern <dsahern@...il.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH] perf c2c report: Fix crash for empty browser
On Thu, Jul 26, 2018 at 11:31:34PM +0000, rodia@...istici.org wrote:
> On 2018-07-26 19:30, Arnaldo Carvalho de Melo wrote:
> > On Tue, Jul 24, 2018 at 08:20:08AM +0200, Jiri Olsa wrote:
> > > Do not try to display entry details if there aren't
> > > any. Currently this ends up in a crash:
> > > $ perf c2c report
> > > perf: Segmentation fault
> >
> > How to replicate this?
> >
> > I tried:
> >
> > $ perf record sleep 1
> > $ perf c2c report
> >
> > But it didn't segfault
>
> Similarly I have tried :
> $ perf record sleep 1
> $ perf c2c report
> Then Press `d` to show the cache-line contents.
yep, sorry, I forgot to mention you need to press 'd' to show the details
> This replicates the segfault on my machine (4.17.8-1).
> The patch mentioned above should solve it, even though I am not sure, as I
> haven't been able to recompile the kernel.
no need to recompile the kernel
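
(perf itself is just a userspace tool that lives in the kernel tree,
so rebuilding only it is enough to pick up the patch:

  $ cd tools/perf
  $ make
  $ ./perf c2c report

then press 'd' again to check the crash is gone)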
>
> The segfault itself seems to come from the report logic, which did not
> expect to report on an empty browser.
> What puzzled me is that the applications I have been testing with rely
> on multiple threads instantiated through pthread, which should count as
> user-level threads, right? But they still seem to return an empty
> browser.
right, c2c scans read/write accesses and tries to find false sharing
cases; maybe there was nothing to be found
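
if you want a synthetic workload that c2c is guaranteed to flag, two
threads updating adjacent fields of one cache line is the classic
false sharing case; a minimal sketch (the file name and loop count are
made up for illustration, not from this thread):

  /* false_sharing.c - two threads write adjacent fields that live in
   * the same cache line, so each store invalidates the other core's
   * copy and perf c2c reports the line as HITM traffic.
   * build: gcc -O2 -pthread false_sharing.c -o false_sharing
   */
  #include <pthread.h>
  #include <stdio.h>

  static struct {
          volatile long a;  /* written by thread 1 */
          volatile long b;  /* same cache line as 'a', written by thread 2 */
  } line;

  static void *worker_a(void *arg)
  {
          (void)arg;
          for (long i = 0; i < 100000000; i++)
                  line.a++;
          return NULL;
  }

  static void *worker_b(void *arg)
  {
          (void)arg;
          for (long i = 0; i < 100000000; i++)
                  line.b++;
          return NULL;
  }

  int main(void)
  {
          pthread_t t1, t2;

          pthread_create(&t1, NULL, worker_a, NULL);
          pthread_create(&t2, NULL, worker_b, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          printf("%ld %ld\n", line.a, line.b);
          return 0;
  }

record it the same way as below and the line should show up in the
report:

  $ perf c2c record --all-user -- ./false_sharing
  $ perf c2c report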
> When c2c is instead run system-wide, with an application running
> multiple threads like firefox or julia, cache hits are measured and
> also traced back to the source code.
I got a cacheline (attached) for 'perf bench sched messaging',
NOT traced system-wide, just in user space (if you also trace
kernel space you'll get plenty of detected cachelines):
jirka
---
[root@...va perf]# ./perf c2c record --all-user -- ./perf bench sched messaging -l 100000
[root@...va perf]# ./perf c2c report --stdio
=================================================
          Shared Data Cache Line Table
=================================================
#
#        ----------- Cacheline ----------    Total      Tot  ----- LLC Load Hitm -----  ---- Store Reference ---- --- Load Dram ----       LLC    Total  ----- Core Load Hit -----  -- LLC Load Hit --
# Index             Address  Node  PA cnt  records     Hitm    Total      Lcl      Rmt    Total    L1Hit   L1Miss      Lcl       Rmt   Ld Miss    Loads       FB       L1       L2       Llc       Rmt
# .....  ..................  ....  ......  .......  .......  .......  .......  .......  .......  .......  .......  .......  ........  ........  .......  .......  .......  .......  ........  ........
#
      0      0x7fff5b729cc0     0       1       44  100.00%        1        1        0       21       21        0        2         0         2       23        0        0        9        11         0
=================================================
      Shared Cache Line Distribution Pareto
=================================================
#
#        ----- HITM -----  -- Store Refs --  --------- Data address ---------                               ---------- cycles ----------    Total       cpu                               Shared
#   Num      Rmt      Lcl   L1 Hit  L1 Miss              Offset  Node  PA cnt      Pid        Code address  rmt hitm  lcl hitm      load  records       cnt           Symbol              Object            Source:Line  Node
# .....  .......  .......  .......  .......  ..................  ....  ......  .......  ..................  ........  ........  ........  .......  ........  ...............  ..................  .....................  ....
#
-------------------------------------------------------------
      0        0        1       21        0      0x7fff5b729cc0
-------------------------------------------------------------
           0.00%  100.00%    0.00%    0.00%                0x38     0       1    17356      0x7febaf7e1a46         0       142       101       23         4  [.] __libc_read  libpthread-2.27.so              read.c:28     0
           0.00%    0.00%  100.00%    0.00%                0x38     0       1    17356            0x494e4e         0         0         0       21         4     [.] receiver                perf  sched-messaging.c:129     0