Message-ID: <20201223221747.GB236568@krava>
Date: Wed, 23 Dec 2020 23:17:47 +0100
From: Jiri Olsa <jolsa@...hat.com>
To: Arnaldo Carvalho de Melo <acme@...nel.org>,
James Clark <james.clark@....com>
Cc: linux-perf-users@...r.kernel.org,
John Garry <john.garry@...wei.com>,
linux-kernel@...r.kernel.org, namhyung@...nel.org,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
Linuxarm <linuxarm@...wei.com>
Subject: Re: [PATCH v6 00/12] perf tools: fix perf stat with large socket IDs
On Fri, Dec 04, 2020 at 11:48:36AM +0000, John Garry wrote:
> On 03/12/2020 15:39, Jiri Olsa wrote:
>
>
> > On Thu, Nov 26, 2020 at 04:13:16PM +0200, James Clark wrote:
> > > Changes since v5:
> > > * Fix test for cpu_map__get_die() by shifting id before testing.
> > > * Fix test for cpu_map__get_socket() by not using cpu_map__id_to_socket()
> > > which is only valid in CPU aggregation mode.
> > >
> > > James Clark (12):
> > > perf tools: Improve topology test
> > > perf tools: Use allocator for perf_cpu_map
> > > perf tools: Add new struct for cpu aggregation
> > > perf tools: Replace aggregation ID with a struct
> > > perf tools: add new map type for aggregation
> > > perf tools: drop in cpu_aggr_map struct
> > > perf tools: Start using cpu_aggr_id in map
> > > perf tools: Add separate node member
> > > perf tools: Add separate socket member
> > > perf tools: Add separate die member
> > > perf tools: Add separate core member
> > > perf tools: Add separate thread member
> >
> > Acked-by: Jiri Olsa <jolsa@...hat.com>
> >
>
> Tested-by: John Garry <john.garry@...wei.com>
hi,
I was wondering where this went, and noticed that
Arnaldo was not CC-ed on the cover letter ;-)
jirka
>
> I still think that vendors (like us) need to fix/improve their firmware
> tables so that we don't get silly big numbers for socket/package IDs, like
> S5418-D0, below:
>
> $./perf stat -a --per-die
>
> Performance counter stats for 'system wide':
>
> S36-D0 48 72,216.31 msec cpu-clock # 47.933 CPUs utilized
> S36-D0 48 174 context-switches # 0.002 K/sec
> S36-D0 48 48 cpu-migrations # 0.001 K/sec
> S36-D0 48 0 page-faults # 0.000 K/sec
> S36-D0 48 7,991,698 cycles # 0.000 GHz
> S36-D0 48 4,750,040 instructions # 0.59 insn per cycle
> S36-D0 1 <not supported> branches
> S36-D0 48 32,928 branch-misses # 0.00% of all branches
> S5418-D0 48 72,189.54 msec cpu-clock # 47.915 CPUs utilized
> S5418-D0 48 176 context-switches # 0.002 K/sec
> S5418-D0 48 48 cpu-migrations # 0.001 K/sec
> S5418-D0 48 0 page-faults # 0.000 K/sec
> S5418-D0 48 5,677,218 cycles # 0.000 GHz
> S5418-D0 48 3,872,285 instructions # 0.68 insn per cycle
> S5418-D0 1 <not supported> branches
> S5418-D0 48 29,208 branch-misses # 0.00% of all branches
>
> 1.506615297 seconds time elapsed
>
> but at least it works now. Thanks.
>
> >
> > >
> > > tools/perf/builtin-stat.c | 128 ++++++++++++------------
> > > tools/perf/tests/topology.c | 64 ++++++++++--
> > > tools/perf/util/cpumap.c | 171 ++++++++++++++++++++++-----------
> > > tools/perf/util/cpumap.h | 55 ++++++-----
> > > tools/perf/util/stat-display.c | 102 ++++++++++++--------
> > > tools/perf/util/stat.c | 2 +-
> > > tools/perf/util/stat.h | 9 +-
> > > 7 files changed, 337 insertions(+), 194 deletions(-)
> > >
> > > --
> > > 2.28.0
> > >
> >
> > .
> >
>