Message-ID: <aQwyMFRvk0gZg88v@google.com>
Date: Wed, 5 Nov 2025 21:29:20 -0800
From: Namhyung Kim <namhyung@...nel.org>
To: Ian Rogers <irogers@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
James Clark <james.clark@...aro.org>, Xu Yang <xu.yang_2@....com>,
Chun-Tse Shao <ctshao@...gle.com>,
Thomas Richter <tmricht@...ux.ibm.com>,
Sumanth Korikkar <sumanthk@...ux.ibm.com>,
Collin Funk <collin.funk1@...il.com>,
Thomas Falcon <thomas.falcon@...el.com>,
Howard Chu <howardchu95@...il.com>,
Dapeng Mi <dapeng1.mi@...ux.intel.com>,
Levi Yun <yeoreum.yun@....com>,
Yang Li <yang.lee@...ux.alibaba.com>, linux-kernel@...r.kernel.org,
linux-perf-users@...r.kernel.org
Subject: Re: [PATCH v1 00/22] Switch the default perf stat metrics to json
On Mon, Nov 03, 2025 at 09:09:14PM -0800, Ian Rogers wrote:
> On Mon, Nov 3, 2025 at 8:47 PM Namhyung Kim <namhyung@...nel.org> wrote:
> >
> > Hi Ian,
> >
> > On Fri, Oct 24, 2025 at 10:58:35AM -0700, Ian Rogers wrote:
> > > Prior to this series stat-shadow would produce hard coded metrics if
> > > certain events appeared in the evlist. This series produces equivalent
> > > json metrics and cleans up the consequences in tests and display
> > > output. A before and after of the default display output on a
> > > tigerlake is:
> > >
> > > Before:
> > > ```
> > > $ perf stat -a sleep 1
> > >
> > > Performance counter stats for 'system wide':
> > >
> > > 16,041,816,418 cpu-clock # 15.995 CPUs utilized
> > > 5,749 context-switches # 358.376 /sec
> > > 121 cpu-migrations # 7.543 /sec
> > > 1,806 page-faults # 112.581 /sec
> > > 825,965,204 instructions # 0.70 insn per cycle
> > > 1,180,799,101 cycles # 0.074 GHz
> > > 168,945,109 branches # 10.532 M/sec
> > > 4,629,567 branch-misses # 2.74% of all branches
> > > # 30.2 % tma_backend_bound
> > > # 7.8 % tma_bad_speculation
> > > # 47.1 % tma_frontend_bound
> > > # 14.9 % tma_retiring
> > > ```
> > >
> > > After:
> > > ```
> > > $ perf stat -a sleep 1
> > >
> > > Performance counter stats for 'system wide':
> > >
> > > 2,890 context-switches # 179.9 cs/sec cs_per_second
> > > 16,061,923,339 cpu-clock # 16.0 CPUs CPUs_utilized
> > > 43 cpu-migrations # 2.7 migrations/sec migrations_per_second
> > > 5,645 page-faults # 351.5 faults/sec page_faults_per_second
> > > 5,708,413 branch-misses # 1.4 % branch_miss_rate (88.83%)
> > > 429,978,120 branches # 26.8 K/sec branch_frequency (88.85%)
> > > 1,626,915,897 cpu-cycles # 0.1 GHz cycles_frequency (88.84%)
> > > 2,556,805,534 instructions # 1.5 instructions insn_per_cycle (88.86%)
> > > TopdownL1 # 20.1 % tma_backend_bound
> > > # 40.5 % tma_bad_speculation (88.90%)
> > > # 17.2 % tma_frontend_bound (78.05%)
> > > # 22.2 % tma_retiring (88.89%)
> > >
> > > 1.002994394 seconds time elapsed
> > > ```
> >
> > While this looks nicer, I worry about the changes in the output. And I'm
> > curious why only the "After" output shows the multiplexing percent.
> >
> > >
> > > Having the metrics in json brings greater uniformity, allows events to
> > > be shared by metrics, and allows descriptions like:
> > > ```
> > > $ perf list cs_per_second
> > > ...
> > > cs_per_second
> > > [Context switches per CPU second]
> > > ```
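> > >
> > > For reference, such a metric is defined by a json entry along these
> > > lines (an illustrative sketch, not the exact definition from the
> > > series):
> > > ```
> > > {
> > >     "MetricName": "cs_per_second",
> > >     "BriefDescription": "Context switches per CPU second",
> > >     "MetricExpr": "context\\-switches / (cpu\\-clock / 1e9)",
> > >     "MetricGroup": "Default",
> > >     "ScaleUnit": "1cs/sec"
> > > }
> > > ```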
> > >
> > > A thorn in the side of doing this work was that the hard coded metrics
> > > were used by perf script with '-F metric'. This functionality didn't
> > > work for me (I was testing `perf record -e instructions,cycles` and
> > > then `perf script -F metric` but saw nothing but empty lines)
> >
> > The documentation says:
> >
> > With the metric option perf script can compute metrics for
> > sampling periods, similar to perf stat. This requires
> > specifying a group with multiple events defining metrics with the :S option
> > for perf record. perf will sample on the first event, and
> > print computed metrics for all the events in the group. Please note
> > that the metric computed is averaged over the whole sampling
> > period (since the last sample), not just for the sample point.
> >
> > So I guess it should have 'S' modifiers in a group.
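> > i.e. something like this (an untested sketch):
> >
> > ```
> > $ perf record -e '{instructions,cycles}:S' -a sleep 1
> > $ perf script -F metric
> > ```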
>
> Thanks Namhyung. Yes, this is the silly behavior where a leader sample
> event is both treated as an event and has its constituent parts turned
> into individual events with their periods set to the leader's sample
> read counts. Most recently this behavior was disabled via struct
> perf_tool's dont_split_sample_group in the case of perf inject, as it
> causes events to be processed multiple times. The perf script behavior
> doesn't rely anywhere on the grouping of the leader sample events, and
> even with the grouping the metric format option doesn't work - I'll
> spare you a screen full of blank lines here.
Right, it seems to have been broken at some point.
>
> > > but anyway I decided to fix it to the best of my ability in this
> > > series. The script-side counters were removed and the regular ones
> > > associated with the evsel are used instead. All json metrics are
> > > searched for ones whose events are a subset of those in the perf
> > > script session, and every matching metric is printed. This is kind of
> > > weird as the counters are being set from the periods of samples, but I
> > > carried the behavior forward. I suspect there needs to be follow-up
> > > work to make this better, but what is in the series is superior to
> > > what is currently in the tree. Follow-up work could include finding
> > > metrics for the machine in the perf.data rather than using the host,
> > > allowing multiple metrics even if the metric ids of the events differ,
> > > fixing pre-existing `perf stat record/report` issues, etc.
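> > >
> > > For example, with this series the earlier commands:
> > > ```
> > > $ perf record -e instructions,cycles
> > > $ perf script -F metric
> > > ```
> > > should now print matching json metrics, e.g. insn_per_cycle, computed
> > > from the sample periods.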
> > >
> > > There are a lot of stat tests that, for example, assume '-e
> > > instructions,cycles' will produce an IPC metric. These needed tidying
> > > as the metric must now be explicitly asked for, and when doing this
> > > metrics using software events were preferred to increase
> > > compatibility. As the test updates were numerous they are kept
> > > distinct from the patches updating the functionality, causing points
> > > in the series where not all tests pass. If this is undesirable the
> > > test fixes can be squashed into the functionality updates.
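> > >
> > > For example, a test that relied on '-e instructions,cycles' implying
> > > IPC now needs to request the metric explicitly, along the lines of:
> > > ```
> > > $ perf stat -M insn_per_cycle -a sleep 1
> > > ```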
> >
> > Hmm.. how many of them? I think it'd be better to have the test changes
> > at the same time so that we can verify the test success count after the
> > change. Can the test changes be squashed into one or two commits?
>
> So the patches are below. The first set are all cleanups:
>
> > > Ian Rogers (22):
> > > perf evsel: Remove unused metric_events variable
> > > perf metricgroup: Update comment on location of metric_event list
> > > perf metricgroup: Missed free on error path
> > > perf metricgroup: When copy metrics copy default information
> > > perf metricgroup: Add care to picking the evsel for displaying a
> > > metric
> > > perf jevents: Make all tables static
I've applied most of this part to perf-tools-next and will take a look
at the others later.
Thanks,
Namhyung
>
> Then there is the addition of the legacy metrics as json:
>
> > > perf expr: Add #target_cpu literal
> > > perf jevents: Add set of common metrics based on default ones
> > > perf jevents: Add metric DefaultShowEvents
> > > perf stat: Add detail -d,-dd,-ddd metrics
>
> Then there is the change to make perf script metric format work:
>
> > > perf script: Change metric format to use json metrics
>
> Then there is a clean up patch:
>
> > > perf stat: Remove hard coded shadow metrics
>
> Then there are fixes to perf stat's already broken output:
>
> > > perf stat: Fix default metricgroup display on hybrid
> > > perf stat: Sort default events/metrics
> > > perf stat: Remove "unit" workarounds for metric-only
>
> Then there are 7 patches updating test expectations. Each patch deals
> with a separate test to make the resolution clear.
>
> > > perf test stat+json: Improve metric-only testing
> > > perf test stat: Ignore failures in Default[234] metricgroups
> > > perf test stat: Update std_output testing metric expectations
> > > perf test metrics: Update all metrics for possibly failing default
> > > metrics
> > > perf test stat: Update shadow test to use metrics
> > > perf test stat: Update test expectations and events
> > > perf test stat csv: Update test expectations and events
>
> The patch "perf jevents: Add set of common metrics based on default
> ones" most impacts the output but we don't want to verify the default
> stat output with the hardcoded metrics that are removed in "perf stat:
> Remove hard coded shadow metrics". Having a test for both hard coded
> and json metrics in an intermediate state makes little sense and the
> default output is impacting by the 3 patches fixing it and removing
> workarounds.
>
> It is possible to squash things together but I think something is lost
> in doing so, hence presenting it this way.
>
> Thanks,
> Ian