Message-ID: <aRPd-7nVSrKEwUDN@google.com>
Date: Tue, 11 Nov 2025 17:08:11 -0800
From: Namhyung Kim <namhyung@...nel.org>
To: Ian Rogers <irogers@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
James Clark <james.clark@...aro.org>, Xu Yang <xu.yang_2@....com>,
Chun-Tse Shao <ctshao@...gle.com>,
Thomas Richter <tmricht@...ux.ibm.com>,
Sumanth Korikkar <sumanthk@...ux.ibm.com>,
Collin Funk <collin.funk1@...il.com>,
Thomas Falcon <thomas.falcon@...el.com>,
Howard Chu <howardchu95@...il.com>,
Dapeng Mi <dapeng1.mi@...ux.intel.com>,
Levi Yun <yeoreum.yun@....com>,
Yang Li <yang.lee@...ux.alibaba.com>, linux-kernel@...r.kernel.org,
linux-perf-users@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>,
Weilin Wang <weilin.wang@...el.com>
Subject: Re: [PATCH v4 00/18]
On Tue, Nov 11, 2025 at 03:13:35PM -0800, Ian Rogers wrote:
> On Tue, Nov 11, 2025 at 2:42 PM Namhyung Kim <namhyung@...nel.org> wrote:
> >
> > On Tue, Nov 11, 2025 at 01:21:48PM -0800, Ian Rogers wrote:
> > > Prior to this series, stat-shadow would produce hard-coded metrics
> > > if certain events appeared in the evlist. This series produces
> > > equivalent json metrics and cleans up the consequences in tests and
> > > display output. A before and after of the default display output on
> > > a Tigerlake is:
> > >
> > > Before:
> > > ```
> > > $ perf stat -a sleep 1
> > >
> > > Performance counter stats for 'system wide':
> > >
> > > 16,041,816,418 cpu-clock # 15.995 CPUs utilized
> > > 5,749 context-switches # 358.376 /sec
> > > 121 cpu-migrations # 7.543 /sec
> > > 1,806 page-faults # 112.581 /sec
> > > 825,965,204 instructions # 0.70 insn per cycle
> > > 1,180,799,101 cycles # 0.074 GHz
> > > 168,945,109 branches # 10.532 M/sec
> > > 4,629,567 branch-misses # 2.74% of all branches
> > > # 30.2 % tma_backend_bound
> > > # 7.8 % tma_bad_speculation
> > > # 47.1 % tma_frontend_bound
> > > # 14.9 % tma_retiring
> > > ```
> > >
> > > After:
> > > ```
> > > $ perf stat -a sleep 1
> > >
> > > Performance counter stats for 'system wide':
> > >
> > > 2,890 context-switches # 179.9 cs/sec cs_per_second
> > > 16,061,923,339 cpu-clock # 16.0 CPUs CPUs_utilized
> > > 43 cpu-migrations # 2.7 migrations/sec migrations_per_second
> > > 5,645 page-faults # 351.5 faults/sec page_faults_per_second
> > > 5,708,413 branch-misses # 1.4 % branch_miss_rate (88.83%)
> > > 429,978,120 branches # 26.8 M/sec branch_frequency (88.85%)
> > > 1,626,915,897 cpu-cycles # 0.1 GHz cycles_frequency (88.84%)
> > > 2,556,805,534 instructions # 1.5 instructions insn_per_cycle (88.86%)
> > > TopdownL1 # 20.1 % tma_backend_bound
> > > # 40.5 % tma_bad_speculation (88.90%)
> > > # 17.2 % tma_frontend_bound (78.05%)
> > > # 22.2 % tma_retiring (88.89%)
> > >
> > > 1.002994394 seconds time elapsed
> > > ```
> > >
> > > Having the metrics in json brings greater uniformity, allows events
> > > to be shared by metrics, and also allows descriptions like:
> > > ```
> > > $ perf list cs_per_second
> > > ...
> > > cs_per_second
> > > [Context switches per CPU second]
> > > ```
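> > >
> > > For illustration, the json definition behind such a metric might look
> > > roughly like this (field names follow the pmu-events metric schema;
> > > the exact expression, group and placement are my sketch, not the
> > > patch contents - cpu-clock counts nanoseconds unscaled, hence the
> > > 1e9 divisor):
> > > ```
> > > {
> > >     "BriefDescription": "Context switches per CPU second",
> > >     "MetricExpr": "context\\-switches / (cpu\\-clock / 1e9)",
> > >     "MetricGroup": "Default",
> > >     "MetricName": "cs_per_second",
> > >     "ScaleUnit": "1cs/sec"
> > > }
> > > ```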
> > >
> > > A thorn in the side of doing this work was that the hard-coded
> > > metrics were used by perf script with '-F metric'. This functionality
> > > didn't work for me (I was testing `perf record -e instructions,cycles`
> > > with/without leader sampling and then `perf script -F metric`, but
> > > saw nothing but empty lines), but I decided to fix it to the best of
> > > my ability in this series. The script-side counters were removed and
> > > the regular counters associated with the evsel are used instead. All
> > > json metrics are searched for ones whose events are a subset of those
> > > in the perf script session, and every matching metric is printed.
> > > This is kind of weird as the counters are being set from the sample
> > > periods, but I carried the behavior forward. I suspect follow-up work
> > > is needed to make this better, but what is in the series is superior
> > > to what is currently in the tree. Follow-up work could include
> > > finding metrics for the machine in the perf.data rather than using
> > > the host, allowing multiple metrics even if the metric ids of the
> > > events differ, fixing pre-existing `perf stat record/report` issues,
> > > etc.
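> > >
> > > As a usage sketch with the series applied (the ':S' modifier requests
> > > leader sampling; which metrics actually get printed depends on the
> > > events present in the perf.data):
> > > ```
> > > $ perf record -e '{instructions,cycles}:S' -a sleep 1
> > > $ perf script -F metric
> > > ```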
> > >
> > > There are a lot of stat tests that, for example, assume '-e
> > > instructions,cycles' will produce an IPC metric. These needed tidying
> > > as the metric must now be explicitly requested, and when doing this
> > > metrics using software events were preferred to increase
> > > compatibility. As the test updates were numerous they are kept
> > > distinct from the patches updating the functionality, causing periods
> > > in the series where not all tests pass. If this is undesirable the
> > > test fixes can be squashed into the functionality updates, but this
> > > will be kind of messy, especially as at some points in the series
> > > both the old metrics and the new metrics will be displayed.
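> > >
> > > For example, instead of relying on '-e instructions,cycles' to
> > > implicitly produce IPC, a test would now request the metric
> > > explicitly, something like (metric name taken from the output above):
> > > ```
> > > $ perf stat -M insn_per_cycle -a sleep 1
> > > ```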
> > >
> > > v4: K/sec to M/sec on branch frequency (Namhyung), make perf script
> > >     -F metric do a system-wide calculation (Namhyung), and don't
> > >     crash when the CPU map index couldn't be found. Regenerated the
> > >     commit messages, but cpu-clock was always yielding 0 on my
> > >     machine, leading to a lot of nan metric values.
> >
> > This is strange. The cpu-clock should not be 0 as long as you ran it.
> > Do you think it's related to the scale unit change? I tested v3 and
> > didn't see the problem.
>
> It looked like a kernel issue. The raw counts were 0 before being
> scaled. All metrics always work on unscaled values. It only affects
> the commit messages, where the formatting is more important than the
> numeric values - which were correct for a cpu-clock of 0.
Hmm.. ok. I don't see the problem when I test the series, so it may be
a problem in your environment.
Thanks,
Namhyung