Message-ID: <e2a2eb762f9d373dd5cc5b0f4b8f41e2984a0b49.camel@intel.com>
Date: Mon, 17 Nov 2025 19:47:11 +0000
From: "Falcon, Thomas" <thomas.falcon@...el.com>
To: "james.clark@...aro.org" <james.clark@...aro.org>,
"alexander.shishkin@...ux.intel.com" <alexander.shishkin@...ux.intel.com>,
"ashelat@...hat.com" <ashelat@...hat.com>, "ravi.bangoria@....com"
<ravi.bangoria@....com>, "peterz@...radead.org" <peterz@...radead.org>,
"acme@...nel.org" <acme@...nel.org>, "mingo@...hat.com" <mingo@...hat.com>,
"Hunter, Adrian" <adrian.hunter@...el.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "namhyung@...nel.org" <namhyung@...nel.org>,
"jolsa@...nel.org" <jolsa@...nel.org>, "howardchu95@...il.com"
<howardchu95@...il.com>, "irogers@...gle.com" <irogers@...gle.com>,
"linux-perf-users@...r.kernel.org" <linux-perf-users@...r.kernel.org>,
"quic_zhonhan@...cinc.com" <quic_zhonhan@...cinc.com>
Subject: Re: [PATCH v1 3/3] perf evsel: Skip store_evsel_ids for
non-perf-event PMUs
On Fri, 2025-11-14 at 14:05 -0800, Ian Rogers wrote:
> The IDs are associated with perf events and are not applicable to
> non-perf-event PMUs. The failure to generate the IDs was causing perf
> stat record to fail.
>
Hi Ian, looks good to me.

Tested-by: Thomas Falcon <thomas.falcon@...el.com>

Here's the output on my Alder Lake:
% sudo ./perf stat record -a sleep 1

Performance counter stats for 'system wide':

3,485 context-switches # 144.8 cs/sec cs_per_second
24,075.55 msec cpu-clock # 24.0 CPUs CPUs_utilized
206 cpu-migrations # 8.6 migrations/sec migrations_per_second
678 page-faults # 28.2 faults/sec page_faults_per_second
1,508,292 cpu_core/branch-misses/ # 2.1 % branch_miss_rate
73,298,298 cpu_core/branches/ # 3.0 M/sec branch_frequency
319,787,502 cpu_core/cpu-cycles/ # 0.0 GHz cycles_frequency
366,691,216 cpu_core/instructions/ # 1.1 instructions insn_per_cycle
455,948 cpu_atom/branch-misses/ # 1.6 % branch_miss_rate (49.87%)
28,573,057 cpu_atom/branches/ # 1.2 M/sec branch_frequency (49.98%)
235,791,714 cpu_atom/cpu-cycles/ # 0.0 GHz cycles_frequency (50.07%)
158,014,230 cpu_atom/instructions/ # 0.7 instructions insn_per_cycle (50.15%)
TopdownL1 (cpu_core) # 8.1 % tma_bad_speculation
# 37.0 % tma_frontend_bound
# 36.6 % tma_backend_bound
# 18.2 % tma_retiring
TopdownL1 (cpu_atom) # 51.0 % tma_backend_bound (59.99%)
# 21.4 % tma_frontend_bound (59.90%)
# 11.5 % tma_bad_speculation
# 16.2 % tma_retiring (59.82%)

1.003087466 seconds time elapsed

% sudo ./perf stat report

Performance counter stats for '/home/tfalcon/perf-tools-next/tools/perf/perf':

1,005,135,862 duration_time
<not counted> duration_time
<not counted> duration_time
<not counted> duration_time
<not counted> duration_time
<not counted> duration_time
3,485 context-switches
24,075.55 msec cpu-clock
206 cpu-migrations
678 page-faults
1,508,292 cpu_core/branch-misses/
73,299,004 cpu_core/branches/
73,298,298 cpu_core/branches/
319,787,502 cpu_core/cpu-cycles/
319,799,050 cpu_core/cpu-cycles/
366,691,216 cpu_core/instructions/
<not counted> cpu_core/cpu-cycles/
<not counted> cpu_core/stalled-cycles-frontend/
<not counted> cpu_core/cpu-cycles/
<not counted> cpu_core/stalled-cycles-backend/
<not counted> cpu_core/stalled-cycles-backend/
<not counted> cpu_core/instructions/
<not counted> cpu_core/stalled-cycles-frontend/
455,948 cpu_atom/branch-misses/ (49.87%)
29,378,879 cpu_atom/branches/ (49.87%)
28,573,057 cpu_atom/branches/ (49.98%)
235,791,714 cpu_atom/cpu-cycles/ (50.07%)
231,878,974 cpu_atom/cpu-cycles/ (50.15%)
158,014,230 cpu_atom/instructions/ (50.15%)
<not counted> cpu_atom/cpu-cycles/
<not counted> cpu_atom/stalled-cycles-frontend/
<not counted> cpu_atom/cpu-cycles/
<not counted> cpu_atom/stalled-cycles-backend/
<not counted> cpu_atom/stalled-cycles-backend/
<not counted> cpu_atom/instructions/
<not counted> cpu_atom/stalled-cycles-frontend/
2,082,641 cpu_core/INT_MISC.UOP_DROPPING/
1,895,277,552 cpu_core/TOPDOWN.SLOTS/
345,024,206 cpu_core/topdown-retiring/
152,096,310 cpu_core/topdown-bad-spec/
704,850,131 cpu_core/topdown-fe-bound/
695,054,658 cpu_core/topdown-be-bound/
231,791,063 cpu_atom/CPU_CLK_UNHALTED.CORE/ (60.09%)
590,930,101 cpu_atom/TOPDOWN_BE_BOUND.ALL/ (59.99%)
247,501,143 cpu_atom/TOPDOWN_FE_BOUND.ALL/ (59.90%)
187,767,093 cpu_atom/TOPDOWN_RETIRING.ALL/ (59.82%)

1.003087466 seconds time elapsed

Some events weren't counted. Try disabling the NMI watchdog:
echo 0 > /proc/sys/kernel/nmi_watchdog
perf stat ...
echo 1 > /proc/sys/kernel/nmi_watchdog
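
For what it's worth, the workaround perf is suggesting above would look
roughly like this wrapped around the recorded run (I haven't re-run it
that way here, and writing the sysctl needs root):

  # free up the counter the NMI watchdog pins, record, then restore it
  echo 0 | sudo tee /proc/sys/kernel/nmi_watchdog
  sudo ./perf stat record -a sleep 1
  sudo ./perf stat report
  echo 1 | sudo tee /proc/sys/kernel/nmi_watchdog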
Thanks,
Tom
> ```
> $ perf stat record -a sleep 1
>
> Performance counter stats for 'system wide':
>
> 47,941 context-switches # nan cs/sec cs_per_second
> 0.00 msec cpu-clock # 0.0 CPUs CPUs_utilized
> 3,261 cpu-migrations # nan migrations/sec migrations_per_second
> 516 page-faults # nan faults/sec page_faults_per_second
> 7,525,483 cpu_core/branch-misses/ # 2.3 % branch_miss_rate
> 322,069,004 cpu_core/branches/ # nan M/sec branch_frequency
> 1,895,684,291 cpu_core/cpu-cycles/ # nan GHz cycles_frequency
> 2,789,777,426 cpu_core/instructions/ # 1.5 instructions insn_per_cycle
> 7,074,765 cpu_atom/branch-misses/ # 3.2 % branch_miss_rate (49.89%)
> 224,225,412 cpu_atom/branches/ # nan M/sec branch_frequency (50.29%)
> 2,061,679,981 cpu_atom/cpu-cycles/ # nan GHz cycles_frequency (50.33%)
> 2,011,242,533 cpu_atom/instructions/ # 1.0 instructions insn_per_cycle (50.33%)
> TopdownL1 (cpu_core) # 9.0 % tma_bad_speculation
> # 28.3 % tma_frontend_bound
> # 35.2 % tma_backend_bound
> # 27.5 % tma_retiring
> TopdownL1 (cpu_atom) # 36.8 % tma_backend_bound (59.65%)
> # 22.8 % tma_frontend_bound (59.60%)
> # 11.6 % tma_bad_speculation
> # 28.8 % tma_retiring (59.59%)
>
> 1.006777519 seconds time elapsed
>
> $ perf stat report
>
> Performance counter stats for 'perf':
>
> 1,013,376,154 duration_time
> <not counted> duration_time
> <not counted> duration_time
> <not counted> duration_time
> <not counted> duration_time
> <not counted> duration_time
> 47,941 context-switches
> 0.00 msec cpu-clock
> 3,261 cpu-migrations
> 516 page-faults
> 7,525,483 cpu_core/branch-misses/
> 322,069,814 cpu_core/branches/
> 322,069,004 cpu_core/branches/
> 1,895,684,291 cpu_core/cpu-cycles/
> 1,895,679,209 cpu_core/cpu-cycles/
> 2,789,777,426 cpu_core/instructions/
> <not counted> cpu_core/cpu-cycles/
> <not counted> cpu_core/stalled-cycles-frontend/
> <not counted> cpu_core/cpu-cycles/
> <not counted> cpu_core/stalled-cycles-backend/
> <not counted> cpu_core/stalled-cycles-backend/
> <not counted> cpu_core/instructions/
> <not counted> cpu_core/stalled-cycles-frontend/
> 7,074,765 cpu_atom/branch-misses/ (49.89%)
> 221,679,088 cpu_atom/branches/ (49.89%)
> 224,225,412 cpu_atom/branches/ (50.29%)
> 2,061,679,981 cpu_atom/cpu-cycles/ (50.33%)
> 2,016,259,567 cpu_atom/cpu-cycles/ (50.33%)
> 2,011,242,533 cpu_atom/instructions/ (50.33%)
> <not counted> cpu_atom/cpu-cycles/
> <not counted> cpu_atom/stalled-cycles-frontend/
> <not counted> cpu_atom/cpu-cycles/
> <not counted> cpu_atom/stalled-cycles-backend/
> <not counted> cpu_atom/stalled-cycles-backend/
> <not counted> cpu_atom/instructions/
> <not counted> cpu_atom/stalled-cycles-frontend/
> 17,145,113 cpu_core/INT_MISC.UOP_DROPPING/
> 10,594,226,100 cpu_core/TOPDOWN.SLOTS/
> 2,919,021,401 cpu_core/topdown-retiring/
> 943,101,838 cpu_core/topdown-bad-spec/
> 3,031,152,533 cpu_core/topdown-fe-bound/
> 3,739,756,791 cpu_core/topdown-be-bound/
> 1,909,501,648 cpu_atom/CPU_CLK_UNHALTED.CORE/ (60.04%)
> 3,516,608,359 cpu_atom/TOPDOWN_BE_BOUND.ALL/ (59.65%)
> 2,179,403,876 cpu_atom/TOPDOWN_FE_BOUND.ALL/ (59.60%)
> 2,745,732,458 cpu_atom/TOPDOWN_RETIRING.ALL/ (59.59%)
>
> 1.006777519 seconds time elapsed
>
> Some events weren't counted. Try disabling the NMI watchdog:
> echo 0 > /proc/sys/kernel/nmi_watchdog
> perf stat ...
> echo 1 > /proc/sys/kernel/nmi_watchdog
> ```
>
> Reported-by: James Clark <james.clark@...aro.org>
> Closes: https://lore.kernel.org/lkml/ca0f0cd3-7335-48f9-8737-2f70a75b019a@linaro.org/
> Signed-off-by: Ian Rogers <irogers@...gle.com>
> ---
> I looked into adding metrics to perf stat report but there would be a
> merge conflict with:
> https://lore.kernel.org/lkml/20251113180517.44096-1-irogers@google.com/
> so holding off for now.
> ---
> tools/perf/util/evsel.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
> index 989c56d4a23f..aee42666e882 100644
> --- a/tools/perf/util/evsel.c
> +++ b/tools/perf/util/evsel.c
> @@ -3974,6 +3974,9 @@ static int store_evsel_ids(struct evsel *evsel, struct evlist *evlist)
> if (evsel__is_retire_lat(evsel))
> return 0;
>
> + if (perf_pmu__kind(evsel->pmu) != PERF_PMU_KIND_PE)
> + return 0;
> +
> for (cpu_map_idx = 0; cpu_map_idx < xyarray__max_x(evsel->core.fd); cpu_map_idx++) {
> for (thread = 0; thread < xyarray__max_y(evsel->core.fd);
> thread++) {
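
Just to spell out the pattern for anyone skimming the thread: the guard
makes store_evsel_ids() succeed trivially for events whose PMU isn't
backed by a kernel perf_event, since there are no IDs to read for them.
A stand-alone sketch of that shape (everything below is made up for
illustration; fake_evsel, pmu_kind and store_ids are not the perf-tools
API):

	#include <stdio.h>

	/* Hypothetical stand-ins, not the real perf-tools types. */
	enum pmu_kind { PMU_KIND_PERF_EVENT, PMU_KIND_TOOL };

	struct fake_evsel {
		const char *name;
		enum pmu_kind kind;
	};

	/*
	 * Same shape as the guard above: a non-perf-event PMU has no IDs
	 * to record, so return success without storing anything rather
	 * than failing the whole perf stat record run.
	 */
	static int store_ids(const struct fake_evsel *evsel)
	{
		if (evsel->kind != PMU_KIND_PERF_EVENT)
			return 0;	/* nothing to store, not an error */

		printf("storing ids for %s\n", evsel->name);
		return 0;
	}

	int main(void)
	{
		struct fake_evsel tool = { "duration_time", PMU_KIND_TOOL };
		struct fake_evsel core = { "cpu_core/cpu-cycles/", PMU_KIND_PERF_EVENT };

		store_ids(&tool);	/* skipped quietly */
		store_ids(&core);	/* would store IDs */
		return 0;
	}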