Message-ID: <ZZcW8Zk02wPbpXJI@kernel.org>
Date: Thu, 4 Jan 2024 17:37:05 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: Ian Rogers <irogers@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Kan Liang <kan.liang@...ux.intel.com>,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
Edward Baker <edward.baker@...el.com>
Subject: Re: [PATCH v1 1/4] perf vendor events intel: Alderlake/rocketlake
metric fixes
On Thu, Jan 04, 2024 at 05:56:22AM -0800, Ian Rogers wrote:
> On Thu, Jan 4, 2024 at 4:39 AM Arnaldo Carvalho de Melo <acme@...nel.org> wrote:
> > On Wed, Jan 03, 2024 at 11:42:56PM -0800, Ian Rogers wrote:
> > > Fix two uncore events that incorrectly specified the core PMU. Specify
> > > a PMU for the alderlake UNCORE_FREQ metric.
<SNIP>
> > 101: perf all metricgroups test : Ok
> > 102: perf all metrics test : FAILED!
> > 107: perf metrics value validation : Ok
> > 102 is now failing due to some other problem:
> > root@...ber:~# perf test -v 102
> > 102: perf all metrics test :
> > --- start ---
> > test child forked, pid 2701034
> > Testing tma_core_bound
> > Testing tma_info_core_ilp
<SNIP>
> > Testing tma_memory_fence
> > Metric 'tma_memory_fence' not printed in:
> > # Running 'internals/synthesize' benchmark:
> > Computing performance of single threaded perf event synthesis by
> > synthesizing events on the perf process itself:
> > Average synthesis took: 49.458 usec (+- 0.033 usec)
> > Average num. events: 47.000 (+- 0.000)
> > Average time per event 1.052 usec
> > Average data synthesis took: 53.268 usec (+- 0.027 usec)
> > Average num. events: 244.000 (+- 0.000)
> > Average time per event 0.218 usec
> > Performance counter stats for 'perf bench internals synthesize':
> > <not counted> cpu_core/TOPDOWN.SLOTS/ (0.00%)
> > <not counted> cpu_core/topdown-retiring/ (0.00%)
> > <not counted> cpu_core/topdown-mem-bound/ (0.00%)
> > <not counted> cpu_core/topdown-bad-spec/ (0.00%)
> > <not counted> cpu_core/topdown-fe-bound/ (0.00%)
> > <not counted> cpu_core/topdown-be-bound/ (0.00%)
> > <not counted> cpu_core/RESOURCE_STALLS.SCOREBOARD/ (0.00%)
> > <not counted> cpu_core/EXE_ACTIVITY.1_PORTS_UTIL/ (0.00%)
> > <not counted> cpu_core/EXE_ACTIVITY.BOUND_ON_LOADS/ (0.00%)
> > <not counted> cpu_core/MISC2_RETIRED.LFENCE/ (0.00%)
> > <not counted> cpu_core/CYCLE_ACTIVITY.STALLS_TOTAL/ (0.00%)
> > <not counted> cpu_core/CPU_CLK_UNHALTED.THREAD/ (0.00%)
> > <not counted> cpu_core/ARITH.DIV_ACTIVE/ (0.00%)
> > <not counted> cpu_core/EXE_ACTIVITY.2_PORTS_UTIL,umask=0xc/ (0.00%)
> > <not counted> cpu_core/EXE_ACTIVITY.3_PORTS_UTIL,umask=0x80/ (0.00%)
> > 1.177929044 seconds time elapsed
> > 0.434552000 seconds user
> > 0.736874000 seconds sys
> > Testing tma_port_1
<SNIP>
> > test child finished with -1
> > ---- end ----
> > perf all metrics test: FAILED!
> > root@...ber:~#
> Try disabling the NMI watchdog. Agreed that there is more to
That did the trick; I added this to the cset log message:
--------------------------------------- 8< ----------------------------
Test 102 is failing for another reason: it cannot get as many counters
as it needs. Ian Rogers suggested disabling the NMI watchdog to make
more counters available:
root@...ber:/home/acme# cat /proc/sys/kernel/nmi_watchdog
1
root@...ber:/home/acme# echo 0 > /proc/sys/kernel/nmi_watchdog
root@...ber:/home/acme# perf test 102
102: perf all metrics test : Ok
root@...ber:/home/acme#
--------------------------------------- 8< ----------------------------
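The NMI watchdog pins one hardware counter for its own perf event, which is why freeing it lets the metric's events schedule. As a sketch (assuming root and a kernel exposing the `kernel.nmi_watchdog` sysctl), the same toggle can be done via sysctl and undone after testing:

```shell
# 1 = NMI watchdog enabled (one hardware counter is pinned for it).
cat /proc/sys/kernel/nmi_watchdog

# sysctl equivalent of the echo above; takes effect immediately.
sysctl -w kernel.nmi_watchdog=0

# Re-enable the watchdog once testing is done.
sysctl -w kernel.nmi_watchdog=1
```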
- Arnaldo
> fix here but I think the PMU driver is in part to blame because
> manually breaking the weak group of events is a fix. Fwiw, if we
> switch to the buddy watchdog mechanism then we'll no longer need to
> disable the NMI watchdog:
> https://lore.kernel.org/lkml/20230421155255.1.I6bf789d21d0c3d75d382e7e51a804a7a51315f2c@changeid/
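On the group-breaking point: perf's event syntax has a weak-group modifier, so a group that cannot be co-scheduled (for instance because the watchdog holds a counter) falls back to counting the events separately instead of reporting <not counted>. A minimal sketch with generic event names (not the metric's actual event list):

```shell
# ':W' marks the group as weak: if the PMU cannot schedule all three
# events together, perf stat breaks the group and counts each event
# on its own instead of failing the whole group.
perf stat -e '{cycles,instructions,branches}:W' -- sleep 1
```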