Message-ID: <CAP-5=fVDF4-qYL1Lm7efgiHk7X=_nw_nEFMBZFMcsnOOJgX4Kg@mail.gmail.com>
Date: Thu, 15 May 2025 15:35:41 -0700
From: Ian Rogers <irogers@...gle.com>
To: "Liang, Kan" <kan.liang@...ux.intel.com>, Namhyung Kim <namhyung@...nel.org>
Cc: Weilin Wang <weilin.wang@...el.com>, Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Arnaldo Carvalho de Melo <acme@...nel.org>, Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>, Ravi Bangoria <ravi.bangoria@....com>,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] perf pmu intel: Adjust cpumasks for sub-NUMA clusters
on graniterapids
On Thu, May 15, 2025 at 2:01 PM Liang, Kan <kan.liang@...ux.intel.com> wrote:
>
> On 2025-05-15 2:14 p.m., Ian Rogers wrote:
> > On graniterapids the cache home agent (CHA) and memory controller
> > (IMC) PMUs all have their cpumasks set to per-socket information. In
> > order for per-NUMA-node aggregation to work correctly, the PMUs'
> > cpumasks need to be set to the CPUs of the relevant sub-NUMA grouping.
> >
> > For example, on a 2-socket graniterapids machine with sub-NUMA
> > clustering of 3 (SNC-3), the uncore_cha and uncore_imc PMUs' cpumask
> > is "0,120", leading to aggregation only on NUMA nodes 0 and 3:
> > ```
> > $ perf stat --per-node -e 'UNC_CHA_CLOCKTICKS,UNC_M_CLOCKTICKS' -a sleep 1
> >
> > Performance counter stats for 'system wide':
> >
> > N0 1 277,835,681,344 UNC_CHA_CLOCKTICKS
> > N0 1 19,242,894,228 UNC_M_CLOCKTICKS
> > N3 1 277,803,448,124 UNC_CHA_CLOCKTICKS
> > N3 1 19,240,741,498 UNC_M_CLOCKTICKS
> >
> > 1.002113847 seconds time elapsed
> > ```
> >
> > By updating the PMUs' cpumasks to "0,120", "40,160" and "80,200",
> > the correct 6 NUMA node aggregations are achieved:
> > ```
> > $ perf stat --per-node -e 'UNC_CHA_CLOCKTICKS,UNC_M_CLOCKTICKS' -a sleep 1
> >
> > Performance counter stats for 'system wide':
> >
> > N0 1 92,748,667,796 UNC_CHA_CLOCKTICKS
> > N0 0 6,424,021,142 UNC_M_CLOCKTICKS
> > N1 0 92,753,504,424 UNC_CHA_CLOCKTICKS
> > N1 1 6,424,308,338 UNC_M_CLOCKTICKS
> > N2 0 92,751,170,084 UNC_CHA_CLOCKTICKS
> > N2 0 6,424,227,402 UNC_M_CLOCKTICKS
> > N3 1 92,745,944,144 UNC_CHA_CLOCKTICKS
> > N3 0 6,423,752,086 UNC_M_CLOCKTICKS
> > N4 0 92,725,793,788 UNC_CHA_CLOCKTICKS
> > N4 1 6,422,393,266 UNC_M_CLOCKTICKS
> > N5 0 92,717,504,388 UNC_CHA_CLOCKTICKS
> > N5 0 6,421,842,618 UNC_M_CLOCKTICKS
>
> Is the second column the number of units?
> If so, it's wrong.
>
> On my GNR with SNC-2, I observed a similar issue.
>
> $ sudo ./perf stat -e 'UNC_M_CLOCKTICKS' --per-node -a sleep 1
> Performance counter stats for 'system wide':
>
> N0 0 6,405,811,284 UNC_M_CLOCKTICKS
> N1 1 6,405,895,988 UNC_M_CLOCKTICKS
> N2 0 6,152,906,692 UNC_M_CLOCKTICKS
> N3 1 6,063,415,630 UNC_M_CLOCKTICKS
>
> Isn't it supposed to be 4?
Agreed, it is weird, but it is what has historically been displayed.
The number is the aggregation number:
https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/util/stat-display.c?h=perf-tools-next#n307
which comes from:
https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/util/stat-display.c?h=perf-tools-next#n135
which comes from:
https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/util/stat.c?h=perf-tools-next#n435
However, I think it is missing updates from:
https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/util/stat.c?h=perf-tools-next#n526
but there is a comment there saying "don't increase aggr.nr for
aliases", and all the uncore events are aliases. I don't understand
what the aggregation number is supposed to be; it is commented as
"number of entries (CPUs) aggregated":
https://web.git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/util/stat.h?h=perf-tools-next#n26
In the CHA case with SNC-3 and 42 evsels per NUMA node, it would seem
to make sense for the value to be 42. Maybe Namhyung (who did the
evsel__merge_aggr_counters clean-up) knows why it is this way, but to
my eyes it just seems like something that has been broken for a long
time.
Thanks,
Ian