Message-ID: <CAP-5=fVy6LysuDLWRNgWZocfAs=khzdK_aOG7HYVs2E_a4Bpzg@mail.gmail.com>
Date: Mon, 13 Dec 2021 08:17:07 -0800
From: Ian Rogers <irogers@...gle.com>
To: Jiri Olsa <jolsa@...hat.com>
Cc: Andi Kleen <ak@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>,
John Garry <john.garry@...wei.com>,
Kajol Jain <kjain@...ux.ibm.com>,
"Paul A . Clarke" <pc@...ibm.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Riccardo Mancini <rickyman7@...il.com>,
Kan Liang <kan.liang@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
Vineet Singh <vineet.singh@...el.com>,
James Clark <james.clark@....com>,
Mathieu Poirier <mathieu.poirier@...aro.org>,
Suzuki K Poulose <suzuki.poulose@....com>,
Mike Leach <mike.leach@...aro.org>,
Leo Yan <leo.yan@...aro.org>, coresight@...ts.linaro.org,
linux-arm-kernel@...ts.infradead.org, eranian@...gle.com
Subject: Re: [PATCH 03/22] perf stat: Switch aggregation to use for_each loop
On Sat, Dec 11, 2021 at 11:25 AM Jiri Olsa <jolsa@...hat.com> wrote:
>
> On Tue, Dec 07, 2021 at 06:45:48PM -0800, Ian Rogers wrote:
> > Tidy up the use of cpu and index to hopefully make the code less error
> > prone. Avoid unused-variable warnings with (void) casts, which will be
> > removed in a later patch.
> >
> > In aggr_update_shadow, the perf_cpu_map is switched from the evlist's
> > map to the counter's cpu map, so the index is appropriate. This fixes
> > a problem where uncore counters with a sparse cpumask like:
> > $ cat /sys/devices/uncore_imc_0/cpumask
> > 0,18
> > had their counts aggregated by the index of those values in the cpumap
> > (0 and 1) rather than by the actual CPU (0 and 18). This corrects
> > metric calculations in per-socket mode for counters without a full
> > cpumask.
> >
> > Signed-off-by: Ian Rogers <irogers@...gle.com>
> > ---
> > tools/perf/util/stat-display.c | 48 +++++++++++++++++++---------------
> > 1 file changed, 27 insertions(+), 21 deletions(-)
> >
> > diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
> > index 588601000f3f..efab39a759ff 100644
> > --- a/tools/perf/util/stat-display.c
> > +++ b/tools/perf/util/stat-display.c
> > @@ -330,8 +330,8 @@ static void print_metric_header(struct perf_stat_config *config,
> > static int first_shadow_cpu(struct perf_stat_config *config,
> > struct evsel *evsel, struct aggr_cpu_id id)
> > {
> > - struct evlist *evlist = evsel->evlist;
> > - int i;
> > + struct perf_cpu_map *cpus;
> > + int cpu, idx;
> >
> > if (config->aggr_mode == AGGR_NONE)
> > return id.core;
> > @@ -339,14 +339,11 @@ static int first_shadow_cpu(struct perf_stat_config *config,
> > if (!config->aggr_get_id)
> > return 0;
> >
> > - for (i = 0; i < evsel__nr_cpus(evsel); i++) {
> > - int cpu2 = evsel__cpus(evsel)->map[i];
> > -
> > - if (cpu_map__compare_aggr_cpu_id(
> > - config->aggr_get_id(config, evlist->core.cpus, cpu2),
> > - id)) {
> > - return cpu2;
> > - }
> > + cpus = evsel__cpus(evsel);
> > + perf_cpu_map__for_each_cpu(cpu, idx, cpus) {
> > + if (cpu_map__compare_aggr_cpu_id(config->aggr_get_id(config, cpus, idx),
> > + id))
> > + return cpu;
>
> so this looks strange: you pass idx instead of cpu2 to aggr_get_id,
> which takes an index as its 3rd argument, so it looks like the
> existing code was broken. Should this be a separate fix?
Yep, I tried to cover this in the commit message, but I agree a
separate patch would be clearer. The aggregation is currently broken
for anything other than CPU 0, except when the CPU mask covers every
CPU (the case for something like topdown), hence this not being
spotted.
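
To make the index/cpu distinction concrete, here is a minimal
standalone sketch (toy types, not the perf code itself) of why a
sparse map like 0,18 goes wrong when an index is treated as a CPU
number:

#include <stdio.h>

/* Toy stand-in for perf's cpu map: a sparse list of CPUs. */
struct toy_cpu_map {
	int nr;     /* number of entries */
	int map[2]; /* CPU numbers, e.g. uncore cpumask "0,18" */
};

int main(void)
{
	struct toy_cpu_map cpus = { .nr = 2, .map = { 0, 18 } };
	int idx, cpu;

	for (idx = 0; idx < cpus.nr; idx++) {
		cpu = cpus.map[idx];
		/*
		 * Aggregation must key on cpu (0 and 18), not on idx
		 * (0 and 1): treating idx as a CPU number silently
		 * attributes CPU 18's counts to CPU 1.
		 */
		printf("idx=%d cpu=%d\n", idx, cpu);
	}
	return 0;
}
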
> also the original code for some reason passed evlist->core.cpus
> to aggr_get_id, which might differ from the evsel's cpus
Part of the same fix: aggr_get_id is now given the evsel's own cpu
map together with an index into that same map, so the map and the
index can no longer disagree.
> same for aggr_update_shadow change
In this case the cpu is really an index, so the change just renames
one to the other for the sake of clarity.
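
For reference, the iterator binds both names at once. Roughly (a
sketch of the macro's shape based on
tools/lib/perf/include/perf/cpumap.h; the exact upstream definition
may differ):

#define perf_cpu_map__for_each_cpu(cpu, idx, cpus)		\
	for ((idx) = 0, (cpu) = perf_cpu_map__cpu(cpus, idx);	\
	     (idx) < perf_cpu_map__nr(cpus);			\
	     (idx)++, (cpu) = perf_cpu_map__cpu(cpus, idx))

so within the loop body idx is always the position in the map and
cpu is the CPU number stored at that position.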
Thanks,
Ian
> jirka
>