Message-ID: <YdRXgF4OoFnU6MUo@krava>
Date: Tue, 4 Jan 2022 15:19:44 +0100
From: Jiri Olsa <jolsa@...hat.com>
To: Ian Rogers <irogers@...gle.com>
Cc: Andi Kleen <ak@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>,
John Garry <john.garry@...wei.com>,
Kajol Jain <kjain@...ux.ibm.com>,
"Paul A . Clarke" <pc@...ibm.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Riccardo Mancini <rickyman7@...il.com>,
Kan Liang <kan.liang@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
Vineet Singh <vineet.singh@...el.com>,
James Clark <james.clark@....com>,
Mathieu Poirier <mathieu.poirier@...aro.org>,
Suzuki K Poulose <suzuki.poulose@....com>,
Mike Leach <mike.leach@...aro.org>,
Leo Yan <leo.yan@...aro.org>, coresight@...ts.linaro.org,
linux-arm-kernel@...ts.infradead.org, zhengjun.xing@...el.com,
eranian@...gle.com
Subject: Re: [PATCH v3 08/48] perf cpumap: Remove map+index get_die
On Wed, Dec 29, 2021 at 11:19:50PM -0800, Ian Rogers wrote:
> Migrate final users to appropriate cpu variant.
>
> Reviewed-by: James Clark <james.clark@....com>
> Signed-off-by: Ian Rogers <irogers@...gle.com>
> ---
> tools/perf/tests/topology.c | 2 +-
> tools/perf/util/cpumap.c | 9 ---------
> tools/perf/util/cpumap.h | 1 -
> tools/perf/util/stat.c | 2 +-
> 4 files changed, 2 insertions(+), 12 deletions(-)
>
> diff --git a/tools/perf/tests/topology.c b/tools/perf/tests/topology.c
> index 69a64074b897..ce085b6f379b 100644
> --- a/tools/perf/tests/topology.c
> +++ b/tools/perf/tests/topology.c
> @@ -136,7 +136,7 @@ static int check_cpu_topology(char *path, struct perf_cpu_map *map)
>
> // Test that die ID contains socket and die
> for (i = 0; i < map->nr; i++) {
> - id = cpu_map__get_die(map, i, NULL);
> + id = cpu_map__get_die_aggr_by_cpu(perf_cpu_map__cpu(map, i), NULL);
> TEST_ASSERT_VAL("Die map - Socket ID doesn't match",
> session->header.env.cpu[map->map[i]].socket_id == id.socket);
>
> diff --git a/tools/perf/util/cpumap.c b/tools/perf/util/cpumap.c
> index 342a5eaee9d3..ff91c32da688 100644
> --- a/tools/perf/util/cpumap.c
> +++ b/tools/perf/util/cpumap.c
> @@ -216,15 +216,6 @@ struct aggr_cpu_id cpu_map__get_die_aggr_by_cpu(int cpu, void *data)
> return id;
> }
>
> -struct aggr_cpu_id cpu_map__get_die(struct perf_cpu_map *map, int idx,
> - void *data)
> -{
> - if (idx < 0 || idx > map->nr)
> - return cpu_map__empty_aggr_cpu_id();
> -
> - return cpu_map__get_die_aggr_by_cpu(map->map[idx], data);
> -}
> -
> int cpu_map__get_core_id(int cpu)
> {
> int value, ret = cpu__get_topology_int(cpu, "core_id", &value);
> diff --git a/tools/perf/util/cpumap.h b/tools/perf/util/cpumap.h
> index a53af24301d2..365ed69699e1 100644
> --- a/tools/perf/util/cpumap.h
> +++ b/tools/perf/util/cpumap.h
> @@ -34,7 +34,6 @@ int cpu_map__get_socket_id(int cpu);
> struct aggr_cpu_id cpu_map__get_socket_aggr_by_cpu(int cpu, void *data);
> int cpu_map__get_die_id(int cpu);
> struct aggr_cpu_id cpu_map__get_die_aggr_by_cpu(int cpu, void *data);
> -struct aggr_cpu_id cpu_map__get_die(struct perf_cpu_map *map, int idx, void *data);
> int cpu_map__get_core_id(int cpu);
> struct aggr_cpu_id cpu_map__get_core_aggr_by_cpu(int cpu, void *data);
> struct aggr_cpu_id cpu_map__get_core(struct perf_cpu_map *map, int idx, void *data);
> diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
> index 9eca1111fa52..5ed99bcfe91e 100644
> --- a/tools/perf/util/stat.c
> +++ b/tools/perf/util/stat.c
> @@ -336,7 +336,7 @@ static int check_per_pkg(struct evsel *counter,
> * On multi-die system, die_id > 0. On no-die system, die_id = 0.
> * We use hashmap(socket, die) to check the used socket+die pair.
> */
> - d = cpu_map__get_die(cpus, cpu, NULL).die;
> + d = cpu_map__get_die_id(cpu);
> if (d < 0)
> return -1;
Looking at this, I realized that we have probably broken:

  perf stat record
  perf stat report

when the report is run on a different machine, because we
take the die id from the current system.

It should be fixed in another patchset, though.

jirka
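
[Editor's note: a minimal standalone sketch of the issue described above,
for readers following along. cpu_map__get_die_id() reads the die id from
the sysfs of the machine running the report, while the value matching a
recorded session lives in the perf.data header (perf_env), as the topology
test in the quoted patch already uses via session->header.env.cpu[...].
The struct and helper names below are illustrative assumptions, not the
actual fix.]

/*
 * Standalone sketch (not actual perf code): why taking the die id from
 * the running system breaks cross-machine "perf stat report".  The
 * struct below only mimics the per-CPU topology that perf records in
 * the perf.data header; all names here are illustrative.
 */
#include <stdio.h>

struct recorded_cpu_topology {	/* stand-in for perf_env's per-CPU topology */
	int socket_id;
	int die_id;
};

/* die id as captured on the monitored machine at record time */
static int die_from_recorded_env(const struct recorded_cpu_topology *cpu)
{
	return cpu->die_id;
}

/* die id of the machine running the report; wrong if it is a different box */
static int die_from_current_system(void)
{
	/* real perf would read /sys/devices/system/cpu/cpuN/topology/die_id here */
	return 0;
}

int main(void)
{
	struct recorded_cpu_topology cpu0 = { .socket_id = 1, .die_id = 3 };

	printf("recorded die: %d, reporting-system die: %d\n",
	       die_from_recorded_env(&cpu0), die_from_current_system());
	return 0;
}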