Message-ID: <399e8b83-ee7b-1b24-bd6b-74d274fcda46@linux.alibaba.com>
Date: Mon, 16 Jan 2023 10:58:50 +0800
From: Jing Zhang <renyu.zj@...ux.alibaba.com>
To: Ian Rogers <irogers@...gle.com>,
John Garry <john.g.garry@...cle.com>
Cc: Xing Zhengjun <zhengjun.xing@...ux.intel.com>,
Will Deacon <will@...nel.org>,
James Clark <james.clark@....com>,
Mike Leach <mike.leach@...aro.org>,
Leo Yan <leo.yan@...aro.org>,
linux-arm-kernel@...ts.infradead.org,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Andrew Kilroy <andrew.kilroy@....com>,
Shuai Xue <xueshuai@...ux.alibaba.com>,
Zhuo Song <zhuo.song@...ux.alibaba.com>
Subject: Re: [PATCH v7 1/9] perf pmu: Add #slots literal support for arm64
On 2023/1/15 6:15 AM, Ian Rogers wrote:
> On Fri, Jan 13, 2023 at 1:22 AM Jing Zhang <renyu.zj@...ux.alibaba.com> wrote:
>>
>> The slots value may differ between architectures, so add a #slots
>> literal to obtain the slots value of each architecture; the #slots
>> literal can then be used in metric expressions. Currently, #slots is
>> only supported on arm64, and other architectures will return NAN.
>>
>> On arm64, the slots value comes from the register field PMMIR_EL1.SLOT,
>> which can be read from /sys/bus/event_source/devices/armv8_pmuv3_*/caps/slots.
>> PMMIR_EL1.SLOT may read as zero if the PMU version is lower than
>> ID_AA64DFR0_EL1_PMUVer_V3P4 or the STALL_SLOT event is not implemented.
>>
>> Signed-off-by: Jing Zhang <renyu.zj@...ux.alibaba.com>
>> ---
>> tools/perf/arch/arm64/util/pmu.c | 34 ++++++++++++++++++++++++++++++++--
>> tools/perf/util/expr.c | 5 +++++
>> tools/perf/util/pmu.c | 6 ++++++
>> tools/perf/util/pmu.h | 1 +
>> 4 files changed, 44 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/perf/arch/arm64/util/pmu.c b/tools/perf/arch/arm64/util/pmu.c
>> index 477e513..5f8667b 100644
>> --- a/tools/perf/arch/arm64/util/pmu.c
>> +++ b/tools/perf/arch/arm64/util/pmu.c
>> @@ -3,8 +3,9 @@
>> #include <internal/cpumap.h>
>> #include "../../../util/cpumap.h"
>> #include "../../../util/pmu.h"
>> +#include <api/fs/fs.h>
>>
>> -const struct pmu_events_table *pmu_events_table__find(void)
>> +static struct perf_pmu *pmu_core__find_same(void)
>
> I'm not sure "find_same" is the best name here. I suspect it should be
> "find_core_pmu", which would agree with is_arm_pmu_core. Unfortunately
> "core" has become an overloaded term, sometimes used interchangeably
> with CPU, hyperthread or SMT thread; it was a model name for Intel and
> it is used to distinguish a set of SMT threads running together from a
> single one. Anyway, for consistency I think perf_pmu__find_core_pmu is
> the most appropriate name (or pmu__find_core_pmu; I'm not sure why we
> get the extra perf_ prefix sometimes, in general that indicates the
> functionality is in libperf).
>
The reason for using "pmu_core__find_same" was to indicate that we are
only dealing with homogeneous cores. In the tools/perf/util/pmu.c file,
most of the static functions have a "pmu_" prefix, so maybe we can use
"pmu_find_same_core_pmu"? Ian, John, what do you think?
Thanks,
Jing
> Aside from that, lgtm. Thanks,
> Ian
>
>> {
>> struct perf_pmu *pmu = NULL;
>>
>> @@ -19,8 +20,37 @@ const struct pmu_events_table *pmu_events_table__find(void)
>> if (pmu->cpus->nr != cpu__max_cpu().cpu)
>> return NULL;
>>
>> - return perf_pmu__find_table(pmu);
>> + return pmu;
>> }
>>
>> return NULL;
>> }
>> +
>> +const struct pmu_events_table *pmu_events_table__find(void)
>> +{
>> + struct perf_pmu *pmu = pmu_core__find_same();
>> +
>> + if (pmu)
>> + return perf_pmu__find_table(pmu);
>> +
>> + return NULL;
>> +}
>> +
>> +double perf_pmu__cpu_slots_per_cycle(void)
>> +{
>> + char path[PATH_MAX];
>> + unsigned long long slots = 0;
>> + struct perf_pmu *pmu = pmu_core__find_same();
>> +
>> + if (pmu) {
>> + scnprintf(path, PATH_MAX,
>> + EVENT_SOURCE_DEVICE_PATH "%s/caps/slots", pmu->name);
>> + /*
>> + * The value of slots is not greater than 32 bits, but sysfs__read_int
>> + * can't read a value with a 0x prefix, so use sysfs__read_ull instead.
>> + */
>> + sysfs__read_ull(path, &slots);
>> + }
>> +
>> + return (double)slots;
>> +}
>> diff --git a/tools/perf/util/expr.c b/tools/perf/util/expr.c
>> index 00dcde3..9d3076a 100644
>> --- a/tools/perf/util/expr.c
>> +++ b/tools/perf/util/expr.c
>> @@ -19,6 +19,7 @@
>> #include <linux/zalloc.h>
>> #include <ctype.h>
>> #include <math.h>
>> +#include "pmu.h"
>>
>> #ifdef PARSER_DEBUG
>> extern int expr_debug;
>> @@ -448,6 +449,10 @@ double expr__get_literal(const char *literal, const struct expr_scanner_ctx *ctx
>> result = topology->core_cpus_lists;
>> goto out;
>> }
>> + if (!strcmp("#slots", literal)) {
>> + result = perf_pmu__cpu_slots_per_cycle() ?: NAN;
>> + goto out;
>> + }
>>
>> pr_err("Unrecognized literal '%s'", literal);
>> out:
>> diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
>> index 2bdeb89..cbb4fbf 100644
>> --- a/tools/perf/util/pmu.c
>> +++ b/tools/perf/util/pmu.c
>> @@ -19,6 +19,7 @@
>> #include <regex.h>
>> #include <perf/cpumap.h>
>> #include <fnmatch.h>
>> +#include <math.h>
>> #include "debug.h"
>> #include "evsel.h"
>> #include "pmu.h"
>> @@ -1993,3 +1994,8 @@ int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
>> *ucpus_ptr = unmatched_cpus;
>> return 0;
>> }
>> +
>> +double __weak perf_pmu__cpu_slots_per_cycle(void)
>> +{
>> + return NAN;
>> +}
>> diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
>> index 69ca000..fd414ba 100644
>> --- a/tools/perf/util/pmu.h
>> +++ b/tools/perf/util/pmu.h
>> @@ -259,4 +259,5 @@ int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
>>
>> char *pmu_find_real_name(const char *name);
>> char *pmu_find_alias_name(const char *name);
>> +double perf_pmu__cpu_slots_per_cycle(void);
>> #endif /* __PMU_H */
>> --
>> 1.8.3.1
>>