Message-ID: <ZOY9t8W1APC/Hurk@kernel.org>
Date: Wed, 23 Aug 2023 14:11:19 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: Ian Rogers <irogers@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
James Clark <james.clark@....com>,
Kan Liang <kan.liang@...ux.intel.com>,
John Garry <john.g.garry@...cle.com>,
Kajol Jain <kjain@...ux.ibm.com>,
Jing Zhang <renyu.zj@...ux.alibaba.com>,
Ravi Bangoria <ravi.bangoria@....com>,
Rob Herring <robh@...nel.org>,
Gaosheng Cui <cuigaosheng1@...wei.com>,
linux-perf-users <linux-perf-users@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v1 00/25] Lazily load PMU data
On Wed, Aug 23, 2023 at 09:45:50AM -0700, Ian Rogers wrote:
> On Wed, Aug 23, 2023, 8:56 AM Arnaldo Carvalho de Melo <acme@...nel.org>
> wrote:
>
> > On Wed, Aug 23, 2023 at 01:08:03AM -0700, Ian Rogers wrote:
> > > Lazily load PMU data both from sysfs and json files. Reorganize
> > > json data to be more PMU oriented to facilitate this, for
> > > example, json data is now sorted into arrays for their PMU.
> > >
> > > In refactoring the code some changes were made to get rid of maximum
> > > encoding sizes for events (256 bytes), with input files being directly
> > > passed to the lex generated code. There is also a small event parse
> > > error message improvement.
> > >
> > > Some results from an Intel tigerlake laptop running Debian:
> > >
> > > Binary size reduction of 1.4% or 143,264 bytes because the PMU
> > > name no longer appears in each event string.
> > >
> > > stat -e cpu/cycles/ minor faults reduced from 1733 to 1667, open calls
> > > reduced from 171 to 94.
> > >
> > > stat default minor faults reduced from 1085 to 1727, open calls reduced
> > > from 654 to 343.
> > >
> > > Average PMU scanning reduced from 4720.641usec to 2927.293usec.
> > > Average core PMU scanning reduced from 1004.658usec to 232.668usec
> > > (4.3x faster).
> >
> > I'm now chasing this one when building it on Ubuntu arm64
> >
>
> I'll fix and send a v2.
It's fixed already, I'm pushing it to tmp.perf-tools-next
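
The change below tracks the series' reworked helper: perf_pmu__format_bits()
now takes the struct perf_pmu pointer itself rather than the PMU's format
list. A rough before/after sketch of the prototypes, inferred from the
callers in the diff (the __u64 return type and parameter names are
assumptions, not quoted from this thread):

/* before the series: the caller passed the PMU's list of formats */
__u64 perf_pmu__format_bits(struct list_head *formats, const char *name);

/* after the series: the caller passes the PMU; its format data is loaded lazily */
__u64 perf_pmu__format_bits(struct perf_pmu *pmu, const char *name);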
diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c
index 7c51fa182b51dab0..b8d6a953fd7423e1 100644
--- a/tools/perf/arch/arm/util/cs-etm.c
+++ b/tools/perf/arch/arm/util/cs-etm.c
@@ -79,9 +79,9 @@ static int cs_etm_validate_context_id(struct auxtrace_record *itr,
int err;
u32 val;
u64 contextid = evsel->core.attr.config &
- (perf_pmu__format_bits(&cs_etm_pmu->format, "contextid") |
- perf_pmu__format_bits(&cs_etm_pmu->format, "contextid1") |
- perf_pmu__format_bits(&cs_etm_pmu->format, "contextid2"));
+ (perf_pmu__format_bits(cs_etm_pmu, "contextid") |
+ perf_pmu__format_bits(cs_etm_pmu, "contextid1") |
+ perf_pmu__format_bits(cs_etm_pmu, "contextid2"));
if (!contextid)
return 0;
@@ -106,7 +106,7 @@ static int cs_etm_validate_context_id(struct auxtrace_record *itr,
}
if (contextid &
- perf_pmu__format_bits(&cs_etm_pmu->format, "contextid1")) {
+ perf_pmu__format_bits(cs_etm_pmu, "contextid1")) {
/*
* TRCIDR2.CIDSIZE, bit [9-5], indicates whether contextID
* tracing is supported:
@@ -122,7 +122,7 @@ static int cs_etm_validate_context_id(struct auxtrace_record *itr,
}
if (contextid &
- perf_pmu__format_bits(&cs_etm_pmu->format, "contextid2")) {
+ perf_pmu__format_bits(cs_etm_pmu, "contextid2")) {
/*
* TRCIDR2.VMIDOPT[30:29] != 0 and
* TRCIDR2.VMIDSIZE[14:10] == 0b00100 (32bit virtual contextid)
@@ -151,7 +151,7 @@ static int cs_etm_validate_timestamp(struct auxtrace_record *itr,
u32 val;
if (!(evsel->core.attr.config &
- perf_pmu__format_bits(&cs_etm_pmu->format, "timestamp")))
+ perf_pmu__format_bits(cs_etm_pmu, "timestamp")))
return 0;
if (!cs_etm_is_etmv4(itr, cpu)) {