Message-ID: <ZfHMYM3iWlsODtjP@tassilo>
Date: Wed, 13 Mar 2024 08:55:12 -0700
From: Andi Kleen <ak@...ux.intel.com>
To: "Wang, Weilin" <weilin.wang@...el.com>
Cc: Namhyung Kim <namhyung@...nel.org>, Ian Rogers <irogers@...gle.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
"Hunter, Adrian" <adrian.hunter@...el.com>,
Kan Liang <kan.liang@...ux.intel.com>,
"linux-perf-users@...r.kernel.org" <linux-perf-users@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Taylor, Perry" <perry.taylor@...el.com>,
"Alt, Samantha" <samantha.alt@...el.com>,
"Biggers, Caleb" <caleb.biggers@...el.com>
Subject: Re: [RFC PATCH v4 2/6] perf stat: Fork and launch perf record when
perf stat needs to get retire latency value for a metric.
On Wed, Mar 13, 2024 at 03:31:14PM +0000, Wang, Weilin wrote:
>
>
> > On Tuesday, March 12, 2024 5:56 PM, Andi Kleen <ak@...ux.intel.com> wrote:
> >
> > "Wang, Weilin" <weilin.wang@...el.com> writes:
> >
> > >> -----Original Message-----
> > >> From: Andi Kleen <ak@...ux.intel.com>
> > >> Sent: Tuesday, March 12, 2024 5:03 PM
> > >> To: Wang, Weilin <weilin.wang@...el.com>
> > >> Cc: Namhyung Kim <namhyung@...nel.org>; Ian Rogers
> > >> <irogers@...gle.com>; Arnaldo Carvalho de Melo <acme@...nel.org>;
> > Peter
> > >> Zijlstra <peterz@...radead.org>; Ingo Molnar <mingo@...hat.com>;
> > >> Alexander Shishkin <alexander.shishkin@...ux.intel.com>; Jiri Olsa
> > >> <jolsa@...nel.org>; Hunter, Adrian <adrian.hunter@...el.com>; Kan Liang
> > >> <kan.liang@...ux.intel.com>; linux-perf-users@...r.kernel.org; linux-
> > >> kernel@...r.kernel.org; Taylor, Perry <perry.taylor@...el.com>; Alt,
> > Samantha
> > >> <samantha.alt@...el.com>; Biggers, Caleb <caleb.biggers@...el.com>
> > >> Subject: Re: [RFC PATCH v4 2/6] perf stat: Fork and launch perf record
> > when
> > >> perf stat needs to get retire latency value for a metric.
> > >>
> > >> weilin.wang@...el.com writes:
> > >>
> > >> > From: Weilin Wang <weilin.wang@...el.com>
> > >> >
> > >> > When a retire_latency value is used in a metric formula, perf stat forks a
> > >> > perf record process with the "-e" and "-W" options. perf record will collect
> > >> > the required retire_latency values in parallel while perf stat is collecting
> > >> > counting values.
> > >>
> > >> How does that work when the workload is specified on the command line?
> > >> The workload would run twice? That is very inefficient and may not
> > >> work if it's a large workload.
> > >>
> > >> The perf tool infrastructure is imho not up to the task of such
> > >> parallel collection.
> > >>
> > >> Also it won't work for very long collections because you will get a
> > >> very large perf.data. Better to use a pipeline.
> > >>
> > >> I think it would be better if you made it a separate operation that can
> > >> generate a file that is then consumed by perf stat. This is also more efficient
> > >> because often the calibration is only needed once. And it's all under
> > >> user control, so there are no nasty surprises.
> > >>
> > >
> > > The workload runs only once, with perf stat. perf record is forked by perf stat
> > > and runs in parallel with perf stat. perf stat will send perf record a signal to
> > > terminate after perf stat stops collecting count values.
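So IIUC the scheme is roughly like this hypothetical sketch (the event
and file names are invented here for illustration; only -W comes from
the patch description):

import signal
import subprocess

# Hypothetical sketch of the fork-and-signal scheme described above:
# run perf record in parallel with perf stat, then stop it with SIGTERM.
rec = subprocess.Popen(
    ["perf", "record", "-W",                # -W: collect retire latency (per the patch)
     "-e", "cpu/event=0xc2,umask=0x2/p",    # invented event, for illustration only
     "-a", "-o", "retlat.data"])
try:
    # perf stat counts while perf record samples; the workload runs once.
    subprocess.run(["perf", "stat", "-a", "sleep", "5"], check=True)
finally:
    rec.send_signal(signal.SIGTERM)         # tell perf record to wind down
    rec.wait()
# retire_latency values would then be parsed out of retlat.data.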
> >
> > I don't understand how the perf record filters on the workload created by
> > the perf stat. At a minimum you would need -p to connect to the pid
> > of the parent, but IIRC -p doesn't follow children, so if the workload forked
> > it wouldn't work.
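This is easy to check with a harness like the following (hypothetical,
just for testing): fork a CPU-burning child, attach perf record -p to
the parent, and see whether child samples show up in the profile.

import os
import sys
import time

# Hypothetical harness to test whether "perf record -p <pid>" follows
# forked children: parent and child both burn CPU; attach perf to the
# parent pid and check whether the child's samples appear.
def burn(seconds):
    end = time.time() + seconds
    x = 0
    while time.time() < end:
        x += 1          # plain CPU spinning so perf has samples to take

print("attach with: perf record -p", os.getpid(), file=sys.stderr)
time.sleep(5)           # window in which to attach perf record
pid = os.fork()
burn(10)                # both processes burn CPU after the fork
if pid == 0:
    os._exit(0)
os.waitpid(pid, 0)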
> >
> > I think your approach may only work with -a, but perhaps I'm missing
> > something (-a is often not usable due to restrictions).
> >
> > Also, if perf stat runs in interval mode and you only get the data
> > at the end, how would that work?
> >
> > IIRC I wrestled with all these questions for toplev (which has a
> > similar feature), and in the end I concluded that doing it
> > automatically has far too many problems.
> >
>
> Yes, you are completely right that there are limitations: we can only support -a and -C,
> and we do not support -I now. I'm wondering if we could support "-I" in a next step by
> processing the sampled data on the fly.
-I is very tricky in a separate process. How do you align the two
streams of intervals on a long run without drift? I don't know of a
reliable way to do it in the general case using time alone.
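The naive approach would be to pair interval records by nearest
timestamp, roughly like this invented sketch, and that falls over as
soon as the two processes drift by more than half an interval:

# Hypothetical illustration of why timestamp pairing is fragile: match
# each perf stat interval to the nearest perf record interval, and give
# up once the offset exceeds half the interval length.
INTERVAL = 1.0          # seconds, assuming -I 1000

def pair_intervals(stat_ts, record_ts, tol=INTERVAL / 2):
    pairs = []
    for t in stat_ts:
        nearest = min(record_ts, key=lambda r: abs(r - t))
        if abs(nearest - t) > tol:
            return None  # drifted past half an interval: alignment is ambiguous
        pairs.append((t, nearest))
    return pairs

# Two streams drifting 2 ms apart per interval: after ~250 intervals
# the 0.5 s tolerance is exceeded and pairing fails.
stat = [i * INTERVAL for i in range(300)]
record = [i * (INTERVAL + 0.002) for i in range(300)]
print(pair_intervals(stat, record))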
Also, just the lack of support for forked workloads without -a is fatal
imho. That's likely one of the most common cases.
A separate step is a far better model imho:
- It is under full user control, with no surprises
- No uncontrolled multiplexing
- Often it is fine to measure once and cache the data
It cannot deal with -I properly either (short of some form of
phase detection), but at least it doesn't give false promises
to that effect.
The way to do it is to ship defaults in a JSON file, which the user can
override with a calibration step. There is a JSON format that is
already used by some other tools.
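Roughly, the lookup could work like this (a hypothetical sketch; the
file names and schema are invented here, the actual format differs,
see the links below):

import json
from pathlib import Path

# Hypothetical retire-latency lookup: ship per-event defaults in a JSON
# file and let a user-run calibration step override them. The schema
# here is invented for illustration.
DEFAULTS = Path("retlat-defaults.json")      # shipped with the tool
CALIBRATED = Path("retlat-calibrated.json")  # written by a calibration run

def load_retire_latencies():
    latencies = json.loads(DEFAULTS.read_text())
    if CALIBRATED.exists():
        # User calibration takes precedence over shipped defaults.
        latencies.update(json.loads(CALIBRATED.read_text()))
    return latencies

# e.g. {"MEM_INST_RETIRED.ALL_LOADS": 7.5, ...} mapping event -> cycles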
This is my implementation:
https://github.com/andikleen/pmu-tools/blob/master/genretlat.py
https://github.com/andikleen/pmu-tools/blob/89861055b53e57ba0b7c6348745b2fbe6615c068/toplev.py#L1031
-Andi