Message-ID: <20161206182647.GC3107@twins.programming.kicks-ass.net>
Date: Tue, 6 Dec 2016 19:26:47 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: "Liang, Kan" <kan.liang@...el.com>
Cc: "mingo@...hat.com" <mingo@...hat.com>,
"acme@...nel.org" <acme@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"alexander.shishkin@...ux.intel.com"
<alexander.shishkin@...ux.intel.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"namhyung@...nel.org" <namhyung@...nel.org>,
"jolsa@...nel.org" <jolsa@...nel.org>,
"Hunter, Adrian" <adrian.hunter@...el.com>,
"wangnan0@...wei.com" <wangnan0@...wei.com>,
"mark.rutland@....com" <mark.rutland@....com>,
"andi@...stfloor.org" <andi@...stfloor.org>
Subject: Re: [PATCH V2 03/13] perf/x86: output sampling overhead
On Tue, Dec 06, 2016 at 03:47:40PM +0000, Liang, Kan wrote:
> > It doesn't record anything, it generates the output. And it doesn't explain
> > why that needs to be in pmu::del(), in general that's a horrible thing to do.
>
> Yes, it only generates/logs the output. Sorry for the confusing wording.
>
> The NMI overhead is pmu specific overhead. So the NMI overhead output
> should be generated in pmu code.
True, but you're also accounting in a per-cpu bucket, which means it
includes all events. At which point the per-event overhead thing doesn't
really make sense.
It also means that previous sessions influence the numbers of our
current session; there's no explicit reset of the numbers.
> I assume that pmu::del is the last pmu function called when perf finishes.
> Is it a good place for logging?
No, it's horrible. Sure, we'll call pmu::del on events, but yuck.
You really only want _one_ invocation when you stop using the event, and
we don't really have a good place for that. But instead of creating one,
you do horrible things.
Now, I realize there's a bit of a catch-22 in that the moment we know
the event is going away, it's already gone from userspace. So we cannot
dump data from there in general.
However, if we have output redirection we can, but that would make things
depend on it, and it cannot be used for the last event whose buffer
we're using.
Another option would be to introduce PERF_EVENT_IOC_STAT or something
like that, and have the tool call that when it's 'done'.