Message-ID: <20161125204233.GG26852@two.firstfloor.org>
Date: Fri, 25 Nov 2016 12:42:33 -0800
From: Andi Kleen <andi@...stfloor.org>
To: "Liang, Kan" <kan.liang@...el.com>
Cc: Andi Kleen <andi@...stfloor.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"acme@...nel.org" <acme@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"alexander.shishkin@...ux.intel.com"
<alexander.shishkin@...ux.intel.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"namhyung@...nel.org" <namhyung@...nel.org>,
"jolsa@...nel.org" <jolsa@...nel.org>,
"Hunter, Adrian" <adrian.hunter@...el.com>,
"wangnan0@...wei.com" <wangnan0@...wei.com>,
"mark.rutland@....com" <mark.rutland@....com>
Subject: Re: [PATCH 13/14] perf tools: warn on high overhead
On Wed, Nov 23, 2016 at 10:03:24PM +0000, Liang, Kan wrote:
> > Perhaps we need two separate metrics here:
> >
> > - cost of perf record on its CPU (or later, if it gets multi-threaded,
> > on multiple CPUs). Warn if this is >50% or so.
>
> What's the formula for cost of perf record on its CPU?
> Does the cost include only user space overhead, or all overhead?
> What is the divisor?
It would be all the overhead in the process. Accounting overhead in
kernel threads or in interrupts caused by IO is difficult; we could leave
that out for now.
Sum of:
  For each perf thread:
    thread cpu time / monotonic wall time
I guess the sum is better than the average here because the perf threads
are likely running (or could be) on the same CPU. If perf record were
changed to flush buffers more aggressively on the local CPUs this would
need to change, but I presume it's good enough for now.
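
Roughly, in C, something like the sketch below (not the actual perf record
code; perf_record_cost() and its callers are made up for illustration):
each perf thread's CPU time is read via pthread_getcpuclockid() and divided
by the elapsed CLOCK_MONOTONIC time, then summed.

/*
 * Sketch only: estimate perf record's own cost as the sum, over all
 * perf threads, of (thread CPU time / monotonic wall time).
 */
#include <stdio.h>
#include <time.h>
#include <pthread.h>

static double ts_to_sec(struct timespec ts)
{
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* start: CLOCK_MONOTONIC timestamp taken when recording began */
static double perf_record_cost(pthread_t *threads, int nr_threads,
			       struct timespec start)
{
	struct timespec now, cpu;
	double wall, sum = 0.0;
	int i;

	clock_gettime(CLOCK_MONOTONIC, &now);
	wall = ts_to_sec(now) - ts_to_sec(start);
	if (wall <= 0.0)
		return 0.0;

	for (i = 0; i < nr_threads; i++) {
		clockid_t cid;

		/* read another thread's CPU-time clock */
		if (pthread_getcpuclockid(threads[i], &cid))
			continue;
		clock_gettime(cid, &cpu);
		sum += ts_to_sec(cpu) / wall;	/* this thread's share */
	}
	return sum;				/* 0.5 == 50% of one CPU */
}

static void warn_if_expensive(double cost)
{
	if (cost > 0.5)
		fprintf(stderr, "perf record is using %.0f%% of a CPU\n",
			cost * 100);
}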
>
>
> > - average perf collection overhead on a CPU. The 10% threshold here
> > seems appropriate.
> For the average, do you mean adding all the overheads across CPUs
> and dividing by the number of CPUs?
Right. Possibly also the max over all CPUs, too.
>
> To calculate the rate, the divisor is wall clock time, right?
Monotonic wall clock time, yes.
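
As a sketch (again hypothetical; overhead_ns[] stands in for whatever
per-CPU collection overhead the kernel would report over the same
monotonic interval):

/*
 * Sketch only: warn when the average per-CPU perf collection overhead,
 * relative to monotonic wall clock time, exceeds 10%; also report the
 * worst single CPU for the "max of all" variant.
 */
#include <stdio.h>

static void check_cpu_overhead(const unsigned long long *overhead_ns,
			       int nr_cpus, unsigned long long wall_ns)
{
	unsigned long long sum = 0, max = 0;
	double avg_ratio, max_ratio;
	int cpu;

	if (!nr_cpus || !wall_ns)
		return;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		sum += overhead_ns[cpu];
		if (overhead_ns[cpu] > max)
			max = overhead_ns[cpu];
	}

	/* average overhead per CPU, relative to wall clock time */
	avg_ratio = (double)sum / nr_cpus / wall_ns;
	/* worst single CPU */
	max_ratio = (double)max / wall_ns;

	if (avg_ratio > 0.1)
		fprintf(stderr,
			"perf overhead averages %.1f%% per CPU (max %.1f%%)\n",
			avg_ratio * 100, max_ratio * 100);
}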
-Andi