Date:	Thu, 6 Sep 2012 14:25:57 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	Jiri Olsa <jolsa@...hat.com>, linux-kernel@...r.kernel.org,
	Arnaldo Carvalho de Melo <acme@...stprotocols.net>,
	Ingo Molnar <mingo@...e.hu>, Paul Mackerras <paulus@...ba.org>,
	Corey Ashford <cjashfor@...ux.vnet.ibm.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Andi Kleen <andi@...stfloor.org>,
	David Ahern <dsahern@...il.com>,
	Namhyung Kim <namhyung@...nel.org>
Subject: Re: [RFC 00/12] perf diff: Factor diff command

On Thu, Sep 06, 2012 at 08:41:09PM +0200, Peter Zijlstra wrote:
> On Thu, 2012-09-06 at 17:46 +0200, Jiri Olsa wrote:
> > The 'perf diff' and 'std/hist' code is now changed to allow computations
> > mentioned in the paper. Two of them are implemented within this patchset:
> >   1) ratio differential profiling
> >   2) weighted differential profiling
> 
> Seems like a useful thing indeed, the explanation of the weighted diff
> method doesn't seem to contain a why. I know I could go read the paper
> but... :-)

Or you could ask the author.  ;-)

Ratio can be fooled by statistical variation in profiling buckets with
few counts.  So if you are looking for a 10% difference in execution
overhead somewhere in a large program, ratio will unhelpfully sort a
bunch of statistical 2x or 3x noise to the top of the list.
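A toy sketch of the failure mode (function names and counts are made up,
and this is not perf's implementation): a tiny bucket that jumps from 2
to 5 samples by pure chance outranks a real 10% regression in the hot path.

```python
# Hypothetical per-function sample counts from two profiling runs.
# hot_loop has a genuine ~10% regression; rare_init is statistical noise.
run1 = {"hot_loop": 10000, "parse": 5000, "rare_init": 2}
run2 = {"hot_loop": 11000, "parse": 5000, "rare_init": 5}

def ratio_diff(a, b):
    """Ratio differential profile: sort buckets by count ratio, descending."""
    return sorted(((f, b[f] / a[f]) for f in a), key=lambda kv: -kv[1])

for func, r in ratio_diff(run1, run2):
    print(f"{func}: {r:.2f}x")
# The 2.5x "regression" in rare_init (2 -> 5 samples) sorts above the
# real 1.10x regression in hot_loop.
```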

So you could use the difference in buckets instead of the ratio, but this
has problems in the case where the two runs being compared got different
amounts of work done, as is usually the case for timed benchmark runs
or throughput-based benchmark runs.  In these cases, you use the work
done (or the measured throughput, as the case may be) as weights.
The weighted difference will then pinpoint the code that suffered the
greatest per-unit-work increase in overhead between the two runs.
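One plausible formulation of that weighting (a sketch only, with made-up
numbers; perf's actual wdiff computation takes user-supplied weights):
normalize each bucket by the work its run completed, then take the
difference of the per-unit-work overheads.

```python
# Two runs that completed different amounts of work, as in a timed
# benchmark.  work1/work2 are hypothetical transaction counts.
run1 = {"hot_loop": 10000, "parse": 5000}
work1 = 1000  # transactions completed in run 1
run2 = {"hot_loop": 9500, "parse": 6000}
work2 = 800   # transactions completed in run 2

def weighted_diff(a, wa, b, wb):
    """Per-unit-work increase in overhead: b/wb - a/wa, per bucket."""
    return sorted(((f, b[f] / wb - a[f] / wa) for f in a),
                  key=lambda kv: -kv[1])

for func, d in weighted_diff(run1, work1, run2, work2):
    print(f"{func}: {d:+.3f} samples per transaction")
# parse tops the list: its raw count grew *and* run 2 got less work done,
# so its per-transaction cost rose more than hot_loop's.
```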

But ratio is still the method of choice when you are looking at pure
scalability issues, especially when the offending code increased in
overhead by a large factor.

							Thanx, Paul

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
