Date:	Fri, 18 Oct 2013 08:52:06 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Tim Chen <tim.c.chen@...ux.intel.com>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Alex Shi <alex.shi@...aro.org>,
	Andi Kleen <andi@...stfloor.org>,
	Michel Lespinasse <walken@...gle.com>,
	Davidlohr Bueso <davidlohr.bueso@...com>,
	Matthew R Wilcox <matthew.r.wilcox@...el.com>,
	Dave Hansen <dave.hansen@...el.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Rik van Riel <riel@...hat.com>,
	Peter Hurley <peter@...leysoftware.com>,
	"Paul E.McKenney" <paulmck@...ux.vnet.ibm.com>,
	Jason Low <jason.low2@...com>,
	Waiman Long <Waiman.Long@...com>, linux-kernel@...r.kernel.org,
	linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH v8 0/9] rwsem performance optimizations


* Tim Chen <tim.c.chen@...ux.intel.com> wrote:

> 
> > 
> > It would be _really_ nice to stick this into tools/perf/bench/ as:
> > 
> > 	perf bench mem pagefaults
> > 
> > or so, with a number of parallelism and workload patterns. See 
> > tools/perf/bench/numa.c for a couple of workload generators - although 
> > those are not page fault intense.
> > 
> > So that future generations can run all these tests too and such.
> > 
> > > I compare throughput against the vanilla kernel for two cases: the 
> > > complete rwsem patchset, and the patchset with the optimistic spin 
> > > patch taken out.  I have increased the run time by 10x from my previous 
> > > experiments and do 10 runs for each case.  The standard deviation is 
> > > ~1.5%, so any change under 1.5% is not statistically significant.
> > > 
> > > % change in throughput vs the vanilla kernel.
> > > Threads	all	No-optspin
> > > 1		+0.4%	-0.1%
> > > 2		+2.0%	+0.2%
> > > 3		+1.1%	+1.5%
> > > 4		-0.5%	-1.4%
> > > 5		-0.1%	-0.1%
> > > 10		+2.2%	-1.2%
> > > 20		+237.3%	-2.3%
> > > 40		+548.1%	+0.3%
> > 
> > The tail is impressive. The early parts are important as well, but it's 
> > really hard to tell the significance of the early portion without having 
> > a stddev column.
> > 
> > ( "perf stat --repeat N" will give you stddev output, in handy percentage 
> >   form. )
> 
> Quick naive question as I haven't hacked perf bench before.  

Btw., please use tip:master; I've got a few cleanups in there that should 
make it easier to hack.

> Now, perf stat gives statistics for performance counters or events.  How 
> do I get it to compute the stats of the throughput reported by perf bench?

What I do is measure the execution time, via:

  perf stat --null --repeat 10 perf bench ...

instead of relying on benchmark output.
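
If you want to script that, here's an untested sketch of a harness - the 
helper name is made up, and the regexp just assumes perf stat's usual 
"seconds time elapsed  ( +- x.xx% )" summary on stderr (the exact shape of 
that line has varied across perf versions):

  import re
  import subprocess

  def timed_bench(bench_args, repeat=10):
      # Wrap 'perf bench' in 'perf stat --null --repeat', which measures
      # only wall-clock time and prints mean +- stddev across the runs.
      cmd = ["perf", "stat", "--null", "--repeat", str(repeat), "--",
             "perf", "bench"] + bench_args
      res = subprocess.run(cmd, capture_output=True, text=True, check=True)
      # perf stat writes its summary to stderr; newer versions print
      # "<mean> +- <stddev> seconds time elapsed  ( +- x.xx% )".
      m = re.search(r"([\d.]+)(?:\s*\+-\s*[\d.]+)?\s+seconds time elapsed"
                    r"\s+\(\s*\+-\s*([\d.]+)%\s*\)", res.stderr)
      if not m:
          raise RuntimeError("could not parse perf stat output:\n" + res.stderr)
      return float(m.group(1)), float(m.group(2))  # mean seconds, stddev in %

  if __name__ == "__main__":
      mean, stddev_pct = timed_bench(["sched", "pipe"])
      print("%.3f seconds +- %.2f%%" % (mean, stddev_pct))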

> Something like
> 
> perf stat -r 10 -- perf bench mm memset --iterations 10
> 
> doesn't quite give what I need.

Yeah. So, perf bench also has a 'simple' output format:

  comet:~/tip> perf bench -f simple sched pipe
  10.378

We could extend 'perf stat' with an option to not measure time, but instead 
take any numeric output the executed task prints and use that as the 
measurement result.

If you'd be interested in such a feature I can give it a try.
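
In the meantime a dumb userspace wrapper can approximate it - purely 
illustrative, and it assumes the '-f simple' output is a single number on 
stdout:

  import statistics
  import subprocess

  def repeat_numeric(cmd, repeat=10):
      # Run a command that prints one number per run and collect the values.
      vals = []
      for _ in range(repeat):
          res = subprocess.run(cmd, capture_output=True, text=True, check=True)
          vals.append(float(res.stdout.strip()))
      mean = statistics.mean(vals)
      # Report the stddev as a percentage of the mean, like perf stat does.
      return mean, 100.0 * statistics.stdev(vals) / mean

  if __name__ == "__main__":
      mean, pct = repeat_numeric(["perf", "bench", "-f", "simple",
                                  "sched", "pipe"])
      print("%.3f +- %.2f%%" % (mean, pct))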

Thanks,

	Ingo