Message-ID: <20130917071641.GD20661@gmail.com>
Date:	Tue, 17 Sep 2013 09:16:41 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Namhyung Kim <namhyung@...nel.org>
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Jiri Olsa <jolsa@...hat.com>, David Ahern <dsahern@...il.com>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Stephane Eranian <eranian@...gle.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 0/4] perf tools: New comm infrastructure


* Namhyung Kim <namhyung@...nel.org> wrote:

> Hi Ingo,
> 
> On Sat, 14 Sep 2013 08:11:49 +0200, Ingo Molnar wrote:
> > * Frederic Weisbecker <fweisbec@...il.com> wrote:
> >> My patches and Namhyung's should improve the comm situation a lot, but 
> >> we can't work miracles. The only other way would perhaps be to limit 
> >> the depth of the callchain branches.
> >> 
> >> Now maybe we can find other big contention points in perf. It's also 
> >> possible we have an endless loop somewhere.
> >
> > Well, it was the 100,000+ step linear list walk that was causing 90% of 
> > the slowness here. Namhyung's patch should dramatically improve that. I 
> > guess it's time for someone to post a combined tree so that it can be 
> > tested all together?
> 
> I pushed a combined tree to the 'perf/callchain-v2' branch in my tree
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git
> 
> 
> Please note that I also pushed other versions (v[1-3]).  v1 is my
> previous rbtree conversion patch, v2 adds Frederic's new comm
> infrastructure series on top, and v3 adds my revised patch to refer
> to the current comm [1] on top of v2.
> 
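
For anyone skimming the thread: the rbtree conversion in v1 is about 
replacing a long linear list walk - the 100,000+ step walk mentioned 
above - with an O(log n) tree descent. A purely illustrative sketch of 
that general pattern, using the linux/rbtree.h helpers the perf tools 
already carry (the string-keyed 'str_node' type below is made up, it is 
not the actual patch):

/*
 * Purely illustrative - not the actual patches: the general pattern of
 * replacing an O(n) linked-list scan with an O(log n) rbtree descent,
 * using the linux/rbtree.h helpers the perf tools already carry.  The
 * string-keyed 'str_node' type is made up for this example.
 */
#include <linux/rbtree.h>
#include <stdlib.h>
#include <string.h>

struct str_node {
	struct rb_node	rb_node;
	char		*str;
};

static struct str_node *str_node__findnew(struct rb_root *root, const char *str)
{
	struct rb_node **p = &root->rb_node;
	struct rb_node *parent = NULL;
	struct str_node *sn;
	int cmp;

	while (*p != NULL) {			/* tree descent, not a list walk */
		parent = *p;
		sn = rb_entry(parent, struct str_node, rb_node);

		cmp = strcmp(str, sn->str);
		if (cmp == 0)
			return sn;		/* existing entry, reuse it */
		p = cmp < 0 ? &(*p)->rb_left : &(*p)->rb_right;
	}

	sn = malloc(sizeof(*sn));
	if (sn == NULL)
		return NULL;
	sn->str = strdup(str);

	rb_link_node(&sn->rb_node, parent, p);	/* hook in at the slot we found */
	rb_insert_color(&sn->rb_node, root);	/* rebalance */
	return sn;
}
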
> I did my own test again among them.  The test data is a 400MB
> perf.data file created by a parallel kernel build.
> 
>   $ ls -lh perf.data.big
>   -rw-------. 1 namhyung namhyung 400M Sep  9 10:21 perf.data.big
> 
> For a more precise result, I changed the cpufreq governor to 'performance'
> 
>   # echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
> 
> and ran perf report on that cpu.
> 
>   $ taskset -c 3 time -p perf --no-pager report --stdio -i perf.data.big > /dev/null

Btw., for such things you could use 'perf stat --null --sync --repeat 3', 
which does not use the PMU or even perf events - it only uses precise 
timers to measure execution time:

   $ taskset -c 3 perf stat --null --sync --repeat 3 perf --no-pager report --stdio -i perf.data.big > /dev/null

> I ran it multiple times for each case and the results did not vary much.

(perf stat --repeat will print a nice stddev as well.)

>            baseline       v1       v2       v3
>   ---------------------------------------------
>   real       380.17    12.63    10.02     9.03
>   user       378.86    11.95     9.66     8.69
>   sys          0.70     0.65     0.33     0.34

(Alas perf stat --null does not print a system/user time split. Might be 
nice to implement that.)
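
If someone wants to implement that, the plumbing is already in the 
kernel: the user/sys split of a forked workload comes for free from 
wait4()/struct rusage. A standalone sketch of just that API - not perf 
stat code - which forks a workload and prints its user and system time:

/*
 * Standalone sketch, not perf stat code: the user/sys split for a forked
 * workload already comes from the kernel via wait4()/struct rusage, so
 * printing it is mostly plumbing.
 */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
	struct rusage ru;
	int status;
	pid_t pid;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <command> [args]\n", argv[0]);
		return 1;
	}

	pid = fork();
	if (pid == 0) {
		execvp(argv[1], &argv[1]);	/* run the workload */
		_exit(127);
	}

	if (wait4(pid, &status, 0, &ru) < 0)	/* reaps the child, fills rusage */
		return 1;

	printf("user %ld.%06ld  sys %ld.%06ld\n",
	       (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
	       (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
	return 0;
}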

The numbers look pretty nice: a 40x speedup. Especially with the progress 
bar displayed, this should be within a human-tolerable runtime.

Still it would be nice to look at some stats: number of records, number of 
call chain entries, average call chain depth, tree size, max tree depth, 
etc. - so that we get an estimate of how much processing we spend on a 
single call chain entry, on average.
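
Even a handful of counters bumped while resolving each sample's chain, 
dumped at the end of the run, would be enough to derive the per-entry 
cost. A hypothetical sketch - none of these names exist in perf, they 
just show the shape of the bookkeeping:

/*
 * Hypothetical bookkeeping - these names do not exist in perf: a few
 * counters bumped once per resolved sample, printed at the end of the
 * run so the average cost per callchain entry can be derived.
 */
#include <stdio.h>

struct chain_stats {
	unsigned long long	nr_samples;		/* records processed   */
	unsigned long long	nr_chain_entries;	/* total callchain IPs */
	unsigned int		max_chain_depth;	/* deepest chain seen  */
};

static struct chain_stats stats;

static void chain_stats__update(unsigned int depth)
{
	stats.nr_samples++;
	stats.nr_chain_entries += depth;
	if (depth > stats.max_chain_depth)
		stats.max_chain_depth = depth;
}

static void chain_stats__print(FILE *fp)
{
	double avg = stats.nr_samples ?
		(double)stats.nr_chain_entries / stats.nr_samples : 0.0;

	fprintf(fp, "samples: %llu, chain entries: %llu, "
		"avg depth: %.1f, max depth: %u\n",
		stats.nr_samples, stats.nr_chain_entries,
		avg, stats.max_chain_depth);
}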

If any of those values is suspiciously high then maybe we could cull the 
callchain depth by default - people rarely look beyond a couple of entries. 
This gets tricky when people sort in the reverse direction though: in that 
case the deepest entries are just as valuable to the end result.
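
Mechanically the cull itself would be cheap - a cap in the 
chain-resolution loop, trimming from whichever end matters less for the 
chosen sort direction. A hypothetical sketch (the 'max_depth' knob below 
is not an existing perf option); the layout mirrors the nr + ips[] shape 
of PERF_SAMPLE_CALLCHAIN records:

/*
 * Hypothetical sketch - 'max_depth' is not an existing perf option: cap
 * the number of callchain entries resolved per sample.  The layout
 * mirrors the nr + ips[] shape of PERF_SAMPLE_CALLCHAIN records.
 */
#include <stdint.h>

struct ip_chain {
	uint64_t	nr;	/* number of captured entries      */
	uint64_t	ips[];	/* innermost-first return addresses */
};

static unsigned int max_depth = 127;	/* 0 would mean "no limit" */

static void resolve_chain(const struct ip_chain *chain,
			  void (*add_entry)(uint64_t ip))
{
	uint64_t i, nr = chain->nr;

	/*
	 * Cull the deep (outermost) tail by default; for caller-first
	 * sorting one would trim from the other end instead, since there
	 * the outermost frames carry the information.
	 */
	if (max_depth && nr > max_depth)
		nr = max_depth;

	for (i = 0; i < nr; i++)
		add_entry(chain->ips[i]);
}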

Thanks,

	Ingo
