Message-ID: <20091207194802.GB5049@nowhere>
Date:	Mon, 7 Dec 2009 20:48:05 +0100
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Xiao Guangrong <xiaoguangrong@...fujitsu.com>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Hitoshi Mitake <mitake@....info.waseda.ac.jp>,
	linux-kernel@...r.kernel.org,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Paul Mackerras <paulus@...ba.org>,
	Tom Zanussi <tzanussi@...il.com>,
	Steven Rostedt <srostedt@...hat.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Subject: Re: [PATCH 2/2] perf lock: New subcommand "lock" to perf for
	analyzing lock statistics

On Mon, Dec 07, 2009 at 04:38:03PM +0800, Xiao Guangrong wrote:
> 
> 
> Ingo Molnar wrote:
> 
> > Also, i agree that the performance aspect is probably the most pressing 
> > issue. Note that 'perf bench sched messaging' is very locking intense so 
> > a 10x slowdown is not entirely unexpected - we still ought to optimize 
> > it all some more. 'perf lock' is an excellent testcase for this in any 
> > case.
> > 
> 
> Here are some test results to show the overhead of lockdep trace events:
> 
>                      select    pagefault  mmap     parallel mem.  context-switch
>                      latency   latency    latency  R/W bandwidth  latency
> 
> disable ftrace        0         0          0        0              0
> 
> enable all ftrace   -16.65%   -109.80%   -93.62%    0.14%         -6.94%
> 
> enable all ftrace
> except lockdep       -2.67%     1.08%     -3.65%   -0.52%         -0.68%
> 
> 
> We also found big overhead when using kernbench and fio, but we haven't
> verified whether it's caused by lockdep events.
> 
> Thanks,
> Xiao
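[For anyone trying to reproduce the three rows of that table, a minimal
sketch (mine, not from the thread) of driving the tracefs enable files.
It assumes debugfs is mounted at /sys/kernel/debug and that the kernel
was built with lockdep, so the lock/* event group exists.]

/*
 * Sketch only -- not from the thread.  Sets up the three ftrace
 * configurations from the table above by writing to the tracefs
 * control files.  Assumes debugfs is mounted at /sys/kernel/debug
 * and that the kernel exposes the lock/* (lockdep) tracepoints.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void write_flag(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fputs(val, f);
	fclose(f);
}

int main(int argc, char **argv)
{
	const char *events = "/sys/kernel/debug/tracing/events";
	char path[256];

	if (argc != 2 || (strcmp(argv[1], "none") &&
			  strcmp(argv[1], "all") &&
			  strcmp(argv[1], "all-but-lock"))) {
		fprintf(stderr, "usage: %s none|all|all-but-lock\n", argv[0]);
		return 1;
	}

	/* "disable ftrace" row: every event off; otherwise all on */
	snprintf(path, sizeof(path), "%s/enable", events);
	write_flag(path, strcmp(argv[1], "none") ? "1" : "0");

	/* "except lockdep" row: turn just the lock events back off */
	if (!strcmp(argv[1], "all-but-lock")) {
		snprintf(path, sizeof(path), "%s/lock/enable", events);
		write_flag(path, "0");
	}
	return 0;
}

[Run as root before each benchmark pass, e.g. once per 'perf bench
sched messaging' run.]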


This profiling was done using the ftrace events through perf, right?
The slowdown might come from the fact that the lock events are
high-rate events: they fill much more of the perf buffer space than
the other events do. In one of your previous mails, you showed us the
difference in the size of perf.data when capturing either the
scheduler events or the lock events.

And IIRC, capturing the lock events resulted in a 100 MB perf.data,
whereas the sched events produced only a small file.
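
[That size difference is easy to check; a rough sketch (mine, not from
the thread, with assumed tracepoint names -- use whatever `perf list`
reports on your kernel) that records the same workload with each event
class and compares the resulting file sizes:]

/*
 * Sketch only -- not from the thread.  Records the same workload twice,
 * once with scheduler tracepoints and once with lockdep tracepoints,
 * and prints the resulting perf.data sizes.  Event names are assumed;
 * the lock:* ones need a lockdep-enabled kernel.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

static long record_size(const char *cmd, const char *out)
{
	struct stat st;

	if (system(cmd) != 0 || stat(out, &st) != 0)
		return -1;
	return (long)st.st_size;
}

int main(void)
{
	long sched_sz, lock_sz;

	sched_sz = record_size("perf record -o sched.data"
			       " -e sched:sched_switch -e sched:sched_wakeup"
			       " -- perf bench sched messaging", "sched.data");
	lock_sz = record_size("perf record -o lock.data"
			      " -e lock:lock_acquire -e lock:lock_release"
			      " -- perf bench sched messaging", "lock.data");

	printf("sched events: %ld bytes\n", sched_sz);
	printf("lock events:  %ld bytes\n", lock_sz);
	return 0;
}

[If the lock events file comes out orders of magnitude larger, that
would support the buffer-pressure theory below.]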

The overhead in the pagefault and mmap latencies could then come from
the fact that we have many more events to save: we walk through many
more pages of the perf buffer, hence fault more often, and so on.

Add to that the fact that various locks are taken in the mmap and
fault paths themselves, generating yet more lock events.

Just a guess...
