Message-ID: <45212F1E.3080409@us.ibm.com>
Date:	Mon, 02 Oct 2006 10:24:14 -0500
From:	"Jose R. Santos" <jrs@...ibm.com>
To:	Mathieu Desnoyers <compudj@...stal.dyndns.org>
CC:	Martin Bligh <mbligh@...gle.com>,
	"Frank Ch. Eigler" <fche@...hat.com>,
	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
	prasanna@...ibm.com, Andrew Morton <akpm@...l.org>,
	Ingo Molnar <mingo@...e.hu>,
	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
	Paul Mundt <lethal@...ux-sh.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Jes Sorensen <jes@....com>, Tom Zanussi <zanussi@...ibm.com>,
	Richard J Moore <richardj_moore@...ibm.com>,
	Michel Dagenais <michel.dagenais@...ymtl.ca>,
	Christoph Hellwig <hch@...radead.org>,
	Greg Kroah-Hartman <gregkh@...e.de>,
	Thomas Gleixner <tglx@...utronix.de>,
	William Cohen <wcohen@...hat.com>, ltt-dev@...fik.org,
	systemtap@...rces.redhat.com, Alan Cox <alan@...rguk.ukuu.org.uk>,
	Jeremy Fitzhardinge <jeremy@...p.org>,
	Karim Yaghmour <karim@...rsys.com>,
	Pavel Machek <pavel@...e.cz>, Joe Perches <joe@...ches.com>,
	"Randy.Dunlap" <rdunlap@...otime.net>
Subject: Re: Performance analysis of Linux Kernel Markers 0.20 for 2.6.17

Mathieu Desnoyers wrote:
> However, the performance impact of using a kprobe is non-negligible when
> activated. Assuming that kprobes had a mechanism to get the variables
> from the caller's stack, it would perform the same task in at least 4178.23
> cycles vs 55.74 for a marker and a probe (ratio : 75). While kprobes are very
> useful for the reasons explained earlier, the high event rate paths in the kernel
> would clearly benefit from a marker mechanism when they are probed.
>   

The problem now is how we define "high event rate".  This is 
something that is highly dependent on the workload being run as well as 
the system configuration for that workload.  There are a lot of places 
in the kernel that can be turned into high event rate paths with the 
right workload, even though they may not represent 99% of most use cases. 

I would guess that anything above 500 events/s per CPU on several 
realistic workloads is a good place to start.
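To put that threshold in perspective, here is a rough back-of-the-envelope sketch using the per-event cycle counts quoted above.  The 2 GHz clock rate is a hypothetical figure of my own, not from the original measurements:

```python
# Fraction of one CPU consumed by instrumentation alone at the
# suggested 500 events/s per-CPU rate, for both probing mechanisms.
CPU_HZ = 2e9             # hypothetical 2 GHz CPU (assumption)
EVENT_RATE = 500         # events per second per CPU
KPROBE_CYCLES = 4178.23  # cycles per event via kprobe (from the thread)
MARKER_CYCLES = 55.74    # cycles per event via marker + probe (from the thread)

def overhead_fraction(cycles_per_event, rate=EVENT_RATE, hz=CPU_HZ):
    """Fraction of one CPU's cycles spent purely on instrumentation."""
    return cycles_per_event * rate / hz

print(f"kprobe: {overhead_fraction(KPROBE_CYCLES):.4%}")
print(f"marker: {overhead_fraction(MARKER_CYCLES):.4%}")
```

At 500 events/s even the kprobe path stays around a tenth of a percent of one CPU under these assumptions, which is why the interesting cases are the paths that fire orders of magnitude more often.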


> On the macro-benchmark side, no significant difference in performance has been
> found between the vanilla kernel and a kernel "marked" with the standard LTTng
> instrumentation.
>   

Out of curiosity, how many cycles does it take to process a complete 
LTTng event, up to the point where it has been completely stored into 
the trace buffer?  Since this should take a lot more than 55.74 cycles, 
it would be interesting to know at what event rate a static marker 
would stop showing as big a performance advantage over dynamic probing.
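The effect I have in mind can be sketched like this: once the (so far unquantified) cost of formatting and storing an LTTng event is added to both paths, the relative advantage of the static marker shrinks.  The store-cost values below are hypothetical placeholders, not measurements:

```python
# How the kprobe-vs-marker advantage ratio shrinks as the shared
# per-event storage cost grows. Dispatch costs are from the thread;
# the store costs iterated over are made-up illustrative values.
MARKER_DISPATCH = 55.74    # cycles (from the thread)
KPROBE_DISPATCH = 4178.23  # cycles (from the thread)

def advantage_ratio(store_cycles):
    """Total kprobe event cost divided by total marker event cost."""
    return (KPROBE_DISPATCH + store_cycles) / (MARKER_DISPATCH + store_cycles)

for store in (0, 200, 1000, 5000):  # hypothetical store costs in cycles
    print(f"store={store:5d} cycles -> ratio {advantage_ratio(store):.1f}x")
```

With no storage cost the ratio is the 75x from the original numbers; by the time storing an event costs a few thousand cycles, the two mechanisms differ by well under 2x per event, although the absolute savings per event stay the same.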

-JRS
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
