Message-ID: <450E55BB.80208@sgi.com>
Date: Mon, 18 Sep 2006 10:15:55 +0200
From: Jes Sorensen <jes@....com>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc: Ingo Molnar <mingo@...e.hu>, Roman Zippel <zippel@...ux-m68k.org>,
Andrew Morton <akpm@...l.org>, tglx@...utronix.de,
karim@...rsys.com, Paul Mundt <lethal@...ux-sh.org>,
linux-kernel@...r.kernel.org,
Christoph Hellwig <hch@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Greg Kroah-Hartman <gregkh@...e.de>,
Tom Zanussi <zanussi@...ibm.com>, ltt-dev@...fik.org,
Michel Dagenais <michel.dagenais@...ymtl.ca>
Subject: Re: [PATCH 0/11] LTTng-core (basic tracing infrastructure) 0.5.108
Mathieu Desnoyers wrote:
> And about those extra cycles... according to
> Documentation/kprobes.txt:
> "6. Probe Overhead
>
> On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
> microseconds to process. Specifically, a benchmark that hits the same
> probepoint repeatedly, firing a simple handler each time, reports 1-2
> million hits per second, depending on the architecture. A jprobe or
> return-probe hit typically takes 50-75% longer than a kprobe hit.
> When you have a return probe set on a function, adding a kprobe at
> the entry to that function adds essentially no overhead.
[snip]
> So, 1 microsecond seems more like 1500-2000 cycles to me, not 50.
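[For reference, the arithmetic behind that estimate, assuming a
1.5-2 GHz CPU as was typical of 2005-era hardware:

    1 us * 1.5 GHz = 1.0e-6 s * 1.5e9 cycles/s = 1500 cycles
    1 us * 2.0 GHz = 1.0e-6 s * 2.0e9 cycles/s = 2000 cycles ]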
So call it 2000 cycles. Now go measure it in *real*-life benchmarks,
not some artificial "call this one syscall that hits the probe every
time in a tight loop" kind of thing.
Show us some *real* numbers please.
Jes
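
[For concreteness, one way to get a per-hit number is to time the same
workload twice, once with the probe armed and once without, and divide
the cycle delta by the number of hits. Below is a minimal user-space
sketch of such a harness, assuming x86 with a constant-rate TSC; the
getppid() call and the iteration count are arbitrary illustrative
choices, not anything from this thread. For the "real-life" number Jes
is asking for, the loop body would be replaced by an actual workload
rather than a syscall hammered in a tight loop.

	/* probe-cost.c: time a workload in cycles via the TSC.
	 * Build: gcc -O2 -std=c99 probe-cost.c -o probe-cost
	 * Run once with the probe armed, once unarmed, and compare. */
	#include <stdio.h>
	#include <stdint.h>
	#include <unistd.h>

	static inline uint64_t rdtsc(void)
	{
		uint32_t lo, hi;
		__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
		return ((uint64_t)hi << 32) | lo;
	}

	int main(void)
	{
		enum { ITERS = 1000000 };
		uint64_t start, end;
		int i;

		start = rdtsc();
		for (i = 0; i < ITERS; i++)
			getppid();	/* stand-in for the probed path */
		end = rdtsc();

		printf("%.1f cycles per iteration\n",
		       (double)(end - start) / ITERS);
		return 0;
	}

The armed-minus-unarmed difference is what matters; the absolute number
includes syscall entry/exit cost as well. ]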