Message-ID: <20060916191043.GA22558@elte.hu>
Date: Sat, 16 Sep 2006 21:10:43 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc: Jes Sorensen <jes@....com>, Roman Zippel <zippel@...ux-m68k.org>,
Andrew Morton <akpm@...l.org>, tglx@...utronix.de,
karim@...rsys.com, Paul Mundt <lethal@...ux-sh.org>,
linux-kernel@...r.kernel.org,
Christoph Hellwig <hch@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Greg Kroah-Hartman <gregkh@...e.de>,
Tom Zanussi <zanussi@...ibm.com>, ltt-dev@...fik.org,
Michel Dagenais <michel.dagenais@...ymtl.ca>
Subject: Re: [PATCH 0/11] LTTng-core (basic tracing infrastructure) 0.5.108
* Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca> wrote:

> See http://ltt.polymtl.ca/svn/tests/kernel/test-kprobes.c to insert
> the kprobe. Tests done on LTTng 0.5.111, on an x86 3GHz with
> hyperthreading.

i have done a bit of kprobes and djprobes testing on a 2160 MHz Athlon64
CPU (UP). I have tested two types of almost-NOP tracepoints (on 2.6.17),
where the probe function only increments a counter:
static int counter;

/* near-NOP probe handler: just bump a counter */
static void probe_func(struct djprobe *djp, struct pt_regs *regs)
{
	counter++;
}
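
for reference, a minimal sketch of how such a counting probe can be
attached via the stock kprobes API is below. (illustrative only, not the
exact module used for these measurements - and note that the
.symbol_name field only appeared in later kernels; on 2.6.17 one would
resolve kp.addr via kallsyms_lookup_name() instead.)

#include <linux/module.h>
#include <linux/kprobes.h>

static int counter;

/* kprobes pre-handler: runs right before the probed instruction */
static int count_pre(struct kprobe *p, struct pt_regs *regs)
{
	counter++;
	return 0;
}

static struct kprobe kp = {
	.symbol_name	= "sys_getpid",	/* probe the getpid syscall entry */
	.pre_handler	= count_pre,
};

static int __init count_init(void)
{
	return register_kprobe(&kp);
}

static void __exit count_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(count_init);
module_exit(count_exit);
MODULE_LICENSE("GPL");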
i then measured the overhead of an unmodified, a kprobes-probed and a
djprobes-probed sys_getpid() system call:

sys_getpid() unmodified latency: 317 cycles [ 0.146 usecs ]
sys_getpid() kprobes latency: 815 cycles [ 0.377 usecs ]
sys_getpid() djprobes latency: 380 cycles [ 0.176 usecs ]

i.e. the kprobes overhead is +498 cycles (+0.231 usecs), the djprobes
overhead is +63 cycles (+0.029 usecs).
what do these numbers tell us? Firstly, on this CPU the kprobes overhead
is not 1000-2000 cycles but around 500 cycles. Secondly, if that's not
fast enough, the "next-gen kprobes" code, djprobes, has a really small
overhead of just 63 cycles.
Ingo