Message-ID: <20101117122425.GA25294@elte.hu>
Date: Wed, 17 Nov 2010 13:24:25 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Pekka Enberg <penberg@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Arjan van de Ven <arjan@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Frederic Weisbecker <fweisbec@...il.com>,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Darren Hart <dvhart@...ux.intel.com>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [patch] trace: Add user-space event tracing/injection
* Ingo Molnar <mingo@...e.hu> wrote:
> > Does this concept lend itself to tracing latencies in userspace applications
> > that run in virtual machines (e.g. the Java kind)? I'm of course interested in
> > this because of Jato [1] where bunch of interesting things can cause jitter: JIT
> > compilation, GC, kernel, and the actual application doing something (in either
> > native code or JIT'd code). It's important to be able to measure where
> > "slowness" to desktop applications and certain class of server applications
> > comes from to be able to improve things.
>
> Makes quite a bit of sense.
>
> How about the attached patch? It works fine with the simple testcase included in
> the changelog. There's a common-sense limit on the message size - but otherwise it
> adds support for apps to generate a free-form string trace event.
The entirely untested Jato patch below adds support for this to Jato's user-space
tracer. Btw., you have _hundreds_ of tracepoints in Jato, wow!
The prctl() approach is very attractive because it's very simple to integrate. It's
also reasonably fast, there's no fd baggage in prctl(). It is also arguably a
'process/task event' so fits prctl()'s original design (if it ever had one ...).
Note, I kept the original Jato buffering as well, and the prctl() does fine-grained
events, one event per trace_printf() line printed.
I think it makes sense to generate a separate event for all trace_printf() calls,
because that way the events propagate immediately. OTOH I don't know how large the
trace messages are - if there are really big tables printed (or lines are constructed
out of many trace_printf() calls) then it may make sense to buffer them a bit.
Anyway, this demonstrates the concept.
Thanks,
Ingo
diff --git a/vm/trace.c b/vm/trace.c
index 0192de6..64030ff 100644
--- a/vm/trace.c
+++ b/vm/trace.c
@@ -24,10 +24,13 @@
* Please refer to the file LICENSE for details.
*/
+#include <sys/prctl.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
+#define PR_TASK_PERF_USER_TRACE 35
+
#include "jit/compiler.h"
#include "lib/string.h"
#include "vm/thread.h"
@@ -50,15 +53,24 @@ static void setup_trace_buffer(void)
int trace_printf(const char *fmt, ...)
{
+ unsigned long curr_pos;
va_list args;
int err;
setup_trace_buffer();
+ curr_pos = trace_buffer->length;
+
va_start(args, fmt);
err = str_vappend(trace_buffer, fmt, args);
va_end(args);
+ /*
+ * Send the trace buffer to perf, it will show up as user:user events:
+ * (on non-perf kernels this will produce a harmless -EINVAL)
+ */
+ prctl(PR_TASK_PERF_USER_TRACE, trace_buffer->value + curr_pos, trace_buffer->length - curr_pos);
+
return err;
}
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/