Message-ID: <1274280439.26328.770.camel@gandalf.stny.rr.com>
Date: Wed, 19 May 2010 10:47:19 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Frederic Weisbecker <fweisbec@...il.com>,
Ingo Molnar <mingo@...e.hu>, Paul Mackerras <paulus@...ba.org>,
Arnaldo Carvalho de Melo <acme@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 5/5] perf: Implement perf_output_addr()
On Wed, 2010-05-19 at 09:58 +0200, Peter Zijlstra wrote:
> On Wed, 2010-05-19 at 09:21 +0200, Frederic Weisbecker wrote:
>
> > I'm still not sure what you mean here by this multiplexing. Is
> > this about per cpu multiplexing?
>
> Suppose there's two events attached to the same tracepoint. Will you
> write the tracepoint twice and risk different data in each, or will you
> do it once and copy it into each buffer?
Is this because the same probe function handles the tracepoint, and
has difficulty knowing which event it is dealing with?
Note, the shrinking of the TRACE_EVENT() code that I pushed (which I
hope makes it to 35, since it lays the groundwork for lots of features
on top of TRACE_EVENT()) lets you pass private data to each probe
registered to the tracepoint, allowing the same function to handle two
different activities or different tracepoints.
>
> > There is another problem. We need something like
> > perf_output_discard() in case the filter reject the event (which
> > must be filled for this check to happen).
>
> Yeah, I utterly hate that, I opted to let anything with a filter take
> the slow path. Not only would I have to add a discard, but I'd have to
> decrement the counter as well, which is a big no-no.
Hmm, this would hurt performance for system-wide recording of events
that are filtered. One would think adding a filter would speed things
up, not slow them down.
-- Steve