Message-ID: <4CC8C7BE.4010102@caviumnetworks.com>
Date: Wed, 27 Oct 2010 17:45:50 -0700
From: David Daney <ddaney@...iumnetworks.com>
To: Ted Ts'o <tytso@....edu>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: Perf can't deal with many tracepoints
On 10/27/2010 05:40 PM, Ted Ts'o wrote:
> On Wed, Oct 27, 2010 at 05:16:18PM -0700, David Daney wrote:
>> Tracing is supposed to be low overhead.  Forcing people to decode
>> things like this at the trace point may take more code and cause
>> the trace data to be larger, making it slower than necessary.
>>
>> If there isn't a good reason to keep perf stupid, then making it
>> smarter could be attractive.
>
> Agreed.  Although one argument against making perf smarter is that
> certain things, such as the dev_t MAJOR/MINOR split, are internal
> abstractions that could potentially vary from kernel to kernel.
>
> And the question is whether perf really should be so kernel-dependent
> that if you boot a different kernel, you had better have the matching
> perf installed.
>
It may be possible to encode the dev_t split in the trace meta-data, as
is already done for some other types.  Then perf could decode the split
based on that meta-data.
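
For reference, the in-kernel split currently looks like this (from
include/linux/kdev_t.h -- which of course could change, and that is
exactly Ted's point about it being an internal abstraction):

/* In-kernel dev_t layout: 12-bit major, 20-bit minor.
 * This is the split that would have to be described in the trace
 * meta-data; note that user-space (glibc) packs dev_t differently. */
#define MINORBITS	20
#define MINORMASK	((1U << MINORBITS) - 1)

#define MAJOR(dev)	((unsigned int) ((dev) >> MINORBITS))
#define MINOR(dev)	((unsigned int) ((dev) & MINORMASK))
#define MKDEV(ma, mi)	(((ma) << MINORBITS) | (mi))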
Another option is to have perf print the raw data and not crash. Then
someone looking at the output could, if they desired, decode the dev_t
themselves.
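
Something like the following trivial program would do the manual decode,
assuming the raw value perf prints is the in-kernel encoding above (just
a sketch, not anything I'm proposing to merge):

/* decode_dev.c: decode a raw in-kernel dev_t value copied out of
 * perf's raw output, using the 12/20 major/minor split above. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	unsigned int dev;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <raw dev_t>\n", argv[0]);
		return 1;
	}
	dev = (unsigned int) strtoul(argv[1], NULL, 0);
	printf("major %u, minor %u\n", dev >> 20, dev & ((1U << 20) - 1));
	return 0;
}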
David Daney