Message-ID: <5df78e1d0812051036v6a619c2end01138b60217a74e@mail.gmail.com>
Date: Fri, 5 Dec 2008 10:36:34 -0800
From: Jiaying Zhang <jiayingz@...gle.com>
To: "Frank Ch. Eigler" <fche@...hat.com>
Cc: Steven Rostedt <srostedt@...hat.com>, linux-kernel@...r.kernel.org,
Michael Rubin <mrubin@...gle.com>,
Martin Bligh <mbligh@...gle.com>,
Michael Davidson <md@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH 3/3] kernel tracing prototype
On Fri, Dec 5, 2008 at 7:34 AM, Frank Ch. Eigler <fche@...hat.com> wrote:
> Jiaying Zhang <jiayingz@...gle.com> writes:
>
>> To better answer the question why we want to implement a new kernel
>> tracing prototype, here are some performance results we collected before
>> with the tbench benchmark.
>
> Thanks.
>
>> - vanilla 2.6.26 kernel, CONFIG_MARKERS=n
>> Throughput 759.352 MB/sec 4
>> - markers compiled in, tracing disabled
>> Throughput 754.18 MB/sec 4
>
> Is your kernel built with -freorder-blocks? This option dramatically
> reduces the cost of inactive markers/tracepoints.
No. I will try this flag and let you know if I see any
performance difference.
Thanks for your suggestion!
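For reference, assuming the kbuild in this tree supports the KCFLAGS
command-line variable, the flag could probably be added with something
like (treat this as a sketch; the exact mechanism may differ by kernel
version):

  make KCFLAGS=-freorder-blocks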
>
>> - tracing syscall entry/exit, use markers, not logging data to ring_buffer
>> Throughput 715.68 MB/sec 4
>> - tracing syscall entry/exit, use markers, logging data to ring_buffer
>> Throughput 654.056 MB/sec 4
>
> (By the way, how are you doing syscall entry/exit tracing?)
The way we trace syscall entry/exit is similar to the approach
used in LTTng. When tracing is enabled, a special thread flag,
TIF_KERNEL_TRACE, is set so that syscall entry and exit go through
the special syscall_trace_enter and syscall_trace_exit routines,
which are patched to call trace_mark in our old prototype.
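Roughly, the patched entry hook looks along these lines (this is only
an illustrative sketch, not the actual prototype code; the marker name,
format string, and arguments are made up, and x86 register layout is
assumed):

/*
 * Illustrative sketch only.  Assumes the 2.6.26 markers API
 * (include/linux/marker.h) and a TIF_KERNEL_TRACE thread flag
 * added by the tracing patch; x86 pt_regs layout shown.
 */
#include <linux/marker.h>
#include <linux/sched.h>
#include <linux/ptrace.h>

asmlinkage void syscall_trace_enter(struct pt_regs *regs)
{
	/* Skip the marker entirely unless tracing was enabled. */
	if (test_thread_flag(TIF_KERNEL_TRACE))
		trace_mark(kernel_syscall_entry, "syscall_id %d ip %lu",
			   (int)regs->orig_ax, instruction_pointer(regs));
}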
Jiaying
>
>
> - FChE
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/