Message-Id: <20091112054354.838746008@goodmis.org>
Date: Thu, 12 Nov 2009 00:43:54 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Frederic Weisbecker <fweisbec@...il.com>,
Mathieu Desnoyers <compudj@...stal.dyndns.org>
Subject: [PATCH 0/3][RFC] tracing/x86: split sched_clock in recording trace time stamps

Ingo,

In an effort to make the ring buffer as efficient as possible, I've
been using perf top to find where the trouble areas are. Most of the
overhead turns out to be simply grabbing the time stamp.

This patch set uses the normalization feature of the ring buffer to
split sched_clock: the write side records only the raw TSC value, and
the cycles-to-nanoseconds conversion is done on the read side.

This effort has brought the time to do a single record from 179 ns
down to 149 ns on my Intel Xeon 2.8 GHz box.
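
To make the split concrete, here is a rough userspace sketch of the
idea. The names (trace_clock_raw, trace_normalize) and the spin-loop
calibration are purely illustrative and are not the interfaces this
series adds; the actual kernel changes are in the files listed in the
diffstat below.

/*
 * Illustrative userspace sketch only (made-up names, not the patch
 * interfaces): the hot path stores the raw TSC value and the
 * cycles-to-nanoseconds conversion is deferred to the read side.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>          /* __rdtsc() */

#define CYC2NS_SHIFT    10
static uint64_t cyc2ns_mul;     /* (ns << CYC2NS_SHIFT) per cycle */

/* Hot (write) path: no multiply/shift, just grab the counter. */
static inline uint64_t trace_clock_raw(void)
{
        return __rdtsc();
}

/* Cold (read) path: convert recorded cycles to nanoseconds. */
static inline uint64_t trace_normalize(uint64_t cycles)
{
        return (cycles * cyc2ns_mul) >> CYC2NS_SHIFT;
}

static uint64_t clock_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Crude one-shot calibration of the cycles-to-ns scale (~10 ms spin). */
static void calibrate(void)
{
        uint64_t t0 = clock_ns(), c0 = __rdtsc();

        while (clock_ns() - t0 < 10 * 1000 * 1000)
                ;
        cyc2ns_mul = ((clock_ns() - t0) << CYC2NS_SHIFT) / (__rdtsc() - c0);
}

int main(void)
{
        calibrate();

        uint64_t start = trace_clock_raw();     /* what the writer stores */
        /* ... the traced event happens here ... */
        uint64_t end = trace_clock_raw();

        /* The reader normalizes only when the buffer is consumed. */
        printf("delta: %llu ns\n",
               (unsigned long long)trace_normalize(end - start));
        return 0;
}

Builds with something like "gcc -O2 sketch.c" (older glibc may also
need -lrt for clock_gettime).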

I'm sending this out as an RFC because I want the views of those who
know timekeeping a bit better than I do.

Thanks,
-- Steve

The following patches are in:

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace.git
    branch: tip/tracing/rfc

Steven Rostedt (3):
      tracing: Add time stamp normalize to ring buffer clock selection
      tracing: Make the trace_clock_local and trace_normalize_local weak
      tracing: Separate out x86 time stamp reading and ns conversion

----
 arch/x86/kernel/tsc.c       |   35 +++++++++++++++++++++++++++++++++++
 include/linux/ring_buffer.h |    3 ++-
 include/linux/trace_clock.h |    2 ++
 kernel/trace/ring_buffer.c  |    9 ++++++++-
 kernel/trace/trace.c        |   18 ++++++++++++++----
 kernel/trace/trace_clock.c  |   18 +++++++++++++++---
 6 files changed, 76 insertions(+), 9 deletions(-)