Message-ID: <878wr64cmj.fsf@basil.nowhere.org>
Date: Wed, 26 Nov 2008 18:32:04 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc: Ingo Molnar <mingo@...e.hu>, akpm@...ux-foundation.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [patch 17/17] x86 trace clock
Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca> writes:
> X86 trace clock. Depends on tsc_sync to detect if timestamp counters are
> synchronized on the machine.
For that, tsc_sync needs to be fixed first? E.g. see the thread a while
ago about it firing incorrectly on VMware.
> A "Big Fat" (TM) warning is shown on the console when the trace clock is used on
> systems without synchronized TSCs to tell the user to
>
How about Intel systems where the TSC only stops in C3 and deeper?
You don't seem to handle that case well.
On modern Intel systems that's a common case.
> + new_tsc = last_tsc + TRACE_CLOCK_MIN_PROBE_DURATION;
> + /*
> + * If cmpxchg fails with a value higher than the new_tsc, don't
> + * retry : the value has been incremented and the events
> + * happened almost at the same time.
> + * We must retry if cmpxchg fails with a lower value :
> + * it means that we are the CPU with highest frequency and
> + * therefore MUST update the value.
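[Editor's note: the retry rule described in the quoted comment can be sketched in userspace C11 atomics; the names (`last_tsc`, `trace_clock_read`, the `+ 1` standing in for `TRACE_CLOCK_MIN_PROBE_DURATION`) are illustrative assumptions, not the patch's actual implementation.]

```c
#include <stdatomic.h>
#include <stdint.h>

/* Global last-published timestamp, shared by all CPUs. */
static _Atomic uint64_t last_tsc;

/*
 * Publish a monotonic timestamp derived from a raw per-CPU TSC read.
 * If our TSC is behind the published value, bump it by a minimal step
 * (the patch uses TRACE_CLOCK_MIN_PROBE_DURATION; 1 here).
 */
uint64_t trace_clock_read(uint64_t tsc)
{
    uint64_t last = atomic_load(&last_tsc);
    uint64_t new_tsc;

    for (;;) {
        new_tsc = (tsc > last) ? tsc : last + 1;
        if (atomic_compare_exchange_strong(&last_tsc, &last, new_tsc))
            return new_tsc;
        /*
         * cmpxchg failed; 'last' now holds the current value.
         * If it is already >= new_tsc, another CPU advanced the
         * clock past us: the events happened almost at the same
         * time, so accept that value without retrying.
         */
        if (last >= new_tsc)
            return last;
        /*
         * Otherwise we still hold the highest timestamp (the
         * fastest-ticking CPU) and MUST retry to publish it.
         */
    }
}
```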
Sorry, but any global TSC measurement without scaling, when the TSCs
run at different frequencies, just doesn't make sense. The results will
always be poor. You really have to scale appropriately then, and also
handle the "unstable period".
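[Editor's note: the per-CPU scaling Andi refers to is conventionally done with a fixed-point multiplier/shift pair, as in the kernel's cyc2ns conversion; the struct and values below are an illustrative sketch, not code from this thread.]

```c
#include <stdint.h>

/* Per-CPU fixed-point conversion factor: ns = (cycles * mult) >> shift. */
struct cyc2ns {
    uint32_t mult;   /* (NSEC_PER_SEC << shift) / tsc_khz-derived rate */
    uint32_t shift;
};

/* Convert a raw TSC cycle delta to nanoseconds at this CPU's frequency. */
static inline uint64_t cycles_to_ns(uint64_t cycles, const struct cyc2ns *c)
{
    return (cycles * c->mult) >> c->shift;
}
```

With per-CPU factors like this, timestamps from CPUs ticking at different rates can be compared in a common nanosecond timebase; e.g. for a 2 GHz TSC with shift = 10, mult = (10^9 << 10) / (2 * 10^9) = 512, so 2000 cycles maps to 1000 ns.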
-Andi
--
ak@...ux.intel.com