Message-ID: <alpine.LFD.2.02.1302192056330.22263@ionos>
Date: Tue, 19 Feb 2013 21:15:29 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: John Stultz <john.stultz@...aro.org>
cc: Stephane Eranian <eranian@...gle.com>,
Pawel Moll <pawel.moll@....com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Paul Mackerras <paulus@...ba.org>,
Anton Blanchard <anton@...ba.org>,
Will Deacon <Will.Deacon@....com>,
"ak@...ux.intel.com" <ak@...ux.intel.com>,
Pekka Enberg <penberg@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Robert Richter <robert.richter@....com>
Subject: Re: [RFC] perf: need to expose sched_clock to correlate user samples
with kernel samples
On Tue, 19 Feb 2013, Thomas Gleixner wrote:
> On Tue, 19 Feb 2013, John Stultz wrote:
> Would be interesting to compare and contrast that. Though you can't do
> that in the kernel as the write hold time of the timekeeper seq is way
> larger than the gtod->seq write hold time. I have a patch series in
> work which makes the timekeeper seq hold time almost as short as that
> of gtod->seq.
As a side note, there is a really interesting corner case
vs. virtualization.
VCPU0                                   VCPU1

update_wall_time()
  write_seqlock_irqsave(&tk->lock, flags);
  ....

Host schedules out VCPU0

Arbitrary delay

Host schedules in VCPU0
                                        __vdso_clock_gettime()#1
  update_vsyscall();
                                        __vdso_clock_gettime()#2
Depending on the length of the delay which kept VCPU0 away from
executing, and depending on the direction of the NTP update of the
timekeeping variables, __vdso_clock_gettime()#2 can observe time going
backwards: #1 still extrapolates from the stale vsyscall data across
the whole delay, while #2 uses the freshly published base and mult.
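
A back-of-the-envelope arithmetic sketch of that (user-space C with
made-up constants; the real accumulation path in the timekeeping core
is more involved, but the effect of a lowered mult is the same):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Made-up numbers, purely to illustrate the arithmetic. */
	uint64_t base_ns   = 1000000000ULL;	/* last published base time */
	uint64_t cycles    = 3000000ULL;	/* clocksource delta since that base */
	uint64_t mult_old  = 1000;		/* mult before the NTP adjustment */
	uint64_t mult_new  = 990;		/* NTP lowered the mult */
	unsigned int shift = 10;

	/* Reading #1: stale vsyscall data, old mult across the whole delay. */
	uint64_t t1 = base_ns + ((cycles * mult_old) >> shift);

	/*
	 * update_vsyscall() publishes data derived from the lowered mult.
	 * Simplification: treat the whole delta as re-scaled with mult_new;
	 * the real code folds part of it into the new base, but when mult
	 * went down the result can still end up behind reading #1.
	 */
	uint64_t t2 = base_ns + ((cycles * mult_new) >> shift);

	printf("reading #1: %llu ns\n", (unsigned long long)t1);
	printf("reading #2: %llu ns (behind #1 by %llu ns)\n",
	       (unsigned long long)t2, (unsigned long long)(t1 - t2));
	return 0;
}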
You can reproduce that by pinning VCPU0 to physical core 0 and VCPU1
to physical core 1. Now remove all load from physical core 1 except
VCPU1, put massive load on physical core 0, and make sure that the
NTP adjustment lowers the mult factor.
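
For completeness, a rough user-space detector (my sketch, not taken
from the thread) that one could run inside the guest while applying
that setup; it assumes clock_gettime(CLOCK_MONOTONIC) goes through the
vDSO and simply reports any reading that is earlier than the previous
one:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

int main(void)
{
	uint64_t prev = now_ns();

	for (;;) {
		uint64_t cur = now_ns();

		if (cur < prev)
			printf("time went backwards by %llu ns\n",
			       (unsigned long long)(prev - cur));
		prev = cur;
	}
	return 0;
}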
Fun, isn't it ?
tglx