Message-ID: <4DAF37B4.3040408@kasperkp.dk>
Date: Wed, 20 Apr 2011 21:44:52 +0200
From: Kasper Pedersen <kernel@...perkp.dk>
To: john stultz <johnstul@...ibm.com>
CC: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Suresh Siddha <suresh.b.siddha@...el.com>
Subject: Re: x86: tsc: make TSC calibration more immune to interrupts
On 04/20/2011 09:15 PM, john stultz wrote:
> On Wed, 2011-04-20 at 20:52 +0200, Kasper Pedersen wrote:
>> When a SMI or plain interrupt occurs during the delayed part
>> of TSC calibration, and the SMI/irq handler is good and fast
>> so that it does not exceed SMI_TRESHOLD, tsc_khz can be a bit
>> off (10-30ppm).
> I guess I'm curious how useful this is with the refined TSC calibration
> that was added not too long ago:
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=08ec0c58fb8a05d3191d5cb6f5d6f81adb419798
>
> Are you saying that you see the same 10-30ppm variance in the dmesg
> line: "Refined TSC clocksource calibration: XXXX.XXX MHz" ?
>
Yes, I do.
With the delayed workqueue patch, I see a wonderful 0.4ppm
offset _almost_ all of the time.
Very rarely, though (about 1 in 3000), the value output by
"Refined TSC .." jumps. The jump can be no larger than
50000/F_TSC/delayed_time, so on a modern machine it is
no more than about 20ppm.
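
To put a number on that bound (using an illustrative 3 GHz TSC and
roughly 1 s between the initial and the refined reading; the exact
figures of course depend on the machine):

    max error ~ SMI_TRESHOLD / F_TSC / delayed_time
              = 50000 / 3.0e9 / 1 s
              ~ 17 ppm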
This can happen when a short irq occurs between
*p = hpet_readl(HPET_COUNTER) & 0xFFFFFFFF;
and
t2 = get_cycles();
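
For context, those lines are in tsc_read_refs() in
arch/x86/kernel/tsc.c; the sketch below is from memory and annotated
to show the window, so details may differ slightly from the tree:

    static u64 tsc_read_refs(u64 *p, int hpet)
    {
    	u64 t1, t2;
    	int i;

    	for (i = 0; i < MAX_RETRIES; i++) {
    		t1 = get_cycles();
    		if (hpet)
    			*p = hpet_readl(HPET_COUNTER) & 0xFFFFFFFF;
    		else
    			*p = acpi_pm_read_early();
    		/* An irq/SMI landing here still delays t2 ...        */
    		t2 = get_cycles();
    		/* ... but if it is shorter than SMI_TRESHOLD cycles,
    		 * this check does not reject the sample.             */
    		if ((t2 - t1) < SMI_TRESHOLD)
    			return t2;
    	}
    	return ULLONG_MAX;
    }

A short irq in that window inflates t2 relative to the HPET reading
that was just taken, and that skew goes straight into the computed
tsc_khz.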
Without the delayed workqueue patch, 30ppm is insignificant.
/Kasper Pedersen