Message-ID: <1303331280.2796.154.camel@work-vm>
Date: Wed, 20 Apr 2011 13:28:00 -0700
From: john stultz <johnstul@...ibm.com>
To: Kasper Pedersen <kernel@...perkp.dk>
Cc: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Suresh Siddha <suresh.b.siddha@...el.com>
Subject: Re: x86: tsc: make TSC calibration more immune to interrupts
On Wed, 2011-04-20 at 21:44 +0200, Kasper Pedersen wrote:
> With the delayed workqueue patch, I see a wonderful 0.4ppm
> offset _almost_ all of the time.
>
> Very rarely though (about 1 in 3000) the value output by
> "Refined TSC .." jumps. It can jump no further than
> 50000/F_TSC/delayed_time, so on a modern machine it jumps
> no further than 20ppm.
>
> This can happen when a short irq occurs between
> *p = hpet_readl(HPET_COUNTER) & 0xFFFFFFFF;
> and
> t2 = get_cycles();
>
>
> Without the delayed workqueue patch, 30ppm is insignificant.
Thanks for the additional details!
In that case, the patch seems reasonable to me.
If the additional ~200us (I suspect 100us is a little low, since the
ACPI PM timer can take ~3.5us per access * 5 * 8 = 140us) is an issue,
you could probably add a "good vs best" flag, which would choose between
returning the first value under the SMI_THRESHOLD and your method of
returning the value with the lowest uncertainty. That would let us pay
the extra cost only in the refined calibration, where the noise
reduction makes a difference.
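To make the "good vs best" distinction concrete, here is a userspace
sketch of the two selection policies (the struct, function names, and
sample layout are inventions for illustration; only the 50000-cycle
SMI_THRESHOLD value comes from the discussion above):

```c
#include <stddef.h>

/* Illustrative sketch, not kernel API: each calibration sample records
 * the captured counter value and how many TSC cycles the read took;
 * a read stretched by an SMI or irq shows up as a long duration. */
#define SMI_THRESHOLD 50000ULL

struct calib_sample {
	unsigned long long value;     /* captured counter value */
	unsigned long long duration;  /* TSC cycles the read took */
};

/* "good": stop at the first sample under the SMI threshold (cheap). */
static const struct calib_sample *
pick_good(const struct calib_sample *s, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (s[i].duration < SMI_THRESHOLD)
			return &s[i];
	return NULL;
}

/* "best": scan all samples, keep the one with the lowest duration
 * (lowest uncertainty) -- costlier, but less noisy. */
static const struct calib_sample *
pick_best(const struct calib_sample *s, size_t n)
{
	const struct calib_sample *best = NULL;
	for (size_t i = 0; i < n; i++)
		if (!best || s[i].duration < best->duration)
			best = &s[i];
	return best;
}
```

The flag would simply select which of the two routines runs: "good" for
the fast boot-time calibration, "best" for the refined path where the
extra ~200us is easily affordable.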
Ingo, Thomas, any other complaints?
thanks
-john