Message-ID: <alpine.LFD.2.00.0903170831020.3082@localhost.localdomain>
Date: Tue, 17 Mar 2009 08:48:19 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Ingo Molnar <mingo@...e.hu>
cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Jesper Krogh <jesper@...gh.cc>,
john stultz <johnstul@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Len Brown <len.brown@...el.com>
Subject: Re: Linux 2.6.29-rc6

On Tue, 17 Mar 2009, Ingo Molnar wrote:
>
> Cool. Will you apply it yourself (in the merge window) or should
> we pick it up?

I'll commit it. I already split it into two commits - one for the trivial
startup problem that John had, one for the "estimate error and exit when
smaller than 500ppm" part.

> Incidentally, yesterday i wrote a PIT auto-calibration routine
> (see WIP patch below).
>
> The core idea is to use _all_ thousands of measurement points
> (not just two) to calculate the frequency ratio, with a built-in
> noise detector which drops out of the loop if the observed noise
> goes below ~10 ppm.

I suspect that reaching 10 ppm is going to take too long in general.
Given that I found a machine where reaching 500 ppm took 16 ms, getting
to 10 ppm would take almost a second. That's a long time at bootup,
considering that people want the whole boot to take about that time ;)
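
To put rough numbers on that (back-of-the-envelope, taking the error
bound to scale inversely with the length of the measured interval):

    500 ppm needed ~16 ms on that machine
     10 ppm is 50x tighter  ->  ~50 * 16 ms = ~800 ms

which is where the "almost a second" comes from.
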
I also do think it's a bit unnecessarily complicated. We really only care
about the end points - obviously we can end up being unlucky and get a
very noisy end-point due to something like SMI or virtualization, but if
that happens, we're really just better off failing quickly instead, and
we'll go on to the slower calibration routines.

On real hardware without SMI or virtualization overhead, the delays
_should_ be very stable. On my main machine, for example, the PIT read
really seems very stable at about 2.5us (which closely matches the
expectation that one 'inb' should take roughly one microsecond). So that
should be the default case, and the case that the fast calibration is
designed for.

For the other cases, we really can just exit and do something else.
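
Roughly, the shape of it is something like this (a simplified sketch, not
the actual patch - read_pit_tsc() and wait_pit_ms() are made-up stand-ins
for the real latched PIT reads, and the constants are just illustrative):

  /* Stand-in helpers, not real kernel interfaces: */
  unsigned long long read_pit_tsc(unsigned long long *latency);
  void wait_pit_ms(void);

  #define MAX_QUICK_PIT_MS 25   /* bound on how long we keep trying */

  /*
   * Fix the start point once, keep extending the end point, and accept
   * the result as soon as the two endpoint read latencies (d1, d2) are
   * a small enough fraction of the elapsed TSC interval.  If that does
   * not happen within the bound, give up and let the slower calibration
   * code take over.
   */
  static unsigned long quick_calibrate_khz(void)
  {
      unsigned long long tsc1, tsc2, d1, d2, delta;
      unsigned long ms;

      tsc1 = read_pit_tsc(&d1);            /* start point, d1 = read latency */
      for (ms = 1; ms <= MAX_QUICK_PIT_MS; ms++) {
          wait_pit_ms();                   /* let one PIT millisecond pass */
          tsc2 = read_pit_tsc(&d2);        /* new end point, d2 = read latency */

          delta = tsc2 - tsc1;
          /* estimated error: endpoint read latency over the whole interval */
          if ((d1 + d2) * 2000 < delta)    /* i.e. better than 500 ppm */
              return delta / ms;           /* TSC cycles per PIT ms == kHz */
      }
      return 0;                            /* too noisy (SMI, virt): fall back */
  }
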
> It's WIP because it's not working yet (or at all?): i couldn't
> get the statistical model right - it's too noisy at 1000-2000
> ppm and the frequency result is off by 5000 ppm.

I suspect your measurement overhead is getting noticeable. You do all
those divides, but even more so, you do all those traces. Also, it looks
like you do purely local pairwise analysis at subsequent PIT modelling
points, which can't work - you need to average over a long time to
stabilize it.

So you _can_ do something like what you do, but you'd need to find a
low-noise start and end point, and do analysis over that longer range
instead of trying to do it over individual cases.
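
Back-of-the-envelope, using the ~2.5us read jitter mentioned above, and
assuming the modelling points are on the order of a millisecond apart:

    two points ~1 ms apart:    2.5 us / 1 ms   ->  ~2500 ppm per local sample
    endpoints ~16 ms apart:    2.5 us / 16 ms  ->   ~150 ppm

so any per-sample noise test is fighting a couple of thousand ppm of
jitter, while the estimate over the whole range is already an order of
magnitude better than that.
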
> I also like yours more because it's simpler.

In fact, it's much simpler than what we used to do. No real assumptions
about how quickly we can read the PIT, no need for magic values ("we can
distinguish a slow virtual environment from real hardware by the fact that
we can do at least 50 PIT reads in one cycle"), no nothing. Just a simple
"is it below 500ppm yet?".
(Well, technically, it compares to 1 in 2048 rather than 500 in a million,
since that is much cheaper, so it's really looking for "better than
488ppm")
Linus