Message-ID: <20110809150624.GG28228@elte.hu>
Date: Tue, 9 Aug 2011 17:06:24 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Matthew Garrett <mjg@...hat.com>
Cc: Jack Steiner <steiner@....com>, tglx@...utronix.de,
davej@...hat.com, yinghan@...gle.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] x86: Reduce clock calibration time during slave cpu
startup
* Matthew Garrett <mjg@...hat.com> wrote:
> On Fri, Aug 05, 2011 at 11:38:36PM +0200, Ingo Molnar wrote:
>
> > Well, it still uses heuristics: it assumes frequency is the same
> > when the cpuid data tells us that two CPUs are on the same
> > socket, right?
>
> If we only assume that when we have a constant TSC then it's a
> pretty safe assumption - the delay loop will be calibrated against
> the TSC, and the TSC will be constant across the package regardless
> of what frequency the cores are actually running at.
The delay loop might be calibrated against the TSC, but the amount of
real delay we get when we loop 100,000 times will be frequency
dependent.
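Roughly speaking - this is only an illustrative sketch, not the actual
arch/x86/lib/delay.c code, and the names and the HZ value below are
made up - a loop based udelay() spins for a fixed number of iterations
derived from lpj, so the wall-clock time it burns depends on how fast
the core happens to execute the loop:

  #define USEC_PER_SEC	1000000UL

  static unsigned long loops_per_jiffy;	/* set by calibration */
  static unsigned long hz = 1000;		/* illustrative HZ value */

  static void sketch_delay_loop(unsigned long loops)
  {
  	while (loops--)
  		asm volatile("" ::: "memory");	/* pure CPU work, no clock */
  }

  static void sketch_udelay(unsigned long usecs)
  {
  	/* usecs -> loops via the calibrated lpj; overflow ignored here */
  	sketch_delay_loop(usecs * loops_per_jiffy * hz / USEC_PER_SEC);
  }

The loop count is fixed once lpj is known - nothing in the loop itself
looks at a clock.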
What we probably want is the most conservative udelay calibration:
have an lpj value measured at the highest possible frequency - this
way hardware components can never be overclocked by a driver.
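As a rough worked example (illustrative numbers: lpj calibrated at
3.0 GHz, one loop iteration per cycle, HZ=1000):

  calibrated at 3.0 GHz:   lpj = 3,000,000
  udelay(10) turns into:   10 * 3,000,000 * 1000 / 1,000,000 = 30,000 loops

  core running at 3.0 GHz: 30,000 loops ~ 10 usecs  (as requested)
  core running at 1.5 GHz: 30,000 loops ~ 20 usecs  (longer, but safe)

Had lpj been calibrated at 1.5 GHz instead, a core later running at
3.0 GHz would burn through those loops in ~5 usecs, and a driver
relying on the full 10 usecs would effectively overclock its hardware.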
Or does udelay() scale with the current frequency of the CPU?
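For comparison, a delay that waits on the TSC instead of counting
iterations (sketch only, not the actual delay_tsc() implementation,
and the tsc_khz value is made up) would be independent of the current
core frequency on constant-TSC parts, because it waits for elapsed
TSC cycles rather than a fixed loop count:

  #include <stdint.h>
  #include <x86intrin.h>

  static uint64_t tsc_khz = 2400000;	/* illustrative TSC rate, in kHz */

  static void sketch_tsc_udelay(unsigned long usecs)
  {
  	uint64_t end = __rdtsc() + usecs * tsc_khz / 1000;

  	while (__rdtsc() < end)
  		_mm_pause();		/* be nice to the SMT sibling */
  }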
Thanks,
Ingo