Date:	Fri, 5 Sep 2008 15:34:33 -0700 (PDT)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Alok Kataria <akataria@...are.com>
cc:	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>,
	Arjan van de Veen <arjan@...radead.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Dan Hecht <dhecht@...are.com>,
	Garrett Smith <garrett@...are.com>
Subject: Re: [RFC patch 0/4] TSC calibration improvements



On Fri, 5 Sep 2008, Alok Kataria wrote:
> 
> This can happen if, in the pit_expect_msb (the one just before the 
> second read_tsc), we hit an SMI/virtualization event *after* doing the 
> 50 iterations of PIT read loop, this allows the pit_expect_msb to 
> succeed when the SMI returns.

So theoretically, on real hardware, the minimum of 50 reads will take 
100us. The 256 PIT cycles will take 214us, so in the absolute worst case, 
you can have two consecutive successful cycles despite having a 228us SMI 
(or other event) if it happens just in the middle.

Of course, then the actual _error_ on the TSC read will be just half that, 
but since there are two TSC reads - one at the beginning and one at the 
end - and if the errors of the two reads go in opposite directions, they 
can add up to 228us.

So I agree - in theory you can have a fairly big error if you hit 
everything just right. In practice, of course, even that *maximal* error 
is actually perfectly fine for TSC calibration.

So I just don't care one whit.  The fact is, fast bootup is more important 
than some crazy and totally unrealistic VM situation. The 50ms thing was 
already too long, the 250ms one is unbearable.

The thing is, you _can_ calibrate the thing more carefully _later_. Use a 
timer to do two events one second apart (without slowing down the boot) if 
you want to get a really good value, along with the HPET/PMTIMER 
fine-tuning. That way you should actually be able to get a _really_ 
precise thing, because you do need a long time to get precision. But that 
long time should not be in a critical path on the bootup.

			Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
