Date:	Wed, 01 Oct 2008 12:56:34 -0700
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	"H. Peter Anvin" <hpa@...or.com>
CC:	akataria@...are.com, "avi@...hat.com" <avi@...hat.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Gerd Hoffmann <kraxel@...hat.com>, Ingo Molnar <mingo@...e.hu>,
	the arch/x86 maintainers <x86@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	"Nakajima, Jun" <jun.nakajima@...el.com>,
	Dan Hecht <dhecht@...are.com>,
	Zachary Amsden <zach@...are.com>,
	virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org
Subject: Re: [RFC] CPUID usage for interaction between Hypervisors and Linux.

H. Peter Anvin wrote:
> What you'd want, at least, is a standard CPUID identification and 
> range leaf at the top.  256 leaves is a *lot*, though; I'm not saying 
> one couldn't run out, but it'd be hard.  Keep in mind that for large 
> objects there are "counting" CPUID levels, as much as I personally 
> dislike them, and one could easily argue that if you're doing 
> something that would require anywhere near 256 leaves you probably are 
> storing bulk data that belongs elsewhere.

I agree, but it just makes the proposal a bit more brittle.
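
For concreteness, a minimal userspace sketch of reading such an
identification-and-range leaf, assuming the de-facto convention that
leaf 0x40000000 returns the maximum hypervisor leaf in EAX and a
12-byte vendor signature in EBX:ECX:EDX (the exact leaf number and
layout are precisely what this thread is debating; on bare hardware
one would first check the hypervisor-present bit, CPUID leaf 1 ECX
bit 31):

    /* Sketch only: assumes leaf 0x40000000 holds max-leaf + signature. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                      uint32_t *c, uint32_t *d)
    {
            asm volatile("cpuid"
                         : "=a" (*a), "=b" (*b), "=c" (*c), "=d" (*d)
                         : "a" (leaf));
    }

    int main(void)
    {
            uint32_t eax, ebx, ecx, edx;
            char sig[13];

            cpuid(0x40000000, &eax, &ebx, &ecx, &edx);
            memcpy(sig + 0, &ebx, 4);
            memcpy(sig + 4, &ecx, 4);
            memcpy(sig + 8, &edx, 4);
            sig[12] = '\0';

            printf("signature: %s, max leaf: 0x%08x\n", sig, eax);
            return 0;
    }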

> Of course, if we had some kind of central authority assigning 8-bit 
> IDs that would be even better, especially since there are tools in the 
> field which already scan on 64K boundaries.  I don't know, though, how 
> likely it is that we'll have to deal with 256 hypervisors.

I'm assuming that the likelihood of getting all possible vendors - 
current and future - to agree to a scheme like this is pretty small.  We 
need to come up with something that will work well when there are 
non-cooperative parties to deal with.
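
The 64K-boundary scan hpa mentions would look roughly like the sketch
below, reusing the cpuid() helper above.  The step and upper bound are
assumptions for illustration, not anything standardized:

    /* Sketch: probe 64K-spaced leaf bases in the 0x4000_0000 region
     * for a known signature (upper bound chosen arbitrarily here). */
    static uint32_t find_hv_base(const char *want)
    {
            uint32_t base, eax, ebx, ecx, edx;
            char sig[13];

            for (base = 0x40000000; base < 0x40100000; base += 0x10000) {
                    cpuid(base, &eax, &ebx, &ecx, &edx);
                    memcpy(sig + 0, &ebx, 4);
                    memcpy(sig + 4, &ecx, 4);
                    memcpy(sig + 8, &edx, 4);
                    sig[12] = '\0';
                    if (!strcmp(sig, want))
                            return base;    /* this vendor's block */
            }
            return 0;                       /* interface not exposed */
    }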

> I agree completely, of course (except that "what hypervisor is this" 
> still has limited usage, especially when it comes to dealing with bug 
> workarounds.  Similar to the way we use CPU vendor IDs and stepping 
> numbers for physical CPUs.)

I guess.  It's certainly useful to be able to identify the hypervisor for 
bug reporting and just general status information.  But making 
functional changes on that basis should be a last resort.
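
Something along the lines of the sketch below, which only maps the
signature read from the identification leaf to a name for reporting;
the signature strings are listed purely as an illustration of what
vendors advertise, and keying behaviour off this table should stay a
last resort:

    /* Sketch: identify the hypervisor by signature for reporting only. */
    static const struct {
            const char *sig;
            const char *name;
    } hv_table[] = {
            { "KVMKVMKVM",    "KVM"     },
            { "XenVMMXenVMM", "Xen"     },
            { "VMwareVMware", "VMware"  },
            { "Microsoft Hv", "Hyper-V" },
    };

    static const char *hv_name(const char *sig)
    {
            unsigned int i;

            for (i = 0; i < sizeof(hv_table) / sizeof(hv_table[0]); i++)
                    if (!strcmp(sig, hv_table[i].sig))
                            return hv_table[i].name;
            return "unknown";
    }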

    J
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
