Date:	Sat, 10 Mar 2007 00:02:50 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Chris Wright <chrisw@...s-sol.org>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Jeremy Fitzhardinge <jeremy@...p.org>,
	Zachary Amsden <zach@...are.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	john stultz <johnstul@...ibm.com>, akpm@...ux-foundation.org,
	LKML <linux-kernel@...r.kernel.org>,
	Rusty Russell <rusty@...tcorp.com.au>, Andi Kleen <ak@...e.de>,
	Alan Cox <alan@...rguk.ukuu.org.uk>
Subject: Re: ABI coupling to hypervisors via CONFIG_PARAVIRT


* Chris Wright <chrisw@...s-sol.org> wrote:

> > ok, sure, how about the one i mentioned: long-term i'd like to have 
> > a paravirt model where the guest does not store /any/ page tables - 
> > all paging is managed by the hypervisor. The guest has a vma tree, 
> > but otherwise it does not process pagefaults, has no concept of a 
> > pte (if in paravirt mode), has no concept of kernel page tables 
> > either: there are hypercalls to allocate/free guest-kernel memory, 
> > etc. This needs some (serious) MM surgery but it's doable and it's 
> > interesting as well. How would you map this to the VMI backend?
> 
> Sounds a lot like a userspace process.  My immediate thought is, why 
> not use containers, which are a more natural fit.  [...]

easy: in my model the hypervisor is isolated from the guest kernel. In 
the container model it is not. [ This is a basic quality requirement for 
virtualization: a guest kernel does not get to read any hypervisor 
crypto keys to HD-DVD smut! ;-) ]

> [...] But if you have _any_ hope of booting this kernel on native 
> hardware when it's not running under a hypervisor then I'd expect the 
> same pv_ops interfaces that allow it to run on native would allow VMI 
> to build and handle the shadow (since you'd have taken it out of the 
> kernel).  Heh, so in order to run this on native we had to add 
> fork/mmap pv ops?  I agree it might be interesting, but it's still not 
> clear that it's useful w/out some code to back it up so we can see 
> the value.

progress ;-) But yes, some /really/ high-level pv_ops would be needed.

[ in the end we might be able to simplify it down to a single hook! That 
  would be: run_native_image / run_guest_image ;-) ]
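
[ to make that a bit more concrete, here is a very rough sketch of what 
  such high-level mm hooks could look like - purely illustrative, every 
  name below is hypothetical and nothing like this exists in the tree:

	/*
	 * Hypothetical sketch only: high-level mm paravirt hooks for a
	 * model where the hypervisor owns all page tables and the guest
	 * only keeps its vma tree. None of these names are real.
	 */
	struct mm_struct;	/* guest-side address-space descriptor */

	struct pv_mm_ops {
		/* guest-kernel memory comes via hypercalls, not via
		 * guest-managed kernel page tables: */
		void *(*alloc_kernel_mem)(unsigned long size);
		void (*free_kernel_mem)(void *addr, unsigned long size);

		/* high-level address-space operations - the hypervisor
		 * handles all pagefaults and pte details behind these: */
		int (*map_range)(struct mm_struct *mm, unsigned long start,
				 unsigned long len, unsigned long prot);
		int (*unmap_range)(struct mm_struct *mm, unsigned long start,
				   unsigned long len);
		int (*dup_mm)(struct mm_struct *oldmm,
			      struct mm_struct *newmm);
	};

  a native backend would implement these via the normal pagetable code, 
  a guest backend via hypercalls - which is where the fork/mmap pv_ops 
  that Chris mentions would come in. ]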

seriously, most of the body of x86 kernel code is in filesystems, VFS, 
networking, scheduler and the core kernel - much of which can be shared 
between native and guest. The MM is a significant and very central 
chunk, but it is less than 3% of the total codesize.

	Ingo
