Message-ID: <20070309224640.GW10574@sequoia.sous-sol.org>
Date: Fri, 9 Mar 2007 14:46:40 -0800
From: Chris Wright <chrisw@...s-sol.org>
To: Ingo Molnar <mingo@...e.hu>
Cc: Chris Wright <chrisw@...s-sol.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Zachary Amsden <zach@...are.com>,
Thomas Gleixner <tglx@...utronix.de>,
john stultz <johnstul@...ibm.com>, akpm@...ux-foundation.org,
LKML <linux-kernel@...r.kernel.org>,
Rusty Russell <rusty@...tcorp.com.au>, Andi Kleen <ak@...e.de>,
Alan Cox <alan@...rguk.ukuu.org.uk>
Subject: Re: ABI coupling to hypervisors via CONFIG_PARAVIRT
* Ingo Molnar (mingo@...e.hu) wrote:
> * Chris Wright <chrisw@...s-sol.org> wrote:
> > i'm not really one to argue on behalf of VMI, but i don't think it's
> > as dire as you make it out. [...]
>
> hey, that's what i thought when i helped do the vDSO, until i got
> slapped with the cold reality called "CONFIG_COMPAT_VDSO". I'm a bit more
> careful about ABIs since then B-)
heh, once bitten, twice shy, or however that goes ;-)
> > [...] the VMI is client code of pv_ops, and as the kernel changes that
> > client code will simply have to adapt. of course there are
> > theoretical limitations, but let's keep it grounded in practical
> > reality. the whole premise is evolution. so throw out specific
> > issues, and let's adapt rather than fall deep into theoretical
> > rhetoric.
>
> ok, sure, how about the one i mentioned: long-term i'd like to have a
> paravirt model where the guest does not store /any/ page tables - all
> paging is managed by the hypervisor. The guest has a vma tree, but
> otherwise it does not process pagefaults, has no concept of a pte (if in
> paravirt mode), has no concept of kernel page tables either: there are
> hypercalls to allocate/free guest-kernel memory, etc. This needs some
> (serious) MM surgery but it's doable and it's interesting as well. How
> would you map this to the VMI backend?
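(For concreteness, here is a rough sketch of what an interface for that
"no page tables in the guest" model might look like. Every hypercall name
and signature below is invented purely for illustration; nothing like this
exists as an ABI.)

/*
 * Illustrative only: a guest that keeps no page tables of its own and
 * asks the hypervisor to manage all translation.  These hypercalls are
 * invented for the sketch; no such ABI exists.
 */

/* Hypothetical hypercalls, provided by an imaginary hypervisor ABI. */
extern int hv_populate_range(unsigned long va, unsigned long len,
			     unsigned long prot);
extern int hv_depopulate_range(unsigned long va, unsigned long len);

/*
 * Guest-kernel memory is allocated by asking the hypervisor to back and
 * map a virtual range; the guest never touches a pte and never takes a
 * page fault for it.
 */
static int guest_alloc_kernel_range(unsigned long va, unsigned long len)
{
	return hv_populate_range(va, len, 0 /* hypothetical prot flags */);
}

static int guest_free_kernel_range(unsigned long va, unsigned long len)
{
	return hv_depopulate_range(va, len);
}

/*
 * User faults would similarly be resolved by the hypervisor against the
 * guest's vma tree (exported to it, or queried through another
 * hypercall), which is what makes "no ptes in the guest" possible.
 */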
Sounds a lot like a userspace process. My immediate thought is: why
not use containers, which are a more natural fit? But if you have _any_
hope of booting this kernel on native hardware when it's not running under
a hypervisor, then I'd expect the same pv_ops interfaces that allow it
to run on native would also allow VMI to build and maintain the shadow
(since you'd have taken it out of the kernel). Heh, so in order to run
this on native we'd have to add fork/mmap pv_ops? I agree it might be
interesting, but it's still not clear that it's useful without some code
to back it up so we can see the value.
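To make the "same interfaces" point concrete, here is a cut-down sketch of
the pv_ops function-pointer pattern; the struct and both backends below are
simplified illustrations rather than the real i386 struct paravirt_ops, and
the hv_*() calls are made up:

/*
 * A stripped-down illustration of the pv_ops pattern: one table of MM
 * hooks, filled in differently for native hardware and for a backend
 * that keeps the shadow outside the kernel.  This is not the actual
 * struct paravirt_ops layout, and hv_notify_pte_update() /
 * hv_flush_tlb_single() are invented for the sketch.
 */
struct pv_mmu_hooks {
	void (*set_pte)(unsigned long *ptep, unsigned long pteval);
	void (*flush_tlb_single)(unsigned long addr);
};

extern void hv_notify_pte_update(unsigned long *ptep, unsigned long pteval);
extern void hv_flush_tlb_single(unsigned long addr);

/* Native: poke the hardware page tables and TLB directly. */
static void native_set_pte(unsigned long *ptep, unsigned long pteval)
{
	*ptep = pteval;
}

static void native_flush_tlb_single(unsigned long addr)
{
	asm volatile("invlpg (%0)" : : "r" (addr) : "memory");
}

/*
 * Paravirt backend: update the guest's view and notify the hypervisor
 * so it can keep its own (shadow) structures in sync.
 */
static void backend_set_pte(unsigned long *ptep, unsigned long pteval)
{
	*ptep = pteval;
	hv_notify_pte_update(ptep, pteval);
}

static struct pv_mmu_hooks pv_mmu_native = {
	.set_pte          = native_set_pte,
	.flush_tlb_single = native_flush_tlb_single,
};

static struct pv_mmu_hooks pv_mmu_backend = {
	.set_pte          = backend_set_pte,
	.flush_tlb_single = hv_flush_tlb_single,
};

At boot the kernel would pick one of these tables; that is the sense in
which the interfaces that let it run native also let a VMI-style backend
build and maintain the shadow.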
thanks,
-chris