Date:	Mon, 02 Mar 2009 01:05:21 -0800
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Nick Piggin <nickpiggin@...oo.com.au>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	the arch/x86 maintainers <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Xen-devel <xen-devel@...ts.xensource.com>
Subject: Re: [PATCH] xen: core dom0 support

Nick Piggin wrote:
>> I wouldn't say that KVM is necessarily disadvantaged by its design; it's
>> just a particular set of tradeoffs made up-front.  It loses Xen's
>> flexibility, but the result is very familiar to Linux people.  A guest
>> domain just looks like a qemu process that happens to run in a strange
>> processor mode a lot of the time.  The qemu process provides virtual
>> device access to its domain, and accesses the normal device drivers like
>> any other usermode process would.  The domains are as isolated from each
>> other as processes normally are, but they're all floating around
>> in the same kernel; whether that provides enough isolation for whatever
>> technical, billing, security, compliance/regulatory or other
>> requirements you have is up to the user to judge.
>>     
>
> Well what is the advantage of KVM? Just that it is integrated into
> the kernel? Can we look at the argument the other way around and
> ask why Xen can't replace KVM?

Xen was around before KVM was even a twinkle, so KVM is redundant from 
that perspective; they're certainly broadly equivalent in 
functionality.  But Xen has had a fairly fraught history with respect to 
being merged into the kernel, and being merged gets your feet into a lot 
of doors.  The upshot is that using Xen has generally required some 
preparation - like installing special kernels - before you can use it, 
and so tends to get used for servers which are specifically intended to 
be virtualized.  KVM runs like an accelerated qemu, so it's easy to just 
fire up an instance of Windows in the middle of a normal Linux desktop 
session, with no special preparation.

But Xen is getting better at being on laptops and desktops, and doing 
all the things people expect there (power management, suspend/resume, 
etc).  And people are definitely interested in using KVM in server 
environments, so the lines are not very clear any more.

(Of course, we're completely forgetting VMI in all this, but VMware seem 
to have as well.  And we're all waiting for Rusty to make his World 
Domination move.)

>  (is it possible to make use of HW
> memory virtualization in Xen?)

Yes, Xen will use all available hardware features when running hvm 
domains (== fully virtualized == Windows).

>  The hypervisor is GPL, right?
>   

Yep.

>>>  Would it be possible I wonder to make
>>> a MMU virtualization layer for CPUs without support, using Xen's page
>>> table protection methods, and have KVM use that? Or does that amount
>>> to putting a significant amount of Xen hypervisor into the kernel..?
>>>       
>> At one point Avi was considering doing it, but I don't think he ever
>> made any real effort in that direction.  KVM is pretty wedded to having
>> hardware support anyway, so there's not much point in removing it in
>> this one area.
>>     
>
> Not removing it, but making it available as an alternative form of
> "hardware supported" MMU virtualization. As you say if direct protected
> page tables often are faster than existing HW solutions anyway, then it
> could be a win for KVM even on newer CPUs.
>   

Well, yes.  I'm sure it will make someone a nice little project.  It 
should be fairly easy to try out - all the hooks are in place, so it's 
just a matter of implementing the kvm bits.  But it probably wouldn't be 
a comfortable fit with the rest of Linux; all the memory mapped via 
direct pagetables would be solidly pinned down, completely unswappable, 
giving the VM subsystem much less flexibility about allocating 
resources.  I guess it would be no worse than a multi-hundred 
megabyte/gigabyte process mlocking itself down, but I don't know if 
anyone actually does that.

    J
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
