lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <200903021919.30068.nickpiggin@yahoo.com.au>
Date:	Mon, 2 Mar 2009 19:19:29 +1100
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	"the arch/x86 maintainers" <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	"Xen-devel" <xen-devel@...ts.xensource.com>
Subject: Re: [PATCH] xen: core dom0 support

On Monday 02 March 2009 19:05:10 Jeremy Fitzhardinge wrote:
> Nick Piggin wrote:
> > That would kind of seem like Xen has a better design to me, OTOH if it
> > needs this dom0 for most device drivers and things, then how much
> > difference is it really? Is KVM really disadvantaged by being a part of
> > the kernel?
>
> Well, you can lump everything together in dom0 if you want, and that is
> a common way to run a Xen system.  But there's no reason you can't
> disaggregate drivers into their own domains, each with the
> responsibility for a particular device or set of devices (or indeed, any
> other service you want provided).  Xen can use hardware features like
> VT-d to really enforce the partitioning so that the domains can't
> program their hardware to touch anything except what they're allowed to
> touch, so nothing is trusted beyond its actual area of responsibility.
> It also means that killing off and restarting a driver domain is a
> fairly lightweight and straightforward operation because the state is
> isolated and self-contained; guests using a device have to be able to
> deal with a disconnect/reconnect anyway (for migration), so it doesn't
> affect them much.  Part of the reason there's a lot of academic interest
> in Xen is because it has the architectural flexibility to try out lots
> of different configurations.
>
> I wouldn't say that KVM is necessarily disadvantaged by its design; its
> just a particular set of tradeoffs made up-front.  It loses Xen's
> flexibility, but the result is very familiar to Linux people.  A guest
> domain just looks like a qemu process that happens to run in a strange
> processor mode a lot of the time.  The qemu process provides virtual
> device access to its domain, and accesses the normal device drivers like
> any other usermode process would.  The domains are as isolated from each
> other as much as processes normally are, but they're all floating around
> in the same kernel; whether that provides enough isolation for whatever
> technical, billing, security, compliance/regulatory or other
> requirements you have is up to the user to judge.

Well what is the advantage of KVM? Just that it is integrated into
the kernel? Can we look at the argument the other way around and
ask why Xen can't replace KVM? (is it possible to make use of HW
memory virtualization in Xen?) The hypervisor is GPL, right?


> >  Would it be possible I wonder to make
> > a MMU virtualization layer for CPUs without support, using Xen's page
> > table protection methods, and have KVM use that? Or does that amount
> > to putting a significant amount of Xen hypervisor into the kernel..?
>
> At one point Avi was considering doing it, but I don't think he ever
> made any real effort in that direction.  KVM is pretty wedded to having
> hardware support anyway, so there's not much point in removing it in
> this one area.

Not removing it, but making it available as an alternative form of
"hardware supported" MMU virtualization. As you say if direct protected
page tables often are faster than existing HW solutoins anyway, then it
could be a win for KVM even on newer CPUs.


> The Xen technique gets its performance from collapsing a level of
> indirection, but that has a cost in terms of flexibility; the hypervisor
> can't do as much mucking around behind the guest's back (for example,
> the guest sees real hardware memory addresses in the form of mfns, so
> Xen can't move pages around, at least not without some form of explicit
> synchronisation).

Any problem can be solved by adding another level of indirection... :)

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ