Date:	Wed, 04 Mar 2009 11:31:49 -0600
From:	Anthony Liguori <anthony@...emonkey.ws>
To:	Nick Piggin <nickpiggin@...oo.com.au>
CC:	Jeremy Fitzhardinge <jeremy@...p.org>,
	Xen-devel <xen-devel@...ts.xensource.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	the arch/x86 maintainers <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH] xen: core dom0 support

Nick Piggin wrote:
> On Monday 02 March 2009 10:27:29 Jeremy Fitzhardinge wrote:

>> One important area of paravirtualization is that Xen guests directly
>> use the processor's pagetables; there is no shadow pagetable or use of
>> hardware pagetable nesting.  This means that a tlb miss is just a tlb
>> miss, and happens at full processor performance.  This is possible
>> because 1) pagetables are always read-only to the guest, and 2) the
>> guest is responsible for looking up in a table to map guest-local pfns
>> into machine-wide mfns before installing them in a pte.  Xen will check
>> that any new mapping or pagetable satisfies all the rules, by checking
>> that the writable reference count is 0, and that the domain owns (or has
>> been allowed access to) any mfn it tries to install in a pagetable.
> 
> Xen's memory virtualization is pretty neat, I'll give it that. Is it
> faster than KVM on a modern CPU?

There is nothing architecturally that prevents KVM from making use of 
direct paging.  KVM doesn't use direct paging because we don't expect it 
to be worth it.  Modern CPUs (Barcelona and Nehalem class) include
hardware support for MMU virtualization (via NPT and EPT respectively).

I think that for the most part (especially with large-page-backed 
guests), there's wide agreement that even within the context of Xen, 
NPT/EPT often beats PV performance.  TLB miss overhead increases due to 
the additional memory accesses, but this is largely mitigated by large 
pages (see Ben Serebrin's SOSP paper from a couple of years ago).
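
As a rough back-of-the-envelope for why the TLB miss gets more expensive 
under NPT/EPT and why large pages claw most of it back (a sketch only; 
the nested_walk_refs() helper is purely illustrative, assuming 4-level 
paging on both the guest and host side):

	/* Worst-case memory references to resolve one guest virtual address
	 * on a nested-paging TLB miss: every guest page-table entry is a
	 * guest-physical access that itself needs a host walk, plus one
	 * final host walk for the data address.  Roughly g*h + g + h,
	 * versus just g for native or PV direct paging. */
	static int nested_walk_refs(int g /* guest levels */, int h /* host levels */)
	{
		return g * h + g + h;
	}

	/* nested_walk_refs(4, 4) == 24 references vs. 4 natively.  Backing
	 * the guest with 2MB host pages drops one host level:
	 * nested_walk_refs(4, 3) == 19, and large pages on the guest side
	 * cut it down further still. */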

> Would it be possible I wonder to make
> a MMU virtualization layer for CPUs without support, using Xen's page
> table protection methods, and have KVM use that? Or does that amount
> to putting a significant amount of Xen hypervisor into the kernel..?

There are various benchmarks out there (check KVM Forum and Xen Summit 
presentations) showing NPT/EPT beating direct paging, but FWIW direct 
paging could be implemented in KVM.

A really unfortunate aspect of direct paging is that it requires the 
guest to know host physical addresses.  The guest therefore has to 
cooperate when doing any fancy memory tricks (live migration, 
save/restore, swapping, page sharing, etc.).  This introduces guest code 
paths just to make things like live migration work, which is extremely 
undesirable.
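
To make that concrete, here is roughly what installing a PTE looks like 
from inside a PV guest under direct paging.  This is only a sketch: the 
pfn_to_mfn(), arbitrary_virt_to_machine() and HYPERVISOR_mmu_update() 
names follow the Xen headers, but treat the exact code as illustrative 
rather than as the real Linux implementation.

	#include <linux/mm.h>		/* pte_t, pgprot_t, PAGE_SHIFT */
	#include <xen/interface/xen.h>	/* struct mmu_update, MMU_NORMAL_PT_UPDATE, DOMID_SELF */
	#include <asm/xen/hypercall.h>	/* HYPERVISOR_mmu_update() */
	#include <asm/xen/page.h>	/* pfn_to_mfn(), arbitrary_virt_to_machine() */

	/* Sketch: a PV guest can't just write a PTE into its (read-only)
	 * pagetables; it translates its pseudo-physical frame number into
	 * the machine frame number and asks Xen to install the entry. */
	static int pv_install_pte(pte_t *ptep, unsigned long pfn, pgprot_t prot)
	{
		struct mmu_update u;
		unsigned long mfn = pfn_to_mfn(pfn);	/* guest must know the real mfn */

		/* ptr = machine address of the PTE slot, tagged with the update
		 * type; val = the new PTE built from the mfn.  Xen checks that
		 * the mfn belongs to (or is granted to) this domain and that the
		 * target page is a pagetable page with no writable mappings. */
		u.ptr = arbitrary_virt_to_machine(ptep).maddr | MMU_NORMAL_PT_UPDATE;
		u.val = ((uint64_t)mfn << PAGE_SHIFT) | pgprot_val(prot);

		return HYPERVISOR_mmu_update(&u, 1, NULL, DOMID_SELF);
	}

It's exactly that pfn->mfn knowledge which has to be fixed up (or hidden 
behind more guest cooperation) whenever the host wants to move, swap or 
share the underlying machine pages.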

FWIW, I'm not advocating not taking the Xen dom0 patches.  Just pointing 
out that direct paging is orthogonal to the architectural differences 
between Xen and KVM.

Regards,

Anthony Liguori
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
