Date:	Mon, 20 Apr 2009 15:24:39 +0200
From:	Gerd Hoffmann <kraxel@...hat.com>
To:	Avi Kivity <avi@...hat.com>
CC:	Anthony Liguori <anthony@...emonkey.ws>,
	Huang Ying <ying.huang@...el.com>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Andi Kleen <andi@...stfloor.org>
Subject: Re: [PATCH] Add MCE support to KVM

On 04/20/09 14:43, Avi Kivity wrote:
> Gerd Hoffmann wrote:
>>> That said, I'd like to be able to emulate the Xen HVM hypercalls. But in
>>> any case, the hypercall implementation has to be in the kernel,
>>
>> No. With Xenner the xen hypercall emulation code lives in guest
>> address space.
>
> In this case the guest ring-0 code should trap the #GP, and install the
> hypercall page (which uses sysenter/syscall?). No kvm or qemu changes
> needed.

Doesn't fly.

Reason #1: In the pv-on-hvm case the guest kernel runs in ring 0.
Reason #2: Chicken-and-egg issue:  for the pv-on-hvm case only a few
            simple hypercalls are needed.  The code to handle them
            is small enough that it can be loaded directly into the
            hypercall page(s); see the sketch below.
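
For reference, real Xen lays the pv-on-hvm hypercall page out as one
32-byte entry per hypercall number, each just "mov $nr, %eax; vmcall;
ret".  My point is that the in-guest handler code for the few hypercalls
pv-on-hvm needs is about that small, so it can live in the page itself.
A minimal sketch of the stub layout (illustrative C, not the actual Xen
or Xenner code):

    #include <stdint.h>
    #include <string.h>

    #define ENTRY_SIZE 32   /* one 32-byte slot per hypercall number */

    /* Fill a 4K hypercall page with vmcall stubs, the way real Xen
     * does for pv-on-hvm guests (x86, little-endian immediate). */
    void fill_hypercall_page(uint8_t *page)
    {
        for (uint32_t nr = 0; nr < 4096 / ENTRY_SIZE; nr++) {
            uint8_t *p = page + nr * ENTRY_SIZE;
            *p++ = 0xb8;                            /* mov $imm32, %eax */
            memcpy(p, &nr, 4);                      /* %eax = hypercall nr */
            p += 4;
            p[0] = 0x0f; p[1] = 0x01; p[2] = 0xc1;  /* vmcall */
            p += 3;
            *p = 0xc3;                              /* ret */
        }
    }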

pure-pv doesn't need it in the first place.  But, yes, there I could
simply trap #GP, because the guest kernel runs in ring 1 (or ring 3 on
64-bit); a sketch of that follows.
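
Concretely, the trap-#GP scheme for pure-pv could look like this: the
guest kernel's wrmsr to the hypercall-page MSR faults because it runs
outside ring 0, and the ring-0 #GP handler emulates it.  A sketch only;
the MSR index, struct trap_regs, map_guest_page() and forward_to_guest()
are made up for illustration, and fill_hypercall_page() is the stub
writer sketched above:

    #include <stdint.h>

    #define MSR_HYPERCALL_PAGE 0x40000000u  /* assumed Xen-style index */

    struct trap_regs { uint64_t rip, rax, rcx, rdx; }; /* subset only */
    uint8_t *map_guest_page(uint64_t gpa);     /* hypothetical helpers */
    void forward_to_guest(struct trap_regs *regs);
    void fill_hypercall_page(uint8_t *page);   /* from the sketch above */

    void do_gp_fault(struct trap_regs *regs)
    {
        const uint8_t *insn = (const uint8_t *)(uintptr_t)regs->rip;

        /* wrmsr (0f 30) from ring 1/3 always raises #GP; emulate it
         * when it targets the hypercall-page MSR. */
        if (insn[0] == 0x0f && insn[1] == 0x30 &&
            (uint32_t)regs->rcx == MSR_HYPERCALL_PAGE) {
            uint64_t gpa = ((uint64_t)regs->rdx << 32) | (uint32_t)regs->rax;
            fill_hypercall_page(map_guest_page(gpa));
            regs->rip += 2;                    /* step over the wrmsr */
            return;
        }
        forward_to_guest(regs);                /* not ours: reflect #GP */
    }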

>>> Especially if we need to support
>>> tricky bits like continuations.
>>
>> Is there any reason to? I *think* xen does it for better scheduling
>> latency. But with xen emulation sitting in guest address space we can
>> schedule the guest at will anyway.
>
> It also improves latency within the guest itself. At least I think
> that's what the Hyper-V spec is saying. You can interrupt the execution
> of a long hypercall, inject an interrupt, and resume. Sort of like a
> rep/movs instruction, which the cpu can and will interrupt.

Hmm.  Needs investigation.  I'd expect the main source of latency to be
page table walking.  Xen works very differently from kvm+xenner here ...
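
For reference, the pattern Avi describes is what Xen implements with
hypercall_create_continuation(): a long hypercall bails out when
interrupt work is pending, writes its progress back into its own
guest-visible arguments, and rewinds the guest IP so the very same
hypercall instruction re-executes once the interrupt has been handled.
A rough sketch of the idea, with made-up names rather than Xen's actual
code:

    #include <stdint.h>

    #define HYPERCALL_INSN_LEN 3               /* vmcall is 3 bytes */
    #define ERESTART 85                        /* illustrative errno */

    struct guest_regs { uint64_t rip, rdi; };  /* subset only */
    struct guest_regs *current_regs(void);     /* hypothetical accessor */
    int interrupt_pending(void);               /* hypothetical check */
    void process_one_unit(uint64_t i);         /* hypothetical work item */

    /* A long-running hypercall that preempts itself, rep/movs-style. */
    long do_long_hypercall(uint64_t start, uint64_t count)
    {
        for (uint64_t i = start; i < count; i++) {
            process_one_unit(i);

            if (interrupt_pending()) {
                struct guest_regs *r = current_regs();
                r->rdi = i + 1;                /* progress -> 1st argument */
                r->rip -= HYPERCALL_INSN_LEN;  /* re-execute the hypercall */
                return -ERESTART;              /* deliver the interrupt */
            }
        }
        return 0;
    }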

> For Xenner, no (and you don't need to intercept the msr at all),  but for
> pv-on-hvm, you do need to update the code.

Xenner handling pv-on-hvm doesn't need code updates either.  Real Xen
does, as it uses vmcall; I'm not sure how they handle migration.
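
(For the record, the reason real Xen has to rewrite the page: vmcall
(Intel, 0f 01 c1) and vmmcall (AMD, 0f 01 d9) are different opcodes, so
a hypercall page written on one vendor's host faults on the other after
migration.  A sketch; cpu_is_intel() is made up:)

    #include <stdint.h>

    int cpu_is_intel(void);                    /* hypothetical vendor check */

    /* Emit the vendor-specific hypercall instruction into a stub. */
    void write_hypercall_insn(uint8_t *p)
    {
        p[0] = 0x0f;
        p[1] = 0x01;
        p[2] = cpu_is_intel() ? 0xc1  /* vmcall  */
                              : 0xd9; /* vmmcall */
    }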

cheers
   Gerd
