Date:	Wed, 2 Oct 2013 11:50:50 +0200
From:	Alexander Graf <agraf@...e.de>
To:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc:	Paolo Bonzini <pbonzini@...hat.com>,
	Paul Mackerras <paulus@...ba.org>,
	Gleb Natapov <gleb@...hat.com>,
	Michael Ellerman <michael@...erman.id.au>,
	linux-kernel@...r.kernel.org, mpm@...enic.com,
	herbert@...dor.hengli.com.au, linuxppc-dev@...abs.org,
	kvm@...r.kernel.org, kvm-ppc@...r.kernel.org, tytso@....edu
Subject: Re: [PATCH 3/3] KVM: PPC: Book3S: Add support for hwrng found on some powernv systems


On 02.10.2013, at 11:11, Alexander Graf wrote:

> 
> On 02.10.2013, at 11:06, Benjamin Herrenschmidt wrote:
> 
>> On Wed, 2013-10-02 at 10:46 +0200, Paolo Bonzini wrote:
>> 
>>> 
>>> Thanks.  Any chance you can give some numbers for a kernel hypercall and
>>> a userspace hypercall on Power, so we have actual data?  For example, a
>>> hypercall that returns H_PARAMETER as soon as possible.
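
[A guest-side micro-benchmark along these lines could look as follows. This is
a minimal sketch, assuming a pseries guest: plpar_hcall(), H_RANDOM, get_tb()
and PLPAR_HCALL_BUFSIZE are existing powerpc/pseries interfaces, while the
module itself and the iteration count are made up for illustration; any
fast-failing hypercall could be timed the same way.]

/*
 * Sketch: average timebase ticks per H_RANDOM hypercall from a pseries
 * guest.  Not from the patch under discussion; illustrative only.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <asm/hvcall.h>
#include <asm/time.h>

static int __init hcall_bench_init(void)
{
	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
	unsigned long tb0, tb1, iters = 100000, i;

	tb0 = get_tb();
	for (i = 0; i < iters; i++)
		plpar_hcall(H_RANDOM, retbuf);	/* or a call failing with H_PARAMETER */
	tb1 = get_tb();

	pr_info("H_RANDOM: %lu timebase ticks per call\n", (tb1 - tb0) / iters);
	return 0;
}

static void __exit hcall_bench_exit(void) { }

module_init(hcall_bench_init);
module_exit(hcall_bench_exit);
MODULE_LICENSE("GPL");
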
>> 
>> I don't have numbers at hand yet, but we basically have 3 places where
>> we can handle hypercalls (see the dispatch sketch after this list):
>> 
>> - Kernel real mode. This is where most of our MMU stuff goes, for
>> example, unless it needs to trigger a page fault in Linux. This is
>> executed with translation disabled and the MMU still in guest context.
>> This is the fastest path, since we don't take out the other threads or
>> perform any expensive context change. This is where we put the
>> "accelerated" H_RANDOM as well.
>> 
>> - Kernel virtual mode. That's a full exit, so all threads are out and
>> the MMU is switched back to host Linux. Things like vhost MMIO emulation
>> go there, as do page faults, etc.
>> 
>> - Qemu. This adds the round trip to userspace on top of the above.
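
[To make the three tiers concrete, here is a rough sketch of how a hypercall
falls through them. kvmppc_h_random_realmode() and handled_in_kernel() are
hypothetical names; H_TOO_HARD, KVM_EXIT_PAPR_HCALL, RESUME_GUEST and
RESUME_HOST are the real mechanisms Book3S HV KVM uses to punt a hypercall to
the next tier.]

/* Tier 1: real mode, MMU still in guest context -- the fast path. */
static long hcall_try_realmode(struct kvm_vcpu *vcpu, unsigned long opcode)
{
	switch (opcode) {
	case H_RANDOM:
		return kvmppc_h_random_realmode(vcpu);	/* hypothetical name */
	default:
		return H_TOO_HARD;	/* full exit; retry in virtual mode */
	}
}

/* Tier 2: kernel virtual mode -- all threads out, MMU back on host. */
static int hcall_try_virtmode(struct kvm_vcpu *vcpu, unsigned long opcode)
{
	if (handled_in_kernel(opcode))	/* hypothetical predicate */
		return RESUME_GUEST;

	/* Tier 3: round-trip to QEMU in userspace. */
	vcpu->run->exit_reason = KVM_EXIT_PAPR_HCALL;
	return RESUME_HOST;
}
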
> 
> Right, and the difference for the patch in question is really whether we handle it in kernel virtual mode or in QEMU, so the bulk of the overhead (kicking threads out of guest context, switching MMU context, etc.) happens either way.
> 
> So the additional overhead when handling it in QEMU really boils down to the userspace round trip (plus another round trip to read the random number).

Ah, sorry, I misread the patch. You're running the handler in real mode, of course :).

So how do you solve live migration between a kernel that has this patch and one that doesn't?
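
[The usual answer is to gate the feature on a KVM capability and let QEMU pick
the handler per host; since H_RANDOM carries no guest-visible state, only a
return value, the guest sees no difference after migration. A sketch, assuming
a capability like KVM_CAP_PPC_HWRNG (the name later kernels advertise for this
feature): kvm_enabled(), kvm_check_extension(), kvm_state and
spapr_register_hypercall() are existing QEMU interfaces, while
h_random_userspace() is a made-up callback name.]

/* QEMU-side sketch: use the in-kernel H_RANDOM handler only when the
 * host kernel advertises it; otherwise register a userspace handler. */
#include "sysemu/kvm.h"

static void spapr_rng_setup(void)
{
    if (kvm_enabled() &&
        kvm_check_extension(kvm_state, KVM_CAP_PPC_HWRNG) > 0) {
        /* Kernel consumes H_RANDOM in real mode; nothing to register. */
        return;
    }
    spapr_register_hypercall(H_RANDOM, h_random_userspace);
}
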


Alex

