Message-ID: <55D1DE8F.4050806@redhat.com>
Date:	Mon, 17 Aug 2015 06:15:59 -0700
From:	Paolo Bonzini <pbonzini@...hat.com>
To:	Avi Kivity <avi.kivity@...il.com>,
	Radim Krčmář <rkrcmar@...hat.com>,
	linux-kernel@...r.kernel.org
Cc:	kvm@...r.kernel.org
Subject: Re: [PATCH v2 4/5] KVM: add KVM_USER_EXIT vcpu ioctl for userspace
 exit



On 16/08/2015 13:27, Avi Kivity wrote:
> On 08/05/2015 07:33 PM, Radim Krčmář wrote:
>> The guest can use KVM_USER_EXIT instead of a signal-based exiting to
>> userspace.  Availability depends on KVM_CAP_USER_EXIT.
>> Only x86 is implemented so far.
>>
>> Signed-off-by: Radim Krčmář <rkrcmar@...hat.com>
>> ---
>>   v2:
>>    * use vcpu ioctl instead of vm one [4/5]
>>    * shrink kvm_user_exit from 64 to 32 bytes [4/5]
>>
>>   Documentation/virtual/kvm/api.txt | 30 ++++++++++++++++++++++++++++++
>>   arch/x86/kvm/x86.c                | 24 ++++++++++++++++++++++++
>>   include/uapi/linux/kvm.h          |  7 +++++++
>>   virt/kvm/kvm_main.c               |  5 +++--
>>   4 files changed, 64 insertions(+), 2 deletions(-)
>>
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 3c714d43a717..c5844f0b8e7c 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -3020,6 +3020,36 @@ Returns: 0 on success, -1 on error
>>     Queues an SMI on the thread's vcpu.
>>   +
>> +4.97 KVM_USER_EXIT
>> +
>> +Capability: KVM_CAP_USER_EXIT
>> +Architectures: x86
>> +Type: vcpu ioctl
>> +Parameters: struct kvm_user_exit (in)
>> +Returns: 0 on success,
>> +         -EFAULT if the parameter couldn't be read,
>> +         -EINVAL if 'reserved' is not zeroed,
>> +
>> +struct kvm_user_exit {
>> +    __u8 reserved[32];
>> +};
>> +
>> +The ioctl is asynchronous to VCPU execution and can be issued from
>> +all threads.
> 
> This breaks an invariant of vcpu ioctls, and also forces a cacheline
> bounce when we fget() the vcpu fd.

KVM_USER_EXIT in practice should be so rare (at least with the
in-kernel LAPIC) that I don't think this matters.  KVM_USER_EXIT is
relatively uninteresting; it only exists to provide an alternative to
signals that doesn't require expensive atomics on each and every
KVM_RUN. :(
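
For concreteness, with the ABI in this patch a wakeup from another
thread is just the following (a minimal sketch based on the quoted
documentation, assuming the patched <linux/kvm.h>; error handling
omitted):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Kick vcpu_fd out of KVM_RUN.  Any thread may call this;
     * 'reserved' must be all zeroes or the ioctl fails with -EINVAL. */
    static int kick_vcpu(int vcpu_fd)
    {
        struct kvm_user_exit args;

        memset(&args, 0, sizeof(args));
        return ioctl(vcpu_fd, KVM_USER_EXIT, &args);
    }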

In addition, when you do wake the VCPU you have to transfer information
from the signaling thread to the VCPU thread anyway, and that causes
cacheline bounces too.  For example, in QEMU both the iothread mutex
and the cpu->interrupt_request cachelines bounce.
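
Roughly, the waking side today does something like this (a simplified
QEMU-flavored sketch, not the exact code path):

    /* signaling thread */
    qemu_mutex_lock_iothread();                    /* mutex cacheline bounces */
    cpu->interrupt_request |= CPU_INTERRUPT_HARD;  /* this cacheline too */
    qemu_mutex_unlock_iothread();
    qemu_cpu_kick(cpu);                            /* then make KVM_RUN return */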

> We should really try to avoid this.  One option is to have a
> KVM_VCPU_MAKE_EXITFD vcpu ioctl, which returns an eventfd that you then
> write into.  You can make as many exitfds as you like, one for each
> waking thread, so they never cause cacheline conflicts.
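
In userspace that would presumably look like this (KVM_VCPU_MAKE_EXITFD
does not exist; this is only a sketch of the proposal):

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/ioctl.h>

    /* once per waking thread, so wakers never share a cacheline */
    int exitfd = ioctl(vcpu_fd, KVM_VCPU_MAKE_EXITFD, 0);

    /* later, to kick the vCPU out of KVM_RUN */
    uint64_t val = 1;
    write(exitfd, &val, sizeof(val));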

> Edit: I see the invariant was already broken.

Yeah, that was a long time ago.  The commit below is when the breakage
became apparent in kvm_vcpu_ioctl, but the invariant was already broken
before that for these 2-3 special asynchronous vcpu ioctls:

    commit 2122ff5eab8faec853e43f6de886e8dc8f31e317
    Author: Avi Kivity <avi@...hat.com>
    Date:   Thu May 13 11:25:04 2010 +0300

    KVM: move vcpu locking to dispatcher for generic vcpu ioctls

    All vcpu ioctls need to be locked, so instead of locking each one
    specifically we lock at the generic dispatcher.

    This patch only updates generic ioctls and leaves arch specific
    ioctls alone.

    Signed-off-by: Avi Kivity <avi@...hat.com>

Paolo