Message-ID: <20140922183542.GA1018@google.com>
Date: Mon, 22 Sep 2014 11:35:42 -0700
From: David Matlack <dmatlack@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Christian Borntraeger <borntraeger@...ibm.com>,
Gleb Natapov <gleb@...nel.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] kvm: don't take vcpu mutex for obviously invalid vcpu ioctls

On 09/22, Paolo Bonzini wrote:
> Il 22/09/2014 15:45, Christian Borntraeger ha scritto:
> > We now have an extra condition check for every valid ioctl, to make an error case go faster.
> > I know, the extra check is just a 1 or 2 cycles if branch prediction is right, but still.
>
> I applied the patch because the delay could be substantial, depending on
> what the other VCPU is doing. Perhaps something like this would be
> better?

I'm happy with either approach.

>
> (Untested, but Tested-by/Reviewed-bys are welcome).

There were a few build bugs in your diff. Here's a working version that
I tested. Feel free to add my Tested-by and Reviewed-by if you go with
this.

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c71931f..fbdcdc2 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -133,12 +133,10 @@ bool kvm_is_mmio_pfn(pfn_t pfn)
/*
* Switches to specified vcpu, until a matching vcpu_put()
*/
-int vcpu_load(struct kvm_vcpu *vcpu)
+static void __vcpu_load(struct kvm_vcpu *vcpu)
{
 	int cpu;
 
- if (mutex_lock_killable(&vcpu->mutex))
- return -EINTR;
if (unlikely(vcpu->pid != current->pids[PIDTYPE_PID].pid)) {
/* The thread running this VCPU changed. */
struct pid *oldpid = vcpu->pid;
@@ -151,6 +149,14 @@ int vcpu_load(struct kvm_vcpu *vcpu)
preempt_notifier_register(&vcpu->preempt_notifier);
kvm_arch_vcpu_load(vcpu, cpu);
put_cpu();
+}
+
+int vcpu_load(struct kvm_vcpu *vcpu)
+{
+ if (mutex_lock_killable(&vcpu->mutex))
+ return -EINTR;
+
+ __vcpu_load(vcpu);
return 0;
 }
 
@@ -2197,10 +2203,21 @@ static long kvm_vcpu_ioctl(struct file *filp,
return kvm_arch_vcpu_ioctl(filp, ioctl, arg);
 #endif
 
+ if (!mutex_trylock(&vcpu->mutex)) {
+ /*
+ * Before a potentially long sleep, check if we'd exit anyway.
+ * The common case is for the mutex not to be contended, when
+ * this does not add overhead.
+ */
+ if (unlikely(_IOC_TYPE(ioctl) != KVMIO))
+ return -EINVAL;
+
+ if (mutex_lock_killable(&vcpu->mutex))
+ return -EINTR;
+ }
+
+ __vcpu_load(vcpu);
- r = vcpu_load(vcpu);
- if (r)
- return r;
 
 	switch (ioctl) {
case KVM_RUN:
r = -EINVAL;
--