Message-ID: <5596809D.4000905@redhat.com>
Date: Fri, 3 Jul 2015 14:31:25 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Pontus Fuchs <pontus.fuchs@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
mingo@...hat.com,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
gleb@...nel.org
Subject: Re: [PATCH] sched,kvm: Fix KVM preempt_notifier usage
On 03/07/2015 14:19, Peter Zijlstra wrote:
> On Fri, Jul 03, 2015 at 01:12:11PM +0200, Paolo Bonzini wrote:
>> In fact you shouldn't have just tested the patch on a case _without_
>> preemption notifiers, you should have also benchmarked the impact that
>> static keys have _with_ preemption notifiers. In a
>> not-really-artificial case (one single-processor guest running on the
>> host), the static key patch adds a static_key_slow_inc on a relatively
>> hot path for KVM, which is not acceptable.
>
> Spawning the first vcpu is a hot path?
This is not *spawning* the first VCPU. Basically any critical section
for vcpu->mutex includes a preempt_notifier_register/unregister pair:
/*
 * Switches to specified vcpu, until a matching vcpu_put()
 */
int vcpu_load(struct kvm_vcpu *vcpu)
{
	int cpu;

	if (mutex_lock_killable(&vcpu->mutex))
		return -EINTR;
	cpu = get_cpu();
	preempt_notifier_register(&vcpu->preempt_notifier);
	kvm_arch_vcpu_load(vcpu, cpu);
	put_cpu();
	return 0;
}

void vcpu_put(struct kvm_vcpu *vcpu)
{
	preempt_disable();
	kvm_arch_vcpu_put(vcpu);
	preempt_notifier_unregister(&vcpu->preempt_notifier);
	preempt_enable();
	mutex_unlock(&vcpu->mutex);
}
So basically you're adding at least one static_key_slow_inc/dec pair to
every userspace exit.
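
For illustration only, here is a rough sketch of where such a static key
would sit, assuming the slow inc/dec is done in
preempt_notifier_register/unregister itself; the key name, its placement
and the __fire_* helper are my reconstruction, not the actual patch:

/*
 * Sketch (not the real patch): a hypothetical preempt_notifier_key that
 * gates the sched_in/sched_out notifier calls, with the slow-path
 * inc/dec taken at register/unregister time.
 */
static struct static_key preempt_notifier_key = STATIC_KEY_INIT_FALSE;

void preempt_notifier_register(struct preempt_notifier *notifier)
{
	/* slow path: takes the jump-label mutex and patches code */
	static_key_slow_inc(&preempt_notifier_key);
	hlist_add_head(&notifier->link, &current->preempt_notifiers);
}

void preempt_notifier_unregister(struct preempt_notifier *notifier)
{
	hlist_del(&notifier->link);
	/* and the slow path again on every vcpu_put() */
	static_key_slow_dec(&preempt_notifier_key);
}

static void fire_sched_in_preempt_notifiers(struct task_struct *curr)
{
	/* patched-out nop in the common no-notifier case */
	if (static_key_false(&preempt_notifier_key))
		__fire_sched_in_preempt_notifiers(curr);
}

With that shape, every vcpu_load()/vcpu_put() pair, i.e. every KVM_RUN
round trip to userspace, pays a static_key_slow_inc and a
static_key_slow_dec, which is the overhead I'm objecting to.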
Paolo