Message-ID: <3572e95a-a5eb-748b-25c8-b7e128cbbe1b@redhat.com>
Date: Mon, 31 Jul 2017 15:20:11 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: David Hildenbrand <david@...hat.com>,
"Longpeng (Mike)" <longpeng2@...wei.com>
Cc: rkrcmar@...hat.com, agraf@...e.com, borntraeger@...ibm.com,
cohuck@...hat.com, christoffer.dall@...aro.org,
marc.zyngier@....com, james.hogan@...tec.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, weidong.huang@...wei.com,
arei.gonglei@...wei.com, wangxinxin.wang@...wei.com,
longpeng.mike@...il.com
Subject: Re: [RFC] KVM: optimize the kvm_vcpu_on_spin
On 31/07/2017 14:27, David Hildenbrand wrote:
>> I'm not sure whether retrieving the vcpu's privilege level is
>> expensive on all architectures, so I record it in kvm_sched_out() to
>> minimize the extra cycles spent in kvm_vcpu_on_spin().
>>
> as you only care for x86 right now either way, you can directly optimize
> here for the good (here: x86) case (keeping changes and therefore
> possible bugs minimal).
I agree with Cornelia that this is inconsistent, so you shouldn't update
me->in_kernmode in kvm_vcpu_on_spin. However, get_cpl requires
vcpu_load on Intel x86, so Mike's patch is necessary (vmx_get_cpl ->
vmx_read_guest_seg_ar -> vmcs_read32).
Alternatively, we can add a new callback kvm_x86_ops->sched_out to x86
KVM, and call vmx_get_cpl from the Intel implementation (vmx_sched_out).
This will cache the result until the next sched_in, so that
kvm_vcpu_on_spin can use it.
Paolo