Message-ID: <4FFD5AED.3030307@linux.vnet.ibm.com>
Date: Wed, 11 Jul 2012 16:22:29 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: Avi Kivity <avi@...hat.com>
CC: "H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Rik van Riel <riel@...hat.com>,
S390 <linux-s390@...r.kernel.org>,
Carsten Otte <cotte@...ibm.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
KVM <kvm@...r.kernel.org>, chegu vinod <chegu_vinod@...com>,
"Andrew M. Theurer" <habanero@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>, X86 <x86@...nel.org>,
Gleb Natapov <gleb@...hat.com>, linux390@...ibm.com,
Srivatsa Vaddagiri <srivatsa.vaddagiri@...il.com>,
Joerg Roedel <joerg.roedel@....com>
Subject: Re: [PATCH RFC 1/2] kvm vcpu: Note down pause loop exit
On 07/11/2012 02:23 PM, Avi Kivity wrote:
> On 07/09/2012 09:20 AM, Raghavendra K T wrote:
>> Signed-off-by: Raghavendra K T<raghavendra.kt@...ux.vnet.ibm.com>
>>
>> Noting the vcpu that did a pause loop exit helps in filtering the right candidate to yield to.
>> Yielding to the same vcpu may result in more wasted cpu.
>>
>>
>> struct kvm_lpage_info {
>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>> index f75af40..a492f5d 100644
>> --- a/arch/x86/kvm/svm.c
>> +++ b/arch/x86/kvm/svm.c
>> @@ -3264,6 +3264,7 @@ static int interrupt_window_interception(struct vcpu_svm *svm)
>>
>> static int pause_interception(struct vcpu_svm *svm)
>> {
>> + svm->vcpu.arch.plo.pause_loop_exited = true;
>> kvm_vcpu_on_spin(&(svm->vcpu));
>> return 1;
>> }
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 32eb588..600fb3c 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -4945,6 +4945,7 @@ out:
>> static int handle_pause(struct kvm_vcpu *vcpu)
>> {
>> skip_emulated_instruction(vcpu);
>> + vcpu->arch.plo.pause_loop_exited = true;
>> kvm_vcpu_on_spin(vcpu);
>>
>
> This code is duplicated. Should we move it to kvm_vcpu_on_spin?
>
> That means the .plo structure needs to be in common code, but that's not
> too bad perhaps.
>
Since PLE is very much tied to x86, and the proposed changes are
specific to the PLE handler, I thought it better to keep this
arch-specific. So do you think it is good to move it inside
kvm_vcpu_on_spin() and make the ple structure belong to common code?
>> index be6d549..07dbd14 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -5331,7 +5331,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>>
>> if (req_immediate_exit)
>> smp_send_reschedule(vcpu->cpu);
>> -
>> + vcpu->arch.plo.pause_loop_exited = false;
>
> This adds some tiny overhead to vcpu entry. You could remove it by
> using the vcpu->requests mechanism to clear the flag, since
> vcpu->requests is already checked on every entry.
So IIUC, we would have a request bit for indicating PLE:
pause_interception() /handle_pause()
{
make_request(PLE_REQUEST)
vcpu_on_spin()
}
check_eligibility()
{
	!test_request(PLE_REQUEST) || dy_eligible()
	.
	.
}
vcpu_run()
{
check_request(PLE_REQUEST)
.
.
}
Is this the expected flow you had in mind?
[But my only concern is not resetting the flag in cases where we do not
do guest_enter(). Will test how that goes.]
>
>> kvm_guest_enter();
>>