Message-ID: <4FFD60EE.6020604@redhat.com>
Date: Wed, 11 Jul 2012 14:18:06 +0300
From: Avi Kivity <avi@...hat.com>
To: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
CC: "H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Rik van Riel <riel@...hat.com>,
S390 <linux-s390@...r.kernel.org>,
Carsten Otte <cotte@...ibm.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
KVM <kvm@...r.kernel.org>, chegu vinod <chegu_vinod@...com>,
"Andrew M. Theurer" <habanero@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>, X86 <x86@...nel.org>,
Gleb Natapov <gleb@...hat.com>, linux390@...ibm.com,
Srivatsa Vaddagiri <srivatsa.vaddagiri@...il.com>,
Joerg Roedel <joerg.roedel@....com>
Subject: Re: [PATCH RFC 1/2] kvm vcpu: Note down pause loop exit
On 07/11/2012 01:52 PM, Raghavendra K T wrote:
> On 07/11/2012 02:23 PM, Avi Kivity wrote:
>> On 07/09/2012 09:20 AM, Raghavendra K T wrote:
>>> Signed-off-by: Raghavendra K T<raghavendra.kt@...ux.vnet.ibm.com>
>>>
>>> Noting which vcpu pause-loop exited helps in filtering the right
>>> candidate to yield to.
>>> Yielding to the same vcpu may result in more wastage of cpu.
>>>
>>>
>>> struct kvm_lpage_info {
>>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>>> index f75af40..a492f5d 100644
>>> --- a/arch/x86/kvm/svm.c
>>> +++ b/arch/x86/kvm/svm.c
>>> @@ -3264,6 +3264,7 @@ static int interrupt_window_interception(struct vcpu_svm *svm)
>>>
>>> static int pause_interception(struct vcpu_svm *svm)
>>> {
>>> + svm->vcpu.arch.plo.pause_loop_exited = true;
>>> kvm_vcpu_on_spin(&(svm->vcpu));
>>> return 1;
>>> }
>>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>>> index 32eb588..600fb3c 100644
>>> --- a/arch/x86/kvm/vmx.c
>>> +++ b/arch/x86/kvm/vmx.c
>>> @@ -4945,6 +4945,7 @@ out:
>>> static int handle_pause(struct kvm_vcpu *vcpu)
>>> {
>>> skip_emulated_instruction(vcpu);
>>> + vcpu->arch.plo.pause_loop_exited = true;
>>> kvm_vcpu_on_spin(vcpu);
>>>
>>
>> This code is duplicated. Should we move it to kvm_vcpu_on_spin?
>>
>> That means the .plo structure needs to be in common code, but that's not
>> too bad perhaps.
>>
>
> Since PLE is very much tied to x86, and the proposed changes are very
> much specific to the PLE handler, I thought it was better to keep them
> arch specific.
>
> So do you think it is good to move this inside vcpu_on_spin and make the
> ple structure part of common code?
See the discussion with Christian. PLE is tied to x86, but cpu_relax()
and facilities to trap it are not.
>>
>> This adds some tiny overhead to vcpu entry. You could remove it by
>> using the vcpu->requests mechanism to clear the flag, since
>> vcpu->requests is already checked on every entry.
>
> So IIUC, let's have a request bit for indicating PLE:
>
> pause_interception() /handle_pause()
> {
> make_request(PLE_REQUEST)
> vcpu_on_spin()
>
> }
>
> check_eligibility()
> {
> !test_request(PLE_REQUEST) || dy_eligible()
> .
> .
> }
>
> vcpu_run()
> {
>
> check_request(PLE_REQUEST)
> .
> .
> }
>
> Is this the expected flow you had in mind?
Yes, something like that.
>
> [ But my only concern is not resetting the flag in cases where we do
> not do guest_enter(). Will test how that goes. ]
Hm, suppose we're next in line for a ticket lock and exit due to PLE.
The lock holder completes and unlocks, which really assigns the lock to
us. So now we are the lock owner, yet we are still marked as a bad
yield target in the PLE code.
--
error compiling committee.c: too many arguments to function