Message-ID: <4FE2F62A.9090804@linux.vnet.ibm.com>
Date: Thu, 21 Jun 2012 15:53:38 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: Gleb Natapov <gleb@...hat.com>
CC: Rik van Riel <riel@...hat.com>, Avi Kivity <avi@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Srikar <srikar@...ux.vnet.ibm.com>,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
KVM <kvm@...r.kernel.org>, Ingo Molnar <mingo@...hat.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] kvm: handle last_boosted_vcpu = 0 case
On 06/21/2012 12:13 PM, Gleb Natapov wrote:
> On Tue, Jun 19, 2012 at 04:51:04PM -0400, Rik van Riel wrote:
>> On Wed, 20 Jun 2012 01:50:50 +0530
>> Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com> wrote:
>>
>>>
>>> In the PLE handler code, the last_boosted_vcpu (lbv) variable serves
>>> as the reference point for where to start when we enter.
>>
>>> Also, statistical analysis (below) shows that lbv is not very well
>>> distributed with the current approach.
>>
>> You are the second person to spot this bug today (yes, today).
>>
>> Due to time zones, the first person has not had a chance yet to
>> test the patch below, which might fix the issue...
>>
>> Please let me know how it goes.
>>
>> ====8<====
>>
>> If last_boosted_vcpu == 0, then we fall through all test cases and
>> may end up with all VCPUs pouncing on vcpu 0. With a large enough
>> guest, this can result in enormous runqueue lock contention, which
>> can prevent vcpu0 from running, leading to a livelock.
>>
>> Changing < to <= makes sure we properly handle that case.
>>
>> Signed-off-by: Rik van Riel <riel@...hat.com>
>> ---
>> virt/kvm/kvm_main.c | 2 +-
>> 1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index 7e14068..1da542b 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -1586,7 +1586,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>>  	 */
>>  	for (pass = 0; pass < 2 && !yielded; pass++) {
>>  		kvm_for_each_vcpu(i, vcpu, kvm) {
>> -			if (!pass && i < last_boosted_vcpu) {
>> +			if (!pass && i <= last_boosted_vcpu) {
>>  				i = last_boosted_vcpu;
>>  				continue;
>>  			} else if (pass && i > last_boosted_vcpu)
>>
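
To spell the failure mode out in code: with last_boosted_vcpu == 0 the old
"i < last_boosted_vcpu" test is never true, so the skip-ahead on the first
pass never fires and every spinning VCPU starts its scan at vcpu 0 again.
A simplified sketch of the scan with the fix applied (eligibility checks
omitted, so not the exact upstream code):

	/*
	 * Simplified sketch of the directed-yield scan in kvm_vcpu_on_spin(),
	 * with the <= fix.  Pass 0 now covers last_boosted_vcpu+1 .. N-1 and
	 * pass 1 covers 0 .. last_boosted_vcpu, so nobody restarts the scan
	 * at vcpu 0 when last_boosted_vcpu == 0.
	 */
	for (pass = 0; pass < 2 && !yielded; pass++) {
		kvm_for_each_vcpu(i, vcpu, kvm) {
			if (!pass && i <= last_boosted_vcpu) {
				/* skip ahead on the first pass */
				i = last_boosted_vcpu;
				continue;
			} else if (pass && i > last_boosted_vcpu)
				break;
			if (vcpu == me)
				continue;
			if (kvm_vcpu_yield_to(vcpu)) {
				kvm->last_boosted_vcpu = i;
				yielded = 1;
				break;
			}
		}
	}
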
> Looks correct. We can simplify this by introducing something like:
>
> #define kvm_for_each_vcpu_from(idx, n, vcpup, kvm) \
> 	for (n = atomic_read(&kvm->online_vcpus); \
> 	     n && (vcpup = kvm_get_vcpu(kvm, idx)) != NULL; \
> 	     n--, idx = (idx+1) % atomic_read(&kvm->online_vcpus))
>
Thumbs up for this simplification. It really helps in all the places
where we want to start iterating from the middle of the vcpu list.
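
For illustration, with that macro the whole two-pass scan in
kvm_vcpu_on_spin() could collapse into a single wrap-around walk, roughly
like this (untested sketch, remaining eligibility checks omitted):

	idx = (kvm->last_boosted_vcpu + 1) % atomic_read(&kvm->online_vcpus);
	kvm_for_each_vcpu_from(idx, n, vcpu, kvm) {
		if (vcpu == me)
			continue;
		if (kvm_vcpu_yield_to(vcpu)) {
			/* idx is the VCPU we actually yielded to */
			kvm->last_boosted_vcpu = idx;
			break;
		}
	}

and the < vs <= corner case goes away entirely, since the starting point
is computed once up front.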