Message-ID: <4D0008D5.1040102@redhat.com>
Date: Wed, 08 Dec 2010 17:38:13 -0500
From: Rik van Riel <riel@...hat.com>
To: Avi Kivity <avi@...hat.com>
CC: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>,
Anthony Liguori <aliguori@...ux.vnet.ibm.com>
Subject: Re: [RFC PATCH 3/3] kvm: use yield_to instead of sleep in kvm_vcpu_on_spin
On 12/05/2010 07:56 AM, Avi Kivity wrote:
>> +		if (vcpu == me)
>> +			continue;
>> +		if (vcpu->spinning)
>> +			continue;
>
> You may well want to wake up a spinner. Suppose
>
> A takes a lock
> B preempts A
> B grabs a ticket, starts spinning, yields to A
> A releases lock
> A grabs ticket, starts spinning
>
> at this point, we want A to yield to B, but it won't because of this check.
That's a good point. I guess we'll have to benchmark both with
and without the vcpu->spinning logic.
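
To make the ordering concrete, here's a toy ticket lock (userspace
stdatomic flavour; the names are made up for illustration, not the
guest's actual lock).  Once B holds ticket k and A re-acquires with
ticket k+1, nothing makes progress until B runs, so the spinner is
exactly the vcpu worth boosting:

#include <stdatomic.h>

/* Toy ticket lock, illustration only. */
struct ticket_lock {
	atomic_uint next_ticket;	/* next ticket handed out */
	atomic_uint now_serving;	/* ticket allowed to hold the lock */
};

static void ticket_lock(struct ticket_lock *l)
{
	unsigned int me = atomic_fetch_add(&l->next_ticket, 1);

	/* B spins here on ticket k.  When A unlocks and immediately
	 * relocks, A draws ticket k+1 and spins behind B, so skipping
	 * spinning vcpus means A never yields to B. */
	while (atomic_load(&l->now_serving) != me)
		;	/* spin */
}

static void ticket_unlock(struct ticket_lock *l)
{
	atomic_fetch_add(&l->now_serving, 1);
}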
>> +		if (!task)
>> +			continue;
>> +		if (waitqueue_active(&vcpu->wq))
>> +			continue;
>> +		if (task->flags & PF_VCPU)
>> +			continue;
>> +		kvm->last_boosted_vcpu = i;
>> +		yield_to(task);
>> +		break;
>> +	}
>
> I think a random selection algorithm will be a better fit against
> special guest behaviour.
Possibly, though I suspect we'd have to hit very heavy overcommit ratios
with very large VMs before round robin stops working.
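
If we do go random, the change would be small.  A sketch, where
vcpu_eligible() and vcpu_task() stand in for the checks and the
task lookup above, and random32() is the in-kernel PRNG:

	/* Scan from a random origin instead of last_boosted_vcpu,
	 * so a guest cannot predict which vcpu gets boosted next. */
	int nr = atomic_read(&kvm->online_vcpus);
	int start = random32() % nr;
	int i;

	for (i = 0; i < nr; i++) {
		struct kvm_vcpu *vcpu = kvm->vcpus[(start + i) % nr];

		if (!vcpu || vcpu == me)
			continue;
		if (!vcpu_eligible(vcpu))	/* same checks as above */
			continue;
		yield_to(vcpu_task(vcpu));
		break;
	}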
>> -	/* Sleep for 100 us, and hope lock-holder got scheduled */
>> -	expires = ktime_add_ns(ktime_get(), 100000UL);
>> -	schedule_hrtimeout(&expires, HRTIMER_MODE_ABS);
>> +	if (first_round && last_boosted_vcpu == kvm->last_boosted_vcpu) {
>> +		/* We have not found anyone yet. */
>> +		first_round = 0;
>> +		goto again;
>
> Need to guarantee termination.
We do that by setting first_round to 0 :)
With this patch, we walk at most N+1 VCPUs in a VM with
N VCPUs.
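
The control flow is just a one-shot retry; stripped of the
eligibility checks, it looks like this:

	int first_round = 1;

again:
	/* ... scan all vcpus; on success, set kvm->last_boosted_vcpu,
	 * yield_to() the task and return ... */
	if (first_round && last_boosted_vcpu == kvm->last_boosted_vcpu) {
		first_round = 0;	/* cleared before the jump ... */
		goto again;		/* ... so this fires at most once */
	}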
--
All rights reversed