Message-ID: <5065B00A.4050107@linux.vnet.ibm.com>
Date: Fri, 28 Sep 2012 19:41:22 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: habanero@...ux.vnet.ibm.com
CC: Avi Kivity <avi@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
"H. Peter Anvin" <hpa@...or.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Ingo Molnar <mingo@...hat.com>, Rik van Riel <riel@...hat.com>,
Srikar <srikar@...ux.vnet.ibm.com>,
"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
KVM <kvm@...r.kernel.org>, Jiannan Ouyang <ouyang@...pitt.edu>,
chegu vinod <chegu_vinod@...com>,
LKML <linux-kernel@...r.kernel.org>,
Srivatsa Vaddagiri <srivatsa.vaddagiri@...il.com>,
Gleb Natapov <gleb@...hat.com>,
Andrew Jones <drjones@...hat.com>
Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios
in PLE handler
On 09/28/2012 05:10 PM, Andrew Theurer wrote:
> On Fri, 2012-09-28 at 11:08 +0530, Raghavendra K T wrote:
>> On 09/27/2012 05:33 PM, Avi Kivity wrote:
>>> On 09/27/2012 01:23 PM, Raghavendra K T wrote:
>>>>>
[...]
>>>
>>> Also there may be a lot of false positives (deferred preemptions even
>>> when there is no contention).
>
> It will be interesting to see how this behaves with a very high lock
> activity in a guest. Once the scheduler defers preemption, is it for a
> fixed amount of time, or does it know to cut the deferral short as soon
> as the lock depth is reduced [by x]?
The design/protocol that Vatsa had in mind was something like this:
- The scheduler does not defer preemption of a lock-holding vcpu
indefinitely; it may grant one extra chance lasting only a few ticks.
In addition to granting the chance, the scheduler sets an indication
that the vcpu has been given that chance.
- Once the vcpu releases (all) the lock(s), if it had been given a
chance, it should clear that indication (ACK) and relinquish the cpu.