Message-ID: <20090605052755.GE11755@balbir.in.ibm.com>
Date: Fri, 5 Jun 2009 13:27:55 +0800
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: Avi Kivity <avi@...hat.com>
Cc: bharata@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Gautham R Shenoy <ego@...ibm.com>,
Srivatsa Vaddagiri <vatsa@...ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Pavel Emelyanov <xemul@...nvz.org>, kvm@...r.kernel.org,
Linux Containers <containers@...ts.linux-foundation.org>,
Herbert Poetzl <herbert@...hfloor.at>
Subject: Re: [RFC] CPU hard limits

* Avi Kivity <avi@...hat.com> [2009-06-05 08:21:43]:
> Balbir Singh wrote:
>>> But then there is no other way to make a *guarantee*; guarantees come
>>> at the cost of idling resources, no? Can you show me any other
>>> combination that will provide the specified guarantees without idling
>>> the system?
>>>
>>
>> OK, I see part of your concern, but I think we could do some
>> optimizations during design. For example, if all groups have reached
>> their hard limits and the system is idle, we could start a new
>> hard-limit interval immediately, so that the idleness is removed.
>> Would that be an acceptable design point?
>
> I think so. Given guarantees G1..Gn (0 <= Gi <= 1; sum(Gi) <= 1), and
> a CPU hog running in each group, how would the algorithm divide
> resources?
>
As per the matrix calculation; but as soon as we reach an idle point,
we redistribute the bandwidth and start a new quantum, so to speak, in
which all groups are recharged up to their hard limits.
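
To make that concrete, here is a minimal sketch of the idle-point
restart, assuming a per-group quota tracked over a fixed bandwidth
period; all names (struct bw_group, maybe_restart_period) are made up
for illustration and are not existing kernel interfaces:

#include <stdbool.h>
#include <stdint.h>

struct bw_group {
	uint64_t quota;     /* hard limit per bandwidth period, in ns */
	uint64_t consumed;  /* CPU time used in the current period */
	bool throttled;     /* has this group exhausted its quota? */
};

/*
 * Hypothetical idle-path hook: if every group has hit its hard limit
 * while the CPU sits idle, begin a fresh period at once instead of
 * waiting for the period timer, so the idle time is not wasted.
 */
static void maybe_restart_period(struct bw_group *grp, int n)
{
	for (int i = 0; i < n; i++)
		if (!grp[i].throttled)
			return;  /* someone is still runnable; keep waiting */

	for (int i = 0; i < n; i++) {
		grp[i].consumed = 0;
		grp[i].throttled = false;
	}
}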

To answer your question: with a CPU hog running in each group, the
division would follow the matrix calculation, since the system never
reaches an idle point during the bandwidth period.
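
To spell out what I mean by the matrix calculation: group i is
guaranteed at least what the other groups cannot consume, i.e.
Gi <= 1 - sum_{j != i} Lj. One assignment that meets every guarantee
exactly is Li = Gi + (1 - sum(G))/(n - 1), for n >= 2. A small
userspace illustration (purely illustrative, not kernel code):

#include <stdio.h>

/* Convert guarantees g[0..n-1] (fractions of one CPU, summing to at
 * most 1) into hard limits l[0..n-1] such that the other groups'
 * limits sum to exactly 1 - g[i] for every i. Assumes n >= 2. */
static void guarantees_to_limits(const double *g, double *l, int n)
{
	double sum = 0.0;

	for (int i = 0; i < n; i++)
		sum += g[i];

	/* spread the unreserved bandwidth across all the limits */
	double slack = (1.0 - sum) / (n - 1);

	for (int i = 0; i < n; i++)
		l[i] = g[i] + slack;
}

int main(void)
{
	/* e.g. three hogs guaranteed 50%, 20% and 10% of the CPU */
	double g[] = { 0.5, 0.2, 0.1 }, l[3];

	guarantees_to_limits(g, l, 3);
	for (int i = 0; i < 3; i++)
		printf("group %d: guarantee %.0f%% -> limit %.0f%%\n",
		       i, g[i] * 100, l[i] * 100);
	return 0;
}

Note that the limits sum to more than 100%; that is fine, because with
a hog in every group each group still gets at least its guarantee,
while a group that idles lets the others expand up to their limits.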
--
Balbir