Message-ID: <4AD466E5.4010206@openvz.org>
Date: Tue, 13 Oct 2009 15:39:17 +0400
From: Pavel Emelyanov <xemul@...nvz.org>
To: vatsa@...ibm.com, Bharata B Rao <bharata@...ux.vnet.ibm.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>
CC: linux-kernel@...r.kernel.org,
Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Gautham R Shenoy <ego@...ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Herbert Poetzl <herbert@...hfloor.at>,
Avi Kivity <avi@...hat.com>,
Chris Friesen <cfriesen@...tel.com>,
Paul Menage <menage@...gle.com>,
Mike Waychison <mikew@...gle.com>
Subject: Re: [RFC v2 PATCH 0/8] CFS Hard limits - v2
> IMO Pavel's requirement can be met with a hard limit of 25%
>
> 2 CPU of 1GHz = (1/2 x 4) (1/2 x 2) GHz CPUs
> = 1/4 x 4 2GHz CPU
> = 25% of (4 2GHz CPU)
>
> IOW by hard-limiting a container thread to run just 0.5sec every sec on a 2GHz
> cpu, it is effectively making progress at the rate of 1GHz?
So, any suggestions on this? I'm not against the patches themselves.
I'm just trying to say that setting a cpu limit with two numbers is
not a good way to go (or at least a clear explanation of how to
calculate them should ship with the patches).
I propose that we first collect what *can* be done. I see the following
possibilities:
1) two time values (as it is now) - I believe this is inconvenient.
2) an amount in percent (like 50%) - this is how it works in
   OpenVZ, and customers are quite happy with it. It's better than
   two numbers, since you need to specify only one clear number.
3) virtual cpu power in MHz/GHz - I don't agree with Balbir that
   this is difficult for an administrator. It's better than two
   numbers and better than a percentage, since the amount of
   cpu time received by a container will not change after migrating
   to a more powerful CPU.
Thoughts?
> - vatsa
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/