Message-ID: <20091013120354.GF24787@MAIL.13thfloor.at>
Date:	Tue, 13 Oct 2009 14:03:54 +0200
From:	Herbert Poetzl <herbert@...hfloor.at>
To:	Pavel Emelyanov <xemul@...nvz.org>
Cc:	vatsa@...ibm.com, Bharata B Rao <bharata@...ux.vnet.ibm.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org,
	Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
	Gautham R Shenoy <ego@...ibm.com>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Avi Kivity <avi@...hat.com>,
	Chris Friesen <cfriesen@...tel.com>,
	Paul Menage <menage@...gle.com>,
	Mike Waychison <mikew@...gle.com>
Subject: Re: [RFC v2 PATCH 0/8] CFS Hard limits - v2

On Tue, Oct 13, 2009 at 03:39:17PM +0400, Pavel Emelyanov wrote:
> > IMO Pavel's requirement can be met with a hard limit of 25%
> > 
> > 	2 CPUs of 1GHz = (1/2 x 4) x (1/2 x 2)GHz CPUs
> > 		       = 1/4 x (4 x 2GHz CPUs)
> > 		       = 25% of (4 x 2GHz CPUs)
> > 
> > IOW by hard-limiting a container thread to run just 0.5sec every
> > sec on a 2GHz cpu, it is effectively making progress at the rate of
> > 1GHz?

> So, any suggestions on this? I'm not against the patches themselves.
> I'm just trying to say that setting a cpulimit with 2 numbers is
> not a good way to go (at the least, a clear explanation of how to
> calculate them should accompany the patches).

as I already stated, it seems perfectly fine to me (to have
two values, period and runtime), IMHO it is quite natural to
understand, and it allows one (to some degree) to control the
scheduling behaviour by choosing the multiplier (i.e.
use 1s/0.5s vs 100ms/50ms) ...
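to illustrate the point about the multiplier, here is a small
sketch (python, values in microseconds; just an illustration,
not the kernel interface itself): both pairs enforce the same
50% limit, but the coarser period lets a throttled group sit
idle for longer stretches.

```python
# Same 50% CPU limit at two granularities: the runtime/period ratio
# bounds throughput, while the absolute period bounds how long a
# throttled group can be kept off the CPU before it runs again.
limits = [
    (1_000_000, 500_000),  # 1s period, 0.5s runtime (coarse)
    (100_000, 50_000),     # 100ms period, 50ms runtime (fine)
]
ratios = [runtime / period for period, runtime in limits]
print(ratios)  # [0.5, 0.5] -- identical throughput bound
```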

we already incorporated the patch (for testing) in our
current release, and it seems to work fine and do what we
need/want (see http://linux-vserver.org/util-vserver:Cgroups)

> I propose to first collect what *can* be done. I see the following
> possibilities:

> 1) two times (as it is now) - I believe this is inconvenient.
> 2) the amount in percents (like 50%) - this is how it works in
>    OpenVZ and customers are quite happy with it. It's better than
>    two numbers, since you need to specify only one clean number.

can be trivially mapped to the two values by choosing a
fixed multiplicative base (let's say '1s' to simplify :)

  with 50%, you get 1s/0.5s
  with 20%, you get 1s/0.2s
  with  5%, you get 1s/0.05s

well, you get the idea :)
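the mapping above is a one-liner; a sketch in python (function
name hypothetical, period fixed at 1s as the base):

```python
def percent_to_limits(percent, period_us=1_000_000):
    """Map a single percentage (OpenVZ style) onto the
    (period, runtime) pair, using a fixed 1s period as the
    multiplicative base."""
    return period_us, period_us * percent // 100

print(percent_to_limits(50))  # (1000000, 500000) -> 1s/0.5s
print(percent_to_limits(20))  # (1000000, 200000) -> 1s/0.2s
print(percent_to_limits(5))   # (1000000, 50000)  -> 1s/0.05s
```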

> 3) virtual cpu power in M/GHz - I don't agree with Balbir that
>    this is difficult for administrators. This is better than two
>    numbers and better than the percentage, since the amount of
>    cpu time a container gets will not change after migrating to
>    a more powerful CPU.

I think this is completely artificial, and adds a lot
of silly corner cases, e.g. cpu speed changes (think
SpeedStep and friends), and doesn't help the administrator
in any way ... nevertheless, it should also be trivial to
map to the two values if you do the following:

  Host CPU = 4.5GHz
  Desired Guest CPU = 2.0GHz

  2.0/4.5 = 0.444... -> ~44% -> 1s/0.44s
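the same calculation as a sketch (python, function name
hypothetical): the guest's share of the host's cycles per
period becomes the runtime.

```python
def ghz_to_limits(host_ghz, guest_ghz, period_us=1_000_000):
    # runtime is the guest's fraction of host cycles per period
    return period_us, int(period_us * guest_ghz / host_ghz)

print(ghz_to_limits(4.5, 2.0))  # (1000000, 444444) ~ 1s/0.44s
```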

> Thoughts?

so once again, 'we' (Linux-VServer) are perfectly happy
with the current approach; the only difference from what we
used to have is that we calculated the period and runtime
in jiffies rather than microseconds, and called them interval
and rate (which is as simple to map as the percentage
OpenVZ uses)
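that jiffies-to-microseconds mapping is also trivial; a sketch
(python, HZ value and names hypothetical):

```python
US_PER_SEC = 1_000_000
HZ = 250  # hypothetical CONFIG_HZ; one jiffy = 1/HZ seconds

def jiffies_to_us(interval_jiffies, rate_jiffies):
    # convert the old (interval, rate) pair in jiffies to the
    # new (period, runtime) pair in microseconds
    us_per_jiffy = US_PER_SEC // HZ
    return interval_jiffies * us_per_jiffy, rate_jiffies * us_per_jiffy

print(jiffies_to_us(250, 125))  # (1000000, 500000) -> 1s/0.5s
```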

best,
Herbert

> > - vatsa
> > 
