Date:	Fri, 14 Sep 2012 11:14:47 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Tejun Heo <tj@...nel.org>, containers@...ts.linux-foundation.org,
	cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
	Li Zefan <lizefan@...wei.com>, Michal Hocko <mhocko@...e.cz>,
	Glauber Costa <glommer@...allels.com>,
	Paul Turner <pjt@...gle.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Thomas Graf <tgraf@...g.ch>, Paul Mackerras <paulus@...ba.org>,
	Ingo Molnar <mingo@...hat.com>,
	Arnaldo Carvalho de Melo <acme@...stprotocols.net>,
	Neil Horman <nhorman@...driver.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
	Serge Hallyn <serge.hallyn@...ntu.com>
Subject: Re: [RFC] cgroup TODOs

On Fri, Sep 14, 2012 at 04:53:29PM +0200, Peter Zijlstra wrote:
> On Fri, 2012-09-14 at 10:25 -0400, Vivek Goyal wrote:
> > So while the % model is more intuitive to users, it is hard to implement.
> 
> I don't agree with that. The fixed quota thing is counter-intuitive and
> hard to use. It begets questions like: why, if everything is idle
> except my task, am I not getting the full throughput?

Actually, by fixed quota I meant a minimum fixed %. So if the other groups
are idle, this group still gets to use 100% of the bandwidth. When resources
are highly contended, this group gets its minimum fixed %.
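
To make the semantics concrete, a toy model in Python (illustrative only,
not kernel code; the group names and numbers are made up):

def effective_share(guarantees, active):
    """guarantees: group -> minimum fraction (summing to <= 1.0);
    active: the set of groups currently issuing I/O."""
    # Idle groups' bandwidth is redistributed among the active ones,
    # so each active group gets at least its configured minimum.
    total = sum(guarantees[g] for g in active)
    return {g: guarantees[g] / total for g in active}

guarantees = {"web": 0.3, "db": 0.5, "batch": 0.2}
# Only 'web' is active: it gets 100% despite a 30% guarantee.
print(effective_share(guarantees, {"web"}))
# All three contend: each gets exactly its configured minimum.
print(effective_share(guarantees, {"web", "db", "batch"}))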

> 
> It also makes adding entities harder because you're constrained to 100%.
> This means you have to start each new cgroup with 0% because any !0
> value will eventually get you over 100%; it also means you have to do
> some form of admission control to make sure you never get over that
> 100%.
> 
> Starting with 0% is not convenient for people... they think this is the
> wrong default, even though, as argued above, it is the only possible
> value.

We don't have to start with 0%. We can keep a pool with a dynamic % and
launch all the virtual machines from that single pool, so nobody starts
with 0%. Only if we require a certain % for a machine do we look at the
peers, check whether that much bandwidth is free, create a cgroup, and move
the virtual machine there; otherwise we deny the resources.

So I think it is doable; it is just painful and tricky, and I think a lot
of it will live in user space.
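
Roughly this shape, as a user-space sketch in Python (the names are made
up and the actual cgroup filesystem operations are elided):

class Admission:
    def __init__(self):
        self.reserved = {}          # group name -> reserved fraction

    def reserve(self, name, pct):
        # Deny if the new reservation would oversubscribe the device.
        if sum(self.reserved.values()) + pct > 1.0:
            return False
        # Otherwise one would create the cgroup here and move the
        # virtual machine's tasks out of the shared dynamic pool.
        self.reserved[name] = pct
        return True

adm = Admission()
assert adm.reserve("vm-db", 0.5)      # fits; VM gets its own group
assert not adm.reserve("vm-x", 0.6)   # denied; stays in the shared pool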

> 
> >  So
> > an easier way is to stick to the model of relative weights/shares and
> > let the user specify the relative importance of a virtual machine; the
> > actual quota or % will then vary dynamically, depending on the other
> > tasks/components in the system.
> > 
> > Thoughts? 
> 
> cpu does the relative weight, so 'users' will have to deal with it
> anyway regardless of blk; it's effectively free of learning curve for all
> subsequent controllers.

I am inclined to keep it simple in the kernel and just follow the cpu model
of relative weights, treating tasks and groups at the same level in the
hierarchy. It makes behavior consistent across the controllers, and I
think it might just work for the majority of cases.
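
That model is easy to state: siblings (tasks and groups alike) split the
bandwidth in proportion to their weights, so adding an entity just dilutes
the others and no admission control is needed. A toy sketch in Python
(the weights are illustrative):

def weight_shares(weights):
    total = sum(weights.values())
    return {entity: w / total for entity, w in weights.items()}

# Tasks and groups as siblings at the same level.
print(weight_shares({"groupA": 1024, "groupB": 512, "task1": 1024}))
# Adding a sibling renormalizes everyone; nothing can exceed 100%.
print(weight_shares({"groupA": 1024, "groupB": 512,
                     "task1": 1024, "groupC": 1024}))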

Those who really need to implement the % model will have to do the heavy
lifting in user space. I am skeptical that this will take off, but the
kernel does not prohibit somebody from creating a group, moving all tasks
there, and making sure tasks and groups are not at the same level, so that
the % becomes more predictable. It's just that that's not the default from
the kernel.
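
In that setup user space can even solve for the weight that yields a
target % among group-only siblings, e.g. (toy sketch, made-up numbers):

def weight_for_percent(target_pct, sibling_weights):
    # Solves w / (w + sum(siblings)) == target_pct for w;
    # only meaningful when target_pct < 1.0.
    others = sum(sibling_weights)
    return target_pct * others / (1.0 - target_pct)

w = weight_for_percent(0.5, [1024, 512])
print(w, w / (w + 1024 + 512))   # 1536.0 0.5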

So yes, doing it the cpu controller way in the block controller should be
reasonable.

Thanks
Vivek
