Date:	Tue, 27 Jul 2010 11:40:37 +0100
From:	"Daniel P. Berrange" <berrange@...hat.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Nauman Rafique <nauman@...gle.com>,
	Munehiro Ikeda <m-ikeda@...jp.nec.com>,
	linux-kernel@...r.kernel.org, Ryo Tsuruta <ryov@...inux.co.jp>,
	taka@...inux.co.jp, Andrea Righi <righi.andrea@...il.com>,
	Gui Jianfeng <guijianfeng@...fujitsu.com>,
	akpm@...ux-foundation.org, balbir@...ux.vnet.ibm.com
Subject: Re: [RFC][PATCH 00/11] blkiocg async support

On Fri, Jul 16, 2010 at 11:12:34AM -0400, Vivek Goyal wrote:
> On Fri, Jul 16, 2010 at 03:53:09PM +0100, Daniel P. Berrange wrote:
> > On Fri, Jul 16, 2010 at 10:35:36AM -0400, Vivek Goyal wrote:
> > > On Fri, Jul 16, 2010 at 03:15:49PM +0100, Daniel P. Berrange wrote:
> > > Secondly, just because some controller allows creation of hierarchy does
> > > not mean that hierarchy is being enforced. For example, memory controller.
> > > IIUC, one needs to explicitly set "use_hierarchy" to enforce hierarchy
> > > otherwise it is effectively flat. So if libvirt creates groups and
> > > puts machines in child groups thinking that it is not interfering
> > > with the admin's policy, that is not entirely correct.
> > 
> > That is true, but 'use_hierarchy' at least provides admins with
> > the mechanism required to implement the necessary policy.
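
(For reference, enabling that mechanism on a cgroup v1 memory controller
mount looks roughly like this; the mount point and group names below are
made up for illustration:)

```shell
# Illustrative sketch: enforcing hierarchy in the memory controller.
# Mount point and group names are invented for the example.
mount -t cgroup -o memory none /cgroup/memory
mkdir /cgroup/memory/parent
echo 1 > /cgroup/memory/parent/memory.use_hierarchy   # charge children to parent
mkdir /cgroup/memory/parent/guest1                    # child now accounted hierarchically
```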
> > 
> > > So how do we make progress here. I really want to see blkio controller
> > > integrated with libvirt.
> > > 
> > > About the issue of hierarchy, I can probably travel down the path of allowing
> > > creation of hierarchy but CFQ will treat it as flat. Though I don't like it
> > > because it will force me to introduce variables like "use_hierarchy" once
> > > real hierarchical support comes in, but I guess I can live with that.
> > > (Anyway, the memory controller is already doing it.)
> > > 
> > > There is another issue, though, and that is that by default every
> > > virtual machine goes into a group of its own. As of today, this can
> > > carry severe performance penalties (depending on workload) if the
> > > group is not driving enough IO (especially with group_isolation=1).
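
(For anyone following along, group_isolation is a per-device CFQ iosched
tunable; toggling it looks like this, with sda as a stand-in device:)

```shell
# Illustrative: group_isolation is a per-block-device CFQ tunable.
# 1 = stronger isolation between groups, at a throughput cost for groups
# not driving enough IO; 0 trades isolation for throughput.
echo 1 > /sys/block/sda/queue/iosched/group_isolation
```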
> > > 
> > > I was thinking of a model where an admin moves the bad virtual
> > > machines out into a separate group and limits their IO.
> > 
> > In the simple / normal case I imagine all guests VMs will be running
> > unrestricted I/O initially. Thus instead of creating the cgroup at time
> > of VM startup, we could create the cgroup only when the admin actually
> > sets an I/O limit.
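> > 
> > (A rough sketch of that lazy model, assuming a dedicated v1 blkio
> > mount and made-up group/process names; weights use CFQ's 100-1000
> > range:)
> > 
> > ```shell
> > # Illustrative sketch: guests run in the root blkio group until the
> > # admin sets a limit; only then is a per-guest group created and filled.
> > mkdir /cgroup/blkio/badguest
> > echo 100 > /cgroup/blkio/badguest/blkio.weight   # low weight (valid range 100-1000)
> > for pid in $(pidof qemu-system-x86_64); do
> >     echo "$pid" > /cgroup/blkio/badguest/tasks   # tasks accepts one pid per write
> > done
> > ```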
> 
> That makes sense. Run all the virtual machines in the root group by
> default, and move a virtual machine out to a separate group of either low
> weight (if the virtual machine is a bad one driving a lot of IO) or
> higher weight (if we want to give more IO bandwidth to this machine).
> 
> > IIUC, this should maintain the one cgroup per guest
> > model, while avoiding the performance penalty in normal use. The caveat
> > of course is that this would require the blkio controller to have a
> > dedicated mount point, not shared with any other controller.
> 
> Yes. For the other controllers we seem to be putting virtual machines
> into separate cgroups by default at startup time, so it seems we will
> require a separate mount point here for the blkio controller.
> 
> >  I think we might also
> > want this kind of model for net I/O, since we probably don't want to 
> > create TC classes + net_cls groups for every VM the moment it starts
> > unless the admin has actually set a net I/O limit.
> 
> Looks like it. Good, then the network controller and blkio controller can
> share this new mount point.
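> 
> (Roughly what that would involve per VM, with the device name, classid
> and rate all made up for illustration:)
> 
> ```shell
> # Illustrative sketch: a per-VM net_cls group tied to a tc class.
> mount -t cgroup -o net_cls none /cgroup/net_cls
> mkdir /cgroup/net_cls/guest1
> echo 0x10001 > /cgroup/net_cls/guest1/net_cls.classid   # maps to tc class 1:1
> tc qdisc add dev eth0 root handle 1: htb
> tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit
> tc filter add dev eth0 parent 1: protocol ip prio 10 handle 1: cgroup
> ```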

After thinking about this some more, there are a couple of problems with
this plan. For QEMU, 'vhostnet' (the in-kernel virtio network backend)
requires that QEMU be in the cgroup at startup time, otherwise the vhost
kernel thread won't end up in the right cgroup. For libvirt's LXC
container driver, moving the container in and out of cgroups at runtime
is pretty difficult, because there are an arbitrary number of processes
running in the container: it would require moving all the container
processes between two cgroups in a race-free manner. So on second
thoughts I'm more inclined to stick with our current approach of putting
all guests into the appropriate cgroups at guest/container startup, even
for blkio and net_cls.
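
(Concretely, the startup-time approach amounts to joining the cgroup
before exec'ing the guest, so anything the kernel spawns on its behalf,
e.g. the vhost thread, lands in the same group. Paths and names below
are illustrative:)

```shell
# Illustrative sketch: join the blkio cgroup before exec so the guest and
# any kernel helper threads created for it are accounted to the same group.
mkdir -p /cgroup/blkio/guest1
echo $$ > /cgroup/blkio/guest1/tasks        # move this (soon-to-exec) shell
exec qemu-system-x86_64 -name guest1 -m 512
```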

Daniel
-- 
|: Red Hat, Engineering, London    -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
|: http://autobuild.org        -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
