Message-ID: <20081113155834.GE7542@redhat.com>
Date:	Thu, 13 Nov 2008 10:58:34 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Ryo Tsuruta <ryov@...inux.co.jp>
Cc:	linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org,
	virtualization@...ts.linux-foundation.org, jens.axboe@...cle.com,
	taka@...inux.co.jp, righi.andrea@...il.com, s-uchida@...jp.nec.com,
	fernando@....ntt.co.jp, balbir@...ux.vnet.ibm.com,
	akpm@...ux-foundation.org, menage@...gle.com, ngupta@...gle.com,
	riel@...hat.com, jmoyer@...hat.com, peterz@...radead.org,
	Fabio Checconi <fchecconi@...il.com>, paolo.valente@...more.it
Subject: Re: [patch 0/4] [RFC] Another proportional weight IO controller

On Thu, Nov 13, 2008 at 06:05:58PM +0900, Ryo Tsuruta wrote:
> Hi,
> 
> From: vgoyal@...hat.com
> Subject: [patch 0/4] [RFC] Another proportional weight IO controller 
> Date: Thu, 06 Nov 2008 10:30:22 -0500
> 
> > Hi,
> > 
> > If you are not already tired of so many io controller implementations, here
> > is another one.
> > 
> > This is a very early, very crude implementation, posted to get early
> > feedback on whether this approach makes sense.
> > 
> > This controller is a proportional weight IO controller primarily
> > based on/inspired by dm-ioband. One thing I personally found a little
> > odd about dm-ioband was the need for a dm-ioband device for every
> > device we want to control. I thought we could probably make this
> > control per request queue and get rid of the device mapper driver,
> > which should make the configuration aspect easier.
> > 
> > I have picked up quite a bit of code from dm-ioband, especially for
> > the biocgroup implementation.
> > 
> > I have done only very basic testing: running 2-3 dd commands in
> > different cgroups on x86_64. I wanted to throw the code out early to
> > get some feedback.
> > 
> > More details about the design and usage are in the documentation patch.
> > 
> > Your comments are welcome.
> 
> Do you have any benchmark results?
> I'm especially interested in the following:
> - Comparison of disk performance with and without the I/O controller patch.

When I dynamically disabled the bio control, I did not observe any
impact on performance, because in that case it practically boils down
to just an additional variable check in __make_request().

> - Uneven I/O loads, where processes in a cgroup with a smaller weight
>   put a heavier I/O load than those in a cgroup with a larger weight,
>   like the following.
> 
>      echo 1024 > /cgroup/bio/test1/bio.shares
>      echo 8192 > /cgroup/bio/test2/bio.shares
> 
>      echo $$ > /cgroup/bio/test1/tasks
>      dd if=/somefile1-1 of=/dev/null &
>      dd if=/somefile1-2 of=/dev/null &
>      ... 
>      dd if=/somefile1-100 of=/dev/null
>      echo $$ > /cgroup/bio/test2/tasks
>      dd if=/somefile2-1 of=/dev/null &
>      dd if=/somefile2-2 of=/dev/null &
>      ...
>      dd if=/somefile2-10 of=/dev/null &

I have not tried this case.

Ryo, do you still want to stick to two-level scheduling? Given the
problem of it breaking down the underlying scheduler's assumptions, it
probably makes more sense to do the IO control in each individual IO
scheduler.

I have had a very brief look at BFQ's hierarchical proportional
weight/priority IO control and it looks good. Maybe we can adopt it for
other IO schedulers as well.

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
