Date:	Wed, 8 Jul 2009 21:58:42 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Balbir Singh <balbir@...ux.vnet.ibm.com>
Cc:	dhaval@...ux.vnet.ibm.com, snitzer@...hat.com, dm-devel@...hat.com,
	jens.axboe@...cle.com, agk@...hat.com, paolo.valente@...more.it,
	fernando@....ntt.co.jp, jmoyer@...hat.com, fchecconi@...il.com,
	akpm@...ux-foundation.org, containers@...ts.linux-foundation.org,
	linux-kernel@...r.kernel.org, righi.andrea@...il.com
Subject: Re: [RFC] IO scheduler based IO controller V6

On Wed, Jul 08, 2009 at 08:09:25PM +0530, Balbir Singh wrote:
> * Vivek Goyal <vgoyal@...hat.com> [2009-07-08 09:41:14]:
> 
> > On Wed, Jul 08, 2009 at 09:26:21AM +0530, Balbir Singh wrote:
> > > * Vivek Goyal <vgoyal@...hat.com> [2009-07-02 16:01:32]:
> > > 
> > > > 
> > > > Hi All,
> > > > 
> > > > Here is the V6 of the IO controller patches generated on top of 2.6.31-rc1.
> > > > 
> > > > Previous versions of the patches were posted here.
> > > > 
> > > > (V1) http://lkml.org/lkml/2009/3/11/486
> > > > (V2) http://lkml.org/lkml/2009/5/5/275
> > > > (V3) http://lkml.org/lkml/2009/5/26/472
> > > > (V4) http://lkml.org/lkml/2009/6/8/580
> > > > (V5) http://lkml.org/lkml/2009/6/19/279
> > > > 
> > > > This patchset is still a work in progress, but I want to keep putting
> > > > snapshots of my tree out at regular intervals to get feedback, hence V6.
> > > >
> > > 
> > > Hi, Vivek,
> > > 
> > > I was able to compile and boot a 2.6.31-rc1 kernel with this patchset.
> > > I have a request: could you fold up all the patches and make one
> > > consolidated patch available somewhere (it makes testing easier), maybe
> > > in a git tree?
> > > 
> > 
> > Thanks for trying it out, Balbir. OK, for ease of patching and testing, I
> > will also maintain a consolidated patch. For V6 you can download the patch
> > from here.
> > 
> > http://people.redhat.com/~vgoyal/io-controller/io-scheduler-based-io-controller-v6.patch
> >
> 
> Thanks, this will definitely help me get more testing done!
>  
> > > I did some quick tests with some IO benchmarks and found, in a simple
> > > scenario, that the scheduler worked as expected, except that it took
> > > very long. I'll investigate further and get back to you.
> > 
> > Thanks. I will wait for details.
> >
> 
> I'll try to send something out by Friday, but for now I am not even
> sure whether it is a real problem. I ran iozone on two groups with
> weights of 500 and 1000 on the same partition, and set fairness to 1
> in sysfs for the partition. I used a record size of 4 (the default)
> and tried to run it on a file size of 1G.
> 

Hi Balbir,

Trying iozone might be a good idea for analyzing the performance impact
of the IO controller patches, but it might not be the best way to test
fairness.

The biggest reason is that the IO controller provides fairness only if
there is constant contention between the groups. If one group goes idle
for some time, the other gets to use the full disk. While running the
above benchmark, there are numerous occasions where the disk is not
contended for, so we don't see the expected fairness numbers in user
space.

To begin with, I would recommend trying out fio, or small tests that
can create continuously backlogged queues at the disk, to see how
accurate the IO controller is.
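
Something like the following (completely untested, and the cgroup and
sysfs file names below are just my guesses at what V6 exposes) is the
kind of setup I have in mind:

  # Untested sketch. The "io" subsystem name, the io.weight/tasks file
  # names, the iosched "fairness" knob and the sdb disk are assumptions;
  # adjust to whatever the V6 patches actually expose.
  mount -t cgroup -o io none /cgroup
  mkdir /cgroup/group1 /cgroup/group2
  echo 500  > /cgroup/group1/io.weight
  echo 1000 > /cgroup/group2/io.weight

  # Enable strict fairness on the disk under test.
  echo 1 > /sys/block/sdb/queue/iosched/fairness

  # One continuously backlogged direct-IO reader per group. Each "sh -c"
  # moves itself into a group before exec'ing fio, so the fio processes
  # are accounted to the right group.
  sh -c 'echo $$ > /cgroup/group1/tasks; exec fio --name=r1 \
      --filename=/mnt/test/f1 --rw=read --bs=4k --direct=1 \
      --size=1g --time_based --runtime=60' &
  sh -c 'echo $$ > /cgroup/group2/tasks; exec fio --name=r2 \
      --filename=/mnt/test/f2 --rw=read --bs=4k --direct=1 \
      --size=1g --time_based --runtime=60' &
  wait

With both queues backlogged for the entire run, the per-group bandwidth
reported in the fio output should stay close to the 1:2 weight ratio.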

> BTW, I don't see the expected range for weights documented anywhere. I
> tried a weight of 1024 (similar to the CPU scheduler) and got shouted
> back at :). Does the documentation patch specify the expected range
> for weights?
> 

The weight range is 1-1000. I will update the documentation to reflect
this. Thanks for pointing it out.
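
For example, assuming the io.weight file name from the sketch above:

  echo 1000 > /cgroup/group1/io.weight    # accepted: top of the range
  echo 1024 > /cgroup/group1/io.weight    # rejected: outside 1-1000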

Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
