Message-ID: <20091011123219.GA3832@redhat.com>
Date: Sun, 11 Oct 2009 08:32:19 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Andrea Righi <righi.andrea@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, jens.axboe@...cle.com,
containers@...ts.linux-foundation.org, dm-devel@...hat.com,
nauman@...gle.com, dpshah@...gle.com, lizf@...fujitsu.com,
mikew@...gle.com, fchecconi@...il.com, paolo.valente@...more.it,
ryov@...inux.co.jp, fernando@....ntt.co.jp, s-uchida@...jp.nec.com,
taka@...inux.co.jp, guijianfeng@...fujitsu.com, jmoyer@...hat.com,
dhaval@...ux.vnet.ibm.com, balbir@...ux.vnet.ibm.com,
m-ikeda@...jp.nec.com, agk@...hat.com, peterz@...radead.org,
jmarchan@...hat.com, torvalds@...ux-foundation.org, mingo@...e.hu,
riel@...hat.com
Subject: Re: Performance numbers with IO throttling patches (Was: Re: IO
scheduler based IO controller V10)
On Sun, Oct 11, 2009 at 12:27:30AM +0200, Andrea Righi wrote:
[..]
> >
> > - Andrea, can you please also run similar tests to see if you see same
> > results or not. This is to rule out any testing methodology errors or
> > scripting bugs. :-). I also have collected the snapshot of some cgroup
> > files like bandwidth-max, throttlecnt, and stats. Let me know if you want
> > those to see what is happening here.
>
> Sure, I'll do some tests ASAP. Another interesting test would be to set
> a blockio.iops-max limit also for the sequential readers' cgroup, to be
> sure we're not touching some iops physical disk limit.
>
> Could you post all the options you used with fio, so I can repeat some
> tests as similar as possible to yours?
>
I will respond to the rest of the points later, after some testing with
iops-max rules. In the meantime, here are my fio options so that you can
try to replicate the tests.
I am simply copying and pasting from my script. I have written my own
program, "semwait", so that two different instances of fio can synchronize
on an external semaphore. Normally all the jobs would go into a single fio
job file, but here we need to put the two fio instances in two different
cgroups. It is important that the two fio jobs are synchronized and start
at the same time, after laying out their files. (This matters primarily
for write testing; reads are generally fine once the files have been laid
out.)
Sequential readers
------------------
fio_args="--rw=read --bs=4K --size=2G --runtime=30 --numjobs=$nr_jobs --direct=1"
fio $fio_args --name=$jobname --directory=/mnt/$blockdev/fio --exec_prerun="'/usr/local/bin/semwait fiocgroup'" >> $outputdir/$outputfile &
Random Reader
-------------
fio_args="--rw=randread --bs=4K --size=1G --runtime=30 --direct=1 --numjobs=$nr_jobs"
fio $fio_args --name=$jobname --directory=/mnt/$blockdev/fio --exec_prerun="'/usr/local/bin/semwait fiocgroup'" >> $outputdir/$outputfile &
Random Writer
-------------
fio_args="--rw=randwrite --bs=64K --size=2G --runtime=30 --numjobs=$nr_jobs1 --ioengine=libaio --iodepth=4 --direct=1"
fio $fio_args --name=$jobname --directory=/mnt/$blockdev/fio --exec_prerun="'/usr/local/bin/semwait fiocgroup'" >> $outputdir/$outputfile &
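And this is roughly how the driver side could be wired around the above
fio invocations. The blkio mount point, cgroup names and script path are
made up for illustration and are not copied from my script, but the idea
is the same:

#!/bin/bash
# Driver sketch (illustration only): run two fio instances in two blkio
# cgroups and release the start barrier once both are waiting.

CGROOT=/cgroup/blkio                  # wherever the blkio controller is mounted
BARRIER=/tmp/fiocgroup
blockdev=sdb                          # test disk, adjust as needed

rm -rf "$BARRIER"; mkdir -p "$BARRIER"
mkdir -p "$CGROOT/group1" "$CGROOT/group2"

run_in_cgroup() {                     # $1 = cgroup name, rest = fio arguments
    local cg="$1"; shift
    # Enter the cgroup first, then exec fio, so every fio process
    # (including forked job processes) starts inside that cgroup.
    bash -c 'echo $$ > "$0/tasks"; exec fio "$@"' "$CGROOT/$cg" "$@" \
        --exec_prerun=/tmp/fio-barrier.sh &
}

# One job per instance here; with --numjobs the ready count below
# would need adjusting, since the prerun runs once per job.
run_in_cgroup group1 --rw=read --bs=4K --size=2G --runtime=30 --direct=1 \
    --name=seqread --directory=/mnt/$blockdev/fio
run_in_cgroup group2 --rw=randwrite --bs=64K --size=2G --runtime=30 --direct=1 \
    --ioengine=libaio --iodepth=4 --name=randwrite --directory=/mnt/$blockdev/fio

# Wait until both preruns have checked in, then start the timed runs together.
while [ "$(ls "$BARRIER"/ready.* 2>/dev/null | wc -l)" -lt 2 ]; do
    sleep 0.1
done
touch "$BARRIER/go"
wait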
Thanks
Vivek