Message-ID: <20090501111121.GA25964@linux>
Date: Fri, 1 May 2009 13:11:22 +0200
From: Andrea Righi <righi.andrea@...il.com>
To: "Alan D. Brunelle" <Alan.Brunelle@...com>
Cc: Paul Menage <menage@...gle.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Gui Jianfeng <guijianfeng@...fujitsu.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
agk@...rceware.org, akpm@...ux-foundation.org, axboe@...nel.dk,
baramsori72@...il.com, Carl Henrik Lunde <chlunde@...g.uio.no>,
dave@...ux.vnet.ibm.com, Divyesh Shah <dpshah@...gle.com>,
eric.rannaud@...il.com, fernando@....ntt.co.jp,
Hirokazu Takahashi <taka@...inux.co.jp>,
Li Zefan <lizf@...fujitsu.com>, matt@...ehost.com,
dradford@...ehost.com, ngupta@...gle.com, randy.dunlap@...cle.com,
roberto@...it.it, Ryo Tsuruta <ryov@...inux.co.jp>,
Satoshi UCHIDA <s-uchida@...jp.nec.com>,
subrata@...ux.vnet.ibm.com, yoshikawa.takuya@....ntt.co.jp,
containers@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/9] cgroup: io-throttle controller (v13)
On Thu, Apr 30, 2009 at 09:20:58AM -0400, Alan D. Brunelle wrote:
> Hi Andrea -
Hi Alan,
>
> FYI: I ran a simple test using this code to try and gauge the overhead
> incurred by enabling this technology. Using a single 400GB volume split
> into two 200GB partitions I ran two processes in parallel performing a
> mkfs (ext2) on each partition. First w/out cgroup io-throttle and then
> with it enabled (with each task having throttling enabled to
> 400MB/second (much, much more than the device is actually capable of
> doing)). The idea here is to see the base overhead of just having the
> io-throttle code in the paths.
Interesting. I've never explicitly measured the actual overhead of the
io-throttle infrastructure; I'll add a similar test to the io-throttle
testcases.
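
Just to be explicit about what I mean to measure, below is a rough sketch
of the kind of harness I have in mind: time N parallel mkfs runs on two
partitions and report the same min/avg/max/sdev/spread statistics. The
partition paths and run count are only placeholders, and whether the
io-throttle limit is applied to the tasks is configured outside the script:

#!/usr/bin/env python
# Sketch of the overhead measurement: time N parallel mkfs runs on two
# partitions and report min/avg/max/sdev/spread.
# NOTE: PARTITIONS and RUNS are placeholders; the io-throttle cgroup limit
# (or its absence) is set up outside this script.

import subprocess
import time
import math

PARTITIONS = ["/dev/sdb1", "/dev/sdb2"]   # placeholder devices
RUNS = 30

def one_run():
    """Run mkfs.ext2 on both partitions in parallel, return elapsed seconds."""
    start = time.time()
    procs = [subprocess.Popen(["mkfs.ext2", "-q", dev]) for dev in PARTITIONS]
    for p in procs:
        p.wait()
    return time.time() - start

times = [one_run() for _ in range(RUNS)]

avg = sum(times) / len(times)
sdev = math.sqrt(sum((t - avg) ** 2 for t in times) / (len(times) - 1))
print("min=%.3f avg=%.3f max=%.3f sdev=%.3f spread=%.3f" %
      (min(times), avg, max(times), sdev, max(times) - min(times)))

Comparing the two averages then gives the relative overhead, e.g.
(80.836 - 80.585) / 80.585 =~ 0.3% for the numbers you report.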
>
> Doing 30 runs of each (w/out & w/ io-throttle enabled) shows very little
> difference (time in seconds)
>
> w/out: min=80.196 avg=80.585 max=81.030 sdev=0.215 spread=0.834
> with: min=80.402 avg=80.836 max=81.623 sdev=0.327 spread=1.221
>
> So only around 0.3% overhead - and that may not be conclusive with the
> standard deviations seen.
You should see less overhead with reads compared to a pure write
workload, because with reads we don't need to check whether the IO request
occurs in a different IO context. And things should be improved with
v16-rc1
(http://download.systemimager.org/~arighi/linux/patches/io-throttle/cgroup-io-throttle-v16-rc1.patch).
So, it would also be interesting to analyse the overhead of a read
stream compared to a write stream, as well as a comparison of random
reads/writes. I'll do that in my next benchmarking session.
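
Something along these lines is what I have in mind for the read/write
stream comparison (only a sketch; the test file path, file size and block
size are arbitrary placeholders, and a real run would of course be done
from inside and outside a throttled cgroup):

#!/usr/bin/env python
# Sketch of a sequential write vs sequential read vs random read comparison.
# TESTFILE, FILE_SIZE and BLOCK are arbitrary placeholders.

import os
import random
import time

TESTFILE = "/mnt/test/stream.dat"   # placeholder path
FILE_SIZE = 256 * 1024 * 1024       # 256MB
BLOCK = 64 * 1024                   # 64KB per IO

def write_stream():
    buf = b"\0" * BLOCK
    start = time.time()
    with open(TESTFILE, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    return time.time() - start

def drop_cache(path):
    # Ask the kernel to drop the cached pages of the file so that the reads
    # really hit the disk (os.posix_fadvise needs Python >= 3.3).
    fd = os.open(path, os.O_RDONLY)
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    os.close(fd)

def read_stream(randomize=False):
    drop_cache(TESTFILE)
    offsets = list(range(0, FILE_SIZE, BLOCK))
    if randomize:
        random.shuffle(offsets)
    start = time.time()
    with open(TESTFILE, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.time() - start

print("seq write : %.3fs" % write_stream())
print("seq read  : %.3fs" % read_stream())
print("rand read : %.3fs" % read_stream(randomize=True))

Running the same workload with and without the throttling rules in place
should show whether the read path really pays less than the write path.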
>
> --
>
> FYI: The test was run on 2.6.30-rc1+your patches on a 16-way x86_64 box
> (128GB RAM) plus a single FC volume off of a 1Gb FC RAID controller.
>
> Regards,
> Alan D. Brunelle
> Hewlett-Packard
Thanks for posting these results,
-Andrea