Message-Id: <20081023.202851.112594221.ryov@valinux.co.jp>
Date: Thu, 23 Oct 2008 20:28:51 +0900 (JST)
From: Ryo Tsuruta <ryov@...inux.co.jp>
To: haotian.zhang@...driver.com
Cc: zumeng.chen@...driver.com, bruce.ashfield@...driver.com,
linux-kernel@...r.kernel.org, dm-devel@...hat.com,
containers@...ts.linux-foundation.org,
virtualization@...ts.linux-foundation.org,
xen-devel@...ts.xensource.com, fernando@....ntt.co.jp
Subject: Re: [PATCH 0/2] dm-ioband: I/O bandwidth controller v1.8.0:
Introduction
Hi Haotian,
> The results are almost the same. I cannot see any change in Direct I/O
> performance from the bio-cgroup kernel feature with dm-ioband support!
>
> Should throughput be calculated from the Rate reported in the xdd.linux
> output?
> Is my testing approach correct? If not, please point out what I am
> doing wrong.
Could you try running the xdd programs simultaneously?
dm-ioband controls bandwidth only while I/O requests are being issued
simultaneously from processes that belong to different cgroups.
If I/O requests are issued only from processes that belong to a single
cgroup, those processes can use the whole bandwidth.
The following URL shows an example of how bandwidth sharing responds
to changes in I/O load:
http://people.valinux.co.jp/~ryov/dm-ioband/benchmark/partition1.html
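
For reference, a minimal sketch of such a simultaneous run (the targets
below default to plain files just so the script runs anywhere; for a real
test, point them at your ioband devices, and the cgroup path in the
comment is only an assumed example):

```shell
#!/bin/sh
# Issue I/O from two processes at the same time so dm-ioband has
# competing requests to arbitrate.  For a real measurement, set
# TARGET_A/TARGET_B to your ioband devices (e.g. /dev/mapper/ioband1)
# and, before each writer starts, move the shell into the matching
# bio-cgroup, e.g.:
#   echo $$ > /cgroup/bio/grp1/tasks     # path is an assumption
TARGET_A=${TARGET_A:-/tmp/ioband_a.img}
TARGET_B=${TARGET_B:-/tmp/ioband_b.img}

# Start both writers in the background so their I/O overlaps in time;
# dd prints its transfer rate on stderr when it finishes.
dd if=/dev/zero of="$TARGET_A" bs=1M count=8 conv=fsync &
dd if=/dev/zero of="$TARGET_B" bs=1M count=8 conv=fsync &
wait

ls -l "$TARGET_A" "$TARGET_B"
```

With only one writer running, dm-ioband has nothing to throttle and that
process gets the full bandwidth, which is why a single-process test shows
no difference.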
Thanks,
Ryo Tsuruta
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/