Message-ID: <x493a9sl0bx.fsf@segfault.boston.devel.redhat.com>
Date:	Mon, 22 Jun 2009 12:06:42 -0400
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	Balbir Singh <balbir@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org, dm-devel@...hat.com,
	jens.axboe@...cle.com, nauman@...gle.com, dpshah@...gle.com,
	lizf@...fujitsu.com, mikew@...gle.com, fchecconi@...il.com,
	paolo.valente@...more.it, ryov@...inux.co.jp,
	fernando@....ntt.co.jp, s-uchida@...jp.nec.com, taka@...inux.co.jp,
	guijianfeng@...fujitsu.com, dhaval@...ux.vnet.ibm.com,
	righi.andrea@...il.com, m-ikeda@...jp.nec.com, jbaron@...hat.com,
	agk@...hat.com, snitzer@...hat.com, akpm@...ux-foundation.org,
	peterz@...radead.org
Subject: Re: [RFC] IO scheduler based io controller (V5)

Vivek Goyal <vgoyal@...hat.com> writes:

> On Mon, Jun 22, 2009 at 11:40:42AM -0400, Jeff Moyer wrote:
>> Vivek Goyal <vgoyal@...hat.com> writes:
>> 
>> > On Sun, Jun 21, 2009 at 08:51:16PM +0530, Balbir Singh wrote:
>> >> * Vivek Goyal <vgoyal@...hat.com> [2009-06-19 16:37:18]:
>> >> 
>> >> > 
>> >> > Hi All,
>> >> > 
>> >> > Here is the V5 of the IO controller patches generated on top of 2.6.30.
>> >> [snip]
>> >> 
>> >> > Testing
>> >> > =======
>> >> >
>> >> 
>> >> [snip]
>> >> 
>> >> I have not read through the discussions in complete detail, but I see
>> >> no reference to async reads or AIO. In the case of AIO, the I/O is
>> >> submitted in the context of the user-space process. Could you elaborate
>> >> on any testing you've done with these cases?
>> >> 
>> >
>> > Hi Balbir,
>> >
>> > So far I had not done any testing with AIO, but I have done some just now.
>> > Here are the results.
>> >
>> > Test1 (AIO reads)
>> > ================
>> > Set up two fio AIO read jobs in two cgroups with weights 1000 and 500
>> > respectively. I am using the CFQ scheduler. Following are some lines from
>> > my test script.
>> >
>> > ===================================================================
>> > fio_args="--ioengine=libaio --rw=read --size=512M"
>> 
>> AIO doesn't make sense without O_DIRECT.
>> 
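For context: with buffered files, io_submit() typically completes
synchronously, so AIO over the page cache behaves much like ordinary
buffered reads, which is why it only makes sense with O_DIRECT.  A minimal
sketch of the direct-I/O variant of the fio_args line above, assuming the
rest of the job setup is unchanged:

fio_args="--ioengine=libaio --rw=read --size=512M --direct=1"
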
>
> Ok, here are the read results with --direct=1. In the previous posting,
> writes were already direct.
>
> test1 statistics: time=8 16 20796   sectors=8 16 1049648
> test2 statistics: time=8 16 10551   sectors=8 16 581160
>
>
> I am not sure why reads are so slow with --direct=1. In the previous test
> (no direct IO), I had cleared the caches using
> (echo 3 > /proc/sys/vm/drop_caches), so the reads could not have come from
> the page cache?

O_DIRECT bypasses the page cache, and hence the readahead code.  Try
driving deeper queue depths and/or using larger I/O sizes.
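
Something along these lines should keep more I/O in flight; a minimal
sketch, where the queue depth, block size, job name and device path are
illustrative rather than taken from the original test:

===================================================================
# Deeper queue depth and larger requests for O_DIRECT AIO reads, so the
# device stays busy without help from readahead.
fio --name=aio-read-test --ioengine=libaio --rw=read --direct=1 \
    --size=512M --bs=256k --iodepth=32 --filename=/dev/sdb
===================================================================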

Cheers,
Jeff
--
