Message-ID: <20070227094211.GR3822@kernel.dk>
Date:	Tue, 27 Feb 2007 10:42:11 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Suparna Bhattacharya <suparna@...ibm.com>
Cc:	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Christoph Hellwig <hch@...radead.org>,
	Andrew Morton <akpm@....com.au>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Ulrich Drepper <drepper@...hat.com>,
	Zach Brown <zach.brown@...cle.com>,
	Evgeniy Polyakov <johnpol@....mipt.ru>,
	"David S. Miller" <davem@...emloft.net>,
	Davide Libenzi <davidel@...ilserver.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: A quick fio test (was Re: [patch 00/13] Syslets, "Threadlets", generic AIO support, v3)

On Tue, Feb 27 2007, Suparna Bhattacharya wrote:
> On Mon, Feb 26, 2007 at 03:45:48PM +0100, Jens Axboe wrote:
> > On Mon, Feb 26 2007, Suparna Bhattacharya wrote:
> > > On Mon, Feb 26, 2007 at 02:57:36PM +0100, Jens Axboe wrote:
> > > > 
> > > > Some more results, using a larger number of processes and io depths. A
> > > > repeat of the tests from friday, with added depth 20000 for syslet and
> > > > libaio:
> > > > 
> > > > Engine          Depth   Processes       Bw (MiB/sec)
> > > > ----------------------------------------------------
> > > > libaio            1         1            602
> > > > syslet            1         1            759
> > > > sync              1         1            776
> > > > libaio           32         1            832
> > > > syslet           32         1            898
> > > > libaio        20000         1            581
> > > > syslet        20000         1            609
> > > > 
> > > > syslet still on top. Measuring O_DIRECT reads (of 4KiB size) on ramfs
> > > > with 100 processes, each with a depth of 200, reading a per-process
> > > > private file of 10MiB (needs to fit in my RAM...) 10 times each. IOW,
> > > > doing 10,000MiB of IO in total:
> > > 
> > > But why ramfs? Don't we want to exercise the case where O_DIRECT
> > > actually blocks? Or am I missing something here?
> > 
> > Those were just overhead numbers for that test case; let's try something
> > like the job you described.
> > 
> > Test case is doing random reads from /dev/sdb, in chunks of 64kb:
> > 
> > Engine          Depth   Processes       Bw (KiB/sec)
> > ----------------------------------------------------
> > libaio           200       100            2813
> > syslet           200       100            3944
> > libaio         20000         1            2793
> > syslet         20000         1            3854
> > sync (*)       20000         1            2866
> > 
> > deadline was used for IO scheduling, to minimize impact. Not sure why
> > syslet actually does so much better here; looking at vmstat, the rate is
> > steady and all runs are basically 50/50 idle/wait. One difference is
> > that the submission itself takes a long time with libaio, since
> > io_submit() will block on request allocation. The generated IO pattern
> > from each process is the same for all runs. The drive is a lousy SATA
> > disk that doesn't even do queuing, FWIW.
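
For reference, a fio job file along these lines would set up the /dev/sdb
test described above. This is a sketch reconstructed from the quoted
description, not the actual job file used; the syslet runs would swap
fio's syslet engine in for ioengine, and direct I/O is an assumption here
(libaio is only truly asynchronous with O_DIRECT):

[global]
ioengine=libaio
direct=1
rw=randread
; 64KiB chunks, per-process depth of 200
bs=64k
iodepth=200
filename=/dev/sdb

[reader]
; 100 processes
numjobs=100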
> 
> 
> I tried the latest fio code with syslet v4, and my results are a little
> different - I have yet to figure out why, or what to make of them.
> I hope I have all the right pieces now.
> 
> This is an ext2 filesystem, SCSI AIC7xxx.
> 
> I used an iodepth_batch size of 8 to limit the number of ios in a single
> io_submit (thanks for adding that parameter to fio!), like we did in
> aio-stress.
> 
> Engine          Depth      Batch       Bw (KiB/sec)
> ----------------------------------------------------
> libaio             64          8           17,226
> syslet             64          8           17,620
> libaio          20000          8           18,552
> syslet          20000          8           14,935
> 
> 
> Which is not bad, actually.

It's not bad for such a high depth/batch setting, but I still wonder why
our results are so different. I'll look around for an x86 box with some
TCQ/NCQ enabled storage attached for testing. Can you pass me your
command line or job file (whatever you use) so we are on the same page?
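
Suparna's actual job file isn't included in the thread. Based on her
description, a file along these lines would match the settings she
reports; the access pattern, block size, path, and file size below are
placeholders rather than values from the thread:

[global]
; or fio's syslet engine for those runs
ioengine=libaio
; 20000 for the high-depth runs
iodepth=64
; limit the number of ios issued per io_submit() call
iodepth_batch=8

[job]
; placeholders: pattern, block size, path, and size are not stated
rw=randread
bs=64k
filename=/mnt/ext2/testfile
size=1g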

> If I do not specify the iodepth_batch (i.e. default to depth), then the
> difference becomes more pronounced at higher depths. However, I doubt
> whether anyone would be using such high batch sizes in practice ...
>
> Engine          Depth      Batch       Bw (KiB/sec)
> ----------------------------------------------------
> libaio             64     default         17,429
> syslet             64     default         16,155
> libaio          20000     default         15,494
> syslet          20000     default          7,971
>
If iodepth_batch isn't set, the syslet queued io will be serialized and
not take advantage of queueing. How does the job file perform with
ioengine=sync?
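
That would be a one-line change in the job file; with the sync engine each
request completes before the next is issued, so the iodepth setting has no
real effect:

ioengine=sync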

> Oftentimes it is the application tuning that makes all the difference,
> so I am not really sure how much to read into these results.
> That's always been the hard part of async io ...

Yes, I agree; it's handy to get an overview, though.

-- 
Jens Axboe

