Message-ID: <20070226144548.GH3822@kernel.dk>
Date:	Mon, 26 Feb 2007 15:45:48 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Suparna Bhattacharya <suparna@...ibm.com>
Cc:	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Christoph Hellwig <hch@...radead.org>,
	Andrew Morton <akpm@....com.au>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Ulrich Drepper <drepper@...hat.com>,
	Zach Brown <zach.brown@...cle.com>,
	Evgeniy Polyakov <johnpol@....mipt.ru>,
	"David S. Miller" <davem@...emloft.net>,
	Davide Libenzi <davidel@...ilserver.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: A quick fio test (was Re: [patch 00/13] Syslets, "Threadlets", generic AIO support, v3)

On Mon, Feb 26 2007, Suparna Bhattacharya wrote:
> On Mon, Feb 26, 2007 at 02:57:36PM +0100, Jens Axboe wrote:
> > 
> > Some more results, using a larger number of processes and IO depths. A
> > repeat of Friday's tests, with an added depth of 20000 for syslet and
> > libaio:
> > 
> > Engine          Depth   Processes       Bw (MiB/sec)
> > ----------------------------------------------------
> > libaio            1         1            602
> > syslet            1         1            759
> > sync              1         1            776
> > libaio           32         1            832
> > syslet           32         1            898
> > libaio        20000         1            581
> > syslet        20000         1            609
> > 
> > syslet is still on top. Measuring O_DIRECT reads (4KiB in size) on
> > ramfs with 100 processes, each with a depth of 200, reading a
> > per-process private file of 10MiB (it needs to fit in my RAM...) 10
> > times each. IOW, doing 10,000MiB of IO in total:
> 
> But why ramfs? Don't we want to exercise the case where O_DIRECT
> actually blocks? Or am I missing something here?

Those were just overhead numbers for that test case; let's try something
like the job you described.
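
For reference, the quoted ramfs overhead test might be expressed as a fio
job file roughly like the one below. This is only a sketch: the ramfs
mount point is made up, and the name of the syslet engine (syslet-rw) is
an assumption.

[global]
ioengine=libaio        ; or syslet-rw / sync, per engine tested
rw=read
bs=4k
direct=1
iodepth=200
numjobs=100
directory=/mnt/ramfs   ; hypothetical ramfs mount point
size=10m
loops=10

[ramfs-overhead]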

Test case is doing random reads from /dev/sdb, in chunks of 64kb:

Engine          Depth   Processes       Bw (KiB/sec)
----------------------------------------------------
libaio           200       100            2813
syslet           200       100            3944
libaio         20000         1            2793
syslet         20000         1            3854
sync (*)       20000         1            2866
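
A job file for this test might look roughly like the following (again a
sketch, with the same caveat about the syslet engine name):

[global]
filename=/dev/sdb
ioengine=libaio        ; or syslet-rw / sync, per row
rw=randread
bs=64k
direct=1               ; assuming O_DIRECT, as in the ramfs test
iodepth=200            ; 20000 for the single-process rows
numjobs=100            ; 1 for the single-process rows

[sdb-randread]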

The deadline scheduler was used for IO scheduling, to minimize scheduler
impact. I'm not sure why syslet actually does so much better here;
looking at vmstat, the rate is steady and all runs are basically 50/50
idle/wait. One difference is that the submission itself takes a long time
with libaio, since io_submit() will block on request allocation. The
generated IO pattern from each process is the same for all runs. The
drive is a lousy SATA that doesn't even do queuing, FWIW.
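
To illustrate the submission side, here is a minimal C sketch of a plain
libaio read loop against the same device (standard libaio userspace API;
the device path, depth, and block size are placeholders, and this is of
course not the actual fio engine code):

/* Minimal libaio read sketch; link with -laio. */
#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	enum { DEPTH = 32, BS = 64 * 1024 };
	io_context_t ctx;
	struct iocb iocbs[DEPTH], *iocbp[DEPTH];
	struct io_event events[DEPTH];
	void *buf;
	int fd, i;

	fd = open("/dev/sdb", O_RDONLY | O_DIRECT);
	if (fd < 0)
		return 1;

	memset(&ctx, 0, sizeof(ctx));
	if (io_setup(DEPTH, &ctx))
		return 1;

	/* O_DIRECT wants suitably aligned buffers */
	if (posix_memalign(&buf, 4096, (size_t)DEPTH * BS))
		return 1;

	for (i = 0; i < DEPTH; i++) {
		io_prep_pread(&iocbs[i], fd, (char *)buf + (size_t)i * BS,
			      BS, (long long)i * BS);
		iocbp[i] = &iocbs[i];
	}

	/*
	 * This is the call that can stall the submitter: with a large
	 * batch, io_submit() blocks waiting for request allocation once
	 * the device queue is saturated.
	 */
	if (io_submit(ctx, DEPTH, iocbp) != DEPTH)
		return 1;

	/* Wait for all completions */
	if (io_getevents(ctx, DEPTH, DEPTH, events, NULL) != DEPTH)
		return 1;

	io_destroy(ctx);
	close(fd);
	free(buf);
	return 0;
}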

[*] Just for comparison; the depth is obviously really 1 on the kernel
    side, since it's sync.

-- 
Jens Axboe
