Date:	Thu, 20 Aug 2009 19:12:07 -0400
From:	"Alan D. Brunelle" <Alan.Brunelle@...com>
To:	Jens Axboe <jens.axboe@...cle.com>
Cc:	linux-kernel@...r.kernel.org, zach.brown@...cle.com,
	hch@...radead.org
Subject: Re: [PATCH 0/4] Page based O_DIRECT v2

On Thu, 2009-08-20 at 12:40 +0200, Jens Axboe wrote:
> On Wed, Aug 19 2009, Alan D. Brunelle wrote:
> > On Thu, 2009-08-20 at 00:06 +0200, Jens Axboe wrote:
> > 
> > > 
> > > Thanks a lot for the test run, Alan. I wonder why writes are down while
> > > reads are up. One possibility could be a WRITE vs WRITE_ODIRECT
> > > difference, though I think they should be the same. The patches I posted
> > > have not been benchmarked at all, it's still very much a work in
> > > progress. I just wanted to show the general direction that I thought
> > > would be interesting. So I have done absolutely zero performance
> > > testing, it's only been tested for whether it still worked or not (to
> > > some degree :-)...
> > >
> > > I'll poke a bit at it here, too. I want to finish the unplug/wait
> > > problem first. Is your test case using read/write or readv/writev?
> > > 
> > 
> > Hi Jens - I just had some extra cycles, so figured what the heck... :-)
> > 
> > Actually, this is using asynchronous direct I/Os (libaio/Linux native
> > AIO). If I get a chance tomorrow, I'll play with read/write (and/or
> > readv/writev). 
> 
> OK, then I wonder what the heck is up... Did you catch any io metrics?
> 

Hi Jens - 

Took a different tack: using FIO, I put the two kernels through their
paces with the following variables (a representative job file is
sketched after the list):

kernels: 2.6.31-rc6 / 2.6.31-rc6 + loop-direct git branch
I/O direction: read / write
Seek behavior: sequential / random
FIO engines (modes): libaio / posixaio / psync / sync / vsync
I/O size: 4K / 16K / 64K / 256K
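
For reference, one cell of that matrix looks roughly like this as an
FIO job file (the queue depth, run time, and target device here are
illustrative, not necessarily the exact values I used):

	; random 4K writes through libaio with O_DIRECT -- illustrative values
	[randwrite-libaio-4k]
	ioengine=libaio
	rw=randwrite
	bs=4k
	direct=1
	iodepth=32
	runtime=60
	filename=/dev/sdX	; substitute the device under test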

Up at http://free.linux.hp.com/~adb/2009-08-20/bench_modes.png I have a
(very large) .png with all the data: the left column shows throughput
(as measured by FIO), the right column shows %user + %system (also as
measured by FIO). To view it, download the .png, open it in 'eog' (or
your favorite .png viewer), blow it up, and scroll down to see the 20
pairs of graphs (2 I/O directions x 2 seek behaviors x 5 engines).

The 2.6.31-rc6 kernel data is in red, the loop-direct results are in
blue.

It's showing some strange things at this point. The scariest is
certainly the random & sequential writes using posixaio - HUGE drops in
performance with the loop-direct branch. But, for some reason, random
writes using libaio look better with your loop-direct branch.
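
One thing to keep in mind when comparing those two engines: FIO's
posixaio engine goes through glibc's POSIX AIO, which is implemented in
user space with a pool of helper threads issuing ordinary pwrite()s,
while libaio submits straight to the kernel with io_submit(), so the
two hit the O_DIRECT path quite differently. A minimal sketch of the
two submission styles (error handling omitted; assumes fd was opened
with O_DIRECT and buf is suitably aligned):

	#include <libaio.h>   /* native Linux AIO */
	#include <aio.h>      /* POSIX AIO (glibc) */
	#include <stddef.h>

	/* libaio: the request goes to the kernel in one io_submit() call */
	static void write_native(int fd, void *buf, size_t len)
	{
		io_context_t ctx = 0;
		struct iocb cb, *cbs[1] = { &cb };
		struct io_event ev;

		io_setup(1, &ctx);                    /* create kernel AIO context */
		io_prep_pwrite(&cb, fd, buf, len, 0); /* async write at offset 0 */
		io_submit(ctx, 1, cbs);               /* no helper threads involved */
		io_getevents(ctx, 1, 1, &ev, NULL);   /* reap the completion */
		io_destroy(ctx);
	}

	/* POSIX AIO: glibc hands the request to a helper thread, which
	 * ends up doing a plain pwrite() on the same fd */
	static void write_posix(int fd, void *buf, size_t len)
	{
		struct aiocb cb = { 0 };
		const struct aiocb *list[1] = { &cb };

		cb.aio_fildes = fd;
		cb.aio_buf    = buf;
		cb.aio_nbytes = len;
		cb.aio_offset = 0;

		aio_write(&cb);             /* queue to glibc's thread pool */
		aio_suspend(list, 1, NULL); /* wait for it to complete */
		aio_return(&cb);            /* collect the result */
	}

So the posixaio numbers are effectively measuring synchronous pwrite()
from a thread pool rather than native AIO submission.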

In http://free.linux.hp.com/~adb/2009-08-20/data.tar.bz2 I have /all/
the FIO job-files & FIO output files used to generate these graphs.

I've got some scripts to automate doing these runs & generating the
graphs, so I'm all primed to keep testing future versions of this patch
sequence. I can easily extend the automation to capture iostat and/or
blktrace data as well if you'd like (need?) it. A full run only takes
about 4 hours (plus reboot time), so it's not a big deal.

Alan D. Brunelle
Hewlett-Packard
