Message-Id: <1250708742.5589.23.camel@cail>
Date: Wed, 19 Aug 2009 15:05:42 -0400
From: "Alan D. Brunelle" <Alan.Brunelle@...com>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: linux-kernel@...r.kernel.org, zach.brown@...cle.com,
hch@...radead.org
Subject: Re: [PATCH 0/4] Page based O_DIRECT v2
Hi Jens -
I'm not using loop, but it appears that there may be a regression in
regular asynchronous direct I/O sequential write performance when these
patches are applied. Using my "small" machine (16-way x86_64, 256GB, two
dual-port 4Gb FC HBAs connected through switches to 4 HP MSA1000s - one
MSA per port), I'm seeing a small but noticeable drop in performance for
sequential writes on the order of 2 to 6%. Random asynchronous direct
I/O and sequential reads appear to be unaffected.
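The test program itself isn't shown here; as a rough illustration of the constraint this workload operates under, O_DIRECT transfers must use buffers and lengths aligned to the device's logical block size (the 512-byte value below is an assumption - many devices require 4096). This is only a minimal sketch, not the actual benchmark:

```python
import mmap
import os

BLOCK = 512  # assumed logical block size; real devices may require 4096

def aligned_len(n, block=BLOCK):
    """Round a transfer length up to the next multiple of the block size."""
    return (n + block - 1) // block * block

# mmap(-1, ...) returns page-aligned anonymous memory, which satisfies
# O_DIRECT's requirement on the buffer address.
buf = mmap.mmap(-1, aligned_len(1 << 20))

def open_direct(path):
    """Open a file for writing with the page cache bypassed.

    May raise OSError on filesystems that don't support O_DIRECT.
    """
    return os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)

print(aligned_len(1000))  # -> 1024
```

A real sequential-write run would then issue block-multiple os.pwrite() calls at increasing, block-aligned offsets on such a descriptor.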
http://free.linux.hp.com/~adb/2009-08-19/nc.png
has a set of graphs showing the data obtained when utilizing LUNs
exported by the MSAs (increasing the number of MSAs being used along the
X-axis). The critical sequential write graph has numbers like (numbers
expressed in GB/second):
Kernel 1MSA 2MSAs 3MSAs 4MSAs
------------------------ ----- ----- ----- -----
2.6.31-rc6 : 0.17 0.33 0.50 0.65
2.6.31-rc6 + loop-direct: 0.15 0.31 0.46 0.61
Using all 4 devices we're seeing a drop of slightly over 6%.
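Working backwards from the table figures (a sketch; note the printed numbers carry only two decimals, so the per-column percentages are approximate - the smaller configurations in particular look worse here than the 2-6% range quoted from the underlying data):

```python
# Sequential-write throughput (GB/s) as printed in the table above.
base  = {1: 0.17, 2: 0.33, 3: 0.50, 4: 0.65}   # 2.6.31-rc6
patch = {1: 0.15, 2: 0.31, 3: 0.46, 4: 0.61}   # 2.6.31-rc6 + loop-direct

for n in sorted(base):
    drop = (base[n] - patch[n]) / base[n] * 100
    print(f"{n} MSA(s): {drop:.1f}% drop")
```

The 4-MSA column reproduces the "slightly over 6%" figure (about 6.2% from the rounded values).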
I also typically do runs utilizing just the caches on the MSAs (getting
rid of physical disk interactions, seeks, etc.). Even here we see a small
drop-off in sequential write performance (on the order of about 2.5%
when using all 4 MSAs), but noticeable gains for both random reads and
(especially) random writes. That graph can be seen at:
http://free.linux.hp.com/~adb/2009-08-19/ca.png
BTW: The grace/xmgrace files that generated these can be found at -
http://free.linux.hp.com/~adb/2009-08-19/nc.agr
http://free.linux.hp.com/~adb/2009-08-19/ca.agr
- as the specifics can be seen better whilst running xmgrace on those
files.
The 2.6.31-rc6 kernel was built from the master branch of your block git
tree, and the other one from your loop-direct branch at:
commit 806dec7809e1b383a3a1fc328b9d3dae1f633663
Author: Jens Axboe <jens.axboe@...cle.com>
Date: Tue Aug 18 10:01:34 2009 +0200
At the same time I'm doing this, I'm doing some other testing on my
large machine - but the test program has hung (using the loop-direct
branch kernel). I'm tracking that down...
Alan D. Brunelle
Hewlett-Packard
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/