Message-Id: <1243241684.2560.121.camel@ymzhang>
Date: Mon, 25 May 2009 16:54:44 +0800
From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: Jan Kara <jack@...e.cz>, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, chris.mason@...cle.com,
david@...morbit.com, hch@...radead.org, akpm@...ux-foundation.org
Subject: Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
On Fri, 2009-05-22 at 10:15 +0200, Jens Axboe wrote:
> > > > > > > > > > > > > This is the fourth version of this patchset. Changes since v3:
> Thanks, I'll get this reproduced and fixed. Can you post the results
> you got comparing writeback and vanilla meanwhile?
I didn't post the results because some test cases benefit from the patches
while others are hurt by them. Sometimes a case benefits from the patches
on one machine, but is hurt on another machine.
As a matter of fact, I tested the patches on 4 machines. The machine which
triggered the bug has only 1 disk. The other 3 machines have 1 JBOD each.
1) Machine lkp-st02 (Stoakley): has a Fibre Channel JBOD with 13 SCSI disks. Every
disk has 1 partition (ext3 filesystem). 8GB memory.
2) Machine lkp-st01: has a SAS JBOD with 7 SAS disks. Every disk has 2
partitions. 8GB memory.
3) Machine lkp-ne02 (Nehalem): has a SATA JBOD with 11 disks. Every disk has
2 partitions. 6GB memory.
The HBA cards connecting to the JBODs either have no RAID capability,
or RAID is not turned on.
All ext3 filesystems are mounted with '-o data=writeback'.
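For reference, a minimal sketch of that per-partition setup; the device names,
mount points and 13-disk count are illustrative assumptions, not taken from my
actual scripts:

#!/usr/bin/env python
# Hypothetical setup sketch: mount every ext3 partition of the JBOD with
# writeback journalling ('-o data=writeback'). Device names, mount points
# and the 13-disk count are assumptions for illustration only.
import os
import subprocess

PARTITIONS = ["/dev/sd%s1" % c for c in "bcdefghijklmn"]  # 13 data disks

for i, dev in enumerate(PARTITIONS):
    mnt = "/mnt/jbod%d" % i
    if not os.path.isdir(mnt):
        os.makedirs(mnt)
    subprocess.check_call(["mount", "-t", "ext3",
                           "-o", "data=writeback", dev, mnt])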
The results below focus on the 3 machines that have a JBOD.
I use iozone/tiobench/fio/ffsb for this testing. With iozone/tiobench I always
use one disk on all machines. But with fio/ffsb, which have lots of sub test cases,
I use all disks of the JBOD connected to the corresponding machine.
The comparison is between 2.6.30-rc6 and 2.6.30-rc6+V4_patches, or additionally the
3 new patches (the ones starting with 0001~0003).
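Roughly, a JBOD run starts one benchmark job per mounted partition in parallel,
along the lines of the sketch below; the job file name and the /mnt/jbod* mount
point pattern are placeholders, not my real scripts:

#!/usr/bin/env python
# Rough sketch of an "all disks of the JBOD" run: start one fio process
# per mounted partition and wait for all of them. 'randwrite-4k.job' and
# the /mnt/jbod* pattern are placeholders.
import glob
import subprocess

procs = [subprocess.Popen(["fio", "--directory=" + mnt, "randwrite-4k.job"])
         for mnt in sorted(glob.glob("/mnt/jbod*"))]

for p in procs:
    p.wait()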
1) iozone: the 500MB iozone test shows no difference in results. But the 1.2GB test has
about a 40% regression on rewrite with the 3 new patches (0001~0003); without the 3 new
patches, the regression is more than 90%. write has a similar regression, but its
regression disappears with the 3 new patches.
2) tiobench: the result variation is considered normal fluctuation.
3) fio: consists of more than 30 sub test cases covering sync/aio/mmap, combined
with block size (less than 4k, 4k, 64k, sometimes 128k) and sequential/random access.
For write testing there is mostly one thread per partition.
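For readers unfamiliar with the case names below, the matrix roughly expands as in
this sketch; the exact engine/pattern/blocksize lists are approximations, not the
real hand-built case set:

#!/usr/bin/env python
# Illustration only: how the fio sub test case matrix roughly expands
# (I/O engine x access pattern x block size). The lists are approximations
# of the real case set used in the report.
from itertools import product

engines = ["sync", "aio", "mmap"]
patterns = ["write", "randwrite", "randrw", "read"]
blocksizes = ["less4k", "4k", "64k", "128k"]

for engine, pattern, bs in product(engines, patterns, blocksizes):
    print("fio_%s_%s_%s" % (engine, pattern, bs))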
Mostly, fio_mmap_randwrite(randrw)_4k_preread shows a 5%~30% improvement. But with
the 3 new patches the improvement becomes smaller, for example dropping from 30% to 14%.
fio_mmap_randwrite has a 5%~10% regression on lkp-st01 and lkp-ne02 (both machines'
JBODs have 2 partitions per disk), but a 2%~15% improvement on lkp-st02 (one partition
per disk). fio_mmap_randrw shows similar behavior.
fio_mmap_randwrite_4k_halfbusy (uses 4 disks and a lighter workload than the other fio
cases) has about a 20%~30% improvement.
fio sync read has about a 15%~30% regression on lkp-st01, but the regression disappears
with the 3 new patches. The other machines don't have this issue.
aio shows no regression.
4) ffsb:
ffsb_create (block size 4k, 64k) has a 10%~20% improvement on lkp-st01 and
lkp-ne02, but not on lkp-st02.
The data from the other ffsb test cases looks suspicious, so I need to double-check it,
or tune the parameters and rerun.
Yanmin