Message-ID: <4FA7A83E.6010801@pocock.com.au>
Date: Mon, 07 May 2012 10:47:26 +0000
From: Daniel Pocock <daniel@...ock.com.au>
To: linux-ext4@...r.kernel.org
Subject: ext4, barrier, md/RAID1 and write cache
I've been having some NFS performance issues and have been
experimenting with the server's filesystem (ext4) to see whether it is a
factor.
The setup is like this:
(Debian 6, kernel 2.6.39)
2x SATA drive (NCQ, 32MB cache, no hardware RAID)
md RAID1
LVM
ext4
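
For reference, configurations (a) and (b) look roughly like this on my
stack; the mount point /srv/export and the device names are placeholders
for my actual setup:

```
# (a) safe default: journal barriers on, volatile drive write cache on
hdparm -W 1 /dev/sda /dev/sdb
mount -o remount,data=ordered,barrier=1 /srv/export

# (b) fast but unsafe: barriers off, write cache still on
mount -o remount,data=writeback,barrier=0 /srv/export

# a crash-safe variant of barrier=0: disable the drive caches instead,
# trading raw drive speed for safety without barriers
hdparm -W 0 /dev/sda /dev/sdb
mount -o remount,barrier=0 /srv/export
```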
a) If I use data=ordered,barrier=1 and `hdparm -W 1' on the drive, I
observe write performance over NFS of 1MB/sec (unpacking a big source
tarball)
b) If I use data=writeback,barrier=0 and `hdparm -W 1' on the drive, I
observe write performance over NFS of 10MB/sec
c) If I just use the async option on NFS, I observe up to 30MB/sec
I believe (b) and (c) are not considered safe: (b) risks filesystem
corruption if power is lost while the drive cache holds unflushed
journal writes, and (c) risks silent data loss if the server crashes
before acknowledged writes reach disk. So I can't use either in practice.
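
To separate the raw barrier/fsync cost from NFS overhead, I can time a
local write that forces data to stable storage on the export itself.
This is only a sketch; TESTDIR is a placeholder and should point at a
directory on the md/LVM/ext4 stack under test:

```shell
#!/bin/sh
# Write 32 MB and force it to stable storage with conv=fsync, roughly
# what a sync NFS export does per commit. Compare the reported
# throughput with barriers on vs. off to see what the flushes cost.
TESTDIR=${TESTDIR:-$(mktemp -d)}   # placeholder: use the filesystem under test
dd if=/dev/zero of="$TESTDIR/barrier-test" bs=1M count=32 conv=fsync
```

GNU dd prints the elapsed time and throughput on stderr when it
finishes, so no separate timing wrapper is needed.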
Can anyone suggest where I should direct my efforts to lift performance?
E.g.
- do SCSI drives handle barriers better, i.e. would buying SCSI drives
solve the problem while keeping config (a)?
- should I do away with md RAID and consider btrfs, which does RAID1
within the filesystem itself?
- or must I just use option (b), made safer with a battery-backed
write cache?
- or is there any md or lvm issue that can be tuned or fixed by
upgrading the kernel?
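
A sanity check along these lines might show whether barriers are being
dropped somewhere in the stack (a sketch; /dev/sda and /dev/sdb stand in
for the two SATA drives behind md):

```
# If ext4/jbd2 had to disable barriers (e.g. because a layer below does
# not pass flushes through), the kernel normally logs it:
dmesg | grep -iE 'barrier|flush'

# Confirm the volatile write-cache setting on each member drive:
hdparm -W /dev/sda /dev/sdb
```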