Message-ID: <4A94287C.9060509@redhat.com>
Date: Tue, 25 Aug 2009 14:07:56 -0400
From: Ric Wheeler <rwheeler@...hat.com>
To: Andreas Dilger <adilger@....com>
CC: Theodore Tso <tytso@....edu>,
Christian Fischer <Christian.Fischer@...terngraphics.com>,
linux-ext4@...r.kernel.org
Subject: Re: Enable asynchronous commits by default patch revoked?
On 08/25/2009 01:52 PM, Andreas Dilger wrote:
> On Aug 24, 2009 20:15 -0400, Theodore Ts'o wrote:
>> On Mon, Aug 24, 2009 at 05:43:36PM -0600, Andreas Dilger wrote:
>>> Without transaction checksums, waiting on all of the blocks together
>>> is NOT safe. If the commit record is on disk but the rest of the
>>> transaction's blocks are not, then during replay garbage may be
>>> written from the journal into the filesystem metadata.
>>
>> That's the one optimization that using journal checksums buys us.
>> Unfortunately, it does not allow us to omit the barrier
>> operation... and we have real-world testing experience showing that
>> without the barrier, a power drop can cause significant filesystem
>> corruption and potential data loss.
>>
>> Try using Chris Mason's torture-test workload with async-checksums
>> without this patch; you will get data corruption if you try dropping
>> power while his torture-test is running. I know you really don't like
>> the barrier, but I'm afraid it's not safe to run without it, even with
>> journal checksums.
>
> In our performance testing of barriers (not with Chris' program), it
> was FAR better to disable the disk cache and wait for IO completion
> (i.e. barriers disabled) on just the journal blocks than to enable the
> cache and issue a cache flush for each "barrier". The problem is that
> at high IO rates there is much more data in the cache than in the
> actual journal blocks, and forcing the whole cache to be flushed on
> each transaction commit hurt our performance noticeably.
>
> Cheers, Andreas
> --
> Andreas Dilger
> Sr. Staff Engineer, Lustre Group
> Sun Microsystems of Canada, Inc.
>
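For anyone who wants to try the combination under discussion, ext4 exposes
the async commit path as a mount option. A minimal sketch (the device name
/dev/sdX is hypothetical; journal_async_commit relies on the journal
checksums discussed above):

# Scratch device only; mkfs destroys any existing data on /dev/sdX
mkfs.ext4 /dev/sdX
mount -o journal_async_commit,barrier=1 /dev/sdX /mnt
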
Whether disabling the write cache wins entirely depends on the nature of the
storage device. On any normal S-ATA drive, I have seen that running with the
write cache disabled can cut your large-file write speed in half compared to
running with barriers enabled. That is certainly less true for SAS or fibre
channel devices.

Checking ext4 on a newish Seagate 1TB disk, I see rough parity between the
two configurations (F12 rawhide, RC6 kernel):
[root@...desktop ~]# hdparm -W0 /dev/sdb
/dev/sdb:
setting drive write-caching to 0 (off)
write-caching = 0 (off)
[root@...desktop ~]# mkfs.ext4 /dev/sdb
mke2fs 1.41.8 (11-July-2009)
<snip>
[root@...desktop ~]# mount -o barrier=0 /dev/sdb /mnt/
[root@...desktop ~]# dd if=/dev/zero of=/mnt/bigfile bs=10M count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 6.75127 s, 155 MB/s
[root@...desktop ~]# umount /mnt
[root@...desktop ~]# hdparm -W1 /dev/sdb
/dev/sdb:
setting drive write-caching to 1 (on)
write-caching = 1 (on)
[root@...desktop ~]# mount -o barrier=1 /dev/sdb /mnt/
[root@...desktop ~]# dd if=/dev/zero of=/mnt/bigfile bs=10M count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 6.74861 s, 155 MB/s
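
One caveat on those dd numbers: without a sync, dd also measures the page
cache, not just the disk. A stricter variant (a sketch, same hypothetical
file as above) calls fdatasync() on the output file before the rate is
reported:

dd if=/dev/zero of=/mnt/bigfile bs=10M count=100 conv=fdatasync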