Message-ID: <slrneknfdo.2in.olecom@flower.upol.cz>
Date: Fri, 3 Nov 2006 22:00:57 +0000 (UTC)
From: Oleg Verych <olecom@...wer.upol.cz>
To: linux-kernel@...r.kernel.org
Subject: Re: New filesystem for Linux
On 2006-11-03, Mikulas Patocka wrote:
[]
>> Well, at least for XFS, everybody says it should be used only with a
>> UPS if you really care about your data. I think it has something to do
>> with the heavy in-RAM caching this filesystem does.
>
> The system is allowed to cache anything until sync/fsync is called.
> Someone said that XFS has some bugs where, if it crashes the wrong way,
> it can lose already-synced data ... I don't know. Plus it has that
> infamous feature (not a bug) that it commits the size increase but not
> the data, so you see zero-filled files.
AFAIK, XFS by design must provide _file system_ consistency, not data
consistency.
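That is also why applications that care about their data are expected to
fsync() it themselves before relying on it being on disk. Roughly, the
usual write-fsync-rename dance looks like this (file names are made up,
just for illustration):

/* Rough sketch: write a temp file, fsync() it, then rename it over the
 * real name, so that after a crash you see either the old contents or
 * the new ones, never a zero-filled file. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *data = "important bytes\n";
        int fd = open("config.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) {
                perror("write");
                return 1;
        }
        if (fsync(fd) < 0) {            /* push the data itself to disk */
                perror("fsync");
                return 1;
        }
        close(fd);
        if (rename("config.tmp", "config") < 0) {  /* atomic replace */
                perror("rename");
                return 1;
        }
        return 0;
}

Strictly speaking you would also fsync() the parent directory to make
the rename itself durable, but that is beside the zero-filled-files
point.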
[]
>> PS. Do you have any benchmarks of your filesystem? Did you do any
>> longer automated tests to prove it is not going to lose data too
>> easily?
>
> I have; I may find them and post them (but the university wants me to
> submit them to some conference, so I should keep them secret :-/).
It would be interesting to know how quick and stable "yet another" FS
really is.
On my new hardware, with a dual-core 3.4 GHz CPU, 1 GB of RAM and 1/2 TB
of disk space (office pro, yes, *office* (pro)), `dd' can pull 50 MB/s,
yet extracting 2.6.19-rc2 onto a fresh 20 GB partition (xfs, jfs) yields
nearly 4 MB/s. To an ordinary user that seems very slow.
Ext2 (not ext3) is as fast as shmfs, until RAM is full.
And after 13 cycles XFS had 11-12 directories with intact Linux source,
ext2 6-7 (IIRC). At the start of the 14th cycle I pushed the reset
button, btw. Mounting & repairing XFS took less than a minute; e2fsck
spent more time on printing than on repairing, I think (:.
Want to create yet another one? OK, but why not improve an existing
one? >/dev/null
____