Message-id: <20090330182328.GA3199@webber.adilger.int>
Date: Mon, 30 Mar 2009 12:23:28 -0600
From: Andreas Dilger <adilger@....com>
To: Jan Kara <jack@...e.cz>
Cc: Eric Sandeen <sandeen@...hat.com>, linux-ext4@...r.kernel.org
Subject: Re: Same magic in statfs() call for ext?
On Mar 16, 2009 17:27 +0100, Jan Kara wrote:
> On Mon 16-03-09 11:13:13, Eric Sandeen wrote:
> > But off the top of my head, I think that I would prefer to see
> > applications generally do the right, posix-conformant thing w.r.t. data
> > integrity (i.e. fsync()) unless, via statfs, they find out "fsync hurts,
> > and we're likely to be reasonably safe without it"
> >
> > IOW, adding exceptions for ext3 sounds better to me than munging ext4,
> > xfs, btrfs, and all future filesystems to conform to some behavior which
> > isn't in any API or spec ...
>
> Yes, I agree that if they want data on disk, they should use fsync(). But
> as you say, for ext3 this is not really usable, so they have to somehow
> recognize that "they are on a filesystem where fsync() sucks" and avoid it
> as much as possible. And I'm slightly in favor of giving them enough rope
> (i.e., different magic numbers in statfs) to hang themselves ;-).
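As a minimal sketch of what such a check looks like from userspace with
the Linux statfs(2) interface, the catch (as the subject says) is that
ext2, ext3 and ext4 all return the same 0xEF53 magic in f_type, so the
application currently can't tell them apart:

#include <stdio.h>
#include <sys/vfs.h>            /* statfs(), struct statfs */

/* ext2, ext3 and ext4 all report the same superblock magic today */
#define EXT_SUPER_MAGIC 0xEF53

int main(int argc, char **argv)
{
        struct statfs st;
        const char *path = argc > 1 ? argv[1] : ".";

        if (statfs(path, &st) != 0) {
                perror("statfs");
                return 1;
        }

        if (st.f_type == EXT_SUPER_MAGIC)
                printf("%s: ext2/3/4 (magic 0x%lx), can't tell which\n",
                       path, (unsigned long)st.f_type);
        else
                printf("%s: other filesystem (magic 0x%lx)\n",
                       path, (unsigned long)st.f_type);

        return 0;
}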
One possibility that I've thought of in the past is to have a "dynamic
data=journal" mode that kicks in when fsync is being called and the files
are small.  What this means is that the small file's data would be written
into the journal on fsync, instead of journaling only the metadata and
flushing the data out to the filesystem as ordered mode does.
While it means the data is written twice to disk (once to the journal,
once to the filesystem), if there is a lot of fsync going on and the
files are small then the sequential journal writes may still be faster
than seeking out to each file's data blocks for the ordered-mode
writeback.
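Very roughly, the fsync path could decide like this (just a sketch; the
journal_mode(), journal_data_and_metadata() and flush_data_then_commit()
helpers and the SMALL_FILE_THRESHOLD cutoff are made-up names, not actual
ext3/ext4 functions):

        /* sketch only - helper names below are hypothetical */
        int ext_fsync_small_file(struct inode *inode)
        {
                if (journal_mode(inode) == ORDERED_MODE &&
                    i_size_read(inode) <= SMALL_FILE_THRESHOLD)
                        /* put the file data into the journal along with
                         * the metadata, so one sequential journal write
                         * commits both */
                        return journal_data_and_metadata(inode);

                /* normal ordered mode: flush data blocks to their final
                 * location, then commit the metadata-only transaction */
                return flush_data_then_commit(inode);
        }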
Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.