Message-ID: <20090402172652.GE17275@atrey.karlin.mff.cuni.cz>
Date: Thu, 2 Apr 2009 19:26:52 +0200
From: Jan Kara <jack@...e.cz>
To: Nick Piggin <nickpiggin@...oo.com.au>
Cc: Christoph Hellwig <hch@...radead.org>,
David Howells <dhowells@...hat.com>, viro@...iv.linux.org.uk,
nfsv4@...ux-nfs.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 22/43] CacheFiles: Add a hook to write a single page of data to an inode [ver #46]
> On Friday 03 April 2009 03:55:05 Christoph Hellwig wrote:
> > On Fri, Apr 03, 2009 at 03:47:20AM +1100, Nick Piggin wrote:
> > > Well, they are now quite well filesystem-defined. We no longer take
> > > the page lock before calling them. Not saying it's perfect, but it
> > > should be fine if the backing fs is just using a known subset of
> > > ones that work (like loop does).
> >
> > The page lock doesn't matter. What matters are the locks protecting
> > the I/O, like the XFS iolock or the cluster locks in the cluster
> > filesystems; bypass those and you will get silent data corruption.
>
> Hmm, I can see i_mutex being a problem, but I can't see how a filesystem
> takes any other locks down that chain?
Yes, i_mutex is one problem. Filesystems may also take other locks
in their ->aio_write callbacks - as Christoph mentioned, OCFS2 for
example has to do some network messaging to synchronize the nodes of
the cluster accessing the file. I could also imagine some clever
filesystem doing finer-grained locking than a single i_mutex covering
the whole file...
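To make that concrete, here is a minimal, purely illustrative sketch of
what such an ->aio_write could look like - this is not OCFS2's actual
code, and myfs_cluster_lock()/myfs_cluster_unlock() are invented names
standing in for whatever cluster-wide synchronization the filesystem
needs. The point is only that this locking lives inside the
filesystem's ->aio_write, so a writer that bypasses that path also
bypasses the locking:

#include <linux/fs.h>
#include <linux/aio.h>
#include <linux/uio.h>

/* Invented helpers, declared here only so the sketch is self-contained. */
static int myfs_cluster_lock(struct inode *inode);
static void myfs_cluster_unlock(struct inode *inode);

/* Hypothetical cluster filesystem write path (sketch only). */
static ssize_t myfs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
				   unsigned long nr_segs, loff_t pos)
{
	struct inode *inode = iocb->ki_filp->f_mapping->host;
	ssize_t ret;

	/* Invented cluster lock: stop other nodes writing concurrently. */
	ret = myfs_cluster_lock(inode);
	if (ret)
		return ret;

	/* generic_file_aio_write() takes i_mutex itself. */
	ret = generic_file_aio_write(iocb, iov, nr_segs, pos);

	myfs_cluster_unlock(inode);
	return ret;
}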
> Naturally a random in-kernel user misses other important things, so yes
> a simple write sounds like the best option.
Definitely. IMO it's hard to get the locking right without calling the
->aio_write callback.
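For reference, a hedged sketch of the approach being argued for: push
the data into the backing file through the normal write path, so that
vfs_write() ends up in the filesystem's ->aio_write and i_mutex plus
any filesystem-private locks are taken for us. The function name
cachefiles_write_one_page() and the kmap()/set_fs() details below are
illustrative only, not the actual CacheFiles patch:

#include <linux/fs.h>
#include <linux/uaccess.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>

/* Write one page of data into the backing file at @pos (sketch only). */
static int cachefiles_write_one_page(struct file *backing_file,
				     struct page *page, loff_t pos)
{
	mm_segment_t old_fs;
	void *data;
	ssize_t written;

	data = kmap(page);

	/* vfs_write() expects a user pointer; for an in-kernel buffer we
	 * temporarily widen the address limit (the usual idiom of the day). */
	old_fs = get_fs();
	set_fs(KERNEL_DS);
	written = vfs_write(backing_file, (__force const char __user *)data,
			    PAGE_SIZE, &pos);
	set_fs(old_fs);

	kunmap(page);

	return written == PAGE_SIZE ? 0 : -EIO;
}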
Honza
--
Jan Kara <jack@...e.cz>
SuSE CR Labs