Message-Id: <200904030407.55471.nickpiggin@yahoo.com.au>
Date: Fri, 3 Apr 2009 04:07:54 +1100
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Christoph Hellwig <hch@...radead.org>
Cc: David Howells <dhowells@...hat.com>, viro@...iv.linux.org.uk,
nfsv4@...ux-nfs.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 22/43] CacheFiles: Add a hook to write a single page of data to an inode [ver #46]
On Friday 03 April 2009 03:55:05 Christoph Hellwig wrote:
> On Fri, Apr 03, 2009 at 03:47:20AM +1100, Nick Piggin wrote:
> > Well, they are now quite well defined for filesystems. We no longer
> > take the page lock before calling them. I'm not saying it's perfect,
> > but it should be fine if the backing fs just uses a known subset of
> > ops that work (like loop does).
>
> The page lock doesn't matter. What matters are the locks protecting
> the IO, like the XFS iolock or the cluster locks in the cluster
> filesystems; bypass those and you will get silent data corruption.
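(For concreteness, here is the pattern I take you to mean, as a
simplified, hypothetical sketch rather than real XFS code: the fs wraps
its write path in its own per-inode IO lock, which a caller going
straight to ->writepage never takes.)

	static ssize_t myfs_file_aio_write(struct kiocb *iocb,
					   const struct iovec *iov,
					   unsigned long nr_segs, loff_t pos)
	{
		/* hypothetical per-inode IO lock, serialising buffered
		 * writes against direct IO, truncate, etc. */
		struct myfs_inode *ip = MYFS_I(iocb->ki_filp->f_mapping->host);
		ssize_t ret;

		mutex_lock(&ip->io_lock);
		ret = generic_file_aio_write(iocb, iov, nr_segs, pos);
		mutex_unlock(&ip->io_lock);
		return ret;
	}
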
Hmm, I can see i_mutex being a problem, but I can't see how a filesystem
would take any other locks down that chain?
Naturally a random in-kernel user would miss other important things too,
so yes, a simple write sounds like the best option.
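Something like the following ought to do it (an untested sketch;
fscache_write_page is a made-up name), bouncing the data through
vfs_write() so the backing fs takes all the locks it would take for a
userspace write:

	static int fscache_write_page(struct file *file, struct page *page,
				      loff_t pos)
	{
		void *kaddr = kmap(page);
		mm_segment_t old_fs = get_fs();
		ssize_t ret;

		/* go through f_op->write, so i_mutex, the XFS iolock,
		 * cluster locks etc. are all taken as usual */
		set_fs(get_ds());
		ret = vfs_write(file, (const char __user *)kaddr, PAGE_SIZE,
				&pos);
		set_fs(old_fs);
		kunmap(page);

		return ret == PAGE_SIZE ? 0 : -EIO;
	}
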
> > Probably yes. But it seems like it should have more discussion IMO
> > (unless it has already been had and I missed it).
>
> This came up plenty of times.
I mean that unless the discussion agreed on write_one_page being the
right API to add, it should not be added for fscache; fscache should
just use a workaround in the meantime.
> > I don't think "write_one_page" sounds like a particularly good new
> > API addition.
>
> I also think it's not a nice one. I still haven't seen a really good
> explanation of why it can't just use plain ->write.
Good question.