Message-ID: <Pine.LNX.4.64.0701301456250.6541@hermes-1.csi.cam.ac.uk>
Date: Tue, 30 Jan 2007 14:58:50 +0000 (GMT)
From: Anton Altaparmakov <aia21@....ac.uk>
To: Mark Fasheh <mark.fasheh@...cle.com>
cc: Nick Piggin <nickpiggin@...oo.com.au>,
Hugh Dickins <hugh@...itas.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Linux Memory Management <linux-mm@...ck.org>,
David Howells <dhowells@...hat.com>,
Andrew Morton <akpm@...l.org>
Subject: Re: page_mkwrite caller is racy?
On Mon, 29 Jan 2007, Mark Fasheh wrote:
> On Tue, Jan 30, 2007 at 12:14:24PM +1100, Nick Piggin wrote:
> > This is another discussion, but do we want the page locked here? Or
> > are the filesystems happy to exclude truncate themselves?
>
> No page lock please. Generally, Ocfs2 wants to order cluster locks outside
> of page locks. Also, the sparse b-tree support I'm working on right now will
> need to be able to allocate in ->page_mkwrite() which would become very
> nasty if we came in with the page lock - aside from the additional cluster
> locks taken, ocfs2 will want to zero some adjacent pages (because we support
> atomic allocation up to 1 meg).
Ditto for NTFS. For large volume cluster sizes I will need to lock the pages
on both sides of the faulting page, so if the page comes in already locked I
will have to drop the lock anyway - it might as well not be taken in the
first place... Although I do not feel strongly about it: if the page is
locked I will just drop the lock and then take it again. Not having the page
locked would simply make my code a little easier and more efficient, I
expect...
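To make the drop/re-take pattern concrete, here is a minimal, hypothetical
sketch - not the real NTFS or ocfs2 code; example_page_mkwrite(),
example_fs_lock_clusters() and example_fs_unlock_clusters() are placeholder
names - of a ->page_mkwrite() that is entered with the page already locked,
drops the page lock so the filesystem's cluster locks (and locks on the
neighbouring pages) can be taken in the required order, and then re-takes it:

#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/pagemap.h>

/* Hypothetical helpers standing in for the filesystem's own cluster
 * locking; they are not real kernel, NTFS, or ocfs2 functions. */
static int example_fs_lock_clusters(struct inode *inode, pgoff_t index);
static void example_fs_unlock_clusters(struct inode *inode);

static int example_page_mkwrite(struct vm_area_struct *vma, struct page *page)
{
	struct inode *inode = vma->vm_file->f_mapping->host;
	int err;

	/*
	 * Assume the caller handed us the page locked.  Drop the lock
	 * first so the cluster locks can be taken outside the page lock,
	 * as the ordering described above requires.
	 */
	unlock_page(page);

	err = example_fs_lock_clusters(inode, page->index);

	/* Re-take the page lock so we return in the state we entered. */
	lock_page(page);
	if (err)
		return err;

	/* The page may have been truncated while it was unlocked. */
	if (page->mapping != inode->i_mapping) {
		err = -EFAULT;
		goto out_unlock;
	}

	/* ... allocate and zero adjacent pages under the cluster locks ... */

out_unlock:
	example_fs_unlock_clusters(inode);
	return err;
}

Not having the page locked on entry would remove the unlock/relock dance and
the truncate recheck entirely, which is why I would mildly prefer it.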
Best regards,
Anton
--
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/