Message-ID: <20100802100729.GB9427@amd>
Date: Mon, 2 Aug 2010 20:07:29 +1000
From: Nick Piggin <npiggin@...e.de>
To: Christoph Hellwig <hch@...radead.org>
Cc: Nick Piggin <npiggin@...e.de>, Dave Chinner <david@...morbit.com>,
linux-fsdevel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.35
On Mon, Aug 02, 2010 at 05:05:42AM -0400, Christoph Hellwig wrote:
> On Mon, Aug 02, 2010 at 04:24:28AM -0400, Christoph Hellwig wrote:
> > .36. I'd much rather see the inode_lock scaling or the lockless path
> > walk going in before, but I haven't checked how complicated the
> > reordering would be. The lockless path walk is also of only
> > theoretical use until we do the ACL checks locklessly, as we have
> > ACLs enabled pretty much everywhere, at least in the distros.
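
To make the ACL point concrete: in rcu-walk mode the permission check
runs under rcu_read_lock() and must not block or take any locks, so as
soon as an inode carries an ACL the walk has to bail out and retry the
slow, refcounted way. A rough sketch of the shape (the helper names
here are made up for illustration, not lifted from the series):

static int rcu_walk_permission(struct inode *inode, int mask)
{
	if (inode_may_have_acl(inode)) {
		/*
		 * Evaluating the ACL may need locks, or even I/O to
		 * read it in, neither of which is allowed under
		 * rcu_read_lock().  Tell the caller to drop out of
		 * rcu-walk and redo the lookup with references held.
		 */
		return -ECHILD;
	}
	/* Plain mode bits can be checked with no locking at all. */
	return generic_mode_check(inode->i_mode, mask);
}

With ACLs enabled everywhere, that fallback becomes the common case,
so the lockless walk buys little until the ACL check itself can be
done locklessly.
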
>
> From a quick look it seems like the inode_lock splitup can easily
> be moved forward, and it would help us with doing some work on the
> writeback side. The problem is that it would need rebasing on top
> of both the vfs and writeback (aka block) trees.
inode_lock splitup is much simpler than dcache_lock, yes.

And I have to rebase it on the work currently queued for 2.6.36
anyway, so that's no problem. I can easily put it in front of the
dcache_lock patches in the series (as I said, I've kept everything
independent and well split up).
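
For anyone who hasn't read the series: the core of the inode_lock
splitup is just pushing the single global lock down into a per-inode
lock plus separate locks for the global lists. Very roughly (a sketch
of the idea only, not the exact patches; the actual series splits
further and uses its own lock and list names):

/* Before: every i_state or list change takes the one global lock. */
spin_lock(&inode_lock);
inode->i_state |= I_DIRTY;
list_move(&inode->i_list, dirty_list);
spin_unlock(&inode_lock);

/* After: i_state is under a per-inode lock, and each global list
 * gets its own lock, so two CPUs touching different inodes no
 * longer serialise on a single lock and its cacheline. */
spin_lock(&inode->i_lock);
inode->i_state |= I_DIRTY;
spin_unlock(&inode->i_lock);

spin_lock(&inode_list_lock);
list_move(&inode->i_list, dirty_list);
spin_unlock(&inode_list_lock);

Most of the fiddly part is getting the ordering between i_lock and
the new list locks right everywhere, not any one patch being large.
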
I do want opinions on how to do the big-picture merge, though,
before I start moving things around. And obviously reviewing each
of the parts is more important at this point than the exact
ordering of the series.

But I am wary of merging even the inode_lock patches in 2.6.36
without much review or any linux-next / vfs-tree exposure.