Message-ID: <20101017070519.GA24641@amd>
Date: Sun, 17 Oct 2010 18:05:19 +1100
From: Nick Piggin <npiggin@...nel.dk>
To: Dave Chinner <david@...morbit.com>
Cc: Nick Piggin <npiggin@...nel.dk>,
Christoph Hellwig <hch@...radead.org>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 17/19] fs: Reduce inode I_FREEING and factor inode
disposal
On Sun, Oct 17, 2010 at 05:52:45PM +1100, Dave Chinner wrote:
> On Sun, Oct 17, 2010 at 04:13:10PM +1100, Nick Piggin wrote:
> > On Sun, Oct 17, 2010 at 03:35:14PM +1100, Nick Piggin wrote:
> > > On Sun, Oct 17, 2010 at 03:13:13PM +1100, Dave Chinner wrote:
> > > > On Sun, Oct 17, 2010 at 01:49:23PM +1100, Nick Piggin wrote:
> > > > > On Sat, Oct 16, 2010 at 09:30:47PM -0400, Christoph Hellwig wrote:
> > > > > > > * inode->i_lock is *always* the innermost lock.
> > > > > > > *
> > > > > > > + * inode->i_lock is *always* the innermost lock.
> > > > > > > + *
> > > > > >
> > > > > > No need to repeat, we got it..
> > > > >
> > > > > Except that I didn't see where you fixed all the places where it is
> > > > > *not* the innermost lock. Like for example places that take dcache_lock
> > > > > inside i_lock.
> > > >
> > > > I can't find any code outside of ceph where the dcache_lock is used
> > > > within 200 lines of code of the inode->i_lock. The ceph code is not
> > > > nesting them, though.
> > >
> > > You mustn't have looked very hard? From ceph:
> > >
> > > spin_unlock(&dcache_lock);
> > > spin_unlock(&inode->i_lock);
> > >
> > > (and yes, acquisition side does go in i_lock->dcache_lock order)
>
> Sorry, easy to miss with a quick grep when the locks are taken in
> different functions.
Easy to see they're nested when they're dropped in adjacent lines. That
should give you a clue to go and check their lock order.
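
For illustration only, a minimal sketch of the ordering that pair of
unlocks implies (condensed into one place here; as noted, the real ceph
code takes the two locks in different functions):

	spin_lock(&inode->i_lock);	/* i_lock taken first (outer) */
	spin_lock(&dcache_lock);	/* dcache_lock nested inside i_lock */
	/* ... work on the inode's dentries ... */
	spin_unlock(&dcache_lock);
	spin_unlock(&inode->i_lock);

That is exactly the opposite of a rule that says i_lock is always the
innermost lock.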
> Anyway, this one looks difficult to fix without knowing something
> about Ceph and wtf it is doing there. It's one to punt to the
> maintainer to solve as it's not critical to this patch set.
I thought the raison d'être for your starting to write your own vfs
scale branch was that you objected to i_lock not being an "innermost"
lock (not that it was one before my patch).
So I don't get it. If your patch mandates that it be the innermost lock,
then you absolutely do need to fix the filesystems before changing the
lock order.
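
To spell out what goes wrong otherwise: if one path still nests
dcache_lock inside i_lock while new code nests i_lock inside
dcache_lock, the two orderings can deadlock against each other. A
schematic sketch (hypothetical paths, not taken from any particular
filesystem):

	/* Path A: old ceph-style order */
	spin_lock(&inode->i_lock);
	spin_lock(&dcache_lock);	/* A holds i_lock, waits for dcache_lock */

	/* Path B: order where i_lock is innermost */
	spin_lock(&dcache_lock);
	spin_lock(&inode->i_lock);	/* B holds dcache_lock, waits for i_lock */

Run A and B concurrently and each ends up waiting for the lock the other
holds -- the classic ABBA deadlock, which is why the ordering has to be
made consistent before the comment can claim i_lock is innermost.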
> > A really quick grep reveals cifs is using GlobalSMBSeslock inside i_lock
> > too.
>
> I'm having a grep-fail day. Where is that one?
Uh, inside one of the 6 places that i_lock is taken in cifs. The only
non-trivial one, not surprisingly.
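
(For the shape of the problem only -- the actual cifs function is left
unnamed here, and this assumes GlobalSMBSeslock is the rwlock cifs uses
elsewhere:

	spin_lock(&inode->i_lock);
	read_lock(&GlobalSMBSeslock);	/* another lock nested inside i_lock */
	/* ... */
	read_unlock(&GlobalSMBSeslock);
	spin_unlock(&inode->i_lock);

which again breaks the "i_lock is always innermost" assertion.)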