Message-Id: <1188570831.6112.64.camel@twins>
Date: Fri, 31 Aug 2007 16:33:51 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: David Chinner <dgc@....com>
Cc: Eric Sandeen <sandeen@...deen.net>,
linux-kernel Mailing List <linux-kernel@...r.kernel.org>,
xfs-oss <xfs@....sgi.com>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH] Increase lockdep MAX_LOCK_DEPTH
On Fri, 2007-08-31 at 23:50 +1000, David Chinner wrote:
> On Fri, Aug 31, 2007 at 08:39:49AM +0200, Peter Zijlstra wrote:
> > On Thu, 2007-08-30 at 23:43 -0500, Eric Sandeen wrote:
> > > The xfs filesystem can exceed the current lockdep
> > > MAX_LOCK_DEPTH, because when deleting an entire cluster of inodes,
> > > they all get locked in xfs_ifree_cluster(). The normal cluster
> > > size is 8192 bytes, and with the default (and minimum) inode size
> > > of 256 bytes, that's up to 32 inodes that get locked. Throw in a
> > > few other locks along the way, and 40 seems enough to get me through
> > > all the tests in the xfsqa suite on 4k blocks. (block sizes
> > > above 8K will still exceed this though, I think)
> >
> > As 40 will still not be enough for people with larger block sizes, this
> > does not seem like a solid solution. Could XFS possibly batch in
> > smaller (fixed-size) chunks, or does that have significant downsides?
>
> The problem is not filesystem block size, it's the xfs inode cluster buffer
> size / the size of the inodes that determines the lock depth. The common case
> is 8k/256 = 32 inodes in a buffer, and they all get locked during inode
> cluster writeback.
>
> This inode writeback clustering is one of the reasons XFS doesn't suffer from
> atime issues as much as other filesystems - it doesn't need to do as much I/O
> to write back dirty inodes to disk.
>
> IOWs, we are not going to make the inode clusters smaller - if anything they
> are going to get *larger* in future so we do less I/O during inode writeback
> than we do now.....
Since they are all trylocks, that seems to suggest there is no hard _need_
to lock the whole inode cluster at once; we could iterate through it with
fewer inodes locked at any one time.
Granted, I have absolutely no understanding of what I'm talking about :-)
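Something like the sketch below is what I have in mind - a purely
illustrative userspace toy, with pthread mutexes standing in for the
inode locks and all the names and numbers invented, not actual XFS code:

#include <pthread.h>
#include <stdio.h>

#define INODES_PER_CLUSTER	32	/* 8192-byte cluster / 256-byte inodes */
#define BATCH			8	/* upper bound on locks held at once */

static pthread_mutex_t ilock[INODES_PER_CLUSTER];

static void writeback_inode(int i)
{
	/* stand-in for flushing one inode to the cluster buffer */
	printf("flushing inode %d\n", i);
}

int main(void)
{
	int i, j, got;

	for (i = 0; i < INODES_PER_CLUSTER; i++)
		pthread_mutex_init(&ilock[i], NULL);

	/* walk the cluster in fixed-size batches */
	for (i = 0; i < INODES_PER_CLUSTER; i += BATCH) {
		got = 0;
		/* trylock the batch, skipping the ones we cannot get */
		for (j = i; j < i + BATCH; j++) {
			if (pthread_mutex_trylock(&ilock[j]) == 0)
				got |= 1 << (j - i);
		}
		for (j = i; j < i + BATCH; j++) {
			if (got & (1 << (j - i)))
				writeback_inode(j);
		}
		/* drop the batch before taking the next one, so at
		 * most BATCH locks are ever held simultaneously */
		for (j = i; j < i + BATCH; j++) {
			if (got & (1 << (j - i)))
				pthread_mutex_unlock(&ilock[j]);
		}
	}
	return 0;
}

The obvious downside is that the cluster is no longer held as a single
unit, which might well interfere with the write clustering David
describes - hence the question about significant downsides.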
Trouble is, we'd like a sane upper bound on the number of locks held at
any one time. Obviously that is just wishful thinking, because a lot of
lock chains also depend on the number of online CPUs...