Message-ID: <20070831163353.GC13397@filer.fsl.cs.sunysb.edu>
Date: Fri, 31 Aug 2007 12:33:53 -0400
From: Josef Sipek <jsipek@....cs.sunysb.edu>
To: Peter Zijlstra <peterz@...radead.org>
Cc: David Chinner <dgc@....com>, Eric Sandeen <sandeen@...deen.net>,
linux-kernel Mailing List <linux-kernel@...r.kernel.org>,
xfs-oss <xfs@....sgi.com>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH] Increase lockdep MAX_LOCK_DEPTH
On Fri, Aug 31, 2007 at 05:09:21PM +0200, Peter Zijlstra wrote:
> On Sat, 2007-09-01 at 01:05 +1000, David Chinner wrote:
>
> > > Trouble is, we'd like to have a sane upper bound on the number of held
> > > locks at any one time; obviously this is just wishful thinking, because a
> > > lot of lock chains also depend on the number of online cpus...
> >
> > Sure - this is an obvious case where it is valid to take >30 locks at
> > once in a single thread. In fact, worst case here we take twice that
> > number of locks - we actually take 2 per inode (ilock and flock), so a
> > full 32-inode cluster free would take >60 locks in the middle of this
> > function, and we should be busting this depth counter limit all the
> > time.
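
To put numbers on that worst case: below is a minimal userspace model of
lockdep's per-task held-lock counter (the real check lives in the kernel's
lockdep.c and prints "BUG: MAX_LOCK_DEPTH too low!"). The names
MAX_LOCK_DEPTH and lockdep_depth mirror the kernel's, and 30 matches the
2.6.23-era default, but this is just a sketch of the arithmetic, not
kernel code.

#include <stdio.h>

/*
 * Userspace model: one counter per task, bumped on every lock
 * acquisition, capped at MAX_LOCK_DEPTH. The kernel default at the
 * time of this thread was 30.
 */
#define MAX_LOCK_DEPTH		30
#define INODES_PER_CLUSTER	32	/* one XFS inode cluster */

static int lockdep_depth;

static int track_acquire(const char *class, int inode)
{
	if (lockdep_depth >= MAX_LOCK_DEPTH) {
		printf("BUG: MAX_LOCK_DEPTH too low! (%s of inode %d would be held lock #%d)\n",
		       class, inode, lockdep_depth + 1);
		return -1;
	}
	lockdep_depth++;
	return 0;
}

int main(void)
{
	int i;

	/* lock every inode in the cluster before freeing any of them */
	for (i = 0; i < INODES_PER_CLUSTER; i++)
		if (track_acquire("ilock", i) || track_acquire("flock", i))
			return 1;	/* trips on held lock #31, at inode 15 */

	printf("tracked all %d held locks\n", lockdep_depth);
	return 0;
}

With the limit at 30, the overflow hits halfway through the cluster; the
2 * 32 = 64 locks David describes would overrun even a doubled limit.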
>
> I think this started because jeffpc couldn't boot without XFS busting
> lockdep :-)
It booted, but if I tried to build the kernel it would make lockdep blow up -
not very useful when you have your own code you'd like lockdep to check :)
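
For reference, MAX_LOCK_DEPTH is a compile-time constant defined inside the
CONFIG_LOCKDEP section of include/linux/sched.h. The patch itself isn't
quoted in this thread, so the new value below is an assumption (48UL is
what mainline eventually settled on), but the shape of the change would be:

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@
-# define MAX_LOCK_DEPTH 30UL
+# define MAX_LOCK_DEPTH 48UL

Since task_struct embeds a held_locks[MAX_LOCK_DEPTH] array, every extra
slot costs a struct held_lock in every task in the system - presumably why
Peter would rather find a sane upper bound than keep bumping the constant.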
Josef 'Jeff' Sipek.
--
I'm somewhere between geek and normal.
- Linus Torvalds