Message-Id: <1188572961.6112.72.camel@twins>
Date: Fri, 31 Aug 2007 17:09:21 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: David Chinner <dgc@....com>
Cc: Eric Sandeen <sandeen@...deen.net>,
linux-kernel Mailing List <linux-kernel@...r.kernel.org>,
xfs-oss <xfs@....sgi.com>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH] Increase lockdep MAX_LOCK_DEPTH
On Sat, 2007-09-01 at 01:05 +1000, David Chinner wrote:
> > Trouble is, we'd like to have a sane upper bound on the number of held
> > locks at any one time; obviously this is wishful thinking, because a
> > lot of lock chains also depend on the number of online cpus...
>
> Sure - this is an obvious case where it is valid to take >30 locks at
> once in a single thread. In fact, worst case here we take twice that
> number - we actually take 2 per inode (ilock and flock), so a
> full 32 inode cluster free would take >60 locks in the middle of this
> function, and we should be busting this depth counter limit all the
> time.
I think this started because jeffpc couldn't boot without XFS busting
lockdep :-)
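
To put rough numbers on it, here is a minimal userspace sketch of the
per-task held-lock depth counter (an illustration only, not the real
lockdep code; the MAX_LOCK_DEPTH value of 30 is an assumption taken
from the ">30 locks" figure above):

/* Toy model of lockdep's held-lock depth counter; constants and
 * names here are illustrative assumptions, not kernel code. */
#include <stdio.h>

#define MAX_LOCK_DEPTH     30	/* assumed pre-patch limit, per ">30" */
#define INODES_PER_CLUSTER 32
#define LOCKS_PER_INODE     2	/* ilock + flock, as described above */

int main(void)
{
	int depth = 0;
	int i, l;

	for (i = 0; i < INODES_PER_CLUSTER; i++) {
		for (l = 0; l < LOCKS_PER_INODE; l++) {
			if (++depth > MAX_LOCK_DEPTH) {
				printf("inode %d busts MAX_LOCK_DEPTH=%d at depth %d\n",
				       i, MAX_LOCK_DEPTH, depth);
				return 1;
			}
		}
	}
	printf("peak depth %d fits\n", depth);
	return 0;
}

The peak here is 2 * 32 = 64 held locks, which is why the limit gets
busted well before the cluster free completes.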
> Do semaphores (the flush locks) contribute to the lock depth
> counters?
No, alas, we cannot handle semaphores in lockdep. Semaphores don't have
a strict owner, hence we cannot track them. This is one of the reasons
to rid ourselves of semaphores - that, and there are very few cases
where the actual semantics of semaphores are needed. Most of the time,
code using semaphores can be expressed with either a mutex or a
completion.
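
To make the two conversions concrete, here is a hedged kernel-style C
sketch (the foo_* names are made up for illustration): a semaphore used
for mutual exclusion becomes a mutex, which has a strict owner that
lockdep can track, while a semaphore used for signalling between tasks
becomes a completion.

/* Illustrative sketch only; foo_* is a hypothetical subsystem. */
#include <linux/mutex.h>
#include <linux/completion.h>

static DEFINE_MUTEX(foo_lock);		/* was a semaphore initialised to 1 */
static DECLARE_COMPLETION(foo_done);	/* was a semaphore initialised to 0 */

/* Mutual exclusion: the task that locks also unlocks, so lockdep
 * can record an owner. */
static void foo_modify(void)
{
	mutex_lock(&foo_lock);		/* was down(&foo_sem) */
	/* ... touch shared state ... */
	mutex_unlock(&foo_lock);	/* was up(&foo_sem) */
}

/* Signalling: waiter and signaller are different tasks, so there is
 * no owner to track - a completion expresses this directly. */
static void foo_wait_for_work(void)
{
	wait_for_completion(&foo_done);	/* was down(&foo_done_sem) */
}

static void foo_work_finished(void)
{
	complete(&foo_done);		/* was up(&foo_done_sem) */
}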