Message-ID: <20150811013204.GC4606@fixme-laptop.cn.ibm.com>
Date: Tue, 11 Aug 2015 09:32:04 +0800
From: Boqun Feng <boqun.feng@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [Question] lockdep: Is nested lock handled correctly?
Hi Peter,
On Mon, Aug 10, 2015 at 04:24:17PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 10, 2015 at 09:49:24PM +0800, Boqun Feng wrote:
<snip>
> > Though I don't want to have a locking order like that either, we can't
> > stop others from using that order (maybe a good design review will), and
> > lockdep yells about something -unrelated- when it sees such an order.
> >
> > I think we can either let lockdep complain if someone uses this locking
> > order, or clean up the current code a little bit to tolerate this.
> >
> > If you really think we should do something about it, I can write the
> > patch and add test cases.
>
>
> Maybe something like the below in __lock_acquire():
>
> 	/* Daft bugger, can't guard a nesting order with the same lock class */
> 	if (DEBUG_LOCKS_WARN_ON(lock == nest_lock))
> 		return 0;
>
> ?
I may not understand this well... but I think this check would not detect
the problem: in the problematic case the lock being acquired is never the
same lock as its nest_lock, so the warning above would never fire. The
problem is:

A correct nesting order gets disturbed by another lock that is acquired
before the nested lock is acquired and released before the nested lock is
released; releasing that other lock makes two held_lock structures get
merged during __lock_acquire().
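
For example, consider a made-up sequence like the following (the names are
only for illustration: "parent"'s mutex is the nest_lock guarding the
per-child mutexes, and "other" is some unrelated lock):

	mutex_lock(&parent->mutex);			/* the guard (nest_lock) */
	mutex_lock_nest_lock(&child1->mutex, &parent->mutex);
	mutex_lock(&other->mutex);			/* unrelated lock taken in between */
	mutex_lock_nest_lock(&child2->mutex, &parent->mutex);
	mutex_unlock(&other->mutex);			/* dropped while child1/child2 are still held */

Before the unlock the held_locks stack is [parent, child1, other, child2].
When "other" is released, __lock_release() pops it and "reacquires" the rest
of the stack; __lock_acquire() then sees child1 on top with the same
class_idx as child2 and a non-NULL nest_lock, so it merges child2 into
child1's held_lock (references++) instead of pushing a new entry. The stack
ends up one entry shorter than expected, and the depth sanity check at the
end of __lock_release() yells, but about the wrong thing.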
I think we can detect this in __lock_release():
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 8acfbf7..e75f622 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3427,6 +3427,19 @@ found_it:
 	curr->lockdep_depth = i;
 	curr->curr_chain_key = hlock->prev_chain_key;
 
+	/*
+	 * We are about to "reacquire" the rest of the stack, but
+	 * __lock_acquire() will merge the next hlock into the new top of the
+	 * stack, which means this is not a good time to release this lock and
+	 * the lock users might need to reconsider their locking design.
+	 */
+	if (i > 0 && (i + 1) < depth) {
+		hlock = curr->held_locks + i + 1;
+		if (DEBUG_LOCKS_WARN_ON(hlock->nest_lock &&
+			hlock->class_idx == curr->held_locks[i-1].class_idx))
+			return 0;
+	}
+
 	for (i++; i < depth; i++) {
 		hlock = curr->held_locks + i;
 		if (!__lock_acquire(hlock->instance,
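
With that check in place, the made-up sequence above should (as far as I can
tell) warn right at the point where the unrelated lock is dropped:

	mutex_unlock(&other->mutex);	/* "other" is found at index i = 2; the next
					 * hlock (child2) has a nest_lock and the same
					 * class as held_locks[i-1] (child1), so the new
					 * DEBUG_LOCKS_WARN_ON() fires here rather than
					 * the later depth check.
					 */

which points much closer to the locking pattern that causes the merge.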
Regards,
Boqun