Message-ID: <20080804172728.GJ11476@duo.random>
Date: Mon, 4 Aug 2008 19:27:28 +0200
From: Andrea Arcangeli <andrea@...ranet.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Dave Jones <davej@...hat.com>, Roland Dreier <rdreier@...co.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
David Miller <davem@...emloft.net>, jeremy@...p.org,
hugh@...itas.com, mingo@...e.hu, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 7/7] lockdep: spin_lock_nest_lock()
On Mon, Aug 04, 2008 at 06:38:55PM +0200, Peter Zijlstra wrote:
> You also need CONFIG_PROVE_LOCKING, but I'll assume that's set too.
Sorry, that wasn't the problem; my earlier testing only passed because
of another error... to check this report I had rebuilt a kvm configured
for rhel, not for mainline, so it wasn't the right test...
When "long ago" I tested that this was working fine (actually when
Andrew asked me), I guess lockdep was working because the
implementation with the vmalloc array was slightly different,
otherwise I don't know. I'm fairly certain that it worked fine at some
point, and I didn't expect the refactoring to generate false positives.
> Dave Jones just handed me:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=457779
Yes, I can now reproduce this after a 'make sync'.
Obviously this is a bug in lockdep: if lockdep were right, the kernel
would actually deadlock here, but this is just a false positive and
everything runs fine.
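For context, the pattern that trips lockdep here is taking every
mapping's lock while holding one outer lock: all those spinlocks
belong to a single lock class, so the second acquisition already looks
recursive to it. A minimal sketch of the annotation this RFC is about
(the helper name and field layout here are illustrative, not quoted
from the patch):

	#include <linux/fs.h>
	#include <linux/mm_types.h>
	#include <linux/spinlock.h>

	/*
	 * Sketch: take one mapping's lock while the outer mm->mmap_sem
	 * is already held for writing.  A plain spin_lock() on the
	 * second same-class i_mmap_lock would make lockdep report a
	 * recursive deadlock; spin_lock_nest_lock() tells it that the
	 * whole set of same-class locks is serialized by mmap_sem.
	 */
	static void vm_lock_mapping(struct mm_struct *mm,
				    struct address_space *mapping)
	{
		spin_lock_nest_lock(&mapping->i_mmap_lock, &mm->mmap_sem);
	}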
I assume it can't tell that the spinlock addresses are different (I
think it uses the address as the key only for static locks), so I
wonder if you could call print_deadlock_bug() from the failure path of
the spinlock to solve this?
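To make that concrete: as far as I can tell, a dynamically allocated
lock gets its class from the static lockdep key created at the
spin_lock_init() call site, so every lock initialized through the same
helper ends up in one class no matter what its address is. Schematic
illustration only, the names are made up:

	#include <linux/spinlock.h>

	/* one init call site -> one static lockdep key -> one class
	 * for every lock initialized here, regardless of address */
	static void my_lock_init(spinlock_t *lock)
	{
		spin_lock_init(lock);
	}

	static void example(spinlock_t *a, spinlock_t *b)
	{
		my_lock_init(a);
		my_lock_init(b);

		spin_lock(a);
		spin_lock(b);	/* lockdep: "possible recursive locking
				 * detected", although a != b and there
				 * is no real deadlock */
		spin_unlock(b);
		spin_unlock(a);
	}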
Does something like double_rq_lock() work just because rq1 and rq2
don't have the same name, unlike my case where all the locks are
called "mapping"->?
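For reference, double_rq_lock() doesn't rely on the names at all: it
orders the two same-class locks by address and marks the second
acquisition as intentional one-level nesting. Roughly, paraphrasing
the scheduler code of this era:

	static void double_rq_lock(struct rq *rq1, struct rq *rq2)
	{
		if (rq1 == rq2) {
			spin_lock(&rq1->lock);
		} else if (rq1 < rq2) {
			/* always take the lower-addressed lock first */
			spin_lock(&rq1->lock);
			/* tell lockdep the second same-class lock is a
			 * deliberate one-level nesting, not a deadlock */
			spin_lock_nested(&rq2->lock, SINGLE_DEPTH_NESTING);
		} else {
			spin_lock(&rq2->lock);
			spin_lock_nested(&rq1->lock, SINGLE_DEPTH_NESTING);
		}
	}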
> David Miller just did a patch that might fix that.
Wow, cool. After 11 months I had lost all hope that lockdep could ever
work in that environment... Was it an actual bug, or is this some way
to lower the complexity of the graph build?