Message-Id: <1217875729.3589.54.camel@twins>
Date:	Mon, 04 Aug 2008 20:48:49 +0200
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Andrea Arcangeli <andrea@...ranet.com>
Cc:	Dave Jones <davej@...hat.com>, Roland Dreier <rdreier@...co.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	David Miller <davem@...emloft.net>, jeremy@...p.org,
	hugh@...itas.com, mingo@...e.hu, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 7/7] lockdep: spin_lock_nest_lock()

On Mon, 2008-08-04 at 19:27 +0200, Andrea Arcangeli wrote:
> On Mon, Aug 04, 2008 at 06:38:55PM +0200, Peter Zijlstra wrote:
> > You also need CONFIG_PROVE_LOCKING, but I'll assume that's set too.
> 
> Sorry, that wasn't the problem; my earlier testing only passed because
> of another error... to test this report I had rebuilt a kvm configured
> for RHEL rather than for mainline, so it wasn't the right test...
> 
> When "long ago" I tested that this was working fine (actually when
> Andrew asked me), I guess lockdep was working because the
> implementation with the vmalloc array was slightly different,
> otherwise I don't know. I'm fairly certain that it worked fine at some
> point, and I didn't expect the refactoring to generate false positives.
> 
> > Dave Jones just handed me:
> > 
> >   https://bugzilla.redhat.com/show_bug.cgi?id=457779
> 
> Yes, I can reproduce this now after a 'make sync'.
> 
> Obviously it's a bug in lockdep that it trips over this: if lockdep
> were right, the kernel would deadlock, whereas this is just a false
> positive and everything runs fine.
> 
> I assume it can't understand that the spinlock addresses are different
> (I think it uses the address as a key only for static locks), so I
> wonder if you could call print_deadlock_bug() from the failure path of
> the spinlock to solve this?
> 
> Does something like double_rq_lock() work just because rq1 and rq2
> don't have the same name, unlike my case where all the locks are
> called "mapping"->?

Lockdep doesn't look at individual lock instances, but instead works on
lock classes. The rq locks used to have a separate class each, but the
third patch in this series changes that.

The advantage of not working on lock instances is that the dependency
graph becomes much smaller and problems are reported much earlier; the
downside is that you sometimes have to augment the class information a
little.

Normally locks are grouped into classes based on their init site; e.g.,
every instance initialized by a particular spin_lock_init() call site
will belong to the same class.
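
A minimal sketch of that grouping (illustrative only; the struct and
function names are made up):

	struct foo {
		spinlock_t lock;
	};

	static void foo_init(struct foo *f)
	{
		/* every foo's lock joins one class: this init site */
		spin_lock_init(&f->lock);
	}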

Besides this grouping, there is spin_lock_nested(), which annotates
simple recursion (up to 8 sub-classes).
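
For instance, a sketch modeled on the scheduler's double_rq_lock()
(simplified, not the actual code): take two same-class locks in a fixed
address order, and mark the second acquisition as a distinct sub-class
so lockdep doesn't report it as recursion:

	static void double_lock(spinlock_t *l1, spinlock_t *l2)
	{
		if (l1 > l2)
			swap(l1, l2);	/* fixed order avoids ABBA deadlocks */

		spin_lock(l1);
		/* same class as l1, but sub-class 1: not self-recursion */
		spin_lock_nested(l2, SINGLE_DEPTH_NESTING);
	}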

Then there is the option of explicitly creating classes and assigning
locks to them using lockdep_set_class().
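
Something along these lines (again only a sketch; the key name is made
up):

	/* one explicit class for every lock initialized through here */
	static struct lock_class_key mapping_lock_key;

	static void mapping_lock_init(spinlock_t *lock)
	{
		spin_lock_init(lock);
		lockdep_set_class(lock, &mapping_lock_key);
	}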

Either way, there is a limit of MAX_LOCK_DEPTH (48) locks that can be
tracked per task (including nesting interrupts).

Making that limit dynamic is rather challenging, as it would involve
allocating memory while potentially holding allocator locks.
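
The spin_lock_nest_lock() annotation this series adds targets the
take-many-locks-of-one-class case: you tell lockdep that some outer
lock serializes everybody who locks the whole group. A rough sketch of
the intended usage (all names below are made up):

	struct child {
		spinlock_t lock;	/* all instances: one class */
	};

	struct parent {
		spinlock_t lock;	/* serializes lock-all operations */
		struct child *children[NR_CHILDREN];
	};

	static void lock_all_children(struct parent *p)
	{
		int i;

		spin_lock(&p->lock);
		for (i = 0; i < NR_CHILDREN; i++)
			/* many same-class locks, vouched for by p->lock */
			spin_lock_nest_lock(&p->children[i]->lock, &p->lock);
	}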

> > David Miller just did a patch that might fix that.
> 
> Wow, cool; after 11 months I had lost any hope that lockdep could ever
> work in that environment... Was it an actual bug, or is this some way
> to lower the complexity of the graph build?

He reduced the search complexity of the graph; look at the first patch
in this series.
