Message-Id: <1200400787.5887.129.camel@johannes.berg>
Date: Tue, 15 Jan 2008 13:39:47 +0100
From: Johannes Berg <johannes@...solutions.net>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Dave Young <hidave.darkstar@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...e.hu>, Oleg Nesterov <oleg@...sign.ru>
Subject: Re: 2.6.24-rc7 lockdep warning when poweroff
> > To make sure now:
> > same key - different name - BAD
> > same key - same name - OK
> > different key - same name - OK
>
> Strictly speaking one can do that, although I would recommend against it
> - it leads to confusion as to which lock got into trouble when looking
> at lockdep/stat output.
True, but I don't see a good way to avoid that. Similar things also
happen with

	mutex_init(&priv->mtx);

for example, no?
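(For reference, mutex_init() expands to roughly the following -- a
simplified sketch, the details vary between kernel versions:)

#define mutex_init(mutex)					\
do {								\
	/* one static key, i.e. one lockdep class, per call site */ \
	static struct lock_class_key __key;			\
								\
	__mutex_init((mutex), #mutex, &__key);			\
} while (0)

The name is just the stringified argument, so two different call sites
that both happen to say mutex_init(&priv->mtx) produce two classes
(two keys) with the identical name "&priv->mtx" -- exactly the
different-key/same-name case from the list above.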
> > The root problem here seems to be that I use the same name for the
> > lockdep_map as for the workqueue, and other code uses a non-static
> > workqueue name. Using the workqueue name for the lock is good for
> > knowing which workqueue ran into trouble though.
>
> Indeed, and also using a different key allows the workqueue to have
> different lock dependencies as well. The trouble is, lockdep works at
> the class level, a class with multiple names just doesn't make sense,
> and reporting will get it wrong (although it may appear to work
> correctly in the trivial cases).
Right.
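(To spell it out with a hypothetical sketch -- one static key
registered under two different names; a and b here are dynamically
allocated objects with embedded spinlocks:)

static struct lock_class_key one_key;

spin_lock_init(&a->lock);
lockdep_set_class_and_name(&a->lock, &one_key, "lock-A");

spin_lock_init(&b->lock);
lockdep_set_class_and_name(&b->lock, &one_key, "lock-B");

Both locks end up in the single class behind one_key, and a report
will show whichever name that class happened to be registered with
first.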
> > mac80211 for example wants to allocate a (single-threaded) workqueue for
> > each hardware that is plugged in and wants to call it by the hardware
> > name.
>
> Right, that would require a new key for each instance.
Except, how could I do that? Keys are required to be static, so I
can't use the per-hardware object as the key. In any case, I don't
think it matters much: the workqueues are per-hardware but all have
similar users, and I think the other users here probably behave
similarly.
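(To illustrate the static requirement -- a hypothetical sketch of what
does NOT work; lockdep's static_obj() check rejects keys that live in
dynamically allocated memory:)

struct hw_priv {
	struct lock_class_key wq_key;	/* BAD: key not in static storage */
	struct workqueue_struct *wq;
	/* ... */
};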
> > If you think the patch is a correct way to solve the problem I'll submit
> > it formally and it should then be included in 2.6.24 to avoid
> > regressions with the workqueue API (the workqueue lockup detection was
> > merged early in 2.6.24.)
>
> The patch looks ok, one important thing to note is that it means that
> all workqueues instantiated by the same __create_workqueue() call-site
> share lock dependency chains - I'm unsure if that might get us into
> trouble or not.
It doesn't seem to have so far ;) and I don't think it should. If some
code allocates a per-instance workqueue, that's much like having an
inode lock or such.
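(For reference, with the patch __create_workqueue() looks roughly like
this sketch -- the static __key is what makes all workqueues from one
call site share a class, and the __builtin_constant_p() test avoids
giving that one class many runtime names:)

#define __create_workqueue(name, singlethread, freezeable)	\
({								\
	static struct lock_class_key __key;			\
	const char *__lock_name;				\
								\
	if (__builtin_constant_p(name))				\
		__lock_name = (name);				\
	else							\
		__lock_name = #name;				\
								\
	__create_workqueue_key((name), (singlethread),		\
			       (freezeable), &__key,		\
			       __lock_name);			\
})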
The scenario that gets into trouble with this requires both a
per-instance lock and a per-instance workqueue, and then flushing the
workqueue of instance A (which can contain work items taking instance
A's lock) while holding the lock of instance B. But unless that is
nested in a way that guarantees the reverse order (BA) can never
happen, it's actually a possible ABBA deadlock anyway.
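A sketch of that scenario (hypothetical fields mtx and wq on two
instances a and b of the same driver):

/* CPU 0 */
mutex_lock(&b->mtx);
flush_workqueue(a->wq);		/* waits for work that takes a->mtx */

/* CPU 1 */
mutex_lock(&a->mtx);
flush_workqueue(b->wq);		/* waits for work that takes b->mtx */

If nothing guarantees that only one of the two orders can occur, the
two CPUs deadlock on each other.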
> > Who should I send it to in that case?
>
> Me and Ingo :-)
Alright, I'll write a patch description and send it in a minute.
johannes