Message-ID: <20090616103606.GA3497@elte.hu>
Date: Tue, 16 Jun 2009 12:36:06 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Alan Cox <alan@...rguk.ukuu.org.uk>,
Joerg Roedel <joerg.roedel@....com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Steven Rostedt <rostedt@...dmis.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: bug in tty ldisc and friends
* Ingo Molnar <mingo@...e.hu> wrote:
> This is something we noticed recently: dma-debug uses a lot of
> lock classes and thus creates a really large lock-graph, depleting
> the reserves quickly.
yep, i think dma-debug is the culprit. It has:
#define HASH_SIZE 1024ULL
each hash entry has a separate lock.
But this should be easy to solve: a single shared lock class for these
locks. Patch below (warning: utterly untested!).
(Btw., that SPIN_LOCK_UNLOCKED init method should really be
deprecated for real.)
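(To spell out why the raw initializer matters here, if i remember
lockdep's fallback right: for a statically allocated lock with no
registered key, lockdep keys the class off the lock's own address, so
1024 static buckets become 1024 classes; spin_lock_init() embeds a
static key at its call site, so all buckets initialized there share one
class. Kernel-style fragment, not compiled here:

	/* old, deprecated -- each static lock becomes its own class: */
	dma_entry_hash[i].lock = SPIN_LOCK_UNLOCKED;

	/* preferred -- one class per init site: */
	spin_lock_init(&dma_entry_hash[i].lock);

and DEFINE_SPINLOCK() is the preferred spelling for locks that really
need static initialization.)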
Ingo
diff --git a/lib/dma-debug.c b/lib/dma-debug.c
index 3b93129..012c240 100644
--- a/lib/dma-debug.c
+++ b/lib/dma-debug.c
@@ -62,6 +62,8 @@ struct dma_debug_entry {
 #endif
 };
 
+static struct lock_class_key hash_bucket_class;
+
 struct hash_bucket {
 	struct list_head list;
 	spinlock_t lock;
@@ -716,7 +718,8 @@ void dma_debug_init(u32 num_entries)
 
 	for (i = 0; i < HASH_SIZE; ++i) {
 		INIT_LIST_HEAD(&dma_entry_hash[i].list);
-		dma_entry_hash[i].lock = SPIN_LOCK_UNLOCKED;
+		spin_lock_init(&dma_entry_hash[i].lock);
+		lockdep_set_class(&dma_entry_hash[i].lock, &hash_bucket_class);
 	}
 
 	if (dma_debug_fs_init() != 0) {
--