Message-ID: <5046B9EE.7000804@linux.vnet.ibm.com>
Date: Wed, 05 Sep 2012 10:33:18 +0800
From: Michael Wang <wangyun@...ux.vnet.ibm.com>
To: LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
CC: Matt Mackall <mpm@...enic.com>, Pekka Enberg <penberg@...nel.org>,
Christoph Lameter <cl@...ux-foundation.org>
Subject: [PATCH] slab: fix the DEADLOCK issue on l3 alien lock
From: Michael Wang <wangyun@...ux.vnet.ibm.com>
A false DEADLOCK will be reported when running a kernel with NUMA and LOCKDEP enabled;
the call chain that triggers this false report is:
kmem_cache_free() //free obj in cachep
-> cache_free_alien() //acquire cachep's l3 alien lock
-> __drain_alien_cache()
-> free_block()
-> slab_destroy()
-> kmem_cache_free() //free slab in cachep->slabp_cache
-> cache_free_alien() //acquire cachep->slabp_cache's l3 alien lock
Since cachep's l3 alien lock and cachep->slabp_cache's l3 alien lock belong to the
same lock class, lockdep generates a false report.
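For illustration only (this is not the actual slab.c code; fake_l3, make_l3 and
nested_free_path are made-up names for this sketch): two dynamically allocated
spinlocks initialized at the same spin_lock_init() call site share one lockdep
class by default, so nesting them looks like recursive locking to lockdep even
though the lock objects are distinct:

#include <linux/slab.h>
#include <linux/spinlock.h>

struct fake_l3 {
	spinlock_t alien_lock;
};

static struct fake_l3 *make_l3(void)
{
	struct fake_l3 *l3 = kmalloc(sizeof(*l3), GFP_KERNEL);

	if (l3)
		spin_lock_init(&l3->alien_lock);	/* same call site => same lock class */
	return l3;
}

static void nested_free_path(struct fake_l3 *parent, struct fake_l3 *slabp)
{
	spin_lock(&parent->alien_lock);
	/* ... __drain_alien_cache() -> free_block() -> slab_destroy() ... */
	spin_lock(&slabp->alien_lock);	/* same class already held -> lockdep reports DEADLOCK */
	spin_unlock(&slabp->alien_lock);
	spin_unlock(&parent->alien_lock);
}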
This should not happen, since we already have init_lock_keys(), which reassigns
the lock classes for both the l3 list locks and the l3 alien locks.
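For reference, the annotation pattern behind init_lock_keys() is lockdep_set_class()
with dedicated lock_class_keys; a minimal sketch of that pattern (simplified, with
made-up names l3_key_sketch, alc_key_sketch and annotate_l3_locks_sketch, not the
exact slab.c code) is:

#include <linux/lockdep.h>
#include <linux/spinlock.h>

/* Dedicated class keys so the list lock and the alien lock no longer share a class. */
static struct lock_class_key l3_key_sketch;
static struct lock_class_key alc_key_sketch;

static void annotate_l3_locks_sketch(spinlock_t *list_lock, spinlock_t *alien_lock)
{
	lockdep_set_class(list_lock, &l3_key_sketch);
	lockdep_set_class(alien_lock, &alc_key_sketch);
}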
However, init_lock_keys() was invoked at the wrong point: before enable_cpucache()
is invoked on each cache.
Until slab_state is set to FULL, enable_cpucache() is not invoked on caches when
they are created, so their l3 alien structures do not exist yet. Therefore, although
init_lock_keys() was invoked, it could not change the l3 alien lock classes, because
those locks are only created later, once enable_cpucache() runs.
This patch invokes init_lock_keys() after enable_cpucache() has been done on all
caches, instead of before, to avoid the false DEADLOCK report.
Signed-off-by: Michael Wang <wangyun@...ux.vnet.ibm.com>
---
mm/slab.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index d4715e5..cc679ef 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1780,9 +1780,6 @@ void __init kmem_cache_init_late(void)
slab_state = UP;
- /* Annotate slab for lockdep -- annotate the malloc caches */
- init_lock_keys();
-
/* 6) resize the head arrays to their final sizes */
mutex_lock(&slab_mutex);
list_for_each_entry(cachep, &slab_caches, list)
@@ -1790,6 +1787,9 @@ void __init kmem_cache_init_late(void)
BUG();
mutex_unlock(&slab_mutex);
+ /* Annotate slab for lockdep -- annotate the malloc caches */
+ init_lock_keys();
+
/* Done! */
slab_state = FULL;
--
1.7.4.1