Message-ID: <20140207184653.GA5799@in.ibm.com>
Date: Sat, 8 Feb 2014 00:16:53 +0530
From: Gautham R Shenoy <ego@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, penberg@kernel.org
Subject: [PATCH] slub: Hold list_lock unconditionally before the call to
add_full.
Hi,

From the lockdep annotation, and the comment that existed before the
lockdep annotations were introduced, mm/slub.c:add_full(s, n, page)
expects to be called with n->list_lock held.
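
For reference, the check in add_full() looks roughly like this
(abridged from 3.14-rc1's mm/slub.c; a sketch, the exact placement of
the assertion relative to the flags check may differ):

	static void add_full(struct kmem_cache *s,
		struct kmem_cache_node *n, struct page *page)
	{
		/* lockdep complains if the caller doesn't hold n->list_lock */
		lockdep_assert_held(&n->list_lock);

		if (!(s->flags & SLAB_STORE_USER))
			return;

		list_add(&page->lru, &n->full);
	}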

However, there is a call path in deactivate_slab() that ends up
calling add_full() without holding n->list_lock, namely when:

	(new.inuse || n->nr_partial <= s->min_partial) &&
	!(new.freelist) &&
	!(kmem_cache_debug(s))
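
To illustrate (abridged from deactivate_slab() in 3.14-rc1): the
M_FULL branch takes n->list_lock only when kmem_cache_debug(s) is
true, yet add_full() is invoked further down whenever m == M_FULL:

	} else {
		m = M_FULL;
		if (kmem_cache_debug(s) && !lock) {
			lock = 1;
			...
			spin_lock(&n->list_lock);
		}
	}
	...
	} else if (m == M_FULL) {
		stat(s, DEACTIVATE_FULL);
		/* reached without n->list_lock when !kmem_cache_debug(s) */
		add_full(s, n, page);
	}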

This was discovered while onlining/offlining CPUs on 3.14-rc1, thanks
to the lockdep annotations added by commit c65c1877bd68 ("slub: use
lockdep_assert_held").

Fix this by taking n->list_lock unconditionally, irrespective of the
state of kmem_cache_debug(s).

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
---
mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 7e3e045..1f723f7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1882,7 +1882,7 @@ redo:
 		}
 	} else {
 		m = M_FULL;
-		if (kmem_cache_debug(s) && !lock) {
+		if (!lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
--
1.8.3.1