Message-Id: <20140124152023.A450E599@viggo.jf.intel.com>
Date: Fri, 24 Jan 2014 07:20:23 -0800
From: Dave Hansen <dave@...1.net>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Dave Hansen <dave@...1.net>,
peterz@...radead.org, penberg@...nel.org, linux@....linux.org.uk
Subject: [linux-next][PATCH] mm: slub: work around unneeded lockdep warning
I think this is a next-only thing. Pekka, can you pick this up,
please?
--
From: Dave Hansen <dave.hansen@...ux.intel.com>
The slub code does some setup during early boot in
early_kmem_cache_node_alloc() with some local data. There is no
possible way that another CPU can see this data, so the slub code
skips taking the (strictly unneeded) list_lock. However, some new
lockdep assertions check that add_partial() is _always_ called with
the list_lock held.
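For context, the assertion being tripped is roughly of this shape in
-next (a sketch of the lockdep check, not the exact hunk; the
__add_partial() helper name is how the unchecked path is split out
there):

	static inline void add_partial(struct kmem_cache_node *n,
					struct page *page, int tail)
	{
		/* complain if the caller does not hold the per-node list lock */
		lockdep_assert_held(&n->list_lock);
		__add_partial(n, page, tail);
	}

early_kmem_cache_node_alloc() hits this even though the page being
added is not yet visible to any other CPU.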
Just add the locking, even though it is technically unnecessary.
Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Pekka Enberg <penberg@...nel.org>
Cc: Russell King <linux@....linux.org.uk>
---
b/mm/slub.c | 6 ++++++
1 file changed, 6 insertions(+)
diff -puN mm/slub.c~slub-lockdep-workaround mm/slub.c
--- a/mm/slub.c~slub-lockdep-workaround 2014-01-24 07:19:23.794069012 -0800
+++ b/mm/slub.c 2014-01-24 07:19:23.799069236 -0800
@@ -2890,7 +2890,13 @@ static void early_kmem_cache_node_alloc(
init_kmem_cache_node(n);
inc_slabs_node(kmem_cache_node, node, page->objects);
+ /*
+ * the lock is for lockdep's sake, not for any actual
+ * race protection
+ */
+ spin_lock(&n->list_lock);
add_partial(n, page, DEACTIVATE_TO_HEAD);
+ spin_unlock(&n->list_lock);
}
static void free_kmem_cache_nodes(struct kmem_cache *s)
_