Message-ID: <20140109111516.GE7572@laptop.programming.kicks-ass.net>
Date:	Thu, 9 Jan 2014 12:15:16 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	linux-kernel@...r.kernel.org
Cc:	Ingo Molnar <mingo@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Steven Rostedt <rostedt@...dmis.org>,
	Oleg Nesterov <oleg@...hat.com>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [RFC][PATCH] lockdep: Introduce wait-type checks

Subject: lockdep: Introduce wait-type checks
From: Peter Zijlstra <peterz@...radead.org>
Date: Tue, 19 Nov 2013 21:45:48 +0100

This patch extends lockdep to validate lock wait-type context.

The current wait-types are:

	LD_WAIT_FREE,		/* wait free, rcu etc.. */
	LD_WAIT_SPIN,		/* spin loops, raw_spinlock_t etc.. */
	LD_WAIT_CONFIG,		/* CONFIG_PREEMPT_LOCK, spinlock_t etc.. */
	LD_WAIT_SLEEP,		/* sleeping locks, mutex_t etc.. */

Where lockdep validates that the current lock (the one being acquired)
fits in the current wait-context (as generated by the held stack).
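
Concretely, with the types ordered FREE < SPIN < CONFIG < SLEEP,
acquiring a new lock N while holding stack H is valid iff:

	N.outer <= min { L.inner : L in H, L.inner != INV }

which is what check_context() below computes.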

This ensures that we do not try to acquire mutexes while holding
spinlocks, do not attempt to acquire spinlocks while holding
raw_spinlocks, and so on. In other words, it's a fancier
might_sleep().
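
For instance (an illustrative sketch with made-up locks 's' and 'm',
not part of this patch), the classic sleeping-while-atomic pattern now
splats at acquire time:

	spin_lock(&s);		/* held stack now presents LD_WAIT_CONFIG */
	mutex_lock(&m);		/* wants LD_WAIT_SLEEP; SLEEP > CONFIG -> BUG */
	mutex_unlock(&m);
	spin_unlock(&s);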

Obviously RCU makes the entire ordeal more complex than a simple single
value test, because RCU can be acquired in (pretty much) any context,
and while it presents a context to nested locks, that context is not
the same as the one it was acquired in.

Therefore we needed to split the wait_type into two values, one
representing the acquire (outer) and one representing the nested
context (inner). For most 'normal' locks these two are the same.

[ To make static initialization easier we have the rule that
  .outer == INV means .outer == .inner; because INV == 0. ]
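
[ So a static initializer that only sets .wait_type_inner, like the
  __DEP_MAP_MUTEX_INITIALIZER() below, leaves .outer at 0 == INV and
  both check_context() and the report treat it as equal to .inner. ]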

It further means that we need to find the minimal .inner of the held
stack to compare against the .outer of the new lock, because while
'normal' RCU presents a CONFIG type to nested locks, if it is taken
while already holding a SPIN type it obviously doesn't relax the
rules.
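
For example, holding rcu_read_lock (.inner = CONFIG) nested inside a
raw_spinlock_t (.inner = SPIN) yields min(SPIN, CONFIG) == SPIN, so a
subsequent spin_lock() (.outer == .inner == CONFIG) still fails:
CONFIG > SPIN.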

Below is the output generated by the trivial example:

	raw_spin_lock(&foo);
	spin_lock(&bar);
	spin_unlock(&bar);
	raw_spin_unlock(&foo);

The way to read it is to look at the new -{n:m} part in the lock
description; -{3:3} for our attempted lock, and try to match that up
to the held locks, which in this case is the single one: -{2:2}.

This tells us that the lock being acquired requires a more relaxed
environment than the one presented by the lock stack.
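
(The numbers map straight onto enum lockdep_wait_type: INV=0, FREE=1,
SPIN=2, CONFIG=3, SLEEP=4; so -{3:3} is the spinlock_t 'bar' and
-{2:2} the raw_spinlock_t 'foo'.)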

Currently only the normal locks and RCU are converted; the rest of the
lockdep users default to .inner = INV, which is ignored. More
conversions can be done when desired.
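
(The unconverted users keep the plain lockdep_init_map() wrapper below,
which passes LD_WAIT_INV; check_context() returns early when the new
lock's .inner is INV.)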

 [ ] =============================
 [ ] [ BUG: Invalid wait context ]
 [ ] 3.13.0-rc7-01825-g4443577b1c38-dirty #718 Not tainted
 [ ] -----------------------------
 [ ] swapper/0/1 is trying to lock:
 [ ]  (bar){......}-{3:3}, at: [<ffffffff81d0acea>] sched_init_smp+0x423/0x45e
 [ ]
 [ ] stack backtrace:
 [ ] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.13.0-rc7-01825-g4443577b1c38-dirty #718
 [ ] Hardware name: Supermicro X8DTN/X8DTN, BIOS 4.6.3 01/08/2010
 [ ]  00000000000000a2 ffff880236847cf8 ffffffff8165d96a 0000000000000001
 [ ]  ffff880236847df0 ffffffff810df09f ffffffff81c3e698 ffffffff820b0290
 [ ]  ffff880200000000 ffffea0000000000 ffff880400000000 0000000000004140
 [ ] Call Trace:
 [ ]  [<ffffffff8165d96a>] dump_stack+0x4e/0x7a
 [ ]  [<ffffffff810df09f>] __lock_acquire+0x44f/0x2100
 [ ]  [<ffffffff810c0c8a>] ? task_rq_lock+0x5a/0xa0
 [ ]  [<ffffffff816657ad>] ? _raw_spin_unlock_irqrestore+0x6d/0x80
 [ ]  [<ffffffff810e1317>] lock_acquire+0x87/0x120
 [ ]  [<ffffffff81d0acea>] ? sched_init_smp+0x423/0x45e
 [ ]  [<ffffffff81664e5b>] _raw_spin_lock+0x3b/0x50
 [ ]  [<ffffffff81d0acea>] ? sched_init_smp+0x423/0x45e
 [ ]  [<ffffffff81d0acea>] sched_init_smp+0x423/0x45e
 [ ]  [<ffffffff81cedf03>] kernel_init_freeable+0x91/0x197
 [ ]  [<ffffffff8164f600>] ? rest_init+0xd0/0xd0
 [ ]  [<ffffffff8164f60e>] kernel_init+0xe/0x130
 [ ]  [<ffffffff8166d36c>] ret_from_fork+0x7c/0xb0
 [ ]  [<ffffffff8164f600>] ? rest_init+0xd0/0xd0
 [ ]
 [ ] other info that might help us debug this:
 [ ] 1 lock held by swapper/0/1:
 [ ]  #0:  (foo){+.+...}-{2:2}, at: [<ffffffff81d0acde>] sched_init_smp+0x417/0x45e
 [ ]
 [ ] stack backtrace:
 [ ] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.13.0-rc7-01825-g4443577b1c38-dirty #718
 [ ] Hardware name: Supermicro X8DTN/X8DTN, BIOS 4.6.3 01/08/2010
 [ ]  00000000000000a2 ffff880236847cf8 ffffffff8165d96a 0000000000000001
 [ ]  ffff880236847df0 ffffffff810df0cf ffffffff81c3e698 ffffffff820b0290
 [ ]  ffff880200000000 ffffea0000000000 ffff880400000000 0000000000004140
 [ ] Call Trace:
 [ ]  [<ffffffff8165d96a>] dump_stack+0x4e/0x7a
 [ ]  [<ffffffff810df0cf>] __lock_acquire+0x47f/0x2100
 [ ]  [<ffffffff810c0c8a>] ? task_rq_lock+0x5a/0xa0
 [ ]  [<ffffffff816657ad>] ? _raw_spin_unlock_irqrestore+0x6d/0x80
 [ ]  [<ffffffff810e1317>] lock_acquire+0x87/0x120
 [ ]  [<ffffffff81d0acea>] ? sched_init_smp+0x423/0x45e
 [ ]  [<ffffffff81664e5b>] _raw_spin_lock+0x3b/0x50
 [ ]  [<ffffffff81d0acea>] ? sched_init_smp+0x423/0x45e
 [ ]  [<ffffffff81d0acea>] sched_init_smp+0x423/0x45e
 [ ]  [<ffffffff81cedf03>] kernel_init_freeable+0x91/0x197
 [ ]  [<ffffffff8164f600>] ? rest_init+0xd0/0xd0
 [ ]  [<ffffffff8164f60e>] kernel_init+0xe/0x130
 [ ]  [<ffffffff8166d36c>] ret_from_fork+0x7c/0xb0
 [ ]  [<ffffffff8164f600>] ? rest_init+0xd0/0xd0

Cc: Paul McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@...nel.org>
Requested-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Peter Zijlstra <peterz@...radead.org>
---
 include/linux/lockdep.h         |   27 +++++++++-
 include/linux/mutex.h           |    7 +-
 include/linux/rwlock_types.h    |    6 +-
 include/linux/rwsem.h           |    6 +-
 include/linux/spinlock.h        |   36 ++++++++++---
 include/linux/spinlock_types.h  |   24 +++++++--
 kernel/locking/lockdep.c        |  103 +++++++++++++++++++++++++++++++++++++---
 kernel/locking/mutex-debug.c    |    2 
 kernel/locking/rwsem-spinlock.c |    2 
 kernel/locking/rwsem-xadd.c     |    2 
 kernel/locking/spinlock_debug.c |    6 +-
 kernel/rcu/update.c             |   24 ++++++---
 12 files changed, 207 insertions(+), 38 deletions(-)

--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -104,6 +104,9 @@ struct lock_class {
 	const char			*name;
 	int				name_version;
 
+	short				wait_type_inner;
+	short				wait_type_outer;
+
 #ifdef CONFIG_LOCK_STAT
 	unsigned long			contention_point[LOCKSTAT_POINTS];
 	unsigned long			contending_point[LOCKSTAT_POINTS];
@@ -143,6 +146,17 @@ struct lock_class_stats lock_stats(struc
 void clear_lock_stats(struct lock_class *class);
 #endif
 
+enum lockdep_wait_type {
+	LD_WAIT_INV = 0,	/* not checked, catch all */
+
+	LD_WAIT_FREE,		/* wait free, rcu etc.. */
+	LD_WAIT_SPIN,		/* spin loops, raw_spinlock_t etc.. */
+	LD_WAIT_CONFIG,		/* CONFIG_PREEMPT_LOCK, spinlock_t etc.. */
+	LD_WAIT_SLEEP,		/* sleeping locks, mutex_t etc.. */
+
+	LD_WAIT_MAX,		/* must be last */
+};
+
 /*
  * Map the lock object (the lock instance) to the lock-class object.
  * This is embedded into specific lock instances:
@@ -151,6 +165,8 @@ struct lockdep_map {
 	struct lock_class_key		*key;
 	struct lock_class		*class_cache[NR_LOCKDEP_CACHING_CLASSES];
 	const char			*name;
+	short				wait_type_outer; /* can be taken in this context */
+	short				wait_type_inner; /* presents this context */
 #ifdef CONFIG_LOCK_STAT
 	int				cpu;
 	unsigned long			ip;
@@ -276,8 +292,14 @@ extern void lockdep_on(void);
  * to lockdep:
  */
 
-extern void lockdep_init_map(struct lockdep_map *lock, const char *name,
-			     struct lock_class_key *key, int subclass);
+extern void lockdep_init_map_wait(struct lockdep_map *lock, const char *name,
+		struct lock_class_key *key, int subclass, short inner);
+
+static inline void lockdep_init_map(struct lockdep_map *lock, const char *name,
+			     struct lock_class_key *key, int subclass)
+{
+	lockdep_init_map_wait(lock, name, key, subclass, LD_WAIT_INV);
+}
 
 /*
  * To initialize a lockdep_map statically use this macro.
@@ -304,6 +326,7 @@ extern void lockdep_init_map(struct lock
 
 #define lockdep_set_novalidate_class(lock) \
 	lockdep_set_class(lock, &__lockdep_no_validate__)
+
 /*
  * Compare locking classes
  */
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -100,8 +100,11 @@ static inline void mutex_destroy(struct
 #endif
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
-		, .dep_map = { .name = #lockname }
+# define __DEP_MAP_MUTEX_INITIALIZER(lockname)			\
+		, .dep_map = {					\
+			.name = #lockname,			\
+			.wait_type_inner = LD_WAIT_SLEEP,	\
+		}
 #else
 # define __DEP_MAP_MUTEX_INITIALIZER(lockname)
 #endif
--- a/include/linux/rwlock_types.h
+++ b/include/linux/rwlock_types.h
@@ -25,7 +25,11 @@ typedef struct {
 #define RWLOCK_MAGIC		0xdeaf1eed
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define RW_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }
+# define RW_DEP_MAP_INIT(lockname)					\
+	.dep_map = {							\
+		.name = #lockname,					\
+		.wait_type_inner = LD_WAIT_CONFIG,			\
+	}
 #else
 # define RW_DEP_MAP_INIT(lockname)
 #endif
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -50,7 +50,11 @@ static inline int rwsem_is_locked(struct
 /* Common initializer macros and functions */
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
+# define __RWSEM_DEP_MAP_INIT(lockname)			\
+	, .dep_map = {					\
+		.name = #lockname,			\
+		.wait_type_inner = LD_WAIT_SLEEP,	\
+	}
 #else
 # define __RWSEM_DEP_MAP_INIT(lockname)
 #endif
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -91,12 +91,13 @@
 
 #ifdef CONFIG_DEBUG_SPINLOCK
   extern void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name,
-				   struct lock_class_key *key);
-# define raw_spin_lock_init(lock)				\
-do {								\
-	static struct lock_class_key __key;			\
-								\
-	__raw_spin_lock_init((lock), #lock, &__key);		\
+				   struct lock_class_key *key, short inner);
+
+# define raw_spin_lock_init(lock)					\
+do {									\
+	static struct lock_class_key __key;				\
+									\
+	__raw_spin_lock_init((lock), #lock, &__key, LD_WAIT_SPIN);	\
 } while (0)
 
 #else
@@ -292,12 +293,27 @@ static inline raw_spinlock_t *spinlock_c
 	return &lock->rlock;
 }
 
-#define spin_lock_init(_lock)				\
-do {							\
-	spinlock_check(_lock);				\
-	raw_spin_lock_init(&(_lock)->rlock);		\
+#ifdef CONFIG_DEBUG_SPINLOCK
+
+# define spin_lock_init(lock)					\
+do {								\
+	static struct lock_class_key __key;			\
+								\
+	__raw_spin_lock_init(spinlock_check(lock),		\
+			     #lock, &__key, LD_WAIT_CONFIG);	\
+} while (0)
+
+#else
+
+# define spin_lock_init(_lock)			\
+do {						\
+	spinlock_check(_lock);			\
+	*(_lock) = __SPIN_LOCK_UNLOCKED(_lock);	\
 } while (0)
 
+#endif
+
+
 static inline void spin_lock(spinlock_t *lock)
 {
 	raw_spin_lock(&lock->rlock);
--- a/include/linux/spinlock_types.h
+++ b/include/linux/spinlock_types.h
@@ -36,8 +36,18 @@ typedef struct raw_spinlock {
 #define SPINLOCK_OWNER_INIT	((void *)-1L)
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define SPIN_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }
+# define RAW_SPIN_DEP_MAP_INIT(lockname)		\
+	.dep_map = {					\
+		.name = #lockname,			\
+		.wait_type_inner = LD_WAIT_SPIN,	\
+	}
+# define SPIN_DEP_MAP_INIT(lockname)			\
+	.dep_map = {					\
+		.name = #lockname,			\
+		.wait_type_inner = LD_WAIT_CONFIG,	\
+	}
 #else
+# define RAW_SPIN_DEP_MAP_INIT(lockname)
 # define SPIN_DEP_MAP_INIT(lockname)
 #endif
 
@@ -54,7 +64,7 @@ typedef struct raw_spinlock {
 	{					\
 	.raw_lock = __ARCH_SPIN_LOCK_UNLOCKED,	\
 	SPIN_DEBUG_INIT(lockname)		\
-	SPIN_DEP_MAP_INIT(lockname) }
+	RAW_SPIN_DEP_MAP_INIT(lockname) }
 
 #define __RAW_SPIN_LOCK_UNLOCKED(lockname)	\
 	(raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname)
@@ -75,11 +85,17 @@ typedef struct spinlock {
 	};
 } spinlock_t;
 
+#define ___SPIN_LOCK_INITIALIZER(lockname)	\
+	{					\
+	.raw_lock = __ARCH_SPIN_LOCK_UNLOCKED,	\
+	SPIN_DEBUG_INIT(lockname)		\
+	SPIN_DEP_MAP_INIT(lockname) }
+
 #define __SPIN_LOCK_INITIALIZER(lockname) \
-	{ { .rlock = __RAW_SPIN_LOCK_INITIALIZER(lockname) } }
+	{ { .rlock = ___SPIN_LOCK_INITIALIZER(lockname) } }
 
 #define __SPIN_LOCK_UNLOCKED(lockname) \
-	(spinlock_t ) __SPIN_LOCK_INITIALIZER(lockname)
+	(spinlock_t) __SPIN_LOCK_INITIALIZER(lockname)
 
 #define DEFINE_SPINLOCK(x)	spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
 
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -532,7 +532,9 @@ static void print_lock_name(struct lock_
 
 	printk(" (");
 	__print_lock_name(class);
-	printk("){%s}", usage);
+	printk("){%s}-{%hd:%hd}", usage,
+			class->wait_type_outer ?: class->wait_type_inner,
+			class->wait_type_inner);
 }
 
 static void print_lockdep_cache(struct lockdep_map *lock)
@@ -757,9 +759,11 @@ register_lock_class(struct lockdep_map *
 	 * We have to do the hash-walk again, to avoid races
 	 * with another CPU:
 	 */
-	list_for_each_entry(class, hash_head, hash_entry)
+	list_for_each_entry(class, hash_head, hash_entry) {
 		if (class->key == key)
 			goto out_unlock_set;
+	}
+
 	/*
 	 * Allocate a new key from the static array, and add it to
 	 * the hash:
@@ -784,6 +788,8 @@ register_lock_class(struct lockdep_map *
 	INIT_LIST_HEAD(&class->locks_before);
 	INIT_LIST_HEAD(&class->locks_after);
 	class->name_version = count_matching_names(class);
+	class->wait_type_inner = lock->wait_type_inner;
+	class->wait_type_outer = lock->wait_type_outer;
 	/*
 	 * We use RCU's safe list-add method to make
 	 * parallel walking of the hash-list safe:
@@ -2949,8 +2955,8 @@ static int mark_lock(struct task_struct
 /*
  * Initialize a lock instance's lock-class mapping info:
  */
-void lockdep_init_map(struct lockdep_map *lock, const char *name,
-		      struct lock_class_key *key, int subclass)
+void lockdep_init_map_wait(struct lockdep_map *lock, const char *name,
+		struct lock_class_key *key, int subclass, short inner)
 {
 	int i;
 
@@ -2973,6 +2979,9 @@ void lockdep_init_map(struct lockdep_map
 
 	lock->name = name;
 
+	lock->wait_type_outer = LD_WAIT_INV; /* INV outer matches inner. */
+	lock->wait_type_inner = inner;
+
 	/*
 	 * No key, no joy, we need to hash something.
 	 */
@@ -2997,7 +3006,7 @@ void lockdep_init_map(struct lockdep_map
 	if (subclass)
 		register_lock_class(lock, subclass, 1);
 }
-EXPORT_SYMBOL_GPL(lockdep_init_map);
+EXPORT_SYMBOL_GPL(lockdep_init_map_wait);
 
 struct lock_class_key __lockdep_no_validate__;
 EXPORT_SYMBOL_GPL(__lockdep_no_validate__);
@@ -3036,6 +3045,85 @@ print_lock_nested_lock_not_held(struct t
 	return 0;
 }
 
+static int
+print_lock_invalid_wait_context(struct task_struct *curr,
+				struct held_lock *hlock)
+{
+	if (!debug_locks_off())
+		return 0;
+	if (debug_locks_silent)
+		return 0;
+
+	printk("\n");
+	printk("=============================\n");
+	printk("[ BUG: Invalid wait context ]\n");
+	print_kernel_ident();
+	printk("-----------------------------\n");
+
+	printk("%s/%d is trying to lock:\n", curr->comm, task_pid_nr(curr));
+	print_lock(hlock);
+
+	/* XXX */
+
+	printk("\nstack backtrace:\n");
+	dump_stack();
+
+	printk("\nother info that might help us debug this:\n");
+	lockdep_print_held_locks(curr);
+
+	printk("\nstack backtrace:\n");
+	dump_stack();
+
+	return 0;
+}
+
+/*
+ * Verify the wait_type context.
+ *
+ * This check validates that we take locks in the right wait-type order; that
+ * is, it ensures that we do not take mutexes inside spinlocks and do not
+ * attempt to acquire spinlocks inside raw_spinlocks, and the like.
+ *
+ * The entire thing is slightly more complex because of RCU: RCU is a lock
+ * that can be taken from (pretty much) any context but also has constraints.
+ * However, when taken in a stricter environment, the RCU lock does not
+ * loosen the constraints.
+ *
+ * Therefore we must look for the strictest environment in the lock stack and
+ * compare that to the lock we're trying to acquire.
+ */
+static int check_context(struct task_struct *curr, struct held_lock *next)
+{
+	short next_inner = hlock_class(next)->wait_type_inner;
+	short next_outer = hlock_class(next)->wait_type_outer;
+	short curr_inner = LD_WAIT_MAX;
+	int depth;
+
+	if (!curr->lockdep_depth || !next_inner)
+		return 0;
+
+	if (!next_outer)
+		next_outer = next_inner;
+
+	for (depth = 0; depth < curr->lockdep_depth; depth++) {
+		struct held_lock *prev = curr->held_locks + depth;
+		short prev_inner = hlock_class(prev)->wait_type_inner;
+
+		if (prev_inner) {
+			/*
+			 * we can have a bigger inner than a previous one
+			 * when outer is smaller than inner, as with RCU.
+			 */
+			curr_inner = min(curr_inner, prev_inner);
+		}
+	}
+
+	if (next_outer > curr_inner)
+		return print_lock_invalid_wait_context(curr, next);
+
+	return 0;
+}
+
 static int __lock_is_held(struct lockdep_map *lock);
 
 /*
@@ -3105,7 +3193,7 @@ static int __lock_acquire(struct lockdep
 
 	class_idx = class - lock_classes + 1;
 
-	if (depth) {
+	if (depth) { /* we're holding locks */
 		hlock = curr->held_locks + depth - 1;
 		if (hlock->class_idx == class_idx && nest_lock) {
 			if (hlock->references)
@@ -3138,6 +3226,9 @@ static int __lock_acquire(struct lockdep
 	hlock->holdtime_stamp = lockstat_clock();
 #endif
 
+	if (check_context(curr, hlock))
+		return 0;
+
 	if (check == 2 && !mark_irqflags(curr, hlock))
 		return 0;
 
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -93,7 +93,7 @@ void debug_mutex_init(struct mutex *lock
 	 * Make sure we are not reinitializing a held lock:
 	 */
 	debug_check_no_locks_freed((void *)lock, sizeof(*lock));
-	lockdep_init_map(&lock->dep_map, name, key, 0);
+	lockdep_init_map_wait(&lock->dep_map, name, key, 0, LD_WAIT_SLEEP);
 #endif
 	lock->magic = lock;
 }
--- a/kernel/locking/rwsem-spinlock.c
+++ b/kernel/locking/rwsem-spinlock.c
@@ -44,7 +44,7 @@ void __init_rwsem(struct rw_semaphore *s
 	 * Make sure we are not reinitializing a held semaphore:
 	 */
 	debug_check_no_locks_freed((void *)sem, sizeof(*sem));
-	lockdep_init_map(&sem->dep_map, name, key, 0);
+	lockdep_init_map_wait(&sem->dep_map, name, key, 0, LD_WAIT_SLEEP);
 #endif
 	sem->activity = 0;
 	raw_spin_lock_init(&sem->wait_lock);
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -22,7 +22,7 @@ void __init_rwsem(struct rw_semaphore *s
 	 * Make sure we are not reinitializing a held semaphore:
 	 */
 	debug_check_no_locks_freed((void *)sem, sizeof(*sem));
-	lockdep_init_map(&sem->dep_map, name, key, 0);
+	lockdep_init_map_wait(&sem->dep_map, name, key, 0, LD_WAIT_SLEEP);
 #endif
 	sem->count = RWSEM_UNLOCKED_VALUE;
 	raw_spin_lock_init(&sem->wait_lock);
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -14,14 +14,14 @@
 #include <linux/export.h>
 
 void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name,
-			  struct lock_class_key *key)
+			  struct lock_class_key *key, short inner)
 {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	/*
 	 * Make sure we are not reinitializing a held lock:
 	 */
 	debug_check_no_locks_freed((void *)lock, sizeof(*lock));
-	lockdep_init_map(&lock->dep_map, name, key, 0);
+	lockdep_init_map_wait(&lock->dep_map, name, key, 0, inner);
 #endif
 	lock->raw_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
 	lock->magic = SPINLOCK_MAGIC;
@@ -39,7 +39,7 @@ void __rwlock_init(rwlock_t *lock, const
 	 * Make sure we are not reinitializing a held lock:
 	 */
 	debug_check_no_locks_freed((void *)lock, sizeof(*lock));
-	lockdep_init_map(&lock->dep_map, name, key, 0);
+	lockdep_init_map_wait(&lock->dep_map, name, key, 0, LD_WAIT_CONFIG);
 #endif
 	lock->raw_lock = (arch_rwlock_t) __ARCH_RW_LOCK_UNLOCKED;
 	lock->magic = RWLOCK_MAGIC;
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -114,18 +114,30 @@ EXPORT_SYMBOL_GPL(__rcu_read_unlock);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 static struct lock_class_key rcu_lock_key;
-struct lockdep_map rcu_lock_map =
-	STATIC_LOCKDEP_MAP_INIT("rcu_read_lock", &rcu_lock_key);
+struct lockdep_map rcu_lock_map = {
+	.name = "rcu_read_lock",
+	.key = &rcu_lock_key,
+	.wait_type_outer = LD_WAIT_FREE,
+	.wait_type_inner = LD_WAIT_CONFIG, /* XXX PREEMPT_RCU ? */
+};
 EXPORT_SYMBOL_GPL(rcu_lock_map);
 
 static struct lock_class_key rcu_bh_lock_key;
-struct lockdep_map rcu_bh_lock_map =
-	STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_bh", &rcu_bh_lock_key);
+struct lockdep_map rcu_bh_lock_map = {
+	.name = "rcu_read_lock_bh",
+	.key = &rcu_bh_lock_key,
+	.wait_type_outer = LD_WAIT_FREE,
+	.wait_type_inner = LD_WAIT_CONFIG, /* PREEMPT_LOCK also makes BH preemptible */
+};
 EXPORT_SYMBOL_GPL(rcu_bh_lock_map);
 
 static struct lock_class_key rcu_sched_lock_key;
-struct lockdep_map rcu_sched_lock_map =
-	STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_sched", &rcu_sched_lock_key);
+struct lockdep_map rcu_sched_lock_map = {
+	.name = "rcu_read_lock_sched",
+	.key = &rcu_sched_lock_key,
+	.wait_type_outer = LD_WAIT_FREE,
+	.wait_type_inner = LD_WAIT_SPIN,
+};
 EXPORT_SYMBOL_GPL(rcu_sched_lock_map);
 
 static struct lock_class_key rcu_callback_key;
--