Message-Id: <1508268996-8959-2-git-send-email-longman@redhat.com>
Date: Tue, 17 Oct 2017 15:36:36 -0400
From: Waiman Long <longman@...hat.com>
To: Alexander Viro <viro@...iv.linux.org.uk>, Jan Kara <jack@...e.com>,
Jeff Layton <jlayton@...chiereds.net>,
"J. Bruce Fields" <bfields@...ldses.org>,
Tejun Heo <tj@...nel.org>,
Christoph Lameter <cl@...ux-foundation.org>
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Andi Kleen <andi@...stfloor.org>,
Dave Chinner <dchinner@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
Davidlohr Bueso <dave@...olabs.net>,
Waiman Long <longman@...hat.com>
Subject: [PATCH v7 9/9] lib/dlock-list: Unique lock class key for each allocation call site

Boqun Feng has kindly pointed out that the same lock class key is
used for all dlock-list allocations. That can be a problem when a
task needs to acquire the locks of more than one dlock-list at the
same time with lockdep enabled.

To avoid this problem, the alloc_dlock_list_heads() function is
changed to use a different lock class key for each of its call sites
in the kernel.
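
For example (illustrative only, not part of this patch), two
hypothetical callers in different files would now end up with
distinct lock classes:

	/* fs/foo.c - hypothetical caller A */
	struct dlock_list_heads foo_list;
	int ret = alloc_dlock_list_heads(&foo_list, 0);	/* expands its own static key */

	/* fs/bar.c - hypothetical caller B */
	struct dlock_list_heads bar_list;
	int ret = alloc_dlock_list_heads(&bar_list, 0);	/* expands a different static key */

Because each expansion of the macro defines its own static struct
lock_class_key, lockdep can tell the two lists apart and will not
report a false deadlock when a task holds locks from both lists at
the same time.
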
Reported-by: Boqun Feng <boqun.feng@...il.com>
Signed-off-by: Waiman Long <longman@...hat.com>
---
 include/linux/dlock-list.h | 16 +++++++++++++++-
 lib/dlock-list.c           | 21 +++++++++------------
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/include/linux/dlock-list.h b/include/linux/dlock-list.h
index 2ba7b4f..02c5f4d 100644
--- a/include/linux/dlock-list.h
+++ b/include/linux/dlock-list.h
@@ -116,9 +116,23 @@ static inline void dlock_list_relock(struct dlock_list_iter *iter)
/*
* Allocation and freeing of dlock list
*/
-extern int alloc_dlock_list_heads(struct dlock_list_heads *dlist, int irqsafe);
+extern int __alloc_dlock_list_heads(struct dlock_list_heads *dlist,
+ int irqsafe, struct lock_class_key *key);
extern void free_dlock_list_heads(struct dlock_list_heads *dlist);
+/**
+ * alloc_dlock_list_heads - Initialize and allocate the list of head entries.
+ * @dlist : Pointer to the dlock_list_heads structure to be initialized
+ * @irqsafe: IRQ safe mode flag
+ * Return: 0 if successful, -ENOMEM if memory allocation error
+ */
+#define alloc_dlock_list_heads(dlist, irqsafe) \
+({ \
+ static struct lock_class_key _key; \
+ int _ret = __alloc_dlock_list_heads(dlist, irqsafe, &_key); \
+ _ret; \
+})
+
/*
* Check if a dlock list is empty or not.
*/
diff --git a/lib/dlock-list.c b/lib/dlock-list.c
index 6ce5c7193..17e182b 100644
--- a/lib/dlock-list.c
+++ b/lib/dlock-list.c
@@ -36,14 +36,6 @@
static int nr_dlock_lists __read_mostly;
/*
- * As all the locks in the dlock list are dynamically allocated, they need
- * to belong to their own special lock class to avoid warning and stack
- * trace in kernel log when lockdep is enabled. Statically allocated locks
- * don't have this problem.
- */
-static struct lock_class_key dlock_list_key;
-
-/*
* Initialize cpu2idx mapping table & nr_dlock_lists.
*
* It is possible that a dlock-list can be allocated before the cpu2idx is
@@ -98,9 +90,10 @@ static int __init cpu2idx_init(void)
postcore_initcall(cpu2idx_init);
/**
- * alloc_dlock_list_heads - Initialize and allocate the list of head entries
+ * __alloc_dlock_list_heads - Initialize and allocate the list of head entries
* @dlist : Pointer to the dlock_list_heads structure to be initialized
* @irqsafe: IRQ safe mode flag
+ * @key : The lock class key to be used for lockdep
* Return: 0 if successful, -ENOMEM if memory allocation error
*
* This function does not allocate the dlock_list_heads structure itself. The
@@ -112,8 +105,12 @@ static int __init cpu2idx_init(void)
* than necessary allocated is not a problem other than some wasted memory.
* The extra lists will not be ever used as all the cpu2idx entries will be
* 0 before initialization.
+ *
+ * Dynamically allocated locks need to have their own special lock class
+ * to avoid lockdep warning.
*/
-int alloc_dlock_list_heads(struct dlock_list_heads *dlist, int irqsafe)
+int __alloc_dlock_list_heads(struct dlock_list_heads *dlist, int irqsafe,
+ struct lock_class_key *key)
{
int idx, cnt = nr_dlock_lists ? nr_dlock_lists : nr_cpu_ids;
@@ -128,11 +125,11 @@ int alloc_dlock_list_heads(struct dlock_list_heads *dlist, int irqsafe)
INIT_LIST_HEAD(&head->list);
head->lock = __SPIN_LOCK_UNLOCKED(&head->lock);
head->irqsafe = irqsafe;
- lockdep_set_class(&head->lock, &dlock_list_key);
+ lockdep_set_class(&head->lock, key);
}
return 0;
}
-EXPORT_SYMBOL(alloc_dlock_list_heads);
+EXPORT_SYMBOL(__alloc_dlock_list_heads);
/**
* free_dlock_list_heads - Free all the heads entries of the dlock list
--
1.8.3.1