Message-Id: <20181207011148.251812-21-bvanassche@acm.org>
Date: Thu, 6 Dec 2018 17:11:44 -0800
From: Bart Van Assche <bvanassche@....org>
To: mingo@...hat.com
Cc: peterz@...radead.org, tj@...nel.org, longman@...hat.com,
johannes.berg@...el.com, linux-kernel@...r.kernel.org,
Bart Van Assche <bvanassche@....org>,
Johannes Berg <johannes@...solutions.net>
Subject: [PATCH v3 20/24] locking/lockdep: Introduce __lockdep_free_key_range()

Extract the part of lockdep_free_key_range() that must run with the graph
lock held into a new helper, __lockdep_free_key_range(). This patch does not
change any functionality but makes the next patch in this series easier to
read.
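
For readers unfamiliar with the idiom, the split follows the usual
"__helper does the work, wrapper handles locking" pattern. Below is a
minimal user-space sketch of that pattern; the names (zap_key_range,
__zap_key_range, entries[]) and the pthread mutex are illustrative
stand-ins for this commit message only, not the lockdep identifiers:

/*
 * Minimal user-space sketch of the "__helper + locking wrapper" split.
 * All identifiers are made up for illustration; only the structure
 * (lock-free helper, wrapper that takes the lock) mirrors the patch.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t graph_lock = PTHREAD_MUTEX_INITIALIZER;

struct entry {
	uintptr_t key;
	bool zapped;
};

static struct entry entries[] = {
	{ 0x1000, false }, { 0x2000, false },
	{ 0x3000, false }, { 0x4000, false },
};

/* Caller must hold graph_lock; marks every entry whose key falls in
 * [start, start + size). Takes no locks itself. */
static void __zap_key_range(uintptr_t start, unsigned long size)
{
	for (size_t i = 0; i < sizeof(entries) / sizeof(entries[0]); i++)
		if (entries[i].key >= start && entries[i].key < start + size)
			entries[i].zapped = true;
}

/* Public entry point: acquires the lock, then delegates to the helper. */
static void zap_key_range(uintptr_t start, unsigned long size)
{
	pthread_mutex_lock(&graph_lock);
	__zap_key_range(start, size);
	pthread_mutex_unlock(&graph_lock);
}

int main(void)
{
	zap_key_range(0x2000, 0x2000);	/* marks the 0x2000 and 0x3000 entries */

	for (size_t i = 0; i < sizeof(entries) / sizeof(entries[0]); i++)
		printf("0x%lx: %s\n", (unsigned long)entries[i].key,
		       entries[i].zapped ? "zapped" : "kept");
	return 0;
}

Keeping the lock handling in the wrapper lets follow-up changes extend the
work done under the lock without duplicating the save/restore boilerplate.
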
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Waiman Long <longman@...hat.com>
Cc: Johannes Berg <johannes@...solutions.net>
Signed-off-by: Bart Van Assche <bvanassche@....org>
---
kernel/locking/lockdep.c | 37 ++++++++++++++++++++++---------------
1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 78f14c151407..8c69516b1283 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4478,27 +4478,17 @@ static void schedule_free_zapped_classes(void)
}

/*
- * Used in module.c to remove lock classes from memory that is going to be
- * freed; and possibly re-used by other modules.
- *
- * We will have had one sync_sched() before getting here, so we're guaranteed
- * nobody will look up these exact classes -- they're properly dead but still
- * allocated.
+ * Remove all lock classes from the class hash table and from the
+ * all_lock_classes list whose key or name is in the address range [start,
+ * start + size). Move these lock classes to the zapped_classes list. Must
+ * be called with the graph lock held.
*/
-void lockdep_free_key_range(void *start, unsigned long size)
+static void __lockdep_free_key_range(void *start, unsigned long size)
{
struct lock_class *class;
struct hlist_head *head;
- unsigned long flags;
int i;
- int locked;
-
- raw_local_irq_save(flags);
- locked = graph_lock();

- /*
- * Unhash all classes that were created by this module:
- */
for (i = 0; i < CLASSHASH_SIZE; i++) {
head = classhash_table + i;
hlist_for_each_entry_rcu(class, head, hash_entry) {
@@ -4511,7 +4501,24 @@ void lockdep_free_key_range(void *start, unsigned long size)
}

schedule_free_zapped_classes();
+}
+/*
+ * Used in module.c to remove lock classes from memory that is going to be
+ * freed; and possibly re-used by other modules.
+ *
+ * We will have had one sync_sched() before getting here, so we're guaranteed
+ * nobody will look up these exact classes -- they're properly dead but still
+ * allocated.
+ */
+void lockdep_free_key_range(void *start, unsigned long size)
+{
+ unsigned long flags;
+ int locked;
+
+ raw_local_irq_save(flags);
+ locked = graph_lock();
+ __lockdep_free_key_range(start, size);

if (locked)
graph_unlock();
raw_local_irq_restore(flags);
--
2.20.0.rc2.403.gdbc3b29805-goog