Message-Id: <20190628091528.17059-25-duyuyang@gmail.com>
Date: Fri, 28 Jun 2019 17:15:22 +0800
From: Yuyang Du <duyuyang@...il.com>
To: peterz@...radead.org, will.deacon@....com, mingo@...nel.org
Cc: bvanassche@....org, ming.lei@...hat.com, frederic@...nel.org,
tglx@...utronix.de, linux-kernel@...r.kernel.org,
longman@...hat.com, paulmck@...ux.vnet.ibm.com,
boqun.feng@...il.com, Yuyang Du <duyuyang@...il.com>
Subject: [PATCH v3 24/30] locking/lockdep: Introduce mark_lock_unaccessed()
In a graph search, multiple matches may be needed, so a lock that has
already been matched must be able to rejoin the search for another
match. Introduce mark_lock_unaccessed() to undo mark_lock_accessed()
for such a lock.
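
For illustration, here is a minimal, self-contained sketch of the
generation-counter marking scheme that mark_lock_accessed() and
mark_lock_unaccessed() rely on. The names below (struct node,
new_search(), mark_accessed(), mark_unaccessed(), accessed()) are made
up for the example and are not the lockdep ones:

#include <stdio.h>

#define NR_NODES 4

struct node {
	unsigned long dep_gen_id;
};

static struct node nodes[NR_NODES];
static unsigned long dependency_gen_id;

static void new_search(void)
{
	/* Starting a new search invalidates all previous marks at once. */
	dependency_gen_id++;
}

static void mark_accessed(struct node *n)
{
	/* "Visited" means equal to the current global generation. */
	n->dep_gen_id = dependency_gen_id;
}

static void mark_unaccessed(struct node *n)
{
	/* Any value != dependency_gen_id means "not visited yet". */
	n->dep_gen_id--;
}

static int accessed(struct node *n)
{
	return n->dep_gen_id == dependency_gen_id;
}

int main(void)
{
	new_search();

	mark_accessed(&nodes[0]);
	printf("after match:  accessed=%d\n", accessed(&nodes[0]));

	/* The matched node rejoins the search for another match. */
	mark_unaccessed(&nodes[0]);
	printf("after unmark: accessed=%d\n", accessed(&nodes[0]));

	return 0;
}

Decrementing the generation id is enough to "unvisit" a node, since
only equality with the current generation counts as visited; there is
no need to clear every mark when a new search begins.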
Signed-off-by: Yuyang Du <duyuyang@...il.com>
---
kernel/locking/lockdep.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 444eb62..e7610d2 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1449,6 +1449,15 @@ static inline void mark_lock_accessed(struct lock_list *lock,
lock->class[forward]->dep_gen_id = lockdep_dependency_gen_id;
}
+static inline void mark_lock_unaccessed(struct lock_list *lock)
+{
+ unsigned long nr;
+
+ nr = lock - list_entries;
+ WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */
+ fw_dep_class(lock)->dep_gen_id--;
+}
+
static inline unsigned long lock_accessed(struct lock_list *lock, int forward)
{
unsigned long nr;
--
1.8.3.1