Message-Id: <20210617142828.346111-3-sxwjean@me.com>
Date: Thu, 17 Jun 2021 22:28:27 +0800
From: Xiongwei Song <sxwjean@...com>
To: peterz@...radead.org, mingo@...hat.com, will@...nel.org,
longman@...hat.com, boqun.feng@...il.com
Cc: linux-kernel@...r.kernel.org, Xiongwei Song <sxwjean@...il.com>
Subject: [PATCH 2/3] locking/lockdep: Mark BFS_RMATCH conditions as unlikely
From: Xiongwei Song <sxwjean@...il.com>
The probability that the graph walk will return BFS_RMATCH is slim, so marking
the BFS_RMATCH conditions as unlikely() can improve performance a little bit.
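For reference, a minimal standalone sketch of the pattern being applied
(the macro definitions mirror include/linux/compiler.h; the enum values and
the helper function are simplified stand-ins for illustration, not the
actual lockdep definitions):

	/* Kernel-style branch hints: expand to __builtin_expect(), which
	 * tells the compiler which outcome to lay out as the fall-through
	 * (hot) path. */
	#define likely(x)   __builtin_expect(!!(x), 1)
	#define unlikely(x) __builtin_expect(!!(x), 0)

	/* Simplified stand-in for the BFS result codes. */
	enum bfs_result { BFS_RNOMATCH, BFS_RMATCH };

	/* Hypothetical caller mirroring the change in check_prev_add():
	 * the rare BFS_RMATCH case is hinted off the hot path. */
	static int example_check(enum bfs_result ret)
	{
		if (unlikely(ret == BFS_RMATCH))
			return 2;	/* rare: redundant dependency found */
		return 0;		/* common case */
	}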
Signed-off-by: Xiongwei Song <sxwjean@...il.com>
---
kernel/locking/lockdep.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index a8a66a2a9bc1..cb94097014d8 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2750,7 +2750,7 @@ check_redundant(struct held_lock *src, struct held_lock *target)
*/
ret = check_path(target, &src_entry, hlock_equal, usage_skip, &target_entry);
- if (ret == BFS_RMATCH)
+ if (unlikely(ret == BFS_RMATCH))
debug_atomic_inc(nr_redundant);
return ret;
@@ -2992,7 +2992,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
ret = check_redundant(prev, next);
if (bfs_error(ret))
return 0;
- else if (ret == BFS_RMATCH)
+ else if (unlikely(ret == BFS_RMATCH))
return 2;
if (!*trace) {
--
2.30.2