Message-Id: <20230821134428.2504912-1-willy@infradead.org>
Date: Mon, 21 Aug 2023 14:44:28 +0100
From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>,
linux-kernel@...r.kernel.org,
Tong Tiangen <tongtiangen@...wei.com>, rcu@...r.kernel.org
Subject: [PATCH] sched: Assert for_each_thread() is properly locked

list_for_each_entry_rcu() takes an optional fourth argument which
allows RCU to assert that the correct lock is held. Several callers
of for_each_thread() rely on their caller holding the appropriate
lock, so this is a useful assertion to include.
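
For illustration only (not part of this patch), here is a minimal
sketch of a caller that satisfies the new check; the
print_thread_pids() helper is made up for this example:

	#include <linux/printk.h>
	#include <linux/sched/signal.h>
	#include <linux/sched/task.h>

	/*
	 * Hypothetical caller: walking the thread group under
	 * tasklist_lock satisfies the lockdep_is_held(&tasklist_lock)
	 * condition added below, so lockdep stays quiet.
	 */
	static void print_thread_pids(struct task_struct *p)
	{
		struct task_struct *t;

		read_lock(&tasklist_lock);
		for_each_thread(p, t)
			pr_info("%s: pid %d\n", __func__, t->pid);
		read_unlock(&tasklist_lock);
	}

Callers that are already inside an RCU read-side critical section stay
quiet as well, since lockdep only warns when neither the supplied
condition nor an RCU read lock is held.
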
Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
---
 include/linux/sched/signal.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index 669e8cff40c7..f1eae7f53be9 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -659,7 +659,8 @@ extern bool current_is_single_threaded(void);
 	while ((t = next_thread(t)) != g)
 
 #define __for_each_thread(signal, t)					\
-	list_for_each_entry_rcu(t, &(signal)->thread_head, thread_node)
+	list_for_each_entry_rcu(t, &(signal)->thread_head, thread_node, \
+				lockdep_is_held(&tasklist_lock))
 
 #define for_each_thread(p, t)						\
 	__for_each_thread((p)->signal, t)
--
2.40.1