Message-ID: <20210802134624.1934-12-thunder.leizhen@huawei.com>
Date: Mon, 2 Aug 2021 21:46:24 +0800
From: Zhen Lei <thunder.leizhen@...wei.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable <stable@...r.kernel.org>
CC: Zhen Lei <thunder.leizhen@...wei.com>,
Anna-Maria Gleixner <anna-maria@...utronix.de>,
Mike Galbraith <efault@....de>,
Sasha Levin <sasha.levin@...cle.com>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: [PATCH 4.4 11/11] rcu: Update documentation of rcu_read_unlock()
From: Anna-Maria Gleixner <anna-maria@...utronix.de>
[ Upstream commit ec84b27f9b3b569f9235413d1945a2006b97b0aa ]
Since commit b4abf91047cf ("rtmutex: Make wait_lock irq safe") the
explanation in rcu_read_unlock() documentation about irq unsafe rtmutex
wait_lock is no longer valid.
Remove it so that kernel developers reading the documentation do not rely
on it.
Suggested-by: Eric W. Biederman <ebiederm@...ssion.com>
Signed-off-by: Anna-Maria Gleixner <anna-maria@...utronix.de>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Reviewed-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Acked-by: "Eric W. Biederman" <ebiederm@...ssion.com>
Cc: bigeasy@...utronix.de
Link: https://lkml.kernel.org/r/20180525090507.22248-2-anna-maria@linutronix.de
Signed-off-by: Zhen Lei <thunder.leizhen@...wei.com>
---
include/linux/rcupdate.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 0a93e9d1708e29e..3072e9c93ae6be2 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -880,9 +880,7 @@ static __always_inline void rcu_read_lock(void)
* Unfortunately, this function acquires the scheduler's runqueue and
* priority-inheritance spinlocks. This means that deadlock could result
* if the caller of rcu_read_unlock() already holds one of these locks or
- * any lock that is ever acquired while holding them; or any lock which
- * can be taken from interrupt context because rcu_boost()->rt_mutex_lock()
- * does not disable irqs while taking ->wait_lock.
+ * any lock that is ever acquired while holding them.
*
* That said, RCU readers are never priority boosted unless they were
* preempted. Therefore, one way to avoid deadlock is to make sure
--
2.26.0.106.g9fadedd
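
For context, a minimal sketch (not part of the patch) of the lock-ordering
hazard that the remaining documentation text still warns about: calling
rcu_read_unlock() while holding a lock that is elsewhere acquired with a
runqueue or rt_mutex priority-inheritance lock held. The names my_lock and
risky_pattern are invented for illustration, and the sketch assumes my_lock
is also taken somewhere under one of those scheduler-side locks.

#include <linux/spinlock.h>
#include <linux/rcupdate.h>

/* Hypothetical lock, assumed to also be acquired under rq/pi locks elsewhere. */
static DEFINE_RAW_SPINLOCK(my_lock);

static void risky_pattern(void)
{
	rcu_read_lock();
	/*
	 * If this reader is preempted here and priority-boosted, the
	 * matching rcu_read_unlock() may need the runqueue and rt_mutex
	 * priority-inheritance locks to deboost the task.
	 */
	raw_spin_lock(&my_lock);
	rcu_read_unlock();	/* may take rq/pi locks while my_lock is held */
	raw_spin_unlock(&my_lock);
}

If the reader was indeed boosted, rcu_read_unlock() ends up acquiring the
rq/pi locks with my_lock held, inverting the ordering against any path that
takes my_lock under those locks and opening the deadlock the documentation
describes.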