Date:   Sun, 11 Nov 2018 11:56:35 -0800
From:   "Paul E. McKenney" <paulmck@...ux.ibm.com>
To:     linux-kernel@...r.kernel.org
Cc:     mingo@...nel.org, jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
        rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
        fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org,
        "Paul E . McKenney" <paulmck@...ux.ibm.com>
Subject: [PATCH tip/core/rcu 10/20] doc: rcu: Update core and full API in whatisRCU

From: "Joel Fernandes (Google)" <joel@...lfernandes.org>

The RCU consolidation effort makes the update side of the RCU API
consistent across all three RCU flavors (normal, sched, bh).  This
commit therefore updates the full API in the whatisRCU document,
encouraging people to use the consolidated RCU update API instead of
the old RCU-bh and RCU-sched update APIs.

Also, rcu_dereference() is documented as being the same for all three
mechanisms (even before the consolidation); however, it is actually
different: the flavor-specific rcu_dereference() primitive (such as
rcu_dereference_bh() for bh) is needed to make lock debugging work
correctly.  This update corrects that as well.

Also, add local_bh_disable() and local_bh_enable() as softirq
protection primitives and correct a grammar error in a quiz answer.
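
As a rough sketch of the consolidated usage described above (struct foo,
gbl_foo, and the function names below are made up for this illustration,
not taken from the document):

/* Illustration only; the names here are invented for this sketch. */
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct foo {
	int a;
};

static struct foo __rcu *gbl_foo;
static DEFINE_SPINLOCK(gbl_lock);

/* Reader in softirq context: flavor-specific read-side primitives. */
static int foo_get_a_bh(void)
{
	struct foo *p;
	int a = -1;

	rcu_read_lock_bh();	/* local_bh_disable()/enable() also marks the section */
	p = rcu_dereference_bh(gbl_foo);	/* not plain rcu_dereference() */
	if (p)
		a = p->a;
	rcu_read_unlock_bh();
	return a;
}

/* Updater: the consolidated update-side API covers all three flavors. */
static void foo_update_a(struct foo *newp)
{
	struct foo *oldp;

	spin_lock(&gbl_lock);
	oldp = rcu_dereference_protected(gbl_foo,
					 lockdep_is_held(&gbl_lock));
	rcu_assign_pointer(gbl_foo, newp);
	spin_unlock(&gbl_lock);

	synchronize_rcu();	/* rather than the old synchronize_rcu_bh() */
	kfree(oldp);
}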

Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@...ux.ibm.com>
---
 Documentation/RCU/whatisRCU.txt | 55 +++++++++++++++++----------------
 1 file changed, 28 insertions(+), 27 deletions(-)

diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt
index 86d82f7f3500..7c33445fd0e5 100644
--- a/Documentation/RCU/whatisRCU.txt
+++ b/Documentation/RCU/whatisRCU.txt
@@ -322,28 +322,27 @@ to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
 implementations of the RCU infrastructure make heavy use of batching in
 order to amortize their overhead over many uses of the corresponding APIs.
 
-There are no fewer than three RCU mechanisms in the Linux kernel; the
-diagram above shows the first one, which is by far the most commonly used.
-The rcu_dereference() and rcu_assign_pointer() primitives are used for
-all three mechanisms, but different defer and protect primitives are
-used as follows:
+There are at least three flavors of RCU usage in the Linux kernel. The diagram
+above shows the most common one. On the updater side, the rcu_assign_pointer(),
+synchronize_rcu() and call_rcu() primitives used are the same for all three
+flavors. However, for protection (on the reader side), the primitives used vary
+depending on the flavor:
 
-	Defer			Protect
+a.	rcu_read_lock() / rcu_read_unlock()
+	rcu_dereference()
 
-a.	synchronize_rcu()	rcu_read_lock() / rcu_read_unlock()
-	call_rcu()		rcu_dereference()
+b.	rcu_read_lock_bh() / rcu_read_unlock_bh()
+	local_bh_disable() / local_bh_enable()
+	rcu_dereference_bh()
 
-b.	synchronize_rcu_bh()	rcu_read_lock_bh() / rcu_read_unlock_bh()
-	call_rcu_bh()		rcu_dereference_bh()
+c.	rcu_read_lock_sched() / rcu_read_unlock_sched()
+	preempt_disable() / preempt_enable()
+	local_irq_save() / local_irq_restore()
+	hardirq enter / hardirq exit
+	NMI enter / NMI exit
+	rcu_dereference_sched()
 
-c.	synchronize_sched()	rcu_read_lock_sched() / rcu_read_unlock_sched()
-	call_rcu_sched()	preempt_disable() / preempt_enable()
-				local_irq_save() / local_irq_restore()
-				hardirq enter / hardirq exit
-				NMI enter / NMI exit
-				rcu_dereference_sched()
-
-These three mechanisms are used as follows:
+These three flavors are used as follows:
 
 a.	RCU applied to normal data structures.
 
@@ -867,18 +866,20 @@ RCU:	Critical sections	Grace period		Barrier
 
 bh:	Critical sections	Grace period		Barrier
 
-	rcu_read_lock_bh	call_rcu_bh		rcu_barrier_bh
-	rcu_read_unlock_bh	synchronize_rcu_bh
-	rcu_dereference_bh	synchronize_rcu_bh_expedited
+	rcu_read_lock_bh	call_rcu		rcu_barrier
+	rcu_read_unlock_bh	synchronize_rcu
+	[local_bh_disable]	synchronize_rcu_expedited
+	[and friends]
+	rcu_dereference_bh
 	rcu_dereference_bh_check
 	rcu_dereference_bh_protected
 	rcu_read_lock_bh_held
 
 sched:	Critical sections	Grace period		Barrier
 
-	rcu_read_lock_sched	synchronize_sched	rcu_barrier_sched
-	rcu_read_unlock_sched	call_rcu_sched
-	[preempt_disable]	synchronize_sched_expedited
+	rcu_read_lock_sched	call_rcu		rcu_barrier
+	rcu_read_unlock_sched	synchronize_rcu
+	[preempt_disable]	synchronize_rcu_expedited
 	[and friends]
 	rcu_read_lock_sched_notrace
 	rcu_read_unlock_sched_notrace
@@ -890,8 +891,8 @@ sched:	Critical sections	Grace period		Barrier
 
 SRCU:	Critical sections	Grace period		Barrier
 
-	srcu_read_lock		synchronize_srcu	srcu_barrier
-	srcu_read_unlock	call_srcu
+	srcu_read_lock		call_srcu		srcu_barrier
+	srcu_read_unlock	synchronize_srcu
 	srcu_dereference	synchronize_srcu_expedited
 	srcu_dereference_check
 	srcu_read_lock_held
@@ -1034,7 +1035,7 @@ Answer:		Just as PREEMPT_RT permits preemption of spinlock
 		spinlocks blocking while in RCU read-side critical
 		sections.
 
-		Why the apparent inconsistency?  Because it is it
+		Why the apparent inconsistency?  Because it is
 		possible to use priority boosting to keep the RCU
 		grace periods short if need be (for example, if running
 		short of memory).  In contrast, if blocking waiting
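
The grace-period and barrier columns above all collapse onto the
consolidated primitives.  As a sketch of the resulting callback usage
(struct foo and foo_reclaim() are made up for this illustration):

/* Illustration only; the names here are invented for this sketch. */
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int a;
	struct rcu_head rcu;
};

static void foo_reclaim(struct rcu_head *head)
{
	kfree(container_of(head, struct foo, rcu));
}

static void foo_retire(struct foo *p)
{
	/* Formerly call_rcu_bh(&p->rcu, foo_reclaim) or call_rcu_sched(). */
	call_rcu(&p->rcu, foo_reclaim);
}

Similarly, callers of rcu_barrier_bh() and rcu_barrier_sched() now use
rcu_barrier().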
-- 
2.17.1
